Water vapour source impacts on oxygen isotope variability in tropical precipitation during Heinrich events
Water isotope records such as speleothems provide extensive evidence of past tropical hydrological changes. During Heinrich events, isotopic changes in monsoon regions have been interpreted as implying a widespread drying through the Northern Hemisphere tropics and an antiphased precipitation response in the south. Here, we examine the sources of this variability using a water isotope-enabled general circulation model, the Goddard Institute for Space Studies ModelE. We incorporate a new suite of vapour source distribution tracers to help constrain the impact of precipitation source region changes on the isotopic composition of precipitation and to identify nonlocal amount effects. We simulate a collapse of the North Atlantic meridional overturning circulation with a large freshwater input to the region as an idealised analogue to iceberg discharge during Heinrich events. An increase in monsoon intensity, defined by the vertical shear of the zonal wind, is modelled over the South American domain, with small decreases simulated over Asia. Simulated isotopic anomalies agree well with proxy climate records, with lighter isotopic values simulated over South America and enriched values across East Asia. For this particular abrupt climate event, we identify which climatic change is most likely linked to water isotope change: changes in local precipitation amount, monsoon intensity, water vapour source distributions or precipitation seasonality. We categorise individual sites according to the climate variability that water isotope changes are most closely associated with, and find that the dominant isotopic controls are not consistent across the tropics; simple local explanations, in particular, fall short of explaining water isotope variability at all sites. Instead, the best interpretations appear to be site specific and often regional in scale.
Correspondence to: S. C. Lewis (sophie.lewis@anu.edu.au)
Heinrich event expression in δ18O records
The last glacial period was punctuated by successive Heinrich (H) events, short-lived abrupt cool episodes around the North Atlantic (Heinrich, 1988). These events are defined by distinct foraminifera-free zones within ice-rafted debris layers in oceanic sediment cores; they are thought to result from massive, periodic iceberg discharges into the North Atlantic basin. Heinrich events were accompanied by strong sea surface temperature (SST) and salinity reductions in the North Atlantic (Bond et al., 1992).
In the North Atlantic, Heinrich events usually occur towards the end of a cycle of progressively cooler interstadials (Dansgaard-Oeschger cycles), which culminate in a prolonged cold period during which a Heinrich event occurs (Bond et al., 1993). Conversely, H events in the Antarctic are contemporaneous with warmer conditions, suggestive of a "bipolar seesaw" connection between the hemispheres (Broecker, 1998). During H events, regional sea surface density gradient changes likely resulted in a substantial decrease in the production of North Atlantic Deep Water (NADW) (Keigwin and Lehman, 1994). The significant climatic changes associated with H events are near-global in extent (Hemming, 2004).
In the low latitudes, speleothem-based climate reconstructions show that the monsoon regions respond abruptly during Heinrich events (Fig. 1). Oxygen isotope reconstructions from the East Asian monsoon (EAM) region demonstrate an anti-correlation with Greenland ice core records (Wang et al., 2001, 2008; Zhou et al., 2008). In China, enriched δ18Ocalcite values (δ in permil units, ‰, of the subscripted value relative to a known standard) coincident with H events have been interpreted as a weakening of the EAM (Wang et al., 2001). Brazilian speleothem δ18O records are characterised by a sequence of wet conditions synchronous with cold Heinrich events in the North Atlantic and with periods of weak East Asian summer monsoon circulation in China (Wang et al., 2004, 2006; Cruz et al., 2006b, 2009). Wang et al. (2006) propose a north-south precipitation anti-phasing across the hemispheres during H events under a southward shift of the intertropical convergence zone (ITCZ). Collectively, however, proxy reconstructions indicate a complex spatial pattern of hydrological changes beyond coherent north-south anti-phasing (Tierney et al., 2008; Wagner et al., 2010; Lewis et al., 2010). This study investigates the coherence of spatial patterns of tropical hydrological changes during H events.
A remaining uncertainty in the interpretation of tropical variability during H events lies in the term "monsoonal" climate change, which is used in the palaeoclimatic literature to describe a variety of phenomena, including a seasonal reversal of upper- or lower-level zonal winds, the strong seasonality of tropical precipitation, or hydrological changes resulting from ITCZ shifts. We aim to resolve, in part, this ambiguity by describing which parts of the monsoon system are impacted by H events.
Previous modelling work
Prior modelling studies have consistently demonstrated that freshwater input to the North Atlantic, analogous to iceberg discharge, reduces NADW formation and drives a regional cooling (Manabe and Stouffer, 2000; Stouffer et al., 2006). The simulated thermohaline circulation (THC) rapidly weakens following a freshwater perturbation, resulting in a reduction in northward heat and salt transport in the North Atlantic. The greatest temperature anomalies occur over the northern North Atlantic, with some cooling over Greenland, Europe and North America and a mild warming over parts of the Antarctic, as an expression of the bipolar seesaw. Prior studies also indicate significant modelled water isotope anomalies following an abrupt, though smaller, North Atlantic freshwater forcing (e.g. LeGrande et al., 2006). Water isotope responses include depletion in precipitation across the North Atlantic and southern subtropics, with enrichment to the north.
In the tropics, the impact of a simulated reduction in THC intensity includes a southward shift in precipitation bands and in the ITCZ over the tropical oceans (Dong and Sutton, 2002; Zhang and Delworth, 2005). Furthermore, a freshwater-forced reduction in the Atlantic meridional overturning circulation and expanded northern ice coverage drive extensive remote responses, including an El Niño-like SST pattern in the southeastern tropical Pacific. Overall, a freshwater-forced southward ITCZ shift, particularly over the Atlantic Ocean, is a robust response across multiple models (Stouffer et al., 2006).
Sources of δ18O variability
The δ18O in precipitation (δ18Op) integrates changes in atmospheric circulation from source to the site of rainout (Noone, 2008). The dominant controls on δ18O are variable between proxy sites and include local precipitation amount variability together with changes in regional hydrology, the initial evaporative source, the degree of rain-out during transit and atmospheric mixing.
Tropical δ18O variability is often interpreted as an alteration in local precipitation. This inference is based on simple Rayleigh distillation models that predict that isotope ratios in precipitation are correlated with local rainfall amount (the "amount effect" relationship) (Dansgaard, 1964; Araguás-Araguás et al., 1998). In general circulation models (GCMs), however, mixing plays an important role, and these modelling results indicate that the spatial amount effect relationship is strongest only over the tropical oceans, rather than over the land surface where speleothem archives occur (Tindall et al., 2009), and on intraseasonal timescales or longer (Risi et al., 2008). Additionally, observational studies show the amount effect is most applicable at coastal locations (Rozanski et al., 1993).
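The qualitative behaviour behind the amount effect can be illustrated with a simple Rayleigh model. The sketch below is a minimal illustration only, not the GCM scheme: it assumes a constant equilibrium fractionation factor and an arbitrary initial vapour composition, and uses the remaining vapour fraction as a stand-in for the degree of rain-out.

```python
# Minimal Rayleigh distillation sketch (illustrative only, not the ModelE scheme).
# A constant equilibrium fractionation factor is assumed; in reality it varies
# with temperature and kinetic effects also matter.
ALPHA = 1.0094           # approximate 18O/16O liquid-vapour fractionation near 25 degC
DELTA_VAPOUR_0 = -11.0   # assumed initial vapour composition (permil), illustrative

def rayleigh_precip_d18o(f_remaining, delta_v0=DELTA_VAPOUR_0, alpha=ALPHA):
    """d18O of condensate forming when a fraction f_remaining of the initial
    vapour is left in the air parcel (classical Rayleigh removal)."""
    r0 = delta_v0 / 1000.0 + 1.0                    # ratio relative to the standard
    r_vapour = r0 * f_remaining ** (alpha - 1.0)    # residual vapour ratio
    r_precip = alpha * r_vapour                     # condensate in equilibrium with vapour
    return (r_precip - 1.0) * 1000.0

for f in (0.9, 0.6, 0.3):
    print(f"vapour fraction {f:.1f}: precip d18O ~ {rayleigh_precip_d18o(f):+.1f} permil")
```

Greater rain-out (smaller residual vapour fraction) yields progressively lighter precipitation, which is the sense of the amount effect invoked above.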
Furthermore, modern spatial isotope-climate gradients between multiple sites may not be good predictors of temporal gradients (Schmidt et al., 2007; LeGrande and Schmidt, 2009). As such, simple δ18O interpretations based on the amount effect relationship alone are unlikely to be robust for all sites, and the dominant δ18Op control is likely to be spatially variable and site specific. In some cases, δ18Op changes might be more accurately interpreted in terms of regional hydrological changes (Vuille et al., 2005; Schmidt et al., 2007; LeGrande and Schmidt, 2009).
Source region effects are also an important δ18Op control, through changes in initial vapour source composition and air mass transport distance (Rozanski et al., 1993). The relative amount of continental recycling is a determinant of δ18Op, as plant evapo-transpiration is non-fractionating and retains the composition of local groundwater, resulting in enriched values relative to ocean-derived precipitation (Zimmermann et al., 1967). The location of the source region also influences the extent of condensation undergone by a vapour parcel in transit to the site of precipitation. Locally derived vapour is typically relatively enriched, experiencing less condensation en route than water vapour transported over long distances (Rozanski et al., 1993). There have been various interpretations of δ18O variability in terms of source region effects (Jouzel and Koster, 1996; Masson-Delmotte et al., 2005).
As the relative contribution of vapour sources cannot be directly measured, model studies have been used to demonstrate the importance of source region changes for δ18Op (e.g. Koster et al., 1986; Cole et al., 1999). Studies incorporating back-trajectory modelling of air mass parcels have shown the significance of source regions for seasonal δ18Op compositions (Griffiths et al., 2009; Sjostrom and Welker, 2009). Source tracers from pre-specified regions ("painted water") have also been employed as a GCM diagnostic tool in hydrological studies (Joussaume et al., 1986; Koster et al., 1986; Druyan and Koster, 1989) and in palaeotemperature reconstructions (Johnsen et al., 1989; Jouzel et al., 1997). Noone (2008), for example, considered the impact of multiple drivers of δ18Op variability (initial source, transport pathway and atmospheric mixing) and showed that Antarctic isotopic records reflect changes in mid-latitude circulation. Precipitation source region tracers provide a useful diagnostic for identifying and classifying sites where isotopic variability is characterised by controls other than a clear local amount effect. In particular, source tracers provide a means of recognising both regional, nonlocal amount-effect-dominated localities and those where δ18Op is controlled by distinct shifts in precipitation source.
In this study, we examine the relationship between tropical and high-latitude regions during Heinrich events, using a fully coupled water isotope-enabled atmosphere-ocean GCM. For speleothem sites within the Australian, Indian, EAM and South American (SM) monsoon regions we also use a novel set of Vapour Source Distribution (VSD) tracers, as well as water isotope tracers, as a set of diagnostic tools to assess the nature of δ18Op changes during abrupt climatic excursions (i.e. "hosing"), analogous to an H event. We investigate whether the spatial pattern of modelled δ18Op during Heinrich-like simulations can be attributed to changes in local precipitation amount, monsoon intensity (defined by the vertical shear of the zonal wind), precipitation source regions, or the seasonality of precipitation. Finally, we categorise proxy sites by type, according to the dominant controls on simulated isotopic variability.
Model description
Simulations were made using the coupled atmosphere-ocean GISS (Goddard Institute for Space Studies) ModelE-R. The horizontal resolution is 4° × 5° with 20 vertical levels up to 0.1 hPa in the atmosphere (Schmidt et al., 2006), coupled to a 13-layer Russell ocean model of the same horizontal resolution (Hansen et al., 2007). Atmospheric advection uses the quadratic upstream scheme, with 9 moments advected in addition to mean quantities. The ocean component is non-Boussinesq, mass conserving and uses "natural" boundary conditions at the free surface. The addition of freshwater raises the free surface and reduces salinity through dilution. No equivalent salt fluxes or flux adjustments are used.
Water isotope tracers (H2 16O, "normal" water; HDO, reported as δD; and H2 18O, reported as δ18O) are incorporated into the atmosphere, land surface, sea ice and ocean. Water isotopes are tracked through all stages of the hydrologic cycle and are advected like water throughout the model, but at each phase change a fractionation is applied, explicitly determining equilibrium fractionation and with parameterisations accounting for kinetic fractionations (Schmidt et al., 2005).
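As an illustration of how equilibrium fractionation at a phase change is typically parameterised, the sketch below evaluates a commonly used temperature-dependent liquid-vapour 18O fractionation factor (Majoube, 1971). It is a stand-alone illustration and is not claimed to reproduce the exact coefficients or the kinetic corrections used in ModelE (Schmidt et al., 2005).

```python
import numpy as np

def alpha_liquid_vapour_18o(temp_k):
    """Equilibrium 18O/16O fractionation factor between liquid water and vapour,
    following Majoube (1971): ln(alpha) = 1137/T^2 - 0.4156/T - 0.0020667, T in kelvin."""
    return np.exp(1137.0 / temp_k**2 - 0.4156 / temp_k - 2.0667e-3)

for t_c in (0.0, 15.0, 30.0):
    a = alpha_liquid_vapour_18o(t_c + 273.15)
    print(f"{t_c:4.0f} degC: alpha = {a:.5f} (~{(a - 1.0) * 1000:.1f} permil enrichment of condensate)")
```

The fractionation is larger at colder temperatures, which is one reason the same rain-out history yields different isotopic signals along different transport pathways.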
Water vapour source distribution tracers
The water source tracer methodology employed here is a generalisation of the regional source tracer ("painted water") approach (Koster et al., 1986) but requires no prior definition of regions (Kelley, 2003). We define a suite of VSD tracers in the model, and atmospheric transport and condensation processes alter these analogously to a non-fractionating water isotope tracer. The VSD is the integrated mass of water vapour in each model cell, expressed as an area integral of evaporative input unique to that cell. The VSD can be represented as a weighted sum of basis functions that are orthogonal to one another over the Earth's surface. The surface source of a given member of this new suite of tracers is equal to the evaporation field multiplied by its associated basis function. The sources of water vapour are traced back through any cloud processes to the site of surface evaporation. The precipitation source distribution is a subset of the VSD, defined where vapour condenses to liquid.
This study uses spherical harmonics as VSD basis functions, as these are not anchored to any particular geographic boundary and require no prior definition of regions. The "painted water" approach can be seen as a special case of the VSD tracers using binary basis functions at each gridbox. It should be noted that factors such as land-sea contrasts cause real-world precipitation source distributions not to vary smoothly over planetary scales; as such, the smooth shapes of VSDs cannot be interpreted literally. We include 144 tracers and resolve distributions to wavenumber 11, providing an effective horizontal resolution of vapour sources of approximately 8° × 10°. VSD tracers cannot be employed in a comprehensively quantitative manner for tropical water isotopes, given that convection and mixing processes diminish the validity of a Lagrangian, parcel-style approach to isotopic interpretation. Rather, the utility of the VSDs is as a vapour- and precipitation-weighted circulation diagnostic.
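The tracer count follows directly from the truncation: a spherical-harmonic basis complete to total wavenumber 11 contains (11 + 1)^2 = 144 functions, matching the 144 tracers quoted above. The sketch below only reproduces this bookkeeping and evaluates one harmonic at an illustrative location; the projection of the model's evaporation field onto the basis, which defines each tracer's surface source, is not reproduced here.

```python
import numpy as np
from scipy.special import sph_harm

L_MAX = 11
n_tracers = sum(2 * l + 1 for l in range(L_MAX + 1))     # (L_MAX + 1)**2 = 144 tracers
print(f"spherical-harmonic basis functions up to wavenumber {L_MAX}: {n_tracers}")

# Evaluate one (complex) harmonic Y_l^m at an illustrative point over Borneo.
lat, lon = 4.0, 114.0                  # degrees N / E, illustrative location only
theta = np.deg2rad(lon)                # azimuthal angle
phi = np.deg2rad(90.0 - lat)           # colatitude
print(f"Y_11^3 at (4N, 114E) = {sph_harm(3, 11, theta, phi):.4f}")
```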
Experiment design
The VSD tracers utilised here are computationally expensive, slowing the model by a factor of 10. Thus, we present water isotope results from the coupled model and VSD tracers from atmosphere-only model simulations driven by surface conditions (SST and sea ice) determined from the coupled simulation. A pre-industrial coupled atmosphere-ocean VSD-enabled simulation was conducted to test the validity of the atmosphere-only simulations, indicating only small differences in precipitation source distributions.
Hosing simulations were completed as part of the Paleoclimate Modelling Intercomparison Project (PMIP) experiment to test the sensitivity of the THC to an external source of freshwater (Stouffer et al., 2006). Although hosing experiments are highly idealised and not representative of a particular climatic event, they are useful in examining the response of tropical precipitation to abrupt cooling in the North Atlantic. Following the PMIP protocol (Stouffer et al., 2006), this study applies a freshwater flux (T = 0 °C; S = 0 psu) of 1 Sv (1 Sv = 10^6 m^3/s) uniformly over the Atlantic between 50° and 70° N over 100 model years. Water isotopes are included in these experiments, and thus the freshwater has a specified depletion (δ18O = −30‰) consistent with observational estimates of the composition of ice discharge during H events (Hemming, 2004). A control (0k) simulation, with no freshwater perturbation, was run in parallel with all boundary conditions and atmospheric composition appropriate to the pre-industrial period (ca. 1880).
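For orientation, the prescribed flux can be converted into an equivalent freshwater depth over the forcing region. The sketch below is a rough back-of-envelope estimate only: the fraction of the 50-70° N band occupied by the Atlantic is an assumed round number, not a value taken from the model's land mask.

```python
import numpy as np

R_EARTH = 6.371e6                  # m
SECONDS_PER_YEAR = 3.156e7
FLUX_SV = 1.0                      # prescribed hosing flux, Sv
ATLANTIC_FRACTION = 0.35           # assumed fraction of the 50-70N band that is Atlantic

# Area of the zonal band between 50N and 70N on a sphere.
band_area = 2.0 * np.pi * R_EARTH**2 * (np.sin(np.deg2rad(70)) - np.sin(np.deg2rad(50)))
hosing_area = ATLANTIC_FRACTION * band_area

volume_per_year = FLUX_SV * 1.0e6 * SECONDS_PER_YEAR    # m^3 of freshwater per year
depth_per_year = volume_per_year / hosing_area          # m of freshwater per year

print(f"assumed hosing area ~ {hosing_area / 1e12:.1f} x 10^6 km^2")
print(f"equivalent freshwater input ~ {depth_per_year:.1f} m/yr over the forcing region")
```

Under these assumptions the 1 Sv flux corresponds to roughly 2 m of freshwater per year over the forcing region, which conveys the strongly idealised magnitude of the perturbation.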
Comparable control and hosing simulations were conducted in atmosphere-only mode with VSD tracers enabled. Initial conditions for the VSD-enabled hosing simulation were determined from anomalies calculated from the coupled simulations. Monthly SST and sea ice anomalies were defined as the difference between hosing model years 81 to 100 (where year 1 is the first hosing year) and the pre-industrial control, and these anomalies were applied to the surface boundary conditions of the atmosphere-only simulations.
Definitions
Model results are used to categorise tropical water isotope sites in terms of regional hydrological and circulation changes. Specifically, we define five site types (Table 1; a schematic decision sketch follows this list), where:
1. Local precipitation and isotope changes are consistent with the amount effect, whereby δ18Op is inversely correlated with local rainfall amount. Local precipitation changes are part of a coherent regional pattern and the site is reasonably distant from contours of zero precipitation change.
2. The conditions for Type-1 are not all met, but regional hydrological changes upwind are consistent with a nonlocal amount effect characterised by upwind pre-fractionation (i.e. upwind isotopic fractionation processes occurring prior to rain-out over the specified region). In Type-2 cases, local and upwind precipitation changes are not coherent, and hydrological changes are linked to monsoon intensity variability, as defined below.
3. No amount effect seems to be operating, but significant vapour source shifts can plausibly explain the isotopic changes, and the source shift is consistent with expected circulation changes.
4. Shifts in the seasonality of precipitation produce corresponding changes in annual mean isotope signals. In this case, VSDs may be useful in explaining the control isotopic seasonality, because the isotopic changes and VSDs are co-seasonal.
5. There is no explanation for isotope signals in terms of precipitation, VSDs or seasonality changes.
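Read as a decision sequence, the five definitions can be summarised as in the minimal sketch below. This is only a schematic restatement of Table 1 with hypothetical yes/no inputs; the classifications reported later rest on judgement of the simulated fields, not on an automated procedure.

```python
def classify_site(local_amount_effect, coherent_regional_pattern,
                  upwind_amount_effect, source_shift_explains,
                  seasonality_shift_explains):
    """Schematic reading of the Type-1..5 definitions (Table 1).
    Each argument is a yes/no judgement made from the simulated fields."""
    if local_amount_effect and coherent_regional_pattern:
        return 1    # local amount effect
    if upwind_amount_effect:
        return 2    # nonlocal (upwind) amount effect, e.g. monsoon intensity
    if source_shift_explains:
        return 3    # vapour source (VSD) shift
    if seasonality_shift_explains:
        return 4    # precipitation seasonality shift
    return 5        # no explanation in terms of the above

# Example: no local or upwind amount effect, but a clear source shift -> Type-3.
print(classify_site(False, False, False, True, False))
```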
The classification of some sites is complex; most sites exhibit multiple δ18Op controls, and hence secondary effects are also identified. The categorisation of sites is suggested for hosing-driven δ18Op changes only. As potentially different isotopic controls exist for climatic changes on different timescales (e.g. orbitally driven changes), generalisations of controls are not made.
As mentioned above, "monsoon" is often used to describe a variety of climatic phenomena.In order to examine regional hydrological changes, we define monsoon intensity using the zonal wind shear Webster-Yang (WY) index (Webster and Yang, 1992).The WY index is defined as the westerly wind shear anomaly between 850 hPa and 200 hPa pressure surfaces for June-August (JJA).The strength of the vertical shear is proportional to the strength of convective activity and associated latent heat released during the monsoon season as precipitation.During strong monsoon seasons, the upper air easterly and low-level westerly winds intensify.Conversely, during weak monsoon periods, zonal wind fields relax.This intensity definition from the Asian region is broadened to describe changes in monsoon strength over the South American monsoon area during the austral summer (DJF).We adopt the SM domain definition of Vuille and Werner (2005), which they identify as the centre of monsoonal convection in the region and consider dynamically consistent to the approach of Webster and Yang (1992).Dynamical monsoon indices have been employed previously as a measure of large-scale monsoon intensity changes and to characterise monsoon-isotope relationships (Brown, 2004;Vuille et al., 2005).
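A minimal sketch of the index calculation from seasonal-mean wind fields is given below. The default domain bounds are the commonly quoted Asian WY domain and are an assumption here, as is the simple cosine-latitude weighting; for the South American index, the domain of Vuille and Werner (2005) and the DJF season would be substituted.

```python
import numpy as np

def webster_yang_index(u850_jja, u200_jja, lat, lon,
                       lat_bounds=(0.0, 20.0), lon_bounds=(40.0, 110.0)):
    """Area-mean vertical shear of the zonal wind (U850 - U200) for a seasonal mean.
    u850_jja, u200_jja: 2-D JJA-mean zonal wind fields (lat x lon), m/s.
    lat, lon: 1-D coordinate arrays in degrees."""
    shear = u850_jja - u200_jja
    lat_mask = (lat >= lat_bounds[0]) & (lat <= lat_bounds[1])
    lon_mask = (lon >= lon_bounds[0]) & (lon <= lon_bounds[1])
    region = shear[np.ix_(lat_mask, lon_mask)]
    w = np.cos(np.deg2rad(lat[lat_mask]))          # simple area weighting by latitude
    return float(np.average(region.mean(axis=1), weights=w))

# The monsoon-intensity change quoted in the text is then the hosing-minus-control
# difference of this index for the relevant season.
```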
Results
The GISS ModelE-R simulation is part of the PMIP hosing experiment described by Stouffer et al. (2006). In general, mean GISS climatologies reside within the ensemble range of the participating models. For all comparisons, climatic changes are reported as hosing anomalies relative to pre-industrial values. Specifically, anomalies are determined from mean values in control years 41 to 140 and hosing years 81 to 100. All anomalies reported are greater than 95% significant given the control decadal variability about the 100-year mean. Reconstructed δD from Lake Tanganyika (Tierney et al., 2008) is presented as a δ18O equivalent approximated using the Global Meteoric Water Line (Rozanski et al., 1993). In Fig. 2, the solid line indicates the 1:1 relationship between modelled and observed values.
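The δD-to-δ18O conversion follows directly from the Global Meteoric Water Line, δD = 8 δ18O + 10 (in ‰). The sketch below simply rearranges that relation; the numerical value used is illustrative and is not the Lake Tanganyika data.

```python
def d18o_from_dD(delta_d_permil):
    """Approximate d18O from dD via the Global Meteoric Water Line,
    dD = 8 * d18O + 10 (permil); valid only where the GMWL is a fair description."""
    return (delta_d_permil - 10.0) / 8.0

def d18o_anomaly_from_dD_anomaly(delta_dD_permil):
    """For an excursion (difference of two values) the +10 intercept cancels."""
    return delta_dD_permil / 8.0

print(d18o_anomaly_from_dD_anomaly(16.0))   # an illustrative 16 permil dD excursion -> 2.0 permil d18O
```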
Model δ18Op validation
Comparisons of simulated hosing-driven δ18Op anomalies and measured δ18O records of H events are presented in Table 2 and Fig. 2. For consistency, δ18Op excursions were averaged across all identifiable Heinrich events and estimated as the difference between the average composition before an event and the extreme value during an event. Although the North Atlantic oceanic sediment layers associated with H3 and H6 are considered geochemically unusual, with a contrasting provenance (Hemming, 2004), they are included here as their climatic expressions are comparable. In cases where values are reported from single gridboxes, these are coherent with changes over a broader area. Annual and seasonal δ18O hosing anomalies are shown in Fig. 3. The simulated hosing δ18O spatial pattern is broadly consistent with proxy records (Fig. 2).
In the high latitudes, we simulate depleted precipitation (ANN δ18Op −3.9‰; JJA δ18Op −2.8‰; DJF δ18Op −9.2‰) over Greenland, consistent with H events documented in the GRIP record (Δδ18O −2.3‰; Bond et al., 1993; GRIP Members, 1993). The addition of highly isotopically depleted freshwater to the region and associated changes in the isotopic composition of surface seawater (δ18Osw) contribute to comparatively light regional δ18Op. Further decreases in δ18Osw result from the reductions in northward transport of tropical surface water into the region (Fig. 3). Isotopic enrichments over Antarctica are an order of magnitude lower than those in the high latitudes of the Northern Hemisphere (NH). In the Southern Hemisphere (SH) high latitudes, there are minimal hosing-driven δ18Op changes. At the Byrd ice core site, no statistically significant annual average δ18Op change is simulated, compared to the ∼0.7‰ reconstructed Heinrich shift (Blunier et al., 1998). Conversely, over Taylor Dome, the modelled δ18Op change is a −0.6‰ depletion, similar to the ∼−1.2‰ δ18O shift estimated from proxy data covering H events (Grootes et al., 2001).
In the mid-latitudes, European speleothems covering marine isotope stage 3 are sparse and existing records are typically limited by low calcite growth rates. At Poleva cave in Romania (44°4′ N, 21°5′ E), a δ18O excursion (∼−2.0‰) recorded around H4 is similar to the simulated hosing change (ANN δ18Op −2.9‰), although this stadial is constrained by only four geochemical data points over ∼5 kyr (Constantin et al., 2007).
There are also simulated isotopic shifts that do not directly compare with proxy records. At Hulu cave (32°3′ N, 119°1′ E), modelled δ18Op shows no significant hosing change, compared to a ∼1.4‰ reconstructed enrichment (Wang et al., 2001). Here, modelled modern δ18Op and precipitation are similar to Global Network of Isotopes in Precipitation (GNIP; IAEA/WMO, 2006; Bowen, 2009) and Climate Prediction Center Merged Analysis of Precipitation (CMAP; Xie and Arkin, 1996) data. Hulu lies close to the zero δ18Op anomaly line, and the coarse model resolution utilised may be inadequate here.
In the southwestern USA at Cave of Bells, and at Soreq cave in Israel, simulated hosing changes do not reproduce the reconstructed isotopic signals. Modelled modern precipitation values at Soreq are similar to CMAP data, although δ18Op is relatively enriched (∼2‰) compared to GNIP and dripwater observations (Matthews et al., 2000). The relative enrichment of modelled precipitation compared to observations is likely because the model does not adequately resolve the Strait of Gibraltar, biasing the δ18Osw of the Mediterranean (Gat, 1996; Paul et al., 2001). Rainfall over Soreq is dominated by Mediterranean storm fronts (Bar-Matthews et al., 1996) and modelled rainfall is therefore susceptible to this bias in Mediterranean δ18Osw. As simulated hosing δ18Op changes contradict reconstructed isotopic values from the Soreq cave and Cave of Bells speleothems, classification of the dominant control of simulated δ18Op at these sites is not attempted.
Large-scale climate changes
There are significant simulated global climatic changes following North Atlantic freshwater injection. The mean control NADW formation (the Atlantic overturning streamfunction at 48° N and 900 m depth) is 13 Sv, and the long-term mean simulated THC intensity and decadal-scale variability are within the PMIP ensemble range of 12-25 Sv (Stouffer et al., 2006). In this study, THC collapse occurs after ∼50 years and intensity increases steadily after the forcing is eliminated.
Sea surface temperature anomalies broadly match the ensemble results, with a modelled annual average global cooling of 0.4 °C. It should be noted that many participating PMIP models utilise a "rigid lid" ocean (which "adds" freshwater via equivalent salt fluxes), whereas the GISS model incorporates a free surface, where the added freshwater has a physical temperature of 0 °C, limiting non-physical distortion of the coupling over the gridboxes where freshwater is added. Sea ice extent increases following the NADW shutdown, and the majority of the North Atlantic north of 50° N is ice-covered in wintertime.
The simulated perturbation of the SST gradient across the hemispheres following hosing results in a southward shift of the ITCZ by 1-2 gridboxes (∼4-8° in latitude, Fig. 4). There is a global annual average 0.1 mm/day decrease in precipitation, characterised by an increase in the southern tropics (ANN 0.2 mm/day) and a decrease in the northern tropics (ANN −0.4 mm/day). Precipitation anomalies are seasonally variable. The strongest precipitation decreases around the North Atlantic occur during the winter months (DJF −1.1 mm/day), whilst through the tropics, and particularly over Asia, larger precipitation changes occur during the boreal summer. There are significant hosing-driven changes in modelled oceanic and atmospheric heat transport that impact water vapour transport and the distribution of heavy isotopes in precipitation. The maximum simulated northward heat transport in the Atlantic Ocean is 0.82 PW (1 PW = 10^15 W), within the multi-model ensemble range of 0.7-1.1 PW (Stouffer et al., 2006). Following the freshwater perturbation, northward heat transport in the Atlantic decreases to 0.16 PW at 20° N (near the ensemble mean of 0.13 PW), and there is an overall reduction in total oceanic heat transport during the hosing simulation. Total northward atmospheric heat transport, integrated throughout the atmosphere, generally increases in the hosing simulation from the SH tropics through to the northern mid-latitudes. Hosing-driven increases in atmospheric heat transport do not entirely compensate for the decrease in northward oceanic heat transport, with a ∼0.3 PW deficit simulated.
Using the WY index (Webster and Yang, 1992) to define monsoon intensity, the simulated seasonal zonal wind shear indicates a freshwater-forced increase in convergence and monsoon intensification over the South American region (Fig. 5). Conversely, only a small hosing-driven decrease in zonal wind shear (monsoon intensity) is modelled over the Asian monsoon domain. Significant freshwater-forced changes in the amount of water vapour transported landward from oceanic sources are also simulated, due to changes in both atmospheric wind profiles and humidity. Generally, there is a decrease in the transport of water vapour westward into East Asia under the weakened monsoon circulation. Reductions in landward water vapour fluxes are strongest in the boreal summer, with a large reduction in meridional transport from the tropical west Pacific. Conversely, increases in the landward transport of water vapour from the tropical Atlantic over equatorial South America occur during the hosing simulations.
Precipitation amount and seasonality changes
Over China (shown in Fig. 6), we simulate an overall increase in precipitation (ANN 0.2 mm/day; JJA 0.1 mm/day; DJF 0.9 mm/day). There is a distinct seasonality of control precipitation over the EAM region (defined following Li and Zeng, 2002), with ∼45% of precipitation occurring during the summer months (JJA) and ∼5% during winter, and with an average seasonal isotopic difference of ∼−4.3‰. However, the Chinese speleothem sites lie largely outside the peak area of EAM rainfall and experience a subdued seasonality, with ∼26% of precipitation occurring during summer (JJA) and ∼15% during winter (DJF). Simulated seasonality changes are minimal, with an ∼4% increase in the contribution of Chinese winter rainfall to the annual total. The greatest hosing-driven change in precipitation seasonality occurs at Songjia cave, where there is an overall decrease in precipitation (ANN −1.0 mm/day; JJA −1.2 mm/day; DJF 0.3 mm/day). At Sanbao cave, simulated increases in local precipitation (ANN 0.6 mm/day; JJA 0.7 mm/day; DJF 1.1 mm/day) result in an increase in the relative contribution of winter rainfall to the annual total. Over Hulu cave, where no significant annual precipitation amount change is modelled, there is an increase in the relative contribution of winter precipitation to the annual total, consistent with coastal sedimentary records (Yancheva et al., 2007).
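The seasonal contributions quoted here, and the seasonality effect on the annual isotopic signal that underlies the Type-4 reasoning, are straightforward to compute from monthly output. The sketch below uses made-up monthly values, not model data, and assumes equal month lengths.

```python
import numpy as np

# Illustrative monthly climatologies (Jan..Dec); values are invented, not model output.
precip = np.array([1.0, 1.2, 2.0, 3.0, 4.5, 6.0, 7.0, 6.5, 4.0, 2.5, 1.5, 1.0])   # mm/day
d18o_p = np.array([-3., -3., -4., -5., -6., -8., -9., -8., -6., -5., -4., -3.])   # permil

jja = [5, 6, 7]             # indices of Jun, Jul, Aug
djf = [11, 0, 1]            # Dec, Jan, Feb

jja_fraction = precip[jja].sum() / precip.sum()             # share of annual rainfall
djf_fraction = precip[djf].sum() / precip.sum()
annual_weighted_d18o = np.average(d18o_p, weights=precip)   # precipitation-weighted mean

print(f"JJA contribution: {jja_fraction:.0%}, DJF contribution: {djf_fraction:.0%}")
print(f"precipitation-weighted annual d18Op: {annual_weighted_d18o:.1f} permil")
```

Shifting weight from an isotopically light season to an enriched one changes the precipitation-weighted annual δ18Op even when the unweighted monthly values are unchanged, which is the sense in which seasonality alone can generate an apparent isotopic signal.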
Throughout the Brazil region (Fig. 6), there is a seasonally robust hosing-driven increase in simulated precipitation (ANN 1.1 mm/day; JJA 0.7 mm/day; DJF 1.3 mm/day). There is also a decrease (by ∼22%) in the proportion of winter (JJA) rainfall from ∼33% of the annual simulated control total. Generally, the seasonality of precipitation is greater over Brazil following hosing. The decrease in the contribution of winter precipitation during hosing, which is enriched relative to summer precipitation (Δδ18Op ∼5.3‰), is also associated with depleted δ18Op (ANN δ18Op −2.7‰; JJA δ18Op −1.7‰; DJF δ18Op −3.1‰).
Around the Warm Pool, an annual average decrease in precipitation is modelled over Borneo (ANN −0.5 mm/day; JJA 0.9 mm/day; DJF −2.4 mm/day). Further south at Liang Luar cave, southern Indonesia, the hosing simulations indicate robust year-round precipitation increases (ANN 0.8 mm/day; JJA 0.8 mm/day; DJF 0.6 mm/day). The seasonality of precipitation over Borneo is weak, with warm SSTs driving year-round atmospheric deep convection (Cobb et al., 2007). At both sites, there is a decrease in the seasonality of precipitation after the freshwater perturbation. For example, over Borneo ∼19% of simulated control rainfall occurs during JJA (ΔHosing ∼4%) and ∼37% during DJF (ΔHosing ∼−6%).
At Lake Tanganyika, seasonally variable hosing-driven precipitation changes (ANN 0 mm/day; JJA 0.1 mm/day; DJF −0.9 mm/day) are simulated. Although no statistically significant annual average precipitation amount change is modelled, regional SH increases associated with the ITCZ shift are simulated. The seasonality of precipitation is similar in both the control and hosing simulations. Over Moomi cave, Yemen, year-round decreases in hosing precipitation are simulated (ANN −0.1 mm/day; JJA −0.2 mm/day; DJF −0.1 mm/day), together with a slight decrease (∼5%) in the contribution of winter (JJA) rainfall to the annual total.
Vapour source distributions
For the proxy locations detailed in Fig. 1, we identify hosing precipitation source region changes (Table 3). We define recycled water as water with a continental, rather than oceanic, source. It should be noted that GCMs can overestimate the extent of regional recycling in the hydrological cycle relative to advective moisture sources (Ruiz-Barradas and Nigam, 2006). Mean water vapour transport distances (TD) are calculated as the distance between the mean location of the precipitation source distribution and the proxy site. This provides a lower estimate of the overall air mass TD as curved parcel trajectories cannot be accounted for. The initial source δ18O composition (δ18Osource) is calculated as the average surface δ18O within the simulated source region, including both land and ocean gridboxes.
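The TD defined here is a great-circle distance between the mean source location and the proxy site. A minimal sketch follows, assuming the mean source location has already been obtained as the precipitation-weighted centroid of the source distribution; the coordinates used are illustrative only.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance between two points (degrees) in km."""
    p1, p2 = np.deg2rad(lat1), np.deg2rad(lat2)
    dphi = p2 - p1
    dlam = np.deg2rad(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

# Illustrative only: a mean source location 7.5 degrees of longitude west of a
# site at (6S, 30E), roughly the Lake Tanganyika case discussed below.
print(f"TD ~ {great_circle_km(-6.0, 30.0, -6.0, 22.5):.0f} km")
```

A 7.5° westward shift of the mean source longitude near the equator corresponds to roughly 800 km, consistent with the TD changes quoted for Lake Tanganyika in the following sections.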
Over China, modelled annual control precipitation consists of ∼50% recycled water vapour from continental Asia. Approximately 25% of rainfall is sourced from the northwest Pacific and continental Asia, ∼40% from the Warm Pool region and southern continental China, and ∼20% from the Bay of Bengal (Fig. 6). Compared with winter (DJF), during the summer months (JJA) there is an increase in the transport of vapour from the Indian Ocean (∼27%) and a decrease in Pacific-sourced precipitation (∼25%). Mean modelled precipitation source pathways to China are 380 km more local during winter than during summer. Overall, the modelled sources are consistent with observations indicating that winter precipitation is sourced under different conditions from summer rainfall, with a change to more local western Pacific sources (Araguás-Araguás et al., 1998). The pattern of hosing-driven source region changes to China is dominated by an increase in precipitation with a provenance in the Bay of Bengal (shown in Fig. 6). It should be noted, however, that the seasonal cycle in the control simulation is not a clear analogue for the hosing-driven changes over this region, as the changes associated with hosing are an order of magnitude smaller than those occurring between the seasons, particularly over the East China Sea.
Hosing source region changes are also simulated over areas of eastern Brazil (Fig. 6). Here, 22% of modelled precipitation is locally recycled, with the bulk of precipitation sourced from the Atlantic Ocean and minimal long-distance transport of vapour. Compared with winter (JJA), during the summer months (DJF) there is a greater proportion of precipitation recycling (26%), a northward shift in the mean source location, and a mean transport distance of water vapour that is ∼260 km greater. The modelled source pattern is consistent with modern rainfall observations indicating that winter rainfall incorporates a larger fraction of Atlantic Ocean derived moisture than summer (DJF) rainfall, which is associated with enhanced convective activity over the Amazon (Cruz et al., 2006a). The spatial pattern of mean annual VSD hosing anomalies is similar to the seasonal source shift of the control simulation's seasonal cycle. Both involve a southward ITCZ migration, although the magnitude of the seasonal source anomalies is greater, by a factor of ∼2, than that simulated following freshwater injection. Significant hosing-driven source region changes to individual tropical sites are also simulated (Fig. 7), largely due to freshwater-forced alterations in SSTs and the resulting shifts in the mean ITCZ location. Over Borneo, there is a small hosing-driven reduction in modelled precipitation transported from the Pacific. This is accompanied by an increase in precipitation sourced from southern Indonesia around the Java Sea, which is strongest during the boreal summer, and a shift to more local precipitation sources (∼100 km). The overall size of the source region in the control simulation is larger. Around Lake Tanganyika, there is a distinct change in the source of precipitation, from an Indian Ocean dominated source to a strongly continental and Atlantic influenced source. The mean hosing TD is ∼820 km less than during the control simulation. In addition, there is an increase in the proportion of recycled non-fractionated vapour by ∼6%.
Analyses of VSDs for other sites, including Moomi cave in Yemen (Fig. 7) and Liang Luar cave, southern Indonesia, indicate relatively local precipitation sources with minimal hosing impact on the VSDs.
Site classifications
Given the impact of hosing on climate in these simulations, we attempt to classify each site into one of five categories (Table 1), according to which mechanism is most closely associated with δ18Op variability for the abrupt, H-type events simulated here. Secondary δ18Op effects are also identified. It is possible that these characterisations could differ for different timescales and for different types of variability.
These categorisations include Type-1, where local precipitation and isotopic changes are consistent with the amount effect relationship. Type-2 sites occur where regional upwind hydrological changes, such as in monsoon intensity, are consistent with a nonlocal amount effect. For Type-2 sites, hydrological changes are not equally important throughout the entirety of the VSD, and variations on the upwind fringes are less significant, as little of the vapour reaching the site of rainout passes through the precipitation events on the upwind VSD periphery. Alternatively, Type-3 sites are characterised by significant vapour source shifts, rather than δ18Op amount effect variability. Type-4 localities are defined where large shifts in the seasonality of precipitation produce corresponding δ18Op changes. Sites are classified as Type-5 where there is no explanation for the isotope signals in terms of precipitation, VSDs or seasonality changes. Finally, model results are used to indicate whether the measured isotopic changes are representative of the broader climatic region in which they lie.
China
Chinese speleothem δ18O variability is commonly interpreted as primarily controlled by rainfall seasonality and changes in the intensity of the summer season rainfall (Wang et al., 2001, 2008; Zhou et al., 2008), analogous to the Type-4 category. Local amount effect changes are considered to be a secondary driver of δ18Op (Wang et al., 2001; Zhou et al., 2008).
The simulated large-scale hydrological changes in the EAM region, together with the modelled spatial complexity of the nearby zero precipitation and δ18Op contour lines, indicate that the Chinese sites are best classified as Type-2, with δ18Op variability consistent with nonlocal amount effects such as upwind regional hydrological changes that alter δ18O pre-fractionation.
There is a small decrease in hosing-driven Asian monsoon intensity (defined by the WY zonal wind shear index) during the summer months (JJA). This is associated with relatively local precipitation sources and enriched precipitation. There is a seasonally robust enrichment of the incoming δ18Ov (the δ18O of water vapour) to the region. The summer monsoon weakening is accompanied by a decrease in δ18Ov transported landward from oceanic sources, with a reduction in transport from the tropical west Pacific. There is a hosing-driven decrease in regional incoming water vapour in summer of ∼78×10^6 kg/s (or ∼4%). During the summer months, the vapour originating from the Indian Ocean sector traverses the area of significant precipitation decreases over Bangladesh and Southeast Asia (Fig. 4), resulting in less pre-fractionation.
Simulated source region shifts to China (Fig. 6), particularly vapour increases from the Bay of Bengal, may also indicate that the VSD has a secondary influence at these sites (Type-3). The gain in Bay of Bengal sourced vapour during hosing is greatest in the winter season. Conversely, decreases from this source are simulated in summer, although larger source reductions from the East China Sea occur. The VSD shift is associated with a slight evaporative source enrichment relative to the control (ΔANN δ18Osource ∼0.3‰).
The spatial complexity of the regional precipitation amount and δ18Op changes indicates that the Chinese sites are not typically Type-1, local amount effect dominated. Modelled δ18O changes are not singularly consistent with changes in the relative seasonal proportions of rainfall over these regions, and hence these sites are not considered typically Type-4, seasonality driven. In addition, the modern seasonal cycle does not seem to be a good analogue for hosing-driven VSD changes over China, and this is likely also the case for δ18Op variability.
Ultimately, sites within China present a complex spatial pattern of precipitation changes. The Chinese VSD is complex and straddles an area of variable hosing-driven precipitation changes across both China and the west Pacific. It should also be noted that the small magnitude of isotopic changes occurring near the coast suggests that cave sites may not necessarily be representative of regional processes. Similarly, the proxy sites are situated outside the area of peak EAM influence and are not necessarily indicative of broader changes to the south or west.
Brazil
Interpretations from the Rio Grande do Norte, Santana and Botuverá caves in South America predominantly employ the amount effect relationship (Type-1) (Wang et al., 2006; Cruz et al., 2009). Additionally, the seasonality of precipitation is cited as a secondary δ18Op driver, through associated changes in the evaporative origin of precipitation (Cruz et al., 2005, 2006a).
In agreement with these interpretations, the simulated hosing precipitation increases and depleted δ18Op over Brazil are consistent with categorisation as Type-1 sites. Regional precipitation changes are associated with the simulated hosing southward ITCZ shift, which is most prevalent over the Atlantic and impacts precipitation over eastern tropical South America.
Although the caves within the Brazil region are the clearest examples of Type-1, local amount effect dominated sites, regional hosing-driven increases in the landward transport of water vapour are also simulated. Using the broadened WY wind shear index, monsoon intensification is modelled over the SM region (Fig. 5), impacting both precipitation amount and δ18Op. Increases in vapour flux under a strengthened monsoon system also correspond to generally light δ18O values (Δδ18Ov ∼−1.5‰, compared to a control δ18Ov of ∼−16‰) in vapour transported landward from the relatively depleted tropical Atlantic into northeastern South America. The relative depletion of the Atlantic source region (Δδ18Osource ∼−1.9‰), resulting from the injection of isotopically light surface waters, also contributes to anomalously depleted precipitation in the region.
The South American cave sites are situated along the Atlantic coast, removed from the peak of monsoonal rainfall, and are not necessarily regionally representative. Opposite-signed hosing-driven isotopic and precipitation anomalies (Figs. 3 and 4) are simulated directly west of the Rio Grande do Norte, Santana and Botuverá cave sites. This regional antiphasing is related to the location of the ITCZ, which is displaced during the hosing simulation.
Borneo
Isotopic enrichment in Borneo speleothems during H events is interpreted as reflecting rainfall amount changes (Type-1) driven by fluctuations in the mean position and intensity of the ITCZ (Partin et al., 2007). Simulated average decreases in precipitation (ANN −0.5 mm/day; JJA 0.9 mm/day; DJF −2.4 mm/day) and δ18Op enrichment (ANN δ18Op 0.6‰; JJA δ18Op 0‰; DJF δ18Op 0.6‰) over the northern Warm Pool region are broadly consistent with interpretations of regionally coherent drying during H events under the amount effect. As the precipitation and isotopic anomalies are consistent with the amount effect, Borneo is classified as Type-1.
In these hosing simulations, Borneo lies close to the zero hosing precipitation and δ18O contours, particularly in summer (JJA), and may have secondary, nonlocal δ18Op controls. The VSD indicates that precipitation to Borneo has a relatively local source, which does not support a Type-2 categorisation (Fig. 7). There may be a secondary effect of transport distance changes on Borneo δ18Op (Type-3, source region influenced), as the mean TD decreases by ∼100 km, likely resulting in less rain-out of heavy isotopes during transport and yielding enriched δ18Op. Similarly, the simulated source shift tends to bring moisture from a sector of enriched vapour (δ18Osource ∼−0.2‰). Furthermore, the difference in the direction of precipitation amount changes between summer and winter during the hosing simulation highlights possible Type-4 seasonality effects.
Overall, Borneo is predominantly a Type-1 site, with changes in TD (Type-3) a secondary contributor to variability.
Lake Tanganyika
The amount effect is cited as the primary control on isotopic variability in the Lake Tanganyika isotope record, with moisture source origin and transport history a lesser consideration (Tierney et al., 2008). Here, we simulate an enrichment of annual average local precipitation (ANN δ18Op 1.3‰; JJA δ18Op 0.1‰; DJF δ18Op 2.4‰), together with seasonally variable precipitation changes (ANN 0 mm/day; JJA 0.1 mm/day; DJF −0.9 mm/day). Although annually this site lies on a front of zero precipitation change and the average simulated rainfall amount within the overlying gridbox is unchanged, regional increases associated with the ITCZ shift are simulated (ANN 0.2 mm/day; JJA 0.3 mm/day; DJF −0.1 mm/day). Overall, the modelled hosing-driven precipitation changes are inconsistent with site interpretations of dry H events.
The significant westward shift of the VSD during the hosing simulation suggests that Lake Tanganyika is a Type-3 site, whereby source shifts are most strongly associated with the δ18Op changes (Fig. 7). The mean vapour TD is reduced by ∼800 km during the hosing simulation, which diminishes rain-out and pre-fractionation, with the mean longitude of the VSD shifting ∼7.5° westward. There is also an overall increase in the proportion of non-fractionated recycled continental vapour of ∼6%. The estimated initial composition of the hosing source to Lake Tanganyika is ∼0.3‰ enriched relative to the control. The simulated transport and VSD changes are associated with an enriched water vapour composition and likely dominate the hosing-driven δ18Op anomaly. Modern observations also indicate that enriched moisture originating from the Congo Basin contributes to comparatively high observed δ18Op values over eastern equatorial Africa (Levin et al., 2009).
The increased contribution of the western continental and Atlantic source during hosing is regionally robust throughout SH tropical eastern Africa and likely results from the increase in SST in the south Atlantic, which is an ensemble-wide hosing feature (Stouffer et al., 2006). The VSD shifts and associated simulated changes in δ18Op do not support suggestions that source regions for sites dominated by moisture from the Indian Ocean were insensitive to H-event climatic changes (Verschuren et al., 2009).
Moomi cave, Yemen
Oxygen isotopic variability at Moomi cave is interpreted in terms of rainfall amount, driven by changes in the latitudinal position of the ITCZ and the intensity of ITCZ convection (Shakun et al., 2007). The interpretation of δ18Op enrichment at this site as an abrupt drying event is consistent with the simulated decreases in hosing precipitation (ANN −0.1 mm/day; JJA −0.2 mm/day; DJF −0.1 mm/day). Furthermore, the modelled δ18Op enrichment (ANN δ18Op 0.2‰; JJA δ18Op 0.4‰; DJF δ18Op 0.1‰) is consistent with a local amount effect (Type-1).
Because Moomi cave resides near the zero anomaly lines for hosing precipitation and δ18Op, isotopic variability may also record more regional amount effects (Type-2). There is a decrease in the rainfall occurring within the Moomi VSD during the hosing simulation, potentially reducing the degree of isotopic pre-fractionation and increasing δ18Op enrichment. The simulated VSD indicates that precipitation sources are relatively local (Fig. 7).
Liang Luar cave, Indonesia
In southern Indonesia, the coincidence of growth hiatuses with H2 and H3 may not reflect a suggested local drying (Lewis et al., 2010). Hosing simulations indicate robust year-round precipitation increases at this site (ANN 0.8 mm/day; JJA 0.8 mm/day; DJF 0.6 mm/day), consistent with wet events reported from northeastern Australia (Muller et al., 2008) and Indonesia (Westaway et al., 2007). Precipitation increases may lead to changes in cave hydrology that preclude the deposition of calcite (Fairchild et al., 2006).
Generally, the simulated rainfall increases and concomitant δ18Op depletion (ANN δ18Op −0.7‰; JJA δ18Op −0.8‰; DJF δ18Op −0.7‰) are consistent with a Type-1, local amount effect classification. Furthermore, the VSD indicates local evaporative sources, with minimal long-distance vapour transport. Although this site lies close to the hosing contours of zero δ18Op and precipitation change, it is representative of regional SH hydrological changes, including increased, isotopically depleted rainfall under an intensified Australian summer monsoon.
Discussion
The simulated hosing climatic changes over the different proxy sites indicate a range of δ18Op controls that operate on a variety of spatial scales. It should be noted that the characterisation of sites might vary for different types of climatic changes occurring on other timescales. The categorisation of sites indicates that for those residing within the tropical weather regime year-round, such as those situated around the Warm Pool region (Borneo and Liang Luar cave), Moomi cave in Yemen and sites within northeastern Brazil, simulated δ18Op responds primarily to local rainfall amount changes. Although the distinction between Type-1 and Type-2 sites is at times subtle, the Type-2 classification is adopted for sites influenced by more regional hydrological changes, such as monsoon intensification or weakening.
We distinguish between local precipitation amount and monsoon intensity using the WY index of zonal wind shear changes. The WY index for the Asian region is more closely associated with regional precipitation changes and is more useful than local precipitation changes alone for describing δ18Op changes over China. In this region, local precipitation amount and monsoon intensity are not necessarily synonymous. Over China, simulated local precipitation and δ18Op exhibit a complex spatial pattern of changes during hosing, and the hydrological responses are seasonally and spatially variable. However, modelled δ18Op enrichment is associated with monsoonal upwind changes within the VSD, and local rainfall over the defined Chinese region (Fig. 6) is likely influenced by nonlocal pre-fractionation.
In addition, speleothems do not directly record δ18Op variability. Though incorporating water isotope tracers brings ModelE a step closer to allowing model-proxy comparisons, there are multiple processes impacting δ18Ocalcite that can be site specific. These include δ18Op and cave temperature variability, and internal cave hydrological dynamics (Fairchild et al., 2006). Changes in the epikarst and cave environment are usually minor δ18Ocalcite drivers; however, some monsoon-influenced sites undergo a >5 °C (equivalent to >∼1‰ δ18Ocalcite) seasonal temperature cycle (Johnson et al., 2006), which complicates primary climatic signals. Also, at sites with erratic monitoring programmes, it may be unclear whether calcite growth integrates an annual or a seasonal signal, making model comparisons difficult. Site-specific forward models incorporating calcite precipitation processes could further improve proxy-model δ18O comparisons.
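The magnitude of the cave-temperature overprint quoted above can be estimated with a nominal slope for the calcite-water oxygen isotope fractionation. The value of about −0.23‰ per °C used in the sketch below is a commonly quoted approximation, not a calibration from any specific cave study, and is assumed here only to reproduce the order of magnitude implied by the text.

```python
# Rough magnitude of the cave-temperature overprint on speleothem d18O_calcite.
# A nominal slope of about -0.23 permil per degC is assumed for the calcite-water
# 18O fractionation; this is a commonly quoted approximation, not a site calibration.
CALCITE_TEMP_SLOPE_PERMIL_PER_C = -0.23

def calcite_d18o_temperature_effect(seasonal_temp_range_c):
    """d18O_calcite range implied by a cave-temperature seasonal cycle alone."""
    return abs(CALCITE_TEMP_SLOPE_PERMIL_PER_C) * seasonal_temp_range_c

print(f"~{calcite_d18o_temperature_effect(5.0):.1f} permil for a 5 degC seasonal cycle")
```

A 5 °C seasonal cycle thus maps onto roughly 1‰ of δ18Ocalcite variability, comparable to some of the climatic signals of interest, which is why such sites complicate direct model-proxy comparison.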
The experimental design here represents a highly idealised version of a Heinrich event. We apply a uniform freshwater injection across a large Atlantic area, which is not necessarily representative of iceberg discharge into the North Atlantic (Hemming, 2004), though it does produce a comparable scenario in which the North Atlantic region cools by approximately the right amount. Modelled δ18Op excursions are not consistent with proxy changes in all records, and mismatches may occur over areas characterised by steep topography or where the simulation of precipitation fields is poor. Mismatches over sites such as Hulu and Soreq caves and Cave of Bells are attributed to inadequacies of the coarse model resolution utilised. Site δ18Op categorisation is only attempted for tropical locations where simulated and measured isotopic values broadly agree.
This study is enhanced by the incorporation of general VSD tracers into the model. However, in order to diagnose comprehensively the relative contributions of different controls on δ18Op, specific H2 18O source distribution tracers are required. With additional VSD tracers for the isotopes themselves, the impact of TD and source region changes on precipitation composition could be addressed more quantitatively; however, these tracers are prohibitively expensive to run. Such additions to the VSD tracers may also allow the cause of mismatches between simulated and reconstructed δ18O changes to be identified. Site classifications in this study also do not explicitly account for the mixing of air from spatially dispersed evaporative sources along the transport route. The degree of mixing encountered by an air mass en route from the evaporative source to the site of precipitation is an important control on δ18Op, although it is difficult to constrain. Previously, the impact of circulation strength changes on δ18Op has been better constrained through model sensitivity studies (Noone, 2008), and future analyses adopting similar approaches may clarify the impact of mixing during transport on δ18Op and provide a further type categorisation.
Conclusions
We simulate a shutdown of the THC after freshwater injection into the North Atlantic, as an analogue to a Heinrich event. Modelled hosing precipitation fields demonstrate a distinct fingerprint of climatic change, including a southward shift in the ITCZ, in agreement with the PMIP multi-model results (Stouffer et al., 2006). Simulated hosing climatic perturbations include a pattern of depleted δ18Op values across the tropical southern Pacific, Atlantic and Indian Oceans, the West Pacific Warm Pool, eastern South America and southern Africa. Conversely, enriched isotopic values are modelled over most of southern Asia and central Africa, corresponding to increased precipitation around the ITCZ. Furthermore, changes in monsoon intensity and the associated water vapour fluxes are modelled, including an SM intensification and a small reduction in Asian monsoon intensity. The dynamical index utilised here (the WY index of zonal wind shear anomalies) is useful in diagnosing monsoon changes. Quantifying monsoonal changes in this way disambiguates the use of the term and reinforces that monsoon changes do not necessarily equate to local rainfall amount variability.
Water isotope archives demonstrate a fairly coherent pattern of isotopic changes during Heinrich events, where spatially proximate measurement sites show congruent signals.
Comparisons of reconstructed Heinrich event δ18O and simulated hosing δ18Op excursions indicate areas of broad model-data agreement, particularly over China and Brazil. To the extent that simulated patterns of change agree with proxy reconstructions, model results can confirm whether the measured isotope changes are representative of the broader climatic region in which they lie. This spatial representation also provides another way to constrain modelled NADW responses (e.g. LeGrande et al., 2006).
As δ18Op integrates a complete air mass history, from source to rain-out, speleothems record a complex history of climatic change and require detailed interpretation. Site-specific VSDs are shown to be a valuable circulation diagnostic and should be considered when interpreting hydrological changes from water isotope proxy records. We attempt to categorise proxy sites according to the dominant influences on simulated δ18Op variability, including changes in local and nonlocal rainfall amount, precipitation seasonality and VSDs. Site classification is complicated in some instances; most sites exhibit multiple influences, and secondary δ18Op effects are identified.
For coastal sites or tropical areas associated with the ITCZ rains, such as around northeastern Brazil and the Warm Pool, isotopic variability likely reflects local rainfall intensity changes through the amount effect (Type-1). Other sites, such as those within China, lie near contours of zero hosing δ18Op or precipitation change and record a nonlocal amount effect due to upwind changes (Type-2). Modelled VSDs are useful in identifying such nonlocal amount effect influences on δ18Op. Finally, Lake Tanganyika is categorised as Type-3, where significant westward hosing-driven precipitation source shifts control δ18Op variability through changes in the degree of pre-fractionation and the relative enrichment of the non-fractionating continental moisture source. No sites are primarily characterised by seasonality-driven δ18Op changes (Type-4), a control that has been invoked in proxy interpretations.
Fig. 3. Annual average hosing δ 18 O p and δ 18 O sw changes and seasonal δ 18 O p anomalies for boreal summer (JJA) and winter (DJF). All values reported are greater than 95% significant (Student's t-test) given the decadal variability about the 100-year mean. Global mean anomalies are given at the top right of each panel.
Fig. 4. Annual and seasonal (JJA and DJF) average hosing anomalies for SAT (°C, left) and precipitation (mm/day, right). All values reported are greater than 95% significant (Student's t-test) given the decadal variability about the 100-year mean. Global mean anomalies are given at the top right of each panel.
Fig. 6. Control precipitation source distributions (0k), and hosing and modern seasonal precipitation source region anomalies for China (26-34° N, 105-120° E, DJF-JJA) and Brazil (4-30° S, 40-50° W, JJA-DJF). Solid rectangular boxes indicate the location of end member precipitation and dashed boxes indicate the Bay of Bengal source region to China. Note that seasonal and hosing anomalies are plotted on different scales. VSDs are unitless probability density functions, normalised by the maximum probability density.
Table 1. Summary of site types defined in terms of hydrological controls on δ 18 O p .
Depleted δ 18 O p values are simulated across the tropical southern Pacific, Atlantic and Indian Oceans, the West Pacific Warm Pool, eastern South America and southern Africa. Conversely, enriched values are modelled over most of southern Asia and central Africa, corresponding to increased precipitation associated with the southward migration of the ITCZ. Proxy records from the East Asian and Indian monsoon domains consistently exhibit enriched δ 18 O calcite values during H events, and generally enriched precipitation is modelled over China (ANN δ 18 O p 0.6‰) and India (ANN δ 18 O p 1.4‰). Simulated hosing δ 18 O p enrichment is largest during the boreal summer (China JJA δ 18 O p 0.6‰; India JJA δ 18 O p 2.4‰). Anomalies in this region are spatially complex, with a seasonally robust hosing zero δ 18 O p front transecting China, with positive anomalies to the south and negative to the north. The correspondence of modelled and reconstructed δ 18 O anomalies is variable. Excursions in simulated δ 18 O p are consistent with reconstructed values at Songjia cave (Zhou et al., 2008), although modelled δ 18 O enrichments are lower than proxy excursions at Sanbao (31° 4 N, 110° 3 E; Wang et al., 2008) and Moomi cave in Yemen (Shakun et al., 2007). Over Brazil, modelled isotopically depleted precipitation (ANN δ 18 O p −2.7‰; JJA δ 18 O p −1.7‰; DJF δ 18 O p −3.1‰) is consistent with H event reconstructions. At Rio Grande do Norte (Cruz et al., 2009) and Santana (Cruz et al., 2006b) caves, simulated annual δ 18 O p is greater than the reconstructed average δ 18 O change over H events. This discrepancy in δ 18 O magnitude may relate to the method of averaging proxy excursions, which excluded extreme δ 18 O calcite values represented by a single data point. Further agreement between reconstructed and modelled isotopic values occurs over Borneo (4° N, 114° E;
Table 2. Comparisons of isotopic proxy records with annual average modelled δ 18 O p and precipitation hosing anomalies at relevant gridboxes. Proxy data excursions are estimated from average baseline δ 18 O values prior to the timing of a Heinrich event and averaged across all identifiable excursions.
Wagner et al., 2010), the reconstructed (a δ 18 O calcite excursion of −0.8‰) and simulated (ANN δ 18 O p −1.2‰; JJA δ 18 O p −2.5‰; DJF δ 18 O p 0‰) excursions are comparable. However, contemporary monitoring indicates that calcite precipitates only during the winter months (DJF) due to high summer evaporation and runoff. Modelled winter precipitation anomalies indicate insignificant δ 18 O p changes, in contrast with reconstructed values. This discrepancy may indicate that the large-scale hydrological changes occurring during abrupt events allow a calcite deposition regime significantly different from the modern one. In this case, a comparison of annual average hosing values with the modern winter average may be more valid, although this indicates a δ 18 O p enrichment of ∼0.8‰. Alternatively, the discrepancy may result from model inadequacies around this comparatively high altitude site (1700 m a.s.l.) or a seasonal bias in modelled precipitation fields. At Soreq cave (31° 3 N, 35° E), reconstructed Heinrich δ 18 O calcite values indicate a 0.5‰ enrichment, while modelled δ 18 O p values record a 1.2‰ depletion (Bar-Matthews
Table 3. Summary of modelled hosing precipitation source region impacts on ultimate δ 18 O p (control, 0k, and hosing anomalies, Hosing), including estimated initial source region composition (δ 18 O source ), the fraction of precipitation derived from continental recycling, the hosing change in mean precipitation source latitude and longitude, the mean vapour transport distance, and suggested site classifications including primary δ 18 O p controls and secondary effects.
Transient and steady-state selection in the striatal microcircuit
Although the basal ganglia have been widely studied and implicated in signal processing and action selection, little is known about the active role the striatal microcircuit plays in action selection in the basal ganglia-thalamo-cortical loops. To address this knowledge gap we use a large-scale three-dimensional spiking model of the striatum, combined with a rate-coded model of the basal ganglia-thalamo-cortical loop, to assess the computational role the striatum plays in action selection. We identify a robust transient phenomenon generated by the striatal microcircuit, which temporarily enhances the difference between two competing cortical inputs. We show that this transient is sufficient to modulate decision making in the basal ganglia-thalamo-cortical circuit. We also find that the transient selection originates from a novel adaptation effect in single striatal projection neurons, which is amenable to experimental testing. Finally, we compared transient selection with models implementing classical steady-state selection. We challenged both forms of model to account for recent reports of paradoxically enhanced response selection in Huntington's disease patients. We found that steady-state selection was uniformly impaired under all simulated Huntington's conditions, but transient selection was enhanced given a sufficient Huntington's-like increase in NMDA receptor sensitivity. Thus our models provide an intriguing hypothesis for the mechanisms underlying the paradoxical cognitive improvements in manifest Huntington's patients.
INTRODUCTION
Finding the neural substrate for the process of "selection" is key to furthering our understanding of decision-making (Ding and Gold, 2013), action selection (Mink, 1996; Grillner et al., 2005), planning (Houk and Wise, 1995), action sequencing (Jin and Costa, 2010), and even working memory (Gruber et al., 2006). A unifying proposal is that the basal ganglia form just such a generic selection mechanism (Redgrave et al., 1999); this proposal neatly explains why the basal ganglia have been hypothesized to contribute to each of these functions. But specifying the computational process of selection by the basal ganglia is challenging (Berns and Sejnowski, 1998; Gurney et al., 2001a,b; Humphries et al., 2006; Leblois et al., 2006).
A particular unknown is the computational role of the basal ganglia's input nucleus, the striatum. The striatum's GABAergic projection neurons comprise the vast majority of cells and are connected by local collaterals of their axons (Wilson and Groves, 1980). The lack of layers or of clear axial preferences in the direction of dendrites or axons suggests that striatal tissue is homogeneous in all three dimensions (Humphries et al., 2010). Such GABAergic connectivity naturally lends itself to the idea that the striatum forms a vast recurrent network that, locally, implements a winner-takes-all computation (Alexander and Wickens, 1993;Fukai and Tanaka, 1997;Wickens, 1997). The weak strength of synapses between the projection neurons (Jaeger et al., 1994;Czubayko and Plenz, 2002;Tunstall et al., 2002) is difficult to reconcile with this proposal (Plenz, 2003), as they suggest projection neuron output can only modulate ongoing activity and not outright inhibit their targets.
Here we report an alternative, transient form of selection that can occur in weak, sparse networks of striatal projection neurons. Using our three-dimensional network model of distance-dependent connections in the striatal microcircuit (Humphries et al., 2009b, 2010), we explored the effect on striatal output of competing inputs to two projection neuron populations. We found that rapidly stepped input to one population caused a transient competitive effect on the two populations' outputs, which disappeared after around 100 ms. In response to the same inputs, we also found that sufficiently dense striatal connectivity could result in steady-state competition, where the post-step equilibrium activity of each population reflects the inhibition of one by the other. To compare transient and steady-state selection we challenged both forms of model to account for the paradoxical response selection results of Beste et al. (2008). They found that manifest Huntington's disease patients were both faster and less error-prone than controls on a simple two-choice reaction-time task. As Huntington's disease primarily results in striatal damage, this suggests the hypothesis that changes in the striatum directly affect response selection. We expand on the role of the striatum in signal selection by describing a framework for signal selection that may account for both the typical decline in performance for most tasks under Huntington's conditions (Ho et al., 2003), and a mechanism for increased performance under the same conditions. We thus explored how Huntington's disease-like changes to our striatum models could affect both transient and steady-state selection, and sought whether the effect on either form of selection could explain the results of Beste et al. (2008), while also accounting for the usual cognitive impairment in Huntington's disease (Lawrence et al., 1998; Ho et al., 2003).
MATERIALS AND METHODS
We study here an updated version of our prior, full-scale model of striatum (Humphries et al., 2009b, 2010). Compared to those models, the model here brings together the three-dimensional anatomy model from Humphries et al. (2010) with an updated version of the dopamine-modulated projection neuron model from Humphries et al. (2009a).
SPIKING NEURON MODELS
The basic model neuron used in the large scale striatal model is derived from the model neuron proposed in Izhikevich (2003), which was extended to encompass the effects of dopamine modulation on intrinsic ion channels and synaptic input in Humphries et al. (2009b).
In the biophysical form of the Izhikevich model neuron, v is the membrane potential and the "recovery variable" u is the contribution of the neuron class's dominant ion channel; their dynamics are given by the membrane-potential equation (Equation 1) and the recovery equation (Equation 2), together with a reset condition applied after each spike. In the equation for the membrane potential (Equation 1), C is capacitance, v r and v t are the resting and threshold potentials, I is a current source, and c is the reset potential. Parameter a is a time constant governing the time scale of the recovery due to the dominant ion channel. Parameters k and b are derived from the I-V curve of the target neuron behavior, where b describes how sensitive the recovery variable u is to fluctuations in the membrane potential v. Parameter d describes the after-spike reset of recovery variable u, and can be tuned to modify the rate of spiking output.
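For concreteness, a minimal forward-Euler implementation of this neuron model is sketched below. The update rules follow the standard biophysical-form Izhikevich model named above; the specific parameter values, the spike-peak cutoff vpeak and the injected current are illustrative placeholders rather than the fitted values of Table 1.

```python
import numpy as np

def izhikevich_neuron(I, dt=0.1, C=50.0, k=1.0, vr=-80.0, vt=-30.0,
                      a=0.01, b=-20.0, c=-55.0, d=150.0, vpeak=40.0):
    """Simulate the biophysical-form Izhikevich neuron (Equations 1-2).

    I  : array of injected current (pA), one value per time step
    dt : time step (ms)
    Returns the membrane-potential trace v and a list of spike times (ms).
    Parameter values are illustrative, not the fitted SPN/FSI values.
    """
    n = len(I)
    v = np.full(n, vr)
    u = 0.0
    spikes = []
    for t in range(1, n):
        dv = (k * (v[t-1] - vr) * (v[t-1] - vt) - u + I[t-1]) / C
        du = a * (b * (v[t-1] - vr) - u)
        v[t] = v[t-1] + dt * dv
        u += dt * du
        if v[t] >= vpeak:          # spike detected: apply the reset condition
            v[t] = c
            u += d
            spikes.append(t * dt)
    return v, spikes

# Example: 1 s of constant 300 pA injected current
v_trace, spike_times = izhikevich_neuron(np.full(10000, 300.0))
```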
Projection neuron model
The projection neuron models' parameter values and their source are given in Table 1. Parameters C, d, v t , and the AMPA synaptic conductance g ampa (see below) were found by searching for the best-fit to the f-I curve and spiking input-output functions of the Moyer et al. (2007) 189-compartment projection neuron model (Humphries et al., 2009a).
In Humphries et al. (2009a) we showed how this model can capture key dynamical phenomena of the projection neuron: the slow rise to first spike following current injection; paired-pulse facilitation lasting hundreds of milliseconds; and bimodal membrane behavior emulating up- and down-state activity under anaesthesia and in stimulated slice preparations.
Fast-spiking interneuron model
For the FSI model, the recovery-variable Equation (2) takes the nonlinear form given by Izhikevich (2007b), which enables the FSI model to exhibit Type 2 dynamics, such as a non-linear step at the start of the current-frequency curve between 0 and 15-20 spikes/s. Further discussion of the FSI model used in the striatal microcircuit can be found in Humphries et al. (2009b); the FSI model parameters are reproduced in Table 2, where values are fitted to Bracci et al. (2002) and Gorelova et al. (2002), dimensions are given where applicable, and further details are given in Humphries et al. (2009b).
Dopaminergic modulation of intrinsic ion channels
Tonic levels of dopamine in the striatum modulate the excitability of the projection neurons and fast-spiking interneurons (Nicola et al., 2000; Mallet et al., 2006). Our network model incorporates modulation by tonic dopamine through the relative activation levels of D1 and D2 receptors. These levels are modeled using the method proposed in Humphries et al. (2009b), in which complex membrane dynamics are subsumed by linear transforms with only two parameters φ 1 , φ 2 ∈ [0, 1], describing the proportion of D1 and D2 receptor activation, respectively. Throughout we used φ 1 = φ 2 = 0.3. For activation of D1 receptors on projection neurons we used two simple linear mappings, which respectively model the D1-receptor-mediated enhancement of the inward-rectifying potassium current (KIR) (Equation 4) and the enhancement of the L-type Ca 2+ current (Equation 5).
For activation of D2 receptors on projection neurons we used a mapping (Equation 6) that models the small inhibitory effect on the slow A-type potassium current, increasing the neuron's rheobase current (Moyer et al., 2007). With these mappings, the model neuron is able to accurately capture the effect of D1 or D2 receptor activation on both the f-I curves and spiking input-output functions of the Moyer et al. (2007) compartmental model of the projection neuron.
Dopamine-modulated fast-spiking interneurons in the striatal network only express the D1 family of receptors (Centonze et al., 2003). Activation of this receptor depolarizes the neuron's resting potential [see Humphries et al. (2009b) for further details]. Thus we used a further linear mapping of the resting potential.
SYNAPTIC MODELS
Synaptic input comprises the source of current I in Equation (1), expressed in Equation (8) as the sum of I ampa , I gaba and I nmda , the current inputs from AMPA, GABA, and NMDA receptors respectively, with B(v) a term that models the voltage-dependent magnesium plug in the NMDA receptors. Compared to the projection neuron, FSIs receive no NMDA receptor input from cortex, have a moderately larger AMPA conductance (Table 2), but do receive input via local gap junctions (see below). Each synaptic input type z (where z is one of ampa, nmda, gaba) is modeled as a conductance-based current, where ḡ z is the maximum conductance and E z is the reversal potential. We use the standard single-exponential model of post-synaptic currents, where τ z is the appropriate synaptic time constant and S z (t) is the number of pre-synaptic spikes arriving at all the neuron's receptors of type z at time t. Given that one interest here is in the possible roles of striatal NMDA sensitivity in Huntington's disease, we paid careful attention to two complexities of the NMDA receptor: its non-linear voltage-gating, and its saturation. The term B(v) in Equation (8), which models the voltage-dependent magnesium plug in the NMDA receptors, follows Jahr and Stevens (1990), where [Mg 2+ ] 0 is the equilibrium concentration of magnesium ions.
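The receptor currents and the voltage-dependent magnesium block can be sketched as follows. The 3.57 mM and 0.062 mV-1 constants are the standard Jahr and Stevens (1990) parameterisation; the conductances and reversal potentials shown are placeholders rather than the values of Tables 1-3.

```python
import numpy as np

def mg_block(v, mg0=1.0):
    """Voltage-dependent NMDA magnesium block B(v), after Jahr & Stevens (1990)."""
    return 1.0 / (1.0 + (mg0 / 3.57) * np.exp(-0.062 * v))

def receptor_current(v, g_max, h, E_rev):
    """Conductance-based current for one receptor type: g_max * h * (E_rev - v)."""
    return g_max * h * (E_rev - v)

def total_synaptic_current(v, h_ampa, h_nmda, h_gaba,
                           g_ampa=0.4, g_nmda=0.2, g_gaba=0.5,
                           E_exc=0.0, E_gaba=-60.0):
    """Total synaptic current I = I_ampa + I_gaba + B(v) * I_nmda (Equation 8).

    h_* are the activation variables for each receptor type; the conductance
    and reversal-potential values here are illustrative placeholders.
    """
    I_ampa = receptor_current(v, g_ampa, h_ampa, E_exc)
    I_nmda = receptor_current(v, g_nmda, h_nmda, E_exc)
    I_gaba = receptor_current(v, g_gaba, h_gaba, E_gaba)
    return I_ampa + I_gaba + mg_block(v) * I_nmda
```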
As glutamate can remain locked into the NMDA receptor for 100 ms or more (Lester et al., 1990), the pool of available receptors becomes rapidly saturated at high afferent firing rates. To capture this we introduce a mean-field model of synaptic saturation in which we interpret the term h z in Equation (10) as the number of active receptor groups over the whole neuron. Each step in h nmda , following a number of spikes S nmda (t), activates that number of receptor groups, which decays with a time constant τ nmda . To introduce saturation, we bound the size of the step by the proportion of available groups. Together, these concepts give the saturating NMDA model. As well as introducing this saturation of the NMDA synapses, we also removed the 1/τ s scaling of post-synaptic current amplitude used in Humphries et al. (2009a). This allowed the model synaptic conductances to be the same order of magnitude as their experimental counterparts. Consequently, we re-tuned g ampa by fitting the input-output functions of the Moyer et al. (2007) 189-compartment projection neuron model, following the protocol in Humphries et al. (2009a). We obtained equally good fits to those found previously with a value of g ampa = 0.4 (results not shown).
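A sketch of the saturating activation variable described above is given below; the time constant and the maximum number of receptor groups are illustrative placeholders.

```python
def update_nmda_activation(h, spikes_in, dt, tau_nmda=160.0, h_max=100.0):
    """One time step of the saturating NMDA activation variable.

    h         : current number of active receptor groups
    spikes_in : number of presynaptic spikes arriving this step (S_nmda)
    dt        : time step (ms)
    The incoming step is scaled by the proportion of available groups, so h
    can never exceed h_max; h then decays with time constant tau_nmda.
    Both tau_nmda and h_max are placeholder values.
    """
    h = h + spikes_in * (1.0 - h / h_max)   # bounded step: saturation
    h = h - dt * h / tau_nmda               # exponential decay
    return max(h, 0.0)
```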
Dopaminergic modulation of synaptic input
Following the projection neuron models in Humphries et al. (2009a), we add D1 receptor modulation of NMDA receptor evoked EPSPs, and D2 receptor modulation of AMPA receptor evoked EPSPs, through linear scalings in which β 1 and β 2 are scaling coefficients determining the relationship between dopamine receptor occupancy and the effect magnitude (Table 3). Due to the addition of saturating NMDA synapses, we also re-tuned these parameter values by fitting the input-output functions of the Moyer et al. (2007) 189-compartment projection neuron model under D1 and D2 receptor modulation of synaptic inputs, following the protocol in Humphries et al. (2009a).
Finally, following the model in Humphries et al. (2009b), we add D2 receptor modulation of the GABAergic input to FSIs through an analogous linear factor.
Gap junctions
A gap junction between FSIs i and j is modeled as a compartment with voltage v* ij , which has first-order dynamics with a time constant τ for voltage decay and is driven by the membrane potentials v i and v j of the FSI pair. The current introduced by that compartment into each FSI of the pair is then set by the effective conductance g of the gap junction. The total gap junction input I gap to a FSI is then the sum over all contributions I* gap .
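The sketch below uses the common instantaneous-resistor simplification for the gap-junction current; it omits the intermediate compartment voltage v* ij of the full model, so it is an approximation rather than the authors' exact formulation, and the conductance value is a placeholder.

```python
def gap_junction_currents(v, pairs, g=0.15):
    """Sum gap-junction currents for each FSI.

    v     : list of FSI membrane potentials (mV), indexed by FSI number
    pairs : list of (i, j) index pairs of electrically coupled FSIs
    g     : effective gap-junction conductance (placeholder value)
    Returns a dict of total gap-junction input I_gap per FSI. The
    instantaneous form I = g * (v_j - v_i) neglects the compartment
    dynamics of the full model.
    """
    I_gap = {i: 0.0 for pair in pairs for i in pair}
    for i, j in pairs:
        I_gap[i] += g * (v[j] - v[i])
        I_gap[j] += g * (v[i] - v[j])
    return I_gap
```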
STRIATUM NETWORK MODEL
Our model captures the connections within the GABAergic microcircuit in striatum, illustrated in Figure 1. We simulated a large-scale model representing a three-dimensional cuboid of the striatum in the adult rat at one-to-one scale, containing every projection neuron and fast-spiking interneuron present in the biological tissue. We used a density of 89,000 projection neurons per mm 3 (Oorschot, 1996) and a FSI density of 1% [see Humphries et al. (2010) for discussion]. We assumed projection neurons were evenly split between D1 and D2 receptor dominant types, and without any spatial bias. Hence we randomly assigned half of the projection neurons to be D1-type and half to be D2-type.
In the Results we predominantly report the results of simulations using a cube of 300 μm on each side, giving 2292 projection neurons and 23 FSIs. Other sizes are noted explicitly where used.
FIGURE 1 | GABAergic striatal microcircuit. Input to the striatum comes from glutamatergic (GLU) fibers originating in the cortex, thalamus, hippocampal formation and amygdala, and dopaminergic (DA) fibers from brainstem dopaminergic neurons. The projection neurons (SPNs) are interconnected via local collaterals of their axons projecting to other nuclei of the basal ganglia. The fast-spiking interneurons (FSIs) can form dendro-dendritic gap junctions between them and are also connected by standard axo-dendritic synapses. All these intra-striatal axo-dendritic connections are GABAergic and hence inhibitory.

To connect the neurons we used two different models. In the physical model we used distance-dependent functions for the probability of connection between each element of the microcircuit. These functions were derived from the overlap of dendritic and axonal arbors, and are given in Humphries et al. (2010) for each connection type in the microcircuit. In the random model we ignored distance, and simply made connections to each neuron at random until the correct number of incoming connections of each type was made. The target numbers of connections were derived from the mean values obtained for the central neurons of the three-dimensional connectivity model in Humphries et al. (2010), taken from column 1 of Table 5 in that paper: SPNs → 1 SPN: 728; FSIs → 1 SPN: 30.6; FSIs → 1 FSI: 12.8; FSI gap junctions per FSI: 0.65.
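The random wiring scheme can be sketched as follows; the in-degree targets are those quoted above, while the use of a Poisson draw to handle the non-integer means, and the tolerance of duplicate or self-connections, are illustrative simplifications.

```python
import numpy as np

RNG = np.random.default_rng(0)

def random_in_degree_wiring(n_targets, n_sources, mean_in_degree):
    """Return one array of source indices per target neuron.

    Each target receives a number of incoming connections drawn from a
    Poisson distribution with the quoted mean (a choice made here to handle
    non-integer means such as 30.6); sources are sampled uniformly, and
    duplicate or self-connections are not excluded in this sketch.
    """
    wiring = []
    for _ in range(n_targets):
        k = RNG.poisson(mean_in_degree)
        wiring.append(RNG.integers(0, n_sources, size=k))
    return wiring

n_spn, n_fsi = 2292, 23
spn_to_spn = random_in_degree_wiring(n_spn, n_spn, 728)    # SPNs -> 1 SPN
fsi_to_spn = random_in_degree_wiring(n_spn, n_fsi, 30.6)   # FSIs -> 1 SPN
fsi_to_fsi = random_in_degree_wiring(n_fsi, n_fsi, 12.8)   # FSIs -> 1 FSI
```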
SELECTION COMPETITIONS
Cortical input to the model was designed to emulate the response selection component in a general two-choice task, where a (possibly noisy) stimulus taking one of two values is observed over time and a choice made between the two corresponding responses. In such a task, we propose that the two responses are made salient by the onset of each trial and then, after a perceptual decision is made about the stimulus value, the corresponding response increases in salience. This generic setup was inspired by the experimental procedures of Beste et al. (2008), in which participants were asked to distinguish between short (200 ms) and long (400 ms) auditory tones, using a distraction paradigm. Inputs followed a ramping trajectory to simulate evidence accumulation and increasing decision confidence (Asaad et al., 2000). We previously showed that transient selection can be seen in response to stepped cortical inputs (Tomkins et al., 2012).
The striatum model was divided into three populations: two physically close SPN populations representing the two competing responses, which we refer to throughout as channels, and the remaining background neurons given a constant input. Neurons were randomly divided between the two channels, with 40% of the neurons in each of channels 1 and 2, and the remaining 20% of cells labeled "background" neurons.
The input protocol is illustrated in Figure 2A, and Figure 2B shows an example response of the entire network to this protocol. Each response population received a priming input at a background rate for 1500 ms, causing them to reach a steady-state of firing activity. At 1500 ms, channel 1 (black) received a ramping input over 50 ms, raising the salience toward a new steady-state, at which point it became the most salient cortical input to the striatum. During the 50 ms ramping time, channel 2 also received a ramping input, matching that of channel 1 for 25 ms. Following this, the signal to channel 2 decreased back to the background rate, describing the evidence-accumulation trajectory of an out-competed action.
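The timing of this input protocol can be sketched as a pair of rate profiles. The baseline and step rates below are placeholders (they are varied in the Results), and the time taken for channel 2 to decay back to baseline is a choice made here, as it is not specified above.

```python
import numpy as np

def channel_input_rates(dt=1.0, t_end=2500.0, baseline=5.0, step=1.0,
                        t_step=1500.0, ramp_ms=50.0):
    """Input firing-rate profiles (spikes/s) for the two channels.

    Channel 1 ramps from baseline to baseline + step over ramp_ms, starting
    at t_step. Channel 2 follows the same ramp for the first half of the
    ramp window and then returns to baseline (the decay duration of
    ramp_ms/2 is a placeholder choice). Baseline and step are placeholders.
    """
    t = np.arange(0.0, t_end, dt)
    ch1 = np.full_like(t, baseline)
    ch2 = np.full_like(t, baseline)
    ramp = np.clip((t - t_step) / ramp_ms, 0.0, 1.0)
    ch1 += step * ramp
    rise2 = np.clip((t - t_step) / ramp_ms, 0.0, 0.5)
    decay2 = np.clip((t - (t_step + ramp_ms / 2.0)) / (ramp_ms / 2.0), 0.0, 0.5)
    ch2 += step * (rise2 - decay2)   # rises for 25 ms, then returns to baseline
    return t, ch1, ch2
```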
Rates were specified for each cortical spike train input to each projection neuron and FSI model. Both neuron models received the equivalent of 250 input spike trains [see Humphries et al. (2009b) for details].
We measured how the striatal microcircuit performed channel-wise signal selection on the cortical inputs, using this simple protocol, inspired by the auditory decision task performed in Beste et al. (2008). However, due to the abstract nature of the input protocol, applied to a generic simulation of the striatal microcircuit, the selection measured in these results could apply to any channel-wise decision task throughout the striatum, and is not limited to auditory processing.
METRICS FOR SELECTION
We define "selectivity" in the striatum as the ability to robustly distinguish competing signals. The striatum demonstrates two complementary modes of selectivity, which we measure with different metrics. These selection metrics are applied to the output of each channel, which is characterized by a zero-phase filtered mean firing rate.
Transient selectivity
Given a competitive split in cortical input, we see a temporary boosting of the most-salient signal, accompanied by a temporary suppression of the least-salient competing signal (Figure 2C). This transient phenomenon boosts the difference in salience between the two competing signals. We identify two key quantities: (1) S(1,2), the maximum difference between the two signals during the transient peaks; (2) S 1 , S 2 , the mean stable activity level of each channel after the transient period dissipates. The total transient selectivity, bounded between 0 and 1, is defined in terms of S(1,2), the maximum difference between the firing rates of Channel 1 and Channel 2 over the transient window (t = 1500-2000 ms). This enables the measure to allow for cases in which the largest perturbations from the mean are not temporally coincident, either due to reliable intrinsic dynamic properties of the network, or statistical fluctuations therein.
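The ingredients of this measure can be computed as in the sketch below. Because the exact normalisation onto [0, 1] was lost in extraction, the function returns only the raw quantities; any downstream combination of them is an assumption.

```python
import numpy as np

def transient_ingredients(rate1, rate2, t, window=(1500.0, 2000.0),
                          post=(2000.0, 2500.0)):
    """Ingredients of the transient-selectivity measure.

    rate1, rate2 : filtered mean firing rates of channels 1 and 2 (spikes/s)
    t            : time axis (ms)
    Returns the peak divergence S12 over the transient window (computed as
    the peak of channel 1 minus the trough of channel 2, so that the two
    extremes need not be temporally coincident) and the post-transient mean
    rates S1, S2. The final mapping onto [0, 1] is left to the caller.
    """
    in_win = (t >= window[0]) & (t < window[1])
    in_post = (t >= post[0]) & (t < post[1])
    S12 = rate1[in_win].max() - rate2[in_win].min()
    S1, S2 = rate1[in_post].mean(), rate2[in_post].mean()
    return S12, S1, S2
```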
Steady-state selectivity
The striatum network can exhibit signal suppression on its least-salient channel due to sustained inhibition by the most salient channel. Steady-state selectivity is measured on the least-salient channel, as the percentage reduction in the mean channel firing rate after the rise in salience of the most-salient signal. An example of steady-state selectivity in the random network can be seen in Figure 2D. We define S P as the stable firing rate of the primed channel 2 before the increase in competition, and from this we calculate the steady-state selectivity (SS) as the reduction of the post-step mean firing rate expressed as a fraction of S P .
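A direct implementation of that percentage-reduction reading is sketched below; the settling window used to exclude the transient period is a placeholder choice, and the exact published formula was lost in extraction.

```python
def steady_state_selectivity(rate_losing, t, t_step=1500.0, settle=200.0):
    """Steady-state selectivity: fractional suppression of the losing channel.

    rate_losing : filtered mean firing rate of the least-salient channel
    t           : time axis (ms)
    S_P is the stable pre-step rate (averaged over the 500 ms before the
    step); the post-step rate is averaged after allowing `settle` ms for
    transients to die away. SS = (S_P - S_post) / S_P, i.e. the percentage
    reduction described in the text, expressed as a fraction.
    """
    pre = (t >= t_step - 500.0) & (t < t_step)
    post = (t >= t_step + settle)
    S_P = rate_losing[pre].mean()
    S_post = rate_losing[post].mean()
    return (S_P - S_post) / S_P
```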
BASAL GANGLIA-THALAMOCORTICAL LOOP MODEL OF TRANSIENT SELECTION
To study the contribution of the transient striatal dynamics to the selection mechanism of the whole basal ganglia, we used the population-level implementation of our basal-ganglia thalamocortical loop model (Humphries and Gurney, 2002). Figure 3 schematically illustrates the loop model, and the connectivity of the response-representing populations.
The average activity a of all neurons comprising a channel's population changes according to a first-order equation with time constant τ, driven by the summed, weighted input I. We used τ = 10 ms throughout. The normalized firing rate y of the unit is given by a piecewise linear output function with threshold θ. The following describes net input I i and output y i for the ith channel of each structure, with n channels in total; the full model is given by the corresponding equations of Humphries and Gurney (2002). Net input was computed from the outputs of the other structures, except for the driving input c i to channel i of cortex. The striatum was divided into two populations, one of projection neurons with the D1-type dopamine receptor, and one of projection neurons with the D2-type dopamine receptor. Many converging lines of evidence from electrophysiological and anatomical studies support this functional split into D1- and D2-dominant projection neurons and, further, that the D1-dominant neurons project to SNr, and the D2-dominant neurons project to GP (Gerfen et al., 1990; Surmeier et al., 2007; Matamales et al., 2009).

FIGURE 3 | Basal ganglia thalamo-cortical loop model. The main circuit (right) embeds the basal ganglia into a thalamo-cortical feedback loop. Each nucleus contains multiple response-representing populations. Within the basal ganglia, the circuit can be decomposed into an off-center, on-surround network (left): three populations are shown, with example activity levels in the bar charts to illustrate the relative contributions of the nuclei. Note that, for clarity, full connectivity is only shown for the second population. Briefly, the selection mechanism works as follows. Constant inhibitory output from substantia nigra pars reticulata (SNr) provides an "off" signal to its widespread targets in the thalamus and brainstem. Cortical inputs representing competing saliences are organized in separate populations, which project to corresponding populations in striatum and subthalamic nucleus (STN). The balance of focussed inhibition from striatum and diffuse excitation from STN results in the most salient input suppressing the inhibitory output from the corresponding SNr population, signaling "on" to that SNr population's targets. Tonic dopamine levels in the striatum set the ease with which the channels are selected, and subsequently switched between following further salient inputs. For quantitative demonstrations of this model see Gurney et al. (2001b) and Humphries and Gurney (2002).
In line with the projection neuron model described above, the model included opposite effects of activating D1 and D2 receptors on striatal projection neuron activity: D1 activation facilitated cortical efficacy at the input, while D2 activation attenuated this efficacy (Moyer et al., 2007;Humphries et al., 2009a). The mechanism for this mirrored that of the spiking projection neuron model in using simple linear factors. Thus, if the relative activation of D1 and D2 receptors by tonic dopamine are λ 1 , λ 2 ∈ [0, 1], then the increase in efficacy due to D1 receptor activation was given by (1 + λ 1 ); the decrease in efficacy due to D2 receptor activation was given by (1 − λ 2 ). Throughout we set λ 1 = λ 2 = 0.2, simulating tonic levels of dopamine.
The negative thresholds ensured that STN, GP, and SNr have spontaneous tonic output (Humphries et al., 2006). We simplified the model here compared to Humphries and Gurney (2002) by delivering input only to cortex, to represent the salience-driven response selection, rather than to cortex, striatum and STN; both models gave qualitatively the same results. We used exponential Euler to numerically solve this system, with a time-step of 1 ms.
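One population unit of this rate-coded model can be sketched as follows. The leaky-integrator dynamics and the ramp-shaped output clipped at 1 are the standard forms used in this model family and are assumed here, since the equations themselves were lost in extraction; thresholds, weights and the inter-nucleus wiring are omitted.

```python
import math

def unit_step(a, I, theta, tau=10.0, dt=1.0):
    """One exponential-Euler update of a population unit.

    a     : current average activation of the population
    I     : summed, weighted input this step
    theta : output threshold of the unit
    Returns the new activation and the normalised output y in [0, 1].
    The leaky-integrator dynamics da/dt = (I - a)/tau and the clipped
    piecewise-linear output are assumed standard forms, not quoted text.
    """
    a = I + (a - I) * math.exp(-dt / tau)   # exact solution over one step
    y = min(1.0, max(0.0, a - theta))       # piecewise-linear output
    return a, y

# Example: D1/D2 modulation of cortical input to the striatal populations,
# using the linear factors and tonic-dopamine values given above.
lam1 = lam2 = 0.2
cortical = 0.5
I_d1 = (1.0 + lam1) * cortical   # facilitated efficacy at the D1 population
I_d2 = (1.0 - lam2) * cortical   # attenuated efficacy at the D2 population
```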
We used n = 8 channels in total, with two of those channels (4 and 5) receiving non-zero inputs, mimicking the input protocol used for the striatal network model, which is designed to abstractly simulate the two-choice reaction-time task performed in Beste et al. (2008). Baseline inputs c 4 = c 5 = 0.3 were delivered at simulation onset. A step in input c 5 occurred between 100 and 200 time-steps: a small step of c 5 = 0.5 or a large step of c 5 = 0.7. The ability of the model to select was assessed during this step period. As in prior models (Berns and Sejnowski, 1998; Gurney et al., 2001b; Humphries and Gurney, 2002; Humphries et al., 2006), selection was assessed by observing the change in activity on each SNr channel, as this output provides the tonic inhibition of thalamic and brainstem structures and is thought to gate the execution of actions. Here, successful selection of a channel was defined as the SNr output falling to zero.
Modeling transient selection in the rate-coded model
We mimicked the ability of the striatum microcircuit to produce transient phenomena using an input injection into the striatum of the rate coded model. At t = 100 we injected external inputs into each striatal channel in the model, forcing a transient increase or decrease as appropriate in the corresponding channels. Transient sizes were extracted from the striatal microcircuit traces, and reproduced in the rate coded model. Individual transients were calculated as the percentage change in the firing rate of the circuit during the transient period compared to the stable firing rate achieved post-transient. This allowed us to gauge the role of the complex striatal dynamics, generated by our microcircuit model and responsible for the transient selection mechanism, on the selection properties of the entire basal ganglia-cortex loop.
RESULTS
In what follows we discuss the simulation results of our model and interpret them as potential mechanisms explaining the findings of Beste et al. (2008). We discuss the two types of potential selection mechanisms that we have termed transient and steady-state.
Transient selection emerges from the striatal microcircuit
We sought insight into the potential for competition within the striatum by examining the dynamics of our three-dimensional network model. We first explored the effect on striatal output of competing inputs to two projection neuron populations. These inputs were intended to emulate the changes in cortical signals representing two alternative responses in a generic two-choice decision-making task. Figure 4A shows the mean firing rate of each channel from the same example simulation. After the divergence in inputs at t = 1.5 s, a transient increase of the firing rate is elicited in channel 1, the most salient population, and a transient suppression of the firing rate is elicited in channel 2. This transient suppression occurs despite no change in the input to channel 2. Moreover, this population rapidly returns (∼100 ms) to its pre-step firing rate. Consequently, we termed this phenomenon transient selection.
We found that the elicited transient selection was robust over a wide range of choices for the baseline input rate and the signal difference between the two channel inputs after the signal divergence. Figure 4B shows that transient selection could be robustly elicited for any step size over 0.5 Hz when the baseline input rate exceeded ∼4 Hz.
Transient selection is due to both circuit and intrinsic membrane properties
We further investigated the mechanisms underlying the positive and negative transient changes in population activity. We found that the positive transient was produced by single neuron dynamics, whereas the negative transient was due to network connectivity. This can be seen in Figures 5A,B. The histogram (Figure 5C) shows that the neuron had a clear transient increase in firing probability immediately after the step of input. Running the same test on a model of a cortical regular-spiking pyramidal neuron, with input scaled to produce approximately the same steady-state rates, showed no such transient increase in firing probability after a step in input (Figure 5D). Thus the transient increase in population activity observed in a single trial of the network is a statistical phenomenon of synchronous spiking of many projection neurons, and seemingly dependent upon properties particular to the striatal projection neuron. We sought to elucidate these properties by injecting sequential current steps directly into the projection neuron model and observing the behavior of the membrane voltage v and slow current u. Figure 5E shows that a step in current applied to an already depolarized membrane triggers a rapid double spike, followed by slower regular spiking. Figure 5F plots the corresponding trajectory of the slow current u: the initial depolarizing injection makes the slow current u increasingly negative, thus slowly charging the membrane potential v [Figure 5E; see Equation (1)]. The subsequent step of injected current increases the membrane potential rapidly, and the contribution of the large, negative u ensures a rapid pair of spikes time-locked to the current step. However, once spiking has been initiated, the equilibrium value of u is less negative than immediately before the current step. Consequently, the smaller contribution of the slow current u ensures a comparatively slow spike rate in the steady-state.
To show that the slow current u is critical, we examined the dependence of this spiking "adaptation" on the parameters of the slow current. We repeated the sequential-step current injection protocol for a range of step sizes, and measured the adapting response as f ratio = F first /F last , the ratio of the firing rates given by the first and last interspike intervals after the current step. A value of f ratio > 1 thus indicates an adaptation. We found that the adaptation response appeared with a second current step above ∼50 pA (blue curves in Figures 5G,H). Figure 5G shows that the adaptation response disappeared if we reduced the effective time constant of the slow current (increased a), allowing the slow current to recover faster after spiking. Figure 5H shows that the adaptation response also disappeared if we reduced the gain b of the slow current. The transient phenomenon thus depends critically on the slow current u.
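The adaptation measure can be computed from the spike times following the second current step, as sketched below; the code uses the instantaneous rates given by the first and last interspike intervals, a convention assumed here so that adaptation yields f ratio > 1 as stated above.

```python
def f_ratio(spike_times, t_step):
    """Adaptation ratio after a current step: F_first / F_last.

    spike_times : sorted spike times (ms)
    t_step      : time of the second current step (ms)
    F_first and F_last are taken as the instantaneous firing rates given by
    the first and last interspike intervals after the step, so lengthening
    intervals (adaptation) yield a ratio > 1. Returns None if fewer than
    three post-step spikes are available.
    """
    post = [s for s in spike_times if s >= t_step]
    if len(post) < 3:
        return None
    first_isi = post[1] - post[0]
    last_isi = post[-1] - post[-2]
    return (1.0 / first_isi) / (1.0 / last_isi)   # equals last_isi / first_isi
```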
As lesioning only the connections between the projection neurons could abolish the negative transient (Figure 5A), this suggested it arose from a network effect in which the neurons contributing to the positive transient inhibited their targets. To test this observation, we simulated the model with lesioned projection-neuron collaterals for a range of baseline input firing rates and step sizes (protocol in Figure 2A) and computed the size of the negative transient that resulted. Figure 5I shows that the negative transient was indeed abolished for a wide range of values for the input firing rates. However, a sufficiently large baseline firing rate and step in firing rate could still result in a negative transient (upper-right corner of Figure 5I). Thus, it seems that sufficient cortical drive of the FSI population (which inhibits the projection neurons) also contributes to the negative transient in projection neuron population activity.
Transient selection is sufficient to alter decision making performance
Though the previous result demonstrates the existence and origin of transient selection within the striatum, it is not sufficient to show a causative effect of transient selection on decision-making.
To address this issue, we asked whether such transient signals in the striatum could enhance the selection of input signals by the basal ganglia circuit. Here we consider selection to mean that the output of a substantia nigra pars reticulata (SNr) population falls from its tonic rate to zero. In particular, we hypothesized that the transient signals in striatum would be amplified in the complete basal-ganglia-thalamo-cortical loop, and thus directly influence the output of the basal ganglia.
To test this, we used our rate-coded model of population activity in the basal ganglia-thalamocortical loop (Humphries and Gurney, 2002). The model received inputs to two populations of cortico-striatal neurons (Figure 6A), mimicking the protocol used in our full-scale striatum model. An example of the subsequent SNr outputs is illustrated in Figure 6B. At the time of the step in input to one population, we emulated the subsequent transient signals observed in our full-scale model by brief injections of further increased input to that striatal population and decreased input to the other. These correspondingly produced small, brief positive and negative transients in the output of those striatal populations, for both D1- and D2-type projection neurons (Figures 6C,D). Note that the subthalamic nucleus populations also received the cortical input signals, but not the transient signals.
We found that a small positive transient elicited in the striatal population was sufficient to change the speed and persistence of selection (Figures 6E-H). Figures 6E,F show that signal selection was maintained for longer with increasing transient sizes. Correspondingly, Figures 6G,H show that increasing the size of transients injected into the model striatum decreased the time to selection. These changes were found irrespective of the size of input step, or of the closed-loop gain g of the positive thalamocortical feedback loop (Chambers et al., 2011); when g = 1, this loop is a perfect integrator, while with g = 2, there is an amplifying feedback loop. Thus, transient signals in the striatum are sufficient to modulate selection by the basal ganglia.
STEADY-STATE SELECTION BY THE STRIATUM
Prior debates about selection in the striatum have focussed on stable, winner-take-all modes of computation (Wickens, 1997; Plenz, 2003). In order to compare transient selection with this more common form of selection computation, we sought to understand whether our striatal model could show stable, winner-takes-all-like dynamics; here we refer to this as "steady-state" selection, in contrast to "transient" selection, as the competition between inputs causes persistent changes to output firing rates.
Steady-state selection in a randomly-connected model
Neurally-inspired models of winner-take-all dynamics are often based on fully-connected or dense randomly-connected networks (Hartline and Ratliff, 1958; Alexander and Wickens, 1993; Fukai and Tanaka, 1997; Mao and Massaquoi, 2007; Yim et al., 2011). We thus simulated our striatal model with random connectivity, in which each neuron type received, on average, the same number of connections, and the connections were made by choosing source neurons at random from across the three-dimensional cuboid. The target number of connections was based on the expected number of connections of a projection neuron and FSI in the center of a 1 mm 3 network, according to the computational anatomical estimates of Humphries et al. (2010) (see Materials and Methods). In this way, the randomly-connected model was more densely connected relative to the distance-dependent model. Thus, while closer to the topology usually studied for steady-state selection, the randomly-connected model still retained connection statistics consistent with the estimates obtained in Humphries et al. (2010). We tested the randomly-connected model with the same stepped input protocol as the physically-connected model (Figure 2A). Figure 7A shows an example of the mean population firing rates in the randomly-connected striatum model, with evident steady-state selection: the population receiving the stepped cortical input increases its firing rate, and the other population correspondingly decreases its firing rate despite receiving the same input throughout. We found that the magnitude of steady-state selection was dependent on the size of the baseline firing rate and input step. Figure 7B shows that the most effective steady-state selection occurred for low baseline rates and large input steps, approaching a winner-takes-all-like response of nearly complete suppression (∼80%) of the losing population's activity. Figure 7C shows that lesioning the connections between projection neurons prevents steady-state selection. Figure 7D shows that lesioning the FSI input to the projection neurons reduces but does not eliminate the steady-state selection, while also reinstating a transient period. This suggests that mutual inter-channel inhibition by the projection neuron populations is responsible for the suppression effect seen in both the random and the larger physical networks.
Distance-dependent connectivity can support steady-state selection
To assess if such steady-state selection required homogeneous, random connectivity of the kind described above, we checked whether such selection could be found in the physical model of connectivity. Again using the same stepped input protocol, we simulated physical networks up to 1 mm 3 , in order to increase the density of connectivity within the center of the network, which scales with the number of neurons in the model (Figure 8B). Figure 8A shows that steady-state selection could be observed for distance-dependent connectivity, given a sufficiently large model (here 1 mm 3 ). We found that the magnitude of steady-state selection increased monotonically with increasing network size (Figure 8D), approaching the steady-state selectivity seen in the random model. Figures 8B,C show that in the physical model, as the number of neurons increases with network size, so does the average number of connections each projection neuron receives. By contrast, the random model always has the same density of connections. The physical model's correspondence between the number of connections to a projection neuron and the effectiveness of steady-state selection suggests that such selection is dependent on the density of connections between projection neurons.
The model further suggests that it is only the increased density of connections that is key, and not an increase in recurrent connections between projection neurons. Figure 8E shows the absolute number of recurrent connections in the physical and random network configurations. Note that the number of bi-directional connections in the random network drops off as a function of network size, because each neuron receives a fixed number of connections regardless of the network size. By contrast, we see a small rise in the number of bi-directional connections in the physical model. However, Figure 8F shows that in both random and physical networks the proportion of connections that are bi-directional falls with increasing network size. Thus, the increased effectiveness of steady-state selection is likely due to increased absolute connection density and not increased recurrent connections.
COMPARING SELECTION MECHANISMS: PARADOXICAL SELECTION ENHANCEMENT IN HUNTINGTON'S DISEASE
Having established that two contrasting forms of selection can be supported by the striatal circuit, depending on the type and density of connectivity, we then sought insight into how the two forms of selection could be distinguished. In particular, we hypothesized that they would make different predictions about how changes to the striatum would alter response selection. In order to test this hypothesis, we sought an experimental data-set that could provide a basis for testing our predictions. Beste et al. (2008) have recently shown a rare example of paradoxical cognitive enhancement in a neurological disorder. They reported that manifest Huntington's disease patients had faster and less error-prone response selection on a simple two-choice auditory task than controls or pre-manifest Huntington's disease patients. As Huntington's disease is primarily characterized by widespread loss of striatal projection neurons [FSI populations have been shown to be more resistant to HD-modifications (Ghiglieri et al., 2012)], and by increased sensitivity of NMDA receptors on striatal projection neurons (Fan and Raymond, 2007), these results suggest the hypothesis that one or both of these changes to the striatum lead to enhanced selection; we therefore examine excitotoxicity-related changes as a possible candidate mechanism for the paradoxical improvements.
We thus simulated both transient and steady-state selection under Huntington's-like changes to the striatal model, and searched for evidence of enhanced selection. We emulated increased NMDA receptor sensitivity by increasing the conductance of the NMDA synapse (we report this as the ratio of the NMDA:AMPA conductances), and separately emulated the cell loss by randomly removing a specified percentage of projection neurons. We did this to explore a wide range of plausible simulated Huntington's disease conditions. Across both changes, we mapped the change in transient and steady-state selection in response to the same input protocol (baseline 5 Hz, step 1 Hz).
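The two-parameter scan can be organised as in the sketch below; run_selection_trial stands in for a full network simulation and is a hypothetical placeholder, as are the grid bounds.

```python
import numpy as np

def huntingtons_sweep(run_selection_trial,
                      nmda_ampa_ratios=np.linspace(1.0, 4.0, 7),
                      cell_loss_fracs=np.linspace(0.0, 0.5, 6)):
    """Map selectivity over simulated Huntington's-like conditions.

    run_selection_trial(nmda_ampa_ratio, cell_loss) is assumed to build the
    network with the NMDA conductance scaled to the requested NMDA:AMPA
    ratio, randomly delete the requested fraction of projection neurons,
    run the standard input protocol (baseline 5 Hz, step 1 Hz) and return
    a selectivity score. The grid bounds here are placeholders.
    """
    scores = np.zeros((len(nmda_ampa_ratios), len(cell_loss_fracs)))
    for i, ratio in enumerate(nmda_ampa_ratios):
        for j, loss in enumerate(cell_loss_fracs):
            scores[i, j] = run_selection_trial(ratio, loss)
    return scores
```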
Steady-state selection consistently degrades in simulated Huntington's disease
To assess the impact of Huntington's-like changes on steady-state selection, we used the randomly-connected model to ensure that the suppression of the losing population was sufficient to be detectably modulated by the Huntington's-like changes. Figure 9 shows that steady-state selection was uniformly diminished by all Huntington's-like changes, whether in isolation or combination.
Transient selection enhancement in simulated Huntington's disease
We assessed the impact of Huntington's-like changes on transient selection using the same physical model network as that used for Figure 4. Figure 10 shows that transient selection could be diminished by the loss of projection neurons alone, yet could be enhanced by the simultaneous increase in NMDA conductance. Thus the model predicts a region of Huntington's-like conditions where the deleterious effect of cell loss can be more than compensated by the increased sensitivity of NMDA receptors. Figure 10A shows an example improvement in transient selectivity under high cell atrophy and high excitability, whereas Figure 10B shows the "excitotoxicity landscape"; the condition highlighted in Figure 10C corresponds to dramatic changes in the striatal output. Further, Figure 6 shows that even small modifications in the transient size in the striatum will modulate the signal selection speed in the wider basal ganglia networks.
DISCUSSION
We found a novel form of transient selection supported by the striatal network. This emerged from our three-dimensional network of sparse, weak feedback connectivity between the striatal projection neurons and dense, strong feedforward inputs from the fast-spiking interneurons. We observed that rapidly increasing the ongoing input to one of two competing populations of projection neurons caused a transient peak of activity in that population and a synchronous transient dip in activity of the other. The dip lasted around 100 ms before the activity returned to its pre-step level, thus showing no steady-state competitive effect between the two populations. Using a population-level model of the complete basal ganglia-thalamo-cortical loop, we showed that transient selection in the striatum was sufficient to enhance selection by the entire circuit (as determined by suppression of SNr output). The presence of transient selection both increased the speed at which the whole circuit resolved a competition between salient inputs, and increased the circuit's ability to persist with the selected input. Both effects were observed for either perfect-integrator or amplifying feedback in the thalamo-cortical loop.
The origin of the transient selection had two components. The positive transient in the population activity was driven by single neuron adaptation. We found that a further step in input to an already depolarized projection neuron caused a spike followed by rapid decrease in spiking probability. This implies that the positive transient observed in the population activity was a statistical effect: that, across a whole population of projection neurons, a sub-set of neurons were sufficiently depolarized at the time of stepped input to show this adaptation effect in synchrony, and thus cause a transient peak in population activity.
The negative transient in the population activity was a subsequent network effect of the positive transient: the synchronized spiking of the neurons participating in the positive transient was sufficient to drive a dip in activity in their target neurons in the other population.
TWO FORMS OF SELECTION COMPETITION
Having established the existence and mechanics of the transient selection phenomenon, we sought to understand the conditions under which our striatal model could also support a steady-state competition effect, akin to classical winner-takes-all (Hartline and Ratliff, 1958;Fukai and Tanaka, 1997;Mao and Massaquoi, 2007). Such steady-state competition could plausibly arise in striatum as each projection neuron receives sufficient weak synapses from other projection neurons to continuously modulate its ongoing activity (Guzman et al., 2003;Humphries et al., 2010;Chuhma et al., 2011).
We found that increasing the number of projection neuron synapses gave rise to steady-state competition, where the stable increase in activity in one population caused a stable decrease in activity of the other population. These results are consistent with Yim et al. (2011), who reported a weakly-competitive effect between two populations of neurons in a randomly-connected inhibitory network of spiking neurons, and showed that weak correlation between inputs to the network could enhance this effect. We advanced this result by showing that such steady-state competition could arise in both distance-dependent and randomly-connected networks, given either that we increased the physical size of our three-dimensional striatal network, and thus increased the density of connections, or randomly connected the network based on the average connections of the most densely connected projection neuron.
Our models thus predict that the form of selection competition is dependent on the density of connections between projection neurons. Whether the striatum is ever as sparsely connected as in our distance-dependent model, or ever as densely connected as in the homogeneous random model, is an open question. It is possible that both forms of selection exist depending on local inhomogeneities in striatal tissue. We know that many aspects of the striatum show gradients of density across the network, including the dorsal-ventral gradient of interneuron populations (Kubota and Kawaguchi, 1993) and the rostro-caudal gradient of FSI gap junctions (Fukuda, 2009). Correspondingly, it is plausible that there exists a gradient of projection neuron connection density.
We also note that the recent report by Oorschot et al. (2013) of projection neuron collaterals making synapses on to the somas of other projection neurons can only enhance both forms of competition. Such GABAergic somatic synapses are likely to shunt all dendritic input to the soma, thus providing powerful feedback inhibition. For transient selection, this could result in a larger negative transient; for steady-state selection, this could result in more depressed activity in the losing population. Open questions here include the relative density of such somatic synapses originating from projection neurons, and whether they have specific functional targets such as specifically occurring between projection neurons in competing populations.
Both forms of striatal selection mechanisms ultimately influence selection mediated by the whole basal ganglia network and expressed via their output nuclei (including SNr). As discussed in the Materials and Methods, this expression is via disinhibition (Chevalier and Deniau, 1990;Berns and Sejnowski, 1998;Redgrave et al., 1999;Gurney et al., 2001a;Humphries et al., 2006); increased activity of a striatal population inhibits the tonic inhibitory output of a SNr population, thus representing the selection of their represented signal (Figure 3). We showed that transient selection in the striatal populations is sufficient to enhance selection by disinhibition from SNr (Figure 6). This occurs because the most salient input causes a transient increase of activity in the corresponding striatal population and consequently transiently decreases the output of the corresponding SNr population. This fall is sufficient to allow activity to grow in the target thalamo-cortical loop, which in turn projects to the original striatal population, further increasing its activity-thus the positive feedback loop amplifies the transient changes in striatum. The effect of steady-state selection in the striatum on the whole basal ganglia is more straightforward. The long-lasting drop in output of all losing striatal populations comparatively reduces their inhibition of the corresponding SNr populations. Consequently, the fall in output of the SNr population representing the winning signal is enhanced compared to its competitors.
EXPERIMENTAL PREDICTIONS OF TRANSIENT SELECTION
Direct experimental observation of transient selection is challenging. The positive transient in population activity could only be observed on a single trial given sufficient simultaneous sampling of neurons within that population, a situation unlikely to occur with current recording technology. However, we showed that the basic mechanism underlying the positive transient in the population activity could be observed through sequential steps of current injection into a single neuron model. Thus our model makes a tractable experimental prediction: that there exists a regime of long, sequential steps of current into the projection neuron soma that will elicit a rapid burst of two or more spikes followed by slower regular firing. If such a regime exists, it would provide evidence in favor of the existence of transient selection mechanisms in the striatal network.
TRANSIENT SELECTION ALONE COULD EXPLAIN ENHANCED SELECTION IN HUNTINGTON'S DISEASE
We sought to determine whether transient and steady-state selection could be differentiated by their predictions for how changes to the striatal circuit would affect selection. To this end, we asked if Huntington's-like changes of increased NMDA receptor sensitivity and loss of projection neurons could account for Beste et al. (2008)'s report of enhanced selection by Huntington's disease patients. In terms of our models, we asked if either transient or steady-state selection would improve due to these Huntington's-like changes to the striatum. As one might expect a priori, simply removing projection neurons and thus reducing connectivity between them impaired both types of selection. Increasing NMDA receptor sensitivity also impaired steady-state selection, and thus this form of selection predicted that all Huntington's-like changes impair selection, a result which is inconsistent with the report by Beste et al. (2008). Surprisingly, however, we found that for transient selection, increased NMDA receptor sensitivity could more than compensate for cell loss and actually enhance selection. We also found that transient selectivity was only clearly improved with both high cell degradation and increased excitability, and thus not in pre-symptomatic-like conditions. Thus, alteration of transient selection and not steady-state selection in striatum is consistent with the enhanced performance of symptomatic Huntington's disease patients compared to controls and pre-symptomatic patients. Beste et al. (2008) noted that this enhanced response selection was paradoxical, as Huntington's disease patients are consistently worse than age-matched controls across a range of cognitive decision-making tasks (Knopman and Nissen, 1991; Bamford et al., 1995; Lawrence et al., 1998; Ho et al., 2003). Our models offer two potential explanations for why Huntington's disease-related changes in striatum are usually associated with cognitive impairment but could also lead to paradoxical cognitive enhancement. First, suppose that all regions of striatum engaged by cognitive tasks implement transient selection. Our model shows that there are limited combinations of NMDA receptor sensitivity increase and cell atrophy where transient selection is enhanced compared to the healthy case; for most combinations transient selection is deteriorated compared to the healthy state. Thus, one hypothesis is that there is a continuum of NMDA receptor sensitivity increase and cell atrophy across the striatum, and the Beste et al. (2008) task engaged a region of striatum with enhanced transient selection, whereas most tasks engage regions of the striatum with deteriorated transient selection. Second, suppose instead that different regions of striatum use transient or steady-state selection dependent on the local density of projection neuron connections. Our models show that steady-state selection is always deteriorated by any Huntington's-like change to the striatum. Consequently, this suggests the hypothesis that the Beste et al. (2008) task engaged a region of the striatum using (enhanced) transient selection, whereas most cognitive tasks engage a region of striatum using steady-state selection, and thus are always deteriorated in Huntington's disease patients compared to the healthy state.
A Challenged Sympathetic System Is Associated with Retinal Vascular Calibre in a Black Male Cohort: The SABPA Study
Sympathetic system hyperactivity and depression are related to cardiac remodelling in Black men. We investigated whether sympathetic system hyperactivity and depressive symptoms are related to retinal vascular dysregulation. A total of 76 Black and 83 White men (23–68 years of age) from the SABPA study were included. Depressive symptoms, 24 h pulse pressure (PP), fasting blood and 24-hour urinary catecholamine data were obtained. Retinal vascular calibre was quantified from digital photographs using standardized protocols. Black men demonstrated a higher (p < 0.05) prevalence of hyperpulsatile pressure (PP > 50 mmHg), hypertension (78.9% vs. 48.4%) and depression (34.2% vs. 13.3%) than White men. Despite lower epinephrine levels, epinephrine was associated with arteriolar narrowing and venular widening in the Black men [Adj R2 −0.37 (95% CI: −0.66, −0.09), p=0.013; Adj R2 0.35 (95% CI: 0.13, 0.57), p=0.003]. This might suggest β-adrenergic hyporesponsivity to epinephrine, which was accompanied by hyperpulsatile blood pressure in the Black group. In the White group, depressive symptoms and norepinephrine were associated with retinal arteriolar narrowing. A profile of β-adrenergic hyporesponsivity, indicative of a chronically challenged sympathetic system, was associated with retinal vascular remodelling in Black men. β-adrenergic hyporesponsivity as a result of chronic stress emphasized central control of the brain on the circulatory system irrespective of the vascular bed.
Introduction
South Africa is facing an epidemic of hypertension (HT) and vascular disease but there still is inadequate information on the physiological factors that are contributing to this process [1,2]. Microvascular disease seems to play an important role in the development of HT, arterial stiffness and structural remodelling [3]. Currently, HT is regarded as the most important modifiable risk factor for stroke and major macrovascular cerebral complications, but it may also predispose to more subtle cerebral processes based on, amongst others, the microcirculation [4,5]. Both the ophthalmic artery and the anterior cerebral artery originate from the internal carotid artery and most likely will share common characteristics [6]. Therefore the retinal microvasculature may be an ideal structure to study these abnormalities [7]. Longitudinal studies have shown that an inverse association exists between reduced retinal arteriolar calibre and HT in ageing populations, whilst retinal venular dilation is associated with stroke risk [7,8]. A higher ratio from either wider retinal arteriolar calibre or narrower retinal venular calibre or both is an index of a better retinal vessel profile [9]. Ref. [8] found racial differences in retinal microvascular calibre of various Asian population groups but whether that is also true for Black and White African men is not clear [8]. In a study using Doppler imagery and iontophoresis of acetylcholine and sodium nitroprusside, it was, however, reported that, after correcting for skin resistance in a Black African group, endothelium-independent microvascular function of Black Africans is attenuated compared to that of White Africans [10]. This might be a contributing factor to the ethnic differences in microvascular disease risk in South Africa.
Enhanced peripheral resistance vascular α-adrenergic responses on exposure to a laboratory stressor, i.e. the handgrip test, were shown in Black Africans during urbanisation when compared to their rural counterparts [11]. Thus, overstimulation of the sympathetic nervous system (SNS) and the sympathetic adrenal cortex and medullary stress hormone pathway may explain some of the observed ethnic differences [11][12][13]. Intense emotional stress may induce sympathetic hyperactivity with persistent increases in catecholamine and cortisol levels, which is detrimental to normal physiological processes [13]. However, during chronic stress this initial hyperactivity may be followed by autonomic exhaustion or depression, receptor hyporesponsivity and decreases in catecholamines and cortisol [14][15][16][17][18]. Phenylethanolamine N-methyltransferase (PNMT) is an enzyme found in the adrenal medulla which converts norepinephrine to epinephrine. PNMT is known to be regulated by glucocorticoids synthesised in the adrenal gland [19]. One way in which PNMT expression can be regulated is through corticosterone's positive influence on the maintenance of PNMT mRNA [20]. Chronic depression has been related to attenuated cortisol levels, which will lead to a decrease in the synthesis of epinephrine [21]. These alterations in autonomic function are of importance as they have been associated with both depression and cardiovascular pathology [14,15]. Moreover, chronic psychosocial stress often precedes depression [22] which, in turn, has recently been acknowledged as a risk factor for cardiac remodelling and poor prognosis in patients with coronary heart disease [23]. Indeed, decreased cortisol and catecholamine metabolite responses to a mental stressor were risk factors for the development of vascular diseases in a Black African cohort exhibiting symptoms of depression [24]. There remains no clear-cut or generally accepted model for cortisol responses in depression, as both blunted and increased cortisol activities have previously been noted [21,25]. Blunted cortisol responses were apparent in individuals with depressive symptoms after exposure to the Stroop test [13]. This could imply that the presence of depressive symptoms sensitises the individual to stress and the subsequent development of vascular disease and/or other lifestyle illnesses. Blunted cortisol responses to laboratory and psychosocial stressors have been demonstrated in both clinical and subclinical depression [26,27]. However, it could be speculated that since depression is a constant state of perceived stress, further exposure to a challenging urban environment or psychosocial stress may result in habituation of the neuroendocrine pathways [28].
The 24 h urinary catecholamines and depressive symptoms might, therefore, indicate a challenged SNS associated with retinal microvascular calibre in an urban-dwelling cohort. Whether sympathetic innervation of the retinal vessels exists, is still being debated although it was recently demonstrated that the choroid of the uvea is densely innervated by the sympathetic system and that both α-and β-adrenergic innervations were demonstrated in the preocular central retinal artery (CRA) in humans [29]. The optic canal is a regular conduit for autonomic nerves of the internal carotid plexus to the eye. However, the possible distribution of α-and β-adrenergic receptors in the arterioles of the CRA is still unknown. Generally, in resistance vessels, vasoconstriction is mediated via α1-and α2-adrenergic receptors whilst β 2 -adrenergic receptors mediate vasodilation [2]. It was recently shown that the CRA receives adrenergic and cholinergic innervation supporting autoregulation of intra-retinal vessels [29]. Systemic sympathetic transmitter spillover (epinephrine and norepinephrine) in the carotid and retinal vasculature may thus impact on retinal perfusion. Indeed, Ref. [30] reported associations of psychosocial risk factors and depression with retinopathy signs (microaneurysms, retinal or vitreous haemorrhages, soft or hard exudates or intra-retinal microvascular abnormalities) and suggested the presence of adrenergic receptors in retinal vessels.
They further demonstrated that heterogeneity in psychosocial effects could result from greater vulnerability of subjects with diabetes and HT due to underlying vascular damage associated with these conditions. This appeared to be the case for symptoms of depression, which had a stronger association with retinopathy in subjects with HT compared with those without, 60% versus 30% greater odds of retinopathy [30].
Chronic stress, as presented by depressive symptoms, may thus induce chronic stimulation of the SNS; this initial hyperactivity may be followed by autonomic exhaustion, receptor hyporesponsivity and decreases in catecholamine levels, resulting in hyperkinetic blood pressure (BP) values [15,16,31]. The main purpose of this study was, therefore, to assess the associations between retinal microvascular calibre, as primary endpoint, and systemic adrenergic neurotransmitters and depressive symptoms, in a bi-ethnic cohort of South African men.
Design and participants
Urban Black and White African teachers were recruited as part of the prospective Sympathetic activity and Ambulatory Blood Pressure in Africans (SABPA) study [32]. All participants of the first phase of SABPA (2007-2008) were invited to participate in the follow-up. Their ages varied between 23 and 68 years. Of the initial 204 male participants in the first phase, 180 men reported for the second phase where, additionally, retinal blood vessel measurements were obtained. Men are more prone to the development of cardiovascular disease (CVD); therefore, only men were included in order to obtain a homogeneous high CVD risk cohort [1,2].
We excluded one participant with a history of epilepsy and 20 participants who did not have usable retinal microvascular images. Finally we included a total of 76 Black and 83 White Africans in the study. Participants were fully informed about the objectives and procedures of the study prior to their recruitment. All participants provided written, informed consent. The study conformed to the Helsinki Declaration (2007) and was approved by the Ethics Review Board of the North-West University, Potchefstroom Campus (approval number 0003607S6).
Assessment of health behaviour
Participants were in a semi-recumbent position from 07 h15 for at least 2 h during which the 12-lead ECG (NORAV PC 1200) registration was performed followed by blood sampling. Physical activity was assessed with the Actiheart ® (GB0/67703, CamNtech Ltd., Cambridgeshire, UK) monitors considering resting metabolic rate. The 12-lead ECG resting heart rate was used to calculate the sleep heart rate required by the Actiheart programme. Quantitative assessment of some markers was done to determine smoking status (cotinine, a nicotine metabolite) and alcohol consumption levels (gamma glutamyl transferase, γ-GT) [33]. All anthropometric measurements were performed in triplicate by registered level II anthropometrists according to standardised procedures. The body mass index (BMI) as well as body surface area (BSA) was calculated. BSA was based on the Mosteller formula [34]. Intra-and inter-variability was less than 5%.
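As an illustration of the anthropometric derivations, the snippet below computes BMI and Mosteller BSA. This is a minimal sketch, not the study's own analysis code.

```python
from math import sqrt

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def bsa_mosteller(weight_kg: float, height_cm: float) -> float:
    """Body surface area (m^2) by the Mosteller formula:
    sqrt(height_cm * weight_kg / 3600)."""
    return sqrt(height_cm * weight_kg / 3600.0)

print(round(bmi(80, 1.75), 1))            # 26.1
print(round(bsa_mosteller(80, 175), 2))   # 1.97
```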
Depressive symptoms
The Patient Health Questionnaire (PHQ-9) was used to determine the depressive symptom score of the participants [35]. The PHQ-9 is a measure of depressive symptom severity and has been validated in various ethnic groups including sub-Saharan Africans [36]. The questionnaire is designed for use in primary health-care settings adapting diagnostic criteria from the DSM-IV (Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition). Each item of the PHQ-9 evaluates the presence of one of the nine DSM-IV criteria of major depression [35]. In the current study, the Cronbach alpha-reliability index for the total PHQ-9 score was 0.80. Items on the questionnaire are scored to reflect the frequency of symptom occurrence during the prior two weeks on a scale of zero to three, with zero reflecting "not at all" and three "nearly every day," thus providing continuous score between 0 and 27 [35]. Examples of questions are: "Feeling down/depressed/hopeless; feeling bad about yourself OR that you are a failure/that you have let yourself or your family down, thoughts that you would be better off dead/of hurting yourself in some way" [35]. The recommended and established PHQ-9 cut-off point of ≥10 was used to indicate the presence of depressive symptoms [35].
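The scoring rule described above reduces to a short calculation; a minimal sketch (mirroring the description, not the study's actual scoring scripts) sums the nine items, each scored 0-3, and applies the cut-off of 10 or more to flag depressive symptoms.

```python
def phq9_score(item_scores):
    """Sum the nine PHQ-9 items (each 0-3) and flag depressive symptoms
    using the established cut-off of >= 10."""
    if len(item_scores) != 9 or any(s not in (0, 1, 2, 3) for s in item_scores):
        raise ValueError("PHQ-9 requires nine items scored 0-3")
    total = sum(item_scores)
    return total, total >= 10

print(phq9_score([1, 2, 1, 0, 2, 1, 1, 2, 1]))  # (11, True)
```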
Cardiovascular measurements
On the morning of the first clinical assessment day, ABPM and 2-lead electrocardiograph monitors were attached to participants on the non-dominant arm at their workplace between 07 h00 and 07 h30 (Meditech CE120 CardioTens ® ; Meditech, Budapest, Hungary). The ABPM was programmed to measure BP at time intervals shown for assessing sympathetic activity at 30-min intervals during the day (07 h00-22 h00) and every hour during night time (22 h00-06 h00) [37]. The successful inflation rate over this period was 85.8% (±9.14) in Africans and 90.4% (±8.61) in Whites. Hypertensive status and CVD risk were classified from 24 h ABPM as systolic blood pressure (SBP) ≥ 130 mmHg and/or diastolic blood pressure (DBP) ≥ 80 mmHg [38]. Hyperpulsatile pulse pressure (PP) was defined as 24 h SBP-24 h DBP > 50 mmHg [39]. The apparatus was removed after the last BP measurement at 07 h30 the next day.
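The classification rules above amount to simple thresholds on the 24 h means; a minimal sketch (variable names are ours) is shown below.

```python
def classify_abpm(mean_sbp_24h: float, mean_dbp_24h: float) -> dict:
    """Classify 24 h ambulatory blood pressure:
    hypertension if SBP >= 130 mmHg and/or DBP >= 80 mmHg;
    hyperpulsatile pulse pressure if (SBP - DBP) > 50 mmHg."""
    pp = mean_sbp_24h - mean_dbp_24h
    return {
        "pulse_pressure_mmHg": pp,
        "hypertensive": mean_sbp_24h >= 130 or mean_dbp_24h >= 80,
        "hyperpulsatile": pp > 50,
    }

print(classify_abpm(138, 84))  # hypertensive and hyperpulsatile (PP = 54 mmHg)
```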
Measurement of retinal vascular calibre
Static retinal microvascular measurements were performed in a well-controlled, light- and temperature-regulated laboratory using an Imedos Retinal Vessel Analyser (Germany) with a Zeiss FF450 Plus camera and the VesselMap 1 Version 3.10 software. No intake of food or caffeine-containing beverages, alcohol, smoking or exercise was allowed one hour prior to retinal vessel measurements. Participants were introduced to the procedure and screened for acute angle-closure glaucoma risk with a small light source by a trained registered nurse. Mydriasis was induced in the right eye of the participant by means of a drop containing tropicamide 1% and benzalkonium chloride 0.01% (m/v). In the event of previous injury to the right eye, the left eye was used (Black men N = 3; White men N = 1). Retinal vascular calibre was measured in the monochrome images by manually selecting first-order vessel branches in a measuring zone located between 0.5 and 1.0 optic disc diameters from the margin of the optic disc. Upon selection of the vessel, the VesselMap 2, Version 3.02 software automatically delineated the vessel's measuring area. The colour photograph was used as a reference to ascertain correct identification of venules and arterioles. Identification of vessels was done by two experienced scientists who had to agree on the vessel type before selection. Automated software calculations, based on the Knudtson revision of the Parr-Hubbard formulas, determined estimates from the six largest arterioles and venules, which were summarised as the central retinal arterial equivalent (CRAE) and central retinal venular equivalent (CRVE), respectively [40]. AVR was also calculated (CRAE/CRVE). Arterio-venular nicking was defined as occurring when a small arteriole crossed a small venule and resulted in the compression of the vein with bulging on either side of the crossing. A higher ratio, resulting from a wider retinal arteriolar calibre, a narrower retinal venular calibre or both, is an index of a better retinal vessel profile [9]. As the image scale of each eye was unknown, the values of CRAE and CRVE were expressed as measuring units (MU); 1 MU is equivalent to 1 μm when the dimensions of the eye being examined correspond to those of the normal Gullstrand eye. Reproducibility was computed for a randomly selected cohort with a correlation coefficient of 0.84. The ICC analysis involved a mixed-model framework, whereby random effects were assumed for subjects and fixed effects were assumed for the graders. The Cronbach's alpha-reliability index for the AVR was 0.91 for this randomised cohort. Retinal pathology as seen in hypertensive/diabetic retinopathy, including optic nerve cup/disc ratio and arterio-venular nicking, was diagnosed by a registered ophthalmologist.
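For readers unfamiliar with the Knudtson revision of the Parr-Hubbard formulas, the sketch below illustrates the usual iterative pairing procedure: the six largest vessels are repeatedly combined, largest with smallest, using branching coefficients of roughly 0.88 for arterioles and 0.95 for venules, until a single summary calibre remains. The coefficients and pairing scheme reflect our reading of the published formulas and are not an excerpt from the VesselMap software; the example widths are invented.

```python
import math

def knudtson_summary(widths, coefficient):
    """Iteratively pair the largest with the smallest vessel width using
    w = coefficient * sqrt(w_a**2 + w_b**2) until one value remains."""
    widths = sorted(widths)
    while len(widths) > 1:
        paired = []
        while len(widths) > 1:
            a, b = widths.pop(0), widths.pop(-1)   # smallest with largest
            paired.append(coefficient * math.sqrt(a * a + b * b))
        if widths:                  # odd count: carry the middle vessel over
            paired.append(widths.pop())
        widths = sorted(paired)
    return widths[0]

def crae(arteriole_widths):  # ~0.88 branching coefficient for arterioles
    return knudtson_summary(arteriole_widths, 0.88)

def crve(venule_widths):     # ~0.95 branching coefficient for venules
    return knudtson_summary(venule_widths, 0.95)

arterioles = [110, 105, 98, 95, 90, 85]    # six largest arterioles (MU), invented
venules = [140, 135, 128, 120, 115, 110]   # six largest venules (MU), invented
a, v = crae(arterioles), crve(venules)
print(round(a, 1), round(v, 1), round(a / v, 2))  # CRAE, CRVE, AVR
```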
24 h urinary catecholamines
A three-litre container, washed with 9 ml of 20% HCl, ensured preservation of urinary metanephrines and an accurately 24 h timed specimen (Sarstedt ® , Nümbrecht, Germany). Sampling began and ended with an empty bladder and participants were instructed to complete a 24 h diary to indicate voiding time, volume and fluid intake.
Biochemical analyses
Sodium fluoride blood samples, serum and whole blood EDTA samples were analysed for glucose, lipids, C-reactive protein (CRP), cotinine, γ-GT and glycated haemoglobin (HbA 1c ), using Unicel DXC 800 (Beckman and Coulter, USA), Modular ROCHE Automized (Switzerland) and the Konelab TM 20I Sequential Multiple Analyzer Computer (ThermoScientific, Vantaa, Finland), respectively. An acidified sample from the 24 h urine collection was stored at −80°C until analysis within one year after collection [41]. Urinary epinephrine and norepinephrine values were determined using the 3-Cat Urine ELISA Fast Track kit (LDN, Nordhorn, Germany). Intra-and inter-assay coefficients for epinephrine were 5.50% and 9.62%, respectively, and for norepinephrine 2.70% and 8.59%.
Statistical methods
Data were analysed using Statistica ® software version 12.0 (Statsoft Inc., Tulsa, USA, 2012). Skewness of data was tested and γ-GT and CRP values were logarithmically transformed. Independent T-tests determined participant characteristic differences. A priori covariates which are implicated in higher sympathetic activity and CVD risk included age, BSA, physical activity, log γ-GT, log CRP and cholesterol [33,38]. Chi-square (χ 2 ) statistics compared proportions. General linear model analyses, independent of a priori covariates, were computed to test interactions with race for depressive symptoms, norepinephrine-to-creatinine ratio (NECR), epinephrine-to-creatinine ratio (ECR) and potential cardiovascular risk markers (i.e. PP) and retinal vasculature markers, and, as a result of the high correlation between CRAE and CRVE, CRAE was adjusted for CRVE and vice versa [42]. ANCOVA's determined significant differences by comparing ethnic male groups from least square means analyses whilst adjusting for covariates (age, BSA, physical activity, log γ-GT, log CRP, cholesterol).
Multiple linear regression analyses were computed in the total male cohort and in separate race groups. Unadjusted associations between retinal vessel calibre markers, depressive symptoms and catecholamines were computed in the male cohorts. Forward stepwise multiple regression analyses were performed in various models based on significant interactions for race. Dependent variables were AVR, CRVE and CRAE. Independent covariates included age, BSA, physical activity, log γ-GT, log CRP, cholesterol, 24 h PP, depressive symptoms, NECR and ECR. As a result of the high correlation between CRAE and CRVE, CRAE was added as a covariate for CRVE and vice versa.
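A simplified version of the forward stepwise procedure can be expressed in a few lines. This is only an illustrative sketch using ordinary least squares (the column names are hypothetical and the study itself used Statistica); at each step the candidate predictor with the smallest p-value below the entry threshold is added to the model.

```python
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(df: pd.DataFrame, outcome: str, candidates: list,
                     forced=(), p_enter: float = 0.05):
    """Forward stepwise OLS: start from the forced covariates and keep adding
    the candidate with the lowest p-value until none falls below p_enter."""
    selected = list(forced)
    remaining = [c for c in candidates if c not in selected]
    while remaining:
        pvals = {}
        for cand in remaining:
            X = sm.add_constant(df[selected + [cand]])
            pvals[cand] = sm.OLS(df[outcome], X).fit().pvalues[cand]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= p_enter:
            break
        selected.append(best)
        remaining.remove(best)
    return sm.OLS(df[outcome], sm.add_constant(df[selected])).fit()

# Hypothetical usage, with AVR as the dependent variable:
# model = forward_stepwise(data, "AVR",
#                          candidates=["PP_24h", "depressive_symptoms", "NECR", "ECR"],
#                          forced=["age", "BSA", "physical_activity"])
# print(model.summary())
```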
Sensitivity analyses: Forward stepwise regression analyses with similar dependent and independent covariates were repeated in several models in both ethnic male groups, by (a) excluding HIV-positive status participants (N = 16) (b) including only 24 h hypertensive participants and (c) adding HT medication users, cotinine and/or serum glucose as independent covariates. Significance was noted as p ≤ 0.05.
Results
General linear model analyses showed ethnic differences for the principal variables investigated, NECR and ECR (F(1,151) = 20.66, p < 0.0001), depressive symptoms (F(1,165) = 4.45, p = 0.04) as well as AVR (F(1,150) = 9.09, p = 0.003), independent of a priori covariates. Table 1 shows unadjusted baseline characteristics of the Black and White men. The Black men displayed lower waist circumference, BSA, BMI and physical activity but a larger metabolic risk with higher glucose, HbA1c, cholesterol, CRP and γ-GT than their White counterparts. They also had a higher depressive symptom score, with 34.2% of the Black men above the cut-off point for modestly severe depressive symptoms [36] compared to 13.3% of the White men. Despite their higher depressive symptom score, the Black men had lower 24 h urine NECR and 24 h urine ECR ratios than the White men. The Black group had higher BP, PP, arterio-venular nicking, optic nerve cup/disc ratio and CRVE values, whilst their retinal AVR was smaller compared to that of the White group. Forward stepwise linear regression analyses (Table 3) revealed expected patterns of associations between the dependent retinal microvascular calibre variables (AVR, CRAE and CRVE) and the independent variable PP in the total group (Model 1). AVR and CRAE were negatively associated with PP, whilst CRVE showed a positive association with PP. In the total group, negative associations were found between AVR, CRAE and depressive symptoms, whilst no associations were found between any of the retinal microvascular variables and NECR or ECR.
Lifestyle and biochemical variables
In the separate ethnic groups, AVR was negatively associated with PP in both racial groups. CRAE was negatively associated with PP in the White men whilst positively associated with CRVE in the Black men. In the White group, AVR and CRAE were negatively associated with depressive symptoms, whilst AVR was, rather unexpectedly, positively associated with NECR.
In the Black group, AVR and CRAE were negatively associated with ECR, whilst a positive association existed with CRVE. No unadjusted or adjusted associations between depressive symptoms and the catecholamines were revealed (data not shown).
The outcomes did not change in sensitivity analyses after excluding HIV-positive participants or when including only 24 h hypertensive participants. Adding HT medication users, cotinine and serum glucose as independent covariates also did not alter any of the associations. In models with CRAE as a dependent variable, adjustment for CRVE was made and vice versa. Abbreviations: AVR, arteriolar-to-venular ratio; CRAE, central retinal arterial equivalent; CRVE, central retinal venular equivalent; 24 h PP, 24 h pulse pressure; ECR, epinephrine-to-creatinine ratio; NECR, norepinephrine-to-creatinine ratio.
Discussion
The aim of this study was to evaluate the association between the retinal microvascular calibre as primary endpoint and systemic adrenergic transmitters and depressive symptoms as independent variables, comparing a Black and White male cohort from South Africa.
The main novel finding suggests a vulnerable cardiometabolic profile in the Black men, in terms of more depressive symptoms and higher PP, arterio-venular nicking, optic nerve cup/disc ratio and CRVE values, together with a smaller retinal AVR. Despite lower catecholamine levels, epinephrine was associated with arteriolar narrowing, venular widening and hyperpulsatile BP (indicative of arterial stiffness) in the Black men.
Ethnicity and retinopathy
Although cultural differences exist between the Black and White groups, all the participants were teachers with the same educational background, income and working conditions. Despite these similarities, the Black group clearly exhibited a poorer health profile than their White counterparts with regard to cardiometabolic and mental health characteristics. They presented with increased cardiometabolic risk markers such as hyperglycaemia, cholesterol, inflammation, alcohol consumption and depressive symptoms. The Black group's mean BP values were above the cut-off point for HT (ABPM ≥ 130/80) [38], which is reflected in the HT prevalence of nearly 80% in this group. Elevated BP and PP are associated with structural microvascular changes and our findings are in line with these reports [43,44]. Indeed, elevated BP and PP were associated with attenuated retinal arteriolar and increased venular diameter values and consequently also the AVR. This may impact on vascular wall remodelling, as is evident from the presence of arteriolar narrowing, AV nicking, retinopathy [45] and possibly progression towards subclinical atherosclerosis. If the effects of elevated glucose and HbA1c levels are added, both of which, in the case of the Black men, are above the American Diabetes Association cut-off points indicative of a prediabetic state, changes in the retinal vessels comparable to those in diabetic subjects could be expected. The prevalence of AV nicking and hypertensive/diabetic retinopathy is a clear indication that the retinal vasculature is showing signs of structural changes and reduced microvascular health in both groups, but especially in the Black group.
Retinal vessel calibre and depressive symptoms
Depression has recently been acknowledged as a major risk factor for poorer prognosis in patients with coronary heart disease by the American Heart Association [23]. The depressive symptom score of the Black men was significantly higher than that of the White men with 34 % of the group exceeding the cut-off point for moderately severe depression, thereby worsening their CVD risk. Although underlying stress levels, as assessed using the depressive symptoms risk score, were elevated in the Black men, both their 24 h ECR and NECR levels were lower compared to their White counterparts. During chronic stress the initial hyperactivity may be followed by autonomic exhaustion, receptor hyporesponsivity and decreases in catecholamines and cortisol [14][15][16][17][18]. PNMT converts norepinephrine to epinephrine and is regulated by glucocorticoids synthesised in the adrenal gland [19]. One way that it can regulate PNMT expression is by corticosterone's positive influence on the maintenance of PNMT mRNA [20]. Therefore a reduction in cortisol will lead to a decrease in the synthesis of epinephrine. These alterations in autonomic function are of importance as they have been associated with both depression and cardiovascular pathology [14,15]. It is known that depression is often preceded by psychosocial stress [22] which might, therefore, also be associated with the risk for cardiac remodelling as well as a poor prognosis in individuals with coronary heart disease [23]. This notion is enhanced by the finding that in a Black cohort with symptoms of depression, attenuated cortisol and catecholamine metabolites were identified as risk factors for the development of vascular diseases [24]. Even though depression [46], diabetes and HT are associated with activation of the SNS [31], we could not replicate these findings.
Our results, therefore, oppose the findings from Ref. [47], showing a positive association between NECR excretion and moderate depressive symptoms. As more depressive symptoms and a hypertensive state are evident in the Black men, the SNS and adrenal medulla may present neural fatigue or "burnout." Our findings could, therefore, indicate a possible downregulation of norepinephrine and epinephrine secretion as a consequence of long-term overstimulation of the SNS and possible β-adrenergic hyporesponsivity in the Black men. In support of this notion, depressed heart rate variability (HRV) was associated with increased parasympathetic dominance albeit cardiac contractility (24-h heart rate and SBP) in the current African men at baseline, rather suggesting β-adrenergic receptor activation [1,48]. Conversely, increased SNS activity and a possible vagal-impaired HR profile may however contribute to disturbed endothelial function, possibly because of activation of β-adrenergic receptors [49]. When α-adrenergic responsiveness though prevails [48], dysregulation or desensitisation of β-adrenergic receptors may occur. This was evident in the clustering of increased 24-h heart rate, SBP and depressed HRV values which indicated a possible diminished β-adrenergic responsiveness and vagal-impaired response [1]. A plausible explanation may be that depressed HRV as a reflection of α-adrenergic sympathetic overdrive could also be due to poor ventricular performance as was observed in another study [50].
It supports previous findings in these SABPA Black men, where blunted neuroendocrine responses were associated with vascular wall remodelling concurring with a profile of autonomic exhaustion and emotional distress [14,16]. Our subsample of White men showed a 13% prevalence of depressive symptoms which were inversely associated with the retinal vessel calibre. Findings from the ARIC study compare favourably with the White group where the depressive symptom score was associated with retinal arteriolar narrowing. In contrast, we could not replicate these findings in the Black group. Clearly prospective studies are needed to determine causality [30].
Retinal microvascular calibre and catecholamines
SNS activation is present in both diabetes and HT [31] and may be associated with microvascular calibre. Increased perfusion pressure induces contraction in the ocular arteries, which are resistance vessels regulated by myogenic mechanisms (Bayliss effect) [51]. Retinal microvascular calibre associations with the adrenergic transmitters revealed different profiles in the two ethnic groups. Chronic SNS activation will desensitise the baroreceptors with compensatory increases in BP and PP, as was shown in the Black group [1]. In the Black group, the smaller CRAE and a larger CRVE are both associated with epinephrine but not with norepinephrine levels. This may imply that epinephrine will reduce blood flow to the retina by stimulating arteriolar contraction, while also increasing the drainage of blood away from the retina by stimulating venular dilation. Myogenic tone may, however, be impaired in Blacks and increase retinal venular widening, especially during chronic pressure overload with increased hyperpulsatility. An overactive sympathetic system and/or chronic depression symptoms might therefore explain part of the mechanism. Presently, instead of epinephrine's normal arterial vasodilatory response [52], it induces vasoconstriction, which may suggest hyporesponsivity or down-regulation of the β2-adrenergic receptors, as was also shown previously [1].
Therefore, this hyporesponsivity may be a homeostatic reaction to protect the retina from SNSstimulated increases in hyperpulsatile pressure in a cohort who has more depressive symptoms. This may be true for both the retina and the brain as emotional stress can also provoke reversible cerebral vasoconstriction similar to retinal vasoconstriction [53].
Both a smaller AVR and a larger CRVE are associated with a greater risk for stroke mortality [7]. This also suggests that β-adrenoceptor hyporesponsivity due to SNS hyperactivity as reflected in lower catecholamine levels might constitute an increased risk for vascular hypertrophy and eventually stroke in the Black male cohort. The same associations were not seen in the White men, maybe as result of their lower depressive symptom scores as well as their lower BP and PP levels. The prevalence of depressive symptoms and possible down-regulated catecholamine profile presuming chronic distress in the Black men compared to their White counterparts, therefore, may explain the differences or lack of association between the catecholamine levels and AVR in the Whites.
Retinal microvascular calibre and local or systemic sympathetic activation
Whether local or systemic catecholamine levels are associated with retinopathy is hotly debated [29,30]. Recently, both α-and β-adrenergic innervation was demonstrated in the preocular CRA in humans [29]. It seems clear that some aspects of sympathetic transmission regulate choroidal and CRA blood flow by way of changes in vascular smooth muscle tone [54]. The inverse association between AVR and ECR may support a vasodilatory (venular) or vasoconstrictive (arteriolar) tone in the retinal vessels. A notion for vasoconstriction is suggested as a hypertensive state increases peripheral vascular resistance in the retinal arterioles [7]. Therefore, increased or hyperpulsatile PP exerting mechanical stress on the vessel walls may contribute to a diminished β-adrenergic albeit an augmented α-adrenergic responsiveness in Black men [1,33] and subsequent risk of vascular hypertrophy [1] and possibly arteriolar narrowing. The profile of β-adrenergic hyporesponsivity in Black men emphasises central control of the brain on the circulatory system irrespective of the vascular bed.
Several limitations should be noted. The cross-sectional design of the current study prevents us from being able to infer causality. Studies showing direct evidence of sympathetic tone and retinal vascular remodelling in human models could greatly contribute to our knowledge in this field. Larger sample sizes and more diverse data on autonomic and endothelial function are needed to delineate possible physiological mechanisms and the role of the ageing process.
Only an indirect measure of SNS activity via 24 h catecholamine concentrations was measured and more direct measurements should be implemented, along with the determination of the corticosteroid profile. A more representative sample of the whole population is necessary to draw generalised conclusions.
Conclusions
A profile of β-adrenergic hyporesponsivity was evident in Black men. They revealed more depressive symptoms, indicative of a chronically challenged SNS, which were associated with retinal vascular remodelling and possible vascular hypertrophy. Whether these changes
The Role of Furin in the Pathogenesis of COVID-19-Associated Neurological Disorders
Neurological disorders have been reported in a large number of coronavirus disease 2019 (COVID-19) patients, suggesting that this disease may have long-term adverse neurological consequences. COVID-19 occurs from infection by a positive-sense single-stranded RNA virus called severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The membrane fusion protein of SARS-CoV-2, the spike protein, binds to its human host receptor, angiotensin-converting enzyme 2 (ACE2), to initiate membrane fusion between the virus and host cell. The spike protein of SARS-CoV-2 contains the furin protease recognition site and its cleavage enhances the infectivity of this virus. The binding of SARS-CoV-2 to the ACE2 receptor has been shown to downregulate ACE2, thereby increasing the levels of pathogenic angiotensin II (Ang II). The furin protease cleaves between the S1 subunit of the spike protein with the binding domain toward ACE2 and the S2 subunit with the transmembrane domain that anchors to the viral membrane, and this activity releases the S1 subunit into the blood circulation. The released S1 subunit of the spike protein also binds to and downregulates ACE2, in turn increasing the level of Ang II. Considering that a viral particle contains many spike protein molecules, furin-dependent cleavage would release many free S1 protein molecules, each of which can downregulate ACE2, while infection with a viral particle only affects one ACE2 molecule. Therefore, the furin-dependent release of S1 protein would dramatically amplify the ability to downregulate ACE2 and produce Ang II. We hypothesize that this amplification mechanism that the virus possesses, but not the infection per se, is the major driving force behind COVID-19-associated neurological disorders.
Introduction
Coronavirus disease 2019 (COVID-19) is caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), a positive-sense single-stranded RNA virus of the Coronaviridae family, and has had a devastating impact worldwide. As of December 2023, there have been over 770 million recorded cases and almost 7 million deaths globally, although the actual numbers are believed to be higher. This pandemic has been one of the deadliest in history, making COVID-19 a leading cause of death [1]. While the pandemic has now officially been declared over, many still suffer from medical issues related to this virus, such as long COVID or post-acute sequelae of SARS-CoV-2 infection (PASC).
Emerging evidence suggests that although SARS-CoV-2 was initially thought to be primarily a respiratory illness, it has the ability to infiltrate the central nervous system, where it can cause a variety of impairments, from non-specific symptoms such as confusion, anosmia, and anxiety [2][3][4] to serious long-term neurological complications, including cognitive impairments, cerebrovascular diseases, demyelinating pathologies, encephalopathy, stroke, etc. [5,6]. Problems affecting the central nervous system have been reported in approximately 35.6% of COVID-19 cases [7]. Additionally, hippocampal shrinkage, reduction in brain size, and neurodegeneration have been reported following SARS-CoV-2 infection [8,9]. Therefore, it is crucial to shed light on SARS-CoV-2 invasion and its impact on the central nervous system. SARS-CoV-2 has been shown to replicate in neuronal U251 cells, supporting the idea that the virus may play a part in the development of neurological lesions [10]. SARS-CoV-2 viral RNAs and proteins were found in anatomically diverse brain areas and cerebrospinal fluid, providing indications of SARS-CoV-2 neuroinvasion and neurotropism [11,12]. However, despite these findings, it is unknown whether the neurological impairments linked to SARS-CoV-2 are the result of direct viral and/or spike protein action or are instead a result of hypoxia, a surge of pro-inflammatory cytokines driven by infection, or vascular and blood-brain barrier abnormalities [13][14][15][16]. There are also reports showing that the SARS-CoV-2 proteome can be assembled into neurotoxic amyloids, causing neurological symptoms that appear after infection [17].
Evidence suggests that the malfunctioning of the renin-angiotensin system components in the brain contributes to the neurological symptoms of COVID-19 [18,19].SARS-CoV-2 infects host cells through the interactions between the membrane fusion protein of the virus, the spike protein, and its human host receptor, angiotensin-converting enzyme 2 (ACE2) [20].ACE2 physiologically functions to convert angiotensin II (Ang II) into angiotensin 1-7 (Ang 1-7) [21], thereby degrading Ang II.Ang II, a decapeptide that is produced from Ang I in the renin-angiotensin-aldosterone system, is the major vasoconstrictor and plays a crucial role in the development of many diseases, including neurological disorders.
The binding of the SARS-CoV-2 spike protein to ACE2 has been shown to activate multiple biologic mechanisms that result in the reduction of the expression of ACE2 in the plasma membrane, thereby decreasing the peptidase activity of ACE2 to convert Ang II to Ang 1-7, in turn increasing the levels of Ang II [22].An increase in the level of Ang II activates the Ang II/Ang II receptor type 1 (AT1R) pathway, thereby accelerating pathogenic mechanisms including neurological complications and neurodegeneration [23,24].It has been shown that the angiotensin-converting enzyme (ACE)/Ang II/AT1R axis is upregulated in neurodegenerative diseases, and causes oxidative stress, increased permeability of the blood-brain barrier, neuroinflammation, neurovascular dysfunction [25], and a reduction in cerebral blood flow, contributing to the development of Alzheimer's disease [26].Thus, medications that inhibit the renin-angiotensin system may reduce the risk of developing Alzheimer's disease [26].
In the brain, the deleterious effects of the ACE/Ang II/AT1R pathway are counterbalanced by its alternative axes, which are ACE/Ang II/Ang II receptor type 2 (AT2R) and ACE2/Ang 1-7/Mas receptor, both of which have positive effects on cognition and memory.The counteracting and protective effects of the ACE2/Ang 1-7/AT2R and MAS receptor pathways are associated with the lowering of oxidative stress and inflammatory reactions, as well as vasodilation through the creation of nitric oxide and prostaglandins [27,28], in addition to antithrombotic effects and neuroprotection [29][30][31].
Neurological consequences seen in COVID-19 have also been linked to impaired neurotransmission in the central nervous system caused by SARS-CoV-2.Identifying the pathways of how SARS-CoV-2 affects the central nervous system should help develop effective treatment strategies and prevent its negative effects on neurocognition.We herein propose a novel mechanism in which spike protein-mediated effects may be amplified and contribute to COVID-19 associated neurological disorders.
The Hypothesis
Our hypothesis is that the furin protease-dependent cleavage of the SARS-CoV-2 spike protein and release of the circulating S1 subunit protein amplify ACE2 downregulation and subsequent Ang II production, thereby promoting neurological disorders seen in COVID-19 patients.
Panel (a) in Figure 1 depicts the binding of the spike protein to ACE2 to downregulate the ACE2 protein.The downregulation of ACE2 results in the reduced overall peptidase activity of this enzyme to degrade Ang II, thereby increasing the levels of Ang II, a major pathogenic mediator.Panel (b) of Figure 1 shows that it may be expected that the ratio of roughly one spike protein molecule of one SARS-CoV-2 viral particle interacts with one ACE2 molecule, resulting in downregulating one ACE2 molecule per viral particle.If many spike protein molecules of a given viral particle are cut by the furin protease and come off the virus, multiple (perhaps 50-100) free spike protein molecules are produced, which can result in the binding of multiple (50-100) ACE2 molecules per viral particle.This could result in 50-100 ACE2 molecules becoming downregulated.Panel (c) in Figure 1 shows that the amplification of ACE2 downregulation by furin-dependent cleavage of the spike protein forms free S1 protein that then dramatically increases the level of Ang II.Higher levels of Ang II are expected to produce more pathological conditions, including neurological damage.We propose that this furin-dependent amplification process contributes to the mechanism of COVID-19-associated neurological disorders.
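The amplification argument sketched in Figure 1 amounts to simple counting. The snippet below uses the illustrative "perhaps 50-100 spike molecules per virion" figure from the text, not a measured value, and is only a back-of-envelope sketch.

```python
def ace2_downregulated(virions: int, spikes_per_virion: int, furin_cleavage: bool) -> int:
    """Count ACE2 molecules potentially downregulated.
    Without furin cleavage: roughly one spike-ACE2 interaction per virion.
    With cleavage: every released S1 subunit can engage its own ACE2."""
    return virions * (spikes_per_virion if furin_cleavage else 1)

virions = 1000
for n_spikes in (50, 100):
    without = ace2_downregulated(virions, n_spikes, furin_cleavage=False)
    with_furin = ace2_downregulated(virions, n_spikes, furin_cleavage=True)
    print(n_spikes, without, with_furin, with_furin / without)  # 50-100x amplification
```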
Evaluation of the Hypothesis
This hypothesis follows up the previously described viral protein fragment theory of COVID-19 pathogenesis [32] which illustrated that, as humans are infected with SARS-CoV-2, the virus releases fragments of the spike protein that can target host cells without the rest of the viral component.
SARS-CoV-2 is a single-stranded RNA virus that attaches to the host cells through the interactions between the spike protein (the membrane fusion protein of this virus) and the host cell receptor ACE2, leading to the fusion of the viral and host cell membranes that allows the entry and subsequent replication of the virus.Although SARS-CoV-2 is a respiratory virus, other organs such as the brain are often affected, which raises questions about whether merely the infection and replication of the virus in the host cells alone are responsible for the pathologies associated with COVID-19.
The SARS-CoV-2 spike protein is composed of two subunits: S1 and S2 (Figure 2).The S2 subunit contains the transmembrane domain (TM) and is anchored to the viral membrane.The S1 subunit of the spike protein sticks out of the viral particle and contains the receptor-binding domain (RBD) that interacts with the major host cell receptor of SARS-CoV-2, ACE2 [33,34].During the virus entry to host cells, the spike protein is cleaved into S1 and S2 subunits mainly by transmembrane serine protease 2 (TMPRSS2) at the cell surface of lung epithelial cells.Proteolysis into S1 and S2 subunits by more ubiquitous enzymes such as the furin proprotein convertase also occurs and has been shown to enhance the infectivity of the SARS-CoV-2 virus [35].Between the S1 and S2 subunits is an RRAR amino acid sequence that is a consensus sequence where furin protease cleaves.
In addition to ACE2 binding to the spike protein RBD of the intact virus to facilitate the viral entry, the S1 subunit of the spike protein can be cleaved off from the virus and released in the blood circulation by proteases such as furin.In fact, the circulating S1 protein has been detected in COVID-19 patients.Ogata et al. [36] used ultra-sensitive serial profiling Single-Molecule Array (Simoa) assays to quantitatively detect SARS-CoV-2 spike, the S1 subunit, and nucleocapsid antigens in the plasma of COVID-19 patients.The authors detected SARS-CoV-2 S1 and nucleocapsid antigens in 41 out of 64 COVID-19positive patients.In a retrospective study of plasma samples collected from 63 patients in Boston, SARS-CoV-2 proteins including S1 spike protein were detected in the plasma of the majority of COVID-19 patients with long COVID conditions and were persistently detected at various time periods up to 12 months after diagnosis [37].Further, widely used mRNA COVID-19 vaccines that encode for the full-length spike protein (S1 + S2) have also been shown to produce the circulating S1 protein [38][39][40].Ogata et al. [38] again used the Single-Molecule Array (Simoa) assays to detect SARS-CoV-2 spike, the S1 subunit, and nucleocapsid proteins in the plasma of 13 mRNA-1273 vaccine recipients.Overall, 11 of 13 participants exhibited detectable levels of S1 subunit protein as early as one day after the first vaccine administration.The release of the S1 subunit of the spike protein from the COVID-19 vaccine could be due to furin-dependent cleavage of the S1 + S2 spike protein molecules that are expressed on the plasma membrane with the S1 side facing extracellularly after the administration of mRNA vaccines.Taking into consideration the experimental results showing that an intravenously injected S1 protein can easily cross the murine blood-brain barrier and enter the parenchymal tissue and interstitial fluid spaces of the brain [41], as well as its persistence in circulation long after infection, we can suggest that the circulating S1 is more likely to cause neurological complications than the virus itself.
The furin subtilisin-like eukaryotic endoprotease cleaves proteins at the consensus amino acid sequence Lys/Arg-X n -Lys/Arg [42].It is a protein with a calculated molecular weight of 87 kDa and was named furin because it was in the upstream region of an oncogene FES, thus becoming known as the FUR (FES Upstream Region).Since it cleaves basic amino acid motifs, it is also known as the paired basic amino acid-cleaving enzyme (PACE).It is ubiquitously expressed with high levels found in the salivary glands, liver, and bone marrow.Physiologically, it functions to exert proteolytic activation of various hormones, growth factors, receptors, adhesion molecules, and enzymes.Its proteolytic substrates also include proteins of various bacterial toxins and viruses, including human immunodeficiency virus (HIV) and dengue virus.In mammalian cells, furin accumulates in the Golgi, and it can traffic to the plasma membrane, while C terminal proteolytic cleavage separates the transmembrane domain from the catalytically active domain that could occur in the extracellular space [42].Using the fluorogenic substrate boc-Arg-Val-Arg-Arg-MCA, Vidricaire et al. [43] detected the endoproteolytic activity of secreted furin in the media of BSC40 cells overexpressing furin.As furin can occur in the extracellular space, the circulating S1 may be produced from SARS-CoV-2 as well as from COVID-19 vaccines by this enzyme.
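To make the recognition motif concrete, the toy scan below looks for the minimal polybasic furin recognition pattern commonly cited as Arg-X-X-Arg, which the RRAR sequence at the S1/S2 junction satisfies. The example sequence is an invented fragment for illustration, not an excerpt of the SARS-CoV-2 spike sequence.

```python
import re

# Minimal furin recognition pattern: R-X-X-R, with cleavage after the final R.
FURIN_MOTIF = re.compile(r"R..R")

def furin_sites(protein_seq: str):
    """Return (1-based position, motif) pairs for candidate furin cleavage sites."""
    return [(m.start() + 1, m.group()) for m in FURIN_MOTIF.finditer(protein_seq)]

toy_fragment = "TNSPRRARSVAS"    # invented fragment containing an RRAR motif
print(furin_sites(toy_fragment))  # [(5, 'RRAR')]
```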
The S1 subunit of the spike proteins of both SARS-CoV-2, which caused COVID-19, and SARS-CoV, which caused severe acute respiratory syndrome (SARS), contains the RBD that binds to ACE2.Since the 2002 SARS outbreak, research has shown that the spike protein binding to its host cell receptor ACE2 results in the downregulation of ACE2, in turn increasing the major pathogenic mediator Ang II.In mice, Kuba et al. [44] reported in 2005 that SARS-CoV infection, as well as injection with the recombinant SARS-CoV spike protein, reduces ACE2 expression.Importantly, the authors showed that the worsening of acute lung failure in this infection is primarily caused by SARS-CoV spike protein-mediated ACE2 downregulation.In these mice, the spike protein increased Ang II, and the angiotensin receptor inhibitor losartan attenuated the spike protein-induced enhancement of lung injury.In HEK293 cells, the SARS-CoV spike protein RBD was found to be internalized together with ACE2 [45].
After the COVID-19 pandemic started, Bayati et al. [46] found that SARS-CoV-2 undergoes clathrin-mediated endocytosis in HEK293T cells.The SARS-CoV-2 infection downregulates ACE2 in Syrian golden hamsters and in cultured HEK293A cells transfected with ACE2 by inducing clathrin-dependent endocytosis and degradation in the lysosome [47].Expression of GFP-tagged ACE2 in HEK293T cells also demonstrated the internalization of ACE2 in response to the recombinant RBD protein treatment [48].Using structured illumination microscopy, endocytosis of the SARS-CoV-2 spike protein RBD-ACE2 complex was visualized in living cells [49].These results provide evidence that the binding of the spike protein to ACE2 results in endocytosis-mediated internalization of the spike protein-ACE2 complex into the cells and the ultimate degradation of ACE2.Lei et al. [50] reported that Syrian hamsters infected with spike protein-expressing pseudovirus had reduced ACE2 protein expression in the lungs.Their experiments suggested a mechanism by which the spike protein increases redox stress, leading to AMPK deactivation, MDM2 upregulation, and ACE2 destabilization.In addition to the ACE2 protein downregulation mechanisms via spike protein-mediated internalization and degradation, Sui et al. [51] reported that the SARS-CoV-2 spike protein reduces the mRNA expression of ACE2 in primary cells of lung bronchoalveolar lavage from naïve rhesus macaques.An interesting study by Gao et al. [52] similarly suggested that the internalized SARS-CoV-2 spike protein activates intracellular signals to degrade ACE2 mRNA.Consistent with these experimental findings, our immunohistochemical evaluations of human patients who died of COVID-19 showed reduced ACE2 protein expression by COVID-19 in patients both with and without Alzheimer's disease (Figure 3).In the brains of patients without known neurological diseases, ACE2 protein expression was predominantly detected in capillary endothelium as shown in the pink stain (Panel (a)).Patients who died of COVID-19 showed downregulated ACE2 protein expression at a very low level (Panel (b)).Consistent with previously reported Western blotting and RT-PCR results obtained by our laboratory as well as others [53-56], ACE2 expression levels were higher in the brains of Alzheimer's disease patients compared to controls as monitored by immunohistochemistry (Panel (c)).Even in Alzheimer's brains, COVID-19 decreased the protein expression of ACE2, but only to a level that was higher than in the brains of COVID-19 patients without Alzheimer's disease (Panel (d)).Thus, it can be speculated that COVID-19 decreases ACE2 expression in the brain via the actions of the spike protein, highlighting the importance of our hypothesis that furin would amplify the actions of the spike protein to downregulate ACE2.
Materials and Methods (Figure 3)
De-identified postmortem formalin-fixed paraffin-embedded human brain tissues obtained in Kyiv, Ukraine were cut into 5 µm thick sections. Slides were subjected to immunohistochemistry using an anti-ACE2 antibody (Rabbit Angiotensin Converting Enzyme 2 Monoclonal Antibody) purchased from MyBioSource (San Diego, CA, USA) and the Master Polymer Plus Detection System (Phosphatase and AP Chromogen) purchased from Vitro Master Diagnostica, Spain. Specimens were examined using a Leica BX 51 microscope, a Leica MC 190 digital camera, and the Leica LAS software (Leica Application Suite X 3.0.12) at a magnification of 400×.
Results (Figure 3)
Immunohistochemistry analysis using the ACE2 antibody showed that control brains from individuals without neurological diseases exhibited ACE2 protein expression in the capillary endothelium as shown in pink staining (Figure 3a).Patients who died of COVID-19 showed dramatically downregulated ACE2 (Figure 3b).Alzheimer's disease patients showed upregulated ACE2 protein expression in the brain (Figure 3c).COVID-19 also downregulated ACE2 protein expression in the brains of patients with Alzheimer's disease (Figure 3d).
Consequences of the Hypothesis
Since the physiological function of ACE2 is to degrade Ang II [57], the loss of ACE2 results in increased Ang II and associated pathologies. ACE2 is a monocarboxypeptidase that is mainly expressed in vascular endothelial cells, although its expression in human neurons has also been reported. Xu and Lazartigues [58] showed the expression of ACE2 in human pluripotent stem cell-derived neurons by immunohistochemistry. ACE2 substrates have hydrophobic or basic residues at the C-terminal end, preceded by a Pro-X-Pro sequence, although a single proline residue is sufficient for ACE2 activity. Ang II is an octapeptide with the sequence Asp-Arg-Val-Tyr-Ile-His-Pro-Phe, with a hydrophobic phenylalanine at the C-terminus preceded by a proline residue. ACE2 cleaves off the C-terminal phenylalanine residue, leaving Asp-Arg-Val-Tyr-Ile-His-Pro, which is Ang 1-7.
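As a purely illustrative aside, the cleavage just described can be sketched in a few lines of Python; the single-letter sequence strings follow the peptide sequence given above, and the helper function name is a hypothetical construct of ours.

```python
# Illustrative sketch of ACE2 acting as a monocarboxypeptidase on Ang II.
# Single-letter codes: D=Asp, R=Arg, V=Val, Y=Tyr, I=Ile, H=His, P=Pro, F=Phe.
ANG_II = "DRVYIHPF"                      # Asp-Arg-Val-Tyr-Ile-His-Pro-Phe (octapeptide)

def ace2_cleave(peptide: str) -> str:
    """Remove the single C-terminal residue, as ACE2 does to Ang II."""
    return peptide[:-1]

ANG_1_7 = ace2_cleave(ANG_II)            # "DRVYIHP" = Asp-Arg-Val-Tyr-Ile-His-Pro (Ang 1-7)
print(ANG_1_7)
```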
It has been shown that Ang II levels in plasma samples from SARS-CoV-2-infected patients in Shenzhen, China were markedly elevated [59]. Also, a study of 82 non-hypertensive patients in Wuhan, China by Wu et al. [60] showed that plasma Ang II levels were higher in COVID-19 patients than in non-COVID controls. A study of 30 patients hospitalized due to COVID-19, conducted at the Clinics Hospital at the University of Campinas in Brazil by Camargo et al. [61], showed that patients with critical COVID-19 had higher Ang II levels than patients presenting with severe COVID-19. In this study, levels of ACE, ACE2, Ang 1-7, and Ang 1-9 were found to be similar in the two groups. A study at Istanbul University-Cerrahpasa Hospital in Turkey by Ipekci et al. [62] showed that serum samples from COVID-19 patients had significantly lower ACE2 levels than controls and increased Ang II levels.
If the action of the spike protein to downregulate ACE2 only occurs via an intact virus, one may envision that one viral particle may downregulate one ACE2 protein molecule.It is thought that a coronavirus particle may contain about 50-100 trimers of spike proteins [63] based on an electron cryomicroscopy study performed on SARS-CoV by Neuman et al. [64], depending on whether the spike protein or ribonucleoprotein spacing is used to calculate the surface area of a spike unit cell.Thus, 50-100 spike protein molecules could be produced from one viral particle by furin proteolytic activity, amplifying the ability to downregulate ACE2 and produce Ang II 50-100-fold.
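The amplification argument can also be written out as simple arithmetic; the per-particle numbers in this sketch are the estimates quoted above and are used only for illustration.

```python
# Worked arithmetic for the proposed furin-mediated amplification (illustrative).
ace2_engaged_by_intact_virion = 1        # assumption: one bound virion engages one ACE2
spikes_per_virion = (50, 100)            # cryo-EM estimate for SARS-CoV [63,64]

# If furin releases every spike as free S1, each S1 molecule can engage one ACE2:
amplification = tuple(n / ace2_engaged_by_intact_virion for n in spikes_per_virion)
print(amplification)                     # (50.0, 100.0), i.e., a 50-100-fold amplification
```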
This hypothesis is important because it suggests that this amplification mechanism conferred by furin contributes to the pathogenesis of neurological and other complications seen in COVID-19. If this hypothesis is confirmed, testing of furin inhibitors may benefit patients suffering from neurological and other disorders due to SARS-CoV-2 infection as well as COVID-19 vaccines.
Our pilot study in Kyiv, Ukraine suggested that 6 out of 40 (15%) patients over 75 years of age who died of COVID-19 had early signs of Alzheimer's disease.Figure 4 shows representative histology images of a patient who died of Alzheimer's disease with strong Tau expression and pronounced brain atrophy (Panel (a)), as well as a patient who died of COVID-19 with milder expression of Tau and less pronounced brain atrophy, perhaps indicating an early stage of the development of Alzheimer's disease (Panel (b)).In the context of this hypothesis paper, these findings raise the question of whether the furin-mediated amplification of the spike protein/ACE2 binding results in enhanced downregulation of ACE2 expression and subsequent elevation of Ang II and whether these events may lead to the appearance of early pathogenic events for Alzheimer's disease in some COVID-19 patients.
Materials and Methods (Figure 4)
De-identified postmortem formalin-fixed paraffin-embedded human brain tissues obtained in Kyiv, Ukraine were cut into 5 µm thick sections.Slides were subjected to immunohistochemistry using the Tinto Tau antibody (purchased from Bio SB) and the Master Polymer Plus Detection System.Specimens were examined using a Leica BX 51 microscope, a Leica MC 190 digital camera, and the Leica LAS software at a magnification of 400×.
Results (Figure 4)
Panel (a) of Figure 4 shows an immunohistochemistry image of a patient who died of Alzheimer's disease with pronounced expression of Tau (forming Tau tangles or deposits) and significant brain atrophy.Panel (b) of Figure 4 shows the immunohistochemistry image of a patient who died of COVID-19 with early-stage Alzheimer's disease, weaker expression of Tau (less amounts of deposits), and less brain atrophy.
Alternative to the Stated Hypothesis
In Section 2 "The Hypothesis", we provided a specific hypothesis based on ample reports in the literature describing that one major action of the spike protein is to downregulate ACE2. Thus, so far, we have focused our discussion on the actions of Ang II that would be expected to be increased as a consequence of ACE2 downregulation. However, the major thesis of this hypothesis paper is that the cleavage of spike protein molecules from the viral particle would amplify the actions of the spike protein by possibly 50-100 times, as described above. In addition to downregulating ACE2, the spike protein could elicit other biological actions, which would also be amplified through furin-dependent production of the S1 protein (Figure 5).
We describe above the possibility that the levels of Ang II are increased because of spike protein-mediated ACE2 downregulation based on several published reports supporting the role of Ang II in the pathogenesis of neurological diseases. However, there are reports refuting the role of Ang II in neurological disorders. The RADAR Trial, a double-blind, randomized, placebo-controlled, phase 2 trial, concluded that 12 months' treatment with losartan (an AT1R Ang II receptor blocker) was not effective in reducing the rate of brain atrophy in Alzheimer's patients [65]. In mouse models of Alzheimer's disease, the reduction in ACE2 levels and activity and increased Ang II did not exacerbate Alzheimer's disease pathology [66]. Furthermore, while it is widely thought that AT1R mediates the pathogenesis of Alzheimer's disease [67], Ang II can activate AT1R and AT2R, which could exert opposite biological effects.
Contrary to the widely reported concept that the spike protein downregulates ACE2, Aboudounya and Heads [68] proposed a signaling pathway in which the SARS-CoV-2 spike protein increases ACE2 expression, based on findings that the spike protein activates Toll-like receptor 4 [69,70]. We have also described the spike protein-activated cell signaling processes that may elicit biological responses independent of Ang II [71]. Further, the SARS-CoV-2 spike protein can cause blood-brain barrier and neuronal dysfunction either directly or via the activation of mast cells and microglia in the brain [72]. It should also be noted that the S1 subunit of the SARS-CoV-2 spike protein possesses, in addition to the RBD, the N-terminal extracellular domain (ECD) and the CendR domain. AXL receptor tyrosine kinase has been reported to bind to ECD [73] and neuropilin 1 binds to CendR [74,75]. Thus, the furin-dependent production of a number of free S1 protein molecules could amplify the actions of these factors as well.
Conclusions
This hypothesis paper presents a novel concept that the cleavage of the spike protein S1 subunit from the intact SARS-CoV-2 virus by proteases such as furin can result in a higher number of S1 protein molecules that can target ACE2 and/or other receptors in various tissues. The efficiency of such free S1 proteins in targeting ACE2 and other spike protein-binding proteins is expected to be 50-100 times higher than that of the intact spike proteins bound to the SARS-CoV-2 viral particles. Since spike protein/ACE2 interactions are known to downregulate the ACE2 protein through multiple mechanisms [22,65] and we observed that the brains of COVID-19 patients show reduced ACE2 expression (Figure 3), we hypothesize that this amplification mechanism would be an efficient way for the virus to promote and worsen pathogenesis in various organs, including the brain. This mechanism may affect patients who contracted COVID-19, including those with post-acute sequelae of SARS-CoV-2 infection, as well as patients suffering from adverse neurological effects potentially caused by spike protein-based COVID-19 vaccines.
Figure 1. Schemes depicting the main hypothesis of this paper. (a) The binding of the spike protein to ACE2 downregulates (↓) the ACE2 protein, thus increasing (↑) Ang II. (b) One spike protein molecule of one SARS-CoV-2 viral particle interacts with one ACE2 molecule, resulting in downregulating one ACE2 molecule per viral particle. If many spike protein molecules of a given viral particle are cut by the furin protease and come off the virus, multiple free spike protein molecules are produced, which can result in the binding of multiple ACE2 molecules per viral particle. This could result in many ACE2 molecules becoming downregulated. (c) The amplification of ACE2 downregulation dramatically increases the level of Ang II.
Figure 2. The structure of the SARS-CoV-2 spike protein. The spike protein consists of S1 and S2 subunits. The S1 subunit contains the RBD, which binds to ACE2, and the S2 subunit contains the transmembrane (TM) domain that anchors to the viral membrane. The S1 subunit also contains the extracellular domain (ECD) that binds to AXL and the CendR domain that binds to neuropilin 1. Between the S1 and S2 subunits is an RRAR amino acid sequence that is a consensus sequence where the furin protease cleaves.
Figure 3. Immunohistochemical evaluations of ACE2 protein expression in human brains.
Figure 4. Immunohistochemistry of the brain of a patient who died of COVID-19 showing early signs of Alzheimer's disease.
Quantitative Characterization of Steady-State Ankle Impedance With Muscle Activation
Characterization of multi-variable ankle mechanical impedance is crucial to understanding how the ankle supports lower-extremity function during interaction with the environment. This paper reports quantification of steady-state ankle impedance when muscles were active. Vector field approximation of repetitive measurements of the torque-angle relation in two degrees of freedom (inversion/eversion and dorsiflexion/plantarflexion) enabled assessment of spring-like and non-spring-like components. Experimental results of eight human subjects showed direction-dependent ankle impedance with greater magnitude than when muscles were relaxed. In addition, vector field analysis demonstrated a non-spring-like behavior when muscles were active, although this phenomenon was subtle in the unimpaired young subjects we studied.
INTRODUCTION
Ankle mechanical impedance, which we define as a functional that maps a time history of angular motion of the ankle joint onto a corresponding ankle torque time-history, plays a significant role in natural interaction of the lower extremity with the environment, including postural stabilization during standing, and propulsion, energy absorption, and lower-limb joint coordination during locomotion.
Most prior studies of human ankle mechanical impedance focused on the sagittal plane (the dorsiflexion/plantarflexion (DP) direction) [1-3]. To the best of our knowledge, only two studies have measured ankle impedance in the frontal plane (the inversion-eversion (IE) direction) [4-5]. Considering that single degree of freedom (DOF) movements at the ankle are rare under natural physiological conditions, characterization of ankle impedance in multiple DOFs promises deeper understanding of its roles in lower extremity function. In a previous work by the authors [6], the multivariable steady-state torque-angle relation at the ankle was measured with muscles maximally relaxed, showing that (as expected) the ankle behaved like a (nonlinear) spring under those conditions. Although that work provided a baseline for understanding ankle impedance, it is not directly applicable to normal lower-extremity actions since they involve muscle activation, either singly, synergistically, or antagonistically (co-contraction). Here we extend the characterization of steady-state multi-variable ankle mechanical impedance to muscle-active conditions.
Human Subjects
Eight human subjects with no history of neuromuscular disorders (4 males and 4 females; age range mid 20's ~ mid 30's) were recruited for this study. Participants gave informed consent as approved by MIT's Committee on the Use of Humans as Experimental Subjects (COUHES).
Experimental Setup
Steady-state torque-angle data in IE−DP space were captured using a wearable ankle robot, Anklebot [7], the same device used in previous work [6]. It was mounted at the knee and operated by a simple impedance controller which enabled stable data capture even in high muscle activation conditions. Surface electromyographic (EMG) signals were recorded from four major muscles related to ankle movement: tibialis anterior (TA), soleus (SOL), gastrocnemius (GAS), and peroneus longus (PL). EMG amplitude was calculated from raw data (sampled at 200Hz) using a root-mean-square filter with a window of 0.2 seconds.
Experimental Protocol: Active Study
Subjects were seated and asked to activate a specific muscle and maintain it at a constant level as best they could. The current EMG amplitude and target bands representing the desired EMG amplitude were displayed in real time using an oscilloscope. To minimize effects due to inconstant muscle activation during the protocol, 5 repetitive measurements were performed for each active study. There were 3 types of active study: 2 single muscle activations (TA active and SOL active) and 1 co-contraction. Eight directions in the IE-DP plane (eversion, eversion + dorsiflexion, dorsiflexion, dorsiflexion + inversion, inversion, inversion + plantarflexion, plantarflexion, plantarflexion + eversion) were selected for the measurement, and terminated ramp perturbations were applied along these directions. Displacement amplitude was selected as 20° to cover the normal range of motion of the ankle. To maintain quasi-static conditions and avoid evoking stretch reflexes, movement velocity was regulated not to exceed 5°/s. During perturbations the angular displacement and applied torque in both DOFs as well as EMG data were recorded at 200 Hz.
Vector Field Approximation and Decomposition
Steady-state ankle impedance was represented as a vector field (V), which was approximated based on the methods introduced in our previous work [6]. In brief, the vector field approximation problem was decomposed into two scalar function (φ1, φ2) estimation problems (Fig. 1) and each scalar function was identified by adopting the method of Thin-plate Spline (TPS) smoothing with Generalized Cross Validation (GCV). The approximated vector field was further decomposed into a conservative (zero curl) and a rotational (zero divergence) field. The vector field approximation and decomposition methods are detailed in [6].
Fig. 1. Vector field approximation based on scalar methods (θ_IE and θ_DP are the angular displacements in the IE and DP directions, respectively, and τ_IE and τ_DP are the applied torques at θ_IE and θ_DP).
A single vector field was estimated by averaging 5 repeated measurements of each scalar component. Ankle impedance quantification and further analysis of the vector field were based on these averages.
What Can We Learn From the Vector Field?
First, spatial ankle impedance structure can be identified from the approximated vector field, since torque values (τ_IE, τ_DP) at any position (θ_IE, θ_DP) in the IE-DP space are easily calculated, which means the steady-state ankle impedance can be evaluated over the full range of motion, including 8 directions. The ankle impedance estimate is robust even with noisy data, since our methods are based on averaged and smoothed scalar functions.
Decomposed vector fields provide a precise quantification of the extent to which the ankle is spring-like. Specifically, if the set of active muscles behaves like a spring, the vector torque field must be an integrable function of displacement, the gradient of a potential function and, as a result, the curl of the torque field must be identically zero. Thus we can quantify the spring-like property of the ankle by comparing the size of the rotational field relative to the conservative field.
We can also investigate the role of intermuscular feedback from the rotational field. If there is no intermuscular feedback between muscles, that is, the action of one muscle group does not influence the action of others, non-zero curl cannot be introduced either with intrinsic muscle mechanics or intramuscular feedback. However, if intermuscular feedback exists and the contribution of feedback is not balanced, then non-zero curl components can be introduced (∂F_i/∂q_j ≠ ∂F_j/∂q_i). Thus, by investigating the non-zero curl components in the rotational field, we can assess the existence of unbalanced intermuscular feedback around the ankle.
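As a concrete illustration of this curl test, the short sketch below estimates the planar curl of a torque field sampled on a regular grid of ankle angles; the grid, the synthetic field, and the NumPy-based finite differences are illustrative assumptions and do not reproduce the TPS/GCV procedure of [6].

```python
import numpy as np

# Regular grid of ankle angles in the IE-DP plane (degrees; illustrative values).
theta_ie = np.linspace(-10.0, 10.0, 41)
theta_dp = np.linspace(-10.0, 10.0, 41)
T_IE, T_DP = np.meshgrid(theta_ie, theta_dp)

# Synthetic torque field: spring-like terms with asymmetric cross-coupling
# (the -0.3 vs -0.7 off-diagonal coefficients introduce a rotational component).
tau_ie = -3.0 * T_IE - 0.3 * T_DP
tau_dp = -0.7 * T_IE - 8.0 * T_DP

# Planar curl: d(tau_dp)/d(theta_ie) - d(tau_ie)/d(theta_dp); identically zero for a pure spring.
curl = (np.gradient(tau_dp, theta_ie, axis=1)
        - np.gradient(tau_ie, theta_dp, axis=0))

print(np.abs(curl).max())   # a non-zero curl indicates non-spring-like behavior
```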
RESULTS AND DISCUSSION
To check the validity of these active studies, the ratio of active EMG levels to passive levels was calculated. The averaged result for the 8 subjects shows that subjects could follow instructions well (Table 1). Analysis of variance (1-way ANOVA) was performed to check the variation of EMG level in the 5 repetitive measurements both within and between subjects. In general, subjects could maintain a constant muscle activation level across the 5 repetitive measurements. In the within-subject analysis, all subjects showed no statistical difference in TA activation levels (p-value > 0.05), and 7 and 5 subjects showed no statistical difference in the SOL active and co-contraction cases, respectively. However, there was a significant difference (p-value << 0.05) in the mean level of muscle activation between subjects in all 3 types of muscle activation. This is because each subject selected a preferred activation level around the given reference level, without exceeding the limit of the robot.
The spatial impedance structure at the ankle was constructed by calculating impedance magnitude (a 1-DOF linear approximation of ankle stiffness) in each of 24 directions from the vector field. The averaged results from 8 subjects in the 3 active studies are presented in Fig. 2. Both single muscle activation and co-contraction increased ankle impedance in all directions, by factors of 2 and 3 (respectively) over the maximally-relaxed results. We can also see that impedance increases less in the frontal plane than in the sagittal plane in all active cases. As a result, the pinched "peanut-shaped" structure is evident and even enhanced in all three muscle-active conditions. This intriguing result may account for the prevalence of ankle injury in the IE direction (rather than the DP direction), since the small impedance may be insufficient to stabilize the joint against external perturbations. Fig. 3 shows the results of vector field decomposition for a representative subject. Contrary to the maximally-relaxed result reported in [6], significant non-zero curl components were detected in the rotational field when muscles were active. This was true for all 8 subjects and for all 3 types of active study: muscle activation evoked significant non-spring-like behavior at the ankle. The non-zero rotational field (non-zero curl components) supports the existence of unbalanced intermuscular feedback at the ankle when muscles are active. However, we found no common patterns in the rotational field across subjects, suggesting that this observation may be due to imperfect tuning of spinal feedback circuits (perhaps due to the unfamiliarity of the task). How important is non-spring-like behavior? To assess this, the ratio between the determinants of the anti-symmetric (rotational component) and symmetric (conservative component) parts of the stiffness matrix was calculated. The result showed that rotational components are less than 10% of conservative components, meaning that any non-spring-like behavior of the ankle is subtle in the unimpaired young subjects we studied. From an engineering point of view, we can also interpret this result to mean that (for healthy young subjects) ankle impedance with muscles active can be modeled as a nonlinear spring with less than 10% error.
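To make this last calculation explicit, the following sketch decomposes an illustrative 2 × 2 ankle stiffness matrix into its symmetric (conservative) and anti-symmetric (rotational) parts and forms the ratio of their determinants; the numerical entries are assumed values, not measured data.

```python
import numpy as np

# Illustrative 2x2 ankle stiffness matrix in the IE-DP frame (assumed values, Nm/rad).
K = np.array([[30.0,  6.0],
              [10.0, 80.0]])

K_sym  = 0.5 * (K + K.T)    # symmetric part: conservative (spring-like) component
K_anti = 0.5 * (K - K.T)    # anti-symmetric part: rotational (non-spring-like) component

ratio = abs(np.linalg.det(K_anti) / np.linalg.det(K_sym))
print(ratio)                # a value below 0.1 corresponds to the "<10%" result above
```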
FUTURE WORK
We continue to investigate the relation between muscle activation level and ankle impedance. As future directions, we plan to study the ankle impedance of elderly and neurologically impaired subjects. In addition, we are developing analysis methods and experimental protocols to explore transient dynamic ankle impedance which can be applied to general lower extremity functions.
Semi-Supervised Anomaly Detection of Dissolved Oxygen Sensor in Wastewater Treatment Plants
As the world progresses toward a digitally connected and sustainable future, the integration of semi-supervised anomaly detection in wastewater treatment processes (WWTPs) promises to become an essential tool in preserving water resources and assuring the continuous effectiveness of plants. When these complex and dynamic systems are coupled with limited historical anomaly data or complex anomalies, it is crucial to have powerful tools capable of detecting subtle deviations from normal behavior to enable the early detection of equipment malfunctions. To address this challenge, in this study, we analyzed five semi-supervised machine learning techniques (SSLs), namely Isolation Forest (IF), Local Outlier Factor (LOF), One-Class Support Vector Machine (OCSVM), Multilayer Perceptron Autoencoder (MLP-AE), and Convolutional Autoencoder (Conv-AE), for detecting different anomalies (complete, concurrent, and complex) of the Dissolved Oxygen (DO) sensor and aeration valve in the WWTP. The best results are obtained in the case of the Conv-AE algorithm, with an accuracy of 98.36% for complete faults, 97.81% for concurrent faults, and 98.64% for complex faults (a combination of incipient and concurrent faults). Additionally, we developed an anomaly detection system for the most effective semi-supervised technique, which can provide the detection delay time and generate a fault alarm for each considered anomaly.
Introduction
Anomaly detection is a popular machine learning technique used to identify unusual or rare instances in data.It can be used in many different fields where detecting abnormalities or outliers is essential for maintaining the system's health, sustaining security, or optimizing processes.There are various examples of problems involving fault detection, such as frauds [1], network intrusions [2], manufacturing defects [3], anomaly detection in time series data [4], cybersecurity [5,6], microfluidics [7][8][9][10], and most importantly, anomaly detection in wastewater treatment plants [11][12][13].
Wastewater treatment plants play a vital role in modern society due to their significant importance in addressing environmental and public health challenges.The purpose of these facilities is to eliminate pollutants and contaminants from wastewater before it is discharged into the environment.As a result, strict regulations and permissions outline the acceptable discharge limits of pollutants into water bodies.
Unfortunately, the complex and dynamic processes from WWTPs [14] have numerous variables which can create an environment where different types of failures (e.g., complete, concurrent, complex) can occur.Such failures (e.g., mechanical, biological) can disrupt the treatment process, potentially leading to environmental harm, regulatory non-compliance, Sensors 2023, 23, 8022 2 of 18 and compromised public health.Hence, the demand for advanced anomaly detection systems in WWTPs has become very important in recent years.Through these, the operators can maintain the effective operation of WWTPs by taking prompt corrective actions, preventing disruptions to the treatment process, minimizing potential environmental impacts, and ensuring that the treated effluent meets the regulatory standards.
The application of artificial intelligence (AI) techniques for detecting faults has significantly advanced.This technological approach has provided a substantial boost to fault detection capabilities, particularly in complex systems like WWTPs.From the AI-driven fault detection techniques, we have analyzed in this study five semi-supervised learning algorithms: Isolation Forest (IF), Local Outlier Factor (LOF), One-Class Support Vector Machine (OCSVM), Multilayer Perceptron Autoencoder (MLP-AE) and Convolutional Autoencoder (Conv-AE).The most complex are autoencoders (AEs), which operate as neural networks.They are used to learn and represent the normal behavior of the system based on historical data.Subsequently, they identify deviations from this learned norm, effectively detecting anomalies [15].
The AE architecture consists of two main parts: the encoder and the decoder.The encoder compresses the input data into a lower-dimensional representation, often referred to as a bottleneck or latent space.The decoder then attempts to reconstruct the original input from this compressed representation [16].Once the AE is trained, it can be used to process new, unseen data.The reconstruction loss between the input and its reconstruction is calculated for each data point.A threshold is set to differentiate between normal and anomalous instances.Anomalies are identified when data points with reconstruction losses exceed the threshold.
AE's ability to learn complex patterns and detect subtle deviations makes them wellsuited for anomaly detection in WWTPs.They do not require the explicit labeling of all data, allowing them to adapt to changing conditions and new types of anomalies.However, successful implementation requires the careful consideration of model architecture, hyperparameters, training data quality, and threshold selection to achieve accurate and effective anomaly detection within WWTP operations.But even though there are remarkable results from semi-supervised anomaly detection in other sectors, the application of autoencoders in the context of WWTPs remains relatively unexplored.The scarcity of studies underscores the need for further research and exploration in this domain.
Therefore, in our study, we analyzed five semi-supervised methods for anomaly detection of the Dissolved Oxygen (DO) sensor and aeration valve from WWTPs.For the best semi-supervised technique, a fault detection system was proposed.The main contribution lies in implementing the semi-supervised fault detection system that leverages the maximum Mean Absolute Error (MAE) loss value from the training data to set a dynamic threshold for anomaly detection.Unlike traditional static thresholds, which may not adapt well to changing data patterns, this method employs a data-driven approach to establish a threshold that reflects the current data distribution.The uniqueness of our study lies in the capability of our anomaly detection system to identify not only complete faults, as commonly presented in the literature, but also concurrent and complex faults in WWTPs.Moreover, the proposed fault detection system introduces another key advantage by incorporating detection delay times and generating specific fault alarms for each type of anomaly.This means that our system not only detects issues at various stages of development, but also provides timely alerts, allowing for proactive intervention.This proactive aspect ensures that potential faults are addressed in a timely manner, minimizing their impact, and contributing to a more robust and responsive fault detection system.
The paper is organized as follows: Section 2 presents the related works.Section 3 refers to the materials and methods used, such as the proposed framework, major contributions of our work, descriptions of the considered anomaly scenarios, data preprocessing from datasets, statistical analysis of training data, five semi-supervised algorithms, hyperparameter tuning, threshold procedure selection, and evaluation criteria of the proposed algorithms.Section 4 discusses and reveals the results of our research.Section 5 is dedicated to the conclusions.
Related Work
There are many works that deal with the anomaly detection of sensors in WWTPs.From the fault detection techniques, the most widely explored are: (1) statistical techniques, such as Principal Component Analysis (PCA) and Partial Least Squares (PLS), and (2) machine learning algorithms (MLs), such as Gaussian Process Regression (GPR) and Artificial Neural Networks (ANNs) (e.g., Long Short-Term Memory (LSTM) networks, autoencoders (AEs), and Radial Basis Function (RBF) neural network).
From the first category, PCA and PLS were widely applied for anomaly detection in WWTPs.For example, the authors of [17] developed a real-time fault detection and isolation (FDI) system by simulating the BSM1.Also, in [10,18], these statistical methods were used to detect and monitor failures in WWTPs.
From the second category, GPR is a non-parametric Bayesian approach to regression, which has been used many times throughout papers to identify faults and anomalies in WWTPs.For instance, in paper [19], GPR was used to monitor the Biochemical Oxygen Demand (BOD) value of the WWTP effluent.The BOD value was used to predict the Sludge Volume Index (SVI) and the occurrence of filamentous sludge bulking.This method identified the faults studied in the paper with an accuracy of 95.5%.Another notable work is paper [20], that proposes two GPR models: one is GPR with maximum likelihood estimation (GPR-MLE) and the other is GPR using Monte Carlo sequential estimation (GPRSMC).The methods both evaluate the estimation of missing values in the WWTP flow rate and find the most efficient method for detecting drift faults in the values of sensors that measure ammonia levels.The experiments were conducted on a phenomenological influent simulator by using real data and the result obtained showed a 74.5% drift detection.
With regard to ANNs, paper [21] presents a Feedforward Neural Network (FNN) algorithm for isolating multiple anomalies (six types of failures such as recirculation pump, supply pump, excess of the sludge pump, biomass concentration sensor, DO concentration sensor, and partial (25%) supply pump) in WWTPs.The method proved to be efficient as it recognized the anomalies with a 97.2% accuracy.Another notable paper is [22], where LSTM networks are used to identify collective failures in the sensors of a WWTP.The data are collected and labeled as normal and faulty.The results obtained with LSTM networks are compared with other methods, such as PCA and SVM models, and show that the anomalies can be identified with a 92% accuracy.The research of [23] addresses the need for efficient wastewater treatment plant (WWTP) operation by monitoring influent conditions (ICs) to detect potential anomalies.It introduces kernel machine learning models, specifically the kernel principal components analysis-based One-Class Support Vector Machine (KPCA-OCSVM), to classify ICs and identify anomalies in a seven-year multivariate IC time series.These kernel-based algorithms outperform previous linear PCA-based models, offering improved anomaly detection capabilities while maintaining computational efficiency and making them adaptable for different WWTPs.Additionally, others [24] emphasize the significance of monitoring the operation cost index (OCI) in WWTPs for financial planning and operational optimization.They introduce four predictive models, including ARMA variants with recursive least squares (RLS) and recursive extended least squares (RELS), as well as nonlinear auto-regressive neural networks (NARNN) and nonlinear auto-regressive neural networks with external input (NARXNN).Among these models, the nonlinear NARXNN demonstrates superior predictive capabilities, particularly in handling the inherent nonlinearity of wastewater treatment processes.The studies [25,26] apply an RBF neural network in order to detect faults of the DO sensor in Benchmark Simulation Model No. 1 (BSM1).Other studies [27][28][29] proposed autoencoders for the fault detection of failures like abrupt changes or drift sensors by using BSM1.In [30], a variational autoencoder (VAE) is applied to address fault detection, such as sludge expansion fault and small-magnitude variable step, by taking into consideration the dynamic changes within the treatment process.Furthermore, Ref. [30] proposed two autoencoders, Convolutional (Conv) and Long Short-Term Memory (LSTM) autoencoders, to identify failures (drift, bias, precision degradation, spike, and stuck) of the DO sensor in WWTPs.There were used in three different scenarios by varying the occurrence order, intensity, and duration of faults.The metrics demonstrated that Conv-AE has a better performance than LSTM-AE.
Overall, it becomes obvious that machine and deep learning methods, especially autoencoders and convolutional neural networks (CNNs), have gained significant importance in various engineering applications, particularly in fault and damage detection.For example, a recent study [31] proposed to combine deep stacked autoencoders (SAEs) with multi-sensor fusion to enhance the accuracy of damage diagnosis in concrete structures.Another [32] presents an efficient one-dimensional convolutional gated recurrent unit neural network (1D-CGRU) for real-time structural damage detection, combining 1D-CNN for spatial feature extraction and GRU for temporal mapping.All these demonstrate that there is enough place for autoencoders and CNNs to continue to advance the field of engineering by providing more accurate, efficient, and versatile tools for fault and damage detection.
Materials and Methods
This paper is focused on detecting the faults of the DO sensor (bias, drift, spike, and precision degradation (PD)) and aeration valve from WWTPs.We selected these faults because DO sensors and aeration valves are key components in these complex processes.For instance, DO sensors monitor the amount of oxygen dissolved in the wastewater, an important parameter for biological treatment processes.On the other hand, aeration valves control the supply of oxygen to support microbial activity.Thus, early detection of faults in these components becomes vital for maintaining process efficiency, resource optimization, regulatory compliance, and equipment protection.
In this research, we conducted several experiments in order to collect data with different fault scenarios (complete, concurrent, and complex anomalies) by using the Benchmark Simulation Model No. 2 (BSM2), developed by the IWA Task Group [33].Each scenario corresponds to a test dataset shown in Table 1.In total, our study was conducted using three test datasets: (1) complete faults; (2) concurrent faults; and (3) complex faults (a combination of incipient and concurrent faults).Two processing steps were then applied to the data from the datasets: (1) data cleaning and (2) normalization by feature extraction (e.g., mean and standard deviation for generating training and testing sequences).Then, each dataset was used to feed five semi-supervised learning techniques for anomaly detection such as Isolation Forest (IF), Local Outlier Factor (LOF), One-Class Support Vector Machine (OCSVM), Multilayer Perceptron Autoencoder (MLP-AE), and Convolutional Autoencoder (Conv-AE).As shown in Section 4, the performance of the semi-supervised methods was computed on each dataset and the best result was obtained for Conv-AE.Also, a fault detection system was designed for the best semi-supervised method.This was performed by determining the threshold using a procedure that finds the maximum MAE loss value on the training data.If the reconstruction loss obtained from the testing data is greater than the threshold from the previous step, then an anomaly is detected.The detection time and the alarm signal were obtained for each anomaly from the datasets involved in this study.
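A minimal sketch of this thresholding step is given below; the window shapes and the synthetic reconstructions standing in for the autoencoder outputs are placeholders for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder windows and reconstructions standing in for autoencoder inputs/outputs.
x_train = rng.normal(size=(100, 24, 1))
x_test = rng.normal(size=(20, 24, 1))
x_train_hat = x_train + 0.05 * rng.normal(size=x_train.shape)
x_test_hat = x_test + 0.30 * rng.normal(size=x_test.shape)

def mae_per_window(x, x_hat):
    """Mean Absolute Error of each reconstructed window."""
    return np.abs(x - x_hat).reshape(len(x), -1).mean(axis=1)

# Threshold = maximum MAE reconstruction loss on the normal-only training data.
threshold = mae_per_window(x_train, x_train_hat).max()

# A test window is flagged as anomalous when its reconstruction loss exceeds the threshold.
anomaly_flags = mae_per_window(x_test, x_test_hat) > threshold
print(int(anomaly_flags.sum()), "of", len(anomaly_flags), "test windows flagged")
```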
Proposed Framework
The main objective of our research is to build an advanced semi-supervised anomaly detection system for the DO sensor and aeration valve, which are key components within WWTPs. In Figure 1, the block diagram of the proposed framework for anomaly detection in WWTPs is presented. This concise conceptual block diagram outlines the key steps of our research, as follows:
1. Collect 'normal' and 'anomalous' data representing different fault scenarios (complete, concurrent, and complex) for the DO sensor and aeration valve. In total, three test datasets are prepared.
2. Preprocess the data from the datasets (data cleaning and normalization) and generate training and testing sequences.
3. For each scenario, test five semi-supervised machine learning techniques for anomaly detection: Isolation Forest, Local Outlier Factor, One-Class Support Vector Machine, MLP Autoencoder, and Convolutional Autoencoder.
4. For each algorithm, compute the evaluation metrics: accuracy, precision, recall, and F1-score.
5. Develop a fault detection system with the best semi-supervised method, where:
• The threshold is determined by a procedure that finds the maximum MAE loss value on the training data. If the reconstruction loss obtained on the testing data is greater than this threshold, then an anomaly is detected. In other words, if the data have an MAE loss greater than the threshold, they are likely to deviate significantly from the norm and become potential anomalies.
• Detection time delays and anomaly alarms are generated for each type of anomaly.
Data Description
Raw data of both the normal and anomalous states of the DO sensor and aeration valve are collected in three datasets (presented in Figure 2). Each dataset represents a different anomaly scenario (complete, concurrent, and complex) of the DO sensor and aeration valve. In our experiments (Figure 2), we analyzed different types of anomalies in the DO sensor and aeration valve. For each, an anomaly window was selected:
• One anomaly window of the aeration valve, consisting of 20 days. This is directly linked to the Oxygen Transfer Coefficient (K_La4) of the bioreactor aeration system. When anomalies affect the aeration valve, it indirectly produces erroneous DO sensor outputs.
• One anomaly window for DO sensor drift of 30 days. This can be observed as a gradual deviation, leading to a shift in the sensor's output values.
• One anomaly window for DO sensor bias of 25 days. In our study, this is characterized by a constant difference of +1.5 mg/L between the true value and the faulty DO sensor output.
• Four anomaly windows for DO sensor spikes, each consisting of 2 days. Spike anomalies are large-amplitude peaks (e.g., 2, 2.5, 1.5, and 2.7) at constant time intervals.
• One window for DO sensor precision degradation of 30 days. Precision degradation in a DO sensor emerges as a loss in the precision of the sensors or control systems used for supervising and managing the treatment procedure.
The anomaly windows were combined to result in test dataset 1, test dataset 2, and test dataset 3. Table 1 presents the datasets that were considered and the starting day and the fault duration for each anomaly.
The first scenario (test dataset 1) contains complete anomalies of the aeration valve and DO sensor (drift, bias, spike, and precision degradation (PD)).The second scenario considers concurrent anomalies, a simultaneous occurrence of the following cases: aeration valve fault and sensor drift fault; aeration valve fault and sensor bias fault; aeration valve and sensor spike fault; and aeration valve fault and sensor PD.And the third scenario refers to complex anomalies (a combination of concurrent and incipient faults) between the DO sensor and the aeration valve.These types of faults could lead to many operational challenges and negative impacts on the treatment process.More precisely, this dataset is formed by the following combinations: 80% aeration valve fault + 20% sensor drift fault; 80% aeration valve fault + 20% sensor bias fault; 80% aeration valve fault + 20% sensor spike fault; and 80% aeration valve fault + 20% sensor PD fault.
Data Preprocessing and Splitting
To enhance the accuracy of the semi-supervised learning algorithms, the data were preprocessed with the following procedures: data cleaning and normalization. Data cleaning is used to identify and rectify inconsistencies, inaccuracies, and missing values in the datasets to ensure the data's quality and reliability before analysis. The normalization is performed with the Z-score normalization [34], as follows:

z = (x - µ) / σ,

where µ = (1/n) Σ x_i is the mean of a dataset with n values (x_1, x_2, ..., x_n) and σ = sqrt((1/n) Σ (x_i - µ)^2) is the standard deviation of the same dataset.
After preprocessing, the datasets are split into training and testing with a 0.8:0.2 ratio for the classical semi-supervised algorithms. In the case of autoencoders, separate datasets for training and testing are prepared. All semi-supervised methods use the training dataset with only normal instances, and the testing datasets with both normal and anomalous instances. Some of the data are labeled (with normal and anomaly classes). The performance evaluation is conducted using metrics like the confusion matrix, accuracy, precision, recall, and F1-score. The selected algorithms are configured to operate in a semi-supervised manner [35], requiring a careful balance between the labeled and unlabeled data.
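A minimal sketch of these preprocessing steps is shown below, assuming the cleaned signal is held in a pandas DataFrame; the column name and synthetic values are placeholders rather than the actual BSM2 data.

```python
import numpy as np
import pandas as pd

# Placeholder cleaned DO-sensor signal; in practice this comes from the BSM2 datasets.
df = pd.DataFrame({"do_sensor": np.sin(np.linspace(0.0, 20.0, 1000)) + 2.0})

# Z-score normalization: z = (x - mu) / sigma.
mu, sigma = df["do_sensor"].mean(), df["do_sensor"].std()
df["do_z"] = (df["do_sensor"] - mu) / sigma

# 0.8 : 0.2 train/test split for the classical semi-supervised algorithms.
split = int(0.8 * len(df))
train, test = df.iloc[:split], df.iloc[split:]
```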
Statistical Analysis of Training Data
In this study, we conducted a comprehensive statistical analysis of the training dataset using the Pandas Python library; the results are shown in Table 2. This library played a central role because it allowed us to efficiently explore, clean, and prepare the dataset for subsequent tasks. Using the describe() function, we extracted the statistical data presented in Table 2. This information is valuable for understanding the distribution and characteristics of the data, especially in the initial stages of data exploration and analysis.
Each statistic provides specific information, as follows: "count" indicates the number of non-null values in the dataset; "mean" offers a measure of central tendency; "standard deviation" (std) indicates the dispersion of data around the mean; "minimum" (min) represents the lowest data point; and "maximum" (max) the highest data point. These statistics collectively helped us to understand the distribution and characteristics of the data.
Classical Semi-Supervised Methods
Isolation Forest (IF) is an anomaly detection method that efficiently identifies outliers in datasets, and was first introduced in [36]. The algorithm uses a recursive binary tree structure to isolate anomalies efficiently. The anomaly score for each data point is based on how quickly it can be isolated in these trees. The formula to calculate the anomaly score for a data point X in the Isolation Forest [34] is as follows:

s(X, n) = 2^(-E(h(X))/c(n)),

where s(X, n) is the anomaly score for data point X in the Isolation Forest with n trees; E(h(X)) is the average path length for data point X across all isolation trees; and c(n) is a constant term used to normalize the score, calculated as c(n) = 2H(n - 1) - 2(n - 1)/n (with H(i) the harmonic number, approximately ln(i) + 0.5772), where n is the number of data points used to build the tree.
The anomaly score s(X, n) measures how quickly data point X was isolated in the tree compared to the average isolation path length of all points in the tree. Under this formula, short average path lengths yield scores close to 1, indicating that a data point was isolated quickly and is therefore likely an anomaly, while longer path lengths yield scores near or below 0.5, indicating that the point required more steps to isolate and is more likely a normal data point. This approach is particularly useful for high-dimensional data and is capable of handling both global and local anomalies, offering a fast and effective solution for anomaly detection tasks. Therefore, this approach is computationally efficient because it only needs to build a small number of trees to produce accurate anomaly scores. Moreover, it does not require considerable hyperparameter tuning, thus using and deploying it is rather simple. However, the algorithm's performance can change based on the features of the dataset and the parameter selection [37].
Another semi-supervised method that was applied in this research is Local Outlier Factor (LOF). This approach is used for outlier detection in data analysis and machine learning and was first introduced in paper [14]. LOF is a measure of the degree to which a data point is an outlier within its local neighborhood compared to the density of its neighboring data points. The formula for calculating the LOF of a data point X is as follows:

LOF(X) = Average Density in Neighborhood(X) / Density(X),

where LOF(X) is the Local Outlier Factor for data point X; Density(X) represents the density of data point X, which is often calculated as the inverse of the average distance between X and its k-nearest neighbors; and Average Density in Neighborhood(X) is the average density of the k-nearest neighbors of X. LOF compares the density of a data point to the average density of its neighbors. An LOF significantly greater than 1 indicates that the point is an outlier, as its density is much lower than that of its neighbors, while an LOF close to 1 suggests that the point is similar in density to its neighbors and is not an outlier. LOF is effective at finding anomalies that might not stand out in the overall dataset but appear strange in their local surroundings. This makes LOF particularly useful when anomalies are scattered throughout the data rather than being all in one place. The advantage is that outliers can be effectively identified in various types of datasets, even in the presence of noise or complex data structures [38]. It is a versatile and robust technique for semi-supervised outlier detection; however, it is also computationally expensive, and its results may lack interpretability, which can make it challenging to understand the reasons behind the outlier detections.
The third algorithm chosen for this research is One-Class Support Vector Machine (OCSVM), a semi-supervised machine learning method used for outlier detection and novelty detection tasks. This algorithm was first proposed in [39], where the authors extended the idea of SVMs from the standard binary classification problem to the One-Class problem, focusing on identifying anomalies or outliers in a dataset when only data from the normal class are available for training. Learning a decision boundary that includes the majority of the typical data points in a high-dimensional feature space is the main objective of OCSVM. The optimization problem solved by OCSVM [37] can be represented as follows:

minimize over w, ξ, ρ:  (1/2)||w||^2 + (1/(νn)) Σ_i ξ_i - ρ,  subject to  w^T φ(x_i) ≥ ρ - ξ_i,  ξ_i ≥ 0,

where w is the weight vector; φ(x_i) represents the feature mapping of the data point x_i; ρ is a parameter representing the offset of the decision boundary; ξ_i are slack variables that allow some points to violate the boundary; and ν controls the fraction of training points allowed to fall outside it.
In simpler terms, OCSVM seeks to minimize the complexity of the decision boundary (represented by 1 2 ||w|| 2 ) while ensuring that all normal data points (x i ) are on the correct side of the boundary, where w T •φ(x i ) is greater than or equal to ρ.Data points that fall on the correct side of the boundary are considered normal, while those on the opposite side may be potential outliers.It operates under the assumption that typical data points are centered in a particular area, whereas outliers are dispersed over the rest of the data.OCSVM can accurately capture the distribution of the normal class by locating a hyperplane that divides the normal data points from the origin or center of the feature space [40].The method's ability to handle nonlinear data using kernel functions makes it a versatile and effective tool for various outlier detection applications; however, OCSVM assumes that the majority of the data points are from the normal class and that outliers are relatively scarce.In cases where the data are highly imbalanced, and the number of outliers is comparable to or even greater than the number of normal data points, OCSVM may struggle to identify the outliers accurately.
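A minimal OCSVM sketch in the same style as the previous examples is given below; the RBF kernel and the nu/gamma values are illustrative, not the tuned settings reported in Table 3.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)
x_train = rng.normal(size=(1000, 4))                    # normal training data only
x_test = np.vstack([rng.normal(size=(95, 4)),
                    rng.normal(6.0, 1.0, size=(5, 4))])

# nu roughly bounds the fraction of training points treated as boundary violations.
ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
ocsvm.fit(x_train)

labels = ocsvm.predict(x_test)                # +1 = normal, -1 = outlier
distances = ocsvm.decision_function(x_test)   # signed distance to the boundary
print((labels == -1).sum(), "points flagged as outliers")
```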
Semi-Supervised Deep Learning Methods
Apart from the classical semi-supervised methods, this paper also analyzes two deep learning approaches. One of them is the Multilayer Perceptron Autoencoder (MLP-AE), a powerful neural network architecture used for semi-supervised learning tasks, such as anomaly detection, that incorporates multiple hidden layers in both the encoder and decoder. The training process of the MLP-AE involves feeding the input data to the network, passing it through the encoder and decoder, and comparing the reconstructed output with the original input. Thus, the architecture of an MLP-AE consists of an input layer containing the input data; multiple hidden layers comprising the encoder, bottleneck, and decoder (where the dimensionality reduction gradually takes place); and an output layer, which is expected to be as close to the input as possible [41].
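A minimal Keras sketch of such an MLP autoencoder is shown below; the number of input features and the layer sizes are illustrative assumptions and do not reproduce the architecture shown in Figure 3.

```python
import numpy as np
from tensorflow.keras import layers, models

n_features = 4  # hypothetical number of sensor channels

inputs = layers.Input(shape=(n_features,))
x = layers.Dense(16, activation="relu")(inputs)       # encoder
x = layers.Dense(8, activation="relu")(x)
bottleneck = layers.Dense(2, activation="relu")(x)    # compressed representation
x = layers.Dense(8, activation="relu")(bottleneck)    # decoder
x = layers.Dense(16, activation="relu")(x)
outputs = layers.Dense(n_features, activation="linear")(x)

mlp_ae = models.Model(inputs, outputs)
mlp_ae.compile(optimizer="adam", loss="mae")

# Train only on normal data; anomalies are flagged later via reconstruction error.
x_train = np.random.rand(1000, n_features).astype("float32")  # placeholder data
mlp_ae.fit(x_train, x_train, epochs=10, batch_size=32, verbose=0)
```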
The other deep learning method used for semi-supervised anomaly detection in this work is Convolution Autoencoder (Conv-AE), an approach where the architecture aims to learn a compressed representation of the input data by reducing its dimensionality and then reconstructing the original data from this compressed representation.The Conv-AE consists of two main parts: (1) the encoder, which takes the input data and compresses them, along with the final encoder layer, the bottleneck layer, which represents the compressed representation of the input data; and (2) the decoder, which takes the compressed representation (output of the bottleneck layer) and gradually upsamples it through a series of upsampling and convolutional layers, effectively reconstructing the original input data.The architecture of the decoder is frequently the exact opposite of that of the encoder, enabling the network to learn a useful representation of the data that can be used to reconstruct the input.Reconstruction loss, a metric that expresses the disparity between input and output data, is minimized during training.Depending on the type of data, binary cross-entropy (BCE), mean squared error (MSE), and Mean Absolute Error (MAE) are three common loss functions used for this purpose [42].
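A corresponding Keras sketch of a one-dimensional Conv-AE is given below, assuming the sensor stream has been split into fixed-length windows; the window length and filter counts are illustrative assumptions rather than the configuration of Table 4.

```python
import numpy as np
from tensorflow.keras import layers, models

window, channels = 64, 1  # hypothetical window of 64 time steps from one sensor

inputs = layers.Input(shape=(window, channels))
x = layers.Conv1D(16, 3, padding="same", activation="relu")(inputs)      # encoder
x = layers.MaxPooling1D(2, padding="same")(x)
x = layers.Conv1D(8, 3, padding="same", activation="relu")(x)
bottleneck = layers.MaxPooling1D(2, padding="same")(x)                   # compressed code
x = layers.Conv1D(8, 3, padding="same", activation="relu")(bottleneck)   # decoder
x = layers.UpSampling1D(2)(x)
x = layers.Conv1D(16, 3, padding="same", activation="relu")(x)
x = layers.UpSampling1D(2)(x)
outputs = layers.Conv1D(channels, 3, padding="same", activation="linear")(x)

conv_ae = models.Model(inputs, outputs)
conv_ae.compile(optimizer="adam", loss="mae")   # MAE reconstruction loss

x_train = np.random.rand(500, window, channels).astype("float32")  # placeholder windows
conv_ae.fit(x_train, x_train, epochs=10, batch_size=32, verbose=0)
```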
The architectures of the autoencoders we used in our study are presented in Figure 3.
The hyperparameters used in the SSLs are presented in Tables 3 and 4. The process of selecting these values was facilitated through Python tuning libraries such as Keras Tuner and GridSearchCV, which ultimately led to finding the values that yielded optimal performance for each SSL algorithm. In this way, a fair comparison and analysis were performed between the considered SSLs.
Table 3. Hyperparameters of classical SSLs. In cases where hyperparameters are not explicitly specified, they are assumed to take on their default values.

We developed the algorithms for all five semi-supervised learning methods in the Google Colaboratory (Colab) environment using the Python open-source libraries Scikit-Learn 1.2.2 and TensorFlow 2.12 with Keras, a high-level deep learning API. The advantage is that Colab offers accessibility, collaboration, and pre-installed libraries, eliminating setup complexities. With integrated GPUs, Colab facilitates efficient analysis, code sharing, and documentation, streamlining the implementation process.
Anomaly Detection System
From the five SSLs, the algorithm with the best evaluation metrics is selected and an advanced anomaly detection system is designed. This system generates the alarm and determines the detection delay time (the time between the moment when the anomaly occurs and the moment when it is identified by the SSL algorithm). In our study, the anomaly detection system is tested on all three anomaly scenarios but can also be successfully applied to real-time data streams if a sliding window or time interval is defined for processing the input data [30,43,44]. Furthermore, as stated in [44], real-time fault detection and diagnosis (FDD) holds significant practical importance, but the exploration of time delay remains relatively limited in prior research.
In our study, the anomaly detection system follows the steps below: Step 1: Generate two binary signals (1 = normal, 0 = anomaly): the 'true' signal associated with the test data and the 'predicted' signal produced by the SSL anomaly detector.
Step 2: Activate the alarm when the threshold is exceeded and compute the time delay by comparing the 'true' binary signal with the 'predicted' signal.
The alarm signal is:

Alarm(t) = 1 (active) if the predicted binary signal at time t equals 0 (anomaly), and 0 (inactive) otherwise.

The time delay between the occurrence of an anomaly and its identification by the SSL algorithm can be expressed as:

T_delay = T_anomaly_SSL - T_anomaly_dataset,

where T_delay is the duration between anomaly occurrence and anomaly detection by the SSL; T_anomaly_SSL is the moment when the SSL algorithm identifies the anomaly; and T_anomaly_dataset is the moment when the anomaly actually occurs.
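The sketch below shows one way the alarm signal and the detection delay could be computed from the two binary signals; the sampling period and the synthetic signals are illustrative assumptions.

```python
import numpy as np

def alarm_and_delay(true_signal, predicted_signal, sample_period_hours=0.1):
    """Return the alarm signal and T_delay (in hours) for one anomaly window."""
    true_signal = np.asarray(true_signal)
    predicted_signal = np.asarray(predicted_signal)

    # Alarm is active (1) whenever the model predicts an anomaly (0).
    alarm = (predicted_signal == 0).astype(int)

    # First index at which the anomaly actually occurs / is detected.
    t_anomaly_dataset = int(np.argmax(true_signal == 0))
    t_anomaly_ssl = int(np.argmax(predicted_signal == 0))

    # T_delay = T_anomaly_SSL - T_anomaly_dataset, converted to hours.
    t_delay = (t_anomaly_ssl - t_anomaly_dataset) * sample_period_hours
    return alarm, t_delay

true = [1] * 50 + [0] * 20 + [1] * 30
pred = [1] * 55 + [0] * 15 + [1] * 30   # detection lags the true onset by 5 samples
_, delay = alarm_and_delay(true, pred)
print(f"detection delay: {delay:.1f} h")  # 0.5 h with the assumed sampling period
```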
Results and Discussion
In this section, we present the performance metrics of the five semi-supervised techniques (IF, LOF, OCSVM, MLP-AE, and Conv-AE) for anomaly detection in the scenarios (complete, concurrent, and complex anomaly) described in Section 3.2.For the evaluation step (Tables 5 and 6) we used Python documentation [45,46] to compute accuracy, precision, recall, F1-score metrics, and confusion matrices [47].Next, the alarms are generated and the detection time for each anomaly is computed with a detection system designed to use the best semi-supervised algorithm, the Conv-AE.This system is tested in the case of all three test datasets (Figures 4-6).
The threshold is determined based on the maximum MAE loss value obtained after training the Conv-AE. The threshold determines the point beyond which a sample is classified as an anomaly. The formula for calculating the MAE loss for each sample in the test dataset, using a reconstruction comparison between the predicted and actual data points, is:

MAE loss = (1/n) Σ_{i=1}^{n} |x_train_pred(i) - x_train(i)|,

where n is the number of samples in the training dataset, x_train_pred(i) represents the predicted data for sample i in the training dataset, and x_train(i) represents the actual data for the same sample.

And the threshold is obtained with:

threshold = max(train_MAE_loss),

where train_MAE_loss is the array of Mean Absolute Error (MAE) loss values calculated for each sample in the training dataset. In our study, the value of the threshold is set to 0.2588 (test dataset 1), 0.3948 (test dataset 2), and 0.4865 (test dataset 3).

The alarm and delay time are obtained as described in Section 3.7. When the Conv-AE binary signal from Figures 4-6 (highlighted with orange color) becomes 0, a fault is detected and the alarm is activated (1-active, 0-inactive). The primary task is to capture the first anomalous instance from an anomaly window. Due to data variability and subtle fluctuations, the identification of anomalies like sensor Precision Degradation (PD) can occasionally pose challenges. As illustrated in Figures 4-6, this can cause the Conv-AE to occasionally overlook instances within a specific window. To address this challenge, our detection system focuses only on the moment when the Conv-AE binary signal first transitions from 1 to 0 during an anomaly window.
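A minimal sketch of this thresholding step is shown below, assuming the reconstructions have already been produced by the trained autoencoder; the random stand-in arrays only take the place of real model predictions.

```python
import numpy as np

def mae_per_sample(x, x_pred):
    # Mean absolute reconstruction error of each sample (averaged over all other axes).
    return np.mean(np.abs(x_pred - x), axis=tuple(range(1, x.ndim)))

x_train = np.random.rand(500, 64, 1)
x_test = np.random.rand(100, 64, 1)
x_train_pred = x_train + 0.05 * np.random.rand(*x_train.shape)  # stand-in for conv_ae.predict(x_train)
x_test_pred = x_test + 0.30 * np.random.rand(*x_test.shape)     # stand-in for conv_ae.predict(x_test)

train_mae_loss = mae_per_sample(x_train, x_train_pred)
threshold = float(np.max(train_mae_loss))          # maximum training reconstruction loss

test_mae_loss = mae_per_sample(x_test, x_test_pred)
is_anomaly = test_mae_loss > threshold             # True where the alarm should fire
print(f"threshold = {threshold:.4f}, samples flagged: {int(is_anomaly.sum())}")
```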
The time is computed for each anomaly with the Conv-AE detection system.This procedure holds significant importance due to its ability to provide insights into the efficiency, effectiveness, and real-world impact of the anomaly detection system.It assesses the system's responsiveness to anomalies, thereby mitigating potential damage, enhancing system reliability, and improving overall operational efficiency.
In Table 7, the time delay of each anomaly from the three scenarios, generated by the Conv-AE detection system, is shown. The time delays obtained are generally good in all scenarios (complete, concurrent, and complex anomalies). Of all the anomaly types, spike faults are the easiest to detect, with a time delay below 1 h. As the anomaly scenario becomes more intricate, the time delay varies, but not significantly. The worst time delays are 11 h in the case of 80% aeration valve + 20% sensor drift (test dataset 3, complex faults) and 6 h in the case of aeration valve + sensor drift (test dataset 2, concurrent fault). Overall, the time delays show that the anomaly detection system we designed is efficient and can detect outliers in a timely manner, thus ensuring the proper functioning of the WWTP. Comparing our results with other works [26,30,46] also demonstrates that our anomaly detection system is well designed and effective. For example, in [30], a time delay of 4.41 h for the drift fault is obtained, while our system is able to detect the same fault with a delay of only 3.84 h; for the bias fault, their delay is 2.5 h, while ours is significantly smaller at 0.72 h. Moreover, with the Conv-AE, we achieve higher accuracy than other studies.
The running time of the fault detection system is another critical factor in its practical application.In our evaluation, we measured the total time taken for anomaly detection in different scenarios.In the first scenario, the system demonstrated remarkable efficiency, completing the detection process in only 0.000029 min.The second scenario, while slightly longer at 0.000052 min, still maintained a swift response time.In the third scenario, the detection system performed admirably, with a runtime of 0.000047 min.These results demonstrate that the detection system is performant and can boost the early detection of anomalies in WWTPs.However, this study places its attention on a limited set of fault types covered by three datasets.In the future, it would be advantageous to explore a more diverse range of mechanical or even biological faults to evaluate how effectively the fault detection system operates.Additionally, we plan to consider the inclusion of real-world data to improve our study's realism and practical relevance.
Conclusions
In this study, we analyzed five semi-supervised learning techniques (IF, LOF, OCSVM, MLP-AE, and Conv-AE) for the detection of complete, concurrent, and complex anomalies in WWTPs. The comparison between the two autoencoders and the classical semi-supervised methods reveals that AEs offer a significantly enhanced efficiency in addressing anomaly detection challenges within WWTPs. Remarkably, Conv-AE achieved over 98% accuracy, precision, recall, and F1-score in the case of complete and complex anomalies, and over 97% in the case of concurrent anomalies.
Also, through the development of an advanced anomaly detection system based on the optimal semi-supervised method (Conv-AE), our approach not only facilitates the early detection of various anomalies in DO sensor and aeration valve behaviors, but also provides valuable insights to the operators about the system operational status.For example, in the case of complete anomalies, we achieved time delays that are significantly smaller compared with other studies.Moreover, the uniqueness of our study lies in the capability of our anomaly detection system to identify not only complete faults, as commonly presented in the literature, but also concurrent and incipient faults.To our knowledge, this specific aspect has not been studied in other publications so we cannot compare our results with that of other research.
Hence, it becomes evident that embedding semi-supervised learning techniques in anomaly detection holds the power to revolutionize how wastewater treatment plants operate.This means more efficient operations, the better use of resources, and a stronger commitment to protect the environment.
In our future research, we plan to extend our current study to include the analysis of biological faults, further enhancing the AE fault detection system by exposing it to a diverse set of fault types using real-world data.Additionally, we propose to investigate alternative sensor types and explore multi-sensor fault scenarios.Given the complexity of the wastewater treatment process and the numerous sensors involved, incorporating real-world data into our experiments will provide a more comprehensive understanding of the challenges and opportunities in this domain.
Figure 1 .
Figure 1. Block diagram of the proposed framework.
Figure 2 .
Figure 2. The normal and anomaly data from datasets: (a) Training dataset with normal data; (b) Test dataset 1 with normal and complete anomalies; (c) Test dataset 2 with normal and concurrent anomalies; (d) Test dataset 3 with normal and complex anomalies.
Table 2 .
Statistical analysis of the training dataset.
Table 7 .
Detection system: time delay of each anomaly is detected with Conv-AE detection system.
Genome variation and population structure among 1142 mosquitoes of the African malaria vector species Anopheles gambiae and Anopheles coluzzii
Mosquito control remains a central pillar of efforts to reduce malaria burden in sub-Saharan Africa. However, insecticide resistance is entrenched in malaria vector populations, and countries with a high malaria burden face a daunting challenge to sustain malaria control with a limited set of surveillance and intervention tools. Here we report on the second phase of a project to build an open resource of high-quality data on genome variation among natural populations of the major African malaria vector species Anopheles gambiae and Anopheles coluzzii. We analyzed whole genomes of 1142 individual mosquitoes sampled from the wild in 13 African countries, as well as a further 234 individuals comprising parents and progeny of 11 laboratory crosses. The data resource includes high-confidence single-nucleotide polymorphism (SNP) calls at 57 million variable sites, genome-wide copy number variation (CNV) calls, and haplotypes phased at biallelic SNPs. We use these data to analyze genetic population structure and characterize genetic diversity within and between populations. We illustrate the utility of these data by investigating species differences in isolation by distance, genetic variation within proposed gene drive target sequences, and patterns of resistance to pyrethroid insecticides. This data resource provides a foundation for developing new operational systems for molecular surveillance and for accelerating research and development of new vector control tools. It also provides a unique resource for the study of population genomics and evolutionary biology in eukaryotic species with high levels of genetic diversity under strong anthropogenic evolutionary pressures.
agricultural area characterised by small-scale vegetable growing and large-scale commercial farms such as oil palm and cocoa plantations. Mosquito samples were collected as larvae from puddles near farms between September and October, 2012. Madina (5.668, -0.219) is a suburb of Accra within the coastal savanna zone of Ghana. It is an urban community characterised by numerous vegetable-growing areas. The vegetation consists of mainly grassland interspersed with dense short thickets often less than 5 m high with a few trees. Specimens were sampled from puddles near roadsides and farms between October and December 2012. Takoradi (4.912, -1.774) is the capital city of Western Region of Ghana. It is an urban community located in the coastal savanna zone. Mosquito samples were collected from puddles near road construction and farms between August and September 2012. Koforidua (6.094, -0.261) is the capital city of Eastern Region of Ghana and is located in semi-deciduous forest. It is an urban community characterized by numerous small-scale vegetable farms. Samples were collected from puddles near road construction and farms between August and September 2012. Larvae from all collection sites were reared to adults and females preserved over silica for DNA extraction. Both An. gambiae and An. coluzzii were collected from these sites, determined by PCR assay (Santolamazza et al. 2008).
Guinea-Bissau: Mosquitoes were collected in October 2010 using indoor CDC light traps, in the village of Safim (11.957, -15.649), ca. 11 km north of Bissau city, the capital of the country. Malaria is hyperendemic in the region and transmitted by members of the Anopheles gambiae complex (Vicente et al. 2017). An. arabiensis, An. melas, An. coluzzii and An. gambiae, as well as apparent hybrids between the latter two species, are known to occur in the region (Gordicho et al. 2014;Vicente et al. 2017). Mosquitoes were preserved individually on 0.5ml micro-tubes filled with silica gel and cotton. DNA extraction was performed by a phenol-chloroform protocol (Donnelly et al. 1999).
Genome accessibility
We performed additional analyses to verify that there was no significant bias towards one species or another given the use of a single reference genome AgamP3 (Holt et al. 2002) for alignment of reads from all individuals. We found that the genomes of An. coluzzii and An. gambiae individuals were similarly diverged from the reference genome (Supplemental Figure S9). The similarity in levels of divergence is likely to reflect the mixed ancestry of the PEST strain from which the reference genome was derived (Holt et al. 2002;Sharakhova et al. 2007). An exception to this was the pericentromeric region of the X chromosome, a known region of divergence between the two species (The Anopheles gambiae 1000 Genomes Consortium 2017) where the reference genome is closer to An. coluzzii than to An. gambiae. The similarity of this region to An. coluzzii may be due to artificial selection for the X-linked pink eye mutation in the reference strain (Holt et al. 2002), as this originated in the An. coluzzii parent it may have led to the removal of any An. gambiae ancestry in this region.
SNP annotation
Of 105,486,698 SNPs reported in the raw callset, 57,837,885 passed all quality filters defined in the main Methods section. To produce an analysis-ready VCF file for each chromosome arm, we first removed all non-SNP variants. We then removed genotype calls for individuals excluded by the sample QC analysis, then removed any variants that were no longer variant after excluding individuals. We then added INFO annotations with genome accessibility metrics and added FILTER annotations per the criteria defined in the main Methods section. Finally, we added INFO annotations with information about functional consequences of mutations using SnpEff version 4.1b (Cingolani et al. 2012). Further details of SNP filtering and annotation can be found in Supplementary Information of The Anopheles gambiae 1000 Genomes Consortium (2017).
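As a rough illustration of this kind of post-filtering step (keeping only variants whose FILTER annotation is PASS), a pysam-based sketch is shown below; the file names are placeholders and this is not the consortium's actual pipeline.

```python
from pysam import VariantFile

vcf_in = VariantFile("chrom_3L.vcf.gz")                    # hypothetical input
vcf_out = VariantFile("chrom_3L.pass_only.vcf", "w", header=vcf_in.header)

kept = 0
for record in vcf_in:
    # record.filter holds the FILTER annotations applied to this variant.
    if "PASS" in record.filter.keys():
        vcf_out.write(record)
        kept += 1

vcf_out.close()
print(f"{kept} variants passed all filters")
```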
Supplemental figures
Supplemental Figure S1. Ancestry informative markers (AIM). Rows represent individual mosquitoes (grouped by population) and columns represent SNPs (grouped by chromosome arm). Colours represent species genotype. The column at the far left ("PCR") shows the species assignment according to the conventional molecular test based on a single marker on the X chromosome, which was performed for all populations except The Gambia (GM) and Kenya (KE). The column at the far right shows the genotype for kdr variants in Vgsc codon 995. Lines at the lower edge show the physical locations of the AIM SNPs.
A Theory of Gamma-Ray Bursts
We present a specific scenario for the link between GRB and hypernovae, based on Blandford-Znajek extraction of black-hole spin energy. Such a mechanism requires a high angular momentum in the progenitor object. The observed association of gamma-ray bursts with type Ibc supernovae leads us to consider massive helium stars that form black holes at the end of their lives as progenitors. We combine the numerical work of MacFadyen&Woosley with analytic calculations, to show that about 1E53 erg each are available to drive the fast GRB ejecta and the supernova. The GRB ejecta are driven by the power output through the open field lines, whereas the supernova is powered by closed filed lines and jet shocks. We also present a much simplified approximate derivation of these energetics. Helium stars that leave massive black-hole remnants in special ways, namely via soft X-ray transients or very massive WNL stars. Since binaries naturally have high angular momentum, we propose a link between black-hole transients and gamma-ray bursts. Recent observations of one such transient, GRO J1655-40/Nova Scorpii 1994, explicitly support this connection: its high space velocity indicates that substantial mass was ejected in the formation of the black hole, and the overabundance of alpha-nuclei, especially sulphur, indicates that the explosion energy was extreme, as in SN 1998bw/GRB 980425. (abstract shortened)
Introduction
The discovery of afterglows to gamma-ray bursts has greatly increased the possibility of studying their physics. Since these afterglows have thus far only been seen for long gamma-ray bursts (duration ∼ > 2 s), we shall concentrate on the mechanism for this subclass. The shorter bursts (duration ∼ < 2 s) may have a different origin; specifically, it has been suggested that they are the result of compact-object mergers and therefore offer the intriguing possibility of associated outbursts of gravity waves. (Traditionally, binary neutron stars have been considered in this category (Eichler et al. 1989, Janka et al. 1999). More recently, Bethe & Brown (1998) have shown that low-mass black-hole, neutron-star binaries, which have a ten times greater formation rate and are stronger gravity-wave emitters, may be the more promising source of this kind.) An important recent clue to the origin of long bursts is the probable association of some of them with ultra-bright type Ibc supernovae (Galama et al. 1998, Bloom et al. 1999, Galama et al. 2000. The very large explosion energy 1 implied by fitting the light curve of SN 1998bw, which was associated with GRB 980425, indicates that a black hole was formed in this event (Iwamoto et al. 1998). This provides two good pieces of astrophysical information: it implicates black holes in the origin of gamma-ray bursts, and it demonstrates that a massive star can explode as a supernova even if its core collapses into a black hole.
In this paper, we start from the viewpoint that the gamma-ray burst is powered by electromagnetic energy extraction from a spinning black hole, the so-called Blandford-Znajek (1977) mechanism. This was worked out in detail by Lee, Wijers, & Brown (1999), and further details and comments were discussed by Lee, Brown, & Wijers (2000), who built on work by Thorne et al. (1986) and Li (2000). They have shown that with the circuitry in a 3+1 dimensional description using the Boyer-Lindquist metric, one can have a simple pictorial model for the BZ mechanism.
Footnote 1: Höflich et al. (1999) have proposed that the explosion energy was not much larger than usual, but that the explosion was very asymmetric; this model also provides a reasonable fit to the light curve of SN 1998bw.
The simple circuitry which involves steady state current flow is, however, inadequate for describing dissipation of the black hole rotational energy into the accretion disk formed from the original helium envelope. In this case the more rapidly rotating black hole tries to spin up the inner accretion disk through the closed field lines coupling the black hole and disk. Electric and magnetic fields vary wildly with time. Using the work of Blandford & Spruit (2000) we show that this dissipation occurs in an oscillatory fashion, giving a fine structure to the GRB, and that the total dissipation should furnish an energy comparable to that of the GRB to the accretion disk. We use this energy to drive the hypernova explosion.
Not any black-hole system will be suitable for making GRB: the black hole must spin rapidly enough and be embedded in a strong magnetic field. Moreover, the formation rate must be high enough to get the right rate of GRB even after accounting for substantial collimation of GRB outflows. We explore a variety of models, and give arguments why some will have sufficient energy and extraction efficiency to power a GRB and a hypernova. We argue that the systems known as black-hole transients are the relics of GRBs, and discuss the recent evidence from high space velocities and chemical abundance anomalies that these objects are relics of hypernovae and GRBs; we especially highlight the case of Nova Scorpii 1994 (GRO J1655−40).
The plan of this paper is as follows. We first show that it is reasonable to expect similar energy depositions into the GRB outflow and the accretion disk (Sect. 2) and discuss the amount of available energy to be extracted (Sect. 3). Then we show the agreement of those results with the detailed numerical simulations by MacFadyen & Woosley, and use those simulations to firm up our numbers (Sect. 4). We continue by presenting a simple derivation of the energetics that approximates the full results well (Sect. 5). Finally, we discuss some previously suggested progenitors (Sect. 6) and present our preferred progenitors: soft X-ray transients (Sect. 7).
Simple Circuitry
Although our numbers are based on the detailed review of Lee, Wijers, & Brown (1999), which confirms the original Blandford-Znajek (1977) paper, we illustrate our arguments with the pictorial treatment of Thorne et al. (1986) in "The Membrane Paradigm". Considering the time as universal in the Boyer-Lindquist metric, essential electromagnetic and statistical mechanics relations apply in their 3+1 dimensional manifold. We summarize their picture in our Fig. 1.
The surface of the black hole can be considered as a conductor with surface resistance R_BH = 4π/c = 377 ohms. A circuit that rotates rigidly with the black hole can be drawn from the loading region, the low-field region up the axis of rotation of the black hole in which the power to run the GRB is delivered, down a magnetic field line, then from the North pole of the black hole along the (stretched) horizon to its equator. From the equator we continue the circuit through part of the disk and then connect it upwards with the loading region. We can also draw circuits starting from the loading region which pass along only the black hole or go through only the disk, but adding these would not change the results of our schematic model. Using Faraday's law, the voltage V can be found by integrating the vector product of charge velocity, v, and magnetic field, B, along the circuit:

V = (1/c) ∮ (v × B) · dl

(dl is the line element along the circuit). Because this law involves v × B the integrals along the field lines make no contribution. We do get a contribution V from the integral from North pole to equator along the black hole surface. Further contributions to V will come from cutting the field lines from the disk. We assume the field to be weak enough in the loading region to be neglected.
The GRB power, Ė_GRB, will be

Ė_GRB = I² R_L,

where R_L is the resistance of the loading region, and the current is given by

I = V / (R_BH + R_L + R_D).     (3)

(The index BH refers to the black hole, L to the load region, and D to the disk.) The load resistance has been estimated in various ways and for various assumptions by Lovelace, MacAuslan, & Burns (1979), by MacDonald & Thorne (1982), and by Phinney (1983). All estimates agree that to within a factor of order unity R_L is equal to R_BH.
In a similar fashion, some power will be deposited into the disk but this equilibrium contribution will be small because of the low disk resistance R D . Blandford & Spruit (2000) have shown that important dissipation into the disk comes through magnetic field lines coupling the disk to the black hole rotation. As shown in Fig. 2 these lines, anchored in the inner disk, thread the black hole.
The more rapidly rotating black hole will provide torques, along its rotation axis, which spin up the inner accretion disk, in which the closed magnetic field lines are anchored. With increasing centrifugal force the material in the inner disk will move outwards, cutting down the accretion. Angular momentum is then advected outwards, so that the matter can drift back inwards. It then delivers more matter to the black hole and is flung outwards again. The situation is like that of a ball in a roulette wheel (R.D. Blandford, private communication). First of all it is flung outwards and then drifts slowly inwards. When it hits the hub it is again thrown outwards. The viscous inflow time for the fluctuations is easily estimated to be

τ_d ≈ (1/α_vis) (r/H)² Ω_disk⁻¹,

where H is the height of the disk at radius r, Ω_disk its angular velocity, and α_vis is the usual α-parameterization of the viscosity. We choose α_vis ∼ 0.1, r/H ∼ 10 for a thin disk and then arrive at τ_d ∼ 0.1 s. We therefore expect variability on all time scales between the Kepler time (sub-millisecond) and the viscous time, which may explain the very erratic light curves of many GRBs.
We suggest that the GRB can be powered byĖ GRB and a Type Ibc supernova explosion byĖ SN whereĖ SN is the power delivered through dissipation into the disk. To the extent that the number of closed field lines coupling disk and black hole is equal to the number of open field lines threading the latter, the two energies will be equal. In the spectacular case of GRB 980326 (Bloom et al. 1999), the GRB lasts about 5 s, which we take to be the time that the central engine operates. We shall show that up to ∼ 10 53 erg is available to be delivered into the GRB and into the accretion disk, the latter helping to power the supernova (SN) explosion. This is more energy than needed and we suggest that injection of energy into the disk shuts off the central engine by blowing up the disk and thus removing the magnetic field needed for the energy extraction from the black hole. If the magnetic field is high enough the energy will be delivered in a short time, and the quick removal of the disk will leave the black hole still spinning quite rapidly.
Energetics of GRBs
The maximum energy that can be extracted from the BZ mechanism (Lee, Wijers, & Brown 1999) is

E_BZ^max ≈ 0.09 M c² ≈ 1.6 × 10^53 (M/M_⊙) erg.

This is 31% of the black hole rotational energy, the remainder going toward increasing the entropy of the black hole. This maximum energy is obtained if the extraction efficiency is

ε_Ω = 1/2,     (7)

i.e., if the field lines rotate at half the angular velocity of the hole. In Appendix A we give numerical estimates for this ratio for various ω = Ω_disk/Ω_K and various radii in the region of parameter space we consider. As explained in Section 2 we expect the material in the inner disk to swing in and out around the marginally stable radius, r_ms. It can be seen from Table 2 and Appendix A that the relevant values of ε_Ω are close to that of eq. (7).
For a 7M_⊙ black hole, such as that found in Nova Sco 1994 (GRO J1655−40), this maximum is

E_BZ^max ≈ 1.1 × 10^54 erg.

We estimate below that the energy available in a typical case will be an order of magnitude less than this. Without collimation, the estimated gamma-ray energy in GRB 990123 is about 4.5 × 10^54 erg (Andersen et al. 1999). The BZ scenario entails substantial beaming, so this energy should be multiplied by dΩ/4π, which may be a small factor (perhaps 10^−2).
The BZ power can be delivered at a maximum rate of (Lee et al. 1999) so that high magnetic fields are necessary for rapid delivery.
The above concerns the maximum energy output into the jet and the disk. The real energy available in black-hole spin in any given case, and the efficiency with which it can be extracted, depend on the rotation frequency of the newly formed black hole and the disk or torus around it. The state of the accretion disk around the newly formed black hole, and the angular momentum of the black hole, are somewhat uncertain. However, the conditions should be bracketed between a purely Keplerian, thin disk (if neutrino cooling is efficient) and a thick, non-cooling hypercritical advection-dominated accretion disk (HADAF), of which we have a model (Brown, Lee & Bethe 2000). Let us examine the result for the Keplerian case. In terms of

ã = Jc/(GM²),

where J is the angular momentum of the black hole, we find the rotational energy of a black hole to be

E_rot = f(ã) M c²,

where

f(ã) = 1 − √[(1 + √(1 − ã²))/2].

For a maximally rotating black hole one has ã = 1 (see footnote 2).
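As a quick numerical check of these expressions, the short script below evaluates f(ã), the rotational energy, and the roughly 31% extractable fraction quoted above for two illustrative cases; the masses and spins are examples and the numbers are order-of-magnitude only.

```python
# Sketch: Kerr rotational energy E_rot = f(a) M c^2 with
# f(a) = 1 - sqrt((1 + sqrt(1 - a^2)) / 2), and the ~31% of E_rot
# that the Blandford-Znajek mechanism can extract at most.
from math import sqrt

M_SUN_C2_ERG = 1.8e54  # rest-mass energy of one solar mass, erg

def f_spin(a):
    return 1.0 - sqrt((1.0 + sqrt(1.0 - a**2)) / 2.0)

for mass_msun, a in [(7.0, 1.0), (3.2, 0.8)]:
    e_rot = f_spin(a) * mass_msun * M_SUN_C2_ERG
    e_bz = 0.31 * e_rot
    print(f"M = {mass_msun} Msun, a = {a}: f = {f_spin(a):.2f}, "
          f"E_rot ~ {e_rot:.1e} erg, extractable ~ {e_bz:.1e} erg")
```

For ã = 0.8 and 3.2 M_⊙ this reproduces the ∼2 × 10^53 erg quoted in Section 4.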
We begin with a neutron star in the middle of a Keplerian accretion disk, and let it accrete enough matter to send it into a black hole. In matter-free regions the last stable orbit of a particle around a black hole in Schwarzschild geometry is

r = 6GM/c² = 3R_Sch.

This is the marginally stable orbit r_ms. However, under conditions of hypercritical accretion, the pressure and energy profiles are changed and it is better to use (Abramowicz et al. 1988)

r_lso ≥ 4GM/c² = 2R_Sch.

With the equal sign we have the marginally bound orbit r_mb. With high rates of accretion we expect this to be a good approximation to r_lso. The accretion disk can be taken to extend down to the last stable orbit (refer to Appendix B for the details).
Footnote 2: As an aside, we note a nice mnemonic: if we define a velocity v from the black-hole angular momentum by J = M R_Sch v, so that v carries the quasi-interpretation of a rotation velocity at the horizon, then ã = 2v/c. A maximal Kerr hole, which has R_event = R_Sch/2, thus has v = c. For ã ≲ 0.5, the rotation energy is well approximated by the easy-to-remember expression E_rot = (1/2)Mv².
We take the angular velocity to be Keplerian, so that the disk velocity v at radius 2R_Sch is given by

v² = GM/(2R_Sch) = c²/4,

or v = c/2. The specific angular momentum, l, is then

l = v · 2R_Sch = 2GM/c,

which in Kerr geometry indicates ã ∼ 1. Had we taken one of the slowest-rotating disk flows that are possible, the advection-dominated or HADAF case (Narayan and Yi 1994, Brown, Lee & Bethe 2000), which has Ω² = 2Ω_K²/7, we would have arrived at ã ∼ 0.54, so the Kerr parameter will always be high.
Further accretion will add angular momentum to the black hole at a rate determined by the angular velocity of the inner disk. The material accreting into the black hole is released by the disk at r lso , where the angular momentum delivered to the black hole is determined. This angular momentum is, however, delivered into the black hole at the event horizon R Sch , with velocity at least double that at which it is released by the disk, since the lever arm at the event horizon is only half of that at R Sch , and angular momentum is conserved. With more rapid rotation involving movement towards a Kerr geometry where the event horizon and last stable orbit coincide at Although we must switch over to a Kerr geometry for quantitative results, we see thatã will not be far from its maximum value of unity. Again, for the lower angular-momentum case of a HADAF, the expected black-hole spin is not much less.
Comparison with Numerical Calculation
Our schematic model has the advantage over numerical calculations that one can see analytically how the scenario changes with change in parameters or assumptions. However, our model is useful only if it reproduces faithfully the results of more complete calculations which involve other effects and much more detail than we include. We here make comparison with Fig.19 of MacFadyen & Woosley (1999). Accretion rates, etc., can be read off from their figure which we reproduce as our Fig.3. MacFadyen & Woosley prefer a initial = 0.5 (We have removed their curve forã initial = 0). This is a reasonable value if the black hole forms from a contracting proto-neutron star near breakup. MacFadyen & Woosley find thatã initial = 0.5 is more consistent with the angular momentum assumed for the mantle thanã initial = 0. (They take the initial black hole to have mass 2M ⊙ ; we choose the Brown & Bethe (1994) mass of 1.5M ⊙ .) We confirm this in the next section.
After 5 seconds (the duration of GRB 980326) the MacFadyen & Woosley black hole mass is ∼ 3.2M ⊙ and their Kerr parameterã ∼ 0.8, which gives f (ã) of our eq.(12) of 0.11. With these parameters we find E = 2 × 10 53 erg, available for the GRB and SN explosion.
One can imagine that continuation of the MacFadyen & Woosley curve for M BH (M ⊙ ) would ultimately give something like our ∼ 7M ⊙ , but the final black hole mass may not be relevant for our considerations. This is because more than enough energy is available to power the supernova in the first 5 seconds; as the disk is disrupted, the magnetic fields supported by it will also disappear, which turns off the Blandford-Znajek mechanism.
Power is delivered at the rate given by eq.(9). Taking a black hole mass relevant here, ∼ 3.2M ⊙ , we require a field strength of ∼ 5.8 × 10 15 G in order for our estimated energy (4 × 10 52 erg) to be delivered in 5 s (the duration of GRB 980326). For such a relatively short burst, we see that the required field is quite large, but it is still not excessive if we bear in mind that magnetic fields of ∼ 10 15 G have already been observed in magnetars (Kouveliotou 1998(Kouveliotou , 1999. Since in our scenario we have many more progenitors than there are GRBs, we suggest that the necessary fields are obtained only in a fraction of all potential progenitors. Thus we have an extremely simple scenario for powering a GRB and the concomitant SN explosion in the black hole transients, which we will discuss in Section.7.2. After the first second the newly evolved black hole has ∼ 10 53 erg of rotational energy available to power these. The time scale for delivery of this energy depends (inversely quadratically) on the magnitude of the magnetic field in the neighborhood of the black hole, essentially that on the inner accretion disk. The developing supernova explosion disrupts the accretion disk; this removes the magnetic fields anchored in the disk, and self-limits the energy the B-Z mechanism can deliver.
An Even More Schematic Model
Here we calculate the energy available in a rotating black hole just after its birth (before accretion adds more). Our model is to take a 1.5M ⊙ neutron star which co-rotates with the inner edge of the accretion disk in which it is embedded. The neutron star then collapses to a black hole, conserving its angular momentum. Since the accretion disk is neutrino cooled, but perhaps not fully thin, its angular velocity will be somewhere between the HADAF value and the Keplerian value. We parameterize it as Ω = ωΩ K , where ω = 1 for Keplerian and ω = 2/7 ∼ 0.53 for the HADAF.
The moment of inertia, I, of a neutron star is well fitted for many different equations of state with the simple expression (Lattimer & Prakash 2000). With J = ωIΩ K and a neutron star of 1.5M ⊙ , with a radius of 10 km, we findã We choose ω ≃ 1.0 to roughly reproduce the MacFadyen & Woosley value ofã, see our Fig. 3. We do not really believe the disk to be so efficiently neutrino cooled that its angular velocity is Keplerian; i.e. ω = 1, but it may be not far from it. Our ω should be more properly viewed as a fudge factor which allows us to match the more complete MacFadyen & Woosley calculation. MacFadyen & Woosley find that, while the accretion disk onto the black hole is forming, an additional solar mass of material is added to it "as the dense stellar core collapses through the inner boundary at all polar angles". We shall add this to our 1.5M ⊙ and take the black hole mass to be 2.5M ⊙ . We neglect the increase in spin of the black hole by the newly accreted matter; this is already included in the MacFadyen & Woosley results. Forã 2 = 0.64 we find f (ã 2 ) = 0.11, so that the black hole rotation energy becomes in rough agreement with the estimates of MacFadyen & Woosley in the last section.
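A rough numerical version of this estimate is sketched below, assuming the commonly quoted moment-of-inertia fit I ≈ 0.35 M R² (the precise fit adopted here may differ) and ã = Jc/(GM²) with J = ωIΩ_K; it is meant only to indicate the scale of the resulting spin parameter.

```python
from math import sqrt

G, c = 6.674e-8, 2.998e10        # cgs units
M = 1.5 * 1.989e33               # 1.5 Msun neutron star, in g
R = 1.0e6                        # 10 km radius, in cm

I = 0.35 * M * R**2              # assumed moment-of-inertia fit
omega_K = sqrt(G * M / R**3)     # Keplerian angular velocity at the stellar surface

for omega in (1.0, sqrt(2.0 / 7.0)):   # Keplerian disk vs. HADAF-like rotation
    J = omega * I * omega_K
    a = J * c / (G * M**2)
    print(f"omega = {omega:.2f}: a ~ {a:.2f}")
```

With ω ≃ 1 this gives ã of order 0.7 to 0.8, consistent with the value adopted above.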
Collapsar
We have not discussed the Collapsar model of Woosley (1993) and MacFadyen & Woosley (1999). In this model the center of a rotating Wolf-Rayet star evolves into a black hole, the outer part being held out by centrifugal force. The latter evolves into an accretion disk and then by hypercritical accretion spins the black hole up. MacFadyen & Woosley point out that "If the helium core is braked by a magnetic field prior to the supernova explosion to the extent described by Spruit & Phinney (1998) then our model will not work for single stars." Spruit & Phinney argue that magnetic fields maintained by differential rotation between the core and envelope of the star will keep the whole star in a state of approximately uniform rotation until 10 years before its collapse. As noted in the last section, with the extremely high magnetic fields we need, the viscosity would be expected to be exceptionally high, making the Spruit & Phinney scenario probable. Livio & Pringle (1998) have commented that one finds evidence in novae that the coupling between layers of the star by magnetic fields may be greatly suppressed relative to what Spruit & Phinney assumed. However, we note that even with this suppressed coupling, they find pulsar periods from core collapse supernovae no shorter than 0.1 s. Independent evidence for the fact that stellar cores mostly rotate no faster than this comes from the study of supernova remnants: Bhattacharya (1990, 1991) concludes that the absence of bright, pulsar-powered plerions in most SNRs indicates that typically pulsar spin periods at birth are no shorter than 0.03-0.05 s. Translated to our black holes, such spin periods would imply ã ≲ 0.01, quite insufficient to power a GRB. As a cautionary note, we might add that without magnetic coupling the cores of evolved stars can spin quite rapidly (Heger et al. 2000). This rapid initial spin may be reconciled with Bhattacharya's limit if r-mode instabilities cause very rapid spindown in the first few years of the life of a neutron star (e.g., Heger, Langer, & Woosley 2000, Lindblom & Owen 1999).
Coalescing Low-Mass Black Holes and Helium Stars
Fryer & Woosley (1998) suggested the scenario of a black hole spiraling into a helium star. This is an efficient way to spin up the black hole. Bethe & Brown (1998) evolved low-mass black holes with helium star companion, as well as binaries of compact objects. In a total available range of binary separation 0.04 < a 13 < 4, low-mass black-hole, neutron-star binaries were formed when 0.5 < a 13 < 1.4 where a 13 is the initial binary separation in units of 10 13 cm. The low-mass black hole coalesces with the helium star in the range 0.04 < a 13 < 0.5. Binaries were distributed logarithmically in a. Thus, coalescences are more common than low-mass black-hole, neutron-star binaries by a factor of ln(0.5/0.04)/ ln(1.9/0.5) = 1.9 In Bethe & Brown (1998), the He-star, compact-object binary was disrupted ∼ 50% of the time by the He-star explosion. This does not apply to the coalescence. Thus, the rate of low-mass black-hole, He-star mergers is 3.8 times the formation rate of low-mass black-hole, neutron-star binaries, or in the Galaxy. The estimated empirical rate of GRBs, with a factor of 100 for beaming, is 10 −5 yr −1 in the Galaxy (Appendix C of Brown et al. 1999). Thus, the number of progenitors is more than adequate.
In Bethe & Brown (1998) the typical black hole mass was ∼ 2.4M ⊙ , somewhat more massive than their maximum assumed neutron star mass of 1.5M ⊙ . As it enters the helium star companion an accretion disk is soon set up and the accretion scenario will follow that described above, with rotating black holes of various masses formed. Brown, Lee, & Bethe (2000) find that the black hole will be spun up quickly. We have not pursued this scenario beyond the point that it was developed by Fryer & Woosley (1998).
Our Model: Angular Momentum
We favor a model of hypernovae similar to MacFadyen & Woosley (1999) in that it involves a failed supernova as a centerpiece. But, in distinction to MacFadyen & Woosley, our initial system is a binary, consisting of a massive star A (which will later become the failed SN) and a lighter companion B, which serves to provide ample angular momentum.
Failed supernovae require a ZAMS mass of 20 − 35M ⊙ , according to the calculations of Woosley & Weaver (1995) as interpreted by Brown, Lee, & Bethe (1999). The limits 20 and 35M ⊙ are not accurately known, but it is a fairly narrow range, so we shall in many of our calculations assume a "typical" ZAMS mass of 25M ⊙ . The heavy star A must not be in a close binary because then its hydrogen envelope would be removed early in its evolution and therefore the star would lose mass by wind at a very early stage and become a low-mass compact object (Brown, Weingartner, & Wijers 1996). Instead, we assume a wide binary, with a separation, a in the range so star A evolves essentially as a single star through its first few burning stages. It is essential that most of the He core burning is completed before its hydrogen envelope is removed (Wellstein & Langer 1999;Heger & Wellstein 2000). We assume the initial distance a between the two stars to be in this range. When star A fills its Roche lobe, the companion, star B, will spiral inwards.
The initiation and early development of the common envelope has been best treated by Rasio & Livio (1996). This is the only phase that can at present be modeled in a realistic way. They find a short viscous time in the envelope, but emphasize that numerical viscosity may play an important role in their results. However, we believe the viscosity to be large. Torkelsson et al. (1996) showed the Shakura-Sunyaev (1973) viscosity parameter, α SS , to range from 0.001 to 0.7, with the higher values following from the presence of vertical magnetic fields. Since in our Blandford-Znajek model extremely high magnetic fields ∼ 10 15 G are needed in the He envelope to deliver the energy rapidly, we believe α SS to be not much less than unity. Given such high viscosities, it seems reasonable to follow the Rasio-Livio extrapolation, based on a short viscous transport time, to later times. The most significant new result of Rasio & Livio "is that, during the dynamical phase of common envelope evolution, a corotating region of gas is established near the central binary. The corotating region has the shape of an oblate spheroid encasing the binary (i.e., the corotating gas is concentrated in the orbital plane)." A helium core, which we deal with, is not included in their calculations, because they do not resolve the inner part of the star numerically. However, since the physics of the spiral-in does not really change as it proceeds past the end of their calculations, it seems most likely that during further spiral-in, the spin-up of material inside the orbit of the companion will continue to be significant.
Star B will stop spiraling in when it has ejected the H envelope of A. Since we assume that all stars A have about the same mass, and that a i is very large, we expect From section 7.2 we conclude that a f is a few R ⊙ for M B = (0.4 − 1)M ⊙ . Now the He cores of stars of ZAMS mass M = 20 − 35M ⊙ have a radius about equal to R ⊙ . Therefore small M B stars will spiral into the He core of A. There they cannot be stopped but will coalesce with star A. However, they will have transmitted their angular momentum to star A.
Star B of larger mass will stop at larger a f ≫ R ⊙ . It is then not clear whether they will transfer all of their angular momentum to star A. In any case, they must generally wait until they evolve off the main sequence into the subgiant or possibly even the giant stage before they can fill their Roche Lobes and later accrete onto the black hole resulting from star A.
The Kepler velocity of star B at a f is We estimate the final mass of A, after removal of its hydrogen envelope, to be about 10M ⊙ ; then where a f,11 is a f in units of 10 11 cm. The specific angular momentum of B is then If B and A share their angular momentum, the specific angular momentum is reduced by a factor M B /(M A,f + M B ) which we estimate to be ∼ 0.1. Since a f should be ∼ > 3R ⊙ (See Table 1), the specific angular momentum of A should be j(A) ∼ > 10 18 cm 2 s −1 .
Star B has now done its job and can be disregarded.
Supernova and collapse
Star A now goes through its normal evolution, ending up as a supernova. But since we have chosen its mass to be between 20 and 35M ⊙ , the SN shock cannot penetrate the heavy envelope but is stopped at some radius R SN ≃ 10 10 cm, well inside the outer edge of the He envelope. We estimate R SN by scaling from SN 1987A: in that supernova, with progenitor mass ∼ 18 M ⊙ , most of the He envelope was returned to the galaxy. The separation between compact object and ejecta was estimated to occur at R ∼ 5 × 10 8 cm (Woosley 1988, Bethe 1990) at mass point 1.5 M ⊙ (gravitational). Woosley and Weaver (1995) find remnant masses of ∼ 2 M ⊙ , although with large fluctuations, for ZAMS masses in the range 20-35M ⊙ , which go into high-mass black holes. From table 3 of Brown, Weingartner, and Wijers (1996) we see that fallback between R = 3.5 and 4.5 × 10 8 cm is 0.03M ⊙ . Using this we can extrapolate to R = 10 10 cm as the distance within which matter has to begin falling in immediately in our heavier stars, to make up a compact object of 2 M ⊙ . Unlike in 1987A the shock energy in the more massive star does not suffice to eject the envelope beyond this point, and the remaining outer envelope will also eventually fall back.
At R_SN, the specific angular momentum of Kepler motion around a central star of mass 10M_⊙ is, cf. eq. (26),

j_K = (G M R_SN)^(1/2) ≈ 3.6 × 10^18 cm² s^−1.

In reality, at this time the central object has a mass M ∼ 1.5M_⊙ (being a neutron star), and since j_K ∼ V_K ∼ M^(1/2),

j_K(1.5M_⊙) = 1.5 × 10^18 cm² s^−1.
The angular momentum inherent in star A, eq. (27), is therefore greater than the Kepler angular momentum. This would not be the case had our initial object been a single star, a collapsar. (The collapsar may work none the less, but our binary model is more certain to work.) The supernova material is supported by pressure inside the cavity, probably mostly due to electromagnetic radiation. The cavity inside R SN is rather free of matter. After a while, the pressure in the cavity will reduce. This may happen by opening toward the poles, in which case the outflowing pressure will drive out the matter near the poles and create the vacuum required for the gamma ray burst. Reduction of pressure will also happen by neutrino emission. As the pressure gets reduced, the SN material will fall in toward the neutron star in the center. But because the angular momentum of the SN material is large (eq.27) the material must move more or less in Kepler orbits; i.e., it must spiral in. This is an essential point in the theory.
If j(A) is less than j K at R SN , the initial motion will have a substantial radial component in addition to the tangential one. But as the Kepler one decreases, cf. eq.29, there will come a point of r at which j K = j(A). At this point an accretion disk will form, consisting of SN material spiraling in toward the neutron star. The primary motion is circular, but viscosity will provide a radial component inward v r ∼ αv K where α is the viscosity parameter. It has been argued by Brandenburg et al. (1996) that α ∼ 0.1 in the presence of equipartition magnetic fields perpendicular to the disk, and it may be even larger with the high magnetic fields required for GRBs. Narayan & Yi (1994) have given analytical solutions for such accretion disks. The material will arrive at the neutron star essentially tangentially, and therefore its high angular momentum will spin up the neutron star substantially. Accretion will soon make the neutron star collapse into a black hole. The angular momentum will be conserved, so the angular velocity is increased since the black hole has smaller radius than the neutron star. Thus the black hole is born with considerable spin.
A large fraction of the material of the failed supernova will accrete onto the black hole, giving it a mass of order 7M ⊙ . All this material adds to the angular momentum of the black hole since all of it has the Kepler velocity at the black hole radius. Our estimates show that the black hole would be close to an extreme Kerr hole (Section 5), were it to accrete all of this material. It may, however, be so energetic that it drives off part of the envelope in the explosion before it can all accrete (see Section 5).
Soft X-ray Transients with Black-Hole Primaries
Nine binaries have been observed which are black-hole X-ray transients. All contain a high-mass black hole, of mass ∼ 7 M⊙. In seven cases the lower-mass companion (star B) has a mass ≲ 1 M⊙. The two stars are close together, their distance being of order 5 R⊙. Star B fills its Roche Lobe, so it spills over some material onto the black hole. The accretion disk near the black hole emits soft X rays. Two of the companions are subgiants, filling their Roche lobes at a few times larger separations from the black hole.
In fact, however, the accretion onto the central object is not constant, so there is usually no X-ray emission. Instead, the material forms an accretion disk around the black hole, and only when enough material has been assembled does it fall onto the black hole to give observable X rays. Hence, the X-ray source is transient. Recent observation of a large space velocity of Cygnus X-1 (Nelemans et al. 1999) suggests that it has evolved similarly to the transient sources, with the difference that the companion to the black hole is an ∼ 18 M⊙ O star. The latter pours enough matter onto the accretion disk so that Cyg X-1 shines continuously. We plan to describe the evolution of Cyg X-1 in a future paper (Brown et al. 2000). Table 1 is an abbreviated list of data on transient sources. A more complete table is given in Brown et al. (1999b). Two of the steady X-ray sources, in the LMC, have been omitted, because we believe the LMC to be somewhat special because of its low metallicity; also, the masses, etc., of these two are not as well measured. Of the others, 6 are main-sequence K stars, one is a main-sequence M star, and the other two have masses greater than the Sun. The masses given are geometric means of the maximum and minimum masses given by the observers. The distance a between the black hole and the optical (visible) star is greater for the heavier stars than for the K and M stars (except the more evolved one of them), as was expected in Section 7.1 for the spiraling in of star B. The table also gives the radius of the Roche Lobe and the specific orbital angular momentum of star B.
Five K stars have almost identical distance a ∼ 5 R⊙, and also Roche Lobe sizes, ∼ 1.0 R⊙. These Roche Lobes can be filled by K stars on the main sequence. The same is true for the M star. Together, K and M stars cover the mass range from 0.3 to 1 M⊙. The two heavier stars have Roche Lobes of 3 and 5 R⊙, which cannot possibly be filled by main-sequence stars of mass ∼ 2 M⊙. We must therefore assume that these stars are subgiants, in the Hertzsprung gap. These stars spend only about 1% of their life as subgiants, so we must expect that there are many "silent" binaries, roughly 100 times more, in which the 2 M⊙ companion has not yet evolved off the main sequence and sits well within its Roche lobe. The time as subgiants is even shorter for more massive stars; this explains their absence among the transient sources.
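The quoted Roche lobe sizes can be checked against the standard Eggleton (1983) approximation, R_L/a ≈ 0.49 q^{2/3} / (0.6 q^{2/3} + ln(1 + q^{1/3})), with q the mass ratio of the lobe-filling star to its companion. The sketch below is our own; the masses and separations are illustrative values drawn from the discussion (a K dwarf or a ∼2 M⊙ subgiant with a 7 M⊙ black hole), not a reproduction of the authors' Table 1.

import math

def roche_lobe_radius(m_donor, m_accretor, a):
    # Eggleton (1983) approximation to the effective Roche lobe radius
    q = m_donor / m_accretor
    q13 = q ** (1.0 / 3.0)
    return a * 0.49 * q13**2 / (0.6 * q13**2 + math.log(1.0 + q13))

# illustrative numbers: 0.5 Msun K dwarf + 7 Msun black hole at a = 5 Rsun
print(roche_lobe_radius(0.5, 7.0, 5.0))   # ~1 Rsun, as quoted above
# a ~2 Msun subgiant needs a wider orbit to fill a 3-5 Rsun lobe
print(roche_lobe_radius(2.0, 7.0, 17.0))  # ~5 Rsun at the Nova Sco separation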
Therefore we expect a large number of "silent partners": stars of more than 1 M⊙, still on their main sequence, which are far from filling their Roche Lobe and therefore do not transfer mass to their black hole partners. In fact, we do not see any reason why the companion of the black hole could not have any mass, up to the ZAMS mass of the progenitor of the black hole; it must only evolve following the formation of the black hole. It then crosses the Hertzsprung gap in such a short time, less than the thermal time scale, that star A cannot accept the mass from the companion, so that common envelope evolution must ensue. If we include these 'silent partners' in the birth rate, assuming a flat mass ratio distribution, we enhance the total birth rate of black-hole binaries by a factor 25 over the calculations by Brown, Lee, & Bethe (1999).
On the lower mass end of the companions, there is only one M star. This is explained in terms of the model of Section 7.1 by the fact that stars of low mass will generally spiral into the He core of star A, and will coalesce with A, see below eq.(23), so no relic is left. (Since the core is left spinning rapidly, these complete merger cases could also be suitable GRB progenitors.) As the outcome of the spiral-in depends also on other factors, such as the initial orbital separation and the primary mass, one may still have an occasional survival of an M star binary (note that the one M star companion is M0, very nearly in the K star range).
The appearance of the black hole transient X-ray binaries is much like our expectation of the relic of the binary which has made a hypernova: a black hole of substantial mass, and an ordinary star, possibly somewhat evolved, of smaller mass. We expect that star B would stop at a distance a f from star A which is greater if the mass of B is greater (see Section 7.1). This is just what we see in the black-hole binaries: the more massive companion stars (∼ 2M ⊙ ) are further from the black hole than the K stars. We also note that the estimated birth rate of these binaries is high enough for them to be the progenitors of GRB, even if only in a modest fraction of them the conditions for GRB powering are achieved.
Nova Scorpii 1994 (GRO J1655-40)
Nova Sco 1994 is a black hole transient X-ray source. It consists of a black hole of ∼ 7M ⊙ and a subgiant of about 2M ⊙ . Their separation is 17R ⊙ . Israelian et al. (1999) have analyzed the spectrum of the subgiant and have found that the α-particle nuclei O, Mg, Si and S have abundances 6 to 10 times the solar value. This indicates that the subgiant has been enriched by the ejecta from a supernova explosion; specifically, that some of the ejecta of the supernova which preceded the present Nova Sco (a long time ago) were intercepted by star B, the present subgiant. Israelian et al. (1999) estimate an age since accretion started from the assumption that enrichment has only affected the outer layers of the star. We here reconsider this: the time that passed since the explosion of the progenitor of the black hole is roughly the main-sequence lifetime of the present subgiant companion, which given its mass of ∼2M ⊙ will be about 1 Gyr. This is so much longer than any plausible mixing time in the companion that the captured supernova ejecta must by now be uniformly mixed into the bulk of the companion. This rather increases the amount of ejecta that we require the companion to have captured. (Note that the accretion rate in this binary is rather less than expected from a subgiant donor, though the orbital period leaves no doubt that the donor is more extended than a main-sequence star (Regős, Tout, and Wickramasinghe 1998). It is conceivable that the high metal abundance has resulted in a highly non-standard evolution of this star, in which case one might have to reconsider its age.) The presence of large amounts of S is particularly significant. Nomoto et al. (2000) have calculated the composition of a hypernova from an 11M ⊙ CO core, see Fig. 4. This shows substantial abundance of S in the ejecta. Ordinary supernovae produce little of this element, as shown by the results of Nomoto et al. (2000) in Fig. 4. The large amount of S, as well as O, Mg and Si we consider the strongest argument for considering Nova Sco 1994 as a relic of a hypernova, and for our model, generally. Fig. 4 also shows that 56 Ni and 52 Fe are confined to the inner part of the hypernova, and if the cut between black hole and ejecta is at about 5M ⊙ , there will be no Fe-type elements in the ejecta, as observed in Nova Scorpii 1994. By contrast hypernova 1998bw shows a large amount of Ni, indicating that in this case the cut was at a lower included mass.
The massive star A in Nova Sco will have gone through a hypernova explosion when the F-star B was still on the main sequence, its radius about 1.5 R⊙. Since the explosion caused an expansion of the orbit, the orbital separation a was smaller at the time of the supernova than it is now, roughly by the factor a_then = a_now/(1 + ∆M/M_now).
(∆M is the mass lost in the explosion; see, e.g., Verbunt, Wijers, and Burm 1990). With ∆M ∼ 0.8 M_now, as required by the high space velocity, this means a_then = 10 R⊙. Therefore the fraction of solid angle subtended by the companion at the time of explosion was πR_B²/(4πa_then²) = (R_B/2a_then)² ≃ 6 × 10⁻³. Assuming the ejecta of the hypernova to have been at least 5 M⊙ (Nelemans et al. 1999), the amount deposited on star B was ∼ 5 M⊙ × 6 × 10⁻³ ≃ 0.03 M⊙. The solar abundance of oxygen is about 0.01 by mass, so with the abundance in the F star being 10 times solar, and oxygen uniformly mixed, we expect 0.1 × 2.5 = 0.25 M⊙ of oxygen to have been deposited on the companion, much more than the total mass it could have captured from a spherically symmetric supernova. [Si/O] is 0.09 by mass in the Sun, and [S/O] is 0.05, so since the over-abundances of all three elements are similar we expect those ratios to hold here, giving about 0.02 M⊙ of captured Si and 0.01 M⊙ of captured S. We therefore need a layer of stellar ejecta to have been captured which has twice as much Si as S, at the same time as having about 10 times more O. From Fig. 4, we see that this occurs nowhere in a normal supernova, but does happen in the hypernova model of Nomoto et al. (2000) at mass cuts of 6 M⊙ or more. This agrees very nicely with the notion that a hypernova took place in this system, and that the inner 7 M⊙ or so went into a black hole.
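The capture budget above is simple geometry, and the comparison can be made explicit in a few lines; this sketch is ours and just repeats the arithmetic with the numbers quoted in the text.

# geometric capture fraction of spherically symmetric ejecta by the companion
R_B  = 1.5      # companion radius at the time of the explosion, Rsun
a    = 10.0     # orbital separation at the time of the explosion, Rsun
M_ej = 5.0      # ejected mass, Msun

frac     = (R_B / (2.0 * a)) ** 2     # pi*R_B^2 / (4*pi*a^2), about 6e-3
captured = M_ej * frac                # ~0.03 Msun of ejecta intercepted

# oxygen actually present in the ~2.5 Msun companion at 10x the solar abundance
oxygen_in_companion = 0.1 * 2.5       # ~0.25 Msun
print(frac, captured, oxygen_in_companion)   # captured << oxygen required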
What remains is to explain how the companion acquired ten times more mass than the spherical supernova model allows, and once again we believe that the answer is given in recent hypernova calculations (MacFadyen and Woosley 1999, Wheeler et al. 2000): hypernovae are powered by jet flows, which means they are very asymmetric, with mass outflow along the poles being much faster and more energetic than along the equator. The disk provides a source for easily captured material in two ways: First, it concentrates mass in the equatorial plane, which will later be ejected mostly in that plane. Second, the velocity acquired by the ejecta is of the order of the propagation speed of the shock through it. This propagation speed is proportional to (P_2/ρ_1)^{1/2}, where P_2 is the pressure behind the shock and ρ_1 the density ahead of it. The driving pressure will be similar in all directions (or larger, due to the jet injection, in the polar regions), whereas the disk density is much higher than the polar density. Hence, the equatorial ejecta will be considerably slower than even normal supernova ejecta, greatly increasing the possibility of their capture by the companion. Other significant effects of the disk/jet geometry are (1) that the companion is shielded from ablation of its outer layers by fast ejecta, which is thought to occur in spherical supernovae with companion stars (Marietta, Burrows & Fryxell 2000) and (2) that there is no iron enrichment of the companion, because the iron, originating closest to the center, is either all captured by the black hole or ejected mainly in the jet, thus not getting near the companion (Wheeler et al. 2000; note that indeed no overabundance of Fe is seen in the companion of GRO J1655−40).
For the companion to capture the required 0.2-0.3M ⊙ of ejecta it is sufficient that the ejecta be slow enough to become gravitationally bound to it. However, the material may not stay on: when the companion has so much mass added on a dynamical time scale it will be pushed out of thermal equilibrium, and respond by expanding, as do main-sequence stars that accrete mass more gradually on a time scale faster than their thermal time scale (e.g., Kippenhahn & Meyer-Hofmeister 1977). During this expansion, which happens on a time scale much longer than the explosion, the star may expand beyond its Roche lobe and transfer some of its mass to the newly formed black hole. However, because the dense ejecta mix into the envelope on a time scale between dynamical and thermal, i.e., faster than the expansion time, this back transfer will not result in the bulk of the ejecta being fed back, though probably the material lost is still richer in heavy elements than the companion is now. Since the outer layers of the star are not very dense, and the mass transfer is not unstable because the black hole is much more massive than the companion, the total amount of mass transferred back is probably not dramatic. However, the expansion does imply that the pre-explosion mass of the companion was somewhat higher than its present mass, and that the amount of ejecta that needs to be captured in order to explain the abundances observed today is also somewhat higher than the present mass of heavy elements in the companion.
A further piece of evidence that may link Nova Sco 1994 to our GRB/hypernova scenario are the indications that the black hole in this binary is spinning rapidly. Zhang, Cui, & Chen (1997) argue from the strength of the ultra-soft X-ray component that the black hole is spinning near the maximum rate for a Kerr black hole. However, studies by Sobczak et al. (1999) show that it must be spinning with less than 70% maximum. Gruzinov (1999) finds the inferred black hole spin to be about 60% of maximal from the 300 Hz QPO. Our estimates of the last section indicate that enough rotational energy will be left in the black hole so that it will still be rapidly spinning.
We have already mentioned the unusually high space velocity of −150 ± 19 km s⁻¹. Its origin was first discussed by Brandt et al. (1995), who concluded that significant mass must have been lost in the formation of the black hole in order to explain this high space velocity: the black hole is not likely to acquire a substantial velocity in its own original frame of reference, partly because of its large mass. But the mass lost in the supernova explosion is ejected from a moving object and thus carries net momentum. Therefore, momentum conservation demands that the center of mass of the binary acquire a velocity; this is the Blaauw-Boersma kick (Blaauw 1961, Boersma 1961). Note that the F-star companion mass is the largest among the black-hole transient sources, so the center of mass is furthest from the black hole and one would expect the greatest kick. Nelemans et al. (1999) estimate the mass loss in this kick to be 5 − 10 M⊙.
In view of the above, we consider it well established that Nova Sco 1994 is the relic of a hypernova. We believe it highly likely that the other black-hole transient X-ray sources are also hypernova remnants. We believe it likely that the hypernova explosion was accompanied by a GRB if, as in GRB 980326, the energy was delivered in a few seconds. It is not clear what will happen if the magnetic fields are so low that the power is delivered only over a much longer time. There could then still be intense power input for a few seconds due to neutrino annihilation deposition near the black hole (Janka et al. 1999), but that may not be enough for the jet to pierce through the He star and cause a proper GRB (MacFadyen and Woosley 1999). At this point, we recall that the GRB associated with SN 1998bw was very sub-luminous, 10⁵ times lower than most other GRBs. While it has been suggested that this is due to us seeing the jet sideways, it is in our view more likely that the event was more or less spherical (Kulkarni et al. 1998) and we see a truly lower-power event. A good candidate would be the original suggestion by Colgate (1968, 1974) of supernova shock break-out producing some gamma rays. Indications are that the expansion in SN 1998bw was mildly relativistic (Kulkarni et al. 1998) or just sub-relativistic (Waxman and Loeb 1999). In either case, what we may have witnessed is a natural intermediate event in our scenario: we posit that there is a continuum of events varying from normal supernovae, delivering 1 foe (10⁵¹ erg) more or less spherically in ten seconds, to extreme hypernovae/GRBs that deliver 100 foes in a highly directed beam. In the middle, there will be cases where the beam cannot pierce through the star, but the total energy delivered is well above a supernova, with the net result being a hypernova accompanied by a very weak GRB.
Numbers
Nearly all observed black hole transient X-ray sources are within 5 kpc of the Sun. Extrapolating to the entire Galaxy, a total of 8,800 black-hole transients with main-sequence K companions has been suggested (Brown, Lee, & Bethe 1999).
The lifetime of a K star in a black hole transient X-ray source is estimated to be ∼ 10¹⁰ yr (Van Paradijs 1996), but we shall employ 10⁹ yr for the average of the K stars and the more massive stars, chiefly those in the "silent partners". In this case the birth rate of the observed transient sources would be λ_K = 10⁴/10⁹ = 10⁻⁵ galaxy⁻¹ yr⁻¹.
We see no reason why low-mass companions should be preferred, so we assume that the formation rate of binaries should be independent of the mass ratio q ≡ M_B/M_A. In other discussions of binaries, e.g., in Portegies Zwart & Yungelson (1998), it has often been assumed that the distribution is uniform in q. This is plausible, but there is no proof.
Since all primary masses M_A are in a narrow interval, 20 to 35 M⊙, this means that M_B is uniformly distributed between zero and some average M_A, let us say 25 M⊙. The observed K and M companions cover only 0.7 M⊙ of this range, so the total rate of creation of binaries of our type is λ = (25/0.7) λ_K ≃ 3 × 10⁻⁴ galaxy⁻¹ yr⁻¹.
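The rate bookkeeping in this section is compact enough to verify in a few lines; the sketch below is ours and simply repeats the arithmetic.

n_transients   = 8.8e3    # estimated K-companion transients in the Galaxy
lifetime       = 1e9      # adopted average donor lifetime, yr
lambda_K       = n_transients / lifetime          # ~1e-5 per galaxy per yr

mass_range_all = 25.0     # assumed flat companion-mass range, Msun
mass_range_KM  = 0.7      # range actually sampled by the K and M donors, Msun
lambda_total   = lambda_K * mass_range_all / mass_range_KM
print(lambda_K, lambda_total)   # ~1e-5 and ~3e-4 galaxy^-1 yr^-1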
This is close to the rate of mergers of low mass black holes with neutron stars which Bethe & Brown (1998) have estimated to be λ m ≃ 2 × 10 −4 galaxy −1 yr −1 .
These mergers have been associated speculatively with short GRBs, while formation of our binaries is supposed to lead to "long" GRBs (Fryer, Woosley, & Hartmann 1999). We conclude that the two types of GRB should be equally frequent, which is not inconsistent with observations. In absolute number both of our estimates eqs. (37) and (38) are substantially larger than the observed rate of 10 −7 galaxy −1 yr −1 (Wijers et al. 1998); this is natural, since substantial beaming is expected in GRBs produced by the Blandford-Znajek mechanism. Although we feel our mechanism to be fairly general, it may be that the magnetic field required to deliver the BZ energy within a suitable time occurs in only a fraction of the He cores.
Discussion and Conclusion
Our work here has been based on the Blandford-Znajek mechanism of extracting rotational energies of black holes spun up by accreting matter from a helium star. We present it using the simple circuitry of "The Membrane Paradigm" (Thorne et al. 1986). Energy delivered into the loading region up the rotational axis of the black hole is used to power a GRB. The energy delivered into the accretion disk powers a SN Ib explosion.
We also discussed black-hole transient sources, high-mass black holes with low-mass companions, as possible relics for both GRBs and Type Ib supernova explosions, since there are indications that they underwent mass loss in a supernova explosion. In Nova Sco 1994 there is evidence from the atmosphere of the companion star that a very powerful supernova explosion ('hypernova') occurred.
We estimate the progenitors of transient sources to be formed at a rate of 300 GEM (Galactic Events per Megayear). Since this is much greater than the observed rate of GRBs, there must be strong collimation and possible selection of high magnetic fields in order to explain the discrepancy.
We believe that there are strong reasons that a GRB must be associated with a black hole, at least those of duration several seconds or more discussed here. Firstly, neutrinos can deliver energy from a stellar collapse for at most a few seconds, and sufficient power for at most a second or two. Our quantitative estimates show that the rotating black hole can easily supply the energy as it is braked, provided the ambient magnetic field is sufficiently strong. The black hole also solves the baryon pollution problem: we need the ejecta that give rise to the GRB to be accelerated to a Lorentz factor of 100 or more, whereas the natural energy scale for any particle near a black hole is less than its rest-mass energy. Consequently, we have a distillation problem of taking all the energy released and putting it into a small fraction of the total mass. The use of a Poynting flux from a black hole in a magnetic field (Blandford & Znajek 1977) does not require the presence of much mass, and uses the rotation energy of the black hole, so it provides naturally clean power.
Of course, nature is extremely inventive, and we do not claim that all GRBs will fit into the framework outlined here. We would not expect to see all of the highly beamed jets following from the BZ mechanism head on, the jets may encounter some remaining hydrogen envelope in some cases, jets from lower magnetic fields than we have considered here may be much weaker and delivered over longer times, etc., so we speculate that a continuum of phenomena may exist between normal supernovae and extreme hypernovae/GRBs. This is why we call our effort "A Theory of Gamma Ray Bursts" and hope that it will be a preliminary attempt towards systematizing the main features of the energetic bursts.
We would like to thank Stan Woosley for much useful information. Several conversations with Roger Blandford made it possible for us to greatly improve our paper, as did valuable comments from Norbert Langer. This work is partially supported by the U.S. Department of Energy Grant No. DE-FG02-88ER40388. HKL is supported also in part by KOSEF Grant No. 1999-2-112-003-5 and by the BK21 program of the Korean Ministry of Education.
A. Estimates of ε_Ω = Ω_disk/Ω_H

We collect here useful formulas needed to calculate ε_Ω = Ω_disk/Ω_H. First of all, Ω_disk = ω Ω_K, where Ω_K ≡ (GM/R³)^{1/2} and ω is a dimensionless parameter (0 < ω < 1). Thus ε_Ω = ω Ω_K/Ω_H. The numerical estimates are summarized in Table 2 for various ω and radii.
B. Spin-up of Black Holes by Accretion
The specific angular momentum and energy of test particles in Keplerian circular motion, with rest mass δm, are

Ẽ ≡ E/δm = c² [r² − R_Sch r + a (R_Sch r/2)^{1/2}] / {r [r² − (3/2) R_Sch r + a (2 R_Sch r)^{1/2}]^{1/2}},
l̃ ≡ l/δm = c (R_Sch r/2)^{1/2} [r² − a (2 R_Sch r)^{1/2} + a²] / {r [r² − (3/2) R_Sch r + a (2 R_Sch r)^{1/2}]^{1/2}},   (B1)

where R_Sch = 2GM/c² and the BH spin a = J/Mc = ã (GM/c²). The accretion of δm changes the BH's total mass and angular momentum by ∆M = Ẽ δm and ∆J = l̃ δm. The radii of the marginally bound (r_mb) and marginally stable (r_ms) prograde orbits are r_mb = (GM/c²) [1 + (1 − ã)^{1/2}]² and r_ms = (GM/c²) [3 + Z_2 − ((3 − Z_1)(3 + Z_1 + 2Z_2))^{1/2}], where Z_1 = 1 + (1 − ã²)^{1/3} [(1 + ã)^{1/3} + (1 − ã)^{1/3}] and Z_2 = (3ã² + Z_1²)^{1/2}. The numerical values of the specific angular momentum and energy of test particles are summarized in Table 3 and Fig. 5. In Fig. 6, we test how much mass we need in order to spin up a non-rotating black hole to a given ã. Note that the last stable orbit is almost Keplerian even with the accretion disk, and we assume 100% efficiency of angular momentum transfer from the last stable Keplerian orbit to the BH. In order to spin up the BH to ã = 0.9, we need ∼ 68% (52%) of the original non-rotating BH mass in the case of r_lso = r_ms (r_mb). For a very rapidly rotating BH with ã = 0.99, we need 122% and 82%, respectively. For r_lso = r_ms there is an upper limit, ã = 0.998, which can be obtained by accretion (Thorne 1974). In the limit where r_lso = r_mb, however, spin-up beyond this limit is possible because photons can be captured inside the thick accretion disk, and finally into the BH (Abramowicz et al. 1988).
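As an illustration of how eq. (B1) produces the quoted spin-up numbers, the sketch below (ours, not part of the original appendix) integrates ∆M = Ẽ δm and ∆J = l̃ δm step by step for accretion from the marginally stable orbit, starting from a non-rotating hole, in units with G = c = 1. With a small enough step it should reproduce the ∼68% of the initial mass needed to reach ã = 0.9.

import math

def isco_radius(M, a_star):
    # marginally stable prograde orbit (Bardeen, Press & Teukolsky 1972), G = c = 1
    Z1 = 1.0 + (1.0 - a_star**2)**(1.0/3.0) * ((1.0 + a_star)**(1.0/3.0) + (1.0 - a_star)**(1.0/3.0))
    Z2 = math.sqrt(3.0 * a_star**2 + Z1**2)
    return M * (3.0 + Z2 - math.sqrt((3.0 - Z1) * (3.0 + Z1 + 2.0 * Z2)))

def circular_orbit_E_l(r, M, a_star):
    # specific energy and angular momentum of a prograde circular orbit,
    # equivalent to eq. (B1) with G = c = 1
    a = a_star * M
    denom = r**0.75 * math.sqrt(r**1.5 - 3.0 * M * math.sqrt(r) + 2.0 * a * math.sqrt(M))
    E = (r**1.5 - 2.0 * M * math.sqrt(r) + a * math.sqrt(M)) / denom
    l = math.sqrt(M) * (r**2 - 2.0 * a * math.sqrt(M * r) + a**2) / denom
    return E, l

# spin up a Schwarzschild hole (initial mass 1) by accretion from r_ms
M, J, m0, dm = 1.0, 0.0, 0.0, 1e-4
a_star = 0.0
while a_star < 0.9:
    r = isco_radius(M, a_star)
    E, l = circular_orbit_E_l(r, M, a_star)
    M, J, m0 = M + E * dm, J + l * dm, m0 + dm
    a_star = J / M**2
print(m0)   # rest mass accreted, in units of the initial BH mass; ~0.68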
Fig. 1. The black hole in rotation about the accretion disk. A circuit, in rigid rotation with the black hole, is shown. This circuit cuts the field lines from the disk as the black hole rotates and, by Faraday's law, produces an electromotive force. This force drives a current. A more detailed discussion is given in the text.
Fig. 2. Magnetic field lines, anchored in the disk, which thread the black hole, coupling the disk rotation to that of the black hole.
Fig. 3. The upper panel shows the increase in the Kerr parameter for various models for the disk interior to the inner boundary at 50 km. "Thin" (dash-dot), neutrino-dominated (thick solid) and advection-dominated (short dash) models are shown for initial Kerr parameter ã_init = 0.5. The lower panel shows the growth of the gravitational mass of the black hole. The short-dashed line shows the growth in baryonic mass of the black hole, since for a pure advective model no energy escapes the inner disk.
The Role of PKGIα and AMPK Signaling Interplay in the Regulation of Albumin Permeability in Cultured Rat Podocytes
The permeability of the glomerular filtration barrier (GFB) is mainly regulated by podocytes and their foot processes. Protein kinase G type Iα (PKGIα) and adenosine monophosphate-activated protein kinase (AMPK) affect the contractile apparatus of podocytes and influence the permeability of the GFB. Therefore, we studied the interplay between PKGIα and AMPK in cultured rat podocytes. The glomerular permeability to albumin and transmembrane FITC-albumin flux decreased in the presence of AMPK activators and increased in the presence of PKG activators. The knockdown of PKGIα or AMPK with small interfering RNA (siRNA) revealed a mutual interaction between PKGIα and AMPK and influenced podocyte permeability to albumin. Moreover, PKGIα siRNA activated the AMPK-dependent signaling pathway. AMPKα2 siRNA increased basal levels of phosphorylated myosin phosphatase target subunit 1 and decreased the phosphorylation of myosin light chain 2. Podocytes that were treated with AMPK or PKG activators were characterized by the different organization of actin filaments within the cell. Our findings suggest that mutual interactions between PKGIα and AMPKα2 regulate the contractile apparatus and permeability of the podocyte monolayer to albumin. Understanding this newly identified molecular mechanism in podocytes provides further insights into the pathogenesis of glomerular disease and novel therapeutic targets for glomerulopathies.
Introduction
The glomerular filtration barrier (GFB) is responsible for the ultrafiltration of blood plasma that flows through successive layers of the GFB. It is composed of fenestrated endothelial cells, the glomerular basement membrane, and slit diaphragms (SDs), which form between neighboring foot processes (FPs) of podocytes [1]. Podocytes consist of the cell body that floats above the glomerular capillary, major processes, and highly dynamic FPs that attach podocytes to capillaries [2,3]. The complex structure of podocytes is determined by a highly organized actin cytoskeleton [4]. Actin reorganization and alterations of albumin permeability in the podocyte filtration monolayer are closely related to the activity of cyclic guanosine monophosphate (cGMP)-dependent protein kinase G type Iα (PKGIα).
PKGI is a serine/threonine kinase homodimer that regulates the relaxation of the contractile apparatus [5]. PKGI exists in two isoforms: PKGIα and PKGIβ [6]. The classic activation of the enzyme is associated with the binding of cGMP to binding sites of the kinase [7]. An alternative mechanism of PKGIα activation is based on enzyme dimerization, in which a disulfide bond forms between adjacent Cys42 residues in the PKGIα homodimer complex [8] and impairs its activation through the classic cGMP-dependent pathway [9]. Our recent studies have shown that insulin [10], hydrogen peroxide (H2O2) [11], and AMPK signaling influence filtration barrier permeability in cultured rat podocytes. Our results identified a potentially important new mechanism that may be injurious to podocytes in diabetes and affect filtration barrier permeability. Moreover, understanding the interplay between PKGIα and AMPK signaling in podocytes will provide further insights into glomerular disease pathogenesis and novel therapeutic targets for glomerulopathies.
Downregulation of AMPK and PKGIα Affects Albumin Permeability across the Podocyte Monolayer
Previous studies showed that PKGIα activation is associated with an increase in the permeability of the podocyte monolayer to albumin, whereas AMPK activity is linked to the opposite effect. To determine whether changes in podocyte permeability to albumin arise from crosstalk between AMPKα and PKGIα, podocytes were transfected with siRNA that targeted PKGIα, AMPKα1, or AMPKα2. To assess the efficacy of siRNA transfection, the expression of PKGIα, AMPKα1, and AMPKα2 proteins was determined in podocytes that were transfected with PKGIα, AMPKα1, or AMPKα2 siRNA. PKGIα siRNA-treated cells exhibited significantly (56%) lower levels of PKGIα protein (from 0.687 ± 0.072 to 0.300 ± 0.037, n = 4, p < 0.01; Figure 2A). The downregulation of AMPKα1 or AMPKα2 gene expression resulted in a 53% decrease in protein levels (from 0.150 ± 0.008 to 0.070 ± 0.019, n = 3, p < 0.05) for AMPKα1 (Figure 2B) and a 37% decrease in protein levels (from 0.291 ± 0.027 to 0.182 ± 0.024, n = 3, p < 0.05) for AMPKα2 (Figure 2C). Transfection with scrambled siRNA did not alter podocyte permeability to albumin (Figure 2D).
PKGIα Affects AMPK Activity in Podocytes
Next, to investigate the effect of PKGIα on AMPKα phosphorylation, the PKGIα expression was selectively knocked down, and podocytes were incubated with AICAR or MTF. Consistent with numerous previous studies, Figure 3A shows that AMPKα phosphorylation levels increased by 37% (0.718 ± 0.039 vs. 0.984 ± 0.024, n = 4, p < 0.01) for AICAR and by 35% (0.718 ± 0.039 vs. 0.968 ± 0.080, n = 4, p < 0.05) for MTF. Unexpectedly, the siRNA-mediated knockdown of PKGIα increased the AMPKα phosphorylation levels by 42% (0.718 ± 0.039 vs. 1.017 ± 0.087, n = 4, p < 0.05; Figure 3A) in control cells. We next tested whether PKGIα activation influences AMPKα phosphorylation. 8-Br-cGMP and H 2 O 2 were used as PKGIα activators. 8-Br-cGMP is responsible for the classical activation of PKGIα, whereas H 2 O 2 induces the non-canonical activation of the enzyme, called oxidative activation, involving the formation of an intermolecular disulfide. The incubation of podocytes with either H 2 O 2 or 8-Br-cGMP increased the phosphorylation state of AMPKα 4.3-fold (0.135 ± 0.006 vs. 0.583 ± 0.038, n = 3-4, p < 0.0001; Figure 3B) and 2.4-fold (0.135 ± 0.006 vs. 0.319 ± 0.013, n = 3-4, p < 0.01; Figure 3B), respectively. The siRNA-mediated silencing of AMPKα1 ( Figure 3B) or AMPKα2 ( Figure 3C) had no influence on the H 2 O 2 -dependent increase in AMPKα phosphorylation. However, the effect of 8-Br-cGMP on the level of AMPKα phosphorylation was substantially decreased by AMPKα1 siRNA ( Figure 3B). As shown in Figure 3C, AMPKα phosphorylation did not decrease in podocytes that were transfected with AMPKα2 siRNA alone, but the positive effect of 8-Br-cGMP on the AMPKα phosphorylation state was slightly reduced by AMPKα2 siRNA. These results suggest that PKGIα's interaction with AMPK regulates the activity of both AMPK isoforms (α1 and α2). However, the AMPKα2 isoform may be mainly involved in signal transduction, which is associated with the classic activation of PKGIα by cGMP.
PKG and AMPK Modulators Affect Nucleotide Concentrations in Cultured Rat Podocytes
The classic activation of PKG and AMPK is based on changes in cGMP and ATP concentrations, respectively. Therefore, we investigated whether PKG and AMPK modulators alter the concentration of adenine (AMP, adenosine diphosphate (ADP), and ATP) and guanine (GMP, guanosine diphosphate (GDP), and GTP) nucleotides. Podocytes can produce energy, reflected by the high intracellular concentrations of GTP ( Figure 5A) and ATP ( Figure 5C). The use of 8-Br-cGMP increased GMP, GDP, and GTP concentrations by 31%, 67%, and 20%, respectively ( Figure 5B). However, it did not affect the concentrations of adenine nucleotides ( Figure 5D). Podocytes that were treated with Rp-8-Br-cGMPS were characterized by lower amounts of GMP and GTP ( Figure 5B), and ADP and ATP ( Figure 5D). In a reverse procedure, AMPK modulators were administered. Subsequently, the effect of an AMPK inhibitor, compound C, on nucleotide concentration in podocytes was determined. Compound C significantly reduced GMP (58%), GTP (20%), ADP (17%), and ATP (26%) concentrations but increased AMP concentrations from 1.127 ± 0.264 to 2.922 ± 0.434 (n = 3, p < 0.05; Figure 6B,D). MTF did not affect the concentration of guanine or adenine nucleotides, with the exception of ADP ( Figure 6B,D).
Overall, these experiments suggest that PKG and AMPK modulators affect nucleotide levels, which may subsequently impact PKGIα and AMPK activity in podocytes.
PKGIα and AMPK Affect Actin Cytoskeleton Architecture in an Antagonistic Manner
Based on our findings that PKGIα and AMPKα2 mutually regulated the phosphorylation states of MYPT1 and MLC, we next investigated whether changes in PKGIα and AMPK activity are associated with actin cytoskeleton reorganization in rat podocytes. Either PKG or AMPK modulators were administered, which considerably influenced actin filament organization. The PKG activators 8-Br-cGMP and H 2 O 2 and AMPK inhibitor compound C significantly increased F-actin immunostaining near the plasma membrane, whereas the incubation of podocytes with PKG inhibitors or AMPK activators had no effect on actin organization (Figure 8). Cytochalasin D (10 µM, 30 min) was used as a positive control of cytoskeleton disruption ( Figure 8A).
Discussion
Podocytes are contractile cells that dynamically reorganize their actin cytoskeleton to regulate GFB permeability in response to environmental stimuli. In podocytes, PKGIα and AMPK antagonistically regulate filtration barrier permeability through the indirect modulation of actin architecture. Insulin-resistant podocytes are characterized by augmented PKGIα activity and diminished AMPK phosphorylation, resulting in an increase in albumin permeability across the podocyte monolayer and isolated glomeruli [10,29,31]. The inhibition of PKGIα activity or AMPK activation prevented actin cytoskeleton reorganization and decreased albumin permeability across the filtration barrier. Thus, we propose that the interplay between the PKGIα and AMPKα2 activity regulates the contraction apparatus and permeability to albumin in podocytes. The present study revealed a new mechanism in podocytes that may be injurious in diabetes, and alterations of the activity of one of these enzymes may alter filtration barrier permeability.
In the present study, we demonstrated that PKGIα and AMPK antagonistically regulated albumin permeability across the podocyte monolayer and isolated glomeruli (Figure 1). This is consistent with our previous findings, in which the treatment of insulin-resistant podocytes with MTF increased AMPK phosphorylation and decreased permeability to albumin across the podocyte monolayer and diabetic glomeruli [29]. Numerous studies found that AMPK activation improved lung endothelial barrier function by decreasing vascular permeability [32], and reduced both paracellular FITC-dextran permeability across the Caco-2 cell monolayer and intestinal permeability to FITC-dextran 40 in vivo [33]. The AMPK inhibitor compound C exerted a negative effect on filtration barrier function (Figure 1B), whereas the increase in filtration barrier permeability was associated with PKGIα activation by insulin [10] or HG [12]. Additionally, hyperinsulinemic and insulin-resistant obese Zucker rats were characterized by the higher expression of the PKGIα protein, polyuria, and albuminuria [10]. Wu et al. found that the hyperpermeability of coronary venules was mediated by the activation of PKG [34]. Studies of heart tissues also confirmed that PKGIα mediated the increase in vascular permeability [35].
Expression levels of AMPKα isoforms differ in various cells. Based on our recent findings that AMPKα1 and AMPKα2 are constitutively expressed in podocytes [27], we studied the effects of these two isoforms on podocyte monolayer permeability to albumin. An increase in albumin permeability across the podocyte monolayer was observed only in podocytes with the knockdown of AMPKα2 expression (Figure 2F), suggesting a regulatory role for this isoform in albumin permeability. Furthermore, the co-immunoprecipitation and immunofluorescence results demonstrated that both AMPKα isoforms interact with PKGIα (Figure 4). The functions of the AMPKα1 and AMPKα2 isoforms partially overlap. In mouse primary proximal tubular cells, the α isoforms decrease cell death that is caused by metabolic stress, and the α isoforms of AMPK can substitute for each other [36]. Mahboubi et al. demonstrated that both AMPKα1 and AMPKα2 isoforms are relevant for cell survival in response to stress [37]. A previous study showed that 8-Br-cGMP increased the degree of AMPK phosphorylation at Thr172 [38]. The present study found that targeting AMPKα1 or AMPKα2 for knockdown did not affect the phosphorylation of the enzyme by H2O2 or the cGMP analog (Figure 3B,C). To explain this lack of change in the phosphorylation state of AMPK in the podocytes that exhibited the downregulation of AMPKα1 or AMPKα2 expression, we suggest that there is a compensatory mechanism involving the intact α isoform in both AMPKα knockdown sets of podocytes. However, the two α isoforms also exhibit unique functions within the cell. A growing body of evidence suggests that different AMPK-dependent cellular effects are determined by the stimulation of the AMPKα1 or AMPKα2 isoform. The AMPKα2 isoform responds to transient receptor potential channel 6-dependent Ca2+ signaling, and is involved in the insulin-dependent regulation of glucose uptake in cultured rat podocytes [21]. Szrejder et al. postulated that MTF induced AMPKα1 activation to reduce transient receptor potential channel 6 expression in podocytes that were exposed to HG [29]. However, MTF increases the activity of both AMPKα1 and AMPKα2 isoforms in skeletal muscle cells, where enzyme activation is linked to an increase in glucose uptake [39]. This demonstrates that the action of AMPKα also depends on the type and function of the cell. The stimulation of brain microvascular endothelial cells by vasodilators activates the endothelial nitric oxide synthase/nitric oxide (NO) signaling pathway by the Ca2+-dependent stimulation of AMPKα1, leading to acute vascular permeability [24]. Nitric oxide signaling activates the cGMP/PKG signaling pathway and induces vasodilatation through a decrease in intracellular Ca2+ concentration [40]. One speculation is that the AMPKα1-dependent activation of the NO/cGMP/PKG signaling pathway might increase vascular permeability in brain endothelial cells. In the present study, transfection with PKGIα siRNA significantly reduced transmembrane albumin flux across the podocyte monolayer to values similar to those obtained with MTF and AICAR. Altogether, these findings suggest that the reciprocal regulation of PKGIα and AMPKα2 activity impacts podocyte permeability to albumin under physiological conditions.
The incubation of ventromedial hypothalamus neurons with the cGMP analog 8-Br-cGMP also increased AMPKα2 phosphorylation [41], suggesting that PKG may influence AMPKα activity. H 2 O 2 is also known to be implicated in the oxidative activation of PKGIα [11] in podocytes, and increases AMPK phosphorylation through an increase in the intracellular AMP/ATP ratio [42]. AMPK activation leads to a decrease in reactive oxygen species generation by NADPH oxidase [43]. AMPK may regulate PKGIα activity through the inhibition of the NADPH oxidase-dependent production of reactive oxygen species in cultured rat podocytes, but this hypothesis needs to be verified. The present study found that PKGIα affected AMPKα phosphorylation in podocytes. The downregulation of PKGIα expression and PKG activators increased basal levels of phosphorylated AMPKα ( Figure 3A). Furthermore, AMPKα1 siRNA partially abolished the positive effect of 8-Br-cGMP on the phosphorylation state of AMPKα ( Figure 3A). We also found that H 2 O 2 and 8-Br-cGMP treatment increased the amount of the AMPKα/PKGIα complex ( Figure 4B) and increased the colocalization of PKGIα with AMPKα1 and AMPKα2 without altering the cellular distribution of these proteins in podocytes ( Figure 4C). These results are consistent with Ramnanan et al., in which both AMPKα1 protein levels and activity increased in PKG immunoprecipitates from estivated snail foot muscle and hepatopancreas [44]. The H 2 O 2 -and 8-Br-cGMP-dependent activation of PKGIα and both AMPKα isoforms might promote the recruitment of these enzymes to a protein complex, where PKGIα and AMPKα reciprocally modulate each other's activity and might coordinate the intensity of signaling to appropriate effectors. Increases in the PKGIα and AMPKα interaction in response to specific stimuli may be necessary for the activation of these enzymes. Notably, PKGIα and AMPK activity also depends on GTP and ATP levels; therefore, changes in nucleotide concentrations may modify PKGIα-and AMPK-dependent signaling. We found that cGMP analogs, such as 8-Br-cGMP and Rp-8-Br-cGMPS, exert a minimal effect on GTP and ATP levels ( Figure 5), which may result from changes in the local pool of GTP and ATP that are necessary to switch on/off individual signaling pathways.
The podocyte contractile apparatus consists of actin filaments, myosin II, α-actinin-4, synaptopodin, talin, vinculin, and vimentin [45,46]. Cell contraction is based on a direct interaction between myosin and actin filaments. The contractility of the apparatus depends largely on the MLC phosphorylation state. Previous studies showed that the insulin- or 8-Br-cGMP-dependent activation of PKGIα increased MYPT1 phosphorylation and decreased MLC phosphorylation, resulting in the reorganization of the actin cytoskeleton in podocytes [47]. In vascular smooth muscle cells, AMPK is also involved in the regulation of cell contractility [25,30], but the exact role of AMPK in regulating the podocyte contractile apparatus is poorly known.
In the present study, we confirmed that siRNA-dependent PKGIα gene-silencing decreased MYPT1 phosphorylation as much as AICAR or MTF ( Figure 7A). Moreover, the selective knockdown of PKGIα expression markedly increased MLC phosphorylation, and the effect was stronger than that of the treatment with AMPK activators alone ( Figure 7D). However, we did not observe any additive effects of PKGIα siRNA and AMPK activators on the MLC phosphorylation state. This suggests that AMPK may maintain the phosphorylation of MYPT1 and MLC at basal levels through the indirect attenuation of PKGIα activity or inhibition of the interaction between PKGIα and MYPT1, resulting in the protection of the podocyte contractile apparatus against its uncontrolled relaxation. This hypothesis appears to be supported by our findings that siRNA against AMPKα2 markedly increased the basal levels of phosphorylated MYPT1 but decreased phosphorylated MLC to values that were similar to PKG activators ( Figure 7C,F). We did not observe any changes in the phosphorylation state of MLC after transfecting podocytes with AMPKα1 siRNA, but the effects of 8-Br-cGMP and H 2 O 2 on MLC phosphorylation were attenuated in these cells ( Figure 7B,E). These results may suggest that AMPKα2 is involved in regulating the MLC phosphorylation state, and crosstalk between PKGIα and AMPKα2 activity may control the contractile apparatus in podocytes.
Changes in MLC phosphorylation appear to correspond to the alterations of the organization of actin filaments in podocytes after the application of PKG and AMPK modulators. 8-Br-cGMP-and H 2 O 2 -treated cells are characterized by the accumulation of actin filaments near the plasma membrane, and a similar effect was observed with the administration of the AMPK inhibitor compound C (Figure 8).
Actin remodeling is controlled by the concentration of Ca2+ [48,49] and by small GTP-binding proteins, such as Rac1, RhoA, and Cdc42 [50]. In podocytes, we found that the insulin-dependent activation of PKGIα increased Rac1 activity, which triggers actin filament reorganization through the filament-severing function of cofilin [51]. Moreover, Rac1 silencing restored basal MLC phosphorylation to control levels and prevented actin remodeling in podocytes that were treated with insulin or 8-Br-cGMP [51]. Experiments on insulin-treated podocytes also showed that the Ca2+-dependent activation of AMPKα2 was required to stimulate the Rac1/PAK/cofilin pathway, resulting in actin filament rearrangement [21]. These findings suggest that Rac1-dependent actin remodeling may be at least partially under the control of the PKGIα/AMPKα2 complex.
Preparation and Culture of Rat Podocytes
All experiments were performed in accordance with directive 2010/63/EU for animal experiments, and the protocol was approved by the local ethics committee of the University of Science and Technology, Bydgoszcz, Poland.
Western Blot
To obtain podocyte lysates, the cells were treated with lysis buffer (1% Nonidet P-40, 20 mM Tris, 140 mM NaCl, 2 mM ethylenediaminetetraacetic acid, and 10% glycerol) in the presence of protease (Sigma-Aldrich, Saint Louis, MO, USA) and phosphatase (Roche, Basel, Switzerland) inhibitor cocktails and homogenized at 4 °C by scraping. Proteins in the supernatant were separated on a 10% sodium dodecyl sulfate (SDS)-polyacrylamide gel and electrotransferred to polyvinylidene difluoride (PVDF) membranes. The membranes were probed with the appropriate primary antibodies. To detect the primary antibodies, the membranes were incubated with appropriate alkaline phosphatase-labeled secondary antibodies (Sigma-Aldrich, Saint Louis, MO, USA). The protein bands were visualized using the colorimetric 5-bromo-4-chloro-3-indolyl phosphate/nitroblue tetrazolium system. The densitometric quantification of bands was performed using Quantity One 4.6.6 software (Bio-Rad Laboratories, Hercules, CA, USA).
Immunoprecipitation
Podocyte extracts were precleared with mouse IgG plus Protein A/G-PLUS Agarose at 4 • C for 1 h and then incubated with an appropriate primary antibody plus Protein A/G-PLUS Agarose at 4 • C overnight. The agarose beads were washed gently with lysis buffer. Proteins were then eluted from the beads by adding SDS loading buffer. Afterward, the sample was boiled for 5 min and analyzed by Western blot.
siRNA Transfection
Podocytes were transfected with siRNA that targeted PKGIα, AMPKα1, or AMPKα2 or non-silencing siRNA (scrambled siRNA, negative control; Santa Cruz Biotechnology, Dallas, TX, USA). Cells were cultured in RPMI-1640 medium that was supplemented with 10% fetal bovine serum (FBS). One day before the experiment, the culture medium was changed to antibiotic-free RPMI-1640, which was supplemented with 10% FBS. The cells were transfected with siRNAs using siRNA Transfection Reagent (OriGene, Rockville, MD, USA) according to the manufacturer's instructions. Briefly, the targeted siRNA or scrambled siRNA was diluted in transfection medium (final concentration, 80 nM), mixed with siRNA transfection reagent, and incubated for 30 min at room temperature. The transfection medium was then added to the transfection mixture, mixed gently, and added to the podocytes. After 7 h, growth medium that was supplemented with 2× higher concentrations of FBS and antibiotics was added to the cells. Afterward, the podocytes were incubated for an additional 24 h. After transfection, gene-silencing was checked at the protein level by Western blot.
Immunofluorescence
Podocytes were seeded on coverslips that were coated with type I collagen (Becton Dickinson Labware, Becton, UK) and cultured in RPMI-1640 medium that was supplemented with 10% FBS. Cells were fixed in phosphate-buffered saline (PBS) plus 4% formaldehyde for 20 min at room temperature. Fixed podocytes were permeabilized with 0.1% Triton-X for 3 min and then blocked with PBSB solution (PBS plus 2% FBS, 2% bovine serum albumin (BSA), and 0.2% fish gelatin) for 1 h. After blocking, the cells were incubated with anti-AMPKα1 (1:100), anti-AMPKα2 (1:100), and anti-PKGIα (1:15) antibodies in PBSB at 4 • C for 1.5 h. The primary antibodies were incubated with a blocking peptide to eliminate nonspecific staining. Next, the cells were washed three times with cold PBS and incubated with secondary antibodies that were conjugated to Alexa Fluor 488 (1:750) or Alexa Fluor 546 (1:750). Specimens were imaged using a confocal laser scanning microscope (Leica SP8X, Wetzlar, Germany) with a 63× oil immersion lens. Actin was stained using Alexa Fluor 633 phalloidin (1:200) and imaged using a Nikon Ti Eclipse confocal laser scanning microscope (Nikon Instruments Inc., Minato, Tokyo, Japan) with a 40× lens.
Permeability Assay
The transepithelial permeability to albumin was investigated by measuring the diffusion of FITC-labeled BSA (Sigma-Aldrich, Saint Louis, MO, USA, catalog no. A9771) across the podocyte monolayer as described previously [11,52]. Briefly, podocytes (25 × 10 3 cells/well) were seeded on 3-µm membrane pore size cell culture inserts that were coated with type IV collagen (Corning, NY, USA) and placed in 24-well plates. Transwell permeability experiments were conducted on differentiated cells between 7 and 15 days post-seeding. Before the experiments, podocytes were washed twice with PBS, and the medium on both sides of the insert was replaced with serum-free RPMI-1640 medium (SFM). After 2 h, the medium in the upper compartment was replaced with 0.3 mL of fresh SFM, and the medium in the lower compartment was replaced with 1.3 mL of SFM that was supplemented with 1 mg/mL FITC-albumin. After 1 h of incubation, 150 µL of the solution from the upper chamber was transferred to a 96-well plate, and the absorbance of FITCalbumin was measured at 490 nm using an EL808 Absorbance Reader (BioTek Instruments, Winooski, VT, USA).
Albumin concentrations were calculated based on standard concentrations that were prepared in SFM, ranging from 0.01 to 0.5 mg/mL FITC-albumin. The emission signals from SFM were subtracted from the standards and FITC-albumin samples. The linear calibration curve was plotted, with the standard concentration on the x-axis and optical density values on the y-axis. Based on the calibration curve, the equation of the straight line that fits the standard concentration data was generated. The FITC-albumin concentrations were calculated based on the linear function y = ax + b, where y is the optical density value, a is the slope of the line, x is the unknown FITC-albumin concentration, and b is the y-intercept. The variation in FITC-albumin concentration between separate experiments may be the result of using inserts from different manufacturers. We previously used BioCoat Control Inserts with 3.0 µm PET Membrane (catalog no. 354575). However, this product was discontinued, and Transwell-COL Permeable Supports with a 3.0 µm PTFE Membrane (Costar, catalog no. 3496, Corning, NY, USA) were used instead. The membranes are made of different materials that may affect permeability to FITC-albumin in some way.
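The calibration and back-calculation described above amount to a linear least-squares fit followed by inversion of the fitted line; the sketch below is ours, and the optical density readings are made-up example values, not data from the study.

import numpy as np

# calibration standards: FITC-albumin concentration (mg/mL) vs optical density
std_conc = np.array([0.01, 0.05, 0.1, 0.25, 0.5])      # standard concentrations
std_od   = np.array([0.02, 0.09, 0.18, 0.44, 0.88])    # blank-subtracted, example values

a, b = np.polyfit(std_conc, std_od, 1)    # fit y = a*x + b

def od_to_concentration(od):
    # invert the calibration line to recover the FITC-albumin concentration
    return (od - b) / a

sample_od = np.array([0.12, 0.30])        # blank-subtracted sample readings (example)
print(od_to_concentration(sample_od))     # mg/mL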
Isolation of Rat Glomeruli
Kidneys from 6 week old male Wistar rats were removed and placed in supplemented ice-cold PBS (pH 7.4; 137 mM NaCl, 2.7 mM KCl, 8.1 mM Na 2 HPO 4 , 1.5 mM KH 2 PO 4 , 0.49 mM MgCl 2 , 0.9 mM CaCl 2 , and 5.6 mM glucose). Next, the renal capsule was removed, and the cortex was minced with a razor blade and then pressed through a system of sieves with decreasing pore diameters (250, 125, and 75 µm). The obtained cell suspension contained decapsulated glomeruli without afferent and efferent arterioles. The entire procedure was performed in an ice bath and completed in less than 1 h.
Glomerular Permeability to Albumin In Vitro
The permeability of the glomerular capillary wall in response to an oncotic gradient, generated by changes in the defined concentration of albumin in the experimental medium, was measured as described previously [53] with slight modifications. Isolated glomeruli were affixed to 0.1% poly-L-lysine-coated plates for 10 min. Unattached glomeruli were removed by gently washing with fresh 5% BSA in supplemented PBS. Subsequently, the glomeruli were incubated for an additional 5 min, and the volume responses of glomeruli to changes in albumin concentration were recorded. Glomeruli that were incubated in 5% BSA medium were treated with an AMPK inhibitor (100 µM compound C, 20 min) or AMPK activators (MTF, 2 mM, 30 min; AICAR, 100 µM, 20 min) and a PKG activator (100 µM 8-Br-cGMP, 5 min) or PKG inhibitor (100 µM Rp-8-Br-cGMPS, 20 min) at 37 °C. Next, the compounds were washed out twice with 5% BSA medium. The 5% BSA medium was then replaced with 1% BSA medium to generate an oncotic gradient across the glomerular capillary wall. Control glomeruli were treated with equivalent volumes of 5% BSA medium, which did not generate an oncotic gradient. Changes in glomerular volume were recorded by videomicroscopy (Olympus IX51 microscope, Olympus Corporation, Tokyo, Japan) before the activity modulators were added and 1 min after they were added. Glomerular volume (V) was calculated from the surface area (S) of the glomerulus using CellSens Dimension 1.18 software (Olympus Corporation, Tokyo, Japan) according to the formula V = (4/3) × S × √(S/π) / 10⁶. There is a direct relationship between the increase in glomerular volume (∆V), calculated as (V_final − V_initial)/V_initial, and the oncotic gradient (∆Π) that is applied across the capillary wall. This principle was used to calculate the reflection coefficient of albumin (σ_alb), defined as the ratio of ∆V measured in the presence (experimental) and absence (control) of an oncotic gradient: σ_alb = ∆V_experimental/∆V_control. The σ_alb value was then used to calculate glomerular capillary permeability to albumin (P_alb), expressed as P_alb = 1 − σ_alb, which describes the albumin current flow consequent to water flow. To obtain reliable results and preserve glomerular viability during the experiment, the glomerular permeability assay was performed for no more than 1 h; therefore, the glomeruli were incubated with compounds for ≤30 min. At least 12-16 glomeruli isolated from four rats were studied.
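For concreteness, the volume and permeability bookkeeping can be written out as below; this is our own sketch, and the surface areas are arbitrary example numbers, not data from the study.

import math

def glomerular_volume(S_um2):
    # volume from projected surface area S (um^2): V = (4/3) * S * sqrt(S/pi) / 1e6
    return (4.0 / 3.0) * S_um2 * math.sqrt(S_um2 / math.pi) / 1e6

def delta_V(S_initial, S_final):
    # relative volume increase (V_final - V_initial) / V_initial
    return (glomerular_volume(S_final) - glomerular_volume(S_initial)) / glomerular_volume(S_initial)

# example surface areas (um^2) before/after the oncotic gradient is applied
dV_experimental = delta_V(31000.0, 34000.0)
dV_control      = delta_V(31000.0, 35000.0)

sigma_alb = dV_experimental / dV_control      # reflection coefficient of albumin
P_alb     = 1.0 - sigma_alb                   # glomerular permeability to albumin
print(sigma_alb, P_alb)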
Extraction of Nucleotides and High-Performance Liquid Chromatography
The extraction of nucleotides from cells was performed by modifying the procedure of Smolenski et al. [54]. Podocytes were differentiated and cultured on six-well plates. On the day of the experiment, the cells were washed with PBS, and 0.5 mL of cold 0.4 M HClO4 was added to each well. The plate was frozen at −80 °C for 24 h. Afterward, the cells were thawed on ice, collected in Eppendorf tubes, and centrifuged at 14,000 rotations per minute for 10 min at 4 °C. The supernatants were adjusted to a neutral pH with 2 M K2HPO4, centrifuged, filtered with 0.2 µm RC-membranes (Minisart RC4, Sartorius, UK), and analyzed by high-performance liquid chromatography (HPLC) with a UV-Vis detector.
Nucleotides were quantified using a Perkin Elmer Series 200 that consisted of a chromatographic interface (Link 600), binary pump, UV-Vis detector, and vacuum degasser. A Gemini 5 µm C18 110 Å 150 × 4.6 mm column, protected by a Gemini C18 4 × 3 mm guard column (Phenomenex, Torrance, CA, USA), was used for chromatographic separation. All compounds were detected at a wavelength of 254 nm. The mobile phase was adapted from a previous study [55] and consisted of a 50 mM phosphate buffer with 4 mM tetrabutylammonium hydrogen sulfate, adjusted to pH 6 with orthophosphoric acid (solution A) and HPLC-grade acetonitrile (solution B). The flow rate was 1 mL/min, and the applied gradient was the following: 0-5 min (95% solution A and 5% solution B), 5-12 min (amount of solution B increased linearly to 15%), 12-15 min (85% solution A, 15% solution B), and 15-17 min (gradient returned linearly to initial conditions of 95% solution A and 5% solution B). The runtime for the elution of nucleotides was 17 min. The column was equilibrated between injections for 20 min. The injection volume was set to 100 µL.
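The gradient program can be summarised as a piecewise-linear profile of solution B over the 17-min run; the sketch below simply interpolates between the breakpoints given above and is not part of the instrument method file.

```python
import numpy as np

time_min  = [0, 5, 12, 15, 17]   # gradient breakpoints (min), taken from the text above
percent_b = [5, 5, 15, 15, 5]    # % solution B (acetonitrile) at each breakpoint

def solvent_b(t: float) -> float:
    """Linear interpolation of %B at time t (min) within the 17-min run."""
    return float(np.interp(t, time_min, percent_b))

for t in (0, 5, 8.5, 12, 15, 17):
    print(f"t = {t:>4} min -> {solvent_b(t):.1f}% B, {100 - solvent_b(t):.1f}% A")
```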
Statistical Analysis
All statistical analyses were performed using GraphPad Prism 8 software. The Shapiro-Wilk test was used to determine a normal distribution of datasets. Depending on the result of the normality test, the data were analyzed using a parametric test (analysis of variance followed by Tukey's, Sidak's, or Dunnett's multiple-comparison post hoc test or unpaired t-test) or nonparametric test (Kruskal-Wallis test followed by Dunn's multiple-comparison post hoc test) to determine significance. The data are expressed as means ± SEM. Values of p < 0.05 were considered statistically significant.
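A minimal sketch of this decision flow (normality check followed by a parametric or nonparametric omnibus test) is shown below; the group names and values are hypothetical, and the multiple-comparison post hoc tests named in the text would follow the omnibus test.

```python
from scipy import stats

# Hypothetical example groups; real analyses were run in GraphPad Prism 8
groups = {
    "control":    [1.00, 1.05, 0.98, 1.10, 1.02],
    "compound_c": [1.30, 1.25, 1.40, 1.35, 1.28],
    "metformin":  [0.85, 0.90, 0.88, 0.80, 0.92],
}

# Shapiro-Wilk normality test per group
normal = all(stats.shapiro(values)[1] > 0.05 for values in groups.values())

if normal:
    stat, p = stats.f_oneway(*groups.values())   # one-way ANOVA
    # ...followed by Tukey's, Sidak's, or Dunnett's post hoc test (e.g. via statsmodels)
else:
    stat, p = stats.kruskal(*groups.values())    # Kruskal-Wallis test
    # ...followed by Dunn's post hoc test (e.g. via scikit-posthocs)

print(f"{'parametric' if normal else 'nonparametric'} omnibus test: p = {p:.3g}")
```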
Conclusions
The interplay between PKGIα and AMPKα2 appears to be an important regulatory mechanism of podocytes, which maintains the proper function of the GFB. In pathological states, such as insulin resistance, diabetes, and hyperglycemia, balanced interactions between PKGIα and AMPKα activity might be impaired in podocytes, leading to PKGIα overactivity and an increase in the permeability of the GFB. The newly discovered crosstalk between PKGIα and AMPKα2 broadens our knowledge of the physiology of podocytes and suggests a new mechanism that may be disturbed in diabetes, leading to podocyte dysfunction. Understanding the mechanism of regulation of the PKGIα and AMPKα2 interaction at the molecular level provides further insights into glomerular disease pathogenesis and novel therapeutic targets for glomerulopathies.
Fluorescein angiographic observations of peripheral retinal vessel growth in infants after intravitreal injection of bevacizumab as sole therapy for zone I and posterior zone II retinopathy of prematurity
Aim To evaluate vascularisation of the peripheral retina using fluorescein angiography (FA) digital recordings of infants who had been treated with intravitreal bevacizumab (IVB) as sole therapy for zone I and posterior zone II retinopathy of prematurity (ROP). Methods A retrospective evaluation was performed of medical records, RetCam fundus images and RetCam fluorescein angiogram videos of 10 neonates (20 eyes) who received intravitreal bevacizumab injections as the only treatment for zone I and posterior zone II ROP between August 2007 and November 2012. Results All eyes had initial resolution of posterior disease after IVB injection as documented by RetCam colour fundus photographs. Using a distance of 2 disc diameters from the ora serrata to vascular termini as the upper limit of allowable avascular retina in children, the FA of these infants demonstrated that 11 of 20 eyes had not achieved normal retinal vascularisation. Conclusions Although bevacizumab appears effective in bringing resolution of zone I and posterior zone II ROP and allowing growth of peripheral retinal vessels, in our series of 20 eyes, complete normal peripheral retinal vascularisation was not achieved in half of the patients.
INTRODUCTION
The incidence of retinopathy of prematurity (ROP) has increased globally due to advances in the care of very-low-weight premature infants. In a recent review on the incidence of ROP, 1 the incidence of all ROP was found to be approximately 60% for infants less than 1500 g in high-income countries. Most cases of ROP regress spontaneously; however, more severe cases need treatment to prevent blindness. In middle-income countries greater numbers of premature infants are being saved; however, screening and treatment of severe ROP is often lacking, which in turn is leading to an increase in blindness due to ROP. Six different studies in India have reported the incidence of severe ROP, ranging from 6.3% to 44.9%. 1 Aggressive posterior ROP (AP-ROP) is a severe form of ROP located in zone I or posterior zone II of the retina, and is characterised by rapid progression to advanced stages of disease. 2 3 Even with early laser treatment as suggested in the 'Early Treatment for ROP' (ETROP) study, 4 poor outcomes are still frequently seen in AP-ROP. 5 6 Recently, there have been several encouraging reports of the use of intravitreal bevacizumab as an off-label first line of treatment in neonates with severe ROP. [7][8][9][10][11][12][13] One of the reported benefits of intravitreal bevacizumab as treatment for zone I and posterior zone II ROP is that the development of peripheral retinal vessels continues after treatment, whereas conventional laser therapy leads to permanent destruction of the peripheral retina. 14 In the present work, we report on the results of fluorescein angiography (FA) performed on 10 neonates (20 eyes), which we had treated up to 5 years previously with intravitreal bevacizumab as sole therapy for zone I and posterior zone II ROP. We have evaluated the extent of peripheral retinal vessel growth and remaining avascular retina after a single injection of intravitreal bevacizumab.
All cases were treated and examined at Klinik Mata Nusantara (KMN), an eye hospital in Jakarta, Indonesia. This retrospective study was approved by the Medical Committee of KMN.
Patients
In this retrospective study, we reviewed the records of 17 neonates who had FA after IVB for zone I and posterior zone II ROP. For the purposes of this study, we included 10 neonates who had achieved regression of posterior disease in both eyes with a single injection of bevacizumab and had a minimal follow-up period of 24 weeks after IVB. We excluded six neonates who did not achieve resolution of posterior disease or needed additional treatment before resolution of ROP: one neonate with AP-ROP had resolution of zone I ROP in one eye but developed stage 5 ROP in the other eye; another neonate with AP-ROP needed a second IVB injection to achieve resolution of zone I disease in both eyes; two neonates had not achieved resolution of posterior zone II disease at the last follow-up, and another two neonates needed vitrectomy. One neonate had to be excluded because the child was lost to follow-up after 10 weeks.
At time of IVB, 7 of these 10 cases had been diagnosed as having AP-ROP and 3 cases as having posterior zone II ROP without plus disease. When FA was performed more than once, we evaluated the last FA. Fluorescein angiograms of 10 neonates (20 eyes) were thus evaluated. These neonates had been treated with IVB as a first-line therapy between August 2007 and November 2012. In all cases, regression of posterior disease was documented by RetCam fundus photographs. Gestational age at birth ranged from 28-35 weeks post menstrual age (PMA) (mean=30 weeks), birth weight ranged from 1150-1700 g (mean=1393.2 g), PMA at time of IVB ranged from 32-38 weeks (mean=35.5 weeks). The interval between treatment with IVB and FA ranged from 27-224 weeks.
RetCam FA was performed under general anaesthesia in an operating room at KMN. A 10% solution of fluorescein was injected intravenously at a dose of 0.1 mL/kg followed by an isotonic saline flush. None of the patients experienced systemic complications related to FA.
METHODS
A retrospective analysis of the medical records of all infants that had been treated with IVB at KMN was performed. We extracted medical records of infants who had demonstrated resolution of zone I and posterior zone II ROP with IVB as sole treatment as documented by RetCam colour fundus photographs. Although we have been treating zone I and posterior zone II ROP with IVB since 2006, RetCam FA only became available to us in the latter part of 2011. The medical records of 10 neonates (20 eyes) who had RetCam FA after IVB were used to document resolution of posterior disease. We reviewed the fluorescein digital videos of these 20 eyes to evaluate the extent of remaining avascular retina.
An estimate of the peripheral retinal non-perfusion in the infants was compared to previously published descriptions of FA in children. 15 Blair et al 15 concluded that avascular retina extending more than 2 disc diameters (DD) from the ora serrata should be considered abnormal.
General patterns
Digital video recordings of RetCam FA allowed us to distinctly visualise the anterior border of retinal vessel growth and the vascular-avascular junction of 10 infants who had achieved RetCam documented resolution of posterior disease after treatment with IVB for zone I and posterior zone II ROP. Of 20 eyes examined with FA, 11 had incomplete peripheral retina vascularisation (table 1). Of these 11 eyes, 9 had fluorescein leakage at the vascular-avascular junction. The IVB-FA interval of these eyes with incomplete vascularisation ranged from 27 to 224 weeks (median 87.5 weeks). At the time of IVB, the diagnosis in these children with incomplete retinal vascularisation was AP-ROP in seven cases and posterior zone II ROP without plus disease in three cases. The birth weight of these infants ranged from 1150-1700 g with a mean of 1393.2 g. The gestational age ranged from 28-35 weeks with a mean of 30 weeks PMA.
Case reports
Case no. 5 was a case of AP-ROP (figure 1A,B), which resolved after a single injection of IVB (figure 1C,D). FA performed at 46 weeks after IVB (figure 1E,F) shows less than 2 DD of avascular peripheral retina and no vascular leakage.
Case no. 10 was a case of AP-ROP (figure 2A,B), where there was resolution of posterior disease after IVB (figure 2C,D) but the peripheral retina remained avascular with fluorescein leakage at the vascular-avascular junction more than 4 years after IVB treatment (figure 2E,F).
DISCUSSION
Although numerous authors have reported their experience using bevacizumab in the management of ROP, 9 10 13 at the time of writing there has been only one controlled trial comparing intravitreal bevacizumab to conventional treatment of ROP, the BEAT ROP trial. 10 In that study, the authors concluded that development of peripheral retinal vessels continued after treatment with IVB. In our study, we aimed to evaluate the extent of peripheral retinal growth in eyes with zone I and posterior zone II ROP that were treated with a single injection of intravitreal bevacizumab. Fluorescein angiographic imaging was chosen, as it allowed us to accurately visualise the extent of peripheral retinal vessel growth in these eyes.
In our series of 20 eyes from 10 patients we found that, despite resolution of zone I and posterior zone II ROP after a single injection of IVB, the peripheral retina remained incompletely vascularised in 11 (55%) of the eyes. In addition, we observed fluorescein leakage at the vascular-avascular junction in 9 of these 11 eyes with avascular peripheral retina (82% of total).
The safety of FA in neonates has been established since 2006. 16 17 In 2011, Lepore et al published an atlas of fluorescein angiographic findings in eyes undergoing laser treatment for ROP. The authors concluded that FA clearly defined the zone I junction between vascularised and non-vascularised retina. 18 Recently, Velia et al 19 investigated retinal development in premature infants using FA and revealed vascular changes in ROP eyes, such as loss of the normal dichotomous branching, vessel branching at the junction between vascular and avascular retina, arteriovenous shunts, and other abnormalities that were thought to be related to the immaturity of the vascular network. We observed similar findings such as irregular branching of large arterioles and circumferential vessel formation (figure 4C), and fluorescein leakage (figure 3E,F). Importantly, Velia et al 19 determined that dye leakage is the most significant sign of progression to severe ROP.
Blair et al 15 estimated the normal extent of peripheral retinal non-perfusion in normal children at various postnatal ages. In that study, the authors, using RetCam FA on 33 eyes from 31 normal children, estimated avascular retina using scleral indentation during FA to determine the distance of vascular termini to the ora serrata. None of these normal eyes had a distance greater than 1.5 disc diameters up to 13 years of age. The authors concluded that, conservatively, a distance of greater than 2 DD from the ora to the vascularised retinal margin should be considered abnormal. These data provided a useful practical standard to document the extent of peripheral retinal vascular development when screening infants with ROP using FA.
All neonates in our study had resolution of zone I and posterior zone II ROP documented by RetCam colour imaging at the time FA was performed. Previous studies have reported the favourable response of zone I and posterior zone II ROP to intravitreal injections of bevacizumab. 10 20 Our study focuses on the extent of normal retinal vessel growth in the peripheral retina in cases where zone I and posterior zone II ROP had been deemed to have responded favourably to a single injection of IVB as sole treatment. 10 14 21 It is often difficult to accurately determine the vascular-avascular junction in the peripheral retina using indirect ophthalmoscopy or colour RetCam images. We therefore chose to use FA, which allows accurate visualisation of the outer borders of the vascular retina. 18 19 The birth weight and PMA of the neonates with zone I and posterior zone II disease in our series is higher than reports from similar series from developed countries. Carden et al 22 reported that 58 infants referred to the National Hospital of Paediatrics in Hanoi, Vietnam with ROP had birth weights ranging from 800-1900 g and gestational ages ranging from 28-35 weeks. Gu et al 23 reported birth weights of infants with ROP in China ranging from 1501-2000 g. Risk factors reported to cause ROP in more mature neonates are septicaemia and poorly controlled oxygen therapy.
Our study showed that 55% of eyes continued to have incomplete vascularisation at time of follow-up examination including up to 259 weeks after birth. In the eyes with the longest IVB-FA interval (case no. 10) there were also numerous areas of lattice degeneration, which may increase the future risk for development of retinal tears and complications after cataract surgery, as has been previously reported. 24 25 Although we did not see ridges or extraretinal fibrovascular proliferation in these 11 eyes, there was fluorescein leakage at the vascular-avascular border in 9 eyes. This may be important, as fluorescein leakage has previously been reported to be a sign of progression to severe ROP. 19 In conclusion, our study demonstrates that although intravitreal bevacizumab can be very effective in causing resolution of zone I and posterior zone II ROP, ophthalmologists should remain cautious as infants may remain at risk due to avascular peripheral retinas even many years after treatment. Careful examination using FA allows accurate visualisation of risk factors such as the extent of avascular retina and the presence of dye leakage.
The DEVD motif of Crimean-Congo hemorrhagic fever virus nucleoprotein is essential for viral replication in tick cells
Dear Editor, Crimean-Congo hemorrhagic fever (CCHF) is an emerging tick-borne viral disease widely distributed across countries of Africa, Southern Europe, the Middle East, and Asia 1 . CCHF is caused by Crimean-Congo hemorrhagic fever virus (CCHFV; genus Orthonairovirus, family Nairoviridae), which usually circulates among asymptomatic animals (mammals and ticks) in an enzootic cycle. CCHFV has been detected in many tick species, but Hyalomma spp. ticks represent the main viral reservoir, and both transstadial and transovarial transmission occur in this genus 2 . CCHFV causes severe disease in humans, with reported case fatality rates ranging from ~5% to as high as 80% in different countries 1 . To date, there is very limited knowledge available regarding the biology and pathogenesis of CCHFV due to the requirement for the virus to be handled in high-containment laboratories 1 . In recent years, research programs have focused on understanding the virus-mammalian host cell interaction to gain an overview of the molecular pathogenesis of CCHFV 3 . Previously, we demonstrated that there is an interplay between CCHFV and the apoptosis process in mammalian cells 4 . Interestingly, we found that the CCHFV nucleoprotein (N) contains a proteolytic cleavage site, DEVD (a caspase-3 cleavage site), which is conserved in all CCHFV strains 5 . Furthermore, we found that DEVD cleavage inhibits the yield of progeny virus 5 . This finding raised the question of why the DEVD motif has been conserved during evolution of this RNA virus despite substantial genetic diversity among CCHFV strains. This question might be answered by studying the replication of CCHFV in its natural host: ticks. The requirement for the virus to be handled in high-containment laboratories, added to the difficulty in manipulation of infected ticks in a biosafety level (BSL)-4 facility, has made this task challenging 6,7 . To shed light on the role of the DEVD motif in CCHFV replication, we have developed an in vitro tick cell culture model based on a previous observation that tick cell lines can be infected with CCHFV 8 . First, we characterized CCHFV replication in the Hyalomma anatolicum-derived cell lines HAE/CTVM8 and HAE/CTVM9 9 by evaluating viral progeny release, the yield of intracellular viral RNA, and N expression. HAE/CTVM8 and HAE/CTVM9 cells (2 × 10⁶) were seeded in sealed, flat-sided culture tubes (Nunc, Thermo Fisher Scientific) at 32°C and grown for 48 h and then infected with the CCHFV IbAr10200 strain 10 at multiplicities of infection (MOI) of 0.1 and 1.0, in 1 mL of culture medium. After 1 h, cells were washed with PBS and cultured in 2.5 mL of complete medium. In studies of the kinetics of viral progeny release, 200 µL of supernatant medium were collected, as indicated in Fig. 1a, for viral titration on Vero cells, as previously described 10 , and an equal volume of fresh medium was replaced in the culture tubes.
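As a small aside, the inoculum implied by these conditions follows directly from the definition of MOI; the sketch below assumes the seeded cell number (2 × 10⁶ per tube) is used as the denominator.

```python
def inoculum_ffu(n_cells: float, moi: float) -> float:
    """Focus-forming units needed to infect n_cells at a given multiplicity of infection."""
    return n_cells * moi

# 2 x 10^6 tick cells per culture tube, as described above
for moi in (0.1, 1.0):
    print(f"MOI {moi}: {inoculum_ffu(2e6, moi):.0f} FFU per tube")
# MOI 0.1 -> 200000 FFU; MOI 1.0 -> 2000000 FFU
```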
Although both H. anatolicum cell lines were permissive to CCHFV infection, the kinetics of viral replication differed between them. We found that CCHFV grew faster in HAE/CTVM9 compared to HAE/CTVM8 at the early time points; however, at later times post-infection (p.i.), the yields of progeny virus were comparable between these cells (Fig. 1a). At 7 days p.i., CCHFV-infected tick Table 1).
To evaluate the intracellular viral RNA yield by quantitative real-time Reverse Transcriptase-PCR (qRT-PCR), CCHFV-infected cells were collected, washed with PBS and lysed using the TRIzol LS reagent (Invitrogen). Total RNA was extracted using a Direct-zol™ RNA MiniPrep Kit (Zymo Research) and CCHFV RNA was detected using a RealStar® CCHFV RT-PCR 1.0 kit (Altona Diagnostics) following the manufacturers' instructions. The amount of viral RNA was normalized to the expression of the putative translation elongation factor EF-1 alpha/Tu (EF1A) gene of H. anatolicum tick cells (primers available on request) and expressed as the fold change with respect to the initial virus inoculum (set to 1 at day 0, Fig. 1b) using the ΔΔCt method for relative quantification of RNA 11 . The results showed that viral RNA increased over time and was more abundant in HAE/CTVM9 cells at the early time points (Fig. 1b). To evaluate viral protein expression, cells were collected and washed in PBS by centrifugation at 335 rcf for 7 min at 4°C and then processed for western blotting analysis as previously described 4,5 . The level of N protein expression was MOI-dependent and was very high at MOI = 1.0 at 7 days p.i. (Fig. 1c). In HAE/CTVM8 cells, the expression of CCHFV-N was delayed in comparison to that in HAE/ CTVM9 cells (Fig. 1c).
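A minimal sketch of the 2^(-ΔΔCt) calculation used for this relative quantification is shown below; the Ct values are hypothetical, with EF1A as the reference gene and the day-0 inoculum as the calibrator (fold change 1 by definition).

```python
def fold_change_ddct(ct_target_sample: float, ct_ref_sample: float,
                     ct_target_calibrator: float, ct_ref_calibrator: float) -> float:
    """Relative quantification by the 2^-ΔΔCt method.

    ΔCt  = Ct(target) - Ct(reference gene), computed for the sample and the calibrator.
    ΔΔCt = ΔCt(sample) - ΔCt(calibrator); fold change = 2 ** (-ΔΔCt).
    """
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_calibrator = ct_target_calibrator - ct_ref_calibrator
    return 2.0 ** -(d_ct_sample - d_ct_calibrator)

# Hypothetical Ct values: CCHFV RNA vs. the EF1A reference gene
fc = fold_change_ddct(ct_target_sample=22.0, ct_ref_sample=18.0,
                      ct_target_calibrator=28.0, ct_ref_calibrator=18.5)
print(f"fold change ≈ {fc:.1f}")  # 2 ** 5.5 ≈ 45-fold increase over the inoculum
```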
Overall, our results showed that CCHFV replicated faster in HAE/CTVM9 cells than in HAE/CTVM8 cells; however, at day 7, the results were comparable between the two cell lines.
These results could be due to the heterogeneity between HAE/CTVM8 and HAE/CTVM9. In fact, all tick cell lines are phenotypically and genotypically heterogeneous, having been derived from the tissues of multiple embryos of individual ticks, as reflected in their light microscopic morphologic appearance 8,9 .
We then used our infection model to investigate the importance of the DEVD motif for CCHFV replication in tick cells. As highlighted above, we previously demonstrated that the N protein can be cleaved in mammalian cells by pro-apoptotic caspase-3 enzymes at the level of a highly conserved DEVD motif, producing two polypeptides of approximately 30 and 26 kDa 5,12 . Using caspase inhibitors, we found that cleavage of CCHFV-N affected the yield of progeny virus and that N protein expression could suppress the induction of apoptosis 4 . Thus, this phenomenon could represent a host cell immune defense mechanism against CCHFV infection 5 . Interestingly, we could not detect such cleavage in tick cells, as western blotting revealed a single~50-kDa N protein (Fig. 1c). As CCHFV persistently infected these cell lines, and considering the absence of detectable virus-induced cell death, it is most likely that the virus efficiently inhibits apoptosis in tick cells. However, we cannot exclude the possibility that caspase-3 in these cells does not recognize the DEVD site.
To further investigate the function of the DEVD motif in CCHFV replication, we generated recombinant mutant CCHFVs by the previously reported rescue system 13,14 . The wild-type DEVD sequence (rCCHFVwt) was changed to a caspase cleavage-resistant AEVA sequence (rCCHFVmut) by site-directed mutagenesis. After two steps of viral amplification in SW13 cells (ATCC® CCL-105™), the N protein coding sequences of the wild-type and mutated recombinant viruses were verified by nucleotide sequencing (data not shown). To evaluate the ability of rCCHFVwt to replicate in tick cells, we compared by qRT-PCR the kinetics of rCCHFVwt and parent CCHFV IbAr10200 strain replication in HAE/CTVM8 cells, and the observed trends were similar (Fig. 1d). Then human SW13 and tick HAE/CTVM8 cell lines were infected with rCCHFVwt and rCCHFVmut at MOI 0.1. Virus replication was evaluated by qRT-PCR of intracellular viral RNA and virus progeny titration. At 72 h, the titer of the rCCHFVmut in SW13 cells was approximately ten times less than that of rCCHFVwt (Fig. 1e), whereas intracellular rCCHFVmut RNA yield in SW13 cells was 1.3 times greater than that of the wild type (Fig. 1f).

Fig. 1 Replication of CCHFV in Hyalomma-derived tick cell lines. a-c Tick cell lines HAE/CTVM8 and HAE/CTVM9 were infected with CCHFV at MOI 0.1 or MOI 1.0. At the indicated time points: a Infectious viral particles released were titrated by the focus forming unit (FFU) assay in Vero cells, error bars = S.D.; b The relative increase in viral RNA in the infected cells was evaluated by qRT-PCR; c expression of the CCHFV-N protein was evaluated by western blot, C = uninfected cells. The experiment was performed three times in duplicate, and sample analyses were performed in duplicate. d HAE/CTVM8 cells were infected with the CCHFV IbAr10200 strain or rCCHFVwt at MOI 0.1. Viral replication was compared by measuring the relative amount of viral RNA in the infected cells at the indicated time points. e-f Human SW13 and tick HAE/CTVM8 cells were infected with rCCHFVwt or rCCHFVmut at MOI 0.1; viral replication was evaluated by: e titration of the virus progeny for both cell lines, and f measuring the relative amount of viral RNA in the infected SW13 cells. *P < 0.05, unpaired t-test. The experiment was performed three times in duplicate, and sample analyses were performed in duplicate. g HuH-7 (JCRB0403) donor cells were transfected with the CCHFV tc-VLP system, using either a wild-type (pC_N) or a mutant (pC_N_D266+269A) N, in the context of a wild-type (L_wt) or a transcriptionally incompetent polymerase (L_D693A). An inactive polymerase (L_ΔDD) was used as a control. Minigenome luciferase activity was measured 3 days post transfection. The presence of tc-VLPs in donor cell supernatants was detected by transfer of supernatants onto HuH-7 indicator cells expressing L-wt and N and measurement of luciferase activity 24 h p.i. The results were normalized against the control L_wt+N_wt value set to 100%. The experiment was done in triplicate. h Replication of rCCHFVwt and rCCHFVmut was also evaluated in HAE/CTVM8 cells over 17 days by measuring the relative amount of intracellular viral RNA. *P < 0.002, ANOVA. The experiment was performed three times in duplicate, and sample analyses were performed in duplicate.
To investigate the DEVD motif-sensitive virus replication step, we took advantage of our CCHF transcriptionally competent virus-like particle (tc-VLP) system developed in mammalian cells 15 that allows discrimination of the transcription and the replication steps. As we already showed using a minireplicon system in BSR-T7/5 cells 12 , we confirmed using the tc-VLP system in HuH-7 cells that mutation of the DEVD site increases transcription. Indeed, in HuH-7 cells producing VLPs (donor cells), transcription of the luciferase minigenome by the viral polymerase L_wt was increased in the presence of the mutant (pC_N_D266+269A) N protein (Fig. 1g). However, no major differences were found in HuH-7 cells infected with VLPs (indicator cells), suggesting the absence of an effect on replication or VLP production. We also used a transcription-incompetent, but replication-competent polymerase (L_D693A) 15 and similarly did not observe any major effects on replication/ VLP production. These data suggested that the DEVD motif may have an as-yet undetermined function, but it is not essential for virus replication in mammalian cells.
Strikingly, we found a strong-negative effect of the AEVA mutation in HAE/CTVM8 cells. Although rCCHFVwt was able to replicate in tick cells, rCCHFVmut showed a strong impairment in RNA replication and only~100 particles were detectable in just one replicate in one of the experiments for rCCHFVmut (Fig. 1e, h). The inability of rCCHFVmut to produce viral progeny and the dramatic reduction in viral RNA (>99% compared to RNA of the wild-type virus) (Fig. 1h) suggested a significant impairment of replication/transcription of the viral genome that could be due to a malfunction of the N protein in the tick intracellular environment or the inability to interact with one or more key cellular factors required for viral replication. To date, there is a lack of molecular tools for tick cells, such as mini replicon and VLP systems, such that we cannot pinpoint the exact mechanism of function of the DEVD motif. However, our data suggest that the DEVD motif has an essential role in CCHFV replication in tick cells.
In conclusion, our results support the applicability of tick cell lines to studying the biology of CCHFV in vector cells and virus/vector interactions. Processing of the N protein appears to have a moderate effect on viral replication in mammalian cells, but the dramatic inhibition of CCHFV replication after mutation of the DEVD motif in tick cells raises an interesting question about the function of this viral protein in the context of the vector.
Targeting the DEVD motif could be a strategy to counteract infection in ticks to reduce viral persistence in the environment. The virus/tick cell culture system reported here provides the basis for further studies to characterize the tick cellular response to CCHFV infections and to determine the mechanism by which tick cells can tolerate persistent viral infections.
No superior treatment for primary osteochondral defects of the talus
Purpose The purpose of this systematic literature review is to detect the most effective treatment option for primary talar osteochondral defects in adults. Methods A literature search was performed to identify studies published from January 1996 to February 2017 using PubMed (MEDLINE), EMBASE, CDSR, DARE, and CENTRAL. Two authors separately and independently screened the search results and conducted the quality assessment using the Newcastle–Ottawa Scale. Subsequently, success rates per separate study were calculated. Studies methodologically eligible for a simplified pooling method were combined. Results Fifty-two studies with 1236 primary talar osteochondral defects were included of which forty-one studies were retrospective and eleven prospective. Two randomised controlled trials (RCTs) were identified. Heterogeneity concerning methodological nature was observed, and there was variety in reported success rates. A simplified pooling method performed for eleven retrospective case series including 317 ankles in the bone marrow stimulation group yielded a success rate of 82% [CI 78–86%]. For seven retrospective case series investigating an osteochondral autograft transfer system or an osteoperiosteal cylinder graft insertion with in total 78 included ankles the pooled success rate was calculated to be 77% [CI 66–85%]. Conclusions For primary talar osteochondral defects, none of the treatment options showed any superiority over others. Level of evidence IV.
Introduction
A talar osteochondral defect (OCD) is a combined lesion of the subchondral bone and its overlying cartilage and often has a severe impact on the quality of life of active patients [134]. The general consensus is that bone marrow stimulation (BMS) is administered for primary smaller defects. Other surgical options are internal fixation, osteochondral autograft transfer systems (OATS), chondrocyte implantation, retrograde drilling, metal resurfacing, total ankle prostheses or arthrodesis [44,56,124].
The effectiveness of the interventions varies greatly in the literature, and although a number of previous systematic reviews have been conducted, a definite treatment option regarded as the gold standard has yet to be identified [32,69,85,119,128,135]. Additionally, prior systematic reviews either investigated only a single treatment option or did not distinguish between primary and non-primary talar defects [32,69,85,135]. Therefore, this could introduce a misrepresentation of the reported success rates. Furthermore, the most comprehensive review by Zengerink et al. [135] included articles published up to 2006. Since then, a high number of articles investigating novel interventions for talar OCDs have been published [66,94,95,122]. The aim of the present review is therefore to examine and compare the clinical effectiveness of all treatment strategies for exclusively primary talar OCDs in adults. The hypothesis is that no significant differences in clinical outcome will be found between these treatment strategies. This study provides novel insight into the clinical effectiveness of treatment strategies for primary talar osteochondral defects exclusively.
Materials and methods
The systematic review was prospectively registered at the PROSPERO register [23].
Search strategy
Electronic databases PubMed (MEDLINE), EMBASE, CDSR, DARE and CENTRAL were screened from January 1996 to February 2017 for potentially suitable articles (Appendix 1). This time frame was chosen as by 1996 the arthroscopic techniques for treating talar OCDs were fully developed and established in the orthopaedic field [126].
The full search strategy for all electronic databases is outlined in Appendix 1. Backward citation chaining strategy was applied as an additional search technique.
Eligibility criteria and study selection (Fig. 1)
Suitable randomised controlled trials (RCT) and observational studies assessing the effectiveness of all treatment strategies for primary talar OCDs in the adult patient population were included in the present study. The rationale for including non-randomised clinical studies is that much of the research into talar osteochondral defects conducted over the past two decades provides only low-level evidence. The exclusion criteria for our review are presented in Table 1. When necessary, authors were contacted to provide separate data for patients with primary lesions only and/or for patients ≥18 years old. When no reply was received, contact was sought by two reminder e-mails. If no response was recorded, the specific article was excluded. Independent evaluation of the articles and a subsequent discussion were performed by two reviewers (J.D. and K.L.) after title and abstract screening and full-text reading. In case of any disagreement after discussion, the opinion from an independent third investigator (G.K.) was decisive. Studies were not blinded for author, affiliation or source, and no limitations were put on language and publication status. The literature selection algorithm according to the preferred reporting items for systematic reviews and meta-analyses (PRISMA) is presented in Fig. 1 [67].
Fig. 1 Literature selection algorithm, preferred reporting items for systematic reviews and meta-analyses (PRISMA): 1351 records were identified through electronic database screening; 1119 were excluded after duplicate removal and title and abstract screening, leaving 232 records for full-text reading, to which 1 record was added through reference and citation search (233 in total); 181 publications were excluded after full-text screening (Table 1), leaving 52 included studies.
Critical appraisal
A Newcastle-Ottawa Scale (NOS) modified for talar OCDs was utilised to assess the methodological quality (Appendix 3). Each included study was graded on methodological quality by two independent reviewers (J.D. and K.L.).
When there was no agreement on the number of stars graded, assessment by an independent third investigator (G.K.) was decisive.
Data extraction
By means of a standardised extraction form, data from the articles were extracted on study characteristics. Data on patient characteristics were retrieved and included age, gender, number of patients and ankles, symptom duration, location, side, size and stage of the defect according to a specifically reported OCD classification system, clinical scoring system utilised, history of ankle trauma and follow-up duration. Pre-operative and post-operative clinical outcome scores were extracted on mean scores, subjective satisfaction and number of patients treated successfully. The treatment strategy in question was defined to be successful when a good or excellent result at follow-up was reported, in combination with an accepted scoring system. The results were incorporated into the scoring system of Thompson and Loomer [118] (Appendix 2) when separate patient data were available but no success rates for specific treatment strategies were reported. An ankle was considered to be successfully treated when a post-operative AOFAS score of 80 or above was reached at the latest follow-up [59]. In case of the FAAM (Foot and Ankle Ability Measure) score, a percentage of 80 or higher was regarded as a successful treatment [75].
Statistical and data analysis
If studies of highly differing methodological nature were identified, a formal meta-analysis would not be performed; this decision was made after visualising the results per study by means of a forest plot. Where possible, a simplified pooling method was used to combine data from studies that described the results of similar treatment groups and used analogous methodologies. 95% binomial proportion confidence intervals for the success percentages of each study and of the pooled studies were calculated with the Wilson score interval and included in the forest plots (CIA, Confidence Interval Analysis for Windows, version 2.2.0) [19].
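A minimal sketch of the Wilson score interval and the simplified pooling described here is shown below; the success count in the usage example is an approximation (roughly 260 of 317 ankles, consistent with the pooled BMS figures reported later) rather than the exact per-study data.

```python
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half = z * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

def pooled_success(groups: list[tuple[int, int]]) -> tuple[float, tuple[float, float]]:
    """Simplified pooling: sum successes and ankles over studies, then compute one Wilson CI."""
    successes = sum(s for s, _ in groups)
    total = sum(n for _, n in groups)
    return successes / total, wilson_interval(successes, total)

# Illustration with the pooled BMS group (317 ankles, ~82% success, i.e. roughly 260 successes);
# this prints approximately 82% [CI 77%-86%], close to the 78-86% reported for that group.
rate, (lo, hi) = pooled_success([(260, 317)])
print(f"{rate:.0%} [CI {lo:.0%}-{hi:.0%}]")
```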
Search results
The literature search yielded 1351 articles, and after title and abstract screening, 232 potentially suitable articles were included for full-text reading (Fig. 2). One study was added through reference and citation search. In total, 127 authors were contacted to request data according to our inclusion criteria. Subsequently, 33 studies could be included and 31 had to be excluded attributable to the extensive author contact process. In total, 181 publications had to be excluded due to a variety and combination of reasons (Table 1). This left 52 studies in total. After screening and discussion between the first two authors there was overall consensus in all cases except for four where disagreement persisted. These were resolved by discussion with the senior author (G.K.).
Full consensus was reached between the reviewers regarding grading of methodological quality.
Evaluation of the characteristics of included studies
A total of 1236 primary talar OCDs were included in the 52 studies. The average age was 36 [range , and the percentage of females and males was 34 and 66%, respectively. The right ankle was involved in 54% of the cases and the left ankle in 46%. The percentages of medial, lateral, central and combined medial and lateral location involvement were 77, 21, 2 and 0.4%, respectively. In 71% of the patients, a history of ankle trauma was reported. The most frequently used clinical scoring system and osteochondral
Methodological quality
The fifty-two publications altogether scored 182 stars out of maximum 260 stars (Table 4). Forty-one studies were assessed to be retrospectively conducted, and all studies except for two were conducted according to the study protocol. Therefore, all studies together scored a total number of 65 stars (max. = 104) on study design. Regarding the selection procedure, 43 out of 52 stars were scored in total, indicating that most studies reported a representative talar OCD patient population. Seventy-four out of 104 stars were scored on the outcome part of the adjusted Newcastle-Ottawa Scale. Independent blind assessment was performed in none of the studies, and in all except for one study outcome was assessed through record linkage. Numerical star outcomes on adequacy of follow-up of series were not uniform across the included studies.
Treatment strategies
The different treatment strategies were divided into six corresponding treatment groups. It was deemed methodologically appropriate to perform a simplified pooling method for the largest groups of those publications with corresponding methodological nature (i.e. retrospective case series together) in the groups of BMS and osteo(chondral) transplantation (more specifically, OATS and osteoperiosteal cylinder graft insertion). No studies describing a mosaicplasty procedure were included in this pooling group, as mosaicplasty uses multiple graft insertions to treat larger talar defects, in contrast to the classic OATS procedure. Consequently, pooling the mosaicplasty studies was not appropriate. The forest plot describing the clinical results in percentages per separate study in their corresponding treatment group is presented in Fig. 3, and the forest plot describing the results of the simplified pooling method is presented in Fig. 4.
Non-operative
The objective of non-operative treatment is to unload the damaged cartilage potentially resolving accumulated oedema within the joint. One retrospective case series study investigated solely chronic-type V cystic lesions as classified by Loomer et al. [68,109]. Non-operative treatment consisted of continuation of activities "as tolerated" [109]. Mean symptom duration, mean follow-up, patient satisfaction scores and preoperative OCD size could not be recorded. Eventually, in 16 out of 26 patients conservative treatment yielded successful results, which corresponded to a success rate of 62% [CI 43-78%] (Fig. 3) [109].
Bone marrow stimulation (debridement and/or drilling)
BMS consists of debriding the OCD, after which additional microfracturing or antegrade drilling can be performed to establish openings into the subchondral bone. This disrupts intraosseous vessels, introducing blood and bone marrow cells into the OCD and allowing a clot of scar tissue to form, which results in fibrocartilaginous tissue. As a supplement, hyaluronic acid (HA) injections can be administered; these act as a synovial lubricant targeting pain levels and inflammatory cytokine concentrations [81,112]. Another possibility is the use of pulsed electromagnetic fields (PEMF) [1,17,26,91,103,129].
Twenty-two studies describing the results of BMS for 747 ankles were identified [8,11,24,27,31,33,45,55,61,65,77,78,93,96,100,104,105,108,114,115,123,133]. There were two RCTs, two prospective cohort studies and one retrospective cohort study, three prospective case series, three retrospective comparative studies and eleven retrospective case series. This shows the great heterogeneity in methodological nature of the studies within this group. The means of the symptom duration of these studies ranged from 4 to 49 months, and the means of the follow-up duration ranged from 10 to 143 months (Fig. 3). A simplified pooling method performed for the eleven retrospective case series, comprising 317 ankles, yielded a pooled success rate of 82% [CI 78-86%] (Fig. 4).
Retrograde drilling
Retrograde drilling (RD) is a non-transarticular procedure preventing injury to the articular cartilage. Consequently, the technique is primarily used when defects contain a relatively small amount of articular cartilage damage or when it is challenging to reach the OCD via the common arthroscopic portals. The aim is to revascularise the subchondral bone and induce novel bone formation. Additional procedures one can administer are cancellous bone grafts. Five studies with a total of 80 ankles having undergone retrograde drilling were identified [5,12,41,62,114]. One prospective case series, one retrospective cohort study, one retrospective case series and two retrospective comparative studies were identified. Therefore, due to the heterogeneity this did not allow for pooling. Furthermore, concerning symptom duration, Berndt and Harty [15] staging and sizes of the talar OCDs, there was insufficient information to provide data on ranges of means reported in the cited literature. The range of the means of follow-up duration was 24-28 months (Fig. 3). The success percentages in this treatment group ranged from 68 to 100% [CI 49-100%] (Fig. 3) [5,12,41,62,114]. Included in this range were two studies that implemented cancellous bone grafting additional to retrograde drilling with mean success rates ranging from 83 to 100% [CI 61-100%] and two studies that performed retrograde drilling (range 68-90%, CI 49-97%, Fig. 3) [5,41,62,114]. One study by Beck et al. [12] investigated a transtalar endoscopic core decompression combined with the injection of synthetic osteoconductive bone graft substitute. It included 7 patients and yielded a success rate of 100% (Fig. 3) [CI 65-100%].
Osteo(chondral) transplantation
A number of osteo(chondral) transplantation techniques exist to treat talar OCDs: osteochondral autograft transfer systems (OATS), mosaicplasty, (autogenous) bone grafting, autologous osteoperiosteal cylinder grafting and an osteochondral allograft transfer. The procedures consist of debriding the degenerated cartilage, the fibrous tissue and the necrotic subchondral bone, after which the osteo(chondral) grafts are harvested and subsequently implemented into the remaining OCD. The aim is to achieve a higher-quality restoration of the functional unit of the subchondral bone plate including the articular cartilage.
Cartilage implantation
Cartilage implantation techniques aim at regenerating tissue with hyaline-like type II cartilage. Generally, in twostep procedures viable chondrocytes are isolated from a donor site, after which the chondrocytes are cultivated and expanded in a laboratory medium. The cultured chondrocytes are then implanted into the excised lesion. When applying the ACI procedure, a periosteal tissue cover is used after expansion of isolated chondrocytes, whereas MACI replaces the periosteal cover by a collagen type 1-3 or Hyalograft C membrane [42]. The latter has the advantage that there is no need for an additional donor site and potentially delivers more viable cells to the OCD [80].
Five studies including 85 ankles investigating cartilage implantation were identified [4,43,66,84,93]. Two prospective case series, two retrospective comparative studies and one retrospective case series were included in this group. The authors decided not to perform a simplified pooling method. There was insufficient homogeneity and substantial missing data to report mean symptom duration, patient subjective satisfaction scores and staging of the defect. Concerning follow-up duration, it was possible to extract data from two studies, yielding a range of the means of follow-up of 39-58 months (Fig. 3) [4,84]. From four studies information on talar OCD size could be extracted, which yielded a range of 1.6-1.9 cm 2 [4,43,66,84]. The success rate ranged from 78 to 100% [CI 45-100%] (Fig. 3) [4,43,66,84,93]. From these five studies, there were two investigating ACIs [43,93]. The range of the success rate was 78-93% [CI 45-98%] (Fig. 3) [43,93]. The other three publications performed a MACI procedure with a total of 46 ankles, and the success percentages ranged from 80 to 100% [CI 38-100%] as illustrated in Fig. 3 [4,66,84].
Chondrogenesis-inducing techniques (CITs)
CITs aim at the repair of a bone-cartilage lesion by means of a combined single-step procedure and can be applied for larger, cystic OCDs [13,14]. The goal is to induce chondrogenesis, and in case of an adjusted autologous matrix-induced chondrogenesis (AMIC) procedure, spongiosa bone, which is rich in mesenchymal stem cells, is implanted into the defect [20]. Thereafter, an acellular collagen I/III matrix is glued onto the defect. In case of an autologous collagen-induced chondrogenesis (ACIC) procedure, the debrided defect is filled with a mixture of synthetic fibrin glue and collagen gel-based matrix.
Five publications describing the results of 68 ankles treated by CIT were identified [28,60,120,121,130]. One study was a prospective case series, one was a retrospective comparative study, and the other three were retrospective case series, which discouraged pooling. There were insufficient data to allow a presentation of the symptom duration, patient subjective satisfaction scores, and staging and sizes of the defect. The range of the means of follow-up duration was 6-38 months (Fig. 3). The range of the success rate was 56-100% [CI 27-100%] (Fig. 3) [28,60,120,121,130]. For the AMIC procedures, the range of success percentages was 73 to 91% [CI 43-98%] (Fig. 3) [28,60,121]. Volpi et al. [130] and Usuelli et al. [120] described the results of ACIC, and the means of the success rate ranged from 56 to 100% [CI 27-100%] (Fig. 3).

Fig. 4 Forest plot of the pooled success rates of different treatment strategies with the corresponding 95% confidence intervals (accompanied by the total number of ankles and total number of studies included in the pooled group, and the corresponding methodological quality; the size of the diamond representing the pooled success rate is adjusted for the number of ankles included).
Discussion
To the best of our knowledge, this is the first systematic review investigating the effectiveness of all treatment options for solely primary talar OCDs in adults. The most important finding of the present study is that although aiming at the application of the most appropriate and complete methodology, none of the interventions showed any definite clinical superiority over the others. This was caused by the observed heterogeneity in methodological nature of the studies and the variety in success rates, both intra-treatment strategy group-wise and inter-treatment strategy groupwise. Additionally, performing a simplified pooling method for retrospective case series studies in the BMS group and in the osteo(chondral) transplantation group yielded comparable pooled success rates.
The main finding is partially in contrast to the one derived from the research by Zengerink et al. [135], which concluded that BMS is the most effective treatment strategy for talar OCDs. This systematic review from 2010, however, included both primary and non-primary talar OCDs, which potentially affected the results and the conclusions based on them. It should be acknowledged that the most important finding of the present study was not a consequence of the methodology, as it aspired to include as many suitable articles as possible by not excluding particular treatment strategies (in contrast to previous reviews [32,85]) and by adhering to a strict author contact protocol.
BMS was the most studied intervention for primary talar OCDs indicating that it is the most frequently practised treatment option for primary talar OCDs worldwide. This is due to the fact that BMS is a relatively inexpensive intervention compared to implantation techniques, has low morbidity, a quick recovery and a fast return to sports. This was shown by studies conducted by Saxena et al. [105] and Reilingh et al. [100] presenting return to sports times ranging from 15 to 17 weeks. The two most recent systematic reviews on BMS reported success rates of 80 and 86% [32,135]. When pooling eleven BMS studies, a pooled success rate of 82% was calculated [CI 78-86%] (Fig. 4). As this success rate is comparable to the success rate of the pooled retrospective case series design studies in the osteo(chondral) transplantation group describing the results of OATS and an osteoperiosteal cylinder graft insertion (77% [CI 66-85%]), it is difficult to assess which surgical treatment strategy is clinically superior, thereby supporting the most important finding of the present study. Important factors play a vital role in the success of the clinical outcome after BMS. BMS does not aim at preserving a hyaline cartilage layer but rather promotes the formation of a fibrin clot subsequently becoming fibrocartilage or cartilage/collagen type I, which may then decrease in quality over time, resulting in osteoarthritic changes [70,88,89]. Moreover, research indicates that deterioration of the natural congruency of the ankle joint occurs as cartilage type I demonstrates inferior wear characteristics in comparison with hyaline cartilage (cartilage/collagen type II) being associated with the degradation of a repaired articular surface [74,98,111]. However, long-term studies have not yet confirmed this [37,123]. A clear correlation between inferior clinical outcomes and follow-up duration concerning the included studies in this review was not observed either, possibly due to the fact that it was not possible to gather data on mean follow-up durations from all included studies. Concerning pre-operative size and clinical outcome after BMS, a study from Choi et al. [25] including 120 primary ankles indicated that there is a definite cut-off point, that is, 1.5 cm 2 , as a prognostic influence on the risk of clinical failure. A more recent study by Ramponi et al. [99] shows that the cutoff point might be lower, around the size of 107 mm 2 . In our review, the range of the means of the reported pre-operative size for the BMS studies was 1.0 to 1.7 cm 2 suggesting that BMS is indeed administered for smaller primary defects. The reported success rates of BMS therefore suggest that BMS could be regarded as a fair treatment strategy for the smaller primary defects.
As an alternative to BMS, a number of treatment options have focused on preserving hyaline cartilage and treating larger defects. Because most of these interventions are considered suitable only when primary surgery for the OCD has failed, a relatively lower number of patients was included in these particular treatment groups. Furthermore, a number of publications on the osteochondral autograft system had to be excluded: studies by Hangody et al. [50] and Fraser et al. [38] have yielded promising results, but were excluded as legal cases needed to be reopened for data provision.
Interestingly, only one study described the results of non-operative treatment, implying that since 1996 studies have focused on developing novel surgical treatment options [109]. This is likely due to the poor success rates of non-operative treatment reported before 1996 [16,102]. Although only twenty-six conservatively treated ankles were included in our review, with a success percentage of 62% [CI 43-78%], it is still recommended that initial treatment of symptomatic OCDs should consistently commence with a conservative protocol.
The AOFAS score was the most frequently used clinical score among the included studies. Sierevelt et al. [110] indicated that there are some concerns regarding this outcome score. A significant part of the 100 points depends on subjective patient-reported outcomes, which introduces bias into the interpretation of the calculated success rates, as a high-level athlete would rate his or her surgical result more critically than the average patient included in our systematic review. Moreover, the AOFAS score is not officially validated for the clinical evaluation of the treatment of talar OCDs. Future research should therefore focus on developing an outcome scale validated for talar OCDs, in order to increase the homogeneity and uniformity of outcome assessment.
As the review shows that a history of ankle trauma was reported in 71% of cases, it is as important to focus on prevention strategies as on effective surgical treatment. Progress has been made in the development of cost-effective prevention programmes for lateral and medial ankle sprains, for example by Verhagen [127] through the development of a mobile application system. Furthermore, the analysis of methodological quality showed that a high number of the included studies were of low methodological quality, with the exception of two included RCTs [33,100]. This underlines the need for more sufficiently powered randomised studies. Future research should therefore focus on conducting randomised comparative clinical trials with uniform methodology and extended follow-up times. BMS should be compared with newly developed, promising treatment options that focus on preserving hyaline cartilage and preventing additional clinical complaints, such as the donor-site morbidity observed in patients undergoing an OATS procedure. A possible future direction for such a promising treatment strategy is internal fixation; in small patient series, internal fixation has been shown to induce significant clinical improvement, possibly because it aims at preserving hyaline cartilage [56,58].
There were a number of limitations to the present review. Firstly, the low quality of the included studies and the substantial heterogeneity in methodology are major limitations. Additionally, separate success rates were calculated based on different scoring systems, as the AOFAS score was not always available for statistical analysis; because of this, it was not possible to perform the conventional summarising estimates of effectiveness. Concerning patient characteristics, heterogeneity was observed in the patient population. It was not possible to collect data on mean follow-up duration for all included studies, as these were not provided in all cases. Another limitation is that a formal meta-analysis utilising mixed-effects logistic regression to compare between treatment groups could not be performed. Regarding the BMS group and the studies within the osteo(chondral) transplantation group, publications that used a retrospective case series design were pooled. The evidence retrieved from this simplified pooling method is therefore based on a lower level of evidence and may contain methodological bias, meaning the pooled success rates should not be used to decide on a particular treatment technique for talar OCDs, but merely to inform patients of the expected success percentages of a particular treatment strategy. Moreover, the pooled success rate of the osteo(chondral) transplantation group combined studies reporting the effects of OATS procedures and an osteoperiosteal cylinder procedure, possibly introducing some heterogeneity in this group, as the type of grafts inserted in the OATS group was slightly different from the ones in the osteoperiosteal cylinder group [3,40,53,54,63,64,132]. The strengths of the present review are the inclusion of solely primary lesions, the thorough reference selection and the quality assessment of the included studies. Another major strength is the extensive corresponding author contact protocol regarding additional data retrieval and further clarification of the methodology of included studies.
The clinical relevance of the present systematic review is that the separate and pooled success rates for the different surgical and non-surgical management options can be utilised to inform patients about the expected success percentages when undergoing treatment for primary talar osteochondral defects, which will facilitate the shared decision-making process between patients and physicians.
Conclusions
In conclusion, the present systematic review shows that none of the interventions for the treatment of primary osteochondral defects of the talus showed clinical superiority over another or others. A simplified pooling method for eleven retrospective case series in the BMS group yielded a success rate of 82% [CI 78-86%], and for the seven combined OATS and osteoperiosteal cylinder graft studies the pooled success rate was calculated to be 77% [CI 66-85%]. A high number of studies with low methodological quality were included, and heterogeneity in the methodological nature of the studies and variety in reported success rates were observed. As a consequence, future research should focus on conducting sufficiently powered prospective investigations in a randomised comparative clinical trial setting using outcome scores validated for the treatment of talar OCDs.

Authors' contributions … and interpretation of all data and wrote the manuscript. MR, CvB, SS and GK contributed to the conception and the design of the review and contributed to the data collection and analysis of the study. MR, CvB, SS and GK also performed a third-party adjudication process and contributed to the writing of the manuscript. All authors read and approved the final manuscript.
Compliance with ethical standards
Conflict of interest The authors declare that they have no conflict of interest.
Funding There is no funding source.
Ethical approval This article does not contain any studies with human participants or animals performed by any of the authors.
Informed consent Informed consent is not required for review articles.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Appendix 1
Full electronic search strategy used in this systematic review: 1. PubMed.

Each included study was graded on methodological quality by two independent reviewers utilizing an adjusted version of the Newcastle-Ottawa Scale. The categories of study design, selection of patients, and outcome were scored by means of a star-based system, with a maximum of 2 stars, 1 star, and 2 stars, respectively, obtainable for each category (maximum of 5 stars).
Functional convergence underground? The scale-dependency of community assembly processes in European cave spiders
Understanding how species assemble into communities is a central tenet in ecology. One of the most elusive questions is the relative contribution of environmental filtering versus limiting similarity. Important advances in this area have been achieved by looking at communities through a functional lens (i.e., the traits they express), so as to derive principles valid across species pools. Yet, even using traits in lieu of taxonomy, the issue remains controversial because i) environmental filtering and limiting similarity often act simultaneously in shaping communities; and ii) their effect is scale-dependent. We exploited the experimental arena offered by caves, island-like natural laboratories characterized by largely constant environmental gradients and a limited diversity of species and interactions. Leveraging uniquely available data on distribution and traits for European cave spiders, we tested explicit hypotheses about variations in community assembly rules across ecological gradients and scales. We demonstrate that environmental filtering and limiting similarity shape cave communities acting on trait evolution in opposing directions. These effects are strongly scale dependent, varying along multiple environmental gradients. Conversely, the effect of geography on trait composition is weak, indicating that trait turnover in space happens primarily by substitution of species pursuing similar functions due to strong environmental filters. Our findings reconcile contrasted views about the relative importance of the two main mechanisms shaping patterns of biodiversity, and provide a conceptual foundation to account for scaling effects in the study of community assembly.
INTRODUCTION
An omnipresent scheme in introductory textbooks of ecology illustrates the numerous filters selecting which species end up assembling into local communities from a regional pool. An elusive problem concerning this 'filtering' metaphor is quantifying the relative contribution of abiotic and biotic factors in shaping communities (1)(2)(3). In a nutshell, environmental filtering is the process whereby abiotic constraints prevent species from establishing in a community, selecting for a narrow set of traits suitable to cope with the local conditions, leading to lower differences in trait composition than expected by chance ("trait underdispersion"). Conversely, biotic interactions such as competition drive functionally similar species to diverge in key phenotypic traits to reduce niche overlap through limiting similarity, leading to higher differences in trait composition than expected by chance ("trait overdispersion"). It follows that looking at biological communities through the lens of functional ecology (i.e., the traits expressed in each community) is one of the most effective ways we have to quantify the interplay between these two assembly processes (4). The use of traits in lieu of species identities allows an explicit focus on the mechanisms generating biodiversity patterns, facilitating the conceptualization of general principles that are valid across species pools.
Even with trait-based approaches, however, it remains difficult to separate the main mechanisms involved in filtering the pool of potential resident species down to the subset that occurs within a given community (α-diversity) and in driving variations across communities (β-diversity) (5). The distinction between environmental filtering and limiting similarity is too often conceptualized as a "black or white" dichotomy, whereby communities are described as dominated by one or the other process. The ecological reality is instead more nuanced, with the two processes acting simultaneously in shaping communities, although with different intensities given the local conditions (6,7). Furthermore, like any dimension of biodiversity, functional diversity change is scale-dependent (8,9), forcing us to account for the pervasive effect that scale has on emerging patterns (10). Since biotic interactions require spatial proximity, the effect of limiting similarity should often decrease with increasing scale and, vice versa, the filtering effect posed by the abiotic environment should increase with spatial scale, generally resulting in a predominance of trait overdispersion at local scales and trait underdispersion at broader scales (11,12).
Mounting evidence demonstrates how the relative influence of environmental filtering and limiting similarity broadly changes along spatial and temporal gradients-e.g., for vertebrates (8,11,(13)(14)(15) and plants (1,12,16). However, there is still controversy on the direction of these changes and their causes (2,6,7). To minimize confounding factors and achieve a better understanding of community assembly rules, scientists are therefore increasingly turning their attention to island-like systems (e.g., oceanic islands, lakes, and mountain summits; (17)) and specific biological communities within them [e.g., plants (18,19); birds (20)(21)(22)], as models. The use of island-like systems, i.e., mostly closed, with known histories, and with a relatively low richness of species and interactions, allows ecologists to more easily disentangle community assembly processes while controlling for immigration, extinction, and dispersal dynamics (17,23,24).
Under this framework, caves and other subterranean ecosystems stand out as ideal model systems for the study of community assembly processes through a functional lens.
Foremost, caves are semi-closed systems extensively replicated across the Earth (25), where stringent environmental conditions promote trait convergence among successful colonizers (26,27). Second, subterranean communities generally exhibit lower species richness and functional diversity than neighboring surface communities (26,28,29) [but see ref. (30)], making it easier to disentangle the relative effect of abiotic conditions and biotic interactions in selecting species possessing specialized traits within the community (31). Third, caves have clear environmental gradients from the surface toward the subsurface (32)(33)(34) and display a reduced variability in their abiotic conditions (35), two factors that avoid many of the confounding factors typical of other systems (24).
To study community assembly rules, we leveraged the unprecedented amount of data available for subterranean spiders in Europe (36), namely community composition data for selected caves across the continent (37), and standardized traits for all species (38). A previous analysis of the taxonomic component of this dataset demonstrated a quick turnover in the taxonomic diversity of subterranean spiders across Europe, mediated primarily by geographic distance among caves, and secondarily by the climatic conditions and availability of karst.
Conversely, local-scale characteristics of caves exerted a negligible effect on species turnover (39). Here, we explore the functional dimension of these patterns, testing: i) the relative contribution of environmental filtering and limiting similarity in determining community assembly in caves; and ii) how functional diversity decays along environmental gradients. At the α-diversity level, we expect (H1a) communities to be functionally underdispersed, because the stringent environmental conditions of caves should filter a narrow set of trait combinations, leading to a less diverse trait composition than would be expected given taxonomic richness. Concurrently, we predict that (H1b) biotic interactions may exert a significant role where there are fewer available niches (e.g., smaller caves), and when local conditions allow for more contacts among species (i.e., where there is greater habitat connectivity). At the β-diversity level, we hypothesized that (H2a) functional turnover should occur at a lower rate than taxonomic turnover along both geographic and environmental gradients (comparative taxonomic data in ref. (39)). This is because we expect the stringent environmental conditions of caves to reduce the volume of the trait space, leading to a functional convergence of cave communities irrespective of spatial scale. Finally, we predict that (H2b) environmental factors will exert a stronger effect than geographic distance on functional turnover, because we expect functional composition to be strongly influenced by local environmental conditions modulating the availability of niches and the potential for interactions.
β-diversity
Patterns of functional β-diversity were primarily driven by a few environmental gradients (Figure 3). Most of the variation in β-diversity was due to replacement of trait space among communities (βreplacement; Figure S1), with patterns largely mimicking the variation in total β-diversity (βtotal). Conversely, the contribution of βrichness was negligible in all cases except for cave drop (Figure S2).
The effect of geographic distance on functional β-diversity followed a power-law curve (linearly asymptotic) and was rather weak; that is, at increasing distance between two caves, there was only a limited turnover in functional richness (Figure 3a). Interestingly, when looking at variation in the effect over the geographic gradient (Figure 3c), we observed a prevalence of trait overdispersion at smaller spatial scales which progressively decreased toward zero when caves were >2000 km apart.
Environmental predictors identified as important by the BBGDM were, in order of importance, cave development, elevation, precipitation, entrance size, and annual range of temperature. The contribution of additional predictors was negligible. The rate of turnover along the cave development gradient was monotonically asymptotic, with rates of turnover steeply increasing in the first portion of the gradient before reaching a plateau (Figure 3a). This effect was significant along the whole gradient ( Figure 3c). We also observed some degree of turnover along the gradients of elevation, precipitation, and temperature range. That is, communities in caves with different elevations, temperatures, and precipitation regimes tend to express different functions. For precipitation, SES values indicated that there is underdispersion along the first half of the gradient and an increasing predominance of functional overdispersion in the second half of the gradient. The pattern was reversed for the annual range of temperature.
DISCUSSION
Focusing on the underexploited natural laboratory offered by caves, and relying on an unprecedented data baseline in the context of subterranean biology, we studied functional diversity patterns in subterranean spider communities across Europe, testing general hypotheses ruling community assembly. Two important points, largely generalizable across systems and species pools, emerge from our analysis.
The first key point is that environmental filtering (causing trait convergence) and limiting similarity (causing trait divergence) are not mutually exclusive processes (40). Even in caves, ecological systems where environmental filtering is meant to be particularly strong (28), the relative influence of these two processes varied substantially given the local habitat conditions. Whereas the direction of SES for functional richness was predominantly towards underdispersion ( Figure 1b), the majority of values were close to zero, meaning environmental filtering and limiting similarity were both acting in equally weak or strong, but opposing, directions. Environmental filtering is indeed a demonstrably strong factor in caves, with many traits and parts of the potential functional space being absent. Yet, our results add quantitative evidence to a growing body of literature (27,30,41,42) emphasizing the importance of reconsidering the role of species interactions (especially competition) as an important force driving the evolution of cave communities.
Subterranean communities with local trait overdispersion were more frequently associated with large karst patches, areas with broader temperature ranges, and deeper caves ( Figure 2c), all conditions that provide more niche space to be exploited. This was particularly evident in the Dinaric karst (western Balkans), the most important global hotspot of subterranean biodiversity (43,44), where virtually all cave trait spaces were predominantly overdispersed. Indeed, large patches of karst, such as in the Dinarides, implies greater habitat availability (45) and possibly connectivity (46), hence a higher niche space, and a greater chance of contact among species. At the same time, the positive association between trait overdispersion and temperature range can be interpreted in the light of the influence of temperature variability on species range size and dispersal (47)(48)(49). As demonstrated for subterranean spiders in the genus Troglohyphantes (50), areas with larger temperature seasonality tend to allow greater overlap among species ranges, hence increasing local diversity and the likelihood for species coexistence. Finally, communities in caves with a greater drop tend to be, on average, predominantly overdispersed. Deeper caves tend to express more areas with differing availability of resources, offering more possibility for different communities with contrasting traits.
The second key point emerging from our study concerns the importance of scale in the perception of community assembly patterns. Scale has been accounted for in trait analyses, for example, by looking at variations in individual trait values along ecological gradients [e.g., elevation (51)], or by contrasting taxonomic and functional diversity change in highly dispersive organisms such as birds (8). Here, we devised a novel approach to account for the magnitude of trait dispersion change along the studied ecological gradients, combining gradients and traits together in a single model. We observed how functional β-diversity patterns varied along multiple ecological gradients. The most important one was the difference in development among the studied caves, whereby the largest replacement of functions occurred between pairs of caves with divergent development (i.e., large versus small caves, with an inflection point at caves >10 meters in development). A plausible explanation is that cave development is a proxy for the availability of spatial niches. In particular, small caves will be primarily colonized by spiders adapted to the cave entrance conditions, whereas large caves will often sustain a greater number of specialized species, accounting, overall, for drastically different functions. Other important gradients of variation were elevation and precipitation, reflecting the influence of climatic conditions and habitat heterogeneity in the local structuring of functions.
In terms of distance decay in functional diversity, the replacement of function was not as pronounced as the replacement of species (taxonomic distance decay) observed in ref. (39) for the same set of caves. This means that, although some replacement of traits does occur, overall turnover happens by substitution of species pursuing similar functions. Still, when decoupling functional patterns from taxonomic diversity, the functional responses varied along the geographic gradient according to theoretical expectations, showing stronger overdispersion at smaller distances and progressively moving toward SES values of zero at larger distances. This highlights the scale dependency of regional trait dispersion, with nearby caves more likely to harbour interacting communities, an effect that becomes weaker with increasing spatial distance.
CONCLUSIONS
The use of caves as model systems for investigating (macro-)ecological patterns in space and time is still underexploited (52). This is partly a problem related to the objective difficulties of working in caves (resulting in a general lack of data at the right resolution) and partly a methodological problem. Nonetheless, thanks to recent developments in databases of species distributions and traits, and the emergence of novel analytical tools, there is vast potential to leverage these systems as ideal settings in which to model across space. Using an explicit functional diversity approach, we showed that i) even systems with stringent environmental conditions maintain the potential for trait differentiation via biotic interactions, especially in areas of greater habitat connectivity; and ii) the relative influence of environmental filtering and limiting similarity changes with scale, along clear ecological gradients. Overall, our findings reconcile contrasting views about the relative importance of the two main mechanisms shaping patterns of biodiversity, and provide a conceptual foundation to account for scaling effects in the study of community assembly. This information is key amidst escalating global anthropogenic threats (53), insofar as realistic predictions of biodiversity change require explicitly accounting for community assembly processes (54).
Community-level data
We obtained data for subterranean spider communities across Europe from ref. (37), which we refer to for a full account of the dataset and the methods used to assemble it. The dataset comprises data from 475 subterranean sites (limestone, volcanic, talus, and salt caves, as well as artificial sites including mines, blockhouses, and cellars; the general term 'cave' is used hereafter) across 27 European countries, covering a latitudinal range from 35° to 70°. The dataset only includes subterranean sites for which spider fauna is exhaustively known. For each site, the spider composition is represented as incidence data-presence/absence of each species.
Environmental gradients
We collated a site-by-environment matrix including local-scale environmental characteristics of each cave and broad-scale variables extracted from rasters using the coordinates of the cave entrance. A full description of both local-and broad-scale variables is given in refs. (37,39).
As local-scale predictors, we used the altitude of the cave entrance (in meters a.s.l.), the main entrance size (a numerical estimation of the dimension of the main entrance in square meters), cave development (total planimetric development of the cave in meters), and cave depth (total drop in meters). These are frequently used variables in macroecological analyses focused on caves (56), which we here interpreted as proxies for local-scale conditions and niche space availability. For example, caves with a vertical drop and a large entrance tend to accumulate more external food resources (detritus) than horizontal caves with a very narrow entrance.
As broad-scale predictors, we included three climatic variables (mean annual temperature, annual temperature range, cumulative precipitation), one variable reflecting the availability of carbonatic rocks (karst), and one biogeographical factor (the distance of each cave to the margin of the glacier at the Last Glacial Maximum, ca. 21,000 years ago). We extracted climatic data from WorldClim 2 rasters (57) at a resolution of 2.5 minutes. Although they may fall short of capturing microclimatic variability within caves (58), these broad-scale variables are good surrogates for general subterranean climatic conditions (59-62). We extracted the size of the karst patch in which a cave occurs using the World Map of Carbonate Rock Outcrops (version 3.0). Given that most locations in our database were karst caves, we interpreted this variable as a good proxy of habitat availability in the surroundings of each cave, and an indirect measure of habitat connectivity (45,46). Finally, we derived the distance of each cave from the Last Glacial Maximum glacier from reconstructions by ref. (63). We interpreted this as a proxy for the influence of past glacial cycles on the current distribution of subterranean species (64,65).
Functional traits
For each spider species included in the database, we derived functional traits from ref. (38) , which we refer to for a full description of the trait data matrix and data collection methods. For the purposes of the analysis, we selected a subset of traits from the whole trait matrix, representing: i) general morphology of species; ii) morphological adaptation to subterranean conditions; and iii) webs and hunting strategies (Figure 1). To ensure exact matching between the spider species names in the community and trait matrices, we standardized and updated taxonomy using the function checknames in the R package 'arakno' version 1.1.1. (66).
Data analysis
We analyzed data in R version 4.1.2 (67), using the suite 'tidyverse' (68) for data manipulation and visualization. In all functional diversity analyses, we followed the general analytical pipeline described in ref. (69), and the protocol for transparent reporting by ref. (70). A reproducibility checklist for the study is available in Table S1. Since functional analyses were computationally demanding, we ran all analyses in high-performance computing services (see "Acknowledgments").
Data exploration
We carried out data exploration following refs. (70,71), checking variable distributions, multicollinearity, and the presence of missing data (Figure 1). As a result of data exploration, we standardized all continuous traits (mean = 0 and standard deviation = 1) to ensure comparable ranges among different traits. In the environmental matrix, we checked variable distributions and log-transformed all numerical variables (except coordinates, annual temperature range, and mean temperature) to homogenize distributions and reduce the effect of outliers. None of the predictors showed pairwise correlations exceeding Pearson's |r| = 0.7 (71).
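To make these preprocessing steps concrete, a minimal sketch is shown below. It is not the authors' R code: the variable names and synthetic values are stand-ins, and only the transformations named above (log-transformation, standardization, and the |r| > 0.7 screen) are reproduced.

```python
# Sketch of the exploration steps described above (synthetic data, hypothetical names).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
env = pd.DataFrame({
    "longitude": rng.uniform(-10, 30, 50),
    "latitude": rng.uniform(35, 70, 50),
    "temp_mean": rng.normal(10, 3, 50),
    "temp_range": rng.normal(20, 4, 50),
    "entrance_size": rng.lognormal(2, 1, 50),      # skewed local-scale predictors
    "cave_development": rng.lognormal(4, 1, 50),
})

# Log-transform skewed numerical predictors (coordinates and temperatures excluded)
for col in ["entrance_size", "cave_development"]:
    env[col] = np.log(env[col])

# Standardize to mean 0, sd 1 (continuous traits receive the same treatment)
scaled = (env - env.mean()) / env.std()

# Check pairwise collinearity against the |r| > 0.7 threshold used in the text
corr = scaled.corr()
print(corr.round(2))
```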
Probabilistic hypervolumes have two key advantages over other commonly used trait-space characterizations [e.g., dendrograms (75) or convex hulls (76)]: i) they allow the detection of areas of higher or lower density in the trait space, thus representing uneven probabilities of finding a species with a given trait combination throughout the boundaries of the trait space; and ii) they are less sensitive to outliers than convex hulls (74,77).
Prior to analyses, we filtered caves with less than three species because these might lead to uninformative trait spaces, resulting in a total sample size of 367 caves. Since the trait matrix was a mixture of continuous, binary, and categorical traits, and contained some missing data for certain traits, we used a Gower distance to estimate trait dissimilarity among species (78).
Because different traits were broadly associated with different functional meanings, we used the optimization method by ref. (79) to attribute weight to traits within the three groups of variables (column "grouping" in Figure 1). We analyzed the resulting distance matrix through Principal Coordinate Analysis with the R package 'ape' version 5.5.0 (80), extracting three orthogonal axes that we used to delineate the probabilistic hypervolumes for each cave. Using three trait axes ensures a good trade-off between accuracy and computation time (9,81). We constructed hypervolumes with a Gaussian kernel density estimator and a default bandwidth for each axis (82), as implemented in the function hypervolume_gaussian in the package 'hypervolume' version 3.0.1 (83).
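The pipeline from the mixed trait matrix to per-cave trait spaces can be sketched roughly as follows. This is not the authors' R workflow ('ape' and 'hypervolume'); the traits are invented, the Python 'gower' package and a plain Gaussian kernel density stand in for the original tools, and the trait-weighting step is omitted.

```python
# Approximate analogue of: Gower distance -> PCoA (3 axes) -> Gaussian trait space.
import numpy as np
import pandas as pd
import gower                      # pip install gower
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
traits = pd.DataFrame({           # hypothetical mixed species-by-trait matrix
    "body_size": rng.lognormal(1, 0.4, 30),
    "eye_reduction": rng.choice(["none", "partial", "full"], 30),
    "web_type": rng.choice(["sheet", "orb", "none"], 30),
})

# 1. Gower dissimilarity among species (handles mixed trait types)
d = gower.gower_matrix(traits).astype(float)

# 2. Classical PCoA: double-centre the squared distances, eigendecompose,
#    keep the first three axes (as in the study)
n = d.shape[0]
j = np.eye(n) - np.ones((n, n)) / n
b = -0.5 * j @ (d ** 2) @ j
vals, vecs = np.linalg.eigh(b)
order = np.argsort(vals)[::-1][:3]
axes = vecs[:, order] * np.sqrt(np.clip(vals[order], 0, None))

# 3. Gaussian kernel density over the trait axes of one hypothetical community,
#    a crude stand-in for a probabilistic hypervolume
community = axes[:10].T           # 3 x n_species array for the first 10 species
kde = gaussian_kde(community)
print(kde(community).round(3))    # density at each member's position in trait space
```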
Calculation of α-and β-diversity
We measured the properties of the estimated trait spaces using hypervolume-based functions (74) from the R package 'BAT' version 2.7.1 (84,85). We calculated the functional richness of each community (α-diversity) as the total volume of each hypervolume (kernel.alpha function).
We estimated pairwise functional β-diversity among communities as a Sørensen dissimilarity index, calculated through a modified version of the kernel.beta function that enables parallel estimation of pairwise comparisons (9). This estimation further decomposes overall dissimilarity among hypervolumes (βtotal) into the two processes underlying it, following ref. (86): the replacement of trait space between communities (βreplacement), and the net difference in the amount of trait space enclosed by the two communities (βrichness). β-diversity ranges from 0 (identical trait spaces) to 1 (fully dissimilar trait spaces).
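For orientation, the replacement/richness partition of total dissimilarity can be written out explicitly. The sketch below follows the Carvalho-type family of measures on which this decomposition is based, written in a Sørensen-style form; whether this matches the exact kernel-based variant implemented in 'BAT' is an assumption, and the shared and unique volumes are placeholder numbers.

```python
# Sketch of a replacement/richness partition of total dissimilarity between two
# trait spaces, given their shared and unique volumes (placeholder values).
def beta_partition(shared, unique_1, unique_2):
    a, b, c = shared, unique_1, unique_2
    denom = 2 * a + b + c
    beta_total = (b + c) / denom
    beta_repl = 2 * min(b, c) / denom          # replacement of trait space
    beta_rich = abs(b - c) / denom             # net difference in trait-space size
    return beta_total, beta_repl, beta_rich

total, repl, rich = beta_partition(shared=0.6, unique_1=0.3, unique_2=0.1)
print(total, repl, rich, "additive:", abs(total - (repl + rich)) < 1e-12)
```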
Null modeling
Estimations of functional diversity are dependent on the taxonomic structure of the communities. Statistically controlling for this association may reveal the actual degree of importance of trait composition to community patterns (69,87). To this end, we randomized the rows of the trait matrix 999 times to generate a null distribution of each hypervolume-based trait space. For each random iteration, we calculated all α-and β-diversity measures (see the next section). We expressed the deviation of observed values from the null distribution as standard effect sizes (SES). Because standardized effect sizes may lead to biased conclusions if null values show an asymmetric distribution or deviate from normality, we estimated non-parametric effect sizes using probit transformed p-values (12). Probit transformation is used as an alternative to logit transformation in generalized linear models to transform probabilities into the minus-infinity-to-infinity range (88). This approach is known to partially underestimate the effect size when the observed value is completely outside the null distribution; however, this problem was trivial in our case, as none of our observed values fell outside the null distribution (that is, p-value = 0 or 1).
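A compact sketch of this null-model procedure is given below: trait rows are shuffled, the metric is recomputed, and the observed value is expressed as a probit-transformed, rank-based effect size. The diversity metric used here is a deliberately trivial stand-in, not the hypervolume-based functional richness of the actual analysis.

```python
# Null-model sketch: shuffle trait rows, recompute the metric, and express the
# observed value as a probit-transformed (non-parametric) standard effect size.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
traits = rng.normal(size=(120, 3))          # species x trait-axis scores
community = rng.choice(120, size=15, replace=False)

def functional_richness(trait_axes):
    # stand-in metric: product of trait ranges within the community
    return np.ptp(trait_axes, axis=0).prod()

observed = functional_richness(traits[community])

null = np.empty(999)
for i in range(999):
    shuffled = traits[rng.permutation(len(traits))]   # randomize trait rows
    null[i] = functional_richness(shuffled[community])

# Non-parametric effect size: rank-based p-value mapped through the probit
p = (np.sum(null < observed) + 0.5) / (len(null) + 1)
ses = norm.ppf(p)       # < 0 suggests underdispersion, > 0 overdispersion
print(f"observed={observed:.2f}, SES={ses:.2f}")
```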
Hypothesis testing
To test our first set of hypotheses on α-diversity patterns (H1), we modeled the relationship between SES values for functional richness (α-diversity) and all local- and broad-scale environmental characteristics of each cave using a generalized least squares model fitted with the package 'nlme' version 3.1-157 (89). To account for spatial autocorrelation, we introduced an exponential correlation structure on the longitude and latitude coordinates of each cave. Prior to model fitting, we standardized all predictors (mean = 0 and standard deviation = 1) to ensure comparability among effect sizes. We validated the model by inspecting the normality of residuals, heteroskedasticity, and the degree of collinearity (71).
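As a rough analogue of this model (the authors used nlme's gls with an exponential correlation structure), a generalized least squares fit with exponential spatial correlation can be sketched as below; unlike the original, the correlation range is fixed by assumption rather than estimated, and the data are synthetic.

```python
# Rough analogue of a GLS with exponential spatial correlation on cave coordinates.
import numpy as np
import statsmodels.api as sm
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
n = 200
coords = rng.uniform(0, 1000, size=(n, 2))              # cave coordinates (km)
X = sm.add_constant(rng.normal(size=(n, 3)))             # standardized predictors
ses_fr = X @ np.array([0.2, -0.5, 0.1, 0.3]) + rng.normal(scale=0.5, size=n)

# Exponential correlation: corr(i, j) = exp(-d_ij / range)
range_km = 250.0                                          # assumed, not estimated
dist = squareform(pdist(coords))
sigma = np.exp(-dist / range_km)

model = sm.GLS(ses_fr, X, sigma=sigma).fit()
print(model.params, model.bse)
```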
To test our second set of hypotheses on beta diversity pattern (H 2 ), we used a Bayesian bootstrap extension of generalized dissimilarity modeling (BBGDM), as implemented in the R package 'bbgdm' version 1.0.1 (90). Generalized dissimilarity modeling is a matrix regression technique that incorporates variation in the rate of compositional turnover along an environmental or spatial gradient (non-stationarity) in a monotonic nonlinear fashion (91,92).
Because the elements of a dissimilarity matrix are not fully independent, BBGDM uses a Bayesian bootstrap procedure to correct the uncertainty of model parameters (90). We used the predictors and the functional β-diversity matrices as input. We fitted individual BBGDMs for the three functional β-diversity matrices (βtotal, βreplacement, and βrichness) with default parameters of three I-splines for each predictor and default knot values.
Because we ran BBGDM for both the actual β-diversity matrices and the β-diversity matrices resulting from null trait matrices (see section "Null modeling"), several metrics of SES could be derived to address different questions. Here, we tested whether a given variable had a stronger or weaker effect than what would be expected given taxonomic richness by extracting the sum of spline coefficients for each variable in the 999 BBGDMs. This way, we generated a null distribution of model coefficients which could further be tested using non-parametric SES. The sum of spline coefficients of each variable describes the total change in β-diversity promoted by a single predictor holding all other predictors constant. Furthermore, we tested whether the SES of a predictor changes along the environmental or geographic gradient.

Figure legend (panels b and c): b) These panels provide information as to whether the effect of a given variable is higher or smaller than expected given taxonomic composition. c) Variation in the magnitude of the standard effect size (SES) value along the observed gradient, i.e., whether the effect of a given variable in determining trait dispersion changes along the gradient. In b and c, significant effects (Rank < 0.025 or > 0.975) are highlighted with a darker purple.
Jet Quenching and Cronin Enhancement in A+A at s^1/2=20 vs 200 AGeV
The sensitivity of semi-hard (p_\perp<10 GeV) hadron production to parton energy loss in high energy nuclear collisions is studied via the HIJING1.35 model. We test the model on recent WA98 data on 160 AGeV Pb+Pb -> pi^0 up to 4 GeV/c; while these data are reproduced, the results depend sensitively on the model of the Cronin effect. At (RHIC) collider energies (s^1/2>200 AGeV), on the other hand, semi-hard hadron production becomes insensitive to the above model and is thus expected to be a cleaner probe of the jet quenching signature associated with non-Abelian radiative energy loss.
Preliminary WA98 Pb + Pb → π0 data [1] in the p⊥ ≈ 2−4 GeV range were analyzed recently by Wang [2] via a parton model. The Leading Log Approximation (LLA) parton model [3] was shown to reproduce the observed π0 invariant inclusive cross sections simultaneously in p + p, S + S [4], and central Pb + Pb [1] in the CERN/SPS energy range, √s < 20 AGeV. What is remarkable about that analysis is the implied absence of quenching of high transverse momentum hadrons that should be observed if partons lose energy in dense matter [5,6]. The S + S and Pb + Pb data clearly reveal the expected Cronin enhancement [7] of moderate p⊥ pions, as seen first in p+A. The pp data also confirm the need to supplement the LLA with non-perturbative intrinsic k⊥ [8,9]. However, the Pb + Pb data show no sign of the jet quenching expected at collider energies [6].
In central heavy ion reactions at the SPS, conservative estimates of the initial energy density suggest that ε(1 fm/c) ≈ 1−5 GeV/fm³ is reached. The gluon radiative energy loss of partons in such dense matter is expected to exceed dE/dx ∼ 1 GeV/fm [5]. Recent theoretical analysis [10] predicts in fact a much larger non-linear energy loss, ΔE ∼ (Δx)² GeV/fm², if a parton traverses a quark-gluon plasma of thickness Δx. On the other hand, the WA98 data seem to rule out dE/dx > 0.1 GeV/fm, as Wang showed in Ref. [2]. In this work, we consider this problem using the nuclear collision event generator, HIJING [11,12].
Recall that in the LLA, pQCD predicts that the single inclusive hadron cross section is given by [3,9]

$$\frac{d\sigma^{AB\to hX}}{dy\,d^2p_\perp} = K \sum_{abcd} \int dx_a\, dx_b\, d^2\kappa_a\, d^2\kappa_b\; g(\vec{\kappa}_a)\, g(\vec{\kappa}_b)\, f_{a/A}(x_a,Q^2)\, f_{b/B}(x_b,Q^2)\, \frac{D_{h/c}(z_c,Q^2)}{\pi z_c}\, \frac{d\sigma}{d\hat{t}}(ab\to cd). \qquad (1)$$

This formula convolutes the elementary pQCD parton-parton cross sections, dσ(ab → cd), with non-perturbative lowest order fits of the parton structure functions, f_{a/A}, and the parton fragmentation functions, D_{h/c}, to ep and e+e− data. Here κ_a, κ_b are the intrinsic transverse momenta of the colliding partons, distributed according to g. The model includes a K ≈ 2 factor to simulate next-to-leading order corrections [13] at a hard scale Q ∼ p_T/z_c. The scale dependence of the structure and fragmentation functions accounts for the multiple soft collinear radiative effects. However, below energies √s < 100 GeV, it is well known [8] that the LLA significantly underpredicts the p⊥ < 10 GeV cross section, and additional non-perturbative effects must be introduced to bring the LLA into agreement with data. Unfortunately, as emphasized in Ref. [3], the results are then quite model dependent below collider (RHIC) energies. As we show below, the good news is that this model dependence is reduced significantly at collider (RHIC) energies.
In spite of the inherent ambiguity of the parton model analysis at SPS energies, a successful phenomenological approach to this problem has been developed via the introduction [9] of intrinsic transverse momenta of the colliding partons, as in (1). Originally, a Gaussian form for that distribution with k²⊥ ∼ 1 GeV² was proposed in Ref. [9]. However, in order to reproduce pp data more accurately and to take into account the Cronin effect, the Gaussian ansatz was generalized in Ref. [2] to include Q² and A dependence. With g(κ) → g_a(κ, Q², A), excellent fits [2] to the WA98 data could be obtained assuming a factorized Gaussian distribution with a Q²- and A-dependent width (Eq. (2)), where Q² is measured in (GeV/c)². In (2), σ_pp t_A(b) is the average number of inelastic scatterings a nucleon suffers traversing nucleus A at impact parameter b. The nuclear thickness function, t_A(b), is normalized as usual to ∫d²b t_A(b) = A. Eq. (2) is the main source of the model dependence in Ref. [2]. The HIJING1.35 Monte Carlo model [11,12] incorporates pQCD jet production together with initial and final state radiation according to the PYTHIA algorithm [14]. In addition, it incorporates a variant of the soft string phenomenology similar to the Lund/FRITIOF [15] and DPM [16] models to simulate beam jet fragmentation and jet hadronization physics. Low transverse momentum inelastic processes are of course highly model dependent, and the parameters must be fit to pp and AB data [11,12]. It is of interest to apply HIJING to the present study because it incorporates, in addition to the above soft and hard dynamics, a model of soft (Cronin) multiple initial state collision effects as well as a simple jet quenching scheme. With these features, we are able to study how competing aspects of the reaction mechanism influence hadronic observables and explore the magnitude of theoretical uncertainties.
In the HIJING model, excited baryon strings are assumed to pick up random transverse momentum kicks in each inelastic scattering according to a parameterized soft-kick distribution, Eq. (3), where p₁ = 0.1, p₂ = 1.4, and p₃ = 0.4 GeV/c were chosen to fit low energy (p⊥ < 1 GeV/c) multiparticle production data in Refs. [11,12]. A flag, IHPR2(5) (=1 or 0), makes it possible to compute spectra with and without this effect, as shown in part (a) of Figs. 1-3. The present study is the first test of this model up to 4 GeV/c in the SPS energy range. Jet quenching is modeled via gluon splitting according to the number of mean free paths, λ = 1 fm, traversed by a gluon through the cylindrical nuclear reaction volume. In each partonic inelastic interaction a gluon of energy ΔE = Δx dE/dx is assumed to be split off the parent jet and incorporated as a kink in another baryonic string [11]. The (constant) energy loss per unit length is an input parameter (HIPR1(14) in HIJING [11]) and can be switched on and off via IHPR2(4) (=1 or 0) to test the sensitivity of the spectra to jet quenching, as shown in Figs. 1-3. Figure 1 compares the predictions of HIJING1.35 [11,12] without jet quenching (dE/dx = 0) for the invariant π0 cross section in central nuclear collisions at SPS and RHIC energies. The cross sections for central A + A collisions are computed by integrating over impact parameters up to b_max, chosen to reproduce the experimental trigger cross sections. For WA98 Pb + Pb and RHIC Au + Au we took b_max = 4.5 fm, while for WA80 S + S we took b_max = 3.4 fm. The multiple collision eikonal geometry in HIJING is based on standard Wood-Saxon nuclear densities.
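The gluon-splitting energy loss scheme sketched in the paragraph above can be illustrated with a short toy calculation; this is not HIJING code, and the path length, mean free path, and dE/dx values are simplified assumptions.

```python
# Toy version of the constant-dE/dx gluon-splitting scheme described above.
# Not HIJING: the path length, mean free path and dE/dx are simplified inputs.
import numpy as np

def quench(jet_energy, path_length, mean_free_path=1.0, dEdx=1.0, rng=None):
    """Split off a gluon of energy dx * dE/dx at each interaction point."""
    if rng is None:
        rng = np.random.default_rng()
    x, radiated = 0.0, []
    while x < path_length and jet_energy > 0:
        dx = rng.exponential(mean_free_path)          # distance to next interaction
        if x + dx > path_length:
            break
        de = min(dx * dEdx, jet_energy)               # energy of the split-off gluon
        radiated.append(de)
        jet_energy -= de
        x += dx
    return jet_energy, radiated

final, gluons = quench(jet_energy=10.0, path_length=5.0)   # GeV, fm
print(f"final jet energy {final:.2f} GeV after radiating {len(gluons)} gluons")
```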
The parton model fit to the WA98 data is labeled 'Wang', from Ref. [2]. (The normalization of both the WA98 data and Wang's latest calculations (not shown) has increased ∼20−40% relative to [2].) We note that the HIJING1.35 calculation for this interaction, given by the solid jagged line, also reproduces remarkably well the π0 invariant cross sections without jet quenching. However, for the lighter S + S reaction, HIJING underestimates the p⊥ > 1 GeV/c tail significantly. This error is traced to the failure of the model to reproduce the pp high p⊥ data at these energies, in contrast to its successful account of higher energy data [11]. This can be seen by comparing the filled squares to the dot-dashed curves, as explained below. Therefore, we find that the agreement with the WA98 data is accidental, and the observed A scaling of the high p⊥ region at SPS energies is not reproduced. Fig. 1a shows that the soft transverse momentum kick model is the source of the agreement of HIJING with the Pb + Pb data. The dot-dashed curves show what happens if the soft p⊥ kicks modeled with Eq. (3) are turned off. The very strong decrease of the pion yield in the Pb + Pb case, and the somewhat smaller but still large decrease in the S + S case, clearly show the important role of multiple transverse kicks at these energies. In fact, both the S + S and Pb + Pb dot-dashed curves are found to coincide with the calculated pp → π differential cross section scaled by the wounded nucleon factor W_A σ_AA/σ_NN, where W_A = 21, 172 is the average number of wounded projectile nucleons and σ_AA = 32, 363, 636 mb for A = 1, 32, 207.
On the other hand, the data closely follow the shape of the measured pp → π+ data taken from [2] and scaled to AA by multiplying by the Glauber binary collision number factor, T_AA σ_AA. From HIJING, the average number of binary collisions in these systems is 45 and 751, respectively. The fact that the WA98 data scale with the above Glauber factor within a factor of three suggests that the additional p⊥ broadening due to initial state collisions is relatively small. The very large A dependence of the HIJING p⊥ tail at SPS energies is due to the A^{1/3}-fold convolution of the distribution (3). This problem is avoided in the parton model calculation [2] using (2), through the separation of larger intrinsic momentum effects and smaller A^{1/3}-dependent contributions.
We conclude that the missing intrinsic transverse momentum component of HIJING precludes an accurate simultaneous reproduction of the pp, SS, and PbPb data at SPS energies. However, unlike the parton model, where no global conservation laws have to be enforced, it is not clear how to incorporate intrinsic momenta in a global event generator like HIJING without destroying the satisfactory reproduction of low p⊥ < 1 GeV/c data. We do not attempt to solve this problem here.
Our main point is that this problem goes away fortunately at higher (RHIC) collider energies ( √ s = 200 AGeV). As seen in Fig.1a) the effect of multiple soft interactions is very much reduced at that energy. This is due to the well known effect that as the beam energy increases, the p ⊥ spectra become harder and additional p ⊥ smearing from initial state effects becomes relatively less important. It is the extreme steepness of the cross sections at SPS energies that amplifies so greatly the sensitivity of the moderate p ⊥ yields to this aspect of multiple collision dynamics. In Fig. 1b, we consider next the sensitivity of the pion yields to the sought after parton energy loss dynamics. The striking difference between Figs. 1a and 1b is that in 1b the SPS yield is not sensitive to the energy loss model in HIJING, while at RHIC energies the suppression of semi-hard hadrons is seen to be sensitive to jet quenching as predicted in Ref. [6]. This seems counter intuitive at first because the increasing steepness with decreasing beam energy is naively expected to result in greater quenching for a fixed jet energy loss.
However, in this model the observed p ⊥ range is dominated by multiple soft collisions. The moderate p ⊥ < 5 GeV quarks, which fragment into the observed pions, are not produced in rare pQCD semi-hard interactions but are gently nudged several times into that p ⊥ range. Since in HIJING jet-quenching is restricted to only those partons that suffer a semi-hard pQCD interaction with p ⊥ at least p 0 = 2 GeV/c (the mini-jet scale), no quenching arises at this energy.
The above conclusions are further clarified in Figs. 2 and 3, where the effects of p ⊥ kicks and jet quenching are shown for valence quarks and diquarks and gluons prior to hadronization. In Figure 2a, the valence quark plus diquark distribution, dN val /dp ⊥ , is seen to be enhanced by two orders of magnitude due to the assumed Cronin mechanism at SPS energies whereas they are enhanced only by ∼ 50% at RHIC energies. In Fig.2b, on the other hand, quark jets are seen to be suppressed by an order of magnitude at RHIC energies, while at SPS energies no quenching arises because only about < 1% of the 5 GeV/c quarks are produced directly by hard pQCD processes. Figure 3a shows that the (negligible) gluon component of partons undergoing hard processes at SPS energies are in fact quenched by an order of magnitude just as at RHIC energies, but because they represent such a small fraction of the produced partons, their effect on the final hadron observables is negligible. Note the three order of magnitude rise of the mini-jet gluon component going from SPS to RHIC energies in Fig. 3b. In that Figure, the minimum momentum p 0 = 2 GeV/c assumed for jet quenching is seen by the rapid drop of the gluon spectra beyond p 0 . We see that the ratio of valence quarks to gluons after quenching at RHIC energies remains ∼ 2/1 for mini-jets up to p ⊥ ∼ 10 GeV/c.
In conclusion, we found that HIJING1.35 fits the WA98 data with or without jet quenching. However, this model fails to account for the weak A dependence of the data that scale approximately with the Glauber T AA factor, and therefore the fit must be viewed as accidental. We note that the current WA98 data can also be reproduced by entirely soft physics models utilizing hydrodynamic equations [1,17], which unfortunately make no prediction for the A dependence. On the other hand, another Monte Carlo parton cascade model [18] overpredicted the π 0 cross section at 4 GeV/c by a factor of 100! Those results do not prove the validity of hydrodynamics nor the absence of parton cascading, but emphasize the strong model dependence of the SPS spectra due to non-perturbative aspects of the problem. Those aspects, while phenomenologically interesting, make it difficult to identify perturbative QCD phenomena and search for jet quenching.
The main point of this work, is not to improve the HIJING soft p ⊥ phenomenology, but to emphasize the good news that at collider energies there is much less sensitivity to the above uncertain element of the reaction mechanism. At √ s > 100 AGeV the expected [6] jet quenching signature should be readily observable in the p ⊥ ∼ 10 GeV range. Such experiments will commence in 1999 at RHIC and should provide important tests of the theory of non-Abelian multiple collisions and energy loss [5,10]. [4] (triangles) and the preliminary WA98 P b + P b data [1] (dots) are compared to HIJING1.35 [11] with soft p ⊥ kicks (full lines) and without p T kicks (dot-dashed curves). The later scale with the wounded projectile number times σ AA times the invariant distribution calculated for pp. The parton model curve from Ref. [2] is labeled by 'Wang'. The filled squares show pp → π + data scaled by the (Glauber) number of binary collisions times σ AA for both SS and P bP b. b) Jet quenching, predicted at RHIC energies [6], is not significant at SPS energies in the HIJING model.
Effect of the Stimulus Duration on the Adaptation of the Optokinetic Afternystagmus
Observing a rotating visual pattern covering a large portion of the visual field induces optokinetic nystagmus (OKN). If the lights are suddenly switched off, optokinetic afternystagmus (OKAN) occurs. OKAN is hypothesized to originate in the velocity storage mechanism (VSM), a central processing network involved in multi-sensory integration. During a sustained visual rotation, the VSM builds up a velocity signal. After the lights are turned off, the VSM discharges slowly, with OKAN as the neurophysiological correlate. It has been reported that the initial afternystagmus in the direction of the preceding stimulus (OKAN-I) can be followed by a reversed one (OKAN-II), which increases with stimulus duration up to 15 min. In 11 healthy adults, we investigated OKAN following optokinetic stimuli lasting 30 s, 3, 5, and 10 min. Analysis of slow-phase cumulative eye position and velocity found OKAN-II in only 5/11 participants. Those participants presented it in over 70% of their trials with longer durations, but only in 10% of their 30 s trials. While this confirms that OKAN-II manifests predominantly after sustained stimuli, it suggests that its occurrence is subject-specific. We also did not observe further increases with stimulus duration. Conversely, OKAN-II onset occurred later as stimulus duration increased (p = 0.02), while OKAN-II occurrence and peak velocity did not differ between the three longest stimuli. Previous studies on OKAN-I used negative saturation models to account for OKAN-II. As these approaches have no foundation in the OKAN-II literature, we evaluated whether a simplified version of a rigorous model of OKAN adaptation could be used in humans. Slow-phase velocity following the trials with 3-, 5-, and 10-min stimuli was fitted with a sum of two decreasing exponential functions with opposite signs (one for OKAN-I and one for OKAN-II). The model assumes separate mechanisms for OKAN-I, representing VSM discharge, and OKAN-II, described as a slower adaptation phenomenon. Although the fit was qualitatively imperfect, this is not surprising given the limited reliability of OKAN in humans. The estimated adaptation time constant seems comparable to the one describing the reversal of the vestibulo-ocular reflex during sustained rotation, suggesting a possible shared adaptive mechanism.
INTRODUCTION
In healthy individuals, vision remains stable during head and body motion. Stabilization of gaze while moving is achieved through reflexive eye movements that compensate for head motions. These reflexive responses are driven by the integration of several sensory inputs, principally vestibular and visual. While the vestibular system reacts to rapid, high frequency head motions (angular velocity and linear acceleration are directly sensed by the organs in the inner ear), the optokinetic system extracts information on head motion from the observed scene.
The optokinetic system is stimulated by any coherent movement of a large portion of the visual scene. Accordingly, to test the optokinetic system in laboratory conditions, the stimulation is usually performed by horizontally rotating a large, patterned drum around a stationary individual. A person exposed to this stimulation experiences a sensation of self-rotation called circularvection, even though there is no physical motion and no peripheral vestibular stimulation (1,2). The eyes show a nystagmic response that consists of slow phases drifting in the same direction as the moving stimulus and quick phases in the opposite direction (3).
In humans, as in any foveated animal, both the smooth pursuit system (responsible for maintaining the image of moving target on the fovea) and optokinetic system contribute to the ocular motor response elicited by a large moving pattern (3). Although the whole ocular motor response is usually called optokinetic nystagmus (OKN), the actual response of the optokinetic system is only the reflexive eye movement driven by the coherent motion of large portions of the visual scene on the retina (retinal slip). This mechanism has been described in afoveated animals, where smooth pursuit is not present: OKN builds up slowly, even when the visual stimulus is abruptly accelerated. In humans, however, the eye velocity rises quickly at the beginning of optokinetic stimulation, as the smooth pursuit system rapidly catches up with details of the moving pattern (e.g., stripes of an optokinetic drum). The optokinetic system slowly takes over part of the nystagmic response during sustained stimulations (3). The proportion of the optokinetic system or the smooth pursuit contributing to the gain cannot be established by observing this response only.
An important property of the optokinetic system is the persistence of the response after the stimulus has ended. When an optokinetic stimulus is replaced by total darkness, the nystagmus continues in the same direction. The initial eye velocity reflects the persisting action of both the optokinetic system and smooth pursuit, but the latter is largely gone within 1 or 2 s, causing a characteristic rapid drop of the slow-phase eye velocity (SPEV) of the nystagmus (3). The residual nystagmus, called optokinetic afternystagmus (OKAN) and first described by Ohm (4), continues with a slowly declining SPEV. The initial velocity of OKAN is often measured as soon as smooth pursuit stops and has peak values of around 10 deg/s (5,6). The rate of decline of OKAN SPEV can be measured by fitting the data with a negative exponential curve to determine the time constant. Reported time constant values range considerably, from around 5 s to nearly 50 s (7-11). It is currently accepted that, during the stimulation period, the optokinetic system effectively acts through the velocity storage mechanism (VSM) (3). The VSM is a central integrative network that plays a role in the integration of multisensory rotational stimuli (12,13) and presumably involves the medial and superior vestibular nuclei (3,14-17). The VSM can be regarded as a neural integrator that receives input from the vestibular and visual systems and controls the slow dynamics and the compensatory eye velocity of the vestibular and optokinetic reflexes. The OKAN response is considered to be a direct manifestation of the VSM integrator (12,18), where the decay rate of SPEV is controlled by the time constant of the VSM integrator (a high value of the time constant indicates slow decay in SPEV). The relation between OKAN and vestibular nystagmus through the VSM was confirmed by animal studies: in monkeys, the above-mentioned vestibular nuclei neurons that respond to head rotation are also stimulated by optokinetic stimuli. When the lights are turned off after optokinetic stimulation, the vestibular nucleus neurons continue discharging for some more seconds; this is the neurophysiological correlate for OKAN (3,19).
There are different theories on how OKAN-II is generated. A general hypothesis has been proposed by Brandt et al. (20). They suggested that the optomotor response is determined by two opposite tendencies: the rapidly fading positive optokinetic charge and the much longer lasting habituative countercharge. As long as the first outweighs the second, positive OKAN (OKAN-I) is observed. At an intermediate period, the two mechanisms may balance each other (nystagmus disappears) until, with further decay of the positive charge, the counter-regulation determines the onset of OKAN-II (20). In the same study it was also suggested that prolonged optokinetic stimulation may cause ocular motor and perceptual aftereffects similar to those observed in per-rotatory or post-rotatory nystagmus (20). They also reported that background movement of the whole visual field when the eyes are fixed on a stationary target in the foreground facilitates occurrence of an observable OKAN-II (20). This evidence was viewed as a possible indication that OKAN-II is caused by a central nervous adaptation driven by the motion of the visual field on the retina during a period of stimulation, as suggested by Chen et al. and by König et al. (30,31). On the contrary, it has been proposed that OKAN-II may be related to ocular motor adaptation induced by continued eye tracking. This was shown in a patient with Infantile Nystagmus Syndrome (INS), who lacks a normal smooth-pursuit function and therefore cannot follow the stripes. The afternystagmus of the patient in darkness was beating in the direction opposite to the preceding OKN and was interpreted as an unmasked negative OKAN. This interpretation assumes that, in healthy controls, the negative OKAN is probably masked by the afternystagmus of smooth pursuit (32). These hypotheses were not formalized in mathematical models or diagrams. A different approach was presented by Furman et al. (33). They adapted established VOR-OKAN models to account for a reversal of SPEV, showing that both a peripheral adaptation operator and a central adaptation operator can account for OKAN-II. A similar reversal also occurs during vestibular nystagmus in response to sustained angular rotation (34,35).
Although OKAN-II has been reported in humans, quantitative information specifying its occurrence, strength, and time dynamics is scarce and contrasting. This is not surprising as OKAN itself is known to be weaker in humans than in other species and thus difficult to quantify consistently. Koenig and Dichgans (31) reported no OKAN-II in four humans tested with optokinetic stimuli at 30, 60, 90, 120, and 180 deg/s for 1 min. Brantberg (36), who defined OKAN-II as present if the SPEV inversion reached 2 deg/s, reported occurrence of OKAN-II in 3 out of 256 trials in 16 participants (optokinetic stimuli at 60 and 90 deg/s for 1 min). Interestingly, all instances of OKAN-II were observed in the same participant. Nooij et al. (11), using the same criteria and a stimulus velocity of 60 deg/s, described it in 4 out of 13 participants. In contrast, data presented by Brandt et al. (20) suggest that OKAN-II is frequently present in humans already after 1 min of optokinetic stimulation. Their data also demonstrated a dependency of both OKAN-I and OKAN-II on the duration of the optokinetic stimuli. Although they present the most extensive description of OKAN-II in humans, only mean values of total amplitude were reported, without dispersion metrics or details of the occurrence of OKAN-II across participants. Although other studies (7, 37) mentioned the occurrence of OKAN-II in humans, quantitative observations are missing.
Understanding the specifics of OKAN-II in humans can be important to evaluate the potential impact of OKAN-II on the assessment of OKAN-I. While, as of now, OKAN has no direct diagnostic use, a clinical potential has been discussed in many studies (38)(39)(40)(41)(42)(43)(44).
Understanding OKAN-II is also important to design studies that would allow evaluating the applicability of the model of Furman et al. (33) in humans, including extension to perceptual aftereffects and adaptation to prolonged optokinetic stimuli. This is particularly relevant considering re-emerging interest in OKAN with respect to investigating vection in virtual reality and the potential risk for visually induced motion sickness (11,37). Recent studies either entirely neglected OKAN-II (37) or used descriptive models without a physiological background (7,11). Both approaches may also risk misestimating OKAN-I parameters (34). A parsimonious model that could be descriptive of human data while accounting for the considerations and the evidence summarized by Furman et al. (33) is indeed missing.
The current study aimed to objectively quantify the dependency of OKAN-II on the duration of the preceding optokinetic stimulation and to describe its occurrence and main features. In addition, we evaluated how a simple descriptive model, which considers OKAN-I and OKAN-II to be two independent phenomena, could account for OKAN-I and OKAN-II changes across stimulus durations.
Participants
Nineteen healthy human participants (7 males, 12 females, mean age 24 years, range 19-36 years) participated in the study. Informed consent of all participants was obtained in written form after full explanation of the experimental procedures. The protocol was approved by the ethics Committee of the Canton of Zurich, Switzerland (Protocol N • KEK-ZH-2012-0150) and was in accordance with the ethical standards laid down in the 2013 Declaration of Helsinki for research involving human participants.
Experimental Setup
Participants were comfortably seated upright on a chair mounted on a turntable system with two servo-controlled motor-driven axes (Tönnies D561, Freiburg i.Br., Germany; control system: Acutrol ACT2000, Acutronic, Switzerland Ltd.). The two independent motor-driven axes are coincident and earth-vertical. One rotates the chair and the other a cylinder (optokinetic drum, radius: 74 cm) mounted concentrically to the chair. Remotely controlled LEDs are attached to the cylinder at the level of the participant's eyes. Safety belts around the feet and the shoulders restrained the participant. An adjustable chin rest and a forehead strap were used to stabilize the participant's head.
Recording of Eye Movements
Horizontal eye movements were recorded at 220 Hz with a head-mounted video-oculography (VOG) device ("EyeSeeCam") (45) consisting of goggles with one infrared camera mounted over the left eye. A model of the eye rotation is used by the VOG system to derive the horizontal eye position from the pupil position recorded in the coordinate system of the camera. An additional offline calibration was performed to improve the accuracy. Before the beginning of the experiment, participants were asked to look at a grid of fixation points according to a sequence generated using the LEDs attached to the motorized cylinder. A second-order polynomial function was fitted to the corresponding eye angles provided by the VOG system for calibration, as in Bertolini et al. (46).
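As an illustration, this offline calibration step can be sketched as follows (Python rather than the MATLAB pipeline used in the study; variable names and the grid handling are simplified assumptions):

```python
import numpy as np

def fit_calibration(raw_angles, target_angles):
    """Fit a second-order polynomial mapping the horizontal eye angles reported
    by the VOG system (while fixating the LED grid) to the known LED angles."""
    return np.polyfit(raw_angles, target_angles, deg=2)

def apply_calibration(coeffs, raw_trace):
    """Apply the fitted polynomial to a recorded horizontal eye-position trace."""
    return np.polyval(coeffs, raw_trace)
```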
Optokinetic Stimulus
The inner wall of the drum was covered with 10 cm alternating black and white stripes, each subtending 7.7 deg of visual angle. The pattern filled the entire visual field including the retinal periphery. A small gray band at eye level, extending 8 deg vertically, was overlapped on the stripes. It served as reference for gaze to minimize foveal fixation.
Experimental Procedure
In all experiments, the optokinetic drum was accelerated in darkness to 30 deg/s about the earth-vertical axis while the chair was kept stationary. The lights inside the drum were suddenly turned on and the tested participant was asked to keep the gaze on the gray band without fixating. After a fixed time (30 s, 3 min, 5 min, or 10 min), the light was switched off to terminate the optokinetic stimulation, but the recording of eye movements continued until no nystagmus could be observed. To ensure that the participants remained focused and compliant during all stimuli, the examiner monitored the eye video and position trace in real time and encouraged them whenever changes in nystagmus behavior (or other signs of distraction) occurred.
The experimental procedure was divided in two subsequent phases. All 19 participants took part in the first phase. The drum was rotated at 30 deg/s for 30 s, inducing OKN. Two trials were performed with the drum rotating in clockwise direction and two in counterclockwise direction.
After the first phase, the OKN was analyzed to evaluate the quality of the participants' responses to optokinetic stimuli. Exclusion criteria were bad adherence (e.g., due to lack of concentration during the experiment), calibration problems and noisy data (e.g., due to repetitive blinking). Using these criteria, 11 participants were selected for the second phase. As no asymmetry is reported in normal participants (8), phase two was based on testing OKN in a single direction. Based on the results of phase one, a favorable direction was selected for each participant as the one with the stronger, more consistent and less noisy OKAN response. In the experiment of the second phase, the drum rotated at 30 deg/s in the participant's favorable direction for 3, 5, and 10 min, inducing OKN and vection. Two trials were performed for each duration and the sequence of the experiment was randomized. The trials were separated by a pause lasting a minimum of 60 s with the lights on and the drum not moving, allowing the discharge of any potential velocity storage activity.
OKAN Analysis
Only eye movements recorded after the light was turned off were considered in the analysis. These eye movement signals are expected to represent three phenomena: OKAN-I, OKAN-II, and the rapid drop of smooth pursuit. To obtain the SPEV, eye velocity was calculated as the derivative of eye position and saccades were removed using a median filter with a 2 s time window (MATLAB function: medfilt1). To address the rapid decay of the smooth pursuit, we excluded the first 2 s of eye movements after the light was turned off from all subsequent analysis. The sign of the eye movement signal was adjusted so that the OKN SPEV was positive. This implies that OKAN-I has a positive sign and OKAN-II a negative sign in all trials. Three independent analyses were performed: area-based analysis, SPEV analysis, and model-based analysis.
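A minimal sketch of this preprocessing, assuming a 220 Hz position trace and a ±1 stimulus sign, could look as follows (Python instead of the MATLAB routines used in the study):

```python
import numpy as np
from scipy.signal import medfilt

FS = 220.0  # sampling rate of the VOG device (Hz)

def extract_spev(eye_pos_deg, stimulus_sign, fs=FS):
    """Slow-phase eye velocity after light-off: differentiate position, suppress
    quick phases with a 2 s median filter, drop the first 2 s (smooth-pursuit
    decay), and flip the sign so that OKAN-I is positive and OKAN-II negative."""
    velocity = np.gradient(eye_pos_deg) * fs      # deg/s
    kernel = int(2 * fs) | 1                      # 2 s window, forced to an odd length
    spev = medfilt(velocity, kernel_size=kernel)
    spev = spev[int(2 * fs):]                     # discard the first 2 s
    return spev * stimulus_sign                   # stimulus_sign = +1 or -1 (assumption)
```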
Area-Based Analysis
The aim of this analysis was to quantitatively measure the occurrence of OKAN-II and to determine the time of transition between OKAN-I and OKAN-II (subsequently named OKAN-II onset). The cumulative area under the SPEV curve (47) was calculated for the first 60 s after light termination. A positive peak in the cumulative area under the SPEV implies that the SPEV crossed zero (i.e., a sustained positive SPEV followed by a sustained negative SPEV causes an increase of the cumulative area followed by a decrease). The first positive peak after which the cumulative area decreased steadily for 10 s was considered the OKAN-II onset. This made it possible to discard peaks caused by zero-crossings due to noise. The occurrence of OKAN-II and its onset were evaluated within and between participants and as a function of stimulus duration.
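The onset criterion can be illustrated with the following sketch; the exact peak test is a simplified reading of the rule described above:

```python
import numpy as np

def okan2_onset(spev, fs=220.0, window_s=10.0, max_s=60.0):
    """Detect OKAN-II onset as the first positive peak of the cumulative SPEV
    (i.e., eye position) that is followed by a decrease lasting at least
    `window_s` seconds, within the first `max_s` seconds after light-off."""
    n = int(max_s * fs)
    cum = np.cumsum(spev[:n]) / fs                 # cumulative area under SPEV (deg)
    win = int(window_s * fs)
    for i in range(1, len(cum) - win):
        is_peak = cum[i] > cum[i - 1] and cum[i] >= cum[i + 1]
        if is_peak and np.all(cum[i + 1:i + win] < cum[i]):
            return i / fs                          # onset time (s) after light-off
    return None                                    # no OKAN-II detected in this trial
```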
SPEV Analysis
The aim of this analysis was to quantify how SPEV was affected by stimulus duration during OKAN-I and OKAN-II. For OKAN-I, the peak velocity was defined as the maximal positive SPEV before the OKAN-II onset (as identified by the area-based analysis), discarding the first 2 s (to account for smooth pursuit). OKAN-II peak velocity was calculated as the minimum value of negative SPEV after the OKAN-II onset. This analysis was constrained to the first 60 s to allow a comparison with previously published OKAN-II descriptions (11,36). In these studies, the occurrence of OKAN-II was defined as a SPEV passing a threshold of −2 deg/s, with the shortest OKAN recordings lasting 60 s.
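For illustration, the two peak-velocity measures can be extracted as follows, assuming the SPEV trace and the onset from the area-based analysis are available (the first 2 s were already discarded during SPEV extraction):

```python
import numpy as np

def okan_peaks(spev, onset_s, fs=220.0, max_s=60.0):
    """OKAN-I peak = maximal positive SPEV before the OKAN-II onset;
    OKAN-II peak = minimal (most negative) SPEV after the onset.
    Both measures are restricted to the first 60 s after light-off."""
    limit = int(max_s * fs)
    i_on = int(onset_s * fs)
    peak_okan1 = np.max(spev[:i_on])
    peak_okan2 = np.min(spev[i_on:limit])
    return peak_okan1, peak_okan2
```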
Model-Based Analysis
The aim of this analysis was to analyze fit parameters using a simple descriptive model accounting for OKAN-II. In the majority of OKAN studies, the following exponential function (12) was used to fit the SPEV V(t):

V(t) = a · e^(−t/τ1)    (1)

In Equation (1), a is the initial amplitude of the OKAN-I SPEV and τ1 the time constant of OKAN-I. OKAN-II, however, is not accounted for. As the main aim of the current study is to describe OKAN-II and its relation to OKAN-I, we added to Equation (1) an additional independent exponential term with an amplitude parameter having the sign opposite to the optokinetic stimulus. The OKAN SPEV f(t) was fitted using the following double exponential function:

f(t) = a · e^(−t/τ1) − b · e^(−t/τ2)    (2)

In Equation (2), b is the initial amplitude of the OKAN-II term and τ2 its time constant. To reduce the risk of overfitting, we assumed that the OKAN-I term is independent from the stimulus duration (after its initial charging time <30 s) (13) and that the OKAN-II contribution before 30 s of stimulation is minimal (20). Accordingly, in a first step, we fitted the response to the trials with 30 s of optokinetic stimuli with Equation (1) and, subsequently, we used the values of a (initial amplitude of OKAN-I) and τ1 (time constant of OKAN-I) estimated from the 30 s trials to fit Equation (2) on all remaining trials (Figure 1). The parameters a and b were bound between 0 and 20 deg/s, τ1 between 1 and 40 s and τ2 between 1 and 150 s. These boundaries are in line with previous data on OKAN (8,10,11) and secondary nystagmus in the vestibulo-ocular reflex (VOR) (7,48).
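The two-step fitting procedure can be sketched as follows (Python, using nonlinear least squares as a stand-in for the fitting routine; the initial guesses are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

def single_exp(t, a, tau1):
    return a * np.exp(-t / tau1)                            # Equation (1): OKAN-I only

def double_exp(t, b, tau2, a, tau1):
    return a * np.exp(-t / tau1) - b * np.exp(-t / tau2)    # Equation (2)

def fit_okan(t30, spev30, t_long, spev_long):
    # Step 1: estimate a and tau1 from the 30 s trials (OKAN-II assumed negligible).
    (a, tau1), _ = curve_fit(single_exp, t30, spev30,
                             p0=[6.0, 18.0], bounds=([0, 1], [20, 40]))
    # Step 2: fit the OKAN-II term on the 3/5/10 min trials with a and tau1 frozen.
    (b, tau2), _ = curve_fit(lambda t, b, tau2: double_exp(t, b, tau2, a, tau1),
                             t_long, spev_long,
                             p0=[2.0, 60.0], bounds=([0, 1], [20, 150]))
    return a, tau1, b, tau2
```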
Statistical Analysis
Statistical testing was performed in MATLAB. The chi-square test for proportions was used to compare the occurrence of OKAN-II at the different stimulus durations. All other continuous variables (i.e., onset time, peak velocity, time constants) were analyzed depending on the number of groups to compare. Specifically, the paired t-test was used when only two groups were compared, while a repeated-measures ANOVA was used for multiple groups. When post-hoc multiple comparisons were performed, Tukey's honest significant difference procedure was used to control the family-wise error rate. For all tests, we considered a p-value < 0.05 after multiple comparison correction as significant.
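For illustration, an equivalent analysis in Python could be organized as follows (the occurrence counts shown are placeholders, not the study data):

```python
import numpy as np
from scipy.stats import chi2_contingency, ttest_rel
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Occurrence of OKAN-II across stimulus durations: chi-square test on counts
# (rows = duration, columns = OKAN-II present / absent). Illustrative numbers only.
counts = np.array([[2, 20], [8, 12], [7, 13], [8, 12]])
chi2, p, dof, _ = chi2_contingency(counts)

def rm_anova_with_posthoc(df):
    """Repeated-measures ANOVA followed by Tukey's HSD for a continuous variable
    (e.g., OKAN-II onset time). `df` is a long-format pandas DataFrame with
    columns: subject, duration, onset."""
    res = AnovaRM(df, depvar="onset", subject="subject", within=["duration"]).fit()
    posthoc = pairwise_tukeyhsd(df["onset"], df["duration"], alpha=0.05)
    return res, posthoc
```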
RESULTS
According to the exclusion criteria (e.g., noisy data due to frequent blinking or bad adherence) data of 8 of the 19 participants were discarded from further testing after the acquisition of the 30 s condition. Of the remaining 11 participants, 4 out of 88 trials were discarded as the participant asked for early termination of the trial or did not maintain a stable OKN during stimulation.
Visual inspection of the responses to the longer stimulations confirmed the presence of OKAN-II. The occurrence of OKAN-II, however, varied considerably between participants. Examples of slow-phase eye velocity in two participants, one without OKAN-II and one showing a typical OKAN-II following 3 min of stimulation, are shown in Figure 2. In both cases, after turning the light off, the SPEV suddenly dropped, matching the prediction of a rapidly fading smooth-pursuit contribution. Subsequently, the residual SPEV followed an exponential decay, known as OKAN-I. OKAN-II is observable in the bottom graph: the SPEV reaches zero earlier than in the top graph, becomes negative (i.e., the nystagmus is now beating in the opposite direction) and continues with a negative sign for a period of time, returning to zero with a decay slower than the one observed for OKAN-I.
Occurrence, Onset Time, Peak Velocity
Visual inspection of the OKAN traces showed that OKAN-II could be observed in 3 of the 11 participants, accounting for 27% of the participants. Pooling all trials (participants and durations) and considering OKAN-II present when the cumulative area under the SPEV decreased for at least 10 s (i.e., consistent negative SPEV for 10 s), 25 out of 84 analyzed trials (30%) could be identified as containing OKAN-II. The occurrence of OKAN-II divided per stimulus duration and per participant is presented in Table 1. Notably, only 5 participants had more than 1 trial showing OKAN-II. They account for 23/25 (92%) of the trials with OKAN-II.
Using the criteria proposed by Brantberg (36), only 11 of 84 analyzed trials (13%) showed OKAN-II, with 10/11 OKAN-II trials recorded from 3 of the 5 participants identified above. Of these, only two participants had more than one OKAN-II trial and accounted for 9/11 OKAN-II trials (82%).
Considering that only 2/22 trials (9%) with 30 s of optokinetic stimulus showed OKAN-II, the analysis was focused on the trials with the longer stimulus durations and limited to the 5 participants consistently showing OKAN-II. This selection accounts for 22/25 (88%) of the OKAN-II trials observed in the whole study. Occurrence, onset time and peak velocity of OKAN-II in these trials are detailed in Tables 2A, 2B. A significant effect of the stimulus duration was found for the onset time of OKAN-II [repeated-measures ANOVA; F(2,8) = 6.63, p = 0.02]; post-hoc analysis showed that the onset time after 10 min was significantly later than after 3 min (p = 0.014), while no difference was found for the other comparisons.
Model Fit
Considering all participants, the mean time constant of OKAN-I estimated using Equation (1) to fit the response to 30 s of optokinetic stimulus was 18.5 ± 10.1 s (mean ± SD). The mean initial velocity of OKAN-I estimated using Equation (1) was 6.5 ± 1.9 deg/s. As the data analysis above evidenced that OKAN-II occurred consistently in only 5 participants, we performed the analysis using Equation (2) on their data only. The model failed to fit the data of one participant (S03), whose data were discarded from the analysis. No significant difference between the responses to the longer optokinetic stimuli (3, 5, and 10 min) was found for the fitted parameters.
DISCUSSION
The results of the current study confirm that a prolonged optokinetic stimulus (longer than 3 min in our experimental design) increases the probability for OKAN-II to appear, as reported by previous studies (11,20,30). OKAN-II follows the decaying OKAN-I; the slow-phase eye velocity (SPEV) of OKAN-II is directed opposite to the direction of the preceding optokinetic stimulation and hence to the direction of the SPEV of OKAN-I. The incidence of OKAN-II was relatively low in our study (only 5/11 of our participants showed it in more than one trial), but higher than previously reported [Brantberg (36): 1/16 participants; Nooij (11): 4/13 participants]. This may depend on the criteria used for defining the occurrence of OKAN-II. The two aforementioned studies used the criteria defined by Brantberg (36), i.e., a SPEV exceeding 2 deg/s in the direction opposite to the preceding OKAN. Given the noise and variability in human OKAN responses, we considered such a threshold quite high and possibly specific to the properties (e.g., velocity) of the stimulus, and we therefore used a different method. As previously suggested by Hain et al. (47), the impact of noise on OKAN can be reduced by analyzing cumulative SPEV. We thus defined that any trial where the cumulative SPEV decreased for at least 10 s (i.e., consistent negative SPEV for 10 s) after reaching a peak contained OKAN-II. Although 10 s is also an arbitrary threshold, it is well below the expected time constant of the adaptation process underlying OKAN-II (34,35). Thus, it seems safe to assume that OKAN-II, if occurring, should last more than 10 s. The incidence analysis (Tables 1, 2A, 2B) suggests that OKAN-II occurrence may be subject-specific in humans. Participants who showed OKAN-II at least twice (5/11 participants) tended to have it quite consistently, i.e., in over 70% of their trials with stimulus duration of 3 min or more (24/30 trials). The other participants almost never showed it (2/44 trials, 4%). Evidence supporting our observation can be found in previous studies. Brantberg (36) found OKAN-II in 1 out of 16 participants, but that participant showed it 3 times. Nooij (11) reported it in 24% of all recorded trials but only in 4 participants. As 4 participants in a population of 13 account for roughly 31% of the total data, the mentioned 24% implies that OKAN-II was consistently shown in over 75% of the trials of those 4 participants. The instant of reversal of SPEV (onset of OKAN-II) may also be subject-specific, as suggested by the comparison of the standard deviations within participants (rows) and between participants (columns) in Table 2B. Brandt et al. (20) proposed that the strength of the OKAN-II response continues to increase with duration of stimulation up to 15 min. As none of the features we calculated support an increase of OKAN-II with stimulus durations longer than 3 min, we could not confirm this observation. On the contrary, we found that OKAN-II onset appeared later with a stimulus duration of 10 min when compared to 3 min, suggesting that OKAN-II onset may occur later after longer stimulation. Assuming OKAN-I (amplitude and/or time constant) does not increase with stimulus durations from 3 to 10 min, our finding would suggest a decrease of OKAN-II (amplitude and/or time constant), which is unlikely for a simple adaptation phenomenon.
As OKAN-II has been described as part of an oscillatory behavior of the SPEV (33), the later appearance of onset with stimulus duration may be due to effects not accounted for in this study. The differences between our observations and those of Brandt et al. (20) are probably best explained by the different analysis methods. Brandt et al. split OKAN-I and OKAN-II based on the sign of the overall resulting velocity, a method that is sensitive to noise in the data, and evaluated the total amplitude of the nystagmus, a measure that critically depends on the overall time evaluated. Although we also used eye position in the analysis (cumulative SPEV to define OKAN-I and OKAN-II), we analyzed specific features that did not depend on the area (onset time, peak velocity). To evaluate the time course of both OKAN-I and OKAN-II as a function of the stimulus duration, we propose a simple model-based approach. The OKAN responses observed in our experiments using short optokinetic stimuli (30 s) were characterized by a SPEV in the same direction as the preceding OKN that decayed to zero as a single exponential. This matches the OKAN behavior extensively described by previous studies, which describe OKAN as a manifestation of the velocity storage contribution to the OKN (12). Accordingly, the response to the 30 s stimuli was well captured by a single exponential model with two parameters: the time constant of the exponential function (8,12) and the initial velocity amplitude. Our values were in agreement with those measured in earlier studies (5,7,8,10,11,37,47).
When responses showed OKAN-II, however, a single exponential accounting only for OKAN-I is misleading. It has been shown previously for the VOR that errors in the estimates of the time constant of the primary nystagmus occur when the secondary nystagmus is not accounted for (34). In our study, we adopted a model (Equation 2) that considers OKAN-I and OKAN-II as independent decaying exponential functions with opposite signs, starting from the moment the optokinetic stimulus is terminated. Our model can be viewed as a simplified version of the model proposed by Furman et al. (33). Even though the model was not able to follow the responses in all fitted trials of the participants presenting OKAN-II [r2 range 0.3-0.8; mean ± SD = 0.5 ± 0.2], the validity of Equation (2) (i.e., a simplified model) should be considered in the context of modeling OKAN-II in humans. Descriptions of OKAN-II in humans are lacking, with only a few studies actually mentioning it (7,11,20,36); its incidence is low, its variability elevated, and its analysis challenging (47) compared with other mammals such as monkeys (19). In contrast, rigorous OKAN models accounting for OKAN-II (20,33,36) propose an order of complexity that may overfit the weak, noisy responses available in humans.
To solve this problem, Nooij et al. and Laurens et al. decided to fit OKAN-II in humans with the following equation suggested for the VOR (7, 11):

V(t) = a · e^(−t/τ1) − m · (1 − e^(−t/τ2))    (3)

The rate of decrease is governed by the time constant τ2 and the maximum value is determined by the parameter m. Thus, Equation (3) represents a saturation model, which contrasts with the current hypotheses on OKAN-II (20,33). Although previous authors used Equation (3), Equation (2) is the first attempt to fit OKAN-II in human data with a model compatible with the current hypothesis on the physiology underlying the OKAN-I/OKAN-II interaction (33). On the other hand, while Equation (2) may be the best option to account for OKAN-II in humans without using an overly complex model (and risking overfitting), it must be kept in mind that Equation (2) does not provide an ideal fit, which may prevent delving deeply into the process generating OKAN-II or detailing its human-specific features. Proper modeling of OKAN-II in humans remains challenging.
The model-based analysis failed to identify a change in the OKAN-II parameters with increasing stimulus duration. Although the validity of the fitted parameters should be considered with care, given the low r2 and the complexity of fitting OKAN-II data in humans, this finding is consistent with the area-based and velocity-based analyses. Thus, our overall results do not support the observation of Brandt et al. (20) that OKAN-II increases with stimulus duration. On the other hand, our assumptions are in line with the theoretical interpretation proposed by Brandt et al. (20) that OKAN-I and OKAN-II are two independent phenomena that sum and generate a specific pattern of OKAN reversal depending on their intensity. It must also be considered that, in comparison to their study, our model-based analysis is less sensitive to the duration of the recorded data, to velocity offset and to noise, but incorporates more assumptions on the two phenomena. Therefore, our data complement and improve upon the early estimates by Brandt et al. rather than oppose them.
Specifically, our results support the idea that the OKAN observed following prolonged optokinetic stimulation in humans (i.e., earlier approach to zero and reversal of SPEV) may be described by an independent, superimposed OKAN-II (20). Such a description of OKAN-II is compatible with an adaptation phenomenon that develops during the preceding OKN stimulation. The adaptation is described as a negative velocity command that is added to OKAN-I and decays over a longer time. It is worth noting that the OKAN-II time constant estimated with this method appears similar to that of the adaptive behavior observed during the VOR and responsible for secondary nystagmus (34,35). Given this similarity, it cannot be excluded that OKAN-II and the VOR might share a common neural adaptation responsible for both secondary responses.
This interpretation would need to be corroborated by further studies comparing the two phenomena in the same participants and, if possible, using a more detailed model such as the one proposed by Furman et al. (33). It would imply that such adaptation occurs in a shared neural element, either located in the velocity storage mechanism or downstream toward the motoneurons, rather than being an aftereffect related to visual tracking as suggested by Chen et al. (30).
Regarding the assessment of OKAN-I, we propose that for short OKAN testing (up to 1 min), OKAN-II can be considered as not yet charged and therefore as not impacting OKAN-I.
Limitations
Tijssen et al. suggested performing as many as eight measurements for each condition to obtain a reliable estimate of the initial velocity amplitude and the time constant, because of the large intra-participant variability (8). Because of the strain that prolonged testing places on participants, we only managed to collect two measurements for each duration, which may have led to more variability in the data.
Due to the strong variability of OKAN-I and OKAN-II in humans, the fitting with the simplified model was not optimal (Figure 3). With OKAN-II occurring consistently in 5 participants, the model failed to fit the data of one of them. Although this was mostly due to noise in the data and poor concentration of the participant during OKAN recording after prolonged OKN stimuli, it shows how the weakness of OKAN-II responses in humans may easily challenge their evaluation. To truly test whether the model fits the data appropriately, more participants as well as more trials with prolonged optokinetic stimulation would be needed, although repeated prolonged OKN trials are hard to administer in humans.
In the visual inspection of the data, we observed in some participants an offset at the end of OKAN. As we found no physiological reason to add an offset term to Equation (2) and we aimed at keeping the number of parameters low, we used a fitting equation without any offset. As a consequence, our fit was suboptimal in these participants.
As the scope of the paper was to provide a description of OKAN-II compatible with the limitations posed by its limited occurrence in humans [limited number of repetitions and low reliability of each trial (8)], we did not consider more complex models, i.e., models describing the adaptation as an oscillatory behavior.
DATA AVAILABILITY STATEMENT
The datasets generated for this study are available on request to the corresponding author.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Cantonal Ethics Committee Zurich, Switzerland. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
JG, GB, and DS: conceptualization. JG: selection of participants, experimental measurements, and writing original draft. JG, GB, FR, and CB: data analysis. GB, FR, DS, and NF-D: review and editing. All authors: contributed to the article and approved the submitted version.
On Retrieval Augmentation and the Limitations of Language Model Training
Augmenting a language model (LM) with k-nearest neighbors (kNN) retrieval on its training data alone can decrease its perplexity, though the underlying reasons for this remain elusive. In this work, we rule out one previously posited possibility: the "softmax bottleneck." We then create a new dataset to evaluate LM generalization ability in the setting where training data contains additional information that is not causally relevant. This task is challenging even for GPT-3.5 Turbo. We show that, for both GPT-2 and Mistral 7B, kNN retrieval augmentation consistently improves performance in this setting. Finally, to make kNN retrieval more accessible, we propose using a multi-layer perceptron model that maps datastore keys to values as a drop-in replacement for traditional retrieval. This reduces storage costs by over 25x.
Introduction
Recent efforts to improve the performance of language models (LMs) have focused on scaling up model (Brown et al., 2020) and training data size (Hoffmann et al., 2022). The resulting models have reached near-human or even super-human performance on some tasks (Chowdhery et al., 2022), though with steep accompanying energy and compute resource costs (Schwartz et al., 2020; Brown et al., 2020; Touvron et al., 2023).
Another approach for improving LM performance has been retrieval augmentation. Khandelwal et al. (2020) proposed to build a datastore using LM training data. The datastore associates the next token of prefixes in the training data with the representations of the prefixes extracted from an intermediate layer of an LM. They found that when predicting the next token for a given prefix, using k-nearest neighbor (kNN) retrieval, which retrieves the next token based on the intermediate representation of a given prefix, reduced language models' perplexity. Because the datastore is drawn entirely from the LM's training data, the success of kNN augmentation suggests the standard LM training setup does not yield models that best utilize their parametric capacity or training data. Studying why LMs augmented with kNN retrieval (kNN-LMs) outperform vanilla LMs may shed light on ways to improve the standard LM training setup.
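As an illustration, the interpolation used by kNN-LM can be sketched as follows; the brute-force retrieval and the distance-based weighting are simplifications, while k and λ follow the values reported in §A.1:

```python
import numpy as np

def knn_lm_probs(query_hidden, keys, values, p_lm, vocab_size,
                 k=1024, lam=0.25, temperature=1.0):
    """Blend a vanilla LM distribution with a kNN distribution built from the
    training-data datastore (keys = context representations, values = next tokens)."""
    # Brute-force L2 retrieval; a real system would use an approximate index.
    dists = np.sum((keys - query_hidden) ** 2, axis=1)
    nn = np.argsort(dists)[:k]
    # Distance-weighted vote over the retrieved next tokens (softmax over -distance).
    weights = np.exp(-dists[nn] / temperature)
    weights /= weights.sum()
    p_knn = np.zeros(vocab_size)
    for w, tok in zip(weights, values[nn]):
        p_knn[tok] += w
    # Interpolation: p = lam * p_knn + (1 - lam) * p_lm.
    return lam * p_knn + (1 - lam) * p_lm
```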
In this work, we base our study on the analyses of kNN-LMs by Xu et al. (2023). Among the aspects they explore are the limitations of model architecture and memorization. Xu et al. (2023) suggest the kNN component may be able to map intermediate representations of context to distributions in a more flexible way, while the last layer of LMs has a softmax bottleneck (Yang et al., 2018) that restricts LMs from generating certain distributions. This discrepancy of expressiveness may thus cause the performance gap. They also show that replacing the kNN component with an overfitted LM performs worse than kNN-LM, suggesting that kNN augmentation does not perform better solely because it memorizes the training data better.
In this work, we start with inspecting the bottlenecks in the model as suggested by Xu et al. (2023). We propose an experiment that shows that the softmax bottleneck is not the cause of the performance gap between kNN and vanilla LM. Our experimental results show that the last linear layers of LMs can generate distributions that approximate the distribution from a kNN-LM well. Therefore, we conclude that the bottleneck issues in the last layers, including the softmax bottleneck issue, are not the cause of the performance gap.
We then investigate the performance gap from the perspective of generalization. This explains why an overfitted LM is less effective than a kNN retrieval component (Xu et al., 2023). We identify a scenario which we refer to as over-specification.
That is, when a statement about certain knowledge (e.g., relational knowledge (Petroni et al., 2019) or commonsense (Speer et al., 2017; Young et al., 2018; Sap et al., 2019)) contains redundant information. We create a synthetic dataset, Macondo, and use it to show that over-specification in training data prevents LMs from learning the knowledge in a robust way, i.e., LMs cannot generalize to test data which is not over-specified. Even GPT-3.5 Turbo fails, indicating it is a fundamental limitation of LM training. It may be crucial when the size of training data is limited, because in this scenario, it is likely that there are only few statements about certain knowledge and all of them are over-specified. Deconfounding the effect of having redundant information also requires more training examples. This may explain why we need to scale up the training data size.
Because the better generalization ability may be what makes the kNN component helpful, we explore alternatives to a kNN component by looking for components that also generalize well. It turns out that we can close the generalization gap on Macondo by training another neural model that maps the intermediate representation to the target token. We also show that on the WikiText dataset, this approach reduces the perplexity by 1.45 while requiring less than 4% of the storage space of kNN augmentation. We suggest it is a promising future direction for improving LMs.
Background and Notations
LM. We focus on Transformer LMs such as GPT-2. Given context c = {x_i}_{i=1}^{t−1}, we formulate next token prediction as

p(x_t | c) = f(g(enc(c))),    (1)

where f is the last linear layer with softmax activation, g is the two-layer MLP network with a residual connection in the last Transformer layer, and enc includes the earlier layers of the model. Xu et al. (2023) hypothesize that the performance gap between kNN-LM and vanilla LM is because the softmax bottleneck prevents it from generating some distributions that kNN-LM can generate. In this section, we reinspect this hypothesis.
Projecting to the Probability Space
We study whether the softmax bottleneck causes the performance gap by inspecting whether the last layers can generate a distribution that approximates the distribution generated by kNN-LM, p_knnlm. We do the projection by solving

z* = argmin_z KL(p_knnlm || f(z)),    (2)

where f is the last layer of the model with its trained parameters fixed (definition in Eq 1). By definition, if the softmax bottleneck really prevents the model from generating p_knnlm, then f(z*) cannot approximate p_knnlm well and thus its perplexity should be close to the vanilla LM's. Therefore, by comparing the perplexity of p_proj = f(z*) with the vanilla LM's and kNN-LM's perplexity, we can infer the effect of the softmax bottleneck in this problem.
Similarly, we can inspect whether the MLP layer has a bottleneck effect by replacing f in Eq 2 with f ∘ g. We use enc({x_i}_{i=1}^{t−1}) as the initialization of z and solve Eq 2 with gradient descent.
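A sketch of this projection, assuming the last layers return logits (with the softmax applied inside the loop) and using the optimizer settings from §A.2, could look as follows:

```python
import torch
import torch.nn.functional as F

def project_to_output_space(p_knnlm, z_init, last_layer, lr=0.1, tol=1e-3, max_steps=1000):
    """Find z* such that softmax(last_layer(z*)) approximates p_knnlm (Eq. 2),
    with the trained parameters of last_layer frozen. `last_layer` stands for
    either f or f o g; it is assumed to map a representation to vocabulary logits."""
    z = z_init.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    prev = float("inf")
    for _ in range(max_steps):
        log_q = F.log_softmax(last_layer(z), dim=-1)
        # KL(p_knnlm || q): kl_div expects log-probabilities as input, probabilities as target.
        kl = F.kl_div(log_q, p_knnlm, reduction="sum")
        opt.zero_grad()
        kl.backward()
        opt.step()
        if abs(prev - kl.item()) < tol:    # stop once the KL change is below the tolerance
            break
        prev = kl.item()
    return z.detach(), kl.item()
```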
Experiment, Result, and Discussion
We train an LM using WikiText following the setting in Khandelwal et al. (2020) and measure its perplexity (details in §A). Table 1 shows that the approximation of p_knnlm by the last layer f has an average perplexity similar to the perplexity of kNN-LM. The average KL-divergence between p_knnlm and p_proj is also under 0.1 (Table 3). These results imply that the approximation is good enough for a good perplexity. It also implies the softmax bottleneck does not prevent the LM from generating a good distribution. Thus, the softmax bottleneck is not the cause of the gap between vanilla LM and kNN-LM. Projecting p_knnlm to the output space of f ∘ g yields a similar result. Therefore, LMs' last layers do not have a bottleneck that causes the performance gap.
Generalization from Over-specification
As the last layers do not have a bottleneck issue that explains the performance gap, we turn to study the efficacy of kNN augmentation from the perspective of generalization. In this section, we identify a limitation of LM training that may cause the performance gap: the failure to generalize from over-specified descriptions.
Over-specification
We refer to the phenomenon that the prefix of a partial sentence contains information that is not causally relevant to its completion as over-specification. In other words, over-specification is the scenario where removing some information in the prefix (e.g., a phrase) does not change the likelihood of the continuation. This phenomenon often occurs in the training data. Descriptions of factual knowledge or commonsense are usually over-specified with non-causally relevant information, but the causally irrelevant information may be absent during inference. Generalization from over-specified training data is thus important for an LM to utilize knowledge in the training data.
For example, in the training data, the text about the knowledge that "being drunk" implies "dangerous to drive" may be over-specified as "I was drunk when I left the party, so it was dangerous to drive". In this example, "I was drunk" is causally relevant to "it was dangerous to drive" but "when I left the party" is not. An ideal LM should generalize and predict the same continuation when the non-causal information "when I left the party" is absent.
Dataset: Macondo
We create a synthetic dataset, Macondo, to demonstrate the challenge of generalizing from over-specified training data. This dataset contains the names of 500 villagers, with 100 villagers for each number of children from 1 to 5, and each villager has a unique full name consisting of a random combination of a first name and a last name. Each child has a single-token and distinct first name. We construct each sentence in the training set using the template "[villager], who [desc], is the parent of [child]", where "[desc]" is a verb phrase about an attribute of the villager that is irrelevant to the parent-child relationship. The sentences in the test set follow the template "[villager], is the parent of [child]". A perfect LM should predict each child of the villager with log-likelihood log(1/# of children). (More details in §B.1.)
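A minimal sketch of the construction, with illustrative name pools and only two of the six attributes described in §B.1, is shown below:

```python
import random

# Illustrative name pools; the paper samples given names from a public corpus and
# surnames from a list of common surnames (see Appendix B.1).
GIVEN = ["Sal", "Bethanne", "Fifine"]
SURNAMES = ["Gougis", "Renneisen", "Lottman"]
CHILD_NAMES = ["Montgomery", "Bryant", "Hayward", "Meta", "Else", "Wat", "Tam"]

def make_descriptor():
    # One of the causally irrelevant attributes (only two of six shown here).
    year = random.randint(1800, 2005)
    city = random.choice(["Chichester", "Drug"])
    return random.choice([f"was born in {year}", f"used to live in {city}"])

def make_example(villager, child):
    train = f"{villager}, who {make_descriptor()}, is the parent of {child}"
    test = f"{villager}, is the parent of {child}"
    return train, test

villager = f"{random.choice(GIVEN)} {random.choice(SURNAMES)}"
train_sent, test_sent = make_example(villager, random.choice(CHILD_NAMES))
```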
Experiment, Results, and Discussion
To inspect how LMs are (un)able to generalize from over-specified training data, we fine-tune GPT-2 XL models on Macondo and test them with the test set where the irrelevant "[desc]" is absent (details in §C). Figure 1 shows that the fine-tuned GPT-2 model has a likelihood much lower than the theoretically perfect likelihood (log(1/# of children)). It indicates that it cannot generalize from over-specification. Additionally, Figure 1 shows that the kNN-augmented model performs better than the vanilla model. The better generalization capability of an augmented LM may partly explain the performance gap between augmented and vanilla LMs. We also experiment with GPT-2 small models (Figure 3a) and find that GPT-2 XL models do not generalize much better, suggesting that scaling up the model may not close this generalization gap. We observe a similar performance trend when fine-tuning a Mistral-7B-v0.1 model in Section C.1.
Experimenting with GPT-3.5-turbo
To inspect whether scaling mitigates the challenge of generalization, we further experiment with GPT-3.5-turbo. We construct a conversational version of Macondo, Macondo-Conv, to fit the conversational format of GPT-3.5-turbo. In the training set, sentences follow the template "User: Who is the child of [villager], the one who [desc]? Assistant: [child]" (see Table 7), and we restrict the dataset to villagers having 2 children for lower fine-tuning costs. The result in Figure 2 shows that GPT-3.5-turbo cannot generalize to a test set without over-specification. This suggests that scaling up the model size alone cannot solve this generalization challenge. This failure to generalize may be a fundamental limitation of LM training.
An Alternative to kNN-augmentation
Motivated by the results in §4.3, we explore whether using a datastore is necessary to improve perplexity. The success of kNN-augmentation in §4.3 shows that it is possible to generalize better by utilizing the intermediate representation with a kNN module. We wonder whether we can use a classification model instead of a kNN module. Because deep models have been known for their generalization ability (Neyshabur et al., 2015, 2019), we explore using an MLP model to replace kNN retrieval. We use the key-value pairs in the datastore for kNN retrieval to train an MLP model to map the keys to the values (details in §D). Results in Table 2 show that interpolating the original LM with this MLP model effectively reduces the perplexity while requiring less than 4% of the storage. This indicates a promising future direction.

Figure 2: GPT-3.5-turbo fine-tuned with Macondo-Conv using the OpenAI API. The results are the average of 5 runs with 5 datasets generated with 5 random seeds. Note that the presented loss involves special tokens, e.g., end-of-string tokens, so the theoretical perfect likelihood is greater than log 0.5. The gray line is the test loss we achieve when we use the test data to train the model.
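A sketch of this alternative is shown below; the exact placement of the layer normalization and the choice of activation are assumptions, since only the hidden size and the use of LN are specified (§D):

```python
import torch
import torch.nn as nn

class DatastoreMLP(nn.Module):
    """MLP that maps datastore keys (intermediate LM representations) to a
    distribution over the vocabulary, trained on the (key, value) pairs that
    would otherwise be used for kNN retrieval."""
    def __init__(self, d_model, vocab_size, hidden=4096):
        super().__init__()
        self.net = nn.Sequential(
            nn.LayerNorm(d_model),          # placement of LN is an assumption
            nn.Linear(d_model, hidden),
            nn.ReLU(),                      # activation choice is an assumption
            nn.Linear(hidden, vocab_size),
        )

    def forward(self, keys):
        return self.net(keys)               # logits over next tokens

def interpolate(p_lm, p_mlp, lam=0.25):
    # Same interpolation scheme as kNN-LM, with the MLP output replacing p_knn.
    return lam * p_mlp + (1 - lam) * p_lm
```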
The traditional LM training setup has been shown to yield models that fail to generalize to test data with reversed relations (Berglund et al., 2023), respective readings (Cui et al., 2023), and longer tasks (Anil et al., 2022). These models can also struggle with linguistic generalization between unseen but related contexts (Wilson et al., 2023) and learn shortcuts that harm generalization (McCoy et al., 2019). Bender and Koller (2020) have also argued that such models will necessarily be limited due to the ungrounded nature of their training data.
Conclusion
We study the performance gap between vanilla and kNN-augmented LMs. We develop an experiment that allows us to directly inspect the bottleneck issue and exclude the possibility that it causes the performance gap (§3). We further identify the over-specified scenario where vanilla LMs fail to generalize while kNN-LMs generalize better (§4). We also show with GPT-3.5-turbo that this failure of generalization cannot be solved by scaling up the model size, suggesting that this is a fundamental limitation of LM training. Finally, we show the potential of augmenting LMs with an MLP model, indicating a promising future direction (§5).
Limitations
While we gain more insights by closely inspecting the phenomena observed by Xu et al. (2023), why kNN augmentation is beneficial remains not fully clear. In §3, we focus on the bottleneck issues of the last layers f ∘ g and show that there exists an intermediate representation z* such that f ∘ g(z*) approximates p_knnlm well. However, it is unclear why enc does not map the context to z*. In §4, we identify the over-specification scenario where kNN-LMs generalize better than vanilla LMs. However, the mechanism behind this remains unclear. In §5, we show that augmenting LMs with another MLP can improve the perplexity of the model but does not fully close the gap between kNN-LM and vanilla LM on WikiText. Further analysis is required to understand the generalization behavior of the kNN and the MLP models.
A Experiment Details of §3
A.1 Hyperparameters for the Baseline Models We implement kNN-LM based on the package transformers 4.34.0 (Wolf et al., 2020). We train a 16-layer transformer model following the hyperparameters used by Khandelwal et al. (2020) and Xu et al. (2023). We use k = 1024, λ = 0.25 and L2 distance for kNN retrieval. Please refer to the repository of Xu et al. (2023) (https://github.com/frankxu2004/knnlm-why) for more details about datastore building.
A.2 Hyperparameters for Solving Eq. 2 We use a learning rate of 0.1 and the Adam optimizer (Kingma and Ba, 2015) for solving Eq. 2 with gradient descent. We perform gradient descent until an update changes the KL-divergence by less than 0.001.
B Details about the Macondo Dataset B.1 Generation Process
We construct the Macondo dataset using the template "[villager], who [desc], is the parent of [child]". In each example, the "[villager]" placeholder is replaced with a villager's full name.
We generate the full name of a villager by randomly sampling a given name from a corpus by Mark Kantrowitz and a surname from a list of the most common surnames released under the Creative Commons Attribution 4.0 International License.
The "[villager]" placeholder is replaced with a single-token given name from the corpora by Mark Kantrowitz.We associate each villager with 6 attributes described below.When generating an example in the training set, we randomly sample one of the six attributes and replace the "[desc]" placeholder with a relative clause describing the attribute: • Year of birth: "who was born in [year]".The year is randomly sampled between 1800 and 2005.
• City of birth: "who was born in [city]". The city is randomly sampled from a world city database by simplemaps under the Creative Commons Attribution 4.0 license.
• Living city: "who used to live in [city]".
The city is randomly sampled from the world city database by simplemaps.
• Graduated from: "who graduated from [university]". The list of universities is from the THE World University Rankings.
• Marriage year: "who married in [year]". The year is randomly sampled between 1800 and 2023 and is guaranteed to be at least 18 years after the year of birth.
• Work: "who used to work for [company]".
The company is randomly sampled from a list of California Companies on Wikipedia.
Table 6 contains some examples from this dataset. We have 1500 examples in total.
B.2 The Conversational Version
We use the tiktoken tokenizer to ensure that the names of the villagers' children are single-token. Table 7 contains some examples from this dataset.
C Experiment Details of §4
We fine-tuned GPT-2 small and GPT-2 XL with a warm-up ratio equal to 0.05, batch size 4, and the Adam optimizer (Kingma and Ba, 2015) for 50 epochs. We use the default hyperparameters of the Trainer API of the transformers package (Wolf et al., 2020), i.e., learning rate 1e-5, max gradient norm 1.0, etc. We use version 0613 for our experiments that use GPT-3.5 Turbo. We execute this experiment with NVIDIA RTX A6000 GPUs.
C.1 Additional Experiments with Mistral
We report additional Macondo experiments conducted on a more capable model, namely Mistral-7B-v0.1 (Jiang et al., 2023). We follow the same dataset setup as in §4.3, and fine-tune the Mistral model with LoRA (Hu et al., 2021). We report performance curves in Figure 3b, and attain qualitatively similar observations to those in Section 4.3.
Our experiments add favorable evidence that neither concurrent methods in pre-training language models nor model scaling is an effective solution for circumventing over-specification. However, kNN-augmented language models can partially reduce the optimality gap between the backbone language model and a perfect model. Mistral Fine-Tuning Details. Following standard practices, we add LoRA adaptors to the embedding matrix, to the query, key, value, and output projections of each attention layer, as well as to all projections of each MLP layer. We set the rank of all update matrices to 8, the LoRA scaling factor to 16, and a LoRA dropout probability of 0.05. We use a warm-up ratio of 0.05, and train with a global batch size of 128 using the Adam optimizer (Kingma and Ba, 2015). We use default hyperparameters of the Hugging Face Trainer API to fine-tune the model for 30 epochs.
D Experiment Details of §5
We use a learning rate of 1e-5 to train an MLP model that maps the keys in the datastore to the values. The batch size is the same as the number of tokens in each batch when training the vanilla language model, i.e., 3 × 3072. For Macondo, we train the model for 10 epochs. For WikiText, we train the model for 2 epochs. The model architecture is the same as the last MLP layer of the vanilla language model, i.e., an MLP with 1 hidden layer with hidden size 4096 combined with a layer normalization (LN) module (Ba et al., 2016). We execute this experiment with RTX 2080Ti GPUs.
E A Potential MLP Hurdle
Even though we can solve the optimization problem in Eq. 2 with a learning rate of 0.1, we find it more difficult to solve it for f ∘ g with a learning rate below 0.001. Table 4 shows the perplexity of solving Eq. 2 using a learning rate below 0.001 for 100 steps. The perplexity of projecting to the output space of f is much lower. We suggest that this may cause some challenges in optimizing enc, because it seems that the gradient cannot flow to enc easily when the learning rate is small. We refer to this as a potential MLP hurdle.
E.1 Experiment, Result, and Discussion
We inspect the effect of this MLP hurdle on model training by conducting an experiment focusing on the memorization process of the model. We train two LMs with the test set of Macondo. These two models are randomly initialized LMs following the same architectural choices as GPT-2-small; one has the last MLP layer removed. We compare the log-likelihood of the children's names every 1000 training steps. We also conducted the same experiment on WikiText.
Figure 1: Test log-likelihood of children names in our synthetic dataset Macondo predicted by a fine-tuned GPT-2 XL model for parents with 1-5 children (average of 5 random seeds). The dotted lines represent the results of the kNN-augmented LM. The horizontal lines represent the theoretically best log-likelihood a perfect model can achieve (log(1/# of children)). See Table 5 for the exact statistics shown in this figure.
Figure 3: Log likelihood of children names in our synthetic dataset Macondo predicted by a fine-tuned GPT-2/Mistral-7B-v0.1 model for parents with 1-5 children (average of 5 random seeds). The dotted lines represent the results of the kNN-augmented LM. The horizontal lines represent the theoretically best log-likelihood a perfect model can achieve (log(1/# of children)).
Figure 4: Log likelihood of the children's names in Macondo. The results are the average of 5 random seeds.
Table 1: The perplexity of the LMs discussed in §3.
Table 2: The perplexity of LMs augmented with a kNN model or an MLP model (§5).
Table 3: The minimum KL-divergence achievable by solving Eq 2 with gradient descent.
Table 4: The perplexity of projecting to LMs' output space as discussed in §3 when using learning rate 0.001.
Table 5: The exact log likelihood of the children names shown in Figure 1.

Table 6: Some examples in the Macondo dataset.
train:
  Sal Gougis, who used to live in Chichester, is the parent of Montgomery
  Bethanne Renneisen, who graduated from Kocaeli Health and Technology University, is the parent of Bryant
  Bethanne Renneisen, who used to work for Fox Broadcasting Company, is the parent of Hayward
test:
  Sal Gougis is the parent of Montgomery
  Bethanne Renneisen is the parent of Bryant
  Bethanne Renneisen is the parent of Hayward

Table 7: Some examples in the conversational version of the Macondo dataset.
train:
  User: Who is the child of Sal Gougis, the one who used to live in Chichester? Assistant: Meta
  User: Who is the child of Sal Gougis, the one who married in 2019? Assistant: Else
  User: Who is the child of Fifine Lottman, the one who used to work for Mervyn's? Assistant: Wat
  User: Who is the child of Fifine Lottman, the one who was born in Drug? Assistant: Tam
test:
  User: Who is the child of Sal Gougis? Assistant: Meta
  User: Who is the child of Sal Gougis? Assistant: Else
  User: Who is the child of Fifine Lottman? Assistant: Wat
  User: Who is the child of Fifine Lottman? Assistant: Tam
Co-Lateral Effect of Octenidine, Chlorhexidine and Colistin Selective Pressures on Four Enterobacterial Species: A Comparative Genomic Analysis
Bacterial adaptation to antiseptic selective pressure might be associated with decreased susceptibility to antibiotics. In Gram-negative bacteria, some correlations between reduced susceptibility to chlorhexidine (CHX) and polymyxins have been recently evidenced in Klebsiella pneumoniae. In the present study, four isolates belonging to distinct enterobacterial species, namely K. pneumoniae, Escherichia coli, Klebsiella oxytoca and Enterobacter cloacae, were submitted to in-vitro selective adaptation to two antiseptics, namely CHX and octenidine (OCT), and to the antibiotic colistin (COL). Using COL as the selective agent, mutants showing high MICs for that molecule were recovered for E. cloacae, K. pneumoniae and K. oxytoca, exhibiting a moderately decreased susceptibility to CHX, whereas OCT susceptibility remained unchanged. Using CHX as the selective agent, mutants with high MICs for that molecule were recovered for all four species, with a cross-resistance observed for COL, while OCT susceptibility remained unaffected. Finally, selection of mutants using OCT as the selective molecule allowed recovery of K. pneumoniae, K. oxytoca and E. cloacae strains showing only slightly increased MICs for that molecule, without any cross-elevated MICs for the two other molecules tested. No E. coli mutant with reduced susceptibility to OCT could be obtained. It was therefore demonstrated that in-vitro mutants with decreased susceptibility to CHX and COL may be selected in E. coli, K. pneumoniae, K. oxytoca and E. cloacae, showing cross-decreased susceptibility to COL and CHX, but no significant impact on OCT efficacy. On the other hand, mutants were difficult to obtain with OCT, being obtained for K. pneumoniae and E. cloacae only, showing only very limited decreased susceptibility in those cases, and with no cross effect on other molecules. Whole genome sequencing enabled deciphering of the molecular basis of adaptation of these isolates under the respective selective pressures, with efflux pumps or lipopolysaccharide biosynthesis being the main mechanisms of adaptation.
Introduction
Colistin (COL) is a last-resort antibiotic for treating infections caused by multidrug-resistant (MDR) Gram-negatives, and particularly carbapenemase-producing isolates [1]. Indeed, while all other clinically-available therapeutic drugs might be inefficient against these MDR bacteria, COL still shows a high rate of efficiency, with a limited number of resistant isolates identified, except in some outbreak contexts (e.g., COL-resistant carbapenemase-producing Klebsiella pneumoniae in Italy or Serbia [2,3]). Nevertheless, due to an increasing use of COL in recent years, an emergence of COL-resistant isolates has been observed, particularly in K. pneumoniae, but also in other enterobacterial species. This phenomenon is mainly observed in hospital settings, with a very low rate of COL-resistant isolates being observed in the community [4].
In general, prevention strategies within the hospital are a key element in infection control. Use of antiseptic molecules contributes significantly to the control of dissemination of methicillin-resistant Staphylococcus aureus (MRSA), but also to control of (multidrug-resistant) Gram-negative bacteria on skin, wounds and mucous membranes [5]. In that context, use of chlorhexidine (CHX) is recommended and CHX is being heavily used for prevention and for decolonization strategies [6]. Likewise, octenidine (OCT) is another antiseptic molecule that shows excellent efficacy in eradicating MDR bacteria [7][8][9][10]. We recently performed a study evaluating the efficacy of OCT against MDR Gram-negatives (including E. coli, K. pneumoniae, E. cloacae, Acinetobacter baumannii and Pseudomonas aeruginosa) [11]. These clinically-relevant Gram-negative pathogens were chosen for their multidrug resistance phenotype, including resistance to β-lactams, aminoglycosides, and fluoroquinolones. OCT activity was proven to be extremely efficient against these MDR bacteria at clinically-relevant concentrations [11].
A recent study showed that in-vitro selective adaptation of K. pneumoniae clinical isolates to CHX may select cross-resistance to COL [12]. Acquired resistance to CHX occurred through mutations in the phoPQ two-component regulatory genes (mainly in the pmrK gene), and in the smvR repressor gene adjacent to the major facilitator superfamily efflux pump gene smvA. Such mutations had significant co-lateral effects on susceptibility to COL, with MICs increasing from 2 to 4 µg/mL (original strain) to >64 µg/mL for most of the mutant strains. By contrast, no significant change was observed for OCT susceptibility, highlighting the very limited impact of those mutations selected on CHX on the efficacy of that other antiseptic. In that same study, it was shown that the opposite strategy-to select COL-resistant mutants-had no significant impact on CHX susceptibility, despite similar mutations in the phoPQ two-component regulatory genes being identified. The study thus showed that the latter mutations were necessary but not sufficient to confer resistance to CHX [12]. Of note, the latter study only focused on the K. pneumoniae species, and only adaptation on CHX was considered in the design of the study.
Using strains belonging to four Enterobacterales species, namely K. pneumoniae, K. oxytoca, E. coli, and E. cloacae, the adaptation to CHX, COL, and OCT was examined. Our working hypothesis was that use of the antibiotic colistin that acts on the outer membrane of Gram-negative cells might have some impact on susceptibility to antiseptics, and vice versa. The objectives of our study were therefore the following: (i) to evaluate whether mutants could be selected in-vitro with CHX, COL, and OCT for the aforementioned enterobacterial species, (ii) to further evaluate the extent of the inter-relation between reduced susceptibility to these three molecules (co-lateral effects), and (iii) to analyze the genetic basis underlying adaptation to all these molecules.
Materials and Methods
Bacterial strains. Three wild-type clinical isolates (R192, R1435 and R1437) from the collection of the Swiss National Center for Emerging Antibiotic Resistance (NARA) and one (SB4021) from the Institut Pasteur collection were used for this study. These isolates belong to the four enterobacterial species of clinical interest, namely K. oxytoca (R192), E. cloacae (R1435), E. coli (R1437), and K. pneumoniae (SB4021). These wild-type isolates had been recovered from clinical samples (infections) and were used rather than ATCC reference strains in order to study strains as close as possible to those encountered clinically.
In-vitro experimental evolution of bacterial isolates of four species. Evolution experiments were conducted separately for each isolate, in duplicate (labeled a and b, respectively), in Mueller-Hinton (MH) liquid medium supplemented with increasing concentrations of OCT, CHX or COL used as selective agents, respectively. Minimal inhibitory concentrations (MICs) of COL, OCT, and CHX were first determined for each wild-type strain, the latter being named ancestors thereafter. Then, for each lineage and each selective medium, 2 × 10^8 colony forming units (CFU) obtained from an overnight culture of the ancestor were inoculated in two separate tubes containing 2 mL of MH, one at a concentration (c) equal to half the MIC value (MIC/2) and the other at c equal to the MIC of the corresponding selective molecule. The inoculated tubes were then placed under agitation at 37 °C for 48 h. After 48 h, considered as time 1 (T1), the tubes were visually inspected; from the tube with the higher antibiotic/antiseptic concentration in which a positive culture was observed, 2 × 10^8 CFU were sampled and labeled cT1. Subsequent steps consisted of inoculating these 2 × 10^8 CFU in 2 mL of MH containing the respective antibiotic/antiseptic concentration c, corresponding to half the MIC value (T1/2) or the MIC value (T1) of the corresponding selective molecule for the previously selected strain. For each lineage, these steps were repeated every 48 h until 10 passages were obtained, with increasing concentrations of COL, OCT or CHX.
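To make the passage scheme easier to follow, here is a minimal sketch of the concentration ladder it describes. The growth read-out and the MIC re-determination are stand-in callbacks, and all concentrations in the example call are hypothetical, not experimental data.

```python
# Minimal sketch of the serial-passage scheme described above. At each 48 h
# passage the previously selected strain is challenged at half its MIC and at
# its MIC; the highest concentration with a positive culture defines the strain
# carried forward, and its MIC is re-determined before the next passage.

def run_lineage(ancestor_mic, grows, redetermine_mic, n_passages=10):
    """grows(conc) -> bool and redetermine_mic(selected_conc) -> float are
    stand-ins for the experimental read-outs; they are not part of the paper."""
    mic = ancestor_mic
    history = []
    for t in range(1, n_passages + 1):
        tubes = [mic / 2, mic]                     # MIC/2 and MIC of the current strain
        positive = [c for c in tubes if grows(c)]  # visual inspection after 48 h
        if not positive:                           # lineage extinction
            break
        selected = max(positive)                   # keep the tube with the higher c
        history.append((t, selected))
        mic = redetermine_mic(selected)            # MIC of the newly selected strain
    return history

# Illustrative use with toy callbacks (not experimental data):
example = run_lineage(
    ancestor_mic=2.0,
    grows=lambda c: c <= 64,          # pretend growth occurs up to 64 µg/mL
    redetermine_mic=lambda c: c * 2,  # pretend the MIC doubles each passage
)
print(example)
```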
High Throughput Sequencing
Strains selected for sequencing were plated on Uriselect 4 media (Bio-Rad). One colony per Uriselect plate was then randomly picked and inoculated in LB overnight. Total bacterial DNA from that culture was extracted using the QIAamp® DNA Mini Kit (Qiagen, Garstligweg, Switzerland). Overall, a total of 18 lineages and 4 ancestor genomes were sequenced using the Illumina MiniSeq® platform (Illumina, Cambridge, UK).
Read Mapping and Mutation Identification
The whole genome sequences of each ancestor strain were used as respective references. Breseq 0.23 software [13] was used to identify differences between the sequenced evolved genomes and the respective ancestors' reference genomes. The breseq 0.23 pipeline uses bowtie2 [14] to map reads to a reference genome. It then identifies mutations from three kinds of evidence: read alignment (RA) evidence, corresponding to single nucleotide polymorphisms and short indels; missing-coverage evidence; or new-junction evidence. The latter corresponds to reads mapping to one part of the reference on one side and to another part on the other side, indicating a possible re-arrangement. The program then uses this evidence to make mutation predictions. RA evidence is subsequently transformed into predictions of single-nucleotide polymorphisms (SNPs) or short indels when it is supported by at least 85% of the reads. The breseq 0.23 pipeline can also identify large deletions and chromosomal re-arrangements by correlating missing coverage in a region with new-junction evidence on both sides of that region.
The mutations identified by breseq 0.23 [13] were then filtered. When filtering the breseq 0.23 outputs, we first looked at the predicted SNPs and short indels. Mutations that appeared in every sample were removed, as they most likely originated from a sequencing error in the reference genome and were hence not informative about the dynamics of diversification during this experiment. Mutations that were too close to one another were also removed, by discarding variations less than 51 base pairs (bp) apart. Indeed, such clustered mutations are usually caused by erroneously mapped reads, and previous analyses showed that they are typically found in prophagic regions [15]. These mobile regions are repeated in the genome but do not share 100% identity, generating difficulties in the mapping process, as they are still similar enough to one another to be erroneously mapped. For instance, phage-mediated exchange of DNA sequences among bacteria is known to occur with high frequency, resulting in constant modification of specific regions of the genome. Finally, all mutations for which the frequency of the mutated reads was less than 0.95 were also removed, being considered unreliable. For each lineage, a single individual colony was sampled at the end of the experiment (T10), for which antimicrobial susceptibilities to a series of antibiotics were evaluated and interpreted according to the EUCAST guidelines [16]. Determination of the MICs of COL was also performed by the broth microdilution method (BMD) and interpreted according to the joint EUCAST/CLSI guidelines (http://www.eucast.org/fileadmin/src/media/PDFs/EUCAST_files/General_documents/Recommendations_for_MIC_determination_of_colistin_March_2016.pdf, accessed on 20 December 2021). Finally, the BMD method was also used to determine MICs of OCT and CHX using concentrations of each antiseptic ranging from 0.15 µg/mL to 160 µg/mL. No interpretation (categorization into susceptible/resistant) of the MIC values could be performed since no breakpoints exist for those antiseptics.
All determinations of antimicrobial susceptibilities and MICs were performed in triplicate.
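The three filters applied to the predicted variants can be summarised in a short sketch. The record structure below (position, supporting-read frequency, set of samples) is an assumption for illustration and does not reproduce the actual breseq output format.

```python
# Minimal sketch of the variant post-processing described above:
# (i) drop variants predicted in every sample (likely reference errors),
# (ii) drop clustered variants less than 51 bp apart (likely mapping artefacts),
# (iii) drop variants supported by fewer than 95% of the reads.

def filter_mutations(mutations, n_samples, min_gap_bp=51, min_freq=0.95):
    """mutations: list of dicts with keys 'pos', 'freq' and 'samples'
    (the set of sequenced samples in which the variant was predicted)."""
    # (i) remove variants shared by all sequenced samples
    kept = [m for m in mutations if len(m["samples"]) < n_samples]
    # (ii) remove variants closer than min_gap_bp to a neighbour
    kept.sort(key=lambda m: m["pos"])
    isolated = []
    for i, m in enumerate(kept):
        near_prev = i > 0 and m["pos"] - kept[i - 1]["pos"] < min_gap_bp
        near_next = i < len(kept) - 1 and kept[i + 1]["pos"] - m["pos"] < min_gap_bp
        if not (near_prev or near_next):
            isolated.append(m)
    # (iii) remove low-frequency variants
    return [m for m in isolated if m["freq"] >= min_freq]

# Illustrative call with made-up variants (only the last one survives all filters):
demo = [
    {"pos": 1200, "freq": 1.00, "samples": {"KP-a"}},
    {"pos": 1230, "freq": 0.98, "samples": {"KP-a"}},                          # < 51 bp apart
    {"pos": 9000, "freq": 0.60, "samples": {"KP-b"}},                          # low frequency
    {"pos": 15000, "freq": 1.00, "samples": {"KP-a", "KP-b", "KO-a", "EC-a"}}, # in all samples
    {"pos": 52000, "freq": 1.00, "samples": {"EC-a"}},
]
print(filter_mutations(demo, n_samples=4))
```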
Analysis of the Results
For each lineage, the logarithm of the ratio of the antibiotic (ATB) or antiseptic (ATS) MIC at the end point to that of the ancestor was calculated, to simplify data representation. Consequently, high and low values respectively indicate a strong or weak increase in the MIC of the ATB or ATS between the evolved line and the ancestor.
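As an illustration of this end-point summary, the sketch below computes the log MIC ratio. The paper does not state the logarithm base; base 2 is assumed here because MICs are read on two-fold dilution series, and the MIC values in the example are invented.

```python
# Minimal sketch of the end-point summary described above: the log of the
# ratio between the MIC of the evolved line and the MIC of its ancestor.
import math

def log_mic_ratio(mic_evolved: float, mic_ancestor: float, base: float = 2.0) -> float:
    return math.log(mic_evolved / mic_ancestor, base)

# Example: ancestor MIC 2 µg/mL, evolved line MIC 64 µg/mL -> 5 two-fold steps.
print(log_mic_ratio(64, 2))  # 5.0
```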
Results and Discussion
A total of 18 lineages could be obtained across the selective agents (OCT, CHX or COL) and the four enterobacterial species (Table 1 and Figure 1). For four lineages, we were not able to obtain mutants adapted to the selective molecules (Figure 1). Whole genome sequencing was then used in order to perform a thorough comparison between genotypes and phenotypes for all adaptive processes performed on the three selective media. Mutations were observed in the 18 lineages (Table 1), and the mutants obtained at the final point of each lineage were retained for subsequent molecular analysis.

Table 1. Characteristics of the 18 mutants of Klebsiella pneumoniae (KP), K. oxytoca (KO), Enterobacter cloacae and Escherichia coli selected on COL, CHX or OCT media. Mutations in the lineages were identified by whole genome sequencing and comparison of each lineage's genomic sequence with that of its ancestor using breseq [13]. The proteins targeted by the mutations and their associated functions are indicated. Values in brackets correspond to the range of MIC increase, expressed as fold change by comparison with the parental strain.

Figure 1. Description of the method used in the present study to obtain mutation and susceptibility data for the different lineages obtained after adaptation of the four enterobacterial species in the three selective media (OCT, CHX or COL). The arrows show how the selective process was conducted over time, with some lineages ending prematurely due to lack of selection of resistant mutants, and other lineages reaching elevated concentrations.
Adaptation on COL
Selection using COL as the selective molecule yielded mutants with high MIC values for K. pneumoniae, K. oxytoca and E. cloacae, but surprisingly not for E. coli. Two evolution lineages could be obtained for both K. pneumoniae and K. oxytoca, and a single one for E. cloacae. Of the identified mutations, eight corresponded to non-synonymous point mutations within specific genes, and one mutant possessed a mutation in an intergenic region (Table 1).
In K. pneumoniae and K. oxytoca, mutations were identified in the lptD gene encoding an assembly protein of the lipopolysaccharide (LPS), which connects the LPS transporter to the outer membrane [17]. In those two species, mutations in two-component system genes, namely phoQ or rstB, were also identified. The PhoQ protein is a sensor belonging to the two-component system regulating the biosynthesis level of LPS in many Gram-negative bacteria, and was previously found to be involved in acquired resistance to polymyxins [18][19][20]. The identification of mutations in genes encoding proteins involved in LPS biosynthesis was somewhat expected, considering that LPS is the main target of COL action [1]. The rstB gene also encodes a sensor protein that is part of a two-component system regulating both motility and pathogenesis in Gram-negative bacteria [21,22]. Even though the corresponding protein had never been precisely identified as being involved in resistance to polymyxins, overexpression of the rstB gene was recently evidenced in a polymyxin-heteroresistant K. pneumoniae population [23]. Notably, two of those COL-resistant mutants showed a moderately decreased susceptibility to CHX, although OCT susceptibility remained unchanged (Figure 2, Tables 2 and S1). This suggests that LPS modification induced by selective pressure with COL might have a collateral effect on CHX antibacterial activity, while OCT activity might be preserved in those latter cases. In K. oxytoca, a single amino acid mutation was identified in one lineage in the tkt gene, encoding a putative transketolase. That enzyme interacts with RamR, a key regulator involved in many regulatory processes in E. coli.
In E. cloacae, the only mutation was identified within an intergenic region, upstream of a gene encoding a YebO-like protein, which belongs to a family of proteins of unknown function that is widely but exclusively distributed in Enterobacterales [24].
Adaptation on CHX
Selection of mutants using CHX as the selective molecule permitted recovery of eight mutants with high MICs, for all species and for each duplicate (Figure 1). Of the identified mutations, ten corresponded to non-synonymous mutations within specific genes, and four mutants possessed a mutation in an intergenic region (Table 1). In K. pneumoniae, the smvA gene encoding an efflux pump protein was found to be a target of adaptation under CHX selective pressure. The SmvA protein was previously identified as an important efflux pump for cationic biocides, belonging to the major facilitator superfamily (MFS) and known to be involved in methyl viologen efflux [25]. Here we identified a single bp substitution in the intergenic region separating the smvR regulatory gene and the smvA gene, likely enhancing the expression of the latter by interfering with the negative regulation process.
In addition, mutations or deletions in the malT2 gene encoding the HTH-type transcriptional activator of the K. pneumoniae maltose operon were identified [26]. Notably, the K. pneumoniae maltose operon and its MalT regulator significantly resemble those of E. coli [26], but no link with the regulation of these operons and adaptation to selective molecules (such as antiseptics or others) had been reported so far.
In E. cloacae, a convergence of mutations within lineages exclusively obtained with CHX was observed in genes encoding membrane biogenesis proteins (bamE or mipA). The bamE gene encodes a putative outer membrane lipoprotein that is part of the β-barrel assembly machine complex. Interestingly, a BamE-homologous protein in E. coli was found to be involved in resistance to the antiparasitic drug nitazoxanide [27]. Moreover, several mutations were localized in other tetR-like regulatory genes, namely in bm3R1-1 and betI. Interestingly, and in line with these features, all the lineages presenting mutations in the tetR-like regulatory genes showed concomitant decreased susceptibilities to β-lactams, quinolones, and sulfamethoxazole. By contrast, no significant modification was observed in terms of susceptibility to COL, with MIC values remaining identical (Figure 2, Tables 2 and S1). Notably, the mutants selected on CHX and exhibiting substitutions in the bamE and mipA genes showed cross-resistance to COL, whereas OCT susceptibility remained unchanged (Figure 2, Tables 2 and S1).
Adaptation on OCT
Selection of mutants using OCT as the selective molecule permitted recovery of K. pneumoniae, K. oxytoca (a single lineage only) and E. cloacae mutants, all of these species showing only a very slight (one-fold) increase in terms of MICs. No mutant could be obtained for E. coli. Accordingly, extinction was observed for three of the eight lineages conducted with OCT as the selector for the non-E. coli species. It was thus hypothesized that adaptation to OCT selective pressure was costly in terms of fitness for those different lineages.
Nevertheless, ten mutations were observed in the five lineages showing increased MIC of OCT, of which seven actually corresponded to non-synonymous point mutations (2 stop codons), two to insertions and one to a deletion within some specific genes. The last mutant possessed an intergenic mutation (Table 1).
Of interest, in K. pneumoniae, several mutations were localized in tetR regulatory genes (bm3R1-1, betI and ramR) or efflux pump genes (smvA), as observed under CHX selection. Specifically, the ramR gene encodes a protein involved in the expression of the acrAB-tolC efflux system, which is well known to be involved in antimicrobial resistance in different enterobacterial species when overexpressed [28][29][30]. Indeed, all the lineages with these mutations presented decreased susceptibilities to β-lactams, quinolones, and sulfamethoxazole, whereas no significant cross-increase in COL MICs was observed (Figure 2, Tables 2 and S1). Interestingly, Garrat et al. [31] observed a similar convergence of mutations in smvA and ramR in different Gram-negative species exposed to OCT, leading to decreased susceptibility to antibiotics.
Interestingly, one of the E. cloacae lineages adapted to OCT showed a mutation in the ompX gene encoding an outer membrane protein. Most of these mutants concomitantly developed decreased susceptibilities to other antibiotics such as cefotaxime, which is in line with previous findings showing that mutations in OmpX in E. cloacae are involved in resistance to β-lactams [32]. In addition, a co-lateral effect was also noticed for ciprofloxacin for this OmpX mutant. This observation also correlates with previous findings showing that OmpX is involved in the penetration not only of β-lactams but also of fluoroquinolones in E. cloacae [33].
Conclusions
Selection of mutants showing decreased susceptibility to CHX and COL may occur for all of the E. coli, K. pneumoniae, K. oxytoca and E. cloacae species, leading to cross-decreased susceptibility to COL and CHX, respectively. Notably, no effect on OCT efficacy was observed for these mutants, highlighting a lack of collateral effect on that molecule. Attempts to select mutants with OCT remained difficult, and only a very limited (one-fold) decrease in susceptibility could be obtained, for K. pneumoniae, K. oxytoca and E. cloacae, with cross-decreased susceptibilities to other antibiotics, especially in K. pneumoniae (β-lactams, quinolones and sulfamethoxazole), but no cross-effect on CHX and COL was observed. Altogether, these observations suggest that the mechanisms leading to decreased susceptibility to either of these antiseptics do not affect the other.
Different targets of adaptation could be observed in membrane-associated genes according to the molecule used for selection. CHX adaptation lineages presented mutations in membrane biogenesis genes such as bamE or mipA, whereas OCT induced mutations in the TetR regulatory genes controlling efflux pump genes. Finally, COL adaptation was mainly related to mutations in genes associated with LPS assembly or in genes encoding sensor proteins; these mutations did not significantly affect susceptibility to other antibiotics such as cefotaxime (CTX) or ciprofloxacin (CIP).
Analysis of the identified mutations showed a significant convergence within lineages obtained from one or both antiseptic-containing media. Overall, we observed that mutants were much more difficult to obtain with E. coli than with the other species. Moreover, mutants with reduced susceptibility to OCT were very difficult to obtain, being infrequent and never selected at concentrations corresponding to those used in clinical settings. In contrast, selection of mutants was frequent for the two other drugs tested, with numerous and worrying co-lateral effects with regard to susceptibility to other drugs.
One limitation of our study is that the involvement of all mutations identified remains to be further confirmed by performing knock-out experiments on wild-type strains, which was beyond the primary objectives of our work. Future work will be performed with the same approach on other clinically-relevant bacterial species including Pseudomonas aeruginosa and Acinetobacter baumannii, since on the one hand these species are often involved in nosocomial outbreaks against which antiseptics are critical tools, and on the other hand, they are species against which colistin may be considered as a last-resort antibiotic treatment (when dealing with multidrug-resistant isolates).
Definitions of Trollskär Formation and Sandön Formation in the Archipelago Sea, northern Baltic Sea
The formal stratigraphy of Late-Quaternary late- and post-glacial sediments in the Archipelago Sea, northern Baltic Sea, is revised. The Trollskär Allomember and Sandön Allomember were previously incorrectly defined as allostratigraphic units based on an acoustic discontinuity, even though their contact is gradational in the studied sediment cores. These allostratigraphic units are formally redefined herein as the lithostratigraphic units Trollskär Formation and Sandön Formation, which belong to the Korppoo Alloformation. The purpose is to forestall misconceptions concerning temporal relationships and depositional processes at the transition of these two units.
Introduction
A formal allostratigraphic division was recently presented by Virtasalo et al. (2005) for the Late-Quaternary late- and post-glacial sediments of the Archipelago Sea in the northern Baltic Sea proper (Fig. 1). This depositional succession was divided into three allostratigraphic units: the Dragsfjärd Alloformation, Korppoo Alloformation and Nauvo Alloformation (Fig. 2a). These units were defined by their bounding unconformities recognized in sediment cores and acoustic profiles in accordance with guidelines in the North American Stratigraphic Code (NACSN, 2005). Korppoo Alloformation was further subdivided into the Trollskär Allomember and Sandön Allomember based on an acoustic discontinuity. However, the contact between these allomembers is gradational in the studied sediment cores, which does not allow their definition as allostratigraphic units that, by definition, are identified by their unconformable boundaries (for reasons of this divergence from allostratigraphic principles, see Virtasalo, 2006, p. 18).
With the recent publication of the combined use of allo-and lithostratigraphy (CUAL) approach (Räsänen et al., 2009), it is time to correct the flawed definitions of these units. In the CUAL approach, regional unconformities are used for dividing a stratigraphic column into primary (allo)stratigraphic units. Lithostratigraphic units can then be used for complementing lithostratigraphically "mappable" features such as colour and texture variations in the allostratigraphic framework, where useful. The CUAL approach was developed particularly for Quaternary glacial strata, which are a relatively thin and complex overburden, making their stratigraphic classification by lithostratigraphic means difficult (Flint, 1957).
In this paper, the formal definitions of Trollskär Allomember and Sandön Allomember are retracted. These units are formally redefined as the lithostratigraphic units Trollskär Formation and Sandön Formation. The redefinitions are based on extensive high-resolution acoustic profiling and sediment-core studies presented in Virtasalo et al. (2005); the reader is referred to that study for detailed core descriptions and acoustic profiles. Depositional environments and geochronology for these units are discussed in Virtasalo et al. (2007). Their ichnofossils, diatom composition and sedimentary phosphorus forms are discussed in Virtasalo et al. (2006), Tuovinen et al. (2008), and , respectively.
Trollskär Formation
Trollskär Formation is named after an island closest to the site where its stratotype core AS2-PC4 was collected (Fig. 1). The primary sediment structure of the unit alternates between homogeneous and laminated, with the laminae oriented in arbitrary directions (Fig. 3). The grain sizes range from clay to sand. The unit is acoustically transparent to chaotic with internal reflectors. Occasional rotated, acoustically stratified clasts are supported in the matrix. The unit is sometimes composed of acoustic beds. The external form of the unit is basin fill, with the unit thickness varying from indistinguishable in the elevated areas to up to 15 m in the topographic depressions, and usually being 4-5 m. The unit is generally thickest in the areas with high topographic relief. The unit top is marked by a strong, hummocky acoustic reflector, showing relief at the submetre scale. Details on the sediment physical properties and acoustic characteristics are presented in Virtasalo et al. (2005). The lower boundary of the unit is erosional against the underlying Dragsfjärd Alloformation. The unit grades at the top into Sandön Formation.
Trollskär Formation is interpreted to be a cohesive debris-flow deposit (debrite) based on the chaotic arrangement of sediments, the matrix-supported sediment clasts and the irregular surface features (Virtasalo et al., 2007). Palaeoseismic activity due to glacio-isostatic rebound shortly after deglaciation has been suggested as the triggering mechanism for the debrite (Virtasalo et al., 2007; see also Kotilainen & Hutri, 2004). The increasing unit thickness in the areas of high topographic relief, and the contained unlithified sediment clasts, indicate relatively short transport distances. Further, the wide occurrence and acoustic beds of the unit indicate that it is composed of multiple slumps and gravity flows initiated from different locations. Hutri & Kotilainen (2007) describe a similar debris-flow unit in the western Archipelago Sea and in the sub-basins of the northernmost Baltic Sea proper. Those authors also mention occurrences of separate debris-flow deposits in the south-eastern Gulf of Bothnia. It seems that numerous separate debris-flow deposits occur at the same stratigraphic level, which becomes younger in the direction of ice retreat in the northern Baltic Sea proper (Virtasalo et al., 2007). Trollskär Formation can, hence, be considered a discontinuous diachronic unit.
Sandön Formation
Sandön Formation is named after an island close to the stratotype core AS2-PC4 collection site (Fig. 2). The unit is a lithologic succession composed of a weakly-laminated to homogeneous grey clay facies, which abruptly changes into a monosulphide-banded grey clay facies. The banding grades upward into a diffuse monosulphide-mottled grey clay facies, which grades into a bluish-grey clay facies (Fig. 3). The last two lithofacies are abundant in pyrite-marcasite concretions. The grain sizes are essentially uniform throughout the succession. The unit forms a conformal drape of essentially uniform thickness of 7-8 m on the underlying topography, with the thickness slightly increasing seaward. The unit is acoustically well-stratified with conformable internal reflectors at its basal part, but is transparent at its top. The top boundary is a strong reflector. In places, the internal reflector structure is truncated at the top boundary, indicating erosion. For details on the sediment physical properties and acoustic characteristics, see Virtasalo et al. (2005).
Sandön Formation is interpreted to have accumulated by fall-out from suspension in a deep freshwater basin (Virtasalo et al., 2007). The relationships of phosphorus forms, reactive iron and organic carbon in these clays indicate that they are derived from several distant sources such as melt waters at the ice front, reworking of older deposits in shallower areas, and river load. Sedimentary diatom assemblages display moderate production and fresh surface waters (Tuovinen et al., 2008). Ichnofossils reflect intensified bioturbation and increased burrowing depths at the upper part of the unit (Virtasalo et al., 2006). Sandön Formation occurs over wide areas in the Archipelago Sea, and can be traced in acoustic profiles at least to the Gulf of Bothnia, Gulf of Finland, and northern Baltic Sea proper.
Discussion and Conclusions
The formal stratigraphic division of Late-Quaternary late- and post-glacial sediments in the Archipelago Sea is revised. The incorrect definitions of Trollskär Allomember and Sandön Allomember by Virtasalo et al. (2005) are retracted, and these units are formally redefined as the lithostratigraphic units Trollskär Formation and Sandön Formation, respectively (Fig. 2b). Both lithostratigraphic units occur within the unconformity-bounded Korppoo Alloformation. The unconformable lower boundary of Trollskär Formation corresponds to the lower boundary of Korppoo Alloformation, while the unconformable upper boundary of Sandön Formation corresponds to the Korppoo Alloformation top boundary. This revision will help avoid misconceptions concerning temporal relationships and depositional processes at the transition between these two units.
Cross-sectional survey of the undergraduate rheumatology curriculum in European medical schools: a EULAR School of Rheumatology initiative
Objectives To survey the undergraduate rheumatic and musculoskeletal diseases (RMDs) curriculum content in a sample of medical schools across Europe. Methods The undergraduate musculoskeletal diseases and disability curriculum of University of Nottingham, UK, was used as a template to develop a questionnaire on curriculum content. The questionnaire elicited binary (yes/no) responses and included the option to provide additional information as free text. The survey was mailed to members of the European League Against Rheumatism (EULAR) School of Rheumatology (Undergraduate Classroom) and to EULAR Standing Committee on Education and Training members in January 2017, with a reminder in February 2017. Results Responses were received from 21 schools belonging to 11 countries. Assessment of gait, hyperalgesic tender site response and hypermobility were not included in many curricula. Similarly, interpretation of investigations undertaken on synovial fluid was taught in only 16 schools. While disease-modifying anti-rheumatic drugs and biological agents, and urate-lowering treatment were included in the curricula of 20 and 21 institutions, respectively, only curricula from 18 schools included core non-pharmacological interventions. Osteoarthritis, gout, rheumatoid arthritis, spondyloarthropathy, polymyalgia rheumatica and lupus were included in the curriculum of all institutions. However, common RMDs such as calcium pyrophosphate deposition, fibromyalgia, giant cell arteritis and bone and joint infection were included in 19 curricula. Conclusion This survey highlights areas of similarities and differences in undergraduate curricula across Europe. It is hoped that the results of this survey will catalyse the development and agreement of a minimum core European Curriculum for undergraduate education in RMDs.
What is already known about this subject?
► Surveys of medical undergraduate rheumatic and musculoskeletal disease (RMD) curricula in individual European countries suggest variations in curriculum content.
► A survey of the undergraduate RMD curriculum in medical schools across Europe has not been performed.

What does this study add?
► This survey highlights areas of similarities and differences in undergraduate curricula across Europe.

How might this impact on clinical practice?
► It is hoped that the results of this survey will catalyse the development and agreement of a minimum core European Curriculum for undergraduate medical education in RMDs.
Introduction

The aim of the EULAR School of Rheumatology (ESoR) (Undergraduate Classroom) was to survey the undergraduate RMD curricula in European medical schools.
Methods
The undergraduate RMD curriculum of the University of Nottingham is based on the ESCET guidance 1 and was used to develop a questionnaire. The survey document elicited yes/no responses and included the option to provide additional detailed information. It was reviewed by the ESoR (Undergraduate Classroom) to assess face validity and was emailed to members of the ESoR (Undergraduate Classroom) and ESCET in January 2017, with a reminder after 4 weeks.
Results
Twenty-one schools from 11 countries responded (online supplementary table S1). The questionnaires were completed by tenured faculty with teaching responsibilities and knowledge of their curriculum. All schools taught differentiation between inflammatory and mechanical joint pain, and how to make observations about the musculoskeletal system (online supplementary table S2). The identification and characterisation of symptoms and signs of arthropathy at the hand, wrist, elbow, gleno-humeral, hip, knee, ankle and foot joints was taught in >90% of the responding institutions. In contrast, identification of periarticular lesions was taught in five institutions. However, students were expected to be aware of these conditions in the majority of institutions (online supplementary figure S1). All institutions gave students the opportunity to examine patients. Over 95% of the responding institutions taught the differential diagnosis of acute and chronic monarthritis, oligo-arthritis and inflammatory polyarthritis.
Investigations relevant to RMDs and general principles of management were included in most curricula (table 1, online supplementary figure S2, table S3). Common RMDs such as osteoarthritis, neck and low back pain, fibromyalgia and regional pain, bone diseases and crystal deposition diseases were included in the majority of curricula, with some variation in content (online supplementary figures S3-5, table S4). There was uniformity in coverage of autoimmune rheumatic diseases (AIRDs) in most curricula (table 2). While rare manifestations such as atlanto-axial subluxation in rheumatoid arthritis were included in 17 curricula, an exploration of long-term physical, psychological and social effects or the contribution of the multi-disciplinary team (MDT), and drug counselling and monitoring, were included in only one curriculum. An outline of an appropriate management plan for comorbidities was included in only two curricula.
The risk factors, common causative organisms, signs and symptoms, differential diagnosis, and investigation of bone and joint infection were included in 19 curricula. Bony malignancy including metastases were included in fewer curricula (online supplementary table S6), while uncommon conditions were included in very few institutions (online supplementary table S7).
Discussion

This is the first survey of the undergraduate RMD curricula of a number of medical schools across several European countries. It found areas of harmony and differences in curriculum content. There were similarities in teaching on AIRDs, while discrepancies were obvious for assessments such as gait examination, periarticular assessment and assessment for hypermobility. Similarly, identification of disability and the role of the MDT were taught in few institutions. Some medical schools included advanced imaging techniques with relevance to RMDs, while over 20% of schools did not teach investigations undertaken on synovial fluid. While pharmacological management of autoimmune and other rheumatic diseases was included, there was a lack of teaching on adjunctive therapies and coping strategies. Factors underpinning variation in curriculum content may include different roles of rheumatologists across Europe and the need to tailor training to match local demands. The role of a rheumatologist depends on the number of rheumatologists per capita, postgraduate training and competing management of RMDs by other specialists such as orthopaedic surgeons, physiotherapists and general practitioners. Given the small number of medical schools that responded, we are unable to provide geographic comparisons. Currently, a nationwide curriculum exists in France, but not, to the best of our knowledge, in many other European countries. It is hoped that the results of this survey will catalyse the development of a core European curriculum for undergraduate medical education in RMDs. The aim of undergraduate medical education has moved from acquisition of knowledge to acquiring competence, and this may make it easier to harmonise curricula across Europe once the expected competencies are standardised. Development of a core curriculum is likely to improve the amount of time devoted to teaching about RMDs. For instance, a recent survey of Canadian medical schools observed that an average of just 2.3% of curriculum time was devoted to education about RMDs, despite these disorders being responsible for 13.7%-27.8% of all primary care consultations in Canada. 7 Additionally, it will also improve the coverage of musculoskeletal topics in undergraduate textbooks. 8 Research suggests that even when a condition is included in the curriculum, the delivery of teaching may be variable, and recently qualified doctors may have substantial deficits in their knowledge base, as exemplified in a study in which the majority of recently qualified doctors did not demonstrate competence in the long-term management of gout. 9 There are several caveats to our findings. First, this is a small sample of European medical schools, so generalisability may be limited. Second, we have not collected data on the amount of teaching, the teaching methods used, the detail to which each topic is taught, specific learning objectives and the quantity of exposure to patients. We feel that these data would be difficult to collect reliably as medical schools have several associated hospitals where the delivery of teaching occurs, and this may vary from place to place.
In conclusion, this survey highlights areas of similarities and differences in undergraduate curricula across Europe. It is hoped that it will catalyse the development of a core European Curriculum for undergraduate education in RMDs.
Influence of perceived and actual neighbourhood disorder on common mental illness
Purpose Fear of crime and perceived neighbourhood disorder have been linked to common mental illness (CMI). However, few UK studies have also considered the experience of crime at the individual and neighbourhood level. This study aims to identify individual and local area factors associated with increased perceived neighbourhood disorder and test associations between CMI and individuals’ perceptions of disorder in their neighbourhoods, personal experiences of crime and neighbourhood crime rates. Methods A cross-sectional survey was conducted of 1,698 adults living in 1,075 households in Lambeth and Southwark, London. CMI was assessed using the Revised Clinical Interview Schedule. Data were analysed using multilevel logistic regression with neighbourhood defined as lower super output area. Results Individuals who reported neighbourhood disorder were more likely to suffer CMI (OR 2.12) as were those with individual experience of crime. These effects remained significant when individual characteristics were controlled for. While 14 % of the variance in perceived neighbourhood disorder occurred at the neighbourhood level, there was no significant variance at this level for CMI. Conclusions Perceived neighbourhood disorder is more common in income-deprived areas and individuals who are unemployed. Worry about one’s local area and individual experience of crime are strongly and independently associated with CMI, but neighbourhood crime rates do not appear to impact on mental health.
Introduction
There is increasing interest in the role of place in influencing a variety of health outcomes and in explaining health inequalities [1]. Within mental health, spatial patterning has long been noted in the incidence of suicide [2] and psychosis [3], and more recently various neighbourhood-level exposures have been found to influence these outcomes [4,5].
Common mental illnesses (CMI) (i.e. depression and anxiety disorders) [6] are major contributors to the burden of disease globally, particularly in high-income countries [7]. Research on the influence of place on CMI over the past decade has been mixed. The prevalence of these disorders show stark social inequalities, with a greater proportion of those on lower incomes, the unemployed and those with fewer educational opportunities being affected [8]. The prevalence of these factors is differentially distributed across communities, with deprived neighbourhoods by definition having higher concentrations of impoverished and economically inactive individuals [9].
It is less clear whether the risk of CMI is also affected by social processes occurring at a neighbourhood level. Research in this area has been mixed, with two reviews finding some evidence, predominantly from the USA, of a link between deprivation and other neighbourhood problems and the prevalence of CMI that persists when individual factors are controlled for [1,10]. The majority of the studies included in both these reviews were performed in urban areas, although some studies that have also included rural areas found no difference in CMI prevalence between the settings [11,12]. Evidence from national UK samples has found relatively little variation in the prevalence of CMI between neighbourhoods which has generally been accounted for by individual and household factors [12][13][14][15]. Small neighbourhood-level effects have been found in female, non-white and lower educated sub-groups over and above individual risk factors [14,16].
Much research looking for neighbourhood effects on CMI has used measures that are a summary of the characteristics of each area's population [9]. For example, in the studies discussed above, area deprivation was generally characterised by measures constructed from summary statistics for the population including average income and rates of unemployment. When measured in this way, such area effects are likely to be difficult to separate from their analogous individual variable, particularly where the neighbourhoods studied are small and homogenous [13].
To find true neighbourhood effects separate from individual characteristics, it may be more fruitful to explore aspects of neighbourhood physical and social environment [9]. These have been less frequently investigated, not least because comparable objective measures of context for multiple small areas are much less readily available than summary statistics describing populations. Levels of disorder can be conceptualised as an aspect of both neighbourhood social environment, where levels of crime and anti-social behaviour influence feelings of safety and social connectedness, and the physical environment which may be degraded by graffiti and vandalism [9]. The potential for these factors to influence mental health has been reflected in the UK policy with measures of disorder included in national measures of well-being [17] and population mental health used to evaluate neighbourhood regeneration policies [18].
A review of the influence of neighbourhood characteristics on depression found some evidence supporting a harmful effect of neighbourhood disorder, although most of the studies included relied on respondent perception alone to measure disorder [10]. Ross and Mirowsky's [11] work in Illinois suggested that the effect of neighbourhood deprivation on mental health was mediated by neighbourhood disorder. Work on fear of crime has also shown a link between individuals' concerns about their local area and various worsened health outcomes including mental health [19]. Studies considering the effect of an unfavourable physical environment on mental health have hypothesised that this may act as a direct stressor that increases individuals' vulnerability to anxiety and depression [20]. Meanwhile, work using the Whitehall II study data has suggested that worry about disorder in the local area has the effect of limiting involvement in social and physical activities. Such activities in turn may enhance well-being and provide a buffer against CMI [19]. Whitely and Prince found a similar effect of fear of crime in their qualitative work in inner city London [21].
In this study we used a broad definition of neighbourhood disorder which encompasses both physical and social aspects. Unlike many previous studies, we have included both a measure of individuals' perception of disorder in the local area and local crime rates as a more objective proxy for disorder, as well as individuals' experience of victimisation. We examine experience of neighbourhood disorder and of CMI in an area of inner South London which is diverse both in terms of ethnicity and levels of neighbourhood deprivation and in which rates of CMI are high and disorder is a significant concern.
This study examines the association between experience of neighbourhood disorder and CMI. We first aim to examine the relationship between perceived neighbourhood disorder and individual demographics and experience of crime as well as area-level factors. We hypothesise that perceived neighbourhood disorder will be clustered by neighbourhood and be higher in areas with higher crime rates and amongst those with an individual experience of crime. We then test the hypotheses that CMI is clustered by neighbourhood and that (1) individual perception of neighbourhood disorder, (2) personal experience of violent victimisation and (3) higher neighbourhood crime rates are associated with higher prevalence of CMI.
Method
Lambeth and Southwark are neighbouring boroughs in inner South London with a combined population of approximately 590,000. This population is ethnically diverse with over a third of residents belonging to black and minority ethnic groups and a similar proportion born outside of the UK [22]. Overall, the area is significantly more deprived than the national average with over 90 % of neighbourhoods studied scoring above the national median for deprivation on the Index of Multiple Deprivation 2010 (IMD). However, it also contains areas of significant wealth including some of the richest neighbourhoods in the UK [23]. The overall crime rate of 125/1,000 population for the two boroughs is well above the national average of 74/1,000 [22] and nearly 50 % of neighbourhoods studied were in the top quintile for the crime domain of the IMD.
Study design and participants
The South East London Community Health (SELCoH) study surveyed 1,698 individuals in 1,075 randomly selected households within the London boroughs of Lambeth and Southwark between 2008 and 2010. Face-to-face interviews were carried out in participants' homes by trained interviewers using a computer-assisted schedule. The survey collected data on psychiatric and physical morbidity, health behaviours and health service use as well as socio-demographics, psychosocial factors and neighbourhood characteristics. Full details of study design, participants, procedures and measures used have been published elsewhere [24].
Exposures
Perceived neighbourhood disorder was determined from four questions: "Thinking of the area you live in, how much of a problem is each of the following?" asked regarding (1) vandalism/graffiti, (2) crime, (3) safety and (4) rubbish/litter. Responses were scored on a Likert scale as 'Not a problem' (0), 'Minor' (1), 'Somewhat serious' (2) and 'Very serious' (3). The total score when all four questions were combined was not normally distributed, and so a binary variable was created by splitting the highest rating given on any question into none/minor (low perceived disorder) and somewhat/very serious (high perceived disorder).
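A minimal sketch of this dichotomisation is shown below; the item keys and the example responses are illustrative, not the actual questionnaire coding.

```python
# Minimal sketch of the binary perceived-disorder variable described above:
# four 0-3 Likert items, with the highest rating dichotomised into
# low (none/minor, 0-1) versus high (somewhat/very serious, 2-3).

ITEMS = ("vandalism_graffiti", "crime", "safety", "rubbish_litter")

def high_perceived_disorder(responses: dict) -> bool:
    """responses maps each item to a score: 0=not a problem, 1=minor,
    2=somewhat serious, 3=very serious."""
    return max(responses[item] for item in ITEMS) >= 2

print(high_perceived_disorder(
    {"vandalism_graffiti": 1, "crime": 2, "safety": 0, "rubbish_litter": 1}
))  # True: 'crime' rated somewhat serious
```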
Individual experience of crime was defined by three variables. Participants were coded as having been victimised if they answered yes to any of the following: (1) "Have you ever been attacked, mugged, robbed or been the victim of a serious crime?", (2) "Has anyone ever injured you with a weapon-gun, knife, stick, etc.?" or (3) "Has anyone ever hit you, bit you, slapped you, kicked you or forced you to have sex against your wishes?". Participants were coded as having witnessed violence if they answered yes to "Have you ever seen something violent happen to someone (e.g. attacked or beaten) or seen someone killed?". Participants were also asked whether the same events had happened to them in the past year.
Outcome
Presence of CMI was assessed using the Revised Clinical Interview Schedule (CIS-R), a semi-structured interview covering non-psychotic symptoms [25]. The conventional cutoff of a total score of 12 was used to define cases.
Potential confounders
Age, sex, ethnicity, education and occupation were included as individual-level variables, which were shown in previous research to be associated with CMI and fear of crime and hence potential confounders of the relationship between the two. Occupation was categorised using the Registrar General's Classification of social class [26] condensed into two groups, manual or non-manual, for respondents currently in work. Participants who were retired, sick, disabled, students or caring for children were classified as economically inactive, while those seeking work were separately classified as unemployed. Household income (as well as occupation and education) was included as an indicator of an individual's deprivation to allow the effect of this to be distinguished from neighbourhood-level deprivation.
Residential mobility
Having recently moved into a neighbourhood may be associated with different rates of survey participation, different perceptions of the neighbourhood and different experiences of violence (for example, people may have moved seeking out a safer neighbourhood). Participants were asked whether they had been at their current address for more or less than 2 years.
Spatial scale
Full postcode data were available for each household and used to allocate them to lower layer super output areas (LSOAs) and Census Area Statistics wards (wards) using the Office of National Statistics Postcode Directory [27]. Analyses were performed using LSOA as a proxy for neighbourhood. LSOAs are statistical geographic units used by the Office of National Statistics (ONS) for reporting census data. While they cannot be considered synonymous with neighbourhoods, this level of geography has the benefit of a more local scale than wards, as LSOAs have a mean population of 1,500 compared to ward populations of 10,000-15,000 in Lambeth and Southwark [28]. They are the standard unit used for publishing ONS neighbourhood statistics including the IMD.

Neighbourhood-level variables

The ONS IMD 2010 was used to define neighbourhood levels of deprivation. IMD is the government's official measure of deprivation at the small area level and scores are published for every LSOA in England [29]. The IMD 2010 is based on data from 2008 for 38 indicators grouped into seven domains and is designed to capture multiple aspects of deprivation. Scores do not indicate absolute differences between areas, but are ranked to allow relative deprivation between areas to be explored. Total IMD contains a health sub-domain which includes measures that aim at estimating local rates of mental disorder, so for this analysis the income and crime sub-domains were used on their own as well as overall IMD rank. The income sub-domain is based on a count of the proportion of an LSOA's population who are income deprived, indicated by the receipt of means-tested benefits. The crime sub-domain is based on the police-recorded rates of (1) violent crime, (2) burglary, (3) theft and (4) criminal damage, standardised to the resident and workplace population of the LSOA [30].
Although the boroughs of Lambeth and Southwark have areas of low deprivation compared to England as a whole, the majority of both boroughs are more deprived than the national average. To allow for useful comparisons of relative deprivation locally to be made, the LSOAs in Lambeth and Southwark were grouped into local tertiles for analyses.
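A minimal sketch of this geographic linkage and local-tertile grouping, assuming hypothetical file and column names rather than the actual ONS field names, might look as follows.

```python
# Minimal sketch: map households to LSOAs via a postcode lookup and split
# LSOA-level IMD scores into local (Lambeth/Southwark) tertiles.
import pandas as pd

households = pd.read_csv("households.csv")                 # includes a 'postcode' column
postcode_dir = pd.read_csv("ons_postcode_directory.csv")   # 'postcode', 'lsoa_code'
imd = pd.read_csv("imd_2010_lsoa.csv")                     # 'lsoa_code', 'imd_score'

# Allocate each household to its LSOA
households = households.merge(postcode_dir[["postcode", "lsoa_code"]],
                              on="postcode", how="left")

# Local tertiles of deprivation across the LSOAs in the two boroughs
imd["imd_local_tertile"] = pd.qcut(imd["imd_score"], 3,
                                   labels=["least deprived", "middle", "most deprived"])

households = households.merge(imd[["lsoa_code", "imd_local_tertile"]],
                              on="lsoa_code", how="left")
```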
Statistical methods
Analyses were performed in STATA version 11 [31]. Initial descriptive analyses were performed using survey commands (svy) to account for clustering by household due to the study design and were weighted for non-response within households. The process for calculating the weights has been published elsewhere [24].
Multilevel models
Random effects logistic regression analyses were performed for the binary outcomes: (1) high perceived neighbourhood disorder and (2) CMI. A three-level random intercept logistic model was used to account for the hierarchical nature of the data, considering individuals as level 1, households as level 2 and neighbourhood as level 3. Analyses used the STATA command GLLAMM version 2.3.15 [32] using a logit link function and binomial family for the distribution of outcomes. Simple logistic models considering one covariate at a time were used to estimate unadjusted odds for individual and neighbourhood-level variables. Model 1 in each case mutually controlled for the individual demographic and socio-economic variables, and Model 2 additionally controlled for individual experience of crime (victimisation and witnessing violence). The proportion of residual variance occurring at each level of the model was assessed first using a null model with no covariates controlled for and again for Model 2. To estimate the proportion of the residual variance occurring at each level, an underlying linear random intercept model for the latent propensity to the binary outcome, defined by a threshold, was assumed. Hence the residual variance at the individual level was assumed to follow a standard logistic distribution and so was fixed at the standard logistic variance of 3.29 (π²/3) (see Snijders and Bosker [33] for further discussion of estimates of variance from multilevel logistic models). Variance partition coefficients were calculated for each level by dividing the residual variance at that level by the total residual variance. Two- and three-level models for each set of covariates modelled were also compared using likelihood ratio tests.
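To make the variance partitioning concrete, the following Python sketch (not part of the original STATA analysis) computes variance partition coefficients for a three-level logistic model, taking the individual-level variance as fixed at π²/3 ≈ 3.29 and, purely for illustration, the household and neighbourhood variance estimates reported for the perceived-disorder null model in the Results below.

```python
import math

# Latent-response formulation of a three-level logistic model:
# the individual-level residual variance is fixed at pi^2 / 3.
individual_var = math.pi ** 2 / 3               # ~3.29

# Random-intercept variance estimates (values taken from the null model
# for perceived neighbourhood disorder reported in the Results below).
household_var = 1.96
neighbourhood_var = 0.85

total_var = individual_var + household_var + neighbourhood_var

# Variance partition coefficient: share of residual variance at each level.
vpc = {
    "individual": individual_var / total_var,
    "household": household_var / total_var,
    "neighbourhood": neighbourhood_var / total_var,
}

for level, share in vpc.items():
    print(f"{level:>13}: {100 * share:.1f}% of total residual variance")
```

Run as written, this reproduces the approximate 54/32/14 % split reported below for that null model.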
Participation rates
At least one person was interviewed in 51.9 % of eligible households contacted. Within participating households, 71.9 % of eligible adults participated. The sample was similar to the 2011 census sample in terms of demographics and socio-economic indicators, with the exception of the sample being slightly younger and having more students among the economically inactive [34]. There were participants located in 322 of the 342 LSOAs in Lambeth and Southwark with a range of 1-18 participants per LSOA. There were participants in all 42 wards within the two boroughs with a range of 19-68 per ward.
The prevalence of CMI amongst all participants was 24.2 %. Personal experience of crime was common, with 51.7 % of the sample having been victimised at some time in their lives and 41.5 % having witnessed violence. Table 1 shows the demographic characteristics and individual experience of crime in the sample.
Perception of neighbourhood disorder was greatest amongst 16- to 24-year-olds, students and the unemployed (Table 2). When individual demographic factors were adjusted for simultaneously, perceived neighbourhood disorder was significantly lower in older people. Those from the white ethnic group were more likely to report neighbourhood disorder than other ethnic groups, with the effect reaching statistical significance for black African and "other" ethnic groups. Having a personal experience of crime was associated with increased perceived disorder, and adjusting for this (Model 2) revealed higher perceived disorder in women which had not been statistically significant in earlier models.
Participants whose neighbourhoods were characterised by higher crime rates, greater income deprivation and higher total deprivation tended to have increased concern about disorder. The odds ratios presented in Table 3 are for each tertile of deprivation with, for example, participants from neighbourhoods with the highest total deprivation having three times the odds of reporting neighbourhood disorder compared to those in the least deprived tertile. This effect remained when sex, age, ethnicity, household income, education, occupation, victimisation and witnessing violence were controlled for (Model 2). The effect of neighbourhood crime rates on perception of neighbourhood disorder was lower than that of income deprivation and total deprivation, and became non-significant when income deprivation was controlled for simultaneously.
A null three-level model was used to estimate the proportion of variance in perceived neighbourhood disorder occurring at each level. Where neighbourhood was defined as LSOA, variance at the individual level was fixed at 3.29, which represented 53.9 % of the total variance; variance at the household level was 1.96 (SE 0.52), 32.1 % of total variance; and variance at the neighbourhood level was 0.85 (SE 0.25), 14.0 % of total variance. A likelihood ratio test comparing the three-level model with a two-level model showed that the three-level model better accounted for the data (χ² = 14.6, p < 0.0005). These proportions remained similar when individual factors were controlled for (variance at neighbourhood level 0.76 (SE 0.26), 12.6 % of total variance). The proportion of variance at the neighbourhood level fell to 0.52 (SE 0.23), 9.0 % of total variance, but remained significant when income and crime deprivation were controlled for, suggesting that around a quarter of the variance at the neighbourhood level is accounted for by these deprivation indices.
Common mental illness
Perceived neighbourhood disorder was associated with the presence of CMI with an unadjusted OR of 2.12 in the base model (Table 4). This effect was partially attenuated by controlling for individual demographic factors and individual experience of crime, but remained sizeable and significant (OR 1.55, p = 0.007) when these factors were controlled for. Neither neighbourhood crime rates, income deprivation nor total deprivation was associated with CMI, before or after controlling for individual factors. Estimates of the proportion of variance at each level from the null model showed significant variance at the individual level (fixed at 3.29, 56.8 % of total variance) and household level [2.50 (SE 0.58), 42 % of total variance], but no significant variance at the neighbourhood level (variance <0.001). A likelihood ratio test comparing the three-level model with a two-level model showed no additional benefit to including a third level (χ² = 0.63, p = 0.43). This remained the case when models controlling for individual and neighbourhood-level variables were considered.
The above multilevel models were repeated with the neighbourhood level defined as wards. The resultant odds ratios and confidence intervals were very similar in all models despite the definition of neighbourhood being much larger (data not shown here).
Residential mobility
There were 588 participants (30.0 %, weighted for non-response) who had been living at their current address for less than 2 years. Those who had moved in the past 2 years were significantly younger, more likely to be in higher income and better educated groups, and more likely to be economically active in non-manual work or be students than those who had not (data not shown here). Those who had moved in the last 2 years were no more likely to score as cases on the CIS-R than those who had not (OR 0.87, 95 % CI 0.69-1.11).
Without neighbourhood-level data on residential mobility, it is not possible to say whether areas of greater mobility were also more disordered. However, individuals who had moved in the past 2 years were less likely to report neighbourhood problems than those who had not (OR 0.62, 95 % CI 0.50-0.77). Those who had ever been victimised were no more likely to have moved in the past 2 years than those who had not. Those who had been victimised in the past year were more likely to have moved in the past 2 years (OR 1.90, 95 % CI 1.31-2.76); however, this association was confounded by the youthful demographic of the more mobile population and was reduced and no longer significant when age was controlled for (OR 1.37, 95 % CI 0.92-2.03). Overall, there was no evidence that residential mobility might act as a confounder of associations between perceived neighbourhood disorder, victimisation and CMI.
Sensitivity analyses
The sample included some LSOAs which contained only small numbers of individuals for analysis. The sensitivity of the results to the inclusion of these LSOAs was tested by repeating the analyses excluding LSOAs which contained fewer than five individuals. This reduced the sample to 1,175 individuals in 714 households and 152 LSOAs. A summary of the results is given below; full tables are not shown here for space reasons. In the analyses with perceived neighbourhood disorder as the outcome, the associations between individual factors and increased perceived disorder all remained in the same direction, although the trends to decreased perceived disorder in non-white ethnic groups and increased perceived disorder in students and the unemployed were no longer significant at the 5 % level. For neighbourhood-level factors, all the effects reported remained significant when LSOAs with few participants were excluded and in most cases effect sizes and significance were increased.
Analyses with CMI as the outcome showed very little change when LSOAs with few participants were excluded. All the effects noted in the main analysis remained with similar effect sizes and significance.
With all the above analyses, the proportions of the variance reported at each level remained similar when LSOAs with few participants were excluded, with a small reduction in the proportion of variance at the household level and a corresponding increase in variance at the individual level. For example, in the null three-level model with CMI as an outcome reported above, household variance reduced to 2.03 (SE 0.56), falling from 43.2 % of total variance to 38.2 %, while individual variance increased from 56.8 % of total variance to 61.8 % (actual variance fixed at 3.29 in both models) and neighbourhood-level variance remained <0.0001 and non-significant.
Given that household-level variances remained surprisingly high in all our models compared to those reported in the literature, the null model with CMI as an outcome was also repeated excluding households with only one respondent. This produced a further reduction in household-level variance to 1.36 (SE 0.49), 29.1 % of total variance, with an increase in the proportion of individual-level variance to 70.5 % and a very small and non-significant increase in neighbourhood-level variance to 0.02 (SE 0.24), 0.4 % of total variance.
Experience of neighbourhood disorder
Concern about neighbourhood disorder and in particular crime was common in our sample compared to national figures [35]. This reflects the recorded crime statistics for Lambeth and Southwark boroughs, which both rank highly in levels of crime and anti-social behaviour nationally, as well as a wider population perception of them as high crime areas [36], and supports the use of crime rates as our objective measure of disorder. The individual-level risk factors examined also suggest that perception of neighbourhood disorder is related to objective experience, with victimisation or witnessing violent crime being the strongest individual-level associations. The 16- to 24-year age group had the greatest concern about disorder, while the over 65 age group had fewer concerns. Whilst lower concern about crime in the elderly may appear counterintuitive, it is in keeping with national samples [35]. These differences are partly explained by the association seen between personal experience of crime and perceived neighbourhood disorder. In our sample, 14 % of individuals aged 16-24 reported having been victimised in the past year and 18 % reported having witnessed violence in that time, compared with 4 and 5 %, respectively, in other age groups. Young people without direct experience of crime may nonetheless have increased concerns due to their realistic understanding that they are at much higher risk of victimisation than the population as a whole.
In contrast to national samples [35], univariate analyses did not show a significantly higher concern about neighbourhood disorder amongst women. However, this expected effect was revealed in models controlling for experience of crime. This indicates that the effect of gender was being suppressed by the impact of individual experience of crime. Men more commonly reported experiencing victimisation and witnessing violence in this sample and this may be acting to increase their prevalence of concern about disorder.
Examination of the variance in perceived disorder indicated that there was clustering of high perceived neighbourhood disorder by LSOA, suggesting that where people live makes a significant contribution to perception in addition to the effect of individual characteristics. Living in a high crime neighbourhood was associated with an increase in perceived disorder of a similar magnitude to that associated with individual experience of crime. However, the effect of deprivation was larger and area-level income deprivation appears to account for the effect of neighbourhood crime rates when both are controlled for. This might be taken as an indication that individual perceptions of neighbourhoods as disordered and unsafe are more related to visible physical disorder associated with deprivation than specific incidents of crime. Furthermore, crime and income deprivation together account for only about a quarter of the variance at neighbourhood level, suggesting that other area-level factors also play an important role.
Common mental illness
We found a strong association between perceived neighbourhood disorder and CMI. This association was not simply an effect of confounding by demographic and socio-economic factors. Victimisation and witnessing violence were both also strongly linked with CMI, but these factors only accounted for part of the effect of perception of neighbourhood disorder on CMI.
The relationship between perceived neighbourhood disorder and CMI is likely to be a complicated one. Feeling unsafe and under threat in one's local area could potentially act as a direct stressor on individuals, especially those in groups whose daily activities are most restricted to their immediate locality, such as the unemployed [11]. These groups are already at increased risk of CMI. Perhaps more significantly, such perceived disorder reduces individuals' ability to take part in social and physical activities outside the home that might be important in protection and recovery from such illnesses [19,21]. This study demonstrates that concerns about disorder in the neighbourhood are concentrated in areas where more income-deprived people live and so have the potential to be exacerbating inequalities in mental health outcomes.
Although individuals' perception of their neighbourhood appears to be linked to mental health, these data did not suggest that the place where people lived had an effect. In common with other UK studies [15,16], we found no significant variance in CMI at neighbourhood level, despite this study using a smaller unit of neighbourhood and a more robust measure of CMI than most previous studies.
We did not find an independent effect of neighbourhood crime rates or deprivation on CMI, in contrast to the association with subjective perception. This may highlight that police-recorded crime rates are an imperfect measure of the true experience of neighbourhood disorder as they reflect a relatively small proportion of total crime [39] and may be more likely to miss crimes in deprived areas due to underreporting. However, crime is not an environmental factor that is necessarily experienced by the whole population: the impact of a crime within a neighbourhood may be profound for some individuals directly experiencing it, but have little or no direct impact on the majority.
These findings suggest that reducing perceived neighbourhood disorder is a worthwhile target for improving population mental health. However, the most useful interventions may be those targeted at specific population groups, for example young people and those who have been victims or witnesses to crime, rather than those targeted at a neighbourhood as a whole. Measures of neighbourhood deprivation may be more useful than crime rates in identifying geographical areas in which to target these higher risk groups.
Strengths and limitations
This study adds to the existing literature on the influence of neighbourhood disorder in a number of ways. It investigated an inner city population which is diverse, especially in terms of ethnicity and socio-economic status, and subject to high levels of deprivation and neighbourhood disorder.
Using cross-sectional data, it is not possible to ascertain the direction of causation for the association between perceived neighbourhood disorder and CMI. Information biases are important; participants who were cases on the CIS-R were asked about their neighbourhood at a time when they reported a recent experience of anxiety or depression symptoms and these are likely to colour their perception of their local area. However, the difference in the spatial patterning of perceived disorder from that of CMI indicates that these responses did not simply measure the same thing. Furthermore, perception of disorder was associated with objective measures of neighbourhood problems, particularly deprivation, independent of individual characteristics, while CMI was not.
A limitation of this study is the relatively low household participation rate of 51.9 %. This is in part a reflection of the difficulty of conducting surveys in deprived inner city environments, and the participation rates reported are relatively high compared with recent surveys in similar areas [24]. It is known that non-participation in surveys is strongly associated with the presence of mental disorder, so it may be that the rates of CMI reported in this sample, although high in comparison to national UK samples [24], are an underestimate. Work looking at the effect of nonparticipation suggests that while it is a significant problem for prevalence studies, it only modestly reduces associations between exposures and outcome [37]; however it may have reduced the effect sizes found in this study.
Individuals who move home frequently might be expected to be less likely to participate in surveys and hence be underrepresented in this study. In our sample 30 % of individuals had moved in the past 2 years. Greater London Authority (GLA) figures for 2008/09 estimate that the proportion of individuals in Lambeth who had been living at a previous address 1 year before was 17.0 %, while in Southwark it was 14.9 % [38]. This suggests that the sample contains approximately the expected numbers of residentially mobile individuals. The lack of association between residential mobility and victimisation, perceived disorder or CMI suggests it is unlikely that residential mobility confounds the associations reported.
Choice of exposures
Much work on neighbourhood effects has used measures which summarise population characteristics and so are difficult to interpret when individual factors are also controlled for [9]. The use of crime rates is a step towards considering a neighbourhood's environment separate from its population. It is possible that a stronger relationship with area-level variables was not observed because most of the areas within the study had levels of crime and deprivation that are high on a national comparison, limiting the variation between neighbourhoods and so reducing our ability to detect neighbourhood effects.
The measures of individuals' experience of actual violent crime suggest that this is a strong influence on perception. The small numbers reporting experience of violence in the past year prevented the use of these variables in the main models, so those ever having experienced violence were used instead. The lack of information about the timing and location of reported experiences of violence means that individuals' experience cannot be taken as a measure relating to their neighbourhood. However, the persistence of a strong association between perceived disorder and CMI despite inclusion of data about individual experience is something that has not been possible in much previous research linking fear of crime to CMI, and demonstrates that this relationship exists independently of actual experience of victimisation.
Definition of neighbourhood
As with all research on neighbourhood effects, our definition of neighbourhood is imperfect and cannot be assumed to be the same area that people were thinking of when asked about perceived disorder. As a concept, neighbourhood is generally taken to refer to the shared space around clusters of residences that have similar attributes in terms of the individuals living there and the physical and social environment [40]. The definition of a specific neighbourhood is then likely to be dynamic and vary according to which attributes are of interest. Indeed, where individuals are asked about their neighbourhood, the area being described may well vary for every person asked. For the purposes of research, however, one set of boundaries must be imposed, raising the modifiable areal unit problem: that the results of analyses of local areas will vary according to the scale and the boundaries chosen [41]. This difficulty occurs where the boundaries imposed are arbitrary and lessened where there is a theoretical underpinning for the choice of neighbourhood used [42], although this will always involve a trade-off with pragmatic concerns.
To use secondary data as a measure of wider neighbourhood environment, we were constrained to using administrative boundaries in this study. The concentration of our sample in a relatively small geographical area allowed us to use LSOAs as our definition of neighbourhood. This definition has the benefit of being both smaller and more homogeneous than the electoral wards used in many previous studies [28] which may make inequalities between areas and area-level effects on perception of the social environment easier to detect [42]. We were also able to test a previous suggestion that the use of larger neighbourhood units could have masked underlying neighbourhood effects in some earlier, negative studies [13]. Our analysis was limited by the small numbers of individuals in some of the LSOAs. The sensitivity analyses excluding these LSOAs suggest that this did not affect the direction of associations seen, but that some of the neighbourhood-level effects may have been underestimated in our analyses because they could not be detected where there were only one or very few individuals in an LSOA.
The clustered sampling in this study allowed us to model household variance. Previous research has shown that it is important to consider the household level separately from the individual level when considering neighbourhood effects on mental health [15] to reduce the risk of attributing too much of the variance above the individual level to the neighbourhood. Our ability to model household variance may have been limited by the fact that more than half our households only had one respondent. We found a residual variance at the household level that was significantly higher than in most previous studies, although when one-person households were excluded we found a very similar household variance to another recent UK study that used LSOA as its definition of neighbourhood [43]. This high household variance highlights that effects operating at the household level were particularly important in our study population, suggesting that responses to higher, area-level influences are similar for members of the same household but vary considerably between households.
The difficulties inherent in defining neighbourhood are exacerbated by the use of multilevel modelling techniques that treat individual neighbourhoods as independent and cannot easily account for the likelihood that geographically close neighbourhoods are more similar than those further apart. The finding that CMI was not spatially patterned within the sample is counterintuitive in many ways, and it is possible that geographical patterns exist in the data that could not be detected by the statistical methods used but may be found on a spatial statistical analysis [18].
Conclusion
This study highlights that physical and social disorder within neighbourhoods has an important, but complicated relationship with CMI. Officially recorded crime rates appear to have a surprisingly modest association with individuals' perception of neighbourhood disorder and little impact on mental health. At the same time individuals' perception of their local neighbourhood and their own experience of violence have strong independent associations with CMI. These more subjective variables may capture aspects of the experience of living in disordered neighbourhoods that crime rates are unable to. Feeling unsafe and under threat in one's local area disproportionately affects those already experiencing other forms of deprivation in their area and personally. Interventions aimed at reducing the impact of disordered neighbourhoods on mental health may help reduce inequalities in CMI by targeting both factors associated with increasing people's perception of disorder and the impact of victimisation on individuals.
Statistical network analysis for functional MRI: summary networks and group comparisons
Comparing networks in neuroscience is hard, because the topological properties of a given network are necessarily dependent on the number of edges in that network. This problem arises in the analysis of both weighted and unweighted networks. The term density is often used in this context, in order to refer to the mean edge weight of a weighted network, or to the number of edges in an unweighted one. Comparing families of networks is therefore statistically difficult because differences in topology are necessarily associated with differences in density. In this review paper, we consider this problem from two different perspectives: (i) the construction of summary networks, that is, how to compute and visualize a summary network from a sample of network-valued data points; and (ii) how to test for topological differences when two families of networks also exhibit significant differences in density. In the first instance, we show that the task of summarizing a family of networks can be carried out by adopting a mass-univariate approach, which produces a statistical parametric network (SPN). In the second part of this review, we then highlight the inherent problems associated with the comparison of topological functions of families of networks that differ in density. In particular, we show that a wide range of topological summaries, such as global efficiency and network modularity, are highly sensitive to differences in density. Moreover, these problems are not restricted to unweighted metrics, as we demonstrate that the same issues remain present when considering the weighted versions of these metrics. We conclude by encouraging caution when reporting such statistical comparisons, and by emphasizing the importance of constructing summary networks.
INTRODUCTION
Are neurological networks topologically stable across different populations of subjects or across different cognitive and behavioral tasks? This general research program has been carried out by a myriad of researchers in the last decade. Neuroscientists are often interested in evaluating whether the small-world properties of a given brain network are conserved when comparing patients with controls. Bassett et al. (2008), for instance, have studied the differences in anatomical brain networks exhibited by healthy individuals and patients with schizophrenia. Similarly, some authors have tested how the topological properties of certain functional networks are affected by different behavioral tasks (Cecchi et al., 2007; De Vico Fallani et al., 2008; van den Heuvel et al., 2009). Brain network topology has been studied at different spatial scales (Bassett et al., 2006) and at different time scales (Pachou et al., 2008; Salvador et al., 2008). It is therefore undeniable that there is considerable academic interest in comparing families of networks, whether these represent several groups of subjects or the different conditions of an experiment. This general research paradigm is particularly amenable to the analysis of subject-specific networks. When such individual networks are available, one can readily compute subject-specific topological measures, which can then be compared across experimental conditions. This type of analysis has been conducted using both functional and structural MRI data (Hagmann et al., 2008; Gong et al., 2009). In this paper, we will mostly focus on networks arising from functional MRI (fMRI) data.
The prospect of performing rigorous statistical analysis of several populations of networks, however, has been hindered by various methodological issues. These statistical questions have not hitherto been satisfactorily resolved in the neuroscience community, and the field of network data analysis remains an area of active methodological development (Simpson et al., 2013a,b). When one is considering the question of comparing several populations of brain networks, two main problems arise. First and foremost, the problem of the inherent dependence between connectivity strength (i.e., wiring density) and network topology (i.e., patterns of edges) necessarily arises. Most, if not all, of the topological metrics that have become popular in the neuroscience literature are highly sensitive to differences in the number of edges of the graphs under comparison. Therefore, when trying to evaluate the topological properties of different groups of networks on the sole basis of their topology, one must also apply some level of control over the differences in density between the groups of networks under scrutiny.
Secondly, the issue of separating differences in density from differences in topology is compounded by the problem of thresholding association matrices. In many cases, neuroscientists are considering correlation matrices with values ranging between −1 and 1. Because network science is founded on graph theory, which is a branch of discrete mathematics, it follows that the application of graph-theoretical methods requires the use of a particular threshold in order to produce adjacency matrices. Naturally, this choice of threshold is often arbitrary, although various statistical strategies have been deployed to alleviate the consequences of such decisions. Several authors have thresholded correlation matrices by applying an inferential cut-off point. This approach is similar in spirit to the standard mass univariate strategy regularly adopted within the context of classical statistical parametric mapping (Friston, 1994).
However, this thresholding of matrices is generally criticized for throwing away valuable information. Indeed, since network analysis proceeds by comparing the global topological properties of the graphs obtained after binarizing correlation matrices, it is natural to conclude that a substantial amount of real-valued information has been discarded and replaced by a sequence of binary digits. As a result, several authors have proposed to use the weighted versions of the classical graph-theoretical measures of topology (Rubinov and Sporns, 2010). It is commonly believed that the use of such weighted topological statistics both alleviates the problem of selecting an arbitrary threshold and ensures that one is separating differences in topology from differences in network density. Although the first requirement is indeed satisfied, the second is only illusory. We will show in this paper that the use of weighted topological measures is just as liable to be determined by differences in density as their standard unweighted versions.
In the present paper, we will concentrate our attention on weighted networks, since these are more likely to be found in the biomedical sciences than their unweighted counterparts. This article is structured in two parts. We firstly review how to construct summary networks representing subject-specific or group-specific functional connectivity over time. Here, a mass-univariate approach is adopted using different corrections for multiple comparisons. A similar approach can also be used for representing group differences in functional network topologies. In the second part, we concentrate on inference for network properties. This is rendered particularly arduous by the fact that such networks tend to display different numbers of edges. Since network density is highly predictive of a host of network topological measures, such statistical inference requires special attention when comparing groups of subjects that exhibit substantial differences in network density.
CONSTRUCTION OF SUMMARY NETWORKS
We firstly describe how one can construct summary networks from a family of subject-specific weighted or unweighted networks. This task can be tackled by combining the data available, using a mass-univariate approach, as is commonly done in fMRI.
Note that the terms graph and network will be used interchangeably in this paper.
STATISTICAL PARAMETRIC NETWORK (SPN)
Here, we review an efficient method for summarizing inference on networks, using a mass-univariate approach. By tacit consensus, this method has essentially become the norm in the field (He et al., 2007, 2009b; Ginestet et al., 2012). This strategy should be compared to the one adopted in the classical statistical parametric mapping (SPM) framework, which has been utilized in neuroimaging for the past two decades (Friston, 1994). Consequently, this approach will be referred to as statistical parametric networks (SPNs). The problem of constructing a summary graph centers on how to combine the elements of a population of subject-specific correlation matrices. In the SPN framework, summary networks are constructed irrespective of whether structural or functional data are being used. While in fMRI studies it has been common for researchers to compute correlations over time between regions of interest (Achard and Bullmore, 2007), studies based on structural MRI data, by contrast, have considered between-region correlations with respect to the available population of subjects (Bassett et al., 2008). In this section, we will concentrate on the specific problem posed by the study of functional MRI cortical networks, where each subject-specific correlation matrix represents inter-regional normalized covariances, computed with respect to a sequence of time points.
Succinctly, one may say that an SPN is to a correlation matrix, what an SPM is to an intensity map. As for the latter, an SPN can be produced in order to obtain a summary network. Different summary networks can be constructed for the different conditions of an experiment, or for the different groups of subjects under scrutiny. Achard et al. (2006) and He et al. (2009b), for instance, have visualized their data using summary networks, whereby an edge is solely included when a corresponding test statistic for that edge is significant. We will refer to such summary networks as mean SPNs. Similarly, one can construct differential or difference SPNs, which represent the edges that have been significantly "lost" and the edges that have been significantly "gained," when comparing the graphs across experimental conditions, or when considering several groups of subjects. Under its many guises, this approach has been adopted by various authors including Zalesky et al. (2010) and Richiardi et al. (2011), who have used network-based statistics and machine learning methods, respectively, for the comparison of a group of subjects with a group of controls.
The SPN approach that we wish to present here is slightly more general, since it accommodates sophisticated experimental designs, in which information may be pooled over a number of experimental conditions. As for SPM, such analyses enable a concise visualization of the data, which can be interpreted in terms of network properties, topology and community structure. This approach is particularly helpful for an efficient reporting of the experimental results. As mentioned in the introduction, the use of SPNs has the additional advantage of somewhat alleviating the methodological concerns associated with the choice of an arbitrary threshold value; since we are here selecting such cut-off points on the basis of a specific p-value. Network thresholding is therefore here supplanted by inference.
The thresholding of association matrices, such as correlation matrices, is equivalent to the application of an elementwise indicator function. This type of function, however, is non-linear, in the sense that the mean of the thresholded correlation matrices is not equal to the thresholded mean correlation matrix. That is, this may be formally expressed as follows,
$$ \frac{1}{n}\sum_{i=1}^{n} T_{\tau}(R_i) \;\neq\; T_{\tau}\!\left(\frac{1}{n}\sum_{i=1}^{n} R_i\right), \qquad (1) $$
where i = 1, ..., n labels the subjects taking part in the experiment, and where the R_i's denote subject-specific correlation matrices.
Here, the function T_τ is a thresholding function that takes a matrix and returns its binarized version, with respect to a cut-off point τ. The issue of thresholding correlation matrices is illustrated in Figure 1, where we have reported some of the preliminary data analysis conducted in Ginestet et al. (2012). Currently, there is little guidance on how one should proceed when summarizing the network analysis of a given study. There is hence a pressing need to reach a methodological consensus on how to standardize the construction and reporting of summary networks in neuroscience. A natural desideratum for such summary networks is that they should reflect the topological variability of the entire population of networks. Pioneering work in that direction has been laid out by several authors, including Achard et al. (2006) and He et al. (2009b), for the consideration of a single family of graphs. In the sequel, we review these ideas and extend them to the case of several populations of networks, as was conducted in Ginestet et al. (2012). The question of drawing inference on families of networks that vary over several experimental conditions can be subdivided into two related issues. On the one hand, one needs to test whether or not the properties of the nodes have been significantly affected by the experimental manipulation. On the other hand, one also needs to evaluate whether or not the presence and absence of edges have significantly varied across the experimental conditions. One can draw statistical inference for these two distinct, yet related, research questions. Contrary to the classical SPM framework, these two distinct questions need to be answered using two different types of networks: one for comparing vertices, and another for comparing edges.
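As a small numerical illustration of the non-linearity expressed in Equation (1), the following Python sketch (synthetic correlation matrices only; the threshold τ = 0.3 is arbitrary) compares the mean of the subject-wise thresholded matrices with the thresholded mean correlation matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_nodes, tau = 20, 10, 0.3

def threshold(R, tau):
    """Elementwise indicator function T_tau: binarize a correlation matrix at tau."""
    A = (R > tau).astype(int)
    np.fill_diagonal(A, 0)
    return A

# Synthetic subject-specific correlation matrices (illustration only).
subject_R = []
for _ in range(n_subjects):
    X = rng.standard_normal((50, n_nodes))              # 50 "time points" per subject
    subject_R.append(np.corrcoef(X, rowvar=False))

mean_of_thresholded = np.mean([threshold(R, tau) for R in subject_R], axis=0)
thresholded_mean = threshold(np.mean(subject_R, axis=0), tau)

# The two summaries generally disagree, which is the inequality of Equation (1).
print(np.abs(mean_of_thresholded - thresholded_mean).sum())
```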
A substantial advantage of the SPN methodology is that it addresses the problem arising from the non-linearity of the thresholding function presented in Equation (1). Indeed, since we are drawing inference using the correlation coefficients per se, we consequently bypass the problem of averaging over a set of thresholded correlation matrices, while nonetheless producing a statistical summary taking the form of a graph.
We here employ standard graph theoretical notation in order to formulate our approach to this specific problem. The interested reader is invited to consult Bollobás (1998) for a more solid introduction to graph objects and their properties. As aforementioned, we will here use the terms networks and graphs interchangeably. In the context of discrete mathematics, a graph G is formally defined as an ordered pair of sets (V, E), in which V(G) represents the set of vertices (sometimes referred to as nodes) in the graph of interest, whereas E(G) denotes the set of edges in that network (also called connections). The total number of edges and total number of nodes in G will be concisely denoted by N_E and N_V, respectively. A one-way experimental design may typically be composed of J experimental conditions, with n subjects per experiment. Thus, the full data set of interest can be described as an (n × J)-matrix of correlation matrices. In the sequel, the indexes i = 1, ..., n will label the experimental subjects, whereas the indexes j = 1, ..., J will refer to the experimental conditions. Formally, one could represent the full data set as the following matrix,
$$ \mathbf{R} = \begin{pmatrix} R_{11} & \cdots & R_{1J} \\ \vdots & \ddots & \vdots \\ R_{n1} & \cdots & R_{nJ} \end{pmatrix}. \qquad (2) $$
Here, each element R_{ij} in this equation denotes a correlation matrix of dimension N_V × N_V. There is a one-to-one correspondence between each of these correlation matrices and a weighted graph on N_V vertices or nodes. The individual vertices will be labeled by v = 1, ..., N_V. Moreover, for convenience, each of the matrix entries in R will be denoted by r^e_{ij}, where the superscript e labels an edge from the saturated or complete graph, which possesses the maximal number of possible edges. That is, the saturated graph has an edge set of size N_V(N_V − 1)/2. In the rest of this paper, edges will be systematically referred to by using superscripts.
A mean or summary SPN allows one to statistically infer the "average" set of inter-regional connections in a group of subjects. Such SPNs are generally obtained by adopting a mass-univariate approach, whereby a sequence of statistical tests is performed for each edge in the edge set. Such an operation may be repeated for each experimental condition. Using the notation introduced earlier, one may conduct a test for each of the columns in the array, denoted R, in Equation (2). In effect, we are here considering the following column vectors of correlation matrices,
$$ \mathbf{R}_{\cdot j} = (R_{1j}, \dots, R_{nj})^{T}, \qquad j = 1, \dots, J. \qquad (3) $$
Each of these column vectors is analyzed independently in order to produce a single network for each of the different experimental conditions. For the case of correlation matrices, the original matrix entries are routinely Fisher z-transformed, in order to be able to use central limit theorems for approximating the density functions of these test statistics. In doing so, one can then draw inference, using an analysis of variance, for instance, or another adequate statistical model, suitable for the data at hand. An example of such mean SPNs under different experimental conditions is reported in Figure 2. Perhaps the central research question in network data analysis in neuroscience is whether certain edges have been "gained" or "lost," as a consequence of a particular experimental condition. This general research question can be specifically answered by computing two distinct differential networks, representing what we may call the downweighted and upweighted SPNs. These two types of networks will be denoted by SPN− and SPN+, respectively.
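Before turning to differential SPNs, the sketch below illustrates the mean-SPN construction just described under simplifying assumptions: synthetic correlation values, a one-sample t-test on the Fisher z-transformed correlations for each edge, and an uncorrected threshold (multiple-comparison corrections are discussed further below).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subjects, n_nodes, alpha = 15, 8, 0.05

# Toy subject-specific "correlation" values for a single condition (synthetic data).
R = rng.uniform(-0.2, 0.8, size=(n_subjects, n_nodes, n_nodes))
R = (R + R.transpose(0, 2, 1)) / 2                       # symmetrize each matrix

Z = np.arctanh(R)                                        # Fisher z-transform
upper = np.triu_indices(n_nodes, k=1)                    # edges of the saturated graph

mean_spn = np.zeros((n_nodes, n_nodes), dtype=int)
for i, j in zip(*upper):
    # One edge-wise test: is the mean z-transformed correlation above zero?
    t, p = stats.ttest_1samp(Z[:, i, j], popmean=0.0)
    if p < alpha and t > 0:
        mean_spn[i, j] = mean_spn[j, i] = 1              # retain the edge

print("edges retained in the mean SPN:", mean_spn.sum() // 2)
```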
As for mean SPNs, the construction of these differential networks can similarly be conducted within a mass-univariate approach. For differential SPNs, however, statistical inference needs to be drawn from the full data set. That is, one needs to consider all the correlation coefficients described in Equation (2), that is, the elements contained in the matrix R. Computing a differential SPN will generally involve N_E linear models. Depending on the general experimental framework adopted by the researchers, these linear models could be extended to mixed effects models. In its most general formulation, we may consider a repeated block design, which can be succinctly expressed by using the classical formalism due to Laird and Ware (1982). For each edge e, the vector of responses of the ith subject (e.g., the Fisher z-transformed correlations r^e_{ij} across the J conditions) is modelled as
$$ y^{e}_{i} = X_i\beta^{e} + Z_i b^{e}_{i} + \varepsilon^{e}_{i}, \qquad b^{e}_{i} \sim N(0, D^{e}), \quad \varepsilon^{e}_{i} \sim N(0, \sigma^{2}_{e}\Lambda_i), \qquad (4) $$
where X_i and Z_i denote the fixed- and random-effect design matrices, and where the covariance matrices Λ_i and D^e may be assumed to be diagonal and positive semi-definite, respectively (see Demidenko, 2004, for details).
In general, one may include an edge in a differential SPN when the corresponding F-test for the experimental factor has been found to be significant. Depending on the linear model used, different statistical tests may be performed (Pinheiro and Bates, 2000). Therefore, the use of a mass-univariate approach for extracting between-condition differences in the presence or absence of edges yields two different types of differential SPNs. That is, depending on the sign of the significant fixed effect coefficients, one may include that edge in either a downweighted network, which may be denoted SPN−, or in an upweighted network, denoted SPN+.
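A per-edge mixed-effects analysis of this kind might be sketched as follows, here with statsmodels rather than the software used in the original analyses, synthetic data for a single edge, and an illustrative significance threshold; the sign of the estimated condition effect assigns the edge to SPN+ or SPN−.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_subjects, n_conditions = 12, 3

# Synthetic long-format data for one edge e: a Fisher z-transformed
# correlation per subject and per experimental condition.
rows = []
for subj in range(n_subjects):
    subject_effect = rng.normal(0, 0.1)
    for cond in range(n_conditions):
        z = 0.4 - 0.05 * cond + subject_effect + rng.normal(0, 0.05)
        rows.append({"subject": subj, "condition": cond, "z": z})
df = pd.DataFrame(rows)

# Random-intercept model: fixed effect of condition, random effect of subject.
model = smf.mixedlm("z ~ condition", df, groups=df["subject"]).fit()
beta = model.params["condition"]
pval = model.pvalues["condition"]

# Illustrative assignment rule: a significant negative slope sends the edge
# to the downweighted network SPN-, a significant positive slope to SPN+.
if pval < 0.05:
    print("edge assigned to", "SPN-" if beta < 0 else "SPN+")
```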
A similar approach can be adopted to estimate the upweighting and downweighting of the signal of interest at single nodes. Again, such a node-specific differential SPN can be obtained by performing a set of N V linear models. In this case, the data under consideration is the set of matrices Y v = {y v ij }, where each v ∈ V is a region of interest. Every y v ij corresponds to a time-averaged intensity signal, for the vth region, for subject i, under the jth experimental condition. Thus, one could reformulate the system of equations for evaluating edges in (4) by using superscripts to denote vertices.
As for edge-specific differential SPNs, a vertex would be estimated to be either significantly upweighted or downweighted, depending on the sign of the largest coefficient in the corresponding vector β v . An illustration of such a differential SPN, based on the N-back data set, analyzed by Ginestet et al. (2012) is reported in Figure 3. Naturally, this assignment based on the sign of the fixed effects is only possible, when the task under scrutiny is based on an experimental gradient. An alternative strategy may be required, when different levels of the task are expected to affect the response in different directions.
FIGURE 3 | Visualization of a differential SPN, summarizing the effect of a cognitive experimental factor. Sagittal section of a negative differential SPN, which represents the significantly "lost" edges due to the N-back experimental factor. The presence of an edge is determined by the thresholding of p-values at 0.01, uncorrected (see Ginestet et al., 2012, for a description of the data at hand).

A singular limitation, however, affects all mass-univariate approaches. Such a repetitive use of a classical inferential threshold may lead to a corresponding increase in Type I error. This issue can be addressed by correcting for multiple comparisons. The significance of edges and nodes in both mean and differential SPNs can, for instance, be inferred using the false discovery rate (FDR) with a base rate of α_0 = 0.05 (Benjamini and Hochberg, 1995; Nichols and Hayasaka, 2003). Naturally, other corrections for multiple comparisons could also be utilized (see Meskaldji et al., 2011, for a different approach). The conventional thresholding method used in network analysis is therefore superseded by the application of standard multiple testing corrections. The main advantage of this approach lies in its pooling of information over several subjects, in order to produce robust edge- and node-specific statistics.
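A minimal sketch of the FDR step, assuming a vector of edge-wise p-values has already been obtained from the mass-univariate tests (the Benjamini-Hochberg implementation in statsmodels is used here purely for illustration):

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(3)

# Hypothetical edge-wise p-values from the mass-univariate tests above.
edge_pvalues = rng.uniform(0, 1, size=4950)        # e.g., 100 nodes -> 4950 edges

# Benjamini-Hochberg FDR correction with a base rate of 0.05.
reject, p_adjusted, _, _ = multipletests(edge_pvalues, alpha=0.05, method="fdr_bh")

print("edges surviving FDR correction:", int(reject.sum()))
```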
COMPARISON OF FUNCTIONS ON NETWORKS
We now turn to the issue of comparing various types of topological measures over several families of networks (van Wijk et al., 2010). Inference on quantities such as characteristic path length, clustering coefficient and modularity structure has attracted a sustained amount of interest in the neuroscience community. Comparison of this type of topological measure, however, is generally regarded as hard, since such topological differences depend highly, and in a non-linear fashion, on group differences in edge density.
GLOBAL EFFICIENCY
One of the classical exemplars of a topological summary of a network is its characteristic path length. Such a quantity, however, is solely defined for connected graphs. The global efficiency of a graph, by contrast, can be computed for any network, connected or disconnected, and is inversely related to its characteristic path length. Efficiency is formally defined by the following formula due to Latora and Marchiori (2001),
$$ E(G) = \frac{1}{N_V(N_V - 1)} \sum_{i \in V} \sum_{j \neq i \in V} \frac{1}{d_{ij}}, \qquad (5) $$
with N_V = |V|, as before. Here, d_{ij} denotes the length of the shortest path between vertices i and j. Moreover, the second summation is performed with respect to the set {j ≠ i ∈ V}, which is the set of all indices in V that are different from i. This efficiency measure can be shown to be equivalent to the inverse of the harmonic mean of the lengths of the shortest paths between each pair of nodes in the network G. Specifically, the quantity in Equation (5) is usually referred to as the global efficiency of a particular graph, and is denoted by E_Glo(G) = E(G). Intuitively, this quantity can be understood as the amount of potential information transfer that can be performed in parallel. A local measure of efficiency can also be computed, which is equivalent to the clustering coefficient. For a review of other efficiency measures that have been studied in the context of neuroscience, the reader is referred to . The most commonly adopted approach to network comparison is therefore to compute a topological metric, such as global efficiency, for each individual subject, and thereafter to evaluate whether this measure differs over the different experimental groups under scrutiny.
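Equation (5) can be computed directly from shortest path lengths; the sketch below uses networkx as a convenience (any graph library with shortest-path routines would serve) and treats unreachable pairs as contributing zero, so that disconnected graphs are handled as described above.

```python
import networkx as nx

def global_efficiency(G):
    """Global efficiency of Equation (5): mean inverse shortest path length."""
    n = G.number_of_nodes()
    if n < 2:
        return 0.0
    total = 0.0
    for i, lengths in nx.all_pairs_shortest_path_length(G):
        for j, d in lengths.items():
            if j != i:
                total += 1.0 / d            # unreachable pairs contribute 0
    return total / (n * (n - 1))

G = nx.erdos_renyi_graph(20, 0.2, seed=0)
print(round(global_efficiency(G), 3))       # agrees with nx.global_efficiency(G)
```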
DENSITY-INTEGRATED MEASURES
An alternative approach to the problem of quantifying the topology of weighted networks proceeds by integrating the metric of interest with respect to different density levels. Different approaches have been adopted in practice. While some authors have integrated over a subset of the density range (see Achard and Bullmore, 2007, for example), others have integrated over the entire range of densities (He et al., 2009a). The family of topological measures obtained after integrating over different density levels will be referred to as density-integrated measures. Given a weighted graph G = (V, E, W), the density-integrated version of the efficiency in Equation (5) can, for instance, be defined as follows,
$$ E_K(G) = \sum_{k} E\big(\gamma(G, k)\big)\, p(k), \qquad (6) $$
where density is treated as a discrete random variable K, with realizations in lower case, and p(k) denotes the probability density function of K. Since K is discrete, it can only take a countably finite number of values. In general, it is common to assign equal weight to every possible choice of density. The function γ(G, k) in Equation (6) is a density-thresholding function, which takes a weighted undirected network and a level of wiring density as arguments, and returns an unweighted network. Since there is no prior knowledge about which values of K should be favored, one can specify a uniform distribution on the set of all possible densities. Note, however, that other distributions could be selected for this purpose (see , for a discussion of alternative specifications).
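A sketch of the density-integrated efficiency of Equation (6), assuming a uniform p(k) over an arbitrary grid of density levels; the helper density_threshold plays the role of γ(G, k) by retaining the strongest edges until the target density is reached.

```python
import numpy as np
import networkx as nx

def density_threshold(W, k):
    """gamma(G, k): keep the strongest edges until wiring density k is reached."""
    n = W.shape[0]
    iu = np.triu_indices(n, k=1)
    weights = W[iu]
    n_keep = int(round(k * len(weights)))            # number of edges to retain
    order = np.argsort(weights)[::-1][:n_keep]       # strongest edges first
    A = np.zeros_like(W, dtype=int)
    A[iu[0][order], iu[1][order]] = 1
    return nx.from_numpy_array(A + A.T)

def density_integrated_efficiency(W, densities):
    # Uniform p(k): simple average of the efficiencies across density levels.
    return np.mean([nx.global_efficiency(density_threshold(W, k)) for k in densities])

rng = np.random.default_rng(4)
W = rng.uniform(0, 1, size=(30, 30))
W = (W + W.T) / 2
print(density_integrated_efficiency(W, densities=np.linspace(0.05, 0.5, 10)))
```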
INTEGRATING OVER DENSITIES
The question of separating topology from density could be reformulated as the statistical problem of evaluating topological differences, while "controlling" for differences in density. When adopting this perspective, it is convenient to treat topology and density as random variables. We have already done so for density, in the previous section. Implicitly, by integrating over all possible thresholds, we are indeed considering density as a random variable with a well-defined probability distribution, which is, in the present case, a uniform distribution.
A natural desideratum, which may be required when comparing network topological characteristics while controlling for differences in density, would be to treat as topologically identical any weighted networks whose association matrices are proportional to each other. That is, if two different matrices are linearly related to each other, it seems reasonable to conclude that their topologies must be identical, after one has controlled for such a linear difference in density. Thus, consider the following simple example, adapted from .
Example 1. We here have two networks, G_1 and G_2, with proportional association matrices W_1 and W_2, satisfying W_1 = αW_2. That is, these two matrices are proportional to each other. An application of the density-integrated metrics described in Equation (6) to these networks would give the following equalities,
$$ E_K(G_1) = \sum_{k} E\big(\gamma(G_1, k)\big)\, p(k) = \sum_{k} E\big(\gamma(G_2, k)\big)\, p(k) = E_K(G_2). \qquad (7) $$
That is, when integrating with respect to density, we are in fact evaluating the efficiencies of G_1 and G_2 at a number of cut-off points. At each of these points, the efficiency of the two networks will be identical, because W_1 is proportional to W_2 and therefore the same sets of edges will be selected. Therefore, G_1 and G_2 have identical density-integrated efficiencies.
While illustrative, this example is not entirely satisfying. In fact, this result can be shown to hold in a more general sense. The invariance of density-integrated efficiency turns out to be true for any monotonic (increasing or decreasing) function h, as formally stated in the following result.
Proposition 1. Let G = (V, E, W) be a weighted undirected graph. For any monotonic function h(·) acting elementwise on a real-valued matrix W, and any topological metric E, the density-integrated version of that metric, denoted E_K, satisfies
$$ E_K\big(h(W)\big) = E_K(W), $$
where we have used the weight set, W, as a proxy for graph G.
A proof of this proposition can be found in . The demonstration essentially relies on the fact that any monotonic transformation of the entries of a real-valued matrix will preserve the ranks. Therefore, proposition 1 makes rigorous a potential way of "controlling" for differences in density. That is, this formal proposition states that we are indeed controlling for any monotonic transformation of the original entries in the matrix. In effect, proposition 1 should be regarded as a potential definition of what it means for two networks to solely differ in terms of topology, while controlling for monotonic differences in density.
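The rank-preservation argument behind Proposition 1 can also be checked numerically; the sketch below (synthetic weights and an arbitrarily chosen monotonic increasing transform) verifies that the density-integrated efficiency is unchanged by the transformation.

```python
import numpy as np
import networkx as nx

def eff_at_density(W, k):
    """Efficiency of the unweighted graph keeping the top fraction k of edges (gamma(G, k))."""
    n = W.shape[0]
    iu = np.triu_indices(n, k=1)
    keep = np.argsort(W[iu])[::-1][: int(round(k * len(W[iu])))]
    A = np.zeros_like(W, dtype=int)
    A[iu[0][keep], iu[1][keep]] = 1
    return nx.global_efficiency(nx.from_numpy_array(A + A.T))

rng = np.random.default_rng(5)
W = rng.uniform(0, 1, size=(25, 25))
W = (W + W.T) / 2

h = lambda x: np.exp(2.0 * x) - 1.0          # an arbitrary monotonic increasing transform

densities = np.linspace(0.05, 0.5, 10)
e_orig = np.mean([eff_at_density(W, k) for k in densities])
e_trans = np.mean([eff_at_density(h(W), k) for k in densities])

# Ranks are preserved by h, so the same edges are selected at every density
# level, and the two density-integrated efficiencies coincide.
print(np.isclose(e_orig, e_trans))
```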
DENSITY AND MODULARITY
Another network property that has been studied extensively in the literature is modularity structure. As for efficiency and other topological measures, however, modularity is also highly dependent on edge density. Therefore, any attempt at comparing the modularity of different groups of networks will be confounded by group differences in the networks' number of edges. We illustrate this problem with the results reported in a recent paper by Bassett et al. (2011), who have analyzed the static and dynamic organization of functional brain networks in humans. We here focus on the first claim made in this paper, which states that the static modular structure of such networks is nested with respect to time. In particular, Bassett et al. (2011) argue that this graded structure underlies a "multiscale modular structure." As for global efficiency in the previous section, it can be shown that modularity structure is substantially mediated by edge density. In the case of weighted networks, this is equivalent to a difference in the size of the correlation coefficients. In Bassett et al. (2011), for instance, the authors report that the size of the mean correlation diminishes with the size of the time window. Such a decrease in overall correlation will generally have two effects: (i) networks' topologies will become increasingly more "random" and (ii) the number of significant edges will decrease. Here, we use synthetic data sets to show that these two phenomena are likely to be associated with a higher number of modules, thereby potentially explaining the apparent multiscale modular structure described by Bassett et al. (2011). Our simulations are based on the unweighted unsigned version of the modularity algorithm of Clauset et al. (2004), but may be extrapolated to weighted signed adjacency matrices.
In Figure 4A, we have generated 1000 unweighted lattices based on 112 vertices, as in Bassett et al. (2011). By randomly rewiring the edges of these lattices, we show that the number of modules in these networks tends to increase with the level of topological randomness in these graphs. For Figures 4B-D, we have generated two sets of unweighted networks, characterized by a random and a regular topology, respectively, with different numbers of edges. These simulations were repeated 1000 times for each type of graph and for each number of edges. For both types of networks, the number of modules in these graphs tended to decrease as new edges were added. Collectively, although these data simulations do not entirely rule out the possibility of a temporally nested modular structure in the human brain, they nonetheless cast doubt on the possibility of detecting such a temporal organization by reducing the size of the sampling window. Such subtle artifactual relationships between modularity and edge density can arise in a range of different settings in the analysis of neuroimaging data.

FIGURE 4 | (B) Relationship between the number of edges in a network and its number of modules, for both regular (i.e., lattice) and random graphs; the number of modules tends to decrease as more edges are added to both types of networks. (C,D) Modular structures of regular (C) and random (D) networks for different numbers of edges, N_E. These networks are represented using the algorithm of Kamada and Kawai (1989), with different colors representing different modules. In all simulations, the number of vertices is N_V = 112, as in Bassett et al. (2011).
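A scaled-down sketch of these simulations is given below, using networkx's greedy modularity maximization (in the spirit of Clauset et al., 2004) and a single realization per setting rather than the 1000 repetitions used above, so that the trends described in the text can be explored quickly.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

n_nodes = 112                                   # as in Bassett et al. (2011)

# Effect (i): rewiring a ring lattice increases topological randomness;
# the number of detected modules can then be inspected at each rewiring level.
for p in (0.0, 0.1, 0.5, 1.0):
    G = nx.watts_strogatz_graph(n_nodes, k=6, p=p, seed=0)
    print(f"rewiring p={p}: {len(greedy_modularity_communities(G))} modules")

# Effect (ii): adding edges to a random graph changes the number of modules,
# which tends to decrease as the graph becomes denser.
for n_edges in (150, 300, 600, 1200):
    G = nx.gnm_random_graph(n_nodes, n_edges, seed=0)
    print(f"{n_edges} edges: {len(greedy_modularity_communities(G))} modules")
```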
WEIGHTED TOPOLOGICAL METRICS
Since the previous two sections have highlighted the potential problems associated with thresholding correlation matrices, one may surmise that such problems could be adequately dealt with by directly considering the weighted versions of the topological metrics of interest. In particular, an apparently natural way of combining differences in density with differences in topology is to consider the weighted versions of traditional topological metrics. For the aforementioned global efficiency, for instance, one can define a weighted global efficiency, denoted E_W, as follows,

E_W(G) = \frac{1}{N_V(N_V - 1)} \sum_{i \neq j} \frac{1}{d^W_{ij}},
where d^W_{ij} represents the weighted shortest path between the ith and jth nodes. Unfortunately, another theoretical result points to a serious limitation of E_W, which may potentially dissuade researchers from using this particular type of metric. With the next proposition, we demonstrate that under mild conditions, the weighted efficiency is simply equivalent to the weighted density, sometimes referred to as the weighted cost, of the graph of interest.

Proposition 2. For any weighted graph G = (V, E, W), whose weighted edge set is denoted by E(G), the weighted efficiency E_W(G) is equal to the weighted density of the graph, provided a mild condition on the spread of the distribution of the edge weights holds.

A proof of this result can be found in . Not surprisingly, proposition 2 places emphasis on the spread of the distribution of the weighted edge set E(G). The condition in proposition 2 may at first appear quite constraining. However, this condition encompasses a wide range of experimental situations, including the data set described in . Thus, the added benefit of utilizing the weighted version of the global efficiency measure may, in most settings, be highly questionable, since there exists a one-to-one relationship between this topological measure and a simple average of the edge weights. Cutoff-integrated efficiency and other cutoff-integrated measures, as described in , may therefore be preferred, in practice, when one wishes to summarize the influence of both density and topological differences.
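To make the content of proposition 2 concrete, the following is a minimal numerical sketch, assuming Python with networkx and the common (but not unique) convention that edge lengths are the reciprocals of the edge weights; the graph size and weight distributions are illustrative assumptions. For a dense graph with a narrow weight distribution, the direct edge is the weighted shortest path between any two nodes, so the weighted efficiency collapses onto the mean edge weight, i.e., the weighted density.

```python
# Minimal sketch of proposition 2's message: for dense graphs with a narrow
# edge-weight distribution, weighted efficiency tracks the weighted density
# (mean edge weight). Distances are taken as 1/weight, a common convention.
import itertools
import random
import networkx as nx

def weighted_efficiency(G):
    N = G.number_of_nodes()
    total = 0.0
    for u, v in itertools.permutations(G.nodes, 2):
        d = nx.shortest_path_length(G, u, v, weight="dist")
        total += 1.0 / d
    return total / (N * (N - 1))

def weighted_density(G):
    return sum(w for _, _, w in G.edges(data="weight")) / G.number_of_edges()

random.seed(0)
for spread in (0.01, 0.05, 0.2):
    G = nx.complete_graph(30)
    for u, v in G.edges:
        w = random.uniform(0.5 - spread, 0.5 + spread)  # narrow weight spread
        G[u][v]["weight"] = w
        G[u][v]["dist"] = 1.0 / w
    print(f"spread={spread}: E_W = {weighted_efficiency(G):.3f}, "
          f"K_W = {weighted_density(G):.3f}")
```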
CONCLUSION
In this paper, we have briefly reviewed some of the methodological research that has been conducted on network data analysis, as applied to functional neuroimaging. Two main threads ran through this discussion. Firstly, we considered the different approaches that one may adopt when summarizing several subject-specific networks. Secondly, the thorny issue of graph thresholding was also tackled, with special emphasis on the comparison of network modularity and the use of weighted topological metrics. From the above discussion, it should be clear that there does not exist a single way of computing a mean network. This is, in some sense, an ill-defined problem. A commonly adopted perspective on this issue is to perform a mass-univariate test, in which the significance of every edge is evaluated and the resulting significance levels are then thresholded. We have seen that this approach can be carried out both within a single family of networks and over an entire experimental design, using a mixed effects model. By analogy with the classical SPM approach used in neuroimaging, one may refer to such uses of a mass-univariate approach on networks as SPNs.
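A minimal sketch of the simplest version of this mass-univariate strategy is given below, assuming Python with NumPy, SciPy, and statsmodels; the one-sample design, the FDR correction, and the simulated Fisher z-transformed matrices are illustrative assumptions rather than a description of any specific pipeline discussed above.

```python
# Hedged sketch of an edge-wise ("SPN"-style) mass-univariate test: each edge is
# tested across subjects and the p-values are thresholded after FDR correction.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_subjects, n_nodes = 20, 112
# One Fisher z-transformed correlation matrix per subject (simulated here).
z_mats = rng.normal(loc=0.1, scale=0.2, size=(n_subjects, n_nodes, n_nodes))

iu = np.triu_indices(n_nodes, k=1)       # indices of the unique edges
edge_samples = z_mats[:, iu[0], iu[1]]   # shape: (n_subjects, n_edges)

t_vals, p_vals = stats.ttest_1samp(edge_samples, popmean=0.0, axis=0)
reject, p_adj, _, _ = multipletests(p_vals, alpha=0.05, method="fdr_bh")

# Thresholded group-level adjacency matrix.
adjacency = np.zeros((n_nodes, n_nodes), dtype=int)
adjacency[iu] = reject.astype(int)
adjacency = adjacency + adjacency.T
print("significant edges:", int(reject.sum()), "out of", reject.size)
```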
Secondly, we have discussed one of the long-standing issues in the application of network data analysis to neuroscience data: the question of whether or not one should threshold matrices of correlation coefficients for the purpose of producing adjacency matrices. In this paper, we have reviewed a range of different approaches to this problem. On the basis of the several examples and counterexamples that we have studied, we are able to make a few methodological recommendations to researchers in the neuroscience community who intend to compare the topological properties of two or more populations of weighted networks. Note that these recommendations are solely tentative, as no general consensus has yet been reached on this particular issue.
As a first step, we argue that it is good practice to standardize the association weights. This may facilitate comparison across distinct network analyses, and ease the interpretation of the results. Secondly, the weighted density, or connectivity strength, of the networks of interest should then be reported. This is central to the rest of the analysis, and therefore, this quantity should be computed and reported systematically. Indeed, if the groups of networks under scrutiny substantially differ in terms of average density, then these differences are highly likely to affect any comparison of the topological properties of these groups of networks. Finally, population differences in density-integrated topological metrics may then be evaluated and reported. This will indicate whether the topologies of the populations under scrutiny vary significantly after having controlled for monotonic differences in connectivity strength.
The theoretical results described in this paper have only been presented for the global efficiency metric. Thus, these propositions and the examples studied need not necessarily apply to other topological measures. However, we also note that proposition 1 has been proved with a high degree of generality. This proposition and its proof are indeed independent of the particular formula of the metric of interest, and therefore could easily be extended to any other function of the weighted graph matrix. In particular, because most weighted metrics are constructed on the basis of the matrix of weighted shortest paths, one surmises that this theoretical result may, in fact, hold in a more general setting. Importantly, we have also shown that network modularity is not immune to this dependency on edge density. If several populations of networks differ in their number of edges, then it is likely that the resulting group-specific modularity structures will not be comparable. That is, such comparisons will mainly reflect differences in edge density, and as such may not carry much explanatory power. This is an area of application of statistical network analysis where one should exert caution, as the powerful algorithms used for detecting network modules may hide the potential confounding effects of differences in edge density.
Finally, the use of weighted topological metrics was also considered. Unfortunately, we have seen that simply replacing classical network measures by their weighted analogs is not sufficient to resolve the dependency of these measures on edge density. Thus, cutoff-integrated topological measures, such as the cutoff-integrated efficiency described in , may be preferred in practice when one wishes to separate differences in edge density from differences in topology.
AUTHOR CONTRIBUTIONS
Cedric E. Ginestet has written the review. Andrew Simmons has provided data and has reviewed the paper, whereas Arnaud P. Fournel has produced some of the figures. Arnaud P. Fournel has also assisted with the revisions of the paper.
The antifibrotic and anti-inflammatory effects of FZHY prescription on the kidney in rats after unilateral ureteral obstruction
ABSTRACT Purpose: To explore the potential impact of the traditional Chinese herb FuZhengHuaYuJiangZhuTongLuo recipe (FZHY) on renal interstitial fibrosis (RIF) in chronic kidney disease (CKD) at the cellular and molecular levels. Methods: Unilateral ureteral obstruction (UUO) rats were established as the RIF model in vivo. The rats were given intragastric administration of FZHY once a day for 7, 14, or 21 consecutive days. The renal function parameters and inflammation indicators in kidney tissues were measured using enzyme-linked immunosorbent assay, the CD4+/CD8+ T-cell ratio in peripheral blood was detected using flow cytometry, the degree of renal fibrosis was estimated using Masson's staining, and the expression of fibrosis-related genes was detected using quantitative polymerase chain reaction, western blotting, and immunohistochemistry analyses. Results: FZHY prescription reduced serum creatinine and blood urea nitrogen, decreased the levels of C-reactive protein, interleukin-1, interleukin-6, and tumor necrosis factor-α in kidney tissues, and increased the ratio of CD4+/CD8+ T cells in peripheral blood. FZHY prescription suppressed renal tissue fibrosis and reduced the levels of laminin, fibronectin, collagen I, and collagen III. Conclusions: FZHY prescription suppressed renal fibrosis and improved the condition of "Healthy Qi Deficiency and Evil Qi Excess" in rats with UUO, which may provide an effective method for CKD treatment.
Introduction
Chronic kidney diseases (CKD), affecting around 18% of the worldwide population, cause high morbidity and mortality and increasingly attract researchers' attention worldwide 1 . Renal interstitial fibrosis (RIF) is the main pathological feature occurring in the middle and late stages of CKD, and it is the most common pathological process in the progression of CKD to end-stage renal failure (ESRD) 2 . The main characteristics of RIF include renal tubule atrophy or expansion, epithelial cell shedding, interstitial inflammatory cell infiltration, and massive deposition of extracellular matrix (ECM) 3 . The main anti-fibrosis treatment strategies currently available are dialysis or kidney transplantation. However, the increased mortality and recurrence rates confirm that these treatments are not sufficient to effectively moderate the progression of CKD to ESRD 4 . Therefore, understanding the mechanism of RIF is of great significance to the prognosis of RIF.
Immune mechanisms are gradually being recognized as a prerequisite for the progression of chronic disease, with systemic inflammation and immune deficiency prevalent in patients with CKD and in animal models 5,6 . Phenotypic changes and dysfunction of T cells are associated with the progression of CKD 7 . Long-term amplified inflammatory signals can change the function of T cells, with T-lymphocyte subsets, including CD4 + and CD8 + cells, showing immune damage or loss of effector functions 8,9 . Indeed, a growing number of studies suggest that lymphocytopenia occurs in patients with CKD or ESRD, in part as immune dysfunction induced by T cell-mediated changes in the absolute numbers of CD4 + and CD8 + T cells 10,11 , and changes in the absolute numbers of CD4 + and CD8 + cells have even been reported in renal fibrosis [11][12][13] . However, the changes in the relative number of CD4 + /CD8 + T cells in RIF have not been specifically studied.
In China, traditional Chinese medicine (TCM) has a long history of treating kidney disease and is still used as an alternative therapy for renal disorders. TCM has its unique advantages in improving the quality of life and long-term survival of patients. Clinical trials have validated that a number of Chinese herbal formulas are effective for RIF, with active ingredients such as tripterygium glycosides, resveratrol, and astragaloside. FuZhengHuaYuJiangZhuTongLuo recipe (FZHY) is a traditional Chinese herbal prescription that significantly ameliorates chronic renal failure through anti-inflammatory and immunomodulatory effects in the 5/6 nephrectomy model 14 . It contains nine traditional Chinese herbs, including raw astragalus, Rehmannia glutinosa, salvia, safflower, wine leeches, soil beetle, wine scutellaria, wine rhubarb, and raw licorice. The co-administration of wine rhubarb and raw astragalus is used to regulate and harmonize the spleen and stomach: the raw astragalus strengthens the spleen and reinforces healthy qi, and the wine rhubarb unblocks the bowel and exorcises the pathogenic factors. The co-administration of raw astragalus and Rehmannia glutinosa is performed to achieve the purpose of tonifying the spleen and kidney: raw astragalus focuses on strengthening the spleen, while Rehmannia plays a key role in tonifying the kidney. The co-administration of wine rhubarb and raw licorice is used to rehabilitate the spleen-stomach function: wine rhubarb removes damp-heat stasis toxin in the intestine, and the function of raw licorice is detoxication. The co-administration of the cold salvia and the warm safflower is used to circulate blood, eliminate stasis, and unblock meridians, which rehabilitates the blood circulating in the heart, contained in the spleen, and stored in the liver. The co-administration of the wine leeches and the soil beetle is used for removing blood stasis and dredging the meridians and collaterals. In addition, the wine scutellaria reduces fire and dries dampness, and it does not damage the spleen yang. Therefore, FZHY has the functions of reinforcing the spleen and benefiting the kidney, invigorating qi and strengthening the body, and unblocking the bowel and transforming turbidity.
However, the potential anti-fibrosis effect of FZHY has not been reported. Unilateral ureteral obstruction (UUO) in rats is one of the most common animal models of chronic kidney pathologies, including interstitial fibrosis 15 . Therefore, in this study, we adopted a UUO rat model of RIF and examined the possible protective effect and mechanism of FZHY on the pathology of RIF in vivo.
Animals and unilateral ureteral obstruction model
Male Sprague-Dawley rats (7-8 weeks) weighing 240 to 280 g were purchased from the Laboratory Animal Business Department, Shanghai Institute of Planned Parenthood Research (Shanghai, China). All rats were given adaptive feeding for one week and randomly divided into four groups: Sham (n = 12), UUO (n = 12), UUO + FZHY (n = 12), and UUO + AST-120 (n = 12). AST-120 is a type of oral spherical activated carbon particle (commercial name KREMEZIN®) that adsorbs uremic toxins and their precursors within the gastrointestinal tract and has been proven to treat CKD effectively; it therefore served as a positive control. The UUO model was established as follows. Briefly, all rats were anesthetized by intraperitoneal injection of sodium pentobarbital. The abdominal cavity was opened through a left abdominal incision. The left ureter was bluntly separated and double-ligated with 4-0 sutures in the middle and upper 1/3, and the abdominal cavity was closed by layered suture. The rats in the sham group underwent a similar operation, but the ureter was not ligated. The rats were used in accordance with the National Institutes of Health Guidelines for the Use of Laboratory Animals, and this study was approved by the Ethics Committee of our institution (approval no.: 2021DL-016). FZHY is a Chinese medicine developed by us, and AST-120, which has been shown to ameliorate the progression of CKD, was used as a positive control 16 . The dose of FZHY for each rat was 4.92 g/kg/d. The dose of AST-120 for each rat was 4 g/kg/d. Rats were given intragastric administration once a day for 7, 14, and 21 days, respectively. Afterwards, seven rats were randomly selected from each group and sacrificed. Blood and kidney tissues were collected for subsequent experiments.
Masson's trichrome staining and immunohistochemical analysis
The degree of renal tissue injury was evaluated by Masson's trichrome staining. The kidney tissues were fixed in 4% paraformaldehyde, embedded in paraffin, and sliced into 4-μm paraffin sections. Then, sections were treated in xylene, dehydrated with graded ethanol, and stained with Masson (Sigma-Aldrich; Merck KGaA). After staining, the sections were dehydrated with 70 and 90% ethanol. Six fields of view were randomly selected and observed with an optical microscope (Olympus, Tokyo, Japan).
Flow cytometry: measurements of CD4 + /CD8 + T cell ratio
Rat PBMC was obtained and isolated from rat peripheral blood using Ficoll gradient. The red blood cells were lysed using the lysis buffer (C3702, Biolegend), and then harvested and stained with CD4-FITC anti-rat CD4 (Biolegend) and CD8-PE (200607, Biolegend). Then, the CD4 + /CD8 + T cell ratio was calculated. All fluorescent samples were analyzed with a FACS Canto II flow cytometer (BD Biosciences).
Statistical analysis
Statistical analysis was performed using GraphPad Prism 8.0 (GraphPad Software, San Diego, United States of America), and data are presented as mean ± standard deviation (SD). When appropriate, one-way analysis of variance (ANOVA) (followed by Tukey's test) or Brown-Forsythe and Welch ANOVA was used to detect statistical differences among groups. Significance was defined as P < 0.05 or P < 0.01 and marked as *, **, #, ##, D, and DD.
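For readers who prefer open-source tooling, the same group comparison can be sketched in Python with SciPy and statsmodels; the numerical values below are placeholders, not the study data, and the plain one-way ANOVA with Tukey's post-hoc test mirrors only one of the procedures listed above.

```python
# Minimal sketch of the group comparison: one-way ANOVA followed by Tukey's
# post-hoc test at alpha = 0.05. Values are placeholders, not the study data.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {
    "Sham":        np.array([28.1, 30.4, 29.7, 31.0, 27.9, 30.2, 29.5]),
    "UUO":         np.array([55.3, 60.1, 58.7, 62.4, 57.9, 61.2, 59.0]),
    "UUO+FZHY":    np.array([44.2, 46.8, 45.1, 43.9, 47.3, 45.6, 44.8]),
    "UUO+AST-120": np.array([41.5, 43.2, 42.8, 40.9, 44.1, 42.0, 41.7]),
}

f_stat, p_val = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4g}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```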
FZHY treatment improved kidney function in unilateral ureteral obstruction model rats
Compared with the sham group, the ELISA results showed that serum creatinine (SCR) and blood urea nitrogen (BUN) levels were significantly increased on days 7, 14, and 21 in UUO model rats, while FZHY treatment effectively reduced SCR and BUN levels on days 7, 14, and 21. Further, FZHY treatment slightly but significantly (P < 0.05) alleviated the SCR and BUN levels in UUO kidneys, although the reduction did not reach the extent achieved by AST-120 on days 7 and 14 (Figs. 1a-1d). FZHY treatment alleviated SCR and BUN levels in UUO kidneys on day 21, showing no statistically significant difference from AST-120 (Figs. 1e-1f). In addition, compared with the sham group, the UUO group showed significant renal tissue structure injury and renal fibrosis accompanied by a large amount of collagen deposition on days 7, 14, and 21. Instead, FZHY treatment markedly improved the damage and fibrosis, similar to the effect of AST-120 (Fig. 1g).
FZHY treatment attenuated the inflammation in kidney of unilateral ureteral obstruction model rats
Compared with the sham group, the ELISA results showed that the UUO group had a significant increase in the secretion of the inflammatory factors CRP, IL-1, IL-6, and TNF-α in renal tissues on days 7, 14, and 21 (Figs. 2a-2c). However, compared with the UUO group, FZHY treatment effectively reduced the production of CRP, IL-1, IL-6, and TNF-α on the 14th and 21st days, showing slight or no significant differences from the AST-120 group, respectively (Figs. 2b and 2c). In addition, FZHY treatment significantly reduced CRP, TNF-α, and IL-6 levels on day 7, but it did not significantly affect IL-1 levels (Fig. 2a). In general, both FZHY and AST-120 were able to inhibit the increase of inflammatory factors caused by UUO, and as time went on, FZHY might have a better efficacy, evidenced by no significant differences in any inflammatory factor between the FZHY and AST-120 treatments on day 21.
FZHY treatment increased the ratio of CD4 + /CD8 + T cell ratio in unilateral ureteral obstruction model rats
Compared with sham group, the flow cytometry results revealed that the ratio of CD4 + /CD8 + T cells was significantly reduced in peripheral blood of the UUO group on days 14 and 21, but not on day 7. FZHY or AST-120 treatment significantly increased the proportion of CD4 + /CD8 + T cells in the rats after UUO on days 7, 14 and 21 (Figs. 3 a-3c). Comparing the two types of treatment, AST-120 had seemingly better effects than FZHY, but they had a very similar result on day 21 (Fig. 3c).
FZHY treatment protected renal against fibrosis in unilateral ureteral obstruction model rats
Both the mRNA and protein levels of the renal fibrosis-related proteins FN, LN, Col-I, and Col-III were significantly increased after UUO but, compared with the UUO group, were significantly decreased after FZHY or AST-120 treatment on the 14th and 21st days (Figs. 4a and 4b). The corresponding results of IHC showed that, compared with the sham group, the UUO group exhibited damaged renal tissue structure, dilatation of the renal tubules, and elevated levels of the renal fibrosis-related proteins LN (in tubules and glomeruli), FN (in tubules and interstitium), Col-I (in tubules and interstitium), and Col-III (in tubules and glomeruli) in the kidney tissue on days 14 and 21. In addition, compared with the UUO group, FZHY treatment significantly decreased the structural damage and reduced the levels of these proteins in the kidney tissue (Fig. 5).
Discussion
In previous studies, we obtained the core ingredients of FZHY for the treatment of chronic renal failure (chronic kidney disease), such as torachrysone-8-O-beta-D-(6'-oxayl)-glucoside, hederagenin, stigmasterol and isotanshinone II, by network pharmacology (to be published, Suppl. Table 1). However, the curative effect and mechanism of FZHY on CKD need to be further verified and explored. UUO model is a classic RIF disease model, with both immune abnormalities and microinflammation, which conforms to the condition of "Healthy Qi Deficiency and Evil Qi Excess".
In this study, we established an in vivo model of UUO and investigated the effects of FZHY on the treatment of RIF within a specific time frame. Our findings demonstrated that RIF is associated with inflammatory response and immune disorders, manifested by the production of pro-inflammatory factors and the imbalance of CD4 + /CD8 + T cells. However, FZHY treatment restored these changes and showed a significant anti-fibrotic activity in UUO rat models. Mechanistically, FZHY may protect against RIF by alleviating the related inflammation and upregulating the ratio of CD4 + /CD8 + T cells to improve the condition of "Healthy Qi Deficiency and Evil Qi Excess". SCR and BUN are markers of renal function 17 . In the present study, FZHY reduced the levels of SCR and BUN on days 7, 14, and 21, suggesting that FZHY may have a protective effect in UUO-induced RIF rats. Further, the effect was most significant on the 21st day, with no difference from the positive control. TCM explains that rats with renal failure have syndromes of deficiency of the spleen and kidney and internal resistance of turbidity and blood stasis, which is a typical "positive deficiency and evil syndrome" 14 . The serum CRP, TNF-α, IL-1, IL-6, and other microinflammatory indicators were used to evaluate the "excessive evil" situation. Numerous studies reveal that a continuous increase of inflammatory signals is closely associated with RIF, in which the accumulation of TNF-α, IL-1, IL-6, and CRP in renal tissue is a necessary condition for the progression and deterioration of RIF [18][19][20] .
The data from the current study showed that FZHY significantly promoted the reduction of inflammatory factors, including CRP, TNF-α, IL-1, and IL-6, in renal tissue on days 14 and 21. This further confirmed that FZHY can regulate the "evil Qi excess" in UUO-induced RIF rats. However, FZHY failed to significantly reduce IL-1 levels on day 7. This is not necessarily a contradiction; it may be due to the small sample size or to FZHY itself being insensitive to IL-1 in the early stage of renal fibrosis.
Immunosuppression is the main factor that determines the poor prognosis of patients with CKD, and its main feature is the immune response triggered by T lymphocytes, which exerts a critical effect in the initiation of renal fibrogenesis 21 . It has been reported that changes in cellular immune indicators (CD4 + T cells, CD8 + T cells) are closely related to the condition of "positive deficiency syndrome" in TCM 22 . Previous clinical studies demonstrated that a low ratio of CD4 + /CD8 + in peripheral blood was closely related to poor outcome in liver fibrosis 23 and lung fibrosis 24 . A previous study showed that the ratio of CD4 + /CD8 + cells in patients with deficiency syndrome was significantly lower than that in normal subjects 25 . Similarly, we estimated the CD4 + /CD8 + ratio in peripheral blood, and the data showed that the CD4 + /CD8 + ratio decreased significantly in the UUO group but was restored by FZHY treatment, suggesting that RIF is associated with an imbalance of the CD4 + /CD8 + ratio. These findings have also been reported in previous studies of patients with CKD or ESRD 7 . Taken together, RIF has the macro and micro manifestations of "Healthy Qi Deficiency and Evil Qi Excess" according to TCM syndrome differentiation. FZHY prescription can improve the condition of "Healthy Qi Deficiency and Evil Qi Excess" and achieve a therapeutic effect 14 .
Renal fibrosis is an important factor in the progression of renal disease to ESRD. Therefore, improving renal fibrosis is of great significance for the therapeutic intervention of renal failure. Increased inflammation and immunologic inadequacy promote the activation of renal fibroblasts, leading to the excessive deposition of extracellular matrix components (FN, LN, Col-I and Col-III), collagen formation, and, finally, the exacerbation of renal tissue fibrosis [26][27][28] . In the present study, AST-120 significantly reduced the levels of inflammatory indicators and fibrosis-associated proteins in the UUO model. Meanwhile, the findings of the present study demonstrated that FZHY inhibited kidney tissue fibrosis in the UUO model in vivo and decreased the protein levels of the fibrotic markers FN, LN, Col-I, and Col-III. In all, the results indicated that FZHY can effectively improve renal fibrosis in the UUO-induced RIF model.
It has been established that AST-120 adsorbs indole, a precursor of indoxyl sulfate (IS), and reduces its eventual conversion to indoxyl sulfate in serum and urine, thereby ameliorating interstitial fibrosis in CKD 29 . IS is able to activate NF-κB and TGF-β1/Smad3 signaling in tubular cells and induces production of TNF-α by monocytes [30][31][32] , which are classical inflammation and/or fibrosis triggers. After UUO, IS accumulation is aggravated, but its suppression attenuates the progression of renal interstitial fibrosis 33 . A reduced CD4 + /CD8 + ratio is associated with CKD, and an inverted CD4 + /CD8 + ratio is a cause of ESRD 34,35 . AST-120 has been found by other researchers to significantly decrease CD8 + T cells (-33.9% for central memory T cells, and -42.6% for CD8 + naïve T cells), but it does not affect or only slightly reduces CD4 + T cells (0 for T helper cell number, and -13.1% for early activated CD4 + T cells) 36 , which is consistent with our findings in this study. The reduced uremic toxins, such as indoxyl sulfate and p-cresyl sulfate, may be involved in this phenomenon [37][38][39][40] . However, the exact mechanism by which AST-120 exerts its anti-CKD effect still needs further study. Quite different from AST-120, FZHY does not directly adsorb uremic toxins such as IS. Some of its main components, such as hederagenin, have been proven to have inhibitory effects on inflammation and fibrosis in the kidney after injury or disease 41,42 . In addition, although the mechanisms are different, our results showed a trend that an extended course of FZHY treatment may bring more benefit to CKD rats, as AST-120 does. We deduce that increasing the dose may also bring more positive results for CKD treatment. These findings suggest an alternative treatment, apart from AST-120, to suppress inflammation and fibrosis in CKD in the clinic, which warrants further attention from researchers in this field.
Conclusion
The use of FZHY improves the condition of "Healthy Qi Deficiency and Evil Qi Excess" in UUO-induced RIF rats, as shown by the facts that FZHY reduced kidney tissue inflammation, restored the balance of the CD4 + /CD8 + ratio, and improved UUO-induced renal fibrosis, providing a better understanding of the anti-inflammatory and anti-fibrosis effects of FZHY in RIF.
Congenital malformations in newborns of alcoholic mothers
Objective: To identify the presence of fetal alcohol syndrome, other alcohol-related congenital defects, and/or neurodevelopmental disorders in newborns of mothers who consumed alcohol during gestation. Methods: In a public maternity hospital in the city of São Paulo, 1,964 puerperal women were interviewed, and 654 had consumed alcohol at some point during gestation. The newborns were clinically and laboratorially examined in order to identify the occurrence of fetal alcohol syndrome, congenital defects, or neurodevelopmental disorders related to alcohol. Results: Three children were found with fetal alcohol syndrome (1.5/1,000 live births), 6 with congenital defects related to alcohol (3.0/1,000 live births), and 67 with developmental disorders related to alcohol (34.1/1,000 live births). The congenital malformations found in these children were thin or absent corpus callosum, brain cyst, asymmetry of the cerebral ventricles, meningomyelocele, cleft lip, anteverted nose, low-set ears, megaureter, hydronephrosis, polydactyly, congenital clubfoot, aphalangia of the toes, cryptorchidism, and hypospadia. Conclusion: Newborns of mothers who consumed alcohol may have congenital malformations of various organs and systems, and early diagnosis is fundamental for possible earlier and more effective management and a better outcome.
When the pregnant woman ingests alcohol, her baby also drinks it, meaning that throughout gestation any dose of consumed alcohol may cause alterations in development (10). To date, it is not known if there is a safe level of alcohol consumption, i.e., a limit under which no fetal damage would be caused (5). The probability of the unborn child being affected and the severity of the syndrome are likely related to various factors, such as the dose consumed, the pattern of consumption, the gestational period when the fetus was exposed to alcohol, maternal and fetal alcohol metabolism, maternal health, and fetal genetic susceptibility (7,10).
In 1996, the U.S. Institute of Medicine (IOM) of the National Academy of Sciences, in Washington, DC, with the intent of standardizing the nomenclature related to the effects of alcohol on the newborn, introduced the terms alcohol-related birth defects (ARBD) and alcohol-related neurodevelopmental disorders (ARND), describing conditions related to the maternal use of alcohol that are not found in FAS (11).
Early identification and diagnosis of FASD is fundamental, with the purpose of providing the health, education, and social services necessary for a better evolution of these children (1,12).
FASD currently represents the greatest worldwide Public Health problem (13); on the other hand, if the woman abstains from consuming alcoholic beverages immediately before conception and throughout pregnancy, congenital anomalies related to alcohol are totally preventable (2,12,14-17).
The Centers for Disease Control and Prevention (CDC), the National Task Force on Fetal Alcohol Syndrome, and the U.S. Surgeon General's Advisory recommend that pregnant women, those who desire to become pregnant, or even those who have a likelihood of becoming pregnant not ingest alcoholic beverages (3,17,18). The same recommendation is made by the American Academy of Pediatrics, given that a safe fetal alcohol level is not known (19). Since the worst damage from ethanol occurs during the embryogenesis phase, abstinence from alcohol is mandatory even before the diagnosis of pregnancy is confirmed (13,15,17-19).
The FAS encompasses characteristic facial alterations, restriction of pre- or post-natal growth, and evidence of structural and/or functional modifications of the central nervous system (CNS), which are always associated with intrauterine exposure to alcohol (1,17).
The ARBD comprise congenital anomalies including malformations and dysplasia (4), and there can be structural cardiac, skeletal, renal, ocular, and pinna defects, among others (2). The ARND present structural alterations of the CNS or behavioral or cognitive abnormalities which are inconsistent with the level of development and are not explained by genetics or by familial and environmental antecedents (2).
Analyzing the group of FASD cases, it was noted that facial dysmorphia is often absent, since this finding is not obligatory; what remains is the damage that prenatal exposure to alcohol may cause to cerebral function, which is permanent and incapacitating (4,20).
OBJECTIVE
To identify the presence of FAS, other congenital defects, and/or neurodevelopmental disorders related to maternal alcohol ingestion in newborns of mothers who consumed alcohol.
METHODS
At the Hospital Municipal Maternidade-Escola de Vila Nova Cachoeirinha "Dr. Mário de Moraes Altenfelder Silva" (HMEVNC), located in the peripheral region of the city of São Paulo, an analytical cross-sectional observational study was conducted of live newborns and their mothers during the period from August 13th, 2006 to January 21st, 2008.
During this time, 7,447 infants were born alive at the hospital, and 33 were admitted alive after having been born outside the hospital. For operational reasons related to the availability of one of the researchers (MAM), the study included all newborns admitted between 12:01 a.m. on Sunday and 11:59 p.m. on Monday during the referred period.
Newborns with a genetic syndrome, those who died before the physical examination, and those who were discharged from hospital before the physical examination and/or the maternal interview were excluded. The same was true for those transferred to another hospital before the physical examination, those whose mothers died before the interview, those who were not the first twin, and the infants of puerperal women who refused to participate in the study. The final sample was made up of 1,964 mother-newborn binomials.
The interview with the mothers of these children was structured with direct questions and closed-ended questions applied by one of the researchers (MAM). The T-ACE (21) questionnaire was also used in order to identify the consumers of alcoholic beverages. The infants were all examined by one of the researchers (MAM).
The variables collected from newborns were gestational age (GA), head circumference (HC), characteristics of the nasal philtrum, measurements of the palpebral fissure (PF) and of the largest width of the vermilion border of the upper lip (VBUL), as well as the presence of structural defects not related to other causes. The measurements of HC, PF, and of the largest width of the VBUL were performed between 24 and 72 hours of life so that subcutaneous edema and cranial alterations due to labor and delivery would not induce error.
The GA was estimated by the date of the last menstrual period (LMP) calculated according to the rule of Naegele, cited by Mongelli (22), by the New Ballard Score (NBS) (23) conducted between the 6th and 48th hours of life, or by the Capurro Somatic (CS) method (24) performed during the first hour of life whenever the infant's clinical condition did not allow the performance of the test by NBS. When the mother's date of LMP was unknown, the GA was determined by the NBS or CS methods.
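As a small illustration of the date arithmetic involved (a minimal sketch with made-up dates, not data from the study), gestational age from the LMP and the expected date of delivery by Naegele's rule (LMP + 280 days) can be computed as follows.

```python
# Minimal sketch: gestational age at birth from the LMP, and the expected date
# of delivery by Naegele's rule (LMP + 280 days). Dates are illustrative only.
from datetime import date, timedelta

def gestational_age(lmp: date, birth: date) -> str:
    days = (birth - lmp).days
    return f"{days // 7} weeks + {days % 7} days"

def naegele_edd(lmp: date) -> date:
    # Equivalent to LMP + 1 year - 3 months + 7 days for a 28-day cycle.
    return lmp + timedelta(days=280)

lmp = date(2007, 1, 10)
birth = date(2007, 10, 3)
print("GA at birth:", gestational_age(lmp, birth))  # 38 weeks + 0 days
print("EDD (Naegele's rule):", naegele_edd(lmp))
```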
According to the 2004 CDC criteria, the facial alterations considered characteristic of FAS were PF ≤ 10th percentile, smooth nasal philtrum, and thin VBUL. Restriction of growth was considered by the same agency when it corresponded to birth weight (BW) and/or HC and/or length (L) ≤ 10th percentile (3). The infant's growth pattern and the measurements of PF and of the greatest width of the VBUL were evaluated using specific percentile curves based on measurements and GA in this population (25) and were considered abnormal when less than or equal to the 10th percentile for each GA.
The PF and the VBUL were determined with the newborn in dorsal decubitus, in a sleepy state or passively alert.The PF was assessed with the patient's eyes closed, due to the difficulty of neonates to open them.The greatest width of the VBUL was obtained with the upper and lower lips closed/sealed.
The presence or absence of a smooth nasal philtrum and of the anteverted nose was subjectively analyzed by the researcher who examined the children.
In the newborns of mothers who consumed alcohol, transfontanelle ultrasound was performed when anatomical alterations of the skull and/or a HC ≤ 10th percentile were found, with Usher's curve (26) used as a guide.

Abdominal ultrasound and X-rays of the chest and bones were performed in infants showing physical alterations in the thoracic, abdominal, and urogenital regions and/or limbs.

The ultrasounds were performed by ultrasonographers at the institution. A cardiologist and a neurologist of the hospital assessed the newborns with cardiac and neurological abnormalities.
Based on 2004 CDC criteria, the newborns were considered as having FAS when there was a report of maternal consumption of alcohol during intrauterine life, characteristic facial alterations, growth restrictions, and structural and/or neurological alterations of the CNS (3) .
According to the criteria of the IOM 1996, clarified by Hoyme et al., newborns were considered as having ARBD when there was a report of maternal consumption of alcohol during intrauterine life, two or more of the three characteristic facial alterations of FAS, and one or more congenital malformations (2). By the same criteria, newborns were considered as having ARND when there was a report of maternal consumption of alcohol during intrauterine life and structural and/or neurological alterations of the CNS (2).
A HC ≤ 10th percentile and/or alterations on the ultrasound image of the brain were considered structural modifications of the CNS. Neurological alterations corresponded to the following findings: convulsions/seizures, tremors, irritability, and alterations in suction/swallowing not related to other causes. The other neurological criteria recommended by the American IOM were not considered, due to the difficulty in evaluating them in neonates.
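The diagnostic logic described in the last three paragraphs can be summarized in a short classification sketch. This is a hedged illustration only: the field names, the requirement of all three facial features for full FAS, and the order in which the categories are checked are simplifying assumptions, not a restatement of the full CDC 2004 or IOM 1996 criteria.

```python
# Hedged sketch of the FAS / ARBD / ARND classification logic described above.
# Field names and the exact feature counts are simplifying assumptions.
from dataclasses import dataclass

@dataclass
class Newborn:
    prenatal_alcohol: bool    # report of maternal alcohol use during gestation
    facial_features: int      # of: PF <= 10th pct, smooth philtrum, thin VBUL (0-3)
    growth_restriction: bool  # BW, HC and/or length <= 10th percentile
    cns_alterations: bool     # structural and/or neurological CNS alterations
    malformations: int        # number of other congenital malformations

def classify(nb: Newborn) -> str:
    if not nb.prenatal_alcohol:
        return "not alcohol-related"
    if nb.facial_features == 3 and nb.growth_restriction and nb.cns_alterations:
        return "FAS"
    if nb.facial_features >= 2 and nb.malformations >= 1:
        return "ARBD"
    if nb.cns_alterations:
        return "ARND"
    return "no FASD diagnosis"

print(classify(Newborn(True, 3, True, True, 0)))    # FAS
print(classify(Newborn(True, 2, False, False, 1)))  # ARBD
print(classify(Newborn(True, 0, False, True, 0)))   # ARND
```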
After the puerperal woman read and signed the informed consent form, drawn up for the specific purposes of this study, the participation of the patient and her child was allowed.
The Research Ethics Committee of the HMEVNC approved this study as per protocol number 009/2006.
RESULTS
Direct questions identified that 654 (33.3%) of the total number of puerperal women analyzed in this population consumed alcohol at some moment during pregnancy. Of these, 140 (21.4%) consumed it during all three trimesters of pregnancy, 159 (24.3%) in two trimesters, and 355 (54.3%) in one trimester.
The T-ACE questionnaire (21) applied to the 1,964 pregnant women was positive in 611, or 31.1% of the total number of puerperal women analyzed in this population, and one patient was unable to answer this questionnaire.
Table 1 displays the results that allowed a classification of newborns as to GA, according to weight at birth, HC, L, PF, and VBUL. Comparing the results of the newborn variables with those of the puerperal women as to alcohol consumption allowed the diagnosis of FAS in three newborns (1.5/1,000 live births), the identification of ARBD in 6 newborns (3.0/1,000 live births), and of ARND in 67 newborns (34.1/1,000 live births).

The worldwide average prevalence of FAS is 0.5-2/1,000 live births (3,9). It is estimated that for each child with FAS there are three who do not present all the characteristics of the syndrome, but who have neurobehavioral deficits resulting from prenatal exposure to alcohol (17). In the present study, the occurrence of FAS remained similar to that of the literature, but the prevalence of the incomplete form of the syndrome, represented by ARBD and by ARND, was greater than expected, as per the CDC criteria (3). It was also higher than the data found in a state-owned organization of São Paulo (SP), which showed a prevalence of the partial syndrome of 19.7/1,000 analyzed children (30).
A plausible explanation for this finding is not easy to provide. The specific pathophysiology of alcohol in the fetus is still unknown (11,31,32), but there is probably no single mechanism that can explain all the damaging effects on the unborn child (6). There are no markers capable of determining the specific action of alcohol on the fetus, nor the precise influence of the dose on the mechanism of development of the syndrome (8). It is a known fact that alcohol passes through the fetal barrier between the blood and the brain, and its effects on cerebral development are extremely complex (14,32). In certain groups of cerebral cells it can lead to death, and in others, it interferes with their functions (32). In this way, it can be suggested that some of these different factors may be acting in a specific manner according to characteristics of the population of the present study.
It is a well-known fact that exposure to alcohol at any time during pregnancy may cause effects on the CNS, which will be more damaging if they occur in the first five weeks of gestation (5,11,18). One evident result is the decrease in cerebral growth, manifested by microcephaly and microencephaly (14), but occasionally prenatal exposure to alcohol can cause more specific brain lesions (5). Habitually, however, brain damage is generalized and non-specific, with functional abnormalities becoming more apparent throughout the child's development (3); a child may also present structural alterations of the CNS consistent with FAS with no detectable functional deficit (1,3). In the same way, mental retardation in the syndrome is not necessarily associated with cerebral malformations (20,33).
In the present study, besides typical facial characteristics and intrauterine growth restriction, other facial anomalies were also noted, in addition to various malformations related to the nervous, osteoarticular, urinary and genital systems.
In this project, in newborns with a diagnosis of FAS, ARBD, and ARND, the abnormalities related to the CNS appeared five times; those related to the genitourinary system appeared six times, and those related to the osteoarticular system, five times. Cleft lip was present once, low-set ears in another case, and anteverted nose in 20 cases (Table 2).

One of the children had FAS with a thin corpus callosum and low-set ears; another one had agenesis of the corpus callosum, megaureter, and hydronephrosis. In children with ARBD, cleft lip was identified in one, polydactyly in two, congenital club foot in one, and cryptorchidism in two. In infants with ARND, meningomyelocele was noted in one; brain cyst, asymmetry of the cerebral ventricles, and low-set ears in one; aphalangia of the toes, hypospadia, and cryptorchidism in one; and polydactyly in another.

DISCUSSION

Alcohol consumption by pregnant women is without a doubt a serious worldwide Public Health concern, since it damages the fetus not only physically, but also in terms of behavior (1-3, 5-7, 10, 27).

Despite the known adversity of prenatal exposure to alcohol, the children that suffer the most are often not identified, since the diagnosis of a newborn with FASD is difficult in most cases (28). Clinical findings of FASD may go unnoticed, since they result from the combination of various factors that act at different critical periods of fetal development (1,2,11). Thus, calling attention to the different aspects of FASD will contribute towards its early identification (29).

The typical facial characteristics of the action of alcohol on the unborn child may be associated with a narrow forehead, hemifacial hypoplasia, hypotelorism, hypertelorism, maxillary and mandibular hypoplasia, epicanthal fold, small and anteverted nose, wide and lowered nasal bridge, ogival palate, uvula aplasia, enamel hypoplasia, small teeth, dental malocclusions, cleft lip and/or cleft palate, micrognathia, and malpositioned and malformed prominent ears (4,8,20). Some of these characteristics were also found in the newborns of the present study, and the frequency of anteverted nose was noteworthy.
Agenesis of the corpus callosum is described as one of the most frequent anomalies of the CNS (34). This absence leads to conditions that vary from the absence of symptoms to various degrees of mental deficiency, convulsions/seizures, and motor deficits (8). The corpus callosum may also have a smaller size (33), and both agenesis and decreased size of the corpus callosum were noted in some of the studied infants.

Hydrocephalus, meningomyelocele, and decreased size of the cerebellum, of the basal ganglia, and of the diencephalon may also be present in children of pregnant women who consumed alcohol during pregnancy (20,33). In the present study, meningomyelocele, brain cysts, and asymmetry of the cerebral ventricles were found.

In the ocular region, intrauterine exposure to alcohol may lead to palpebral ptosis, microphthalmia, coloboma, epicanthal folds, nystagmus, strabismus, and myopia (8,20). The optic nerve is often hypoplastic, and the retinal arteries are sinuous (13). Vestibular disorders of the ear are also seen, with neurosensory hearing deficits and episodes of secretory otitis media (8). Some of these anomalies were found in the children of this study.

Cardiac malformations may include defects of the ventricular and atrial septa (8,20). Atrioventricular defects, patent ductus arteriosus, and tetralogy of Fallot have been associated with FAS (35). In the present study, no cardiopathy was identified in the children.

Proximal radioulnar synostosis, valgus femur, bone fusion anomalies, scoliosis, complex malformations of the cervical vertebrae, ribs, and fourth and fifth metacarpal bones, pectus excavatum, and joint alterations with luxations may occur (8). In this study we found polydactyly, aphalangia, and congenital club foot.

Patients with FAS may have malformations of the external genitals, such as hypoplasia of the labia minora and labia majora, enlarged penis or clitoris, and hypospadia (8,20). Among the children of this study we identified cryptorchidism and hypospadia.

The limitations of this study include the fact that the children were examined by only one of the researchers, which could lead to assessment error. Nevertheless, the care taken to adopt criteria based on intrauterine growth and on the size of the PF and VBUL constructed for this specific population may have compensated for this aspect. On the other hand, the examination of the children only during the neonatal period and the lack of follow-up could underestimate the occurrence of other developmental problems that would appear at a later date.
The findings of this study, however, indisputably showed the presence of congenital anomalies and of possible functional deficits related to the consumption of alcohol by the mothers of these infants. The large consumption of alcoholic beverages by the pregnant women of this population makes it necessary to investigate the issue starting in the prenatal phase, calling attention to the negative effects on the fetus.

We therefore call for the investigation of the effects of alcohol on the infants of women who consume alcoholic beverages to become mandatory, and for all healthcare professionals to identify these children in order to make the diagnosis of FASD, enabling early intervention and a possible reduction of the consequences, even though a cure is not possible.
CONCLUSION
The occurrence of FAS in the present study proved to be consistent with that reported in the literature, while that of ARBD and ARND was greater.

Suspicion of the existence of FASD and its identification are possible in neonates, allowing the early and fundamental action necessary for a more adequate evolution of these patients.
Table 1. Classification of newborns according to birth weight, head circumference, length, palpebral fissure, and vermilion border of the upper lip in relation to gestational age. SGA: small for gestational age; AGA: appropriate for gestational age; LGA: large for gestational age.
Quark mass effects in the soft-collinear effective theory and B ->X_s + photon in the endpoint region
We consider the effects of a light quark mass in the soft-collinear effective theory (SCET) and we apply them to B -> X_s gamma in the endpoint region. We find that the reparameterization invariance can be extended by including the collinear quark mass in the SCET Lagrangian. This symmetry constrains the theory with the quark mass terms, and we present explicit results at one loop. It also relates the Wilson coefficients of some mass operators to those of the leading operators, which are useful in organizing the subleading effects due to the quark mass in B -> X_s gamma. We present strange quark mass corrections to B -> X_s gamma in the endpoint region as an application. The forward scattering amplitude from the mass corrections is factorized, and it can be expressed as a convolution of the m_s^2/p_X^2-suppressed jet function and the leading-order shape function of the B meson. This contribution should be added to the existing subleading contributions from the B meson shape functions to obtain complete subleading corrections.
I. INTRODUCTION
The soft-collinear effective theory (SCET) [1,2,3] has been widely used to describe high-energy processes which include energetic light particles. It is obtained from QCD by integrating out the degrees of freedom which are larger than a typical energy scale, Q. The effective theory contains a rich class of symmetries, and these symmetries of SCET provide us with new insight into factorization theorems [3,4,5] and enable us to perform a systematic power counting in hadronic processes [6]. SCET has been applied successfully to many high energy processes such as exclusive B decays [7,8,9,10,11,12], inclusive B decays [1,13], quarkonium production and decay [14], deep inelastic scattering [15], and jet physics [16].
In SCET, the momentum of a light energetic particle has three distinct scales and can be written as

p^\mu = \bar{n}\cdot p\, \frac{n^\mu}{2} + p_\perp^\mu + n\cdot p\, \frac{\bar{n}^\mu}{2} \sim Q(1, \lambda, \lambda^2).

Here n and \bar{n} are light-cone vectors satisfying n^2 = \bar{n}^2 = 0, n\cdot\bar{n} = 2, and λ is a small parameter. In many processes, λ is chosen as √(Λ/Q) or Λ/Q, where Λ is a typical hadronic scale. The effective theory which has a small expansion parameter λ ∼ √(Λ/Q) is called SCET_I, and the effective theory in which physical quantities are expanded in powers of λ ∼ Λ/Q is called SCET_II. If there are contributions at intermediate scales of order √(QΛ), we employ the two-step matching in which SCET_I is obtained from the full theory by integrating out the hard modes of order p² ∼ Q², and SCET_II is obtained by successively integrating out hard-collinear modes of order p² ∼ QΛ [3].
At leading order in SCET, the collinear quarks are regarded as massless. Because the mass of a light quark, m, is very small compared to the hard scale Q or the intermediate hard-collinear scale √(QΛ), the quark mass can be neglected at leading order in λ. However, the light-quark mass terms [12,17,18,19], and in some situations the charm quark mass [20], can be included in the framework of SCET. In Ref. [18], the authors first considered the quark mass in the SCET Lagrangian. Any operators including the light quark mass are formally suppressed by Λ/Q or more compared to the leading contribution. However, if there are no leading terms, the quark mass can appear at leading order. SU(3) breaking effects can be of this type, since the strange quark mass can be numerically regarded as of order Λ (it is not possible to treat isospin breaking effects in this way since the masses of the up and down quarks are too small to be regarded as of order Λ). Another remarkable point about the quark mass is that it can give an enhanced contribution to some hadronic processes in SCET_II due to the different power counting schemes in SCET_I and SCET_II. Although they do not appear at leading order in SCET_I, since the quark mass terms are suppressed by Λ/Q, light quark masses can give significant corrections to the matching process related to hard-collinear degrees of freedom. The contribution of the quark mass to the decay rate can be of order m²/(Q²(1 − x)) ∼ Λ/Q near the endpoint 1 − x ∼ Λ/Q. This is one of the main themes to be investigated in this paper.
SCET can be extended to include the light quark mass, which we regard as of order Λ. We can systematically implement the quark mass in SCET and consider its renormalization behavior. We find that the reparameterization invariance [7,21] still holds for the transformations of type-I and type-III in spite of the presence of the quark mass. But the transformation of type-II does not hold in its original form. However, the transformation of type-II can be modified (or extended) to include the quark mass so that the symmetry still exists. This extended reparameterization invariance relates the leading operators to some subleading operators that include the quark mass. In practical applications, the strange quark mass is the only light quark mass that is relevant, and we consider the quark mass effects in B̄ → X_s γ near the endpoint as a concrete example. Naively, the mass terms give corrections of order m²/m_b² compared to the leading order contribution. But contributions of order m²/[m_b²(1 − x_γ)] with x_γ = 2E_γ/m_b can arise, which are of order Λ/m_b near the endpoint region.
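As a quick consistency check of this counting (a worked estimate under the assumption m ∼ Λ stated above, not a result quoted from the paper):

\[
\frac{m^2}{m_b^2\,(1-x_\gamma)} \;\sim\; \frac{\Lambda^2}{m_b^2\,(\Lambda/m_b)} \;=\; \frac{\Lambda}{m_b},
\qquad 1-x_\gamma \sim \frac{\Lambda}{m_b}\ \text{near the endpoint},
\]

so a correction that is formally of second order in the light quark mass is promoted to a first-order power correction in the endpoint region.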
In this paper we investigate the effects of the quark mass in SCET and consider the symmetries including a quark mass. We also consider the renormalization effects and the Wilson coefficients of the mass operators in SCET. We then apply these results toB → X s γ in the endpoint region and discuss the contribution of the quark mass terms. In section II, the SCET Lagrangian with the light quark mass is constructed. We find an extended reparameterization transformation under which the Lagrangian is invariant, and we divide the Lagrangian into two reparameterization-invariant combinations. In this procedure, we show that the original reparameterization invariance symmetry without a quark mass can be extended by modifying the transformation of the collinear quark. We describe the consequence of the extended reparameterization invariance on the renormalization behavior of the mass operators. In section III, the Wilson coefficients of the effective operators including the quark mass are obtained to first order in α s from the matching between full QCD and SCET. Also their renormalization behavior is presented with the effective theory quark mass renormalization at one loop. In section IV, the corrections due to the strange quark mass in B → X s γ near the endpoint region are considered. They can give corrections of order Λ/m b , contrary to naive expectations. From the matching of the heavy-to-light current between the full theory and SCET I , we obtain the subleading current operators including the quark mass. We then consider the time-ordered products of the currents and mass operators contributing to the decay rate in SCET. We show that the forward scattering amplitude with the mass corrections factorizes, similar to the leading-order result, and the jet function can be expanded in powers of the quark mass. Finally the results are summarized and the conclusions are presented in the final section.
II. MASS OPERATORS AND THE REPARAMETERIZATION INVARIANCE
In SCET, the collinear quark field in the full theory is decomposed into

\psi(x) = \sum_{\tilde p} e^{-i \tilde p \cdot x} q_{n,p}(x),

where \tilde p^\mu is a label momentum, and \frac{\slashed{n}\slashed{\bar n}}{4} q_{n,p} = \xi_{n,p} and \frac{\slashed{\bar n}\slashed{n}}{4} q_{n,p} = \xi_{\bar n,p} are the projected spinors. After integrating out the off-shell field \xi_{\bar n,p} [18], the SCET Lagrangian with a quark mass is written as

\mathcal{L} = \bar\xi_{n,p'} \Big[ i n\cdot D + (i\slashed{D}_\perp - m) \frac{1}{i \bar n\cdot D} (i\slashed{D}_\perp + m) \Big] \frac{\slashed{\bar n}}{2}\, \xi_{n,p},   (3)

where a summation over the label momenta is implied, and the covariant derivative D^\mu contains both collinear and ultrasoft pieces, D^\mu = D_c^\mu + D_{us}^\mu [2]. Let us consider first SCET_I with the expansion parameter λ ∼ √(Λ/Q), in which the ultrasoft (usoft) fields can interact with the collinear fields. The usoft momentum is of order Λ, and p_⊥ ∼ √(QΛ). For a collinear strange quark, if we treat the sizes of the quark mass m and iD_us to be of the same order O(λ²), the term proportional to m in Eq. (3) is of order O(λ) and the term proportional to m² starts from O(λ²). In this case, the mass terms in SCET are suppressed at least by order λ compared to the leading Lagrangian, and the spin of the collinear quark is preserved at leading order in SCET.
Integrating out the hard-collinear degrees of freedom with p 2 hc ∼ QΛ to obtain SCET II , the usoft fields are decoupled from the collinear fields [3], and the Lagrangian of the collinear quark sector in SCET II can be written as in Eq. (5), where iD µ c = (n̄ · P + gn̄ · A n,q ) n µ /2 + (P µ ⊥ + gA µ n,q,⊥ ) + (n · P + gn · A n,q ) n̄ µ /2. Here the expansion parameter λ is of order Λ/Q, and the collinear fields have momenta p 2 c ∼ Λ 2 . Contrary to SCET I , the mass terms in Eq. (5) belong to the leading-order Lagrangian. Therefore the effects of the mass terms can be important at leading order.
Before we investigate the effects of the radiative corrections for the new operators with a quark mass in Eq. (5), it is useful to consider the symmetries of SCET with the quark mass. In Refs. [7,21], it has been shown that the SCET Lagrangian without the quark mass has a reparameterization invariance. One of the consequences is that the kinetic energy in SCET is not renormalized to all orders in α s . And when we consider current operators in SCET, there are subleading operators which form a reparameterization-invariant combination with the leading operators. In this case, the Wilson coefficients of these subleading operators are the same as those of the leading operators to all orders in α s . When the mass terms are included in SCET, the situation is slightly different. In this case, we can find an extended reparameterization transformation under which the Lagrangian is still invariant, and the Lagrangian consists of two independent sets of the operators which are separately reparameterization invariant. A similar example exists in the heavy quark effective theory (HQET) [22,23,24], in which the chromomagnetic operator belongs to a different reparameterization invariant combination from the kinetic term in HQET, and has a nontrivial Wilson coefficient.
Let us consider the effect of the mass term on the reparameterization invariance and how we can extend the reparameterization symmetry with the quark mass. The Lagrangian before integrating out ξ n̄,p is given by Eq. (7), where the quark field in SCET is given by q n,p = ξ n,p + ξ n̄,p , and the covariant derivative is D µ = D µ c + D µ us . Here the covariant derivative D µ is invariant under the reparameterization transformation since it is a four-vector, which does not change under a different choice of the basis vectors n µ and n̄ µ . Furthermore, the quantity Σ p e −ip·x q n,p is the quark field in the full theory, which also does not change under the reparameterization transformation. Therefore the two terms in Eq. (7) are separately reparameterization invariant. Thus, there are two independent reparameterization-invariant combinations in Eqs. (3), (5), and (7) if we can still find the appropriate reparameterization invariance.
In fact, there is a reparameterization invariance which can be extended to the case with the mass term. The original reparameterization invariance combined with the gauge invariance requires that the covariant derivative D µ not change under the transformations of type-I, II and III in Ref. [21]. We can find the same types of the reparameterization transformations under which the Lagrangian with the quark mass is invariant. In this case, we only need to check if the quark field Σ p e −ip·x q n,p in the full theory remains invariant under these three types of the transformation. Using the equation of motion, we can write the quark field in the full theory in terms of ξ n,p , as in Eq. (8) [18]. Without the mass term, the quark field ψ has the original reparameterization invariance.
With the mass term, ψ is not invariant under all of the original reparameterization transformations. ψ is still invariant under the reparameterization transformations of type-I and III, but not under the transformation of type-II. In order to see this, it is enough to look at the term proportional to the quark mass in Eq. (8) under the type-II transformation, in which the light-cone vector n̄ µ changes to n̄ µ + ε µ ⊥ with infinitesimal ε µ ⊥ . Applying the transformation to this term yields an expression which clearly shows that the mass term is not invariant under the transformation of type-II. However, we can find an extended transformation of the spinor under the type-II transformation such that ψ remains invariant, and in the limit of zero quark mass, the transformation reduces to the original transformation of type-II.
Suppose that the spinor ξ n changes as ξ n → ξ n + δξ n under the transformation of type-II. Then ψ transforms as in Eq. (10). Requiring that it be invariant under the transformation, the solution for δξ n can be obtained, and it reduces to the original reparameterization transformation of type-II when the quark mass vanishes. If we plug this solution into Eq. (10), we find that ψ indeed remains invariant, using the equation of motion, Eq. (3). So the extended transformation of type-II on the spinor, including the quark mass, can be written down explicitly. Therefore the reparameterization symmetries in the presence of the light quark mass in SCET still exist with the only modification of the spinor under the transformation of type-II, while the other transformations remain intact.
As mentioned above, there are two independent reparameterization-invariant combinations in Eq. (7). Putting Eq. (8) into Eq. (7), each combination can be written as in Eqs. (14) and (15), where K is the kinetic term of SCET and the mass operators O (i) m are suppressed by λ i compared to K in SCET I . Because the kinetic term in the effective theory is not renormalized to all orders in α s , the same is true for the reparameterization-invariant combination K − O (2) m . But the other combination in Eq. (15) does not have such a constraint, and in general it can have a nontrivial Wilson coefficient at higher orders. Putting these together, to all orders in α s , the SCET Lagrangian can be written with a single nontrivial Wilson coefficient C(µ), which can be obtained from matching the full QCD Lagrangian onto SCET by treating the mass term as a perturbation. As will be explicitly shown in the next section, when dimensional regularization is used both for the ultraviolet and the infrared divergences, all the radiative corrections at order α s are zero since the ultraviolet divergences cancel the infrared divergences. Therefore there is no finite contribution in matching, and the Wilson coefficient remains 1. The SCET Lagrangian, at least to first order in α s , can then be written without any nontrivial coefficient. If the radiative corrections remain zero at higher orders, the Wilson coefficient is equal to 1 to all orders in α s . An argument for the non-renormalization to all orders was presented in the first reference of [8], and in Ref. [25] including the quark mass.
The scaling behavior of the quark mass can be considered by extracting the ultraviolet divergent part in the radiative corrections of the operators O (1,2) m since these operators involve the quark mass. It can be obtained by computing the radiative corrections for the quark mass with the wavefunction renormalization of the spinor ξ n . Physically, the scaling behavior of the quark mass should be the same as that in the full theory since there are no degrees of freedom integrated out, which contribute to the evolution of the quark mass of order Λ. For example, the self energy for ξ n is the same as that for the spinor ψ in the full theory. This is in contrast to HQET, where the magnetic operator has a nontrivial Wilson coefficient because the hard calculation of the full theory has a dependence on the heavy quark mass. All these aspects will be verified explicitly to order α s in the next section.
III. MATCHING AND RENORMALIZATION OF THE MASS OPERATORS
The matching between full QCD and SCET can be performed by considering the quark propagator. The quark propagator in the full theory can be written to all orders in α s as where (p /, m) is the self energy of the quark, and the higher-order corrections of the full QCD Lagrangian can be obtained by replacing the Lagrangian in momentum space as When we match SCET I onto the full theory at the scale µ ∼ Q where Q is the large momentum of the collinear quark, the self energy can be written as where the virtuality of the collinear quark p 2 is treated as µ 2 ≫ p 2 ≫ m 2 . At first order in α s , the coefficients are given as where D = 4 − 2ε and 1/ε represents the ultraviolet divergence and the infrared divergences are regulated by the logarithmic terms. This method is useful in extracting the ultraviolet divergences. For example, the counterterms for the wavefunction renormalization Z ψ and the mass renormalization Z m are given by A more convenient method is to use pure dimensional regularization with all the external particles on their mass shell. This greatly simplifies the computation both in the full theory and in the effective theory. In both theories the on-shell graphs have no finite parts since there are scaleless integrals, which vanish in pure dimensional regularization. Furthermore the matching results are gauge independent and renormalization-scheme independent only when we put the external particles on their mass shell. Eq. (21) can be written in pure dimensional regularization as where the infrared poles in ε IR can be explicitly computed or can be inferred from the ultraviolet divergence with the fact that the radiative corrections are zero.
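For orientation, the standard one-loop values of these counterterms, quoted here in Feynman gauge and the MS-bar scheme (the paper's own expressions and scheme conventions may differ), are

\[
Z_\psi = 1-\frac{\alpha_s C_F}{4\pi}\,\frac{1}{\epsilon},\qquad
Z_m = 1-\frac{3\,\alpha_s C_F}{4\pi}\,\frac{1}{\epsilon},
\]

so the light quark mass runs exactly as in full QCD, consistent with the statement that no modes relevant to the evolution of a mass of order Λ are integrated out in the matching.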
At one loop after the ultraviolet divergence is removed, the radiative correction of the full QCD Lagrangian is given by To match this result onto SCET I , we convert p / to iD / and apply Eqs. (8), (14), and (15) to Eq. (24). Then we obtain where the operators K, O (1) m , and O (2) m are defined in Eqs. (14) and (15), and we use the on-shell renormalization scheme in which the infrared divergences are regulated by the poles in ε IR .
In order to examine if the effective theory reproduces the infrared divergences of the full theory and to extract the Wilson coefficients of O (1) m and O (2) m , we need to calculate the one-loop corrections of O (1) m and O (2) m in SCET I . For the strange collinear quark (in the case of up or down quarks the mass operators are more suppressed), both mass operators are subleading because they start at order λ or λ 2 . Here P̄ = n̄ · P and W is the collinear Wilson line. The relevant radiative corrections at one loop are shown in Fig. 2. The radiative corrections for O (2) m with collinear gluons will be the same as those in Fig. 2 due to the gauge invariance. If we put the external particle off the mass shell, the radiative corrections in Fig. 2 can be evaluated; adding all these, we obtain the result in Eq. (30), since the ultraviolet divergent piece is known.
For the one-loop corrections of the operator O (1) m , which has at least one collinear gluon, it is convenient to use the background gauge field method [26]. Since the product of g and the background field A n is not renormalized in the background field gauge, the number of Feynman diagrams to compute is fairly reduced, and they are shown in Fig. 3. The computation of the diagrams is straightforward using the on-shell dimensional regularization scheme with the external quark momenta p 2 = p ′2 = 0. The results without and with a triple gluon vertex are given by where M (1) i represents the ith diagram in Fig. 3, and N is the number of colors. Summing these two results with C F = (N 2 − 1)/(2N), the radiative correction of the operator O (1) m at one loop is given as From Eqs. (30) and (33), we can see that the radiative corrections of the operators O (2) m and O (1) m in SCET I reproduce the infrared divergences in the full theory. And since the radiative corrections are the same in both theories, the Wilson coefficients of both operators are 1 with no contribution at one loop. We can also extract the counterterm for the quark mass in SCET I . The counterterm Z ξ for the wavefunction renormalization of a collinear quark is given by which is the same as the counterterm in the full theory for the quark field. Therefore we obtain the counterterm for the quark mass from O (1) m and O (2) m as which is the same as the full theory mass renormalization to first order in α s . This is to be expected since we do not integrate out any degrees of freedom relevant to the collinear quark mass from the matching. In summary, it has been shown that the counterterms for the wavefunction and the quark mass are the same as those in the full theory, and there are no contributions to the coefficients of the operators at one loop; that is, the operators are not renormalized to order α s .
The matching between SCET I and SCET II is trivial because there are no hard-collinear degrees of freedom (p 2 hc ∼ QΛ) to be integrated out in the SCET I Lagrangian. Note that the situation is different for heavy-to-light currents with spectator interactions in B decays and for soft-collinear currents [27,28], in which nontrivial Wilson coefficients (or jet functions) arise from the matching between SCET I and SCET II . A more concrete analysis of the hard-collinear modes is given in Refs. [25,28]. However, the operators O (1) m and O (2) m in SCET II remain as they are in SCET I since the collinear momentum p 2 c = m 2 is still very small compared to the matching scale µ ∼ √(QΛ). Therefore in SCET II , the mass operators are regarded as the leading operators for the strange collinear quark, and the operators have the same Wilson coefficients and the same renormalization behavior as in SCET I , with the same mass renormalization given by Eq. (35).
IV. QUARK MASS CORRECTIONS TO B → X s γ DECAYS
Inclusive B decays based on HQET [29] have been widely studied to extract Cabibbo-Kobayashi-Maskawa (CKM) matrix elements and to search for possible new physics. When an emitted photon is energetic in the region of the phase space with p 2 X ∼ m B Λ, SCET along with HQET is applicable and has been successfully applied [1,13]. In this case, the differential decay rate can be given by a factorized form as where ⊗ means the appropriate convolution. Here H is a hard factor obtained from the matching between the full theory and SCET I , J is a jet function obtained by integrating out hard-collinear objects, and f represents the shape function of a B meson, which consists of only soft interactions and is purely nonperturbative.
Recently the corrections of order Λ/m b to B → X s γ and B → X u lν decays in the endpoint region have been investigated using SCET [30,31,32]. Here the factorization formula Eq. (36) still holds, and the subleading shape functions are studied to clarify the uncertainty of the theoretical analysis. When the effect of the strange quark mass of order Λ is included in B → X s γ or B → X s ll, the mass corrections can also give a nonnegligible contribution of order Λ/m b . In this section we focus on this fact and analyze the mass corrections to the decay B → X s γ in the endpoint region. The result is also applicable to B → X s ll, but it is not considered here.
The effective weak Hamiltonian for B → X s γ is given in Ref. [33], and the main contribution comes from the operator Q 7 . Here P R,L = (1 ± γ 5 )/2 and F µν is the electromagnetic field strength tensor. We choose the frame in which the photon momentum q µ is in the n̄ µ direction, q µ = (n · q) n̄ µ /2 = E γ n̄ µ , where the photon energy E γ near the endpoint satisfies m B − 2E γ ≲ Λ. The strange quark can then be taken as a collinear quark in the n µ direction in the rest frame of the B meson.
Let us define the forward scattering amplitude T µν as whereT µν is given byT 13 with the current The inclusive photon energy spectrum can be written as where The forward scattering amplitude T µν (E γ ) in SCET near the endpoint region is given by the factorized form in Eq. (36) and the power counting can be performed systematically. The hard part can be computed from the matching between the full theory and SCET I , and the heavy-to-light current can be expanded in terms of the currents in SCET I in powers of λ ∼ Λ/m b . Then the time-ordered product of the effective currents can be expressed as a convolution of the jet function and the shape function of the B meson by matching onto SCET II . As a result, the forward scattering amplitude is given by the convolution of the hard part, the jet function, and the shape functions.
We investigate the strange quark mass corrections to the inclusive decay rate to first order in Λ/m b and α s . We show that these corrections can also be written in a factorized form and the mass corrections reside only in the jet functions. This mass correction should be included in the subleading contribution along with other subleading corrections from the shape function to order Λ/m b , which was extensively discussed in Refs. [30,31,32].
A. Matching a heavy-to-light current with a quark mass
Let us consider matching the heavy-to-light tensor current J µν = s̄σ µν (1 + γ 5 )b at µ ∼ m b ∼ n̄ · p, where n̄ · p is the large momentum component of the collinear strange quark. The full-theory current can be matched onto the currents in SCET I , in which the hard degrees of freedom such as m b and the large off-shellness p 2 hard ∼ m 2 b ∼ m b n̄ · p are integrated out. By choosing the heavy quark velocity such that v ⊥ = 0 and n · v = n̄ · v = 1, the heavy-to-light current can be expanded in SCET I in terms of effective currents, where the superscripts k (k = 0, 1, 2) denote the order in λ, and another superscript m indicates the operators with the strange quark mass. The currents j (m) iµν are of the same order as j (2) iµν as long as the mass is regarded as m ∼ Λ. From now on, we suppress the exponential factors with the understanding that the label momenta are conserved. Since we focus on the mass corrections of the heavy-to-light currents and their relations to the leading or the subleading currents in λ, we will not consider the currents j (2) iµν any further. The detailed analysis of these currents can be found in Ref. [30].
At tree level, the current operator in the full theory can be expressed in terms of the currents in SCET I as where ξ n is a collinear strange quark field and h v is a heavy quark field.
At order α s , we employ the modified minimal subtraction (MS) scheme using on-shell dimensional regularization. In the full theory, the matrix element of the tensor current J µν at one loop is given as where x = n · p/m b , and all the poles in 1/ε represent the IR divergences. Here we use the equations of motion p / b b = m b b and sp / = ms putting each quark on shell with p 2 b = m 2 b , p 2 = m 2 → 0, keeping the terms to first order in the strange quark mass m. ForJ µν the matrix element at one loop is given by Now we expand the current operators in Eq. (47) in powers of λ using the momentum decomposition p µ = n · pn µ /2 + p µ ⊥ + n · pn µ /2. These operators can be written in terms of the gauge-invariant effective currents as where we keep the effective currents to O(λ 2 ). Here we use the fact that All the Wilson coefficients at tree level are 0 except Due to the reparameterization invariance, some of the Wilson coefficients at subleading order are related to the leading coefficients. They are given by to all orders in α s since those subleading operators with the corresponding Wilson coefficients are obtained by expanding the collinear field in a reparameterization-invariant way. However the subleading operator with B 2 is an independent operator. This operator can be obtained when we consider the heavy-to-light current in which a collinear gluon is emitted from the heavy quark and we integrate out the intermediate-state heavy particle. Therefore the coefficient B 2 is not related to other Wilson coefficients and should be computed independently. The radiative correction for the operator with B 2 was considered at one loop in Refs. [34,35]. Andà 3 andà 5 come from the operator proportional to m s = m in Q 7 , while A 3 and A 5 come from the subleading contribution of the leading operator in Q 7 at higher orders in α s .
In order to match the full theory onto SCET I , we compute the radiative corrections in SCET I . The relevant Feynman diagrams in SCET I at one loop are shown in Fig. 4. We have verified that the infrared divergences of the full theory in Eq. (47) are fully reproduced in the effective theory from the explicit calculations of the diagrams with the self energy of the external quarks, and they cancel out in matching. In computing the Wilson coefficients, the point is that all the radiative corrections in the effective theory are simply zero when the on-shell dimensional regularization scheme is employed, and the Wilson coefficients can be easily obtained.
The difference of the residues in the wave function renormalization between the full theory and the effective theory for the heavy quark at one loop is given by and we find the Wilson coefficients C i for j iµν as x , The Wilson coefficients C 1 (µ) and C 2 (µ) are basically identical to those obtained in Ref. [2] although the operator basis is different. The Wilson coefficients A i ,Ã i and B i except B 2 are new and first calculated here.
Note that not all the operators in the basis {j iµν } are independent; some of them can be written in terms of the others. Therefore the number of independent operators in the basis is four. But it is useful to use this basis because the reparameterization invariance is shown transparently, as in Eq. (54).
B. Jet functions and factorization in SCET II
Let us consider the contribution of the quark mass to the forward scattering amplitude T µν (E γ ). The current J µ in Eq. (41) can be written in the effective theory as where j (j) iµν (ω), (j = 0, 1, m). Here we express j with a delta function. In general, the operators j 2µν (ω) need additional parameters ω ′ at higher orders in α s since it consists of at least three external particles including a collinear gluon, but it is not necessary at one loop since it is sufficient to consider the tree-level Wilson coefficients of j (1) 2µν (ω).
The forward scattering amplitude T µν (E γ ) can be written with the normalization of the B meson states in HQET, where T̂ µν is constructed from the currents in Eq. (56). The momentum r in the exponential factor is defined accordingly. Since the photon momentum q µ is given by q µ = (n · q) n̄ µ /2, the label momentum of the collinear strange quark is fixed, giving n̄ · r = r ⊥ = 0. n · r can be written in terms of the jet momentum, where p X is the momentum of the jet X and we use the momentum conservation m B v µ = q µ + p µ X . Evidently n · r is of order Λ since the mass difference Λ̄ = m B − m b and n · p X are of order Λ.
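The power counting of these statements can be checked with a short kinematic exercise (a sketch based only on the frame choice above, with the photon along n̄ µ and the jet n-collinear):

\[
q^{\mu}=E_{\gamma}\,\bar n^{\mu},\qquad p_X^{\mu}=m_B v^{\mu}-q^{\mu}
\;\Longrightarrow\;
p_X^{2}=m_B^{2}-2m_B E_{\gamma}=m_B\,(m_B-2E_{\gamma}),
\]
\[
n\cdot p_X=m_B-2E_{\gamma}\sim\Lambda,\qquad \bar n\cdot p_X=m_B .
\]

Near the endpoint m B − 2E γ ≲ Λ, so p X 2 ∼ m B Λ is of the hard-collinear scale, and the small light-cone component n · p X is indeed of order Λ, as used above.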
We can express Eq. (59) showing the dependence of the quark mass explicitly aŝ iν (ω, 0) where the second and fourth contributions in Eq. (63) start at order α s since L (1) m contains at least one collinear gluon. Note that all these terms are suppressed by λ 2 compared to the leading contributions in SCET I . Only the third and fourth terms are nonzero due to the spin structure of the currents. The mass term flips the spin of the collinear quark, and therefore there must be even powers of m to conserve spin. The tree-level Feynman diagrams for the mass corrections to B → X s γ in the endpoint region are shown in Fig. 5. Fig. 5 (a) is zero as explained above. Fig. 5 where the leading heavy-to-light currents j (0) kµ (k = 1, 2) are given by The amplitude M ik,µν is given as where the Dirac structure Γ ik,µν is given by In Eq. (66), the ultrasoft interactions were decoupled from the collinear field, and the resultant usoft Wilson line is given by and n · P is of order Λ. In obtaining Eq. (66), we use the definition of the jet function 0|T W † ξ n (z)ξ n W |0 = i n / 2
where P̄ = n̄ · p is the label momentum, and the jet function is a function of n · k only, with J P̄ (k) = J P̄ (n · k).
The matrix element of the remaining operators can be written in terms of the leading shape function, where P v = (1 + v /)/2 is the projection operator for the heavy quark, and f (0) is the leading shape function of the B meson, defined with ǫ ⊥ µν = ǫ µναβ n̄ α n β /2. Note that the final result of Eq. (70) is independent of Γ ik since only the first term in each Dirac structure in Eq. (67) contributes. We can expand the jet functions in powers of m 2 /p 2 X and α s . The jet function at first order in m 2 /p 2 X and in α s can be computed, with the relevant Feynman diagrams shown in Fig. 6. Adding the contributions of Fig. 6, the moment of the differential decay rate is given by the product of the moments of the shape function and the moments of the jet function. The moments J (m) N to order α s can be computed, where ln y is neglected in the limit y → 1, and H j = Σ j k=1 1/k is the harmonic number. In the large N limit, this becomes J (m) N expressed in terms of N̄ = N e γ E .
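Since the moments involve harmonic numbers H j , the large-N limit trades H N for ln N̄ with N̄ = N e γ E . The short numeric check below (an illustrative sketch, not part of the paper) confirms that H N − ln N → γ E , so H N ≈ ln N̄ up to 1/N corrections.

import math

def harmonic(n: int) -> float:
    # H_n = sum_{k=1}^{n} 1/k
    return sum(1.0 / k for k in range(1, n + 1))

gamma_E = 0.5772156649015329  # Euler-Mascheroni constant

for N in (10, 100, 1000, 10000):
    HN = harmonic(N)
    ln_Nbar = math.log(N) + gamma_E   # ln(Nbar) with Nbar = N * exp(gamma_E)
    print(f"N={N:6d}  H_N={HN:.6f}  ln(N e^gamma_E)={ln_Nbar:.6f}  diff={HN - ln_Nbar:.2e}")

The residual difference scales like 1/(2N), so the replacement H N → ln N̄ becomes exact in the large-N limit used in the resummation.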
For comparison, the leading result without the quark mass term can be written in the same way as Eq. (72) with the jet function replaced by with discontinuity where we again neglect ln y terms near the endpoint y → 1. This is consistent with the result in Ref. [30].
The leading differential decay rate and its moments can be written in the same way. The moments of the differential decay rate are factorized into a product of the moments of the jet function and the moments of the shape function of the B meson. The moments of the leading-order jet function become J N (µ), which is given in Refs. [1,13] together with the renormalization group equation for the shape function moments, where γ N is the anomalous dimension of the shape function; for large N and at order α s , it can be given explicitly. Finally, since the hard part and the shape function in the leading mass correction of the moments of B → X s γ are the same as those of the leading-order moments [1], we find that the resummation for the moments of the leading mass correction at the scale µ = m b /N to order α s can be written as in Eq. (98). This result shows that the leading mass corrections are of order (m 2 /m 2 b ) N ln k N̄ in the large N limit, and they are resummed in moment space. Compared with the leading-order moments, the leading mass correction is always suppressed by this factor. A note is in order for Eq. (98), in which the quark mass m is evaluated at m b / √ N instead of m b /N.
From this analysis we find that the quark mass effect on the leading decay rate is of order m 2 /p 2 X ∼ m 2 /(m b Λ). The size of the subleading corrections in Ref. [30] is of order Λ/m b . The quark mass corrections and the subleading corrections are of the same order if we regard the strange quark mass as of order m ∼ Λ. However, the strange quark mass is numerically about 80-130 MeV. Taking Λ ∼ 500 MeV, the quark mass correction is about 2-7% of the other subleading corrections, and less than 1% compared to the leading decay rate. Therefore the mass effect can be regarded as small compared to other subleading corrections, but the important point is that the effect of the light quark mass can be systematically implemented in the theoretical framework of SCET, and as experimental uncertainties become smaller, this effect should also be included.
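The percentages quoted above can be reproduced with a short arithmetic check. The script below is illustrative only; the numerical values of m b and Λ are assumptions chosen for the estimate, not values taken from the paper.

# Size of the strange-quark-mass correction relative to the leading rate and
# relative to the generic Lambda/m_b subleading corrections.
m_b = 4.8   # GeV, bottom quark mass (assumed value)
Lam = 0.5   # GeV, hadronic scale Lambda (assumed value)

for m_s in (0.08, 0.13):                 # GeV, strange quark mass range quoted above
    mass_corr = m_s**2 / (m_b * Lam)     # ~ m^2 / p_X^2 with p_X^2 ~ m_b * Lambda
    subleading = Lam / m_b               # generic size of Lambda/m_b corrections
    print(f"m_s = {m_s*1e3:.0f} MeV: "
          f"~{mass_corr:.3%} of the leading rate, "
          f"~{mass_corr/subleading:.1%} of generic Lambda/m_b corrections")

With these inputs the correction comes out between roughly 0.3% and 0.7% of the leading rate, and between about 3% and 7% of the other subleading corrections, consistent with the estimates in the text.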
V. CONCLUSION
We have considered the contribution of a quark mass of order Λ in SCET and its application to B → X s γ in the endpoint region. The quark mass can be included in the SCET Lagrangian systematically by integrating out hard degrees of freedom. We can find an extended reparameterization invariance including the quark mass, in which we modify only the reparameterization transformation of the spinor for the transformation of type-II. As a result, the SCET Lagrangian can be separated into two reparameterization-invariant combinations. The subleading operators in each combination are related to the leading operators in that combination by the reparameterization invariance, and they have the same Wilson coefficients as those of the leading operators. In particular, we find that the mass operators in the SCET Lagrangian have trivial Wilson coefficients and are not renormalized. These results are explicitly confirmed by the calculation of the corrections to the mass operators in SCET to one loop. The extended reparameterization invariance also constrains some of the Wilson coefficients for the heavy-to-light current operators with the quark mass. It plays an important role in the matching process of the subleading heavy-to-light currents and the higher-order calculations of the time-ordered products of the mass operators.
When we consider B → X s γ in the endpoint region, treating the strange quark mass to be of order Λ, the subleading contribution is of order Λ/m b . We have verified this by matching the heavy-to-light current onto SCET I with the mass operators. Many of the currents with the mass are related to the leading-order currents by the extended reparameterization symmetry. There are also subleading operators which are independent of the leading current, and these are obtained at higher orders in α s . There are no contributions with odd powers of m to the decay rate because of spin conservation. The subleading contributions of order m 2 /p 2 X ∼ Λ/m b come from the time-ordered products of the double spin-flipped currents with the leading heavy-to-light currents, and the double spin-flipped currents are obtained by the time-ordered products of the leading currents with the mass operators in the SCET Lagrangian. The mass correction to the forward scattering amplitude is given by the factorized form which is expressed as a convolution of the m 2 /p 2 X suppressed jet function and the leading shape function of the B meson. The jet functions which are obtained from the matching between SCET I and SCET II can be always expanded by m 2 /p 2 X , and can be computed perturbatively in α s .
In some B decays, the subleading effects can be important for extracting the CKM matrix elements. We have shown that the strange quark mass corrections give nonnegligible contributions of order Λ/m b in B → X s γ, and it would be interesting to see whether the mass corrections can give significant contributions to other B decays. The results in this paper can serve as a basis for explaining SU(3) flavor-symmetry breaking effects, as in B → K * γ and B → ργ, in which the mass effects could be a leading contribution.
Circulating hypervirulent Marek’s disease viruses in vaccinated chicken flocks in Taiwan by genetic analysis of meq oncogene
Marek’s disease (MD) is an important neoplastic disease caused by serotype 1 Marek’s disease virus (MDV-1), which results in severe economic losses worldwide. Despite vaccination practices that have controlled the MD epidemic, current increasing MD-suspected cases indicate the persistent viral infections circulating among vaccinated chicken farms in many countries. However, the lack of available information about phylogeny and molecular characterization of circulating MDV-1 field strains in Taiwan reveals a potential risk in MD outbreaks. This study investigated the genetic characteristics of 18 MDV-1 strains obtained from 17 vaccinated chicken flocks in Taiwan between 2018 and 2020. Based on the sequences of the meq oncogene, the phylogenetic analysis demonstrated that the circulating Taiwanese MDV-1 field strains were predominantly in a single cluster that showed high similarity with strains from countries of the East Asian region. Because the strains were obtained from CVI988/Rispens vaccinated chicken flocks and the molecular characteristics of the Meq oncoprotein showed features like vvMDV and vv+MDV strains, the circulating Taiwanese MDV-1 field strains may have higher virulence compared with vvMDV pathotype. In conclusion, the data presented demonstrates the circulation of hypervirulent MDV-1 strains in Taiwan and highlights the importance of routine surveillance and precaution strategies in response to the emergence of enhanced virulent MDV-1.
Introduction
Marek's disease (MD), caused by Gallid alphaherpesvirus 2 (GaHV-2), is a critical, highly contagious avian viral disease that induces a range of clinical manifestations, including systemic visceral lymphoma, neurological disorders, paralysis, and immunosuppression in infected chickens, resulting in considerable economic losses in the poultry industry [1,2]. The etiological agent GaHV-2, also commonly known as serotype 1 of Marek's disease virus (MDV-1), is a member of the genus Mardivirus in the subfamily Alphaherpesvirinae of the family Herpesviridae, which also contains other, non-oncogenic MDV species: Gallid alphaherpesvirus 3 (GaHV-3) is serotype 2 of MDV (MDV-2), and Meleagrid alphaherpesvirus 1, also known as turkey herpesvirus (HVT), is serotype 3 [3]. Non-oncogenic MDVs were developed as first-generation vaccines and were soon introduced to many countries for MD prevention [4]. Based on the pathotyping protocol that refers to the ability of a virus to overcome specific vaccinal protection, the population of MDV-1 can be classified into various pathotypes, including mild (m), virulent (v), very virulent (vv), and very virulent plus (vv+) [5]. An increasing emergence of MD cases among vaccinated chicken flocks has been reported in many countries, which suggests a probable rise of evolved MDV-1 field strains with enhanced virulence [6,7].
The MDV-1 genome encodes more than 200 genes, some of which are unique oncogenes primarily involved in viral pathogenesis [8]. The Meq oncoprotein encoded by the meq oncogene was the first discovered oncoprotein whose N-terminal basic leucine zipper (bZIP) domain and C-terminal proline-rich transactivation domain were identified as major functional factors associated with MDV-1 virulence and oncogenicity [9]. Recent studies have reported that specific amino acid mutations, proline content, and the number of 4-proline-repeat stretches (PPPPs) within the Meq oncoprotein are correlated with MDV-1 virulence [10,11]. Therefore, in addition to the laborious in vivo pathotyping assay, alternative methods based on the molecular characteristics of meq oncogene sequences and the corresponding encoded Meq oncoprotein have been commonly used for phylogenetic analysis and virulence prediction of novel MDV-1 strains, and have been published in numerous studies from various countries [12][13][14].
Despite the wide and routine application of vaccination, outbreaks of MD still occasionally occur on vaccinated chicken farms in numerous Asian countries, including China [15], India [16], Japan [17] and Thailand [18]. During the past 20 years, MD-related cases have frequently been found in chicken populations in Taiwan; however, the most recent published report of very virulent MDV-1 appearing and circulating among local chickens or layers in poultry flocks in Taiwan dates from before the 21st century [19]. The lack of continuous monitoring of the genotypes and virulence of the circulating MDV-1 strains in Taiwan has made MD prevention a thorny issue, and may result in inadequate responses to a sudden MD epidemic. In this study, we present the phylogenetic and virulence characteristics of currently circulating MDV-1 strains in Taiwan through sequence analysis of the meq oncogene obtained from vaccinated chicken flocks from 2018 to 2020.
Samples
From January 2018 to December 2020, chicken cases pathologically diagnosed as suspected MD at the Animal Disease Diagnostic Center of the National Pingtung University of Science and Technology (NPUST) were included in this study. The submitted chickens, including layers and native chickens, had been vaccinated with commercial univalent or bivalent MDV vaccines. Gross lesion tissues from these MD-suspect chickens were examined by PCR assay for MDV-1 detection [20,21] and then stored at -80˚C for further gene analysis.
PCR for meq oncogene
The meq oncogene was amplified with the primers EcoR-Q-for: GGTGATATAAAGACGATAGTCATG and EcoR-Q-rev: CTCATACTTCGGAACTCCTGGAG by conventional PCR to produce a 1,625-bp DNA fragment, as described previously [12].
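For readers who wish to verify primer placement against a reference meq-region sequence, a minimal in-silico PCR sketch is given below. The primer strings are those quoted above; the template file name is a hypothetical placeholder, and the ~1,625-bp product is the size reported in the text.

# Illustrative in-silico check of the meq amplicon using the published primers.
FWD = "GGTGATATAAAGACGATAGTCATG"   # EcoR-Q-for
REV = "CTCATACTTCGGAACTCCTGGAG"    # EcoR-Q-rev

def reverse_complement(seq: str) -> str:
    comp = {"A": "T", "T": "A", "G": "C", "C": "G", "N": "N"}
    return "".join(comp[b] for b in reversed(seq.upper()))

def amplicon_length(template: str):
    """Return the predicted amplicon length, or None if a primer site is missing."""
    template = template.upper()
    start = template.find(FWD)                       # forward primer binds the plus strand
    end = template.find(reverse_complement(REV))     # reverse primer binds the minus strand
    if start == -1 or end == -1 or end < start:
        return None
    return end + len(REV) - start                    # product spans both primer sites

# Example usage (hypothetical template file containing a meq-region sequence):
# template = open("meq_region.fasta").read().replace("\n", "")
# print(amplicon_length(template))   # expected ~1,625 bp for this primer pair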
Cloning and sequencing
The amplified meq oncogene products were purified by the FavorPrep TM Gel purification Mini Kit (FAVORGEN 1 BIOTECH CORP., Taiwan) according to the manufacturer's instructions and were cloned into T-vector using the T&A Cloning Vector Kit (Yeastern Biotech Co., Ltd., Taiwan).After blue-white screening, the plasmid-transformed colony was picked and cultured to acquire meq gene-carried plasmids for sequencing.Consensus sequences of the meq oncogene, which were confirmed by Sanger sequencing, were further verified and assembled using BLAST alignment analysis.The obtained nucleotide sequences of meq oncogenes of Taiwanese MDV-1 strains were submitted to the GenBank database with the accession numbers OQ576796-OQ576813.
Genetic analysis
A total of 37 selected meq oncogene sequences were retrieved from the GenBank database as references (Table ) for comparison with the sequences of the Taiwanese strains used in this study. Nucleotide and amino acid identities were determined by alignment of the Taiwanese strains and references using Clustal W software [30]. The phylogenetic tree was constructed with MEGA version X [31] using the neighbor-joining (NJ) algorithm under the Tamura-Nei model with 1,000 bootstrap replicates. The Meq oncoprotein sequences of the Taiwanese strains were compared with selected references to identify specific substitutions of deduced amino acids. Additionally, the proline content and the number of PPPP motifs within the Meq oncoprotein of the Taiwanese strains were also evaluated.
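To make the virulence-marker calculation concrete, a small sketch of the proline-content and PPPP-motif evaluation is shown below. The counting convention (non-overlapping runs of four or more consecutive prolines) and the example sequence are illustrative assumptions and may differ from the conventions used in the cited studies.

import re

def proline_metrics(meq_protein: str) -> dict:
    seq = meq_protein.upper()
    n_pro = seq.count("P")
    # Count non-overlapping runs of four or more consecutive prolines as PPPP stretches.
    pppp_stretches = len(re.findall(r"P{4,}", seq))
    return {
        "length": len(seq),
        "proline_count": n_pro,
        "proline_percent": 100.0 * n_pro / len(seq) if seq else 0.0,
        "pppp_motifs": pppp_stretches,
    }

# Example usage with a made-up fragment (not a real Meq sequence):
print(proline_metrics("MSQEPEPPPPAEPPPPLPPPAPPPPSG"))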
Ethics statement
The study was approved by the authors' institution (Animal Disease Diagnostic Center of National Pingtung University of Science and Technology), and the animals used for necropsy were submitted with the consent of their owners. In addition, this study did not involve live animal experiments or non-human primate test subjects, so there are no relevant details about experimental animals.
Profiles of collected samples
From 2018 to 2022, the MDV-1 detection rates among submitted chicken cases were 7.6%, 4.6%, 3.24%, 2.2%, and 2.2%, respectively. The information and the coinfection status with other avian diseases for the randomly selected 17 cases are shown in Table 1. Interestingly, two MDV-1 strains in our collected materials, TW/141A/19 and TW/141B/19, were detected from the same chicken flock, indicating that different MDV-1s can simultaneously exist in an identical population. The presence of other poultry pathogens in the examined samples, along with MDV, indicates that pathogen coinfections in chicken flocks currently occur frequently in Taiwan. Notably, no positive detection of the oncogenic viruses ALV and REV was observed in any of the MD-positive materials. The chicken cases in this study mainly showed lymphoma lesions in a variety of organs and tissues, such as the ovary, lung, heart, mesentery, kidney, liver, spleen, thymus, pancreas, proventriculus, intestine, and skeletal muscle, and a few of them had neuronal lesions, indicating that the visceral lymphoma typical of MD, rather than ALV or REV, was the predominant presentation (Fig 1).
Phylogenetic analysis of meq oncogenes of Taiwanese MDV-1 strains
A total of 18 MDV-1 strains were obtained from 17 vaccinated chicken flocks in Taiwan between 2018 and 2020.The genetic features of these 18 obtained meq oncogenes were characterized through phylogenetic analysis with 37 selected reference genes of identified strains, which were collected from various locations and available in the GenBank database.Among the selected reference strains, 25 of the strains were pathotyped.The phylogenetic tree demonstrated that the analyzed meq oncogenes in this study could be separated into 4 clusters (Fig 2).
Cluster 1 involved all Chinese strains, the Thai strains, the Taiwanese strains, and some of the Japanese strains. The vaccine strains, mildly virulent strains, and Australian strains were all included in Cluster 2. Most classic USA strains representing the very virulent and very virulent plus pathotypes were grouped into Cluster 3, whereas some of the USA strains fell into Cluster 4 together with strains from India and Japan. The homology among the members of Cluster 1 ranged over 99.3-100% nucleotide identity and 98.5-100% amino acid identity. Notably, five of the 18 Taiwanese strains showed the closest relationship with the vvMDV strain LS of China and 4 recently identified strains of Thailand (100% nucleotide and amino acid identity). In addition, the distribution of the other 13 Taiwanese strains across several branches of Cluster 1 indicated a high probability of individual evolution of MDV-1. These results suggested that MDV-1 has been circulating among chicken flocks in Taiwan for some time, which has brought about geographical genetic polymorphism.
(Table 2 footnote: b, amino acid positions are based on the 59-a.a. insertion-containing Meq oncoprotein, which is found predominantly in lower-virulence strains.) Although genetic analysis demonstrated the existence of highly virulent MDV-1 strains in Taiwan, the four unique substitution positions 119(R), 153(Q), 176(A) and 277(P) were not present when compared with the classic vv+MDV strains N and 648A of the USA. Moreover, as in previous reports, the proline content and the number of PPPP repeats within the Meq oncoprotein were also used as virulence predictors for the Taiwanese strains [11,36]. Compared with vaccine and mild MDV-1 strains (Table 3), the Taiwanese strains lacked insertions and showed relatively lower proline contents as well as lower numbers of PPPP motifs, which supported the high-virulence prediction.
Discussion
This is the first report on MDV-1 virulence based on molecular analyses in nearly 20 years, following the earlier study on the polymorphism of MDV-1 strains and the presence of vvMDV in Taiwan [19]. The present study revealed the occurrence and genetic properties of the MDV-1 field strains circulating in Taiwan based on the sequence analysis of 18 virulence-associated meq oncogenes obtained from 17 vaccinated chicken flocks collected during 2018-2020. Therefore, understanding the genetic characterization of Taiwanese MDV-1 has become a primary concern for disease prevention and control. Phylogenetic analysis demonstrated that all Taiwanese strains were grouped into the same cluster, involving predominantly highly virulent MDV-1 strains from China and field strains from Thailand and Japan. Some Taiwanese strains showed complete genetic identity to the LS strain, which was isolated from Sichuan province of China and classified as the vvMDV pathotype [33]. Highly similar features are also present in Thai field strains, which were recently reported to be in close phylogenetic relationship with MDV-1 strains from China [18], indicating that these field MDV-1 strains may share a common ancestor and evolutionary direction. Interestingly, Guangxi Province is geographically closer to Taiwan and Thailand than Sichuan Province is; however, based on phylogenetic analysis, the strains from Guangxi, GX070060 and GX070079, showed a more distant phylogenetic relationship with the Taiwanese and Thai strains. The reasons for these findings are still unknown, but the possibility of pathogen transmission by wild birds could be considered [37]. In the present study, the 'LS-like' MDV-1 field strains, including TW/011/18, TW/123/19, TW/141B/19, TW/029/20, and TW/116/20, were obtained from different flocks in various collection years, indicating that these strains are dominant strains persistently circulating on chicken farms in Taiwan. The persistent detection of such strains from vaccinated flocks might be due to genetic adaptation in the chicken flocks and farms and to immune escape from vaccine protection [38].
Taiwanese MDV-1 strains were all clustered together in Cluster 1 of the phylogenetic tree and spread in several different branches, which revealed not only geographically restricted evolution, but also the genetic diversity as in previous investigations [39,40].Notably, the strains from Southern Japan were grouped into the cluster with Taiwanese MDV-1 and Chinese strains, whereas the Northern Japanese strains were clustered into another group with USA and Indian strains, suggesting a possible independent construction of geographical phylogeny in East Asia.
Spontaneous mutations of oncogenes, especially the meq oncogene, in the MDV-1 genome have been regarded as playing important roles in increasing virulence [41]. The Meq oncoprotein, known to play a critical role in MDV-1 pathogenicity, has shown unexpectedly higher mutation rates than DNA viruses in general, resembling those of RNA viruses [42]. Although the causes of such a high mutation frequency of MDV-1 have not been fully clarified, most investigations have demonstrated that the improper use of vaccines can induce positive selection on the field viruses, eventually resulting in viral diversity [43,44]. With the annually found MD clinical cases and the genetic diversity of meq oncogenes in our results (Fig 2, Cluster 1), the positive selection of the viruses in vaccinated chicken flocks of Taiwan may drive viral evolution toward enhanced virulence of MDV-1. Specific sequence characteristics of the Meq oncoprotein have been reported as a predictor of MDV-1 pathotype and can be applied to virulence prediction for newly isolated MDV strains instead of in vivo classification [16,18,36]. It has been reported previously that amino acid mutations at positions 71 (Ala), 77 (Glu), 80 (Tyr), 115 (Ala), and 176 (Arg) were the main feature of highly virulent Chinese MDV-1 strains [14,17]. The results of the sequence analyses in our study showed that all obtained Taiwanese MDV-1 field strains presented the molecular characteristics of these mutations, as in the previous reports on Chinese strains, supporting the highly virulent potential of these Taiwanese MDV-1 strains. In addition, the mutations at positions 77, 80, 115, and 176 of the Meq oncoproteins seem to be common features of Chinese, Thai, Japanese, and Taiwanese MDV-1 field strains, and could be considered accessible markers for the molecular identification of East and Southeast Asian MDV-1 strains.
Insertions appearing in the meq oncogenes of mild and attenuated strains, such as CU-2 and CVI988/Rispens, cause the expression of longer Meq oncoproteins, resulting in higher proline contents and more PPPP motifs than in virulent MDV-1 strains; these features are correlated with low-virulence characteristics of MDV strains. Conversely, the meq oncogenes of the N and 648A strains of the USA carry no insertions and have lower proline contents and fewer PPPP motifs, corresponding to highly virulent MDV strains [11]. Our findings in the present study showed relatively lower proline contents and fewer PPPP motifs, with values between those of the vvMDV and vv+MDV USA strains. In addition, the relatively lower proline contents and less frequent PPPP motifs of the Taiwanese strains were similar to the values of the vvMDV and vv+MDV Chinese strains. These results indicated that the circulating MDV-1 field strains in Taiwan are potentially hypervirulent, but their exact pathotypes still require further classification by in vivo pathotyping experiments.
Vaccination of flocks remains a vital and effective way to control MDV epidemics [4]. In Taiwan, vaccination programs for young chickens with bivalent vaccines containing the two commercial live strains CVI988/Rispens of MDV-1 and FC126 of HVT have been commonly practiced across the poultry industry. To the best of our knowledge, bivalent vaccination can produce a protective immune response against most virulent MDVs, including the vvMDV and vv+MDV pathotypes, but clinical MD cases due to immune failure still occur in chicken flocks around the world, including in Taiwan, which draws close attention to problems regarding vaccine application. Current commercial MD vaccines are all cell-associated types with more transportation, storage, and administration difficulties than other live vaccines. Vaccination efficiency can be affected by the reconstitution conditions, performance, dose uniformity of vaccines, etc. [45,46]. In Taiwan, we have examined immune status by detecting MDV from feather tips 14-21 days post-vaccination after pullets were given CVI988 and/or HVT-FC126 at 1 day of age. Only 48% (16/33) of chicken flocks were vaccinated successfully (achieving 70% immunization coverage). After monitoring 7 flocks from a layer breeding farm, in which the pullets were vaccinated with the same batch of CVI988 vaccine and the same injection machine and procedure were used, various detection rates of the vaccine virus were found among the 7 flocks (30-90%) [47]. Non-uniform vaccine doses received by pullets were considered a possible reason for the uneven vaccination efficacy, and applying well-mixed vaccines is essential to prevent immune failure.
Coinfection with avian viruses, such as MDV, IBDV, NDV, CAV, reovirus, and reticuloendotheliosis virus, can induce immunosuppression in infected hosts, reducing vaccination efficiency [48,49]. The coexistence of poultry immunosuppressive disease viruses together with MDV was detected in the present study, suggesting that chicken flocks in Taiwan may also suffer from immunosuppression and may not develop proper protection after vaccination.
In conclusion, the phylogenetic findings on the geographical diversity of meq oncogenes suggest ongoing evolution of the circulating Taiwanese MDV-1 strains, which have already adapted to the chicken farms in Taiwan. The circulation of field MDV-1 strains in Taiwan is dominated by a cluster with potentially hypervirulent characteristics. Routine surveillance of field MDV-1 strains and monitoring of the immune status on poultry farms will be needed to develop effective vaccines and control strategies in response to the emergence of Taiwanese MDV-1 strains with enhanced virulence.
Fig 1.The gross lesions of MDV-1 clinical cases.The gross lesions in MDV-1 infected chickens in this study include: Enlarged liver (A) with white neoplastic nodules; the variable size of multifocal grayish-white nodules in the heart (B) and spleen (C); numerous white nodules throughout the intestinal serosa surface (D); thickened proventricular wall with multiple white protrusion (E); multifocal white nodules in the kidney (F); neoplastic mass occupied ovary (F); enlarged left sacrum nerve plexus (G).https://doi.org/10.1371/journal.pone.0303371.g001
Fig 2. Phylogenetic tree of MDV-1 strains.The phylogenetic tree was built by using neighbor-joining (NJ) based on the complete nucleotide sequences of meq oncogene obtained from reference MDV-1 strains and Taiwan field strains.All reference strain names are labeled with the corresponding abbreviation of countries.The symbols indicate respective field MDV-1 strains in different countries and the attenuated/mild strains.https://doi.org/10.1371/journal.pone.0303371.g002
Investigation of Serum and Macular Carotenoids in Central Serous Chorioretinopathy
Purpose This study aimed to evaluate serum lutein and zeaxanthin levels and macular pigment optical density (MPOD) in central serous chorioretinopathy (CSC). Methods Fifty-four patients with acute CSC (28–56 years old; 44 men and 10 women) and 62 matched controls were enrolled. Serum lutein and zeaxanthin were measured using the high-performance liquid chromatography–tandem mass spectrometry (HPLC–MS/MS) method. MPOD was measured at 7° of eccentricity and reported in parameters as “max” and “mean” optical density (OD) (Visucam 200; Carl Zeiss Meditec). MPOD was re-measured in 9 patients whose subretinal fluid was absorbed. Results The average max OD and the mean OD in CSC were 0.275 ± 0.047 d.u. and 0.098 ± 0.018 d.u., respectively, which were significantly lower than the control (p < 0.001). The average MPOD value in the unaffected eyes of patients with CSC was 0.298 ± 0.045 for max OD, 0.106 ± 0.017 for mean OD, and both were significantly lower compared with the affected eyes (p < 0.001 for max OD, p = 0.01 for mean OD). In the 9 follow-up patients, the decrease in MPOD was partially recovered. The mean serum level was 409.80 ± 182.52 ng/ml for lutein and 22.97 ± 12.23 ng/ml for zeaxanthin in patients with CSC. In controls, the mean serum level was 393.38 ± 202.44 ng/ml for lutein and 22.16 ± 10.12 ng/ml for zeaxanthin. The difference was not statistically significant (p = 0.649, p = 0.698, respectively). Conclusion MPOD decreased within 7° of eccentricity in CSC without serum lutein and zeaxanthin changes. The decrease may be due to the subretinal fluid. Whether local oxidative stress is involved in CSC and the supplementation with lutein and zeaxanthin is helpful for CSC requires further investigation.
INTRODUCTION
Of the over 1,000 carotenoids found in nature, only lutein and zeaxanthin and their metabolite (meso-zeaxanthin) are present in the human macula, and they are called macular pigments (1). Most of the macular pigments are located in the Henle fiber layer. Some of them are also distributed in the photoreceptor inner segments and rod outer segments (2,3). Owing to their function of filtering blue light, lutein and zeaxanthin are important antioxidant materials (4). Macular lutein and zeaxanthin must be acquired through dietary intake, as they are not synthesized endogenously (5). The uptake and transport of lutein and zeaxanthin from the ingested food matrix to the retina is a complex multistep process and the exact mechanism remains unknown, but the retinal pigment epithelium (RPE) and choroid play a role in the process (6,7).
Central serous chorioretinopathy (CSC) is a common disease characterized by serous retinal detachment (SRD) involving the macula (8). The typical findings on fluorescein fundus angiography (FFA) are RPE leakage and SRD. It often affects young men and ranks fourth among non-surgical retinal diseases. The underlying mechanism of the disease is still under exploration, and increasing evidence has shown that choroidal and RPE dysfunction play an important role in its pathogenesis (8,9). Recently, some researchers have suggested that oxidative stress may play a role in the pathogenesis of CSC (10)(11)(12). Sawa et al. (13) found that short-term lutein supplementation did not increase the MPOD values of patients with CSC, but could prevent the decline of MPOD in patients with CSC with lower plasma lutein.
The concentration of lutein/zeaxanthin/meso-zeaxanthin within the macula, or macular pigment optical density (MPOD), is used as a surrogate marker of macular health. Higher levels of MPOD are thought to preserve retinal tissue, specifically the layers containing photoreceptors in the fovea, and decreased macular pigment has been reported previously in macular disease, including age-related maculopathy, AMD, and CSC (13,14). Moreover, depletion of carotenoids may result in lower MPOD and a greater risk of incident retinopathy and visual dysfunction. A previous study by Sasamoto et al. (15) reported decreased MPOD values in both chronic CSC eyes and fellow eyes using the two-wavelength autofluorescence spectrometry method, without measuring systemic lutein and zeaxanthin concentrations. However, the relationship between serum lutein and zeaxanthin and MPOD evaluated using the one-wavelength reflectometry method has not been studied in Chinese patients with CSC with a relatively large sample size.
The goal of the current study was to evaluate serum lutein and zeaxanthin levels and MPOD in Chinese patients with acute CSC. Investigation of the relationship between serum and macular carotenoid levels in patients with CSC may provide new perspectives on the disease and its management.
MATERIALS AND METHODS
The study procedure was approved by the Ethics Committee of Zhongshan Ophthalmic Center of Sun Yat-sen University. The protocol was strictly conducted in accordance with the Declaration of Helsinki. After explaining the purpose and procedures of the study, all subjects signed a written informed consent.
Subjects
Patients with treatment-naïve acute CSC were enrolled from the outpatient clinic of Zhongshan Ophthalmic Center. Healthy volunteers free of ocular manifestations, recruited through oral and poster advertisements around our hospital, were included as the control group. They all underwent physical examination, including height, weight, and blood pressure. Body mass index (BMI) was calculated as a person's weight in kilograms divided by the square of height in meters. Oral questionnaires were administered to investigate the course of the disease and smoking status (smokers were defined as smoking at least one cigarette a day for at least 6 months). Ophthalmic examinations were performed in all patients with CSC, including visual acuity, slit-lamp biomicroscopy, direct ophthalmoscopy, color fundus photography, and fundus fluorescein angiography (FF 450 plus, Carl Zeiss, Germany). Visual acuity was measured using the Early Treatment Diabetic Retinopathy Study (ETDRS) chart and recorded in the logarithm of the minimum angle of resolution (logMAR) format.
The inclusion criterion for acute CSC was focal SRD involving the macula with one or more leakage points on FFA within 3 months.
For both patients and control participants, the general exclusion criteria were as follows: uncontrolled hypertension (systolic pressure consistently greater than 140 mm Hg or diastolic pressure consistently 90 mm Hg or higher), hyperlipidemia (total cholesterol more than 5.18 mmol/L or triglycerides more than 1.7 mmol/L), a medical history that might influence the absorption of carotenoids, such as lutein supplementation, and special dietary habits such as vegetarianism.
We also prospectively evaluated the change in MPOD during the course of the disease. Among the patients with acute CSC, those without medication or laser treatment were followed up over a 3-month period using optical coherence tomography (OCT) (Spectralis, Heidelberg Engineering, Germany). Only 9 patients achieved spontaneous absorption of subretinal fluid. MPOD values were measured at baseline and at the latest visit for these 9 patients.
Macular Pigment Optical Density Measurement
Macular pigment optical density was detected using the one-wavelength reflectometry method (Visucam 200; Carl Zeiss Meditec) as previously described (16). All subjects' pupils were dilated to a minimum diameter of 7 mm using 1% tropicamide before measurement. First, a 45° color fundus photograph was taken. Then, the MPOD photograph was taken in the 30° field with the flash intensity set to 12. Software in the instrument calculated the MPOD value within 7° of eccentricity. The parameters included max optical density (OD) and mean OD. Max OD refers to the maximum OD of the macular pigment carotenoids, and mean OD refers to the mean OD of the macular pigment carotenoids in relation to the surface area.
Serum Analysis
We used the high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) method (Shimadzu LC20AD-API 3200MD TRAP) to assay the serum levels of lutein and zeaxanthin. Antecubital vein peripheral blood samples were obtained in the morning after an overnight fast. Serum was separated after centrifugation at 2,000 g for 10 min at 4°C and stored at -80°C for later analysis. A 100 µl sample was mixed with 300 µl of chloroform-methanol solution and 300 µl of pure water for 1 min on a shaker. The mixture was then centrifuged at 13,200 g for 8 min in a refrigerated centrifuge. After that, the upper organic phase was immediately evaporated to dryness under a stream of nitrogen using a sample concentrator in an Eppendorf (EP) tube. The water phase was mixed with 300 µl of n-hexane on a shaker for 1 min and centrifuged at 13,200 g for 8 min in the refrigerated centrifuge. The upper organic phase was transferred to the previous EP tube and evaporated to dryness. The dried samples were reconstituted in 100 µl of methanol, and 50 µl was used for HPLC analysis.
Statistical Analysis
SPSS software, version 19.0 (SPSS Inc., Chicago, IL, United States) was used for the analysis. The chi-square test was used to compare the sex and the smoking status between patients with CSC and controls. The independent t-test was used to compare the difference of age, visual acuity, and MPOD parameters between patients with CSC and controls. Paired t-test was used to compare visual acuity and MPOD differences between affected and unaffected eyes of patients with CSC, and in patients who had been followed up with the previous measurements. Pearson correlation was used to assess the association between MPOD value and visual acuity. A p-value less than 0.05 was considered to be statistically significant.
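The comparisons described above correspond to standard routines; as a minimal illustrative sketch (not the authors' actual analysis, which was run in SPSS), the same tests can be expressed with SciPy. The arrays and the control-group sex split below are hypothetical stand-ins for the patient data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the measured data (sizes follow the study design).
mpod_csc = rng.normal(0.275, 0.047, 54)      # max OD, affected eyes of CSC patients
mpod_ctrl = rng.normal(0.348, 0.040, 62)     # max OD, control eyes
mpod_fellow = rng.normal(0.298, 0.045, 54)   # max OD, unaffected fellow eyes
logmar_va = rng.normal(0.137, 0.216, 54)     # visual acuity (logMAR), affected eyes

# Independent t-test: CSC patients vs. controls.
t_ind, p_ind = stats.ttest_ind(mpod_csc, mpod_ctrl)

# Paired t-test: affected vs. unaffected fellow eyes of the same patients.
t_rel, p_rel = stats.ttest_rel(mpod_csc, mpod_fellow)

# Chi-square test on a 2x2 sex contingency table (control split is hypothetical).
sex_table = np.array([[44, 10],    # CSC: men, women
                      [48, 14]])   # controls: men, women
chi2, p_chi, dof, expected = stats.chi2_contingency(sex_table)

# Pearson correlation between MPOD and visual acuity.
r, p_corr = stats.pearsonr(mpod_csc, logmar_va)

print(p_ind, p_rel, p_chi, p_corr)
```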
RESULTS
A total of 54 patients with acute CSC and 62 volunteers who met the criteria were enrolled in the study. Table 1 shows the general characteristics of patients with CSC and controls. Age, sex, BMI, and smoking status were not significantly different between the groups. In patients with CSC, the mean serum level of lutein was 409.80 ± 182.52 ng/ml and 22.97 ± 12.23 ng/ml for zeaxanthin. In normal controls, the mean serum level was 393.38 ± 202.44 ng/ml for lutein and 22.16 ± 10.12 ng/ml for zeaxanthin. The serum levels of lutein and zeaxanthin were not significantly different between the CSC group and the control group (p = 0.649, p = 0.698, respectively).
The average MPOD value in patients with acute CSC was 0.275 ± 0.047 d.u. for max OD and 0.098 ± 0.017 d.u. for mean OD. In the control group, the average MPOD value was 0.348 ± 0.040 d.u. for max OD and 0.129 ± 0.022 d.u. for mean OD. Independent t-test showed that patients with acute CSC had lower MPOD values than controls, and the differences were significant (p < 0.001).
The average MPOD value in the unaffected eyes of patients with CSC was 0.298 ± 0.045 for max OD and 0.106 ± 0.017 for mean OD, both significantly higher than in the affected eyes (p = 0.018 for max OD, p = 0.015 for mean OD, paired t-test).
In terms of visual acuity, the mean logMAR visual acuity was 0.137 ± 0.216 in the affected eyes of patients with CSC, 0.044 ± 0.134 in the unaffected eyes, and -0.016 ± 0.064 in the control group. The differences were significant (affected vs. unaffected eyes in patients with CSC, p < 0.001, paired t-test; affected eyes in patients with CSC vs. control group, p < 0.001, independent t-test).
In the 9 patients whose subretinal fluid resolved spontaneously at the 3-month follow-up, MPOD values showed a rising tendency and were significantly correlated with visual acuity at disease onset, as shown in Figure 1. Table 2 shows the correlation between MPOD value and visual acuity in the 9 follow-up patients during the period of onset and recovery of the disease. Max OD was 0.241 ± 0.045 d.u. at disease onset and 0.270 ± 0.048 d.u. at the recovery time, with a marginally significant difference (p = 0.043, paired t-test). For mean OD, the value was 0.094 ± 0.018 d.u. at disease onset and 0.104 ± 0.018 d.u. at recovery time; the difference was not significant (p = 0.075, paired t-test). At disease onset, visual acuity was significantly correlated with MPOD, for max OD, rs = -0.771, p = 0.015, for
DISCUSSION
Our study demonstrated that MPOD values were decreased in patients with acute CSC using the one-wavelength reflectometry method. The decreased MPOD in patients with CSC is consistent with a previous study carried out in Japan, which included 70 patients with chronic CSC and 41 patients with acute CSC and used the autofluorescence method (15). What caused the reduction of MPOD in patients with CSC remains unknown. Possible reasons are discussed as follows. First, recent studies have shown that oxidative stress might be involved in the pathogenesis of CSC (10)(11)(12). The blood flow regulation of choroidal vessels may be affected by nitric oxide and free radicals. In this way, carotenoids could be overconsumed locally because of their antioxidant function and, as a result, MPOD is decreased.
Second, abnormalities of the choroidal vessels and RPE in patients with CSC may account for this result. The majority of macular pigments are distributed in the outer plexiform layer, which has no capillaries. Carotenoids are transported to the retina by the blood circulation (7,17). Through the choroidal circulation and the RPE, carotenoids are captured and accumulated in the neurosensory retina (18,19). Choroidopathy and epitheliopathy both take part in the pathogenesis of CSC. Patients with CSC have a thicker choroid (pachychoroid) than normal subjects in both affected and unaffected eyes, owing to dilated large choroidal vessels (20)(21)(22). Various studies have shown that dysregulation of choroidal blood flow exists in patients with CSC (23,24). These abnormalities would influence the transportation and capture of carotenoids.
Our study also found that MPOD values in the unaffected fellow eyes of patients with CSC were significantly greater than in the affected eyes. At disease onset, a significant correlation was found between visual acuity and MPOD. Furthermore, in the follow-up of 9 patients, MPOD values partially recovered with the same tendency as visual acuity. The parallel changes in MPOD and visual acuity suggest that MPOD is an indicator of macular health. Besides its blue light-filtering function, lutein is reported to protect photoreceptor cells (25,26) and is closely related to visual functions such as photostress recovery, glare disability, and contrast sensitivity (27). The results also imply that subretinal fluid may have an impact on the measurement of MPOD in patients with acute CSC. The fluid underlying the neurosensory retina could damage the RPE and photoreceptors, thus influencing the capture and accumulation of carotenoids (2,7). In addition, subretinal fluid may affect lutein transport from the choroid to the retina and cause a shortage of macular pigment in the retina. Methodological artifacts of one-wavelength reflectometry should also be considered a possible source of bias in these results. No other obvious differences in BMI or smoking status were found between these 9 patients and the others, which may be partially due to the limited number of cases.
Patients with CSC had no differences in serum lutein concentration compared with healthy subjects in the current study. To our knowledge, serum lutein and zeaxanthin levels have not previously been measured in a relatively large sample of patients with CSC. Sawa et al. (13) measured plasma lutein concentrations in patients with CSC and healthy subjects and found that patients with CSC had a lower plasma lutein concentration than healthy subjects, which is inconsistent with our findings. The following reasons may provide a plausible explanation: we included more patients with CSC and healthy participants, and we focused only on patients with acute CSC, whereas Sawa's study involved patients with chronic CSC.
Although CSC is a common disease in the ocular fundus outpatient department, many mysteries about its pathophysiology have remained unresolved for decades. Previous studies had shown that oxidative stress may play a role in CSC (12). Antioxidant supplementation including lutein may improve best corrected visual acuity in patients with chronic CSC (28). The local carotenoid decrease without serum changes in patients with acute CSC provides new perspectives for understanding this disease. More attention should be paid to the local changes when it comes to antioxidant supplementation in CSC.
There are some limitations to our study. Choroidal thickness, central retinal thickness, oxidative parameters, and visual function tests such as contrast sensitivity were not examined, which limits our understanding of the relationship between lutein and zeaxanthin levels and the pathogenesis of CSC. The follow-up duration was limited to 3 months, only 9 patients were followed up, and serum carotenoids were not re-measured. Information on patients without fluid resorption was lacking, which would have helped us to further investigate the clinical use of MPOD in CSC. Randomized controlled interventional studies with more patients and a longer follow-up time would be helpful to understand the relationship between carotenoids and CSC.
Our research measured serum lutein and zeaxanthin levels and MPOD values in patients with CSC and found that MPOD was reduced in patients with CSC using the one-wavelength reflectometry method. Serum lutein and zeaxanthin levels did not differ significantly between patients with CSC and healthy controls. The local difference in macular pigment may be affected by subretinal fluid. Whether local oxidative stress is involved in CSC and whether supplementation with carotenoids is helpful for the recovery of CSC requires further investigation.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Ethics Committee of Zhongshan Ophthalmic Center of Sun Yat-sen University. The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
AUTHOR CONTRIBUTIONS
YJ, YG, YS, and YZ contributed to the study design and data collection, analysis, and interpretation. LM, ML, CZ, and FW contributed to the data collection and data interpretation. All authors revised the manuscript and gave final approval for submission.
ANALYSIS OF THE POTENTIAL OF THE VISEGRAD GROUP COUNTRIES IN SELECTED AREAS OF THEIR ACTIVITIES
Purpose: The aim of this article was to assess the potential of the Visegrad Group countries in selected areas of their activities.
Introduction
The conditions for the functioning and development of modern enterprises are diverse and multidimensional, resulting from, among others, the ongoing globalization process, the dynamic development of ICT technologies, socio-cultural transformation, and changes in the methods of organizing and conducting resource management processes. The freedom of movement of human, material, financial and information capital that accompanies the globalization process significantly influences the transformation of the economic, sociocultural, political, and legal spheres, contributing to the development of many countries worldwide.
Economic growth and technological progress cause specific consequences that should be considered and assessed from different perspectives. Wide access to modern tools, devices, and technologies provides society with a high level of comfort in work, living, and traveling (Motowidlak, 2017). On the other hand, one consequence of civilizational development based, among others, on consumerism is the deterioration of the natural environment, posing a threat to current and future generations (Kiełczewski, 2008). The increase in social awareness of the positive and negative consequences of economic development emphasizes the need to ensure consensus between the implementation of economic, social, and environmental goals.
The idea and principles of cooperation between actors of innovative processes in the context of society's expectations and the challenges of the modern economy are explained by various models, including the triple helix and quadruple helix models (Łącka, 2018). The concept of the Triple Helix model developed by L. Leydesdorff and H. Etzkowitz (2001) is a model of innovation covering the relationships occurring in the process of knowledge transfer between three separate environments: science, industry, and administration. The Triple Helix model is generated in the knowledge infrastructure in relation to overlapping institutional spheres, each of which plays its role and, at the same time, enters into relationships with other entities (Etzkowitz, Leydesdorff, 2000). The cooperation between the university and the business community is crucial. It influences the development of innovation, knowledge transfer, and the development of countries. On the other hand, the government plays a crucial role in developing financing policies and leveraging these relationships to increase capacity (de Lima Figueiredo, Fernandes, Abrantes, 2023).
Based on economic changes and the changing expectations of stakeholders, the quadruple helix model was proposed. The fourth element in this model is civil society, with a media- and culture-based community (Carayannis, Campbell, 2012). The quadruple helix model draws attention to the fact that the science, business, and administration environment, while creating conditions for introducing innovations, should be open to broadly understood social needs. In other publications, some disputes can be found regarding the validity of creating a quadruple helix model, because civil society is not an institutional sphere at the same level as universities, industry, or government (Cai, Lattu, 2022).
Against the backdrop of socio-economic changes, the issue of social responsibility is increasingly strongly emphasized, meaning the responsibility an organization bears for the impact of its decisions and activities on society and the natural environment (Pfajfar, Shoham, Małecka, 2022). This responsibility is ensured by transparent and ethical behavior that contributes to sustainable development, considers stakeholder expectations, is consistent with applicable law, is integrated with the organization's activities, and is practiced in its relationships (Anam, Zygier, Saczuk, 2020).
Corporate social responsibility is implemented through various activities, both in the codification of legal provisions and in practices used by individual organizations. It is believed that corporate social responsibility is the manufacturing sector's response to the challenges posed by the principles of sustainable development (Gadomska-Lila, Wasilewicz, 2016). According to J. Adamczyk, sustainable development and corporate social responsibility were created as two independent concepts; however, in the implementation of the principles of social responsibility and sustainable development recommendations, there is a process of their diffusion. It is possible to observe the process of interpenetration of principles, goals, areas of implementation, instruments, and measures for assessing these two concepts (Adamczyk, 2017). Corporate social responsibility can be treated as a tool for implementing sustainable development (Płachciak, 2015).
Characteristics of the Visegrad Group (Group V4)
In the face of profound political and economic changes taking place in the early 1990s in the countries of the former communist bloc, an initiative was created to establish a forum for regional cooperation between Poland, Czechoslovakia, and Hungary. On 15th February 1991, the Visegrad Declaration was signed by the Presidents of the three countries, inaugurating the Visegrad Triangle (Kużelewska, Bartnicki, 2017). On 1st January 1993, as a result of the breakup of Czechoslovakia, the name was changed to the Visegrad Group (V4), bringing together four sovereign states: the Czech Republic, Poland, Slovakia and Hungary (Jankiewicz, Pietrzak, 2020). Member states initially initiated the V4 group to increase security and stability in the region (Braun, 2020). The factors positively influencing the development of cooperation within the Visegrad Triangle and then the Visegrad Group include the similar potential and level of economic development of the member states, the advancement of economic changes, the universality of democratization processes, geographical proximity, common civilizational roots, the similarity of the latest historical experiences, and common priorities in foreign policy (Kupich, 1993/1994). Over several decades of operation, the Visegrad Group has proven its usefulness in influencing the decision-making process in the European Union, and the V4 format has become embedded in the political space and practice of the countries of the region and the opinion of Western politicians (Czyż, 2018).
The Visegrad Group countries have been an essential point on the map of Central and Eastern European countries for over thirty years (Grodzicki, 2023). The collapse of real socialism and the departure from the centrally planned economy initiated a number of profound changes in the Group V4 countries, resulting in new paths of social, political, and economic development. The economic and political transformation that took place after 1989 in Poland, the Czech Republic, Slovakia, and Hungary enabled the construction of democratic state structures, the creation of a free market economy, as well as an orientation towards increasing national security and European integration (Jasiecki, 2020).
The Visegrad Group countries are similar in many economic and social respects (Kochanek, 2021). This fact is influenced by the similar structure of the economies of the V4 countries and the historical and economic conditions of their cooperation and development. On the other hand, the economies of these countries are interconnected, both in terms of trade and ownership, with enterprises of the leading economies of the European Union (Samborski, 2019).
Economic models shaped by over three decades of transformation in the V4 countries have contributed to a high and stable pace of economic development in these countries. Poland, the Czech Republic, Slovakia, and Hungary have managed to build economies with similar characteristics, such as a high level of openness and industrialization, as well as solid and stable economic connections, both mutually and with the German economy. This allowed the V4 group to become a place of dynamic economic growth, high international competitiveness, and low debt levels (Popławski, 2021).
Research methods
To assess the potential of the Visegrad Group countries in terms of conditions conducive to cooperation between science, business, and administration, a data-mining statistical method was used, i.e. classification trees. To assess the readiness of the V4 countries to develop such cooperation, the Potential Index (PI) was calculated. The authors were inspired by the Human Development Index (HDI).
Three variables were used to construct the Potential Index:
1. GDP per capita (current US$) - GDP per capita is gross domestic product divided by midyear population. GDP is the sum of gross value added by all resident producers in the economy plus any product taxes and minus any subsidies not included in the value of the products. It is calculated without making deductions for the depreciation of fabricated assets or for the depletion and degradation of natural resources. Data are in U.S. dollars (The World Bank, 2023).
2. Human resources in science and technology - as a share of the active population aged 25-64. The data show the active people in the age group 25-64 classified as HRST (i.e., having completed an education at the third level or being employed in science and technology) as a percentage of the total active population aged 25-64. HRST is measured mainly using the concepts and definitions in the Canberra Manual, OECD, Paris, 1995 (Eurostat, 2023).
3. New businesses registered (number) - the number of new limited liability corporations (or their equivalent) registered in the calendar year.
The Potential Index was calculated by creating indexes for each of the three indicators.
The values of each indicator were normalized to an index value from 0 to 1. Taking into account the actual value for a given country as well as the maximum and minimum, the index value for each variable was calculated as:

index = (actual value - minimum value) / (maximum value - minimum value)

The dimension index is 1 for the country that reaches the maximum value and 0 for the country that gets the minimum value. To interpret the classification trees, the target variable was a potential index higher than 0.5. The values of the variables used to build the potential index are presented in Table 1. For the Visegrad Group countries, the GDP per capita dynamics indicator calculated for the period 2022/2017 reached the highest value in the Czech Republic (133.93%) and Poland (132.61%), while the lowest value was in Hungary (126.28%) and Slovakia (120.89%).
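As an illustrative sketch of this construction (not the authors' own computation; the indicator values are placeholders and the aggregation by simple averaging is an assumption, since the article does not spell this step out), the normalization and the 0.5 threshold can be expressed as follows:

```python
import pandas as pd

# Hypothetical indicator values for the four countries (placeholders only).
data = pd.DataFrame(
    {
        "gdp_per_capita": [26800, 18700, 21200, 18300],
        "hrst_share":     [55.0, 48.0, 45.0, 46.0],
        "new_businesses": [30000, 90000, 20000, 25000],
    },
    index=["Czech Republic", "Poland", "Slovakia", "Hungary"],
)

# Dimension index: (actual - min) / (max - min), per indicator.
dimension_index = (data - data.min()) / (data.max() - data.min())

# Aggregation into the Potential Index (simple mean assumed here);
# a PI above 0.5 is treated as "high potential" in the tree analysis.
potential_index = dimension_index.mean(axis=1)
high_potential = potential_index > 0.5

print(potential_index.round(2))
print(high_potential)
```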
In 2017-2022 year-over-year, the dynamics indicator showed an increasing tendency only in Poland, ranging from 100.75% to 113.80%.
The second indicator included in Table 2 for the V4 countries was the Human Resources in Science and Technology dynamics indicator, which in three countries (Slovakia, Poland, Hungary) showed a slow increasing tendency in 2017-2022 year-over-year. The highest increase in the value of the index calculated for 2022/2017 was observed in Slovakia, at 121.59%. In the Czech Republic, as the only country in the V4 Group, the dynamics of the Human Resources in Science and Technology indicator reached values below 100% twice in the period 2017-2022 year-over-year.
The third indicator in Table 2 for the Visegrad Group countries was the New businesses registered dynamics indicator. Due to information gaps for 2021 and 2022, only three values were calculated. Among the four countries of the V4 Group, the dynamics rate of New businesses registered achieved an upward trend calculated in the period 2017-2020 year-over-year only in Poland and Hungary. A decreasing dynamic of this indicator in the same period was observed in the Czech Republic and Slovakia.
When presenting the dynamics of the indicators mentioned above for the Visegrad Group countries (Table 2), the causes of fluctuations in the values of these indicators were not analyzed due to their diverse micro- and macroeconomic background. The social and economic policy pursued by the governments of the V4 countries and the occurrence of the COVID-19 pandemic
5. Research and development expenditure, by sectors of performance, percentage of gross domestic product (GDP).
6. Human resources in science and technology, percentage of the population in the labor force, from 25 to 64 years, sex: total.
7. Share of mobile students from abroad enrolled by education level, sex and country of origin, tertiary education, sex total, d - definition differs (see metadata), in %.
8. Employment rates of recent graduates in the country.
Among the variables listed above, the predictors that explain the dependent variable to the greatest extent were distinguished (they had the highest percentage of correctly classified cases). Based on the data obtained, classification tree No. 1 was constructed. The dependent variable is a high potential index, set as higher than 0.5. The first division was made according to the variable countries: Czech Republic, Poland, Hungary, and Slovakia. The second division distinguished predictors: the variables that explain the values of the dependent variable to the greatest extent. In the first classification tree this is Human Resources in Science and Technology, while in the second it is the Gender Pay Gap. In both classifications, risk and standard error were estimated. The predictive accuracy measure of a classification tree represents the percentage of cases misclassified by the proposed model. The presented classification trees have a zero rate of misclassified cases, meaning the percentage of correctly classified cases is 100%. The classification trees were constructed using the CHAID method. At each step, CHAID selects the independent variable that has the strongest interaction with the dependent variable.
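The selection step that CHAID repeats at each node, i.e. choosing the predictor whose cross-tabulation with the target yields the strongest chi-square association, can be sketched as below. This is only an illustrative analogue built on SciPy, not the software the authors used, and it omits CHAID's category-merging step; the toy data and variable names are hypothetical.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical observations: a categorical target (high potential yes/no) and
# ranked/categorical predictors, loosely mirroring the variables in the article.
df = pd.DataFrame({
    "high_potential":      ["yes", "no", "no", "yes", "yes", "no", "yes", "no"],
    "country":             ["CZ", "PL", "HU", "SK", "CZ", "HU", "SK", "PL"],
    "hrst_rank":           [4, 2, 1, 3, 4, 1, 3, 2],
    "gender_pay_gap_rank": [1, 1, 4, 3, 2, 4, 2, 1],
})

def strongest_predictor(data, target, predictors):
    """Return the predictor with the strongest chi-square association with the
    target (lowest p-value) - the selection step CHAID repeats at each node."""
    results = {}
    for col in predictors:
        table = pd.crosstab(data[col], data[target])
        chi2, p, dof, _ = chi2_contingency(table)
        results[col] = (chi2, p)
    best = min(results, key=lambda c: results[c][1])
    return best, results

best, scores = strongest_predictor(
    df, "high_potential", ["country", "hrst_rank", "gender_pay_gap_rank"]
)
print(best, scores)
```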
Chi-square values show the strength of the association between the predictor and the dependent variable (Kass, 1980). This depends on the level of the independent variable: values above 39 position Slovakia at a high level of potential, and values below 39 position Slovakia at a low level of potential.
5. Share of mobile students from abroad enrolled by education level, sex, and country of origin.
6. Tertiary education, sex total, in %.
7. Annual enterprise statistics for special aggregates of activities.
Gender pay gap (in %).
The second division distinguished the predictor, which is the level of the Gender Pay Gap index. This allowed for classification into countries with low or high potential. The level of the Gender Pay Gap index in the Visegrad Group countries is presented in Table 3. Ranked quantitative variables were used for the classification trees, i.e. each quantitative variable was transformed into a nominal variable according to the Ntyle method. In the Ntyle method, ranks are assigned based on percentile groups, with each group containing approximately the same number of observations. For example, a value of 4 Ntyles (quartiles) assigns a rank of 1 to cases below the 25th percentile, a rank of 2 to cases between the 25th and 50th percentile, a rank of 3 to cases between the 50th and 75th percentile, and a rank of 4 to cases above the 75th percentile. As a result, each variable was assigned 4 levels (ranks), where 1 means the lowest level and 4 the highest level. The Gender Pay Gap index determines the difference between the average gross hourly wages women and men receive for their work (Parlament Europejski, 2020). Based on the data in Table 3, the Gender Pay Gap index among the four Visegrad Group countries is the lowest in Poland, and its value in 2021 was 4.5%. In the V4 countries, the dynamics of the Gender Pay Gap indicator calculated for 2021/2017 decreased in three countries: Poland, the Czech Republic, and Slovakia. In Hungary, the Gender Pay Gap dynamics index calculated for the same period increased and reached 108.8%. The analysis of the data contained in Table 4 indicates that in Poland and the Czech Republic there was a tendency to reduce the level of the pay gap, while among the V4 countries only in Hungary are the disproportions in the earnings of women and men becoming more significant, with the wage difference reaching 17.3% in 2021. The gender pay gap is a common phenomenon. Statistical data indicate that in many European Union countries, the average salary of women is significantly lower than that of men (Eurostat, 2021). The occurrence of the pay gap is influenced by many factors relating to objective differences in human capital, such as employees' skills, profession, involvement in their work, and length of service (Blau, Kahn, 2017), as well as stereotypes regarding women in the labor market (Lips, 2013). Due to the number and diversity of factors influencing the size of the Gender Pay Gap in the individual countries of the Visegrad Group, this article only shows the course of changes in the wage gap in the period 2017-2021.
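The Ntyle transformation described above (quartile ranks 1-4 with roughly equal group sizes) corresponds to standard quantile binning; a minimal sketch with pandas, on hypothetical pay-gap values, could look like this:

```python
import pandas as pd

# Hypothetical quantitative values (e.g. Gender Pay Gap observations).
values = pd.Series([4.5, 12.4, 15.1, 16.8, 17.3, 18.4, 20.1, 21.9])

# Four Ntyles (quartiles): rank 1 = lowest quarter of cases, rank 4 = highest.
ntyle_rank = pd.qcut(values, q=4, labels=[1, 2, 3, 4])

print(pd.DataFrame({"value": values, "ntyle_rank": ntyle_rank}))
```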
Many factors determine the changes taking place in the economies of many countries. One of them is the level of the pay gap, which allows us to classify countries into those with a low or high level of potential.
Conclusions
The conducted research enabled the analysis of the potential of the Visegrad Group countries in selected areas in the period 2017-2022. Due to the orientation of the research towards stakeholders such as government, science, and administration, the focus was on analyzing the potential of the Visegrad Group countries related to indicators from these areas. The factors used to analyze the readiness for cooperation between science, business, and administration were identified as GDP per capita, Human resources in science and technology, and New businesses registered. Due to the differences in potential between these economies, the complexity of the micro- and macroeconomic conditions affecting their potential should be emphasized. One of the variables that determine the level of the potential index is the level of human resources involved in developing science and technology. On this basis, it can be concluded that among the Visegrad Group countries, the Czech Republic has the highest potential index, while Hungary and Poland have the lowest potential. The same dependence occurs when taking into account the pay gap index: the Czech Republic has the highest potential index, while Poland and Hungary have the lowest. Among the Visegrad Group countries, Slovakia is characterized by a heterogeneous classification, strictly dependent on the level of the independent variable. The results of the conducted research indicate a high degree of differentiation in the potential index of the Visegrad Group countries. They also suggest that this potential is an essential factor positively influencing their economic development. The individual stakeholders involved in the cooperation process should ensure the development and use of appropriate instruments that would increase the level of potential of the V4 countries and its use.
Figure 2. Classification Tree No. 2. Source: own study.
Among the Visegrad Group countries, the Czech Republic has the highest potential (1.0), while Hungary and Poland have low potential. At the same time, Slovakia is characterized by high or low potential depending on the level of the Gender Pay Gap. This depends on the level of the independent variable: values at or below the 16.6-18.4 range place Slovakia at a high level of potential, and values above the 16.6-18.4 range place Slovakia at a low level of potential.
Table 2. The dynamics of changes in the values of potential indicators for the Visegrad Group countries.
Table 3. The level of the Gender Pay Gap index (in %) in enterprises employing at least 10 employees in the V4 countries in the period 2017-2021.
Table 4. Dynamics of changes in the level of the Gender Pay Gap index in the Visegrad Group countries.
A Landscape Never Goes Out of Style: Diachronic Lexical Variation in Exhibition Press Announcements
The paper focuses on diachronic lexical variation in a professional textual genre which has gained growing importance over time in the field of museum public relations and art discourse: exhibition press announcements (EPAs). The aim of the analysis is to investigate the language of EPAs from a diachronic perspective in order to identify large increases or decreases, or stability, in word frequencies. Baker's (2011) method to distinguish variation over time across multiple corpora was applied and particular attention was placed on the presence of "lockwords", i.e. words "relatively static in terms of frequency" (Baker 2011: 66). The analysis is carried out on a corpus of EPAs dating from 1950 to 2009 issued by American and British museums. The study reports on a number of trends relating to linguistic and cultural change of EPAs, including the emergence of new criteria in assessing the value of artists and artworks despite a certain consistency in terms of subjects, the shift from one-item to multi-item exhibitions and the preference for more vivid and straightforward descriptions. For instance, the frequency of the noun landscape has remained stable over time, suggesting that this subject is particularly consistent in art displays, quite a sort of classic that never grows old, while the artist's career – a word showing a clear pattern of growth – has become particularly valuable over time for museum professionals in charge of exhibitions.
Introduction
Despite their undoubted affiliation to the textual genre of press releases, it would be limiting to present exhibition press announcements (EPAs) as a mere subgenre of this category. The high level of creativity characterizing EPAs, both in terms of lexical choices and structure, their strong promotional intent, often realized through a massive use of evaluative language and emotional linguistic features, and their capacity to address media people as well as the lay public through their e-dimension (Lazzeretti/Bondi 2012), encourage readers to consider them as a genre worthy of interest per se, sharing their own peculiar features, as well as their own rules.
As part of the press materials daily issued by a museum's press office, EPAs are meant for a restricted category of journalists, those in charge of arts reporting and criticism. In the last decades, they have become crucial for cultural institutions in terms of communication with their audiences, especially in consequence of the development of thematic exhibitions characterized by strong educational goals and aiming to appeal to a wide range of visitors (McManus 2000, Schiele 1995, Jacobi/Poli 1995), a phenomenon that can be dated back to the 1940s and which is still on-going. EPAs reflect therefore the value system of the professional communities involved in this environment: on the one hand, experts in charge of the organization and promotion of art exhibitions, such as curators, managers and press officers, and, on the other hand, media people interested in art news reports and reviews.
Among the reasons for the peculiarity of EPAs is their positioning in the middle ground of two different discourse domains: firstly, news (or media) discourse, which is mainly informative, and secondly, for their content specifically related to art, art discourse, whose main communicative purpose is descriptive and evaluative. The latter, in particular, plays a very important role in shaping EPAs as they are, because it allows us to differentiate them from common press releases.
The connection between EPAs and art discourse still needs to be fully explicated. In the first place, there is a connection at the level of contents: in announcing up-coming exhibitions, EPAs indulge in detailed descriptions of the artworks to be exhibited, in the techniques involved, and in the exhibiting space design. The featured artists are presented through their biographies and the mention of their most important works, as well as the artistic movements relevant for the exhibition. Description, to be considered as "an attention-managing device that the writer uses to direct the reader's attention to new referents" (Bondi 2013: 7), has to be regarded as a key concept for EPAs.
Moreover, the professional environment to which EPAs belong is deeply permeated by art discourse(s): EPAs are "texts in museums" (Ravelli 2006: 2) which function as communication tools between the institutions and their audiences. They are especially written for art journalists in charge of reviews and criticism, although, once they have been published on museum websites, they are also able to reach a general audience (Lazzeretti/Bondi 2012). Moreover, they can be considered prime examples of what scholars have been alternatively calling 'artwriting' (Carrier 1987), 'artspeak' (Atkins 1990, Harris 2003), 'art talk' and 'artworld discourse' (Irvine 2004-2009).
As is so often the case for concepts that stand for complex phenomena, the notion of art discourse is essentially fuzzy and the plurality of terminology mirrors the difficulty in defining it. Harris (2003) adopted the term 'artspeak', previously popularized by the art historian Atkins (Atkins 1990), "not simply to include the buzzwords used by critics in certain sectors of the art world, but to cover the whole range of discourse about works of art and their appreciation (or disparagement)" (Harris 2003: xii). The meaning of the term goes beyond what Carrier called 'artwriting', which was restricted to "texts by both art critics and art historians" (Carrier 1987: 141). Irvine (2004-2009) prefers the term 'artworld discourse', which he defines as a "distributed network system of ways of talking" about art, comprising "the various vocabularies, arguments, professional fields, and institutionalized contexts for making statements". The aim of the multiple discourses employed within the artworld is to describe, talk and argue about art objects and to identify 'art' in itself. As Irvine (2004-2009) puts it, artworld discourse is "a function of the artworld's role in defining the cultural category of art and maintaining the art/non-art binary". Irvine's network of artworld discourse comprises: a) mainstream press, b) blogs and quoted popular discourse, c) independent weeklies and websites, d) weekly magazines that include art 'coverage', e) art magazines and monthly art press (and their websites), f) advertising in the magazines, press, and websites, g) curatorial discourse in museum publications, catalogues, and exhibition texts, h) gallery publications, catalogs, press releases, j) academic and scholarly books and articles.
Studying art discourse in one of its professional applications and the specific case of EPAs can tell us much about how art is communicated through language. Moreover, adopting a diachronic perspective allows us to gain a significant insight into lexical variation over time and, more broadly speaking, into the linguistic evolution of art discourse.
The purpose of the present research is to examine how the language of EPAs has developed over the past six decades and how art has been presented to the press in the same time lapse. To this purpose, a corpus of EPAs dating from 1950 onwards was explored in order to identify any increase, decrease or stability in word frequencies. The enquiry addresses the following research questions:
• What lexical variation can be identified in EPAs from the 1950s up to now?
• Has there been any language change reflecting socio-cultural change, for instance in the way exhibitions are organized and subjects are selected?
After reviewing existing literature, the present article outlines the design of the corpus (section 3), as well as the methods applied in the analysis (section 4).In section 5, the results of the analysis are presented, while conclusions are drawn in section 6.
Literature review
According to Baker (2011), "corpus approaches to diachronic change are still in their infancy (and have often only compared two time periods), and it is only recently, with the development of multiple sets of comparable reference corpora, that we can start to trace lexical change over time" (2011: 66). Similarly, Partington (2012) defines modern diachronic corpus-assisted discourse studies (MD-CADS) as a nascent discipline, characterised by the novelty of methodology and topics: "It employs large corpora of a parallel structure and content from different moments of contemporary time in order to track changes in modern language usage but also social, cultural and political changes over modern times, as reflected in language" (2012: 51).
It comes therefore as no surprise that studies on media language from a diachronic perspective are relatively rare. For instance, a diachronic analysis of press releases, a genre born at the beginning of the last century, has not been carried out yet. The same has to be noted with regard to modern diachronic studies especially devoted to art discourse.
Among scholars involved in modern diachrony applied to media discourse are Hundt and Mair (1999), who, in their tracking of changes in newspaper prose between 1960 and 1990, noted a greater use of contractions and first- and second-person pronouns, where these oral features were adopted in an attempt to appeal to a wider reading audience. Partington (2010, 2012) and Duguid (2010) have also been working on British broadsheet newspapers, as collected in the SiBol (1993) and SiBol (2005) corpora. Duguid (2010) has pointed out that the language of newspapers has changed over time in terms of an increasingly more conversational and informal style, along with a notable increase in a particular kind of evaluative and promotional language, as a result of a proportional increase in soft news, supplements and reviews. Baker's studies (2010, 2011) carried out across multiple corpora are particularly relevant in methodological terms for the present work. Baker (2011) investigated four equal-sized reference corpora of written British English from 1931, 1961, 1991, and 2006, in search of patterns of vocabulary change and stability. He also considered several methods to identify variation over time and categorized words as showing sharp frequency increases, decreases, or as remaining stable. He called the latter "lockwords", because they are "relatively static in terms of frequency" (Baker 2011: 66) and "appear to be 'locked' in place" (ibid. 73). Finally, he reported on a number of trends relating to language (specifically British English) and culture change, including a tendency for written language to become less verbose, more informal and personal. In a previous study, Baker (2010) carried out a corpus-based comparison of gendered terms (male and female pronouns, gender-related nouns, and terms of address) across the same four corpora, finding out that while there had been reductions in some gender stereotypes, others were maintained: males, in particular, were referred to more often than females. Both studies show the value of using corpus methods in order to investigate change in the frequency and context of use of specific items of language over time.
Materials
As none of the existing diachronic corpora was suitable for the purposes of this research, a new corpus was compiled for this study: the EPA Diacorpus. This is a diachronic corpus, made up of a total of 299,138 words (tokens), consisting of various subcorpora. It includes 378 EPAs, half issued by American museums and half by British museums, dating from 1950 to 2009, as summarized in the following table (1). A corpus of this size may be small compared to the multi-million-word corpora available today, but for the purposes of the present study, it was assumed to be balanced and representative.
EPAs were randomly and evenly selected across decades, with no particular criteria besides their status as exhibition press announcements, e.g. press releases announcing an art show. All other press releases issued by museums, such as ordinary news, announcements of artist talks, presentations of films or books, accomplishments, awards, new appointments, philanthropic events, etc., were dismissed. It is to be noted, however, that EPAs represent the largest part of the documents usually produced by museum press offices.
The number of writers involved in the composition of EPAs is not easy to quantify. Although most of the EPAs collected in the corpus are signed by a press officer - an acknowledged professional specialized in museum public relations - EPAs are often the result of the work of a composite team of experts. A first draft, for instance, may be traced back to a text written by the curator of an exhibition, who first conceived a project. The EPA also has to be verified by members of the managerial staff - i.e. the director of the museum, the board of directors, etc. - while other useful comments and suggestions may come from co-workers, before the final draft is released. Therefore, the EPA Diacorpus represents multiple writers.
Furthermore, the EPA Diacorpus is a DIY -'do-it-yourself' -corpus (McEnery et al. 2006: 71), which is built on the following methodological principles, with respect to both the choice of museums and the period selection for inclusion.The decision whether to create a hybrid corpus, including both British and American EPAs, or a more homogeneous corpus, comprised of only American (or only British) EPAs, was also crucial.
With regard to the first aspect, large, high-profile museums were preferred, because they could guarantee a significant coverage of EPAs across the twentieth century and their accessibility for research. The selection was also influenced by the format of the data, which was available in electronic form only in a minor part. Museums began to digitalize press releases at the end of the 1990s or even later. The New York Museum of Modern Art and the Los Angeles J. Paul Getty Museum provide a significant exception, allowing website visitors to search within their digitalized historical archives, updated respectively from 1929 and from 1954 onwards. The New York Solomon R. Guggenheim Museum also offers an overview of historical press releases dating from 1952 onwards on its website, but the work is still in progress and at the moment only a small number of documents has been digitalized. Furthermore, British museums retain historical EPAs only in paper format and documents have to be consulted in loco. As a result, most British documents of the EPA Diacorpus, especially the earlier ones, were found directly at the museum archives; they were photographed, OCR scanned and transformed into *.txt format. A further amount of EPAs, already available in digital form, was downloaded from websites. In particular, earlier British EPAs, dating from 1950 to 1999, were found at the London National Gallery Library, the London Royal Academy of Arts Library and the London Victoria and Albert Museum Archives. Some US institutions also cooperated in a significant way, sending paper or digital copies of historical EPAs from their archives (The New York Frick Collection and The Chicago Museum of Contemporary Art).
The sources of the EPA Diacorpus, the number of EPAs retrieved for each institution and the original format of documents are summarized in the following table (2). With regard to the second aspect, i.e. the period to cover for the present diachronic study, six decades were selected, from the start of regular publication of EPAs in 1950 to 2009. Earlier examples may also be identified, but their distribution was not homogeneous across decades, especially during World War II, when most museums interrupted their activities. Moreover, some highly representative museums, such as the New York Solomon R. Guggenheim Museum and the Los Angeles J. Paul Getty Museum, were only founded in the 1950s.
As for the third aspect - whether to create a corpus made up only of British EPAs, or only of American EPAs, or a hybrid one - the last option was decided upon. This decision was taken in view of the fact that American and British EPAs share much similarity and can be studied together. According to MacLean (1997) the development of museum public relations - and therefore EPAs - started in the US after World War II and subsequently spread in Western Europe. It can therefore be assumed that American EPAs inspired the British ones. British and American EPAs have been evenly distributed in the EPA Diacorpus and together they can give a representative linguistic picture of the genre and of the period they cover.
Nevertheless, the EPA Diacorpus is divided into two main components, US EPAs and UK EPAs, to be analysed separately and together. Each component is further divided into six sub-corpora, one for each decade, which evenly comprise both British and American EPAs, as shown in the table below (Tab. 3).
Methods
The present study starts from a "naive" position, allowing data to drive the research and adopting what Tognini-Bonelli (2001) refers to as a 'corpus-driven' approach. Baker's (2011) method to observe diachronic change across multiple corpora and to distinguish variation over time served as a framework.
Following Baker (2011), it was stipulated that for a word to be of interest in terms of diachronic variation, it would need to occur at least 100 times when its frequencies across all the six decades (1950, 1960, 1970, 1980, 1990, 2000) were added together. Three hundred and twenty-six words met this criterion. In view of the size of the corpus, totaling 300 thousand words, the relevant proportion is about 1/3,000.
Using the "detailed consistency analysis" function in Wordsmith Tool 5 (Scott 2007), it was possible to obtain a single table that showed every instance of each word in all six sub-corpora, along with their frequencies.
After dismissing a number of possible measures to quantify the strength of difference between word frequencies, such as the chi-square test and the Pearson correlation coefficient, Baker (2011) took into consideration Hofland and Johansson's (1982) method. Hofland and Johansson compared word frequencies in the British LOB and American Brown corpora using the following formula:

(1) (Freq_LOB - Freq_Brown) / (Freq_LOB + Freq_Brown)

Baker (2011) tried adapting this measure to see if it would be effective on his four corpora:

(2) (Freq_BLOB - Freq_LOB - Freq_FLOB - Freq_BE06) / (Freq_BLOB + Freq_LOB + Freq_FLOB + Freq_BE06)

I followed this suggestion and adapted Hofland and Johansson's measure to the six sub-corpora of the EPA Diacorpus in this way:

(3) (Freq_1950 - Freq_1960 - Freq_1970 - Freq_1980 - Freq_1990 - Freq_2000) / (Freq_1950 + Freq_1960 + Freq_1970 + Freq_1980 + Freq_1990 + Freq_2000)

This gave a score for each word between 0.07 and -1. The twenty words with the highest score, showing therefore the largest decrease over time, were mrs, mr, picture, represented, man, acquired, shown, here, had, shows, exhibited, william, painters, examples, three, so, paris, wednesday, still and architecture. Conversely, the twenty words with the lowest scores, showing the largest increases over time, were office, career, tel, uk, org, www, admission, tickets, frick, getty, images, photography, visual, catalogue, please, students, contact, events, tour and v&a. However, the adapted formula of Hofland and Johansson does not show real trends because it does not deal with exceptional data appropriately, as in the case of irregular frequency profiles across the six decades, where a clear increasing or decreasing pattern cannot be detected. This measure therefore had to be refined with a further statistical method. Among several bottom-up approaches discussed by Hilpert/Gries (2009), Kendall's tau coefficient appears as a widespread measure for correlations, which, correlating the sequence of corpus sub-periods with the frequencies of each word, produces correlation coefficients for each example. A value close to 0 indicates the absence of a trend, while values approaching either 1 or -1 indicate that the passage of time correlates perfectly with an increase or decrease in frequency respectively. Kendall's tau coefficients were calculated with the aid of a statistical software package integrated into the Microsoft Excel application, Analyse-it, which allows users to automatically determine several types of correlation, Kendall included. The requirements of the test are two variables measured on an ordinal or continuous scale. Data in Excel worksheets were therefore arranged accordingly, inserting two variables for each word in the dataset: in the first column the frequencies of the item across the six decades and in the second column the sequence of the EPA Diacorpus sub-periods. In the case of the noun picture, for instance, the variables introduced were 96, 84, 16, 1, 15, 12 and 1, 2, 3, 4, 5, 6 respectively. The Kendall's tau value obtained after running the test was -0.7.
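As a minimal sketch (not the original WordSmith/Analyse-it workflow), both trend measures can be reproduced for a single frequency profile; the profile used here is the one reported above for picture:

```python
from scipy.stats import kendalltau

freqs = [96, 84, 16, 1, 15, 12]   # frequency profile of "picture", 1950s to 2000s
periods = [1, 2, 3, 4, 5, 6]      # sequence of the six sub-corpora

# Adapted Hofland and Johansson measure, formula (3):
# first decade minus the later decades, over the sum of all decades.
hj_score = (freqs[0] - sum(freqs[1:])) / sum(freqs)

# Kendall's tau between sub-period sequence and frequencies;
# for "picture" this reproduces the -0.7 reported above.
tau, p_value = kendalltau(periods, freqs)

print(round(hj_score, 2), round(tau, 2))
```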
According to Kendall's tau coefficient, the top twenty words showing a downward trend, with a value comprised between -0.9 and -0.5, are country, mr, picture, van, man, here, acquired, famous, chicago, shows, william, shown, had, west, since, i, mrs, represented, exhibited and so. Conversely, the top twenty words showing the highest upward trend, with a value comprised between 0.6 and 1, are tel, office, photography, students, tickets, catalogue, v&a, uk, career, admission, images, please, contact, tour, org, www, getty, events, frick and visual. As pointed out by Baker (2011), Hofland and Johansson's method does not reveal which words remained stable over time. Baker suggests using the standard deviation (SD) score to identify high and low variation over time of the frequencies of each word: "Potentially […], words with a high standard deviation would have changed a great deal in frequency over time, whereas those with low standard deviations would be more stable" (2011: 72). The standard deviations of the 326 words occurring in the corpus at least 100 times, when their frequencies across all the six decades (1950, 1960, 1970, 1980, 1990, 2000) were added together, were therefore calculated using Excel (function STDEV). The SDs of these words ranged from 3.08 to 1,287.50. The twenty words with the largest standard deviations were exhibition, art, getty, works, museum, guggenheim, new, arts, collection, royal, please, org, www, mr, academy, press, information, curator, admission, artists, while those with the lowest were landscape, colour, selected, death, held, release, form, used, selection, seen, department, working, chicago, recent, born, abstract, subject, painter, europe and forms. Although interesting, this data needed to be corrected, since SD measures frequency rather than variation. As suggested by Baker (2011), the correction used was the coefficient of variance (CV), calculated by dividing the standard deviation by the mean and then multiplying by 100. As the CV score does not correlate with frequency, it was used to determine which words had strong and weak variation over time. Next, the word list was divided into three equal-sized portions, reflecting high change, medium change, and low change. The top third of the word list, ordered by CV, was considered to have relatively high variation, the bottom third to have relatively low variation, and the middle third to be relatively indistinctive and was not examined any further. The words ranged in CV from a minimum of 10.23, marking the lowest variation, to a maximum of 232.49, marking the highest.
The ten words with the highest relative variation were www, org, uk, events, tickets, frick, getty, please, v&a, and tour. Those with the lowest were release, landscape, until, national, which, be, department, seen, form, and drawings. The strongest noun lockwords in the EPA Diacorpus are release, with a frequency profile of 41, 53, 54, 50, 51, 45, and landscape, with a frequency profile of 25, 27, 21, 23, 28, 29. Having identified words that showed real trends of decline, growth, or stability, a selection of words ranked within the top twenty of each category was made and, with the aid of Wordsmith Tools 5 (Scott 2007), multiple concordance searches and collocational analyses of these words were conducted to elicit contextual information that might explain their patterns of usage.
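For completeness, a keyword-in-context (KWIC) search of the kind run in Wordsmith Tools can be sketched as follows; the tokenisation, the window size and the sample sentence are illustrative assumptions only, not part of the original analysis.

# A hedged sketch of a KWIC concordance search, standing in for the
# Wordsmith Tools searches described above.
import re

def concordance(text, word, window=5):
    # Very simple tokenisation; real corpus tools handle punctuation,
    # hyphenation and case far more carefully.
    tokens = re.findall(r"[\w&'-]+", text.lower())
    lines = []
    for i, tok in enumerate(tokens):
        if tok == word:
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            lines.append(f"{left} [{word}] {right}")
    return lines

sample = "Every phase of his seventy-year career is covered in the exhibition."
for line in concordance(sample, "career"):
    print(line)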
In order to compare data, a reference corpus was needed: the comparison corpus in this study was the Time Magazine Corpus (Davies 2007), made up of roughly 100 million words and 275,000 articles taken from the American periodical TIME Magazine (1923 to the present). Freely available online, it is the largest diachronic corpus of 20th-century American English, and its size allows for accurate analysis of linguistic change across the decades.
Growth, decline and stability: the most distinctive findings
The twenty words that show the strongest frequency increase across the six time periods (1950, 1960, 1970, 1980, 1990 and 2000) are: tel, office, photography, students, tickets, catalogue, v&a, uk, career, admission, images, please, contact, tour, org, www, getty, events, frick and visual. On the other hand, the twenty words that show the sharpest decreases over time are country, mr, picture, van, man, here, acquired, famous, chicago, shows, william, shown, had, west, since, i, mrs, represented, exhibited and so. Finally, a further list of words showing very little change over time was compiled. The twenty words with the smallest CV score (i.e. with less variation in their frequencies across decades) were: release, landscape, until, national, which, be, department, seen, form, drawings, held, great, known, paintings, among, made, may, first, selected and some.
Due to limited space, not all words in these lists will be discussed. Instead, since this paper is narrowly focused on a specific question (the link between language variation of EPAs over time and macro-cultural change in museum settings), the following analysis takes into account only a few words, which have been selected in view of their potential to mirror cultural change related to museums and art in general: career, visual and photography (increasing); picture, shown and shows (decreasing); landscape, drawings and paintings (stable).
career
Showing a clear pattern of growth across decades (0, 11, 8, 27, 34, 49), the word career is one of the most interesting cases in the EPA Diacorpus. It has 129 overall occurrences, quite evenly distributed between the American (71) and British (58) sections. The Time Magazine Corpus was explored for comparison, in order to find out whether the word gained increasing importance over time in the language of the general media as well, but the result was the reverse: career shifted from 1,904 occurrences in the year 1950 to 1,022 occurrences in the year 2000. The increasing trend of career therefore seems to be a distinct feature of the EPA Diacorpus.
A manual inspection of concordances shows that the career in question is always that of the artists featured in the exhibition. The most frequent three-word clusters associated with career in the corpus are of his career (where the adjective his refers anaphorically or cataphorically to the artist), of the artist's career, the artist's career and throughout his career. The latter cluster introduces the idea of a long-lasting progress through the artist's lifework, which is also stressed by other evaluative expressions, as in the following concordances (1):

(1) covering all stages of Miró's long career will be included,
extraordinarily rich and prolific career. On view through May 2,
every phase of his seventy-year career. Wright is shown to be

In some rarer cases, the artist's career may be qualified as early or brief, due to biographical circumstances, but the semantic prosody of the term sounds positive:

(2) Schiele, and, in an even briefer career, created such dramatic work
from the artist's early career stand out as of particular

Although the positive semantic prosody associated with the noun career may seem obvious, the reason for the increasing use of the term over time is worth investigating. We would perhaps expect the frequency of other positively evaluated and career-related words to increase as well in the EPA Diacorpus. Other words from the frequency list that convey positive meaning, especially with regard to the life-work of an artist, are the nouns master and success, and the adjectives famous, known and great. Master and success do not even fall within the cut-off of the 326 most frequent words in the corpus, since they have fewer than 100 occurrences across all decades. They are therefore indistinctive in terms of variation, due to their limited presence. Conversely, famous has a more relevant presence in the corpus, with a total of 142 occurrences, but a gradually decreasing profile across decades (35, 32, 16, 19, 26, 13), whereas the adjectives great and known are not only stable, but also quantitatively relevant. Great (235 occurrences) is the 12th word showing the least variation across decades according to its CV score and can therefore be classified as a lockword. Similarly, known (202) ranks 13th among lockwords, with a profile across decades of 41, 39, 26, 34, 24, 38.
A close examination of the concordances related to these more interesting words (famous, great and known) was required. Only 16 occurrences of the adjective famous out of 142 were clearly related to artists, since the majority describe their work or a collection, or, in a smaller number of cases, define a group of people (the famous) often portrayed by art. Similarly, only 27 occurrences of the adjective great out of 235 directly describe artists, while most of the instances address art works, art movements, collections and exhibitions. The adjective known is no exception to this pattern, with only a third of its total occurrences (202) related to artists, especially when combined with adverbials, such as well known, best known, internationally known, little known and lesser known.
A first conclusion could be that any evaluation in terms of fame and greatness is preferably addressed to art works and to the relevant artistic context in general, rather than to the artist himself as a person. This tendency is quite stable across decades, although the use of the adjective famous is decreasing, maybe because of its controversial semantic prosody. After Andy Warhol's 1968 quote ("In the future, everyone will be world-famous for 15 minutes"), the idea of fame, especially with regard to art, began to carry some less flattering aspects, such as the dissipation of hierarchies and its logical extension that everybody could be famous, and not merely those individuals really worthy of fame (Buchloh 2001). Conversely, the idea of career as an asset strictly related to the artist became more and more valuable over time, to the point that it can provide a justifying reason for the exhibition itself. In other words, we might say that the artist's career has become a preliminary condition for giving him space within a museum and celebrating his work with an exhibition. Youth, as a consequence, does not seem to be particularly valuable in this regard, and it is very difficult for a young artist to emerge in the art scene (see McCarthy et al. 2001). The EPA Diacorpus confirms this tendency, with fewer than 10 instances of the expression young artist/s, 18 occurrences of the noun youth and the hapax legomenon youthful.
visual
Totaling 107 occurrences within the EPA Diacorpus, the adjective visual shows a broadly increasing profile across the decades: 2, 16, 15, 14, 22, 38. If we take a look at the concordances, we find that the most frequent three-word cluster is the visual arts, with 19 occurrences. The first appearance of the cluster dates back to 1961 and belongs to an American EPA. Starting from 1969, the cluster is frequently used in British EPAs as well. See for instance the following examples (3 and 4):

(3) During his youth he was not exposed to the visual arts.

The phrase 'visual arts' was introduced during the first half of the 20th century to identify the arts that are primarily visual in nature, such as drawing, painting, sculpting, printmaking, photography, design, crafts, video, filmmaking and architecture, as opposed to music, drama, and literature. Before that, anything that had been created to please the senses - from music to dance, from literature to art as we know it - was commonly gathered under the definition of 'fine arts', which implied an aesthetic judgment and a subtle differentiation between what could be considered fine and what could not. The phrase 'visual arts' was already in use in art discourse during the time period covered by the EPA Diacorpus. The Time Magazine Corpus, which can be considered a valid mirror of the language used by the press from the 1920s onwards, gives evidence of a first occurrence of the phrase in 1932: the visual arts are mentioned within an article on the use of art in advertising. Thus, the very origin of the phrase has to be connected with academic environments: the phrase appeared in the title of an influential essay published by the American art historian Bernard Berenson, Aesthetics and History in the Visual Arts (Berenson 1948), and was further consolidated in its use by Erwin Panofsky, who entitled one of his main works Meaning in the Visual Arts (Panofsky 1955). It is to be noted that the language of EPAs slowly absorbed this new coinage coming from academic settings and has applied it increasingly from the early sixties. Conversely, the phrase fine arts gradually disappeared from the lexis of EPAs, surviving only in crystallized forms, such as the names of museums (for instance, the Boston Museum of Fine Arts), schools, academies, departments, or university courses. There are only 58 occurrences of fine arts within the corpus, most of them (40) dating back to the period 1950-1999.
It might be concluded that EPA writers were responsive to the introduction of new coinages in the field of art discourse, as well as to the decline of other expressions previously in use that may have sounded misleading or out of style. EPA writers were aware of lexical change in their relevant field of action, art discourse, and were also able to apply it to their daily professional language. This result also highlights an urge to discard ambiguous expressions in favor of a more conventional, neutral and specialized language for EPAs, capable of making EPAs more recognizable as a genre within the general domain of art discourse.
photography
The increase in the frequency of the noun photography, in parallel with the irregular frequency profile of architecture, can be read as a reflection of the emergence of new visual languages, acknowledged as art forms by museums and audiences, but also as a sign of the changing taste of both museum visitors and curatorial staff.
The noun photography ranks 3rd within the top twenty of the most increasing words across decades according to Kendall's tau coefficient and 12th according to Hofland and Johansson. Its profile is 4, 22, 32, 59, 71, 80 occurrences. Conversely, architecture ranks 20th among the most decreasing words according to Hofland and Johansson, with an irregular profile of 29, 6, 7, 15, 40, 17 occurrences, which also determines an absence of trend according to Kendall's tau coefficient (0.3). Architecture is, however, interesting to examine in this context.
The acknowledgement of photography as an art is traditionally linked to the work of the American photographer Alfred Stieglitz (1864-1946), who struggled to establish photography as a valid form of artistic expression (Rosenblum 2007). Thus, it was only in 1937, when the first major exhibition on photography was organized in the United States by Beaumont Newhall at the Museum of Modern Art ('Photography, 1839-1937', a staple for the history of this medium), that photography gained widespread recognition as an art. A year later, Newhall was appointed first curator of photography and a Department of Photography was founded at the Museum of Modern Art: this was a further tribute and a sign of official recognition of this medium, coming this time from the institutional establishment (Rosenblum 2007). This also explains why photography has been mentioned within American EPAs as a subject for exhibitions since the fifties; conversely, if we look at British EPAs, in the sixties the noun was still used in a completely different sense (5):

(5) Members of the press may preview this Exhibition on Thursday, 29th August from 2.30 to 4.00 p.m. on production of this press notice. Photography is not permitted in the Board Room. (London, National Gallery, Canaletto Bicentenary Exhibition, 26th August, 1968).
Nonetheless, British museums were becoming aware of the increasing interest in photography, as shown in an EPA issued by the Victoria&Albert Museum on the occasion of an exhibition on Henri Cartier-Bresson, which sounds like a sort of justification addressed to the press (6):

(6) The connection of the Victoria and Albert Museum with the field of photography, considered at large as a branch of the visual arts, is not known to a wide public. But its Library houses a notable collection of Victorian photographs; and the Museum mounted the official Centenary exhibition of the Invention of Photography in 1939. (London, Victoria&Albert Museum, Henri Cartier-Bresson, 1st January, 1969).
The trend related to the noun architecture plays a more significant role in the American section of the EPA Diacorpus, since 100 of the 114 occurrences of the word belong to that part. Concordances show a major interest in this discipline during the nineties, where most of the occurrences of architecture (40) are concentrated, in correspondence with great exhibitions on masters such as Alvar Aalto and Frank Lloyd Wright. The number of concordances falls drastically in the following decade, where only 3 occurrences of architecture as a field of interest for exhibitions were retrieved, and where architecture is not the main topic of the show but rather a marginal component. The gradual disappearance of the noun architecture from 1990 onwards may suggest at least two conclusions: in the first place, it could reflect the growing difficulties, in technical and economic terms, faced by museums in organizing architectural exhibitions, which is a fact, as well as the reduced opportunities to appoint an architect for renovation projects within the museum due to the high costs of these interventions; on the other hand, this trend could be related to the perception of architecture as a very specialized field of interest and therefore as a more complex and less appealing topic for an exhibition addressed to the general public.
picture
Coming to the words showing a decreasing pattern over time, one of the most significant cases is that of picture, ranking third in the top twenty of declining words, with a profile over the decades of 96, 84, 16, 1, 15, 12 occurrences. Its decrease is not easy to explain. Since the 224 total occurrences of picture are evenly distributed across the British and American sections of the EPA Diacorpus, the trend is relevant for both components. Moreover, the noun decreased only in its singular form and not in its plural, which actually increased (12, 14, 15, 35, 29, 69). If we take a closer look at the concordances of picture, we can observe that in most cases (158 of 224 total occurrences) the noun is modified by a definite determiner such as the or this, so EPA writers refer to a specific picture. See for instance the following concordances (7):

(7) was old but not original. The picture will be on public exhibition in
the reverie. I am convinced that this picture has a charm all its own
in this country. It is the only picture by him which is certainly

The gradual decrease of this typical pattern related to picture could be explained in terms of new design criteria. While in the fifties and the sixties the attention was often placed on a single, specific artwork on display (a picture, namely), nowadays exhibitions are massive events featuring many works at once. We cannot expect, therefore, a focus on a single, specific item, but rather a general presentation of the works on display or a list of the most prominent ones. This trend is confirmed by the growth of other plural forms identifying artworks on display within the EPA Diacorpus, such as images (3, 11, 8, 21, 77, 90), objects (5, 23, 10, 14, 30, 43) and works (67, 88, 119, 140, 177, 292). The increasing frequencies of images, objects and works are not necessarily in contrast with those of the respective singular forms, at least not in quantitative terms. In fact, when used in the singular form, these nouns seem to lose their concrete meaning and acquire a different sense. Conversely, the noun picture always keeps its concrete meaning in the EPA Diacorpus, both in its singular and its plural form. If we compare the singular image to its plural images, for instance, we can see that the noun is quantitatively increasing (0, 18, 4, 9, 10, 39), but only in a minority of cases is the singular noun used in its concrete sense to refer to a specific work on display. Rather, the image in question refers to a picture formed in the mind of the artist or a general opinion about a person, as in the following concordances (8):

(8) the city is celebrated as an image of sparkling modernity whilst
furniture design and enhance the image of the French furniture industry
an interesting disparity between the image of the ancient world that we derive
Speaking, a rapid-fire combination of image processing and ironic, spoken

Object does not seem to play a relevant role in terms of variation, with 40 occurrences and a profile across decades of 7, 15, 4, 1, 9, 4, while work, which increases over time (144, 122, 155, 140, 182, 241), is much more used in a collective sense, in order to address the whole body of work produced by an artist or to identify a group of objects. See for instance the concordances under (9) below. We can argue that the trend of increasing plural nouns identifying objects on display is reflective of the rise of large and multi-item exhibitions, a real phenomenon that has occurred in the last decades and is still ongoing. This result actually reinforces the appropriateness of the method we adopted so far, since the data allow us to make statements on cultural change specific to the environment of museums and exhibitions.
shown, shows
Among the most decreasing words in the EPA Diacorpus are two forms related to the lexeme SHOW: shown and shows. The first is clearly the past participle of the verb show, while shows may be either the plural of the noun show or the third person singular of the present tense of the verb. Since the EPA Diacorpus is not tagged, the distinction was made by hand: in 35 occurrences out of 100, shows is the plural of the noun show. This means that almost 2 out of 3 occurrences are related to the verb, which thus plays the most important role. We are therefore authorized to consider both entries, shown and shows, as mainly belonging to the verbal derivation of the lexeme SHOW.
Since the action of putting something on display to be judged by the public - i.e. to show it - is typical of exhibitions, the decrease of these words over time is somewhat unexpected in a corpus of exhibition press releases and needs to be further explained. The first check was on the frequency and consistency profile of both words.
Shown has 304 total occurrences in the corpus, 144 belonging to the American component and 160 to the British component. Its profile is characterized by a rather irregular decrease over the six decades (95, 72, 32, 27, 42, 36) and its CV score is 53.10. The most frequent three-word cluster related to shown in the corpus is the future construction will be shown. Similarly, shows, with 100 total occurrences (37 British, 63 American), has an irregular decreasing profile (29, 24, 15, 7, 10, 15) and its CV score is 50.15. Typical patterns in the corpus are the exhibition shows, the painting shows, the section shows, followed by descriptions of art works.
I looked at other forms of the verb show in the corpus, in order to find out whether there is a general decrease in its use. The frequency list, alphabetically ordered, provided the following entries: show (291), showcase (3), showcases (3), showcasing (4), showed (10), showing (94).
Show is rather consistent in terms of frequency (68, 50, 47, 35, 37, 54). It has the same ambiguity as shows, but manual inspection reveals a reverse tendency: two occurrences out of three refer to the noun, which is a synonym of exhibition. The verbal component is therefore less relevant in this case.
The word family of showcase has 10 total occurrences in the corpus, all concentrated in the 1990s and 2000s. It is therefore a more recent lexical input and does not tell us much in terms of variation.
So far, we could make only educated guesses as to why the verb show is decreasing over time in the EPA Diacorpus. The Time Magazine Corpus was therefore explored for comparison. The past participle of the verb, shown, significantly decreases across decades in the Time corpus as well (1036, 1046, 809, 586, 498, 408), as does the simple past showed (2889, 2145, 1590, 1370, 1029, 714). Showing also decreases (1136, 1031, 820, 676, 617, 472). The same is true for the infinitive form (2447, 2244, 1875, 1708, 1708, 1320) and the third person of the present tense (1144, 1180, 1034, 878, 1070, 742). Therefore, the results of the EPA Diacorpus with regard to the verb show and its decreasing tendency over time mirror - obviously on a smaller scale - those of a wider, general corpus specialized in the language of news.
As already pointed out, the verb show has two main meanings in the EPA Diacorpus: the first and more relevant is associated with the action of exhibiting something, while the second is functional to descriptions. In 160 out of a total of 570 occurrences identified within the EPA Diacorpus, show is used to introduce the description of an art work: a picture, a portrait, a painting, etc.
If it is hard to explain why the verb show decreases over time in its first, basic meaning, a guess may be formulated with regard to the second function. When describing art works - a picture, for instance - the use of the verb show implies a critical filter between the writer and the reader: adopting a didactic stance, the writer provides his/her own interpretation of the art work in the form of an authoritative and objective statement, which therefore sounds definitive.
It can be guessed that filtering verbs such as show have been gradually reduced in the language of EPAs in favor of a more direct way of describing art works and a more nuanced critical stance towards them.
The difference between these two techniques for describing art works - indirect and direct - can be summarized by the following examples (10, 11) taken from the EPA Diacorpus:

(11) The luminous colour and lively brushwork of the picture evoke the hot, hazy atmosphere of a summer afternoon. This is the world of the Impressionists and the scene is uncompromisingly contemporary. But the isolation and gravity of the figures, the scale of the painting and the classical order of the composition come from a very different world - a tradition stretching back to the Renaissance.
The result is at once serene and subversive.
(London, National Gallery, Seurat and the Bathers, 1 March 1997)

In the first extract (10) the writer is present in the text in his authorial stance, identified by the pronoun us, establishing common ground between himself and his audience. He directs the reader's gaze, guiding him by pointing to some details of the picture and providing a description of the artist's work; the effect, however, is explanatory rather than captivating. In the second extract (11), where the description of the art work is offered without a filtering verb, the reader has the impression of really being there, in front of the picture, in a sort of virtual 'walk' through the exhibition.
landscape
The noun landscape is the second most frequent lockword of the EPA Diacorpus. It has 153 total occurrences and is characterized by a stable profile across decades (25, 27, 21, 23, 28, 29), which also determines a low CV score (12.09). Two occurrences of the noun out of three belong to the British section of the corpus, so the trend related to this word has to be regarded as more significant for that part. Other occurrences within the same lexeme are landscapes, landscaping, landscapist and landscapists. The plural form of the noun, landscapes, totaling 76 instances, shows an irregular path over the decades (7, 15, 7, 16, 10, 21), while landscaping and landscapist, both appearing in the fifties, and landscapists, appearing in the nineties, are hapax legomena.
Typically, the noun landscape is introduced by an indefinite article and premodified by an attributive adjective carrying evaluative meaning and functional to the description of the picture: it may be, for instance, quoting examples from the corpus, a romantic, a realistic, a fantastic, a luminous or a pure landscape. At least half of all the occurrences can be ascribed to this recurrent construction. If we look at the co-text, it can also be noticed that the word is especially related to the technique of painting: in about 40 occurrences, the noun appears in the immediate surroundings of words like painting/s, painter/s, picture, depicting, and, in fewer cases, drawing/s and photography. Moreover, in 31 occurrences, landscape is part of the title of a picture or of an exhibition, as shown in the concordances under (12) below. It turns out that the word landscape is used in consideration of its descriptive force, which can be enhanced through evaluative adjectives. Moreover, the noun evokes a well-known genre in the field of the visual arts, which everyone is familiar with. However, these elements alone do not explain the persistence and the stability of the noun landscape in the language of EPAs from the fifties onwards. An intriguing suggestion may be that, although new art techniques were introduced and new media emerged in the last decades, such as photography, film and video, a landscape has remained a consistent object in art displays, a sort of classic, an evergreen that never grows old. At this point, it could be objected that other subjects can be regarded as timeless and immune to fashion in art as well: portraiture, in particular, has always been the great alternative to landscape painting. The portrait or landscape question is important because it defines the place and role of people in a picture. In a portrait, one encounters a person and is introduced to a sort of conversation between the painter and the model. A landscape offers a different experience: figures may be included, but at a greater distance from the artist and viewer. This offers a more circumspect view of people. Consequently, the EPA Diacorpus was explored in search of words related to the lexical field of portraiture. While the use of the verb portray is rather irrelevant - fewer than 10 overall occurrences - the word portrait (211 total occurrences) shows an irregular path over the decades (28, 56, 18, 45, 22, 42), while its plural form, portraits (174 total occurrences), gradually increases (12, 14, 15, 35, 29, 69). The noun portraiture, identifying the practice of making portraits, is increasing as well (5, 0, 1, 5, 9, 12). This may reflect a fluctuation in the preferences of exhibition visitors and curators, gradually shifting their attention towards individuals, to be approached from a closer point of view, in a sort of voyeuristic curiosity that is typical of our times. The clear growth of the plural form, portraits, as opposed to the irregular path shown by the singular, portrait, can also be related to the proliferation of multi-item exhibitions, another phenomenon which has emerged in contemporary times (see also the case of picture vs. pictures in section 5.2).
drawings, paintings
Cultural and educational reasons may also be invoked in order to explain the consistency over time of the plural nouns drawings and paintings, which rank respectively 10th and 14th among the lockwords of the EPA Diacorpus. The profiles of these words across the decades are respectively 86, 85, 59, 60, 87, 67 and 87, 86, 106, 102, 99, 94. A peculiarity of these words is that they often appear together as a binomial - paintings and drawings (18 occurrences) - or in combination with other nouns: drawings and prints (24), paintings and sculptures (13), prints and drawings (11), paintings, drawings and sculpture (10).
Corpus evidence in this case reflects how art works were and still are combined within the exhibition design in order to provide a comprehensive overview of artists or art movements. Sketches and drawings, for instance, can be used to explain the preliminary work behind a painting and to show how the artist developed his original idea through a series of gradual steps. Such an approach adds value to the exhibition experience and allows curators, at the same time, to enrich the exhibit itself with more items on display. The following extract (13) provides a relevant example of this design strategy:

(13) Andrea Mantegna (c. 1431-1506) was one of the greatest artists of the early Italian Renaissance. This exhibition, in which paintings, drawings and engravings have been assembled from throughout the artist's career, traces the development of Mantegna's innovative genius.
(London, Royal Academy of Arts, Andrea Mantegna, 1 January 1992)

The educational purpose associated with showing paintings and drawings together goes hand in hand with the urge to provide a significant volume of pieces on display. Drawings, prints and lithographs are generally easier and less expensive to transport and therefore to exhibit. In times of financial difficulty, this may also sound like a valid argument. Moreover, the stability of the nouns paintings and drawings over time can be interpreted as evidence for the stability of these techniques: despite the rise of new media such as photography and video in the second half of the 20th century, painting and drawing still remain a relevant feature. Other words related to the lexeme PAINT show a stable or even an increasing profile across decades, as in the case of painted (23, 46, 7, 22, 20, 47), painting (97, 89, 78, 75, 77, 145) and painter (21, 30, 12, 26, 27, 26). Similarly, words belonging to the lexeme DRAW have a consistent profile, such as drawing (16, 19, 12, 11, 25, 28) and draw (2, 2, 3, 2, 2, 8).
A special case is that of painters, which, on the other hand, ranks 13th within the top list of decreasing words of the corpus (38, 15, 17, 26, 19, 24). The reason for this decrease could be related to the system of values shared by art professionals, which shapes EPAs and substantiates many lexical choices made by writers. In our times, to define artists merely as painters could sound limiting or, at least, too technical, as contemporary artists can often manage different media and several techniques at the same time. This is a consequence of contemporary art, where the boundaries between the traditional media categories - painting, sculpture, photography, etc. - have become blurred. When talking about their roles, artists themselves avoid strict definitions. It is to be noted that the noun sculptors also decreases over time within the EPA Diacorpus, falling from 17 to 8 occurrences, while photographers increases (from 6 to 30 occurrences), maybe as a consequence of the emergence of photography as a new medium. The singular forms painter, photographer and sculptor show different profiles across the decades: painter is stable, photographer increases, sculptor decreases. Conversely, the occurrences of artist and artists increase over the decades. Such contrasting data do not allow us to make general and definite claims in this direction. Nonetheless, three out of six nouns identifying artists on the basis of a specific skill show a clear pattern of decrease within the EPA Diacorpus. This leads us to suggest an ongoing lexical change, maybe not yet completed: a trend reflective of how artists see themselves and want to be seen and presented to the press, not merely as masters of a technique, but rather as creative and multifaceted talents.
Conclusion
One of the aims of this article was to apply Baker's (2011) method for identifying variation over time to a multiple corpus of the specialized language of art discourse, in order to find out whether it could bring about interesting results not only in terms of language variation, but also in terms of cultural change related to the field of exhibitions, museums and art in general from 1950 onwards. We can conclude that the method has proved valuable, as it has allowed us to report a number of lexical trends reflective of changes in the way exhibitions are organized and communicated to the public.
In particular, lexical change has revealed some innovations in the way exhibitions are set up and artists are selected by museum professionals. With regard to exhibition design, lexical variation suggests a trend towards comprehensiveness rather than selectiveness. Large exhibits are preferred to narrowly focused events, while increasing care is taken in evaluating and choosing artists, preferably avoiding the risk of presenting new and emerging names.
In addition, the taste of visitors changes fast and museums have to deal with these fluctuations. Our analysis revealed that from 1950 onwards new media have emerged in the visual arts, but they did not seriously challenge classic subjects and topics of exhibitions.
A significant phenomenon reflected by the lexical variation of EPAs is the shift from one-item to multi-item exhibitions: the decline of the word picture in its singular form rather than its plural, typically pre-modified by a determiner (the, this), in parallel with the increase of many other plural nouns identifying art works to put on display (images, works, objects), leads us to that conclusion. Moreover, something has changed in the way artists are selected for exhibiting: they do not have to be famous - another declining word - nor particularly great or known, since these adjectives are preferably used to refer to their work, but they definitely must have an acknowledged career.
Indeed, young artists have fewer advantages in the art scene, as reflected by the lexical choices of EPAs, where fewer than 30 occurrences can be ascribed to the semantic field of youth.
Art subjects may change, and portraits - an increasing item within the EPA Diacorpus - may be more intriguing for our times, but a landscape is still a must for any exhibition, as are drawings and paintings - to mention some of the lockwords of the corpus - especially if the latter are combined in order to enrich the educational experience of visitors, as well as to increase the number of art works on display with a close eye on costs. New media have emerged over time: corpus evidence shows the rise of photography and the decline of architecture. However, the art of painting and drawing has not gone out of style, as words belonging to both lexemes, PAINT and DRAW, show a stable pattern across decades.
As regards the register of EPAs, the analysis of variation has highlighted a preference for more vivid and straightforward descriptions: the decline of the verb forms shown and shows, to be seen as a general trend of the language when compared with the Time Magazine Corpus, can be interpreted in these terms. Moreover, the language of EPAs has become more accurate over time, showing an increasing awareness of new coinages in the field of art discourse, such as the phrase visual arts, gradually borrowed from academic settings, in parallel with the dismissal of fine arts. EPA writers also seem responsive to the way artists prefer to be presented to the public, not merely as painters, for instance, to mention another decreasing item, but rather in a more complex light, as implied by the nouns artist and artists, which are both increasing words.
Let us go back to Irvine's (2004-2009) definition of art discourse as something "defining the cultural category of art and maintaining the art/non-art binary": the diachronic analysis of EPAs has confirmed that statement, showing how EPA writers are involved in the process of shaping the concept of art and the ever-changing values related to exhibitions through their lexical selection. The results of the analysis, therefore, highlight the significant role EPAs play within art discourse.
(9) of the artist's work at the Foundation in 1968.
he boldly returned to figurative work in the late 1960s.
widely varied subjects. His work combines visual elegance with a subtly
He says, "If I look back on my work over a period of years
the context of a consistent body of work that critically investigates
(10) This skeletal drawing shows us how the artist went about his task, working out the perspective framework with the ruler and pencil and afterwards filling in the details in ink, underneath which the vanishing point and vanishing lines are still clearly visible. (London, Victoria&Albert Museum, Drawings by Italian Artists: 1500-1800, 1 June 1959)
Table 1. Features of the EPA Diacorpus
Table 3. EPA Diacorpus: number of press releases across decades and countries
Table 4. Profile across decades of increasing words: career, visual, photography
Table 5. Profile across decades of decreasing words: picture, shown, shows
Table 6. Profile across decades of stable words: landscape, drawings, paintings

(12) of Esaias van de Velde's little "Winter Landscape", which may well have influenced
The Poetic Landscape takes a new look at Claude as a painter
Caspar David Friedrich's Moonlit Landscape, or Turner's The Pass at
A life-threatening, massive subcutaneous hematoma caused by trauma in a patient with neurofibromatosis type 1: a case report and literature review
Background: Neurofibromatosis type 1 (NF1) is an autosomal dominant disease that can give rise to the formation of vascular lesions in affected individuals. These lesions, whether occurring spontaneously or as a result of trauma, have the potential to cause severe and even fatal hemorrhage. Case description: We present a case with the most extensive hematoma ever documented in a patient with NF1, resulting from a minor trauma. The patient experienced hemodynamic instability due to severe anemia. Arteriography revealed a rupture in an intercostal artery, which was successfully treated through interventional embolization to stop the hemorrhage. Additionally, we implemented a refined surgical approach, beginning with suturing, followed by the meticulous resection of necrotic and aberrant tissues, thereby markedly diminishing bleeding. Conclusion: Minor trauma may cause severe bleeding in patients with NF1, which can be life-threatening. Timely diagnosis of NF1 and effective hemostatic techniques are key to successful treatment.
Introduction

Neurofibromatosis type 1 (NF1), also known as von Recklinghausen's disease, is an autosomal dominant disease caused by mutations in the NF1 gene encoding neurofibromin on chromosome 17q11.2 (1). The incidence rate is approximately 1 in every 2500-3000 people worldwide, irrespective of sex or ethnic origin (2). It usually affects the skin, nervous system, and bones and can also cause vascular lesions in patients (3). The prevalence of vascular lesions associated with NF1 ranges between 0.4% and 6.4% (4). Given that numerous vascular lesions associated with NF1 are asymptomatic, the true incidence rate may be greater than currently estimated (5). Vascular lesions include increased vascular fragility, stenosis, aneurysms, pseudoaneurysms, and arteriovenous malformations (6). Spontaneous or traumatic vascular rupture in patients of this kind can lead to life-threatening hemorrhage (7). In this case study, we detail the progression of a patient with NF1 who, after a minor traumatic injury to the trunk, experienced a severe and potentially fatal subcutaneous hemorrhage. This is the largest subcutaneous hematoma reported so far caused by diffuse cutaneous neurofibromatosis originating from the trunk.
Case presentation
A 40-year-old male patient was brought to our medical center with a 10-day history of a large, painful subcutaneous hematoma on the right side of his torso following a minor trauma. Ten days earlier, the patient's right lower back had been inadvertently struck by the open swinging door of a parked truck. The discomfort resulting from the blunt trauma was moderate, without any apparent skin abrasions or wounds. Two hours post-injury, the pain intensified and a pronounced lump emerged at the site of the injury, steadily growing in size. After undergoing fluid resuscitation at the local hospital, the patient was transferred to a tertiary hospital 19 hours post-injury. Upon admission, the lump on the right side of the trunk had grown to a size of 30 × 15 cm with several blisters (bullae) on its surface. The hemoglobin (HB) was 62 g/L. Enhanced CT scans showed a subcutaneous hematoma in the right trunk, with local contrast agent accumulation within it, indicating the presence of active bleeding (Figure 1A). The patient was diagnosed with a massive subcutaneous hematoma on the right side of the trunk and hemorrhagic shock. Arteriography revealed a rupture and bleeding in an intercostal artery (Figure 1B), and subsequent embolization was successfully executed to arrest the bleeding. Following the procedure, the patient's HB levels remained stable and showed a gradual increase over time. During the initial three days following admission, the patient received transfusions amounting to 3150 ml of red blood cells, 1200 ml of plasma, and 20 g of human serum albumin. Subsequent to this period, no additional blood products were administered.
Ten days post-injury, the patient was transferred to our medical center for management of the hematoma on the right trunk and the associated necrotic tissue. His past, personal, and family history was unremarkable. After admission, physical examination revealed that the patient's vital signs were stable. B-ultrasound (see Figure 1C) and MRI (see Figure 1D) were performed. Multiple café-au-lait macules were observed across various regions of the body, including the trunk, buttocks, and limbs. The largest macule extended over the entire back, reaching from the spina iliaca to the neck and encompassing both armpits. It extended laterally to the left anterior axillary line, and on the right, it spanned to include areas of the right chest and abdomen. A sizable, firm mass measuring approximately 33 cm by 16 cm and with a height of 10 cm was observed on the right side of the trunk. A hardened eschar, indicative of necrotic skin, measuring approximately 20 cm by 15 cm, was present on the mass, accompanied by mild erythema and edema in the adjacent tissue (refer to Figures 2A-C). The entire mass was situated within the pigmented region on the trunk. The complete blood count (CBC) results indicated an elevated white blood cell (WBC) count at 15.6 × 10^9/L and decreased hemoglobin (HB) levels at 87 g/L. All additional laboratory parameters were within normal ranges. After admission, the patient received treatments including anti-infective therapy and nutritional support. On the second day following admission, an incision and drainage of the hematoma were carried out. The incision was made along the median line of the long axis of the hematoma. A total of 3000 ml of dark red blood and blood clots were evacuated. The internal surface of the hematoma displayed a dark red coloration and was fragile, readily bleeding upon contact (see Figure 2D). It was planned to remove the eschar and the unhealthy inner wall of the hematoma. However, throughout the resection procedure, the application of electrocoagulation and electric resection failed to provide effective hemostasis, and substantial bleeding persisted even with the implementation of ligation techniques for blood control. Only a small portion of necrotic tissue was excised, after which the wound was closed with negative pressure wound therapy (NPWT). Two days after the initial debridement, the patient continued to have significant necrosis at the wound site, which was associated with the development of pyrexia. The complete blood count revealed a white blood cell (WBC) count of 22.4 × 10^9/L. A subsequent debridement procedure was performed to remove the eschar along with the adjacent nonviable soft tissue. In this instance, the debridement was performed using a technique that entailed initial suturing followed by resection. This approach required the use of 1-0 absorbable sutures to first secure the juncture between the tissue designated for removal and the tissue to be preserved. After the suturing was completed, the tissue designated for removal was meticulously excised using a scalpel or scissors, closely adjacent to the suture line. The repeated application of this method for the excision of necrotic and unhealthy tissue led to a substantial reduction in bleeding. The surgery successfully excised most of the necrotic tissue and also delicately cleared portions of the vulnerable, bleed-prone inner hematoma wall. Post-surgery, the patient's body temperature normalized and white blood cell counts steadily declined. Subsequently, three additional surgeries were conducted. The remaining necrotic tissue and the unhealthy inner lining of the hematoma were excised utilizing the aforementioned technique. Three weeks after the initial surgery, a skin grafting procedure using a razor-thin graft from the scalp was performed to repair the wound. Based on the pathological examination of the hematoma inner wall tissue, which showed the presence of diffuse cutaneous neurofibromas and thin-walled ectatic blood vessels (see Figure 2E), along with the results of the general physical examination (numerous café-au-lait macules), the patient met the diagnostic criteria for NF1 (8). The skin grafts healed well (see Figures 2F, G). The patient was discharged 2 months after admission. The patient was under observation for a period of twelve months and exhibited no hematoma recurrence. A timeline of the case, with all the important events marked, is presented in Figure 3.
Discussion
NF1 is a complex, multisystemic, dominant genetic disorder caused by mutations on chromosome 17 (9). Angiopathy linked to NF1 has the potential to lead to the rupture of blood vessels and subsequent bleeding. Improper or untimely diagnosis and treatment of NF1-related angiopathy can have severe consequences, potentially leading to fatal outcomes. However, it is easy to overlook the diagnosis of NF1 in patients who experience bleeding due to minor trauma. Our patient had presented at three different health facilities prior to admission to our center, where a diagnosis of NF1 had not previously been established. This oversight may be attributable to the tendency for clinicians to associate spontaneous bleeding with specific pathologies, whereas bleeding secondary to trauma is less commonly considered indicative of an underlying disorder such as NF1. Moreover, when bleeding or hematomas occur, it is surgeons who see the patients rather than dermatologists, thus increasing the possibility of a missed diagnosis. Ten cases of NF1 patients with critical hematomas located in the trunk region have been documented (refer to Table 1). In those ten cases, the diagnosis of NF1 had been made early, before the occurrence of the hematomas, even in childhood. Our case was the only one in which NF1 had not been diagnosed before the patient came to our center. Therefore, early diagnosis of NF1 plays a crucial role in the prevention and treatment of bleeding or hematoma. Hemostasis is essential in treating large hematomas in patients with NF1. Angiography can precisely identify the bleeding source within NF1 vascular abnormalities. Effective interventional embolization can quickly halt the bleeding and decrease the risk of hemorrhage in future surgical procedures. The rupture of the artery and the hemorrhaging in our patient, as well as in the six previously reported cases, were successfully halted by means of interventional embolization. However, arterial embolization does not work well in patients who are bleeding from veins and capillaries in neurofibroma tissue. The use of hemostatic materials, application of compression, and systemic administration of tranexamic acid have all demonstrated efficacy. Extensive intraoperative hemorrhage is a critical concern during neurofibroma resection procedures, due to the proliferation of fragile, ectatic vascular structures within the neoplastic tissue (20-23). By examining the pathological data (refer to Figure 2E), B-ultrasound imaging (see Figure 1C), and MRI scans (see Figure 1D), it is observable that there are numerous tortuous and thin-walled ectatic blood vessels which frequently exhibit a non-functional tunica media within their structure, resulting in a diminished or absent capacity for muscular contraction. Baek et al.
proposed three methods to reduce bleeding during NF1 resection: hypotensive anesthesia, preliminary sutures around the lesion, and ligation of the limited number of feeding vessels in the vascular malformation of the neurofibroma (14). In this case, we implemented a refined surgical approach, beginning with suturing, followed by the meticulous resection of necrotic and aberrant tissues, thereby markedly diminishing bleeding. When suturing, it is important to apply moderate force, as the neurofibroma and the vessels contained within it are fragile. The sutures should be placed in an overlapping fashion. The necrotic tissue and neurofibroma should be carefully removed with a scalpel or scissors, trimming as close to the sutures as possible without compromising the integrity of the stitched area. Based on our experience, utilizing the aforementioned technique can significantly diminish the incidence of bleeding during the intraoperative debridement process. Following the surgical procedure, the wound was dressed using NPWT with a polyvinyl alcohol sponge. The therapy was successful, resulting in only a minimal amount of blood drainage. This suggests that NPWT is an effective option for dressing such wounds without contributing to increased postoperative hemorrhage. The extent of neurofibroma involvement in this case is considerable. Because of this, surgical efforts were concentrated on meticulously closing the wound; however, a substantial portion of the lesion remains. As a result, careful and continuous monitoring will be necessary to assess the future progression and outcomes.
Conclusion
Minor trauma may cause severe hemorrhage in patients with NF1. A comprehensive understanding of the vascular lesions of NF1 is essential. Timely diagnosis of NF1 and effective hemostatic techniques are key to successful treatment.
FIGURE 1 Examination of the patient. (A): Contrast-enhanced computed tomography revealed extravasation of the contrast in the hematoma (red arrow). (B): Emergency interventional radiology showed vascular leakage (red arrows). (C): B-mode ultrasound demonstrates thickening of the skin and subcutaneous fat layer on the back, characterized by uneven echogenicity, with a maximum thickness of 2.7 cm. CDFI reveals abundant blood flow signal within the affected area. Spectral Doppler analysis exhibits arterial- and venous-like flow patterns. An increase in the thickness of the deep fascia is evident, measuring up to 2.9 mm at its thickest point. (D): MRI scan reveals soft tissue thickening and enlarged vascular structures in the lower back.
FIGURE 2 A photographic series illustrating the patient's condition before and after our treatment. (A-C): A substantial subcutaneous hematoma on the right side of the trunk. The hematoma is noted for its remarkable size, measuring 33 cm in length, 16 cm in width, and 10 cm in depth, indicating a significant accumulation of blood beneath the skin. (D): The appearance of the inner wall tissue of the hematoma during the surgical procedure. It is dark red and fragile, with a tendency to bleed upon contact. (E): Pathological image, observed under a hematoxylin and eosin stain at an original magnification of 40 times. (F, G): Successful healing of skin grafts at one month post-procedure.
FIGURE 3 Timeline of the case.
TABLE 1 Reported cases of NF1 patients with hematomas located in the trunk region.
Ruptured Left Ventricular False Tendon Mimicking a Mural Vegetation
Left ventricular false tendons are cord-like structures that traverse the left ventricular cavity. They are found in approximately half of the human hearts examined at autopsy and have no clinical or prognostic significance. They have been well described and usually pose no diagnostic dilemma. We present the first case of a partially ruptured false tendon mimicking mural vegetation in an 80-year-old male with extended-spectrum beta-lactamase Escherichia coli bacteremia.
Introduction
Left ventricular false tendons (LVFTs) are distinct fibromuscular structures that are extensions of the innermost myocardial layer of the left ventricle and may vary in size and dimension [1]. They are generally benign anatomic variants that are remnants of the embryologic development of the four-chambered heart. Data regarding the prevalence, incidence, and clinical significance of LVFTs come largely from autopsy studies and two-dimensional echocardiographic studies from tertiary centers. The proposed incidence ranges from 18-26% in echocardiographic studies and 34% in autopsy studies [1]. Their clinical significance is described in a prospective Framingham study, which determined that LVFTs were most commonly associated with innocent precordial murmurs and electrocardiographic (ECG) evidence of left ventricular hypertrophy, but no statistically significant association was established with the risk of increased mortality secondary to cardiac etiology [1]. Although intact LVFTs are readily identified on echocardiograms and have been established to have no clinical significance, ruptured tendons in the cavity of the left ventricle may resemble vegetations, thrombus, or ruptured chordae tendineae and in an appropriate clinical setting may lead to a false diagnosis and inappropriate management of a relatively benign finding [2]. Here, we describe an unusual case of a partially ruptured false tendon mimicking mural vegetation.
Case Presentation
We present a case of an 80-year-old male with a past history of hypertension, heart failure with reduced ejection fraction, type 2 diabetes mellitus, dyslipidemia, and acute coronary syndrome status post percutaneous coronary intervention with non-drug eluting stent placement in 2019. He presented with a three-day history of subjective fever associated with multiple episodes of non-bloody emesis and intermittent right-sided lower abdominal pain with radiation to the ipsilateral back. No additional complaints were elicited on the initial presentation. No flank tenderness was appreciated on physical examination.
The patient was febrile to 103.1° F with an elevated white blood count (WBC) of 14,600 per microliter. He was hemodynamically stable and started on empiric broad-spectrum antibiotics. CT abdomen was significant for mild perinephric stranding but negative for calculi. Urine and blood cultures on admission were positive for extended spectrum beta-lactamases (ESBL) Escherichia coli and he was transitioned to meropenem given the sensitivity profile.
The hospital course was complicated by a transient episode of dyspnea, which responded to diuretics. An electrocardiogram done at that time showed sinus rhythm with non-specific ST-T-wave abnormalities, unchanged from prior. A transthoracic echocardiogram (TTE), performed because of the change in clinical status, showed an ejection fraction of 25-30% with severely decreased global left ventricular systolic function and aneurysmal anterior and anteroseptal walls, unchanged from the prior TTE. However, there was a new finding of a mobile echodensity (Figure 1) attached to the basal septum of the left ventricle. The location of the linear density, when compared with prior echocardiograms, was consistent with a partially ruptured false tendon (Figure 3, current and prior TTE). Repeat blood cultures on appropriate antibiotic therapy were negative.
FIGURE 3: Comparison between current and prior transthoracic echocardiogram
A: current transthoracic echocardiogram shows a linear structure (yellow arrow) next to the echodensity (red arrow).
B: prior echocardiogram from one year ago shows a hyperechoic false tendon in the same location as the linear structure from the current echocardiogram.
Discussion
Although LVFTs are a well-established echocardiographic finding widely reported in the literature, in almost all reported cases they remain attached as they traverse between the interventricular septum and the left ventricular free wall or papillary muscles, without any connection to the mitral valve leaflets [2]. In only one case report was an LVFT found to be dissociated from the ventricular wall; in that case, however, the rupture of the LVFT was secondary to endocarditis due to brucellosis infection [3]. There have been no case reports describing a partially ruptured LVFT.
As mentioned earlier, even though LVFTs are a benign finding and are readily seen on echocardiograms, partially ruptured LVFTs may resemble vegetation or thrombus, resulting in unnecessary therapies including anticoagulation or extended antibiotics. It is hard to distinguish between a partially ruptured LVFT and vegetation on a transthoracic echocardiogram; our case demonstrates the importance of comparing current studies with prior studies, which provide additional clues and help make a more informed diagnosis.
It is also important to use additional tests in cases where a clear diagnosis cannot be made on TTE. A transesophageal echocardiogram (TEE) has increased sensitivity compared with TTE and can help characterize the opacities better. Adequate description and characterization of an echodensity on echocardiogram, with commentary on tissue texture and attachment site, may help differentiate between different structures [3].
Conclusions
LVFTs are cord-like structures that cross the left ventricular cavity and have no prognostic significance. Our case report illustrates a very rare entity, an idiopathic primary partial rupture of an LVFT, and the importance of a systematic approach in coming to a correct diagnosis. We urge other researchers to pay close attention to the aforementioned finding, given its subtlety.
Additional Information Disclosures
Human subjects: Consent was obtained by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Large-scale recording of neuronal activity in freely-moving mice at cellular resolution
Current methods for recording large-scale neuronal activity from behaving mice at single-cell resolution require either fixing the mouse head under a microscope or attachment of a recording device to the animal’s skull. Both of these options significantly affect the animal behavior and hence also the recorded brain activity patterns. Here, we introduce a different method to acquire snapshots of single-cell cortical activity maps from freely-moving mice using a calcium sensor called CaMPARI. CaMPARI has a unique property of irreversibly changing its color from green to red inside active neurons when illuminated with 400 nm light. We capitalize on this property to demonstrate cortex-wide activity recording without any head fixation, tethering, or attachment of a miniaturized device to the mouse’s head. Multiple cortical regions were recorded while the mouse was performing a battery of behavioral and cognitive tests. We identified task-dependent activity patterns across motor and somatosensory cortices, with significant differences across sub-regions of the motor cortex and correlations across several activity patterns and task parameters. This CaMPARI-based recording method expands the capabilities of recording neuronal activity from freely-moving and behaving mice under minimally-restrictive experimental conditions and provides large-scale volumetric data that are currently not accessible otherwise.
The mammalian brain processes sensory information using synchronized activity of brain-wide distributed neurons that are connected into local circuits [1][2][3][4][5][6] , which emphasizes the need for developing recording methods that are capable of capturing these complex activation patterns.To address this challenge, previous efforts have concentrated on improving the sensitivity of sensors to capture single-cell activity [7][8][9][10][11][12] and enhancing the capability of recording systems to track neurons over large brain regions [13][14][15][16][17] .In parallel, scientific paradigms have shifted to analyzing neuronal activity in behaving animals while they process sensory cues to perform a task, with much of this work performed using the mouse as a model.This approach has enabled identifying the functional roles of specific cell types 18 , brain regions 4 , and/or projections between brain regions 1,6 , as well as determining how normal activity patterns are altered following a neurological condition 19,20 or a model for neurodegenerative diseases 21,22 .
Currently, recordings from many neurons spanning multiple brain regions with single-cell resolution in behaving mice are mostly conducted using genetically-encoded calcium indicators (GECIs).These experimental paradigms include either monitoring of headfixed mice using two-photon laser scanning microscopy (TPLSM) 23,24 , or attaching a miniaturized imaging device to the skull of a freelymoving mouse to record single-photon or two-photon fluorescence [25][26][27] .Importantly, both of these methods have inherent limitations.For example, TPLSM recording from behaving rodents requires head fixation of the mouse under a microscope, which may result in activation of different neuronal circuits compared to natural, freely-moving behaviors 28 .In addition, most state-of-the-art TPLSM systems are capable of recording from one large plane spanning up to several mm 2 , or from a few smaller, axially-shifted planes [15][16][17] .These microscopes are usually limited by the mechanics of laser scanning systems, which restrict the effective field-of-view (FOV) size that can be dynamically monitored.The acquired information is limited to the inside of this FOV, and so the activity of nearby neurons outside the FOV, even if they are labeled with a GECI, cannot be simultaneously detected.When brain activity is recorded using an implanted miniaturized imaging device, it allows head movement of the mouse during recording, but it puts a substantial weight on its skull.This additional weight may affect the mouse's natural behaviors, and hence also the recorded neuronal activation patterns.In addition, the spatial resolution and volumetric recording capabilities are compromised compared to TPLSM recording.
Calcium-modulated photoactivatable ratiometric integrator (CaMPARI) is a calcium-and light-dependent fluorescent activity marker 29 , which may enable combining the relative simplicity of GECIs with TPLSM large-scale recording capabilities and free movement of the animal.Upon illumination with 400 nm light in a high-calcium environment, CaMPARI undergoes an irreversible conformational change, and its fluorescence emission changes from green to red in a process called photoconversion (PC).CaMPARI was previously demonstrated to label active neurons in the mouse visual cortex with red fluorescence based upon their tuning properties 29 , and a recent version of this sensor, CaMPARI2, exhibits a brighter signal and better contrast between the red and green components 30 .CaMPARI's unique calcium-dependent PC capability allowed us to design a distinct experimental paradigm where the experimental recording and signal readout processes are separated.In this study, we show that this paradigm shift facilitates neuronal activity recording over a larger brain volume than what has been possible with state-ofthe-art TPLSM systems.Large-scale CaMPARI-based recording is conducted by shining a PC light over the animal and its experimental environment, which induces red fluorescence in active neurons in the mouse brain in a non-transient manner.The readout of the photoconverted CaMPARI fluorescence is conducted after the experiment is completed using a standard TPLSM system.This recording paradigm is fundamentally different than previous recordings with TPLSM systems, where the recording and readout processes occur simultaneously and cannot be separated.In this study, we reveal the advantages of CaMPARI-based recording for detecting activity from brain volumes larger than 6 mm 3 with single-cell resolution.We validate the accuracy of the CaMPARI-based recording method by comparing the results to recordings with the widely-used GECI, jGCaMP7s 10 .We show functional differences between activity patterns of excitatory and parvalbumin-positive (PV-positive) inhibitory neurons when the mouse is presented with visual stimulation.Finally, we demonstrate the capability of the CaMPARI-based recording method to monitor single-neuron activity over a large cortical volume in freelymoving mice without any mechanical device attached to the mouse during the recording phase, in order to compare activity level patterns across five somatomotor cortical regions and to correlate these patterns with behavioral parameters as the mice perform a battery of behavioral tasks.
Characterization of CaMPARI-based recording capabilities
Although the green-to-red PC was reported to be permanent at the single-protein level 29,30 , the photo-converted red-to-green ratio (RGR) in vivo decreased during the days following PC, such that ~97% of it decayed by one week (Fig. 1a and Supplementary Figs. 1, 2), presumably due to degradation of the red protein and production of new (green) protein. To calculate the rate of RGR decay, we longitudinally monitored V1 neurons (n = 73 from 2 mice) and measured the RGR after PC. The results were fit with an exponential decay model with a half-life of 1.04 days (R 2 = 0.99; Fig. 1b). Multiple PCs of the same brain region and neurons were demonstrated by two additional recordings, separated by 10 days each, which yielded similar activity patterns (Fig. 1c). Therefore, we concluded that CaMPARI can be used multiple times for sequential recording sessions.
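The decay fit described above can be reproduced in outline with a standard nonlinear least-squares routine. The sketch below is illustrative only: the day numbers and normalized RGR values are placeholders rather than the measured data, and the single-exponential model normalized to the day-0 level simply follows the description in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical median RGR values normalized to the day-0 level
# (placeholder data, not the values reported in the study).
days = np.array([0, 1, 2, 4, 7, 10, 15], dtype=float)
rgr = np.array([1.00, 0.52, 0.26, 0.07, 0.01, 0.004, 0.002])

def exp_decay(t, half_life):
    """Single-exponential decay normalized to 1 at t = 0."""
    return 0.5 ** (t / half_life)

(half_life,), cov = curve_fit(exp_decay, days, rgr, p0=[1.0])
print(f"fitted half-life: {half_life:.2f} days (+/- {np.sqrt(cov[0, 0]):.2f})")
```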
Next, we characterized CaMPARI's capability to identify active brain regions as a function of the amount of PC light shined on the mouse brain (light dose).Mice were injected with an Adeno-associated virus (AAV) carrying the CaMPARI2 sequence into their primary visual and somatosensory cortices (V1 and S1, respectively; n = 8 mice).The medians across the RGR recorded from all neurons in each region were compared after illumination of PC light during the presentation of a drifting grating movie to the contralateral eye (see "Methods").Light quanta of up to ~6 J/mm 2 were delivered by illuminating a 5-7 mmdiameter cross-section around the craniotomy opening and using up to 200 mW PC light for 1 s, followed by 11 s without illumination to allow the tissue to cool down.No signs of thermal damage were found for any tested light doses up to 1150 J/mm 2 (192 illumination cycles at full power; Supplementary Fig. 3).The average V1 RGR levels were higher than S1 for all tested light doses, and the sensitivity index (d′, see "Methods") that quantifies the separation among the V1 and S1 RGR distributions increased with light dose to a peak at 300 J/mm 2 (Fig. 1d, upper panel).As more PC light was used, the green CaMPARI fluorescence decreased down to 60% of its initial emission level and the red fluorescence increased (Fig. 1d, lower panel).Following these findings, a light dose range of 150-300 J/mm 2 was selected for the subsequent visual stimulation experiments (with recorded neurons from either one or two hemispheres) to balance between sensitivity and PC illumination time.The optimal 300 J/mm 2 dose levels could be easily achieved for illuminating one hemisphere but requires a 4-fold increase in illumination time when the two hemispheres are illuminated.Such a prolonged recording time may result in changes in the mouse condition during the recording, and therefore a lower light dose was used for these experiments.
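The sensitivity index used here to quantify the separation between V1 and S1 RGR distributions can be computed as the difference in means scaled by the pooled standard deviation. The snippet below is a minimal sketch with made-up RGR samples; the standard d′ form shown is an assumption, since the authors' exact definition (given in their Methods) is not reproduced above.

```python
import numpy as np

def d_prime(a, b):
    """Sensitivity index: mean separation scaled by the pooled standard deviation.
    Standard form; assumed to match the d' defined in the paper's Methods."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2.0)
    return (a.mean() - b.mean()) / pooled_sd

rng = np.random.default_rng(0)
v1_rgr = rng.normal(0.35, 0.10, 500)   # placeholder V1 RGR values
s1_rgr = rng.normal(0.20, 0.10, 500)   # placeholder S1 RGR values
print(f"d' (V1 vs. S1) = {d_prime(v1_rgr, s1_rgr):.2f}")
```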
In a separate set of experiments, the RGR values were measured at different tissue depths down to 240 µm under the pia.No apparent changes in RGR values were found for different depths (Fig. 1e, Supplementary Fig. 4), which suggests that the PC-based recording levels were not substantially biased by the tissue depth when Layer II/III neurons were monitored.These results were supported by measuring the RGR depth decay in neurons expressing mEOS, a calciuminsensitive photoconvertible protein that was used to develop CaMPARI 29 .Decay of 20-30% in RGR values was identified across Layer II/III depth, which presumably contributed a negligible fraction of the variability to the recorded data.A more substantial light attenuation was found for RGR levels across Layers II/III and V neurons (Supplementary Fig. 5).
Simultaneous neuronal activity recording from multiple brain regions
Simultaneous volumetric recording over a large cortical area was accomplished by expressing CaMPARI2 in the monocular and binocular visual cortices (V1m and V1b, respectively) and the somatosensory cortices (S1) of the two hemispheres of mice (n = 4).A drifting grating movie was presented to either the right or left eye of lightlyanesthetized mice and was synchronized with PC illumination that illuminated both hemispheres and covered a brain volume of 6 mm 3 (~10 mm 2 of cortical surface for each hemispheric window, with detectable signal down to a depth of ~300 μm; 150 J/mm 2 were used for PC to shorten the experiment duration).Once the PC recording was completed, the neuronal RGR was recorded using TPLSM (Fig. 2a).A control group (n = 6 mice) expressed jGCaMP7s in V1b, V1m, and S1 of one hemisphere and we recorded the fluorescence changes evoked by visual stimulation to each eye, as was previously done 7,10,31 .Both CaMPARI2-and jGCaMP7s-expressing neurons showed similar activity patterns, where visual regions were more active than somatosensory regions, and increased activity was detected in the contralateral compared to the ipsilateral visual regions (Fig. 2b, c, Supplementary Fig. 6).Notably, while CaMPARI-based data were recorded from all 6 brain regions simultaneously, jGCaMP7s-based data required a much longer recording process.Visual activity was recorded from 3-5 FOVs within each brain region sequentially, and therefore required combining recordings from 142 individual time points.
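The contralateral-versus-ipsilateral comparisons summarized here can be framed as a nonparametric two-sample test on the per-neuron RGR values. The snippet below is a minimal sketch with synthetic samples; the use of SciPy's rank-sum test is an illustrative choice, not a statement of the authors' exact statistical pipeline.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(1)
contra_rgr = rng.lognormal(mean=-1.0, sigma=0.4, size=300)  # placeholder contralateral RGRs
ipsi_rgr = rng.lognormal(mean=-1.3, sigma=0.4, size=300)    # placeholder ipsilateral RGRs

stat, p = ranksums(contra_rgr, ipsi_rgr)
print(f"rank-sum statistic = {stat:.1f}, p = {p:.2e}")
```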
Recording from genetically-targeted neuronal populations
Selective recording from genetically-targeted cellular populations was achieved by injecting cre-dependent CaMPARI2 AAV into the V1 and S1 regions of Emx1-cre 32 or B6 PV cre mice to express CaMPARI2 in either excitatory or PV-positive inhibitory neurons, respectively (n = 3 mice for each group).An additional group of C57BL/6 J mice (n = 6) were injected with AAV expressing CaMPARI2 under the human synapsin1 promoter (AAV-SYN1) to express CaMPARI2 in both excitatory and inhibitory neurons.All mice were implanted with cranial windows on top of the left hemisphere.V1 and S1 activity was recorded from lightlyanesthetized mice during the presentation of a drifting grating movie to either the contralateral or ipsilateral eye using 300 J/mm 2 light dose for PC.V1 RGR levels were higher than S1 for the three groups, with the highest levels measured in B6 PV cre mice, then in AAV-SYN1 mice, and the lowest in the Emx1-cre mice, indicating different ranges of activity levels in these groups (Fig. 2d).Moreover, the range of RGRs measured from B6 PV cre V1 neurons was significantly larger than from the other two groups (Fig. 2e).Interestingly, while the RGR range was significantly larger for V1 vs. S1 neurons for the AAV-SYN1 and Emx1-cre groups, there was no apparent difference for the B6 PV cre group (Fig. 2e).When comparing the d′ for separating V1 and S1 RGR distributions across the different groups, we found that for contralateral eye stimulation, Emx1-cre had significantly higher d′ values than B6 PV cre and non-significantly higher values than AAV-SYN1 (Fig. 2f).When comparing the d′ values for contralateral vs. ipsilateral eye stimulation, there were significantly higher d′ values for contralateral stimulation for the Emx1-cre mice and no apparent difference for the B6 PV cre group.
Large-scale volumetric recording of brain activity from freely-moving mice
Next, we moved to recording neuronal activity from freely-moving mice by expressing CaMPARI2 in 5 motor and somatosensory regions of the same hemisphere (motor caudal front limb, M CFA ; motor rostral front limb, M RFA ; motor neck-jaw, M NJ ; somatosensory forelimbs, S FL ; and somatosensory barrel field, S BF ; n = 8 mice; see "Methods" for details).The same mice were trained and tested on three tasks (novel object recognition in the open field, NOR), rotarod (RR), and fear conditioning (FC), performing a new task every two weeks using arenas equipped with a PC light source (Fig. 3a).For all experiments, the mice were first trained for the particular behavioral task.Following the completion of the training phase, the next session was conducted with the PC light turned on to photoconvert cells during 15 min of recording (Fig. 3b, Supplementary movies 1-3, see "Methods" for details).A set of control experiments found no significant effects for the implantation of cranial window or illumination with 400 nm PC light on the tested performance parameters of the mice in the NOR, RR, and FC tasks (Supplementary Figs.7-9).Readout sessions were conducted 24 h after recording (Fig. 3c supplementary movie 4).RGR was measured from all identified neurons from the pial surface down to a depth of ~300 μm.Evaluating cellular RGRs across different cortical regions and tasks in the same mouse (Fig. 3d), as well as across mice (Supplementary Fig. 10), yielded a significant ~2.5-fold increase in NOR median activity levels compared to RR and FC (Fig. 3e).Somatosensory regions were more active than motor regions across the three behavioral tasks (Fig. 3f).
When comparing activity across the different motor regions, M CFA was significantly more active than M RFA and M NJ , although M CFA and M RFA project to the same limb 33 (Fig. 3g).There were no significant differences in activity levels between the two somatosensory regions S BF and S FL (Supplementary Fig. 11), or between cortical layers I and II/ III.(Supplementary Fig. 12).Interestingly, the averaged cortical activity measured during the FC memory test from all mice was correlated with the averaged interstimulus intervals (ISIs) during fear memory learning on day 1 of the task (Fig. 4a; n = 8 mice).Moreover, the activity of individual brain regions showed significant correlations with at least two out of the four individual ISIs (Fig. 4b), suggesting that activity in the somatomotor cortex during the memory test reflects aspects of the fear learning process.Among the recorded brain regions, S1 FL showed correlation with all ISIs.Interestingly, we saw a similar pattern for the rotarod test, where S1 FL was correlated with the mean fall latency during the learning phase of the test (Fig. 4c; n = 8 mice), and for the NOR test, where S1 FL activity was correlated with the time spent with the novel object (Fig. 4d; n = 11 mice).
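The region-by-region correlations with behavioral parameters reported above (for example, S1 FL activity versus time spent with the novel object) correspond to simple linear association tests. The sketch below uses invented per-mouse numbers purely to illustrate the computation; it is not a reproduction of the study's data or of its exact statistical model.

```python
import numpy as np
from scipy.stats import pearsonr

# Placeholder per-mouse values: median S1_FL RGR and percent time spent
# with the novel object (illustrative numbers only).
s1fl_rgr = np.array([0.12, 0.18, 0.22, 0.15, 0.30, 0.25, 0.10, 0.28, 0.20, 0.17, 0.24])
pct_novel = np.array([48, 55, 61, 52, 70, 66, 45, 68, 58, 54, 63])

r, p = pearsonr(s1fl_rgr, pct_novel)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```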
Finally, we tested the reproducibility of brain activity recording in another set of experiments.Mice were tested in the novel object recognition task for two subsequent weeks using two new objects every week, and their brain activity was recorded during the novel object recognition phase (see "Methods").For both weeks, the mice showed the expected preference towards the novel object, but no apparent difference in behavioral parameters like the total or the percent of time spent with the novel object, or the discrimination index (Fig. 4e, Supplementary Fig. 13).Similarly, no significant differences in the median RGRs of the somatosensory or motor regions were identified (Fig. 4f).
Discussion
This work introduces a new method for acquiring simultaneous volumetric neuronal activity patterns from freely-moving rodents without mounting the animal's head under a microscope, tethering it, or attaching a miniaturized imaging device to its skull.This method enables large-scale recording across multiple brain regions at singlecell resolution.Therefore, this paradigm overcomes the respective limitations of TPLSM-based recording (optimized for planar data acquisition from head-mounted rodents) and miniaturized devicebased recording methods (challenges with volumetric acquisition, image quality, and FOV size).This method is especially beneficial for studying cortex-wide activity patterns in behaving mice, where minimal restrictions on the animal's naturalistic performance are required 34 .Importantly, CaMPARI-based recording is performed simultaneously across the entire PC-illuminated volume, and therefore may highlight brain regions and/or specific cell types that are active during specific behaviors.This large-scale mapping may serve as the first step to identify active brain regions (Fig. 2c and Fig. 3d-g) followed by a more detailed study of dynamic action potential firing patterns using either CaMPARI's dynamic recording capabilities 29,30,35 or by expressing a more sensitive GECI in the target brain region(s).CaMPARI-based recording is also compatible with mapping of cortexwide activity patterns in response to different stimulations, including sensory, chemogenetic, or optogenetic [36][37][38][39] .
Fig. 2 | Simultaneous recording from multiple brain regions. a Schematic illustration of a visual-evoked activity recording experiment with CaMPARI. b, c Example and summary data from CaMPARI2- and jGCaMP7s-expressing mice across V1m, V1b, and S1 of the two hemispheres, with sensitivity indices (d′) and significance levels for contralateral vs. ipsilateral stimulation. d RGR levels in V1 and S1 for AAV-SYN1, Emx1-cre, and B6 PV cre mice. e The 5-95 percentile range of the RGR distributions across groups and regions. f Sensitivity indices (d′) for separating V1 and S1 RGR distributions across groups and stimulated eyes.
This study demonstrates the benefits of the CaMPARI-based recording method for several applications. First, previous works characterized the relationship between firing of action potentials and PC rate in cultured neurons 29,30 . However, the use of CaMPARI in rodents was mostly limited to acute PC and downstream analysis following tissue sectioning. The return of the RGR values to their baseline level within ~7 days facilitates same-animal longitudinal monitoring experiments, as we demonstrated in this study (Fig.
3).Second, as was previously shown by widefield one-photon microscopy in either head-fixed mice 40 or rats implanted with a miniaturized microscope 12 , when a visual stimulation is presented to one eye, the visual-evoked response is usually stronger in the contralateral V1, but is also apparent on the ipsilateral V1.Our data are consistent with these findings (Fig. 2b, c) and add the additional advantage of acquiring single-cell resolution data, which is not achievable using single-photon techniques.Third, in addition to single-cell resolution data, the presented method records simultaneously from thousands of geneticallytargeted cells, a feature which is generally not achievable using electrophysiological recordings including high-density electrodes.We demonstrate this feature by conducting a comparative study across inhibitory and excitatory neurons in V1 and S1 and identifying cell-typespecific differences in their activity patterns (Fig. 2d-f).Interestingly, our data suggest that similar to the visual tuning properties of excitatory and inhibitory neurons in the visual system 41 , the excitatory neurons in V1 and S1 differentiate in their response to the stimulus, while the PV-positive neurons show a broader and partiallyoverlapping range of activity levels.Notably, measurements of the sensitivity indices showed a sharper separation of V1 and S1 population activity based on recording excitatory neurons only (Fig. 2f).However, even the PV-positive neurons, which exhibited reduced sensitivity compared to the other recorded groups, showed higher RGRs in V1, suggesting that all recorded cell types present a qualitatively similar activation pattern.Future applications of targeted recording during behavior may probe the role of these and other cell types in the neuronal circuitry underlying the performance of specific tasks or to highlight the spatiotemporal structure of their activity pattern across multiple brain regions.Finally, we demonstrate recording from freelymoving mice under what we consider as minimally-restrictive conditions, compared to the standards in the field today.No mechanical device or tethering were attached to the mice during their training and recording, and the only interventions they experienced were the craniotomy surgery and virus injection, which are currently unavoidable.Three behavioral tasks were selected for the study: NOR, RR, and FC, to study three different types of behaviors and the associated neuronal circuitry.The NOR represents a somatosensory-centered task with a memory component, the RR is a motor-centric task, and the FC task is memory-centered, which includes dominant sub-cortical components and the somato-motor regions are considered less central.We selected to record brain activity from primary somato-motor regions to rely on the existing literature on the relationship between sensory stimuli and the neuronal activity in these regions.Notably, all mice were tested in a certain order: first NOR, then RR, and finally FC.The rationale was that the fear involved with FC may affect any following task, and thus this task was last.During RR, mice may fall and potentially be injured, so we placed this task after the NOR, which was considered to be safe.Due to these considerations, there was no randomization of the mouse testing order.Although two sequential tests of mice for the NOR task (Fig. 4e, f, Supplementary Fig. 
13) and up to three times for the visual stimulation experiments (data not shown) showed no significant change of the results, such an effect cannot be fully excluded.
Unlike most existing recording methods, CaMPARI-based recording allows separation of the recording and readout
processes.This separation allows conducting the recording by shining the PC light over an entire arena in which the rodent is contained, and to simultaneously record from all brain regions that are exposed to the PC light.Importantly, the ability to use CaMPARIbased recording for same-animal longitudinal monitoring differentiates this approach from commonly-used immediate early genebased methods that require sacrificing the animal in order to read the activity data 42,43 .Interestingly, a recent work presented an in vivo method for identifying active cells based upon transgenic expression of the immediate early gene Fos in hippocampal cells 44 .We note that this method also allows conceptually similar separation of the recording and readout processes.However, since it relies upon a more complex, and partially unknown, mechanism that links neuronal firing to expression of immediate early genes, its current implementation enables classification of cells to Fos-high and Foslow groups, and its temporal resolution is measured in hours.CaMPARI's PC rate was shown to have an approximately linear relationship with low-to-medium firing rates of action potentials by neurons 29,30 , and our data (Figs.2b, d, 3e-g) support the recording of different activity levels in different brain regions and across different tasks and cell types.We also note that future works may incorporate the use of the recently-published reversible CaMPARI (rsCaMPARI) 45 , which will allow erasing the CaMPARI activity signal immediately after the readout session, and therefore eliminate the current restriction of CaMPARI2 with regard to the recording interval period.The presented method is limited to acquiring snapshots of largescale activity patterns.The CaMPARI2 sensor, which was used in this study, requires a relatively prolonged PC illumination time for achieving high-quality recording in freely-moving mice (up to 15 min in this study).Shortening the recording session will further enhance the method's capability to monitor brain activity and highlight co-active brain regions across shorter time scales.Such improvements may be achieved by using the earlier generation of the CaMPARI sensor, CaMPARI1 29 , which was recently shown to have better PC properties in vivo than CaMPARI2 35 , or by developing a new generation of the CaMPARI construct to address this specific challenge.In addition, in this study, we have expressed CaMPARI via intracranial AAV injection in specific cortical areas.Thus, we were limited to a finite number of brain areas that could be monitored.The cortex-wide recording capabilities of CaMPARI could be maximized by using systemic AAV injection 46,47 or by developing a transgenic CaMPARI mouse line.Both of these options would enable expressing CaMPARI over most of the mouse cortex, which would achieve access to approximately 1 million neurons without substantially changing the presented recording protocol 48 .
Finally, we established recording from the same mice performing three different behavioral and cognitive tasks, and show changes in brain activity patterns across tasks and brain regions, with correlations between activity and behavioral patterns (Figs. 3 and 4).This type of data demonstrates that CaMPARI-based recording facilitates longitudinal in vivo neuronal activity studies during minimally-restricted behaviors in the same animals.With the recent increase in interest in studying large-scale brain activity patterns, and specifically the characteristics of distributed neuronal circuits 49,50 , the presented method adds unique capabilities and complements the tools that are available to the neuroscience community.
Methods
All experimental and surgical procedures were performed following the set guidelines and protocols approved by the Lerner Research Institute (LRI) and Oregon Health & Science University (OHSU) Institutional Animal Care and Use Committees (IACUCs) and Institutional Biosafety Committees (IBCs) and were consistent with the ARRIVE guidelines.Mice were group-housed in standard vivarium conditions until the start of the study.The vivarium was maintained at 20-22 °C, 30-70% humidity and food (in LRI facility: Teklad 2918 regular diet, Envigo; in OHSU facility: PicoLab Rodent Diet 20, no.5053; PMI Nutrition International) and water were available ad libitum.Lights were kept on a 12-h light/12-h dark cycle, and experiments were conducted during the light time.
Surgical procedure and virus injection
For recording visual-evoked activity (Fig. 2), 8-12-week-old C57BL6/J, Emx1-Cre, and B6 PV cre mice (10 males and 12 females) were anesthetized using isoflurane (3% for induction, 1.5% during the surgery) and placed on a heating pad. Each mouse was injected with local pain medication (bupivacaine 0.5%) and the skull bone above either the two cortical hemispheres (n = 4 mice) or the left hemisphere (rest of the mice) was exposed. A 3 × 5 mm 2 craniotomy was drilled (Omnidrill35, World Precision Instruments) over an area covering the monocular and binocular primary visual (V1m and V1b, respectively) and primary somatosensory cortices (S1) in one or both hemispheres. AAV solution expressing the CaMPARI2 or jGCaMP7s sensor under the human synapsin promoter (SYN1-NEShis-CaMPARI2-WPRE-SV40, Addgene catalog number 101060, SYN1-jGCaMP7s-WPRE, Addgene catalog number 104487) was injected into two locations, separated by ~600 μm, in each cortical region (50 nL of ~1 × 10 12 GC/mL solution, 3 injection depths per location, 200 μm, 400 μm, and 600 μm under the pia) using an automated injection pump (Fusion 200 touch Syringe Pump, Chemyx, and Micro-2T, WPI) and a pulled and beveled micropipette (P-1000 and BV-10, respectively, Sutter Instruments). Injection coordinates were chosen according to the mouse brain atlas 51 : 2.2 mm lateral and 0.2 mm anterior to Lambda (V1m), 2.8 mm lateral and 0.2 mm anterior to Lambda (V1b), and 2.5 mm lateral and 3.4 mm anterior to Lambda (S1). For Emx1-cre 32 (JAX catalog # 005628) and B6 PV cre (JAX catalog #017320) mice, a cre-dependent AAV was used (AAV-PHP.N-SYN1flex-CaMPARI2, Canadian Neurophotonics Platform Viral Vector Core) 47 . Cortex buffer 52 was used consistently to keep the brain wet during the time of surgery and injections. Following the viral injection, a cranial window (two glued layers of rectangular glass, Tower Optical Corporation) was placed carefully (two cranial windows in the case of craniotomy in both hemispheres), and a custom-made metallic head bar was attached using dental cement (Contemporary Ortho-Jet, Lang Dental). Animals were injected with Buprenorphine (0.1 mg/kg) and Ketoprofen (5 mg/kg, immediately, 24, and 48 h after the surgery) for post-operative care and were allowed a minimal recovery time of 3 weeks before the start of experiments.
Recording of visual-evoked activity
Mice were lightly anesthetized (0.5% isoflurane), held on a 37 °C heating pad, and injected with Chlorprothixene Hydrochloride (IM, 30 μL of 0.33 mg/mL solution, Santa Cruz).PC of the CaMPARI2 signal started at least 30 min after the Chlorprothixene Hydrochloride injection, and after verifying that the mouse was responsive to pain but not voluntarily moving.The visual stimulation was presented to the mouse's right or left eye and generated using the psychophysical toolbox 53,54 in MATLAB (Mathworks) on an LCD monitor (30 × 36 cm 2 display, located 15 cm in from of the mouse right eye, tilted 45°with respect to the nose line, and covered with a blue plexiglass to minimize contamination into the recording channels) that subtended an angle of ±50°horizontally and ±45°vertically.The visual stimulus consisted of a drifting grating moving in 1 of 8 directions for 4 s, followed by 8 s of gray display.This stimulation cycle was repeated 5 times.PC light was delivered for 1 s during the presentation of the drifting grating, 1.5 s after the grating appeared, using an X-Cite Fire or Xylis lamps (Excelitas) and a 400/40 nm bandpass filter (Brightline, Semrock) with up to 120 (Fire) or 200 (Xylis) mW output at the sample plane.For recording of visual-evoked activity, the PC light covered either a ~12 mm-diameter circle that included the two cranial windows in it, with an intensity of ~0.9 mW/mm 2 , or a 5-7 mm-diameter circle that covered one hemisphere with intensities of up to ~6 mW/mm 2 .After PC was completed, CaMPARI2 signal was recorded using a two-photon microscope with resonant/galvo scanners (Bergamo II, Thorlabs) with 1040 nm excitation light (Insight X3, Spectra-Physics).Images were acquired using ThorImage software (Thorlabs) with 15 frames per second and 1024 × 1024 pixels covering an area of 585 × 585 μm 2 of layer II/III neurons.Green and red CaMPARI2 signals were recorded simultaneously (525/50 nm and 607/70 nm filters, respectively, separated by a 562 nm dichroic filter, Semrock) using 2 GaAsP PMTs (PMT2100, Thorlabs).For measuring the decay of red CaMPARI signal over time, the same V1 neurons were monitored immediately after the PC and over the subsequent 15 days.
For measuring visual-evoked fluorescence changes with jGCaMP7s 10 , the same TPLSM system described above was used with acquisition of 30 frames per second, 512 × 512 pixels, and the same FOV size.The same drifting grating movie (with 4 s of drifting grating followed by 4 s of gray display) was presented to either the right or the left eye of the mice, and activity was measured from V1b, V1m, and S1 regions of the left cortical hemisphere to sequentially acquire ipsilateral and contralateral activity data.
Recording of cellular activity from freely-moving animals
All mice were injected with AAV expressing CaMPARI2 and implanted with cranial windows over their left hemisphere as described above and were tested in 4 separated cohorts.Eight of these mice were used for the experiments shown in Fig. 3, and additional 5 were used for the repeated NOR recordings and correlation between brain activity and behavior shown in Fig. 4. Following the craniotomy, mice were given 7 days for recovery and were shipped from the LRI to the OHSU, where they were given an additional 3 weeks for quarantine and recovery.N = 8 mice were then randomly divided into 2 groups.Each week, one group was trained for 2-3 days on one of three behavioral tasks (see below).The additional 5 mice were tested for exploratory behavior, measures of anxiety, and object recognition (details below) twice for two following weeks, to identify the recording reproducibility (data from the first week's recording was also grouped with the data recorded from the previous 8 mice).The mouse cranial window was illuminated with PC light during the last trial of a given behavioral test.A broadband light source (X-Cite Xylis, Excelitas) with a 400/40 nm bandpass filter (FBH400-40, Thorlabs) and lightly focused by a 100 mm achromat lens (AC254-100-A, Thorlabs) was placed 20 cm above the arena to illuminate a circular cross-section of 15.25 cm in diameter in which the CaMPARI PC occurred.The light intensity was 330-485 mW, or ~2.65 mW/cm 2 (~250-fold lower than the maximal intensity used with head-fixed mice).Mice were placed inside a plastic enclosure (16.5 cm diameter; TAP plastics) on matte white plastic flooring to keep them inside the illuminated region, and 15 min of PC illumination were used for each recording to elicit sufficient PC signal (based upon preliminary experiments we conducted with other mice to calculate the required light dose and duration).
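The quoted arena intensity can be checked with a short calculation: spreading the lamp output over the illuminated circle gives the per-area intensity, and multiplying by the recording time gives the cumulative dose. The sketch below uses the numbers quoted above; small rounding differences from the ~2.65 mW/cm² figure in the text are possible.

```python
import math

diameter_cm = 15.25          # illuminated circle in the behavioral arena
power_mw = 485.0             # upper end of the quoted 330-485 mW lamp output
duration_s = 15 * 60         # 15 min of PC illumination

area_cm2 = math.pi * (diameter_cm / 2) ** 2
intensity_mw_per_cm2 = power_mw / area_cm2            # ~2.7 mW/cm^2
dose_j_per_cm2 = intensity_mw_per_cm2 * duration_s / 1000.0

print(f"area = {area_cm2:.0f} cm^2")
print(f"intensity = {intensity_mw_per_cm2:.2f} mW/cm^2")
print(f"cumulative dose = {dose_j_per_cm2:.2f} J/cm^2")
```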
Activity readouts were acquired using two-photon microscopy 24 h after the PC during the behavioral and cognitive tests.Mice were anesthetized with isoflurane (4% induction, 1.5% maintenance) and put on a heating pad with core body temperature monitored by a rectal thermometer.Imaging was conducted with a Zeiss LSM 7 multiphoton microscope (Zeiss instruments), which utilizes a femtosecond-pulsed Ti: Sapphire laser (Chameleon Ultra II, Coherent), two BiG.2 GaAsP detectors, and Zen imaging software (Zeiss).The laser was tuned to 1040 nm excitation wavelength at 16 mW when recording from the brain surface and up to 24 mW when imaging 300 µm under the pia.Distilled water was placed on the cranial window to image with a 20×/1.0water immersion objective (Zeiss, 421452-9880).We acquired volumetric data of layers I and II/III neurons (z-stack of ~100 images typically from the brain surface, 512 × 512 pixels, 425 µm FOV size, 3 μm step size between adjacent images) with green and red channels (500-550 nm and 575-610 nm, respectively, with a 560 nm dichroic filter) from all identified CaMPARI2-injected regions in all animals.
Behavioral testing
(1) Exploratory behavior, measures of anxiety, and object recognition were assessed in an open field for a total of 4 consecutive days.Days 1 and 2 included exposure to the open field without objects for 5 min/ day.Introduction of two identical objects occurred on day 3, and day 4 consisted of replacing one of the familiar objects with a novel one and 400 nm illumination over the entire arena.Both days with objects (3-4) consisted of 15-min trials.Behavior was recorded with Ethovision 15 XT software.Camera and PC light-guide suspensions over the arena were built with Thorlabs mechanical component.For the mice that were tested and recorded twice, the objects were replaced between the first and second tests.
(2) Sensorimotor performance on the rotarod was tested for three consecutive days containing three trials each.Days 1 and 2 consisted of a standard rotarod protocol, with a starting speed of 5 RPM that was increased by 1 RPM every 3 s.On the third day, parameters were adjusted to a starting speed of 4 RPM, a maximum speed of 12 RPM, and an increase of 1 RPM every 60 s.This ensured that the animals remained on the rod for 5 min per trial (3× trials per day) and allowed for full 15-min illumination while on the rod.Experimenters noted the speed at which the rod was turning when the animal fell off and the duration of time that the animal was able to stay on the rod before falling.
(3) Contextual fear memory over two days was tested using a Med Associates mouse fear conditioning system for use with optogenetics (MED-VFC-OPTO-USB-M, Med Associates).During the training day, the animal was placed in the plastic arena (described above) that was located inside a white LED-lit (100 lux) fear conditioning chamber with a metal grid floor.Animals were habituated to the arena for a 300-s baseline period, followed by a 2-s, 0.7 mA foot shock, administered a total of 4 times at 60-s intervals.The following day, animals were placed in the same plastic enclosure within the fear conditioning chamber and contextual fear memory recall was tested for 15 min under illumination.Movement in the chamber and the percentage of freezing were automatically determined by the Med Associates
Fig. 1 | Characterization of CaMPARI-based recording.a Schematic illustration of the changes in CaMPARI green and red fluorescence following PC.b RGRs from photoconverted neurons (n = 73 cells, 2 mice) were normalized to each cell's RGR level immediately after PC (Day 0) and were monitored for 15 days.Median values were fit with an exponential curve (t 1/2 = 1.04 ± 0.076 days, mean ± SE; R 2 = 0.99; for each boxplot, horizontal lines show medians, boxes show the 25th-75th percentiles, whisker length is the shorter of 1.5 times the 25th-75th range or the extreme data point).Example images of the same cell are presented above the respective days.c Example images of three repeated PCs in the same neurons (two repeatedrecording experiments were conducted).d Upper panel, sensitivity index (d′) values for separating V1 and S1 activity levels during visual stimulation were increased with the light dose and reached an optimum near 300 J/mm 2 .Circular dots show the median values from single light dose recording from all identified cells, and the large squares are the mean across all single-experiment medians (n = 8 mice, 33 PC sessions, 3-9 sessions per mouse; 207-762 neurons/session, median of 480 neurons/session; mean data were fit with 3rd-order polynomial dashed line).Lower panel, median green and red fluorescence signals from all recorded cells were normalized to their pre-PC levels (green, fit with a 1st-order polynomial dashed line) and 300 J/mm 2 light dose (red, fit with a 2nd-order polynomial magenta dashed line) values.The green signal gradually decreased, and the red signal increased with PC light dose (same data as in upper panel, n = 8).e The median RGR across all recorded cells in each FOV, obtained from all mice and brain regions, showed no apparent differences across depths down to 240 μm under the pia (n = 5 mice, data from 5 different motor and somatosensory regions, 2-110 cells/ FOV, median = 17).The solid line connects the average of single FOV median RGRs and error bars show the standard error of mean.No significant differences were found between different recording depths (one-way ANOVA, p = 1.00).All statistical tests were two-tailed, source data are provided as a Source Data file.Reprinted with permission, Cleveland Clinic Foundation ©2023.All Rights Reserved.
Fig. 4 | Correlation between brain activity recording and behavioral parameters and repeatability of CaMPARI recordings.a, b The percent freezing times during individual interstimulus intervals (ISIs) during FC for n = 8 mice were significantly correlated with the RGR levels in the majority of the recorded cortical regions.The average values from each mouse were significantly correlated with mean percent freezing during all ISIs (a); p = 0.0018, paired t-test, F-statistic vs. constant model = 28.5.When comparing individual brain regions and individual ISIs, 15/20 pairs showed significant correlations (b); p < 0.05, paired t-tests (see source data file).c The mean fall latency during day 1 of the RR training was significantly correlated with the RGR levels in S1 FL (p = 0.0424, paired t-test, F-statistic vs. constant model = 6.6; n = 8 mice).d The percent time spent with the novel object during the NOR task was significantly correlated with the RGR levels in S1 FL (p = 0.0233, paired t-test, F-statistic vs. constant model = 7.44; n = 11 mice).e,f Mice were tested and recorded with CaMPARI2 for two consecutive weeks on the NOR task with two new objects for each week.For both weeks, the exploration parameters of the new objects were similar (e), where mice spent significantly more time with the novel object than the known object (98.5 ± 10.2 vs. 56.1 ± 4.9 s, mean ± std., respectively for week 1; n = 5 mice, paired t-test, p = 0.0008 with tstatistic = 9.28.113.4 ± 24.2 vs. 67.8± 17.1 s for exploring the novel vs. the known object on week 2; p = 0.005, t-statistic=5.58;same mice as in week 1).f RGR values were recorded every week and no significant changes were found between weeks 1 and 2 (p = 0.27, paired t-test, n = 4 mice, same mice as in (e), except one mouse where no signal could be recorded).All statistical tests were two-tailed, source data are provided as a Source Data file.
An ensemble-based approach for estimating personalized intraocular lens power
The fundamental difference between modern formulae for intraocular lens (IOL) power calculation lies on the single ad hoc regression model they use to estimate the effective lens position (ELP). The ELP is very difficult to predict and its estimation is considered critical for an accurate prediction of the required IOL power of the lens to be implanted during cataract surgery. Hence, more advanced prediction techniques, which improve the prediction accuracy of the ELP, could play a decisive role in improving patient refractive outcomes. This study introduced a new approach for the calculation of personalized IOL power, which used an ensemble of regression models to devise a more accurate and robust prediction of the ELP. The concept of cross-validation was used to rigorously assess the performance of the devised formula against the most commonly used and published formulae. The results from this study show that overall, the proposed approach outperforms the most commonly used modern formulae (namely, Haigis, Holladay I, Hoffer Q and SRK/T) in terms of mean absolute prediction errors and prediction accuracy i.e., the percentage of eyes within ± 0.5D and ± 1 D ranges of prediction, for various ranges of axial lengths of the eyes. The new formula proposed in this study exhibited some promising features in terms of robustness. This enables the new formula to cope with variations in the axial length, the pre-operative anterior chamber depth and the keratometry readings of the corneal power; hence mitigating the impact of their measurement accuracy. Furthermore, the new formula performed well for both monofocal and multifocal lenses.
Formulae for IOL power calculation have historically followed two main approaches: the first one is purely based on a linear regression analysis of retrospective cases, whereas the second one is based on a geometrical optics solution. The first IOL power calculation formulae [17][18][19] , based on linear regression, are purely statistical solutions and are not in use in clinical practice today. These formulae suffer from classical linear regression shortcomings, including the regression-to-the-mean problem. In other words, the more common the eye's characteristics, the more accurate the predicted power, while unusual eyes result in very poor estimates of the power.
The first IOL power calculation formulae, based on geometrical optics [20][21][22][23][24] , consist of different variants of the following vergence formula (1), derived from a two-lens system (eye-IOL) model of an operated eye after cataract removal and insertion of an IOL:
P = n vit /(AL − d) − n aq /(n aq /P c − d)    (1)
where P is the required IOL power for emmetropia (in diopters), n aq is the refractive index of the aqueous humor and n vit is the refractive index of the vitreous humor, P c is the average corneal power (in diopters) and is a function of the average keratometry readings K = (K 1 + K 2 )/2, AL is the axial depth from the corneal apex to the retina, also known as the axial length of the eye, d is the axial depth from the corneal apex to the optical center of the IOL, also known as effective/estimated lens position (ELP) or the post-operative Anterior Chamber Depth (ACD).
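Formula (1) can be expressed directly as a function of the measured quantities. The sketch below is a minimal illustration using the symbols defined above; the refractive-index defaults of 1.336 are commonly used values supplied here as an assumption, since no numeric values are given at this point in the text.

```python
def iol_power_emmetropia(corneal_power_d, axial_length_mm, elp_mm,
                         n_aq=1.336, n_vit=1.336):
    """Two-lens vergence formula (1): IOL power (diopters) for emmetropia.

    corneal_power_d : average corneal power P_c in diopters
    axial_length_mm : axial length AL in millimeters
    elp_mm          : effective lens position d in millimeters
    The refractive indices default to 1.336 (assumed, not stated above).
    """
    al = axial_length_mm / 1000.0   # convert to meters so vergences are in diopters
    d = elp_mm / 1000.0
    return n_vit / (al - d) - n_aq / (n_aq / corneal_power_d - d)

# Example: a roughly average eye (illustrative numbers only)
print(round(iol_power_emmetropia(43.5, 23.5, 5.0), 2))  # ~20 D
```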
Initially, all the variants of the formula (1) used a constant value for the ELP, d, for each IOL type, which is derived using the parameters (K and AL) of an average eye. This constant value of the ELP is also known as the ACD constant. In the early eighties, new studies observed some inter-subject variation in the value of the ELP, and in particular the available formulae proved to be deficient for eyes with unusually short or long axial lengths. The variation in the ELP can be attributed to various factors, including:
• Patient-specific anatomical and physiological factors such as ocular dimensions, age, gender, and ethnicity;
• IOL design-specific configuration details such as the optical shape factor, the compressibility of materials, and the haptic angulation;
• Surgeon as well as surgical instrument and technique specific idiosyncrasies such as the IOL implantation location (e.g., angle-supported, iris-supported, sulcus-supported, or in-the-bag), the manipulation of the IOL during implantation, the type, the size, and the structure of the incision, as well as the size, the construction (manual or automated), and the configuration of the capsulorhexis.
In the late eighties, new formulae using a patient-specific modified ELP value, which considered biometry values specific to an individual patient for a particular IOL, emerged. These new formulae are referred to as modern IOL formulae [1][2][3][4][5][6] .
Modern formulae for IOL power estimation, such as Haigis 1 , Hoffer Q 2 , Holladay I 3 and SRK/T 4 are based on the vergence Formula (2) derived from a three-lens system (spectacle-eye-IOL), and they differ from each other merely in the approach used to estimate the effective lens position, d.
where R x is the desired postoperative refraction (in diopters), i.e., the spectacle refraction, P is the required IOL power for the desired postoperative refraction (in diopters), b is the vertex distance (~ 12 mm), and n aq , n vit , P c , AL and d are as defined in Eq. (1).
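As a rough illustration of how a target refraction enters the calculation, the sketch below uses one conventional arrangement of the three-lens vergence relation, in which the desired refraction R_x is referred from the spectacle plane to the corneal plane via the vertex distance b and added to the corneal power; this specific arrangement is an assumption for illustration and may differ in detail from the published Formula (2).

```python
def iol_power_target_refraction(AL_mm, ELP_mm, Pc_D, Rx_D, b_m=0.012,
                                n_aq=1.336, n_vit=1.336):
    """Assumed three-lens vergence relation: IOL power (D) for a desired
    postoperative refraction Rx (D) measured at the spectacle plane.

    The target refraction is first referred to the corneal plane using the
    vertex distance b (~12 mm) and added to the corneal power; the two-lens
    formula (1) is then applied with this adjusted corneal power.
    """
    AL = AL_mm / 1000.0
    d = ELP_mm / 1000.0
    Rx_cornea = Rx_D / (1.0 - b_m * Rx_D)   # refraction referred to the corneal plane
    z = Pc_D + Rx_cornea                    # adjusted corneal power
    return n_vit / (AL - d) - n_aq / (n_aq / z - d)

# With Rx = 0 this reduces to formula (1); a small myopic target (negative Rx)
# increases the required IOL power, as expected.
print(round(iol_power_target_refraction(23.5, 5.0, 43.5, -0.5), 2))
```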
The ELP is very difficult to predict, and its estimation is considered critical for an accurate prediction of the required IOL power for a given lens. The main idea behind modern formulae, such as [1][2][3][4][5][6] , is to improve the IOL power accuracy by estimating the ELP, d, through a regression analysis on retrospective cases. For this methodology to be effective, consistency is required throughout the entire process, including the surgical technique used, the biometry instruments as well as the design and manufacturing of the IOL.
The main improvement brought by the modern formulae, relative to the earlier variants of the vergence formulae (1) and (2), was the estimation of the ELP as a function of pre-operative measurements, such as the axial length in millimeters (AL), the corneal power in diopters derived from the keratometry readings (K 1 and K 2 ), and the anterior chamber depth (ACD).
The most commonly used and well-known modern formulae for IOL power calculation include Haigis 1 , Hoffer Q 2 , Holladay I 3 and SRK/T 4 . These formulae were published during the 1990s and are currently the industry norm. The Hoffer Q formula estimates the ELP as the sum of an ad hoc constant, called the "personalized" ACD and denoted pACD, and a function of the average keratometric readings and the axial length of the eye. The Holladay I formula estimates the ELP as the sum of an ad hoc constant, called the surgeon factor and denoted SF, and a function of the keratometric readings and the axial length of the eye. The SRK/T formula estimates the ELP as the sum of a scaled constant, denoted A constant , and a function of the average keratometric reading and the axial length of the eye. The Haigis formula goes one step further by including the pre-operative ACD in the estimation of the ELP, and uses three regression constants denoted (a 0 , a 1 , a 2 ).
The constants used in these formulae, namely pACD, SF, A constant , and (a 0 , a 1 , a 2 ), are derived from historical data on retrospective cases and are expected to capture the complex relationship between pre-operative biometry and the post-operative lens position for each IOL/surgeon pair.
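As a schematic of how such lens constants can be derived from retrospective cases, the sketch below fits a Haigis-style linear ELP model, d = a0 + a1·ACD_pre + a2·AL, by ordinary least squares; the data are hypothetical and the procedure is only illustrative of constant optimization, not the manufacturers' or ULIB's actual method.

```python
import numpy as np

# Hypothetical retrospective cases: pre-operative ACD (mm), axial length (mm),
# and the back-calculated post-operative ELP (mm) for one lens model.
acd_pre = np.array([2.9, 3.1, 3.4, 2.7, 3.6, 3.0])
al      = np.array([22.8, 23.5, 24.9, 21.9, 26.1, 23.2])
elp     = np.array([4.7, 4.9, 5.3, 4.5, 5.6, 4.8])

# Design matrix [1, ACD_pre, AL] -> least-squares estimates of (a0, a1, a2).
X = np.column_stack([np.ones_like(acd_pre), acd_pre, al])
(a0, a1, a2), *_ = np.linalg.lstsq(X, elp, rcond=None)
print(f"a0={a0:.3f}, a1={a1:.3f}, a2={a2:.3f}")

# Predicted ELP for a new eye (hypothetical biometry), to feed into the vergence formula.
print(round(a0 + a1 * 3.2 + a2 * 23.8, 2), "mm")
```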
A new formula using an ensemble-based model to estimate the ELP. Although the main aim of modern formulae is to "personalize" the IOL power calculation through linear regression models for predicting the ELP (where the parameters of the models are a 0 , a 1 , a 2 for Haigis, pACD for Hoffer Q, SF for Holladay I, and A constant for SRK/T), in practice the parameters of the models (also referred to as the lens constants) are initially made available by the lens manufacturer and subsequently optimized and published in databases such as ULIB 25 , using data from various surgeons or a selected number of surgeons. On the other hand, it is well recognized that current modern formulae still demonstrate significant errors in the prediction of IOL power for unusual cases with extreme values of either axial length or corneal power 26 . For example, in short eyes with a flat cornea and long eyes with a steep cornea, the discrepancy can be up to ± 2D and ± 1.3D, respectively.
This newly developed MM formula, which leverages both thin-lens geometric optics and machine learning, goes a step further in an attempt to reduce these discrepancies by introducing an ensemble-based approach to estimate the effective lens position, for each lens model, using four pre-operative variables, namely the steep and flat keratometry readings, the pre-operative anterior chamber depth, and the axial length of the eye. These variables are used as predictors to estimate the ELP through a high-dimensional function, derived by training a machine learning ensemble-based model. The four predictor variables were identified through a feature selection approach and are deemed to be the most influential variables in the prediction of the ELP; hence they are expected to capture both surgeon-specific idiosyncrasies and those specific to surgical instruments and techniques. In contrast with the single linear regression model commonly used to estimate the ELP 1-4 , machine learning is the most natural tool for capturing the complex relationship between the ELP, the post-operative patient data, the surgeon, and features specific to surgical instruments and techniques for each lens model. Furthermore, the proposed model to predict the ELP and calculate the IOL power is not only surgeon-specific but also self-sustaining: the more historical data become available, the more "personalized" and accurate the IOL power estimation becomes.
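The excerpt does not specify which ensemble algorithm underlies the MM formula, so the sketch below uses a gradient-boosted tree regressor from scikit-learn purely as a stand-in; the four predictors match those named in the text, but the synthetic data, the model choice and the default hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical training set: columns are K1, K2 (D), pre-op ACD (mm), AL (mm);
# the target is the back-calculated ELP (mm) for one lens model.
X_train = np.column_stack([
    rng.normal(43.0, 1.5, 300),   # K1
    rng.normal(44.0, 1.5, 300),   # K2
    rng.normal(3.1, 0.4, 300),    # pre-operative ACD
    rng.normal(23.5, 1.2, 300),   # axial length
])
elp_train = 0.4 * X_train[:, 2] + 0.17 * X_train[:, 3] + rng.normal(0, 0.1, 300)

# Non-linear ensemble mapping (K1, K2, ACD, AL) -> ELP.
elp_model = GradientBoostingRegressor(random_state=0).fit(X_train, elp_train)

# Predicted ELP for a new eye, to be plugged into the vergence formula.
new_eye = np.array([[42.5, 43.75, 3.2, 23.8]])
print(round(float(elp_model.predict(new_eye)[0]), 2), "mm")
```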
The major difference between the MM formula and the four most commonly used formulae can be summarized as follows. For a given lens model and some given keratometry readings (K 1 and K 2 ), the Hoffer Q and SRK/T formulae consider a quasi-linear relationship between the post-operative ACD (i.e., ELP) and the axial length, whereas the Holladay I formula assumes a piecewise linear relationship between the post-operative ACD (ELP) and the axial length, regardless of any other measurements including the pre-operative ACD, as illustrated in Fig. 1 (top). On the other hand, the Haigis formula assumes that the post-operative ACD (ELP) depends linearly on the pre-operative measured ACD and the axial length while the MM formula assumes a non-linear relationship between the post-operative ACD (ELP) and the following variables: the pre-operative ACD and the axial length, as illustrated in Fig. 1 (bottom). Therefore, unlike most of the modern formulae for IOL power calculation, which used one or three IOL constants, the MM formula has a very high number of IOL constants. However, these constants are optimized, stored and managed automatically via the ensemble model used. The MM formula differs from the other machine learning models for IOL power calculation 7-10 , by combining both geometric optics and machine learning to estimate the ELP.
Assessment of the proposed formula.
To carry out a rigorous comparison between the MM formula and the four most commonly used formulae, namely Haigis, Hoffer Q, Holladay I, and SRK/T, we used the same data set to train the ensemble model for the MM formula and to optimize the IOL constants for the four formulae, i.e., the three regression parameters (a 0 , a 1 , a 2 ) for the Haigis formula, the personalized anterior chamber depth (pACD) for the Hoffer Q formula, the surgeon factor (SF) for the Holladay I formula, and the A constant for the SRK/T formula.
Most of the studies comparing formulae for IOL power calculation used the holdout method, which consists of splitting the data into two sets: a training set and a test set. The training set refers to the patients' data used to optimize the parameters of the formulae, whereas the test set refers to patients' data not included in the optimization process. The prediction errors made on the test set are used to evaluate the performance of the formula. However, such an evaluation process may have a high variance, since it depends heavily on the nature of the data in both the training and the test sets. Therefore, this approach to comparing formulae for IOL power calculation is prone to bias, since the results may differ significantly depending on which data happen to fall into the test set.
One way to address the aforementioned limitations of the holdout method is to use the cross-validation technique, also known as k-fold cross-validation. Cross-validation is the most effective framework for assessing how a predictive model generalizes to independent datasets. It makes it possible to generate training and test samples that are sufficiently large and diverse to be representative. As such, it addresses not only the problem of the small number of eyes with short and long axial lengths in most study cohorts, but also enables the formulae to be assessed on a variety of training and test sets. Hence, it is the most appropriate approach to assess the performance of IOL calculation formulae, which are essentially predictive models.
In the k-fold cross-validation approach, the data set is split into k subsets and the holdout method is applied k times as follows: at each step, (k − 1) subsets are combined to form the training set, whereas the remaining subset is used as the test set. The prediction errors made during testing are then given by the accumulated errors from the k trials. Another variant consists of randomly splitting the data into training and test sets k times and applying the holdout method at each iteration. This approach, also known as Monte-Carlo cross-validation and illustrated in Fig. 2, has been used in this study, as it makes it possible to assess how well an IOL power calculation formula will generalize to new data.
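A minimal sketch of the Monte-Carlo variant described above, using scikit-learn's ShuffleSplit to draw repeated random train/test partitions; the number of repeats, the test fraction and the placeholder scoring function are illustrative choices rather than the study's exact protocol.

```python
import numpy as np
from sklearn.model_selection import ShuffleSplit

def monte_carlo_cv(X, y, fit_and_score, n_repeats=100, test_size=0.2, seed=0):
    """Repeat the holdout split n_repeats times and accumulate test errors.

    fit_and_score(X_tr, y_tr, X_te, y_te) should optimize the formula's
    constants (or train the ensemble) on the training fold and return the
    absolute prediction errors on the test fold.
    """
    splitter = ShuffleSplit(n_splits=n_repeats, test_size=test_size,
                            random_state=seed)
    errors = []
    for train_idx, test_idx in splitter.split(X):
        errors.extend(fit_and_score(X[train_idx], y[train_idx],
                                    X[test_idx], y[test_idx]))
    return np.asarray(errors)

# Placeholder scorer: predict the training mean (stands in for a real formula).
dummy = lambda X_tr, y_tr, X_te, y_te: np.abs(y_te - y_tr.mean())
X = np.random.default_rng(1).normal(size=(200, 4))
y = np.random.default_rng(2).normal(size=200)
print(monte_carlo_cv(X, y, dummy).mean())
```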
Participants. The participants consist of a cohort of 681 patients who had implantation of monofocal or multifocal IOLs at Cathedral Eye Clinic, Belfast. More specifically, 265 eyes, 256 eyes and 160 eyes were implanted with the Monofocal Alcon AcrySofIQ SN60WF, the Monofocal Lenstec Softec HDO and the Multifocal Zeiss AT LISA tri839 MP, respectively. Patients were thoroughly assessed and informed of the risks of the procedure, and all patients gave their informed consent for their anonymized data to be used for audit and research purposes (Table S1). The post-operative data used for this study included the manifest refraction obtained 3 and 6 months post-operatively, and included only one eye per patient.
Statistical analysis. The prediction error (PE), for a given patient, is the difference between the spherical equivalent of the achieved post-operative refraction and the pre-operative predicted refraction obtained using a formula, given the power of the implanted IOL. Throughout the analysis, we have also used the following abbreviations: SD is standard deviation of the prediction error, MedAE is the median absolute prediction error, MAE is the mean absolute prediction error.
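The summary statistics named above can be computed along the following lines; the refraction values are hypothetical placeholders.

```python
import numpy as np

achieved_se = np.array([-0.25, 0.00, 0.50, -0.75, 0.125])   # achieved post-op spherical equivalent (D)
predicted_se = np.array([-0.10, 0.25, 0.30, -0.20, 0.00])   # formula-predicted refraction (D)

pe = achieved_se - predicted_se            # prediction error (PE) per eye
summary = {
    "SD":    pe.std(ddof=1),               # standard deviation of the prediction error
    "MAE":   np.abs(pe).mean(),            # mean absolute prediction error
    "MedAE": np.median(np.abs(pe)),        # median absolute prediction error
    # Prediction accuracy, defined later in the paper as the percentage of eyes
    # within a given range of prediction error.
    "% within 0.5 D": 100 * np.mean(np.abs(pe) <= 0.5),
    "% within 1.0 D": 100 * np.mean(np.abs(pe) <= 1.0),
}
print(summary)
```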
The normality of the prediction errors, as well as of the absolute prediction errors, for each formula and each eye type was assessed using the Shapiro-Wilk normality test, which suggested that none of them is normally distributed (p-value < 0.001 in all cases). Therefore, non-parametric tests, with the significance level set to 0.05, were used for the statistical analysis. The Wilcoxon signed-rank (one-sample) test was used to assess whether the median values of both the prediction errors and the absolute prediction errors are equal to zero for each formula and each eye type. The test results for the median of the prediction errors are presented in the supplementary material (Tables S3a, S4a, S5a). The test results for the median of the absolute prediction errors suggested that in all cases the median value is statistically different from zero (p-value < 0.001 in all cases).
The Friedman test was used to compare the median absolute prediction error across the five formulae for each eye type. The results of the test suggested that in all cases there was a statistically significant difference across the formulae. Then, the pairwise comparison of the median absolute prediction error (MM against each of the other four formulae, and for each eye type) was performed using the Wilcoxon signed-rank (paired-samples) test. The corresponding test results, as well as those of the Friedman test, are presented in the supplementary material (Tables S3b, S4b, S5b).
The Cochran's Q test was used to compare the prediction accuracy across the five formulae for each eye type. The prediction accuracy is defined by the percentage of eyes with the prediction error within the range ± 0.5D, ± 1.0D, and ± 1.5D, respectively. The corresponding test results are presented in the supplementary material (Tables S3, S4c, S5c). The pairwise comparison of the prediction accuracy (MM against each of the four other formulae, and for each eye type) was performed using the McNemar test. The corresponding test results are presented in Tables 4, 7, 10.
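The test battery described in this subsection could be reproduced roughly as follows with SciPy and statsmodels; the per-eye errors are simulated placeholders, and the particular helper functions used here (scipy.stats.shapiro, wilcoxon and friedmanchisquare, and statsmodels' cochrans_q and mcnemar) are our implementation choices rather than details reported by the authors.

```python
import numpy as np
from scipy.stats import shapiro, wilcoxon, friedmanchisquare
from statsmodels.stats.contingency_tables import mcnemar, cochrans_q

rng = np.random.default_rng(0)
# Hypothetical absolute prediction errors (D): one row per eye, one column per formula
# (e.g., MM, Haigis, Hoffer Q, Holladay I, SRK/T).
abs_err = np.abs(rng.normal(0, 0.4, size=(120, 5)))

print(shapiro(abs_err[:, 0]))                                  # normality of one formula's errors
print(wilcoxon(abs_err[:, 0]))                                 # is the median error zero?
print(friedmanchisquare(*[abs_err[:, j] for j in range(5)]))   # compare medians across formulae
print(wilcoxon(abs_err[:, 0], abs_err[:, 1]))                  # pairwise: MM vs Haigis

# Prediction accuracy: 1 if the eye falls within +/- 0.5 D, per formula.
within_half_D = (abs_err <= 0.5).astype(int)
print(cochrans_q(within_half_D))                               # accuracy compared across all five formulae
table = np.array([[np.sum((within_half_D[:, 0] == a) & (within_half_D[:, 1] == b))
                   for b in (1, 0)] for a in (1, 0)])
print(mcnemar(table, exact=True))                              # pairwise accuracy: MM vs Haigis
```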
The implementation of the formulae as well as the statistical analyses were carried out using Python 3.7.6 (python.org).
Results
To assess the performance of the MM formula, three lens models were considered, namely Monofocal Alcon AcrySofIQ SN60WF, Monofocal Lenstec Softec HDO and Multifocal Zeiss AT LISA tri839 MP. Summary statistics of the optimized IOL constants, for the four formulae and for each of the three lens models, obtained using the Monte-Carlo k-fold cross-validation process (with k = 100), are presented in the supplementary material (Table S2).
The performance of the MM formula against the other four formulae, with respect to the axial length, was assessed using the following categorization of the eyes: short eyes (axial length < 22 mm), medium eyes (22 mm ≤ axial length ≤ 24.5 mm), long medium eyes (24.5 mm < axial length ≤ 26 mm) and long eyes (axial length > 26 mm). Tables 2, 3, 5, 6, 8, 9 present the summary statistics of the cross-validation prediction results for each of the five formulae (SRK/T, Hoffer Q, Holladay I, Haigis and MM) for long eyes, long medium eyes, short eyes, medium eyes and all eyes, respectively. These results were obtained using the optimized IOL constants for each formula (Table S2), and the same data used to optimize these IOL constants were used to train the MM formula and store the corresponding ensemble model. For the IOL model Monofocal Alcon AcrySofIQ SN60WF, the results in Tables 2, 3, 4 show that, overall, the MM formula outperformed the other four formulae. For long eyes, the MM formula and the Haigis formula outperformed the other formulae in terms of prediction accuracy, i.e., the percentage of eyes with the prediction error within the range ± 0.5D, ± 1.0D, and ± 1.5D, respectively; in addition, the MM formula had the lowest median absolute prediction error. For long medium eyes, the MM formula was the second best, behind the Holladay I formula, in terms of prediction accuracy, but it had the lowest median absolute prediction error. For short eyes, overall, the MM formula was the second best in terms of prediction accuracy, but had the lowest median absolute prediction error. For medium and all eyes, overall, the MM formula outperformed the other formulae, with the highest prediction accuracy and the lowest median absolute prediction error.
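The axial-length grouping used for the stratified comparison can be expressed directly in code; the cut-offs below are exactly those listed above.

```python
def axial_length_group(al_mm):
    """Categorize an eye by axial length (mm) using the study's cut-offs."""
    if al_mm < 22.0:
        return "short"
    if al_mm <= 24.5:
        return "medium"
    if al_mm <= 26.0:
        return "long medium"
    return "long"

print([axial_length_group(al) for al in (21.5, 23.0, 25.1, 27.3)])
```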
For the IOL model Monofocal Lenstec Softec HDO, the results in Tables 5, 6, 7 show that, overall, the MM formula performed better than the other formulae. For long and long medium eyes, the MM formula achieved the highest performance in terms of prediction accuracy, with a low median absolute prediction error. For short eyes, the MM formula was outperformed by the Haigis formula in terms of prediction accuracy, whereas the Hoffer Q formula had the lowest median absolute prediction error. For medium and all eyes, overall, the MM formula outperformed the other formulae, with the highest prediction accuracy and the lowest median absolute prediction error. For the IOL model Multifocal ZEISS AT LISA tri839 MP, the results in Tables 8, 9, 10 show that, overall, the MM formula outperformed the other formulae. For long, long medium and short eyes, the MM formula achieved the highest performance in terms of prediction accuracy and had the lowest median absolute prediction error.
Discussion
Overall, the results reported in this study, which are comparable to those presented in the literature [9][10][11]13,14,26 , show that the MM formula outperformed the four commonly used modern formulae in terms of median absolute error as well as prediction accuracy, in particular within the ranges ± 0.5 D and ± 1 D, for various ranges of axial length. This robustness of the MM formula enables the method to cope with variation in the axial length (L), the pre-operative ACD and the keratometry readings (K 1 and K 2 ), hence mitigating the impact of measurement errors in these variables. Across all lens models, for both average eyes and more challenging eyes (i.e., short, long medium and long eyes), the post-operative refractive outcomes for the MM formula are overall superior to those of the other four formulae. However, all the formulae exceeded the benchmark of 85% of refractions within ± 1D of the prediction recommended by Gale et al. 27 , except in some cases for the SRK/T formula. The discrepancy between our results and other findings in the literature, for instance the good performance of the SRK/T formula for eyes with long axial length, may be attributed to the cross-validation approach, which considered many training and test sets; this variation may be reflected in the optimized A constant . The most well-known formula using machine learning techniques is probably the Hill-RBF method 8 , which is purely data driven and uses an artificial neural network to estimate the IOL power directly. The assessment of that formula is based on the hold-out method (i.e., using one training and one test set). However, the performance of machine learning-based predictive models may depend on the samples used to train and test the model, and cross-validation is the most appropriate approach to assess the generalization of the model. In contrast with this method, the approach presented in this paper used an ensemble of regression models to provide a more accurate prediction of the ELP, and thus combines both geometric optics and machine learning. The BART 9 is another learning-based formula, which combines the Wang-Koch modified SRK/T formula 28 and Bayesian additive regression trees to estimate the IOL power. The assessment of that formula using five random out-of-sample validations yields a "median absolute refraction error" and a "standard deviation of the refractive error" of 0.137D and 0.204D, respectively. Furthermore, the dataset used in that analysis consists of a combination of various lens models, and aggregated results for all eyes were provided. Another learning-based formula for IOL power calculation is the Karmona formula 10 . The assessment of that formula on a single test set of 52 eyes, using a mix of ten models of monofocal lenses, yields a mean absolute error of 0.24D, a median absolute error of 0.18D, and percentages of refraction within the ranges ± 0.5D and ± 1D of 90.38% and 100%, respectively. However, it is well known that the performance of IOL power calculation formulae may vary depending on the lens model as well as the eye characteristics, in particular the axial length of the eye. The use of cross-validation, as well as the stratification of the data by axial length, would provide more insight into the performance of these formulae.
Since learning-based IOL calculation formulae are essentially data-driven predictive models, the most appropriate way to compare their performance is to assess them using the same dataset. On the other hand, their implementation details are not available; hence, a direct comparison of performance metrics from various studies, using different datasets, may be misleading. Nevertheless, the assessment of the MM formula using the cross-validation concept, across eyes stratified by axial length as well as various lens models (monofocal and multifocal), highlighted its robustness. Furthermore, the MM formula is quite flexible and can accommodate as many predictor variables as are available and then analyze them to identify the most relevant ones for each surgeon/IOL pair. However, the MM formula has some limitations, which are inherent to machine learning techniques. Although machine learning techniques have demonstrated many successful applications in various fields, they have some fundamental limitations, which could hinder their effectiveness in some real-world scenarios. For instance, the MM formula requires a large amount of structured training data in order to learn patterns effectively. Furthermore, the MM formula encodes correlation, not causation, and the accuracy of its predictions depends on the quality of the data.

Table 4. Pairwise comparison of the prediction accuracy (i.e., the percentage of eyes within a given range of prediction error) between the MM formula and each of the other four formulae (SRK/T, Hoffer Q, Holladay I and Haigis) for the different types of eyes (long, long medium, medium, short and all eyes), using the McNemar test at a statistical significance level of 5%. a No statistically significant difference at level 0.05. b MM formula outperformed at significance level 0.05. c MM formula underperformed at significance level 0.05. Significant values are in [bold, italics].
Antitrust’s Implementation Blind Side: Challenges to Major Expansion of U.S. Competition Policy
For several years, a number of commentators have expressed concern that the U.S. has a growing market power problem, and further that dysfunction in the U.S. antitrust institutions, and their failure to protect competition, has damaged the economy. This Article outlines the principal flaws that this commentary attributes to U.S. antitrust policy (the "crisis in antitrust"), and some of the proposals offered to redirect it and restore it as a central tool of economic control. The paper's main purpose is not, however, to debate the condition of competition in the U.S. economy or the merits of the measures proposed. Rather, its objective is to identify the magnitude of the implementation challenges that the proposals for a major expansion of the U.S. antitrust program create, and the policy implementation challenges that stand between these soaring reform aspirations and their effective realization in practice. The paper suggests that even though these "implementation" issues are significant, they have been too quickly overlooked in the commentary. In our view, the failure to focus on this important matter risks creating a chasm between elevated policy commitments and the capacity of responsible public institutions to produce expected outcomes. The paper consequently acknowledges and addresses this implementation blindside. It analyzes the important impediments that are likely, if not carefully addressed, to hamper delivery of the current proposals and proposes ways to overcome them.
I. Introduction
In June 2016, as the campaign for the U.S. presidency entered its final months, Senator Elizabeth Warren appeared at a conference convened by a Washington, DC, think tank and offered a grim report on the state of markets in the United States. " [T]oday, in America," she observed, "competition is dying. Consolidation and concentration are on the rise in sector after sector. Concentration threatens our markets, threatens our economy, and threatens our democracy." 1 Senator Warren was not the first to make this critique. 2 Yet, by giving it prominence, her June 2016 speech helped move competition policy into the mainstream of popular debate. 3 Today, her themes resonate in a large and expanding commentary that recounts a growing market power problem in the American economy (especially in its information technology [IT] sector) and dysfunction in its antitrust institutions. 4 By failing to protect competition, the federal antitrust enforcement agencies and the courts are said to have damaged the economy severely. Commentators give several reasons for the policy default: disregard of the egalitarian aims that motivated adoption of the U.S. antitrust laws in favor of an efficiency-based goals framework; 5 judicial fidelity to outdated views of industrial organization economics; 6 and enforcement timidity rooted in the capture by potential prosecutorial targets of the federal enforcement agencies, the Department of Justice Antitrust Division (DOJ) and the Federal Trade Commission (FTC). 7 The grievances sketched above have led to widespread debate and intensified demands for a redirection of antitrust policy and the application of other policy instruments to increase competition. High on the agenda is an extension of policy to provide greater control of the practices of leading technology companies (or Tech Giants) and dominant firms in other sectors such as agribusiness and pharmaceuticals.
Although there are dramatically different views as to how exactly change should take place (see further Section II below), many proponents of change stress the urgent need for more vigorous and aggressive enforcement of the antitrust laws, especially by the federal agencies. For example, there are calls for the agencies to police future mergers more strictly, perhaps with bans or presumptions against certain mergers (including acquisitions by large incumbent enterprises of promising start-ups); to limit vertical integration; and to arrest exclusionary conduct by dominant companies.
Other suggested means of control include the creation of a new regulatory authority (vested in the antitrust agencies or in a new government body) with power to promulgate rules that would establish industry-wide codes of conduct, allow for the restructuring of dominant firms, and/or bar dominant firms from selling their own products on the platforms they own. 8 Some demand a "root-and-branch" transformation of the antitrust system that would embrace objectives beyond enhancing the welfare of citizens as buyers of goods and services. 9 The expanded goals framework would seek to protect the interests of small and medium enterprises (SMEs), workers, and local communities, and, in doing so, to safeguard democracy itself. To this end, some suggest that the enforcement agencies should cease devoting resources to "trivial" cases aimed at forestalling efforts by individual entrepreneurs and SMEs to earn suitable wages; 10 instead, agencies should focus single-mindedly on curbing monopoly by, for example, using structural remedies to unwind consummated anticompetitive mergers, to deconcentrate markets, and to prevent future antitrust violations. 11 The debate has refocused the spotlight on basic issues about the aims of the U.S. antitrust system: what goal, or goals, should guide the interpretation and enforcement of the antitrust statutes; what are the advantages and disadvantages of adopting a "purist" efficiency-oriented regime or a multi-faceted goals agenda; and how a multidimensional set of objectives, anchored in cases such as United States v Aluminum Co. of America (Alcoa) 12 and Brown Shoe Co. v United States 13 (monopolization and merger decisions respectively), could be applied effectively by antitrust agencies in designing an enforcement program and by the courts in resolving individual cases.
The modern critique must be taken seriously. Three Senators who were seeking the Democratic Party's nomination in the 2020 elections (Senator Elizabeth Warren, 14 Senator Bernie Sanders, 15 and Senator Amy Klobuchar 16 ) have issued legislative proposals to carry out basic reforms of the U.S. antitrust system. The impulse for a redirection of antitrust policy is not merely partisan. President Donald Trump has voiced his support for closer antitrust scrutiny of leading firms, including Google, Amazon, Facebook, and Apple (GAFA), 17 and Republican legislators such as Senator Josh Hawley have suggested a fundamental retooling of the federal enforcement mechanism. 18 The changing political mood appears to have spurred the public enforcement agencies to expand their programs, using existing policy tools, to address the accumulation and exercise of market power in the tech sector, 19 and the FTC has launched a study, using compulsory process, of acquisitions not subject to the government's premerger notification requirements that were undertaken by Amazon, Alphabet (including Google), Apple, Facebook, and Microsoft from January 1, 2010, through December 31, 2019. 20 Regardless of which candidate prevails in the November 2020 presidential election, the U.S. antitrust system seems poised for expansion. In this article, we do not debate the condition of competition in the U.S. economy, nor do we assess the substantive merits of the respective measures proposed to correct the market and policy deficiencies identified. Instead, we focus on a less noticed issue: the policy implementation challenges that stand between the soaring reform aspirations and their effective realization in practice. We thus take the reform recommendations, presented in scholarly papers, blue-ribbon studies, and popular essays, at face value, and ask what legislators and policy makers must do to land them. For example, assuming that more aggressive antitrust enforcement is required, how can an effective program actually be delivered, through winning antitrust cases and securing positive change, and how can it be delivered well?
In our view, these "implementation" issues have tended to be overlooked in the modern critique and to have been too quickly side-lined as technical details to be (easily) addressed once the high-level concepts of a bold antitrust program have been settled. 21 Implementation is not, however, a simple matter that will necessarily sort itself out once the intellectual architecture is in place. Rather, inattention to implementation challenges invites serious disappointment by creating a chasm between elevated policy commitments and the capacity of responsible public institutions (competition agencies, new regulators, and the courts) to produce expected outcomes. This is the implementation blindside. Unless the blindside is acknowledged and addressed, there is a significant risk that a major reform program will engage considerable resources, public and private, in initiatives that fall well short of their goals. Instead of restoring confidence in the ability of government agencies to enforce antitrust laws effectively, a failed effort might merely reinforce doubts, and cynicism, about the quality of public administration.
This article analyzes important impediments that are likely, if not carefully addressed, to hamper the delivery of the current proposals to expand competition policy significantly, and proposes ways to overcome them. It commences in Part II by introducing the principal flaws that modern commentary attributes to U.S. antitrust policy (the "crisis in antitrust"), before describing some of the proposals offered to bolster competition, strengthen antitrust policy, and restore its centrality as a tool of economic control. It also sketches how the federal and state agencies are responding to demands for more extensive intervention. As already explained, the purpose of this section is not to address the (respective) merits of these policy proposals but to identify the magnitude of the implementation challenges that the proposals for a major expansion of the U.S. antitrust program create. Part III sets out the chief implementation obstacles that confront efforts to execute bolder antitrust programs, including tougher scrutiny of mergers and dominant firm conduct. We draw parallels between current debates and past ones, including those that influenced enhanced antitrust enforcement (especially by the FTC) in the 1960s and early 1970s, and use historical examples to show what might happen if these hurdles are underestimated or ignored in the formulation of bold new initiatives.

21. One of us (Kovacic) spent several years in private practice working for aerospace industry clients which had major roles in the U.S. space program in the 1960s. One company official remarked that the essential physics of going to the moon was fairly straightforward. By contrast, the engineering was very difficult.
Before concluding, Part IV of the article considers what it is likely to take to implement the proposals, and a program of expansion, successfully. It emphasizes measures to ensure that reform commitments properly account for the capacity of the public agencies to execute the commitments successfully. The discussion includes consideration of how antitrust agencies might undertake a more ambitious program with or without receiving new powers or resources from Congress.
A. The Crisis in Antitrust
At a high level, the modern critique of U.S. antitrust policy argues (with resemblances to criticism voiced in earlier eras 22 ) that antitrust shortcomings-including lax federal enforcement and judicial acceptance of permissive antitrust doctrine since the mid-1970s-have contributed significantly to large increases in concentration and the creation, and entrenchment, of market power in many sectors of the economy. 23 These developments are said to have raised prices for consumers, diminished innovation and new business development, and increased business margins and profits. 24 Some commentators and politicians go further and emphasize the adverse consequences of these changes on democracy 25 and income or economic inequality, 26 so laying at antitrust's feet "a myriad of perceived socio-political problems." 27 In these critiques, weak antitrust enforcement is one key element of a larger failure of the government to promote competition to spur growth, to ensure that all citizens enjoy the fruits of prosperity, and to safeguard the sound functioning of the democratic process.
22. Both at the end of the nineteenth century (about the inability of the common law to curb the rising power of the trusts, which led to the adoption of the antitrust laws) and subsequently in the mid twentieth century (about the negative impact on economic performance and the nation's social and political health caused by the U.S. agencies' failure to use the antitrust laws to halt industrial concentration); see generally, Edward H.

Modern critiques often compare the current antitrust system to enforcement policy between the late 1930s and the early 1970s. In this era, courts and enforcement agencies developed strict rules governing collusive agreements among competitors, vertical agreements between manufacturers and distributors, and dominant firm behavior. In addition, with the adoption of the Celler-Kefauver Act in 1950, 28 Congress bolstered the Clayton Act's merger control provision, 29 which the DOJ and the FTC applied aggressively to challenge business combinations. With encouragement from the Supreme Court, the agencies imposed tough restrictions on horizontal and nonhorizontal mergers. 30 Judicial decisions and enforcement policy embraced an egalitarian vision that emphasized the attainment of objectives (such as the preservation of SMEs and democracy) beyond the promotion of economic efficiency. 31 Public enforcement and jurisprudence formed what Jonathan Baker has called a political bargain that tolerated the development of large firms in return for the assurance, provided by robust antitrust policy, that such firms would not attain or abuse positions of dominance through improper exclusionary tactics, and that rivals would not use mergers or cartels to achieve and exploit market power. 32 Among other effects, the political bargain helped forestall the adoption of more intrusive forms of regulatory supervision, such as public utility control of pricing and entry. Largely unchallenged by significant foreign economic rivals, and perhaps facilitated by strong antitrust oversight, the U.S. economy enjoyed extraordinary growth from the end of World War II through the 1960s.
The rising concern articulated in the modern critique is that developments since the mid-1970s have depleted the force of antitrust law. As described below, commentators assign responsibility to various factors, including the Supreme Court's misguided acceptance of the notion that Congress intended the Sherman Act to be an efficiency-oriented "consumer welfare prescription"; 33 judicial decisions that narrowed antitrust law's reach; and permissive antitrust enforcement.
1. Repudiation of the True Goals of the Antitrust Statutes. Although proponents for change have different perspectives (see infra II.B), some contend that the courts and enforcement agencies were wrong, from the mid-1970s to the present, to abandon the broad vision that Congress had embraced in enacting the Sherman Act in 1890, 34 the Clayton Act 35 and Federal Trade Commission Act in 1914, 36 the Robinson-Patman Act in 1936, 37 and the Celler-Kefauver Act in 1950. 38 Courts and enforcement agencies have wrongly replaced the original legislative commitment to protect smaller firms from oppression and preserve a democratic political order with a single-minded focus on consumer welfare and efficiency. 39 Some argue that the abandonment of antitrust law's original concern about the full range of injuries that monopoly inflicts on citizens (not simply as purchasers of goods and services but also as workers, entrepreneurs, shop owners in local communities, and participants in the democratic process) is the chief cause of what has gone wrong with the U.S. antitrust system. 40 2. Retrenchment in Antitrust Doctrine. It has also been argued that, guided by a false conception of antitrust's goals, or of how its goal (or goals) is to be achieved, the federal courts have raised procedural, evidential, and substantive bars to antitrust actions excessively and gone too far in loosening antitrust restrictions governing vertical agreements, dominant firm behavior, and mergers. 41 Reflecting a deep-seated concern about the hazards of overenforcement, confidence in the ability of markets to renew themselves, and wariness of the U.S. system of private rights of action, 42 the courts have systematically and incrementally diminished the likelihood that a plaintiff can prevail in antitrust litigation. Not only have these developments discouraged private litigation, but they have also made the federal agencies more risk-averse in deciding whether to challenge dominant enterprises 43 or attack mergers, except in cases of unusually high concentration.
B. The Proposed Cures
The critiques touched on in the section above are varied and result from different policy perspectives. Broadly, advocates for change can be grouped into three categories: (i) do substantially more with the existing antitrust system; (ii) do more with the existing system and enact additional regulatory mechanisms; and (iii) undertake a "root-and-branch" transformation of the U.S. competition policy system. Although we survey the reform proposals from the broad perspective of each of these groups below, we recognize that this classification scheme is imperfect. For example, all three groups share some policy preferences, such as increasing scrutiny of dominant firm conduct and stiffening merger control. Further, although differences across the groups are detectable with respect to goals, and views differ about whether the consumer welfare standard has betrayed the antitrust laws, the first two groups would continue to emphasize consumer interests in designing policy. In contrast, the "root-and-branch" group would adopt a broader conception of citizen welfare that accounts for the well-being of individuals not only as purchasers of goods and services but also as workers and owners of smaller businesses. Views about the severity of measures needed to correct existing market pathologies also vary across the groups.
1. Do Substantially More with the Existing System. One major group of reform advocates argues that the journey toward better policy does not require the adoption of new statutes, wholly new goals, or the creation of new regulatory institutions. This group would employ a concept of consumer welfare that encompasses effects on prices, quality, and innovation; it would not extend the range of goals to include effects on small businesses or the well-being of workers (except as injured by the exercise of monopsony power). 47 In their view, the existing framework presents untapped possibilities for a more activist program. The bases for needed improvements would be an evolution in, and the imaginative application of, existing doctrine 48 and, crucially, a recalibration of error cost analysis, 49 which today suggests that the hazards of intervening too much to correct certain phenomena (e.g., improper exclusion by dominant firms) exceed the costs of intervening too little. Federal enforcement agencies should be more proactive and change their appetite for risk by bringing more cases in the courts, even if they might fail. 50 They consequently place the burden on antitrust agencies to be mindful of the threat of underenforcement and to undertake a more ambitious agenda, extending far beyond their current focus, and scrutinizing a wide range of exclusionary or collusive behavior and mergers, including: vertical restraints, such as platform MFNs; practices of standard-setting organizations that allow owners of standard essential patents to exploit their market power; conduct arising in regulated, or recently deregulated, markets; refusals to deal, predatory pricing, and other newer forms of monopolization; horizontal shareholdings in concentrated product markets; and horizontal and vertical mergers, especially through the revival and strengthening of the structural presumption and vigorous scrutiny of acquisitions of start-ups.

47. Arguably, the consumer welfare goal remains a coherent structure for the development of carefully crafted, consistent rules, which can be applied in an objective, evidence-based framework that eliminates protectionism and politically or ideologically driven decision taking.
48. See Unlocking Antitrust Enforcement, supra note 6.
49. Jonathan B. Baker, Taking the Error Out of "Error Cost" Analysis: What's Wrong with Antitrust's Right, 80 ANTITRUST L.J. 8 (2015).
50. BAKER, supra note 6, 3. Arguably enforcement could be focused on areas or cases which are likely to result in the greatest inequality, given the link between market power and widening economic inequality, Jonathan B. Baker, Market Power in the U.S. Economy Today, WASH. CTR. EQUITABLE GROWTH (Mar. 20, 2017).
2. Do Substantially More with the Existing System and Create a New Digital Regulator. The second group largely accepts the agenda described above and would make one notable addition: the creation of a new regulatory body to oversee digital technology giants. As proposed by the Stigler Center report on competition in digital markets, 51 the new regulator would have the power to set rules governing the conduct of leading digital platforms (for example, by imposing nondiscrimination obligations on platforms) and even to break them up. This proposal draws upon and extends recommendations made by a United Kingdom advisory panel to create a new "digital markets unit" within the UK government. 52

3. Root-and-Branch Transformation. The third collection of reform proponents is generally receptive to the proposals of the first two groups, but argues that these measures are seriously incomplete. Advocates of the root-and-branch approach (which include leading political figures such as Senators Bernie Sanders and Elizabeth Warren) contend that more dramatic reform is required to deal with America's "market power problem," as monopolies and oligopolies not only raise consumer costs, block entrepreneurship, stunt investment, and retard innovation but also "depress wages and salaries" and "concentrate political power, which they use to win favorable policies and further entrench their dominance." 53 These advocates claim that as today's market problem resulted from abandonment of antitrust law's true aims, antitrust must return to its origins, embedded in a broader citizen-welfare standard that addresses employment security, wage levels, economic freedom of consumers, the well-being of SMEs, the preservation of democracy, the diffusion of concentrated private and political power, and a wariness of monopoly power in all of its forms. 54 The root-and-branch program would have two dimensions. The first would have the DOJ and the FTC use existing powers to reorient enforcement in a way that would resemble but expand the enforcement agenda we describe above for the do-more-with-what-exists groups. Secondly, however, because proponents of root-and-branch reform believe that expanded use of existing tools will not suffice to transform the U.S. regime, they contend that legislation is necessary to overcome permissive rules embedded in judicial doctrine and to supplement existing enforcement institutions. 55 Reorientation of enforcement would require both a curtailment, repudiation, or revision of some existing programs (e.g., advocacy and law enforcement efforts that challenge occupational licensure restrictions or attack efforts by low-wage service providers to raise their fees 56 ) and expanded enforcement through:
• challenging individual dominant firms and tight oligopolies, with routine recourse to structural remedies to deconcentrate affected sectors (including the prosecution of new antitrust cases to break up technology firms, including GAFA); 57
• using expanded theories to challenge vertical integration by contract or ownership; 58
• challenging price discrimination, through the restoration of Robinson-Patman Act enforcement as a core element of federal antitrust policy; 59
• strictly opposing mergers whose concentrative effect exceeds levels set by the U.S. Department of Justice Merger Guidelines issued in 1968 60 and adhering to the policy aims set out in Supreme Court decisions such as Brown Shoe v. United States; 61
• using potential competition theories to bar leading firms from acquiring smaller, start-up enterprises that could otherwise emerge as strong challengers to incumbent giants, and making a broad category of mergers between large firms presumptively unlawful; 62
• closely scrutinizing completed mergers, and the conditions attached to them, including efforts to unravel mergers that the federal agencies improvidently cleared or approved subject to inadequate remedies; 63
• routine FTC use of Section 5 of the FTC Act to overcome doctrinal limitations imposed by the existing Sherman Act and Clayton Act jurisprudence that embraces a consumer welfare standard and unduly confines the interpretation of these statutes; 64
• promulgation by the FTC of trade regulation rules to establish codes of behavior applicable on an economy-wide basis. 65
Such steps are to be facilitated, reinforced, and supplemented by suggested legislative steps, for example:
• the creation of a new competition advocate to survey markets, receive complaints, and make public recommendations for investigations by the FTC and DOJ into possible market exploitation and anticompetitive behavior; 66
• new legislation banning or limiting information services platforms, or other monopolists, from integrating vertically or selling products in competition with third parties who use their platforms;
• making certain conduct presumptively abusive 68 and the introduction of fines for, and to deter, monopolization offenses; 69
• new legislation to limit mergers that unfairly consolidate corporate power, in particular through the application of strong presumptions or bans against certain mergers (e.g., large or "megamergers" 70 ) or through use of presumptions designed to prevent not only mergers that will increase prices, but also those that will reduce wages, cut jobs, lower quality, limit access to services, stifle innovation, or hinder the ability of SMEs to compete.
The purpose of the new legislation would be to facilitate successful challenges by agencies and to incentivize companies to be better corporate citizens.

51. The Stigler Report, supra note 8.
52. The Furman Report, supra note 8.
C. Implementing a More Ambitious Enforcement Agenda: The Steps So Far
Demands for stronger competition programs are already inspiring adjustments at the DOJ and the FTC. Over the past two years, the FTC has conducted an extensive set of public hearings to consider how it can meet changes in, and the new demands of, the economy. 71 In addition, it has established a Technology Enforcement Division in its Bureau of Competition to lead its enforcement efforts in the high-tech sector. 72 Both federal agencies have initiated investigations of the Tech Giants, the agencies are in the process of preparing vertical merger guidelines (for the first time since 1984) 73 and Digital Guidelines, and the DOJ is conducting a review of leading online platforms to identify practices that create or maintain structural impediments to greater competition. 74 All of these activities have accompanied the stated commitment of the federal agencies to apply more resources and scrutiny to dominant firms and to mergers involving the acquisition by major incumbents of nascent competitors. 75 The federal activity is taking place against the backdrop of visible, significant inquiries by state attorneys general, individually and collectively, into the conduct of digital platforms, including Google and other leading technology firms. It is easy to imagine that the collection of federal and state efforts will lead to the initiation of new cases in 2020. 76

The bill would allow the agencies to seek penalties of up to 15% of a company's total U.S. revenues in the previous year, or 30% of revenues in the relevant aspect of the company's business. The bill would also require the agencies to develop joint guidelines on the civil penalties they would seek under Sherman Act § 2 and FTC Act § 5. Senator Klobuchar has stated that penalties currently available under Sherman Act § 2 have not proven sufficient, on their own, to deter anticompetitive exclusionary conduct.
III. Obstacles to Effective Implementation
The proponents of change have set out a breathtaking agenda for reform. The various papers and reports are powerfully reasoned and argued but devote relatively little attention to the question of how their proposals can be achieved successfully. Rather, many of them seem to be predicated on the assumption that any legislative changes required can be introduced rapidly and that the new, more aspirational program can be driven home straightforwardly by agencies led by courageous leaders and supported by a larger staff that shares the vision for fundamental change.
The discussion below, and history, seems to indicate, however, that more courage and more people will not necessarily overcome the implementation obstacles that stand in the way of a program that requires the rapid prosecution of a large number of complex cases against well-resourced and powerful companies. Indeed, the criticisms levied at the current system, the proposals for more effective enforcement and reform, and the scale of the action being demanded bear some resemblance to those that led to a more re-invigorated and aggressive antitrust enforcement policy in the 1960s and early 1970s. For example, at that time complaints that the FTC was in decay, was obsessed with trivial cases and failing to address matters of economic importance, anticompetitive conduct, and rising concentration, 77 led the FTC to embark on a new, bold, and astoundingly broad enforcement program. 78 In an effort to meet criticisms of it as a shambolic and failing institution, the FTC sought to upgrade its processes for policy planning, made concerted efforts to improve its human capital in management and case handling, and sought to improve substantive processes and the quality of its competition and consumer protection analysis.
In the end, the FTC's efforts to improve capability proved insufficient to support the expanded enforcement agenda, partly because the Commission failed to formulate an adequate plan to overcome the full range of implementation obstacles. The FTC seriously overreached because it did not grasp, or devise strategies to deal with, the scale and intricacies of its expanded program of cases and trade regulation rules, the ferocious opposition that big cases with huge remedial stakes would provoke from large defendants seeking to avoid divestitures, compulsory licensing, or other measures striking at the heart of their business, and the resources required to deliver good results. The Commission lacked the capacity to run novel shared monopoly cases that sought the break-up of the country's eight leading petroleum refiners and four leading breakfast cereal manufacturers 79 and simultaneously pursue an abundance of other high-stakes, difficult matters involving monopolization, distribution practices, and horizontal collaboration. The FTC also overlooked swelling political opposition, stoked by the vigorous lobbying of Congress, that its aggressive litigation program provoked. 80 New legislation envisaged by reform advocates could ease the path for current government agencies seeking to reduce excessive levels of industrial concentration by arresting anticompetitive behavior of dominant enterprises (through interim and permanent relief) and by blocking mergers that pose incipient threats to competition. It seems clear, however, that such dramatic legislative proposals are likely to be fiercely contested through the legislative process and so will take time, and be difficult, to enact. Further, even if armed with a more powerful mandate, the DOJ and the FTC will still have to bring what are likely to be challenging cases applying the new laws (see Section F). The adoption, setting up, and bedding in of new legislation or regulatory structures and bodies is therefore unlikely to happen very quickly and is, consequently, unlikely to meet the demands of those seeking urgent and immediate action now. These difficulties suggest that, for the near future at least, the agencies will have to achieve successful extensions of policy mainly through launching themselves into a number of lengthy, complex investigations and litigation based on the current regime. This means establishing violations under existing judicial interpretations of the antitrust laws and making a convincing case for the imposition of effective remedies, including structural relief.

76. Joseph Simons, the FTC Chairman, has said he expects the Commission to conclude its existing inquiries by the end of 2020. As noted earlier, the likely adverse political reaction that the agency will face if its highly publicized investigations in this area yield no cases leads us to believe that the FTC will commence one or more lawsuits involving the leading technology firms.
The discussion in this section identifies likely impediments to the implementation of ambitious reforms, either through litigation (under the present-day regime) or legislation. These include judicial resistance to broader applications of the Sherman, Clayton, and FTC Acts, the complexities of designing effective remedies, the uncertainty of long-term political support for ambitious reforms and the possibilities for political backlash once agencies begin prosecuting major new cases, and the complications, and resistance, that confront any effort in the United States to make legislative change.
A. Judicial Resistance to Extensions of Existing Antitrust Doctrine
As noted in Section II.A, judicial decisions since the mid-1970s have reshaped antitrust law; created more permissive substantive standards governing dominant firm conduct, mergers, and vertical restraints; and raised the bar to antitrust claims in a number of ways. This remolding has been facilitated by the Court's conclusion that the Sherman Act constitutes "a special kind of common law offense," 81 so that Congress "expected the courts to give shape to the statute's broad mandate by drawing on common-law tradition." 82 This has allowed the statutory commands to be interpreted flexibly and the law to evolve with new circumstances and new wisdom; 83 for example, where there is widespread agreement that the previous position is inappropriate or where the theoretical underpinnings of those decisions have been called into question. 84 The proposed solutions will depend, in the short term at least, on the ability of enforcement agencies to navigate the described jurisprudence to find an antitrust infringement and, in some instances, a further rethinking, refinement, and/or development of doctrine, through softening, modification, or even a reversal of current case law. Although such an evolution could, in theory, result, as it did over the last forty years, from a steady stream of antitrust cases, judicial appointments since 2017 have arguably made such a change in direction unlikely. Rather, it seems more probable that successful prosecution of major antitrust cases, and especially Section 2 Sherman Act monopolization cases, will remain challenging and may even become more difficult. Cases will be litigated before judges who are ordinarily predisposed to accept the current framework, either by personal preference or by a felt compulsion to abide by forty years of jurisprudence that tells them to do so. 85 An alternative is to gradually change the philosophy of the federal courts by appointing judges sympathetic to the aims of the proposed transformation. 86
antitrust law), for turning contractual obligations into antitrust claims, and for departing from current federal agency practice by imposing remedies requiring Qualcomm to negotiate or renegotiate contracts with customers and competitors worldwide. She has thus urged the Ninth Circuit (on appeal), and if necessary the Supreme Court, to assess the wisdom of these sweeping changes and to stay the ruling. 92 It seems likely, therefore, that at the same time as bringing cases seeking to develop procedural, evidential, and substantive antitrust standards under the existing regime, additional antidotes to the stringencies of existing jurisprudence will be required, including more extensive, and expansive, use of Section 5 of the FTC Act to plug the gaps created by the narrowing of the scope of Section 2 of the Sherman Act, and/or the adoption of legislation that directs courts to apply a wider goals framework.
B. Infirmities of Section 5 of the Federal Trade Commission Act
One possible solution to rigidities that have developed in Sherman Act jurisprudence is for the FTC to rely more heavily on the prosecution, through its own administrative process, of cases based on Section 5 of the FTC Act and its prohibition of "unfair methods of competition." 93 This section allows the FTC 94 to tackle not only anticompetitive practices prohibited by the other antitrust statutes but also conduct constituting incipient violations of those statutes or behavior that exceeds their reach. The latter is possible where the conduct does not infringe the letter of the antitrust laws but contradicts their basic spirit or public policy. 95 There is no doubt, therefore, that Section 5 was designed as an expansion joint in the U.S. antitrust system. It seems unlikely to us, nonetheless, that a majority of the FTC's current members will be minded to use it in this way. Further, even if they were to be, the reality is that such an application may encounter difficulties. Since its creation in 1914, the FTC has never prevailed before the Supreme Court in any case challenging dominant firm misconduct, whether premised on Section 2 of the Sherman Act or purely on Section 5 of the FTC Act. 96 The last FTC success in federal court in a case predicated solely on Section 5 occurred in the late 1960s. 97 The FTC's record of limited success with Section 5 has not been for want of trying. In the 1970s, the FTC undertook an ambitious program to make the enforcement of claims predicated on the distinctive reach of Section 5 a foundation for developing "competition policy in its broadest sense." 98 The agency's Section 5 agenda yielded some successes, 99 but also a large number of litigation failures involving cases to address subtle forms of coordination in oligopolies, to impose new obligations on dominant firms, and to dissolve shared monopolies. 100 The agency's program elicited a powerful legislative backlash from a Congress that once supported the FTC's trailblazing initiatives but turned against it as the Commission's efforts to obtain dramatic structural remedies unfolded. 101
C. Designing Effective Remedies
Important issues arising for the proposed new enforcement strategy will be what remedies should be sought; how an order, or decree, can be fashioned to ensure that the violation is terminated, that competition in the market is restored, that the opportunity for competition is re-established, and that future violations are deterred; and whether a court will be likely to impose any such remedy. 102 The Sherman Act treats infringements of its key commands as crimes attracting severe sanctions, including fines (corporate and individual) and imprisonment. Although since 1980 the DOJ has used criminal prosecutions only to challenge hard-core horizontal cartels, 103 some antitrust reform proponents are calling for the introduction of fines to sanction illegal monopolization, and some commentators have proposed that the DOJ reconsider its policy of not seeking criminal penalties beyond the Section 1 conspiracy context. 104 For the time being, however, it would appear that existing civil sanctions will remain the tool of choice for the DOJ in dealing with antitrust infringements and will be the only set of remedies available to the FTC, which has no mandate to bring criminal cases. The civil remedial options available to the federal agencies, which can broadly be grouped into three categories, are nonetheless powerful in principle. The first and, perhaps, the most common form of remedy consists of controls on conduct. Conduct-related relief ordinarily takes the form of cease and desist orders that forbid certain behavior or, in a smaller number of cases, compel firms to engage in affirmative acts, such as providing a competitor access to an asset needed to compete.
The second major form of remedy is structural relief in the form of divestitures or the compulsory licensing of intellectual property that enables a firm to enter a previously monopolized market. The boundary between purely conduct-based and structural remedies is not always clear. A compulsory licensing decree has strong structural features (it directly facilitates new entry) and conduct elements (it may require the owner of the patent to provide the licensee know-how and updates of the patented technology).
The third remedy consists of civil monetary relief in the form of disgorgement of ill-gotten gains or the restitution of monopoly overcharges to victims. A number of Supreme Court decisions in monopolization cases in the late 1940s and early 1950s appeared to hold that these forms of recovery are encompassed in the mandate of courts to order equitable remedies to cure antitrust violations. The federal agencies have not used this power expansively, though it would appear to be available to recoup overcharges in Section 2 or other cases. 105 The cures envisaged by many of the advocates of change call for the bold application of the full portfolio of civil remedies, including unwinding past mergers, divesting assets, restructuring concentrated markets, limiting or reversing vertical integration, or imposing licensing obligations. Such advocates thus wish the DOJ and FTC to use the antitrust laws as an effective and simple mechanism for deconcentrating both monopolistic and oligopolistic markets, rapidly introducing new competition into a market, and reversing what they consider to be severe structural problems that have been allowed to develop in the market. 106 Structural remedies, in particular, have always been a real and important part of the antitrust remedial arsenal. 107 Between 1890 and 1939 there were eight single-firm monopolization cases involving exclusionary practices (that is, cases that did not involve acquisition or maintenance of monopoly power wholly or partly by merger) in which substantial divestment was ordered, and seven such cases between 1940 and 1999; see RICHARD A. POSNER, ANTITRUST LAW (2001), at 106 (table 5). Divestiture has been ordered not only to unwind
an unlawful acquisition of shares or stock 108 (divestiture being the preferred remedy for an illegal merger or acquisition, as it is simple, relatively easy to administer, and sure) but also in Sherman Act cases. 109 In the 1960s the FTC also sought, using its powers under Section 5 of the FTC Act, to deconcentrate the petrol and breakfast cereal markets, 110 and in 1969 the Neal Report, 111 commissioned by President Lyndon Johnson, proposed the adoption of laws that would allow oligopolistic industries to be deconcentrated and the condemnation of mergers in markets that were already concentrated. 112 Modern antitrust has, however, had less appetite for using antitrust to break up companies. Although the District Court in United States v Microsoft Corp 113 ordered, at the request of the DOJ, that Microsoft be broken into two parts, the Court of Appeals, despite affirming the violation of Section 2, reversed and remanded the finding that Microsoft should be split in two. Setting out a high bar for structural relief, the Court stressed that the lower court had not (1) held a remedies-specific hearing 114 or (2) provided adequate reasons for the decreed remedies. 115 A number of factors seem responsible for the trend away from structural remedies. First, antitrust thinking has evolved since the early 1970s from a belief that antitrust intervention and structural remedies can improve performance 116 to the current, more laissez-faire view. 117 Second, there are concerns about the effectiveness of previous attempts to deconcentrate industries, 118 especially given the length of time that antitrust proceedings take; 119 one suggested reason for divestiture's poor record as an antitrust remedy is that the government's lawyers tend to lose interest in a case at the relief stage and lack the training or experience necessary for the reorganization of a complex enterprise. Third, it is difficult to construct and oversee a structural remedy effectively. Although in cases involving a merger or acquisition it may be relatively easy to structure such a remedy through disentangling assets that were once owned
separately, 120 outside of this situation the question of how and what to divest might be much more speculative, seem much riskier, and may in fact be complex and difficult to administer (involving significant restructuring, separation of physical facilities, and allocation of staff from integrated teams). 121 These types of concerns make it a challenge to persuade a court that a structural remedy is warranted and will be successful in achieving its objective. 122 In the discussion above, we have been addressing the types of remedies that are imposed at the conclusion of a lawsuit. A problem in highly dynamic markets, however, is that the lag between the initiation of a case and a final order on relief may be so great that market circumstances have changed dramatically or the victim of allegedly improper exclusion may have left the market or otherwise lost its opportunity to expand and contest the position of the incumbent dominant firm. In this context, the antitrust cure arrives far too late to protect competition. The relatively slow pace of antitrust investigations and litigation (with appeals that follow an initial decision) has led some observers to doubt the efficacy of antitrust cases as effective policy-making tools in dynamic commercial sectors.
There are at least five possible responses to concerns about the speed of antitrust litigation, particularly matters involving dominant firms. First, agencies could experiment with ways to accelerate investigations, and courts could adopt innovative techniques to shorten the length of trials. In the United States, we perceive that greater integration of effort among the public agencies would permit the more rapid completion of investigations (e.g., by pooling knowledge and focusing more resources on the collection and evaluation of evidence). Courts could use methods tested with success in the DOJ prosecution of Microsoft in the late 1990s to truncate the presentation of evidence. These types of measures have some promise to bring matters to a close more quickly.
Second, the initiation of a lawsuit could be recognized as being, in some important ways, its own remedy; the prosecution of a case by itself causes the firm to change its behavior in ways that give rivals more breathing room to grow. Moreover, the visible presence of the enforcement authority, manifested by its investigations and lawsuits, causes other firms to reconsider tactics that arguably violate the law. Seen in this light, the entry of a final order that specifies remedies may not be necessary in all instances to achieve the desired chastening effect.
A third response is to experiment more broadly with interim relief that seeks to suspend certain types of exclusionary conduct pending the completion of the full trial. 123 Effective interim measures would require the enforcement agency to develop a base of knowledge about the sector that enables it to accurately identify the practices to be enjoined on an interim basis and to give judges a confident basis for intervening in this manner.
A fourth approach would be to recognize that the remedies achieved in protracted antitrust litigation may not be as imperfect or untimely as they might appear to be. There have been a number of instances in which the remedy achieved in a monopolization case was rebuked as desperately insufficient when ordered but turned out to have positive competitive consequences. 124 This is a humbling and difficult aspect of policy making. It may not be easy for an agency to persuade its political overseers, or other external audiences, that the chief benefits of its intervention will emerge in, say, two or three decades. Yet the positive results may take a long time to become apparent. A fifth technique would be to rely more heavily on ex-ante regulation in the form of trade regulation rules that forbid certain practices. A competition authority, most likely the FTC, would use its rulemaking powers to proscribe specific types of conduct (e.g., self-preferencing by dominant information services platforms).
In this article, we do not purport to solve the problems of remedial design set out above. There is, however, a fairly clear conclusion about how enforcement agencies should go about thinking of remedies. As we note below, there is considerable room for public agencies to design remedies more effectively by systematically examining past experience and collaborating with external researchers to identify superior techniques. In this regard, the FTC's collection of policy tools would appear to make it the ideal focal point for the development of more effective approaches to remedial design.
D. Political Backlash
As we have already indicated, the government's prosecution of high stakes antitrust cases often inspires defendants to lobby elected officials to rein in the enforcement agency. Targets of cases that seek to impose powerful remedies have several possible paths to encourage politicians to blunt enforcement measures. One path is to seek intervention from the President. The Assistant Attorney General of the Antitrust Division serves at the will of the President, making DOJ policy dependent on the President's continuing support. The White House ordinarily does not guide the Antitrust Division's selection of cases, but there have been instances in which the President pressured the Division to alter course on behalf of a defendant, and did so successfully. 125 The second path is to lobby the Congress. The FTC is called an "independent" regulatory agency, but Congress interprets independence in an idiosyncratic way. 126 Legislators believe independence means insulation from the executive branch, not from the legislature. The FTC is dependent on a good relationship with Congress, which controls its budget and can react with hostility, and forcefully, when it disapproves of FTC litigation, particularly where it adversely affects the interests of members' constituents. Controversial and contested cases may consequently be derailed or muted if political support for them wanes and politicians become more sympathetic to commercial interests. The FTC's sometimes tempestuous relationship with Congress demonstrates that political coalitions favoring bold enforcement can be volatile, unpredictable, and evanescent. 127 If the FTC does not manage its relationship with Congress carefully, its litigation opponents may mobilize legislative intervention that causes ambitious enforcement measures to founder.
Imagine, for a moment, that the DOJ and the FTC launch monopolization cases against each of the GAFA giants. Among other grounds, these cases might be premised on the theory that the firms used mergers to accumulate and protect positions of dominance. The GAFA firms have received unfavorable scrutiny from legislators from both political parties over the past few years, but the current wave of political opprobrium is unlikely to discourage the firms from bringing their formidable lobbying resources to bear upon the Congress. It would be hazardous for the enforcement agencies to assume that a sustained, well-financed lobbying campaign will be ineffective. At a minimum, the agencies would need to consider how many battles they can fight at one time, and how to foster a countervailing coalition of business interests to oppose the defendants.
E. Opposition to Legislative Reform
Although statutory reform might at first sight appear to be a direct, effective solution to some of the impediments (such as entrenched judicial resistance to intervention), there are good reasons to expect that powerful business interests will also stoutly oppose any proposals for legislation to expand the reach of the antitrust laws or to create a new digital regulator. 128 One can envisage that the affected firms will amass their formidable financial and political resources to stymie far-reaching legislative reforms. Legislative steps that threaten the structure, operations, and profitability of the Tech Giants and other leading firms are fraught with political risk. These risks are surmountable, but only by means of a clever strategy that anticipates and blunts political pressure. One element of such a strategy is to mobilize countervailing support from consumer and business interests to sustain an enabling political environment to enact ambitious new laws.
Even if successful, "[l]egislative relief from existing jurisprudential structures might take years to accomplish"; 129 acts taken under new legislation, even with the establishment of presumptions that improve the litigation position of government plaintiffs, may still be relatively complex and difficult to prosecute. Rulemaking is an alternative to litigation, but it is no easy way out of the problem. On the contrary, promulgation and defense, in litigation, of a major trade regulation rule is liable to take as long as the prosecution of a Section 2 case. It can also be anticipated that a judiciary populated with many regulation skeptics will subject new rules or related measures to demanding scrutiny.
A. Finding a Path
In Section III, we described the obstacles that are likely to stand in the way of accomplishing a significant reconstruction of U.S. antitrust doctrine and enforcement policy. To some degree, the severity of the obstacles depends on how ambitious the chosen reform program turns out to be. The aims and means of the "do more with what you have" group are challenging enough, but they pale in comparison to the difficulties presented by the proposals of the root-and-branch transformation advocates. The latter group would place greater demands on the implementing institutions through, for example, initiating a larger number of major cases and seeking more powerful structural remedies, and by insisting that the entire program be undertaken quickly and decisively rather than at a more moderate tempo.
Although successful accomplishment and delivery of reforms, more moderate and more ambitious alike, will require awareness and a realistic assessment of likely implementation obstacles and a conscious effort to develop a strategy to surmount them (it has been seen that history provides sobering examples of failures where similar, significant implementation barriers have not been considered or have been discounted 130 ), such barriers are not a formula for timidity or a reason not to undertake change. Rather, understanding them helps guide the design of a successful program. To return to our U.S. space program analogy, 131 an indispensable foundation for the ultimate success of the Apollo program was the commitment of the National Aeronautics and Space Administration and its contractors to understand the full magnitude of the task before them and to anticipate all hazards that would confront human spaceflight to and from the Moon's surface. 132 The probing analysis of risks inspired successful efforts to find solutions. Operating in an unforgiving environment where even small errors could be catastrophic, humans landed on the Moon and returned safely to Earth.
This section consequently discusses approaches for navigating the reform implementation challenges. Our most important caution is that the reforms, more dramatic and less sweeping alike, will require substantial upgrades in the capabilities and performance of the institutions responsible for implementation. The more ambitious the reform, the more urgent is the need to enhance capabilities. Further, the prospects of success for the public agencies (federal and state) are likely to improve if they can formulate a common strategy to overcome identified obstacles. Doing so will demand planned, joined-up, and consensual enforcement by the public enforcement agencies and a forthright self-assessment of existing operations and capabilities: to repair institutional flaws, to temper interagency disagreements, and to acquire the human capital needed to run a new, large collection of difficult antitrust suits.
B. Augmenting the Human Capital of the Enforcement Agencies
Measures to expand federal antitrust intervention dramatically, through the prosecution of lawsuits or the promulgation of trade regulation rules, will face arduous opposition from the affected businesses. Assuming that litigation will provide the main method in the coming few years to attack positions of single-firm or collective dominance, the targets of big antitrust cases will marshal the best talent that private law firms, economic consultancies, and academic bodies can offer to oppose the government in court. The defense will benefit from doctrinal principles that generally are sympathetic to dominant firms (again, we assume that legislation to change the doctrinal status quo will not be immediately forthcoming). Beyond a certain point, the addition of new, high stakes cases to the litigation portfolio of public antitrust agencies will create a serious gap between the teams assembled for the prosecution and defense, respectively. Therefore, although the public agencies can match the private sector punch for punch when prosecuting several major de-monopolization cases, when the volume of such cases rises from several to many, the government agencies may have to rely on personnel with considerably less experience to develop and prosecute difficult antitrust cases seeking powerful remedies against global giants.
An enhanced litigation program will therefore go only as far as the talent of the agencies will carry it. We propose three steps to build and retain the human capital (attorneys, economists, technologists, and administrative managers 133 ) needed to undertake a more ambitious litigation program. The first is to use antitrust as a prototype for a program to raise civil service salaries. The remaining two steps consist of cautions about the dangers of (a) denigrating the skills and accomplishments of existing agency personnel and (b) attempting to shut the revolving door through which professionals move between the public and private sectors. (In ensuring that an agency has the institutional capacity to take on these "Goliaths," agency officials must have the requisite diversity of skill sets, especially for cases involving the digital economy.) We discuss all three of these steps below.
1. Resources and Compensation.
To accomplish the desired expansion of enforcement, we see a need for more resources. 134 (The DOJ is seeking a dramatic increase in its appropriations for antitrust enforcement.) Nonetheless, budget increases that simply allow the enforcement agencies to hire additional staff, while useful, are not enough. We would use more resources to boost compensation for agency employees. This means taking the antitrust agencies out of the existing civil service pay scale. The need is not simply to hire more people. It is to attract a larger number of elite personnel who are equal to the tasks that the ambitious reform agenda will impose. We do not see how the public agencies can recruit and retain necessary personnel without a significant increase in the salaries paid to case handlers and to senior managers. It surprises us that none of the proposals for bold reform mention compensation for civil servants. Consider two possibilities for compensation reform. The first is to align antitrust salaries to the highest scale paid to the financial service regulators. Here the model would be the scale of salaries paid to employees of the U.S. banking regulatory agencies; the salary scale for these regulatory bodies exceeds the General Schedule federal civil service wage scale by roughly 20%. 135 A second alternative involves a more dramatic change, perhaps more easily accomplished at the FTC, which is a self-contained agency, than at the DOJ Antitrust Division, which is a relatively small part of a much larger bureaucracy. One might take the FTC's existing budget of about $330 million per year and triple it, setting the amount at $1 billion per year, and use the increase to raise salaries. We would conduct this experiment for a decade to test whether a major hike in pay would increase the agency's ability to recruit the best talent and keep the talent at home for a significant period of time.
We see this as a crucial test of the commitment and sincerity of the political leadership that seeks basic change. If fundamental competition policy reforms mean so much to the nation's well-being, then the country will need to pay to achieve them. Such steps will become even more important if new political leadership seeks to close the revolving door, which has operated as a mechanism to encourage attorneys and economists to accept lower salaries in federal service in the expectation of receiving much higher compensation in the private sector at a later time.
2. Respecting Past Achievements. In the United States, there is an unfortunate habit of making the case for major reforms by depicting the existing policy-making institutions as utterly incompetent, slothful, or corrupt. 136 Reform advocates sometimes appear to believe that any recognition that existing institutions sometimes have done good work undermines the case for fundamental reform. There is a perceived imperative to portray the responsible bodies and their leaders as hopelessly inadequate. Electoral campaigns can sharpen this tendency by leading the opposition party to claim that the incumbent administration's program was an unrelieved failure.
In a striking number of instances, this pattern has emerged in discussions of antitrust policy. 137 In current discussions about the future of the U.S. antitrust regime, advocates of fundamental reform sometimes portray the federal antitrust enforcement agencies as decrepit, perhaps to underscore the need for basic change. 138 The proponents of root-and-branch transformation often suggest that only a complete makeover of the antitrust system will enable antitrust law to fulfill its intended role. The implication is that, because the antitrust system has failed so miserably, there are few, if any, positive lessons to be derived from experience since the retrenchment of U.S. policy began in the late 1970s, and certainly none since 2000. This style of argument has several potential costs. One danger is that it overlooks genuine accomplishments and, in doing so, ignores experience that suggests how to build successful programs in the future. Consider three examples that deserve close study in building future cases that seek to expand the reach of the antitrust system. The first is the development of the FTC's pharmaceutical and nonpharmaceutical health-care program from the mid-1970s forward; this initiative used the full range of the agency's policy tools (cases, rules, reports, and advocacy) to change doctrine and alter business behavior. 139 A second example is the FTC's effort over the past two decades to restore the effectiveness of the quick look as an analytical tool in the wake of the Supreme Court's decision in Federal Trade Commission v. California Dental Association. 140 A third example is the FTC's successful litigation of three cases before the Supreme Court over the past decade. 141 The programs that accounted for these results were not accidental. Each program began with a careful examination of the existing framework of doctrine and policy to identify desired areas of extension. This stock-taking guided the identification of potential candidates for cases and the application of other policy-making tools. 142 Each program built incrementally upon the bipartisan contributions of agency leadership and the sustained commitment of staff across several presidential administrations headed by Democrats and Republicans.
If one assumes (as a number of reform proponents assert) that the FTC was a useless body in the modern era, there would be little purpose in studying these examples, or anything else it did, as there would be nothing useful to learn. The paint-it-black interpretation of modern antitrust history makes the costly error of tossing aside experience that might inform the successful implementation of new reforms.
A second notable cost of the catastrophe narrative, most relevant to the discussion of human capital, is its demoralizing effect on the agency's existing managers and staff. To see one's previous work portrayed as substandard, or worse, tends not to inspire superior effort. It breeds cynicism and distrust, where managers and staff understand that the critique badly distorts what they have done. Proponents of basic change must realize that the success of their program to expand antitrust intervention will require major contributions from existing staff and managers.
3. Capture and the Revolving Door. The modern critique of the U.S. system often describes the federal agencies as captured by the business community or beholden to ideas that disfavor robust intervention. 143 Advocates of change suggest that the execution of their reform program at the federal antitrust agencies will require the appointment of senior managers and new staff who repudiate the consumer welfare standard, or at least embrace a vision for expanded enforcement under the consumer welfare standard, and embrace the multidimensional conception of the proper goals of competition law. Those already employed by the enforcement agencies as managers and staff will be expected to accept the expanded (goals) framework or they will find their duties reduced and their roles marginalized. New appointees to top leadership positions will not be tainted by substantial previous experience in the private sector, nor will they have spent too much time as civil servants in a government enforcement culture that assumed the primacy of consumer welfare as the aim of antitrust law and accepted norms that tilted toward underenforcement. The concern about compromised motives is also likely to disqualify many academics who, though sympathetic to some expansion of antitrust enforcement, remain excessively beholden to some notion of a consumer (rather than citizen) welfare standard, or have engaged in consulting on behalf of large corporate interests.
One consequence of the acute anxiety about capture is to slam the revolving door shut, or at least to slow the rate at which it spins. We offer two cautions about this approach. First, the modern experience of the FTC raises reasons to question the strength of the theory. For example, if business perspectives dominate the FTC, why did the agency persist in its efforts to challenge reverse payment agreements involving leading pharmaceutical producers? 144 Was it because the pharmaceutical firms weren't as good at lobbying as, say, the information services giants? And what explains the FTC's decision to sue Qualcomm for monopolization early in 2017? 145 Is this simply attributable to the inadequacy of Qualcomm's Washington, DC, lobbyists, or is the capture explanation for the behavior of the federal antitrust agencies not entirely airtight?
Our second caution is that severe restrictions on the revolving door could deny the federal agencies access to skills they will need to carry out a major expansion of antitrust enforcement. Recruiting attorneys, economists, and other specialists from the private sector can give the agencies a vital infusion of talent which, when combined with agency careerists, permits the creation of project teams that can equal the capability of the best teams that the defense can mount in major litigation matters. We also are wary of the idea that an attorney or economist coming from the private sector will discourage effective intervention during the period of public service as a way to pave the road to a better private sector position upon leaving the agency. Rather, there is evidence to suggest that creating a reputation for aggressiveness and toughness as an enforcer increases one's post-agency employment options. More than a few individuals have developed prosperous careers based on piloting businesses through navigational hazards that they helped create while they were senior officials in public agencies.
C. Common Public Enforcement Strategy
The U.S. antitrust system is famous for its decentralization of the power to prosecute, giving many entities (public agencies at both the federal and state levels, consumers, and businesses) competence to enforce the federal antitrust laws. The federal enforcement regime also coexists with state antitrust laws and with sectoral regulation, at the national and state levels, that includes a competition policy mandate.
The extraordinary decentralization and multiplicity of enforcement mechanisms supply valuable possibilities for experimentation and provide safeguards in case any single enforcement agent is disabled (e.g., due to capture, resource austerity, or corruption). 146 Among public agencies, there is also the possibility that federal and state government institutions, while preserving the benefits of experimentation and redundancy, could improve performance through cooperation that allows them to perform tasks collectively that each could accomplish with great difficulty, or not at all, if they act in isolation. For models of successful interagency cooperation, one might study the successful policy integration that has taken place through the work of the United Kingdom Competition Network and the European Competition Network. In both examples, one can see the mix of organizational structures and personal leadership that enabled agencies collectively to accomplish policy results that would have been unattainable through the work of single agencies operating in isolation.
We doubt the ambitious litigation agenda demanded in the modern reform proposals is attainable if the public agencies adhere to traditional practices that overlook the expansion of outcomes and increase in quality that superior interagency cooperation could generate. A suggested program of fuller integration would have the following elements.
1. Development of a Common Strategy. The path toward a major expansion of the existing litigation program will require careful planning that begins with the formulation of a joined-up strategy implemented harmoniously by the DOJ and FTC. The starting point for the common strategy is to map out the existing contours of doctrine, identify the high ground for intervention that modern jurisprudence has established, select projects to reshape doctrine and other elements of antitrust policy, allocate them to the best-placed agency to act, and avoid duplication of resources on identical or overlapping investigations.
A second focal point in the analysis of the doctrinal status quo would be to consider how existing precedents can be employed to build successful cases and how doctrinal frontiers can be extended. 147 An important element of this mapping exercise is to understand why the courts have embraced more permissive standards over the past four decades. This assessment would facilitate the preparation of effective arguments to persuade judges to rethink those standards. Among other effects, we anticipate that this inquiry will reveal how perspectives beyond the modern Chicago School have influenced judicial thinking. In particular, it will demonstrate how a number of jurists have abandoned a multidimensional goals framework in favor of an efficiency orientation out of concern for "administrability" considerations posed by the modern Harvard School of Phillip Areeda and Donald Turner. To gain the support of jurists such as Stephen Breyer (whose antitrust views bear the mark of Areeda's influence), it will be necessary to show that the restoration of a new antitrust framework, or an egalitarian goals framework, would not lead to unpredictable and inconsistent litigation outcomes as each judge sought to weigh efficiency concerns, or efficiency concerns alongside other values, such as preserving opportunities for small enterprises to compete. 148 (In conversations with some who support the restoration of the egalitarian goals framework, it appears that efforts to reset the goals of the antitrust system will require strong structural presumptions to be devised whose application to dominant firm conduct would facilitate attainment of the pluralistic goals agenda; see, e.g., Anti-Monopoly and Competition Restoration Bill, supra note 14. The goals would thus be achieved by applying the presumptions rather than making each goal a factor to be considered in a rule of reason inquiry.) The third element of the common strategy would be lessons derived from the examination of the agencies' base of experience to determine what combination of policy tools (cases, studies, rules,
advocacy) offers the best means to effectuate change in the market, and to use this experience base to design specific remedies. Since its creation in 1890, the U.S. competition law system has generated a mass of information about the techniques for government intervention. As explained further below, the government's "big antitrust data" can be mined to shed light on what is likely to work. For example, experience in implementing major structural remedies pursuant to decrees in Section 2 monopolization cases and by legislation such as the Public Utility Holding Company Act of 1935 offers important lessons about how to design and carry out the restructuring of major business enterprises. 149 The study of past experience also reveals that it is a mistake, as part of a reform program, to focus all of an agency's resources on the prosecution of big cases against big companies to the exclusion of smaller matters. The history of U.S. Section 2 enforcement shows that small cases can make big law by establishing doctrinal principles that support subsequent successful prosecutions of large enterprises. 150
2. Project Selection Methodology. Project selection is the process by which an antitrust agency chooses the tools it will use to accomplish its policy aims. There is growing recognition among antitrust authorities that improvements in the methodology of project selection can strengthen the prospects of success for any single initiative. Adapted for the purpose of executing a major reform program, a good project selection methodology would pose a series of questions about every proposed initiative. 151 First, what does the agency expect to achieve if the project succeeds? Will it improve economic conditions, realign doctrine, or both? By defining anticipated gains, the agency can better understand how many resources to commit to a specific measure and make a better-informed decision about how much risk to accept. This inquiry also helps focus the agency's attention, from the earliest days of the project's preparation, on the design of remedies to cure apparent problems. The consideration of benefits to be attained and the means for realizing them can lead an agency to reflect carefully about whether the proposed project is the best way to solve the problem at hand. In some instances, a different sequence of initiatives may provide the best path to a solution; for example, to begin with a market study and then bring cases based on the learning from the study.
Second, what risks does the project pose? How will a project failure, such as a litigation defeat, affect the market and the agency? Will the agency be able to sustain political support for its projects, or will the targets of intervention mobilize a political coalition to constrain the agency by, for example, curbing its authority or budget? To succeed, agencies must be mindful of the shifting sands in politics and be prepared with countermeasures to deal with situations where relevant politicians' interests change and become more sympathetic to commercial interests. Important issues therefore will be whether current political supporters of reform have the staying power to back agencies for the five to ten years it might take to carry out cases successfully; what steps agencies can take to ensure sustained political support and to deal with swings in the political environment; and whether financial support from the affected firms may be used to sway the political process and buckle political resolve, or can be prevented from doing so.
Third, who will carry out the project? Which agency is best placed, and does that agency have the talent available, or can it acquire needed talent in a timely manner, to perform the project successfully and overcome the opposition it will face where the agency seeks strong remedies against individual firms or entire sectors of the economy? A clear-headed answer to these questions helps avoid the creation of large gaps between the agency's commitments and its ability to fulfill them in practice. Because it may be better attuned to the agency's capabilities, a more gradual approach to rolling out a reform program may have better prospects for success than the launch of a number of large, complex cases all at the same time.
One of the biggest hazards we see, especially in the root-and-branch reform agenda, is that it entails the rapid commencement of many ambitious projects that will place impossible demands on the capabilities of the antitrust agencies. Fourth, what will the project cost in terms of personnel and out-of-pocket expenditures for items such as expert witnesses to support cases? This inquiry helps the agency make a realistic prediction of the resources needed to carry out individual projects and prepare disciplined estimates for future budget requests.
How long will it take the agency to complete the project? This inquiry helps the agency determine whether its anticipated intervention and remedy will occur fast enough to solve an observed problem. If years may pass before the agency obtains a desired remedy at the conclusion of a lawsuit, it may be necessary to consider interim measures to correct the behavior that poses immediate competitive dangers if allowed to continue. By establishing an expected timetable at the outset, the agency equips its leadership with a valuable management tool to track a project's progress.
How does the proposed project fit into the portfolio of the agency's existing projects? If the agency examines each project in isolation, it can lose sight of the overall condition of its program portfolio. A portfolio-wide perspective enables the agency to assess the full range of risks it has assumed and, again, to see that it is achieving a good fit between its commitments and its capabilities.
How will the agency know that the project, if undertaken, is having its desired effects? It is a helpful exercise to identify how an agency's intervention will bring about change in the market. What are the anticipated effects on prices, product quality, new business entry, or other economic conditions? When are these effects likely to become apparent? This exercise helps the agency develop realistic expectations about the magnitude and timing of anticipated benefits. From its past experience, the agency may be aware that some benefits may take years, perhaps decades, to become apparent.
The specification of performance benchmarks also plays a crucial role in facilitating the ex-post evaluation of outcomes. A very basic form of assessment is to compare the agency's assumptions about a project when it begins with the knowledge it gains in the course of implementation. If anticipated performance falls below expectations (perhaps because a significant factor was overlooked), how can the project selection process be improved to account for the factor in the future? Taking careful stock of past measures that worked, and learning lessons from the failures, is a vital way to design new initiatives more effectively.
D. Enabling the FTC to Perform Its Intended Function
A number of the proposals for expansive reform would give the FTC a broader and fuller role in formulating competition policy. Several features of its original design make the Commission an attractive vehicle for carrying out a program of basic reforms. It has been seen that the FTC Act gives the Commission a broad, scalable mandate (Section 5's prohibition on "unfair methods of competition") to prohibit behavior not reached by existing interpretations of the Sherman and Clayton Acts. The agency also has expansive authority to collect information from firms through compulsory processes and to publish reports (15 U.S.C. § 46). 152 The statute also intended that the Commission serve as a special resource to the DOJ and to the courts in formulating remedies in antitrust cases (15 U.S.C. § 47, which authorizes the FTC, upon the request of a federal district court, to act as a master in chancery and advise the court about the design and execution of remedies in monopolization cases). 153
Under the program of greater interagency cooperation we have proposed above, the FTC would use Section 5 of the FTC Act to seek to extend the boundaries of existing doctrine and would use its information gathering and reporting powers to set the empirical basis for proposed extensions. The starting point for this effort would be to examine the agency's past (and rare) Section 5 litigation successes for lessons about how to gain judicial acceptance for an extension of antitrust doctrine. 154 The Commission also would serve, in effect, as the main public agency resource on remedies. The agency would use its analytical resources and experience in evaluating the effectiveness of antitrust remedies to guide the formulation of remedies in Sherman Act and Clayton Act cases, in addition to Section 5 cases. The agency would employ the large body of experience that the U.S. system and other systems have collected in the use of structural and behavioral remedies to suggest solutions in specific cases.
We suggest three legislative changes to enable the Commission to fulfill the role we have described above. The first is to relax restrictions that the Government in the Sunshine Act 155 imposes on the ability of commissioners to deliberate together privately to discuss strategy and tactics. Among other consequences, the Sunshine Act severely limits the ability of a quorum of commissioners to discuss and debate matters of agency policy except in meetings open to the public. 156 The policy planning functions that we see as essential to an expanded role cannot be performed at a high level without this reform. 157 (The experience that one of us (Kovacic) has had as a nonexecutive director of the UK's Competition and Markets Authority (CMA) has highlighted how the FTC is largely foreclosed from using policy planning and prioritization techniques that are commonly employed to great advantage in other jurisdictions.) A second essential step is to eliminate statutory exemptions that deny the FTC jurisdiction over common carriers, not-for-profit institutions, and banks. A third reform would confer powers on the FTC to conduct market studies and market investigations, and to obtain the information necessary to allow it to carry out its functions, in the same way as the CMA. 158 For example, Part 4 of the Enterprise Act 2002 159 enables the CMA to investigate markets where it appears that the structure of the market or the conduct of suppliers or customers in the market is harming competition and, where problems are identified, to propose steps to mitigate, remedy, prevent, or overcome them. This would enable the FTC to study sectoral or economy-wide phenomena and to impose remedies regardless of whether the conditions or practices in question violate the antitrust laws.
V. Conclusions
In the United States, as in many other parts of the world, the pressure is on the competition agencies to make 2020, and the new decade, a period of sustained and effective antitrust action, targeting especially the business models of digital platforms. Pending any longer-term, more fundamental reforms, many commentators are calling for immediate, rapid, and heightened competition scrutiny of a wide range of practices (including mergers (future and past), business practices of digital firms, and restricted distribution and price setting practices) and for the use of intrusive remedies to fix antitrust problems going forward.
These demands are imposing formidable expectations on the shoulders of competition agencies. Meeting them will not happen by chance or through a reactive and ad hoc approach. Indeed, without careful planning, an ambitious enforcement program involving a large number of complex litigations pursued concurrently would risk agency managers and case handlers becoming overrun and the program failing. This paper consequently proposes a more tempered, gradual, and joined-up approach to reform, involving carefully constructed and coordinated strategies to overcome anticipated obstacles, painstaking planning and case allocation, and the selection of some initial complementary (but not overlapping) high-profile case prototypes for each agency to pursue before the program is expanded in steps.
Both federal agencies have investigative powers, but we propose that the FTC should make full use of its fact-finding powers to collect information on industries or sectors selected for investigation. Further, we propose that before prosecutions are launched, a methodology be followed for selecting appropriate cases for prosecution, taking account of past achievements and failures; the goal(s) to be achieved in bringing the case; the chance of success (especially given current doctrinal limitations) and opportunities for reshaping law and policy; the prospect of achieving those goals through antitrust action and remedies (rather than, for example, advocacy or other mechanisms); which agency is best placed to act; and whether that agency has the tools and staff available to take on the case now (taking account of other agency commitments). Essential to all of the proposals is a need for the agencies to anticipate and account for political backlash and for human capital in the agencies to be augmented. It will only be through recognizing the skills of existing staff and through finding realistic and achievable mechanisms to retain and recruit talented staff that the agencies will have the skill set diversity to take on sophisticated and powerful firms, backed by formidable teams of lawyers and experts.
Relationship Health and Intimate Partner Violence in Integrated Primary Care: Individual Characteristics and Preferences for Relationship Support across Risk Levels
This study explores differences in characteristics and relationship treatment preferences across different levels of intimate partner violence (IPV) among Veterans Affairs (VA) primary care patients. In Fall 2019, we sent a mail-in survey assessing relationship healthcare needs to N = 299 Veterans randomly sampled from 20 northeastern VA primary care clinics (oversampling female and younger Veterans). We compared those reporting past-year use or experience of physical/sexual aggression, threats/coercion, or injury (Severe IPV; 21%) to those only reporting yelling and screaming (Verbal Conflict; 51%) and those denying any IPV (No IPV; 28%). Participants across groups desired 2–6 sessions of face-to-face support for couples' health and communication. No IPV participants were older and preferred treatment in primary care. The Verbal Conflict and Severe IPV groups were both flagged by IPV screens and had similar interest in couple treatment and relationship evaluation. The Severe IPV group had higher rates of harms (e.g., depression, alcohol use disorder, relationship dissatisfaction, fear of partner) and higher interest in addressing safety outside of the VA. Exploratory analyses suggested differences based on use vs. experience of Severe IPV. Findings highlight ways integrated primary care teams can differentiate services to address dissatisfaction and conflict while facilitating referrals for Severe IPV.
Introduction
Romantic relationship health, defined as strong connections between partners as a couple (e.g., satisfaction, honesty, open communication) and mutual support of each partner as an individual (e.g., emotional support around stressors, an equal and respectful approach to conflict management) [1][2][3], can be an asset to treatment adherence, physical health, and quality of life [1,2]. However, relationship conflict and intimate partner violence (IPV), defined as physical violence, sexual violence, stalking, and psychological aggression between romantic partners [4], can have significant negative impacts on physical and mental health [5,6]. This suggests that supporting relationship health can be a valuable component of comprehensive healthcare, a position affirmed by recommendations to screen for IPV and refer to support services made by both the United States Preventive Services Task Force (USPSTF [7]) and the Centers for Disease Control (CDC [8]). The Department of Veterans Affairs (VA), the largest integrated healthcare system in the United States, has demonstrated that such screening efforts are feasible and can be scaled up at hospital-wide or national levels [9]. However, screening is only one aspect of preventive healthcare, and commentators have highlighted that the potential for primary care to play an active role in preventing IPV at a population level remains largely untapped [10].
Additional opportunities to expand IPV prevention beyond screening are supported by the rapid growth of integrated primary care settings over the last decade across federal hospital systems (e.g., VA, Department of Defense) and large community clinics [11]. Integrated primary care describes a range of models in which a primary care team collaborates with trained behavioral health providers. These embedded behavioral health providers assist the primary care teams with further assessment and the provision of brief interventions when appropriate, and can act as a bridge to specialty mental health services if needed [11]. Guided by the increasing number of integrated primary care settings and the preventive health framework, the current study explores the unique characteristics of primary care patients and preferences for IPV prevention efforts for these patients at different levels of risk.
Applying a Levels of Prevention Framework to IPV in Primary Care
In this study, we apply to IPV a levels of prevention framework [12], which focuses on differentiating approaches across a range of risk, from Primary Prevention (health promotion to prevent the emergence of a problem in individuals who are not currently experiencing it), to Secondary Prevention (screening individuals who experience a problem at an early stage to address/prevent harm), to Tertiary Prevention (treating individuals already impacted by a problem to stop further harm). This framework can guide intervention development as patients typically have differing needs at each level. Organizing interventions by these levels facilitates efficient stepped care whereby patients can be served according to their needs, conserving healthcare resources.
Complete absence of IPV behaviors on a primary care screen immediately identifies patients at low risk (i.e., no IPV reported) and appropriate for Primary Prevention approaches. The CDC offers a toolkit of IPV Primary Prevention strategies for communities. Although some focus on institutional resources that do not apply to medical settings (e.g., financial and legal support), two that may be appropriate for integrated primary care are "Teach Safe and Healthy Relationship Skills" and "Disrupt the Developmental Pathways towards Partner Violence" [8]. As both approaches focus on psychoeducation, they can feasibly be integrated into existing patient education by existing primary care providers or offered by embedded behavioral health providers in those teams. Secondary Prevention is defined by efforts delivered following a positive IPV screen but prior to harm. Unfortunately, the CDC toolkit's single strategy addressing screening, "Support Survivors to Increase Safety and Lessen Harm," assumes referral to intensive services is the only option for a positive screen (blending Secondary and Tertiary Prevention). The CDC has described efforts to differentiate, within psychological aggression, between common expressions of anger or verbal outbursts and "psychological abuse" patterns that cross a threshold of harm [4]. Although CDC efforts are ongoing, one practical way to create a Secondary Prevention category is by using the items in many IPV screens that distinguish "verbal conflict" behaviors such as screaming and insults from other more severe forms of IPV [7]. This distinction would align with the epidemiological IPV literature, where yelling and insults are classified as "minor psychological aggression" and noted to occur in >60% of individuals in population samples, while all other types of IPV-including coercive and controlling behaviors identified as "severe psychological aggression"-are reported by <20% of individuals [13]. Verbal conflict is an appropriate category for differentiating treatment as it can be addressed by a wider range of services. Under the name "conflict management" or "communication skills training," verbal conflict behaviors are frequently addressed by relationship skills education programs [14,15] and many behavioral couple therapies [16]. The skills are often concrete cognitive behavioral strategies that can easily be incorporated into a behavioral health provider's skillset.
All other "severe IPV" behaviors, such as psychological control/coercion, stalking, physical violence, and sexual violence, can be classified as requiring Tertiary Prevention as they are less common and have greater potential for physical and psychological harm. Treating severe IPV will likely require referral to specialty care as IPV can have different patterns at a couple level, each requiring separate specialized skillsets. One way to distinguish patterns is by "directionality," as individuals might solely use IPV on their partner, solely experience IPV from their partner, or may have complex "bidirectional IPV" relationships where both partners use IPV. Another way to synthesize this information when making referrals is by attending to power and control within the relationship [17]. In patterns where one partner uses IPV as part of a pattern of control, guidelines suggest partners should be treated separately. Many community agencies offer efficacious interventions both for those who experience IPV [18] and those who use IPV [19]. A larger portion of couples are in "situational couple violence" relationships, primarily bidirectional patterns where violence reflects a lack of control of emotions in both partners. In these cases, couples can be treated together, but further evaluations for severity may be needed to differentiate between traditional couple therapy for those with low severity IPV [20] and specialized conjoint IPV treatments for couples with higher severity situational IPV [21,22]. Couple therapy clinics, court-mandated programs, and shelters routinely assess for directionality, control patterns, and injury potential at intake, but by considering these elements at the moment of identification, primary care can facilitate more accurate referrals.
Optimizing Prevention Efforts in Integrated Primary Care Settings
Although a handful of secondary prevention IPV programs have been developed for primary care [23], the above research highlights that integrated primary care settings are ideally suited to provide a larger continuum of preventative approaches to IPV. Potential roles range from psychoeducation and discussion of safety by all providers, to simple skills training by an embedded behavioral health provider, to differentiating referrals for high-risk clients. To assist in optimizing patient engagement and satisfaction across this variety of intervention options, it is important to consider patient preferences [24,25] for attributes (e.g., number of appointments) and foci (i.e., relationship concerns addressed). Furthermore, attending to preferences increases initial utilization of a service [26] and lowers dropout rates after engagement [27,28]. Although past work has identified relationship concerns in low-income families [29] and patients seen in intensive couple therapy clinics [30], there has been no prior work on preferred relationship concerns to address in primary care and no prior work on preferred attributes for relationship treatments in any setting.
The Current Study
Due to the lack of existing research, the present study aims to guide further development of IPV interventions suitable for all patients along the risk continuum served in healthcare systems by (1) characterizing groups across the IPV risk continuum with respect to demographics, psychological disorders, and relationship health and (2) examining preferences for relationship support across the IPV risk continuum. The study focuses on a sample of men and women receiving primary care services in VA, a system that already embraces an integrated primary care setting and excels in the screening and referral model of IPV treatment but may benefit from expanding the role of primary care.
Materials and Methods
Study measures were included in a larger cross-sectional mail survey assessing relationship functioning and IPV in VA primary care. All study procedures were approved by the Syracuse VA Institutional Review Board (IRB#1420784).
Study Design and Recruitment
Veterans were recruited from three Veterans Affairs Medical Centers and their associated community-based outpatient clinics in Central and Western New York in August 2019 (20 clinics total). We used the electronic medical record (EMR) to identify Veterans meeting the following inclusion criteria: (1) age 18-85; (2) utilization of primary care services in calendar year 2018; and (3) demographic information suggesting being in a relationship. We excluded Veterans who (1) did not have a complete mailing address in the EMR or (2) had a diagnosis of major neurocognitive disorder, delusional disorder, or severe/profound intellectual disability, to improve the likelihood of accurate survey completion. We then randomly sampled a group of 1,500 Veterans with a goal of achieving a target sample of 300 respondents to be sufficiently powered for latent variable models of IPV typologies in the larger study. In order to capture a diversity of IPV behaviors in a small sample, we oversampled Veterans below the age of 55 (4:1, or 1,200 Veterans below 55) as IPV prevalence declines after age 55 [31]. Similarly, we oversampled female Veterans (1:1 ratio; or 750 female Veterans) as IPV typologies often differ by gender [17].
We mailed each veteran a recruitment letter, information sheet explaining the study, and survey measures in August 2019. Interested participants could mail the completed survey back using pre-stamped, self-addressed envelopes in return for a $20 incentive. Of 317 participants who returned surveys (21% response rate), three did not provide sufficient demographic data and 15 were not in a current romantic relationship, leaving a final sample of 299 participants.
Intimate Partner Violence
The Conflict Tactics Scale Short Form (CTS2S [13]) is a brief version of the Revised Conflict Tactics Scale (CTS-2 [32]), often regarded as the gold standard of IPV assessment. The CTS2S assesses mild and severe behaviors for each of the five dimensions of the CTS-2, including Negotiation, Psychological Aggression, Physical Aggression, Sexual Aggression, and Injury. For each item, participants report both whether they have used that behavior on their partner and whether they have experienced that behavior from their partner over the previous year. Individuals are classified as experiencing or using severe IPV if they reported any past year Physical Aggression (i.e., hitting or attacking), Sexual Aggression (i.e., sexual coercion or rape), Injury (i.e., physical harm as the result of a conflict), or Severe Psychological Aggression (i.e., threats or property destruction). This reflects the CTS2 cutoffs used to standardize IPV screening measures in VA [33,34]. Following the recommendations of the scale authors, frequencies from the mild Psychological Aggression item were aggregated to their midpoints to obtain an approximate count of verbally hostile behavior (i.e., screaming at or insulting a partner) over the previous year [13].
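As a rough illustration of this midpoint scoring step, the sketch below recodes CTS2S frequency categories into approximate yearly counts. The response anchors and midpoint values shown are the midpoints commonly cited for the CTS2 family of measures and are assumptions here, not values transcribed from the survey packet.

```python
# Hedged sketch: convert a CTS2S frequency category to an approximate yearly count.
MIDPOINTS = {
    0: 0,   # never happened in the past year
    1: 1,   # once
    2: 2,   # twice
    3: 4,   # 3-5 times  -> midpoint 4
    4: 8,   # 6-10 times -> midpoint 8
    5: 15,  # 11-20 times -> midpoint 15
    6: 25,  # more than 20 times -> conventional value 25
}

def yearly_count(raw_category: int) -> int:
    """Approximate count of a behavior (e.g., screaming at a partner) over the past year."""
    return MIDPOINTS.get(raw_category, 0)

# Example: "6-10 times" used and "3-5 times" experienced -> counts of 8 and 4.
print(yearly_count(4), yearly_count(3))
```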
Mental Disorders
Participants also completed a range of well-validated measures to assess clinical conditions known to be associated with IPV. The Patient Health Questionnaire-9 (PHQ-9 [35]) demonstrated high internal consistency in the sample (α = 0.90) and was used to assess depression (cutoff of ≥10) and recent thoughts of suicide or self-harm (using any positive scores on the ninth item [36]). Probable posttraumatic stress disorder (PTSD) was assessed using the PTSD Checklist (PCL5 [37]; α = 0.97) using a cutoff suggested for general health settings (scores ≥ 31). Potential alcohol misuse was identified using scores ≥ 8 on the Alcohol Use Disorder Identification Test (AUDIT [38]; α = 0.85).
Relationship Health Screens
Relationship functioning was assessed using the four-item screening version of the Couples Satisfaction Index (CSI-4 [39]). The scale demonstrated excellent internal consistency in the sample (α = 0.96) and we used the optimal cut-off score of <13.5 to identify distressed participants. The Extended Hurt-Insult-Threaten-Scream (E-HITS [33]) scale is a five-item measure of IPV experience that was developed and validated for screening in primary care settings. We used the suggested cutoff score of ≥7 to identify Veterans who would be classified as experiencing a level of IPV requiring further care in a primary care setting. The 5-item Danger Assessment (DA-5 [40]) captures the larger context for IPV (e.g., escalating violence; partner owns a weapon) to predict the likelihood of future injurious or lethal assaults. The Women's Experience of Battering Scale (WEB [41]) explores the psychological experience of power and fear.
Preferences for Treatment Attributes and Foci
Participants' preferences were assessed through a series of forced-choice items asking participants, "If you had to pick only one [Type/Format/Location/Length] for help with relationship concerns, which would you prefer?" Following guidance from the preference literature [24], we also provided non-technical descriptions of Type and Format terms (see Table 1). We allowed participants to indicate "No Preference" to reduce the likelihood of random responding in the absence of a strong opinion. Preference for treatment focus was assessed through a series of 14 items asking participants, "Rate how likely you would be to attend help offered within the VA for each Relationship Concern listed below." Each concern was rated on a 1 (Very Unlikely) to 5 (Very Likely) Likert scale. Responses were dichotomized so that scores of 4 (Likely) and 5 (Very Likely) were coded as "Likely to attend." Each concern area included a non-technical description of how treatment would address that concern (see Table 1).
Table 1 (excerpt of non-technical descriptions):
- Face-To-Face: I meet in person (face-to-face) with a health professional. *
- Telephone: I speak with a health professional on the telephone. *
- Video Chat: I speak with a health professional using a secure online system.
- Internet: I complete online treatment modules/courses through a website from my own home.
- Mobile App: I use tools within a mobile app from my own home.
- Self-Help Materials: I read paper handouts or treatment manuals on my own.
How likely would you be to attend help focused on each concern?
- Improving Our Relationship: Making a relationship better and/or stronger.
- Anger Management: Learning how to better control anger when we argue.
Note. Items marked with an asterisk (*) were merged in analysis due to low endorsement of each individual item (i.e., at least one item < 10) and similarity between their descriptions.
Analytic Strategy
Defining Groups
Data were analyzed using SPSS 22. To represent a continuum of risk for different levels of prevention, we stratified our sample into three groups. The "Severe IPV" group (i.e., those targeted for Tertiary Prevention) included 63 respondents (21% of the sample) who reported using or experiencing physical IPV, sexual IPV, severe psychological IPV, or injury in the previous year on the CTS2S. A second "Verbal Only" group (i.e., those appropriate for Secondary Prevention) included the 152 respondents (51% of the full sample) who reported using or experiencing mild psychological aggression (e.g., yelling or screaming) but denied all other IPV behaviors. The remaining 84 respondents (28% of the full sample), who denied using or experiencing any form of IPV over the previous year, were classified as "No IPV" (i.e., appropriate for Primary Prevention activities).
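A minimal sketch of this three-group stratification rule is shown below; the field names and the per-respondent summary object are hypothetical stand-ins for the CTS2S variables described in the Methods.

```python
# Hedged sketch: assign each respondent to a prevention-level group from past-year
# CTS2S endorsements (use OR experience, in either direction).
from dataclasses import dataclass

@dataclass
class Cts2sSummary:
    physical: bool       # any physical aggression
    sexual: bool         # any sexual coercion/aggression
    injury: bool         # any injury resulting from a conflict
    severe_psych: bool   # threats or property destruction
    mild_psych: bool     # yelling, screaming, or insults

def risk_group(r: Cts2sSummary) -> str:
    if r.physical or r.sexual or r.injury or r.severe_psych:
        return "Severe IPV"    # Tertiary Prevention
    if r.mild_psych:
        return "Verbal Only"   # Secondary Prevention
    return "No IPV"            # Primary Prevention

print(risk_group(Cts2sSummary(False, False, False, False, True)))  # "Verbal Only"
```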
Comparisons between Groups
Although group membership could be conceptualized as part of an ordinal sequence, ordinal tests yielded too many significant results that were not clinically meaningful (i.e., small monotonic trends reflecting higher endorsements of all items among high-risk groups). Therefore, we used a more conservative approach and evaluated overall group differences using initial χ2 tests for independence (for categorical outcomes) or analysis of variance (for continuous outcomes). Significant results were followed by post hoc tests for each pairing of groups (χ2 for categorical outcomes; Tukey's Honest Significant Difference for continuous outcomes) to identify homogenous subsets. Significance was set at p < 0.05 for all tests. For characteristics that were significantly higher in the IPV group than the other two groups, we then conducted sub-analyses comparing individuals who reported solely using IPV, solely experiencing IPV, and both using/experiencing IPV. Although these analyses are underpowered for significance testing, we report differences when the highest subgroup has >20% higher endorsement than the lowest subgroup.
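The comparison workflow just described could be sketched as follows; the analyses in the study were run in SPSS 22, so this Python version is purely illustrative, and the column names ("group", outcome variables) are assumptions.

```python
# Hedged sketch: overall tests by outcome type, with post hoc tests when significant.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def compare_categorical(df: pd.DataFrame, outcome: str, group: str = "group"):
    """Chi-square test of independence for a categorical outcome across the three groups."""
    table = pd.crosstab(df[group], df[outcome])
    chi2, p, dof, _ = stats.chi2_contingency(table)
    return chi2, p, dof

def compare_continuous(df: pd.DataFrame, outcome: str, group: str = "group"):
    """One-way ANOVA across groups, followed by Tukey's HSD when p < 0.05."""
    samples = [g[outcome].dropna() for _, g in df.groupby(group)]
    f_stat, p = stats.f_oneway(*samples)
    posthoc = None
    if p < 0.05:
        clean = df.dropna(subset=[outcome, group])
        posthoc = pairwise_tukeyhsd(clean[outcome], clean[group])
    return f_stat, p, posthoc
```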
Sample Characteristics
See the first column of Table 2 for characteristics of the sample. Reflecting our sampling strategy, our respondents were younger and had a larger percentage of women than typical veteran samples but were otherwise representative of Veteran demographics in the recruitment region with respect to race, ethnicity, and marital status. Among the 63 participants reporting past-year severe IPV behaviors in their relationship (i.e., coercive psychological IPV, physical or sexual violence, and/or injury), 35% exclusively experienced severe IPV, 22% exclusively used severe IPV, and 43% reported bidirectional severe IPV. In contrast, a vast majority of the 152 Veterans reporting exclusively Verbal Conflict in their relationship reported that it was bidirectional (88%), with smaller numbers reporting that they exclusively screamed or yelled at their partners (5%) or that their partners exclusively screamed and yelled at them (7%) over the previous year.
Demographic and Relationship Differences by Group
As seen in the remaining columns of Table 2, respondents denying IPV tended to be older than those reporting Verbal Conflict or Severe IPV in their relationship. Participants in Severe IPV Relationships also more frequently screened positive for Depression and alcohol misuse. Groups were otherwise similar across all demographic and clinical categories. Participants in severe IPV relationships reported a higher frequency of using verbal conflict behaviors than the Verbal only group but did not report significantly higher experience of IPV by their partners, highlighting the difficulty of defining a clear cutoff for Verbal Conflict. Since the E-HITS Experience Screen is sensitive to verbal conflict, endorsement increased across all three groups. In contrast, the WEB measure of control and the DA-5 measure of injury risk factor more clearly identified the highest risk group. Given the heterogeneity between different dyadic patterns in the Severe IPV group, we conducted follow-up descriptive analyses of the relationship health screens in those groups. Although underpowered to detect significant differences, this provided some insights into group divisions. Specifically, the E-HITS detected 43/49 (88%) of the individuals who experience severe IPV in either one-way or bidirectional IPV relationships. However, the E-HITS also flagged 10/14 (71%) individuals who only used severe IPV behaviors, suggesting this screen's attention to insulting/screaming might also detect prominent verbal conflict behaviors that their partners used out of fear. In contrast, individuals who solely used severe IPV were rarely flagged by the WEB measure of Fear (2/14 (14%)) and did not report any risk factors for injury by their partner (0/14), suggesting greater specificity. Depression was largely similar across IPV subgroups. However, a full 43% of those who solely used IPV met AUDIT thresholds for alcohol use disorder while only 22% of those reporting bidirectional IPV and 9% of those experiencing IPV crossed alcohol threshold lines.
Treatment Preferences
Attribute preferences across the sample and by subgroup can be found in Table 3. Notably, all three groups were largely similar in preferring face-to-face treatment in a couples format ranging from 2-6 visits in length. However, the preference for couples treatment was much stronger in the two IPV groups than in the No IPV group. Furthermore, significant differences emerged in preferred setting, with participants in the Severe IPV group demonstrating a stronger preference for care Outside VA than individuals in the No IPV group, while the other groups preferred to stay within VA primary care. Sub-analyses within the Severe IPV group suggested that the preference for non-VA care was strongest among those solely experiencing IPV (50%) while it was weaker among those who solely use IPV (36%) or who are in bidirectional IPV relationships (22%).
Table 3 notes. See Table 1 for non-technical descriptions provided to respondents for Preferred Type and Format. Preferred Location and Duration items and responses presented verbatim. Response(s) with highest endorsement bolded for ease of interpretation. Responses with low endorsement (n < 10) were merged with similar categories as denoted by an "OR." Percentages may not total to 100% due to missing/blank responses. Significant tests of group differences (bolded) were followed by chi-squared comparisons for each pair of risk groups. Different column headers (e.g., A vs. B) represent statistically significant differences in response patterns. No IPV = Denied last year use or experience of psychological IPV, physical IPV, sexual IPV, or injury; Verbal Only = Participants reported last year use or experience of mild psychological IPV (yelling/insults) but denied other forms of IPV. Severe IPV = Participants reported last year use or experience of severe psychological IPV, physical IPV, sexual IPV, or injury as a result of a conflict.
Likelihood of attending treatment for different concerns can be found in Table 4. Treatment addressing physical health (current and future), broad themes of relationship improvement, and communication had high rates of endorsement across the sample that were similar across groups. The specific topic of "Relationship Evaluation" had a higher rate of endorsement among individuals reporting Severe IPV in their relationship than No IPV, with individuals in Verbal Conflict only relationships having interest somewhere in the middle. Topics that separated the Severe IPV group from the Verbal Conflict only group included "Improving the Home for my Children," "Improving Safety in the Relationship," and "Is it time to end?" To understand this effect further, we calculated descriptive statistics for these three topics and found only one (7%) of the 14 participants who solely used IPV had an interest in "Improving Safety in the Relationship," whereas this focus was more attractive to individuals in bidirectional IPV relationships (37%) or those who solely experienced IPV (32%). Interest in "Improving the Home for my Children" and "Is it time to end" was more similar across these groups (rates of participants likely to attend within 15% of one another across all three groups).
Table 4 notes. Each topic area rated on a Likert scale rating likeliness of attending a service from 1 (Very Unlikely) to 5 (Very Likely). Each row represents the count of participants who selected they were Likely or Very Likely to attend a service focused on that topic. Concerns with >50% endorsement (i.e., a majority of participants reporting they are likely to attend) bolded for ease of interpretation. See Table 1 for full descriptions provided with each concern.
To detect meaningful trends, we followed significant chi-square tests of group differences (bolded) by chi-squared comparisons of each group. Different superscripted letters (e.g., A vs. B) represent significant differences in proportion likely to attend (p < 0.05). No IPV= Denied last year use or experience psychological IPV, physical IPV, sexual IPV, or injury. Verbal Only = Participants reported last year use or experience of mild psychological IPV (yelling/insults) but denied other forms of IPV. Severe IPV = Participants reported last year use or experience severe psychological IPV, physical IPV, sexual IPV, or injury.
Discussion
Understanding the complexity of IPV presentations and patient preferences can help to enhance primary care's contributions to IPV prevention at a population level. The present study offers considerations for healthcare systems differentiating treatment across multiple levels of risk. First, in taking the novel approach of sorting respondents into a continuum, the study provides novel insights into differences between groups at each stage. Furthermore, the present study is the first to examine preferences for relationship support in primary care across an IPV risk continuum. The results can guide integrated primary care teams in selecting between existing services to respond to patient preferences and can inform intervention development to improve prevention services across the risk continuum.
Patient Characteristics and Supporting Relationships at a Population Level
The largest portion of our primary care Veteran sample reported they experienced Verbal Conflict behaviors in a bidirectional pattern without more severe IPV behaviors. This group was similar to the higher risk group in its young age and desire for relationship services but did not differ from the No IPV group with respect to mental health concerns (i.e., depression, PTSD, alcohol use disorders) or relationship harms (i.e., dissatisfaction and fear of partner). This is consistent with the high prevalence of Mild Psychological Aggression in community studies [13] and supports addressing verbal conflict behaviors as a "Secondary Prevention" category of IPV (i.e., disorder present, but prior to harms). Furthermore, approximately 1/3rd of individuals who denied all forms of IPV in their relationship still reported relationship dissatisfaction, fear of their partner, or risk factors for escalation to injury, suggesting that relationship problems are still experienced in the "low-risk" group and highlight the potential importance of Primary Prevention. These findings are consistent with previous studies of VA primary care that suggest a majority of partnered Veterans report some form of troubled relationship [42].
One considerable challenge highlighted by our findings is the difficulty of classifying risk by focusing on the experience of IPV alone, as is recommended by both the USPSTF guidelines (i.e., screen women of childbearing age) and CDC guidelines (i.e., identify and "Support Survivors"). Consistent with its Secondary Prevention function, the E-HITS experience screen detects Verbal Conflict and more severe IPV behaviors but did not sufficiently distinguish between groups and even detected those who were the sole partners using severe IPV. At the same time, extensive measures such as the full CTS2 (78 items) or even the briefer CTS2S (20 items) are infeasible for routine use in primary care. The simplest step that can be taken is modifying reporting to distinguish between individuals with positive screens based on screaming/insults alone vs. all other IPV behaviors-as is currently done in the scoring system for the Screener for Clinically Significant IPV [43]. Alternatively, our data suggest that measures focused on power/control and injury risk also increase specificity to differentiate those who experience Severe IPV from Verbal Conflict only. This supports the potential value of a multi-stage screening process as is currently used in VA [9], as "second stage screenings" will not burden patients unnecessarily but will both differentiate between groups and provide immediate guidance on whether referral to conjoint treatments is contraindicated. The most resource-intensive option would be to routinely screen for IPV use to detect participants who exclusively use IPV and to distinguish between one-way patterns. While routinely administering a second screen will increase time for both patients and providers, the last decade has seen a growing number of brief IPV use screens that may meet the needs of clinical settings [44].
Treatment Preferences and Directions for Intervention Development
Individuals who denied IPV showed a distinct preference for treatment in primary care, suggesting it may be an ideal site for Primary Prevention activities. Furthermore, respondents across the continuum expressed an interest in face-to-face, 2-6 session treatments addressing physical health in the relationship, highlighting that a large proportion of patients would be willing to utilize relationship supports addressing these themes. Regarding potential Primary Prevention interventions, reviews highlight a wide range of couple-based programs that have been developed to manage chronic health conditions (e.g., dementia, heart disease; [45]) and promote health behavior changes (e.g., diet, exercise [46]). By teaching partners to collaborate around health problems, these programs tend to show comparable reductions to disability as individual education along with secondary mental health and relationship benefits (e.g., reducing depression or relationship distress). Although they would be consistent with an integrated behavioral health provider role, many of these programs are designed to be delivered by a range of professionals in integrated healthcare teams (e.g., dieticians, health behavior coaches), allowing relationship support to be integrated into routine healthcare activities without burdening any particular provider.
Individuals who solely report Verbal Conflict have the widest range of available interventions, including skills education [14] and a handful of interventions developed for primary care [23]. The present results offer some guidance for selecting between these options, as these individuals share the strong preference for conjoint treatments and relationship evaluation seen in higher risk groups while reporting a preference to receive services in primary care comparable to lower risk groups. One program that balances these considerations is the Relationship Checkup [47], a 2-3 session couple-based assessment-feedback intervention that helps couples evaluate their relationship strengths and concerns and identify concrete steps to work on their challenges (including further referral, if needed). Although originally developed for trained couple therapists, the program has recently been abbreviated to a 30-min session version designed to be delivered by co-located behavioral health providers in primary care clinics [48] and a "before baby" checkup for obstetrics clinics [49]. Notably, efficacy data suggest that an "annual checkup" model of periodic two-session Checkups can lead to improvements comparable to more intensive programs [50].
At the Tertiary Prevention level, participants implicitly recognize the need for specialty treatment, expressing an increased preference for care in behavioral health or non-VA clinics. At the same time, this group reported stronger preferences for conjoint treatments. Therefore, it will be important to evaluate fear/control and injury potential at the point of referral (e.g., using the two-stage strategy discussed above) to divert high-risk couples to individual services while allowing lower-risk couples to benefit from appropriate conjoint IPV services [20][21][22]. Given the elevated association between alcohol dependence and severe IPV use, it may be important to reassess drinking and direct patients to "integrated IPV programs" that show efficacy at reducing both IPV use and substance misuse [51]. While diversion to these services increases safety, no single safety concern was desired by a majority of individuals in this high-risk group. One approach consistent with an integrated behavioral health skillset is using motivational interviewing to address ambivalence (e.g., disappointment at ineligibility for conjoint treatment; disinterest in addressing alcohol) and to link the options discussed to the strongest motivation for that individual (e.g., linking integrated treatment to health benefits that are important to a patient). A review of motivational interviewing for IPV suggests that only 1-2 sessions are sufficient to increase referral engagement and reduce subsequent dropout [52].
Limitations
Conclusions drawn from these findings are constrained by the following limitations. Foremost, even though our purposive sampling approach created a sample that is demographically closer to the general population, our sample is still exclusively composed of Veterans, and future replication in civilian samples may be needed to generalize to non-VA primary care settings. Similarly, although our mail-in survey had comparable response rates to other mailed surveys of Veterans [53], it is possible that response biases reflect participants who are more invested in relationship support than the general population. While this may lend credence to findings about undesirable options (e.g., it is quite likely that the general population would be even less likely to attend 13+ session treatments focused on legal risk than the current sample), it might overestimate actual utilization rates of desirable options. A third limitation is that our analyses did not address other potential variables that may influence treatment preferences. Prior work suggests gender impacts the preferred treatment focus [30], but it is also possible that differences between risk groups (e.g., age, depression, PTSD, low satisfaction) contributed to the varied treatment preferences observed in the current sample. Studies exploring these demographic factors as predictors of treatment preferences in their own right may lead to further tailoring of relationship resources by population (e.g., resources specific to Women's Health Clinics). A fourth limitation is that the survey did not assess psychopharmacological medication use, which prior research suggests is higher among those who have experienced IPV [54]. Future studies may be able to clarify the role of psychopharmacological interventions in addressing the sequelae of IPV by examining medication use as a covariate and potential treatment option. Another limitation is that data collection occurred prior to the Coronavirus Disease 2019 (COVID-19) pandemic. The dramatic increase in telehealth usage during the pandemic may have increased the acceptability of phone and telehealth options since our survey. A final limitation is the use of the CTS2S to classify relationship groups. Although validated against its longer counterpart, the CTS2S has lower sensitivity than the more exhaustive CTS-2 and therefore may underestimate the presence of severe IPV in this sample [13]. Future research using the full CTS-2 may provide clearer delineation along the IPV-risk continuum.
Conclusions
The present findings highlight important avenues for integrated primary care teams to expand their population-level prevention of IPV in ways that are responsive to the diversity of patient presentations and preferences. First, the results highlight the high prevalence of relationship dissatisfaction and verbal conflict among partnered patients and the unique position of verbal conflict as a potential Secondary Prevention target. Secondly, the results highlight the limits of current screening practices to differentiate these groups and suggest possible avenues for expansion. Finally, preference data can guide the selection and development of services that are attractive to patients at different levels across the continuum, including incorporating romantic partners into health-oriented programming for all patients, having behavioral health providers offer assessment interventions for verbal-conflict couples, and offering motivational interviewing to help high-risk patients connect with services appropriate for their needs.
Integration of conventional and advanced molecular tools to track footprints of heterosis in cotton
Heterosis, a multigenic complex trait extrapolated as the sum total of many phenotypic features, has been a widely utilized phenomenon in agricultural crops for about a century. It is mainly focused on establishing vigorous cultivars, with the caveat that its deployment in crops necessitates a perspective on genomic impressions during prior selection for metric traits. In spite of extensive investigations, the actual genetic basis of heterosis has yet to be unraveled. Contemporary crop breeding aims at enhanced crop production beyond former achievements, yet leading cotton improvement programs have remained handicapped in attaining significant accomplishments. In this context, a comprehensive project was designed in which a large collection of cotton accessions, including 284 lines and 5 testers along with their respective F1 hybrids derived from a Line × Tester mating design, was evaluated under 10 diverse environments. Heterosis, GCA and SCA were estimated from morphological and fiber quality traits by L × T analysis. For the exploration of elite marker alleles related to heterosis, and to provide material carrying such multiple alleles, these three dependent variables along with trait phenotype values were used in an association study aided by microsatellites under a mixed linear model based on population structure and linkage disequilibrium analysis. Forty-six highly significant microsatellites were discovered in association with the fiber and yield related traits under study. It was observed that two-thirds of the highly significant associated microsatellites related to fiber quality were distributed on the D sub-genome, including some with pleiotropic effects. The 32 newly discovered hQTLs related to fiber quality traits are one of the prominent findings of the current study. A set of 96 exclusively favorable alleles was discovered, and the C tester (A971Bt) was the major contributor of these alleles, which were primarily associated with fiber quality. Hence, to uncover the hidden facts lying within the heterosis phenomenon, discovery of additional hQTLs is required to improve fibre quality. To achieve prominent improvement in the influenced fiber quality and yield traits, we suggest the A971 Bt cotton cultivar as a fundamental element, and a parent of choice, in advanced breeding programs.
Background
Cotton is a significant agricultural crop of high economic importance, acting as a vital source of income for a large number of farmers around the world. The diversity of cotton germplasm as well as the range of agro-climatic zones for cotton in China are comparably larger than in any other country around the globe. The genus Gossypium covers an economically sustainable and diverse set of diploid as well as tetraploid cotton species grown in most regions worldwide [1]. Approximately 95% of cotton production in the world is attributed to the tetraploid species Gossypium hirsutum, mostly renowned as 'upland cotton'. Plant breeders often face difficulty in selecting suitable parents and crosses while studying the qualitative and quantitative traits responsible for yield.
A parent selection procedure based on phenotype only may prove faulty, as phenotypically superior plants may lead to poor combinations. Integrating knowledge of the genetic basis of parental yield and quality traits would aid in the identification of superior cross combinations in earlier generations. Although cotton production has flourished significantly in the recent decade, hybrid cotton yield has now reached a stagnation phase. The main reasons behind this scenario include a lack of organized efforts for developing hybrid populations and derived lines with better combining abilities for establishing subsequent new hybrids.
One of the major breakthroughs of the crop breeding era is the large-scale production of high-yielding hybrids through wide exploitation of heterosis. Maize, sunflower, pearl millet, sugarbeet, sorghum and many other crops are beneficially grown from their respective hybrids, and the areas under hybrid cultivation of rice, cotton, rapeseed and safflower are rapidly increasing. In open-pollinated crops such as maize, it is fundamental to establish heterotic populations and set the grounds for improvement of combining ability to achieve sustainable productivity [2]. After its initial introduction and description, many researchers worked out intraspecific and interspecific heterosis in cotton [3] regarding fibre quality, reproductive and vegetative growth, and photosynthate manufacturing [4]. For a long time, producers and researchers have focused on heterosis as a major tool for raising the fibre yield and quality of cotton [5]. Heterosis in cotton, accounting for certain measurements of agronomic and fiber properties, was first discovered and reported by Mell in 1894 [6], and Shull in 1908 [7] gave its modern concept. Hybrids between Upland and Egyptian cotton produce lint of superior quality. As yield increments in maize are highly correlated with hybrid breeding, a parallel scenario has been observed in cotton. However, for durable implementation of efficient procedures and the basic genetic grounds of hybridization in cotton, much exploration is still required to fill the gap that is one of the reasons for cotton lagging behind maize.
China as well as India, both are large consumers of hybrid cotton, which has become possible due to advanced studies on heterosis aspect [8]. Adoption of hybrid cotton is rapidly increasing in China due to commercial release of Bt-cotton varieties. Nowadays hybrids (F 1 ) of cotton in China are produced preferably from crossing of a non-Bt cotton line with a Bt cotton line [9]. It has been scientifically proved that such type of crossing gives significant better-parent heterosis or Mid-parent heterosis especially in fiber yield components [10].
By exploiting the ambiguous mechanism of heterosis, many scientists have utilized inbred lines with suitable partners to produce elite hybrids with increased yield in different breeding programs [11]. Therefore, plant breeders examine inbred lines by reviewing their potential to produce elite hybrids and not by their performance per se.
Hybrid performance cannot be precisely predicted from line performance per se [12], which makes direct phenotypic assessment of hybrid crosses necessary. Such hindrances are typically addressed by hybridizing inbred lines with genetically distant 'testers' and evaluating the general combining abilities (GCA) of the inbred lines. Novel tools are required for precise prediction of GCA for highly polygenic parameters based on information derived specifically from parental inbred lines [13]. Mating designs play a vital role in crop breeding as they are deliberately used to estimate the GCA and SCA of parents and F1s. The line × tester design is a simple and efficient method, used in all types of crop plants whether self- or cross-pollinated, to evaluate superior parents and favorable crosses along with their GCA and SCA [14]. Many breeding programs have utilized this method to achieve hybrid vigor for commercial use. Analyzing combining ability is essential for selecting appropriate parents and for understanding the nature and extent of gene effects governing polygenic parameters. A successful hybridization program is highly dependent on the capability of the parents involved to produce desirable recombinants [2].
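As a rough illustration of how GCA and SCA are obtained from a balanced line × tester table of cross means, the sketch below applies the standard textbook definitions; the function and the toy data are hypothetical and are not the analysis pipeline used in this study.

```python
# Hedged sketch: GCA of a parent = its marginal cross mean minus the grand mean;
# SCA of a cross = its mean minus the grand mean and both parental GCAs.
import numpy as np

def lxt_effects(cross_means: np.ndarray):
    """cross_means: lines x testers matrix of F1 cross means (balanced design assumed)."""
    grand = cross_means.mean()
    gca_lines = cross_means.mean(axis=1) - grand       # one GCA value per line
    gca_testers = cross_means.mean(axis=0) - grand     # one GCA value per tester
    sca = cross_means - gca_lines[:, None] - gca_testers[None, :] - grand
    return gca_lines, gca_testers, sca

# Toy example: 3 lines crossed with 2 testers
means = np.array([[5.1, 4.8],
                  [5.6, 5.9],
                  [4.9, 5.2]])
gca_l, gca_t, sca = lxt_effects(means)
```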
Earlier reports revealed that additive and dominance effects laid the foundation of the genetics of heterosis for cotton yield [15,16]. Previously, trait values were worked out using classical quantitative genetic methods; consequently, the dominance [17,18], over-dominance [7,19] and epistasis [20,21] hypotheses relating to heterosis came into being. With the advent of molecular markers, in combination with extensive exploitation of QTL mapping, the dominance [22], over-dominance [23] and epistasis [22] theories were greatly reinforced for analyzing trait phenotype and heterosis [24].
Plant breeders are working hard to mine the secrets lying behind the ambiguous process of heterosis, which, truly speaking, remains genetically unclear so far. Many investigations have been conducted to explore the genetic grounds of heterosis [25], and even so, investigators continue to enjoy the benefits of hybrids by exploiting it. Construction of saturated genetic linkage maps utilizing molecular markers, for the dissection of the genetic components responsible for yield-related complex traits through QTL (quantitative trait loci) analysis, may substantially help to comprehend the complex process of heterosis. Through association analyses, various yield-related aspects of cotton have been mined thoroughly for the identification of significant alleles and their carriers for breeding materials [26,27]. Variations existing in cotton genotypes identified via DNA markers have been related to significant heterosis results in order to utilize them in further hybrid breeding programs [28]. Prediction of hybrid performance with the help of molecular markers has been investigated by many researchers in intra- as well as inter-specific cotton hybrids, for the sake of discovering the relationship between hybrid performance and parental molecular marker diversity [29].
Cotton improvement programs have remained handicapped in attaining significant achievements. We stepped into this field to explore elite marker alleles related to heterosis and to provide typical material carrying such multiple alleles by integrating a Line × Tester mating design with microsatellite-based genome-wide association mapping. The specific objectives of the current project were to investigate the population structure of parents and hybrids, to discover loci in F1 hybrid individuals associated with high heterosis influencing improved fiber yield, and to identify elite alleles and the respective materials for further cotton improvement programs aimed at fiber quality and yield.
Phenotypic evaluation and population structure
Means and ranges of the 10 traits evaluated in the field trials are given in Table 1 (Table 1: Summary of F1 hybrids and parents for phenology and fibre related traits from 2 locations and 2 years). All traits showed a considerable range of variation among the hybrids as well as the parental genotypes analyzed. As shown in the correlogram, the correlations (r) between different agronomic and fibre quality traits of the investigated material revealed that plant height displayed a positive correlation with BW. BW showed highly significant positive correlations with FUI, FE, FU, MIC, FS, FL, and LP. LP displayed positive correlations with all traits. BN depicted a negative correlation with FL. FL showed positive correlations with FS and FUI but a negative correlation with MIC. FS exhibited positive correlations with FE and FUI whereas it was negatively correlated with MIC. FUI was positively correlated with FU and FE while negatively correlated with MIC. Boxplots for all traits depict significant variation among individuals of the F1s and parents (Fig. 1): the central box spans the middle half of the data from the lower to the upper quartile with the horizontal line at the median, the vertical projections mark the minimum and maximum data points unless outliers are present, and solid dots represent outliers.
Countless studies from different fields, in search of explaining most of the total phenotypic variance captured by correlated phenotypes, follow the principle of dimension reduction. In order to visualize and verify the connections and variability between the phenology of parents and their respective F1 hybrids, Principal Components Analysis (PCA) was performed, based on the correlations between agronomic and fiber traits. Ten principal components were extracted from the ten studied traits through PCA. The first three principal components revealed eigenvalues exceeding 1, while the remaining seven components showed eigenvalues below one. The second principal component accounted for 18.05% of the total variation (see below), and the first two components together accounted for a cumulative 57.76% of the total variation (Additional file 1).
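A minimal sketch of this correlation-based PCA step is given below, assuming a standardized accession-by-trait matrix; the function and variable names are illustrative and do not reproduce the study's own pipeline.

```python
# Hedged sketch: PCA of 10 standardized agronomic/fiber traits, returning the
# eigenvalues, percent variance explained, and per-trait component coefficients
# (commonly reported as loadings).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def run_trait_pca(trait_matrix: np.ndarray):
    """trait_matrix: rows = accessions (parents/F1s), columns = traits."""
    z = StandardScaler().fit_transform(trait_matrix)  # standardize -> correlation-based PCA
    pca = PCA()
    scores = pca.fit_transform(z)                     # accession scores on each PC
    eigenvalues = pca.explained_variance_             # retain PCs with eigenvalue > 1
    pct_variance = 100 * pca.explained_variance_ratio_
    loadings = pca.components_.T                      # traits x PCs coefficient matrix
    return scores, eigenvalues, pct_variance, loadings
```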
The contributions of specific traits to variability among the PCs revealed that FUI contributed the maximum positive loading (0.8921) to the first PC, followed by FS (0.7526), FL (0.6376), MIC (0.5197), LP (0.3752) and BN (0.3515). These six original variables are strongly correlated with the first principal component, which increases as the scores of these variables increase, suggesting that the six criteria vary together. FUI was found to be most strongly correlated with this principal component; indeed, it could be stated that this PC is predominantly a measure of FUI. The remaining four traits contributed minimal positive loadings.
The net variation explained by the second PC was 18.05%, and the maximum loading factors in this PC were exhibited by PH (0.8018) followed by BW (0.7773). Hence, this PC increases with increases in the PH and BW variables, with which it is highly correlated, while the remaining eight traits (FL, FU, FE, MIC, FS, LP, FUI and BN) revealed minimal loadings of 0.0548, 0.0427, 0.0394, 0.0379, 0.0287, 0.0219, 0.0006 and 0.0003, respectively.
Fig. 1 Correlogram for fiber quality traits in F1s and Parents of upland cotton. The density distribution of each variable for F1s and Parents is shown on the diagonal with distinct colors (blue: F1s, orange: Parents). On the lower side, the bivariate scatter plots are displayed, while on the upper side, the values along with the significance (*) of the correlation coefficients for the variables of F1s and Parents are presented. Boxplots illustrate the variability among individuals of parents and offspring: the central box represents the middle half of the data, extending from the lower to the upper quartile, with the horizontal line located at the median; the end points of the vertical projections specify the maximum and minimum data points unless outliers are present; solid dots represent outliers. The bottom-most rows depict the frequency distribution of each variable for F1s and Parents.
The scatter diagram of the PCA for the studied material depicted a considerable amount of variability among lines, testers and F1s. The first and second principal components (PC1 and PC2) of the parents and F1 populations were plotted, and three major distinct groups were encountered, including two main groups of F1s and one of female parents. Further detail revealed five clusters of F1 populations according to their male parents (Fig. 2). Every sub-cluster of F1s lies clearly apart, indicating their diversity from each other. Furthermore, the presence of the paternal parents alongside their respective F1 sub-clusters validates this diversity. The second main cluster of F1s represents a clear difference between hybrids originating from the C tester and the rest of the hybrids from the other testers.
The LnP(D) values continued to increase without a clear plateau; hence, the optimal K value was determined with ΔK. The ΔK statistic showed its highest peak at K = 3 for the female parents, while for all F1 hybrids ΔK was maximal at K = 2, which suggested that the investigated material of female parents and hybrids might be distributed into three and two subdivisions, respectively (Fig. 3). Figure 3, related to the population structure, shows a clear difference among the five sets of hybrids, which laid the foundation for the association analyses.
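The ΔK statistic used here is commonly computed following Evanno et al. (2005) from the ln P(D) values of replicate STRUCTURE runs; the sketch below shows that calculation in the form typically applied by tools such as Structure Harvester, and is not taken from the study's own scripts.

```python
# Hedged sketch of Delta-K: second-order rate of change of mean ln P(D) across K,
# divided by the standard deviation of ln P(D) among replicate runs at that K.
import numpy as np

def delta_k(lnpd_by_k: dict) -> dict:
    """lnpd_by_k maps each tested K to a list of ln P(D) values from replicate runs."""
    ks = sorted(lnpd_by_k)
    mean_l = {k: np.mean(lnpd_by_k[k]) for k in ks}
    sd_l = {k: np.std(lnpd_by_k[k], ddof=1) for k in ks}
    out = {}
    for k in ks[1:-1]:                                   # undefined at the smallest and largest K
        second_diff = abs(mean_l[k + 1] - 2 * mean_l[k] + mean_l[k - 1])
        out[k] = second_diff / sd_l[k]
    return out                                           # the K with the highest Delta-K is favored
```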
Association mapping based on LD was performed as described by Yu et al. in 2006 [30] using the TASSEL software package. The LD values among all marker pairs were plotted as LD plots to assess genome-wide LD patterns and estimate LD blocks. LD plots against physical map distance were generated in SigmaPlot 12.5 software, keeping r2 values with P < 0.001. The r2 values at 0 cM were assumed to be 0.0000001, following previous related reports [31]. The intra-chromosomal LD declined at a physical distance ranging between 240 and 300 kbp (r2 = 0.2), revealing the potential for association mapping (Fig. 4). The average linkage disequilibrium (LD) decay distance was 288 kbp (r2 = 0.2).
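A hedged sketch of how such a decay distance can be derived from marker-pair r2 values is shown below, assuming the logarithmic trend form described for the scatterplots; the r2 = 0.2 threshold follows the text, while the function and variable names are illustrative.

```python
# Hedged sketch: fit r2 = a + b*ln(distance) to intra-chromosomal marker pairs and
# invert the fitted curve to find the distance at which r2 falls to the threshold.
import numpy as np

def ld_decay_distance(dist_bp: np.ndarray, r2: np.ndarray, threshold: float = 0.2):
    mask = dist_bp > 0                                    # zero-distance pairs handled separately
    x = np.log(dist_bp[mask])
    slope, intercept = np.polyfit(x, r2[mask], 1)
    decay_bp = np.exp((threshold - intercept) / slope)    # distance where the curve hits the threshold
    return intercept, slope, decay_bp
```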
Marker-trait association studies
Both the Q matrix and kinship were integrated in the genetic model for association mapping following the MLM in the TASSEL software. Results were considered from all types of possible combinations (Parents and the A, B, C, D and E F1 sets) run through TASSEL, below the probability level α = 0.001 (−log10 > 3). Collectively, 2846 associations were discovered at α = 0.001 (−log10 > 3) related to the four variables, i.e., 787 associations with trait phenotype, 121 with GCA, 168 with SCA and 1770 with heterosis (Fig. 5). Of these, 831 significant associations were detected between 176 microsatellites and the 10 traits (Additional file 2). The 831 significant associations break down by trait as follows: FL showed 75 significant associations with different microsatellites; 68 microsatellites displayed associations with MIC; FS displayed 65 associations; BN showed associations with 65 microsatellites from all the subsets; FUI depicted 65 significant associations; BW depicted associations with 63 microsatellites; FE showed associations with 60 microsatellites; FU displayed 55 significant associations; 55 associations were observed between PH and microsatellites; and 54 microsatellites showed associations with LP (Additional file 3).
Fig. 2 Scatter diagram of F1s and Parents in upland cottons based on phenological data projected in the (Dim1-Dim2) plane. Different colours depict the distinct groups of lines, testers, F1s and checks. Abbreviations: Dim1, PC-1; Dim2, PC-2; A, 7886 tester; B, Zhong 1421 tester; C, A971 Bt tester; D, 4133 Bt tester; E, SGK 9708 tester.
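For reference, the Q + K mixed linear model of Yu et al. (2006) that this TASSEL analysis follows is conventionally written as below; this is the standard textbook formulation rather than an excerpt from the present study's supplementary files.

```latex
y = X\beta + S\alpha + Qv + Zu + e, \qquad
\operatorname{Var}(u) = 2K\sigma_{g}^{2}, \quad
\operatorname{Var}(e) = I\sigma_{e}^{2}
```

Here y is the vector of dependent values (trait phenotype, GCA, SCA or heterosis in this study), β are non-marker fixed effects, α is the fixed effect of the tested SSR allele with incidence matrix S, v are the fixed population-structure effects carried by the Q matrix, u are random polygenic background effects whose covariance is proportional to the kinship matrix K, and e is the residual error.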
Traits associated with microsatellites
A set of 46 highly significant microsatellites out of the 176 loci showed associations with FUI, LP, FS, FL, BW, MIC, FE, PH and FU (Fig. 6). These loci were identified on the basis of their presence in trait phenotype, GCA, HB, MP and K4 in F1 hybrids descended from at least 3 testers (Additional file 4).
Fig. 4 a, b, c, d, e, f Linkage disequilibrium distribution patterns between all possible loci pairs of the female parents and the F1 Set-A, Set-B, Set-C, Set-D and Set-E, respectively, across various chromosomes. Each pixel on the upper side of the diagonal indicates the size of D′ for the corresponding marker pair as indicated by the color code at the top right, whereas the lower side of the diagonal specifies the P value of the respective marker pair LD as indicated by the color code at the bottom right: white p > 0.05, blue 0.05 > p > 0.01, green 0.01 > p > 0.001 and red p < 0.001. g, h, i, j, k, l Scatterplots of the significant LD (r2) against physical distance (Mb) for the female parents and the F1 Set-A, Set-B, Set-C, Set-D and Set-E, respectively. The trend line (inner fitted) is a logarithmic regression curve of r2 against physical distance.
These QTLs were detected based on their appearance in F1s from at least 3 out of the five testers, each with a different dependent variable. Noticeably, every type of effect was identified with trait phenotype; dominance effects were found with SCA, HB, HI, MP, K3 and K4, while additive effects were identified with GCA. Although the dominance effects of a few QTLs were detected with GCA in the above results, their effects were close to zero. The main purpose of this experiment was to work out the comparison among the genetic components of the four dependent variables mentioned above and to verify the presence of the detected highly associated QTLs in the hybrids of the five testers, which were screened for ten agronomic and fiber quality related traits at various locations for 2 years.
It was observed that two-thirds of the highly significant (p < 0.001) associated microsatellites were located on the D sub-genome, especially those for FS, FL and FU. Pleiotropic effects of the loci NAU2631, CM45 and GH501 on the phenotypic traits FUI, FS, FL and FE were also discovered (Fig. 7).
From the five types of heterosis and the respective 10 different possible combinations used in the association analysis specifically for analyzing heterosis, a total of 1770 significant (−log10 > 3) associations were identified. In detail, 344 associations were discovered with HB, 304 with HI, 303 with MP heterosis, 409 with heterosis over check-K3 and 410 with heterosis over check-K4 (Fig. 8). The newly discovered heterosis quantitative trait loci (hQTLs), comprising 7, 1, 3, 9, 3, 1, 3, 3 and 2 loci for FUI, LP, FS, FL, BW, MIC, FE, PH and FU respectively, are one of the prominent findings of the current study.
Discovery of favorable alleles
The phenotypic effects of each significantly (−log10 > 3) identified QTL were estimated, with maximum positive and minimum negative allele effects, in all environments and all possible combinations of phenotype and genotype data used in the TASSEL association analysis for superior lines, testers and F1s (Fig. 9). According to the BLUP results obtained from the association analysis, the genotype data of the 831 significantly associated (−log10 > 3) loci were associated with the phenotype data of the 10 traits at 10 locations for two years, and 96 elite alleles were discovered from them. At the −log10 > 3 level, 96 substantial associations were discovered between microsatellites and phenotypic parameters regarding superior allele effects. The superior alleles were recognized based on the breeding objective related to each target trait. Following this procedure, the alleles of the significantly identified stable QTLs (−log10 > 3) were evaluated for their respective phenotypic effects. Most prominently, the combination of phenotype and genotype data taken from the F1s of the C tester contributed significantly to detecting superior alleles. Among the superior alleles detected from this combination, TMB1181-1 depicted the maximum positive phenotypic effect for FUI, increasing FUI by 10.22%. However, DPL513-1 displayed the minimum negative phenotypic effect, for MIC, changing it by −0.33. A range of 10.72 to −0.33 was estimated in this combination for phenotypic effects influencing BN, BW, FUI, FL, FE, LP, MIC and PH.
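One common way to compute such per-allele phenotypic effects is sketched below, under the assumption that an allele's effect is the difference between the mean BLUP trait value of its carriers and that of all other accessions; the study's exact BLUP-based computation may differ, and the names used are illustrative.

```python
# Hedged sketch: per-allele phenotypic effect at one SSR locus.
import pandas as pd

def allele_effects(genotypes: pd.Series, phenotype: pd.Series) -> pd.Series:
    """genotypes: allele call per accession at one locus; phenotype: BLUP trait value."""
    effects = {}
    for allele in genotypes.dropna().unique():
        carriers = phenotype[genotypes == allele]
        others = phenotype[genotypes != allele]
        effects[allele] = carriers.mean() - others.mean()
    # Positive values point to favorable alleles when larger trait values are desired
    # (the sign convention is reversed for traits such as MIC where lower values are better).
    return pd.Series(effects).sort_values(ascending=False)
```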
Discussion
Historically, breeders made little use of heterosis in self-pollinated crops, owing to an apparent lack of hybrid vigor and related theoretical arguments. In recent decades, however, heterosis has been exploited in rice to improve yield and related traits, with fruitful results. Inspired by this breakthrough, we integrated conventional and advanced molecular tools to clarify and validate the mechanisms involved in heterosis, which have been little exploited by earlier cotton breeders. We used F1 hybrids in an L × T mating design, rather than segregating populations from bi-parental crosses, to dissect the genetic foundation of heterosis and detected different types of QTLs via GWA mapping related to trait phenotype, GCA, SCA, HB, HI, MP, K3 and K4. Such information has scarcely been available before, as very few studies have explained the genetic basis of heterosis in cotton. The QTL mapping strategy followed in the current study was earlier proposed by Wen et al. in 2015 [32] to explain the main effects considered in a single genetic model.
The correlation coefficients for most of the traits were positive and significant, so these traits can be improved together. Traits with significant negative correlations, reflecting inverse relationships, must instead be handled in opposite directions during improvement. The scatter diagrams and density distributions showed that both the hybrids and the parents were normally distributed.
Therefore, the populations can be used for further analyses of the corresponding traits without transformation. Trait phenotype performed best as a variable for genetically dissecting the basis of quantitative parameters as well as heterosis, while the other dependent variables are helpful for estimating main effects: GCA together with trait phenotype is suggested for identifying additive effects, and SCA together with trait phenotype for distinguishing dominance effects. L × T is an efficient parental mating design for studying combining ability and heterosis; it is also used to evaluate the genetics of different traits and their variance [33] and has aided the estimation of gene effects for quantitative traits [34] in crops such as maize, rice and cotton.
Additive QTLs were detected more powerfully with GCA than with trait phenotype, as confirmed by the additive QTLs MIC_NAU749, MIC_DPL513, LP_NAU3377, FL_NAU749, FL_NAU808, FL_DPL513, FL_HAU2759, FL_GH354, FU_NAU3307 and PH_DPL715. In contrast, SCA had somewhat less power than trait phenotype, and heterosis slightly less power than SCA, in distinguishing dominance-related QTLs. The proposed method thus offers complementary options for the genetic dissection of heterosis that can be used to cross-check the outcomes.
Consequently, by comparing the phenotypic values associated with superior alleles for each target trait, we identified 22, 19, 19, 23, 7, 16, 12, 8, 22 and 18 favorable alleles for BN, BW, FE, FL, FS, FU, FUI, LP, MIC and PH, respectively. An overview of the association results showed that the female lines contributed greatly to the mining of superior alleles. We suggest using the C tester primarily for the introgression of the superior alleles transferred from the founder parents. The influential superior alleles detected in this specific combination indicate that the A971 Bt (C) tester is a Chinese cotton cultivar of great potential, which should be used in advanced breeding programs aimed at exploiting hybrid vigor.
Climatic change increasingly threatens the successful survival of crops, yet crop gene banks lack sufficient diversity to cope with such pressures because of their limited founder parents, and the upland cottons of China are no exception. In this scenario, there is an urgent need to search thoroughly for the genetic variation that has emerged and accumulated in cotton cultivar gene banks during their breeding history, so that it can be exploited to introduce additional diversity and achieve a wider genetic base.
For the improvement of complex traits, molecular techniques, primarily QTLs associated with fiber-related features, are of prime importance, but a less time-consuming and reliable tactic lies in the development and use of the F1 generation in breeding programs. Genome-wide studies are confirming the reliability of using F1 individuals by providing a scientific basis for mining, conserving and efficiently exploiting favorable QTLs of interest.
Recently, whole-genome sequencing of G. hirsutum has enabled the development of the NAUSNP80K SNP chip, which can be used effectively for cotton GWAS. Employing SNPs on a large scale to support GWAS in cotton will be our next step; combined with bioinformatics tools and transgenic analysis of quantitative factors, it should provide a sound basis for identifying the protein-coding genetic factors involved. Consequently, improvements in cotton yield may be only a combination of computer simulation and breeding programs away.
Conclusions
Forty-six highly significant microsatellites were discovered in association with FUI, LP, FS, FL, BW, MIC, FE, PH and FU. Two-thirds of these significantly associated loci were located on the D sub-genome, especially those related to FS, FL and FU. Pleiotropic effects of the NAU2631, CM45 and GH501 loci on FUI, FS, FL and FE were also detected. A set of 96 exclusively favorable alleles was discovered, primarily associated with BW, FL, FE and MIC and mainly harbored by F1s from the C tester (A971 Bt). To achieve marked improvement in these fiber quality and yield traits, we suggest the A971 Bt cotton cultivar as a fundamental element of subsequent AM population development aimed at eliminating the deleterious alleles residing at the loci of the superior alleles. The output of this study should be helpful for plant breeders and researchers working to improve the yield and quality attributes of cotton through the efficient utilization of hybrid vigor.

(Fig. 9 caption, partial.) Superior alleles with their respective phenotypic effects (ai). Combinations of phenotype and genotype data used in the TASSEL association analysis are abbreviated as: A, genotype and phenotype data of F1s from the 7886 tester; B, F1s from the Zhong 1421 tester; C, F1s from the A971 Bt tester; D, F1s from the 4133 Bt tester; E, F1s from the SGK 9708 tester; PA-PE, genotype data of the maternal lines combined with the phenotype data of the F1s from the 7886, Zhong 1421, A971 Bt, 4133 Bt and SGK 9708 testers, respectively.

Plant materials

Most of the accessions came from the major cotton-growing areas of China, including the Yellow River valley, the Yangtze River valley and the northern region. The remaining 46 (16.2%) were introduced from 11 different countries (USA, Russia, Australia, Burundi, Chad, Ivory Coast, Kenya, Sudan, Turkmenistan, Uganda, and Vietnam). These accessions were selected on the basis of their improved agronomic and fiber-related features, above all fiber quality, fiber yield, fiber maturity, boll number, boll size and both abiotic and biotic stress resistances [49].
Mating design
In this study, a Line × Tester (L × T) mating design was used. The design was first suggested by Kempthorne in 1957 [14] and involves hybridizing female lines and testers in a one-to-one fashion to produce hybrids [33]. It yields the GCA of the lines and testers and the SCA of every cross [33], and it allows estimation of the types of gene action that are significant in the expression of metric traits [34]. The trial environments differed in climate and cotton management practices, considering primarily soil fertility, precipitation amount, temperature, growing period and agronomic practices [50].
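To make the combining-ability idea concrete, the sketch below computes line and tester GCA effects and cross SCA effects from a table of cross means in the conventional way (deviation from the grand mean, and cross deviation from its expected value). It assumes a balanced set of crosses, and the file and column names are hypothetical rather than taken from the study.

```python
# Sketch of line/tester GCA and cross SCA effects from cross means (assumes balanced L x T data).
import pandas as pd

# Hypothetical table: one row per cross with its mean trait value over replications and locations.
crosses = pd.read_csv("cross_means.csv")   # columns: line, tester, trait_mean

grand = crosses["trait_mean"].mean()
gca_line = crosses.groupby("line")["trait_mean"].mean() - grand      # GCA of each female line
gca_tester = crosses.groupby("tester")["trait_mean"].mean() - grand  # GCA of each tester

def sca(row):
    # SCA = cross mean - (grand mean + line GCA + tester GCA)
    return row["trait_mean"] - (grand + gca_line[row["line"]] + gca_tester[row["tester"]])

crosses["sca"] = crosses.apply(sca, axis=1)
print(gca_line.sort_values(ascending=False).head())
print(crosses.nlargest(5, "sca")[["line", "tester", "sca"]])
```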
Field planting and traits examination
High-yielding accessions from the primary gene pool of upland cotton (G. hirsutum) were selected as male and female parents. Two hundred and eighty-four female parents were mated with 5 male parents, namely 7886 (A tester), Zhong 1421 (B tester), A971 Bt (C tester), 4133 Bt (D tester) and SGK 9708 (E tester), to produce the F1 hybrid population. Field trials of the F1 populations and parents were conducted at ten different locations for 2 years. The field experiments followed a randomized complete block design with three replications at each location. Young leaves (2-3) from randomly selected plants were sampled for DNA extraction and stored at −70°C. The CTAB method [51] was used to extract genomic DNA from the young leaves of every genotype, and DNA quality was assessed by electrophoresis on a 1% agarose gel.
The protocols for PCR cocktail preparation, amplification and electrophoresis followed Zhang and Stewart (2000) [34]. The PCR reaction mixture had a total volume of 10 μL, comprising 1.2 μL DNA (50 ng/μL), 0.2 μL Taq DNA polymerase (2 U/μL), 0.2 μL dNTP mix (10 mM), 0.65 μL (5 μM) each of the forward and reverse primers, 1 μL 10× PCR buffer (20 mM Mg2+) and 6.1 μL ddH2O. Thermal cycler conditions were as follows: initial denaturation at 95°C for 3 min; 30 cycles of denaturation at 95°C for 30 s, annealing at 57°C for 50 s and extension at 72°C for 50 s; and a final extension at 72°C for 7 min. After each PCR run, the samples were held at 4°C.
PCR products were resolved by 8% PAGE in 1× TBE buffer. The electrophoresis apparatus held vertically loaded gels on both sides, each with a 96-lane comb. A 50 bp ladder was run as a size standard for the amplified DNA products. Bands were visualized by silver staining, and a UV light board was used to read and record band sizes. The amplified band of every microsatellite locus was scored in binary form, with '0' for absence and '1' for presence of the band.
Phenotypic data analysis
Morphological data on fiber-associated attributes, especially yield and quality, were collected from the 284 lines, 5 testers and the 284 respective F1s of each cross at each location over the 2 consecutive years of the study; summary statistics were computed and the data were subjected to ANOVA for the RCBD [52].
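As a rough illustration of this step, the sketch below fits a standard RCBD model (genotype and replication block effects) for one trait at one location; the file and column names are hypothetical placeholders, not the authors' actual data layout.

```python
# Minimal RCBD ANOVA sketch for one trait at one location (assumed data layout).
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format table: one row per plot with genotype, block and trait value.
df = pd.read_csv("fiber_trait_location1.csv")  # columns: genotype, block, trait_value

# RCBD model: trait = mean + genotype effect + block (replication) effect + error.
model = ols("trait_value ~ C(genotype) + C(block)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)
```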
For classical multivariate techniques, covariance and correlation matrices (together with mean vectors) provide sufficient statistics under multivariate normal linear models. Several statistical tools are available for analyzing multivariate structure, including canonical correlation analysis, factor analysis and principal component analysis; they are used mainly to apprehend the relationships among variables while reducing the number of dimensions of the multivariate structure. In addition, several dimension-reduction visualization methods have been developed to reveal these relationships, notably canonical structure plots [53], factor pattern plots and biplots [54], while dynamic graphics based on linear combinations and projections, such as grand tours [55] and exploratory projection pursuit [56], offer further simplified views. Few techniques, however, interpret the relationships among variables directly from correlation matrices. The scatterplot matrix is an exceptional tool for visualizing these relationships when relatively few variables need to be scrutinized: it shows all the data and can be enhanced with linear regression lines, loess-smoothed curves, data ellipses and so on. With a non-parametric smooth curve in particular, the scatterplot makes it possible to judge whether relationships are linear or whether some transformation would be useful. It is usually assumed thereafter that such complications have been dealt with and that all variables are linearly correlated on some transformed scale. Direct display of the data becomes problematic, however, once the number of variables grows beyond a handful, and the approaches discussed above were developed for dimension-reduction problems of this kind.
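For a handful of traits, the scatterplot-matrix view described above can be produced directly; a minimal sketch is shown below, assuming a hypothetical table of trait means and using histograms on the diagonal. (The original analysis was carried out in R; this is only an illustration of the idea.)

```python
# Scatterplot matrix of a few traits with histograms on the diagonal (hypothetical data file).
import pandas as pd
from pandas.plotting import scatter_matrix
import matplotlib.pyplot as plt

traits = pd.read_csv("trait_means.csv", index_col="genotype")
scatter_matrix(traits[["FL", "FS", "MIC", "BW"]], diagonal="hist", figsize=(8, 8))
plt.savefig("scatter_matrix.png", dpi=150)
```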
To display the patterns of correlation among variables in a larger data set, we considered techniques that can capture this scenario. When dealing with a relatively large number of variables, an effective visual thinning (schematic visual summary) approach was used, analogous to the boxplot [57], which suppresses detail in the middle of the distribution in order to highlight the more informative statistics on univariate shape, center, spread and outliers. The eigenvalues of the first two principal components and the correlation coefficients were extracted for the genotypes (F1s and parents) and their studied traits using the R software package.
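The extraction of trait correlations and the first two principal components was done in R; as a hedged illustration of the same computation, the Python sketch below derives the correlation matrix and PC eigenvalues from a hypothetical genotype-by-trait table.

```python
# Sketch of the trait correlation matrix and first two principal components (assumed layout).
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical matrix: rows = genotypes (parents and F1s), columns = the ten traits.
traits = pd.read_csv("trait_means.csv", index_col="genotype")

corr = traits.corr()                      # pairwise Pearson correlations among traits
print(corr.round(2))

pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(traits))
print("Eigenvalues (PC1, PC2):", pca.explained_variance_)
print("Variance explained:", pca.explained_variance_ratio_)
```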
Evaluation of heterosis and combining ability
The percent increase or decrease of the F1 hybrids over the parental values was calculated using the formulas proposed by Fehr in 1987 [58] to estimate the heterotic effects of the traits measured in the current study. The GCA variance of the parents and the SCA variance of the hybrids were evaluated following the Line × Tester analysis of variance reported by Singh and Chaudhary in 1977 [59].
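For illustration, a minimal sketch of the usual heterosis percentages is given below, assuming the standard mid-parent, better-parent and check-based definitions; the exact formulas applied in the study are those of Fehr [58], so this is only an approximation of that step.

```python
# Standard heterosis percentages, assuming the usual mid-parent / better-parent / check definitions.
def mid_parent_heterosis(f1: float, parent1: float, parent2: float) -> float:
    """Percent deviation of the F1 from the mid-parent value (MP heterosis)."""
    mp = (parent1 + parent2) / 2.0
    return (f1 - mp) / mp * 100.0

def better_parent_heterosis(f1: float, parent1: float, parent2: float) -> float:
    """Percent deviation of the F1 from the better parent (heterobeltiosis, HB)."""
    bp = max(parent1, parent2)
    return (f1 - bp) / bp * 100.0

def standard_heterosis(f1: float, check: float) -> float:
    """Percent deviation of the F1 from a check cultivar (e.g. K3 or K4)."""
    return (f1 - check) / check * 100.0

# Example: fiber length of one cross versus its parents and a check cultivar (made-up numbers).
print(mid_parent_heterosis(30.2, 28.5, 29.1))
print(better_parent_heterosis(30.2, 28.5, 29.1))
print(standard_heterosis(30.2, 29.5))
```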
Genotypic data analysis
Population structure
The Bayesian model-based program STRUCTURE 2.3.4 was used to evaluate the population structure. The length of the burn-in period and the number of Markov Chain Monte Carlo (MCMC) replications following burn-in were both set to 100,000, under an admixture model with correlated allele frequencies. Ten independent runs were executed for each hypothetical number of subpopulations (K) from 1 to 11. Because LnP(D) increased continuously with K, the most likely K was estimated by combining the LnP(D) values obtained from STRUCTURE with the ad hoc ΔK statistic [60]. On the basis of this K, every genotype was assigned to the relevant subpopulation with a membership value (Q value) > 0.5 [61], and the resulting Q-matrix (population structure) was used in the subsequent marker-trait association mapping. For the STRUCTURE software, "1" denoted fragment presence, "0" fragment absence, and "-9" missing data.
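As an illustration of the K-selection step, the sketch below computes an Evanno-style ΔK statistic from replicate LnP(D) values; the input file and its layout are hypothetical, and the actual estimation in the study followed the cited procedure [60].

```python
# Sketch of the ad hoc Delta-K statistic from replicate STRUCTURE runs (Evanno-style calculation).
import numpy as np

# Hypothetical input: LnP(D) values, one row per K (K = 1..11), one column per replicate run.
lnpd = np.loadtxt("structure_lnpd.txt")          # assumed shape: (11, 10)
ks = np.arange(1, lnpd.shape[0] + 1)

sd_l = lnpd.std(axis=1, ddof=1)                  # spread of LnP(D) across runs for each K
# Per-replicate second-order rate of change |L(K+1) - 2 L(K) + L(K-1)|, averaged over runs.
l2 = np.abs(lnpd[2:, :] - 2 * lnpd[1:-1, :] + lnpd[:-2, :]).mean(axis=1)
delta_k = l2 / sd_l[1:-1]                        # defined only for the interior K values

for k, dk in zip(ks[1:-1], delta_k):
    print(f"K = {k}: Delta-K = {dk:.2f}")
print("Most likely K:", ks[1:-1][np.argmax(delta_k)])
```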
Association analysis and superior allele identification
To estimate the LD pattern in the upland cotton genome, the weighted average of the squared correlation coefficient (r²) of each pair of microsatellites was calculated with the software package TASSEL 2.1, based on rapid permutations with 1000 shuffles and with rare alleles (allele frequency less than 0.05) treated as missing data [31]. Each locus pair was classified as linked or unlinked according to whether the two loci lie on the same or on different chromosomes. LD was calculated for both linked and unlinked markers in the parental and hybrid populations defined by the STRUCTURE analysis. The 99th percentile of the r² distribution for unlinked markers, which indicates whether LD is due to physical linkage, was treated as the background LD level [62]. The r² values of each pair of microsatellites were plotted against map distance (Mbp) and LD decay was estimated; using SigmaPlot version 12.5, an inner fitted trend line (a nonlinear logarithmic regression curve) was drawn to describe the relationship between r² and the Mbp distance of microsatellites on the same chromosome.
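To illustrate the curve-fitting step, the sketch below fits a logarithmic trend of r² against physical distance for intrachromosomal marker pairs and reads off where it crosses an assumed background LD level; the input table, its column names and the 0.1 threshold are hypothetical, as the published analysis used TASSEL and SigmaPlot for this.

```python
# Sketch of the LD-decay trend: fit r^2 = a + b * ln(distance) for intrachromosomal pairs.
import numpy as np
import pandas as pd

# Hypothetical table of intrachromosomal marker pairs with physical distance (Mbp) and r^2.
pairs = pd.read_csv("ld_pairs.csv")           # columns: distance_mbp, r2
pairs = pairs[pairs["distance_mbp"] > 0]

x = np.log(pairs["distance_mbp"].to_numpy())
y = pairs["r2"].to_numpy()

b, a = np.polyfit(x, y, 1)                    # least-squares fit of r^2 on ln(distance)
print(f"fitted curve: r2 = {a:.3f} + {b:.3f} * ln(distance Mbp)")

# Distance at which the fitted curve crosses an assumed background LD level (e.g. 0.1):
background = 0.1
if b < 0:
    print("LD decay distance (Mbp):", np.exp((background - a) / b))
```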
A mixed linear model (MLM) was used to construct marker-trait association tests with the TASSEL 2.0.1 software package [31]. For the TASSEL software, "1" designates fragment presence, "0" absence, and "?" a missing value. The MLM association test considered the Q-matrix and K-matrix simultaneously, following Yu et al. (2006) [30]. The MLM significantly reduces false positive associations by accounting for both kinship and population structure in the material under investigation [30], and it provides P and r² values for each significant association. The genotypic and phenotypic data combinations used in the TASSEL analysis are detailed in Table 2.
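TASSEL's MLM fits the marker and the Q-matrix as fixed effects together with a kinship random effect. As a rough, simplified illustration of the fixed-effect part only, the sketch below scans markers with a Q-covariate linear model; the kinship term handled by TASSEL is deliberately omitted, and all file and column names are hypothetical.

```python
# Simplified marker scan: trait ~ marker + Q-matrix covariates (kinship random effect omitted).
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical files; rows of all three tables are assumed to be aligned on the same genotypes.
pheno = pd.read_csv("pheno.csv", index_col="genotype")       # one trait column, e.g. "FL"
q = pd.read_csv("q_matrix.csv", index_col="genotype")        # STRUCTURE membership columns
geno = pd.read_csv("geno.csv", index_col="genotype")         # 0/1 scores, one column per locus

results = []
for locus in geno.columns:
    X = pd.concat([geno[[locus]], q.iloc[:, :-1]], axis=1)   # drop one Q column to avoid collinearity
    X = sm.add_constant(X)
    fit = sm.OLS(pheno["FL"], X, missing="drop").fit()
    results.append((locus, fit.pvalues[locus], fit.rsquared))

for locus, p, r2 in sorted(results, key=lambda t: t[1])[:10]:
    print(f"{locus}\t-log10(P) = {-np.log10(p):.2f}\tr2 = {r2:.3f}")
```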
The significantly associated loci were further scrutinized to determine the favorable alleles for their target traits on the basis of the association results obtained above. The phenotypic effect of an allele was calculated by comparing the average phenotypic value of the genotypes carrying that allele with the average over all genotypes:

ai = (Σj xij) / ni − (Σk Nk) / nk

where ai is the phenotypic effect of the ith allele, xij the phenotypic value of the jth accession carrying the ith allele, ni the number of accessions carrying the ith allele, Nk the phenotypic value of the kth accession over all accessions, and nk the total number of accessions. If ai was larger than zero, the allele was considered to have a positive effect; otherwise, a negative effect.
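A minimal sketch of this allele-effect calculation is given below; it assumes 0/1 microsatellite scores and per-accession trait means with hypothetical file and column names, and treats a positive ai as favorable (for traits where larger values are the breeding objective).

```python
# Allele phenotypic effect: mean trait value of carriers minus the overall mean (ai).
import pandas as pd

geno = pd.read_csv("geno.csv", index_col="genotype")    # 0/1 score per allele (column) per accession
pheno = pd.read_csv("pheno.csv", index_col="genotype")  # trait means per accession, e.g. column "FL"

trait = pheno["FL"]
overall_mean = trait.mean()                              # (sum of Nk) / nk over all accessions

effects = {}
for allele in geno.columns:
    carriers = trait[geno[allele] == 1]                  # accessions carrying the ith allele
    if len(carriers) > 0:
        effects[allele] = carriers.mean() - overall_mean # ai

favorable = {a: e for a, e in effects.items() if e > 0}  # positive ai taken as favorable here
print(sorted(favorable.items(), key=lambda t: -t[1])[:10])
```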
Role and significance of c-KIT receptor tyrosine kinase in cancer: A review
c-kit is a classical proto-oncogene that encodes a receptor tyrosine kinase (RTK) that responds to stem cell factor (SCF). C-KIT signaling is a critical regulator of cell proliferation, survival, and migration and is implicated in several physiological processes, including pigmentation, hematopoiesis, and gut movement. Accumulating evidence suggests that dysregulated c-KIT function, caused by either overexpression or mutations in c-kit, promotes tumor development and progression in various human cancers. In this review, we discuss the most important structural and biological features of c-KIT, as well as insights into the activation of intracellular signaling pathways following SCF binding to this RTK. We then illustrate how different c-kit alterations are associated with specific human cancers and describe recent studies that highlight the contribution of c-KIT to cancer stemness, epithelial-mesenchymal transition and progression to metastatic disease in different experimental models. The impact of tyrosine kinase inhibitors in treating c-KIT-positive tumors and limitations due to their propensity to develop drug resistance are summarized. Finally, we appraise the potential of novel therapeutic approaches targeting c-KIT more selectively while minimizing toxicity to normal tissue.
INTRODUCTION
About two-thirds of the 90 tyrosine kinase (TK) genes described in the human genome encode for receptor tyrosine kinases (RTKs) [1].These cell-surface receptors transduce a response on binding to a ligand and are defined by an extracellular (EC) ligand-binding domain, a single transmembrane (TM) region, a juxtamembrane (JM) region, a cytoplasmic portion with a conserved protein TK domain, and a flexible carboxy (C)-terminal tail [2,3].RTKs are ubiquitously spread in multicellular animals, from the oldest metazoan phylum existing today (Porifera) to Chordata [4].In humans, the 58 RTKs described so far are classified into 20 subfamilies or classes based on the structure of their amino (N)-terminal ligand binding ectodomains, which consist of one or more defined motifs including cysteine-rich regions, fibronectin type III-like domains, immunoglobulin (Ig)-like domains, kringle-like domains, epidermal growth factor-like domains, cadherin-like domains, discoidin-like domains, and leucine-rich regions [1,5].Among the different classes of human RTKs described to date, Class III RTKs, which are characterized by the presence of five Ig-like EC domains, include platelet-derived growth factor α and β receptors (PDGFR α/β), colony-stimulating factor 1 receptor, fms-like RTK 3, and c-KIT [6,7].These RTKs play a pivotal role in several aspects of normal cell physiology, and different mutations that affect them can cause aberrant downstream signaling that is often linked to many disorders, including cancer [8,9].
STRUCTURE AND BIOLOGICAL FUNCTIONS OF c-KIT
The extracellular (EC) region of c-KIT comprises five immunoglobulin (Ig)-like domains (D1-D5). The first three domains are essential for c-KIT binding to SCF, whereas D4 and D5 are involved in dimerizing adjacent c-KIT monomers [13,14]. The EC region is followed by a single-spanning TM helix that connects with the intracellular (IC) domain, which includes a JM domain coupled to a TK domain and a C-terminal tail region. The JM domain is essential for c-KIT receptor control and modulation, particularly in the relay of IC downstream signaling [15]. The TK domain is split into the proximal amino-terminal lobe (N-lobe) TK1 with an ATP-binding region, and a distal carboxy-terminal lobe (C-lobe) TK2 with a phosphotransferase domain [16] (Figure 1).
Different c-KIT isoforms generated by alternative mRNA splicing have been described, including two that differ by the presence or absence of the tetrapeptide sequence glycine-asparagine-asparagine-lysine (GNNK) in the EC domain [17][18][19]. Although both c-KIT isoforms have binding affinity to SCF, the GNNK-negative isoform leads to faster phosphorylation of the receptor, a more robust downstream signaling, and higher tumorigenic potential in mice [20][21][22][23]. Another c-KIT isoform results from losing one of the two serine residues in the kinase insert (KI) domain [17]. In contrast, a fourth isoform is caused by a shorter transcript of c-Kit that encodes a truncated c-KIT without kinase activity and only contains TK2 and the C-terminal tail region [24]. c-KIT is expressed by various cells in the body, and signaling pathways stimulated by its activation by SCF under physiologic conditions are implicated in regulating cellular processes such as cell proliferation, survival and migration [13]. In the normal bone marrow, c-KIT is expressed by hematopoietic stem cells, playing an important role in self-renewal and differentiation into various blood cells (reviewed in [25]). Indeed, homozygous white-spotted (W) loss-of-function mutations in the c-Kit gene (c-Kit W/W) in mice have been shown to cause lethal anemia due to hematopoietic stem cell defects [26,27]. c-KIT expression is gradually lost during hematopoietic differentiation and only retained or increased in mast cells, natural killer (NK) cells and dendritic cells (DCs), suggesting an essential function in inflammation and immunity [28,29]. Moreover, different studies have shown that CD117/c-KIT is not only expressed by bone marrow-derived stem cells, but also by those found in other organs in adults, such as prostate [30], liver [31] and heart [32], suggesting that SCF/c-KIT signaling pathways may contribute to stemness in some organs. Furthermore, c-KIT has been linked to many different biological processes in other cell types. For instance, c-KIT signaling has been shown to regulate oogenesis, folliculogenesis and spermatogenesis, exerting critical functions in female and male fertility [33,34]. c-KIT is also critical to the proliferation, survival and migration of melanocytes from the neural crest to the dermis [35]. Loss-of-function mutations in c-kit can cause piebaldism, an autosomal dominant disorder of pigmentation.

Here, we will provide a simplified explanation of signaling pathways downstream of c-KIT that, rather than being just linear and independent, are now known to be much more complex and occasionally connected with other downstream signaling molecules activated by other receptors.
Binding of dimeric SCF to the D1-D3 regions bridges two adjacent c-KIT molecules together and leads to a D4 and D5 reorientation that results in c-KIT homodimerization [42,43]. This conformational change leads to trans-autophosphorylation of selected tyrosine residues (Y), events that appear to occur in a specific order. The initial autophosphorylation occurs in tyrosine residues in the JM domain (primarily Y568 and Y570), resulting in its displacement from the N-lobe and swinging away of the loops, allowing access to ATP and release of ADP from the active site [5,13,15,16]. Subsequent transphosphorylation occurs in the activation loop (Y823) [13,15,16] (Figure 2B). Full activation of c-KIT occurs when additional tyrosine residues in the KI region (Y703, 721, and 730) and the C-terminal tail (Y900 and Y936) are phosphorylated [13,15,16] (Figure 2C).

Many of the tyrosine residues mentioned above serve as substrate docking sites after transphosphorylation, activating downstream transduction pathways that lead to various cellular responses (Figure 2C). It is important to note that the transduced signaling pathways and their consequential effects are dependent on the specific tyrosine residue phosphorylated.

Signaling cascades activated downstream of c-KIT include mitogen-activated protein kinase/extracellular signal-regulated kinase (MAPK/ERK), phosphatidylinositol 3-kinase/protein kinase B (PI3K/AKT), phospholipase C-γ (PLC-γ), Janus kinase/signal transducer and activator of transcription (JAK/STAT), and Src kinase pathways [44]. On phosphorylation, Y703 and Y936 bind the SH2 domain of the adaptor protein growth factor receptor-bound protein 2 (Grb2), ending in the activation of the MAPK/ERK pathway, which plays essential roles in the regulation of gene transcription and cell proliferation [45,46]. Conversely, phosphorylation of c-KIT at Y721 triggers the PI3K/AKT pathway that promotes cell survival and evasion of apoptosis, either through direct binding of the p85 subunit of PI3K or indirectly through binding of PI3K to the scaffolding protein Grb2-associated binding protein (Gab2) and Grb2 [47][48][49].
The PI3K/AKT pathway can also be activated by phosphorylated Tyr900 in the C-lobe through binding to the adaptor protein Crk [41]. The PLC-γ pathway, which promotes cellular proliferation and suppresses apoptosis through the actions of diacylglycerol and inositol 1,4,5-trisphosphate, can be triggered when PLC-γ interacts with phosphorylated Y730 [13,50]. In addition, activation of the Src family of tyrosine kinases (SFK) has been reported to occur through the interaction of its SH2 domain with phosphorylated Y568, Y570, and Y936, stimulating cell proliferation and survival through Akt phosphorylation and, presumably, cell migration through phosphorylation of focal adhesion kinase [46,51]. Moreover, SFK and PI3K/AKT participate in the activation of the JAK/STAT pathway [46,52], suggesting that phosphorylation of some c-KIT tyrosine residues leads to activation and translocation of STAT proteins into the nucleus, where they act on target gene promoters [53,54].
Different regulatory mechanisms function as negative-feedback loops to ensure tight control of signaling output once the c-KIT receptor is activated [55].The main mechanisms of attenuation of c-KIT signaling include (1) c-KIT ubiquitination and internalization, (2) dephosphorylation, and (3) PKC-dependent serine phosphorylation.In the first of these mechanisms, activated c-KIT is transported from the cell surface to the interior of the cell through clathrin-mediated endocytosis [56].E3 ubiquitin-protein ligase c-Cbl (named after Casitas B-lineage Lymphoma) binds directly to activated c-KIT receptors through Y568 and Y936 or indirectly through Grb2 to Y703 and Y936, or via the p85 subunit of PI3K [13,57,58].Binding of another E3 ubiquitin ligase complex containing suppressor of cytokine signaling (SOCS) 1 and 6 isoforms to Y568 in activated c-KIT has also been described [59,60].The internalized c-KIT is then targeted for lysosomal and proteasomal degradation [61,62].Furthermore, attenuation of c-KIT signaling can occur by the action of phosphatases such as Src homology region 2 domain-containing phosphatase-1, which has been shown to associate with activated c-KIT causing its dephosphorylation [63,64].Finally, increased PKC activation downstream from c-KIT can lead to negative feedback regulation of the receptor by phosphorylating Ser741 and Ser746 residues in the KI domain [65,66].Moreover, PKC activation has also been shown to cause shedding of the EC domain of c-KIT, making it unresponsive to SCF stimulation [67].
c-KIT AND CANCER
Genomic profiling of nearly 19,000 de-identified samples has shown c-kit alterations in 2.86% of 59 major cancer types studied, with some of them presenting very frequent and clinically actionable mutations [68], such as gastrointestinal stromal tumor (GIST) in about 80-85% of cases [69].Although most c-kit alterations associated with cancer involve "gainof-function" mutations that lead to constitutive activation of c-KIT in an SCF-independent manner, others entail amplification/overexpression or "loss-of-function" mutations [70].
In GIST, c-KIT expression is detected immunohistochemically in more than 95% of the cases.It has become an important diagnostic marker when used together with morphologic features displayed by these tumors [77,78].In addition, 85%-90% of adult GISTs bear c-kit and PDGFRA gene gain-of-function mutations that are mutually exclusive [69] and seem unrelated to c-KIT expression, as they can be found in a proportion of GISTs that are immunohistochemically negative for c-KIT [77].Mutations in c-kit, which in GISTs are more common than those in PDGFRA, most frequently involve the exon 11 that codes for the JM region, disrupting its autoinhibitory function and leading to constitutive activation of c-KIT [71,79].Most of the mutations in exon 11 are caused by deletions and clusters between codons 550 and 560, which represents one of the hot spots within the c-kit gene [13,69].Less common mutations occur in exon 9 that encodes the EC region of c-KIT, mainly involving an internal tandem duplication of Ala502-Tyr503 [71,80,81] that would mimic the conformational change that occurs when c-KIT dimerizes after binding its cognate ligand SCF [82].Mutations also occur in exons 13 (encoding the ATP-binding region of c-KIT) and 17 (encoding the activation loop of the kinase), but are rare, with a combined frequency of 1-2% among all GISTs [81,83,84].
Systemic mastocytosis is a rare myeloproliferative neoplasm in which malignant mast cells infiltrate bone marrow and other extracutaneous tissues such as liver, spleen, and peripheral blood.More than 90% of adult patients with this disease present a gain-of-function mutation in exon 17 within the c-kit gene, particularly KIT D816V, a missense mutation in which aspartic acid is substituted by valine in codon 816 [13,91].Through kinase assays, it has been shown that the D816V mutant can autoactivate 586-fold faster than native c-KIT [79], which explains the adverse prognostic impact of the c-kit mutation in this hot spot in patients with systemic mastocytosis [91].The D816V mutation is also present in children with systemic mastocytosis but with a lower frequency (42%).In contrast, other mutations occur in other locations, often in exons 8 and 9 (44%) that encode the fifth EC Ig-like domain [74], thus promoting a conformational change that enables dimerization and activation of c-KIT by lower physiologic SCF levels than normally needed [16].
Although an overall somatic mutation rate of about 8% has been reported in both seminoma and non-seminoma testicular germ cell tumors, the incidence of c-kit mutation is ten-fold higher (20-25%) in the former than in the latter [95,96].The most common c-kit alteration in seminomas involves activating mutations in exon 17, mainly D816X (where X is either valine [V] or histidine [H]) [13,97].Similar c-kit mutations have been reported in dysgerminomas (the ovarian counterpart of seminomas) [98,99], along with c-kit amplification associated with c-KIT protein overexpression evident through immunohistochemistry (IHC) (Figure 3C and D) [99].Moreover, high expression of c-KIT in patients with primary ovarian highgrade serous carcinoma has been shown to be associated with shorter disease-free survival and peritoneal metastasis [100].No correlation was found between c-kit mutations and c-KIT protein expression [99], which is detected in 78%-100% of ovarian dysgerminomas [101].
In addition to gain-of-function mutations described above for some cancers, different studies have shown overexpression of c-KIT in cancer cells that, in their normal cell counterparts, show very little or undetectable c-KIT expression when mainly assessed by IHC.For example, a 7-fold increase in c-kit mRNA expression relating to normal renal tissue has been reported in renal oncocytoma and chromophobe renal cell carcinoma (RCC) [102].Moreover, IHC analysis performed in tissue microarrays (TMAs), including 226 renal tumors, revealed a strong c-KIT immunoreactivity in more than 85% of chromophobe RCCs and oncocytomas.In contrast, c-KIT expression was infrequently observed or undetectable in other renal tumors assessed [102,103].IHC studies revealed c-KIT expression in 100% of cystic renal oncocytomas (Figure 3E) [104].c-KIT overexpression in chromophobe RCC and renal oncocytoma was not associated with c-kit mutations [105].
Normal breast ducts and acini, but not myoepithelial and stromal cells, show some expression of c-KIT, which has been reported to be lost in most breast cancers [106][107][108]. This loss of c-KIT expression has been proposed as a potential determinant of malignant breast transformation due to c-kit gene promoter DNA hypermethylation [109]. Although only a low percentage of breast carcinomas, if any, express c-KIT, 20-42% of triple-negative breast cancers, which lack expression of estrogen receptor, progesterone receptor, and HER-2/neu and have a significantly higher probability of relapse and poorer overall survival when compared with other breast cancer types, do express it [110][111][112]. Another type of breast cancer in which the expression of c-KIT is frequently seen is adenoid cystic carcinoma (ACC) of the breast that, although relatively clinically indolent, can be confounded with infiltrating duct carcinomas (particularly with tubular and cribriform carcinomas of the breast). In this context, IHC assessment of c-KIT is a valuable diagnostic tool since its expression is found in more than 90% of mammary ACC but not in other carcinomas with overlapping histologic features (Figure 3F) [113][114][115].
Although overexpression of c-KIT -rather than mutations in its gene -has been reported in a high percentage of small cell lung cancer (SCLC) patients by different groups [116][117][118][119][120], its prognostic relevance remains debatable due to conflicting findings that may be related to the type of tumor specimens used (biopsy or surgical samples), cancer stages, and other variables that still need to be scrutinized.
Expression of c-KIT and SCF has been reported in patient-derived immortalized colorectal cancer cell lines [121] and in premalignant and malignant colonic lesions, where c-KIT and SCF co-expression has been associated with a worse clinical outcome [122].In a more recent study using a TMA comprising 137 patient-derived colon tumors and 179 associated serially passaged xenografts, it was found that c-KIT is expressed in approximately 50% of colorectal cancer tissues [123], in agreement with data collected from The Cancer Genome Atlas [124].
Several studies have assessed c-KIT expression in human prostate cancer (PCa) cell lines and biopsies taken from patients, though with some divergent results that may be due to the use of antibodies that have been discontinued some years ago and thus cannot be further employed for reproducibility analyses [125,126].Studies carried out by our lab (RDB) using benign prostatic hyperplasia, primary tumors, and bone metastatic PCa specimens have shown uniform levels of SCF and a trend of increasing c-KIT expression that parallels disease aggressiveness [127].These findings are in agreement with other studies that also revealed significantly increased expression of c-KIT in high-grade (Gleason score [GS] 8 or higher, or clinical Stage 2) compared with low-grade (GS 6-7, or clinical Stage 2) prostate tumors [128].Despite this, we observed that most human PCa cell lines grown in vitro express c-kit at the gene level, though c-KIT immunoblotting only detects low or null protein levels [127], as shown similarly by others [129].Using experimental models of bone metastasis, we observed de novo expression of c-KIT in intraosseous tumors generated by otherwise c-KIT-negative PCa cell lines, suggesting an induction of c-KIT expression in PCa cells by the bone microenvironment, which was confirmed by co-culture studies of PCa cells and bone marrow-derived cells [127,130].Furthermore, we found that inhibition of bone-induced c-kit expression in PCa cells transduced with lentiviral short hairpin RNA could significantly reduce intraosseous tumor incidence and growth, suggesting a crucial role of this RTK in PCa bone metastasis [130].
IMPLICATIONS OF c-KIT IN CANCER DEVELOPMENT AND PROGRESSION
A complex interplay of numerous biological processes contributes to the development and progression of cancer.In addition to studies reporting associations between specific gain-of-function or loss-of-function mutations in c-kit, induction of de novo expression or overexpression of c-KIT and clinical outcome in cancer patients (summarized above), research by many groups has revealed that c-KIT plays crucial roles in the regulation of many mechanisms leading to tumor formation and cancer progression in carcinomas.Below, we will describe some of these studies.
Cancer stemness, which refers to the cancer stem cell (CSC) phenotype, is characterized by the ability of a subpopulation of cancer cells to self-renew, differentiate into defined progenies, initiate tumor growth, and drive metastasis, recurrence, and resistance to therapies [131,132]. C-KIT has been proposed to regulate stemness in different cancers. Studies in ovarian cancer cells have related c-KIT expression to cancer stemness [133][134][135][136]. Accumulating evidence also indicates a role for c-KIT in colon cancer stemness, as supported by studies employing spheroid cultures derived from colon cancer patients' tumor cells grown in serum-free and non-adherent plates, a technique commonly used to investigate CSCs [137]. These studies have shown that the release of SCF by more differentiated colon tumor cells modulates the growth of c-KIT-expressing CSC-like colon tumor cells [138], suggesting the existence of a paracrine system by which SCF can stimulate CSC-like cells found in the colonospheres. Furthermore, it was recently demonstrated that c-KIT stimulates CSC properties in colorectal cancer cells, including CD44 expression and other stem cell markers [139]. Studies on non-small cell lung cancer have also related c-KIT to cancer stemness, based on findings that revealed a reduction of CSCs through targeting the SCF-c-KIT autocrine signaling loop [140] and inhibition of c-kit with specific shRNA and inhibitors [141]. Moreover, in a recent study in which human PCa cell lines were separated into CD117 (c-KIT)-positive and CD117-negative cells, Kerr's group demonstrated that in some instances c-KIT promoted sphere formation and increased the expression of specific stemness markers [142]. Similarly, we have observed that ectopic expression of c-KIT in PC3 cells followed by exposure to its ligand SCF increases the number of prostaspheres formed in selective serum-free medium and non-adherent plate conditions (Figure 4).
To acquire migratory and invasive capacities, carcinoma cells must detach from adjacent epithelial cells and adopt a mesenchymal phenotype -the epithelial-mesenchymal transition (EMT), which plays a critical role in their aggressiveness and metastatic potential [143,144].Regulators of EMT comprise different transcription factors such as Snail (including Snail and Slug, also called SNAI1 and SNAI2, respectively) and ZEB (Zeb1 and Zeb2), which can repress the expression of genes encoding E-cadherin and cytokeratins (associated with epithelial phenotype) and upregulate others encoding proteins linked to the mesenchymal phenotype (e.g., N-cadherin, vimentin, and fibronectin) [145][146][147][148]. Different studies have shown that the expression of EMT transcription factors is also increased in CSCs [147,149,150], and a growing body of evidence supports the view that circulating tumor cells (CTCs) can arise from tumor cells that have gone through EMT [151][152][153][154]. Acquisition of EMT properties and enhanced invasiveness and CSC traits has been found in salivary ACC cell lines after ectopic expression of c-KIT [155].The association between KIT and EMT is also supported by immunohistochemical studies performed in a TMA comprising 150 specimens of thymic epithelial tumors, where the expression rate of c-KIT was found to be significantly higher in thymic carcinomas than in thymomas, which in most of the cases behave in a benign fashion and are noninvasive.In these studies, c-KIT expression positively correlated with EMT markers N-cadherin, Twist, and Snail and negatively with E-cadherin, suggesting that the immunohistochemical analysis of those proteins could be important to distinguish between thymic cancer and thymoma [156].The association between c-KIT (CD117) and EMT is also supported by studies in ovarian cancer cells, where a reduction of CD117+ and CD44+ subpopulation of ovarian CSCs by metformin at low dose led to a significant decrease of Snail2, Twist, and vimentin related to mesenchymal traits, and an increase in expression of the epithelial marker E-cadherin [157].In line with these findings, another group demonstrated that CD117+ subpopulations of human PCa cell lines present a significant increase in vimentin expression and in vitro migratory ability than CD117− subpopulations of the same cell lines [142].In concordance with these results, we found that ectopic expression of c-KIT supports the in vitro migration and invasion of two different PCa cell lines along with BRCA2 downregulation, which may play a role in the process as suggested by gene rescue experiments [130].We posit that the stimulation of the migratory and invasive abilities induced by c-KIT in these PCa cells would be mediated by an EMT-like phenomenon, as we observed a change in morphology toward a more mesenchymal phenotype, an increase in expression of Snail1, Slug, Zeb1, and vimentin, and a decrease in E-cadherin expression in c-kit-transfected PCa cell lines as compared to the same PCa cells transfected with the empty vector (EV) (c-kit-negative control cells) (Figure 5).In addition to the contributory role exerted by cancer cell-intrinsic expression and activation of c-KIT in tumor development and progression, several lines of evidence suggest a key role for SCF-c-KIT signaling occurring in the tumor microenvironment.Perhaps the best example is that of mast cell infiltrates associated with tumors.Indeed, different studies in mice have demonstrated that high levels of c-KIT on mast cells and their presence 
in the tumor microenvironment promote angiogenesis, leading to increased tumor growth and metastasis [158][159][160].Furthermore, additional research indicates that c-KIT and mast cells modulate the development, recruitment, and immunosuppressive effects of myeloid-derived suppressor cells in tumors [161,162].
TARGETED THERAPY OF c-KIT-POSITIVE TUMORS
As previously outlined, various cancers present an aberrant activation of c-KIT kinase, caused either by overexpression or mutations in c-kit. Most of the 500 c-kit mutations identified so far in human cancer (Sanger Institute Catalogue of Somatic Mutations in Cancer [163]) are passenger rather than driver mutations. To target and inhibit dysregulated c-KIT, two main approaches have been considered: small molecule inhibitors and monoclonal antibodies (mAbs). Among small molecule inhibitors, the first one developed was imatinib mesylate (Gleevec®), which was originally found to inhibit the TK activity of the chimeric BCR-ABL fusion oncoprotein resulting from the translocation t(9;22) in chronic myelogenous leukemia, and was approved for the treatment of this hematologic cancer in 2001 [164]. Serendipitously, imatinib was also found to inhibit the autophosphorylation and activation of some RTKs, such as c-KIT and PDGFR, and was approved as standard first-line treatment for metastatic GIST. It is also used in the adjuvant setting for patients with GISTs who may have potential curative treatment by surgery and for the treatment of adult patients following surgical removal of CD117-positive GISTs (reviewed in Kelly et al. [165]). Although imatinib can traverse the cell membrane and bind to JM and cytoplasmic enzymatic domains due to its small size (molecular weight: 493.6), its therapeutic effect is highly dependent on the mutation involved [70]. For instance, in GIST patients whose tumors harbor gain-of-function point mutations in the exon 11 JM domain of c-kit, found in 75-80% of the cases, imatinib provides a robust initial clinical response [71]. However, in almost 90% of these patients, there is a relapse of the disease within 20-24 months [166][167][168][169], which is due to secondary mutations in c-kit that usually cluster in exons 13/14 (the ATP-binding pocket) and 17 and 18 (the activation loop) of the kinase domain, preventing optimal binding of imatinib and restoring c-KIT signaling in the presence of the inhibitor (reviewed in Serrano et al. [169]). This has led to the approval of other TK inhibitors, such as sunitinib and regorafenib, with activity against secondary c-kit mutations [169]. Among these, sunitinib elicits longer progression-free survival and overall survival in patients that harbor exon 13 or 14 secondary c-kit mutations compared to those with exon 17 or 18 secondary c-kit mutations [167,168], whereas regorafenib has equal efficacy in tumors with secondary exon 13/14 or exon 17/18 mutations, or combinations thereof [170]. Ripretinib, a novel type II switch control kinase inhibitor, is a broad-spectrum inhibitor of secondary drug resistance mutations, including activation loop mutations targeted by type I inhibitors [171]. In addition, because PDGFRA-mutant GIST accounts for up to 10% of GISTs and exhibits primary resistance to imatinib and sunitinib therapy [165], other agents that selectively target PDGFRα D842V mutant advanced GISTs are of immense clinical value. Among them, we find avapritinib, an inhibitor of c-KIT and PDGFRA activation loop mutants that has been approved by the Food and Drug Administration (FDA) for GISTs that harbor PDGFRA exon 18 D842V mutations [167], whereas dasatinib, an oral inhibitor of c-KIT, produced a positive response in one patient with PDGFRA D842V-mutant GIST in a Phase II trial, and is currently used off label for this molecular subtype [172].
The clinical experience of imatinib in GIST led to studies to explore the potential therapeutic value of this TK inhibitor in systemic mastocytosis.In adult patients affected with the disease, the activating D816V c-kit mutation is found in about 90% of the cases and is responsible for primary resistance to imatinib.In contrast, in the remaining patients (with the absence of D816V c-kit mutation or unknown c-kit mutational status), a clinical response to imatinib was found, leading to its approval as a treatment by the FDA in 2006 [173][174][175].This clearly demonstrates the relevance of identifying specific c-kit mutations to select patients for more adequate treatments.
To overcome the resistance developed in certain wild-type or mutant c-KIT-positive cancers treated with TK inhibitors such as imatinib, the use of mAbs to target and inhibit dysregulated c-KIT has been proposed. Although, unlike small molecule inhibitors, antibodies can only recognize EC epitopes of c-KIT, this may represent a potential therapeutic advantage due to their specific binding to both mutant and wild-type c-KIT receptors (recall that most c-kit mutations localize to JM or IC domains of the receptor). Using KIT-expressing NIH 3T3 and Ba/F3 cell lines, Shi et al. evaluated the feasibility of targeting oncogenic c-kit mutations using anti-D4 mAbs that obstruct homotypic D4 or D5 contact formation [176]. Oncogenic c-kit mutations were divided into two classes. Class I mutants include the D5 point mutations D419A and N505I, deletion of Y418 D419, and duplication of A502Y503, which exhibit surface expression of constitutively activated TK activities. In contrast, Class II mutants, including the D5 T417IΔ418-419 mutation and the IC V560D and D816V point mutants, have constitutively activated TK activity with low or negligible surface expression [176]. Anti-D4 mAbs abrogated oncogenic c-KIT signaling in mutations localized in D5, including all Class I mutants and the Class II T417IΔ418-419 mutation. Based on these findings, the authors proposed differential pharmacological treatment regimens for cancer patients depending on the c-kit mutations present in their tumors [176].
Moreover, antibody-drug conjugates (ADCs) can also be designed by conjugating different drugs to mAbs to deliver a potent cytotoxic payload to cancer cells while minimizing toxicity to normal tissue [177]. Studies with LOP628 [178] and NN2101-DM1 [179], two humanized anti-KIT antibodies conjugated to the tubulin polymerization inhibitor emtansine (DM1) [177], have been reported. Both ADCs showed strong in vitro antiproliferative activity on several c-KIT-positive human tumor cell lines representing GIST, AML, SCLC, and systemic mastocytosis regardless of their c-kit mutational status, and in vivo antitumor responses in imatinib-sensitive and -refractory GIST and systemic mastocytosis xenograft models, as well as in SCLC and AML models [178,179]. In both cases, the ADCs bind to c-KIT on the surface of the cancer cells, forming a complex that is then internalized and rapidly trafficked to the lysosome, releasing DM1 in the cytoplasm, arresting the cell cycle by inhibiting microtubule polymerization and leading to apoptosis of the cancer cells [178,179]. Despite the promising preclinical results obtained with LOP628, rapid hypersensitivity reactions were observed in some patients treated with this ADC in a Phase I clinical trial, which led to termination of the trial [180]. This unexpected outcome is likely caused by mast cell degranulation resulting from high affinity binding of the Fc region of LOP628 to the Fc-gamma receptor on mast cells and an SCF-mediated c-KIT activation that is not inhibited by LOP628 (recall that c-KIT is expressed by mast cells) [180]. Although clinical studies are still needed to define the safety profile of NN2101-DM1, it has been demonstrated that the NN2101 antibody has decreased binding affinity to Fc receptors and an inhibitory action on SCF-dependent c-KIT activation [181], which might prevent hypersensitivity reactions such as those observed with LOP628 (studies in patients are necessary to examine this hypothesis). Moreover, in vivo and in vitro studies revealed synergistic inhibitory effects on some cancer cells treated with NN2101-DM1 and imatinib or carboplatin/etoposide [179]. This suggests that combination therapies involving novel anti-KIT ADCs in conjunction with standard chemotherapeutic agents, TK inhibitors, or other targeted agents should be considered as a strategy to enhance the efficacy of anti-KIT ADCs used as a monotherapy for different cancer types.
CONCLUSIONS
Knowledge of the contribution of SCF and c-KIT to different physiological mechanisms has increased dramatically during the last decades.Furthermore, accumulating evidence suggests that activating mutations or amplification/overexpression of c-kit contribute to the development and progression of many human malignancies, as supported by gene and protein profiling of clinical specimens and numerous in vitro and in vivo studies at elucidating the role played by c-KIT in cancer.Following the great success of imatinib in treating GISTs, other broad TK inhibitors have been approved to overcome the resistance acquired by certain c-KIT-positive tumors through secondary mutations occurring in c-kit.The identification of specific c-kit mutations could be of importance in recognizing more potent and selective treatments in certain c-KIT-positive tumors.However, the experience with TK inhibitors suggests an almost ever-present potential for the outgrowth of resistant cancer clones.Recent studies suggest that anti-KIT monoclonal ADCs may represent a new modality to treat wild-type and activating-mutant c-KIT-positive tumors, irrespective of their c-kit mutational status.The refinement of these highly selective therapeutic tools, either alone or combined with chemotherapeutic agents, TK inhibitors, or immune checkpoint inhibitors, will help treat cancer types driven by the c-KIT signaling machinery.These therapeutic strategies, if successful, hold the potential to significantly minimize toxicity to normal tissue and improve patient clinical outcomes.
FIGURE 1 .
FIGURE 1. Structural organization of the human c-KIT receptor.In its inactivated state, c-KIT is present as a monomer that comprises extracellular (EC), transmembrane (TM) and intracellular (IC) domains.The outer immunoglobulin-like (Ig-like) domains D1 to D3 are key components for binding to stem cell factor (SCF), whereas D4 and D5 are essential for homotypic contacts needed for KIT dimerization.The IC domain contains a juxtamembrane (JM) domain, a tyrosine kinase (TK) domain and a flexible carboxy-terminal (C-terminal) tail.The JM domain contributes to the relay of IC downstream signaling.The TK domain is further divided into the amino-terminal TK1 (N-lobe) domain, which houses an ATP-binding region, and the carboxy-terminal TK2 (C-lobe) domain, which encompasses a phosphotransferase region and activation loop.
FIGURE 2 .
FIGURE 2. Schematic representation of c-KIT activation and downstream signaling.(A) Here, two adjacent monomeric c-KIT molecules are cis-autoinhibited by the juxtamembrane (JM) domain that inserts between the tyrosine kinase (TK) 1 and 2 domains, leading to a static configuration that sterically blocks the activation loop (AL) residing in the catalytic cleft between the lobes; (B) c-KIT is activated on binding of dimeric stem cell factor (SCF) to immunoglobulin-like (Ig-like) domains D1 to D3, which induces receptor reorientation and homotypic interaction between adjacent D4 and D5 domains.Transphosphorylation of tyrosine residues in the JM domain enables its dissociation from the TK1 domain.TK1 and TK2 domains are displaced away, allowing access to the catalytic cleft for ATP binding and release of ADP from the binding site for a second transphosphorylation of tyrosine residues in the AL; (C) Further phosphorylation of tyrosine residues in the kinase insert region and C-terminal tail creates docking sites for several substrates, leading to downstream signaling through the mitogen-activated protein kinase/extracellular signal-regulated kinase (MAPK/ERK), phosphatidylinositol 3-kinase/protein kinase B (PI3K/AKT), phospholipase C-γ (PLC-γ), Janus kinase/signal transducer and activator of transcription (JAK/STAT), and Src kinase pathways.
FIGURE 3 .
FIGURE 3. Examples of different cancer types expressing c-KIT.(A and B): Uveal melanoma morphology (A) with a strong and diffuse c-KIT expression; (C and D): Ovarian dysgerminoma (C) with C-KIT expression in cancer cells; tumor-infiltrating lymphocytes were negative (D); Renal oncocytoma (E) and mammary adenoid cystic carcinoma (F) exhibiting C-KIT positivity.All cases were stained immunohistochemically using polyclonal A-4502 antibody (DAKO Agilent).The images were taken at ×20 magnification except for C and D (×10 magnification).
FIGURE 4 .
FIGURE 4. c-KIT expression and sphere formation in prostate cancer cells.(A) Representative image of prostaspheres formed by PC3 cells 6 days after transfection with c-kit and exposure to SCF under non-adherent 3D culture conditions (selective serum-free medium and non-adherent plates).Note that control PC3 cells transfected with the empty vector (EV) show little homotypic cell aggregation.Scale bars, 50 µm; (B) Quantitation of sphere formation by c-KIT-expressing and EV-expressing PC3 cells 6 days after plating at different numbers in non-adherent conditions.Data are expressed as the mean ± SE number of spheres larger than 50 µm per ten 100× microscopic fields.*p=0.05;**p<0.005(Student's test).
FIGURE 5. c-KIT expression and EMT-like phenomenon in prostate cancer. (A) PC3 cells stably transfected with c-kit show a fibroblast-like phenotype, while control PC3 cells transfected with the empty vector (EV) display the typical epithelial phenotype. Scale bars, 50 µm; (B) Western blots for c-KIT, epithelial-mesenchymal transition (EMT) markers, and EMT-related transcription factors are shown for PC3 and C4-2B PCa cells stably transfected with c-kit or EV and incubated with SCF (100 ng/mL for 60 min). Glyceraldehyde 3-phosphate dehydrogenase (GAPDH) served as a loading control; (C) Gene expression of Snail, Slug, and Zeb1 is higher in c-KIT-expressing PC3 and C4-2B cells relative to EV-transfected control cells (RT-qPCR; p < 0.001, Student's t-test). Gene expression was normalized to GAPDH. Values are mean ± SE for triplicate samples.
Micropenis
Micropenis is part of a larger group of conditions broadly known as inconspicuous penis; however, it is fundamentally different from the other diagnoses in this group, such as webbed penis and buried penis, in that the underlying problem lies with the size of the penis itself, not with the surrounding and overlying skin. This condition is usually the result of a defect in the hypothalamic-pituitary-gonadal axis, although iatrogenic causes are identified infrequently. Management revolves around testosterone (direct administration or encouraging the patient's body to make its own), and long-term results with respect to increase in penile length are promising. Reconstructive surgery is based on the use of a vascular pedicle free flap and is reserved for patients who fail to respond to hormonal treatment. Although substantial long-term data are lacking, adult patients with micropenis appear to report dissatisfaction with penile appearance, but the majority appear to have adequate sexual function.
INTRODUCTION
Micropenis is an oft-misapplied diagnosis in medicine that, when used inaccurately, can cause considerable parental anxiety and can take a good amount of effort to overcome. The term refers to a specific disorder with a specific set of causative factors and a different set of treatment modalities than the broader terms of inconspicuous penis or, simply, small penis, in place of which it is generally used. It would therefore behoove us to understand fully the meaning and genesis of micropenis to better differentiate it from the other types of inconspicuous penis.
EMBRYOLOGY
As with many genital disorders, an understanding of the relevant embryology allows a better understanding of the condition itself. Beginning at 8 weeks of gestation, maternal chorionic gonadotropins from the placenta begin to stimulate testosterone production from the fetal Leydig cells. Under the influence of dihydrotestosterone, a conversion product of testosterone, penile differentiation occurs. The genital tubercle differentiates into the glans penis, the genital folds become the shaft of the penis, and the genital swellings migrate to the midline to become the scrotum. Penile differentiation is complete by 12 weeks of gestation. During the second and third trimesters, growth of the penis is accomplished through fetal androgens, which are produced under stimulation by fetal pituitary gonadotropin. There is a marked increase in penile size over that time period, with the penis growing almost 20 mm from weeks 16 to 38 [3,4]. Therefore, true micropenis must result from a hormonal abnormality that occurs after 12 weeks of gestation. Studies of genital skin fibroblasts in patients with micropenis have shown normal androgen production and action after administration of gonadotropins, as well as appropriate receptor activity, which reinforces the central role the hypothalamic-pituitary axis plays in the genesis of micropenis [5].
ETIOLOGY
True micropenis is a result of a hormonal abnormality occurring after 12 weeks of gestation. The causes of this condition can be divided into three broad groups: hypogonadotropic hypogonadism (pituitary/hypothalamic failure), hypergonadotropic hypogonadism (primary testicular failure), and idiopathic. These represent the most common etiologies of micropenis [6,7,8]. Table 2 highlights the different etiologies. Typically, when due to hypogonadotropic hypogonadism, micropenis is associated with conditions such as Kallmann's syndrome (hypogonadotropic hypogonadism and anosmia) and Prader-Willi syndrome (hyperphagia, mental retardation, short stature, hypotonia, and hypogonadism) (see Table 2). Hypergonadotropic hypogonadism, or primary testicular failure, can be due to gonadal dysgenesis, or may be associated with Robinow's syndrome [6] as well as poly-X syndromes, such as XXY (Klinefelter's syndrome), gene translocations, and trisomies of chromosomes 8, 13, and 18 [2]. Problems with testosterone action, such as 5-α reductase deficiency, can present as micropenis in its incomplete form, although hypospadias is a much more common result [6]. Finally, idiopathic causes of micropenis are associated with an empirically normal hypothalamus-pituitary-testicular axis [6].
DIAGNOSIS
As stated previously, to make an accurate diagnosis of micropenis, the examining clinician must have a clear understanding of the definition of micropenis as well as how to measure the penis. This is to exclude confounding diagnoses, such as webbed penis and hidden penis. SPL is measured from the point where the penis meets the pubic bone to the distal tip of the penis, which is put on maximal stretch (Fig. 1). Care must be taken to compress any suprapubic fat pad, prevalent in infants and most likely the major cause of misdiagnosis of micropenis in this age group. The micropenis typically has a normal circumference-to-length ratio, although, rarely, severely hypoplastic corpora cavernosa will be seen [8]. Frequently, micropenis is associated with cryptorchidism and small-volume testicles, as well as a hypoplastic scrotum, most likely due to the same causative factors that are responsible for the micropenis [8]. Other characteristics, such as delayed puberty in older children, suggestive of hypogonadotropic hypogonadism, should be noted to aid in diagnosis.
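To make the measurement criterion concrete, the sketch below flags an SPL that falls more than 2.5 standard deviations below the age-matched mean, the criterion most commonly used to define micropenis. The reference means and SDs in the code are rough illustrative placeholders rather than values taken from this article; published nomograms should be consulted for any clinical use.

```python
# Illustrative screening helper: flags a stretched penile length (SPL) more than
# 2.5 SD below the age-matched mean (the commonly cited criterion for micropenis).
# Reference values below are rough placeholders, NOT clinical nomogram data.

REFERENCE_SPL_CM = {
    # age group: (approximate mean SPL in cm, approximate SD in cm) -- illustrative only
    "term newborn": (3.5, 0.4),
    "1 year": (4.5, 0.7),
    "adult": (13.0, 1.6),
}

def is_micropenis(spl_cm: float, age_group: str, sd_cutoff: float = 2.5) -> bool:
    mean, sd = REFERENCE_SPL_CM[age_group]
    z = (spl_cm - mean) / sd          # z-score of the measured SPL
    return z < -sd_cutoff             # more than 2.5 SD below the mean

print(is_micropenis(2.2, "term newborn"))  # True: below the ~2.5 cm illustrative cutoff
print(is_micropenis(3.2, "term newborn"))  # False
```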
Once micropenis is confirmed through physical exam, consultation with the endocrinology service should be obtained to help determine the cause of micropenis as well as to rule out possible life-threatening associated abnormalities. Specifically, hypogonadotropic hypogonadism is commonly associated with growth hormone (GH) deficiency and/or adrenocorticotropic hormone (ACTH) deficiency, putting the infant at high risk for death due to hypoglycemia or cortisol deficiency [9]. Plasma cortisol, serum electrolytes, and plasma glucose may be obtained in this setting to rule out acute problems. The endocrinologic evaluation can also isolate the cause of micropenis to its level in the hypothalamic-pituitary-testicular axis [9]. Specifically, prolactin (PRL) levels help to isolate the defect to the hypothalamus (high PRL) vs. the pituitary (low PRL) [9,10]. In addition, plasma GH, thyroid stimulating hormone (TSH), and ACTH can all be used to isolate the location of dysfunction [9]. Interestingly, it may be difficult to make the diagnosis of hypogonadotrophic hypogonadism in the prepubertal patient with micropenis if they are past infancy, as there is a quiescent phase of the pituitary that sees levels of follicle-stimulating hormone (FSH) and luteinizing hormone (LH) drop precipitously [9].
In parallel with an evaluation of central endocrine function, testicular function may also be assessed through serum testosterone levels before and after administration of human chorionic gonadotropin (hCG). No rise in postadministration testosterone together with rises in LH and FSH is consistent with testicular failure or absence, although this can also occasionally be seen in patients with Kallmann syndrome and undescended testicles [11]. Antimüllerian hormone (AMH), also known as müllerian-inhibiting substance (MIS), and inhibin B, which are produced by functioning Sertoli cells, can also be used to determine the presence of functional testicular tissue [9,12]. Low AMH coupled with a normal inhibin B is indicative of the rare, persistent müllerian duct syndrome, which is the result of a defect in the gene that encodes AMH [9].
Magnetic resonance imaging (MRI) may be used to identify midline structural defects, such as pituitary stalk dysplasia syndrome, central diabetes insipidus (indicated by a lack of a posterior pituitary bright spot), and pituitary aplasia [9,13,14]. Specifically, findings such as a small anterior pituitary gland, attenuated or absent pituitary stalk, and ectopic posterior pituitary are all suggestive of hypopituitarism and thus can help to facilitate identification of the etiology [14,15]. Finally, some authors advocate obtaining a karyotype [11], although this recommendation is not universal.
TREATMENT
Treatment of micropenis should focus on achieving a penile size sufficient for the patient to have an appropriate body image, normal sexual function, and standing micturition. Inability to bring the penis fully to the mean measurement for age does not imply failure. Primary treatment of micropenis revolves around exogenous testosterone administration to increase the length of the penis so that it may be considered within a range of normal. Most authors endorse 25 mg of intramuscular testosterone in infancy, typically in its enanthate formulation to promote longer action, once a month for 3 months [8,9,11], followed by further courses at higher dosages at the start of pubarche [16]. Good responses are typically seen, with increases of over 100% in penile length expected over the course of initial treatment [16,17]. However, if the response is not deemed satisfactory, repeat administrations over short time periods may be performed without significant concern about early maturation of bony growth plates and subsequent reduction in stature [9,16]. Transdermal delivery of both testosterone and dihydrotestosterone has also been reported [18,19], with the application of dihydrotestosterone resulting in increases in penile length of over 150% during the treatment period [18], although others have found no such advantage in head-to-head comparisons with testosterone [20]. There have also been two reports of administering LH and FSH to infants with hypogonadotrophic hypogonadism and micropenis; however, although there was a significant increase in testicular volume over the treatment period in both studies, minimal increase in penile size was noted in one study [21], and an increase was seen in only one of two patients in the other [22]. This is in keeping with the gonadotropin releasing hormone (GnRH) surge seen immediately after birth in normal infants [9], with accompanying increases in Sertoli cell populations [23].
In light of all this, there have also been concerns voiced about the administration of testosterone to prepubertal patients and the impact on their ultimate penile length. Current long-term data regarding patients treated in childhood with exogenous testosterone have shown no reduction in adult penile length [24]. It should be emphasized, however, that the impact of testosterone treatment in childhood on masculinization during puberty is still not fully described and better long-term data are needed to fully understand the effects of treatment.
If endocrine treatment does not accomplish a satisfactory result, surgical therapy can offer an alternative in the management of micropenis. Early writers on micropenis endorsed sex reassignment [25,26], especially if there was evidence of lack of testicular tissue [25]. However, more recently, the lack of data regarding the long-term psychological impact of gender reassignment in pediatric patients [27], coupled with some high-profile cases of patients who were sex reassigned in childhood having spectacularly bad outcomes, has called into question the wisdom of this approach. While some data do seem to promote the idea that most sex-reassigned patients are comfortable with their assigned sex, regardless of chromosomal sex [28,29], and others have found mixed results [30], still other researchers have found significant psychosexual problems in adult patients who had undergone sex reassignment as children [31]. Due to the relatively small number of patients in whom the discussion of sex reassignment in infancy is indicated, most studies of outcomes in this group include a range of etiologies, most of which entail a markedly different fetal and infantile hormonal milieu than those patients with micropenis [28,29,30,31]. Therefore, extrapolation to the patients with micropenis who underwent sex reassignment is fraught. In light of this, sex reassignment with creation of female genitalia for patients with this condition should be undertaken with extreme caution and should only be done by those with a large amount of experience.
Reconstructive surgery has a long history in the treatment of micropenis, with Frank Hinman publishing his initial results of reconstruction of patients with micropenis in the early 1970s [32]. Further advances in penile reconstruction began in the 1980s with the description of a fasciocutaneous neophallus based on the radial artery of the forearm [33]. Other techniques for phallic reconstruction include the sensate osteocutaneous fibula flap, the free scapular flap, the suprapubic abdominal wall flap, and the vertical rectus abdominis flap, although the radial forearm free flap remains the most popular for phallic reconstruction [34]. Cosmetic and functional results are acceptable, especially when a prosthesis is implanted after reconstruction [35]; however, despite being used in select patients, the complication rate remains dauntingly high, even in the most experienced of hands [35,36]. While most donor site complications are minimized with increasing experience, complications with the flap (and the urethral anastomosis, if the patient undergoes one) continue to challenge reconstructive surgeons, regardless of experience. One large study showed a flap revision rate of 12% [35], while another cohort had 53 patients undergoing a mean of six operations to achieve a lasting neophallus [36]. Corporal augmentation procedures have also been described in the setting of micropenis, with acceptable short-term results, although no long-term data exist [37].
Most likely the biggest problem associated with the management of micropenis is the lack of knowledge about long-term outcomes. With regard to long-term sexual function and gender identity, Reilly and Woodhouse [38] studied a group of 20 patients with micropenis, unresponsive to hormonal therapy as children and raised as males, ranging in age from 10 to 43 years. All patients reported male gender identity, erections, and orgasm. In addition, nine of the 12 adult patients were sexually active, although half also experienced teasing due to genital appearance [38]. In a group of 22 adult micropenis patients raised as males, Lee and Houk reported similar findings [39]. Satisfaction with genital appearance remains an issue, however [40]. Overall, it may safely be said that current evidence points to normal gender identity and sexual function in the majority of patients with micropenis raised as males, even if their micropenis is not corrected.
SUMMARY
Micropenis is part of a larger group of conditions broadly known as inconspicuous penis; however, it is fundamentally different from the other diagnoses in this group, such as webbed penis and buried penis, in that the underlying problem lies with the size of the penis itself, not with the surrounding and overlying skin. This condition is usually the result of a defect in the hypothalamic-pituitary-gonadal axis, although iatrogenic causes are identified infrequently. Since micropenis can be the first identified manifestation of a broader endocrinologic problem, a pediatric endocrinologist should be consulted once the diagnosis of micropenis is made. Treatment of micropenis revolves around testosterone, either through direct administration or by encouraging the patient's body to make its own, and long-term results in terms of increased penile length are promising. Reconstructive surgery comes with a variety of options, all based on the principle of a vascular pedicle free flap, and is reserved for those patients not responding to hormonal treatment. Patients with micropenis in adulthood do report dissatisfaction with the appearance of their penis, but the majority appear to have adequate sexual function. It should be stated, however, that robust long-term data are still lacking in this crucial area.
Contrasting genomic consequences of anthropogenic reintroduction and natural recolonization in high‐arctic wild reindeer
Abstract Anthropogenic reintroduction can supplement natural recolonization in reestablishing a species' distribution and abundance. However, both reintroductions and recolonizations can give rise to founder effects that reduce genetic diversity and increase inbreeding, potentially causing the accumulation of genetic load and reduced fitness. Most current populations of the endemic high‐arctic Svalbard reindeer (Rangifer tarandus platyrhynchus) originate from recent reintroductions or recolonizations following regional extirpations due to past overharvesting. We investigated and compared the genomic consequences of these two paths to reestablishment using whole‐genome shotgun sequencing of 100 Svalbard reindeer across their range. We found little admixture between reintroduced and natural populations. Two reintroduced populations, each founded by 12 individuals around four decades (i.e. 8 reindeer generations) ago, formed two distinct genetic clusters. Compared to the source population, these populations showed only small decreases in genome‐wide heterozygosity and increases in inbreeding and lengths of runs of homozygosity. In contrast, the two naturally recolonized populations without admixture possessed much lower heterozygosity, higher inbreeding and longer runs of homozygosity, possibly caused by serial population founder effects and/or fewer or more genetically related founders than in the reintroduction events. Naturally recolonized populations can thus be more vulnerable to the accumulation of genetic load than reintroduced populations. This suggests that in some organisms even small‐scale reintroduction programs based on genetically diverse source populations can be more effective than natural recolonization in establishing genetically diverse populations. These findings warrant particular attention in the conservation and management of populations and species threatened by habitat fragmentation and loss.
| INTRODUCTION
Species reintroductions are increasingly being used in ecological restoration and biodiversity conservation programmes (Seddon et al., 2014;Taylor et al., 2017;Bubac et al., 2019).Most reintroductions involve translocation of a small number of individuals to establish new populations that may be geographically isolated from the species' current range (Frankham, 2010).Founding populations are often characterized by having a small effective population size, only a subset of the genetic variation that exists in their source populations and limited or no gene flow with other populations (Frankham, 2010;Lynch & Gabriel, 1990).An alternative to reintroduction by translocations is the use of more passive measures that facilitate natural dispersal and recolonization of species' ranges (Scott et al., 2001).Natural recolonization processes may differ from reintroductions in that they require some connectivity with other populations.They can, therefore, be very slow, even in highly mobile species (Hurford et al., 2006;Larter et al., 2000).Especially in fragmented habitats and in species with low dispersal rates, natural recolonization (including recolonization from reintroduced populations) may also involve one or multiple sequential founding events and relative isolation of recolonized populations (Clegg et al., 2002;Pruett & Winker, 2005).
Due to the often small founder population size, both reintroduced and naturally recolonized populations may initially experience strong genetic drift and accumulate inbreeding because individuals are more likely to share common ancestors, which increases homozygosity (Allendorf, 1986;Nei et al., 1975).The levels of genetic diversity in reintroduced and recolonized populations are however also affected by population growth rate, genetic structure and immigration (Biebach & Keller, 2010, 2012;Latch & Rhodes, 2005).For example, inbreeding and genetic drift accumulate over generations at a rate that depends on the population size and the rate of immigration (Whitlock et al., 2000;Willi et al., 2013).Rapid population growth reduces the duration of a population bottleneck and the degree of genetic drift (Allendorf, 1986;Nei et al., 1975).Immigration counteracts the loss of diversity due to drift by introducing unrelated individuals and novel genetic material that replenishes genetic variation and reduces inbreeding rates (Frankham et al., 2017;Latch & Rhodes, 2005;Vucetich & Waite, 2000).The accumulation of inbreeding and genetic drift can allow low-frequency (partially) recessive deleterious alleles that are rarely homozygous (i.e.masked genetic load [Bertorelle et al., 2022]) to increase in frequency and even become fixed.Consequently, masked genetic load may be converted to realized genetic load (Bertorelle et al., 2022;Wang et al., 1999), which is expected to reduce fitness (i.e.inbreeding depression [Charlesworth & Willis, 2009]).However, this process exposes deleterious variation to selection, potentially purging strongly deleterious recessive alleles and reducing the fitness consequences of future inbreeding (Hedrick & Garcia-Dorado, 2016;Robinson et al., 2018).Genetic drift also reduces genetic diversity, including potentially adaptive genetic variation that may be important for evolutionary responses necessary to maintain fitness in changing environments (Frankham, 2005;Kardos et al., 2021).Together, these genetic consequences can impact both the short-and long-term viability of populations (Frankham, 2005;Weeks et al., 2011).
Consequently, a key goal in the management of newly reestablished or fragmented populations is to maximize the genetic diversity and minimize drift and inbreeding (Frankham et al., 2017).
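As a point of reference for how quickly drift erodes diversity at a given population size, the standard Wright–Fisher expectations below (textbook relations rather than results derived in this study) describe expected heterozygosity H_t and the accumulated inbreeding coefficient F_t after t generations at effective population size N_e:

$$H_t = H_0\left(1 - \frac{1}{2N_e}\right)^{t}, \qquad F_t = 1 - \left(1 - \frac{1}{2N_e}\right)^{t}$$

For illustration only, a population held at N_e = 12 for eight generations would be expected to lose roughly 29% of its initial heterozygosity in the absence of immigration; rapid post-founding growth, as discussed below, substantially reduces this loss.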
Several studies have shown that genetic diversity in reintroduced and naturally recolonized populations is often higher in those that receive gene flow from other populations (Biebach & Keller, 2012; Latch & Rhodes, 2005; Malaney et al., 2018), originate from multiple source populations (Huff et al., 2010; Sasmal et al., 2013; Vasiljevic et al., 2022; Williams et al., 2000; Williams & Scribner, 2010) and in reintroductions that use multiple translocations (Cullingham & Moehrenschlager, 2013; Drauch & Rhodes, 2007). Without such mitigating factors, erosion of genetic diversity and accumulation of inbreeding can occur due to isolation and/or slow population growth (Hundertmark & van Daele, 2010; Williams et al., 2002), and may have detrimental population-level consequences for fitness-related traits (Wisely et al., 2008) and population growth rates (Bozzuto et al., 2019). Differences in population connectivity and demography between reintroductions and natural recolonizations could, therefore, result in differing genetic consequences for these two paths to population reestablishment. Reintroduced populations may be more isolated from their source than naturally recolonized populations. However, the sequential reestablishment of habitat that often characterizes natural recolonization can result in cumulative founder effects that severely reduce genetic diversity (Clegg et al., 2002; Le Corre & Kremer, 1998). Naturally recolonized populations can thus be more vulnerable to the accumulation of genetic load than reintroduced populations in species or environments with limited dispersal possibilities.
Recent years have seen increased accessibility of genomic data that provide greater power to study population structure, genetic diversity and inbreeding (Supple & Shapiro, 2018), which are important for understanding the genetic outcomes of reintroductions (Hicks et al., 2007;Taylor & Jamieson, 2008;Wright et al., 2014).
One such advantage of genomic data is its utility for quantifying inbreeding using runs of homozygosity (RoH). These RoH occur when breeding between individuals that share common ancestors results in offspring with stretches of homozygosity along segments of their homologous chromosomes that both parents inherited from a common ancestor (Kardos et al., 2015, 2016). The ability to quantify the length of RoH segments enables us to distinguish between inbreeding due to recent or more distant shared ancestors of the parents based on the distribution of RoH lengths, giving insights into the demographic history of populations (Brüniche-Olsen et al., 2018; Druet & Gautier, 2017; Kardos et al., 2017).
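To make the RoH-based quantities used throughout the paper concrete, the sketch below computes an individual inbreeding coefficient F_RoH (total RoH length divided by autosomal genome length) and bins segments into length classes that separate recent from more ancient inbreeding. The 0.5-Mbp minimum length mirrors the threshold used later in the results, but the segment coordinates, genome length and bin edges are illustrative assumptions, not the study's actual pipeline.

```python
# Minimal sketch: RoH-based inbreeding coefficient (F_RoH) and length binning.
# Segment coordinates and the autosomal genome length below are illustrative
# placeholders, not values from this study's pipeline.

def f_roh(roh_segments, genome_length_bp, min_length_bp=500_000):
    """F_RoH = total length of RoH above a minimum length / autosomal genome length."""
    kept = [end - start for _, start, end in roh_segments if (end - start) >= min_length_bp]
    return sum(kept) / genome_length_bp

def bin_by_length(roh_segments, edges_mbp=(0.5, 1, 2, 4, 8, 16)):
    """Count RoH per length class (Mbp); longer classes reflect more recent shared ancestry."""
    counts = {f"{lo}-{hi} Mbp": 0 for lo, hi in zip(edges_mbp[:-1], edges_mbp[1:])}
    for _, start, end in roh_segments:
        length_mbp = (end - start) / 1e6
        for lo, hi in zip(edges_mbp[:-1], edges_mbp[1:]):
            if lo <= length_mbp < hi:
                counts[f"{lo}-{hi} Mbp"] += 1
                break
    return counts

# Example with made-up segments (chromosome, start, end in bp):
segments = [("chr1", 1_000_000, 3_500_000), ("chr2", 10_000_000, 10_800_000)]
print(f_roh(segments, genome_length_bp=2_200_000_000))
print(bin_by_length(segments))
```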
While the success of reintroductions has been studied across a variety of taxa including fish (Drauch & Rhodes, 2007), birds (Brekke et al., 2011), insects (White et al., 2017) and ungulates other than reindeer (Grossen et al., 2018), few studies have been able to evaluate and compare their genetic consequences with those from natural population reestablishments.The wild, endemic Svalbard reindeer (Rangifer tarandus platyrhynchus Vrolik, 1829) subspecies, with its strong metapopulation structure (Peeters et al., 2020), is a biological system well suited for comparing the genetic consequences of reintroductions to those of natural recolonizations.The number of Svalbard reindeer declined drastically due to overharvesting until 1925, when they were protected, and the subspecies was extirpated from much of the Svalbard archipelago, with evidence of reindeer surviving in four isolated populations totalling ~1000 individuals (Le Moullec et al., 2019;Lønø, 1959).The subspecies has since largely recovered, with natural recolonization and anthropogenic reintroductions restoring most of its former range.Accordingly, Svalbard reindeer are now abundant (~22,000 individuals) (Le Moullec et al., 2019) with most populations relatively stable or increasing in size (Hansen, Pedersen, et al., 2019), and populations previously extirpated are still recovering in number (Le Moullec et al., 2019).
Environmental conditions (including sea-ice coverage [Peeters et al., 2020]) are rapidly changing in the Arctic due to climate change (Isaksen et al., 2022), thus the genetic diversity and genetic structure of reindeer populations may be important for their capacity to adapt to these conditions and influence their future population dynamics.Knowledge of the genomic consequences of reintroductions and natural recolonizations in Svalbard reindeer will, therefore, inform future management of this endemic subspecies, in addition to contributing to a broader understanding of the genetic outcomes of different population reestablishment strategies relevant to the conservation management of other species.
Here, we use whole-genome sequencing data to investigate the genetic consequences of two Svalbard reindeer reintroductions (each founded by 12 individuals [Aanes et al., 2000;Gjertz, 1995]) and compare these to natural recolonization processes in adjacent, comparable habitats with similar ecological conditions.Specifically, we quantify the degree to which the genetic diversity of the source population was retained after the founder effects and subsequent rapid population growth (Kohler & Aanes, 2004) associated with anthropogenic reintroduction, and whether a signature of this reintroduction could be detected in the form of longer RoH.Additionally, we investigated whether naturally recolonized populations that were not admixed would show different patterns of genetic diversity and inbreeding coefficients compared to reintroduced populations due to the compounding effects of sequential founding events during natural recolonization.
| Sequencing
We sequenced 100 whole genomes of reindeer sampled from a total of 12 populations and sub-populations, including six sub-populations originating from two reintroductions (n = 46), the reintroduction source population (n = 17), two other remnant natural populations (n = 13) and three naturally recolonized populations (n = 24) across Svalbard. This resulted in a mean nuclear genome sequencing depth of 3.3× for the 90 samples sequenced to a lower target coverage and 23.4× for 10 deep-sequenced samples, after all filtering (see Figure S1 for the distribution of sequencing coverage). Four samples had <0.1× coverage and thus were used only for the site frequency spectrum (SFS) estimates. Genotype likelihoods for 8,255,693 variable sites were calculated from sequence data mapped to the caribou nuclear genome (Taylor et al., 2019) after quality filtering. In total, 6,309,215 of these sites remained after removing scaffolds mapping to the bovine X chromosome, and 467,146 sites remained in the dataset used for admixture and PCA analysis after LD pruning. Mean sequencing depth of the mitochondrial genome was >1000×.
| Admixture and principal component analyses
Admixture and principal component analysis (PCA) identified clear genetic structure in the Svalbard reindeer metapopulation (Figures 1 and 2).The optimal number of genetic populations identified using the ΔK method was K = 2 for the admixture analysis including the whole Svalbard-wide dataset (Figure S2).On the broadest scale, PCA and the K = 2 admixture model show tight clustering among a "central Svalbard" group of populations consisting of the reintroduced, the reintroduction source (ADV), and the naturally recolonized southern Spitsbergen (STH) populations (Figures 1 and 2).Strongly correlated residuals between individuals in the same population in the K = 2 model calculated using EvalAdmix (Figure S3) indicate this model was a poor fit to the data and may fail to capture finer scale structure.Admixture analysis only including the Central Svalbard genetic group revealed further genetic substructure with the optimum K = 2 (Figure S2).Instead of running additional hierarchical admixture analyses, we examined higher K-value models and the correlation of residuals using EvalAdmix to investigate finer scale genetic structure.We present K = 7 in Figure 2 as this reflects the hierarchical level of population structure relevant to the origins of, and admixture between, reintroduced and naturally recolonized populations.This is also the simplest model with a low correlation of residuals (<0.1) within all populations (Figure S3).The K = 3-10 and K = 5-10 models suggested that the remnant populations in EST and North East Land (NE), respectively, originate from distinct ancestral populations (Figure 2, Figure S4), supported by highly correlated residuals in models that assigned them as admixed populations (Figure S3) and segregation on the first two PC axes.Individuals in the naturally recolonized Wijdefjorden (WDF) population were assigned admixed ancestry from the MTR and NE ancestral populations (models K = 5-8), or a distinct ancestral population with some admixture with the ancestral NE population (models K = 8-10).
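For readers unfamiliar with the ΔK criterion mentioned above, the sketch below shows how it is commonly computed (the Evanno approach) from the log-likelihoods of replicate admixture runs at successive K values. The replicate values are invented for illustration, replicates are paired index-wise for simplicity, and the snippet is not the pipeline used in this study.

```python
# Sketch of the Evanno delta-K criterion:
#   deltaK(K) = mean(|L(K+1) - 2*L(K) + L(K-1)|) / sd(L(K)), taken over replicate runs.
# Log-likelihood values below are invented for illustration.
import statistics as st

def delta_k(loglik_by_k):
    """loglik_by_k: dict mapping K -> list of log-likelihoods from replicate runs."""
    ks = sorted(loglik_by_k)
    out = {}
    for k in ks[1:-1]:  # delta-K is undefined at the smallest and largest K tested
        second_diff = [abs(l_next - 2 * l_k + l_prev)
                       for l_prev, l_k, l_next in zip(loglik_by_k[k - 1],
                                                      loglik_by_k[k],
                                                      loglik_by_k[k + 1])]
        # the K with the largest delta-K is taken as the "optimal" number of clusters
        out[k] = st.mean(second_diff) / st.stdev(loglik_by_k[k])
    return out

runs = {1: [-9800.2, -9801.5, -9799.8],
        2: [-9300.1, -9302.4, -9299.0],
        3: [-9250.6, -9248.9, -9252.3],
        4: [-9240.1, -9241.8, -9239.5]}
print(delta_k(runs))
```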
On a finer scale, both PCA (Figures S5 and S6) and admixture analysis (Figure 2, Figure S7) showed clear segregation between Reintroduction 1, Reintroduction 2, ADV and STH, with little admixture between the two reintroductions.Evidence of admixture between reintroduced and natural populations was found in only one individual "B-13" from KAF (Reintroduction 1) that carried approximately 50% MTR ancestry, and one individual in WDF "T-2" that carried approximately 50% Reintroduction 1 ancestry (Figures 1, 2 and Figure S4).
| F ST analysis
Pairwise F ST estimates showed very strong genetic structure, and largely supported admixture and PCA results (Figure 3).Populations
| Mitochondrial haplotype diversity
We detected 38 variant sites among the 96 full mtDNA genomes, comprising 16 unique haplotypes (Figure 4, Table S1).Haplotypes could be grouped into seven distinct haplogroups with a maximum of two substitutions separating each haplotype from its nearest neighbour within the haplogroup (Figure 4c).Mitochondrial DNA haplotype diversity showed a similar pattern to nuclear genetic analyses.
Haplotypes in MTR, NE and EST were not found in any other population except WDF, which carried a mixture of haplotypes found in every population except STH and NE, plus one highly unique haplotype (Figure 4a).Overall, populations with central Svalbard ancestry (ADV, STH and reintroductions) shared similar haplogroups, with the notable exception that the most common haplogroup among reintroduced populations and STH was not found in the ADV source population but instead in EST.Within this shared haplogroup, the haplotypes found in STH and the reintroduced populations were mutually exclusive to those in EST, but differed by as little as a single mutation (Figure 4b).The populations KAF and NIF, founded by natural dispersal from Reintroduction 1 and 2 respectively, carried haplotypes belonging to haplogroups not found in other reintroduced populations (Figure 4a).The KAF individual admixed with MTR (Figure 2) carried a unique haplotype from the MTR haplogroup, and almost half of the NIF samples carried haplotypes in a haplogroup otherwise found only in ADV (Figure 4a).
Haplotype richness was strongly correlated with mean population genome-wide heterozygosity (Pearson correlation r = 0.86, p = 0.013). Each of the two reintroduction groups (populations combined) had haplotype richness similar to ADV, and higher than all other natural populations except WDF (Table S1 and Figure S8).
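The correlation reported above can be reproduced from per-population summaries in a few lines; the sketch below shows the calculation pattern with invented values, not the study's data.

```python
# Pearson correlation between per-population mtDNA haplotype richness and mean
# genome-wide heterozygosity. The numbers below are invented for illustration.
from scipy.stats import pearsonr

haplotype_richness = [5.0, 4.8, 2.1, 1.5, 3.9, 2.8, 4.2]
mean_heterozygosity = [1.9e-3, 1.8e-3, 1.1e-3, 0.9e-3, 1.6e-3, 1.3e-3, 1.7e-3]

r, p = pearsonr(haplotype_richness, mean_heterozygosity)
print(f"r = {r:.2f}, p = {p:.3f}")
```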
| Inbreeding
We detected 21,990 RoH longer than 0.5 Mbp across the 83 genomes included in the analysis, ranging from 0.5 Mbp to 12.76 Mbp in length and covering between 3% and 59% of individuals' genomes (Figure 6).Both reintroduced populations showed a higher mean inbreeding coefficient (Reintroduction 1 F ROH 0.236 ± 0.063 SD, Reintroduction 2 F ROH 0.237 ± 0.033) relative to their source population in Adventdalen (F ROH 0.186 ± 0.042).The higher inbreeding in
| DISCUSSION
By analysing whole genome sequences from 100 Svalbard reindeer across their range, we have quantified important genomic consequences and contrasts of population recovery through anthropogenic reintroductions versus natural recolonizations in the previously severely overharvested subspecies. We found strong archipelago-wide genetic structure, including two distinct genetic clusters corresponding to two reintroductions from a common source, with little evidence for extensive admixture between reintroduced and sampled natural populations (Figures 1-4). Our results show that reintroduced populations also maintained levels of heterozygosity comparable to their source population (Figure 5), although founder effects resulted in a small increase in the length and coverage of RoH across the genomes of reintroduced individuals (Figure 6). This contrasted strongly with non-admixed naturally recolonized populations, which had markedly lower genetic diversity and a greater proportion of their genomes comprising RoH. This suggests that non-admixed, naturally recolonized populations may be more vulnerable to the accumulation of genetic load and loss of adaptive variation than reintroduced populations, even when the latter originate from just a handful of individuals.
| Effect of reintroduction versus recolonization on genome-wide diversity
We estimated genome-wide heterozygosity and analysed the distribution of RoH lengths to separate the contribution of ancient and more recent demographic history, associated with reintroduction and recolonization, to patterns of genome-wide variation. Our analyses identified a weak signal of founder effects in both reintroductions, which had higher total inbreeding coefficients and longer RoH, but no significant reduction in average heterozygosity compared to the source population. The increased inbreeding in reintroduced populations was also accompanied by lower coalescent N e estimates, with Reintroduction 1 showing a greater reduction (12%) than Reintroduction 2 (4%) compared to the source population. Similar genomic signatures of reintroductions have previously been identified, including in European bison Bison bonasus (Druet et al., 2020), Magpie-robins Copsychus sechellarum (Cavill et al., 2022) and ibex Capra ibex (Grossen et al., 2018). The increased inbreeding in reintroduced populations was attributable to a greater proportion of the genome in RoH >1 Mbp, including in the 1-2 and 2-4 Mbp range. These size classes reflect shared ancestry and are thus indicative of effective population size approximately 25-50 and 12-25 generations ago, respectively, given a recombination rate of ~1 cM/Mbp (Kardos et al., 2017; Thompson, 2013). We found fewer long RoH in reintroduced Svalbard reindeer than reported for alpine ibex (Grossen et al., 2018) and European bison (Druet et al., 2020), and our results indicate that, as in the source population (ADV), short RoH (i.e. inbreeding due to ancient demography) still contribute the most to the total inbreeding coefficients of reintroduced individuals. Rapid population growth immediately after reintroduction (Aanes et al., 2000) and the extensive overlap between generations in reindeer are both characteristics that reduce the loss of genetic diversity after population bottlenecks (Allendorf, 1986; Nei et al., 1975). Furthermore, past and possibly ongoing dispersal among the secondary sub-populations recolonized from the initial reintroduced populations (Hansen et al., 2010; Stien et al., 2010) may have buffered against the effects of sequential founder events. Similar outcomes have been observed in reintroductions of a range of vertebrate and invertebrate species where rapid population growth occurred after reintroduction (Brekke et al., 2011; Hicks et al., 2007; Murphy et al., 2015; White et al., 2017). In particular, populations of other ungulates colonized by only a few founding individuals have been shown to retain high levels of heterozygosity as a result of overlapping generations and rapid population expansion (Kaeuffer et al., 2007; Kekkonen et al., 2012). In contrast, populations reintroduced with few founders that remained small for several generations have shown a pronounced reduction in genetic diversity (Williams et al., 2002; Wisely et al., 2008).
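The conversion between RoH length and time to the common ancestor used above follows the standard expectation for the length of an identity-by-descent segment (a textbook relation, stated here only to make the arithmetic explicit): a segment inherited from an ancestor g generations back has been broken up by 2g generations of recombination, so

$$\mathbb{E}[L] \approx \frac{100}{2g}\ \text{cM} \quad\Longrightarrow\quad g \approx \frac{50}{L\ [\text{Mbp}]} \qquad (\text{taking } 1\ \text{cM} \approx 1\ \text{Mbp}).$$

Hence RoH of 1-2 Mbp point to common ancestors roughly 25-50 generations back, and RoH of 2-4 Mbp to roughly 12-25 generations back, matching the ranges quoted in the text.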
In contrast to the reintroductions, the natural recolonization of STH and MTR resulted in populations with very low nuclear genome-wide heterozygosity and mtDNA haplotype diversity, and high total inbreeding coefficients with a longer distribution of RoH. In STH, high F RoH across the 1-2, 2-4 and 4-8-Mbp size classes, with a peak in the 2-4-Mbp range, indicates small N e around the same time as the reintroductions (i.e. during the recolonization period) (Figure 6b). STH also has the highest proportion of genomes within RoH >4 Mbp, indicating smaller N e in recent generations that may be due to founder effects associated with recolonization more recently than in the reintroduced populations. This evidence of smaller recent N e in STH than in Reintroduction 1 contrasts with comparable coalescent N e estimates between the two populations. This could reflect a similar ancestral N e, but it also could be that our estimate of coalescent N e in STH is inflated relative to individual-level diversity due to population structure within the STH samples, which were collected from a larger and more fragmented geographic area than other populations. Long RoH may be particularly significant to conservation as they represent younger haplotypes that have been exposed to less selection than older haplotypes, making them more likely to harbour deleterious mutations (Bortoluzzi et al., 2020) and to have a greater impact on fitness (Stoffel et al., 2021; Szpiech et al., 2013). The very high frequency of short RoH in the MTR genomes and a low coalescent N e estimate suggest this population originates from a source with a historically small population size. Naturally recolonized and reintroduced populations occupy regions with comparable habitat quality (Pedersen et al., 2023). The decreased heterozygosity and increased frequency of longer RoH in naturally recolonized compared to reintroduced populations is, therefore, not likely due to differing population sizes and growth rates resulting from environmental differences. Instead, this likely reflects multiple founder effects from a sequential recolonization process (Peeters et al., 2020), potentially involving few dispersing individuals, which was not a characteristic of the anthropogenic reintroduction. Sequential dispersal and establishment of isolated peninsulas and valleys during the recolonization of STH and MTR may have caused cumulative effects from multiple founder events, reducing N e and eroding genetic diversity (Clegg et al., 2002; Le Corre & Kremer, 1998; Pruett & Winker, 2005), resulting in increased inbreeding and long RoH despite current regional population sizes comparable to those of reintroduced populations.
Our inferences regarding the timing of past demography based on RoH length distributions should be considered only as relative between populations and may not accurately reflect the demographic history in an absolute number of generations.Inferences of demographic history using RoH length distributions are imprecise because spatial and temporal variation in generation times and the random nature of recombination result in high variation around the mean expected length of RoH (Druet & Gautier, 2017).Moreover, sequencing coverage, sequencing error rates, biased genotype likelihood estimates, as well as filtering and parameter settings can all affect estimates of heterozygosity (de Jager et al., 2021;Fuentes-Pardo & Ruzzante, 2017;Sánchez-Barreiro et al., 2021), and thus RoH length and frequency (Duntsch et al., 2021).We downsampled sequence data to allow unbiased comparison between samples and populations with varying levels of coverage, and to maximize the sample size for both the genome-wide heterozygosity and RoH analyses.This may limit the direct comparison of our estimates of heterozygosity to those from other studies using higher-coverage sequence data.
RoH analysis using low coverage sequence data is also sensitive to filtering, and may falsely identify multiple short RoH as a single longer RoH, or miss RoH altogether (Duntsch et al., 2021). However, the shorter-than-expected distribution of RoH in reintroduced individuals, given the known timing of the reintroduction founder effects, suggests our analysis underestimated RoH lengths, and this is consistent with an excess of short (<50-Kbp) gaps between RoH in our data, breaking up otherwise long RoH (Figure S9). Calling RoH after allowing one <50-Kbp gap within each 1-Mbp segment (similar to Wilder et al., 2022) gave qualitatively similar results but showed more long RoH, which aligned more closely with the known timing of the reintroduction founder events (Figure S10).
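The gap-allowance re-analysis described above amounts to merging RoH calls separated by short gaps. The sketch below shows one simple way such a merge can be implemented; the 50-kbp gap threshold mirrors the text, but the function is an illustration without the per-megabase limit on gap counts, not the pipeline actually used in the study.

```python
# Merge RoH calls on the same chromosome when separated by a short gap
# (here < 50 kbp), so that fragmented calls are treated as one longer RoH.
# Illustrative sketch only; not the study's pipeline.

def merge_roh(segments, max_gap_bp=50_000):
    """segments: list of (chrom, start, end), assumed sorted by chrom then start."""
    merged = []
    for chrom, start, end in segments:
        if merged and merged[-1][0] == chrom and start - merged[-1][2] < max_gap_bp:
            prev = merged[-1]
            merged[-1] = (chrom, prev[1], max(prev[2], end))  # extend the previous RoH
        else:
            merged.append((chrom, start, end))
    return merged

calls = [("chr1", 1_000_000, 1_900_000),
         ("chr1", 1_930_000, 2_800_000),   # 30-kbp gap: merged with the previous call
         ("chr1", 3_500_000, 4_000_000)]   # 700-kbp gap: kept separate
print(merge_roh(calls))
```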
| Genetic structure within the Svalbard reindeer metapopulation
Admixture, F ST and mitochondrial haplotype analyses identified strong genetic structure across the archipelago, in some cases even over short geographical distances, confirming patterns identified with microsatellite data in Peeters et al. (2020).Such genetic structure is typical of ungulate populations with a history of population fragmentation and bottlenecks due to past harvesting pressure (Haanes et al., 2010;Williams et al., 2002).On a finer scale, this study reveals population structure within the Central Svalbard group, that is, between the source and reintroduced populations, and among reintroduced populations.The two distinct genetic clusters among reintroduced populations corresponded to the two separate reintroductions to isolated peninsulas on the west coast of Spitsbergen.Pairwise F ST estimates reveal both reintroductions have resulted in a similar degree of genetic divergence from the source population.Founder effects and subsequent genetic drift commonly induce structure between reintroduced populations and their sources, typically reflecting isolation from the source population (Andersen et al., 2014;Brekke et al., 2011;Grossen et al., 2018;Latch & Rhodes, 2005;Williams et al., 2002).
Close genetic clustering of multiple sub-populations colonized from a common reintroduced founder population is characteristic of populations manipulated by reintroduction programmes (Andersen et al., 2014; Grossen et al., 2018). We found only weak genetic structure among populations originating from the first reintroduction, except for the rather isolated island PKF, whose population showed little admixture with other reintroduced populations (Figure S7), reflecting low dispersal across the sea (Peeters et al., 2020). Population monitoring after the reintroduction to BGR (Reintroduction 1) recorded substantial movement between BGR, SAR and KAF (Hansen et al., 2010; Stien et al., 2010), but GPS collar data from ~200 reindeer-years and resighting data from ~300 marked reindeer suggest that such exchange of individuals is now rare, with no more than five observations in the past decade (Hansen et al., unpubl. data). This is consistent with the observed lack of fjord ice in recent decades.
Our results indicate little gene flow between reintroduced populations and sampled natural populations.Only one individual from a reintroduced population was identified as admixed with a natural population, with admixture proportions consistent with an F1 offspring resulting from a mating between individuals in Reintroduction 1 and MTR genetic clusters.This individual carried a unique haplotype that differs by only a single mutation from MTR haplotypes, suggesting female dispersal from the north.MTR and BGR, the closest population sampled in Reintroduction 1, are separated by only 15 km across the mouths of Kongsfjorden and Krossfjorden, a span of water which has rarely or never frozen over since the reintroduction (Pavlova et al., 2019;Urbański & Litwicka, 2022).This lack of sea ice as a movement corridor, in combination with tide-water glaciers and steep mountains inhibiting alternative dispersal routes, has likely prevented gene flow and contributed to the extreme degree of genetic differentiation between these geographically proximate populations.
On the contrary, it is likely that these populations were more closely related in the past, that is, before the local extirpations due to overharvest and the subsequent reintroduction from central Spitsbergen (to BGR) and recolonization from the North (to MTR).This illustrates how both reintroductions and recolonization may cause dramatic changes in population-genetic structuring and diversity.
An exception to the otherwise clear separation of reintroduced versus naturally recolonized populations occurred along the northern side of Isfjorden in central Svalbard (NIF). We found stronger genetic structure between the two sampled populations in the second reintroduction group (DAU and NIF). Our mtDNA analyses also suggest that introgression may have occurred from a westward natural recolonization by an unsampled population carrying a haplotype not sampled in other populations, possibly facilitated by more frequent sea ice in the inner parts of the fjord (Muckenhuber et al., 2016). A higher coalescent N e in Reintroduction 2 than Reintroduction 1 and the presence of mtDNA haplotypes in both reintroduction groups that were absent from our ADV source population samples is consistent with introgression from an unsampled natural population east of Reintroduction 2. However, these haplotypes may have been present in ADV at the time of reintroduction, but if so it is unclear whether they existed only at low frequencies and increased in reintroduced populations due to founder effects, or whether there has been a significant change in the mtDNA haplotype diversity of the ADV population.
| Implications for conservation and management
Small or bottlenecked populations are at risk of reduced fitness due to the accumulation of genetic load (i.e.increased frequency and fixation of recessive deleterious mutations), making it an important consideration in conservation biology (Bertorelle et al., 2022;Kardos et al., 2021;van Oosterhout, 2020).Additionally, severe or extended bottlenecks are expected to reduce genome-wide diversity, including functional genetic variation potentially important for the longterm adaptive potential of populations (Frankham, 2005;Kardos et al., 2021).Recolonized or reintroduced populations of ibex that have experienced strong founder effects have shown increased realized genetic load compared to those subjected to less severe founder effects (Grossen et al., 2020).Similarly, bottlenecked populations of corvids Corvus spp (Kutschera et al., 2020), Montezuma quail Cyrtonyx montezumae (Mathur & DeWoody, 2021) and rattlesnakes Sistrurus spp (Ochoa & Gibbs, 2021) show higher realized genetic load than larger populations.Thus, the relatively mild founder effects of the Svalbard reindeer reintroductions suggest they are likely to have retained more functional variation and accumulated less realized load than the natural recolonizations, which probably experienced more severe and repeated founder effects.This, in addition to the increased frequency of long runs of homozygosity (that may more strongly reduce fitness) in naturally recolonized populations, means the naturally recolonized, rather than reintroduced populations, likely pose a greater conservation concern.However, while some of the naturally recolonized populations may be at higher risk of reduced fitness due to increased realized genetic load, these populations may also carry fewer highly deleterious recessive mutations due to purging (Glémin, 2003;Grossen et al., 2020), and inbreeding may have less effect on individual fitness (Mathur & DeWoody, 2021).
Several management practice recommendations have been put forward to give general guidelines for the number of individuals that need to be reintroduced, and the amount of gene flow required to maintain genetic diversity.For example, approximately 20 effective founders (Willis & Willis, 2010) and one effective migrant per generation (Vucetich & Waite, 2000) have been viewed as sufficient in mammal populations.Despite founder population sizes lower than this, and the potential for inbreeding depression and accumulation of genetic load, population surveys have shown that both reintroduced and naturally recolonized Svalbard reindeer populations have been expanding (Table S2; Le Moullec et al., 2019), meaning any increase in realized genetic load is not yet severe enough to prevent continued population growth.
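The one-migrant-per-generation guideline cited above is usually motivated by the equilibrium expectation of Wright's island model, a textbook approximation rather than a result of this study:

$$F_{ST} \approx \frac{1}{4 N_e m + 1}$$

Under this approximation, roughly one effective migrant per generation (N_e m ≈ 1) keeps expected F_ST near 0.2 regardless of population size; the rule is heuristic, and its adequacy depends on the species, landscape connectivity and management goals.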
Svalbard reindeer face rapid environmental change (Hansen, Pedersen, et al., 2019) that may have implications for the fitness and viability of populations in future.Thus far, most populations of Svalbard reindeer have experienced a net gain from climate change effects, with a warmer and longer snow-free season leading to increased survival, reproduction and abundances (Albon et al., 2017;Hansen, Pedersen, et al., 2019;Loe et al., 2021).However, an increase in winter precipitation or "rain-on-snow" events increases ground ice cover during winter (Peeters et al., 2019) which limits access to winter forage (Hansen et al., 2010;Loe et al., 2016), occasionally destabilising reindeer population dynamics (Kohler & Aanes, 2004; but see Hansen, Gamelon, et al., 2019;Stien et al., 2010).Any short-or medium-term evolutionary responses to such environmental changes will likely depend on sufficient standing genetic variation for natural selection to act upon (Carlson et al., 2014).The potentially genetically depleted naturally recolonized populations may have limited ability to adapt to these changes.There is also the potential for environmental changes to accentuate the fitness effects of realized genetic load.Inbreeding depression (i.e. the fitness effects of realized genetic load) can be more severe in stressful environmental conditions (Armbruster & Reed, 2005;Fox & Reed, 2011).Therefore, previously benign or mildly deleterious variation in reindeer populations may have more severe fitness effects in the future.The Svalbard Archipelago is also experiencing a rapid decline in sea-ice, which acts as a movement corridor important for facilitating gene flow in Svalbard reindeer (Peeters et al., 2020).Future sea-ice reductions may further isolate and fragment reindeer populations, so despite the overall population expansion of Svalbard reindeer, accumulation of inbreeding and further loss of diversity in populations subjected to founder effects should remain a concern.
Our results suggest that anthropogenic reintroductions can sometimes be more effective than natural recolonizations in establishing genetically healthy populations, especially under increased anthropogenic landscape fragmentation, such as that due to sea-ice reductions.This is likely due to a higher number of (potentially more genetically diverse) founders and the avoidance of sequential founder effects that may be an inherent part of the natural recolonization process in fragmented landscapes.
These findings also underline that translocations deserve increasing consideration as a tool to augment gene flow required for the adaptation and persistence of populations in fragmented habitats in the context of climate change (Chen et al., 2022) or more generally to maintain diversity and reduce inbreeding (Frankham, 2010).
Assisted gene flow could be considered for Svalbard reindeer should populations show signs of decline due to genetic factors in future.Much of the Svalbard archipelago has been recently recolonized by reindeer (i.e.within the last century; Le Moullec et al., 2019), and further sampling may shed more light on whether low diversity and high inbreeding is a general pattern across recolonized populations of Svalbard reindeer, and indeed in recolonized populations of other species that exist in fragmented habitats.
Future research may benefit from fitness and phenotypic data, modern and historical reindeer samples, and molecular methods of quantifying both functional variation and genetic load (Bertorelle et al., 2022) to better understand the status of Svalbard reindeer populations and, more generally, how reintroduction and natural recolonization processes affect genetic load, inbreeding depression and potential adaptive variation in the wild.
| Study area
The Norwegian high-arctic Svalbard archipelago lies between the Arctic Ocean, the Barents Sea and the Greenland Sea, approximately 700 km north of mainland Norway (76-81° N, 10-35° E). Only 16% of the archipelago's land area comprises vegetated peninsulas and valleys (Johansen et al., 2012), which are fragmented by tide-water glaciers, inland ice and mountains that cover the majority of the land area. Vegetation types in the archipelago include polar deserts, Northern Arctic tundra dominated by prostrate dwarf shrubs and cryptogams, and Middle Arctic tundra dominated by erect dwarf shrubs, forbs and grasses (Jónsdóttir, 2005).
| Study species
The Svalbard reindeer is an endemic subspecies that likely colonized the archipelago from Eurasia 6700 to 5000 years ago (Kvie et al., 2016). The subspecies is the dominant and only large herbivore in the terrestrial ecosystem, with little interspecific competition and almost non-existent predation pressure (but see Derocher et al., 2000; Stempniewicz et al., 2021). Reindeer were overharvested to near-extinction on Svalbard during the 19th and early 20th centuries, before coming under legal protection from hunting in 1925 (Le Moullec et al., 2019). By this time, reindeer had been extirpated from much of its former natural range, and isolated remnant populations were largely confined to four regions: the northern, northeastern and eastern extremes of the archipelago, as well as the central Spitsbergen region (Le Moullec et al., 2019; Lønø, 1959). After coming under legal protection, the subspecies began to recover but was still absent from much of its range in the 1970s, including the west coast of Spitsbergen. In 1978, 15 reindeer (with nine females and three males surviving the first months) were translocated from Adventdalen in central Spitsbergen to Brøggerhalvøya on the west coast as part of an ecological experiment (Figure 2; Aanes et al., 2000). The translocation habitat and the source population were chosen due to their proximity to human settlements rather than for the most favourable habitat or genetic factors. In 1984-1985, a second translocation reintroduced 12 individuals to Daudmannsøyra, on the north-western edge of Isfjorden (Gjertz, 1995); however, there is no population monitoring data to confirm these survived and established the current population. The reindeer population size at Brøggerhalvøya has been annually monitored since the reintroduction (Aanes et al., 2000; Hansen, Pedersen, et al., 2019). This has recorded the population's rapid expansion after translocation (from 12 individuals in 1978 to ~360 individuals in 1993) until a combination of high population density and poor winter conditions triggered a population crash (~80 individuals in 1994) and migration to recolonize the nearby peninsulas of Sarsøyra (1994) and Kaffiøyra (1996) to the south, and Prins Karls Forland island (~1994) to the west (Aanes et al., 2000; Gjertz, 1995).
Reindeer populations have since then recolonized most of their former range naturally, including southern Spitsbergen, the north coast of Isfjorden (to the east of the reintroduced population at Daudmannsøyra), the north-west coast south to Mitrahalvøya and Wijdefjorden in north-central Spitsbergen (Le Moullec et al., 2019).
Populations at Mitrahalvøya, Wijdefjorden and Southern Spitsbergen appear naturally recolonized from remnant populations, while the origins of the populations along North Isfjorden are unclear, but likely originated from the second reintroduction (Daudmannsøyra) and possibly admixed with naturally recolonizing individuals (Peeters et al., 2020). Both naturally recolonized and reintroduced populations occupy regions with comparable habitat suitability (Pedersen et al., 2023). Genetic evidence suggests the Svalbard reindeer metapopulation has low levels of genetic diversity (Kvie et al., 2016; Weldenegodguad et al., 2020) and shows strong population structure (Côté et al., 2002; Peeters et al., 2020), reflecting a history of population bottlenecks and founder effects and the largely philopatric nature of the species with no large-scale migration (Hansen et al., 2010).
| Sample collection
Genetic data were generated from tissue samples (ear, antler, bone or fur) collected in 2014-2018 from 100 individual reindeer originating from 12 (sub)populations on the Svalbard archipelago (Figure 2, Tables S3, S4). Based on the extirpation locations reported in Lønø (1959), we categorized these populations as either putative reintroduced or naturally recolonized extirpated populations or remnant non-extirpated populations. These included six populations believed to have originated from the two translocations (Aanes et al., 2000; Gjertz, 1995): (1) Brøggerhalvøya (BGR, the initial reintroduction site), Sarsøyra (SAR), Kaffiøyra (KAF) and Prins Karls Forland (PKF) from the first translocation (collectively referred to as "Reintroduction 1") and (2) Daudmannsøyra (DAU, the second reintroduction site) and North Isfjorden (NIF) from the second translocation (hereafter referred to as "Reintroduction 2"). Samples were also collected from the source population of the reintroductions (Adventdalen, ADV), from two other remnant populations (Eastern Svalbard [EST] and North East Land [NE]) and from the naturally recolonized populations (Mitrahalvøya [MTR], Southern Spitsbergen [STH] and Wijdefjorden [WDF]). Except for those from Daudmannsøyra (n = 8), which are new in this study, all samples were previously used to generate microsatellite data in a study by Peeters et al. (2020). Genomic library building was performed for all samples based on the method described in Carøe et al. (2018), and 90 of these were then sequenced to a target depth of 2-3× (see Figure S1, Table S4 for details). These sequencing data were combined with data from deep sequencing of the remaining 10 samples.
| Bioinformatic processing and genotype likelihood calculation
We used Paleomix version 1.2.13.4 (Schubert et al., 2014) to map demultiplexed sequence reads to the caribou reference genome assembled from a North American male (Taylor et al., 2019). This reference, while more phylogenetically divergent from the Svalbard reindeer, is more contiguous (N50 = 11.765 Mbp) than the Mongolian reindeer reference (N50 = 0.94 Mbp; Li et al., 2017) and more suitable for RoH inbreeding-type analyses. Adapters were trimmed with AdapterRemoval version 2 (Schubert et al., 2016) and the BWA aligner program version 0.7.15 was used with the MEM algorithm (Li, 2013) without filtering for mapping quality.
To account for the uncertainty in calling genotypes from low-depth sequencing data, we utilized ANGSD v0.93 (Korneliussen et al., 2014) to generate genotype likelihood data for each individual, and these, rather than explicitly called genotypes, were used in downstream analyses. Genotype likelihood files were generated in beagle format, inferring allele frequencies with fixed major and minor alleles, using the command-line arguments -doGlf 2 (admixture analyses) or -doGlf 3 (inbreeding analyses), -doMajorMinor 1 and -doMaf 1.
Variants were called with a p-value threshold of 1e−6 (-SNP_pval 1e-6) only at sites for which there was sequence data in at least 50 individuals (-minInd 50). Reads with mapping quality <30 and base quality <20, and those with multiple mapping hits, were filtered out using -minMapQ 30, -minQ 20 and -uniqueOnly 1, and low-quality reads were removed with -remove_bads 1. Scaffolds mapped to bovine sex chromosomes by Taylor et al. (2019) were removed. To reduce any issues related to paralogs or mapping errors, we filtered out sites that had average coverage greater than twice or less than ⅓ of the genome-wide average in a sub-sample of 10 individuals with equal (3×) coverage. We used the -C 50 parameter to adjust map quality for reads with a large number of mismatches to the reference genome, and the extended baq model to adjust quality scores around indels (-baq 2).
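To illustrate the depth-based site filter described above, the sketch below applies the same rule (keep sites whose coverage falls between one third and twice the genome-wide average) to a toy array of per-site depths. The depth values and variable names are hypothetical, and this is not ANGSD's internal implementation.

```python
import numpy as np

# Hypothetical per-site depth summed over a 10-individual subsample (toy values).
site_depth = np.array([28.0, 31.5, 2.1, 95.0, 30.2, 9.5, 33.0])

genome_wide_mean = site_depth.mean()

# Keep sites within (1/3)x to 2x of the genome-wide average; anything outside
# this window is treated as a potential paralog or mapping artefact and dropped.
keep = (site_depth >= genome_wide_mean / 3) & (site_depth <= 2 * genome_wide_mean)
filtered_site_indices = np.flatnonzero(keep)
print(filtered_site_indices)
```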
Four samples were excluded from the analysis due to low coverage (<20×). We specified haploid calls (-ploidy 1), only used reads with a minimum mapping quality of 25 (--minimum-mapping-quality 25) and specified a confidence threshold of 30 for variant calling (-stand-call-conf 30). We then converted the haplotype calls of variable sites to FASTA sequences for each individual. To investigate mitochondrial genetic structure and haplotype diversity, we used pegas v1.1 (Paradis, 2010) to construct a median-joining haplotype network based on the raw number of nucleotide differences between sequences. We also used pegas to calculate population haplotype richness rarefied to a sample size of five using the Hurlbert (1971) rarefaction method.
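As a minimal illustration of the rarefaction step, the sketch below implements the Hurlbert (1971) expected-richness formula that pegas applies; the haplotype counts in the example are hypothetical and the function is not the pegas implementation itself.

```python
from math import comb

def hurlbert_rarefied_richness(counts, n):
    """Expected number of haplotypes in a random subsample of n individuals
    (Hurlbert 1971): E[S_n] = sum_i [1 - C(N - N_i, n) / C(N, n)],
    where N_i is the count of haplotype i and N is the total sample size."""
    N = sum(counts)
    if n > N:
        raise ValueError("subsample size exceeds total sample size")
    total = comb(N, n)
    return sum(1 - comb(N - Ni, n) / total for Ni in counts)

# Hypothetical population with three haplotypes observed 6, 2 and 1 times,
# rarefied to a sample size of five as in the study.
print(hurlbert_rarefied_richness([6, 2, 1], n=5))
```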
| Ancestry and admixture analyses
We used the maximum likelihood-based clustering analysis software package NGSadmix (Skotte & Albrechtsen, 2013) to infer population structure and identify admixture between populations using genotype likelihood data. For admixture analysis, we excluded samples with sequencing depth <0.1× (n = 4) and removed two out of three closely related individuals in the reintroduced Daudmannsøyra population identified using NgsRelate (Hanghøj et al., 2019), because closely related individuals can bias admixture results (Garcia-Erill & Albrechtsen, 2020). We LD pruned the genotype likelihood data by first calling genotypes in ANGSD to generate Tped/Tfam files (-doGeno 32 and -doPlink 2), with which we performed variant LD pruning in PLINK v1.9 (Chang et al., 2015) using --indep-pairwise 50 5 0.3 to specify a window size of 50, step size of 5 and an r² threshold of 0.3. Admixture models were run for the number of genetic clusters (K) ranging from 2 to 10, with 10 replicates of each. Only sites with a minimum minor allele frequency greater than 0.02 (using -minMaf 0.02) and that had data in at least half (46) of the 92 individuals in the analysis (using -minInd 46) were included in the analysis. We ran admixture analyses on the full dataset including all populations (467,146 sites), and also a separate analysis on a subset including only the reintroduction source, reintroduced and Southern Spitsbergen populations (427,643 sites). For each value of K, the replicate with the highest likelihood was selected.
We calculated ΔK (Evanno et al., 2005) using CLUMPAK (Kopelman et al., 2015); however, uneven sampling of ancestral populations and strong genetic drift can bias rule-based model selection (Garcia-Erill & Albrechtsen, 2020). Therefore, we considered all K models and examined the correlation of residuals using EVALadmix (Garcia-Erill & Albrechtsen, 2020) to evaluate and interpret results, instead of relying solely on a rule-based model selection procedure.
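For readers unfamiliar with the ΔK statistic, the sketch below shows one common formulation of Evanno et al.'s (2005) criterion as applied to per-K log-likelihoods; the log-likelihood values are made up for illustration and this is not the CLUMPAK implementation.

```python
import numpy as np

# Hypothetical NGSadmix log-likelihoods for K = 2..6 with three replicates each
# (illustrative numbers only, not values from this study).
loglik = {2: [-1_000_250.0, -1_000_240.0, -1_000_260.0],
          3: [-999_100.0, -999_120.0, -999_090.0],
          4: [-998_900.0, -998_930.0, -998_880.0],
          5: [-998_850.0, -998_870.0, -998_860.0],
          6: [-998_840.0, -998_865.0, -998_845.0]}

def evanno_delta_k(loglik):
    """Delta K (Evanno et al., 2005), as commonly implemented:
    |mean L(K+1) - 2 mean L(K) + mean L(K-1)| / sd of L(K) across replicates."""
    Ks = sorted(loglik)
    mean_l = {K: float(np.mean(loglik[K])) for K in Ks}
    sd_l = {K: float(np.std(loglik[K], ddof=1)) for K in Ks}
    return {K: abs(mean_l[K + 1] - 2 * mean_l[K] + mean_l[K - 1]) / sd_l[K]
            for K in Ks[1:-1]}

print(evanno_delta_k(loglik))   # the largest value suggests the best-supported K
```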
| Principal component analysis
We conducted principal component analysis (PCA) using the software package PCAngsd (Meisner & Albrechtsen, 2018) to estimate a genetic covariance matrix using individual allele frequencies based on the same genotype likelihood data used in the admixture analysis, then computed eigenvectors and eigenvalues using the eigen function in R 3.6 (R Core Team, 2019). To visualize the data, we plotted the first four PC axes, and used ggplot2 (Wickham, 2016) to calculate 95% CI ellipses of the mean principal component coordinates of each natural population and for Reintroductions 1 and 2. We repeated this analysis using only the individuals from the Adventdalen, Southern Spitsbergen and reintroduced populations to characterize fine-scale population structure.
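A minimal sketch of the eigendecomposition step described above is given below; the covariance matrix is a small hypothetical example rather than PCAngsd output, and the code stands in for the eigen call performed in R.

```python
import numpy as np

# Hypothetical symmetric genetic covariance matrix for five individuals
# (in practice this would be loaded from the PCAngsd output file).
C = np.array([[1.0, 0.6, 0.5, 0.1, 0.0],
              [0.6, 1.0, 0.4, 0.2, 0.1],
              [0.5, 0.4, 1.0, 0.1, 0.2],
              [0.1, 0.2, 0.1, 1.0, 0.7],
              [0.0, 0.1, 0.2, 0.7, 1.0]])

# eigh returns eigenvalues in ascending order, so reverse to put PC1 first.
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

pc_scores = eigvecs[:, :2]              # individual coordinates on PC1 and PC2
explained = eigvals[:2] / eigvals.sum() # proportion of variance on each axis
print(pc_scores)
print(explained)
```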
| F ST analysis
We quantified population differentiation by estimating pairwise F ST between each pair of populations using realSFS in ANGSD v0.93, based on 2D (pairwise) population site frequency spectra (SFS) including all samples. First, we generated unfolded per-site allele frequencies (SAF) for each population using the -dosaf 1 argument in ANGSD with the same parameters and site filtering as when generating genotype likelihoods, and specified the reference as the ancestral genome. Then, with the realSFS module in ANGSD, we used the unfolded SAF to generate folded 2D SFS priors for each pair of populations using -fold 1, since no ancestral states were available to polarize the ancestral/derived alleles. We then input the unfolded SAFs and the folded 2D SFS prior to realSFS to estimate per-site and global F ST, specifying the Hudson estimation method, which is more suitable for smaller sample sizes (Bhatia et al., 2013), using --whichFST 1. Finally, we used the realSFS fst stat function to calculate the weighted global F ST for each population pair.
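The sketch below illustrates the Hudson F ST estimator of Bhatia et al. (2013) referenced above, computed as a "ratio of averages" across sites; the allele frequencies and sample sizes are hypothetical, and this is not the realSFS implementation.

```python
import numpy as np

def hudson_fst(p1, p2, n1, n2):
    """Hudson F_ST estimator (Bhatia et al., 2013), ratio of averages over sites.
    p1, p2: per-site minor allele frequencies in the two populations;
    n1, n2: haploid sample sizes in each population."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    num = (p1 - p2) ** 2 - p1 * (1 - p1) / (n1 - 1) - p2 * (1 - p2) / (n2 - 1)
    den = p1 * (1 - p2) + p2 * (1 - p1)
    return num.sum() / den.sum()

# Hypothetical frequencies at four sites in two populations of 10 diploids each.
print(hudson_fst([0.10, 0.50, 0.80, 0.30],
                 [0.05, 0.20, 0.95, 0.30], n1=20, n2=20))
```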
| Heterozygosity
We estimated genome-wide heterozygosity for each individual with coverage >2.5× using realSFS in ANGSD v0.93, based on the folded site frequency spectrum of each individual (Korneliussen et al., 2014). We used the same site filtering and parameters as for the genotype likelihoods described above; however, since coverage can bias heterozygosity estimates in our data (see Figure S11), we downsampled each sample to 2.5× coverage using -DownSample in ANGSD to allow unbiased comparisons between the maximum number of samples. To estimate heterozygosity in ANGSD, we generated a folded SFS from the unfolded SAF separately for each individual and divided the number of heterozygous sites by the total number of non-N sites.
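The final division described above amounts to the following calculation, assuming the per-individual folded SFS reduces to two entries (homozygous and heterozygous site counts); the numbers below are hypothetical, not values from the study.

```python
# Hypothetical folded per-individual SFS: [homozygous sites, heterozygous sites].
sfs = [1_052_339_210.0, 1_265_400.0]

# Genome-wide heterozygosity = heterozygous sites / all non-N sites used.
heterozygosity = sfs[1] / sum(sfs)
print(f"genome-wide heterozygosity: {heterozygosity:.5f}")
```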
| Coalescent effective population size
To estimate the coalescent effective population size, we first used ANGSD to calculate the per-site sample allele frequency (SAF) likelihoods for each population (-doSaf 1), using the same downsampled samples used for the heterozygosity analysis with the same filtering and parameters, but only considering sites with data from a minimum of 2/3 of the population sample size. With the realSFS module in ANGSD, we used the unfolded SAF to generate a folded single-population SFS, then input the unfolded SAF and folded SFS prior to the realSFS saf2theta function to calculate per-site thetas.
| Inbreeding and runs of homozygosity
We used ngsF-HMM (Vieira et al., 2016) to identify tracts of individual genomes that are identical by descent (hereafter referred to as RoH), and to estimate inbreeding coefficients from genotype likelihoods. This method utilizes a hidden Markov model approach to estimate per-site probabilities of being IBD rather than a rule-based method, and can be used with genotype likelihood data, so it is more appropriate for use with low-depth WGS data. We excluded scaffolds shorter than 10 Mbp from inbreeding analyses, leaving approximately 56% of the assembled genome (1.235 Gbp) covered by 1,640,852 variable sites on 64 scaffolds after filtering. We also applied stricter filtering than with heterozygosity, restricting this analysis to samples with >2.5× coverage based on reads used by ANGSD on the >10-Mbp scaffolds after all filtering parameters, and then downsampled each to 2.7× coverage using samtools (Li et al., 2009) to allow unbiased RoH comparisons between our samples. We inferred the approximate age of inbreeding (i.e. the number of generations back to the common ancestor that an RoH was inherited from) based on RoH lengths, using the equation G = 100/(2rL), where r is the recombination rate and L is the length of the RoH in Mbp (Kardos et al., 2017; Thompson, 2013), assuming a recombination rate similar to red deer (Cervus elaphus) of ~1 cM/Mbp (Johnston et al., 2017).
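The age-of-inbreeding relation quoted above can be made concrete with the short sketch below; the recombination rate of 1 cM/Mbp follows the red deer assumption in the text, and the example RoH lengths simply correspond to the size-class boundaries used later in the results.

```python
def generations_to_ancestor(roh_length_mbp, rec_rate_cm_per_mbp=1.0):
    """Approximate generations back to the common ancestor of an RoH:
    G = 100 / (2 r L), with r in cM/Mbp and L in Mbp (Thompson 2013;
    Kardos et al. 2017)."""
    return 100.0 / (2.0 * rec_rate_cm_per_mbp * roh_length_mbp)

# With r ~ 1 cM/Mbp, an 8-Mbp RoH points to an ancestor ~6 generations back,
# a 1-Mbp RoH to ~50 generations back.
for length_mbp in (8, 4, 2, 1, 0.5):
    print(length_mbp, "Mbp ->", generations_to_ancestor(length_mbp), "generations")
```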
Populations assigned in admixture analyses to the same reintroduction showed lower pairwise F ST values between each other than with populations assigned to the other reintroduction. Similar levels of genetic differentiation were found in comparisons between the source population and both groups of reintroduced populations, and between the two groups of reintroduced populations. The naturally recolonized population at MTR was clearly the most genetically distinct, showing extremely high differentiation (>0.35) to all other populations except Wijdefjorden.
F I G U R E  Principal component analysis plot showing PC 1 and PC 2. Shapes indicate the type of Svalbard reindeer population and colours indicate the sample population. Ellipses represent the 95% CI of the mean PC coordinates for each natural population (except NE, due to too few samples) and each reintroduction group. Based on the NGSadmix K = 2 model results, two individuals (T-2 and B-13) that represented admixture between strongly differentiated populations were not included in population ellipse calculations. ADV, Adventdalen; BGR, Brøggerhalvøya; DAU, Daudmannsøyra; EST, Eastern Svalbard; KAF, Kaffiøyra; MTR, Mitrahalvøya; NE, North East Land; NIF, North Isfjorden; PKF, Prins Karls Forland; SAR, Sarsøyra; STH, Southern Spitsbergen; WDF, Wijdefjorden.
The higher F ROH in reintroduced populations was due to greater coverage of moderate-length RoH (between 1 and 8 Mbp) that resulted in increased median RoH lengths (Figure 6), indicating a reduced effective population size in the more recent past compared to the ADV source population. Non-admixed naturally recolonized populations MTR and STH had the highest inbreeding coefficients (mean F ROH 0.477 ± 0.084 and 0.377 ± 0.07, respectively) and longest median RoH lengths (Figure 6). We found the strongest signals of recent inbreeding in STH, with a high proportion of individuals' genomes covered by RoH >4 Mbp relative to other populations and a peak in F ROH in the 2-4 Mbp class (Figure 6b). In MTR, F ROH was highest in the 1-2 Mbp class with a large contribution from all other size classes <8 Mbp relative to other populations, but very long (>8 Mbp) RoH were rare. The admixed naturally recolonized population in WDF had the highest variation in total F ROH, consistent with high variation in the proportion of individuals' shared ancestry. The two remnant natural populations EST and NE both had high F ROH in short RoH classes compared to the ADV population, with EST having low F ROH in intermediate and long size classes comparable to ADV and the two samples from NE showing relatively high F ROH in all size classes.

F I G U R E 2 Admixture analysis results from NGSadmix analysis of Svalbard reindeer nuclear genomes. Upper: admixture proportions for model K = 7 (bars), shown at population locations; arrows indicate translocations for Reintroductions 1 and 2. Lower: admixture proportions for K = 2 and K = 7 models; vertical bars represent individual reindeer and colours correspond to genetic cluster assignment. Black arrows indicate reintroduction translocations and dashed blue lines indicate assumed natural recolonization routes. Maps obtained from the Norwegian Polar Institute (toposvalbard.npolar.no). Rem: natural remnant population; NR: non-admixed naturally recolonized population; AR: admixed naturally recolonized population. ADV, Adventdalen; BGR, Brøggerhalvøya; DAU, Daudmannsøyra; EST, Eastern Svalbard; KAF, Kaffiøyra; MTR, Mitrahalvøya; NE, North East Land; NIF, North Isfjorden; PKF, Prins Karls Forland; SAR, Sarsøyra; STH, Southern Spitsbergen; WDF, Wijdefjorden.
, i.e. before the time of the reintroduction translocation, given an average generation time of 5-6 years. We found only a small increase in the coverage of long RoH (4-8 Mbp), which is the expected length of RoH caused by shared ancestors 6-12 generations ago (around the time of the reintroduction), indicating our analysis may have underestimated RoH lengths. However, generation time may have been reduced in the post-reintroduction period of strong population growth due to an abundance of resources resulting in an earlier age of first reproduction and high fecundity (Giaimo & Traulsen, 2023). Nevertheless, a low frequency of long RoH relative to naturally recolonized populations suggests lower levels of recent inbreeding among reintroduced individuals. The founding population sizes of 12 Svalbard reindeer have thus been sufficient to maintain most of the heterozygosity of the source population and avoid serious accumulation of inbreeding in both reintroductions, both of which are a key concern for reintroduced populations (Frankham et al., 2017; Weeks et al., 2011).
High coverage of short RoH indicates small population sizes many generations ago (i.e. prior to recolonization), and high coverage of moderate-length RoH indicates founder effects or small N e during the last 25 generations (i.e. during recolonization). ADV, the two reintroduced populations and the naturally recolonized STH all had a similar average proportion of genomes within RoH <1 Mbp, indicating similar demographic histories >50 generations ago, consistent with these populations sharing a common origin as indicated by our admixture results. Therefore, differences in patterns of genome-wide diversity in STH and the reintroduced populations likely reflect differences between anthropogenic reintroductions and natural recolonization, rather than differences in genetic diversity between their ancestral populations. The extremely low levels of heterozygosity and high inbreeding levels in MTR are thus likely a result of a source population with a historically small population size that harboured little genetic diversity prior to recolonization, in addition to more recent founder effects associated with the recolonization process. Indeed, this is consistent with the reported population size of reindeer in the isolated North-West Spitsbergen area being only 2-300 individuals, 30 years after the historical overharvesting was ended (Lønø, 1959). Regional-scale population size estimates show comparable population sizes and strong population growth in both reintroduced and naturally recolonized areas (Table S2; Le Moullec et al., 2019).

F I G U R E 5 Svalbard reindeer genome-wide heterozygosity estimates using sequence data downsampled to 2.5× coverage. Reintroduced populations were grouped into Reintroductions 1 and 2 based on the admixture analyses. Rem: natural remnant population; NR: non-admixed naturally recolonized population; AR: admixed naturally recolonized population. ADV, Adventdalen; BGR, Brøggerhalvøya; DAU, Daudmannsøyra; EST, Eastern Svalbard; KAF, Kaffiøyra; MTR, Mitrahalvøya; NE, North East Land; NIF, North Isfjorden; PKF, Prins Karls Forland; SAR, Sarsøyra; STH, Southern Spitsbergen; WDF, Wijdefjorden.

F I G U R E 6 (a) Cumulative total F ROH from the five runs of homozygosity (RoH) size classes (0.5-1 Mbp, 1-2 Mbp, 2-4 Mbp, 4-8 Mbp and >8 Mbp), with each bar representing an individual Svalbard reindeer genome; (b) proportion of individual genomes within RoH of each size class; (c) median RoH lengths of individuals in each population. Rem: natural remnant population; NR: non-admixed naturally recolonized population; AR: admixed naturally recolonized population. ADV, Adventdalen; BGR, Brøggerhalvøya; DAU, Daudmannsøyra; EST, Eastern Svalbard; KAF, Kaffiøyra; MTR, Mitrahalvøya; NE, North East Land; NIF, North Isfjorden; PKF, Prins Karls Forland; SAR, Sarsøyra; STH, Southern Spitsbergen; WDF, Wijdefjorden.
Several factors could break up true RoH and contribute to downwardly biased RoH length distributions: (1) sequencing errors resulting in false heterozygosity can break up RoH, and will have a larger effect on accurate identification of longer RoH (MacLeod et al., 2013); (2) any RoH spanning the genome assembly scaffold edges will be broken up (Brüniche-Olsen et al., 2018); thus, lacking a chromosome-level genome assembly, we only included scaffolds longer than 10 Mbp in RoH analyses; (3) errors in mapping sequence reads or structural variation between the caribou reference genome assembly and the Svalbard reindeer genome could also break up long RoH.
| DNA extraction, library building and sequencing
DNA was extracted from ear tissue for the eight samples from Daudmannsøyra using a Qiagen (Hilden, Germany) DNeasy Blood & Tissue extraction kit according to the manufacturer's instructions, except for the addition of RNase A (details in SI 1). DNA extraction for all other samples (n = 92) is described in Peeters et al. (2020).
We used these per-site thetas with the thetaStat module do_stat function to calculate global Watterson's theta (Ɵ W ) for each population. Using the Ɵ W estimates, we estimated the coalescent effective population size based on the equation Ɵ W = 4N e μ, assuming a per-generation mutation rate μ of 3.46 × 10⁻⁸, as estimated for caribou by Dedato et al. (2022).
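Rearranging the relation Ɵ W = 4N e μ gives the conversion used here, sketched below; the mutation rate is the caribou value quoted in the text, but the example theta is hypothetical rather than a value from this study.

```python
MU = 3.46e-8   # per-generation, per-site mutation rate assumed for caribou (Dedato et al., 2022)

def coalescent_ne(theta_w_per_site, mu=MU):
    """Coalescent effective population size from Watterson's theta: theta_W = 4 Ne mu."""
    return theta_w_per_site / (4.0 * mu)

# Hypothetical per-site Watterson's theta for one population (illustrative only).
print(coalescent_ne(1.2e-3))   # ~8,700 individuals
```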
A Novel Multiplex PCR Discriminates Bacillus anthracis and Its Genetically Related Strains from Other Bacillus cereus Group Species
Anthrax is an important zoonotic disease worldwide that is caused by Bacillus anthracis, a spore-forming pathogenic bacterium. A rapid and sensitive method to detect B. anthracis is important for anthrax risk management and control in animal cases to address public health issues. However, it has recently become difficult to identify B. anthracis by using previously reported molecular-based methods because of the emergence of B. cereus, which causes severe extra-intestinal infection, as well as the human pathogenic B. thuringiensis, both of which are genetically related to B. anthracis. The close genetic relation of chromosomal backgrounds has led to complexity of molecular-based diagnosis. In this study, we established a B. anthracis multiplex PCR that can screen for the presence of B. anthracis virulent plasmids and differentiate B. anthracis and its genetically related strains from other B. cereus group species. Six sets of primers targeting a chromosome of B. anthracis and B. anthracis-like strains, two virulent plasmids, pXO1 and pXO2, a bacterial gene, 16S rRNA gene, and a mammalian gene, actin-beta gene, were designed. The multiplex PCR detected approximately 3.0 CFU of B. anthracis DNA per PCR reaction and was sensitive to B. anthracis. The internal control primers also detected all bacterial and mammalian DNAs examined, indicating the practical applicability of this assay as it enables monitoring of appropriate amplification. The assay was also applied for detection of clinical strains genetically related to B. anthracis, which were B. cereus strains isolated from outbreaks of hospital infections in Japan, and field strains isolated in Zambia, and the assay differentiated B. anthracis and its genetically related strains from other B. cereus group strains. Taken together, the results indicate that the newly developed multiplex PCR is a sensitive and practical method for detecting B. anthracis.
Introduction
Anthrax is an important worldwide zoonosis caused by the spore-forming bacterium Bacillus anthracis, a Gram-positive, rod-shaped bacterium that is transmitted among soil, wildlife, livestock and humans. This disease is still enzootic in most countries in Africa and Asia and occurs sporadically in Europe, the American continent and Australia [1]. The cycle of infection is primarily found in herbivores (e.g., cattle, sheep, goats) infected after contact with soil contaminated with spores. Spores ingested by herbivores germinate within the host to produce vegetative forms, which multiply and express their virulence factors, killing the host [2]. Human infections are caused by contact with infected animals or their products, and the following three forms of anthrax are found depending on the three different infection routes: cutaneous, gastrointestinal and inhalation forms of anthrax. Cutaneous anthrax accounts for more than 95% of human cases worldwide [3]. In cases of gastrointestinal anthrax, the lesion may occur anywhere within the gastrointestinal tract, though mostly in the ileum and cecum, after ingestion of contaminated meat that has not been sufficiently cooked. In recent years, injection anthrax has been identified as the fourth form of anthrax [4]. This type of anthrax is recognized among the drug (e.g., heroin) users in Europe, and the symptoms depend on the type of infection (i.e., cutaneous, gastrointestinal, and inhalation).
B. anthracis is an important pathogen that has been extensively studied not only for maintenance of public health worldwide but also for the defense against an effective biological weapon. Indeed, envelopes containing B. anthracis spores were mailed to news media companies and government officials, leading to the first bioterrorism-related cases of anthrax in the United States of America in 2001 [5]. Anthrax occurred in 22 people with 5 deaths from inhalation anthrax in these biological attacks. The lethal dose for 50% of test subjects for humans is between 8,000 and 10,000 spores, and once symptoms of inhalational anthrax appear, treatment is almost invariably ineffective [6].
A rapid and sensitive method to detect B. anthracis is important for control of anthrax in animal cases to maintain public health and for appropriate treatment in human cases. Several PCR [1,[7][8][9], real-time PCR [10] and loop-mediated isothermal amplification (LAMP) [11] techniques have been developed to detect chromosomal genes (e.g., S-layer (sap) and Ba813), virulent plasmid pXO1, encoding protective antigen (pag), lethal factor (lef) and edema factor (cya), and virulent plasmid pXO2, encoding capsule protein (cap). These genes have been used as markers to discriminate B. anthracis from other bacteria in the B. cereus group, which includes B. anthracis, B. cereus (known as a potential food poisoning pathogen) [12], B. thuringiensis (known as an insect pathogen) [13], B. mycoides, B. pseudomycoides and B. weihenstephanensis. Among the B. cereus group species, B. anthracis, B. cereus and B. thuringiensis are considered as the same species based on genetic evidence [14] and are difficult to discriminate despite the existence of gene markers. Furthermore, there have been reports about severe manifestations and fatal cases caused by B. cereus (extra-intestinal B. cereus, e.g., strains F837/76, 03BB102) [15,16], B. thuringiensis isolated from a necrotic human wound (human pathogenic B. thuringiensis, e.g., 97-27 subsp. konkukian serotype H34) [17,18] and B. anthracis-like strains (e.g., B. cereus var. anthracis strain CI isolated from chimpanzees) [19,20]. Some of these bacteria have been shown to harbor B. anthracis virulent plasmids [21][22][23] or had chromosomal backgrounds closely related to B. anthracis [16,17]. Therefore, the establishment of a new detection method is needed to discriminate B. anthracis from other B. cereus group species including the above genetically related strains of B. anthracis.
In this study, we developed a practical B. anthracis multiplex PCR containing internal control primers targeting exogenous bacterial and mammalian genes. This assay is capable of screening for the presence of pXO1 and pXO2 and of differentiating B. anthracis and its genetically related strains from other B. cereus group.
Bacterial strains and culture conditions
Bacillus strains and bacterial DNAs used for optimization of 16S rRNA amplification are listed in Tables 1-3. Bacillus species listed in Tables 1 and 2, except for Sterne 34F2, GTC02891, GTC02896, GTC02916, GTC02917, GTC03222 and GTC02826T, were isolated from natural sources according to the following protocol. One gram of specimen was suspended in 10 ml of sterilized saline followed by incubation at 75°C for 20 min. An aliquot of the sample was inoculated on 10% (v/v) sheep blood agar and cultured at 37°C overnight. The isolates were used in this study after cloning and characterization. Bacillus strains were grown on brain heart infusion agar or 10% (v/v) sheep blood agar at 37°C overnight for subsequent studies. B. anthracis was handled in the biosafety level 3 facility of the Hokudai Center for Zoonosis Control in Zambia, School of Veterinary Medicine, The University of Zambia. Six GTC strains were obtained from the Pathogenic Microorganism Genetic Resource Stock Center, School of Medicine, Gifu University (Table 2). Thirteen bacterial genomes were obtained from the Pathogenic Microbes Repository Unit, International Research Center for Infectious Diseases, Institute of Medical Science, The University of Tokyo (Table 3).
Ethical statements for the project
Approval for wildlife animal sampling and soil sampling in the national park and game management areas was obtained from the Zambia Wildlife Authority (ZAWA) in the Republic of Zambia. Some of the animals were hunted by a hunting company for meat with permission from ZAWA to control their large numbers, and tissue specimens from non-endangered species, African buffalo, puku, warthog, spotted hyena and bushbuck, were kindly provided. Other samples were taken from carcasses to determine the cause of death. We received a waiver of approval from the University of Zambia Research Ethics Committee to do animal research following approval from ZAWA and the Ministry of Agriculture and Livestock. The human blood sample was taken from the first author (Dr. Hirohito Ogawa) with the author's consent.
DNA extraction
A bacterial colony on the agar plate was suspended in 100 μl of sterile phosphate-buffered saline, followed by heating at 95°C for 15 min and centrifugation at 15,000 rpm for 2 min at 4°C, and then 1 μl of the supernatant was directly used for PCR. Human and animal DNAs were extracted from 100 μl of blood or 10% (w/v) tissue homogenate by using TRIzol Reagent (Life Technologies) or a QIAamp DNA Mini Kit (QIAGEN) according to the manufacturer's instructions. The extracted DNA was dissolved in 100 μl of distilled water or Buffer AE (QIAGEN).
For a sensitivity test, 100 μl of LB broth was collected, followed by heating at 95°C for 20 min and centrifugation at 15,000 rpm for 2 min at 4°C, and then 1 μl of the supernatant was directly used for PCR.
Primer design
Six primer sets were designed for the multiplex PCR targeting the B. anthracis chromosome and two plasmids, pXO1 and pXO2, common bacterial and mammalian genes and chromosome of the genetically related strains of B. anthracis to detect and discriminate B. anthracis and other Bacillus species by the amplified band patterns (Table 4). Three primer sets targeting the B. anthracis Ba813 region, tentatively termed "BA_5031" from the locus tag name of B. anthracis strain Ames, on the chromosome, pag encoded on pXO1 and cap encoded on pXO2 were designed to detect B. anthracis genes. One primer set (hBC/BT) targeting the region, tentatively termed "BACI_c47770" from the locus tag name of Bacillus cereus var. anthracis strain CI [19], which was capable of detecting the region of the genetically related strains of B. anthracis but not that of B. anthracis, was designed for discriminating them. Two primer sets targeting the 16S rRNA gene (16S rRNA) on a bacterial chromosome and beta-actin gene (ACTB) on a mammalian chromosome were designed as internal controls. Amplifx 1.7.0 (http://crn2m.univ-mrs.fr/pub/amplifx-dist) was used to simulate PCR reaction using the newly designed primers.
PCR with newly designed primer set(s)
Multiplex PCR was carried out using a TaKaRa Multiplex PCR Assay Kit (TaKaRa) according to the manufacturer's instructions. A total volume of 25 μl of reaction mixture containing 0.6 μM of BA_5031 primer set, 0.2 μM of cap and ACTB primer sets, 0.1 μM of hBC/BT primer set, 0.05 μM of 16S rRNA and pag primer sets and 1 μl of template DNA was used ( Table 4). The multiplex PCR program consisted of initial denaturation at 94°C for 1 min followed by 35 cycles of denaturation at 94°C for 30 sec, annealing at 58°C for 2 min, extension at 72°C for 2 min, and final extension at 72°C for 10 min. A PCR using each primer set was carried out using a TaKaRa Multiplex PCR Assay Kit (TaKaRa) according to the manufacturer's instructions. A total volume of 25 μl of reaction mixture containing 0.2 μM of each primer and 1 μl of template DNA was used. The PCR program consisted of initial denaturation at 94°C for 30 sec followed by 35 cycles of denaturation at 94°C for 30 sec, annealing at 58°C for 25 sec, and extension at 72°C for 1 min.
Previously reported PCR compared to the newly developed multiplex PCR
Five PCRs with each primer shown in Table 5 were carried out using a TaKaRa Multiplex PCR Assay Kit (TaKaRa) according to the manufacturer's instructions. Primer concentrations and annealing temperatures were the same as those in previous studies [1,[7][8][9][10]. The PCR program for single PCR consisted of initial denaturation at 94°C for 1 min followed by 35 cycles of denaturation at 94°C for 30 sec, annealing at a stable temperature for each primer set (Table 4) for 25 sec, extension at 72°C for 1 min, and final extension at 72°C for 5 min. The PCR program for duplex PCR consisted of initial denaturation at 94°C for 1 min followed by 35 cycles of denaturation at 94°C for 30 sec, annealing at a stable temperature for the primer sets (Table 4) for 90 sec, extension at 72°C for 90 sec, and final extension at 72°C for 5 min. A commercial kit (TaKaRa Bacillus anthracis PCR Detection Kit, TaKaRa) was used according to the manufacturer's instructions. This PCR system contained internal control genes for pag and cap. Either or both of the internal control bands amplified by the pag primer set (409 bp) and the cap primer set (98 bp) would appear when the amount of pag or cap was below the detection limit or when the sample lacked either or both of the plasmids.
Sensitivity test of multiplex PCR
The B. anthracis strain CZC5 (Accession number: BAVT00000000) [24] was cultured on LB agar at 37°C overnight. One colony with a diameter of approximately 3 mm was suspended in 3 ml of LB broth followed by serial 10-fold dilution in LB broth with 5% (v/v) sheep blood. Each 100 μl of suspension was collected from the dilution and used for DNA extraction according to the heating method described above. An aliquot from the dilution was cultured for counting colony-forming units (CFU/ml) on LB agar at 37°C overnight.
Sequence analysis
Sequence analysis of the PCR product was carried out by direct sequencing or a cloning method using pGEM-T Easy Vector (Promega). Purified PCR products or plasmids were sequenced in a 3130xl Genetic Analyzer (Life Technologies) following reaction using a BigDye Terminator v3.1 Cycle Sequencing Kit (Life Technologies) according to the manufacturer's instructions. T7-GG primer (5'-AATACGACTCACTATAGGG-3') and SP6 primer (5'-CAAGCTATT-TAGGTGACACTATAG-3') were used for sequence analysis of the plasmid.
Specificity of internal control primers targeting ACTB and 16S rRNA
To confirm the host range of the ACTB primer set, we attempted to amplify ACTB from DNAs extracted from a human and various kinds of mammalian animals. As a result, all of them were amplified (S1 Fig.). Nucleotide sequences of all of the products were determined and identified as ACTB derived from the respective templates. Next, the 16S rRNA primer set was assessed in silico with the probe match function of the Ribosomal Database Project (RDP) [25] to determine its suitability for 16S rRNA amplification in common bacteria. The 16S-663F and 16S-1395R primers matched 93.72% and 95.67%, respectively, of the 9209 strain-type sequences of the domain Bacteria submitted to repositories, using RDP's probe match function with an allowance of three errors in each primer under the following data set options: strain type, isolate source, all size, and good quality (S1 Table). The combined in silico performance of the 16S-663F and 16S-1395R set covered 90.15% of the domain Bacteria (S1 Table). As a practical validation, PCR using the 16S-663F and 16S-1395R primer set detected 16S rRNA from B. anthracis genomic DNA and 13 bacterial genomic DNAs (Table 3) in two major bacterial phyla: Proteobacteria and Firmicutes.
Specificity of primers targeting the chromosome and virulent plasmids of B. anthracis
For specific amplification of B. anthracis, four target genes, Ba813 region, tentatively termed "BA_5031" on the chromosome, pag on pXO1, cap on pXO2 and a chromosomal region tentatively termed "BACI_c47770", which was a specific region for genetically related strains of B. anthracis (described in Materials and Methods), were designed (Table 4).
It is known that Ba813, a 277-bp fragment encoded on B. anthracis chromosomal DNA, is specific to B. anthracis [26]. However, some B. cereus strains harboring Ba813 have been isolated from humans with severe manifestations [15,21,27]. For amplification of chromosomal DNA, primers to amplify the 1027 bp of BA_5031 were designed for B. anthracis as well as the genetically related strains of B. anthracis (Fig. 1). Furthermore, we designed the hBC/BT primers to amplify the 197 bp of the region tentatively termed "BACI_c47770" for discriminating between B. anthracis and the genetically related strains of B. anthracis (Fig. 1). Each specific band was amplified by PCR using each primer set (S2 Fig.).
Next, to compare the results, several PCR methods including World Health Organization (WHO) recommended methods [1,[7][8][9][10] and a method using a commercial PCR kit were performed. These PCR methods also detected each gene according to the variation of the isolates (Fig. 2).
Sensitivity of multiplex PCR using B. anthracis strain CZC5 in this study
To determine the sensitivity, one colony suspended in LB broth with 5% (v/v) sheep blood was serially diluted 10-fold in the same medium, and aliquots were used both for DNA extraction by the boiling method and for counting CFU on an LB agar plate. All 5 bands except for BACI_c47770 could be simultaneously detected from 2.99 × 10³ CFU/ml (Fig. 3). This amount corresponded to 3.0 CFU per PCR reaction. The sensitivities of previously reported PCR methods, including the PCR using a commercial kit, were equal to or lower than this on the same serially diluted samples (data not shown).
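The conversion from culture titre to CFU per reaction follows directly from the 1-μl template volume used here, as the short sketch below shows; it assumes the heat-lysed supernatant retains the CFU-equivalent concentration of the diluted culture.

```python
def cfu_per_reaction(cfu_per_ml, template_volume_ul=1.0):
    """CFU-equivalents of template DNA added to one PCR, given the culture titre
    and the template volume (1 ul of supernatant per 25-ul reaction in this assay)."""
    return cfu_per_ml * template_volume_ul / 1000.0   # 1 ml = 1000 ul

# Detection limit reported in the text: 2.99 x 10^3 CFU/ml with 1 ul of template.
print(cfu_per_reaction(2.99e3))   # ~3.0 CFU per PCR reaction
```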
Application to B. cereus clinical strains genetically related to B. anthracis and field isolates
The applicability of the multiplex PCR to clinical strains genetically related to B. anthracis, which were B. cereus strains isolated from outbreaks of hospital infection [23], and to field isolates was studied (Table 2). Six field isolates in Zambia (all except LZ48-5) and 5 clinical strains (GTC02891, GTC02896, GTC02916, GTC02917, GTC03222) were positive for both BA_5031 and BACI_c47770 (Fig. 4). Furthermore, GTC02917 was clearly cap-positive, in accord with a report by Zhang et al. [23]. Four field isolates in Japan were 16S rRNA-single positive. The newly developed multiplex PCR detected each gene according to the variation of the isolates. Regional variation of isolates was not found among band patterns. Interestingly, the band patterns were closely related to genetic differences analyzed by multilocus sequence typing (MLST) as defined by Priest et al. [29] (S3 Fig.). Basically, lineage Anthracis was both BA_5031- and 16S rRNA-positive, and lineage Cereus III and/or assortative lineages were BA_5031-, 16S rRNA- and BACI_c47770-positive in Clade 1. Plasmid genes (cap and pag) were detected according to the isolates' characteristics. In Clade 2, GTC02826T from lineage Tolworthi, BC_CZC1 and BC_CZC2 were 16S rRNA-single positive. LZ48-5 was both BA_5031- and 16S rRNA-positive, the pattern of which was the same as that of the B. anthracis vaccine strain (Sterne 34F2); however, LZ48-5 was hemolytic (B. anthracis being non-hemolytic). In Clade 3, BP_CZC1 and BP_CZC2 were also 16S rRNA-single positive.
Discussion
A rapid and sensitive method to detect B. anthracis is important in anthrax risk management and control as it enables early tracing and elimination of the infection source and timely diagnosis for appropriate treatment of the disease. Severe extra-intestinal infections due to B. cereus and B. thuringiensis in humans have recently been reported [15][16][17][18][27]. These isolates have been found to be genetically closer to B. anthracis than to typical B. cereus or B. thuringiensis strains [16,17,22], and their emergence has led to complexity of diagnosis. In this study, we developed a novel B. anthracis multiplex PCR method. The multiplex PCR assay contains multiple internal control primers, targeting ACTB on the mammalian chromosome and the 16S rRNA gene on a common bacterial chromosome, to allow for monitoring of appropriate amplification. Since actual specimens may be derived from humans, mammalian animals or their products, bacterial isolates and environmental samples containing commensal bacteria (e.g., water and soil), both target genes were chosen.
Universal PCR primers for amplification of bacterial 16S rRNA are widely available [30][31][32][33]. Recently, Winsley et al. [34] redesigned the 16S rRNA primer set to detect a wider range of bacteria, including candidates belonging to new bacterial phyla. Since their degenerate primer set was assumed to hamper PCR amplification with other primers, a novel 16S rRNA primer set (16S-663F and 16S-1395R) was designed in this study. However, the 16S-663F and 16S-1395R combination showed lower homology to 16S rRNA among some kinds of bacteria, especially Caldiserica (0/1; 0%), Chlorobi (2/22; 9.09%), Spirochaetes (33/71; 46.48%) and Thermotogae (18/38; 47.37%) in S1 Table. These bacteria have unique morphological properties and/or different culture conditions (e.g., temperature, existence of oxygen, etc.), which easily enable them to be distinguished from B. anthracis. Therefore, the 16S rRNA primer set can detect bacterial DNA derived from cultivable bacteria in standard media and can work as an internal control without the influence of the lower homology to 16S rRNA among the above bacteria (Caldiserica, Chlorobi, Spirochaetes and Thermotogae). The 16S rRNA primer set could also work as an internal control to examine environmental samples (e.g., water and soil), because these samples contain innumerable commensal bacteria. Indeed, 16S rRNA was amplified by the set of 16S-663F and 16S-1395R primers in all 14 bacterial DNAs, including B. anthracis, belonging to Proteobacteria and Firmicutes (Table 3), which comprise major Gram-positive and -negative bacteria (data not shown). It is clear that the novel 16S rRNA primer set works to amplify the bacterial gene and is useful for preparation of an internal control in the multiplex PCR. Recently, the Ground Anthrax Bacillus Refined Isolation (GABRI) method, which is able to detect very low levels of B. anthracis in contaminated samples (e.g., soil), has been reported [35]. The use of the GABRI method should further benefit the 16S-663F and 16S-1395R primer set, since environmental contaminants are strongly reduced by this method.
In order to detect B. anthracis, three target genes, pag on pXO1, cap on pXO2 and the Ba813 region on the chromosome, were chosen as markers of B. anthracis. Since pXO1 and pXO2 are closely related to the virulence of B. anthracis strains, their possession provides valuable information on the virulence of a strain. The multiplex PCR in this study could identify all B. anthracis strains, which showed variations in plasmid possession, in one reaction; however, previously reported methods could not identify them (Fig. 2). Interestingly, duplex PCR [1], single PCR to detect cap [10] and pag [7], and a commercial PCR kit could not even detect the B. anthracis genome of B. anthracis strain Sterne 34F2 clone 2 (pXO1-/pXO2-/Ba813+/16S rRNA+), although B. anthracis that does not harbor pXO1 and pXO2 is avirulent and not a public health threat (Fig. 2). Our multiplex PCR clearly enables detection of B. anthracis and its plasmid possession in one reaction. In the past decade, some B. cereus strains possessing B. anthracis pag [22] as well as cap [21,23] have been reported. Furthermore, the multiplex PCR could identify the characteristics of field isolates and extra-intestinal B. cereus clinical strains, especially strain GTC02891 possessing cap (Fig. 4), indicating that the pag and cap primer sets contained in the multiplex PCR are useful for detecting pag and cap even in the genetically related strains of B. anthracis.
Although Ba813 (277 bp in length) is known as one of the important markers on the chromosome for identifying and monitoring B. anthracis [36], it has been reported that some B. cereus strains harbor it [23,36]. In this study, the Ba813 region (1008 bp) including the 277-bp length of Ba813, tentatively termed "BA_5031", was newly targeted to amplify B. anthracis (Fig. 1). Furthermore, the hBC/BT primer set was designed to detect the region tentatively termed "BACI_c47770" in the genetically related strains of B. anthracis (Fig. 1). These strains have been reported over the past two decades [16][17][18][21][23], emphasizing the importance of diagnosis and monitoring of these strains. However, there have been no reports of effective chromosomal DNA markers or assays targeting them, since cross-reactions are frequently observed between B. anthracis and Bacillus strains genetically related to B. anthracis [37]. The isolates listed in Table 2 were identified by MLST (S3 Fig.). Interestingly, the Zambian isolates (LZ77-1, LZ77-2, LZ78-7, LZ78-8, LZ136-1 and LZ136-2), except for LZ48-5, and the clinical outbreak strains in Japan belonged to the same lineage in MLST, in which extra-intestinal B. cereus and human pathogenic B. thuringiensis strains were allocated. The band pattern of the multiplex PCR also demonstrated that these strains were not typical B. cereus strains (Fig. 4); moreover, the BA_5031 and BACI_c47770 band pattern was the same as that of B. cereus and B. thuringiensis strains genetically closely related to B. anthracis (BA_5031+/BACI_c47770+), according to the result of in silico simulation (Fig. 1). It can be seen that our multiplex PCR discriminates Bacillus strains genetically related to B. anthracis, as well as B. anthracis, from other B. cereus group species. LZ48-5, belonging to an unassigned lineage, showed both a BA_5031- and 16S rRNA-positive band pattern, as did B. anthracis strain Sterne 34F2 (vaccine strain), in the multiplex PCR; unfortunately, the newly developed method could not discriminate LZ48-5 without a hemolysis test. Only in this case might the combination of the multiplex PCR and a hemolysis test be required for identification; however, B. anthracis lacking virulent plasmids is not a public health threat because strains lacking either plasmid are avirulent or significantly attenuated.
Sensitive and simple detection is required for diagnosis and treatment of anthrax, since a complicated method is burdensome for the examiner and may produce inconclusive results due to human error. Therefore, DNA extracted from liquid culture by heating was directly used in the sensitivity test to avoid a complicated method. To confirm the sensitivity under culture conditions with blood, LB broth with 5% (v/v) sheep blood was used for the test. All 5 DNA fragments could be simultaneously detected from 3.0 CFU/PCR reaction (Fig. 3). Regarding the 16S rRNA and cap primer sets under the multiple-primer condition, specific DNA fragments were detected from 0.4 CFU/PCR reaction. The sensitivities of PCRs using a single primer set targeting pag and a single primer set targeting cap for blood samples containing B. anthracis were reported by Kurosaki et al. to be 3.6 and 90 CFU/PCR reaction, respectively [11]. Furthermore, the sensitivities of 6 previously reported PCR methods, including PCR using a commercial kit, were less than or equal to these sensitivities (data not shown). The results suggest that our multiplex PCR detected target genes equivalently to single PCR. By testing a dilution series of cultured B. anthracis, it was demonstrated that our multiplex PCR assay is a sensitive method for detection of B. anthracis as well as for differentiation of B. anthracis and B. anthracis-like strains.
In conclusion, the newly developed multiplex PCR is a sensitive and practical method for detecting B. anthracis in various kinds of specimens. This assay can also indicate virulence through plasmid possession and discriminate B. anthracis and its genetically related strains from other B. cereus group strains.
Supporting Information
S1 Table. Matches from the Ribosomal Database Project's (RDP) probe match function to the primers 16S-663F and 16S-1395R designed in this study.
S2 Fig. Specificity of BA_5031, cap, pag, and hBC/BT primer sets. Specific bands were amplified by single PCR each using the BA_5031 primer set (lanes 1 and 2), cap primer set (lanes 3 and 4), pag primer set (lanes 4 and 6), hBC/BT primer set (lanes 7 to 9) and 16S rRNA primer set (lanes 10 to 12). The expected band sizes amplified by the BA_5031, cap, pag, BACI_c47770 and 16S rRNA primer sets were 1027 bp, 578 bp, 364 bp, 197 bp and 733 bp, respectively. Lane M, 100-bp DNA ladder; lanes 1, 3, 5, 7, 10, B. anthracis strain CZC5; lanes 2, 4, 6, 9, 12, distilled water; lanes 8, 11, B. cereus strain LZ77-1 as a positive control for the hBC/BT primer set. Single PCR using the 16S rRNA primer set was used to confirm template DNA. (TIF)
S3 Fig. Phylogenetic analysis of Bacillus strains used in this study by multilocus sequence typing (MLST). In order to confirm the phylogenetic relationships, concatenated sequences of the B. cereus type strain and 5 clinical strains, 7 field isolates in Zambia and 4 field isolates in Japan (Table 2) were used to construct a maximum likelihood tree according to the published MLST method [29]. Bold font indicates sequences from this study. The rectangles shaded with grey indicate the genetically related strains of B. anthracis in Fig. 1. Clade and lineage names are as designated by a published study [29]. (EPS)
Improve Generalization of Driving Policy at Signalized Intersections with Adversarial Learning
Abstract-Intersections are quite challenging among various driving scenes wherein the interaction of signal lights and distinct traffic actors poses great difficulty to learn a wise and robust driving policy. Current research rarely considers the diversity of intersections and stochastic behaviors of traffic participants. For practical applications, the randomness usually leads to some devastating events, which should be the focus of autonomous driving. This paper introduces an adversarial learning paradigm to boost the intelligence and robustness of driving policy for signalized intersections with dense traffic flow. Firstly, we design a static path planner which is capable of generating trackable candidate paths for multiple intersections with diversified topology. Next, a constrained optimal control problem (COCP) is built based on these candidate paths wherein the bounded uncertainty of dynamic models is considered to capture the randomness of driving environment. We propose adversarial policy gradient (APG) to solve the COCP wherein the adversarial policy is introduced to provide disturbances by seeking the most severe uncertainty while the driving policy learns to handle this situation by competition. Finally, a comprehensive system is established to conduct training and testing wherein the perception module is introduced and the human experience is incorporated to solve the yellow light dilemma. Experiments indicate that the trained policy can handle the signal lights flexibly meanwhile realizing the smooth and efficient passing with a humanoid paradigm. Besides, APG enables a large-margin improvement of the resistance to the abnormal behaviors and thus ensures a high safety level for the autonomous vehicle.
Index Terms-Autonomous driving, Minimax formulation, Reinforcement Learning, Integrated decision and control
I. INTRODUCTION
Autonomous driving is regarded as one of the most promising applications of artificial intelligence, offering numerous benefits from increased efficiency to fewer accidents. Technically, decision-making is the core and most challenging element in achieving high-level intelligence for automated vehicles [1]. To that end, reinforcement learning (RL) has been widely adopted to learn a reasonable driving policy, inspired by its successful application in the game of Go and in robotics [2], [3]. Generally, RL can be viewed as trial-and-error learning by interacting with the environment [4]. While early RL applications were mainly deployed in highway scenes to learn simple behaviors such as lane keeping [5], lane changing [6] or overtaking [7], current work increasingly targets more complicated urban scenarios. In particular, intersections are typical representatives of urban roads due to their highly dynamic and stochastic characteristics, caused by the interaction of signal lights and distinct traffic actors including pedestrians, cyclists and vehicles [8]. In such situations, assuring driving safety remains the principal consideration and challenge for the application of RL.
An intuitive approach to safety is to introduce constraints and update the driving policy in the direction of satisfying them. For that purpose, Isele et al. (2018) first considered safe exploration during the learning process by using a prediction as a constraint and claimed that it can be used to safely learn intersection-handling behaviors on an autonomous vehicle [9]. Wen et al. (2020) considered a risk function and bounded the expected risk within predefined hard constraints, showing that this could assure driving safety in a simple two-vehicle interaction at an intersection [10]. Other works have also explored different constraint formulations for driving tasks, such as control barrier functions [11], chance constraints [12] and constraints in continuous time [13]. Inspired by these constrained optimizations, Guan et al. (2021) recently proposed the integrated decision and control (IDC) framework for autonomous driving, which adopts RL to solve constrained optimal control problems (COCPs) based on multiple candidate paths generated by a static path planner [14]. The dynamic model of the ego vehicle and the empirical model of the surrounding participants are built separately to construct the distance constraints. Besides, both simulation and real-vehicle experiments have shown that IDC exhibits more flexibility and intelligence on complex urban roads compared with state-of-the-art methods [15]. However, the IDC framework suffers from two major issues when extended to more signalized intersections. One is that the static path planner can hardly be generalized to diversified intersections with different shapes and topologies. The other is that a policy trained on empirical models is too idealized to accommodate abnormal behaviors encountered in real driving environments.
Regarding the first issue, the static path planner of IDC currently designs candidate paths individually for a specific intersection; for other crossroads with a different configuration, it must determine the reference paths one by one according to the realistic map of that intersection. Concretely, Bézier curves [16] are adopted to generate reference paths under the guidance of control points, whose choice becomes the core procedure because the control points completely determine the shape of the reference paths. This calls for a general approach to the automatic determination of control points that is adaptive to intersections of all shapes. Also, this path generation only concerns the green light; the subsequent training process simulates red lights by adding a virtual vehicle in front of the stop line and treats the yellow light as red to force a hard deceleration at any time [17]. This leads to a rather conservative handling of yellow-light dilemma zones, where the ego vehicle must decide to stop or pass without hesitation according to the traffic condition [18]. To deal with signal lights, Zhou et al. (2019) utilized predefined rules to replan the trajectory online considering the ego vehicle's position, velocity, and the constraints of surrounding vehicles [19]. Chen et al. (2021) employed a model predictive control (MPC) method to optimize the acceleration so as to drive the vehicle through the intersection at the current or the next green light phase [20]. Although embedding human experience helps handle traffic lights, such online optimization is difficult to apply in practice because of its high computational burden, especially in complicated scenarios.
As for the second issue, IDC mainly focuses on safety guarantees in common situations by constraining prior, known environment models, yet such empirical models hardly reflect the randomness of driving tasks. This gap weakens the safety guarantees against abnormal behaviors such as over-speeding or direction changes that violate traffic rules. To improve the resistance to environmental disturbances, Pinto et al. (2017) extended adversarial learning to RL, wherein a protagonist policy tries to learn the control law by maximizing the accumulated reward while an adversary agent aims to cause the worst possible destruction by minimizing the same objective [21]. These two policies were trained alternately, with one held fixed while the other adapted. After that, Pan et al. (2019) explicitly used an ensemble of policy networks to model risk as the variance of value functions, and then trained a risk-averse protagonist and a risk-seeking adversary [22]. They evaluated a self-driving vehicle controller on the round-way racing track in TORCS [23] and showed that it suffered substantially fewer catastrophic events caused by adversarial attacks. Similarly, we demonstrated that adversarial learning can help safe decision-making at intersections, wherein the protagonist policy controlled the ego vehicle and the adversarial policy controlled two surrounding vehicles [24]. However, these works largely simplify the driving conditions at intersections, considering only sparse and homogeneous traffic flows. Besides, their adversarial training mainly focuses on performance in the worst-case situation and, lacking distance constraints, offers no guarantee of driving safety.
This paper aims to learn robust and intelligent driving skills at signalized intersections that are scalable to diversified urban crossroads and can handle abnormal behaviors of surrounding participants. Based on the IDC framework, the contributions and novelty of this paper are summarized as follows:
1) A static path planner for urban intersections is developed to produce candidate paths, and is shown to generalize to various crossroads. Specifically, our path generation only considers static road information and consists of route planning and velocity planning. The former provides reference positions for the automated vehicle: we design an automatic static route generation method using Bézier curves whose control points adapt to the size and topology of the intersection. The latter delivers reference velocities to adjust the driving behavior under different phases of the signal light, wherein we design two distinct modes, i.e., the stop velocity to encourage yielding or waiting at red lights and the pass velocity to encourage a quick ride at green lights.
2) Based on the candidate paths, we construct a COCP wherein the bounded uncertainty of the dynamic models is considered to capture the randomness of the driving environment and constraints are introduced to assure driving safety. Novelly, we propose adversarial policy gradient (APG) to solve this COCP under the worst-case constraint, wherein the driving policy learns to realize reasonable operations and the adversarial policy aims to provide disturbances. Formally, a minimax formulation is established based on the tracking cost and safety constraints, which the driving policy intends to minimize while the adversarial policy attempts to violate the safety constraints by seeking the most severe uncertainty. APG optimizes these two policies alternately, and this competition gives the driving policy a greater ability to cope with external variations.
3) We establish a comprehensive driving environment wherein typical intersections are built and a perception module is introduced to simulate sensor characteristics. Besides, we incorporate human knowledge into the online application of the trained networks to solve the yellow light dilemma. Simulations show that the trained policy can handle the switching of signal lights flexibly, in a manner comparable to human drivers, while realizing smooth and efficient passing at different intersections. Moreover, generalization tests verify that APG improves the safety and robustness of the driving policy against abnormal behaviors of traffic participants.
The paper is organized as follows. In Section II, we introduce the key notations and related works. Section III describes the mathematical details of the static path planner and APG. Section IV presents driving environment construction and the implementation for RL algorithms. Section V verifies the effectiveness of the trained policy and Section VI concludes this paper.
II. PRELIMINARIES
This section first introduces the basic principles of RL and adversarial learning, and then summarizes the details of the IDC framework.
A. Principles of RL
Formally, we describe the driving process as a standard RL setting wherein the automated vehicle interacts with the driving environment in real time. At the current state s_t, the vehicle takes action u_t according to the driving policy π, and the environment then returns the next state s_{t+1} according to the environment model f(s_t, u_t), i.e., s_{t+1} = f(s_t, u_t), together with a scalar reward or utility l(s_t, u_t). Overall, RL seeks to optimize the driving policy π, which maps s_t to an action distribution, i.e., u_t ∼ π(s_t). To that end, the value function v^π is introduced to evaluate the performance of policies; it is defined as the expected accumulated utility under π starting from state s, i.e., v^π(s) = E{Σ_{t=0}^{T−1} l_t | s_0 = s}. Typically, most RL algorithms can be attributed to the classic actor-critic architecture [25]. The critic updates v^π to evaluate the current policy by minimizing its distance from a target generated from interaction with the environment. The actor intends to find a better policy π' by minimizing the estimated value function, i.e., π' = arg min_π E_{s∼d}{v^π(s)}, where d is the state distribution determined by the policy. Meanwhile, the policy π and value function v^π are usually parameterized as π_θ and v_w by neural networks (NNs) to handle continuous and high-dimensional tasks, in which θ and w are the parameters to be optimized. In that case, gradient-based optimization is usually employed to iteratively update these NNs to approximate the optimal policy π* and value function v*.
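As a minimal illustration of this actor-critic scheme (a toy sketch only; the dimensions, the linear stand-in for the model f, and the optimizer settings below are our own assumptions, not the paper's implementation), the critic can be regressed toward accumulated-utility targets while the actor is updated by differentiating the value estimate through a simple transition:

import torch
import torch.nn as nn

state_dim, action_dim = 2, 2
actor = nn.Sequential(nn.Linear(state_dim, 64), nn.GELU(), nn.Linear(64, action_dim))   # pi_theta
critic = nn.Sequential(nn.Linear(state_dim, 64), nn.GELU(), nn.Linear(64, 1))           # v_w
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

states = torch.randn(32, state_dim)   # batch of sampled driving states
targets = torch.rand(32, 1)           # accumulated utilities collected along sampled rollouts

# critic step: fit v_w(s) to the accumulated-utility targets
critic_loss = ((critic(states) - targets) ** 2).mean()
critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

# actor step: minimise the predicted cost of the state reached under pi_theta;
# the toy linear transition s' = s + 0.1 * u stands in for the model f(s, u)
# (in a full training loop the critic's gradients from this step would be zeroed before its own update)
next_states = states + 0.1 * actor(states)
actor_loss = critic(next_states).mean()
actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()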
B. Adversarial Learning
Adversarial learning is usually modelled as a minimax formulation, wherein another adversary policy π_φ, with parameters φ, provides a competitor for the protagonist policy π_θ. Given the current state s_t, π_θ takes action u_t, π_φ takes action ξ_t, and the next state s_{t+1} is then reached. However, the two receive opposite utilities: the protagonist gets l(s_t, u_t, ξ_t) while the adversary gets −l(s_t, u_t, ξ_t) at each time step. Under this scheme, the value function v^π is determined by the two policies together, and a zero-sum two-player game arises in which the adversary maximizes the value function while the protagonist attempts to minimize it:

min_θ max_φ E_{s∼d}{ v^{π_θ, π_φ}(s) }.

Both agents are optimized using an alternating procedure. Firstly, we learn the protagonist's policy while holding the adversary's policy fixed. Next, the protagonist's policy is held constant and the adversary's policy is learned. These two steps are repeated until convergence. Intuitively, the adversary is introduced to apply disturbance forces to the system and is reinforced; that is, it learns an optimal policy to thwart the original agent's goal. This learning paradigm jointly trains a pair of agents, a protagonist and an adversary, where the protagonist learns to fulfil the original task goals while being robust to the disruptions generated by its adversary [21].
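The alternating procedure can be pictured on a toy zero-sum game (a sketch under our own assumptions; the quadratic cost and learning rates are arbitrary and only serve to show the fixed/learned alternation):

import torch

# toy saddle-point game: protagonist picks u, adversary picks xi, shared cost
# l(u, xi) = |u|^2 + 2 u.xi - |xi|^2 ; the protagonist minimises, the adversary maximises
u = torch.zeros(2, requires_grad=True)    # protagonist parameters
xi = torch.ones(2, requires_grad=True)    # adversary parameters
opt_u = torch.optim.SGD([u], lr=0.1)
opt_xi = torch.optim.SGD([xi], lr=0.1)

def cost(u, xi):
    return (u ** 2).sum() + 2.0 * (u * xi).sum() - (xi ** 2).sum()

for step in range(200):
    if step % 2 == 0:                     # protagonist step, adversary held fixed
        loss = cost(u, xi.detach())
        opt_u.zero_grad(); loss.backward(); opt_u.step()
    else:                                 # adversary step, protagonist held fixed
        loss = -cost(u.detach(), xi)      # gradient ascent on the same cost
        opt_xi.zero_grad(); loss.backward(); opt_xi.step()

print(u.detach(), xi.detach())            # both approach the saddle point at zero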
C. Integrated decision and control (IDC)
IDC serves as an effective framework for deploying RL for the decision and control of autonomous vehicles, and consists of two essential modules: a static path planner and a dynamic optimal tracker [14]. The former generates the candidate path set, denoted Π, which contains all feasible paths built from static traffic information. The number of candidate paths can be set a priori in view of the potential target lanes and the traffic density of the intersection. The dynamic optimal tracker considers tracking each candidate path while assuring safety with respect to the surrounding environment by meeting the constraints. Given an arbitrary path τ ∈ Π, its COCP involves the tracking performance, denoted J_track, and the constraints for the different participants, denoted g(·), to assure safety:

min_θ J_track = E_{s_t∼d} { Σ_{i=0}^{T−1} l(s_{i|t}, π_θ(s_{i|t}), τ) },
s.t. s_{i+1|t} = f(s_{i|t}, u_{i|t}), u_{i|t} ∼ π_θ(s_{i|t}), g(s_{i+1|t}) ≤ 0,

where T is the prediction horizon and s_{i|t} is the driving state at prediction step i, starting from the current time step t. We emphasize that s_{0|t} = s_t, indicating that the initial state s_t is sampled from the driving environment, while the following transitions are derived through the simulation model f(·, ·) based on the corresponding driving actions u_{i|t}. The stochastic driving policy π_θ(s_{i|t}) builds the mapping from driving states to action distributions, i.e., u_{i|t} ∼ π_θ(s_{i|t}), and the actions are delivered to control the automated vehicle. d denotes the distribution of s_t, which is usually designed as a joint distribution over the reference path τ, the states of the ego vehicle and surrounding participants, the road information, etc. l(s_{i|t}, π_θ(s_{i|t}), τ) denotes the tracking utility, and g(·) denotes all the constraints on s_{i+1|t}, such as the distances to surrounding participants and road edges.
Besides, the value function v_w is trained to evaluate the tracking cost by predicting the tracking performance J_track of a given path τ ∈ Π. After training, the optimal counterparts π* and v* are obtained and then applied online to implement the decision and control functions. v* first identifies the optimal path in Π, i.e., the one with the smaller tracking cost and less potential to collide with surrounding vehicles. π* then takes this path to construct the corresponding driving state and outputs the control commands for the automated vehicle. Comparatively, IDC has high online computing efficiency thanks to the fast forward propagation of NNs, calculating the control commands within 10 ms at a complex intersection [14], [15]. With the power of RL, it also dispenses with tedious hand-crafted designs and improves the safety and intelligence of decision and control.
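Sketched in code, this online step is a small select-then-track routine (the names and the toy stand-ins for v* and π* below are our own, purely for illustration):

def select_and_act(value_fn, policy_fn, obs, candidate_paths):
    """Pick the candidate path with the lowest predicted cost, then act along it."""
    costs = [value_fn(obs, tau) for tau in candidate_paths]
    best = candidate_paths[costs.index(min(costs))]
    return best, policy_fn(obs, best)

# toy stand-ins for the trained v* and pi*; obs is a scalar "state" here
value_fn = lambda obs, tau: abs(tau - obs)
policy_fn = lambda obs, tau: 0.1 * (tau - obs)
print(select_and_act(value_fn, policy_fn, obs=0.3, candidate_paths=[0.0, 0.5, 1.0]))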
III. METHODOLOGY
This section introduces the methodology to improve the generalization ability of RL-enabled algorithms under the scheme of IDC from two aspects. One is the static path planner which is scalable to different intersections; the other is the APG algorithm, which could improve the resistance of the driving policy to the abnormal behaviors of traffic participants.
A. General Static Path Planner
Static path planner aims to generate trackable paths in view of static road information. Considering the diversity and complexity of urban roads, this module should be scalable to intersections of different shapes and sizes, while incorporating various traffic rules such as speed limits or passing restrictions. To that end, our general static path planner consists of two essential parts: route planning and velocity planning. For the route planning, the basic idea is to generate smooth and continuous curves by fitting and splicing, as shown in Fig. 1. Each candidate path, numbered τ, is determined by six control points (X_0, X_1, X_2, X_3, X_4, X_5), which divide the candidate path into three segments: the entrance route controlled by (X_0, X_1), the curve route controlled by (X_1, X_2, X_3, X_4) and the exit route controlled by (X_4, X_5). As the routes outside intersections possess definite lanes, we can directly choose the lane center as the entrance and exit route, so (X_0, X_1) and (X_4, X_5) can be constructed directly along the direction of the lane center line. For the curve route, a third-order Bézier curve [26] is adopted to generate a reasonable route inside the intersection, which also smoothly connects the entrance route and the exit route. Therefore, the key to generating such a curve lies in the choice of the four control points (X_1, X_2, X_3, X_4). To guarantee the continuity of the whole route, X_1 should be the end point of the entrance lane and X_4 the start point of the exit lane. Thus, the curve route is mainly determined by the two middle control points (X_2, X_3).
To assure scalability to various intersections, we propose a selection rule for the control points (X_2, X_3) that adapts to the topology of the crossroads. Without loss of generality, assume the entrance route and the exit route form included angles with the horizontal direction, denoted θ_in and θ_out respectively, as also shown in Fig. 1. The control points (X_2, X_3) can be decided with the following geometric method: first, given X_1 and X_4, we connect them directly; then two ρ-bisection points, P_2 and P_3, are constructed on this segment, associated with X_1 and X_4 respectively. Note that ρ ∈ (0, 1) is a hyperparameter and can be determined empirically. Then we extend the entrance route and the exit route toward the inside of the intersection and drop perpendiculars from P_2 and P_3 onto these extensions; the resulting feet of the perpendiculars are taken as X_2 and X_3 respectively. Based on this geometric relation, the control points can be written in closed form as

X_2 = X_1 + (1 − ρ) P(θ_in)(X_4 − X_1), X_3 = X_4 + (1 − ρ) P(θ_out)(X_1 − X_4), (2)

where P(θ) = [cos²θ, sinθ cosθ; sinθ cosθ, sin²θ] is the projection matrix onto the direction with inclination θ; expanding the expression for X_3, for example, gives the coefficient matrix [ρ cos²θ_out + sin²θ_out, (ρ − 1) sinθ_out cosθ_out; (ρ − 1) sinθ_out cosθ_out, cos²θ_out + ρ sin²θ_out] acting on X_4. After obtaining the four control points (X_1, X_2, X_3, X_4), the curve route can be generated according to the Bézier formula

B(t) = (1 − t)³ X_1 + 3(1 − t)² t X_2 + 3(1 − t) t² X_3 + t³ X_4, t ∈ [0, 1],

where B(t) is the point coordinate on the Bézier curve and t reflects its relative position.
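To make the construction concrete, the sketch below (our own illustrative code; the (1 − ρ) split used for the perpendicular feet follows the reconstructed Eq. (2) above and should be checked against the original figure) computes the two middle control points from the entrance/exit geometry and samples the cubic Bézier segment:

import numpy as np

def projection(theta):
    """2x2 projection matrix onto the direction with inclination angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c * c, c * s], [c * s, s * s]])

def middle_control_points(x1, x4, theta_in, theta_out, rho=0.6):
    """Feet of the perpendiculars from the split points of segment X1-X4
    onto the extended entrance/exit directions (cf. the reconstructed Eq. (2))."""
    x1, x4 = np.asarray(x1, float), np.asarray(x4, float)
    x2 = x1 + (1.0 - rho) * projection(theta_in) @ (x4 - x1)
    x3 = x4 + (1.0 - rho) * projection(theta_out) @ (x1 - x4)
    return x2, x3

def cubic_bezier(x1, x2, x3, x4, n=50):
    """Sample the third-order Bezier curve B(t) for t in [0, 1]."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    pts = ((1 - t) ** 3) * x1 + 3 * ((1 - t) ** 2) * t * x2 \
          + 3 * (1 - t) * (t ** 2) * x3 + (t ** 3) * x4
    return pts

# left-turn example: enter heading north (90 deg), exit heading west (180 deg)
x1, x4 = np.array([3.5, -20.0]), np.array([-20.0, 3.5])
x2, x3 = middle_control_points(x1, x4, np.deg2rad(90), np.deg2rad(180))
route = cubic_bezier(x1, x2, x3, x4)
print(route[0], route[-1])    # the curve starts at X1 and ends at X4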
After concatenating the entrance route, curve route and exit route, one feasible route τ is completely generated, and more potential routes can be constructed similarly according to the road structure and passing connections. To show the simplicity and practicality of this route planning, we choose 6 typical intersections and generate their candidate routes, as shown in Fig. 2. These intersections are picked carefully from open-source street maps to represent common scenarios, covering single-lane and multi-lane intersections, regular and irregular intersections, as well as intersections equipped with green belts. We suppose the ego vehicle departs from the bottom side and aims to finish the left-turn, right-turn and straight-going tasks. The number of candidate routes is determined by the number of target lanes, and ρ is set to 0.6 for all intersections. For instance, in Fig. 2(f), 3 candidate routes are constructed for the left-turn task because there exist 3 target lanes for passing through the intersection, while the straight-going task only has 2 candidate routes. Overall, Fig. 2 indicates that the general static path planner can generate smooth and seemingly reasonable paths for diversified intersections with different topologies, and their trackability will be further verified in Section V-B.

The other essential part of the static path planner is velocity planning, as shown in Fig. 3. Considering the existence of traffic lights and rules at intersections, we design two expected velocity profiles with respect to the spatial position, i.e., the pass velocity mode and the stop velocity mode. The former corresponds to passing the intersection quickly, while the latter means waiting when encountering a red light or congestion inside the intersection. For the pass velocity, its value outside the intersection is set to 0.8V_limit, where V_limit is the speed limit of the entrance lane, usually determined by traffic rules or human experience; its value inside the intersection is set to the lower of 0.5V_limit and 30 km/h, as Chinese traffic regulations usually limit the speed inside intersections to less than 30 km/h. For the stop velocity, the ego vehicle is required to decelerate uniformly from 0.8V_limit to 0 within 30 m of the stop line; the expected velocity then stays 0 inside the intersection and returns to 0.8V_limit after leaving it. Note that these two velocity modes are designed mainly to capture the responses of the automated vehicle to green and red lights, and both are utilized to train the driving policy. In Section IV-E we further incorporate human knowledge when applying the driving policy, to deal with unusual situations such as yellow lights. In that case, the appropriate velocity mode is chosen in real time based on the vehicle state and the remaining time of the current light phase.
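The two expected-velocity profiles can be summarised in a few lines (a sketch; the function names are our own, the 30 m deceleration window and the 0.8/0.5 factors are taken from the description above, and the speed recovery after leaving the intersection is omitted):

def pass_velocity(inside_intersection, v_limit=37.5):
    """Expected speed (km/h) in the pass mode."""
    if inside_intersection:
        return min(0.5 * v_limit, 30.0)   # regulations cap in-intersection speed at 30 km/h
    return 0.8 * v_limit

def stop_velocity(dist_to_stop_line, inside_intersection, v_limit=37.5):
    """Expected speed (km/h) in the stop mode: uniform deceleration within 30 m of the stop line."""
    if inside_intersection:
        return 0.0
    if dist_to_stop_line > 30.0:
        return 0.8 * v_limit
    return 0.8 * v_limit * max(dist_to_stop_line, 0.0) / 30.0

print(pass_velocity(False), stop_velocity(12.0, False))   # 30.0 km/h and 12.0 km/h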
Combining the route planning and velocity planning, the general static path planner is capable of generating candidate paths that provide the reference position and speed for the automated vehicle. Note that our method depends only on the high-precision map and disregards dynamic traffic participants during path generation; the subsequent training then learns to track these paths while satisfying the safety constraints with respect to the surrounding traffic actors.

As for the construction of COCPs, a precise environment model has usually been assumed, i.e., s_{i+1|t} = f(s_{i|t}, u_{i|t}), in which the current state s_{i|t} and action u_{i|t} deterministically lead to the next state s_{i+1|t}. However, this hardly holds for driving tasks due to the behavioral randomness of surrounding participants and the perception noise. Therefore, we construct a stochastic dynamic model to account for the uncertainty of the driving environment, i.e., s_{i+1|t} = f(s_{i|t}, u_{i|t}, ξ_{i|t}), where ξ_{i|t} is a bounded random noise satisfying ξ_{i|t} ∈ [ξ_min, ξ_max], with ξ_min and ξ_max the lower and upper bounds of the randomness respectively. Consequently, s_{i+1|t} is a random variable, and its corresponding constraint g(s_{i+1|t}) ≤ 0 must hold over the randomness of the environment model to assure driving safety. Nevertheless, this is unwieldy for the optimization process, as ξ_{i|t} can take any value from the continuous interval [ξ_min, ξ_max]. Intuitively, we seek the worst case of this safety constraint so that it holds over the whole interval of the random noise, i.e., max_{ξ_{i|t}} g(s_{i+1|t}) ≤ 0, which means that the most aggressive constraint w.r.t. ξ_{i|t} is satisfied and any other value of the random noise is covered automatically. To sum up, our COCP can be formulated as

min_θ J_track = E_{s_t∼d} { Σ_{i=0}^{T−1} l(s_{i|t}, π_θ(s_{i|t}), τ) },
s.t. s_{i+1|t} = f(s_{i|t}, u_{i|t}, ξ_{i|t}), max_{ξ_{i|t}} g(s_{i+1|t}) ≤ 0, i = 0, . . . , T − 1. (3)

This constrained optimal control problem needs to be transformed further so that it can be solved by RL algorithms. Here, we adopt the generalized exterior point method [27] to convert it into an unconstrained one, wherein ϕ(s_{i+1|t}) is the penalty function of the state constraint g(s_{i+1|t}) ≤ 0, i.e.,

ϕ(s_{i+1|t}) = [max{g(s_{i+1|t}), 0}]².

Obviously, ϕ(·) characterizes the degree to which the constraint is satisfied: if g(s_{i+1|t}) ≤ 0 holds, the penalty is exactly 0; otherwise it grows quadratically with the constraint violation. Based on that, we can reformulate (3) and construct the total policy cost J_π as

J_π = J_track + ρ J_safe = E_{s_t∼d} { Σ_{i=0}^{T−1} [ l(s_{i|t}, π_θ(s_{i|t}), τ) + ρ max_{ξ_{i|t}} ϕ(s_{i+1|t}) ] }, (4)

where ρ is the penalty factor, which determines the relative importance of the tracking cost J_track and the safety cost J_safe. Solving the problem in (4) requires first solving the inner maximization to find a sequence of worst-case noises for all the predicted states, i.e., {ξ_{0|t}, ξ_{1|t}, . . . , ξ_{T−1|t}}, which is rather costly for this non-linear, high-dimensional optimization. Inspired by adversarial training, we introduce the adversarial policy π_φ to map the state s_{i|t} to a distribution over the random noise ξ_{i|t}, i.e., ξ_{i|t} ∼ π_φ(s_{i|t}). Note that π_φ(·) also needs to be trained progressively toward its optimal counterpart, which outputs the worst noise for an arbitrary input state.
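The exterior-point penalty and the worst-case treatment of the noise can be pictured with a one-step toy example (a sketch under our own assumptions: a scalar gap constraint, and a grid search standing in for the adversarial policy that proposes the worst noise):

import numpy as np

def penalty(g):
    """Exterior-point penalty: zero when g <= 0 holds, quadratic in the violation otherwise."""
    return max(g, 0.0) ** 2

def constraint(ego_pos, other_pos, noise, safe_dist=5.0):
    """g <= 0 requires the noise-perturbed gap to the other participant to stay above safe_dist."""
    gap = abs((other_pos + noise) - ego_pos)
    return safe_dist - gap

# worst-case penalty over the bounded noise interval [-1, 1];
# in APG the adversarial policy pi_phi learns to output this worst noise directly
xi_grid = np.linspace(-1.0, 1.0, 201)
worst = max(penalty(constraint(0.0, 5.5, xi)) for xi in xi_grid)
print(worst)   # the other agent drifting 1 m closer leaves a 4.5 m gap -> penalty 0.25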
Thus, we can construct the minimax formulation of the problem:

min_θ max_φ J_π, with u_{i|t} ∼ π_θ(s_{i|t}), ξ_{i|t} ∼ π_φ(s_{i|t}), s_{i+1|t} = f(s_{i|t}, u_{i|t}, ξ_{i|t}). (5)

With this formulation, the policy gradient method can be employed to optimize the driving policy and the adversarial policy simultaneously: the former optimization aims to decrease both the tracking and safety costs, while the latter seeks the worst noise to disturb the learning process. Concretely, π_θ(·) performs the minimization by gradient descent, and its gradient ∇_θ J_π is derived by applying the chain rule through the environment model f(·, ·, ·) and the predicted states s_{i+1|t} over the horizon (6). Similarly, the gradient of π_φ(·), ∇_φ J_π, is extracted by differentiating through ∂s_{i+1|t}/∂φ (7).

As for the training of the value network v_w(·), its objective J_v minimizes the mean square error between the prediction of v_w and the true tracking cost J_track calculated with the environment model f(·, ·, ·), given a random path τ and the initial state s_t:

J_v = E_{s_t∼d, τ∈Π} { [ v_w(s_t, τ) − J_track ]² }, (8)

and its gradient w.r.t. the network parameters w can be calculated as

∇_w J_v = E { 2 [ v_w(s_t, τ) − J_track ] ∇_w v_w(s_t, τ) }. (9)
Accordingly, APG is proposed to obtain a generalized driving policy capable of handling the uncertainty of the environment model; its implementation is based on (6), (7) and (9) to update the adversarial policy, the driving policy and the value function iteratively. Once their optimal counterparts are acquired, the automated vehicle first selects the optimal path τ* by comparing the path values, and based on it the driving state is constructed and fed to the optimal policy π_θ* to generate the optimal control action u_t*, i.e., τ* = arg min_{τ∈Π} v_w*(s_t, τ) and u_t* ∼ π_θ*(s_t). The details of APG are shown in Algorithm 1. Note that we introduce the update interval m to adjust the relative update frequency of the adversarial policy and the driving policy, which is an important technique to stabilize the training process, as suggested in [28], [29].
Algorithm 1 Adversarial Policy Gradient (APG)
Initialize parameters θ, w, φ
Initialize learning rates β_θ, β_w, β_φ
Initialize penalty factor ρ
Initialize iterative step k = 0
Initialize update interval m
Initialize buffer B ← ∅
Initialize candidate path set Π
repeat
  // Sampling
  Randomly select a path τ ∈ Π
  for each environment step do
    Receive s_t from the environment with the chosen path τ
    Obtain action u_t = π_θ(s_t)
    Apply u_t in the environment, returning s_{t+1} and l(·, ·, ·)
    Add the sample into the buffer: B ← B ∪ {s_t, u_t, l(·, ·, ·), s_{t+1}}
    t = t + 1
  end for
  // Optimizing
  Fetch a batch of states from B, compute J_v and J_π by f(·, ·, ·), π_φ and π_θ
  Update the value function with (9): w ← w − β_w ∇_w J_v
  Update the driving policy with (6): θ ← θ − β_θ ∇_θ J_π
  Every m iterations, update the adversarial policy with (7): φ ← φ + β_φ ∇_φ J_π
  k = k + 1
until convergence
IV. TRAINING ENVIRONMENT CONSTRUCTION
This section constructs the training environment of signalized intersections and describes the implementation details of APG.
A. Traffic Configuration
The training environment is constructed based on the six typical signalized intersections shown in Fig. 2. The candidate path set Π is generated with the method proposed in Section III-A, and a mixed traffic flow, i.e., vehicles, cyclists and pedestrians, is deployed at these intersections with the support of the SUMO software [30]. Specifically, we set the densities of pedestrians, cyclists and vehicles to 100, 100 and 400 per hour for each entrance lane to simulate dense traffic. The pedestrians are controlled by the striping model of SUMO, while the surrounding vehicles and cyclists are controlled by its car-following and lane-changing models. All these surrounding participants are randomly initialized at the beginning of each experiment and travel toward their predefined destinations. Moreover, signal light systems are designed to regulate the passing of the large-scale traffic flow: the light controlling right turns stays green, while the lights for left turns and straight-going are synchronized, which leads to more complex driving operations for the automated vehicle, i.e., the unprotected left turn. Empirically, the phase times of red, yellow and green are set to 40 s, 3 s and 60 s respectively. Note that we only consider the red and green lights during offline training; that is, the ego vehicle attempts to track the pass velocity mode in Fig. 3 if the light is green and the stop mode if it is red. Yellow lights are further considered in the online application in Section IV-E. Note that V_limit is set to 37.5 km/h for the velocity planning. Based on the above settings, the ego vehicle is initialized outside the intersection and aims to complete three different tasks, i.e., turning left, going straight and turning right, so as to pass the intersection while guaranteeing driving safety, efficiency and comfort simultaneously.
B. Perception system
To make the training system more realistic, we equip the ego vehicle with a perception system consisting of one camera, one lidar and three radars, whose specifications refer to real sensor products on the market such as a Mobileye camera, DELPHI ESR (middle range) radar and HDL-32E lidar [31]. As shown in Table I, the effective perception ranges of the camera, radar and lidar are set to 80 m, 60 m and 70 m respectively, and their horizontal fields of view are set to ±35°, ±45° and 360° respectively. During driving, only the surrounding participants entering the perception range can be captured to construct the driving states, forming the interested participant set I. Besides, we model the perception noise by adding statistically calibrated noise to the true values given by SUMO. These noises obey Gaussian distributions whose parameters, i.e., the mean and variance, come from statistics of real sensor data. Taking the lidar as an example, we first collect pairs of true values and observations based on the open-source dataset [32]; the difference between them can be seen as the sensor error of the lidar. Then we draw the statistical histogram of the sensor error for different traffic participants and estimate the distribution parameters using maximum likelihood estimation. Table II shows the resulting distribution parameters for different traffic participants. For each surrounding participant, numbered j ∈ I, the observation consists of six variables: relative lateral position p_x^j, relative longitudinal position p_y^j, relative speed v^j, heading angle ϕ^j, length L^j and width W^j. Almost all variables have means close to zero, and the main difference lies in their variances. For example, the variances of v^j for vehicles, cyclists and pedestrians decrease in that order, as the speed of vehicles varies over a wider range. Besides, the variance of ϕ^j for pedestrians is much larger because their small size makes shape detection by lidar difficult.
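As an illustration of this noise injection (a sketch only; the standard deviations below are placeholders standing in for the per-class lidar statistics of Table II, which are not reproduced here):

import numpy as np

rng = np.random.default_rng(0)

# placeholder error standard deviations per observed variable
# [rel. lateral pos, rel. longitudinal pos, rel. speed, heading, length, width]
NOISE_STD = {
    "vehicle":    np.array([0.10, 0.15, 0.30, 0.05, 0.20, 0.10]),
    "cyclist":    np.array([0.10, 0.10, 0.20, 0.10, 0.10, 0.05]),
    "pedestrian": np.array([0.10, 0.10, 0.10, 0.30, 0.05, 0.05]),
}

def observe(true_state, participant_type):
    """Return a noisy observation: ground truth from SUMO plus zero-mean Gaussian error."""
    std = NOISE_STD[participant_type]
    return np.asarray(true_state, float) + rng.normal(0.0, std)

print(observe([1.2, 14.0, 3.1, 0.0, 4.6, 1.8], "vehicle"))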
C. State, action and utility
As a typical RL algorithm, the training of APG mainly involves the design of state, action and utility function. State is the extraction of original observation given by sensors, which consists of the information of ego vehicle x ego , tracking error x track and surrounding participants x other , i.e., s = [x ego , x track , x other ] . Concretely, x ego is represented with a dynamic model which contains longitudinal position p x and lateral position p y , longitudinal speed v x and lateral speed v y , heading angle ϕ, yaw rate ω, front wheel angle δ and acceleration a, i.e., x ego = [p x , p y , v x , v y , ϕ, ω, δ, a] . x track is constructed based on x ego and the given reference path τ , including longitudinal distance error ∆x, lateral distance error ∆y, speed error ∆v and heading angle error ∆ϕ, i.e., x track = [∆x, ∆y, ∆v, ∆ϕ] .
As for x_other, we sort the different traffic actors by their distance to the ego vehicle based on the observations from the perception system. The closest 8 vehicles, 4 cyclists and 4 pedestrians in I are then chosen to construct x_other. For each item, the descriptive information shown in Table II is summarized as x_other = [p_x^j, p_y^j, v^j, ϕ^j, L^j, W^j], j ∈ I. To realize smooth lateral and longitudinal control, we utilize the derivatives of δ and a as actions, i.e., u = [∆δ, ∆a]. Considering actuator saturation in reality, the action is limited to a certain range, where we assume ∆δ ∈ [−0.4, 0.4] rad/s and ∆a ∈ [−4.5, 4.5] m/s³. Accordingly, δ and a are also limited to reasonable ranges to prevent unreasonable driving operations, i.e., δ ∈ [−0.4, 0.4] rad and a ∈ [−3.0, 1.5] m/s². The utility l(·, ·, ·) is mainly related to the precision, stability and energy-saving performance of path tracking, so we construct a classic quadratic formulation l(·, ·, ·) = 0.03∆v² + 0.8∆x² + 0.8∆y² + 30∆ϕ² + 0.02ω² + 5δ² + 0.05a² + 0.4∆δ² + 0.1∆a².
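Written out directly, the quadratic utility is simply a weighted sum of squared errors and control efforts (the weights are those quoted above; the variable names are ours):

def utility(dv, dx, dy, dphi, omega, delta, a, d_delta, d_a):
    """Quadratic tracking/comfort cost l(.,.,.) with the weights given in the text."""
    return (0.03 * dv ** 2 + 0.8 * dx ** 2 + 0.8 * dy ** 2 + 30.0 * dphi ** 2
            + 0.02 * omega ** 2 + 5.0 * delta ** 2 + 0.05 * a ** 2
            + 0.4 * d_delta ** 2 + 0.1 * d_a ** 2)

# small tracking errors and gentle control inputs yield a small cost
print(utility(dv=0.5, dx=0.1, dy=0.2, dphi=0.02, omega=0.05,
              delta=0.03, a=0.4, d_delta=0.01, d_a=0.2))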
For safety constraint design, we adopt the same method as [17], which uses two circles to represent cyclists and vehicles and one circle to represent a pedestrian; the constraints between the ego vehicle and surrounding participants are then constructed by comparing the distances between their circle centers with a given safety threshold.
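A minimal version of this circle-based constraint might look as follows (our own sketch; the circle placement along the body axis and the 2.5 m threshold are illustrative assumptions, not the values used in [17]):

import numpy as np

def body_circles(x, y, phi, length, n_circles):
    """Approximate a participant's body by n_circles circle centres spread along its heading."""
    if n_circles == 1:
        return [np.array([x, y])]
    offsets = np.linspace(-length / 4.0, length / 4.0, n_circles)
    return [np.array([x + d * np.cos(phi), y + d * np.sin(phi)]) for d in offsets]

def safety_constraints(ego, other, other_is_pedestrian, safe_dist=2.5):
    """Constraint values g = safe_dist - centre distance; safety requires every g <= 0."""
    ego_c = body_circles(*ego, n_circles=2)
    oth_c = body_circles(*other, n_circles=1 if other_is_pedestrian else 2)
    return [safe_dist - np.linalg.norm(ce - co) for ce in ego_c for co in oth_c]

ego = (0.0, 0.0, 0.0, 4.8)              # x, y, heading, length
cyclist = (3.0, 1.0, np.pi / 2, 1.8)
print(max(safety_constraints(ego, cyclist, other_is_pedestrian=False)))   # > 0 means too close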
D. Environment Model and Uncertainty
The environment model f(·, ·, ·) is established to predict the transition of the driving environment within the predictive horizon of (3); it consists of the model of the ego vehicle, f_ego, and that of the surrounding participants, f_other. The former governs the motion of the automated vehicle, for which a precise dynamic model has been developed and verified [33], and its state-space equation is adopted here. The key parameters of the ego vehicle are chosen in accordance with a physical vehicle, as shown in Table III. f_other is in charge of predicting the motion of the surrounding traffic participants, which enters the calculation of the safety requirements. Usually, kinematic models are adopted for their simplicity and good performance on structured roads [34]. However, such models cannot describe the randomness at intersections, so we add uncertainty terms ξ_x, ξ_y, ξ_v, ξ_ϕ to the corresponding states of the basic kinematic model and construct a stochastic prediction model, in which R indicates the turning radius determined by the route and position of the corresponding participant. The noise ξ = [ξ_x, ξ_y, ξ_v, ξ_ϕ] is generated by the adversarial policy π_φ, i.e., ξ ∼ π_φ, which learns aggressive noise that tends to violate the safety constraints.
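A plausible instantiation of such a noise-perturbed kinematic predictor is sketched below (the exact state-update equations of the paper are not reproduced here, so the constant-speed, turning-radius-based update and all names are our own assumptions):

import numpy as np

def predict_other(state, xi, dt=0.1, turn_radius=np.inf):
    """One-step kinematic prediction of a surrounding participant,
    perturbed by the bounded noise xi = [xi_x, xi_y, xi_v, xi_phi]."""
    x, y, v, phi = state
    xi_x, xi_y, xi_v, xi_phi = xi
    x_next = x + v * np.cos(phi) * dt + xi_x
    y_next = y + v * np.sin(phi) * dt + xi_y
    v_next = v + xi_v
    phi_next = phi + (v / turn_radius) * dt + xi_phi   # straight motion when the radius is infinite
    return np.array([x_next, y_next, v_next, phi_next])

# nominal prediction vs. an adversarially perturbed one (noise clipped to its bounds)
state = np.array([10.0, 2.0, 6.0, np.pi])
xi_max = np.array([0.3, 0.3, 0.5, 0.05])
print(predict_other(state, np.zeros(4)))
print(predict_other(state, np.clip([1.0, -1.0, 1.0, 1.0], -xi_max, xi_max)))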
E. Dealing with traffic lights
After offline training, we have obtained v_w* for path selection and π_θ* for path tracking. For the subsequent online application, we can further combine human experience with the trained functions to handle special traffic conditions at signalized intersections such as yellow lights or traffic congestion. Therefore, a series of rules is established, in terms of the driving state of the ego vehicle and the phase of the signal lights, to select the appropriate velocity mode, i.e., the pass mode or the stop mode, as shown in Fig. 4.
We first judge whether there is traffic congestion ahead, i.e., whether the front vehicle has kept stopping for over 3 s. If so, the stop mode in Fig. 3 is chosen to avoid further deterioration of traffic efficiency; otherwise we judge whether the ego vehicle has passed the stop line, based on the vehicle's position and the road map. If it has, the pass mode is selected, as the vehicle has already entered the intersection. Otherwise, a final judgement is made to further distinguish situations, jointly determined by two conditions, denoted C_1 and C_2: 1) C_1 ∈ {R, G, Y} denotes the current light phase, where R, G, Y indicate the red, green and yellow lights respectively. 2) C_2 ∈ {T, F} denotes whether the ego vehicle can stop in front of the stop line before the red light comes when the current light is yellow; T and F abbreviate TRUE and FALSE respectively. Therefore, the ego vehicle chooses the stop mode when encountering a red light, or when there is enough time and distance to decelerate at a yellow light. In other situations, for example when the light is green, or yellow but braking to a stop is infeasible, the ego vehicle directly passes the intersection under the guidance of the pass velocity mode.
Formally, C_1 can be captured directly by the vehicle sensors or by communication devices bound to the signal lights. C_2 can be derived from the current speed of the vehicle v_x and a constant-deceleration model. At each time step during the yellow light phase, the ego vehicle is assumed to brake with 80% of the maximum deceleration, i.e., a_M = −2.4 m/s². Based on that, the longest deceleration distance D_e and time t_e can be deduced as D_e = v_x²/(2|a_M|) and t_e = v_x/|a_M|. At this moment, the distance of the ego vehicle to the stop line, D_y, can be read from the high-precision map. According to Chinese traffic rules, the duration of the yellow light is always 3 s, so the remaining time, denoted t_y, can be obtained by counting from the initial time stamp of the yellow light. Based on these conditions, C_2 = T holds if D_y ≥ D_e and t_y ≥ t_e are satisfied simultaneously; otherwise C_2 = F, indicating that the ego vehicle cannot stop in time with this constant deceleration, and the pass velocity is given to encourage passing before the red light. Note that this selection procedure is conducted at every step of the driving process, so the output mode may change with the vehicle state and the light duration. Besides, the stop or pass mode only determines the speed error ∆v in the tracking error of (12). After that, each candidate route is further adopted to construct the other elements ∆x, ∆y and ∆ϕ respectively, and v_w* selects the optimal one to track.
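The whole stop/pass selection can be condensed into a short rule (a sketch; the congestion flag and function names are simplifications of the flow described above and in Fig. 4):

def choose_velocity_mode(v_x, dist_to_stop_line, passed_stop_line, light,
                         t_yellow_remaining, front_stopped_over_3s,
                         a_max_brake=3.0):
    """Return 'stop' or 'pass' according to the signal-light rules."""
    if front_stopped_over_3s:              # congestion ahead
        return "stop"
    if passed_stop_line:                   # already inside the intersection
        return "pass"
    if light == "red":
        return "stop"
    if light == "green":
        return "pass"
    # yellow light: brake with 80% of the maximum deceleration
    a_m = 0.8 * a_max_brake
    d_e = v_x ** 2 / (2.0 * a_m)           # longest deceleration distance D_e
    t_e = v_x / a_m                        # corresponding deceleration time t_e
    can_stop = dist_to_stop_line >= d_e and t_yellow_remaining >= t_e
    return "stop" if can_stop else "pass"

# 6 m/s, 20 m before the stop line, 2.5 s of yellow left -> the vehicle can stop in time
print(choose_velocity_mode(6.0, 20.0, False, "yellow", 2.5, False))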
V. SIMULATION RESULTS
This section presents the training settings, visualizes the training curves and tests the generalization of the driving policy.
A. Training settings
The NNs of the driving policy and the adversarial policy employ a similar architecture, wherein a multi-layer perceptron (MLP) with 5 hidden layers of 256 units each is adopted as the approximate function. Considering the stochastic nature of both policies, the outputs of the MLPs are the mean and covariance of a Gaussian distribution with a diagonal covariance matrix; in this case, each policy maps the input state to the mean and the logarithm of the standard deviation of the Gaussian distribution. Moreover, all hidden layers take GELU as the activation function, while the output layer uses the hyperbolic tangent to bound the output into the predefined range shown in Table IV. The value network adopts almost the same architecture as the policy, except that the output activation is ReLU, producing a positive scalar that indicates the path value. The Adam method [35] with a cosine annealing learning rate is adopted to update all networks. The predictive horizon T is set to 25 and the penalty factor ρ to 15. We also adopt parallel training as in [36] to accelerate the training process, with 10 workers sampling from the driving environment, 10 buffers storing samples and 18 learners updating the networks. Table IV provides more detailed hyperparameters of our training. We then compare the training performance of APG with a baseline called deterministic policy gradient (DPG), where the only difference lies in the treatment of the randomness in the surroundings model f_other. DPG assumes the empirical model f_other is accurate enough, so the uncertainty ξ is ignored and a single driving policy is trained, whereas APG introduces the adversarial policy to simulate malicious noise, and the driving policy must learn to handle such tough cases to make reasonable decisions. During the training process, we record the policy performance and value loss every 1000 iterations and visualize the learning curves in Fig. 5, including the value loss J_v, the tracking loss J_track, the safety loss J_safe and their summation, i.e., the total policy loss J_π. Note that these quantities only reflect training against the environment model established in Section IV-D, not the driving performance in the real driving environment. To assess the latter, we also test the intermediate policies every 1000 iterations by applying them to control the automated vehicle through the intersection for 10 runs in the real environment. In each run, the vehicle is initialized outside the intersection and moves forward for 120 steps, during which the utility is accumulated. We use the total average return (TAR) to reflect the real performance, i.e., TAR = (1/10) Σ_{runs} Σ_{t=0}^{119} [ l(s_t, u_t, τ) + ρ ϕ(s_{t+1}) ].
And Fig. 6 visualizes the T AR performance during the training process.
We can see from Fig. 5 that all losses of the two training paradigms decline remarkably as the iterations increase, indicating that the driving policy is improved progressively under the guidance of the corresponding simulation models. In addition, the tracking loss and safety loss descend almost synchronously until they drop to the same level, meaning that our driving policy balances the optimization of tracking performance and safety requirements wisely. However, there is a small difference between APG and DPG. The latter seems to obtain better policy performance, as shown in Fig. 5a, because it reaches a lower tracking cost in Fig. 5c. That is, APG has to sacrifice some tracking performance to guarantee a fair safety cost, which is explicable because the adversarial policy provides strong disruptions to the learning of the driving policy. In this case, the automated vehicle learns to avoid potential collisions by decelerating or detouring when encountering malicious surrounding vehicles, which lowers the tracking performance. As for the test performance of the two driving policies shown in Fig. 6, both indeed behave better and better at the real intersection, but APG improves TAR by a large margin. Besides, DPG suffers slightly higher fluctuations than APG, as shown by the confidence intervals, because it is more likely to fail when encountering unusual conditions. This result is instructive: although training can converge on the constructed simulation model, the resulting policy may hardly work in the real environment if there is a large gap between the simulation model and the real driving environment. Thus, for model-based RL or classical control methods such as Model Predictive Control (MPC), model uncertainty should be considered, especially on complex urban roads. Obviously, the kinematic model cannot describe the randomness of the intersections in our simulation, as there exist complex interactions between different traffic participants.
B. Visualization
To verify the rationality of the paths generated by the general static path planner, we first test the path-tracking function with the trained policy, removing all traffic flows at the intersections plotted in Fig. 2. We conduct 200 runs in total over the six distinct intersections and record the tracking error and vehicle state at each step. We randomly sample the route and velocity mode for each run and always initialize the automated vehicle outside the intersection; the trained policy then drives it to track the given path. The distributions of the tracking error and vehicle state are shown in Fig. 7. Note that the distance error is defined from the longitudinal and lateral errors ∆x and ∆y as √(∆x² + ∆y²). We can see that the tracking error is close to zero in most cases, with a maximum of 0.2 m. The maximum speed error is less than 3 m/s and is also mainly concentrated around 0. Fig. 7b demonstrates that the steering angle and acceleration are close to zero in most cases, with the steering angle kept below 150° and the absolute value of acceleration below 1.2 m/s². In addition, there is no situation where the two variables are large simultaneously, which means vehicle stability can be guaranteed. To sum up, the general static path planner can generate reasonable candidate paths for diversified intersections, which are easily trackable for the automated vehicle with acceptable energy consumption.
Next, we visualize in Fig. 8 and Fig. 9 a typical driving process of the unprotected left-turn task, which is considered one of the most difficult tasks at signalized intersections, together with the curves of some important vehicle states. The automated vehicle starts from the left-turn lane with the green light shown in Fig. 8a, wherein the pass velocity in Fig. 9a and the bottommost route are chosen to drive toward the intersection as fast as possible. Then the signal light enters the yellow phase, and the stop velocity mode immediately takes over to decelerate the ego vehicle, as there is still a long distance to the stop line and a front vehicle is attempting to stop, as shown in Fig. 8b. After that, the ego vehicle continues decelerating and stops to wait for the red light, as in Fig. 8c and Fig. 9b. Next, although the green light appears at 60 s in Fig. 9a, the stop velocity still takes effect because the front vehicle yields to other vehicles stuck inside the intersection. Later, the ego vehicle chooses the pass mode and accelerates to a high speed, as shown in Fig. 9d, and it detours around the sluggish front vehicle by choosing the topmost path, as shown in Fig. 8d, which requires steering right for a while, as shown in Fig. 9c. After that it quickly switches to the middle path in Fig. 8e to pass ahead of the oncoming straight-going vehicle. Soon, our automated vehicle drives close to the sidewalk in Fig. 8g, through which some pedestrians are walking; thus, it decelerates in advance and yields to the pedestrians, as shown in Fig. 8f and Fig. 9b. Finally, the automated vehicle starts to accelerate again at 86 s and passes this intersection successfully, as demonstrated in Fig. 8h and Fig. 9d, during which it prefers to track the middle path because the sparse participants in that lane reduce potential collisions.
Besides, we also visualize the straight-going and right-turn tasks at two other intersections. In Fig. 10, the ego vehicle starts close to the stop line at a green light in Fig. 10a, and thus it disregards the yellow light by choosing the pass velocity in Fig. 10b. After entering the intersection, it steers left to avoid the opposite turning-left vehicle by selecting the leftmost route in Fig. 10c, and finally finishes safe navigation of this intersection, as shown in Fig. 10d. In Fig. 11, the ego vehicle is initialized in the right-turning lane in Fig. 11a with a constantly green light. It then accelerates to pass before the oncoming pedestrians in Fig. 11b and attempts to avoid the front vehicle by choosing the middle path in Fig. 11c. Eventually, it merges into the straight-going traffic flow and passes successfully in Fig. 11d.
C. Generalization ability
Fig. 8: Trajectory visualization for the left-turn task. The red box represents the ego vehicle controlled by the trained policy and the highlighted curve is the optimal path chosen by the value function. The black, blue and purple rectangles indicate the surrounding vehicles, cyclists and pedestrians respectively. The shaded sectors are the perception ranges of the radar, camera and lidar, and the red dots trailing behind the automated vehicle show its history trajectory up to this moment.

Beyond the performance during the training process, we are more concerned with situations distinct from the training environment; for example, the surrounding participants may exhibit abnormal behaviors that are rarely encountered during training. Here, we design two typical conditions in which part of the surrounding vehicles are endowed with abnormal driving behaviors to test the generalization ability. Our tests focus on the unprotected left-turn task with the light set to green. In the first case, shown in Fig. 12a, 15% of the vehicles in the opposite straight lane overspeed so as to rush through the intersection as quickly as possible. During training, the speed limits of the surrounding vehicles are set to 25 km/h inside the intersections and 30 km/h outside; during testing, part of the surrounding vehicles exceed these speed limits by 10%, 20% and 50% respectively. In the second case, shown in Fig. 12b, a proportion of the vehicles in the traffic flow violate the road connection and attempt to turn into the inside lane, whereas only the middle lane is allowed during training; this proportion is set to 10%, 20% and 50% respectively. Two crucial indicators, the passing rate and the travel time, are introduced to evaluate the driving performance of the trained policy at this intersection. The passing rate is defined as the ratio of successful passes to the total 200 runs, where a successful pass means the automated vehicle rides out of the intersection without colliding with other participants or the road edge and without violating the traffic lights. The travel time is the average time used to pass the intersection, counted from entering the intersection at the stop line.
The test results of the speeding case are shown in Table V. We can see that the driving policy of APG shows stronger resistance to the overspeeding behavior of surrounding vehicles, while that of DPG suffers a sharp drop in the passing rate.
APG fully handles speeding by 10%, and its passing rate remains steady as the speeding degree increases from 20% to 50%. From the travel time, APG has learned to yield to the aggressive vehicles by decelerating and waiting for them to pass first, which sacrifices some passing efficiency. This driving strategy seems reasonable because it assures the safety of the automated vehicle as much as possible. In summary, the policy trained with APG not only achieves promising performance in the training environment, but also gains more resistance to disturbances from the environment.
VI. CONCLUSION
This paper focuses on decision-making and control at signalized intersections, which are crucial and challenging for the widespread adoption of autonomous driving. To that end, we first design a general static path planner for intersection scenarios, consisting of route planning and velocity planning; it can generate smooth and trackable paths for diversified intersections and features high efficiency. Secondly, we construct a constrained OCP for integrated decision and control wherein the bounded uncertainty of the dynamic models is considered to capture the randomness of the driving environment. APG is proposed to solve this OCP, in which the adversarial policy is introduced to provide disturbances by seeking the most severe uncertainty while the driving policy learns to handle this situation through competition. Finally, a comprehensive system is established to conduct training and generalization tests, wherein a perception module is introduced and human experience is incorporated to solve the yellow light dilemma. Results indicate that the trained policy can handle yellow signal lights flexibly and realize smooth and efficient passing in a human-like manner. Besides, the proposed APG enables a large-margin improvement in resistance to the abnormal behaviors of traffic participants and ensures a high safety level for autonomous driving. In future work, we will build large-scale intersections to train a more powerful driving policy and conduct real-vehicle experiments to further verify its effectiveness.
|
2022-04-12T01:16:20.237Z
|
2022-04-09T00:00:00.000
|
{
"year": 2022,
"sha1": "f4a7499d7d2bfa87ba486b6ec0d0ebd752e49a23",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "f4a7499d7d2bfa87ba486b6ec0d0ebd752e49a23",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
}
|
225234885
|
pes2o/s2orc
|
v3-fos-license
|
The Worldwide Phenomenon of Nationalism versus Globalism: A Rhetorical Perspective
Aristotle long ago explained the definition and essential praxis of rhetoric: the faculty of observing in any given situation all available means of persuasion. Some 2400 years later, the global praxis of politics is being jolted by persistent bouts of nationalism that threaten a normative order of globalism evident at least since the end of World War II. This essay provides a rhetorical study of the phenomenon of globalism. First, prominent examples of nationalism and their attendant political rhetoric are examined through four case studies; second, consistent implications from these disparate examples and case studies form a basis for a unifying rhetorical theory of nationalism versus globalism in the 21st century. The observed characteristics of nationalist versus globalist rhetoric include: nationalist rhetoric appeals to local interests and casts external interests as a disruptive threat; globalist rhetoricians and sympathetic media cast their competing interests as evolutionary and progressive; nationalist rhetoric is currently resurgent and more successful than its globalist rhetorical counterpart; and there is not a singular globalizing narrative pulling humanity toward a unipolar order. Twenty-first century rhetoric draws from these elements to form persuasive political arguments in the present era.
Overview
Initially, it is important to understand the status quo being challenged by ascendant nationalism. The global order is often described as featuring, and progressing toward, a notion of globalism. What does globalism mean? Globalism has at least several evident characteristics: 1) increased interdependence of nations with regard to various interests, including trade and military action; 2) increased acceptance and even promotion of immigration from areas of despondency to areas of prosperity; 3) urbanization increasing while agrarian and rural society declines; and 4) an increasing rhetorical profession of support for interstate agreements such as climate accords, or for the joining of states in affiliations such as the European Union. Many more characteristics could likely be added, and each of these could be meaningfully subdivided into further categories, but these four provide a recognizable rhetorical coherence for what is meant by terms such as globalism.
Since the end of World War II, rhetoric offered in various national venues, often by major political leaders, has directed the world toward increasing processes of globalism. Aristotle provides the original idea of rhetorical study in his definition of the term: "Rhetoric is the faculty of observing in any given situation all available means of persuasion" Aristoteles (1975). As such, we are concerned with the persuasive elements involved in communication. More than that, however, rhetorical study requires an examination of communication in its contexts so we can better understand the original communication. In this essay we examine the argumentation surrounding globalism to gain a better understanding of the competing communication of globalism and nationalism. Free trade agreements between national pairs, and increasingly trade agreements as regional pacts such as NAFTA, and now the USMCA, signaled a rhetorical notion of progression. The world would reduce the risk of war and the incidence of poverty by adhering to such an agenda of interoperable economies and national populations. Global commitment to this assumption is irregular, and this of course adds to global conflict on the question of globalism. Russia, China, and Iran do resist this notion of cooperative economics. Profound correlations tend to support the positive results of these globalization efforts, however. Scholar Steven Pinker (2018) documents an overwhelming raft of data that points to success in reducing a range of harms from unclean water to disease and poverty. These declines are not marginal; they are in fact significant. Rosling (2019) is another data researcher who offers compelling confirmation of this correlation. This evidence, and much more, constitutes a substantial rhetorical basis, or point of argument, for the resistance to nationalism and ongoing adherence to a now established status quo of globalization.
Before trying to explain, we must now observe the ascendant nationalism. What is meant by nationalism? Nationalism must at least mean a domestic, internal preference for the interests of the citizens within a nation. More decidedly, nationalism today tends to mean that the broad forces of multiculturalism and exchange threaten traditional norms of a national society, and that those norms warrant an evident political defense. Local community members are defending their traditions with public arguments, and this constitutes the key basis of our current study. Nationalism responds to threats from outside the nation. Globalists will tend to describe these threats as "perceived." Herein lies an important rhetorical dialectic over which globalists and nationalists struggle. What is a real threat? Is immigration a threat? Is free trade a threat? Is global warming a threat? All of this establishes an effective basis, while not exhausting the topic, for understanding what nationalism is and how we might recognize it rhetorically when we observe it, as Aristotle suggests.
Specific examples of ascendant nationalism arguably include the election of various leaders from countries around the world: Donald Trump, United States; Narendra Modi, India; Boris Johnson, Britain; Scott Morrison, Australia; Andrzej Duda, Poland; Vladimir Putin, Russia; and Muhammadu Buhari, Nigeria. The issue is much more granular than the election of national leaders. Within this vast political sphere and dialectic of globalism versus nationalism are competing issues of trade, immigration, and interstate compacts. The controversy over Britain's exit from the European Union is rooted in a referendum of the British public that shocked international observers, who encountered a public willing to tear up one of the most significant international agreements on the planet: the European Union. The long struggle to accomplish that public will of leaving the EU failed at the hands of Theresa May, and the sensational nature of this public unrest has its own salient term: "Brexit." The rhetorical salience is apparent in the viral nature of the name, which is appropriated to describe any number of local populace disruptions of consensus. "Calexit," for example, is a term describing the possible breakup or exit of California from the United States. Brexit is absolutely a European signifier, meaning that its ultimate success could possibly lead to the complete dissolution of the European Union. These examples are not only economic. Moral issues can also serve as a point of rhetorical and argumentative struggle globally. In the United States, for instance, a seemingly mundane vote about gay marriage and Methodist unity was shattered over the independent voices of African Methodist delegates who would not be dominated by their established white counterparts on such a contentious theological point. Their localism disrupted a globalist trend toward greater sexual permissiveness that seemed assured as a pattern. The phenomenon is not limited to conventional western powers, however. In the examples that follow, we review the communication and contexts of Brexit, Australian nationalism, Hindu nationalism, and Nigerian nationalism.
Brexit Case Study
Britain's effort to remove itself from the EU is an iconic case study in nationalism rhetoric. Observe this sensational description of events from September 2019: Jeremy Corbyn and the other leaders of the "Rebel Alliance" today agreed to work together to stop Boris Johnson forcing an early general election on Monday as the chances of a Brexit delay increased.
Labour, the SNP, the Liberal Democrats, the Greens and Plaid Cymru will either vote against the government or abstain when Mr. Johnson holds a crunch vote at the start of next week in a bid to go to the country on October 15.
The Prime Minister will need the support of two thirds of the House of Commons to succeed but with the opposition now all on the same page his attempt at triggering a snap poll appears doomed to failure.
That could leave the PM stuck in Number 10 but unable to deliver a No Deal Brexit on October 31 and he could be forced to resign rather than break his "do or die" pledge.
Mr. Johnson today declined to rule out resigning if he fails to deliver Brexit by the current deadline as he embarked on a visit to Scotland.
He said: "That is not a hypothesis I'm willing to contemplate. I want us to get this thing done" (Maidment, 2019).
The terminology of "Rebel Alliance" is a clear allusion to the Star Wars film trilogy and rhetorically casts those defending the status quo of globalism as insurgent rebels fighting off the likes of Darth Vader and various supporters of Brexit such as Boris Johnson. This is rather ironic, since the EU is the normative status quo and the rebellion is clearly a populist insurgency that opposes that status quo and seeks an exit from the EU grand alliance.
Australia Case Study
Slate magazine provides another provocative rhetorical framing of the nationalist insurgency in Australia. With the provocative headline "What the Bloody Hell Just Happened in Australia? A shocking election upset has confused Australians searching for answers," Slate proceeded to describe in blunt terms the inconceivable Australian backlash against globalism: The polls were wrong. The pundits were wrong. The party insiders were wrong. The bookies were wrong. I was wrong. Even Burt the psychic croc was wrong.
Australia's dysfunctional, unpopular, conservative government (the Liberal and National parties, currently in coalition, sit on the right in Australian politics) held onto power for a third term in Saturday's national election. This happened despite the fact that most analysts expected it to lose a large number of seats; despite being (seemingly) out of step with the nation's emerging consensus on climate change, marriage equality, religion, and race; despite a chaotic tenure in office that has seen three leaders since 2016; despite a threadbare policy agenda; despite many of its high-profile figures recently retiring in frustration or anticipation of defeat; despite betting agencies paying out Labor backers early; despite losing more than 50 consecutive opinion polls. After all of it, the conservatives won the only poll that mattered, in what reelected Prime Minister Scott Morrison, an evangelical Christian, called "a miracle" (Withers, 2019).
Slate provides a rhetorical framing of shocked indignation, not unlike the reaction to the election of President Trump in November 2016. Progressive issues such as gay marriage and climate change form key aspects of a sense of political inevitability envisioned by Slate. Scott Morrison was re-elected to the government of Australia, and it appears again that there is a hidden bloc of voters, unseen by major media, at work down under. Moreover, in reading this article the reactionary stance of the writer toward Christians is evident and directed at the identity of the elected "evangelical" leader of Australia. Christianity is a 2,000-year tradition of religion, ethics, and politics (millennia longer if the "Judeo" part of the Judeo-Christian tradition is considered). Slate's editorial anger that this Christian leader could be reelected challenged its notion of "progress." There is a clear sense of epistemic collapse. The normal order of explanation is no longer functional (Withers, 2019).
Hindu Nationalism in India
India is on track to become the most populous nation in the world by the middle of the 21st century. This nation recently experienced a resurgence of nationalism in its affinity for the Hindu nationalist party, the BJP. This media example provides an indication of the populist popularity of Prime Minister Modi:
On a hot day in May, Indian Prime Minister Narendra Modi held a campaign rally in the central Indian city of Indore. Young men spread as far as the eye could see. Wearing bright orange T-shirts that said "NAMO AGAIN!" in comic-book letters, they scanned the sky for the helicopter carrying their superhero, the soon-to-be-reelected prime minister.
Patriotic songs from major Bollywood war hits played in the background. A particular favorite, from the film Border, was sung in the film by homesick soldiers on the battlefront, remembering the mothers and sweethearts who waited for them at home. The song was apt for what had become a national-security election, fought in the shadow of air strikes against Pakistan. The young men at the rally were soldiers, too, of a sort-soldiers for the cause of a Hindu nation (Chandra, 2019).
The synthetic nature of nationalism was apparent in a recent rally in Houston, Texas, featuring Modi and Trump (Kapur, 2019). Both men worked together to encourage mutual support among Indians for the two leaders. The two men are raising their internal legitimacies by appealing to notions of nationalism. India is an important global example since it accounts for such a huge portion of the global population and is a rising force in the global economy. The political rivalries of India with Pakistan and China make the nation a strategic partner for the United States, which increasingly seeks to contain both Pakistan and China as threats to American interests.
Nigerian Nationalism
Nigeria's emergence as the leading economy on the continent of Africa has not made it immune to the struggle between globalism and nationalism. South Africa was formerly the largest economy, and a sense of rivalry is evident in this media sample from BET entitled "Nigerian Youth Coalition Warns South African Investors To Leave Nigeria": The nationalist youth group issued a seven-day ultimatum, threatening to attack South African investors with thriving businesses throughout Nigeria.
Amid widespread reports alleging Nigerian immigrants are under fatal xenophobic attacks in South Africa, Sahara Reporters announced . . . that the Oodua Youth Coalition issued a seven-day ultimatum to all South African business owners and investors in Nigeria, warning them "to leave the country or risk being attacked." The announcement follows reports saying Nigerians and their businesses in South Africa are under brutal attack.
"Oodua Youth Coalition is saddened and angered that South Africans, supported by the country's authorities, is coordinating the looting and burning of Nigerian businesses and maiming and killing of our brothers and fathers in their land," said President of the OYC, Oluyi Tayo, in a statement" (Estevez, 2019).
Nigeria is now the largest economy on the African continent. Since Nigeria surpassed South Africa, the sense of rivalry between the former economic powerhouse and the new leader has been evident in this immigration struggle, with reciprocal violence against businesses of each nation. Leaders can rhetorically appeal to notions of nativism. Immigrants are not viewed positively, and successful integration is treated as a precondition for stable internal economic activity. Both nations are threatening immigrants from the other country in their economic rivalry.
A 2020 view of essential characteristics of global nationalist rhetoric
From these case studies we can derive a number of common characteristics. These consistent trends and characteristics may make us more adept at understanding ongoing 21st century rhetoric on these and related questions as they continue to emerge. The observed characteristics of nationalist versus globalist rhetoric include:
1. Nationalist rhetoric appeals to local interests and casts external interests as a disruptive threat.
2. Globalist rhetoricians and sympathetic media cast their competing interests as evolutionary and progressive.
3. Nationalist rhetoric is resurgent and more successful than its globalist rhetorical counterpart.
4. There is not a singular globalizing narrative pulling humanity to a unipolar order.
These characteristics can help us recognize future instances of such rhetoric and help us understand current communication struggles surrounding globalism. Globalism does appear to be in decline, and many more examples could be chosen to illustrate these points, including Russia, Poland, and Japan. The examples examined here are good representations of the world as a political stage and involve the lives of almost two billion people.
Summarizing Thoughts
Examining four major global examples of nationalist rhetoric provides a rhetorical framework for our ongoing dialectic between arguments for globalism and arguments for localism expressed as nationalism. The evidence suggests that nationalism is ascendant and that globalism is in decline as a rhetorical feature in public arguments. The natural human craving for simple explanations, such as that of steadily globalizing and integrating societies, may cause us to misread current events. A variety of local narratives are at work, and deference to those local narratives appears to be on the rise rhetorically. It is possible that cosmopolitan integrationist impulses operate on a cyclical basis. Human societies may open themselves to various outsider points of view and then recoil to consolidate and maintain tradition.
This pendulum swing between nationalism and globalism may point to our present 2020 world of ascendant nationalism, as local communities affected by record global levels of immigration and economic integration draw on rhetorical traditions and patterns to re-assert their role in human affairs. It is possible that the ascendancy of nationalism may come to an end in the future, but the current dominant rhetorical pattern is nationalist appeals, and this pattern has clear, recognizable rhetorical features discerned in our examples. President Trump's September 2019 remarks to the United Nations about a future belonging to patriots signal the bold trend discovered in this analysis. The remarks were delivered within the halls of an institution dedicated to ideals of globalism. The irony of this statement is a good summation of the global human condition, oriented more toward nationalism and less toward globalism.
|
2020-09-03T09:12:22.884Z
|
2020-08-27T00:00:00.000
|
{
"year": 2020,
"sha1": "25caf885dad2a0bad2b19e81273f769bb03aa246",
"oa_license": "CCBY",
"oa_url": "https://www.wcsaglobal.org/wp-content/uploads/2020/08/Volume-1-Nb1-WCSA-Journal-2-Special-Session-Article-7-Voth.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "69405af2a73f96390c8f37736e8471f8a819f4db",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Political Science"
]
}
|
7722844
|
pes2o/s2orc
|
v3-fos-license
|
Contribution of the Mevalonate and Methylerythritol Phosphate Pathways to the Biosynthesis of Dolichols in Plants*
Plant isoprenoids are derived from two biosynthetic pathways, the cytoplasmic mevalonate (MVA) and the plastidial methylerythritol phosphate (MEP) pathway. In this study their respective contributions toward formation of dolichols in Coluria geoides hairy root culture were estimated using in vivo labeling with 13C-labeled glucose as a general precursor. NMR and mass spectrometry showed that both the MVA and MEP pathways were the sources of isopentenyl diphosphate incorporated into polyisoprenoid chains. The involvement of the MEP pathway was found to be substantial at the initiation stage of dolichol chain synthesis, but it was virtually nil at the terminal steps; statistically, 6–8 isoprene units within the dolichol molecule (i.e. 40–50% of the total) were derived from the MEP pathway. These results were further verified by incorporation of [5-2H]mevalonate or [5,5-2H2]deoxyxylulose into dolichols as well as by the observed decreased accumulation of dolichols upon treatment with mevinolin or fosmidomycin, selective inhibitors of either pathway. The presented data indicate that the synthesis of dolichols in C. geoides roots involves a continuous exchange of intermediates between the MVA and MEP pathways. According to our model, oligoprenyl diphosphate chains of a length not exceeding 13 isoprene units are synthesized in plastids from isopentenyl diphosphate derived from both the MEP and MVA pathways, and then are completed in the cytoplasm with several units derived solely from the MVA pathway. This study also illustrates an innovative application of mass spectrometry for qualitative and quantitative evaluation of the contribution of individual metabolic pathways to the biosynthesis of natural products.
Polyisoprenoid alcohols together with sterols and quinone side chains constitute three main branches of terpene products originating from farnesyl diphosphate (FPP) (1). These linear five-carbon unit polymers are divided into two groups, i.e. polyprenols and dolichols, according to the hydrogenation status of the α-terminal isoprene unit (the dolichol structure is shown in Fig. 1). In cells, polyprenols and dolichols are always found as mixtures of prenologues, and data collected so far show polyprenols to be typical for bacteria and plants, whereas dolichols are generally attributed to animals and yeast (2). Nevertheless, it should be remembered that dolichols are the predominant form in some plant organs like roots (3). Data on the occurrence and functions of polyisoprenoids are summarized in recently published reviews (4, 5). The formation of the polyisoprenoid chain, starting from the ω-end of the molecule (Fig. 1), proceeds in a biphasic manner, with farnesyl-diphosphate synthase responsible for the synthesis of all-trans-FPP (three isoprene units of ω-t2 structure, where t stands for a trans-isoprene unit) and its further elongation by cis-prenyltransferase. The latter enzyme, cloned from several prokaryotic and eukaryotic organisms (see Refs. 6, 7 and references therein), including Arabidopsis thaliana (8, 9) and Hevea brasiliensis (10), utilizes isopentenyl diphosphate (IPP) for elongation of FPP up to the desired chain length, thus producing a family of polyprenyl diphosphates (n isoprene units of ω-t2-c(n-3) structure, where c stands for a cis-isoprene unit), which are subsequently converted to polyprenols or dolichols according to the "tissue-specific requirements" by a still unknown mechanism.
In plant cells two pathways are known to produce the IPP utilized by numerous enzymes to finally give more than 50,000 different isoprenoid structures: the mevalonate pathway (MVA) and the mevalonate-independent methylerythritol phosphate pathway (MEP) (for reviews, see Refs. 11-13). Both pathways are compartmentalized as follows: the MVA pathway in the cytoplasm, providing sterols, many sesquiterpenes, and the prenyl chains of ubiquinones, and the MEP pathway in the plastids, giving hemi-, mono-, and diterpenes, carotenoids, and the side chain of plastoquinone. Although both pathways are thought to operate independently under normal conditions, interactions between them have been reported repeatedly. As a result of the exchange of intermediates derived from both pathways, isoprenoids of "mixed origin" have been described (reviewed in Ref. 12). More recently, the potential for cross-talk between the MEP and MVA pathways has been directly proven by genetic modifications of pathway-specific enzymes and by application of pathway-specific inhibitors, namely mevinolin (14), also referred to as lovastatin, a highly specific inhibitor of 3-hydroxy-3-methylglutaryl-coenzyme A reductase in the MVA pathway, and fosmidomycin (15), a specific inhibitor of 1-deoxy-D-xylulose-5-phosphate reductoisomerase in the MEP pathway. Both inhibitors have recently been used to perturb biosynthetic flux in hairy roots (16, 17). The involvement of both pathways leading to the formation of a studied isoprenoid compound is most often estimated by the application of specifically labeled [13C]glucose. A pathway-specific pattern of 13C label within the isoprenoid residues (Fig. 2) allows their origins via the MEP or MVA route to be discerned.
Using [3H]mevalonate, it has been shown that in plants mainly cis-polyisoprenoid alcohols are synthesized via the MVA pathway (18, 19), similarly to mammalian dolichols (20, 21), although the possibility of an input from the alternative MEP pathway has not been addressed. Localization of polyprenols to plastids (22-24) and of the accompanying dolichols (a small fraction of the total polyisoprenoid pool) to microsomes (25) might be an indication of their MEP and MVA origin, respectively. On the other hand, the MEP pathway has been found to contribute significantly to the synthesis of solanesol, an all-trans Pren-9 (26), whereas the solanesyl-like side chains of ubiquinone turned out to be synthesized from IPP derived from the MVA pathway (27).
A hairy root culture of Coluria geoides was established for eugenol production (28); and because it also produces dolichols, it could be a useful model for studies of the early steps of dolichol biosynthesis in plants. The recently developed HPLC/electrospray ionization-mass spectrometry methods (3) should greatly facilitate precise investigation of their structure. Despite many efforts, the biosynthetic origin of polyisoprenoid alcohols in plants remains unclear. Therefore, we have decided to analyze which pathway is the source of the IPP built into dolichol.
Here we report that both pathways are involved in the biosynthesis of dolichols in hairy roots of C. geoides. The ω-terminal isoprene unit and several subsequent ones are synthesized with an involvement of both the MEP and MVA pathways, in contrast to the very last α-terminal and a few preceding units, where the contribution of the MEP pathway is negligible. According to our MS data, on average 6-8 isoprene units per dolichol molecule (ranging from 14 to 18 isoprene units) are formed in the MEP-dependent manner. A model is discussed, suggesting spatial regulation of dolichol synthesis and a unidirectional influx of IPP into plastids.
EXPERIMENTAL PROCEDURES
For 13C NMR analysis, root cultures were fed a mixture of native glucose and [1-13C]glucose (12C:13C) at a 9:1 ratio (w/w). The maximal possible 13C abundance in the products was thus 5.4% (see "Results"). For mass spectrometry either [13C]glucose alone or, when indicated, native glucose supplemented with [5-2H]mevalonate (3.9 mg/flask) or 1-deoxy-[5,5-2H2]xylulose (4.6 mg/flask) was used. For inhibitor studies regular medium was supplemented either with mevinolin (30 μM) or fosmidomycin (100 μM) for the last 3 days of the culture, and no symptoms of toxicity were observed during these treatments. Two independent feeding experiments were carried out, each performed in duplicate. Nonsaponifiable lipids obtained from dried hairy roots were purified chromatographically as described earlier (3). For NMR analysis the dolichol-enriched fraction eluted from the silica gel column was further subjected to flash chromatography on RP-18 Lichroprep gel suspended in methanol. Pure dolichol mixture was eluted with acetone.
NMR Analysis-1H and 13C NMR spectra of metabolically 13C-labeled dolichols (7 mM) were obtained with a Varian INOVA 400 MHz (Palo Alto, CA) spectrometer at 25°C in C6D6. Two-dimensional {1H, 13C} gradient heteronuclear single quantum correlation experiments (31-33) were performed in proton-decoupled mode with a carbon spectral width of 25 kHz and 256 increments. The one-dimensional 13C experiment was performed with 64 K data points and a spectral width of 25 kHz in proton-decoupled mode. The measurement was carefully calibrated to give integrals suitable for quantitative analysis. For CH2 and CH3 carbons of dolichol, T1 was measured, yielding values between 1.1 and 2.7 s. The pulse sequence was optimized according to the literature (34, 35); a delay of 5.0 s, a pulse width of 90°, and an acquisition time of 0.2 s were applied, avoiding nuclear Overhauser effects and presaturation. One hundred and twenty thousand scans (168 h) were collected, yielding S/N = 130. Spectra were calibrated against the chemical shift of benzene in 13C spectra (128.0 ppm). Assignment of all 13C NMR signals (Table 2; for sequential numbering of carbon atoms, see also supplemental Table 1 and supplemental Fig. 1) was done by combined use of two-dimensional {1H, 13C} gradient heteronuclear single quantum correlation spectra and literature data (36). Four residual signals were assigned to traces of the organic solvent used for dolichol isolation, n-hexane (δ(C-αH3), 14.4 ppm; δ(C-βH2), 23.1 ppm; and δ(C-γH2), 32.3 ppm), and to traces of "grease" accumulated during sample preparation (δ(CH2), 30.2 ppm), as confirmed with 1H, 13C correlation spectra, whereas two low-intensity signals (δ 77.6 and 29.5 ppm) remained unassigned.
HPLC/ESI-MS analysis was performed as described for native dolichols (3) with the following modifications. Briefly, when indicated, potassium acetate dissolved in solvent B was introduced post-column by a syringe pump (flow rate 5 μl/min) through a T union into the LC flow before entering the mass spectrometer, instead of the sodium salt. For statistical calculations, all the m/z data were normalized for [M + Na]+ pseudomolecular ions. Because in all measurements only singly charged dolichol pseudomolecular ions were observed, the values of m/z were considered as molecular masses (in daltons) of pseudomolecular ions (adducts with sodium or potassium cations, as indicated).
Analysis of MS Spectra, Modeling of the Theoretical Envelope of MS Spectra-The theoretical distribution of isotopomers, Ψ(M), the so-called "theoretical MS spectrum," is mathematically given by a binomial distribution describing the probability of k successes (i.e. the number of 13C atoms replacing 12C ones) in a hypothetical experiment of N = n × z tries, i.e. synthesis of Dol-n consisting of n isoprene units, each unit containing z carbon atoms which, theoretically, might be 13C-enriched (see Table 1), with the unitary probability of success, p, according to Equation 1, where Mmi,n is the mass of monoisotopic [12C]Dol-n; the binomial coefficient is given by Equation 2, and p is the probability of labeling of the specified carbon atom, which depends on the feeding conditions (summarized in Table 1). For the statistical analysis, the isotopomer distributions calculated for the [1-13C]glucose and [1,6-13C2]glucose experiments were additionally corrected for the natural 1.1% 13C abundance at "nonenriched" positions (cf. Fig. 2).
Analysis of MS Spectra, Estimation of the Average Molecular Mass of Dolichol-For each dolichol Dol-n, the average molecular mass was estimated as the location of the center of a Gaussian distribution fitted to the pattern of the 10-20 highest intensity signals recorded experimentally. The values were estimated with a standard deviation of 0.1-0.3 Da.
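The numerical analyses in the original study were carried out with GnuPlot; purely as an illustration of the two steps just described, the Python sketch below builds a theoretical binomial envelope and estimates the average molecular mass from a Gaussian fit. All numerical inputs (number of units, z, p, the monoisotopic mass) are illustrative assumptions for a Dol-16/MVA-type case and are not taken from the study's data tables.

```python
# Illustrative sketch only: theoretical isotopomer envelope (binomial model)
# and Gaussian estimation of the average molecular mass, as described above.
from math import comb
import numpy as np
from scipy.optimize import curve_fit

def theoretical_spectrum(n_units, z, p, m_monoisotopic):
    """Binomial distribution of 13C isotopomers for Dol-n.
    n_units * z = number of carbon positions that can carry a 13C label;
    each extra 13C is approximated as adding 1 Da to the pseudomolecular ion."""
    n_sites = n_units * z
    masses = m_monoisotopic + np.arange(n_sites + 1)
    probs = np.array([comb(n_sites, k) * p**k * (1 - p)**(n_sites - k)
                      for k in range(n_sites + 1)])
    return masses, probs

def average_mass(masses, intensities, n_peaks=15):
    """Centre of a Gaussian fitted to the 10-20 strongest signals."""
    idx = np.argsort(intensities)[-n_peaks:]
    x, y = masses[idx].astype(float), intensities[idx]
    gauss = lambda m, a, mu, sigma: a * np.exp(-(m - mu) ** 2 / (2 * sigma ** 2))
    (a, mu, sigma), _ = curve_fit(gauss, x, y, p0=[y.max(), x.mean(), 3.0])
    return mu

# Hypothetical example: Dol-16, MVA-type labeling with [1-13C]glucose
# (z = 3 labelable carbons per unit, p = 0.5); 1132.3 Da is used here only
# as an approximate placeholder for the native [M + Na]+ signal.
m, i = theoretical_spectrum(n_units=16, z=3, p=0.5, m_monoisotopic=1132.3)
print(f"average mass of the modeled envelope: {average_mass(m, i):.1f} Da")
```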
Meta-analysis of MS Spectra, Linear Regression Analysis-The values of the estimated average molecular masses of dolichols were used to analyze the trends of the increase of dolichol molecular mass using two alternative linear regression procedures.
In the first linear regression procedure, the average molecular masses of dolichols, M(n,i), were analyzed as a function of the number of isoprene units in the dolichol molecule, n, according to Equation 3. Thus, the average molecular mass of a dolichol molecule consisting of n isoprene units, M(n,i), is the sum of the masses of all the isoprenoid units, n × m(i), and the intercept, called Dol0, which represents the mass difference obtained by subtraction of the mass of n repeating C5H8 units from the total mass of the Dol-n pseudomolecular ion [MDol-n + Na]+ (see the chemical formula below Table 3). It is also worth stressing that Dol0 contains no carbon atoms, and consequently its mass cannot depend on the feeding conditions. These calculations were performed separately for all the feeding conditions, enumerated by the index i, and resulted in two sets of parameters, i.e. slopes m(i), which correspond to the average molecular mass of the isoprenoid unit specific for a given feeding experiment, and intercepts, Dol0. This approach permitted a qualitative estimation of the involvement of the metabolic pathways in the synthesis of individual isoprene units.
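Written out from the description above (the displayed equation itself is not reproduced in this text, so the following is a reconstruction rather than a verbatim copy), Equation 3 corresponds to the linear model

```latex
% Equation 3, reconstructed from the surrounding description
\begin{equation}
  M(n,i) \;=\; n \cdot m(i) \;+\; \mathrm{Dol}_0
\end{equation}
```

where the slope m(i) is the feeding-condition-specific average mass of one isoprene unit and the carbon-free intercept Dol0 is independent of the labeling.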
In the second linear regression procedure, the average molecular masses of dolichols were analyzed as a function of the theoretical isotopic enrichments, εMVA and εMEP, expected for the MVA and MEP pathways, respectively, according to Equation 4; see Table 3 for the theoretical ε values.
According to Equation 4, the average molecular mass of a Dol-n molecule is the sum of the following three components: the molecular mass of native dolichol, Mnat(n); the mass increment from k units derived via the MEP pathway, k × εMEP; and the mass increment from (n - k) units derived from the MVA pathway, (n - k) × εMVA, with εMEP = 2/3 εMVA.
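In the same spirit, Equation 4 as described above can be written as follows (again a reconstruction from the text, not the published display equation), with k the average number of MEP-derived isoprene units per molecule:

```latex
% Equation 4, reconstructed from the surrounding description
\begin{equation}
  M(n,i) \;=\; M_{\mathrm{nat}}(n)
           \;+\; k\,\epsilon_{\mathrm{MEP}}(i)
           \;+\; (n-k)\,\epsilon_{\mathrm{MVA}}(i),
  \qquad \epsilon_{\mathrm{MEP}} = \tfrac{2}{3}\,\epsilon_{\mathrm{MVA}}
\end{equation}
```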
All the numerical analyses, including linear regression and fitting to the Gaussian distribution via conjugated gradient method (37), were performed with the aid of GnuPlot 4.0 software (©2004 Thomas Williams, Colin Kelley, available on line). [1-13 C]Glucose-Previous analysis of polyisoprenoid composition showed that hairy roots of C. geoides contained a family of dolichols (Dol-13 to Dol-29, with Dol-16 dominating) accompanied by traces of polyprenols of similar chain lengths (3). Because ϳ60% of dolichols accumulated in Coluria roots were in the form of esters with carboxylic acids (data not shown), alkaline hydrolysis was always performed prior to further analysis. Root cultures were grown for no longer than 3 weeks to keep glucose concentration in the medium sufficiently high to ensure that it was the main carbon and energy source.
C NMR Analysis of Polyisoprenoid Alcohols Labeled with
It is well established in the literature that after labeling with [1-13C]glucose, carbon atoms derived from C-3 of IPP and DMAPP (Fig. 2 and supplemental Fig. 1) should not be labeled via either the MVA or the MEP pathway. Indeed, analysis of the NMR spectrum of [13C]-labeled dolichol (Table 2) confirmed that their isotope abundances were the lowest of all the carbon atoms. Thus, the signals in the 13C-labeled dolichol (Dol) spectrum assigned to the IPP/DMAPP C-3 atoms were considered as reference signals of the natural 13C isotope abundance, and the mean value of their integration calculated per single carbon atom was considered as the natural abundance of ~1.1% and used for the subsequent normalization of the abundances of other signals.
High 13C abundances in the 13C-labeled dolichol spectrum were recorded for the C-2 and C-4 atoms of IPP and DMAPP; the calculated average abundance for the C-2 and C-4 atoms was 3.0 ± 0.4 and 3.0 ± 0.2%, respectively. Somewhat higher values were observed for the C-5 atoms of IPP and DMAPP (average value 3.5 ± 0.3%). In contrast, the isotopic abundance of C-1 of IPP and DMAPP was considerably lower (average 1.3 ± 0.4%) than those of C-2, -4, and -5. Notable differences between subgroups of the C-1 type atoms were recorded, however. The isotopic abundance of the C-1 atoms located in the trans-isoprene units (~2%) was notably higher than that of C-1 type atoms of the cis-isoprene units (~1%), and in fact the latter was at the level of the natural abundance. Nonetheless, it should be remembered that under the applied experimental conditions (10.9% isotopic 13C abundance in the feeding medium), the maximal expected 13C abundance in the products equals 5.4% because of glycolysis. In glycolysis, [1-13C]glucose gives [3-13C]glyceraldehyde 3-phosphate and [3-13C]pyruvate, two precursors of the MEP pathway, and the subsequently formed [2-13C]acetyl-CoA, the direct precursor of the MVA pathway. Because of the isomerization of the triose phosphates, dihydroxyacetone phosphate and glyceraldehyde 3-phosphate are interconverted, and C-1 and C-6 of glucose are metabolically equivalent. Consequently, the 13C abundance in the final product is ~50% of that used in the supplied glucose, and the probability of labeling of each potentially labeled carbon atom is ~0.5. It should be noticed, however, that the expected 13C abundance (5.4%) was never observed for any of the labeled positions, indicating possible activity of the pentose phosphate pathway that resulted in a partial loss of 13C from C-1 of glucose, formation of 13CO2, and consequently isotopic dilution. Nevertheless, the 13CO2 resulting from [13C]glucose catabolism is not efficiently recycled in nonphotosynthetic tissues like roots, which in turn precludes scrambling of the labeling pattern, in fact not observed in our labeling experiments (see "Discussion").
Conclusions Arising from 13 C NMR Spectrum of Dolichol-The observed labeling pattern was consistent with a dual pathway origin of dolichols; however, the intensity of labeling of selected carbon atoms was not constant throughout the length of the dolichol molecule. The lower isotopic abundances found for C-1, -2, and -4 of IPP and DMAPP than that of C-5 are well explained by their different labeling pattern (Fig. 2). Although C-5 is labeled via either pathway, C-1 will become labeled only when synthesized via the MEP one, whereas C-2 and -4 only via MVA. Consequently, if a dolichol molecule contains isoprene units derived from both pathways, its C-1, C-2, and C-4 positions will be labeled to a lesser extent than the C-5 positions of IPP/DMAPP because of isotopic dilution.
Thanks to the high resolution of the NMR spectra of dolichols, signals of carbon atoms could be assigned unambiguously to all the positions C-1 to C-5 of IPP/DMAPP and could also be differentiated between those deriving from the trans and the cis units (i.e. those near the ω terminus of the dolichol molecule and those near the α terminus, respectively; see Fig. 1) (Table 2 and supplemental Table 1). Keeping these data in mind, the results of the analysis of the 13C NMR spectrum suggested that the ω-terminal part of the dolichol molecule contains both MEP- and MVA-derived isoprene units, in contrast to the α-terminal part, exclusively derived from the MVA pathway (see supplemental Fig. 2). Simultaneously, the characteristic profile of this spectrum, especially no labeling of the C-3 atoms of IPP, indicated that virtually no 13C scrambling occurred during the labeling experiments. Because only an approximate estimation of the relative contributions of both pathways to dolichol biosynthesis may be based on 13C NMR analysis, 13C-labeled dolichols were analyzed in parallel by mass spectrometry to further evaluate the relative involvement of both pathways in their biosynthesis.
MS Analysis of Polyisoprenoid Alcohols Labeled by [U-13C6]Glucose-As expected, the m/z values for pseudomolecular ions of dolichols obtained from roots grown on [U-13C6]glucose as the sole carbon source (99% isotope abundance) were increased compared with native dolichols and equaled the maximal m/z predicted for uniformly labeled [U-13C5n]Dol-n (supplemental Fig. 3A). In addition to the signals of the labeled dolichols, signals of lower intensity of native Dol-n, originating from the inoculum, were recorded.
MS Analysis of Polyisoprenoid Alcohols Labeled by [1-13C]Glucose-Dolichols labeled with [1-13C]glucose gave broad distributions of pseudomolecular ion signals (Fig. 3B) and were accompanied by signals corresponding to native Dol-16 with a maximum at 1132.3 Da. Spectra of several homologous dolichols (Dol-14 to Dol-19) were recorded (Fig. 3A); however, the latter one, because of its low intensity, was not further used for quantitative analysis. The observed shift of the isotopomer distribution toward higher molecular mass indicated 13C enrichment of the dolichol molecules. The complex distribution pattern of the signals, which mirrors the stochastic dispersion of the probability of formation of differentially labeled [13C]dolichol isotopomers, is understandable bearing in mind that the probability of labeling of each potentially labeled carbon atom is ~0.5, as the result of stochastic mixing of triose pools during glycolysis (see "Results," "13C NMR Analysis of Polyisoprenoid Alcohols Labeled with [1-13C]Glucose"). For each IPP molecule derived from the MEP pathway (C-1 and C-5 labeled, Fig. 2), the number of combinations of 13C enrichment patterns is 2^2, and the number of differently labeled IPP isotopomers is 3, with the 13C distribution profile [13C2]:[13C1]:[13C0] in a 1:2:1 ratio. In the case of the MVA pathway (C-2-, -4-, and -5-labeled), the number of combinations is 2^3, whereas the number of IPP isotopomers is 4, and the corresponding profile is [13C3]:[13C2]:[13C1]:[13C0] in a 1:3:3:1 ratio. The condensation of the differently labeled IPP molecules leading to the polyisoprenoid backbone is also probabilistic, resulting in a complex mixture of differentially labeled dolichol isotopomers and producing a complex pattern of m/z signals in MS analysis, as presented in Fig. 3.
MS Analysis of Polyisoprenoid Alcohols Labeled by [1,6-13C2]Glucose-To simplify the dolichol mass distribution patterns, [1,6-13C2]glucose was used for feeding. As expected, the m/z values were higher than those obtained with [1-13C]glucose for all dolichols studied (Fig. 3B and supplemental Fig. 3C). Unexpectedly, however, the MS spectrum was still rather complex, which might result from the interference of two phenomena. First, the 13C abundance in the doubly labeled glucose molecule was different at C-1 and C-6 (99 versus 97%, respectively). Second, the 13CO2 released by the oxidative pentose phosphate pathway might have been reincorporated via the activity of the malic enzyme, yielding a small but noticeable redistribution of 13C within the intermediates, resulting in higher-than-expected formation of hypo- and hyper-labeled [13C]dolichol. Additionally, the effect of stochastic mixing of the pools of [13C2]IPP and [13C3]IPP, respectively derived from the MEP and MVA pathways and incorporated into the polyisoprenoid chains, should also be considered.
Global Analysis of Mass Spectrometry Data-The complex mass spectra recorded for the dolichols provoked us to generate "theoretical spectra," i.e. the theoretical distributions of [ 13 C]dolichol isotopomers expected for dolichol synthesized exclusively via either the MEP or the MVA pathway. For this calculation, glycolysis was assumed as the only source of the intermediates for IPP formation. Representative theoretical spectra calculated separately for Dol-16 synthesized either via the MVA or the MEP pathways with [1-13 C]glucose feeding are shown in Fig. 4. The experimentally recorded Dol-16 spectrum was located between the two theoretical distributions, indicating a mixed biosynthetic origin of this dolichol. Similar correlation was obtained for [1,6-13 C 2 ]glucose labeling (not shown). This observation indicates that neither the MVA nor the MEP pathway was the sole source of IPP utilized for the formation of the dolichol molecule, which is in accordance with the NMR data described above. On the other hand, the complexity of the experimental MS spectra prompted us to apply numerical methods for their analysis.
Estimation of Average Molecular Masses of [13C]Dolichols-Because each dolichol species always occurs as a mixture of isotopomers, knowledge of its average molecular mass was required for all the calculations described below. This estimation was done for the theoretical spectra generated above for each dolichol (Dol-14-18), separately for each labeling condition (supplemental Table 2). Thus for native or uniformly labeled glucose, for which the expected average molecular mass of Dol-n is the same regardless of the pathway used for its biosynthesis, 10 values of the expected average molecular masses of Dol-14-18, 5 for either of these conditions, were obtained. For selectively labeled glucose, the theoretical average molecular masses expected for dolichols (Dol-14-18) obtained exclusively from either the MEP or the MVA pathway gave two values for each dolichol, thus in total 10 values for [1-13C]glucose and similarly 10 values for [1,6-13C2]glucose feeding were obtained (supplemental Table 2). The experimental spectra were treated in a manner analogous to that applied to the theoretical ones described above. The average experimental molecular masses were estimated separately for each dolichol labeled under all four labeling conditions (feeding with [1-13C]-, [1,6-13C2]-, [U-13C6]-, and native glucose). Namely, for Dol-15-18, four mixtures of isotopomers each, and three mixtures for Dol-14, were used, as shown in Fig. 3B; for technical reasons the spectrum of uniformly labeled Dol-14 obtained after [U-13C6]glucose feeding could not be recorded. A representative determination of the four values of average experimental molecular masses of Dol-16 (three labeling conditions overlaid) is shown in Fig. 5. As the final result, a set of 19 experimental values of molecular masses of dolichols was obtained (supplemental Table 2). The location of the experimental values of the Dol-n average molecular masses between the two theoretical ones (Fig. 4) indicates that each Dol-n molecule was of a mixed biosynthetic origin, via both the MEP and MVA pathways. The obtained experimental average molecular masses of dolichols were further analyzed to estimate the involvement of the two pathways in dolichol biosynthesis. It is worthwhile to emphasize that the value of the average experimental molecular mass of Dol-n permits unambiguous determination of the numbers of isoprene units derived from the MEP and the MVA pathway, respectively (for explanation see the supplemental text).
Qualitative Analysis of Mass Spectrometry Data-Using the experimental average molecular masses of 13C-labeled dolichols (supplemental Table 2), it was possible to estimate the average increase of the molecular mass between sequential dolichols, which corresponds to the average mass of a single isoprene unit. The mass of the average isoprene unit, m(i), will of course be dependent on the feeding condition. The values of the average molecular masses of dolichols (supplemental Table 2) were analyzed according to the linear regression method. The dolichol molecular mass is thus described by Equation 3. Individual trends estimated for each labeling experiment are presented in supplemental Fig. 4. The linear regression analysis leads to the estimation of four average masses of isoprene units and four values of Dol0, summarized in Table 3. The theoretically expected enrichment of the average molecular mass of a single isoprene unit was calculated based on the labeling pattern of the isoprene unit for [1-13C]glucose and [1,6-13C2]glucose (Fig. 2). The experimental values of the enrichment of the mass of the isoprene unit (1.24 ± 0.09 and 3.09 ± 0.21 Da for the singly and doubly labeled glucose, respectively) were in better agreement with the theoretical values predicted for the MVA pathway (correspondingly 1.49 and 2.99 Da) than with those for the MEP pathway (0.99 and 1.49 Da, respectively). It should be stressed here that the average molecular mass of a single isoprene unit calculated above concerns isoprene units located in the proximity of the α terminus of the Dol-n molecule, because all the Dol-n species are synthesized by elongation of precursors from a common pool of polyprenyl diphosphates. Thus, Dol-n is formed directly from Dol-(n-1) via addition of a single isoprene unit to the α terminus, and consequently the estimated increments concern the α-terminal units.
On the other hand, the average molecular masses estimated experimentally for the entire Dol-n molecules (see above) were lower than those calculated for the MVA-derived dolichols, indicating that some part of these molecules must have been synthesized via the MEP pathway, which produces "lighter" isoprenoid units. It should be remembered that isoprenoid units derived from feeding with either [1-13C]glucose or [1,6-13C2]glucose might be either "light," originating from the MEP pathway and so containing up to two 13C atoms per unit, or "heavy," originating from the MVA pathway and thus containing up to three 13C atoms per unit. The too-low experimentally estimated average molecular mass of Dol-n is also mirrored by the lower-than-expected experimental values of Dol0, presented in Table 3 (further discussed in the comments to supplemental Fig. 4). By elimination, because we know from the earlier considerations that the α-terminal isoprene units were derived from the MVA pathway, the MEP-derived ones could only have been used for the synthesis of the ω-proximal portion of the dolichol molecules. It should also be pointed out that the good agreement of the values of the observed increase of the molecular mass between sequential dolichols with theoretical calculations clearly indicates that there was no sizeable loss of 13C atoms along the biosynthetic pathway from mono- or doubly labeled glucose toward dolichol, which in turn indicates negligible 13C scrambling.
Quantitative Analysis of Mass Spectrometry Data-We proceeded to estimate the number of the two types of isoprene units in the dolichol molecules. For an isoprene unit synthesized via the MVA pathway, the expected isotopic enrichment, εMVA, should be 1.49 and 2.94 Da for [1-13C]glucose and [1,6-13C2]glucose, respectively, with 0.05- and 4.95-Da enrichment for native and uniformly labeled glucose (Table 3). For a dolichol molecule, Dol-n, synthesized entirely via the MVA pathway, its average molecular mass M(n) after [1-13C]glucose feeding should equal Mnat(n) + n × 1.49, because it is the sum of the molecular mass of native Dol-n and the total enrichment of all the isoprene units. Similarly, it should be Mnat(n) + n × 2.94 for [1,6-13C2]glucose feeding and Mnat(n) + n × 4.95 for feeding with uniformly labeled glucose. A graphic presentation of this linear increase of molecular mass, plotted as a function of the enrichment expected for the MVA pathway, is shown separately for each dolichol in Fig. 6 (solid lines). The experimental average molecular masses of dolichols were lower than those calculated for the MVA labeling, as discussed above. To estimate the number of the lighter (k) and heavier (n - k) isoprene units per Dol-n molecule, Equation 4, giving the average molecular mass of each dolichol species obtained with all three types of [13C]glucose and native glucose feeding, was used. Equations describing the individual trends estimated for each dolichol are presented in the supplemental text. Because εMEP = 2/3 εMVA (see Table 3 for the theoretical ε values), the slope of the linear regression (Equation 4) against εMVA, which equals (n - 1/3 × k), permitted estimation of k, the average number of isoprene units derived from the MEP pathway (Table 4). According to these calculations, six to eight (lighter) isoprene units per dolichol molecule are derived from the MEP pathway. In summary, the described meta-analysis of mass spectrometry data revealed that in dolichols longer than Dol-14, the successive isoprene units used for elongation of the molecule (i.e. a single α-terminal unit for Dol-15; two units, α and α+1, for Dol-16; three α-terminal ones for Dol-17; four for Dol-18; see supplemental Fig. 1) are derived exclusively from the MVA pathway. The 13C NMR spectroscopy data indicated that the α-terminal unit of Dol-14 is also of MVA origin, similarly as in the longer dolichols. Other units, incorporated into the growing Dol molecule at earlier steps, may be derived from either pathway; on average between six and eight such units come from the MEP pathway and the balance (i.e. seven to five for the 13-unit oligoprenyl precursor) is from MVA. A model summarizing the involvement of the MVA and MEP pathways in the biosynthesis of a dolichol molecule is presented in supplemental Fig. 2. This conclusion on the involvement of both pathways in dolichol biosynthesis was verified by application of pathway-specific precursors and inhibitors.
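To make this step concrete, the short sketch below solves Equation 4 for k by least squares, using the per-unit enrichments of Table 3 and the relation εMEP = 2/3 εMVA. The experimental average masses used here are invented placeholder numbers, not the values of supplemental Table 2, so the printed result only illustrates the procedure.

```python
# Illustrative sketch: estimating k, the number of MEP-derived isoprene units
# per Dol-n molecule, from Equation 4 (placeholder input values).
n = 16                        # isoprene units in Dol-16
m_nat = 1132.3                # approximate native Dol-16 [M + Na]+ mass (placeholder)

# Per-unit enrichments (Da) for the selective labelings: (eps_MVA, eps_MEP),
# with eps_MEP = 2/3 * eps_MVA as stated for Equation 4.
enrichments = {
    "[1-13C]glucose":    (1.49, 0.99),
    "[1,6-13C2]glucose": (2.94, 1.96),
}
# Hypothetical experimental average masses (NOT data from the study)
m_exp = {
    "[1-13C]glucose":    1152.5,
    "[1,6-13C2]glucose": 1172.6,
}

# Equation 4: M(n,i) - M_nat(n) = k*eps_MEP + (n - k)*eps_MVA
# => residual_i := (M - M_nat) - n*eps_MVA = k*(eps_MEP - eps_MVA)
num = den = 0.0
for cond, (e_mva, e_mep) in enrichments.items():
    residual = (m_exp[cond] - m_nat) - n * e_mva
    num += residual * (e_mep - e_mva)
    den += (e_mep - e_mva) ** 2
k = num / den                 # least-squares estimate over the two conditions
print(f"estimated number of MEP-derived units in Dol-{n}: k ~ {k:.1f}")
```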
Labeling of Polyisoprenoids with Pathway-specific Precursors-Supplementation of the feeding medium with deuterated compounds (i.e. either with 1-deoxy-[5,5-2H2]xylulose or with [5-2H]mevalonate) resulted in detectable deuterium incorporation into dolichols: for the deuterated deoxyxylulose, enhancement of the even-mass isotopomer signals was observed (Fig. 7A), whereas for [5-2H]mevalonate all but the monoisotopic signals were enhanced (Fig. 7B) for each of the dolichols analyzed. In contrast to mevalonate, feeding experiments with DX indicated incorporation of an even number of deuterium atoms per dolichol molecule, which is in agreement with the known maintenance of the two hydrogen atoms linked to C-5 of DX at the C-1 position (Fig. 2) of the isoprene unit (39, 40). The observed enhancement of the odd ion [M + 3] resulted from the incorporation of two deuterium atoms into a dolichol isotopomer containing one 13C atom per molecule. The low incorporation of deuterium was because of the high dilution of the deuterium-labeled precursors, as they constituted only 0.06% (by mass) of the glucose in the feeding medium. Effective incorporation of tritium from [3H]mevalonate into both polyisoprenoid alcohols and sterols was also noted (supplemental Fig. 5). The highest labeling was found for 21-day-old cultures.
Effect of Pathway-specific Inhibitors on Isoprenoid Accumulation-Mevinolin (30 μM), a specific inhibitor of 3-hydroxy-3-methylglutaryl-CoA reductase of the MVA pathway, efficiently inhibited the accumulation of both polyisoprenoid alcohols and sterols in the oldest culture, resulting in a remarkable decrease of their content (by 85 and 82% for polyisoprenoids and sterols, respectively; see Table 5). A higher concentration of mevinolin (60 μM) gave similar results (not shown); however, surprisingly, de novo synthesis of both lipids from [3H]mevalonate was also inhibited, suggesting a pleiotropic effect of this drug on isoprenoid metabolism. On the one hand, this might be an indication of the inhibition of other enzymes of the pathway besides 3-hydroxy-3-methylglutaryl-coenzyme A reductase, as was earlier found for sesquiterpene cyclase (41) or, in mammalian systems, for geranylgeranyl diphosphate synthase (42). On the other hand, possible toxic effects of mevinolin should also be considered (43). Fosmidomycin, a specific inhibitor of 1-deoxy-D-xylulose 5-phosphate reductoisomerase, decreased the accumulation of both lipids in the youngest culture (by 73 and 78% for dolichols and sterols, respectively), whereas an increased content of both lipids was found for 2- and 3-week-old cultures. The variable effects of fosmidomycin most probably mirror metabolic shifts between both pathways upon stress caused by the inhibitor.
DISCUSSION
In this study the biosynthetic origin of plant dolichols was investigated. We analyzed the contribution of the two alternative pathways known to produce IPP in plant cells, the MEP and the MVA pathway, to dolichol biosynthesis. Dolichol synthesis was studied by in vivo labeling with a general precursor (glucose) and pathway-specific precursors (MVA or DX), and by inhibition of the metabolic flow with pathway-specific inhibitors. Our conclusion indicating a "mosaic" structure of plant dolichols is based on the following observations. First, isoprenoid units located at and near the ω-end of the dolichol molecules could be synthesized via either pathway, as shown by the high intensities of the corresponding signals in the 13C NMR spectrum. Second, isoprenoid units localized in the proximity of the α terminus were exclusively of MVA origin, as shown by the numerical analysis of the 13C-enriched masses of pseudomolecular ions and in parallel by NMR. Third, the calculated average number of lighter (i.e. MEP-derived) isoprenoid units per dolichol molecule was between 6 and 8 based on the MS spectra, which was also supported by the 13C NMR data. The efficient incorporation of 13C atoms from uniformly labeled glucose into dolichols confirmed the applicability of Coluria roots as a model for biosynthetic studies. Dolichols were labeled almost exclusively by products derived from the glycolytic pathway, as confirmed by the NMR spectrum (synchronized labeling of the C-2 and C-4 carbon atoms, with proportionally higher labeling of the C-5 atoms of IPP; see Table 2) and the MS spectra (increase of the dolichol molecular mass; see Table 3). All these observations exclude any substantial scrambling of 13C during the course of labeling. Yet the general conclusions drawn from such a model should take its limits into consideration. In vitro propagated organs might differ from their physiological equivalents. Heterotrophic growth of the tissue stops the influx of intermediates derived from photosynthesis as well as the light-driven regulation of metabolism. Summarizing the validity of the meta-analysis of MS data applied here, it should be pointed out that by knowing the value of the enrichment of the molecular mass of a compound of interest and the number of isoprene units constituting its molecule, one can estimate the relative input of both the MEP and MVA pathways to its biosynthesis (for details see the supplemental text).
Finally, the proposed mechanism of dolichol biosynthesis is as follows. A mixture of IPPs derived from the MVA and MEP pathways is used in plastids for the initiation of the process and sequential condensations, and the thus-formed oligoprenyl diphosphates (surely shorter than 14 isoprene units; see supplemental Fig. 2) are exported to the cytoplasm, where they are finally elongated and terminated with IPP of exclusively MVA origin. Such a mechanism, besides compartmentalization of the sequential steps, also requires a unidirectional import of IPP from the cytoplasm to plastids. A spatial model of the organization of dolichol biosynthesis is presented in Fig. 8. Such a model, assuming stochastic mixing of IPP molecules within the plastid compartment, explains well the broad signal distribution pattern in the MS spectra obtained for [1,6-13C2]glucose feeding (Fig. 3B), which results from randomized insertion of the doubly (MEP-derived) and triply (MVA-derived) labeled IPP into the growing oligoprenyl chain. A further argument supporting the dual-pathway biosynthetic origin of dolichol is given by the localization of the experimental distribution profile of [13C]dolichol isotopomers between the theoretical profiles (Fig. 4) calculated separately for each pathway. In line with the above model (Fig. 8) is a recent observation of an export of MEP-derived IPP, and possibly also geranyl diphosphate, from plastids to the cytoplasm during sesquiterpene biosynthesis in carrot roots (44). The dolichol biosynthetic scenario described above raises the intriguing question of the nature of the at least two independent cis-prenyltransferases which should be involved. Their existence seems plausible in the light of the in silico predicted occurrence of a family of six genes encoding cis-prenyltransferase in the Arabidopsis genome (45). However, the mechanism of the reaction remains unclear, because the cytoplasm/endoplasmic reticulum-localized cis-prenyltransferase should accept a medium chain length oligoprenyl diphosphate as a substrate, although it is generally believed that only all-trans-FPP can serve as a starter for the polyisoprenoid chain.
Pion structure in a nuclear medium
The structure and electroweak properties of the pion in symmetric nuclear matter are presented in the framework of the Nambu-Jona-Lasinio (NJL) model. The pion is described as a bound state of a dressed quark-antiquark pair governed by the Bethe-Salpeter equation. For the in-medium current-light-quark properties we use the quark-meson coupling model, which successfully describes the properties of hadrons in a nuclear medium. We find that the light-quark condensates, the pion decay constant and the pion-quark coupling constant decrease with increasing nuclear matter density. We then predict the modification of the charge radius of the charged pion in nuclear matter.
Introduction
Pions play special roles in understanding the strong interactions of quantum chromodynamics (QCD) at low energies [1]. The pion electromagnetic form factor would provide us with information on the non-perturbative aspects of the internal structure as well as the dynamics of the quarks and gluons.
The phenomenon of medium modification, exemplified by the EMC effect [2], is one of the most interesting subjects in nuclear and hadron physics. In particular, how the properties of hadrons and their internal structure are changed by the surrounding nuclear environment is an important issue that has attracted special attention [3], in connection with the partial restoration of chiral symmetry in a strongly interacting environment [4]. The order parameters of this phenomenon are the in-medium light-quark condensates, and their changes in nuclear medium are the main driving force for the changes of hadron properties. However, finding clear experimental evidence for the partial restoration of chiral symmetry is still challenging.
In the present work, following the formalism of Refs. [5,6], we report the electroweak properties of the pion in symmetric nuclear matter as well as the pion form factor. The in-medium current-light-quark properties obtained in the quark-meson coupling (QMC) model are used as inputs to study the properties of the in-medium pion in the NJL model. This report is based upon the recent article [7].
In-medium current-light-quark properties based on the QMC model
The QMC model [8] has been successfully applied to many phenomena of nuclear and hadron systems. Recently, studies have been extended to the effect of the medium modifications of the nucleon weak and electromagnetic form factors on the neutrino mean free path in dense matter [9].
In symmetric nuclear matter, the isospin-dependent ρ-meson mean field vanishes in the Hartree approximation. The nucleon Fermi momentum k_F, the nucleon (baryon) density ρ_B, and the scalar density ρ_s of nuclear matter are defined by

ρ_B = γ/(2π)³ ∫ d³k θ(k_F − |k|) = γ k_F³/(6π²),   ρ_s = γ/(2π)³ ∫ d³k θ(k_F − |k|) M*_N/√(M*_N² + k²),

where γ = 4 for symmetric nuclear matter and γ = 2 for asymmetric nuclear matter. The Fermi momenta of the proton and neutron, k_F^{p,n}, are respectively determined by ρ_p and ρ_n, with ρ_B = ρ_p + ρ_n. In the QMC model, nuclear matter is treated as a collection of nucleons that are assumed to be non-overlapping MIT bags. The Dirac equations for the light q (u and d) quarks inside the bag take the standard QMC form

[ iγ·∂ − (m_q − V^q_σ) ∓ γ⁰ V^q_ω ] ψ_{q,q̄}(x) = 0,

where we assume m_u = m_d = m_q. The scalar and vector mean fields in symmetric nuclear matter are defined by V^q_σ ≡ g^q_σ σ and V^q_ω ≡ g^q_ω δ^{µ0} ω_µ, respectively. By solving the self-consistent equation for the scalar σ mean field, the total energy per nucleon is obtained. The coupling constants g^q_σ and g^q_ω are determined by fitting the binding energy of symmetric nuclear matter, 15.7 MeV, at the saturation density; they are related to the nucleon-level couplings g_σ and g_ω through the lowest-mode bag wave function ψ_q in medium. These relations determine the in-medium quark dynamics needed to compute the medium modifications of hadron properties in nuclear medium. That is, we assume that the in-medium light-quark properties obtained for the nucleon are not much different from those of the light quarks in other hadrons. In the present work, therefore, we explore the in-medium pion properties with the modified quark properties determined by the in-medium nucleon properties.
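As a rough numerical companion to these definitions (not part of the original paper), the snippet below evaluates the Fermi momentum and the scalar density for symmetric nuclear matter at the normal nuclear density quoted later in the text; the effective nucleon mass used in the scalar-density integral is an arbitrary placeholder value.

```python
import numpy as np
from scipy.integrate import quad

HBARC = 197.327  # MeV * fm

def fermi_momentum(rho_B, gamma=4):
    """Fermi momentum k_F (fm^-1) from rho_B (fm^-3): rho_B = gamma*k_F^3/(6*pi^2)."""
    return (6.0 * np.pi**2 * rho_B / gamma) ** (1.0 / 3.0)

def scalar_density(rho_B, m_star_mev=700.0, gamma=4):
    """Scalar density rho_s (fm^-3) for a placeholder effective nucleon mass (MeV)."""
    k_f = fermi_momentum(rho_B, gamma)
    m = m_star_mev / HBARC  # convert the mass to fm^-1
    val, _ = quad(lambda k: k**2 * m / np.sqrt(m**2 + k**2), 0.0, k_f)
    return gamma / (2.0 * np.pi**2) * val

rho0 = 0.15  # normal nuclear matter density (fm^-3), as quoted in the text
print(fermi_momentum(rho0))   # ~1.30 fm^-1
print(scalar_density(rho0))   # ~0.145 fm^-3, slightly below rho_B
```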
In-medium pion properties
Being equipped with the NJL-model formalism and the QMC model for the medium-modified current-light-quark properties, we compute the pion properties in nuclear medium. Using the in-medium current-light-quark properties obtained in the QMC model with m_q = 16.4 MeV, we calculate the effective constituent quark mass M*_u, the in-medium pion mass, the in-medium pion decay constant, the in-medium quark condensate, and the in-medium πqq coupling constant. The results are illustrated in Figs. 1-2 as functions of ρ_B/ρ_0, with ρ_0 being the normal nuclear density, 0.15 fm⁻³.
The left panel of Fig. 1 shows the ratio of the in-medium to vacuum quark condensates. We found that this ratio decreases as the nuclear matter density increases. In the right panel of Fig. 1, the ratio of the in-medium to vacuum pion-quark coupling constant is likewise found to decrease with increasing nuclear matter density. Our results for the ratio of the in-medium to vacuum pion decay constant are presented in the left panel of Fig. 2. Again, the ratio is found to decrease as the density increases. The medium modification of the pion mass is shown in the right panel of Fig. 2. Here we confirm that the pion mass is almost constant up to 1.25 ρ_0. This justifies our assumption that m*_π ≈ m_π up to about normal nuclear density.
In-medium electromagnetic form factors
We calculate the electromagnetic form factor of the positively charged in-medium pion. The in-medium pion electromagnetic form factors are determined by adopting the method of Refs. [5,6] with a dressed quark-photon vertex. The in-medium form factor is obtained by modifying the quark propagator to its standard in-medium form, S*_q(k) = 1/(γ·k* − M*_q + iε), where M*_q is the effective constituent quark mass and the in-medium quark momentum is k*_µ = k_µ + V_µ, with the vector field V_µ = (V_0, 0); the modification of the space component of the quark momentum is neglected, since it is known to be small [10]. In the form factor calculation, the vector field entering the propagator can be eliminated by a shift of the integration variable [11]. The in-medium pion form factor is then given by the sum over ℓ = u, d of the quark-sector (body) form factors f*_π^{ab,(bare)}(Q²) weighted by the quark charges e_{u/d}. The superscript "(bare)" indicates that the quark-photon vertex is elementary, Λ^{µ(bare)}_{γq} = Q̂ γ^µ, where Q̂ is the quark charge operator. In f*_π^{ab}(Q²), the first superscript a indicates the struck quark and the second superscript b the spectator quark (see Ref. [7] for details).

[Figure 3 caption: left panel, the elastic form factor of the positively charged pion in medium compared with the result of Ref. [12], the empirical parameterization of Ref. [13], and the experimental data of Refs. [13-15]; right panel, Q²F*_π(Q²) for various nuclear matter densities.]

Our numerical results for the elastic form factor of the positively charged pion in medium are shown in the left and right panels of Fig. 3. The left panel of Fig. 3 shows that the in-medium pion electromagnetic form factor is suppressed with increasing density. The right panel of Fig. 3 shows the same results, but for Q²F*_π(Q²). The medium effect on the suppression of the pion form factor is clearly seen, and it reduces the form factor by about 10% at the normal nuclear density. Our calculations show that the charge radius of the in-medium pion increases with density (see Ref. [7] for details on the in-medium charge radius prediction).
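The in-medium charge radius mentioned above follows from the slope of the form factor at the photon point, <r²> = -6 dF(Q²)/dQ² at Q² = 0. The sketch below extracts the radius numerically from any form-factor parameterization; the monopole form and the cutoff value are hypothetical stand-ins used only to check the bookkeeping against the known vacuum pion radius of about 0.66 fm, not the NJL result of the paper.

```python
import numpy as np

HBARC = 0.19733  # GeV * fm

def charge_radius_fm(form_factor, dq2=1e-4):
    """Charge radius (fm) from <r^2> = -6 dF/dQ^2 at Q^2 = 0 (Q^2 in GeV^2)."""
    slope = (form_factor(dq2) - form_factor(0.0)) / dq2   # forward difference
    return np.sqrt(-6.0 * slope) * HBARC

# Hypothetical monopole parameterization F(Q^2) = 1/(1 + Q^2/Lambda^2);
# Lambda ~ 0.73 GeV reproduces the known vacuum pion radius of ~0.66 fm.
monopole = lambda q2, lam=0.727: 1.0 / (1.0 + q2 / lam**2)
print(charge_radius_fm(monopole))   # ~0.66 fm
```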
Dystrophic mineralisation in chronic exogenous lipid pneumonia in cats
Case series summary Exogenous lipid pneumonia with mineralisation of the lung parenchyma was diagnosed in three cats with radiographs, CT and/or bronchoalveolar lavage cytological findings. All three cats had a common clinical history of chronic constipation and long-term forced oral administration of mineral oil. All three cases showed radiographic findings compatible with aspiration pneumonia, with an alveolar pattern in the ventral part of the middle and/or cranial lung lobes. Minor improvement of the radiographic lung pattern in the follow-up studies was seen in two cats, and a miliary ‘sponge-like’ mineralised pattern appeared in the previously affected lung lobes months to years after the diagnosis. In one cat, patchy fat-attenuating areas in the consolidated lung lobes were present on thoracic CT. Cases 1 and 2 showed respiratory signs at the initial presentation, while in case 3 the radiographic findings were incidental and the cat had never exhibited respiratory signs. Relevance and novel information This is the first report to describe dystrophic mineralisation of the lung in exogenous lipid pneumonia and also the first to describe the CT features in cats. Exogenous lipid pneumonia should be included in the differential diagnosis in cases of miliary ‘sponge-like’ mineral opacities in the dependent part of the lung lobes on thoracic radiographs or CT in cats, especially in cases of chronic constipation, previously exposed to mineral oil.
Introduction
Exogenous lipid pneumonia (ELP) results from inhalation or aspiration of oily material. Only isolated cases of ELP have been reported in cats, 1-3 dogs 4 and horses. 5 Chronic forced administration of mineral oil used for the treatment of constipation and hair balls is the most common cause of ELP in cats. [1][2][3] Clinical signs are nonspecific and can vary from absent to severe, depending on the amount and quality of the lipid aspirated. 1,2,6 Treatment of ELP is largely supportive after cessation of the inciting agent. Prognosis depends on the extent of the lesions and the progression to permanent changes such as fibrosis. 1,2 We report the clinical and imaging findings in three cats with ELP that received mineral oil as a treatment for chronic constipation.

Case 1

A cat with a history of chronic constipation was presented after long-term oral mineral oil administration. The cat had been treated conservatively with mineral oil (PO q12h) and a low-residue diet for a 3 year period. Rectal enemas were occasionally performed in case of an episode of constipation.
The cat showed low body condition score (BCS), abdominal breathing, tachypnoea (80 breaths per minute) and cyanotic mucous membranes on physical examination. Thoracic radiographs revealed an alveolar pattern in the right middle and left cranial lung lobes, with a ventral distribution (Figure 1a,b).
Aspiration pneumonia was suspected and supportive treatment based on oxygen, antibiotics, systemic corticosteroids, bronchodilators and intravenous (IV) fluids was established. Mineral oil was immediately discontinued. The patient improved clinically and was discharged with antibiotics, inhaled corticosteroids and bronchodilators 4 days later.
There was only a mild improvement of the lung changes on the follow-up thoracic radiographs 2 weeks after the initial onset (Figure 1c,d). Bronchoalveolar lavage (BAL) under general anaesthesia showed marked macrophagic inflammation with a neutrophilic component. The majority of the macrophages contained multiple round, clear and different-sized vacuoles compatible with lipid droplets (Figure 2). Both fungal and bacterial culture yielded no growth. Constipation episodes persisted 1 year later and subtotal colectomy was performed. Thoracic radiographs were obtained before surgery and a miliary 'spongelike' mineralised pattern was visible in the previously affected lung lobes ( Figure 3). The alveolar pattern in the right middle lung lobe was still present. The cat was still alive while writing this case series, 7 years after the initial presentation, and no recurrence of the respiratory signs had been noted.
Case 2
A 4-month-old entire female Persian cat was referred to our institution for chronic constipation and acute vomiting. The cat had been treated with mineral oil (PO q12h) and rectal enemas, with partial response, since the beginning of the clinical signs. Physical examination revealed low BCS, crackles in all lung lobes on auscultation and abdominal discomfort on palpation.

[Figure 1 caption: increased soft tissue opacity of the right middle and left cranial lung lobes is visible at initial presentation, with a lobar sign and air bronchograms indicating an alveolar pattern; (a) note the ventral distribution of the lung pattern in the right lateral view, commonly seen in aspiration pneumonia; (c,d) the alveolar pattern in the right middle and left cranial lung lobes is still present in the follow-up radiographs, with only mild improvement of the radiographic findings.]
Two months later, follow-up thoracic radiographs showed an alveolar pattern (similar to that previously described) with granular mineral opacities and 'spongelike' appearance of the previously affected lung lobes (Figure 4c,d). Multiple recurrent episodes of respiratory signs were recorded during a year, which partially responded to systemic corticosteroid and bronchodilator therapy. Owing to the refractory medical management, subtotal colectomy was elected as the treatment for the obstipation 1 year later. CT of the thorax was performed prior to surgery, showing an alveolar pattern with air bronchograms in all lung lobes, with a ventral distribution in the cranial and middle lobes, and a perihilar distribution in the caudal and accessory lung lobes. Within the consolidated lobes, patchy ill-defined fat-attenuating areas and punctiform mineral-attenuating structures were present ( Figure 5). Three years later, neither the respiratory signs nor the constipation reappeared.
Case 3
A 6-year-old neutered male domestic shorthair cat was referred to our institution for acute non-ambulatory tetraparesis secondary to a traumatic event. General physical examination was unremarkable. Postural reactions were absent in the right limbs and decreased in the left limbs in the neurological examination. Mild cervical hyperesthesia was noted. Previous to the MRI study and as part of the pre-anaesthetic protocol, thoracic radiographs were obtained. Increased soft tissue and granular mineral opacity were present in the caudal aspect of the right middle lung lobe ( Figure 6). Previous aspiration of mineral oil was suspected as a result of the radiographic changes and the referring veterinarian was contacted for further information. The cat had a previous history of chronic diarrhoea, tenesmus and recurrent rectal prolapse 5 years previously. Among others, long-term treatment with mineral oil had been established by the referring veterinarian for the intestinal problems. No respiratory signs had ever been reported. The MRI findings were compatible with an acute non-compressive nucleus pulposus extrusion lateralised to the right at the level of C2-C3 and a treatment based on IV fluids and cage confinement was established. The patient was discharged 4 days later with mild improvement but still showing severe right hemiparesis. Six months later, neurological examination showed mild right-sided proprioceptive deficits. No respiratory signs have been reported 1 year later.
Discussion
Exogenous lipid pneumonia is a particular form of aspiration pneumonia, resulting from accumulation of oily compounds of mineral, vegetable or animal origin within the alveoli. 1,2,4 Mineral oil, such as paraffin oil, has been widely used as a laxative for constipation in cats. 1 The tasteless and mild nature of mineral oil makes it less irritating to mucosal surfaces, and it does not elicit a cough reflex when aspirated. 4 It may also inhibit mucociliary transport, subsequently reaching the bronchial tree easily and reducing its clearance from the respiratory tract. 6 These characteristics of mineral oil explain the chronic and subclinical behaviour of the disease when small amounts of lipids are aspirated, such as in case 3, where the radiographic findings were incidental and the cat had never exhibited respiratory signs. More than half of the patients in human medicine are asymptomatic on presentation. There is also frequently a discrepancy between the severity of the radiological and the clinical findings in people, with extensive imaging findings in asymptomatic patients. 6 Cases 1 and 2 showed extensive radiological changes and only the first exhibited severe respiratory signs on presentation. Similarly to what has previously been reported, cases 1 and 2 showed an absence of or only minor improvement in the radiographic lung pattern. This may be explained by the inability of the macrophages to metabolise the lipid material, degenerating and releasing the oil back into the alveoli. 4 The diagnosis of ELP is based on a history of exposure to oil, imaging findings indicating aspiration pneumonia and the presence of lipid-laden macrophages on BAL analysis. BAL may be normal or show lipid-laden macrophages, with or without other inflammatory cells. 6 In case 1, BAL demonstrated lipid droplets within the cytoplasm of the macrophages, confirming the suspected diagnosis of ELP. All three cases had a common clinical history of gastrointestinal disorders and long-term forced oral administration of mineral oil. Furthermore, in 2/3 cases, radiographic findings were clearly associated with administration of the lipid.
Radiographic features of ELP in the literature are described as non-specific, including a mixed interstitial to alveolar pattern involving mainly the dependent portion of both cranial lobes and the right middle lung lobe, 1,6 occasionally forming masses or nodules. 4,7 However, to our knowledge, no mineral opacity within the lungs has been described in cases of ELP. All three cats showed a pulmonary pattern and a distribution compatible with aspiration pneumonia, with an alveolar pattern in the ventral part of the middle and/or cranial lung lobes. Perihilar distribution has also been reported, 2 similar to case 2, in which the cranioventral aspect of the thorax was also affected.

[Figure 6 caption: increased soft tissue opacity and granular mineral opacity in the caudal aspect of the right middle lung lobe (black arrows); a lobar sign is visible between the right middle and right caudal lung lobes in the left lateral view (white arrow).]
High-resolution CT is the best imaging modality for the diagnosis of ELP in people. 6,8 Multiple different nonspecific pulmonary patterns have been described for ELP on CT in people, but the most characteristic finding is the presence of fat-attenuating areas in the consolidated lungs. 8 This feature has been described in dogs, 3 and was also present in case 2, where the alveolar pattern within the affected lung lobes showed negative attenuating areas. To our knowledge, no descriptions of the CT features have been previously described in cats.
Chronic consolidation and mineralisation of the lung lobes was the characteristic finding in all three cases herein. Pathological mineralisation of the pulmonary parenchyma is rare, and may occur via a metastatic mechanism, in which calcium deposits in normal tissues, or via a dystrophic mechanism, occurring in previously damaged, degenerated, necrotic or fibrotic lung tissue. 9 In ELP, chronic accumulation of lipid in the alveoli triggers a pyogranulomatous or foreign body type inflammatory reaction and fibrosis. 7 Therefore, a dystrophic mechanism may explain the mineralisation in our cases.
Dystrophic mineralisation of the lungs has been described in chronic pyogranulomatous pneumonia caused by mycobacteria in cats. [10][11][12] Unlike in our cases, perihilar lymphadenomegaly was the most common finding in mycobacterial infections, and the radiographic changes usually represent a multisystemic disease. 11 However, non-tuberculous mycobacteria (NTM) infections may be confined to the thoracic cavity, causing consolidation and mineralisation of the lungs, 10 similar to cases 1 and 2. There are various reports in the literature of NTM infections as a complicating factor of ELP in people, dogs and cats. [13][14][15] Lipid-rich environments seem to be essential for some fast-growing species of NTM, acting as a mechanical protection and contributing to the growth and pathogenicity of the organisms. 13,15 In our patients, an underlying mycobacterial infection was considered unlikely given the clinical presentation and the absence of progression of the pulmonary lesions years later, despite the cats never receiving specific treatment for mycobacteriosis.
Multifocal mineral opacities throughout the lung fields have also been described in cases of broncholithiasis in cats. 16,17 Broncholithiasis is defined as the presence of calcified material within the bronchial lumen, usually as a result of mineralisation of inspissated bronchial secretions in chronic diffuse inflammatory lower airway disease. 16,18 In contrast to our cases, this condition is commonly associated with a bronchial pattern and signs of obstruction of the airways (bronchiectasis, emphysema). 17,18 Moreover, the CT study in case 2 revealed a parenchymal location of the mineral-attenuating areas, unlike the intraluminal bronchial location in cases of broncholithiasis.
Idiopathic mineralisation in the lung occurs in pulmonary alveolar microlithiasis, a rare disease characterised by intra-alveolar calcium deposits. 19 The deposition occurs in the absence of calcium metabolism disorders, usually as an incidental finding. 20,21 In contrast to our cases, pulmonary lesions are commonly diffuse with a 'sandstorm appearance' and not associated with inflammation. 20 Other differential diagnoses for mineralised lung foci include aspiration of contrast media or pulmonary neoplasia with calcification. The clinical history allows exclusion of the first, because none of the three cats received any contrast media and, in 2/3 cases, an alveolar pattern preceded the mineralisation. Although there is a high incidence of mineralisation in bronchopulmonary tumours in cats, 22 imaging findings usually include a solitary mass (frequently cavitated) in the caudal lungs, pleural effusion and signs of tracheobronchial lymph node enlargement. 22,23 Besides, there was only mild or no progression of the pulmonary lesions in our cases, making the possibility of a tumour unlikely.
Conclusions
ELP should be included in the differential diagnosis in cases of mineral opacities in the dependent part of the lung lobes in cats, especially in cases of chronic constipation and previous history of exposure to mineral oil. Mineral oil is unsafe owing to the risk of ELP and should be avoided in cats. We recommend the use of other treatment options for the management of constipation.
Conflict of interest
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
Notes on nesting behavior of Yellow-footed Green Pigeon Treron phoenicopterus (Latham, 1790) in Aligarh Muslim University campus and its surroundings, Uttar Pradesh, India
The nesting behavior of the Yellow-footed Green Pigeon Treron phoenicopterus was observed during its breeding season in 2021 in an urban region encompassing the Aligarh Muslim University Campus and the surrounding areas. Data were collected by searching nests in the study area. The breeding season for the species in the study sites begins in March and re-nesting is attempted even in July. A total of 31 nests were found on 24 trees belonging to eight species. The analysis of nest site characteristics revealed that Millingtonia hortensis, Azadirachta indica, and Dalbergia sissoo were the most important nest tree species, accounting for 69% of the identified nests during the study period. These findings contribute to our understanding of the nesting behavior of the Yellow-footed Green Pigeon in an urban environment and have implications for its conservation and management.
INTRODUCTION
The family Columbidae is one of the world's most threatened bird families. Despite its widespread distribution, the family, which contains the pigeons and doves worldwide, has received little conservation attention; it is considered to comprise 369 species, of which 16 are extinct and one, the Socorro Dove Zenaida graysoni, is extinct in the wild (BirdLife International 2020). Thirty-three species of Columbidae occur in India. The family's threatened status is most likely because it belongs to a group of birds vulnerable to human persecution, habitat degradation, and introduced predators (Owens & Bennett 2000).
India harbours a remarkable diversity of Columbiformes, with 33 species including fruit pigeons (Ali & Ripley 1987; Grimmett et al. 2016). Frugivorous birds are key functional species, performing valuable seed dispersal and regeneration services, and their decline or local extinction may have severe consequences for the functioning of an ecosystem (McConkey & Drake 2002). The Yellow-footed Green Pigeon is a frugivorous species and a common resident in Aligarh district in Uttar Pradesh, where no studies have been carried out on its status, distribution, or ecology. As a result, we planned to investigate the nesting ecology of the Yellow-footed Green Pigeon in this area. It breeds regularly on the Aligarh Muslim University campus and in adjoining areas, and the Indian Jungle Crow Corvus macrorhynchos and House Crow Corvus splendens prey on its clutches and nestlings. A few studies on Columbidae include Bhattacharya (1994) on morphological adaptations, Somasundaram (2006) and Devi (2012) on the ecology of the Nilgiri Wood Pigeon and Yellow-footed Green Pigeon, respectively, and Kour (2016) on the eco-biology of some species from Jammu. Therefore, the present study was conducted to present preliminary data on the nesting behavior of Yellow-footed Green Pigeons in an urban region.
Study Area
Aligarh Muslim University is in Aligarh district in Uttar Pradesh, India, in the Ganga-Yamuna doab region. It is located in the northernmost part of the Agra division, stretching from 27.4833°N to 28.0166°N latitude and 77.4833°E to 78.6666°E longitude (Image 1). The district covers an area of 3,650 km² and is 130 km from Delhi. The vegetation of the area is dry deciduous, with deciduous trees dominating most areas. The locations of the nest and random plots at both study sites, i.e., the AMU campus (NCi & RCi) and Aligarh Fort (NQi & RQi), are shown in Image 2.
In the central Ganga Plain, the interfluvial stretch of the Ganga and Yamuna passes through Aligarh district.

METHODS

Following the methodology of Devi (2012), we aimed to understand the characteristics of the nesting sites and the factors influencing nest-site selection. To quantify the nesting environment, the methods outlined by James & Shugart (1970), subsequently refined by Mudappa & Kannan (1997), were employed. Data on nesting trees and nesting environment factors were collected and quantified. In order to detect nests of Yellow-footed Green Pigeons at the beginning of their nesting season, we largely depended on behavioral cues such as the collection of nesting material. During the mating season of the Yellow-footed Green Pigeon, nest searches were conducted in the study region, and observations on nest trees and nest-site characteristics were made following the methods used in previous studies (Gokula 2001; Devi & Saikia 2012). To ensure minimal disturbance to the nesting birds, all observations of nesting activities were conducted from a safe distance, thus preserving the natural nesting behavior of the Yellow-footed Green Pigeon.
Observations of its nesting sites and nest site characteristics were undertaken from 10 March to 13 July 2021, when the final fledglings of active nests fledged. Once an individual or pair was sighted gathering twigs from the trees or constructing a nest, they were followed using binoculars or a camera and their nesting activities were recorded daily from 0630 h to 1130 h. A comprehensive dataset was obtained by closely observing selected nests, while additional nests were also monitored to determine the overall nesting success and gather supplementary data.
Adult birds undertaking breeding activities such as nest construction, incubation, and feeding the young in or near the nest indicated the presence of an active nest. A circular plot with a radius of 10 m was set up around each nesting tree to measure nest-site selection along with random plots which were also placed at a distance of 30-50 m from the nest plot. All characteristics were recorded in these plots as exercised in some earlier investigations (James & Shugart 1970;MacKenzie & Saely 1981;Clark et al. 1983;Sieg & Becker 1990;Liebezeit & George 2002).
Nest site and random site characteristics recorded during the study were tree number (to be used subsequently for density calculations, trees/hectare), tree height (m), tree GBH (cm), basal area (m²), the height of the first branch (m), distance from the nearest road (m), distance from nearest habitation (m), ground cover (%), shrub cover (%), canopy cover (%), canopy spread (m³) and nest height (m). In addition, the species of nesting trees were identified and recorded. To ensure comparability and permit statistical analysis, the collected data were normalized beforehand. In the Qila (Aligarh Fort) area, the listed nest plots were labeled as NQi, while the random plots were labeled as RQi. On the other hand, in the University Campus area, the nest plots were labeled as NCi, and the random plots as RCi.
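Two of the listed quantities are simple derived values: tree density follows from the count inside each 10 m-radius plot, and the basal area of a stem follows from its GBH (a circumference), as basal area = GBH²/(4π). The helper below illustrates this arithmetic; the example plot count and GBH value are hypothetical.

```python
import math

PLOT_RADIUS_M = 10.0                                   # circular plot radius (from the text)
PLOT_AREA_HA = math.pi * PLOT_RADIUS_M**2 / 10_000.0   # plot area in hectares

def tree_density_per_ha(n_trees):
    """Trees per hectare from a count inside one 10 m-radius plot."""
    return n_trees / PLOT_AREA_HA

def basal_area_m2(gbh_cm):
    """Basal area (m^2) from girth at breast height (GBH, cm): GBH^2 / (4*pi)."""
    gbh_m = gbh_cm / 100.0
    return gbh_m**2 / (4.0 * math.pi)

# Hypothetical plot: 12 trees counted, and a nest tree with a GBH of 42 cm.
print(round(tree_density_per_ha(12), 1))   # ~382.0 trees/ha
print(round(basal_area_m2(42.0), 4))       # ~0.014 m^2
```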
RESULTS
The Yellow-footed Green Pigeon's mating season begins in March and continues until July, with re-nesting attempts towards the end of June and in July. During this period, birds whose nests have been destroyed by predators seek to re-nest (Ayesha Mohammad Maslehuddin pers. obs. 31 May 2021). A total of thirty-one nests of Yellow-footed Green Pigeons were found on 24 trees belonging to eight species during the study period. The GBH of non-nest trees (0.46 ± 0.03) was slightly higher than that of the nest trees (0.42 ± 0.05) (Figure 1), but there was no significant difference between them (t = -0.754, p > 0.05).
Tree species used for nesting by Yellow-footed Green Pigeons were Millingtonia hortensis, Prosopis juliflora, Azadirachta indica, Dalbergia sissoo, Holoptelea integrifolia, Mangifera indica, Syzygium cumini, and Bombax ceiba. The number of nests on different tree species and the total number of individuals of each species, including those in the centre of random plots, are shown in Figure 3. The maximum number of nests was found on Millingtonia hortensis.
DISCUSSION AND CONCLUSION
The semi-natural plantations on the AMU Campus provide habitat to a wide variety of avifaunal species. The current study was one of the few attempts to acquire useful data regarding the ecology of species nesting.
Nest site characteristics show that Millingtonia hortensis, Azadirachta indica, and Dalbergia sissoo are essential nest tree species, accounting for 69% of the total nest trees identified during the study period. These tree species ranged 11-25 m in height and were branched and bifurcated, providing a better place to hold the nest and a safe base for the pigeons to make their nests. Another reason was the abundance of these tree species at the study site.
The study revealed that the breeding season of Yellow-footed Green Pigeon is from late March to July in the study area. Nest building begins in early April, and they make open nests of mostly twigs (Image 3). Nests of Yellow-footed Green Pigeons are very simple in structure and made up of small twigs placed crisscrossed over one another. Both sexes were seen sharing nest building and duty of incubation, i.e., one of the breeding males or females continued to sit on the eggs while the other pair went foraging. As per observation, only one squab is hatched per nest. The duration from nest building until the fledgling left the nest was 39-44 days.
Nest building by Yellow-footed Green Pigeons was observed during the study period. Most of the nest-building activity occurred between 0630 h and 1000 h. Nest materials such as twigs were collected from dried branches of Holoptelea integrifolia, Azadirachta indica, Tectona grandis, Eucalyptus citriodora, Syzygium cumini, and Casuarina equisetifolia trees 15-30 m away from the nest site by one of the mates. One individual of the breeding pair broke suitable twigs from the branches and carried them toward the nest. The waiting individual on the tree gently and securely arranged them into the nest. It was also observed that the individual carrying the twig never landed directly at the location where its mate was building the nest; instead, it would land on branches higher in the canopy and then move down cautiously towards the nest location. The frequency of nest-building trips was at its maximum during the 2nd and 3rd days of nesting and gradually declined over the following days.
During the study, birds of prey such as the Pariah Kite Milvus migrans and occasionally crows (Corvus splendens or C. macrorhynchos) were commonly seen preying on nests of the Yellow-footed Green Pigeon. Competitors such as the Common Myna Acridotheres tristis, Eurasian Collared Dove Streptopelia decaocto, and Indian Palm Squirrel Funambulus palmarum also destroyed many nests; they forcefully entered the nest area of the pigeon, destroyed it and occupied the territory. Natural calamities such as heavy rain and storms destroyed most nests during the pre-hatching stage. Yellow-footed Green Pigeons generally construct their nests on softwood trees, which are easily broken during heavy rain and storms.
The association of Yellow-footed Green Pigeons with the Black Drongo Dicrurus macrocercus during the nesting season (Image 3) may be an important driver of nest success and, subsequently, of the emergence of chicks. Around 40% of nests were raised successfully where a Black Drongo nest was present in the vicinity of the Yellow-footed Green Pigeon nests. A similar association was also observed by Ali & Ripley (1987).
The success of nesting attempts by Yellow-footed Green Pigeons was determined based on the presence of hatched squabs in each nest. Of all the nests encountered, it was determined that only 35% achieved successful nesting, indicating the successful hatching of squabs. In contrast, the remaining 65% of nests were deemed unsuccessful due to their destruction by storms or abandonment caused by excessive disturbance, resulting in the absence of hatched squabs.
Insecticidal Properties of Erythritol on Four Tropical Tephritid Fruit Flies, Zeugodacus cucurbitae, Ceratitis capitata, Bactrocera dorsalis, and B. latifrons (Diptera: Tephritidae)
Simple Summary Tephritid fruit flies are among the most destructive agricultural pests of fruits and vegetables worldwide and have global significance, introducing barriers to the trade of fresh tropical commodities. The melon fly, Mediterranean fruit fly, oriental fruit fly, and Malaysian fruit fly have entered and become established in Hawaii and have been making frequent incursions into agriculturally important states of the U.S. mainland, such as California and Florida. Bait sprays containing protein food bait plus an insecticide such as spinosad have been a major control method for these fruit flies. However, resistance to bait sprays has been reported. In this study, we evaluated the potential insecticidal effects of five different non-nutritive sugars on four species of fruit flies established in Hawaii. Erythritol alone or erythritol plus sucrose formulations have a significant negative impact on their survival, suggesting a potential use of erythritol as a non-toxic management tool for the control of tropical tephritid fruit flies. Abstract Tephritid fruit flies are among the most destructive agricultural pests of fruits and vegetables worldwide and can impose trade barriers against the movement of fresh tropical commodities. Primary pre-harvest control methods for these flies rely on the spraying of conventional chemical insecticides or bait sprays. However, resistance to these control methods has been reported in fruit flies. Erythritol is a non-nutritive sugar alternative for human consumption, which has been tested and confirmed for its insecticidal properties against various insect pest species. In this study, using laboratory bioassays, we evaluated the insecticidal effect of erythritol alone or various erythritol formulations containing sucrose and/or protein on four tropical fruit fly species established in Hawaii (e.g., melon fly, Mediterranean fruit fly, oriental fruit fly, and Malaysian fruit fly). In addition, the effects of other non-nutritive hexose and pentose sugar alcohols, such as sorbitol, mannitol, and xylitol, were tested. Among the different standalone and combinatory treatments tested, 1M erythritol and a combinatory formulation of 2M erythritol + 0.5M sucrose appeared to be the most detrimental to the survival of all four species of tested flies, suggesting the potential of using erythritol as a non-toxic management tool for the control of tropical tephritid fruit flies.
Introduction
Tropical tephritid fruit flies are among the most destructive agricultural pests of fruits and vegetables worldwide. They are serious pests both in native and in established ranges and pose significant trade barriers that disrupt the export and import of fresh tropical commodities [1,2]. Zeugodacus (Bactrocera) cucurbitae (Coquillett) (melon fly), Ceratitis capitata (Wiedemann) (Mediterranean fruit fly), Bactrocera dorsalis (Hendel) (oriental fruit fly), and B. latifrons (Hendel) (Malaysian fruit fly) (Diptera: Tephritidae) entered Hawaii in 1895, 1907, 1945, and 1983, respectively [3,4], and have been making frequent incursions into agriculturally important states such as California, Florida, and Texas since 1954 [5]. Strong tropical fruit fly prevention, outbreak response, and quarantine programs are in place [2], which have prevented these flies from being established in the U.S. mainland. However, their invasion frequency has been on the rise in recent years and there is a critical need to improve the fruit fly management programs in the U.S.
Protein bait sprays laced with insecticides have been widely used to control these fruit flies in outbreak areas [6], in combination with surveillance traps, the male annihilation technique, and the sterile insect technique [1]. Malathion is an inexpensive contact organophosphate insecticide and has been a conventional choice for fruit fly bait sprays. However, its low residue tolerance in export markets and adverse effects on beneficial insects, non-target insects, and humans have caused a decline in the use of malathion in bait sprays [7]. The most prominent replacement of malathion bait has been a spinosad-based hydrolyzed protein bait (GF-120, Dow AgroSciences, Indianapolis, IN, USA) [7], which was initially developed by Moreno and Mangan [8]. With spinosad's low mammalian toxicity and reduced impact on natural enemies [9], GF-120 has become the most widely used tephritid control product. However, GF-120 is expensive and needs to be ingested. Moreover, the almost exclusive use of GF-120 as a spray product for fruit fly control has led to the development of resistance in Z. cucurbitae and other fruit fly species [10,11].
The effects of erythritol have been evaluated alone or in combination with phagostimulative sugars (sucrose, sucralose) on some beneficial non-target species. This includes the honeybee Apis mellifera (Hymenoptera: Apidae), both larvae [22] and adults [23], a pupal parasitoid of D. suzukii, Pachycrepoideus vindemiae (Hymenoptera: Pteromalidae), the western yellowjacket Vespula pensylvanica (Hymenoptera: Vespidae) [22], and a predatory spider mite Tetranychus urticae (Trombidiformes: Tetranychidae) [24]. No apparent toxicity was observed in bees and yellowjackets and minimal impacts were observed on the parasitoid and predatory mites, which fared better than the target pest. It is important to understand the physiological action and toxicity of the insecticidal erythritol in insects in order to deliver it to target pests and avoid non-targets in the field.
The physiological action of erythritol has been suggested to be via osmotic imbalance between the hemocoel and the digestive system in the body. For example, in D. suzukii, the erythritol-fed fly cannot metabolize and convert the erythritol to nutritional carbohydrates [13], and thus needs to excrete the erythritol immediately through the digestive tract [25,26]. A large amount of water in the body is required to dilute or excrete unmetabolized erythritol, but the excretion process is slow, leading to the accumulation of excessive erythritol molecules in the hemolymph, which can increase osmotic pressure and dehydration, leading to eventual death.
In this study, we selected four tropical tephritid fruit fly species, Z. cucurbitae, C. capitata, B. dorsalis, and B. latifrons, and evaluated the effects of erythritol, other sugar alcohols, erythritol mixed with sucrose, and protein on their survival. Although erythritol has been previously shown to decrease the survivorship of B. dorsalis [15], it was evaluated with erythritol alone, not mixed or formulated with other sugars or proteins.
Insects
Bactrocera dorsalis, B. latifrons, Z. cucurbitae, and C. capitata pupae were obtained from the laboratory stock colonies maintained at the USDA ARS Daniel K. Inouye U.S. Pacific Basin Agricultural Research Center in Hilo, HI. All four species were maintained on an artificial diet. Pupae were held in cubical screen cages (30 cm W × 30 cm D × 30 cm H; Bugdorm.com) housed in a dedicated rearing room maintained at 22.5 ± 1°C, 55 ± 6% RH, and a 14:10 h L:D photoperiod until eclosion.
Feeding Assay
The feeding arena consisted of a 1 L plastic deli container with a perforated lid into which the test flies were introduced at the start of the test. For Experiments 1 to 3, 10 adult flies (5 females and 5 males) from each species were placed into a bioassay arena upon emergence (Day 0); for Experiments 4 and 5, either newly emerged (Day 0) or 7-day-old (Day 7) flies were used. In the arenas, flies were offered one of the sugar treatments listed below (10 flies from one of the four fly species/treatment/arena). Sugar solutions were provided in 4 mL glass vials (15 × 45 mm) with a cotton wick inserted into the vial opening to absorb the liquid sugar. Vials were inserted into a hole cut into the side of the plastic container, with the cotton wick angled slightly downward, and secured with a strip of tape. This ensured that the cotton wicks were constantly saturated with sugar solution and minimized variation in sugar availability due to evaporation. Vials were refilled when solution levels were low. For all experiments described below, each test was replicated five times and the survival of flies was recorded each day for 2 weeks.
Experiment 1: Effect of Non-Nutritive Sugar on Fly Survival
Upon emergence (Day 0), 10 adult flies (5 females and 5 males) were placed in a bioassay arena as described above and offered a 1M concentration of (1) erythritol, (2) mannitol, (3) sorbitol, (4) xylitol, (5) sucrose, or (6) water. Water was offered as a negative control and sucrose as a positive control.
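For reference, the amounts of solute behind these molar treatments are easy to tabulate from standard molecular masses; the snippet below does this for an illustrative 100 mL batch (the batch volume is an assumption, not a detail reported in the methods).

```python
MOLAR_MASS = {          # g/mol, standard literature values
    "erythritol": 122.12,
    "mannitol":   182.17,
    "sorbitol":   182.17,
    "xylitol":    152.15,
    "sucrose":    342.30,
}

def grams_needed(sugar, molarity=1.0, volume_ml=100.0):
    """Mass (g) of solute for `volume_ml` of a `molarity` M solution."""
    return MOLAR_MASS[sugar] * molarity * volume_ml / 1000.0

for sugar in MOLAR_MASS:
    print(f"{sugar:10s} 1M, 100 mL: {grams_needed(sugar):6.2f} g")
```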
Experiment 2: Effect of Sucrose Addition on Fly Survival
In order to clarify whether the observed decrease in survivorship from erythritol and mannitol was from starvation or the insecticidal effects of non-nutritive sugars, combined solutions of 1M erythritol + 1M sucrose and 1M mannitol + 1M sucrose were offered to the flies. Thus, the treatments tested included (1) 1M erythritol + 1M sucrose, (2) 1M mannitol + 1M sucrose, (3) 1M erythritol, (4) 1M mannitol, and (5) 1M sucrose as a positive control.
Experiment 4: Effect of Protein Availability on Fly Survival
In this experiment, we tested whether protein feeding could affect erythritol's insecticidal impact using two different age groups of flies. Day 0 flies were considered as young flies and Day 7 flies as old flies. For old flies, upon emergence, stock flies in the rearing cage were fed a diet of 0.5M sucrose solution, yeast hydrolysate, and water for 7 days. Treatments included (1) 2M erythritol + 0.5M sucrose, and (2) 2M erythritol + 0.5M sucrose + protein. Erythritol and sucrose were combined and dispensed from a vial. For protein treatment, ten grams of yeast hydrolysate was separately provided in a small plastic Petri dish on the bottom of the deli container for the treatments with protein.
Experiment 5: Effect of Water Availability on Fly Survival
In this experiment, we tested whether water ingestion could affect the impact of erythritol, using two different age groups of flies as described above (Day 0 and Day 7). Treatments included (1) 2M erythritol + 0.5M sucrose + water, (2) 2M erythritol + 0.5M sucrose, and (3) 0.5M sucrose. Erythritol and sucrose were combined and dispensed from a vial. For water treatment, additional water was separately provided in a 4 mL glass vial with a cotton wick inserted into the vial opening.
Statistical Analyses
The cumulative proportion of flies alive within an arena over 2 weeks was analyzed for each species/experiment in a repeated measures generalized linear mixed model in SAS 9.4 (SAS Institute 2016). In Experiments 1-3, treatment, day, and treatment*day interactions were fixed effects, and treatment*replicate was the random subject effect being measured repeatedly. In Experiments 4-5, treatment, age, treatment*age, and day were fixed effects, and treatment*age*replicate was the random subject effect. A binomial or normal distribution was used depending on convergence. Treatment differences were separated by Tukey HSD.
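For readers without SAS, a roughly analogous repeated-measures analysis can be set up in Python; the sketch below uses a binomial GEE with arenas as the repeatedly measured subjects, which approximates (but is not identical to) the SAS mixed-model specification. The file name and column names are hypothetical placeholders.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per fly per day, with columns
# 'arena' (replicate cage id), 'treatment', 'day' (1-14) and 'alive' (0/1).
df = pd.read_csv("survival_long.csv")   # placeholder file name

# Binomial GEE with an exchangeable working correlation: arenas are the
# repeatedly measured subjects, treatment*day is the fixed-effect structure.
model = smf.gee(
    "alive ~ C(treatment) * day",
    groups="arena",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())
```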
Results
Survival of tested flies decreased over time (14 days) in all experiments, as expected (Day effect: all p < 0.0001).
Experiment 1: Effect of Non-Nutritive Sugar on Fly Survival
Among the non-nutritive sugars tested, erythritol and mannitol led to lower fly survival than the others (Figure 1). The survival rates of all four species were significantly lower when fed a solution of 1M erythritol, with all flies dead in 3-5 days in Z. cucurbitae (ZC), C. capitata (CC), B. dorsalis (BD), and B. latifrons (BL) (Figure 1a-d). Among the sugar solutions, the survival rates of mannitol-fed flies were significantly lower than when flies were fed with sucrose. Mannitol-fed flies all died by day 9 for ZC, day 11 for CC, and day 8 for BD, and 80% of BL died by day 14. Sucrose-fed flies had 52% of ZC, 34% of CC, 44% of BD, and 6% of BL dead by day 14. There was no difference between xylitol and mannitol for Z. cucurbitae. The survival rate of mannitol-fed flies was greater than when flies were fed with either erythritol or water. Water-fed flies all died by day 6 for ZC, day 6 for CC, day 5 for BD, and day 8 for BL. The survival rates of flies given only water were slightly different by day 8, but not significantly different from the erythritol treatment. For Z. cucurbitae, the survivorship of sorbitol-fed flies was significantly greater than that of sucrose-fed flies (Figure 1a). The survival rates of xylitol-fed flies were significantly lower than those of sucrose- or sorbitol-fed flies, except for B. latifrons.
Experiment 2: Effect of Sugar Concentration on Fly Survival
The potential insecticidal effect of erythritol and mannitol was further evaluated by feeding in combination with sucrose, because erythritol and mannitol are considered zero-caloric sugars (Figure 2). For all four fruit fly species, feeding on 1M erythritol significantly reduced survival rates, with all flies dead in 4-5 days, compared to sucrose-fed flies, where 56% of ZC, 42% of CC, 44% of BD, and 38% of BL died by day 14, although the survival rates from the 1M erythritol + 1M sucrose treatment were significantly greater than those from the 1M erythritol-only treatment. Unlike erythritol, the 1M mannitol + 1M sucrose solution was not toxic to any of the four fly species (Figure 2), although feeding 1M mannitol alone reduced the survivorship of Z. cucurbitae and B. dorsalis (Figure 2a,c).
Experiment 3: Effect of Different Sugar Concentrations on Fly Survival
There was a significant effect of different erythritol concentrations on fly survival rate over 2 weeks.
Experiment 4: Effect of Age and Protein Availability on Fly Survival
When Day 0 and Day 7 flies were fed 2M erythritol + 0.5M sucrose solution mixed with or without protein, fly survivorship was similar over 2 weeks (Figure 4).
Experiment 5: Effect of Water Availability on Fly Survival
For all four species of flies tested with 2M erythritol + 0.5M sucrose, the insecticidal effect of erythritol was reduced when flies had access to additional water, although the insecticidal effect was still observed even when additional water was supplied.
Discussion
In this study, we examined the insecticidal properties of four sugar alcohols individually (erythritol, mannitol, sorbitol, and xylitol) and different erythritol formulations containing sucrose and/or protein on the four tropical fruit fly species that are established in Hawaii. In the first experiment, all four species of flies tested died by day 8 of feeding either erythritol or water. This could be attributed to the insecticidal effect of erythritol, the lack of nutrition from erythritol consumption, or the absence of erythritol consumption by tested flies. Experiment 2, which tested the effect of the erythritol + sucrose combination, validated the insecticidal impact of erythritol on all four species of fruit flies tested, and, thus, in subsequent experiments, we used erythritol + sucrose formulations to ensure that flies consumed erythritol. These results are similar to those of previous studies with D. suzukii, where the erythritol + sucrose formulation induced more feeding than erythritol alone [13]. In our study, however, all four fly species also died when fed with mannitol solution only, with Z. cucurbitae being the most susceptible to mannitol. Xylitol, a five-carboned sugar, also reduced the survival of Z. cucurbitae but had minimal effect on B. dorsalis survival. The differences in the effect of xylitol between the two species could be from the different digestive or metabolic processes of the pentose molecules.
Though not yet examined in insects, sugars with fewer carbon atoms typically pass through intestinal membranes faster than hexose sugars [27,28]. Xylitol could be quickly absorbed and diffused through the midgut membrane in Z. cucurbitae, which may increase the osmotic pressure in the hemolymph before it is excreted. Further study is necessary to elucidate the possible physiological mechanisms.
The toxicity of mannose or mannitol to insects represents a long-standing question. In a previous study, honeybees that ingested mannose died, suggesting a potential toxicity of mannose to honeybees [29] arising from an imbalance of the enzymes involved in mannose metabolism. However, mannose toxicity was not found in D. melanogaster [30] or D. suzukii [13]. Notably, in this study, mannitol appeared to be potentially toxic to all four tephritid fruit flies. The physiological processes, including digestive and metabolic mechanisms, might differ between Drosophilidae and Tephritidae, indicating that the tephritid flies could have a problem controlling their osmotic pressure after they ingest mannitol. Further study is necessary to compare the structures of the digestive tracts and the metabolic enzymes involved between the two families.
Interestingly, erythritol had minimal or no effect on the three hymenopteran species examined as beneficial or non-target insects in the laboratory and the field [22,23]. Insect digestive tracts are generally specialized for different feeding habits [31]. The digestive structure of hymenopteran insects might differ from that of other insect groups in how sugars are digested, taken up from the midgut, transported into the hemolymph, and excreted from the body. The relationship between these physiological processes and specific digestive structures, with respect to the impacts of erythritol on dipteran and hymenopteran insects, remains to be studied in the future.
When yeast hydrolysate was included in the erythritol formulations as a protein nutrient, it did not affect survival in any of the four fly species tested. This could be because the flies did not take up much protein, or because the proteins did not contribute significantly to the osmotic change in the fly hemolymph. Protein molecules are considerably larger than sugars, and their breakdown and absorption are slower than those of sugars [32]. As in mammals, proteins might not raise blood sugar levels in the fly, resulting in minimal or no impact on the osmolality of the hemolymph and, consequently, on mortality.
In all four tephritid species, the erythritol + sucrose formulation was detrimental compared with sucrose alone. Moreover, flies given the formulation together with a separate water source lived longer than those without separate water, similar to D. suzukii [33]. This further supports the hypothesis that non-nutritive sugars cause an osmotic imbalance in tephritid flies and that erythritol molecules accumulate in the hemolymph because of a physiological mismatch between the uptake and excretion processes, as observed in D. suzukii [25,33]. Elucidating the physiological mode of action underlying the insecticidal effect of erythritol, as a non-toxic management tool, will be critical to our understanding of why erythritol has insecticidal properties for some insect species but not others, such as the honeybee.
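Survival comparisons of this kind (e.g., erythritol + sucrose versus sucrose alone, or with versus without a separate water source) are typically quantified with Kaplan-Meier estimates and a log-rank test. The sketch below is a minimal illustration of that workflow, not the authors' own analysis; the arrays are hypothetical per-fly survival times and death indicators.

```python
# Minimal survival-comparison sketch (hypothetical data; not the study's actual analysis).
# Requires the `lifelines` package: pip install lifelines
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical per-fly survival times (days) and event flags (1 = died, 0 = censored at study end)
days_ery_suc = np.array([4, 5, 5, 6, 7, 7, 8, 8])          # erythritol + sucrose
died_ery_suc = np.array([1, 1, 1, 1, 1, 1, 1, 1])
days_sucrose = np.array([14, 14, 14, 14, 13, 14, 14, 12])  # sucrose only, mostly censored
died_sucrose = np.array([0, 0, 0, 0, 1, 0, 0, 1])

# Kaplan-Meier estimate of the survival curve for the erythritol + sucrose diet
kmf = KaplanMeierFitter()
kmf.fit(days_ery_suc, event_observed=died_ery_suc, label="erythritol + sucrose")
print(kmf.median_survival_time_)

# Log-rank test for a difference in survival between the two diets
result = logrank_test(days_ery_suc, days_sucrose,
                      event_observed_A=died_ery_suc, event_observed_B=died_sucrose)
print(f"log-rank p-value: {result.p_value:.4f}")
```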
In conclusion, our results suggest that erythritol is a potential insecticide alternative that can be used to sustainably reduce tropical tephritid damage to their host fruit. Managing these flies is particularly challenging due to the limited availability of effective insecticides and the development of resistance to protein bait sprays. Although promising, additional studies are required, including (1) additional confirmatory trials with a large sample size of flies, (2) better characterization of the palatability of erythritol to fruit flies, (3) the development of effective formulations of erythritol and suitable delivery or application techniques such as sprayables or baits, and (4) the evaluation of the potential impact of erythritol on the host plant and fruit quality.
Data Availability Statement:
The data that support the findings will be made available through a Data Transfer Agreement following an embargo from the date of publication to allow for the commercialization of research findings.
Adding Sound Transparency to a Spacesuit: Effect on Cognitive Performance in Females
Spacesuits may block external sound. This induces sensory deprivation; a side effect is lower cognitive performance. This can increase the risk of an accident. This undesirable effect can be mitigated by designing suits with sound transparency. If the atmosphere is available, as on Mars, sound transparency can be realized by augmenting and processing external sounds. If no atmosphere is available, such as on the Moon, then an Earth-like sound can be re-created via generative AR techniques. We measure the effect of adding sound transparency in an Intra-Vehicular Activity suit by means of the Koh Block test. The results indicate that participants complete the test more quickly when wearing a suit with sound transparency.
Intra-Vehicular Activity (IVA) and Extra-Vehicular Activity (EVA) safety is susceptible to numerous factors, with astronaut fatigue and cognitive load being primary concerns, often closely interlinked.Astronaut fatigue chiefly stems from the considerable weight (>100 kg), pressurization, and restricted mobility of spacesuits [1].High cognitive load arises from the abundance of error-sensitive tasks, compounded by sensory deprivation [2], [3].EVA spacesuits, comprised of up to 16 material layers, impose limitations on touch, and visibility (e.g., 2023 Artemis spacesuits obstruct foot view, and block sound [4]).The consequence of this degradation of senses is an increase in cognitive load [5], [6] and the effect on mission safety can be as critical as those of fatigue.This effect was evidenced in a 2021 analysis that shows a correlation between cognitive load and increased falls during the Apollo missions [7].
A. ERGONOMICS AND SAFETY
Astronauts suffer a surprisingly high rate of musculoskeletal injury [8].Various authors have addressed the ergonomics of suits from different perspectives [9], [10], [11].Further efforts beyond ergonomics have been proposed in diverse areas, such as hypercapnia prevention, thermal regulation, waste management, radiation shielding, and further injury prevention [12], [6], [13].Solutions to mitigate sensory degradation have also been proposed via the use of various interfaces, such as multimodal screens [14], Augmented Reality [15], new glove designs [16], and passive electro-stimulation for haptic sensory substitution [17], among others.Compared to other solutions such as haptics [18], sound design [19] has been an overlooked area that offers potential for improvement with minimal engineering drawbacks.
B. AUDITIVE COMFORT
EVA and IVA suits offer life support [21], [22] but tend to dampen external sounds, especially those associated with EVA activities.Consequently, the protective features of these suits limit the natural auditory experience that humans typically encounter in their daily environment.The suits often incorporate multiple layers of insulating and protective materials, which can attenuate sound transmission and lead to muffled or reduced audio clarity.For example, when wearing Final Frontier Design's training suit with visor down (Fig. 2), students reported an inability to understand speech.Consequently, individuals wearing an IVA suit might face challenges in perceiving and interpreting auditory information, which could have a substantial impact on their cognitive performance and problem-solving abilities in situations where sound affects cognition [3], [7].
In the case of an EVA occurring in the absence of an atmosphere, the sound and vibrations that the astronaut feels are transmitted by the suit itself, from the suit's contact with objects or from contact and friction between the skin and the suit's inner shell. The absence of an atmosphere therefore does not imply silence, as the suit (analogous to a pressurized human-sized balloon) transmits sound and vibrations to its wearer. In addition, on a hypothetical Mars walk the rarefied Martian atmosphere would provide faint audio feedback, which has a significantly different signature from feedback heard on Earth. Even though on Mars the suit would block external sounds as well, it is feasible to process the Martian sounds so that they resemble what would be heard on Earth by means of simple linear filtering and other elementary sound processing techniques.
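As a rough illustration of the kind of linear processing alluded to here, the sketch below applies a gain and a low-order band-pass filter to a recorded signal. The signal and the filter parameters are arbitrary placeholders, not a validated Mars-to-Earth transfer function.

```python
# Illustrative linear-filtering sketch (placeholder parameters, not a validated Mars-to-Earth model).
import numpy as np
from scipy import signal

fs = 16000                                               # sample rate in Hz
t = np.arange(0, 1.0, 1 / fs)
faint_mars_audio = 0.01 * np.sin(2 * np.pi * 440 * t)    # stand-in for a faint external recording

# 1) Boost the faint signal captured by the external microphone
boosted = 40.0 * faint_mars_audio

# 2) Shape the spectrum with a band-pass filter standing in for an "Earth-like"
#    equalization curve (cut-off frequencies chosen arbitrarily for illustration)
b, a = signal.butter(4, [100, 6000], btype="bandpass", fs=fs)
earthlike = signal.lfilter(b, a, boosted)

print(earthlike.shape, float(np.max(np.abs(earthlike))))
```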
C. EVALUATION OF SOUND TRANSPARENCY
Our primary objective is to assess the impact of sound transparency on cognitive performance in subjects wearing an IVA spacesuit.We hypothesize that the implementation of transparency leads to enhanced cognitive performance in tasks that require manipulation as well as problem solving skills.To test this hypothesis, we designed an experimental setup with two conditions: a standard spacesuit and one where an audio system provides sound transparency.We measure cognitive performance using the Koh block test.
The Koh Block test consists of solving a 9-block puzzle. It requires manual dexterity as well as problem-solving skills. It was chosen because it is representative of the tasks an astronaut faces, such as during a spacewalk (assembly, tool manipulation, thinking, stress, and time constraints). We considered various tests. In our related work [20], we applied the Fukuda step test to measure cognitive improvement related to proprioception in preventing falls during EVA; however, the results were inconclusive, as the Fukuda test is primarily designed to detect vestibular damage. Another alternative is the NASA Task Load Index; although widely used in various fields [21], including to assess sound comfort in offices [22], it is not a well-defined test per se but rather a qualitative assessment guide that can be applied to any activity. Thus, we opted for the Koh Block test, which is quantitative. As it is timed, it facilitates a numerical comparison between different suit configurations (sound transparency ON/OFF).
II. RESULTS
We divided the participants into two groups. Fig. 3(a) compares the groups' puzzle completion times with sound transparency ON for both. Fig. 3(b) compares puzzle completion times when sound transparency is ON for one group and OFF for the other; statistical significance (p < 0.05) is indicated with an asterisk. See also the sequence section in the materials and methods.
A. EFFECT OF SOUND TRANSPARENCY ON COMPLETION TIME
Welch's t-test shows a significant difference (t = 2.38, p = 0.012) in completion time between the groups with sound transparency ON (mean = 101 s) and OFF (mean = 159 s), with a 95% CI of (16.6 s, Inf). The results support the argument that providing astronauts with sound-transparent suits improves cognitive performance. A summary of qualitative comments from participants, aggregated by topic, is given in Table I.
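For reference, a comparison of this kind can be reproduced with a one-sided Welch's t-test (unequal variances); the arrays below are hypothetical completion times, not the study's raw data.

```python
# Welch's t-test sketch (unequal variances, one-sided); the data are hypothetical placeholders.
import numpy as np
from scipy import stats

off = np.array([150, 180, 140, 175, 160, 155, 170, 142])  # completion times (s), transparency OFF
on = np.array([95, 110, 100, 98, 105, 102, 99, 101])      # completion times (s), transparency ON

# equal_var=False selects Welch's test; alternative='greater' tests OFF > ON
t_stat, p_value = stats.ttest_ind(off, on, equal_var=False, alternative="greater")
print(f"t = {t_stat:.2f}, one-sided p = {p_value:.4f}")
```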
III. DISCUSSION
Of the more than 500 astronauts that have flown to space, about 11% have been women.The median age was 46 years old.
A. THE GENDER GAP IN SPACE
In the past, astronauts belonged to a very specific demographic of race and gender. While this distribution has changed in recent missions, with improvements in gender [23] and race distributions, society is still far from gender equality. This research takes a gendered approach in that it focused on a sample that was 95% female and younger than the average astronaut; this approach arose from our own logistical limitations.
1) GENDER
The literature related to our setup does not suggest that comparable results would differ for a male-only population [24]. However, more study might be needed to account for the consequences of long exposure to the space environment.
2) AGE
The average age of our sample is 22 years. The youngest person to fly to space was Gherman Titov, who was 25 at the time of his orbital flight in 1961. We acknowledge that the results may not be generalizable across different age groups. Age can have a significant impact on cognitive performance, with research suggesting changes in cognitive abilities, such as processing speed, working memory, and problem-solving, over the course of one's lifetime [25], [26]. Future work could investigate the effects of age on problem-solving efficiency and compare the results with our findings.
3) GRADE POINT AVERAGE
Finally, we observed no correlation (R < 2%) between self-declared Grade Point Average (a numerical score of academic performance) and the time to complete the test. This finding is in line with the current literature. While some studies have identified a correlation between Intelligence Quotient and academic outcomes, they also highlighted the importance of other factors, such as self-discipline, having a greater influence [27]. Further studies concluded that Intelligence Quotient alone does not account for the differences in academic performance among students, with motivation being the primary factor [28].
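The reported lack of association can be checked with a simple Pearson correlation; the arrays below are placeholders for the self-declared GPA and completion-time data, not the study's measurements.

```python
# Pearson correlation sketch between GPA and completion time (hypothetical placeholder data).
import numpy as np
from scipy import stats

gpa = np.array([3.2, 3.8, 2.9, 3.5, 3.1, 3.9, 3.4, 3.0])          # self-declared GPA
completion_s = np.array([120, 95, 160, 110, 140, 100, 130, 150])  # puzzle completion time (s)

r, p = stats.pearsonr(gpa, completion_s)
print(f"r = {r:.2f}, r^2 = {r**2:.3f}, p = {p:.3f}")
```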
IV. CONCLUSION
This study highlights the importance of sound transparency in spacesuit audio systems. Our findings demonstrate that implementing sound transparency improves cognitive performance, with potential implications for spacesuit design and safety. These findings may also apply to underwater welding operations, which involve similar sensory impairments and record one of the highest occupational fatality rates in the world [29].
A. EXPERIMENTAL SETUP
An IVA training suit (Terra-Suit) from Final Frontier Design [30], [31] (NY) was rented from the startup Zero2infinity S.A., Barcelona, Spain, and connected to a Stanley FATMAX air compressor (30 L/min, 59 dB noise). A digital air regulator (LEMATEC DAR02B) maintained a constant pressure inside the suit of 1.01-1.03 bar. Test days had temperatures of 22-24 °C, and the suit provided sound insulation of 21 dB. Apple AirPods (1st generation) paired with a microphone Android app were used to provide sound transparency, with a one-way (mic-to-ear) delay of 144 ms. The experiments were carried out in room E1-3071 on the campus of the first author. This windowless room had a relatively high background noise level of 42.5 dB, stemming from the HVAC system, which was always on. During the experiments, the noise level was 49.5 dB, owing to the compressor, which sat 3 m away and was always on. Flooring: carpet. Lighting: fluorescent bulbs.
B. KOH BLOCK TEST
A description of the Koh block protocol can be found in [24].Procedure: Participants were briefed, given consent forms, and informed of the procedure.They were instructed to solve three different Koh block puzzles in sequence: (Figs.2-3).All participants solved the puzzles in the same order, with an approximate duration of 23 minutes per participant.
1) SEQUENCE
Briefing and consent form signing.
First puzzle: Participants familiarized themselves with the Koh block test mechanics by solving the 'offset diamond' puzzle without wearing the suit.
Spacesuit donning; EarPods that relay exterior sound via Bluetooth are placed on the participants' ears. Visor down.
Second puzzle (control experiment): Both groups A and B solve the 'diagonal stripes' puzzle. Sound transparency is ON.
Visor up. EarPods removed from Group A participants. Feedback is not requested but is noted down if any is received spontaneously.
Visor down. Third puzzle: Both groups A and B solve the 'checkered pattern' puzzle.
Visor up.
Qualitative feedback is requested and collected. Removal of the suit.
2) PARTICIPANT DEMOGRAPHIC
39 participants were recruited from UAE University (36 female, 3 male), with a mean age of 21.5 (SD = 4.6, min 18, max 33). Three participants were left-handed, six wore contact lenses or prescription glasses, and various preexisting conditions were reported. Participants signaled puzzle completion with a hand gesture; the time was recorded, and the experiment ended. If a participant completed a puzzle incorrectly, their data were not used, which occurred for seven participants across the three puzzles. Remaining statistics for groups A and B: Group A (18 females, mean age 20.4, five used prescription glasses); Group B (16 females, 2 males, mean age 20.0, two used prescription glasses). Male performance was similar to or worse than the mean female performance.
D. ETHICS
This research, titled "Space Suit Haptics Project," adhered to IEEE and UAEU ethical guidelines. The study received ethical approval from the UAEU Social Sciences Ethics Committee - Research / Course (Application No: ERSC_2023_2408) on January 20, 2023. Key ethical considerations were addressed as follows: 1. Informed Consent: Participants provided written consent after receiving a detailed explanation of the project. 2. Privacy and Confidentiality: Participant data were securely stored and anonymized to maintain privacy. 3. Minimization of Potential Harm: Precautions were taken to avoid undue physical, psychological, or emotional risks to participants. 4. Fair Treatment: All participants were treated fairly and without discrimination during recruitment and throughout the study. 5. Transparency and Accountability: The research team disclosed methods, findings, and potential conflicts of interest.
FIG. 3. The time taken to complete a puzzle. (a) A control experiment where both Group A and Group B wear the Terra-Suit with sound transparency ON. (b) Group A wears the Terra-Suit without sound transparency (OFF); Group B wears the Terra-Suit with sound transparency ON.
Analogue Hawking Radiation as a Tunneling in a Two-Level PT-Symmetric System
In light of a general scenario of a two-level non-Hermitian PT-symmetric Hamiltonian, we apply the tetrad-based method to analyze the possibility of analogue Hawking radiation. We carry this out by making use of the conventional null-geodesic approach, wherein the associated Hawking radiation is described as quantum tunneling across the classically forbidden barrier imposed by the event horizon. An interesting aspect of our result is that our estimate for the tunneling probability is independent of the non-Hermitian parameter that defines the guiding Hamiltonian.
Introduction
The physics of black holes has continually aroused interest since the pioneering works of Bekenstein and Hawking in the 1970s, in which the authors interpreted black holes as thermodynamical objects that release radiation outside their event horizon [1][2][3][4] (see, for a review of the literature, [5]). The idea of Hawking radiation exploits the concept of pair production next to the event horizon (out of the vacuum), with one of the particles escaping to infinity from the boundary while the other, with negative energy, is drawn into the black hole, resulting in a decrease in mass until the whole black hole disappears in a cloud of radiation. A natural question has been asked as to whether viable information could be gathered at temperatures near the Planckian mass scale, when quantum gravitational effects become substantial [6].
In this paper, our primary focus is to look at a suitable structure of a non-Hermitian two-level effective Hamiltonian [7,8] to illustrate the possibility of artificial Hawking radiation.We carry this out by mapping to a coordinate setting and making use of the tetrad-based method.The latter strategy is frequently used for seeking solutions pertaining to a curved space.We will demonstrate that black hole similarities emerge with the emission of Hawking-like radiation when the event horizon causes the separation of two distinct topological regions [9,10].
A non-interacting field theory is often sought to address Hawking radiation.In fact, it was observed that in two-dimensional Schwarzschild geometry, interaction effects are minor, and a free particle theory is adequate for the treatment of Hawking radiation; see for some detailed discussion [11,12].In the following, we adopt the procedure of Parikh and Wilczek to estimate the tunneling probability by employing the standard classical approach of WKB approximation [13].In their formalism, the effect of the back-reaction was included to ensure energy conservation while a particle was emitted through the process of tunneling when moving past the horizon.Noting that the need for a nonsingular coordinate system is essential at the horizon, we adopt below the well-known Painleve-Gullstrand coordinates [14,15], which are simply coordinate transformations of the usual Schwarzschild solution.The corresponding metric tensor, which involves an off-diagonal element, is regular at the Schwarzschild radius but has a singularity only at the origin; see, for example, [16].It needs to be mentioned that prior to the work of [13], the idea of Hawking radiation as tunneling was first investigated by Srinivasan and Padmanabhan [17], including a treatment of different coordinate settings [18].
However, a remark is in order.The need to use Painlevé-Gullstrand coordinates for the Schwarzschild metric arises due to the singularity in the usual Schwarzschild coordinates.However, the use of Kruskal-Szekeres coordinates was shown to yield an incorrect formulation [19,20], implying that one should be able to use other coordinates so long as one does not have a real singularity instead of a coordinate singularity.
Biorthogonality and Exceptional Point
The most general 2 × 2 Hermitian Hamiltonian (with the Fermi velocity set to unity), written in terms of the momentum k = (k_x, k_y, k_z) about the linear node, has the form given in [21,22], where χ and d_i are suitable real coefficients, k = |k|, and the σ_i are Pauli matrices. The Hamiltonian that characterizes the conduction and valence bands supports a pair of eigenvalues whose separation closes when the coefficients vanish simultaneously. This is at once obvious from the energy dispersion and the associated energy splitting: degeneracy requires all three d_i's to become zero together, i.e., d_x = d_y = d_z = 0, which is not symmetry-protected. In Weyl semimetals, in which the conduction and valence band energies coincide over a certain region of the Brillouin zone, a linear crossing of two bands takes place, forming nondegenerate Dirac cones corresponding to the conduction and valence bands.
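For reference, a standard two-band form consistent with this description reads

$$\hat h_0(\mathbf{k}) \;=\; \chi(\mathbf{k})\,\mathbb{1} \;+\; \mathbf{d}(\mathbf{k})\cdot\boldsymbol{\sigma},
\qquad
E_\pm(\mathbf{k}) \;=\; \chi(\mathbf{k}) \;\pm\; \sqrt{d_x^2 + d_y^2 + d_z^2}\,,$$

so that the splitting $E_+ - E_- = 2|\mathbf{d}(\mathbf{k})|$ closes only when all three $d_i$ vanish simultaneously; this is a hedged reconstruction consistent with the text rather than a quotation of the paper's own numbered equations.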
The extension of topological phases from the Hermitian to the non-Hermitian sector has been pursued in a variety of papers [23], and the presence of gains and losses has been investigated [24,25]. A point was made a few years ago about the question of whether real black holes can emit Hawking radiation and whether meaningful information can be gathered about Planckian physics [6]. Very recently, De Beule et al. [26] made an explicit analysis of the existence of artificial event horizons in Weyl semimetal heterostructures. In their work, the electronic analogs of stimulated Hawking emission were studied, and physical observables were identified. Sabsovich et al. [27] examined black and white hole analogs in Weyl semimetals subjected to inhomogeneous nodal tilts, explored experimentally viable consequences, and showed the general relativity analogy of such Hamiltonians. An analogy was also drawn in some papers between the low-energy Hamiltonian of tilted nodes and black hole metrics [8,28,29]. The possibility of the emission of Hawking radiation was investigated in a somewhat similar context [30]. Further, an imitation of black hole Hawking radiation was found in a purely classical-mechanical system by employing a coupled double-chain model admitting frequency dispersion [31]. The related issue of the tunneling probability across the event horizon was explored in the framework of a two-level non-Hermitian topologically insulated Weyl-type Hamiltonian, which contained tilting in one of the directions [32].
A two-dimensional structure of a non-Hermitian dissipative Hamiltonian was also featured in an elaborate investigation of analogue Schwarzschild black holes emitting Hawking radiation [33]. A typical situation is described by adding a non-Hermitian term i τ(k) · σ, with τ = (τ_x, τ_y, τ_z), to ĥ0, so that the overall Hamiltonian assumes the form ĥ = ĥ0 + i τ(k) · σ. In such a situation, the two eigenvalues become equal when ∆ = 0, pointing to the presence of an exceptional point [34][35][36][37][38]. Defining the right and left eigenstates |ψ_R±⟩ and |ψ_L±⟩ explicitly, the biorthogonality relations are a ready outcome.
The self-orthogonality of the eigenstates can be worked out easily [39].
The Hamiltonian
Consider the arrangement of the d-coefficients given in (9), in which k is a real variable, ρ(k), η(k), and φ(k) are a set of real, nonzero periodic functions, and λ ∈ ℝ⁺ is a coupling parameter, in order to enquire into the spectral phase transition as the system passes from real to complex eigenvalues. The inclusion of the coupling λ implies the introduction of gain and loss in the system, thereby signaling the possibility of the appearance of exceptional points, where abrupt phase transitions could occur. The class of representations seen in (9) has also been studied in [40,41].
Expressed in matrix form, the Hamiltonian corresponding to (9) reads as in (10); the Hermitian counterpart of Ĥ(k) corresponds to λ = 0. Enforcing PT symmetry [42] requires the diagonal elements of the matrix generated by (10) to be complex conjugates of each other, and this is similarly true for the off-diagonal elements as well. Ĥ(k) is easily seen to commute with the joint operator PT, i.e., [Ĥ, PT] = 0, where the P operator is represented by σ_x and T stands for the usual complex conjugation operation. Apart from analytical evaluations [43], numerical algorithms for the diagonalization of PT-symmetric Hamiltonians have also been developed in the literature [44]. The possibility of PT symmetry residing in quantum mechanical systems is a prominent research frontier; such systems were soon found to occupy a position intermediate between open and closed systems. While the role of non-Hermiticity in understanding stable phases has been pursued in the literature for the last several years [45,46], the character of PT symmetry for stable nodal points in gapped and gapless semimetals [22,47], where the invariants are constituted of Bloch bands, is a somewhat recent realization. Indeed, due to such a symmetry prevailing, one finds stable nodal points to exist in lower dimensions [48].
The eigenvalues of Ĥ are easily seen to satisfy relation (11). Introducing tan θ = ρ(k)/(iλη(k)), where θ = θ(k), one can classify the accompanying right eigenvectors and their left partners; we can readily check that these obey the biorthogonality conditions (8). Expression (11) clearly shows that the eigenvalues stay real as long as λ|η(k)| < |ρ(k)|. This corresponds to the situation when PT is unbroken. However, when the opposite inequality holds, a broken PT phase is encountered. At the critical coupling, exceptional points appear: the eigenvalues E+ and E− coincide to become χ(k), and the associated eigenvectors coalesce into a single entity. In other words, at the exceptional points, we have ρ = ±λη.
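A quick numerical check of this spectral structure can be made with one concrete realization consistent with the text, namely d_x = ρ cos φ, d_y = ρ sin φ and a gain/loss term iλη σ_z; this particular parameterization is an assumption on our part, chosen so that the diagonal and off-diagonal entries are complex conjugates as PT symmetry requires.

```python
# Numerical sketch: eigenvalues of a 2x2 PT-symmetric Hamiltonian of the type described,
# H = chi*I + rho*(cos(phi)*sx + sin(phi)*sy) + 1j*lam*eta*sz   (assumed realization).
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def spectrum(chi, rho, eta, phi, lam):
    H = chi * I2 + rho * (np.cos(phi) * sx + np.sin(phi) * sy) + 1j * lam * eta * sz
    return np.linalg.eigvals(H)

chi, rho, eta, phi = 0.3, 1.0, 1.0, 0.7
for lam in (0.5, 1.0, 1.5):          # below, at, and above the exceptional point rho = lam*eta
    print(lam, np.sort_complex(spectrum(chi, rho, eta, phi, lam)))
# Expected: a real pair chi +/- sqrt(rho**2 - lam**2*eta**2) for lam*eta < rho,
# a coalesced eigenvalue chi at lam*eta = rho, and a complex-conjugate pair beyond it.
```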
It is worthwhile to mention that a simpler form of Ĥ, proposed by Bender et al. [49] a few years ago, is of the type Ĥ = q cos φ 𝟙 + i q sin φ σ_z + s σ_x, which supports the eigenvalues λ± = q cos φ ± √(s² − q² sin²φ). These remain entirely real when the inequality s² > q² sin²φ is obeyed.
The Tetrad Representation
Let us represent the Hamiltonian in the tetrad basis of (14), and let us also assume that all h_µ variables are suitable entities to be matched. The vielbeins e^µ_a and e^µ_0 satisfy the orthonormality conditions e^a_µ e^µ_b = δ^a_b, with µ = (0, x, y, z) and a, b = (x, y, z), subject to the metric being expressed as the bilinear combination of the tetrads, g_µν = e_µ^α e_ν^β η_αβ (15), where η_αβ = diag(−1, 1, 1, 1) is the Minkowski metric of flat spacetime.
To proceed with (15), let us first write the vielbeins e^a_µ in terms of four functions, f_1, f_2, g_1, and g_2, and let us then seek to relate them to the known functions at hand, namely χ(k), ρ(k), η(k), and φ(k). A convenient set of vielbeins, together with the respective inverses, is chosen following [50]; the line element then simplifies accordingly. The Schwarzschild gauge arises for the particular choice of f_1, f_2, g_1, and g_2 given in (19) and yields the black hole metric in Painlevé-Gullstrand coordinates, Eq. (20), where M is the mass of the black hole and t represents the Painlevé time. Metric (20) is stationary (i.e., invariant under the translation of t) but not static (i.e., not invariant under time reversal) and is consistent with the transformation originally proposed in [14,15]. With the help of (14) and (16), the general form of the Hamiltonian emerges; compared with (18), we can easily derive the coordinate-space correspondence, in which we have specified h_0 = h_1 = h_2 = 1 and h_3 = i. Using the values in (19), we obtain the mapping to the (r, θ) variables, which is consistent with ρ → √(r² + 1) and φ → tan⁻¹(r). As a result, the Hamiltonian assumes the form (23). In the following, we estimate the probability transmission amplitude of the analogue Hawking radiation by making use of the correspondence set up in (23).
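For reference, the Painlevé-Gullstrand form of the Schwarzschild metric referred to here as (20) is the standard one,

$$ds^{2} \;=\; -\left(1-\frac{2M}{r}\right)dt^{2} \;+\; 2\sqrt{\frac{2M}{r}}\,dt\,dr \;+\; dr^{2} \;+\; r^{2}\,d\Omega^{2},$$

which is regular at r = 2M, singular only at the origin, and whose outgoing radial null rays obey dr/dt = 1 − √(2M/r), the ingredient used in the tunneling estimate below.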
Analogue Hawking Radiation and Tunneling Estimate
First of all, the energy eigenvalues follow from (11) under the correspondence established above. The exceptional points correspond to r = ±i sec θ, which lie on the imaginary axis; in what follows, we will adhere to the positive sign. Before we calculate the tunneling probability, let us note that when the particle escapes from the black hole with an energy ω, the mass of the black hole decreases from M to M − ω. Indeed, as pair production takes place around the event horizon, the positive-energy particle, when breaking free [13], has to transit through the region between r_in, the radius of the black hole before the emission of the particle, and r_out, the radius of the black hole after the emission, which acts as a potential barrier wall. This is possible only if the particle can tunnel through such a barrier; a classically inaccessible zone arises whenever the particle's energy lies below this barrier.
In dealing with the tunneling problem, we observe that, since the action ζ acquires an imaginary part in the transmission region, the probability of tunneling can be straightforwardly calculated by making use of the semiclassical WKB approximation [13]. Here, an s-wave particle is considered to go outwards from r_in to r_out, so that Im ζ can be cast in integral form; since the emitted particle has a negligible mass, we can approximate M = M_in ≈ M_out and dM ≈ −dω. The equation of motion for the canonical momentum is imposed as prescribed by the classical Hamilton equation, and, noting that the Hamiltonian assumes the respective values M and M − ω for p_r = 0 and p_r = p_r, Im ζ can be transformed accordingly [51].
For the metric at hand, the presence of the horizon can be determined from the radial null geodesic condition ds² = 0 applied to (20). The resulting differential equation admits an acceptable outgoing solution for ṙ; substituting this in (28) gives an integral that can be reduced to a tractable form using the residue theorem with the help of the substitution α² = 2(M − ω)/r. Extracting the imaginary part of the right-hand side, we obtain the tunneling probability result quoted in (33). A similar estimate was made in [13] while accounting for global conservation laws. It is to be noticed that Im ζ as given by (33) is independent of the parameters of the model Hamiltonian. Also, it is important to point out that we do not claim invariance under canonical transformations of the tunneling rate obtained for the metric (20), as also follows from the works of [19,20]. Furthermore, for the inclusion of the temporal contribution to the tunneling rate, we refer to the works in [52,53].
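A compact symbolic check of this contour estimate can be carried out under the standard Parikh-Wilczek assumptions (outgoing null ray ṙ = 1 − √(2(M − ω′)/r) after back-reaction, with a simple pole at the shifted horizon); the final expression 4πω(M − ω/2) is the form such an estimate is expected to take, not a quotation of Eq. (33).

```python
# Symbolic sketch of the Parikh-Wilczek-type estimate of Im(zeta) (assumed setup; see lead-in).
import sympy as sp

# Work with the shifted horizon radius r0 = 2*(M - omega') as a single positive symbol first.
r, r0 = sp.symbols("r r0", positive=True)
rdot = 1 - sp.sqrt(r0 / r)                # outgoing radial null ray in Painleve-Gullstrand form

# The r-integrand 1/rdot has a simple pole at r = r0; its residue controls the imaginary part
# picked up when the integration contour is deformed around the shifted horizon.
residue = sp.limit((r - r0) / rdot, r, r0)
print(sp.simplify(residue))               # expected: 2*r0

# Im(zeta) = pi * integral over the emitted energy 0..omega of the residue, with r0 -> 2*(M - omega')
M, omega, wp = sp.symbols("M omega omega_p", positive=True)
im_zeta = sp.pi * sp.integrate(residue.subs(r0, 2 * (M - wp)), (wp, 0, omega))
print(sp.factor(im_zeta))                 # expected: 2*pi*omega*(2*M - omega), i.e. 4*pi*omega*(M - omega/2)
```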
In conclusion, a natural question arises as to which observable would prove analogous to Hawking radiation in this setting and, in particular, how it relates to the decay rate of complex eigenstates. Indeed, because of the near impossibility of observing Hawking radiation in a real black hole [54], seeking black-hole analogues has become an interesting alternative, as these could reveal properties akin to those of gravitational black holes, with the emission of Hawking-like radiation being a specific one. In this regard, experimental set-ups have been constructed to identify the quantum fluctuations that might emerge [55]. Also, very recently, the observation of stimulated Hawking radiation was reported in a regime of extreme nonlinear fiber optics [56]. Another point that requires more detailed study is the relationship between the decay rates addressed in [57][58][59] and the rate estimated in (33). As a final remark, let us state that we can formally extend our model to other black hole metrics, such as the charged black holes of 2D dilaton gravity, which originate from the low-energy effective theory of type 0A string theory [60], as well as to a rotating and charged black hole background [61].
Summary
Non-Hermitian Hamiltonians are found to play a central role in diverse physical problems. In this paper, we studied a generalized form of a two-level PT-symmetric system that depends on a real parameter k and exhibits a period of 2π. Adopting the tetrad formalism, a correspondence was established between such a Hamiltonian and one constructed in terms of vielbeins. This enabled us to connect to the metric of curved spacetime. By suitably writing the tetrad components in terms of four unknown functions and specifically choosing them such that the Schwarzschild metric could be described in Painlevé-Gullstrand coordinates, we were able to create a one-to-one correspondence with our chosen form of the Hamiltonian. We computed the probability transmission amplitude of Hawking radiation by treating it as a tunneling process and making use of the semiclassical WKB approximation. Our result turned out to be independent of the non-Hermitian parameter λ, so the nature of the phase transitions that the system supports does not influence the estimated tunneling probability.
ECRG2/SPINK7 Tumor Suppressor as Modulator of DNA Damage Response
Esophageal Cancer-Related Gene 2 (ECRG2), also known as Serine Peptidase Inhibitor Kazal type 7 (SPINK7), is a novel tumor suppressor gene from the SPINK family of genes that exhibits anticancer potential. ECRG2 was originally identified during efforts to discover genes involved in esophageal tumorigenesis. ECRG2 was one of those genes whose expression was absent or reduced in primary human esophageal cancers. Additionally, absent or reduced ECRG2 expression was also noted in several other types of human malignancies. ECRG2 missense mutations were identified in various primary human cancers. It was reported that a cancer-derived ECRG2 mutant (valine to glutamic acid at position 30) failed to induce cell death and caspase activation triggered by DNA-damaging anticancer drugs. Furthermore, ECRG2 suppressed cancer cell proliferation in cultured cells and grafted tumors in animals and inhibited cancer cell migration/invasion and metastasis. ECRG2 also was identified as a negative regulator of Hu-antigen R (HuR), an oncogenic RNA-binding protein that is known to regulate mRNA stability and the expression of transcripts corresponding to many cancer-related genes. ECRG2 function is important also for the regulation of inflammatory responses and the maintenance of epithelial barrier integrity in the esophagus. More recently, ECRG2 was discovered as one of the newest members of the pro-apoptotic transcriptional targets of p53. Two p53-binding sites (BS-1 and BS-2) were found within the proximal region of the ECRG2 gene promoter; the treatment of DNA-damaging agents in cancer cells significantly increased p53 binding to the ECRG2 promoter and triggered a strong ECRG2 promoter induction following DNA damage. Further, the genetic depletion of ECRG2 expression significantly impeded apoptotic cell death induced by DNA damage and wild-type p53 in cancer cells. These findings suggest that the loss of ECRG2 expression, commonly observed in human cancers, could play important roles in conferring anticancer drug resistance in human cancers. Thus, ECRG2 is a novel regulator in DNA damage-induced cell death that may also be a potential target for anticancer therapeutics.
Introduction
Esophageal Cancer-Related Gene 2 (ECRG2), also known as Serine Peptidase Inhibitor Kazal type 7 (SPINK7), is a putative tumor suppressor gene that was originally discovered by studies attempting to identify genes that were involved in human esophageal cancer [1].Subsequent studies have identified numerous biological functions linked to ECRG2 including growth inhibition [1,2], the induction of apoptosis [3,4], the suppression of epithelial-mesenchymal transition (EMT) and metastasis [5], the maintenance of cellular ploidy [6] and epithelial barrier integrity, the regulation of inflammation [7], increasing proteosome degradation of the RNA-binding protein HuR [3], and the modulation of DNA damage-induced responses [4].
In addition, ECRG2 somatic missense mutations were reported in different human malignancies.Cancer-derived ECRG2 mutations appear to alter its function.For example, the V30E mutant identified in human lung cancer failed to inhibit tumor cell growth, significantly abolished DNA damage-induced cell death, and was linked to the acquisition of anticancer drug resistance [3,4].Further, genomic variations in the ECRG2 gene promoter or polymorphisms in its sequence corresponding to the 3 ′ untranslated region (UTR) were found to affect the regulation and expression of ECRG2 [4,8].Thus, multiple lines of evidence indicate that ECRG2 appears to be an important tumor suppressor that warrants more attention and investigation.In this review, we will discuss various biological functions of ECRG2 in relation to cancer biology, DNA damage response, and therapeutics.
Identification and Molecular Characteristics of ECRG2
ECRG2 was originally identified by Su et al. [9], who aimed to discover esophageal cancer-related gene(s) in cancer patients in Linxian, a county in northern China that has the highest incidence and mortality rate of esophageal cancer (EC) in the world. Using the RT-PCR differential display approach, Su et al. [9] identified eighteen mRNA fragments that were differentially expressed in EC tissues versus normal esophageal epithelia [9]. Among them, 13 mRNA fragments were expressed only in the normal esophageal epithelia but not in EC, whereas 5 mRNA fragments were detected only in EC but not in the normal esophageal epithelia [9]. The mRNA fragment later recognized as the transcript of ECRG2 was among those detected only in normal esophageal tissues but not in cancerous tissues [9]. This finding was later confirmed by other studies [10,11]. Thus, these initial studies suggest that ECRG2 might have an important role in the development of esophageal cancer in humans.
The genomic location of ECRG2 was mapped to human chromosome 5q32 [10]. This chromosomal region is frequently perturbed by genetic aberrations and allelic loss in various human cancers, including esophageal tumors [12,13]. ECRG2 consists of four exons and three introns spread across ~3.5 kilobases at chromosome 5q [10]. ECRG2 is a small protein composed of 85 amino acids with a predicted molecular mass of 9.23 kDa (Figure 1A) [10]. The ECRG2 protein harbors an N-terminal signal peptide (a.a. 1-20), a central linker region, and a C-terminal conserved Kazal-type serine peptidase inhibitor domain (a.a. 31-85) (Figure 1A) that is shared by all serine protease inhibitor Kazal (SPINK) family proteins (discussed below) [10,14]. Structurally, ECRG2 is composed of two alpha helices and three beta sheets (Figure 1B) [14]; the Kazal-type domain of ECRG2 contains six conserved cysteine residues (Cys32, Cys45, Cys53, Cys64, Cys67, and Cys85) (Figure 1B), which form three intra-molecular disulfide bonds (Cys32-Cys67, Cys45-Cys64, and Cys53-Cys85) [10,14]. Studies have demonstrated that the correct formation of these disulfide bonds is important for protein structure and function [14,15]. Because it carries an N-terminal signal peptide sequence, ECRG2 is predicted to be a secreted protein (Figure 1A), and experimental evidence has demonstrated that ECRG2 is indeed secreted [10]. In addition, ECRG2 is also widely distributed in the cytosol; it co-localized with microtubules during the interphase and mitotic phases of the cell cycle [10,15] and localized at the centrosome during the G1/S phase and at the kinetochore during mitosis [6]. ECRG2 disruption has been reported to result in centrosome amplification and spindle checkpoint defects [6]. Thus, evidence suggests that ECRG2 is a multi-functional protein that plays important roles in the regulation of microtubule dynamics and cell cycle progression [10,15] as well as chromosome stability [6]. Further, recent studies have also identified a shorter isoform of ECRG2. This shorter isoform comprises only 59 amino acids and lacks the first 29 amino acids of the full-length ECRG2 protein [16,17]; its amino acid sequence shares 95% similarity with the C-terminus of full-length ECRG2 [16,17]. Currently, there is a paucity of information about its expression profile and function.
SPINK Family Proteins and SPINK7/ECRG2 in Human Cancers
Due to the presence of a Kazal-type domain, ECRG2 is also termed as Serine Peptidase Inhibitor Kazal type 7 (SPINK7) and grouped with other SPINK family proteins, which are characterized by the presence of at least one Kazal-type domain in their structures [1].The Kazal domain (40-60 amino acids) is evolutionarily conserved among different species [16] and is composed of one α helix and a three-stranded anti-parallel β-sheet with six cysteine residues forming three intra-domain disulfide bridges (Figure 1B) [19].To date, ten members of SPINK family have been identified including SPINK1, SPINK2, SPINK4, SPINK5, SPINK6, SPINK7/ECRG2, SPINK8, SPINK9, SPINK13, and SPINK14 [20].Interestingly, seven of the ten SPINK genes identified are clustered at a region of chromosome 5q32; the ECRG2/SPINK7 gene resides within this gene cluster along with six other SPINK genes (SPINK1, SPINK5, SPINK6, SPINK9, SPINK13, and SPINK14) [21].
SPINK proteins are expressed in various tissues, where they regulate serine peptidases and proteolysis activities [22][23][24][25].However, multiple studies have shown that the expression of SPINK genes is dysregulated in human cancers.For example, SPINK1 overexpression was detected in cancers of the gastrointestinal tract, lung, kidney, bladder, prostate, ovary, breast, and testis [26].Moreover, the elevated expression of SPINK1 was shown to enhance the growth, migration, and invasion of hepatocellular carcinoma cells (HCC) and was associated with poor prognosis in HCC patients [27].SPINK6 was shown to promote metastasis of nasopharyngeal carcinoma by the activation of epithelial growth factor receptors [28].Conversely, some SPINK family members were found to have anti-proliferative activity.The loss of SPINK4 expression was detected in colorectal cancer (CRC) and lower SPINK4 expression was linked with reduced disease-free survival in CRC patients [29].Sun et al. [30] showed that SPINK5, which is an important biomarker of oral squamous cell carcinoma (OSCC), can prevent the development of OSCC by the inhibition of the Wnt/β-catenin signaling pathway.In addition, a reduction of ECRG2/SPINK7 was also found in several other human malignancies, i.e., head and neck squamous cell cancer, cervical squamous cell carcinoma, and endocervical adenocarcinoma, which was associated with reduced disease-free survival in patients [4].Although further studies are warranted to elucidate the exact molecular function of SPINK proteins in human malignancies, the available line of evidence nonetheless suggests that SPINK family genes play crucial roles in cancer development in humans.
Biological Activities of ECRG2/SPINK7
4.1. Inhibition of Cancer Cell Metastasis
Like other SPINK family proteins, ECRG2/SPINK7 was shown to have serine peptidase inhibitory activity.ECRG2 protein was demonstrated to inhibit the activity of a serine protease known as urokinase-type plasminogen activator (uPA), an enzyme involved in the conversion of inactive plasminogen into active plasmin, which is important in cancer metastasis [5,10].Evidence showed that secreted ECRG2 directly binds to uPA and its receptor uPAR present on the cell surface to form a complex [5,31].It was proposed that this complex disrupts the uPA pathway by three different mechanisms: (1) the inhibition of uPA/plasmin and matrix metallopeptidase 2 (MMP2)-mediated proteolytic activity [5], (2) the prevention of uPAR interaction with α3β1 and α5β1 integrin followed by the inhibition of integrin-mediated activation of the Src/ERK pathway [31], and (3) the prevention of uPA-mediated cleavage of uPAR, which leads to the inhibition of uPAR interaction with and activation of a G protein-coupled receptor, FPRL1 [32].It was shown that the inhibitory action of ECRG2 on serine protease uPA suppresses the degradation of the extracellular matrix (ECM) and cancer cell invasion and metastasis [5,31,32].
Regulation of Inflammatory Responses
The protease inhibitor activity of ECRG2/SPINK7 protein has also been linked to an allergic condition of the esophagus in humans known as eosinophilic esophagitis (EoE).Recent studies found that endogenous ECRG2/SPINK7 was depleted in esophageal tissue biopsies from EoE patients [7,33].Further studies demonstrated that the loss of ECRG2/SPINK7 expression led to the activation of esophageal eosinophils through elevated uPA/uPAR activity [7].Moreover, the silencing of ECRG2/SPINK7 expression disrupted epithelial barrier integrity and induced the release of pro-inflammatory mediators such as thymic stromal lymphopoietin, IL-1β, and TNF-α [7].Hence, it is evident that ECRG2/SPINK7 plays an important role in protecting esophageal barrier function and keeping epithelial inflammatory factors in check [7].It is well-recognized that chronic inflammation is an important risk factor in esophageal cancer development [34,35].The findings by Azouz et al. [7] thus suggest that esophagus chronic inflammation developed due to the loss of ECRG2/SPINK7 expression may be a considerable risk factor for esophageal cancer formation.
Recently, Zhao et al. [36] showed that ECRG2/SPINK7 played an important protective role in chemically induced colitis in animals and that ECRG2/SPINK7-deficient animals were highly susceptible to induced colitis. They found that ECRG2/SPINK7 was significantly elevated in dextran sulfate sodium (DSS)-induced colitis in mice and that, interestingly, the cells with elevated ECRG2/SPINK7 expression in colitis tissues were mainly neutrophils [36]. Moreover, ECRG2/SPINK7-deficient mice (SPINK7−/−) developed more severe colitis, with more extensive ulcers, a higher disease activity index, and more severe body weight loss compared with their wild-type littermates; the loss of ECRG2/SPINK7 also impaired recovery from colitis after DSS exposure was stopped [36]. The expression of chemokines/cytokines including CXCL1, CXCL2, CCL2, CCL3, CCL4, IL-1β, IL-6, CCL11, and CCL17 was also much higher in SPINK7−/− colitis tissues than in SPINK7+/+ tissues [36]; elevated expression of inflammatory cytokines is expected to lead to more severe inflammation in colonic tissues. These results suggest that the presence of, or elevated, ECRG2/SPINK7 expression is important for protecting colonic tissues exposed to colitis-inducing chemicals and for reducing inflammation and damage. These findings are rather interesting; however, the molecular mechanisms by which SPINK7 modulates colonic inflammation and cytokine/chemokine production are currently not clear and need to be further investigated.
Roles in the Maintenance of Genome Integrity and Cancer Cell Suppression
ECRG2 is also implicated in proper centrosome duplication during the interphase and orderly chromosome segregation during mitosis [6].Cheng et al. [6] showed that ECRG2 is crucial for the localization of p53 to centrosomes, and the silencing of ECRG2 expression abolished p53 localization to centrosomes.Previous studies have shown that p53 mitotic centrosome localization is crucial in keeping genome integrity [37].Further, Cheng et al. [6] also showed that ECRG2 knockdown in cells led to the increased ubiquitination and degradation of p53, reduced p21 (a p53 target) at the protein level, and the increased activity of cyclin E/CDK2, which ultimately caused centrosome amplification.Decreased ECRG2 protein levels also impaired spindle assembly checkpoints by reducing BUBR1 (budding uninhibited by benzimidazoles-related 1) protein levels [6].Thus, decreased ECRG2 expression ultimately leads to chromosomal instability and aneuploidy [6], the characteristics of premalignant lesions and cancer [38,39].ECRG2 has also been shown to affect the growth of cancer cells.An initial study by Cui et al. [1] showed that exogenously expressed ECRG2 inhibited esophageal cancer cell proliferation and induced cell death.They showed that ECRG2 directly interacted with metallothionein 2A (MT2A), and the ECRG2-mediated modulation of MT2A function was thought to be a possible mechanism via which ECRG2 suppresses esophageal cancer cell growth [1].Although the gene was initially identified from esophageal tissues, the growth inhibitory effect of ECRG2 is not limited to esophageal cells.Studies have also shown that the overexpression of ECRG2 also induced apoptosis in cancer cells derived from different tissues including the colon, breast, lung, liver, and cervix [2][3][4].The induction of cell death by ECRG2 was shown to be associated with the activation of caspases 8, 9, and 3 and cleavage of PARP in lung, breast, and cervical cancer cells [3,4] or with the modulation of nuclear factor-κB, matrix metalloproteinase 2, and E-cadherin in hepatic cancer cells [2].Furthermore, a recent study by Lucchesi et al. [3] has shown that ECRG2 mediates its apoptotic effect by the negative regulation of Hu-antigen R (HuR, also known as ELAV1) and the X chromosome-linked inhibitor of apoptosis protein (XIAP).HuR is known to be a key RNA regulatory protein that affects the stability of numerous target mRNAs [40], while XIAP is an apoptosis inhibitor that inhibits the activation of caspases 3, 7, and 9 [41].Lucchesi et al. [3] showed that the overexpression of ECRG2 caused a significant reduction in XIAP mRNA levels, which was not associated with the inhibition of the XIAP promoter but rather with alterations in XIAP mRNA stability, which is known to be stabilized by HuR [42].Lucchesi et al. [3] further showed that ECRG2 promoted the proteasomal degradation of HuR and thus modulated XIAP mRNA levels by suppressing its mRNA stabilizer HuR [3].It is of note that HuR also regulates many targeted mRNAs encoding proteins that are important in the regulation of cell cycle, proliferation, cell survival, and apoptosis (reviewed in [43]).Thus, ECRG2-mediated proteasomal degradation of HuR protein may have even broader effects on cellular functions in general and apoptosis in particular.
Interestingly, while exogenously expressed ECRG2 induced cell death in cancer cells, it did not appear to affect the growth of non-cancerous breast epithelial cells [3].This suggests that ECRG2-mediated cell growth control exhibits cancer-specific selectivity.In this context, studies by Song et al. [2] demonstrated that adenovirus-mediated ECRG2 expression suppressed hepatic cancer cells grown on nude mice, with no apparent toxicity in the animals.These studies together demonstrated that ECRG2 may have anticancer therapeutic potential.Future studies are certainly needed to further investigate this issue.
ECRG2 Is an Important p53 Target and Effector in DNA Damage Response
Tumor suppressor p53 is known to play a key role in DNA damage response [44,45].Following DNA damage, p53 is activated via post-translational modifications such as phosphorylation and acetylation, which lead to the stabilization and accumulation of p53 protein in the nucleus [44].The p53 protein molecules accumulated inside the nucleus form a tetramer, which binds to the response elements located within the promoter or intronic regions of the target genes to activate their transcriptions [46].These target genes are involved in an array of biological processes such as cell cycle arrest (e.g., Cyclin-Dependent Kinase Inhibitor 1A (CDKN1A)/p21, GADD45a, 14-3-3σ), autophagy (e.g., DRAM), and apoptosis (e.g., BAX, PUMA, DR5) [46,47].In the past two decades, multiple pro-apoptotic downstream targets of p53 have been characterized [48]; however, the inactivation of none of these genes was able to phenocopy the deficiency in apoptotic signaling observed in p53-null cells [44].It was proposed that successful tumor suppression by p53 may require the functional redundancy of multiple downstream target genes with pro-apoptotic activity [49].Alternatively, it was suggested that distinct pro-apoptotic target genes of p53 are induced by stress stimuli and in a cell-type-specific manner [50].This conjecture paved the path for the continued discovery and characterization of novel pro-apoptotic targets of p53.
A recent study by Patel et al. [4] demonstrated that ECRG2 is a novel p53 target gene and an integral part of the p53-mediated DNA damage response. ECRG2 mRNA and protein were significantly induced by treatment with anti-neoplastic DNA-damaging agents such as etoposide (a topoisomerase II inhibitor) or melphalan (an alkylating agent) [4]. Patel et al. [4] found that DNA damage-induced ECRG2 expression was associated with the activation of the ECRG2 promoter. Further analyzing the ECRG2 promoter, they discovered that the region from −1000 bp to the transcription start site harbored regulatory binding sites for p53, p63, and OCT-1, transcription factors important for the regulation of DNA damage responses [4]. Two putative p53-binding sites were identified within the ECRG2 gene promoter, one (p53-BS-1) residing at −844 to −825 and another (p53-BS-2) localizing at −587 to −568. Following DNA damage, p53 was substantially recruited to the p53-binding sites within the ECRG2 promoter, which resulted in the significant induction of ECRG2 promoter activity and mRNA expression [4]. They further showed that etoposide-induced ECRG2 promoter activation occurred only in RKO p53+/+ cells, but not in RKO p53−/− cells. More importantly, the disruption of ECRG2 by gene targeting significantly diminished etoposide-induced apoptosis even with strong induction of wild-type p53 [4]. Such results suggest that the absence of ECRG2 blunts p53-mediated apoptosis following DNA damage and that, like other p53 target proteins such as PUMA, NOXA, and DR5, ECRG2 acts as an effector of p53 to modulate DNA damage-induced cell death. It is of note that reduced or absent ECRG2 expression was found in significant portions of human cancers [4]. It is possible that insufficient ECRG2 function plays an important role in anticancer drug resistance in human cancer.
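As an illustration of how putative p53-binding sites of this kind are typically located, the sketch below scans a promoter sequence for the canonical p53 response-element consensus, two RRRCWWGYYY half-sites separated by a short spacer. The promoter string is a made-up placeholder, not the actual ECRG2 promoter sequence, and the scan is a generic motif search rather than the method used by Patel et al.

```python
# Minimal p53 response-element scan (el-Deiry consensus RRRCWWGYYY x2, 0-13 bp spacer).
# The promoter string below is a hypothetical placeholder, not the real ECRG2 promoter sequence.
import re

IUPAC = {"R": "[AG]", "W": "[AT]", "Y": "[CT]", "C": "C", "G": "G"}
half_site = "".join(IUPAC[c] for c in "RRRCWWGYYY")
p53_re = re.compile(f"(?=({half_site}.{{0,13}}{half_site}))")   # lookahead to catch overlapping hits

promoter = "TTAGGGCATGTCCGGGCATGTCCATTACGAAACATGTTTAGACTTGCCC"  # made-up example sequence
for m in p53_re.finditer(promoter.upper()):
    print(f"putative p53 response element at position {m.start()}: {m.group(1)}")
```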
ECRG2/SPINK7 Dysregulation, Mutations, and Other Genomic Variants
p53 is often inactivated by deletion and mutation in human cancers [51,52]. Since ECRG2 is shown to be a transcriptional target of p53, one possible mechanism for the loss of ECRG2 mRNA expression in human cancers may be the inactivation of p53 function by deletion or mutation. Recently, Patel et al. [4] demonstrated that the expression of tumor-derived mutant p53-R273H, which was previously shown to compromise the transcriptional activation function of wild-type p53 [53,54], caused a decrease in ECRG2 protein expression in RKO p53−/− colon cancer cells. A recent study showed that while ECRG2/SPINK7 was downregulated in less aggressive oral squamous cell carcinoma (OSCC), the protein levels of p53 remained elevated [55]. Given that p53 missense mutations such as p53-R273H are frequent in clinical cases of OSCC, target the DNA-binding domain of wild-type p53 [56], and compromise its transcriptional activation function [53,54], it is likely that mutations in the upstream transcription activator p53 may lead to decreased levels of ECRG2/SPINK7 protein in clinical cases of human cancer. Further in-depth studies are needed to dissect the link between p53 status and ECRG2 expression in normal and cancer tissues in humans.
Multiple studies have found a significant correlation between a short tandem repeat (STR) polymorphism (TCA3/TCA3) in the 3′-untranslated region (UTR) of ECRG2 and increased incidence as well as poor prognosis of esophageal [8,57-59] and oral [60] cancers in various patient populations. Zhang et al. [8] showed that microRNA 1322 (miR-1322), which was found to be overexpressed in esophageal carcinoma, preferentially bound to the TCA3 allele present at the site of the STR polymorphism within the ECRG2 3′-UTR and downregulated ECRG2 expression. Owing to its robust downregulation in gastric cancer, salivary extracellular RNA (exRNA) of ECRG2 was utilized in the configuration of a biomarker panel for the noninvasive detection of gastric cancer [61]. Conversely, ECRG2 expression was significantly upregulated in human chromophobe renal cell carcinoma [62]. Thus, ECRG2 may function differently in certain cancer types depending on the tissue of origin.
Multiple somatic mutations of ECRG2 have been reported in various human malignancies such as lung, stomach, endometrial, skin, and colon cancer [3], which may adversely affect ECRG2 structure or function. Studies by Lucchesi et al. [3] recently demonstrated that while the wild-type form of ECRG2 exhibited strong growth suppression in cancer cells, the tumor-derived ECRG2 V30E mutant (identified in human lung cancer) failed to inhibit cancer cell growth. Also, unlike wild-type ECRG2, the V30E mutant was not able to negatively modulate HuR and XIAP proteins [3]. The mutant version also did not activate caspases 3, 8, and 9 or cleave PARP [3]. Furthermore, cells expressing the mutant version were more resistant to cancer drug treatments [3]. These studies demonstrate that somatic mutations in ECRG2 can abolish its tumor-suppressive activity in human cancer.
Patel et al. [4] have recently identified a naturally occurring ECRG2 promoter variant that may affect the regulation of ECRG2 expression. This promoter variant was initially identified and cloned from genomic DNA extracted from A549 human lung cancer cells [4]. In these cells, two alleles of the ECRG2 promoter were found: one named ECRG2-full (the longer variant), while the other was called ECRG2-del (the shorter variant), which was missing eight nucleotides (TAGAATTC) at position −217 to −209 when compared with the longer variant [4]. Interestingly, analyzing the database of single nucleotide polymorphisms (dbSNP), Patel et al. [4] found that a DNA sequence corresponding to the ECRG2-del variant existed in the dbSNP database, where it has been identified as the alternate allele of genomic variant rs3214447 [63]. According to information curated from 1000 Genomes Project Phase 3, ~38.5% of the world's population harbors an alternate allele of the rs3214447 variant (TAGAATTC deletion) in one or both copies of the ECRG2 promoter [64]. In further investigations, they found that the basal transcriptional activity of the ECRG2-full promoter (ref allele) was much higher than that of the shorter (alt) allele (~2:1 ratio) [4]. Similar observations have been reported in the Genotype-Tissue Expression (GTEx v8) database, where the alt allele of rs3214447 is correlated with lower ECRG2/SPINK7 expression in normal esophageal mucosa [65]. Importantly, DNA damage-induced ECRG2 promoter activation was also significantly higher with the full-length variant (ref allele) than with the short variant (alt allele). Further, the shorter variant (alt allele) was also defective in p53-modulated ECRG2 promoter induction [4]. These in vitro studies indicate that the TAGAATTC deletion within the ECRG2 promoter negatively impacts basal ECRG2 promoter activity, p53-mediated transcriptional regulation, and ECRG2 promoter activation by DNA damage in general. Given that the rs3214447 alt allele is found in about 38.5% of the world's population and ECRG2 plays important roles in the DNA damage response, it would be interesting to determine how ECRG2 expression is regulated following DNA damage in people carrying the rs3214447 alt allele.
Therapeutic Implications
Studies have demonstrated that plasmid- or virus-mediated expression of ECRG2 exhibits strong growth inhibition in cultured cancer cells [3,4], and adenovirus-delivered ECRG2 administered intra-tumorally also showed significant suppression of tumors grown in animals [2]. Further, the loss of ECRG2 expression is frequently observed in multiple human malignancies [1,4,9]. Thus, restoring ECRG2 expression and function in cancer cells is of great interest and an attractive therapeutic strategy. Several recent studies have investigated the effect of synthetic ECRG2 polypeptides on cultured cancer cells [66,67]. ECRG2 is a small protein of 85 amino acids; thus, its production is amenable to chemical synthesis. Studies by Song et al. [66,68] showed that synthetic ECRG2 alone (8.5 µg/L) killed esophageal cancer cells (EC9706) in cell culture by ~18-25% at 24 and 72 h time intervals. The synthetic ECRG2 was also able to enhance cisplatin-induced cell death in cisplatin-resistant esophageal cancer cells [67]. These studies suggest that ECRG2 has therapeutic potential for cancer treatment. However, while partial cell killing was observed, the efficiency of uptake of the synthetic ECRG2 peptide into cells was not evaluated [66-68]. ECRG2 is a multifunctional protein that acts extracellularly to regulate cell membrane proteins such as the uPA-uPAR complex and also intracellularly to modulate proteins such as HuR and metallothionein 2A [1,3]. Thus, proper delivery of ECRG2 across the cell membrane is an important issue that needs to be considered. Further, ECRG2 is regulated by post-translational modification [3,4,69]. It is possible that protein modification (which is not present in synthetic peptides) may be required for the full function of ECRG2. For that reason, the synthetic ECRG2 peptide may not be very efficient in cell killing or in performing other functions. Thus, newer approaches are necessary for the more efficient delivery of ECRG2 protein. In this context, several intracellular protein delivery systems have recently been evaluated for their efficiency in delivering proteins in animal studies and clinical trials. These include the cell-penetrating peptide (CPP)-based system [70], nanocarrier delivery systems [71], and the eTAT chimeric peptide delivery system [72]. With more efficient protein delivery approaches, it will be interesting to further evaluate the therapeutic potential of ECRG2 in future studies.
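As a rough sense of scale for the synthetic-peptide experiments above, the reported dose can be converted to molar units. The calculation below is only an illustrative sketch: the average residue mass (~110 Da) and the resulting molecular weight are assumptions, not values given in the cited studies.

```python
# Back-of-the-envelope conversion of the reported synthetic ECRG2 dose (8.5 ug/L)
# into an approximate molar concentration. Assumes an average amino-acid residue
# mass of ~110 Da, which is NOT stated in the source and is only a rough estimate.
residues = 85                      # ECRG2 length reported in the text
avg_residue_mass_da = 110.0        # assumed average residue mass (Da)
mol_weight_g_per_mol = residues * avg_residue_mass_da   # ~9350 g/mol

dose_ug_per_l = 8.5
dose_g_per_l = dose_ug_per_l * 1e-6
molar_conc = dose_g_per_l / mol_weight_g_per_mol        # mol/L

print(f"Approximate peptide concentration: {molar_conc * 1e9:.2f} nM")
# -> roughly 0.9 nM under these assumptions
```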
Conclusions
Since its discovery, multiple lines of evidence have demonstrated that ECRG2/SPINK7 is an important tumor suppressor, which is instrumental in numerous cellular phenotypes (Figure 2). ECRG2/SPINK7 inhibits cancer cell migration and invasion and also suppresses tumor metastasis in animals [2,5,31]. The overexpression of ECRG2/SPINK7 causes cancer cell death via multiple mechanisms [1,3,4]. Reduced or absent ECRG2/SPINK7 expression occurs in various human malignancies; cancer patients with low ECRG2/SPINK7 expression in their tumor tissues exhibit shorter disease-free survival [1,4,9]. Importantly, ECRG2/SPINK7 acts as an important player in the DNA damage response and serves as a p53 transcriptional target for inducing cell death in DNA-damaged cells [4]. Thus, the evidence indicates that defective ECRG2/SPINK7 (due to reduced expression or mutations) may be one of the important factors in human cancer development and the acquisition of anticancer drug resistance. All lines of evidence signify the importance of ECRG2/SPINK7 in human tumorigenesis and suggest that restoring ECRG2/SPINK7 function in cancer cells may be an attractive therapeutic strategy. With a suitable protein delivery approach, ECRG2 protein may prove to be a valuable anticancer therapeutic.
Figure 1. Structure of ECRG2 protein. (A) ECRG2 is predicted to contain an N-terminal signal peptide from amino acids 1-20 and a conserved KAZAL-type domain at its C-terminus from amino acids 31-85. (B) ModBase-predicted structure of ECRG2 residues 20-85 showing two alpha helices and three beta sheets [18].
Figure 2. Multiple molecular functions of ECRG2. ECRG2 is a pleiotropic protein that plays important roles in diverse cellular phenotypes including cancer cell metastasis, inflammation, centrosome duplication, DNA damage response, and cell death by apoptosis.
Functional outcome of external fixators in unstable distal radius fractures
Background: Fractures of the distal radius are very common injuries, accounting for about 15% of the workload of an orthopedic trauma unit. While cast treatment is universal for stable fractures, unstable fractures with comminution and intra-articular involvement are a different injury and are treated mainly by ligamentotaxis with proper restoration of anatomy. In this study, we evaluated the effectiveness of external fixation with or without augmentation for the management of unstable distal radius fractures. Methods: The study was performed on 49 patients with unstable distal radius fractures admitted in the emergency department. Patients who met the inclusion criteria were operated on with bridging external fixation using 2 pins in the radius and 2 pins in the second metacarpal, augmented with percutaneous K wires in some patients. Functional evaluation at 12 months was done using Solgaard scoring. Results: The study comprised 49 patients in the age group of 20-60 years, including 28 males and 21 females with a mean age of 42 years. Laterality included right side (n=23) and left (n=26). Mechanism of injury was road traffic accident (n=19), fall from height (n=17) and fall from standing height (n=13). The mean admission to surgery interval was 1.2 days, the mean operative time was 35 minutes and the mean time to union was 7.2 weeks. Complications included pin tract infections (n=7), transient neuropathies (n=5), early sympathetic dystrophy (n=2), malunion (n=2) and loss of motion of more than 20° (n=9). Final evaluation done using the Solgaard scoring system revealed Excellent results in 22 patients, Good in 18 patients, Fair in 8 patients and Poor in one patient. Conclusion: For unstable distal radius fractures excluding shear injuries, external fixation with or without augmentation is the preferred method of treatment as it is simple, less expensive, has acceptable complications and yields excellent results in a majority of patients.
Introduction
Abraham Colles, an Irish surgeon, first described the Colles fracture in 1814 [1]. Fractures of the distal radius are very common, especially in osteoporotic post-menopausal women, and contribute significantly to the work in the orthopedic emergency department. They usually result from a fall on the outstretched hand and subsequent fracture because of osteoporosis. In young people they are usually the result of high-energy trauma such as motor vehicle accidents and falls from height, with marked displacement and comminution of the distal radius [2]. Due to the significant comminution and intra-articular involvement, Jupiter and Lipton described these as 'pilon fractures of the upper extremity' [3]. Classification schemes for distal radial fractures are controversial, with limited usefulness, reproducibility and practicality [4]. The Frykman and AO classifications are the most commonly used [5]. Various important factors to consider while treating these injuries are fracture displacement, articular involvement, distal radioulnar joint involvement and assessment of bone quality. The radiographic parameters to consider are radial length, radial inclination, volar tilt and the amount of articular step. The criteria for acceptable reduction vary with the age of the patient [6]. Table 1 summarizes the normal and acceptable radiographic parameters for young active individuals [7]. Most distal radial fractures are treated nonoperatively by applying traction to reduce the fracture, followed by casting for a period of 6 weeks. It has been observed that in patients with articular involvement and comminution, there is loss of reduction with collapse of the distal radius with this form of treatment [8]. Patients usually have chronic wrist pain, difficulty in routine activities of daily living and lifting weights, a poor functional result and unacceptable clinical deformity [9]. Casting supplemented with percutaneous K wires has also been associated with the same problems, plus the chance of the wires getting infected beneath the plaster. The goal of treating these fractures is to restore radial length and inclination, volar tilt and articular congruency. Open reduction and internal fixation (ORIF) with a volar plate is indicated for volar and dorsal Barton fractures, where it is the treatment of choice. But in patients with metaphyseal comminution and significant articular involvement, ORIF is difficult and the amount of soft tissue stripping negates any potential benefit [10].
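Where Table 1 is not reproduced here, the idea of checking post-reduction radiographs against acceptance criteria can be illustrated with a small sketch. The threshold values below are commonly quoted figures assumed purely for illustration; they are not taken from Table 1 of this study.

```python
# Illustrative check of post-reduction radiographic parameters against acceptance
# thresholds. The threshold values are assumptions for demonstration only and do
# not reproduce Table 1 of this study.
ACCEPTABLE = {
    "radial_shortening_mm": 3.0,     # assumed maximum acceptable shortening
    "radial_inclination_deg": 15.0,  # assumed minimum acceptable inclination
    "dorsal_tilt_deg": 10.0,         # assumed maximum acceptable dorsal tilt
    "articular_step_mm": 2.0,        # assumed maximum acceptable step-off
}

def reduction_acceptable(shortening, inclination, dorsal_tilt, step):
    """Return True if all illustrative criteria are met."""
    return (shortening <= ACCEPTABLE["radial_shortening_mm"]
            and inclination >= ACCEPTABLE["radial_inclination_deg"]
            and dorsal_tilt <= ACCEPTABLE["dorsal_tilt_deg"]
            and step <= ACCEPTABLE["articular_step_mm"])

# Example: 2 mm shortening, 18 deg inclination, 5 deg dorsal tilt, 1 mm step.
print(reduction_acceptable(2.0, 18.0, 5.0, 1.0))  # True
```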
Recently, a prospective randomized trial found that external fixation and percutaneous pinning is the most efficacious treatment for unstable distal radius fractures [11]. Since being advocated initially by Anderson and O'Neil, this form of treatment has yielded excellent results in several authors' reports [12-15]. External fixation is based on the principle of ligamentotaxis, maintaining the length of the radius and the reduction of articular fracture fragments through the radioscaphocapitate and long radiolunate ligaments. However, volar tilt and lateral translation are not significantly corrected with this form of treatment. The procedure can be augmented with percutaneous K wires to reduce and fix any large fragments [16]. We conducted a study to evaluate the outcome of unstable distal radial fractures treated with external fixation augmented with percutaneous pins.
Methodology
The present study was conducted in the Department of Orthopedics, Government Medical College Srinagar, from January 2016 to December 2017. Patients aged between 20 to 60 years with closed unstable distal radius fractures were included in the study. We included AO type 23 B1 and 23 C fractures only. Exclusion criteria were simple extrarticular fractures, open injuries, volar and dorsal shear injuries (Barton fractures). All the patients were evaluated thoroughly including distal neurovascular status, and any associated injuries ruled out. Patients were operated as early as possible.
Surgical Technique
Under brachial block or general anesthesia, the fractures were reduced. After standard painting and draping, an external fixator with four pins (3.5 mm) and a rod was used to stabilize the fracture. The proximal pin in the radius was placed first, at about the junction of the upper and lower halves of the radius, inclined at 30-40 degrees dorsal to the frontal plane of the forearm. Another pin was placed in the index metacarpal base in line with the first pin in the radius. The pins were joined by a rod and the upper pin was tightened to the rod using a clamp. Traction was applied manually in the longitudinal direction with counter-traction applied at the level of the elbow. Then the metacarpal pin was tightened keeping the hand slightly in volar flexion. Additional 3.5 mm pins were added in the radius about one inch above the fracture and in the metacarpal head for added stability. The whole procedure was performed under fluoroscopic control. The fixation was augmented with percutaneous K wires in cases with radial styloid or dorsal fragments for added stability. One K wire was placed from the tip of the radial styloid laterally to engage the medial cortex of the proximal fragment. Another K wire was placed from the dorsal aspect to engage the proximal fragment. Pin site dressings were done and the limb was elevated to reduce swelling. Post-operative radiographs were assessed for fracture reduction, maintenance of radial length and inclination, lateral tilt and articular step, and placement of fixator pins and K wires. Active and passive finger motion and elbow motion were started from the first postoperative day. Patients were followed up at 2, 4, 6 and 8 weeks. The external fixator and pins were usually removed at 6 weeks, after bridging trabeculae had appeared across the fracture site. A below-elbow cast was applied for another 2 weeks followed by wrist mobilization exercises. Patients were followed monthly till 6 months and then 3-monthly till the final follow-up at 12 months. Functional evaluation was done using Solgaard scoring modified from Gartland and Werley (Table 2) [17,18].
Results
Baseline characteristics of the patients are summarized in Table 3. Associated injuries were present in a significant number of patients and are listed in Table 4. The mean interval from admission to surgery was 1.2 days with a range of zero to seven days. Most of the patients were operated on the same day of admission in the emergency theatre (n=26, 53.1%). Patients with associated fractures were operated in a single setting in routine theatres, while patients with head injuries were operated once the patient was fit for anesthesia, after a delay of 5-7 days. Similarly, the majority of patients were discharged on the first postoperative day (n=24, 49%), while patients with associated injuries and head injuries had a mean hospital stay of 5.6 days. The mean duration of the operative procedure was 35 minutes. All the patients started finger mobilization exercises on the first postoperative day. The mean time to union was 7.2 weeks with a range of 6 to 9 weeks. Union was defined when the patient had no pain and tenderness, with bridging callus present across the fracture site. There was no case of nonunion or deep infection in our series. Final evaluation was done using the Solgaard scoring system at 12 months. Results were Excellent in 22 patients, Good in 18 patients, Fair in 8 patients and Poor in one patient (Figures 1, 2, 3).
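To make the distribution of Solgaard outcomes easier to compare with other series, the counts above can be expressed as proportions. The short sketch below simply restates the reported figures; no new data are introduced.

```python
# Proportions of functional outcomes reported at 12 months (n = 49).
outcomes = {"Excellent": 22, "Good": 18, "Fair": 8, "Poor": 1}
total = sum(outcomes.values())  # 49 patients

for grade, count in outcomes.items():
    print(f"{grade}: {count}/{total} = {100 * count / total:.1f}%")

good_to_excellent = outcomes["Excellent"] + outcomes["Good"]
print(f"Good to excellent: {good_to_excellent}/{total} = "
      f"{100 * good_to_excellent / total:.1f}%")  # 40/49 = 81.6%
```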
Complications
The most common complication in our series was superficial pin tract infection, in 7 patients. It resolved with daily dressings, local care and oral antibiotics in all patients, but one patient had associated loosening of a pin. Median nerve compression (n=2) and superficial radial nerve symptoms (n=3) resolved in all patients with conservative treatment. Early reflex sympathetic dystrophy was present in two cases and resolved with physiotherapy and medications. One patient had a fracture of the second metacarpal intraoperatively while pinning. Malunion with 15° of dorsal tilt was present in 2 patients, among whom one patient had pain and limitation of activities at final follow-up with early arthritis on radiographs. Loss of motion (including palmar and dorsal flexion, radial and ulnar deviation, supination and pronation) of more than 20° as compared to the opposite side was present in 9 patients (Table 5). There were no cases of stiffness or well-developed arthritis at final follow-up. There were no cases of tendon rupture or midcarpal instability in our series.
Discussion
For distal radial fractures, the functional result is most important in assessing outcomes [19]. For stable metaphyseal fractures the cast method provides the best functional result. However, for unstable comminuted intra-articular fractures, external fixation with or without augmentation has become the treatment of choice [11]. Comminuted intra-articular fractures deserve special attention because neither the casting method nor ORIF is suitable for these fractures. With the casting method and percutaneous K wires, most of these fractures redisplace, leading to a poor result. ORIF with a volar plate is also not suitable because the fragments are too small to hold screws, soft tissue stripping is required, and osteoporosis is common in the elderly [20]. In 1923 Bohler recognized the need for maintaining fixed traction using pins and plaster, while Anderson and O'Neil in 1944 first recommended external fixation [21]. Present-day external fixators are lighter and more comfortable, allowing cleaning of pin sites, remanipulation and any necessary adjustments. The duration of external fixation is also debatable, with some authors advocating early removal at 3 weeks while others advocate routine removal at 6 weeks. The concern with early removal is redisplacement; however, Haddad M et al, in a study of 36 patients using the external fixator for 3 weeks and 5 weeks, found no significant difference between the two groups [15]. They reported excellent results in 30 patients using Solgaard scoring. Jakim I et al reported 83% good to excellent results and only a few complications [22]. Using the same scoring, our study showed good to excellent results in 40 patients (81.6%). The use of the external fixator is still not that popular, primarily because of its complications. Also, some authors doubt its ability to maintain reduction. The loss of reduction is primarily due to infection with pin loosening and failure of the fixator [23]. Various authors recommend not using fixators above 65 years of age [24,25]. Anderson JT et al reported complications in 16 of 24 patients, with 9 cases of pin track infection and two of pin loosening. They concluded that complications in external fixation are frequent, and their effect on long-term functional results and patient satisfaction is negligible [26]. Out of 51 cases, Seitz WH et al had 5 cases of superficial pin tract infection [14]. In our study there were superficial pin track infections in 7 patients, all of which subsided with conservative treatment. Cooney WP used four different configurations of external fixation in a series of 100 unstable distal radius fractures and obtained 86% good to excellent results [13]. The quadrilateral frame provided the most effective immobilization among the four. Joosten U et al, in a long-term study of 8 years on 174 patients with unstable distal radius fractures, concluded that restoration of radial length is important in order to achieve a good outcome. They achieved 71.8% good to excellent results [27].
Our study had several limitations, as other methods for treating distal radius fractures were not compared with the external fixation method. The follow-up was also short, and long-term randomized comparative studies are needed to document the efficacy of this method of fixation.
Conclusion
External fixation augmented with percutaneous K wires is an excellent option for treating unstable distal radius fractures. The technique is fairly simple with a low learning curve, has a low reoperation rate and an acceptable rate of complications, and produces a satisfactory outcome in the majority of fractures. Wise judgment is needed in restoring radial length and inclination, which are important for a good outcome. The technique should not be used for shear fractures, which are primarily treated with ORIF.
Burden and nutritional deficiencies in opiate addiction- systematic review article.
Addiction to the illicit and prescribed use of opiates is an alarming public health issue. Studies on addictive disorders have demonstrated severe nutritional deficiencies in opiate abusers, with behavioral, physiological and cognitive symptoms. Opiate addiction is also linked with a significant number of diseases, including Human Immunodeficiency Virus (HIV), Hepatitis C Virus (HCV) and other blood-borne diseases that generally stem from the use of needles to inject heroin. The use of medication-assisted treatment for opioid addicts in combination with behavioural therapies has been considered a highly effective treatment. Methadone is a long-lasting μ-opioid agonist and a pharmacological tool that effectively attenuates withdrawal symptoms and is widely used in replacement therapy. This review article aims to explain opiate addiction mechanisms, epidemiology and disease burden, with emphasis on the dietary and nutritional status of opiate-dependent patients in methadone maintenance therapy.
Introduction
Opiates such as heroin, morphine, and some other types are among the most extensively prescribed and effective medications for the clinical management of chronic pain. However, the use of these compounds is seriously limited by the rapid development of tolerance and physical dependence (1). Historically, addiction was perceived as a failing of personal character and was not systematically addressed by the medical and academic communities until the 20th century. Drug abuse, including opiates, methamphetamine, cannabis, and alcohol, has already become a major public health problem (2,3). The widespread uptake of the morphine hypodermic syringe after 1805 by doctors to treat general symptoms inadvertently turned many of their patients into drug abusers (4,5). The intake of opiates and dependence on them cause major health damage, including poor quality of life, increased risk of cancer, and many other harms (6-8). Major changes have been seen in the past few decades in the culture and patterns of drug consumption, such as decreasing opioid injection and intravenous drug use, while prescription opioids have become more accessible; nevertheless, morbidity and mortality among opioid users continue, driven by chronic infectious diseases such as HIV and by premature death from accidents and overdose (9,10). Opiates are well known for their effects on the human central nervous system, such as dizziness, relief, mental clouding, mood changes and loss of fine motor skills (11). Consequently, frequent opiate consumption influences the activity of several neurotransmitter and neuropeptide systems in brain circuits that regulate mood, behavior and other activities. Some activities, including analgesia, species-typical behaviour, and reward, are mediated by opioid receptors, which are distributed across the brain and spinal cord. Researchers have identified three separate classes of opioid receptors, the kappa, delta, and mu, although it is likely that there are additional types. Opioids usually act through calcium-dependent potassium channels independently of G proteins, contrary to the concept that opioids occasionally activate G proteins (12). Dependence on opiates such as heroin or prescription painkillers is one of the most devastating addictions found in the treatment community. There are various types of opiate addiction treatment; which treatment is most effective or safest for a given patient depends on various factors. One of the most effective treatments for opioid dependence is the use of Medication Assisted Treatment (MAT) with effective medications combined with behavioural therapies (13). One of the safest and most efficient treatments of heroin abuse and dependence over the past forty years has been methadone (14,15), as well as buprenorphine, which is authorised for use by licensed doctors (16). Similarly, levo-alpha-acetylmethadol (LAAM) and naltrexone have also been used as replacement therapies. Methadone meets most criteria for a pharmacological agent for the long-term treatment of addiction, and its use results in the normalization of many vital physiological processes that are disrupted during opiate addiction (17). Methadone treatment also effectively lowers blood-borne illnesses such as HIV infection and hepatitis C virus, which are common among injecting drug users (18,19). However, extensive diversion and the high level of mortality in this group caused by overdose have forced governments to set up highly regulated methadone maintenance treatment models (20).
Advances in scientific approaches have shown that there is a correlation between health and diet across genders, ages and ecological conditions. The daily essential nutrients that are necessary for the human body to grow and maintain the normal functions of life are carbohydrates, fats, protein, vitamins, minerals, and water. Studies have reported poor diets along with overweight and obesity among people in recovery from opiate addiction. Most opiate addicts have serious nutritional deficiencies of key proteins, fats, vitamins and minerals, which disrupt their ability to digest carbohydrates efficiently. Physical and biochemical changes that occur from drug and alcohol use also cause nutritional deficiencies in opiate addicts (21,22). Therefore, this review aimed to assess the effects and mechanisms of opiate addiction with an emphasis on the nutritional status of opiate-dependent patients during methadone maintenance therapy.
Opiate
Opiates are a group of painkilling drugs that come from the poppy plant. Opiates derived from opium are used as analgesics and hypnotics. The natural opiates include opium, morphine, and codeine. There are also some man-made substances called opioids (23). Although the most commonly used term is opiate, the term "opiate" is sometimes reserved for close relatives of opium such as codeine, morphine and heroin, while the term "opioid" is used for the entire class of drugs, including synthetic opiates such as oxycontin (24).
Disease burden of opiate addiction
The intake of opiates and dependence on them cause significant damage to health, increase mortality and morbidity and lead to a poor quality of life (25,26). Psychoactive substances, including illicit drugs, constitute 8.9% of the total burden of disease recorded by the World Health Report in 2002 (27). Opiate overdose may cause confusion and physical distress and can even slow the person's respiration so much that breathing stops in severe cases (28). Despite its adverse effects on health, opiate consumption continues by different means, including swallowing, smoking, and drinking, all over the world, for example in Southeast Asian countries (29).
Epidemiology of opiate addiction
Opiate addiction is now becoming a worldwide problem, with 13-22 million people afflicted, more than half of them in Asia. Addiction to the illegal and prescribed use of opiates is an alarming public health issue. Between 0.3% and 0.5% of the world population (i.e., 21-35 million people) use opioids (30). The prevalence of opiate use, particularly in young adults after the 1980s, has been similar in the United States and Europe. Indeed, in the early 1990s problematic opiate use in the US was two-fold that in Western Europe. However, in a limited number of European countries the prevalence is now close to that of the United States (31). Opiate abuse and accidental mortality are rising in the U.S. The Centers for Disease Control and Prevention (CDC) estimated that almost 74% of all prescription drug overdose deaths in the U.S. in 2008 were caused by the consumption of opiate sedative drugs. Furthermore, 22.6 million Americans aged 12 or older were illicit drug users, as reported by the 2012 National Survey on Drug Use and Health. The United Nations Office on Drugs and Crime (UNODC) reported that in Canada the use of prescription opioids overshadows the use of heroin (0.3% annual prevalence), while in Brazil and Chile the prevalence is 0.5% of the population. However, in South West and Central Asia, in most countries, i.e., Iran, Pakistan and Afghanistan, the prevalence of opiate use is higher than the world average. There are no data available on drug consumption in many regions of Africa, the Middle East and some parts of Asia (32). In Malaysia, the National Anti-Drug Agency recorded 3611 drug abusers in 2010, an increase of 110% over the corresponding period one year earlier (33). There have been extensive regional differences in the use and abuse patterns of opiates. Heroin is the most widely used illegal drug in the majority of Africa, Europe and Asia, while hydrocodone, codeine, hydromorphone, morphine, oxycodone and meperidine are the main opioids of abuse in the Americas and Oceania, although in the last decade an increase in prescription opioid abuse has been reported in some African and Asian countries. Americans consume almost 80% of the world opioid supply while America has only 5% of the world population (30).
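The prevalence range quoted above can be checked against the absolute figure with simple arithmetic; the world population of roughly 7 billion used below is an assumption consistent with the period discussed, not a number given in the text.

```python
# Sanity check: 0.3-0.5% of the world population vs. the quoted 21-35 million users.
world_population = 7.0e9          # assumed ~7 billion for the period discussed
low, high = 0.003, 0.005          # 0.3% and 0.5%

print(f"Lower bound: {world_population * low / 1e6:.0f} million")   # ~21 million
print(f"Upper bound: {world_population * high / 1e6:.0f} million")  # ~35 million
```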
Mechanism of opiate addiction
In 1992, a specific opiate receptor was successfully cloned for the first time by two groups working independently, a major breakthrough (34,35). This achievement was followed by the cloning of the μ- and κ-opioid receptors of rodents and humans, which opened new doors for basic and clinical animal research as well as molecular and human genetics studies (36-38). Opiates (39) exert their effects by binding to three opioid receptor types in the body (μ, δ, and κ) and mimicking the actions of the endogenous opioid peptides, which are the endorphins, endomorphins, enkephalins, and dynorphins. Opiates act mainly at μ-opioid receptors (MOP-r), significantly increasing dopamine levels in the synaptic clefts of the reward pathway. Heroin and prescription opioids such as oxycodone and hydrocodone act on these receptors as relatively short-acting agonists, while stimulants such as cocaine and methamphetamine act primarily to enhance synaptic dopamine by preventing dopamine reuptake or increasing its release. One theory suggests that when opiates attach to opioid receptors on neurons, less gamma-aminobutyric acid (GABA), a neurotransmitter that inhibits dopamine, is released. This increase in dopamine causes the feeling of extreme pleasure experienced by opiate addicts (40,41). Important advances have been made in the pathophysiology as well as the molecular and cellular neurobiology of opiate addiction in humans over the last few decades. The ventral tegmental area (VTA), an important centre of dopamine (DA) activity, is involved in opioid reward. A couple of studies in rats suggest that opioids acting in the VTA have a rewarding effect (42,43). Dopaminergic neurons are consistently implicated in the rewarding properties of opioids. Systemic administration of opiates enhances dopamine turnover in the nucleus accumbens (NA), which suggests that opioids increase dopamine activity (44). In 1992, DiChiara suggested that opioids hyperpolarize GABA interneurons in the VTA, thereby reducing the inhibition of dopaminergic neurons projecting to the NA and amplifying the reward signal associated with addiction. While this hypothesis is plausible, it has not been proven yet (39). Opiate use damages psychomotor function in those who cannot quit the drug. Zacny and colleagues studied the cognitive, psychological and physiological effects of commonly prescribed analgesics in previously healthy individuals. Patients who cannot stop the drug and routinely take prescribed analgesic medications experience negative subjective effects attributable to dependence and incipient withdrawal (45-47).
Effects of opiates on addicts
Opiates are primarily central nervous system (CNS) depressants and analgesics. Opiate use normally produces physical and psychological dependence.
Opiates have been well-known for their effects on the central nervous system, such as dizziness, sedation, mental clouding, mood changes and loss of fine motor skills. In particular, performance may be impaired on cognitive tasks that require decision-making that involves balancing shortterm rewards and long-term consequences (54)(55).
The opiate addict's brain takes months and even years to regain its normal functioning. During this time, opiate addicts experience lack of motivation, extreme fatigue, depression and sensitivity to pain. General symptoms of opiate intoxication are slurred speech, drowsiness, pupillary constriction, decreased level of consciousness, hypotension, rhinorrhea, piloerection, nausea, vomiting, diarrhea, restlessness, respiratory depression and hypothermia; opiates also induce tolerance. The following are a few of the adverse effects of opiate use and misuse (56,57). Infectious side effects of opiate use are commonly caused by using a syringe to inject drugs, particularly heroin injection, through which a significant number of HIV/AIDS and hepatitis cases have been reported. It is estimated that 60% to 90% of injection users are infected with hepatitis C virus (39,40); other infectious diseases, such as common bacterial infections including Staphylococcus aureus and cellulitis, are also transmitted by injection (41). Metabolic problems from opiate abuse are unusual, but become more common as drug use rises. Metabolic problems are most often associated with heroin, cocaine, and ecstasy, although a wide range of medical problems can be produced. The use of heroin has been implicated in blood sugar disorders through a number of mechanisms. Fasting insulin levels were found to be four times higher in heroin addicts than in control subjects, and insulin resistance stemming from opioid use may be coupled with beta cell dysfunction. The acute insulin response in heroin addicts was found to be 42% lower than in control subjects, accompanied by an 80% lower glucose disappearance rate, when they were given intravenous glucose (43-47). Opiate dependents do not usually lead an ordinary life and have many problems with their dietary and sleeping schedules. They usually neglect some of the basic requirements of life in order to maintain their daily drug supply. Hence, most of them become undernourished and impoverished. Moreover, they are more at risk of infectious diseases because of their drug use and lifestyle, such as injection and risky sexual behavior (48). The majority of substance-dependent individuals do not maintain an appropriate weight, and their hormonal and immunological systems usually remain imbalanced. In some cases severe organ failure, such as chronic liver failure, is observed among opiate dependents (49).
Treatment options for opiate addicts
Opiate dependence has been treated by various approaches, such as the use of Medication Assisted Treatment (MAT) in combination with behavioural therapies. Methadone, a schedule II opioid agonist, and buprenorphine, a schedule III partial opioid agonist, are frequently employed as replacement therapies. Similarly, levo-alpha-acetylmethadol (LAAM) and naltrexone have also been used as replacement therapies. In the 1960s, methadone gained appreciation as an effective treatment for heroin addicts. Methadone is a long-lasting μ-opioid agonist which assists patients in making the transition towards abstinence by mimicking some opiate actions (50). In 1999, almost 115,000 people who had been addicted to heroin participated in methadone maintenance therapy in the United States (51). Methadone itself is an opiate which causes depression of the central nervous system. Methadone is a pharmacological agent that reduces opiate craving and lessens withdrawal symptoms and, if coupled with counseling, enables patients to reach a tolerance threshold while avoiding drowsiness and euphoria. The appropriate and safe daily dosage ranges from 20 to 30 mg in the initial stages and averages 60 to 100 mg at later stages. Because of its long half-life of 24 to 36 hours, between 4 and 10 days are required to achieve a stable maintenance dosage (52,53). Several studies of methadone maintenance therapy have examined the circumstances in which deaths occurred. In one cross-sectional study, most of the 238 patients who died between 1990 and 1995 were drug users and suffered from medical illnesses. Almost 21% of the deaths occurred in the first week of methadone treatment, and 88% of these patients were polysubstance abusers. Only around 10% of the deaths related to the first week of MMT tested positive for methadone alone. Another study reported that 62 (71%) of 87 MMT patient deaths involved illicit drug consumption. According to these studies, overdose during methadone maintenance therapy is related to polydrug use (54-56).
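The 4-10 day window quoted above for reaching a stable maintenance dose is consistent with the usual pharmacokinetic rule of thumb that steady state is approached after roughly four to five elimination half-lives. The sketch below applies that rule to the half-life range given in the text; the four-to-five half-life rule itself is a general assumption, not a figure from the cited studies.

```python
# Approximate time to reach steady state for methadone, using the common
# rule of thumb of ~4-5 elimination half-lives (an assumption, not a value
# taken from the cited studies).
half_life_hours = (24, 36)            # half-life range quoted in the text
half_lives_to_steady_state = (4, 5)

shortest = half_life_hours[0] * half_lives_to_steady_state[0] / 24  # days
longest = half_life_hours[1] * half_lives_to_steady_state[1] / 24   # days

print(f"Estimated time to steady state: {shortest:.0f} to {longest:.1f} days")
# -> roughly 4 to 7.5 days, within the 4-10 day range quoted in the text
```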
Buprenorphine is a semi-synthetic opioid derivative and a partial μ-opioid agonist and κ-opioid antagonist, which has less abuse potential than other opioids. Buprenorphine was initially proposed in 1978 for opioid addicts as an oral alternative opioid replacement therapy, because the intensity of the rewarding effect is milder at higher doses (57-59). Its use is promoted by the Substance Abuse and Mental Health Services Administration of the US Department of Health and Human Services (60). Buprenorphine acts on the same receptors as heroin and morphine, alleviating drug cravings without producing the same severe "high" or hazardous side effects. Due to extensive metabolism in intestinal and liver tissues, buprenorphine has a very low oral bioavailability. Naltrexone and naloxone, the two most commonly used opioid (mu receptor) antagonists, act both centrally and peripherally, but have different pharmacokinetic profiles and different therapeutic uses.
The US Food and Drug Administration approved naltrexone and naloxone for the treatment of opioid dependence primarily because of their pharmacologic profile: antagonism at μ-opioid receptors, by which naltrexone and naloxone block opioid effects (61). Naloxone has poor oral bioavailability and acts immediately to reverse opioid effects, whereas naltrexone has a long duration of action, is used orally, and is useful in detoxification and maintenance therapy (62,63). Naltrexone is also used in anesthesia-assisted detoxification, in which naloxone is administered under general anesthesia to speed up withdrawal. This technique is also known as ultra-rapid or rapid detoxification (63).
Dietary and nutritional status of opiate addicts in methadone maintenance treatment
The development of effective treatments for opiate addiction is a high priority in public health because addiction poses a significant burden on affected individuals. Studies of addiction disorders have demonstrated severe nutritional deficiencies in drug abusers, including weight loss and changes in dietary patterns. Changes in the status of specific nutrients can create barriers to withdrawal from opiate addiction (64,65). Opiate addicts have been shown to exhibit unhealthy eating behaviors owing to a lack of nutritional knowledge, food preparation skills, and supportive environments (66-68). Good nutrition education and physical activity are reported to be quite effective in helping substance abusers withdraw from opiates (69). During withdrawal from heroin, nicotine, marijuana, and cocaine, weight gain or loss occurs, caused by major changes in food intake and selection. Nutrition is related to conditions and diseases: for example, diabetes decreases sensitivity to dependence on morphine, vitamin D deficiency slows down morphine dependency, and protein deprivation generates preferential fat intake with low cocaine use (70). Nutritional status also plays an important role in the process of recovery and survival of an individual, as in HIV infection, which compromises nutritional status to the point of producing malnutrition (71).
Several studies have concluded that there is a correlation between drug addiction, education, income levels, and body mass index; the higher the body mass index, the higher the income and educational levels, and vice versa (72,73). In 2011, Alves et al. assessed the nutritional and sociodemographic characteristics of heroin addicts during a detoxification program, and found that heroin addicts consume less than the minimum amount of vegetables, fruit and grains recommended by the food pyramid and are more eager to have sweets (74). Several other studies have also demonstrated that the consumption of vegetables and fruit by drug addicts is lower than in the general population and that they are more prone to consume food with low vitamin content. Unfortunately, the scope of nutrition services has not been well defined in detoxification programs and it has not been seen as a main problem. Larson et al. assert that, because of the major deficiencies and absorption problems, proper eating behavior is not the only solution to overcome the depletion of nutrients at the beginning of a detoxification program. Pantothenic acid administered orally is not absorbed by alcoholic patients, as shown by urine testing (75-77). Increasing the dietary intake of protein and reducing simple carbohydrates in favour of vegetables and whole grains can manage the carbohydrate-metabolism health problems (78). Therefore, to recover from opiate addiction, patients need to consume even more amino acids and protein during the treatment process (79,80). Methadone maintenance treatment by itself is not a favorable approach unless it is coupled with a proper diet, given the negative role of vitamin and mineral deficiencies in the withdrawal process. Williams found that high alcohol intake in rats resulted in vitamin B6, vitamin A, thiamine, riboflavin and pantothenic acid deficiencies (81). In addition to proteins and key vitamins, minerals such as zinc, iron, calcium, chromium, magnesium, potassium and other essential nutrients should be prescribed in detoxification programs for recovering addicts. Zinc can help to improve the immune system and proper brain function (82). Many opiate and alcohol addicts have shown calcium and magnesium deficiencies due to poor diet and inadequate intake of calcium. Calcium and magnesium deficiencies are major factors in pain and nervous/muscular disorders among addicts and alcohol consumers during detoxification programs (22).
Conclusion
Opiate addiction, a major public health problem throughout the world, is rising globally. Opiate-dependent individuals have several deficiencies, including nutritional deficiencies and weight deficits. The most efficient recovery program is methadone maintenance therapy, but it does not appear to be a favorable approach unless it is accompanied by a proper and diverse diet to overcome nutritional deficiencies. Further studies are required to assess the impact of other factors, such as gender, food behavior, dietary intake, exercise, and non-dietary determinants of nutritional status in the opioid-using population. An accurate and efficient nutritional intervention among drug addicts during detoxification could decrease their nutritional deficiencies and subsequently boost their productivity.
Ethical considerations
Ethical issues (including plagiarism, informed consent, misconduct, data fabrication and/or falsification, double publication and/or submission, redundancy, etc.) have been completely observed by the authors.
Association between health information, use of protective devices and occurrence of acute health problems in the Prestige oil spill clean-up in Asturias and Cantabria (Spain): a cross-sectional study
Background This paper examines the association between use of protective devices, frequency of acute health problems and health-protection information received by participants engaged in the Prestige oil spill clean-up in Asturias and Cantabria, Spain. Methods We studied 133 seamen, 135 bird cleaners, 266 volunteers and 265 paid workers selected by random sampling, stratified by type of worker and number of working days. Information was collected by telephone interview conducted in June 2003. The association of interest was summarized, using odds ratios (OR) obtained from logistic regression. Results Health-protection briefing was associated with use of protective devices and clothing. Uninformed subjects registered a significant excess risk of itchy eyes (OR:2.89; 95%CI:1.21–6.90), nausea/vomiting/dizziness (OR:2.25; 95%CI:1.17–4.32) and throat and respiratory problems (OR:2.30; 95%CI:1.15–4.61). There was a noteworthy significant excess risk of headaches (OR:3.86; 95%CI:1.74–8.54) and respiratory problems (OR:2.43; 95%CI:1.02–5.79) among uninformed paid workers. Seamen, the group most exposed to the fuel-oil, were the worst informed and registered the highest frequency of toxicological problems. Conclusion Proper health-protection briefing was associated with greater use of protective devices and lower frequency of health problems. Among seamen, however, the results indicate poorer dissemination of information and the need of specific guidelines for removing fuel-oil at sea.
The vessel finally sank at a depth of around 3,500 metres (approximately 11,500 ft.) [1]. The accident led to a major spill of the tanker's cargo of oil, with the first black oil-laden tide arriving on the Galician coast on 16 November. By early December, the oil had spread, and started coming ashore on the Asturian coast and, subsequently, in Cantabria and the Basque Country, thereby affecting the entire northern coast of Spain [2].
The heavy fuel (listed as M100, No. 6 or No. 2 according to the Russian, Anglo-Saxon and French classifications respectively) [3,4] discharged by the Prestige contains three groups of substances potentially hazardous to health, i.e., volatile organic compounds (VOCs), polycyclic aromatic hydrocarbons (PAHs) and heavy metals, particularly zinc, nickel and vanadium. Furthermore, it has a density of 992.1 kg/m3 at 15°C (11.04° API), a viscosity of 615 centistokes at 50°C and a low tendency to evaporate and disperse naturally [3].
The Ministry of Health & Consumer Affairs, acting in liaison with the Asturian and Cantabrian Public Health Authorities, sponsored a survey intended: firstly, to characterise exposure to fuel-oil by persons participating in the oil spill clean-up; secondly, to study the use of protective devices and health-protection information received; and, thirdly, to ascertain the acute health problems experienced by such participants. As a result of this project, acute health problems experienced by the persons who co-operated in the clean-up tasks, and the association between such problems and the nature of the work and use of protective devices in the regions of Asturias and Cantabria [5] were analysed. These individuals were basically divided into four groups according to the type of work undertaken, i.e., volunteers, bird cleaners, seamen and purpose-paid workers [5]. Briefly, the volunteers generally worked on weekends, in both high- and low-pollution areas, devoting themselves almost exclusively to the cleaning up of boulders and rocks, shingle beaches, sandy beaches and wharves. The bird cleaners performed their tasks, also generally for short periods (weekends), in closed premises where they received the oil-coated birds. The seamen worked in highly polluted areas, positioning floating barriers and booms, and skimming up the oil from their boats, for periods that generally exceeded 20 days [5]. Finally, the purpose-paid workers patrolled highly polluted areas of coastline, carrying out boulders and rocks, shingle beaches, sandy beaches, wharves and high-pressure/vacuum clean-up activities. Like the seamen, their work periods were longer than those of volunteers and bird cleaners. Although there was a high percentage of use and few tears and breakages of protective equipment in all groups, special mention must nevertheless be made of the high proportions of torn gloves among bird cleaners and of torn suits among seamen, in particular, who, moreover, reported wearing masks to a much lesser extent than the other groups [5].
An important result of this first study was the greater frequency of disorders among seamen, and the negligible magnitude of the difference between paid workers and volunteers vis-à-vis the frequency of health problems, despite the fact that paid workers and seamen were involved for an average of two months, whereas volunteers participated for less than a week. These data might suggest that the frequency of health problems could be associated with differences in the health-protection information received. Health-protection information can be an important resource in risk prevention in the case of cleanup workers and, in some contexts, may be less costly than other preventive measures. Nevertheless, the usefulness of a message cannot be taken for granted: not only must it be communicated in an understandable and trustworthy manner, but it should also capture the attention of and be perceived as useful, effective, and acceptable by the target audience [6][7][8][9].
Owing to the possible health risk associated with the Prestige oil spill clean-up work, those involved in such tasks received health-protection information. In Asturias and in Cantabria, information was disseminated by a number of public administrative bodies (regional authorities, as well as city and town councils), Civil Protection Corps, fishermen's guilds, some non-governmental organisations (Red Cross, ecologist associations) and the companies (TRAGSA; Empresa de Residuos de Cantabria) contracted by the Regional Authorities to clean up the beaches and remove the oil residue and tar. In general, the information furnished was based on the Regulations for the Prevention of Risk in the Cleaning up of Areas Polluted by the Oil Spill from the vessel "Prestige" (Normas para la prevención de los riesgos en las tareas de limpieza de zonas contaminadas por el vertido de Fuel-Oil del Buque "Prestige") issued by the Ministry of Health & Consumer Affairs (Ministerio de Sanidad y Consumo-regulations available on request-). These regulations include individual protection measures (work clothes, protective goggles, gloves, boots and mask), recommendations as to diet and hygiene, and a series of circumstances that contraindicate the work for certain persons. Briefings were mainly oral and, as the groups were to be allocated different tasks, each tended to receive specific information, different to that given to the others.
Paid workers were the group that received the most uniform briefings. Most workers were briefed by the TRAGSA Risk Prevention Unit, which issued a series of regulations containing general information on occupational risk prevention, as well as specific information on removing residue from beaches, cleaning rocky stretches of coastline with high-pressure jets and hoses, conducting spill surveillance of slicks approaching beaches, and using self-propelled sand-rake and beach-cleaning equipment. This information was explained by a prevention technician and a talk was given to each work party prior to the activity. Some workers were hired directly by the town councils affected, and in such cases it was the council itself that undertook the necessary briefing.
Bird cleaners received the Ministry guidelines plus specific recommendations as regards the working conditions at the San Juan de Nieva Bird Rescue & Recovery Centre (e.g., direct work with animals at high temperatures). This information was mainly supplied by Asturias Health Authority staff and ecological associations.
Among seamen and volunteers the information received was more heterogeneous. The seamen were mainly briefed by fishermen's guilds, which the Cantabrian Regional Authority had supplied with a set of "Measures to be adopted by persons engaged in hydrocarbon cleanup work at sea". These measures include: circumstances that contraindicate the work for certain persons; guidelines regarding the use of individual protective equipment (goggles, dungarees, mask, gloves, boots and protective suit); recommendations in the event of occasional direct contact with fuel-oil; recommendations on the consumption of food and drink; cleanliness of equipment; description of symptoms and effects due to prolonged exposure; and first aid. Volunteers were informed by a series of different institutions, with a high degree of participation by NGOs. The information furnished was mainly drawn from the above-mentioned ministerial guidelines.
In the above context, this paper sought to examine the association between use of protective devices, frequency of acute health problems and receipt of the pertinent health-protection information prior to performing clean-up tasks following the Prestige oil spill, among the four above-mentioned groups of people engaged in clean-up activities in Asturias and Cantabria, namely volunteers, paid workers, seamen and bird cleaners.
Methods
Selection of the study sample and data-collection have both been described in an earlier paper [5]. The study population comprised persons who participated in the clean-up of the pollution caused by the Prestige and were registered in the censuses taken by Public Health Authorities of Asturias and Cantabria. This census information included full name, date of birth, group, number of days worked and telephone number. After excluding persons with no information on number of days worked and those who had formed part of 2 or more groups, the sampling framework was made up of 4117 persons in Asturias and 3621 in Cantabria. No seamen were included in the Asturian census and only two bird cleaners (who were not interviewed) were registered in the Cantabrian census.
The health authorities decided a priori to include a total of 400 persons in each of the two geographic areas, viz., Asturias and Cantabria. Initially, 100 persons were to be included in each group and area, but, given the absence of seamen in the Asturian worker census and the lack of bird cleaners in Cantabria, it was decided that the sample size of each group would be increased to 133 in order to maintain the total sample at 400 workers per geographic area. Samples were separately selected for Asturias and Cantabria by means of random sampling stratified by two variables, i.e., "group affiliation" (volunteers, paid workers, seamen and bird cleaners) and "number of cleaning days worked as a member of that group" (less or more than five days), in order to favour the overrepresentation of individuals who had cleaned for longer periods. The final study sampling comprised 133 seamen in Cantabria, 135 bird cleaners in Asturias, and 266 volunteers and 265 paid workers in both regions together. The corresponding number of subjects was predetermined by stratum, and a main and two substitute samples were extracted from the census, randomly establishing a one-to-one relationship between units of the main and each of the substitute samples to reduce any bias caused by replacements. A total of 62.5% of persons selected and located in the main sample agreed to participate in the study. Individuals who could not be contacted after three attempts on different days and at different times of day, or who did not wish to participate were replaced by the relevant substitute. As the composition of the sample was not proportional to the study population, all estimates were computed using the corresponding weighting factors (StataCorp., 2004).
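The stratum-specific weighting described above can be sketched in a few lines; the stratum sizes below are illustrative placeholders rather than the study's actual census and sample counts, and the snippet only shows the usual design-weight construction for a disproportionate stratified sample.

```python
import pandas as pd

# Stratum = group affiliation x days worked (< 5 or >= 5); all counts are
# invented placeholders, not the study's census or sample figures.
strata = pd.DataFrame({
    "group":      ["volunteers", "volunteers", "paid workers", "paid workers"],
    "days_5plus": [False, True, False, True],
    "N_census":   [1500, 600, 900, 700],   # persons registered in the census
    "n_sample":   [60, 73, 60, 73],        # persons interviewed in that stratum
})

# Design weight = census size / sample size within each stratum, so strata
# deliberately over-sampled (long-duration workers) are down-weighted.
strata["weight"] = strata["N_census"] / strata["n_sample"]
print(strata)

# A weighted prevalence would then combine respondent-level outcomes with
# these weights, e.g. (outcome * weight).sum() / weight.sum(), after merging
# the weights onto the interview data by (group, days_5plus).
```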
Data for the epidemiological survey conducted by the Ministry of Health & Consumer Affairs were collected by telephone interview during the first 20 days of June 2003. The structured questionnaire was based on one that had been previously used in France after the Erika oil spill [10] and included data on type and duration of clean-up activity, use of protective devices, contact with oil-fouled products, perceived health problems, alternative exposures to PAHs and health-protection information received. The data obtained on items relating to health-protection information were used for this study.
For study purposes, an informed person was defined as any subject who had received information before the start of the clean-up activity. Health problems were divided into two major groups, namely, injuries and toxicological problems. The former grouped together the consequences of physical work, e.g., low back pain and lesions (bruises, scratches, blisters, superficial or deep cuts, twists and sprains, broken bones, knee pain and chipped teeth). Toxicological effects included symptoms previously related to exposure to VOCs and PAHs, such as headaches, itchy eyes, throat and respiratory tract problems and nausea/vomiting/dizziness symptoms (including any of them). Differences in proportion were analysed using the Chi-squared test. The association between reported health problems and information received was summarized using odds ratios (OR) and their 95% confidence intervals, obtained from logistic regression. Odds ratios adjusted for time worked in high- and low-pollution areas were likewise obtained. Analyses were performed independently for each group because time of exposure, tasks performed and data sources varied accordingly.
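As a rough, non-authoritative illustration of the crude and adjusted odds-ratio estimation described above, the following sketch fits logistic regressions with statsmodels on simulated data; the variable names, effect sizes and outcome are invented for demonstration and are not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "informed":  rng.integers(0, 2, n),    # received a briefing (0/1)
    "days_high": rng.integers(0, 60, n),   # days worked in high-pollution areas
    "days_low":  rng.integers(0, 60, n),   # days worked in low-pollution areas
})
# Simulated outcome: headaches made less likely among informed subjects.
logit_p = -1.0 - 0.6 * df["informed"] + 0.02 * df["days_high"]
df["headache"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(int)

# Crude and adjusted logistic models (the study ran these per worker group).
crude = smf.logit("headache ~ informed", data=df).fit(disp=False)
adjusted = smf.logit("headache ~ informed + days_high + days_low",
                     data=df).fit(disp=False)

# Odds ratios with 95% confidence intervals from the adjusted model.
or_table = np.exp(adjusted.conf_int())
or_table.columns = ["2.5%", "97.5%"]
or_table["OR"] = np.exp(adjusted.params)
print(or_table)
```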
Results
The characteristics of the health-protection information received by the different groups of clean-up workers are shown in Table 1. Most workers reported having received information, with paid workers accounting for the highest and seamen for the lowest percentages (94% and 68% respectively). Essentially, this information was imparted orally, prior to beginning the activity. In the case of paid workers, the waste-removal company was the most usual source of information (58%). Volunteers were mainly informed by Regional Health Authority staff (31%) and other sources (37%), principally the Civil Protection Corps (Protección Civil) and fire brigade. In the case of seamen, information was furnished in most cases by the fishermen's guilds, whilst Health Authority staff (35%), in tandem with other volunteers and ecologist organisations (32%), focused on briefing the bird cleaners. The information received was deemed useful by the great majority of subjects, with bird cleaners accounting for the highest percentage (97%). Table 2 shows the frequency of acute health problems reported. Seamen were the group with the highest prevalence of symptoms, mostly in the form of headaches (28%) and throat and respiratory tract problems (30%). While headaches (16%) and nausea/vomiting/dizziness (15%) tended to be frequent among paid workers, nausea/vomiting/dizziness (10%) and lesions (19%) were the main cause for complaint among volunteers and bird cleaners respectively. Table 3 shows the percentage of use and breakage/tear of protective devices among informed and uninformed subjects. In comparison with uninformed paid workers, those who received health-protection information reported greater use of safety goggles (88% versus 70%) and fewer broken boots (0% versus 4%). In the volunteer group, informed subjects reported having worn the protective suit more frequently than their uninformed counterparts (85% versus 66%), and having experienced fewer torn protective suits (18% versus 45%) and broken masks (1% versus 9%). Differences in use and breakage/tear of protective devices were not significant among bird cleaners and seamen. However, attention should be drawn to the greater use of safety goggles and masks among informed seamen, the high proportion of tears to protective suits among seamen in general, and the scant use of protective clothing among bird cleaners.
Discussion
To our knowledge, this is the first study to analyse the importance of health information supplied to workers involved in clean-up operations following a massive oil spill. This study shows that most participants in the Prestige oil spill clean-up received health-protection information, mainly in the form of an oral briefing given prior to the start of activity. In general, subjects who were informed reported a higher frequency of use and a lower percentage of broken/torn protective devices, along with a lower frequency of acute health problems than did subjects who were not informed. This pattern of behaviour may indicate successful communication of health-risk information [7].
There is abundant literature on communicating health risk information guidelines [6][7][8][9]. However, manuscripts assessing the effect of preventive information in specific settings such as ours are relatively scarce; effectiveness of preventive health information has been studied, for example, in environmental health disasters [11], in epidemiological outbreaks [12] or in occupational settings [13], showing the importance of developing strategies orientated to diminish the risks.
Some methodological aspects of our study call for comment. Firstly, the fact that health-protection information was disseminated before the commencement of clean-up activity means that we were able to establish a temporal relationship between health briefing on the one hand, and use of protective devices and occurrence of acute health problems, on the other. Secondly, self-report is the appropriate procedure for collecting data on the occurrence of symptoms in cases where objective diagnosis is not possible. Furthermore, as most workers' injuries did not require health care, diagnoses could not be verified against medical information. Thirdly, the telephone interview is a simple and valid system for collecting data in this context. Indeed, a number of studies have reported that, in the case of behavioural risk factors and implementation of preventive practices, telephone interviews yield results comparable to those of face-to-face [14,15] or self-administered [16] surveys.
This study also has certain limitations, which have to be borne in mind to ensure correct interpretation of the results. The possible existence of some degree of observation bias cannot be ruled out, since the public outcry linked to the spill and the ensuing financial loss might well have influenced participants' replies. It should be noted, however, that there was far less alarm in the geographic area targeted by this study, because other parts of the country (Galicia) had been more severely affected previously and the local authorities were consequently that much better prepared. Moreover, the interviews were conducted six months after the arrival of the oil and the economically affected parties were subsequently compensated. A further aspect to be taken into account is the possibility that some of the reported associations might be the consequence of chance, given the number of statistical tests performed in the course of analysing the data. Nevertheless, attention must be drawn to the high consistency of the associations that emerge from the tables, something that supports the results obtained. Moreover, the limited sample size of the uninformed group limits the statistical power of our study. Notwithstanding this, significant briefing-related differences were detected, both in the use of protective devices and in the frequency of health problems, specifically those linked to toxic components contained in the fuel-oil.
Our results show that health-protection information was provided to most workers. As regards the channel of communication, the fact that subjects were briefed orally probably means: that there was greater access to and better comprehension of the information; and that, overall, this might have contributed to the recommendations and regulations imparted being viewed as beneficial by over 84% of interviewees. A higher proportion of paid workers enjoyed access to such information, probably because they were employed by a waste removal company, with legal obligations relating to occupational safety and hygiene, and more structured protocols for prevention of occupational injuries and warnings about risks. In contrast, seamen reported a notably lower percentage of informed subjects than did the other three groups of clean-up workers.
With regard to the use of protective devices, our results indicate that, in general, informed subjects used such devices (safety goggles in particular) more than did their uninformed fellow workers, and that they experienced fewer tears and breakages. Although the information on the use of protective devices was probably clear, it would nonetheless appear to have been more effective among paid workers and volunteers. In contrast to other studies undertaken in similar spills [10,17], the frequency of skin irritation observed in this study was low, which might be explained by the proper and frequent use of the protective devices and clothing supplied.
The data attest to the benefit of furnishing information on the prevention of acute health problems (particularly those of a toxicological nature) in the performance of this type of task. This could be due to the effectiveness of protective devices as a barrier against exposure, while the risk of suffering injury must be assumed to be determined, to a certain degree, by the skill of the individual subject.
Seamen were essentially involved in clean-up tasks at sea, where VOC and PAH concentrations are highest. Earlier studies have shown that direct contact with these products can cause acute health problems, such as neurological disorders (headaches, nausea, dizziness and somnolence) in the case of exposure to VOCs, and respiratory difficulty, digestive problems (nausea, vomiting and abdominal pain) and itchy eyes and skin in the case of PAHs [18]. Indeed, this was the group that reported the most health problems, the least use of masks, and a higher frequency of tears to protective suits. Furthermore, almost half of the seamen reported having eaten while in contact with fuel-oil [5]. Yet it is relevant to point out that there were no significant differences in the frequency of health problems among informed and uninformed seamen. These results indicate that the information campaign should have been on a much larger scale among seamen and highlight the need for specific protection measures for this group, which performed its clean-up tasks in a setting that was different and entailed a higher probability of exposure. Special mention must also be made of the high percentage of lesions reported by bird cleaners owing, presumably, to the highly specific nature of the tasks performed. The possibility of preventing such injuries depends, above all, on the skill of the person responsible, since gloves are powerless to prevent many of these injuries.
Conclusion
In conclusion, the information received by workers engaged in the clean-up of the Prestige oil spill in Asturias and Cantabria was associated with a greater use of individual protective devices and a lower frequency of acute health problems, mainly among the volunteers and paid workers. The experience gained and the health problems detected along the Galician coast may well have served to guide the protection and prevention actions applied in the clean-up operations in Asturias and Cantabria, regions that were affected at a later point in time. It should be stressed, however, that it was the seamen who were the most poorly informed, suffered the most toxicological problems (perhaps as a consequence of the scant use of masks) and constituted the subset among whom the information received was least effective. Hence, were a similar situation to arise, this group should arguably receive attention specifically tailored to its designated activities and the conditions under which it works.
Competing interests
The author(s) declare that they have no competing interests.
Nitric oxide affects cisplatin cytotoxicity oppositely in A2780 and A2780‐CDDP cells via the connexin32/gap junction
Abstract Chemoresistance is a main obstacle in ovarian cancer therapy and new treatment strategies and further information regarding the mechanism of the medication cisplatin are urgently needed. Nitric oxide has a critical role in modulating the activity of chemotherapeutic drugs. Our previous work showed that connexin32 contributed to cisplatin resistance. However, whether nitric oxide is involved in connexin32‐mediated cisplatin resistance remains unknown. In this study, using A2780 and A2780 cisplatin‐resistant cells, we found that S‐nitroso‐N‐acetyl‐penicillamine, a nitric oxide donor, attenuated cisplatin toxicity by decreasing gap junctions in A2780 cells. Enhancement of gap junctions using retinoic acid reversed the effects of S‐nitroso‐N‐acetyl‐penicillamine on cisplatin toxicity. In A2780 cisplatin‐resistant cells, however, S‐nitroso‐N‐acetyl‐penicillamine enhanced cisplatin toxicity by decreasing connexin32 expression. Downregulation of connexin32 expression by small interfering RNA exacerbated the effects of S‐nitroso‐N‐acetyl‐penicillamine on cisplatin cytotoxicity and upregulation of connexin32 expression by pcDNA transfection reversed the effects of S‐nitroso‐N‐acetyl‐penicillamine on cisplatin cytotoxicity. Our study suggests for the first time that combining cisplatin with nitric oxide in clinical therapies for ovarian cancer should be avoided before cisplatin resistance emerges. The present study provides a productive area of further study for increasing the efficacy of cisplatin by combining cisplatin with the specific inhibitors or enhancers of nitric oxide in clinical treatment.
KEYWORDS
cisplatin resistance, connexin32, gap junction, ovarian cancer, S-nitroso-N-acetylpenicillamine
| INTRODUCTION
Ovarian cancer is the leading cause of death among patients with gynecological cancers. 1 Cisplatin (CDDP), the most common and effective platinum-based anticancer drug, is widely used in the chemotherapeutic treatment of ovarian cancer. 2 However, acquired resistance and adverse reactions are 2 major factors for the failure of CDDP in the treatment of ovarian cancer. 3,4 CDDP resistance primarily occurs due to decreased intracellular CDDP accumulation and decreased sensitivity to CDDP-induced apoptosis. 5,6 Therefore, more studies on the mechanisms of CDDP resistance and more effective approaches to subvert cellular resistance pathways are needed to improve the efficacy of cancer therapy.
Nitric oxide (NO), an endogenously synthesized primary messenger produced by the enzyme nitric oxide synthase (NOS), acts as an important regulator in signaling processes. 7 Gap junctions (GJ) are formed from 2 hemichannels that are localized on sites of cell-cell contact and that consist of 6 connexins (Cx). GJs allow the direct exchange of molecules, including metabolic precursors, second messengers, nutrients, and ions (relative molecular mass up to 1 kDa). 15 Our previous studies have shown that GJs enhance CDDP cytotoxicity by increasing copper transporter 1 (CTR1)-mediated platinum uptake in tumor cells 16 ; CDDP and oxaliplatin counteract their cytotoxic efficacy by direct inhibition of GJ intercellular communication and reduction of Cx expression. 17 These results suggested that inhibition of GJ compromised the effectiveness of CDDP/oxaliplatin and that chemoresistance of CDDP may be related to GJ/Cx.
Existing data strongly suggest that there is reciprocal regulation between NO and Cx. Cx-based channels are necessary for NO to cross the plasma membrane and GJs provide a rapid and efficient pathway for NO transfer. 18 Conversely, exogenously applied or endogenously produced NO can alter GJ function or Cx expression. 19,20 Our recent studies have shown that Cx32 expression is significantly upregulated in cervical cancer tissues and that Cx32 is distributed mainly around the cytoplasm and nucleus; this distribution could inhibit apoptosis by modulating EGFR expression. [21][22][23] Moreover, we have found that Cx32 expression was significantly increased and clustered in the cytoplasm of A2780-CDDP cells, 24 indicating that Cx32 may be associated with the chemoresistance of CDDP. Nevertheless, whether NO regulates CDDP cytotoxicity by regulating GJ/Cx32 remains unknown. In this study, we investigated the effect of SNAP (a classical NO donor) on the toxicity of CDDP in cisplatin-sensitive A2780 cells and cisplatin-resistant A2780-CDDP cells and explored the underlying mechanism.
| Cell lines and cell culture
The ovarian cancer A2780 cell line was purchased from ATCC and A2780 CDDP-resistant cells (A2780-CDDP) were obtained as described previously. 24 Cells were maintained in DMEM supplemented with 10% FBS and grown at 37°C in a 5% CO2-in-air atmosphere. To maintain the resistance, A2780-CDDP cells were cultured in complete medium containing 1 μg/mL CDDP (Sigma-Aldrich).
| Nitrite assay
We evaluated the amount of nitrite released in A2780 cells and A2780-CDDP cells following treatment with SNAP for 48 h. Nitrite accumulation was detected using a total NO assay kit (Beyotime Institute of Biotechnology). Total NO production in the cell lysates was determined by measuring the concentrations of nitrate and nitrite, stable metabolites of NO. Absorbance of each sample was determined at 540 nm. 25
| Cell proliferation assay
The effect of SNAP (Sigma-Aldrich) on cell proliferation of A2780 and A2780-CDDP cells was measured using the Cell Counting Kit-8 (CCK-8) assay. Briefly, cells were trypsinized and seeded into 96-well plates at a density of 5 × 10³ cells per well. CDDP was then added at various final concentrations. After 48 h of incubation, 10 μL of CCK-8 was added to 90 μL of normal medium in each well and the plates were incubated until visual color conversion occurred. Finally, a microplate reader (BioTek) was used to determine absorbance at 490 nm.
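Although the original analysis was not performed in code, IC50 estimation from CCK-8 dose-response data of this kind is commonly done with a four-parameter logistic fit; the sketch below illustrates that idea with invented concentrations and viabilities, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Invented viability fractions for a CDDP concentration gradient (ug/mL).
conc = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0])
viability = np.array([0.95, 0.88, 0.62, 0.41, 0.25, 0.15, 0.10])

params, _ = curve_fit(four_pl, conc, viability,
                      p0=[0.05, 1.0, 5.0, 1.0], maxfev=10000)
bottom, top, ic50, hill = params
print(f"Estimated IC50 (illustrative): {ic50:.2f} ug/mL")
```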
| Membrane protein extraction and western blotting
Membrane protein and cytoplasmic protein were extracted using a Mem-PER™ Plus Membrane Protein Extraction kit (Thermo Fisher Scientific). Total proteins were harvested from the cells, separated by SDS-PAGE, and transferred onto PVDF membranes (Millipore). The membranes were blocked with 5% nonfat milk in TBST for 1 h at room temperature, followed by incubation with the appropriate primary antibodies overnight at 4°C. Monoclonal antibodies against Cx32 (1:1000; Sigma-Aldrich), cleaved-caspase 3 (1:1000; Cell Signaling Technology), Na-K-ATPase (1:1000; Cell Signaling Technology) and β-tubulin (1:10 000; Sigma-Aldrich) were used. Then, membranes were incubated with the relevant secondary antibody (Cell Signaling Technology) for 1 h at room temperature and then washed 3 times with TBST . Immunoreactive bands were visualized using the Chemiluminescent HRP Substrate Kit (Millipore) and the bands were quantified using ImageJ software.
| Parachute dye coupling assay
Molecular permeability of GJs was measured using the parachute dye coupling assay. 26 Briefly, cells were grown to 80%-90% confluency.
| Hoechst 33258 staining and apoptosis analysis
Cells were fixed with 4% formaldehyde for 20 min, permeabilized with a solution containing 1% BSA and 0.5% Triton X-100 for 15 min, stained with 0.1 μg/mL Hoechst 33258 at 37°C for 15 min in the dark, and washed with PBS. The stained cells were observed using a fluorescence microscope (Olympus IX71).
Apoptosis was assessed using flow cytometry and an Annexin V-FITC apoptosis detection kit (Biotool). Expo32 Software (Beckman XL) was used to identify apoptotic cells. The ratio of early apoptotic cells was compared with that of the controls for each experiment.
| Immunofluorescence staining
Cells were fixed for 20 min in 4% paraformaldehyde at room temperature and permeabilized with a solution containing 1% BSA and 0.5% Triton X-100 for 15 min. After 1 h of blocking in 10% normal goat serum, cells were incubated overnight at 4°C with a primary antibody against Cx32 (1:200; Sigma-Aldrich), followed by incubation for 1 h at room temperature in the dark with a FITC-conjugated goat anti-mouse secondary antibody (1:400; Invitrogen). Hoechst 33258 and Alexa Fluor 488 Phalloidin (Cell Signaling Technology) were used to stain the nucleus and cytoarchitecture, respectively. Then, cells were observed using a confocal microscope.
| DNA plasmid transfection and siRNA interference
Cells were seeded onto 6-well plates, grown to 80% confluency and then transfected with a Cx32-expressing plasmid (Genecopoeia) or with Cx32 siRNAs.
| Statistical analysis
All experiments were repeated at least 3 times. The data were analyzed statistically using one-way ANOVA and Student t test with GraphPad Prism 6.0 software. A P-value < .05 was considered to indicate a statistically significant difference.
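For illustration only, the following sketch reproduces the kind of comparisons described above (one-way ANOVA across conditions and a Student t test between two groups) with scipy on simulated band intensities; the numbers are assumptions, not the experimental data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control   = rng.normal(1.00, 0.08, 4)   # e.g. normalized band intensities, n = 4
cddp      = rng.normal(0.95, 0.08, 4)
cddp_snap = rng.normal(0.60, 0.08, 4)

# One-way ANOVA across the three conditions.
f_stat, p_anova = stats.f_oneway(control, cddp, cddp_snap)

# Student t test for a single pairwise comparison.
t_stat, p_t = stats.ttest_ind(cddp, cddp_snap)

print(f"ANOVA p = {p_anova:.4f}; t test p = {p_t:.4f} (threshold .05)")
```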
| The A2780-CDDP cell line was established and the amount of NO was detected
A2780-CDDP cells were established using a method described previously. 24 Initially, the IC50 values of CDDP in A2780 cells and A2780-CDDP cells were determined.
| NO decreased CDDP cytotoxicity by decreasing GJ function in A2780 cells
Then, we determined whether NO induced by SNAP influenced the cytotoxicity of CDDP in A2780 cells. Cells were exposed to gradient concentrations (0, 4, 8, 16, 32 and 64 μg/mL) of CDDP for 48 h, either with SNAP (100 µmol/L) or without it. As shown in Figure 2(A), SNAP significantly counteracted the cytotoxicity of CDDP (8 μg/mL or 16 μg/mL) in A2780 cells, and the IC50 value was increased from 3.99 ± 0.39 μg/mL to 6.60 ± 1.0 μg/mL.
Next, we explored the mechanism by which NO decreased CDDP cytotoxicity in ovarian cancer cells. As shown in Figure 2(D,E), noticeable apoptosis of A2780 cells was induced by CDDP (4 µg/mL, 48 h), but was significantly suppressed by co-treatment with CDDP and SNAP. Furthermore, enhancement of GJs by RA preincubation (50 µmol/L, 4 h) in A2780 cells markedly reversed the anti-apoptotic effect of SNAP. Our previous work revealed that RA increased the GJIC of A2780 cells. 24 Together, these results indicated that NO decreased the cytotoxic activity of CDDP by decreasing GJ function in A2780 cells.
| SNAP enhanced the turnover of Cx32 from the cytomembrane to cytoplasm in A2780 cells
Co-treatment with SNAP and CDDP markedly reduced GJ function in A2780 cells. Western blotting was performed to investigate whether Cx32 was affected. As shown in Figure 3(A), SNAP did not affect Cx32 expression in A2780 cells. Our previous studies demonstrated that the cytoplasmic distribution of Cx32 decreases chemotherapy-induced apoptosis. Therefore, we asked whether SNAP had an effect on transport of Cx32 from the cytomembrane to the cytoplasm. Results of immunofluorescence staining showed that, while Cx32 was primarily localized in the cytomembrane in the control group, co-treatment with SNAP and CDDP caused an accumulation of Cx32 in the cytoplasm ( Figure 3B). Western blotting results also showed that cytoplasmic Cx32 levels were increased significantly and that Cx32 levels in the cytomembrane were decreased significantly in the presence of SNAP and CDDP, when compared with the control group ( Figure 3C,D). These results suggested that NO promoted the turnover of Cx32 from the cytomembrane to the cytoplasm in A2780 cells.
| SNAP enhanced the cytotoxic activity of CDDP by decreasing Cx32 expression in A2780-CDDP cells
Next, we explored the mechanism by which SNAP increased CDDP cytotoxicity in A2780-CDDP cells. To explore the role of Cx32, Cx32 expression was notably knocked down by transfection of cells with Cx32 siRNAs. Western blot assay showed that Cx32 siRNA3 markedly inhibited Cx32 expression in A2780-CDDP cells ( Figure 5A), therefore Cx32 siRNA3 was chosen for subsequent experiments.
Knockdown of Cx32 by siRNA further decreased the surviving fraction and increased the incidence of apoptosis in A2780-CDDP cells co-treated with SNAP and CDDP, as determined by CCK-8 assay, flow cytometry and western blot, respectively (Figure 5B-D).
In contrast, as shown in Figure 5(E), Cx32 expression in cells was notably enhanced by transfection with pcDNA Cx32. The results of the CCK-8 assay showed that overexpression of Cx32 obviously decreased CDDP cytotoxicity in A2780-CDDP cells. Taken together, these results indicated that NO enhanced the cytotoxic activity of CDDP by decreasing Cx32 expression in A2780-CDDP cells.
FIGURE 3 Turnover of cytomembrane-localized connexin32 (Cx32) after S-nitroso-N-acetyl-penicillamine (SNAP) and cisplatin (CDDP) treatment in A2780 cells. A, Protein level of Cx32 remained unchanged by incubation with CDDP and SNAP in A2780 cells (n = 4). B, Cells were stained with Hoechst 33258 (blue, nuclear), anti-Cx32 antibody (red), and Alexa Fluor 488 Phalloidin (green), respectively. Images were obtained under a ×63 oil objective lens. Scale bars represent 10 μm (n = 3). C, D, Redistribution of Cx32 in the cytomembrane and cytoplasm was determined by western blot (n = 3). Na+-K+-ATPase and tubulin were used as loading controls for membrane and cytoplasmic proteins, respectively. Values are expressed as the mean ± SEM; *P < .05
| DISCUSSION
Platinum compounds are first-line drugs for standard ovarian cancer chemotherapy. However, resistance to platinum drugs remains a major obstacle to treatment success. 27 To prevent resistance to platinum drugs, such as CDDP, in ovarian cancer, further studies are needed on the mechanisms of CDDP resistance and on novel therapeutic strategies. In this study, we found that the NO donor SNAP had different effects on CDDP toxicity in cisplatin-sensitive A2780 cells and in cisplatin-resistant A2780-CDDP cells. SNAP attenuated CDDP toxicity by decreasing GJ function in A2780 cells, while SNAP enhanced CDDP toxicity by decreasing Cx32 expression in A2780-CDDP cells.
Gap junctions are composed of Cx proteins, which are indispensable for cell homeostasis, growth, differentiation, and death. GJs and Cx (particularly Cx43) are widely thought to have anti-proliferative effects on a wide range of cancer cells. 28 Our previous studies showed that GJs could enhance CDDP cytotoxicity in different tumor cells, 17,29,30 and another study also suggested that GJs influence CDDP resistance, showing that Cx43 reversed the resistance of A549 cells to CDDP by inhibiting epithelial-mesenchymal transition. 31 These results indicated that inhibition of GJs compromised the effectiveness of CDDP. In the present study, we found that SNAP could decrease CDDP cytotoxicity and that GJs were decreased or inhibited, whereas SNAP did not affect Cx32 expression; this also indicated that the decrease in GJ function could decrease CDDP cytotoxicity. We hypothesized that SNAP may alter the redistribution of Cx32. We used 2 methods to detect whether Cx32 was transported to the cytoplasm from the cytomembrane when cells were treated with SNAP and CDDP. We used Alexa Fluor 488 Phalloidin to stain the cytoarchitecture; the results of confocal microscopy and cytomembrane isolation were consistent, showing that co-treatment with SNAP and CDDP caused Cx32 to aggregate in the cytoplasm of A2780 cells.
However, the roles of GJs and Cx in carcinogenesis and drug resistance are complex, and there have been no related studies on whether CDDP resistance is related to GJs/Cx in truly drug-resistant cells; it was therefore necessary to establish A2780-CDDP cells for this and future studies. Over the past 50 years, GJs have mostly been shown to act as tumor suppressors, as they have inhibitory effects on tumor growth. 28,32 Recently, however, more studies have focused on the connexin protein itself. 33 Cx are reportedly able to regulate cell proliferation, migration, and apoptosis in a GJIC-independent manner. We found that Cx32 expression was increased in A2780-CDDP cells, but that GJs were decreased or inhibited. In A2780-CDDP cells, Cx32 accumulated in the cytoplasm, 24 indicating that cytoplasmic Cx32 may have an anti-apoptosis effect in a GJIC-independent manner. In the present study, we found that SNAP enhanced CDDP toxicity by decreasing Cx32 expression in A2780-CDDP cells. These results further confirmed that cytoplasmic Cx32 may have an anti-apoptosis effect and that SNAP could enhance CDDP cytotoxicity by decreasing the expression of cytoplasmic Cx32.
Hemichannels are crucial in microenvironmental communication and cell function. 28 Interestingly, it has been reported that NO could increase the activity of hemichannels by regulating gating and permeability, which has been attributed to modification of cysteine residues located in the C-terminal region of Cx. 34 Therefore, changes in hemichannels might be related to the different effects of cisplatin cytotoxicity and SNAP, but this needs further exploration.
FIGURE 4 Effects of nitric oxide (NO) on the cytotoxic activity of cisplatin (CDDP) in A2780-CDDP cells. A, Effect of S-nitroso-N-acetyl-penicillamine (SNAP) on the cytotoxic activity of CDDP in A2780-CDDP cells (n = 5). B, SNAP could significantly increase apoptosis, as detected by Hoechst 33258 staining (n = 4). C, Parachute assay was used to determine the effect of SNAP on gap junction (GJ) function in A2780 cells. Scale bars represent 20 μm (n = 4). D, Co-treatment with SNAP and CDDP significantly decreased the expression of connexin32 (Cx32) in A2780-CDDP cells (n = 4). Values are expressed as the mean ± SEM. ***P < .001
FIGURE 5 S-Nitroso-N-acetyl-penicillamine (SNAP) enhanced the cytotoxic activity of cisplatin (CDDP) in A2780-CDDP cells via modulation of connexin32 (Cx32) expression. A, Expression levels of Cx32 were knocked down by Cx32 siRNA and were determined by western blot (n = 5). B, Effects of SNAP on the cytotoxicity of CDDP in A2780 cells were assessed by Cell Counting Kit-8 (CCK-8) assays following transfection with Cx32 siRNA3 (n = 4). C, Apoptosis of A2780-CDDP cells was analyzed by flow cytometry (n = 4). D, Expression levels of cleaved-caspase-3 were determined by western blot (n = 4). E, A2780-CDDP cells were transfected with pcDNA Cx32 and the expression levels of Cx32 were determined by western blot. Effects of SNAP on the cytotoxicity of CDDP in A2780-CDDP cells were assessed by CCK-8 assays following transfection with pcDNA Cx32 (n = 4). Values are expressed as the mean ± SEM; *P < .05, **P < .01, ***P < .001
There has been some related research on the relationship between NO and CDDP resistance, but results were conflicting.
However, the role of SNAP in mediating CDDP cytotoxicity has not been compared previously in both cisplatin-sensitive cell lines and cisplatin-resistant cell lines and there have been no related studies on whether SNAP regulation of CDDP resistance is related to changes in Cx expression or GJ function. In this study, we provide a novel molecular mechanism that contributes to resistance to NO signaling in A2780-CDDP cells via modulation of Cx32 expression.
Our findings, for the first time, suggested that combining CDDP with SNAP in clinical therapies for ovarian cancer should be avoided before CDDP resistance emerges. This is a productive area for further study on increasing the efficacy of cisplatin in clinical treatment by combining CDDP with specific SNAP inhibitors or enhancers.
Combination of CDDP with specific SNAP inhibitors may prevent the formation of CDDP resistance at initial stages, while combination of CDDP with specific SNAP enhancers could reverse CDDP resistance or prevent the development of drug resistance in ovarian cancers.
CONFLICT OF INTEREST
The authors have no conflict of interest to declare.
Itaipu Hydroelectric Power Plant Structural Geotechnical Instrumentation Temporal Data Under the Application of Multivariate Analysis – Grouping and Ranking Techniques
Introduction
The monitoring of a dam structure can generate an enormous mass of data, whose analysis and interpretation are not always trivial. It is important to select the information that best "explains" the behavior of the dam, making possible the prediction and resolution of eventual problems that may occur.
The world's largest hydroelectricity generator, the Itaipu hydroelectric power plant, has more than 2,200 instruments that monitor its geotechnical and structural behavior, and these instruments have readings stored in a database for over 30 years. The high dimensionality and the large quantity of records stored in the databases pose nontrivial problems that must be overcome so that one can pursue "knowledge" through these data.
The detailed analysis of the auscultation instrumental data requires a combination of knowledge of Engineering, Mathematics and Statistics, as well as the previous experience of the engineer or the technician responsible for the analysis of these data. That consumes a lot of time and often makes it impossible to accomplish this task in an efficient way. This is the reason why the use of computational techniques and tools to help the decision maker is extremely important.
There are no records of the existence of methods that perform the classification of monitoring instruments in dams. In cases where readings must be intensified, such a hierarchy could be useful for defining which instruments to choose.
The aim of this paper is to identify the instruments that are the most significant for the analysis of dam behavior, which maximizes the effectiveness and efficiency of the analysis of the readings. It presents a methodology based on the field of Multivariate Analysis, applying Hierarchical Cluster Analysis with Ward's linkage method in order to identify groups of similar instruments. Factor Analysis was also applied to the extensometer data of each instrument group, producing a ranking of the monitoring instruments and detecting the main ones.
This chapter is organized as follows: Section 2 features the problem statement, which addresses the importance of dam safety and the risks faced when a dam rupture accident occurs. Section 3 describes the application area, focusing on the safety of dams, on the loading conditions and on the monitoring instruments. Section 4 outlines the research course. Section 5 describes the data used and the Multivariate Statistical Analysis techniques. Section 6 presents the current status, Section 7 presents the results, Section 8 addresses future research, and Section 9 presents the conclusions.
Problem statement
Since the potential risks and losses resulting from dam rupture accidents can reach large scales, safe design, adequate construction and correct operation of dams are concerns of Brazilian and worldwide engineers. Additionally, effective monitoring of large dams is essential for the safety of their structures. Aiming at the safety of dams, international guidelines and many helpful discussions about this subject have been proposed and conducted, such as the one from [1].
In Brazil, guidelines that aim at the safety of dams were published by the Comitê Brasileiro de Grandes Barragens (Brazilian Committee on Large Dams); see [2]. The Comissão de Constituição e Justiça e Cidadania do Congresso Nacional (the Constitutional, Justice and Citizenship Committee of the National Congress) approved, on 06/23/2009, the proposal that requires the Executive Power to establish a National Policy on Dam Safety. Its aim was to endow the Public Power with a permanent instrument for the inspection of over 300 thousand dams in the country. The text in question is the substitute for Law Project 1181/03 [3]. According to [4], catastrophes have been opportune signs for the examination of the criteria of existing projects and for the selection of more efficient methods for monitoring the safety of dams.
Reference [5] shows a table containing estimates of the most common causes of dam ruptures. Among them, the following are highlighted: foundation problems; inadequate spillway; structural problems; differential settlements; excessive uplift pressure; rupture of embankments; defective materials; incorrect operation; acts of war; and earthquakes. All these problems can be diagnosed with the monitoring of the dam instrumentation, with the exception of the last two, whose percentage frequencies sum to just 4%.
According to [6], global experience shows that the expenses incurred to guarantee the safety of a dam are small when compared to the costs of its rupture. The author notes the importance of the use of an instrumentation database for supporting the preliminary analysis of the readings in order to detect problems.
Safety of dams
The principles established in NBR 8681, Ações e Segurança das Estruturas (Actions and Safety of Structures) [7], define the safety concepts for the concrete structures of a dam. For concrete gravity-dam projects, some verifications corresponding to stability analyses are necessary in order to evaluate safety with respect to sliding, overturning and floating, stresses at the base and in the structure, deformations, consolidation and vibrations.
The stability of dams must be analyzed primarily during the project phase. The geometry of the structures and the properties of the materials involved must be well considered, as well as the load conditions. Some of the basic load conditions are shown in Figure 1. Physics explains that the difference in water level between upstream and downstream generates a hydraulic gradient across the dam, driving the reservoir water to percolate through the foundation mass from upstream to downstream in search of hydraulic equilibrium. During this process, the infiltrated water generates vertical forces acting upward on the dam; these forces are called uplift pressures, and their resultant is represented by Fuplift. Furthermore, the water from the reservoir generates horizontal forces that act on the dam in the upstream-to-downstream direction; these forces are called hydrostatic pressures against the dam wall, and their resultant is represented by Freservoir. These two resultant forces are called destabilizing forces, whereas the force P (dam weight) is a stabilizing force on the structure. The combination of Fuplift and Freservoir can cause overturning and sliding of the dam, not only because of the forces and moments directly applied, but also because of the relief of the weight of the structure itself (in the case of uplift pressure).
The effects of loads on dams described above can be observed in Figure 1, where sliding (a) and overturning (b) are emphasized.
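A highly simplified, hypothetical numerical check can make the roles of Freservoir, Fuplift and the weight P concrete; all dimensions, the triangular cross-section, the drain efficiency and the friction coefficient below are assumptions for illustration only and are not Itaipu design values.

```python
# Simplified gravity-dam stability sketch for a 1 m slice of dam.
RHO_W, RHO_C, G = 1000.0, 2400.0, 9.81   # water/concrete density (kg/m^3), gravity (m/s^2)

H_UP, H_DOWN = 100.0, 10.0    # upstream and downstream water depths (m), assumed
BASE, HEIGHT = 80.0, 110.0    # base width and height of a triangular section (m), assumed
MU = 0.7                      # concrete-rock friction coefficient, assumed
DRAIN_FACTOR = 0.6            # uplift reduction provided by foundation drains, assumed

# Horizontal hydrostatic resultants (Freservoir and the small tailwater thrust).
f_reservoir = 0.5 * RHO_W * G * H_UP ** 2
f_tailwater = 0.5 * RHO_W * G * H_DOWN ** 2

# Uplift resultant (Fuplift), assumed to vary linearly along the base and
# reduced by the drainage system.
f_uplift = DRAIN_FACTOR * 0.5 * RHO_W * G * (H_UP + H_DOWN) * BASE

# Stabilizing self-weight P of the triangular concrete section.
weight = 0.5 * BASE * HEIGHT * RHO_C * G

# Sliding safety factor: available base friction over the net horizontal thrust.
sf_sliding = MU * (weight - f_uplift) / (f_reservoir - f_tailwater)
print(f"Sliding safety factor (illustrative): {sf_sliding:.2f}")
```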
The loading conditions and the properties of materials can change over the lifecycle of a dam, and instrumentation can identify some of these changes. Figure 2 shows the differences in the behavior of the dam between summer and winter climate conditions, as well as their consequences. In summer, the concrete expands, which causes the block to tilt downstream; this rotation causes the block to compress the foundation. In winter, the concrete contracts, causing the block to tilt back upstream and return to its initial position; as a consequence, the pressure exerted on the foundation during summer is relieved. In this way, it is possible to identify a cyclical behavior of the structure, intrinsically conditioned by the environmental conditions surrounding the construction. According to [9], instrumentation must be used as a supplement to visual inspection when evaluating the performance and safety of dams. The careful inspection of the instrumentation data can reveal a critical condition.
Reference [10] shows correlations between the types of instruments that are usually used for the auscultation of concrete dams and the primary types of deterioration of concrete dams. According to the author, the multiple-point extensometer, for example, is related to the monitoring of deterioration caused by sliding, differential settlements, subsidence of the upstream base, and alkali-aggregate reactivity.
The measurement of settlements is one of the most important observations for monitoring dam behavior during the periods of construction, reservoir filling and operation. Settlement can be measured with a multiple point rod extensometer installed in boreholes [10]. Figure 3 shows the multiple point rod extensometer and an example of a typical profile of a multiple point rod extensometer at Itaipu. Displacements and deformations can be measured in several parts of the foundation with the use of several rods, including at the concrete-rock contact, at joints and faults, and at other sub-horizontal discontinuities in the foundation. This approach was used at the Itaipu Dam, where different points of the foundation mass, especially the geological discontinuities, were instrumented. Figure 4 shows a typical geological profile of the foundation mass of the part of the Itaipu Dam that has no tunnel on its right side, where the primary geological discontinuities of that specific site (contacts, joints, and gaps) can be found. In blocks where there is a transversal gallery giving access to the shaft, the installation of downstream-upstream extensometers can help in the measurement of the angular displacement of the dam with respect to the foundation [10]. The horizontal displacement of the crest is a relevant parameter, which is affected by deflections of the concrete structure, by the rotation of the base of the structure (due to the deformability of the foundation), and by thermal and environmental influences. These displacements are affected by the characteristics of the concrete and by the properties of the foundation rock mass, resulting in important information for the auscultation of the behavior of the dam and of its foundation. The horizontal displacements of the crest can be measured by a direct pendulum, usually installed at the end of the construction process. The measurements are taken during the stages of reservoir filling and dam operation [10].
The stability of the structure in terms of sliding, overturning or floating is directly affected by the level of the piezometric pressures at the concrete-rock interface and in the sub-horizontal discontinuities of low resistance that exist in the foundation. The measurement of uplift pressures in the concrete dam foundation is therefore important for the monitoring of its safety conditions. Drainage is one of the most efficient ways to ensure adequate safety coefficients. The uplift pressure measurements are performed by piezometers [10].
Itaipu Binacional
Itaipu Binacional, the largest energy producer in the world, had its construction started in 1973 at a stretch of the Paraná River known as Itaipu, which in the Tupi language means "the singing boulder", located in the heart of Latin America, on the border between Brazil and Paraguay [12]. The construction of the dam ended in 1982 and the last generator unit was completed in 2008.
Nowadays, the Itaipu Dam has 20 generator units of 700 MW (megawatts) each, giving a total installed capacity of 14,000 MW. Itaipu Binacional (Binational Itaipu) reached its record energy production in 2000, generating over 93.4 billion kilowatt-hours (kWh). It is responsible for supplying 95% of the energy consumed in Paraguay and 24% of all the Brazilian consumption.
The Itaipu Dam is 7,919 m long and has a maximum height of 196 m; these dimensions made the Itaipu construction a reference in concrete technology and dam safety studies. The Itaipu dam is made up of two stretches of earth dam, one stretch of rockfill dam and concrete stretches, the latter forming its highest structures. Figure 5 illustrates the whole structure of the Itaipu dam, and Table 1 shows the main characteristics of the stretches indicated in Figure 5. Along the whole extension of Itaipu there are 2,218 instruments (1,362 in the concrete and 865 in the foundations and earthen embankments), 270 of which are automated, to monitor the performance of the concrete structures and foundations. Furthermore, there are 5,239 drains (949 in the concrete and 4,290 in the foundations). The readings of these instruments occur at different frequencies; they can be, for example, daily, weekly, fortnightly or monthly, depending on the type of instrument. These readings have been stored for over 30 years.
Although every stretch of the dam is instrumented and monitored, one of the stretches, called the Barragem Principal (Main Dam), denominated stretch F and identified as number "5" in Figure 5, deserves to be highlighted in a deeper study. The turbines for generating energy are located in stretch F. In addition, this stretch has the highest water column and is the most heavily instrumented. This stretch is made up of many blocks, and each of them has instruments in the concrete structures and in the foundation that provide data about its physical behavior. This study was developed based on the data collected in this stretch of the dam. In stretch F it is possible to find extensometers, piezometers, triorthogonal meters, water level gauges and foundation instrumentation (seepage flow meters). Among these instruments, the multiple point rod extensometers, which are installed in boreholes, were selected for the analysis. This type of instrument is considered one of the most important because it is responsible for measuring vertical displacement, which is one of the most important observations when monitoring the behavior of the dam structure. There are 30 extensometers located in stretch F.
The procedure for the methodology used for the analysis of the problem of Itaipu is the following: In the first phase, the data were selected and it was decided that the methodology would be applied only to the extensometers located in stretch F.
In the second phase, the data given by Itaipu were converted into spreadsheets, from which the necessary data used for developing this study were extracted.
In the third phase, the data were standardized in order to receive the subsequent application of the clustering methods.
In the fourth phase, the Factor Analysis and the Clustering Analysis were applied at the same time. The Factor Analysis was also applied within each cluster formed through Clustering Analysis.
Method used
The methodology used for the analysis was applied to the data of 30 extensometers located in different blocks of stretch F of the dam, which, with their multiple measuring rods, total 72 displacement measurements. These measurements are identified as follows: equip4_1 means rod 1 of extensometer 4, and so on.
The data used in this study are stored monthly and correspond to the period from January 1995 to December 2004, totaling 120 readings per rod. This period was chosen at the suggestion of the Itaipu engineering team because it is subsequent to the construction of the dam and prior to the automatic data acquisition system. During the period of system implementation, some instruments ended up having no manual readings; in addition, a total of 11 automated instruments (24 rods in total) underwent modifications that might have influenced the subsequent readings: the instrument head was exchanged for one 70 cm longer. In this way, the 120 readings referred to were immune to these irregularities.
During data pre-processing, it was identified that most of the instrument readings are monthly, but some instruments showed more than one reading per month; in these cases, the monthly average was used. Moreover, some instruments had missing readings; in these cases, interpolations were performed through temporal series, meaning that an adequate model was fitted using the Box & Jenkins methodology in Statgraphics [13]. In this way, it was possible to ensure that all the instruments had 120 readings (10 years). See [14] for more information about interpolation techniques with temporal series.
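The monthly averaging and the Box & Jenkins interpolation described above could be sketched as follows; the ARIMA order, the seasonal term and the example series are assumptions for illustration (the original study selected its models in Statgraphics), and statsmodels is used here only as one possible stand-in.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(2)
dates = pd.date_range("1995-01-01", "2004-12-01", freq="MS")   # 120 months

# Invented readings of one extensometer rod with a mild seasonal cycle.
readings = pd.Series(
    2.0 + 0.3 * np.sin(np.arange(len(dates)) * 2 * np.pi / 12)
    + rng.normal(0, 0.05, len(dates)),
    index=dates,
)

# Monthly averaging (covers the case of several readings in the same month).
monthly = readings.resample("MS").mean()

# Knock out a few months to mimic the gaps that occurred during automation works.
monthly.iloc[[30, 31, 75]] = np.nan

# A state-space ARIMA handles the missing values; its in-sample predictions
# are then used to fill the gaps, in the spirit of the Box & Jenkins approach.
res = SARIMAX(monthly, order=(1, 0, 1),
              seasonal_order=(1, 0, 0, 12)).fit(disp=False)
filled = monthly.fillna(res.predict(start=monthly.index[0],
                                    end=monthly.index[-1]))
print("Missing values remaining:", int(filled.isna().sum()))
```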
In this way, the input matrix of structural geotechnical instrumentation data (Matrix Q) is of order a x b, where a is the number of patterns and b is the number of attributes. For the structural geotechnical instrumentation data of Itaipu, a = 72 (number of patterns, i.e., extensometer rods) and b = 120 (number of attributes, i.e., monthly readings).
In the Multivariate Analysis step, the patterns were grouped using Ward's hierarchical clustering method. The grouping was performed in order to find groups of similar instruments and to establish technical justifications for their formation. In addition, Factor Analysis was applied to the data: it was used to rank the rods of the extensometers through a weighted average of factor scores. Next, Factor Analysis was applied within each group formed by the clustering analysis. Once groups of instruments with similar behavior had been obtained, a ranking of these instruments was performed within each group, in order to indicate the most relevant instruments, which would be chosen, for example, in cases where readings need to be intensified.
Factor analysis
Factor Analysis is a multivariate statistical method whose objective is to explain the correlations between a large set of variables in terms of a small set of unobserved random variables called factors. Hence, suppose the random vector X consists of p random variables, X' = [X1 X2 X3 ... Xp]. To study the covariance structure of this vector, X is observed n times, its parameters E(X) = μ and V(X) = Σ are estimated, and the relations between the evaluated variables are represented by the covariance matrix Σ or by the correlation matrix ρ. Factor analysis groups the variables to explain the influence of latent (unobserved) variables, or factors. Within the same group, the variables are highly correlated with each other, while from one group to another the correlations are low. Each group represents a factor, which is responsible for the observed correlation.
The covariance matrix of the vector X can be written in the exact form Σ = LL' + Ψ, where L is the matrix of loadings and Ψ is the diagonal matrix of specific variances. There are many criteria to define the number m of factors. The most widely used is the Kaiser criterion [15], which suggests that the number of extracted factors should equal the number of eigenvalues of Σ (or ρ) greater than one.
If X is a random vector with p components and parameters E(X) = μ and V(X) = Σ, in the orthogonal factor model X is linearly dependent upon a few unobserved random variables F1, F2, ..., Fm, called common factors, and p additional sources of variation ε1, ε2, ..., εp, called errors or specific factors.
The factor analysis model is represented below, where μi is the mean of the i-th variable, εi is the i-th error (specific factor), Fj is the j-th common factor and lij is the loading of the j-th factor Fj on the i-th variable Xi: Xi − μi = li1 F1 + li2 F2 + ... + lim Fm + εi, for i = 1, ..., p. Equation 1 expresses the model in matrix terms, X − μ = LF + ε.
In order to estimate the loadings lij and the specific variances ψi, the method of principal components can be used, which is briefly described below [15].
If (λ1, e1), (λ2, e2), ..., (λp, ep) are the eigenvalue-eigenvector pairs of the sample covariance matrix S, with λ1 ≥ λ2 ≥ ... ≥ λp ≥ 0, and m < p is the number of common factors, the matrix of estimated loadings is given by L = C D^(1/2), where C is the matrix of eigenvectors and D is the diagonal matrix whose diagonal elements are the eigenvalues.
In the application of this method, the observations are first centred or standardized; in other words, the correlation matrix R (the estimator of ρ) is used in order to avoid scale problems. The estimated specific variances are given by the diagonal elements of the matrix Ψ = S − LL'.
In many situations, it is necessary to estimate the values of the (unobserved) factors for an individual observation X. These values are called factor scores. When the principal components method is used to estimate the loadings, the estimated factor scores for the original variables are F = (L'L)^(-1) L'(X − X̄) and, for the standardized variables, F = (L'L)^(-1) L'z.
According to [15], rotating the factors yields a structure in which each variable has a high loading on one factor and low or moderate loadings on the other factors. This leads to a simpler structure to interpret. Kaiser suggested an analytical measure known as the Varimax criterion [15] in order to perform the rotation.
In factor analysis, the communality hi² is the portion of the variance of a variable that is attributed to the factors; it represents the percentage of the variation of the variable that is not random but explained by the factors. The criterion used to classify the patterns is to sort the variables (instruments) according to their factor scores. The factor scores therefore act as an indicator that distinguishes the behavior of each instrument, providing a simple and practical quality control of its measurements.
To perform the ranking of the variables (instruments), a final factor score was used, given by equation (3) as the eigenvalue-weighted average of the factor scores, FS = (λ1 f1 + λ2 f2 + ... + λm fm) / (λ1 + λ2 + ... + λm), where m is the number of factors extracted, λi are the eigenvalues and fi are the factor scores.
The factor analysis was carried out with the aid of the Statgraphics software [13].
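A compact numpy sketch of the estimation and ranking procedure described in this subsection is given below, assuming the rods are taken as the patterns (rows) and the months as the attributes (columns) of matrix Q, as defined earlier; the eigenvalue-weighted combination in the last step is one plausible reading of the final factor score of equation (3), not a verified reproduction of it.

import numpy as np

def rank_rods(Q, m=None):
    """Rank extensometer rods by an eigenvalue-weighted final factor score.

    Q: matrix of order a x b (here 72 rods x 120 monthly readings).
    m: number of retained factors; defaults to the Kaiser criterion.
    Returns rod indices ordered from most to least relevant.
    """
    # Standardize the attributes and work with the correlation matrix R
    Z = (Q - Q.mean(axis=0)) / Q.std(axis=0, ddof=1)
    R = np.corrcoef(Z, rowvar=False)
    eigval, eigvec = np.linalg.eigh(R)
    order = np.argsort(eigval)[::-1]                 # largest eigenvalues first
    eigval, eigvec = eigval[order], eigvec[:, order]
    eigval = np.clip(eigval, 0.0, None)              # guard against tiny negatives
    if m is None:
        m = int(np.sum(eigval > 1.0))                # Kaiser criterion
    L = eigvec[:, :m] * np.sqrt(eigval[:m])          # loadings L = C D^(1/2)
    F = Z @ L @ np.linalg.inv(L.T @ L)               # factor scores, one row per rod
    # Final score: eigenvalue-weighted average of the factor scores (assumed form)
    final_score = (F * eigval[:m]).sum(axis=1) / eigval[:m].sum()
    return np.argsort(final_score)[::-1]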
Cluster analysis
Clustering groups the patterns so that those inside the same group are very similar to each other and different from the patterns in other groups. According to [16], cluster analysis is an analytical technique used to develop meaningful subgroups of objects; its objective is to classify the objects into a small number of mutually exclusive groups. According to [17], it is important to favor a small number of groups in cluster analysis.
Clustering algorithms can be divided into categories in many ways, according to their characteristics. The two main classes of clustering are hierarchical methods and nonhierarchical methods.
Hierarchical methods include techniques that successively connect the items, producing various levels of clustering, and can be subdivided into agglomerative and divisive ones. The agglomerative hierarchical method initially considers each pattern as a group and iteratively merges the pair of most similar groups into a new group until a single group containing all patterns remains. The divisive hierarchical method, on the other hand, starts with a single group and performs a process of successive subdivisions [18].
The most popular hierarchical clustering methods are single linkage, complete linkage, average linkage and Ward's method. The most common way of representing a hierarchical clustering is a dendrogram, which shows the clustering of the patterns and the similarity levels at which the groups are formed. Dendrograms can be cut at different levels, yielding different groups [19].
In the dendrogram (figure 6), two groups can be seen by admitting a cut at the level indicated in the figure: the first composed of patterns P1, P2 and P5 and the second composed of patterns P3 and P4. Nonhierarchical, or partitioning, methods seek a partition of the data without building hierarchical associations; by optimizing some criterion, a partition of the elements into k groups is selected [18].
The best-known nonhierarchical method is the k-means cluster method [15]. Normally, the k clusters that it finds are of better quality than the k clusters produced by hierarchical methods. Partitioning methods are advantageous in applications that involve larger data sets.
Methods from the field of multivariate statistics were used because they are well-established methods. Multivariate statistical analysis is a long-standing approach that has only recently become practical with the advance of fast and inexpensive computing.
The clustering of patterns is based on measures of similarity and dissimilarity. A similarity measure evaluates how alike the objects are; in other words, the higher its value, the more similar the objects. The best-known similarity measure is the correlation coefficient. A dissimilarity measure evaluates how different the objects are, that is, the higher its value, the less similar the objects. The best-known dissimilarity measure is the Euclidean distance.
According to [15], Ward's method joins two clusters based on the "loss of information", taken as the sum of squared errors (SQE). For each cluster i, the mean (centroid) of the cluster is computed and the cluster sum of squared errors SQEi is the sum of the squared deviations of each pattern of the cluster from that mean. For k clusters there are SQE1, SQE2, ..., SQEk, and the total SQE is defined by equation 4 as SQE = SQE1 + SQE2 + ... + SQEk.
For each pair of clusters m and n, first the mean (centroid) of the cluster formed by joining them (cluster mn) is calculated; then the sum of squared errors of cluster mn (SQEmn) is calculated, according to equation 5.
The clusters m and n that show the smallest increase in the sum of squared errors (SQE), that is, the smallest loss of information, are merged. According to [16], this method tends to produce clusters of similar size because it reduces their internal variation.
Cluster analysis was applied with the aid of the Statgraphics software [13]. The dissimilarity measure used was the Euclidean distance and the data were standardized.
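A short SciPy sketch of this step is shown below, operating on the standardized 72 x 120 matrix; three clusters are extracted from the Ward linkage, mirroring the cut applied to the dendrogram in the study.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def ward_clusters(Q: np.ndarray, n_clusters: int = 3) -> np.ndarray:
    """Cluster the extensometer rods (rows of Q) with Ward's method.

    The data are standardized and the Euclidean distance is used, as in
    the study; returns one cluster label per rod.
    """
    Z = (Q - Q.mean(axis=0)) / Q.std(axis=0, ddof=1)   # standardize the attributes
    D = pdist(Z, metric="euclidean")                   # pairwise dissimilarities
    link = linkage(D, method="ward")                   # merges minimizing the SQE increase
    return fcluster(link, t=n_clusters, criterion="maxclust")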
Status
As mentioned before, the aim of this paper is to identify the instruments that are most significant for the analysis of the behavior of dams. To our knowledge, there are no reported methods that rank dam monitoring instruments. To achieve this aim, it is necessary to select, cluster and rank the geotechnical-structural instruments of a power plant, in our case the Itaipu hydroelectric power plant, so as to maximize the effectiveness and efficiency of the analysis of the readings. If readings need to be intensified, this ranking can be used to define which instruments to choose.
The choice of instruments is performed with no previous knowledge about their location, features or other characteristics. In this way, the methodology can also be considered when making decisions about the automation of additional instruments. Similar approaches can be used in many other cases, since there are hundreds of large civil engineering works in Brazil that rely on instrumentation systems whose data must receive appropriate treatment.
Results
In the cluster analysis, the patterns are the rods of the extensometers, and their readings over the months are compared in order to determine the clusters. The dendrogram in figure 7 shows the formation of the clusters for these data.
Considering the first cut, two clusters remain. The first, here denominated "cluster 1", is formed by the rods of extensometers that are considered extremely important for the monitoring of the dam: rods of extensometers installed on the axis of the block upstream of the dam and inclined 60° towards upstream.
Notice that two additional clusters form at the second cut. The first, denominated "cluster 2", contains most of the extensometer rods installed in the basaltic layers B, C and D (A and B are called the deepest layers; C and D the superficial ones) and on the lithological contacts B/C and C/D. The second, denominated "cluster 3", contains most of the extensometer rods installed in the joints (between the rock layers) A and B and on the lithological contact A/B.
Three clusters were therefore retained, since it was possible to obtain a technical justification for their formation; for finer subdivisions, no such justification was observed.
Notice that, at this point, it was possible to cluster the instruments according to the relevant geological characteristics of the foundation mass, even though these characteristics were not explicitly provided as input. However, three extensometer rods installed in joint B were observed in cluster 2, and three rods installed in the basaltic layers B and C and on the lithological contact B/C were observed in cluster 3. Figure 8 shows the readings of all extensometer rods during the study period, with the lines colored according to the cluster to which each rod belongs (black, blue and yellow for clusters 1, 2 and 3, respectively). The distinction between the clusters is evident in the figure, but it is not easily recognized without previous knowledge of the three clusters; the task would not be feasible if a much larger set of data had to be analyzed, hence the importance of this type of analysis.
Cluster 1, composed of rods of extensometers installed on the upstream side of the dam, clearly shows the effects of summer and winter. Clusters 2 and 3 are separated by the magnitude of their readings. This separation can be justified by the fact that they are in different conditions, more superficial in cluster 2 and deeper in cluster 3; since the readings of the deepest rods accumulate those of the more superficial layers, their measured values are larger.

Table 2 shows the most important rod for each of the eight factors, that is, the rod dominating each factor. Notice in table 2 that factor 2 is dominated by rods equip1_1, equip1_2, equip4_1, equip4_2, equip6_1, equip6_2, equip8_1, equip8_3, equip21_1, equip21_2, equip25_3, equip26_2 and equip31_1. This factor contains 10 of the 11 rods that belong to cluster 1, which means that an external phenomenon is influencing them; as mentioned before, these rods reflect the effects of summer and winter. In the same way, each factor is dominated by a set of rods and an external phenomenon explains each set of rods, or factor, even though such phenomena are not always easy to interpret.

The communality is the portion of the variation of an extensometer rod that is explained by the factors; it is the sum of the squared contributions of the rod to each factor. A low communality indicates that the rod is not greatly affected by the factors, in which case the influence comes mainly from random variation. None of the extensometer rods showed a communality lower than 0.71, which means that no random variation exceeded 29%; a communality of 0.71 indicates that 71% of the variation of the rod is ascribed to the factors and only 29% is random, so these correlated rods are working properly. A low communality would indicate the need to investigate the rod.

After forming the three clusters, the ranking of the rods was performed within each cluster with the help of factor analysis. The ranking within each group can also be used to select rods for reading intensification. The advantage of applying the ranking within each group is that rods with similar behavior are first separated, so that the indicated rods represent the variability of the cluster well. Note that the rods of the automated extensometers are mostly among the first in the ranking of each cluster.
As mentioned above, a low communality of a rod indicates that the rod is not strongly influenced by the factors and that, in this case, the influence comes from random variation. In the application of factor analysis within each cluster, there are extensometer rods with communalities between 0.6 and 0.7, in other words, with random variation between 30% and 40%. In these cases, an investigation of the rods is recommended.
Furthermore, in order to identify the 24 most relevant rods, we opted to select the 8 best-ranked rods from each cluster. In this case, 15 of the 24 automated rods would be selected. The number of rods coinciding with those automated at Itaipu would increase with the aid of a specialist providing a better interpretation of the results; such a specialist would note, for example, that cluster 1 is formed by rods that are extremely important for the monitoring of dams and that all rods from this cluster should be automated.
This type of analysis was not found in the literature, which makes the contribution of this study relevant. It is recommended that this analysis (the ranking process) be repeated periodically, according to the needs indicated by the specialists in the field, in this case the Itaipu engineering team; this could be done, for example, every two years. Repetition can reveal new rods indicated for reading intensification (which should be investigated), as well as rods that are no longer indicated.
When there are rods within the clusters with low communalities, it is recommended that they be investigated. A low communality indicates a high percentage of randomness in the data, which can be an indicator of problems with the rod.
The identification of similar rods can also be used when establishing control values. In this case, the control values for each rod can be associated with the readings of the rods that belong to the same cluster.
The final factor score also allows the attributes to be ranked. In this case the patterns are vectors whose components (attributes) are the readings of the extensometer rods in a given month; the final factor score therefore ranks the months, showing whether any month is particularly relevant and deserves greater attention. As mentioned, cluster 1 shows the winter/summer effect in its readings, so the final factor score was calculated for cluster 1 in order to rank the months and show whether one or more months have greater relevance. Identifying the months with readings most affected by an external effect (here, the effect of summer and winter on the readings of the extensometer rods) can be useful, for example, when establishing control values: admitting that the readings of the rods differ in these months, only the readings performed in them would be used to define specific control values for these months.
Further research
It is suggested that this methodology be applied to other instruments and other periods, and that it be implemented to define control values and for anomaly detection.
Once the ranking process is repeated over several periods (every two years, for example), it can reveal new rods indicated for reading intensification, or rods that are no longer indicated (which should be investigated).
Conclusions
This manuscript presents a methodology based on techniques from the field of multivariate analysis whose aim is to select, cluster and rank the geotechnical-structural instruments of a hydroelectric power plant, in our case the Itaipu hydroelectric power plant, in order to maximize the efficiency and effectiveness of the analysis of the readings.
The methodology was applied to the instruments called extensometers, located at different points of block F of the dam: a total of 30 extensometers which, with one, two or three point rods, provide 72 monthly displacement measurements. These measurements were stored over a period of 10 years, totaling 120 readings (January 1995 to December 2004). It is important to recall that 24 of the 72 measurements were automated by the company. The ranking of the instruments offers a way to choose instruments without any previous knowledge of their location, features or other characteristics; in this way, the methodology can also be applied in future decision-making related to the automation of additional instruments.
The methodology used to address the research problem was as follows: Ward's method was applied to cluster the 72 extensometer rods; in parallel, factor analysis was applied to rank the rods; afterwards, factor analysis was applied within each cluster formed by the clustering analysis.
In the factor analysis applied to the 72 rods, no rod required investigation, since the communality was high for all of them. Observing the 25 extensometer rods with the highest communalities, 14 were identified among those automated by the Itaipu engineering team (the automated rods are considered the most important); in other words, the proposed ranking method, without previous clustering of the rods, identified 14 of the 24 automated rods.
The clustering analysis showed that it is possible to find a technical justification for the formation of three clusters. The instruments were clustered according to the relevant geological characteristics of the foundation mass, although these characteristics were not explicitly shown to the technicians.
For clusters 1, 2 and 3, factor analysis was applied within each cluster in order to rank the extensometer rods. It was possible to notice that the rods of the automated extensometers are, for the most part, among the first in the ranking of each cluster.
In order to identify the 24 most relevant rods, we decided to select the 8 best-ranked rods from each cluster. In this case, 15 of the 24 automated rods would be selected. The number of rods coinciding with those automated at Itaipu would increase with the aid of a specialist providing a better interpretation of the results; for instance, such a specialist would note that cluster 1 is formed by rods that are extremely important for the monitoring of dams and that all rods from this cluster should be automated.
Similar approaches can be used in many other cases, since there are thousands of large civil engineering works that rely on instrumentation systems whose data can and must receive appropriate treatment.
In addressing an important engineering problem, the analysis of instrumentation data from large construction works, clustering and other techniques of multivariate statistical analysis were applied with the aim of identifying the instruments that are most significant for the analysis of the behavior of dams.
|
2018-10-23T18:53:26.106Z
|
2013-01-09T00:00:00.000
|
{
"year": 2013,
"sha1": "4a1836917694b830fb6d866667ba7df2c0a194ba",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/41751",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "21b86c4ffc48b627dea497bb77450caadbbae866",
"s2fieldsofstudy": [
"Engineering",
"Geology"
],
"extfieldsofstudy": [
"Engineering"
]
}
|
51676270
|
pes2o/s2orc
|
v3-fos-license
|
BIPSPI: a method for the prediction of partner-specific protein–protein interfaces
Abstract Motivation Protein–Protein Interactions (PPI) are essentials for most cellular processes and thus, unveiling how proteins interact is a crucial question that can be better understood by identifying which residues are responsible for the interaction. Computational approaches are orders of magnitude cheaper and faster than experimental ones, leading to proliferation of multiple methods aimed to predict which residues belong to the interface of an interaction. Results We present BIPSPI, a new machine learning-based method for the prediction of partner-specific PPI sites. Contrary to most binding site prediction methods, the proposed approach takes into account a pair of interacting proteins rather than a single one in order to predict partner-specific binding sites. BIPSPI has been trained employing sequence-based and structural features from both protein partners of each complex compiled in the Protein–Protein Docking Benchmark version 5.0 and in an additional set independently compiled. Also, a version trained only on sequences has been developed. The performance of our approach has been assessed by a leave-one-out cross-validation over different benchmarks, outperforming state-of-the-art methods. Availability and implementation BIPSPI web server is freely available at http://bipspi.cnb.csic.es. BIPSPI code is available at https://github.com/bioinsilico/BIPSPI. Docker image is available at https://hub.docker.com/r/bioinsilico/bipspi/. Supplementary information Supplementary data are available at Bioinformatics online.
Introduction
Protein-Protein Interactions (PPI's) are at the basis of virtually every cellular process. Therefore, elucidating the biochemical underpinnings of interactions is a fundamental step for improving our understanding of cellular mechanisms and diseases. Much research has been done on PPI's, especially at the cellular level, which has led to the availability of many interactomes (Cafarelli et al., 2017). However, in order to grasp protein function in cellular processes, it is important to know not only which proteins interact but also how proteins bind to their different partners; thus, identifying protein-protein interfaces becomes a central issue.
Many experimental methodologies exist for the characterization of protein-protein interfaces, including mass spectrometry (Sobott and Robinson, 2002), mutagenesis (Chen et al., 2014), X-ray crystallography (Shi, 2014) or nuclear magnetic resonance (O'Connell et al., 2009). Nevertheless, in many cases, these approaches require expensive and time-consuming experiments and are not suitable for the analysis of large datasets. As a result, many computational approaches have been designed to predict and characterize PPI's at different levels. For example, several protein-protein docking approaches (Rodrigues et al., 2015;Zhang et al., 2016) have been developed to obtain atomic models for the interaction of two proteins when solved structures of both partners are available. For those other cases in which there is no structural information, or it only exists at low resolution, other methods to identify which pairs of domains are likely to bind in PPI's have been proposed (Segura et al., 2015b;Segura et al., 2016;Wang et al., 2007).
Nonetheless, most approaches work at the residue level, predicting which protein residues constitute binding sites or interfaces of a protein complex. Generally, these algorithms employ knowledge derived from structurally solved proteins in order to build templates or statistical models (Xue et al., 2015).
Different knowledge-based methods can be found in the scientific literature. Some of them use homology information, inferring protein interfaces from templates of homologous complexes (Mosca et al., 2013;Segura et al., 2016;Xue et al., 2011). Other approaches employ correlated mutations in order to identify pairs of residues that are likely to interact (Jones et al., 2012;Morcos et al., 2014;Pazos et al., 1997). On the other hand, most proposed data-driven methods make use of machine learning algorithms that are trained on heterogeneous sets of structurally solved complexes (Ahmad and Mizuguchi, 2011;de Vries et al., 2006;Fout et al., 2017;Hwang et al., 2016;Meyer et al., 2018;Minhas et al., 2014;Murakami and Mizuguchi, 2010;Neuvirth et al., 2004;Porollo and Meller, 2006;Savojardo et al., 2017;Segura et al., 2011Segura et al., , 2012. The different strategies have different relative merits, depending on the context. For example, template-based approaches might offer accurate predictions when homologue complexes are available (Xue et al., 2015). Similarly, correlated mutations have been shown to provide very useful information when high quality multiple sequence alignments can be compiled (Ovchinnikov et al., 2014). On the other hand, machine-learning solutions are not limited by the need of high quality templates or alignments, so that they can be used in more general contexts. Finally, docking algorithms, which are able to achieve atomic resolution in their prediction but are also computationally demanding, can benefit from data-driven predictions in order to get faster and more accurate solutions (Rodrigues et al., 2015;Segura et al., 2015a).
Several formulations can be found for the problem of predicting protein complex interfaces or binding sites (Ahmad and Mizuguchi, 2011). On the one hand, partner-independent binding site predictions aim to identify all residues of a given protein that interact with any protein. On the other hand, partner-specific binding site predictions (from now on 'interface predictions') aim to identify which residues are involved in a particular PPI. Partner-specificity is a desirable attribute for interface predictors as most proteins interact with several partners (Grigoriev, 2003) and the interfaces for each partner can be totally different. This is especially true for transient interactions, which are fundamental for processes such as signal transduction (Xue et al., 2015). It is not then surprising that partner-specific methods tend to outperform non-specific approaches (Ahmad and Mizuguchi, 2011; Minhas et al., 2014). However, most current binding site prediction approaches based on machine learning algorithms are designed to produce non-partner-specific predictions. Indeed, to our knowledge, only a few machine-learning based methods computing partner-specific binding sites are currently available. Ahmad et al. proposed PPiPP, an ensemble of 24 neural networks which uses amino acid type and PSSMs (Position Specific Scoring Matrices) through a sliding window as features to predict binding sites on protein sequences (Ahmad and Mizuguchi, 2011). PAIRpred (Minhas et al., 2014), one of the state-of-the-art methods, is a Support Vector Machine that employs a specific pairwise kernel over a set of structural and/or sequence-based features. The latter set of sequence-based features comprises PSSMs, PSFMs (Position Specific Frequency Matrices) and solvent accessibility predictions, while the structural descriptors include residue depth, solvent accessibility, protrusion index and half-sphere amino acid compositions. In general, better performance is achieved when structures of the protein partners are available. Recently, Fout et al.
developed a graph convolutional neural network (GCNN) method using the set of features described in PAIRpred (Fout et al., 2017). Finally, the ECLAIR method (Meyer et al., 2018), which was designed to function in high-throughput scenarios, is based on an ensemble of Random Forests, each of them trained on a different set of features including biophysical, structure-based, docking-based and co-evolution features.
In this work, we present BIPSPI (xgBoost Interface Prediction of Specific-Partner Interactions), a new machine-learning based method for the partner-specific prediction of residue-residue contacts and binding sites. BIPSPI can predict interface residues from either protein sequences or protein structures. To that end, BIPSPI employs multiple structural and/or sequence-based amino acid features that are combined through an Extreme Gradient Boosting (XGBoost) (Chen and Guestrin, 2016) model and a new scoring function that converts residue contact predictions into binding site scores. BIPSPI performance has been evaluated by means of a leave-one-out cross-validation over different datasets (Hwang et al., 2008; Vreven et al., 2015) and against an independent testing set derived from CAPRI targets (Janin et al., 2003). Finally, BIPSPI was compared with similar methods, outperforming previously reported results. A web server where BIPSPI can be employed and results and datasets downloaded is freely available at http://bipspi.cnb.csic.es.
Materials and methods
BIPSPI classifier has been trained to predict interfaces from protein structures and/or sequences. This section describes the implementation of the method using structural data. A full description of the sequence-based classifier is available in Supplementary Section S1.
Datasets
Different sets of protein complexes were used to train and evaluate the performance of BIPSPI predicting protein interfaces. The first one was the Protein-Protein Docking Benchmark version 5 (Vreven et al., 2015) dataset, that contains 230 non-redundant protein complexes for which bound and unbound structures are available. Each of these complexes has resolution better than 3.25 Å and the length of each sequence is >30 amino acids. To avoid redundancy, this dataset was compiled ensuring that none of the protein complexes belonged to the same pair of SCOP families (Andreeva et al., 2008). This set will be referred to as DBv5. The second dataset was the Protein-Protein Docking Benchmark version 3, in this work termed DBv3, (Hwang et al., 2008), which is a subset of DBv5.
In addition to DBv5 and DBv3, we have compiled a new dataset of 117 protein dimers (DImS) following a similar approach as the one used to compile the different Protein-Protein Docking Benchmark versions. This dataset was built selecting PDB dimers of at least 35 amino acids long, with resolutions better than 3 Å and for which >90% of their residues were structurally determined. Similar to DBv5 and DBv3, non-redundancy between protein complexes was established at SCOP family level in such a way that only dimers with one SCOP domain per partner were considered and none of the dimers shared the same combination of SCOP families (see Supplementary Section S6.12 for a detailed list). Finally, several CAPRI targets (see Supplementary Section S6.12 for a detailed list) were also employed as independent testing data and as a way to provide a direct comparison with other methods (Savojardo et al., 2017).
Residue-residue contact definition
Different definitions of residue-residue contact can be found in the scientific literature (Xue et al., 2015). Two of the most commonly used are: (i) residue solvent accessibility reduction after complex formation and (ii) a distance threshold between residue heavy atoms. In order to compare with other existing methods, we have adopted the same contact definition that was used in PPiPP (Ahmad and Mizuguchi, 2011) and PAIRpred (Minhas et al., 2014). Accordingly, a pair of residues is defined as interacting if the distance between any of their heavy atoms is <6.0 Å.
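A sketch of this contact definition using Biopython is shown below (chain identifiers and hetero-residue handling are left as placeholders); any residue pair with a heavy-atom distance below 6.0 Å is labelled as interacting.

from Bio.PDB import PDBParser, NeighborSearch

def interacting_pairs(pdb_path, chain_a, chain_b, cutoff=6.0):
    """Return residue-id pairs (one residue per chain) whose heavy atoms
    come closer than `cutoff` Angstroms, the contact definition adopted
    from PPiPP and PAIRpred. Waters and hetero residues are not filtered
    here for brevity."""
    structure = PDBParser(QUIET=True).get_structure("cmplx", pdb_path)
    model = structure[0]
    atoms_b = [a for a in model[chain_b].get_atoms() if a.element != "H"]
    search = NeighborSearch(atoms_b)
    pairs = set()
    for res_a in model[chain_a]:
        for atom in res_a:
            if atom.element == "H":
                continue
            for atom_b in search.search(atom.coord, cutoff):
                pairs.add((res_a.get_id(), atom_b.get_parent().get_id()))
    return pairs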
In our analysis, we found that in DBv5 there are 20 799 interacting residue pairs as opposed to 15 333 317 non-interacting residue pairs; thus, an extreme class imbalance. To properly handle this situation during the different training steps of BIPSPI, we only considered random samples of all non-interacting pairs, including non-accessible residues to account for possible conformational changes. Several sampling proportions were tested, achieving the best performance when the number of selected negative cases was three times larger than the number of interacting pairs (data not shown). This random sample, which also makes training faster, is drawn at the protein complex level, in such a way that all complexes contribute to the dataset with the same ratio of positive to negative cases. Finally, it is important to notice that no sampling is done for evaluation, which is hence performed on whole-protein data.
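A minimal sketch of this per-complex sampling at the 3:1 negative-to-positive ratio used during training (evaluation keeps all pairs) could look as follows.

import random

def sample_training_pairs(positive_pairs, negative_pairs, ratio=3, seed=0):
    """For one complex, draw a random subset of non-interacting pairs up to
    three times the number of interacting pairs."""
    rng = random.Random(seed)
    n_neg = min(len(negative_pairs), ratio * len(positive_pairs))
    sampled_neg = rng.sample(list(negative_pairs), n_neg)
    return list(positive_pairs), sampled_neg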
Data encoding
Residue pairs are codified as a vector of numerical features. In this work, a protein A is defined as a collection of residues ai and a pair of residues (ai, bj) will identify two amino acids belonging to proteins A and B, respectively. Due to the symmetry of the problem, each pair (ai, bj) can also be defined as (bj, ai). To tackle this, we have included both representations as different examples in the training set and, when computing scores, we have assigned the average of predictions for (ai, bj) and predictions for (bj, ai) to both of them. The next sections describe how the vector of features associated to a pair of residues (ai, bj) is built.
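A small sketch of this symmetric treatment of residue pairs: both orderings enter the training set, and at prediction time the two scores are averaged and assigned to both.

def symmetrize_scores(score):
    """Average the predictions for (a_i, b_j) and (b_j, a_i) and assign the
    mean to both orderings. `score` maps ordered residue pairs to values."""
    out = {}
    for (ai, bj), s in score.items():
        s_rev = score.get((bj, ai), s)   # fall back to s if the reverse is missing
        avg = 0.5 * (s + s_rev)
        out[(ai, bj)] = avg
        out[(bj, ai)] = avg
    return out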
Single amino acid features
Each single residue is encoded by a set of sequence-based and structural features. Sequence-based features include amino acid type, codified as a vector of 22 binary elements (one-hot encoded), sequence profiles computed with PSI-BLAST (Altschul et al., 1997) or retrieved from 3DCONS-DB (Sanchez-Garcia et al., 2017) when available, and sequence conservation scores computed with AL2CO (Pei and Grishin, 2001). Structural features are also calculated to describe residues, including geometrical descriptors and hydrophobicity computed with PSAIA (Mihel et al., 2008), one-hot encoded secondary structure determined by DSSP (Kabsch and Sander, 1983) and half-sphere exposure and contact number (Hamelryck, 2005) computed at radius of 12 Å with Biopython (Cock et al., 2009). An exhaustive list of the residue features is available in Supplementary Section S2.
Residue environment features
Residue environments are also described and included in the vector of features, in such a way that for each feature an environment feature is calculated. Several residue environment definitions have been employed in different works; two of the most common are: (i) the sequential environment obtained through a sliding window (Sikic et al., 2009) and (ii) the structural environment defined by a Euclidean distance threshold (Porollo and Meller, 2006). In this work, three types of environments are used in combination: sequential environment, structural environment and structural pairwise environment. The sequential environment is obtained by a sliding window approach of length 11 amino acids in which all sequence-based features described above are concatenated for all residues of the window. On the other hand, structural environment features are computed from all sequence-based and structural features of each residue employing a structural neighbourhood definition based on Voronoi diagrams (Segura et al., 2011). According to this definition, two residues are considered neighbours if they share a common edge in the Voronoi diagram defined by all C-alpha atoms of the protein. The computation of a structural environment feature differs depending on whether the feature is represented by a real number or is one-hot encoded. For a real-valued feature fi of residue ai, the associated structural environment feature efi is defined as a set of four values summarizing fi over the structural neighbours of ai; for a one-hot encoded feature h with k classes, the associated structural environment feature, of dimension k, is computed analogously over the neighbours. Residue pair scores predicted by the first-step classifier are also included as new features in the second step (see Section 2.4); those scores can be regarded as pairwise features. Then, given a pair of residues (ai, bj) and a pairwise score Fij, the structural pairwise environment score eFij is defined over the structural neighbourhoods of ai and bj.
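A sketch of the structural-neighbourhood computation is given below, using the Delaunay triangulation of the C-alpha coordinates as a proxy for adjacency in the corresponding Voronoi diagram. The exact four summary values of the real-valued environment feature are not spelled out above, so mean, standard deviation, minimum and maximum are used here purely as an illustrative assumption.

import numpy as np
from scipy.spatial import Delaunay

def voronoi_neighbours(ca_coords):
    """Neighbour sets derived from the Delaunay triangulation of the
    C-alpha coordinates (residues with adjacent Voronoi cells)."""
    tri = Delaunay(np.asarray(ca_coords, dtype=float))
    neighbours = [set() for _ in range(len(ca_coords))]
    for simplex in tri.simplices:
        for i in simplex:
            neighbours[i].update(int(j) for j in simplex if j != i)
    return neighbours

def environment_feature(values, neighbours, i):
    """Environment of a real-valued feature for residue i. The four summary
    statistics below are an assumption made only for illustration."""
    vals = np.array([values[j] for j in neighbours[i]], dtype=float)
    return np.array([vals.mean(), vals.std(), vals.min(), vals.max()])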
BIPSPI algorithm
The BIPSPI algorithm was designed as a three-step workflow (see Fig. 1). First, an XGBoost classifier (Chen and Guestrin, 2016) is fed with the set of sequence-based and structural features and their respective environments. After that, a second XGBoost classifier is fed with the same input features plus the predictions obtained in the first step and their associated environment scores. Finally, a scoring function converts interacting pair predictions into binding site residue scores (see Section 2.5). The training procedure and selected algorithm hyperparameters are described in Supplementary Section S3.
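A condensed sketch of the two-step classification with the xgboost package follows; hyperparameters are illustrative rather than those of Supplementary Section S3, and the environment-score function is a placeholder.

import numpy as np
import xgboost as xgb

def two_step_predict(X_train, y_train, env_of, X_test):
    """Two-step workflow in the spirit of BIPSPI (illustrative settings).

    X_*: (n_pairs, n_features) encoded residue pairs; y_train: 0/1 contacts.
    env_of: callable returning, for a vector of pair scores, the associated
    structural pairwise environment scores (placeholder).
    """
    params = dict(objective="binary:logistic", max_depth=6, eta=0.1)
    # Step 1: classifier over sequence-based and structural features
    clf1 = xgb.train(params, xgb.DMatrix(X_train, label=y_train), num_boost_round=200)
    # In practice, step-1 scores for the training set should come from
    # out-of-fold predictions to avoid information leakage.
    s_train = clf1.predict(xgb.DMatrix(X_train))
    s_test = clf1.predict(xgb.DMatrix(X_test))
    # Step 2: same features plus the step-1 scores and their environments
    X2_train = np.column_stack([X_train, s_train, env_of(s_train)])
    X2_test = np.column_stack([X_test, s_test, env_of(s_test)])
    clf2 = xgb.train(params, xgb.DMatrix(X2_train, label=y_train), num_boost_round=200)
    return clf2.predict(xgb.DMatrix(X2_test))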
From residue-residue contact scores to binding site prediction
In order to obtain individual interface residues, we have designed a scoring function to compute single amino acid binding site scores from the residue-residue pair results. This scoring function takes into account all residue pair scores, relying on the rank of the predictions when all pairs are sorted from highest to lowest score. The binding site score of a given residue a is thus derived from the list of all pair predictions ordered from highest to lowest score through an expression involving n, the number of residue pairs, and Xc(a, 2^i), the number of times that residue a appears among the 2^i highest-scoring pairs (see Supplementary Section S4). Additionally, with the aim of making predictions smoother, scores are averaged along the sequence employing a window size of three amino acids and the vector of weights (1/4, 1/2, 1/4). Finally, in order to provide a manageable score that allows for easy threshold selection, the BIPSPI web server computes an expected precision value that is estimated using an isotonic regression model on the original scores (Zadrozny and Elkan, 2002) (see Supplementary Section S4.2 for more details).
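The full rank-based expression is given in the supplementary material; the sketch below only implements its building block Xc(a, m) and the sliding-window smoothing with weights (1/4, 1/2, 1/4), which are fully specified above. The edge handling of the smoothing (reusing the boundary value) is an implementation choice.

import numpy as np

def count_in_top(sorted_pairs, residue, m):
    """Xc(residue, m): number of times `residue` appears among the m
    highest-scoring residue pairs (pairs already sorted from highest to
    lowest score)."""
    return sum(1 for a, b in sorted_pairs[:m] if residue in (a, b))

def smooth_scores(scores):
    """Average binding-site scores along the sequence with a window of
    three residues and weights (1/4, 1/2, 1/4)."""
    s = np.asarray(scores, dtype=float)
    padded = np.pad(s, 1, mode="edge")   # boundary residues reuse their own value
    return 0.25 * padded[:-2] + 0.5 * padded[1:-1] + 0.25 * padded[2:]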
Performance evaluation
The performance of BIPSPI predicting residue-residue contact pairs and binding sites was evaluated by computing a leave-one-out cross-validation over the complexes included in the different datasets (see Section 2.1). Specifically, each of the classifiers of the method was trained with the sampled pairs of all protein complexes except for those belonging to the left-out complex. This evaluation procedure is the same that was used in PAIRpred and PPiPP and, when trained over DBv3, allows for a straight and fair comparison with those methods. In addition, several CAPRI target interfaces were predicted as an independent benchmark. Residue-residue contact predictions (RRCP) were evaluated with the AUC values of ROC curves, both averaged over the protein complexes (per-complex AUC ROC) and computed by mixing all residue-residue scores from the different complexes (pooled AUC ROC). Additionally, as these measurements can provide an over-optimistic view of performance due to the imbalance between interacting and non-interacting pairs, the AUC of the precision-recall curve (AUC PR) is also provided. Single residue binding site predictions were also evaluated in terms of the Matthews correlation coefficient (MCC), precision (PR), recall (RC), specificity (SPC) and negative predictive value (NPV), which were computed at the threshold that maximized the MCC.
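A brief scikit-learn sketch of these metrics is shown below; the binding-site threshold is chosen as the one maximizing the MCC, as described above, and average precision is used as a stand-in for the area under the precision-recall curve.

import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score, matthews_corrcoef

def evaluate(y_true, y_score):
    """ROC AUC, precision-recall AUC (approximated by average precision)
    and MCC at the MCC-maximizing threshold."""
    auc_roc = roc_auc_score(y_true, y_score)
    auc_pr = average_precision_score(y_true, y_score)
    thresholds = np.unique(y_score)
    mccs = [matthews_corrcoef(y_true, y_score >= t) for t in thresholds]
    best = int(np.argmax(mccs))
    return auc_roc, auc_pr, mccs[best], thresholds[best]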
BIPSPI feature importance analysis
The importance of the different features employed in BIPSPI has been analyzed by counting the total number of tree splits caused by each variable during model training (Friedman and Meulman, 2003). In order to obtain easily interpretable results, we have focused on families of features classified by type (e.g. accessibility, conservation, secondary structure, etc.). Accordingly, the family of features with the highest contribution (the sum of the importance of all variables in the class), approximately 65%, was conservation. However, individually, the most informative variables belonged to the accessibility family for the first step and to the previous-step prediction scores for the second one, with accessibility being the next most important feature family. Additionally, we have studied the importance of the residue environment and observed that structural environment features explained >55% of the total importance despite representing <31% of the total number of features. An extended discussion of feature importance can be found in Supplementary Section S5.
BIPSPI performance analysis
The performance of BIPSPI predicting residue-residue contacts and binding sites was evaluated computing a leave-one-out cross-validation on DBv5 and DImS datasets. As expected, the method achieved the best performance when structural features and two classification steps were computed (see Table 1). Although the improvement in performance predicting residue-residue contacts between the first and second step is small, the improvement in performance predicting single residue binding sites after the second step is not negligible. For example, while BIPSPI AUC ROC measured in DBv5 are 0.9011 and 0.9052 for the first and second step, respectively, the binding site AUC ROC increases from 0.8046 in the first step to 0.8235 in the second one (see Table 1). This behaviour can be explained due to the high imbalance of interacting and noninteracting residue pairs, and, as a consequence, small improvements in residue-residue contact predictions can involve important improvements in binding site prediction.
In general, binding site evaluation measurements improved after the second step. For example, when BIPSPI was evaluated in DBv5, the MCC in the second step increased by >0.01 with respect to the first step. Also, the per-complex AUC ROC, the pooled AUC ROC and the AUC PR measurements increased after the second step was employed (see Table 1). It is worth noting that the apparent precision drop in the second step that could be inferred from the Table 1 values is a consequence of the fact that precision and recall were obtained at the thresholds that maximized the MCC in each step independently and thus cannot be compared. In fact, as can be appreciated in the precision-recall curves included in Supplementary Section S6.2, most precision and recall values improved after the second step was applied. This improvement between the two steps can be explained by the addition of the first-step scores and their associated structural pairwise environment scores (see Section 2.3.2). Protein binding sites tend to form continuous surface patches and thus, providing predicted scores of neighbouring residues can be useful in order to find residues surrounded by high-scoring regions.

Fig. 1. BIPSPI workflow. Sequence-based and structural features are used to codify pairs of residues. In the first step, an XGBoost classifier is fed with the encoded pairs in order to obtain interacting pair predictions. The interacting pair scores are combined with the original features and fed to a second-step classifier. Lastly, the interacting predictions obtained in step two are converted into binding site predictions employing our scoring function.

Furthermore, we analyzed the feature importance for the second-step classifier, obtaining that the first-step scores and their associated environment values were the most important features (see Section 2.3.2 and Section 3.1). In addition to the XGBoost algorithm, which has not been widely explored in bioinformatics, we have also analyzed Random Forest (Breiman, 2001) as classifier. The results obtained by XGBoost were superior to Random Forest in all the evaluated metrics (see Supplementary Section S7). Specifically, XGBoost achieved a higher recall (over 7%) while having a similar precision, and increased the RRCP AUC ROC by over 1% and the binding site MCC by over 0.02.
BIPSPI behaves partner-specific
In order to measure the partner-specificity of BIPSPI, we have compiled a dataset in which some proteins interact with multiple partners through different binding sites. Then, we have compared the scores of binding site residues for a particular interaction (e.g. protein PA interacting with PB) with the scores of residues involved in the interactions of the same protein with other partners (e.g. protein PA interacting with PC or with PD). For this analysis, equivalent proteins (sequence identity >90%) that interact with different partners were identified in our datasets DBv5 and DImS. As a result, 46 different proteins, involving 123 interactions, were found in DBv5 and 17 proteins, involving 43 interactions, in DImS (see Supplementary Section S9.1 for a list of PDB chains). To avoid any effect or artefact due to overtraining, we analyzed the scores obtained in the leave-one-out cross-validation computed on DBv5 and DImS (see Section 3.2). Then, for each protein, the scores of its specific interface residues were collected for the partner-specific binding site distribution, and the scores of residues belonging to the interfaces of other interactions were included in the non-specific binding site distribution (see Supplementary Material Section S9.2 for a detailed explanation and a particular example). Finally, both distributions were compared using the Mann-Whitney U test, achieving P-values of 2.6e-13 and 2.5e-14 for DBv5 and DImS, respectively, and thus rejecting the null hypothesis of the test that both distributions are equivalent.
Binding site scoring function improves other approaches
In PPiPP and PAIRpred, the binding site score of a particular residue is computed as the mean or the maximum of the residue pair scores involving this particular residue. Then, for a single residue, the resulting binding site score depends on the score of a unique pair and thus, the predicted scores of other possible contacts are ignored. In this work, we have designed a novel scoring function to compute single residue binding site scores considering all predicted pair scores for a particular residue (see Section 2.5). This approach increased the performance when compared with the maximum score value proposed in PAIRpred. Finally, we have also found that averaging the predicted binding site scores through a sliding window (see Section 2.5) increased the final performance. Table 2 summarizes the performance of different scoring approaches predicting binding sites from residue pair scores. In our benchmark, the best performance was achieved by the newly defined scoring function averaging the resulting scores through a sliding window. At this point, we would like to highlight that our proposed scoring function is not specific to our method but can also be applied to other pair prediction methodologies. Indeed, when applied to PAIRpred scores, it also improves its performance (see Table 1 and Section 3.5).
Comparison with other methods
We have compared our approach with four other methods (PPiPP, PAIRpred, GCNN and ECLAIR) that also use a machine-learning based approach and have been designed to predict partner-specific binding sites. In order to make comparisons with PAIRpred and PPiPP easier, we have used the same evaluation protocol, consisting in a leave-one-out cross-validation over DBv3 complexes. Table 1 shows the performance of PPiPP, PAIRpred and BIPSPI using the metrics described in Section 2.6. The best performance was achieved by BIPSPI when structural data were included in the input. Moreover, when only sequence-based features were used, BIPSPI also outperformed the other approaches. It is worth highlighting that the original PAIRpred binding site predictions improved considerably when our scoring function was applied (see Section 2.5), raising the MCC coefficient by >0.1 points.
The comparison with GCNN was carried out as described in the original publication (Fout et al., 2017). Thus, BIPSPI was retrained on the set of complexes of DBv5 that are contained in Docking Benchmark v4 (DBv4) (Hwang et al., 2010) and tested on the complexes contained in DBv5 but not in DBv4. The median ROC AUC obtained by BIPSPI on the testing set was 0.942 and thus >4 points better than that reported in the GCNN publication. Similarly, we compared our method with ECLAIR and several other non-partner-specific methods using the BM90C dataset (Meyer et al., 2018). In this case, BIPSPI also achieved the best MCC when compared with the other methods, 0.389. A detailed comparison table is included in Supplementary Section S8.1.
In addition, we have also evaluated BIPSPI performance over a set of CAPRI targets (see Supplementary Section S6.12 for a complete list of proteins and detailed results). In this evaluation, BIPSPI achieved an AUC ROC for pair prediction of 0.885 and, for binding sites, AUC ROC and MCC values of 0.763 and 0.297, respectively. Moreover, we could compare these results with ISPRED4 predictions (Savojardo et al., 2017), as these targets were also used during its testing. It is worth noting that ISPRED4 is a non-partner-specific predictor and thus predicts global binding sites, which is a more general problem. Even so, BIPSPI obtains a better MCC than ISPRED4, which reported an MCC of 0.28.
Use case
In this section, we illustrate how BIPSPI can be employed in order to obtain meaningful information on protein-protein interfaces, especially in those cases where several partners are involved and thus partner-specificity becomes more important. One such example can be found in PDB entry 4ov6, in which two subunits of the proprotein convertase subtilisin/kexin type 9 (PCSK9) are in complex with a PCSK9-binding adnectin protein. PCSK9 plays an important role in the regulation of low-density lipoprotein (LDL) serum levels thanks to its LDL receptor degrading activity, and it has been demonstrated that self-association of PCSK9, which occurs at the catalytic region, increases that activity (Fan et al., 2008). For these reasons, it has become a potential pharmacological target for the treatment and prevention of cardiovascular diseases (Mitchell et al., 2014). PCSK9-binding adnectins, which were derived from human fibronectin as an alternative to therapeutic antibodies, are known to bind also close to the active site (Mitchell et al., 2014).
BIPSPI interface residue predictions for the PCSK9-PCSK9 interaction and for the PCSK9-adnectin interaction are shown in Figure 2. As can be observed, BIPSPI partner-specificity allows the identification of some of the residues of each native binding site, despite them being spatially close. Moreover, it can be noticed that BIPSPI predictions are spatially close to the active site that was identified through the 3DBIONOTES application (see Supplementary Section S10 for additional information and Section S11 for an additional use case) (Segura et al., 2017; Tabas-Madrid et al., 2016).
Conclusion
In this work, we have presented BIPSPI, a partner-specific predictor of residue-residue contacts and protein binding sites that uses as input either protein sequences or structures. BIPSPI employs the Extreme Gradient Boosting algorithm over a set of structural and/or sequence-based features in order to predict scores for residue pairs that are likely to interact. Then, these predicted scores are converted into binding site predictions by a novel scoring function. BIPSPI was compared with state-of-the-art methods using a leave-one-out cross-validation on different datasets. Additionally, several CAPRI targets were also tested as an independent evaluation benchmark. In all these evaluations, BIPSPI achieved the best performance compared to previously reported methods. Moreover, its partner specificity was successfully evaluated through a Mann-Whitney U statistical test. Finally, BIPSPI is freely available through a user-friendly web application at http://bipspi.cnb.csic.es, where prediction and visualization of binding site residues can be computed from either protein structures or sequences.
Conflict of Interest: none declared.
|
2018-08-01T20:48:44.209Z
|
2018-07-18T00:00:00.000
|
{
"year": 2018,
"sha1": "ddd7395c1028a4df14d758145bb84cf739de3e0c",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/bioinformatics/article-pdf/35/3/470/27700304/bty647.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "ddd7395c1028a4df14d758145bb84cf739de3e0c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
}
|
227953936
|
pes2o/s2orc
|
v3-fos-license
|
Effect of Chemical Treatment and Length of Raffia Fiber (Raphia vinifera) on Mechanical Stiffening of Polyester Composites
In recent decades, the unique characteristics of natural fibers have promoted their use as reinforcement in polymeric composites. This is verified in several industrial sectors, from packaging to automotive and civil construction. Among the natural fibers, the raffia fiber, extracted from the palm tree Raphia vinifera and introduced into the Amazon region long ago, started to be considered for the production of polymeric composites only in recent years. For the first time, the effect of raffia fiber length and its alkali treatment on the mechanical properties of a polymer composite is disclosed. Tensile tests were performed on composites with raffia fibers randomly dispersed into a terephthalate-based unsaturated polyester resin. The results showed an increase in the Young's moduli, confirmed by ANOVA, for the composites with both untreated and alkali-treated fibers in comparison to the plain polyester, which characterizes a stiffening effect. The composites with alkali-treated fibers exhibited similar tensile strength values for all lengths; however, their strengths are lower than those for the untreated condition due to a weak raffia fiber/polyester matrix adhesion. Therefore, this work fills the current knowledge gap on raffia fiber incorporation in a polyester matrix and valorizes this abundant Brazilian resource, providing additional information towards the use of raffia fiber in polymer composites.
Introduction
The use of renewable and biodegradable materials has advanced remarkably in recent years. Among these, natural lignocellulosic fibers (NLFs) have stood out as a sustainable alternative to replace synthetic fibers in polymeric composites [1-12] in the most diverse areas, such as civil construction [13], the automotive industry [14-16], and ballistic vests [17-20]. Many NLFs have traditionally been used by local people in developing regions for craftwork and ropes, or have been considered industrial waste, such as sugarcane bagasse fiber [21], coir fiber [22,23] and PALF [23]. The growing interest in NLFs is due to their characteristics of relatively low cost, low density, flexibility, and non-abrasive behavior, which, unlike synthetic fibers, avoids damage to processing equipment. In addition, NLFs help with socio-environmental issues, as they come from renewable resources, are biodegradable, and are a source of income in developing regions [24,25].
The raffia fiber is extracted from the leaves of the raffia palm tree, illustrated in Figure 2, which is native to the African continent and comprises over 20 recognized species. Among these species, Raphia vinifera was introduced centuries ago in the north of Brazil. This palm tree reaches up to 10 m in height, with 3-5 m long leaves/petioles [27]. The entire raffia palm tree is usable. The raffia nut is commonly used to extract oils for cosmetics. A kind of wine is produced from raffia sap, and its fibers are used for carpets, ropes, and handicrafts [28]. Although Brazil is a major producer of natural fibers and takes a unique position among South American countries, only recently [31] have the raffia fibers from the Amazon region, also known as Jupati fibers, started to be considered as an addition to polymer composites. The first work reporting on the potential of raffia fibers and predicting a possible use in composites was that of Elenga et al. [29] in 2009. They described the physical and mechanical properties of the Raphia textilis fiber and suggested that the atypical alveoli structure (honeycomb-like and scales) of this species could help in the interface adhesion with a composite matrix. Rodrigue et al. [30] also mentioned the possibility of using Raphia vinifera in composites. However, in their paper, only the variation of mechanical properties along the stem of this raffia fiber was investigated. In fact, none of these studies actually evaluated raffia composites; these works focused exclusively on the characterization of the fiber. The work of Obasi et al. [32] in 2013 was the first to report on the use of raffia fibers as a possible reinforcement by stiffening the composites. The properties of Raphia farinifera fibers randomly dispersed into a high-density polyethylene (HDPE) composite, with different fiber loadings from 0 to 60 wt%, were analyzed. The addition of 60 wt% of raffia fiber to the composite resulted in a Young's modulus 2.5 times greater than that of neat HDPE. However, no statistical validation was presented to support this stiffening. By contrast, a decrease in both elongation and tensile strength was observed. Moreover, higher fiber loading resulted in higher water absorption, which was reduced by around 30% with the addition of maleic anhydride-graft-polyethylene (MA-g-PE) to their composites [32]. The reduction in tensile strength of raffia composites was also observed by Rodrigues et al. [33], who investigated the influence of the pressure level on the mechanical properties of raffia composites produced by the vacuum infusion process. The composites with 45 vol% of aligned raffia fibers disclosed a tensile strength of around 24 MPa, which was 30% lower than that of plain polyester (~34 MPa), although no significant difference was verified between the two vacuum pressure levels. Table 1 presents the reported tensile strength and Young's moduli of these earlier works [32,33] on raffia fiber composites. In this table, the main point of discussion is whether there exists a reinforcement effect promoted by the incorporation of raffia fiber into the polymer matrix. The single-point results of Obasi et al. [32] do not allow a statistical interval of precision to be determined. Consequently, their values in Table 1 lack standard deviations, which does not guarantee the apparent increase shown in Young's moduli. In regard to the work of Rodrigues et al.
[33], the results for both aligned raffia fiber and fabric, although showing statistical precision, failed to report the Young's modulus of the polyester matrix. The absence of this reference value does not permit claiming a stiffening effect caused by the addition of raffia fiber/fabric to the polyester matrix. As for other properties, Foadieng et al. [34] evaluated the thermal properties of raffia bamboo, which is the stem of the palm tree. They reported a thermal conductivity of 0.07 W/m·K, smaller than that of some timbers [35], which makes raffia bamboo a good insulation material for structures such as houses, drying lofts, and ceilings [34]. A hybrid sandwich composite based on raffia and glass fibers was produced, and the effects of alkaline treatment of the raffia fibers on the structural, thermal, and mechanical properties were reported by Ouarhim et al. [36]. The results showed better thermal and mechanical properties for the treated-raffia-fiber composite in comparison with the untreated raffia fiber-based sandwich composite. A different approach was taken by Overah et al. [37], who produced nanocomposites of magnetite and Raphia farinifera for use as an absorber of heavy metal ions. They found a greater absorption of heavy metal ions, especially Pb2+, for the nanocomposites with higher fiber content (magnetite/raffia ratio of 1:3), making them a good choice for application to contaminated wastewater.
Although several studies [1][2][3][4][5][6][7][8][9][10][11][12] have shown the potential of NLF application in composites, natural fibers are not a challenge-free alternative. Heterogeneity and hydrophilicity are among the shortcomings of NLFs [7,24]. One way to improve the performance of natural-fiber-based composites is chemical modification of the NLF surface [38]. In order to improve the interfacial adhesion between natural fibers and the polymeric matrix, several chemical treatments have been extensively investigated, such as alkali, silane, benzoyl, acetylation, acrylation, permanganate, graphene-based coating, and stearic acid [39][40][41][42]. Alkali treatment is the most commonly used; it partially removes the lignin, hemicellulose, wax, and oils covering the external surface of natural fibers, enhancing the matrix-fiber interface and, consequently, the composites' mechanical properties [10]. This effect was observed by Mazzanti et al. [43] for hemp-PLA composites, in which an increase of ~16% in Young's modulus was obtained with the addition of 6 wt% alkali-treated hemp fiber in comparison to the untreated-fiber composite. The authors reported that the alkali treatment promotes bundle opening and individualization of thin fibers, helping their distribution in the matrix, which may have a beneficial effect on the mechanical properties [43]. Hence, in this work, polyester composites with randomly dispersed raffia fibers (Raphia vinifera) were produced, and the effects of both fiber alkali treatment and fiber length on the mechanical properties were evaluated for the first time. Moreover, since the only tensile results reported so far for raffia polymer composites [32,33] still cast doubt on a possible reinforcement effect, the present work conducted a statistical analysis by ANOVA and the Tukey test to elucidate this question.
Materials
The unsaturated terephthalate-based polyester resin (Arazyn AZ 1.0 #34) and the catalyst methyl-ethyl-ketone peroxide (MEK), PERMEC D-45, both supplied by Ara Química SA (São Paulo, Brazil), were used as the polymer matrix. Two ratios of the hardener catalyst, 0.7 and 1 vol%, were evaluated for curing the polyester resin. The raffia fibers (Raphia vinifera) used in this investigation were obtained from the local market of Belém, PA, in the north region of Brazil. These fibers, shown in the insert of Figure 2, were first manually cut to three different lengths, 5, 10, and 15 mm, and then alkali-treated to remove noncellulosic impurities from the fiber surface. The alkali treatment was conducted in a 10 wt% sodium hydroxide (NaOH) solution, under ultrasonic stirring at room temperature for 1 h. During the treatment, the fiber/solution ratio was kept between 0.075 and 1 g/mL. After that, the fibers were washed and dried at room temperature (25.8 °C) for 48 h.
Processing of Composites
Composites were produced with randomly dispersed fibers by a hand lay-up process, in a silicone mold, schematically shown in Figure 3. For this, the mass fraction of fiber was defined by the maximum volumetric capacity of the mold to accommodate the reinforcement without pressure, resulting in ~10 wt%. Eight polyester composite samples, in both untreated and treated conditions, were produced for each fiber length. In addition, pure polyester samples, with different ratios of hardener catalyst, were prepared as control groups. Overall, a total of 64 samples of 10 wt% raffia fiber reinforced unsaturated polyester composites, with fibers alkali-treated or untreated and different fiber lengths (5, 10 or 15 mm), as well as polymer cured with either 1.0 or 0.7 vol% of catalyst hardener, were investigated.
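Since fiber content is specified here by weight while some comparisons in the literature (e.g., [33]) are given by volume, a conversion between the two is sometimes useful. The short sketch below uses illustrative density values for raffia fiber and polyester resin; neither value is taken from this work.

```python
# Sketch: weight-to-volume fraction conversion for a fiber/matrix pair.
# The densities below are only illustrative placeholders, not measured values.
def weight_to_volume_fraction(w_f, rho_fiber, rho_matrix):
    """w_f: fiber weight fraction; densities in g/cm^3."""
    return (w_f / rho_fiber) / (w_f / rho_fiber + (1.0 - w_f) / rho_matrix)

# e.g. 10 wt% fiber with assumed densities of 0.8 (fiber) and 1.2 (matrix) g/cm^3:
print(weight_to_volume_fraction(0.10, 0.8, 1.2))  # ~0.14 with these assumed densities
```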
Tensile Tests
The mechanical properties of the composites, in the different aforementioned conditions, were determined by tensile tests. These tests were conducted according to the ASTM D638 standard [14] using an AROTEC universal testing machine (São Paulo, Brazil), with a 5 kN load cell at a crosshead speed of 5 mm/min. Type I dimensions as per the standard [14] were used to produce the specimens. The maximum value attained in the digitally recorded stress-strain curve of each tensile test was taken as the material's tensile strength, while the maximum stress/strain ratio up to the yield point was computed as the Young's modulus. No externally attached extensometer was used to measure strains; only the digitally recorded crosshead displacement and the specimen gage length were used to calculate the strain through the machine's electronic interface program.
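Because strain was derived from crosshead travel and gage length rather than from an extensometer, the tensile quantities can be extracted directly from the raw machine record. The sketch below illustrates one common way of doing this; the linear-fit window, specimen geometry and data layout are assumptions made for illustration, not the authors' exact procedure.

```python
import numpy as np

# Sketch: extract tensile strength and an elastic modulus from raw machine data
# (crosshead displacement and load). Geometry and fit window are assumed values.
def tensile_properties(displacement_mm, load_N, gage_length_mm, area_mm2,
                       fit_strain_range=(0.0005, 0.0025)):
    strain = displacement_mm / gage_length_mm            # engineering strain from crosshead travel
    stress = load_N / area_mm2                           # engineering stress in MPa (N/mm^2)
    strength = stress.max()                              # tensile strength = peak stress
    lo, hi = fit_strain_range
    mask = (strain >= lo) & (strain <= hi)               # initial linear region (assumed window)
    slope, _ = np.polyfit(strain[mask], stress[mask], 1) # modulus = slope of linear fit, in MPa
    return strength, slope / 1000.0                      # strength (MPa), modulus (GPa)
```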
Statistical Analysis
Analysis of variance (ANOVA) was applied using the F test to verify whether there was a significant difference among the results obtained for tensile strength and Young's modulus. A 95% confidence level was adopted, and the Tukey test complemented this statistical analysis to quantitatively identify the most prominent value by means of the honestly significant difference (HSD).
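A minimal sketch of this workflow in Python, using scipy for the one-way F test and statsmodels for the Tukey HSD comparison; the group labels and strength readings below are made up solely to show the mechanics.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Illustrative tensile-strength readings (MPa) for three hypothetical groups.
groups = {
    "neat_polyester": np.array([6.1, 5.8, 6.4, 6.0, 5.9]),
    "untreated_15mm": np.array([8.3, 8.7, 8.1, 8.9, 8.5]),
    "treated_10mm":   np.array([5.2, 5.6, 5.1, 5.4, 5.3]),
}

f_stat, p_value = stats.f_oneway(*groups.values())       # one-way ANOVA F test
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))      # pairwise comparisons via Tukey HSD
```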
Additional Characterization
The raffia fiber dimensions and their frequency distributions were determined with an optical microscope, model BX53M, Olympus (Tokyo, Japan). Forty fibers, in both the untreated and alkali-treated conditions, were randomly selected for statistical analysis of their dimensions, which were measured at five equally spaced positions along the fiber length, as described elsewhere [45]. In order to study the failure mechanisms of each fabricated composition, the fracture surfaces of the specimens were also analyzed after the mechanical tests. In addition, FTIR analysis was carried out to verify the chemical interaction between the raffia fiber and the polyester matrix, using Thermo Fisher Scientific equipment, model Nicolet iS50 (Waltham, MA, USA), in the mid-infrared range (4000-400 cm−1). The morphology of the fracture surfaces was examined by scanning electron microscopy (SEM) in a TESCAN model VEGA 3 SBU microscope (Brno, Czech Republic), using secondary electrons at 20 kV accelerating voltage. All samples were gold sputtered before being subjected to SEM investigation.
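The dimensional statistics described above (40 fibers, five positions each) can be summarized with a short script such as the sketch below; the array of readings is a placeholder standing in for the microscope measurements.

```python
import numpy as np

# Sketch: average width or thickness from readings taken at five positions along
# each of 40 fibers. The fake data only mirror the shape of that protocol.
def fiber_dimension_stats(measurements_um):
    """measurements_um: array of shape (40, 5), i.e. 40 fibers x 5 positions."""
    per_fiber = measurements_um.mean(axis=1)                   # mean of the 5 positions per fiber
    mean = per_fiber.mean()
    sem = per_fiber.std(ddof=1) / np.sqrt(len(per_fiber))      # standard error over the 40 fibers
    return mean, sem

rng = np.random.default_rng(0)
fake_thickness = rng.normal(93.5, 8.0, size=(40, 5))           # placeholder readings in micrometers
print(fiber_dimension_stats(fake_thickness))
```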
Frequency Distribution of Raffia Fiber Dimensions
The raffia fiber, shown in Figure 4, exhibits an almost rectangular cross-section. The fiber length, corresponding to the major dimension, was cut to sizes of 5, 10, and 15 mm as an investigated variable. The two dimensions of the rectangular cross-section have clearly different sizes: the greater is indicated as the "width" and the shorter as the "thickness" in Figure 4. Figure 5 presents the resulting histograms for the distributions of thickness and width of the untreated (Figure 5a) and alkali-treated (Figure 5b) raffia fibers. The untreated fibers exhibited average widths and thicknesses of 1.450 ± 0.032 mm and 93.513 ± 4.191 µm, respectively. After treatment, the fibers showed a dimensional increase of 2% in width (1.487 ± 0.028 mm) and 3.5% in thickness (96.267 ± 3.709 µm), which could be attributed to the volumetric expansion the fiber underwent during the chemical treatment. Figure 6 shows the effect of the different ratios of hardener catalyst on the tensile test results of the neat polyester samples. It can be noted that there is a higher stiffness for the polyester with 1.0 vol% of MEK catalyst (~0.86 GPa) in comparison with the one with 0.7 vol% (~0.80 GPa). Moreover, the deformation values were higher in the samples with 0.7 vol% of MEK. It is also worth mentioning that both polyester curves in Figure 6 exhibit relatively ductile behavior under tensile testing, but undergo a sharp fracture with no energy absorption capacity after matrix rupture. Owing to the higher stiffness of the polyester with 1.0 vol% of MEK, only composites with this matrix had their tensile properties evaluated. Moreover, 1.0 vol% MEK was the ratio recommended by the catalyst manufacturer. Figure 7 illustrates the typical stress-strain curves of the raffia fiber composites in the alkali-treated and untreated conditions for the different fiber lengths. It can be noted that the highest tensile strength and deformation are obtained by the untreated raffia composites with a length of 15 mm (Figure 7c), presenting 8.50 MPa and 1.79%, respectively. A similar effect of the alkali treatment, namely a reduction in tensile strength, was also observed by Fadele et al. [46] for their Raphia farinifera fibers. After chemical treatment with a 10 wt% NaOH solution, the fiber strength was reduced by 47%, and its deformation was only slightly changed [46]. Although the authors verified an increase of 22% in cellulose content and a decrease of 8% in lignin content, they attributed the decrease in fiber tensile strength to the presence of voids/flaws, which are sources of stress concentration [46]. In addition, it is possible that the treatment parameters, such as concentration, immersion time, and temperature, were not adequate. Several studies on the degradation of the tensile strength of natural fibers after chemical treatment reported on the influence of these parameters [46][47][48][49]. For example, Mahjoub et al. [47] verified a decrease in the tensile strength of kenaf fibers for higher values of NaOH solution concentration and immersion time. Similar results were observed for flax and abaca fibers, which presented a lower tensile strength with increasing immersion time in a 3% NaOH solution [49]. Table 2 presents the tensile test results of this work. As aforementioned, to our knowledge, only two works have reported on raffia-based composites [32,33]. Obasi et al. [32] discussed the effect of the fiber mass fraction and of MA-g-PE as a compatibilizer on the tensile properties of HDPE composites.
Their results, in Table 1, show that the MA-g-PE barely changes the mechanical properties of the HDPE composites. By contrast, the fiber content strongly influences the tensile strength, reducing it by almost 70% for the composite with 60 wt% of raffia fiber in comparison with the neat HDPE. Even so, they verified an improvement in the biodegradability of HDPE by adding raffia fiber [32]. A similar reduction in tensile strength was observed in the present study, caused not by the fiber content but by the fiber length, with the largest decrease exhibited by the composites with the shortest fibers (Table 2). Although the raffia fiber has relatively high cellulose (53 wt%) and lignin (24 wt%) contents [28], which are responsible for its strength, Figure 8 indicates that this fiber acted only as a filler in the polyester matrix. This may have occurred due to either unsuitable processing of the composite or weak interfacial fiber/matrix adhesion. As further shown, porosity and lack of fiber adhesion to the matrix are revealed by the SEM fractographs. The relatively small amount (10 wt%) of added fiber is not expected to hinder the composite processing. Therefore, we believe that poor adhesion of the raffia fiber to the polyester should be the main reason for the relatively low tensile strength of all composites, as compared with the plain matrix in Figure 8. It should also be mentioned that the atypical rectangular cross-section of the raffia fiber, shown in Figure 4, might introduce internal stress concentrations at the fiber/matrix interface due to its sharp corners. Indeed, the other works on raffia fiber composites, using compression molding [32] and vacuum infusion [33], reported similarly low tensile strengths in comparison with their matrices (Table 1), although with strengths superior to our values in Figure 8. However, the composite with alkali-treated raffia fiber 10 mm in length exhibits the highest Young's modulus, exceeding the stiffness of the neat polyester by 66% (Figure 9), which reveals a reinforcement effect. This significant difference was confirmed through the ANOVA and Tukey statistical analyses shown in Table 3. Based on this table, it is possible to claim, with a 95% confidence level, that the alkali-treated raffia fiber (10 mm) composite was the best condition, since the p-value is lower than 5% and the difference between this composite and the neat polyester is higher than the honestly significant difference (HSD). It can also be noted that the fiber length had an effect on the tensile strength, as shown in the ANOVA and Tukey results of Table 4. The composite with the longest raffia fibers (15 mm) in the untreated condition exhibits a more than twofold increase in tensile strength in comparison with the composite with 5 mm (alkali-treated) fibers in Figure 8.
Raffia Fiber Reinforced Polyester Composites
SEM images of the fracture surfaces of composites with untreated and alkali-treated raffia fibers are shown in Figures 10 and 11, respectively. The alkali treatment removes non-crystalline constituents inherent to natural fibers, such as hemicellulose and waxes; however, in this case it did not lead to stronger bonding between the raffia fiber and the polyester matrix. Consequently, the alkali treatment did not improve the transfer of loads from the matrix to the fiber. Hence, in both cases, the predominance of failure mechanisms associated with weak interfacial adhesion and fiber pullout can be observed, as also verified by Rodrigues et al. [33]. These are indicated by fiber debonding and fiber imprints in the matrix. Moreover, no polyester matrix is seen attached to the raffia fibers in Figures 10 and 11. The presence of porosity and river marks, characteristic of brittle matrix fracture, is also noted in Figure 11. Figure 12 shows the FTIR spectra of the raffia fiber and its composites. The wavenumbers and assignments of the FTIR bands are summarized in Table 5. It can be observed that the raffia fiber consists of alkene, esters, aromatics, ketone, and alcohol, with different oxygen-containing functional groups, such as OH (3327 cm−1), C=O (1732 cm−1), O-CH3 (1462 cm−1), C-O-C (1244 cm−1), and C-O (1033 cm−1). The band in the range of 3600 to 3000 cm−1 is related to the stretching of hydroxyl groups (O-H) of the hydrogen bond in cellulose and hemicellulose [50]. The methyl group vibration, which is typical of the molecular structure of natural fibers, appears at 2918 and 2850 cm−1, given by the elongation of the aliphatic C-H bonds [50,51]. In addition, it is noticed that these characteristic vibrations are present in lignin. The band at 1732 cm−1 is attributed to the carbonyl groups (C=O), while the stretching of the C-O-C group (1244 cm−1) is associated with the vibration of the cellulose glycosidic rings [52]. Concerning the raffia fiber/polyester composites, the O-H stretching is observed in the range of 3600 to 3400 cm−1, and stretching vibrations of C-H are verified in the ranges from 3100 to 2900 cm−1 and from 1460 to 1250 cm−1, the latter corresponding to the CH2 and CH3 group vibrations. It can be noted that the C-H group vibrations have approximately the same transmittance bands for both composites. The stretching vibration of the C=O group occurs at 1728 cm−1. The three bands at 1600, 1580, and 1492 cm−1 are assigned to aromatic ring stretching and appear at the same positions for the composites, which indicates that no change occurred in the chemical interaction between the aromatic ring and the raffia fiber. According to Cecen et al. [53], the vibrations at 1453 and ~1380 cm−1 correspond to asymmetric and symmetric bending of methyl groups, respectively. In addition, the band at 1254 cm−1 may be attributed to the CH2 twist vibration, and C-O stretching vibrations occur at 1117 cm−1. However, no significant change was observed in the FTIR spectra of the raffia composites, which corroborates the aforementioned mechanical property results and SEM analyses.
Conclusions
• The addition of 1.0 vol% of MEK catalyst to the neat polyester resulted in the highest Young's modulus (~0.86 GPa), which was ~10% higher than that of the polyester with 0.7 vol% of MEK.
• Tensile strength results indicated that the raffia fiber acted only as a filler in the polyester composites, which may be associated with either unsuitable processing of the composite or weak interfacial fiber/matrix adhesion. In spite of that, an increase in the Young's modulus of the composites was obtained in comparison with that of the polyester matrix.
• Statistical analyses by ANOVA and the Tukey test confirmed for the first time a stiffening effect caused by 10 wt% of raffia fibers 10 mm in length in the unsaturated polyester matrix composite.
• The tensile results also disclosed the effect of fiber length on the mechanical strength of the composites. The highest tensile strength was reached by the composite with the longest (15 mm) raffia fibers in the untreated condition, which represented an increase of more than 100% in comparison with the composite with 5 mm (alkali-treated) fibers. All composites with alkali-treated raffia fibers presented similar tensile strength values which, according to ANOVA, are lower than those for the untreated condition.
• SEM analyses revealed the predominance of a failure mechanism associated with weak interfacial adhesion and porosity, even for the composites with alkali-treated raffia fibers.
• FTIR analysis failed to disclose any significant change in the raffia composite transmittance bands, which corroborated the relatively unaltered mechanical properties and the weak interfacial fiber/polyester adhesion.
• To date, based on Scopus metrics, there are very few studies on raffia fiber composites. In addition to confirming a stiffening effect, the effects of raffia fiber length and treatment on the mechanical properties were also disclosed. Hence, this study provides information filling the current knowledge gap on raffia fiber and aims to valorize this abundant and unexploited Brazilian resource.
Healthcare and Compassion: Towards an Awareness of Intersubjective Vulnerability; Comment on "Why and How Is Compassion Necessary to Provide Good Quality Healthcare?"
How to instill compassion in a healthcare organization? In this article, I respond to Marianna Fotaki’s proposals in her piece, ‘Why and how is compassion necessary to provide good quality healthcare?’ by drawing on insights from organization studies. Following Fotaki, I argue that to instill targets and formal measures for assessing compassion would be problematic. I conclude by drawing on psychoanalytic and feminist theories to introduce alternatives, specifically proposing an approach that is grounded in a shared sense of a common, embodied precarity, which necessitates our commitment to preserving the conditions in which life might flourish.
Introduction: Compassion as a Moral Sentiment
In her article, Fotaki examines how compassion might become a moral sentiment, potentially contributing to the development of a system of norms and values within the National Health Service (NHS) and similar organizations. 1 She notes that compassion has the potential to form part of a foundation for ethics that might usefully guide health professionals in their daily practice. Central to her argument is the caveat that managers must avoid imposing new metrics of control, that is, new targets, if they are to effectively foster such an ethics. To do so, she feels, would merely exacerbate the kinds of self-centred, individual-focused behaviour that such targets were created to discourage. In my response, I examine her proposals with reference to insights from organization studies. In doing so, I find both support and supplementary ideas for her arguments. The issue of compassion in organizations tasked with the care of the sick, weak and vulnerable has rarely been so topical. Successive stories of failures of care in the NHS, coupled with failures to protect the whistleblowers who tried to speak up about these, have lately dominated news reports. Unfortunately, such stories are not new. Scholars have long attempted to grapple with the question of how this can come about; what is it about large-scale organizations that can lead to a breakdown in individual compassion, as staff facilitate, or at least look the other way, in cases of serious neglect and abuse? Studies detail breakdowns in care where organizations are charged with looking after vulnerable people, 2,3 including the psychiatrically ill, 4 children 5 and those availing of social services. 6
Organizations and Compassion: A Targeted Approach?
Central to debates is the well-rehearsed argument that organizations, by their very nature, can lead to dehumanization. 7 The idea is that the simple act of doing one's job in a large organization, and following orders, can lead to behaviour that is instrumentally rational and focused on achieving narrow goals, as opposed to being aware of the consequences of one's actions. This has a particularly strong effect on the ethical behaviour of individuals in large bureaucratic systems, illustrated for example in Arendt's study of the Eichmann Holocaust trials. 8 It is easy to suspend a personal sense of morality, and compassion, when the nature of one's job involves close attention to rules. 9,10 If this is the case, then how might we, as Fotaki asks, instill a greater sense of compassion in the organizations that we entrust with the care of our weak and sick? At first glance, the introduction of measures and targets that help to encourage people to act with greater compassion appears somewhat tempting. However, previous studies of organizational processes indicate that such a move would paradoxically lead to a further alienation of a sense of compassion. It could perhaps render compassion impossible. Useful examples are given in studies of large-scale organizational abuses, and the attempts that were made to 'fix' the institutions in question such that abuse would no longer be allowed to happen. In a significant number of cases, after an inquiry has been held and a report issued, recommendations emerge that call for yet more rules and regulations. The idea is that the abuse was caused by the absence of 'correct' rules, and will be eradicated if this is addressed. This view informed successive responses to abuse cases in UK health and social care organizations. 11,12 The result is a renewed sense of security, with the hope that all will be well under the improved rules. 13 This approach has had problems, however, in some cases heightening the problems being experienced. 6 Increased rules and regulations are seen to further remove a sense of humanity from those tasked with following them, 12 not least because of their role in exacerbating defensive anxieties. 14 So it appears that increased targets and measures, even those imposed with the laudable goal of promoting compassion within organizations, may not be a useful way forward.
Fostering Norms in Contemporary Organizations
If this is the case, we need another approach. Fotaki calls instead for the promotion of 'prosocial behaviour' and the development of organizational norms and environments that would foster this. In understanding how this might take shape, it is useful to draw once more on organization studies, this time on insights from psychoanalytic approaches. 17,18 Schwarz, 19 for example, is particularly interested in the question of organizational norms and how these relate to individuals and their moral actions. He studies the ways in which actions by loyal members of an organization, which are seen as antisocial and wrong by the outside world, are actually an effect of processes of socialization in which such behaviours are instilled as part of commitment to the organization. Psychic ties of commitment, perceived as feelings of 'loyalty,' are key here, and this works because organizations provide what he terms 'an organization ideal,' a phenomenon that essentially represents an ego ideal for people in the organization. 19 Schwarz is interested in the ways in which people divest themselves of ethical responsibility because of the sheer strength of this ideal; an implicit bargain is set up in which the employee assumes that all responsibility for antisocial behaviour is taken by the organization, and in fact that the organization absorbs all guilt that would otherwise be felt by employees. Under this view, loyal individuals are less likely to criticize their organization, as it represents something of a projection of the ego ideal. 19 In addition, Schwarz notes, threats to the organization ideal represent threats to the ego ideal and therefore can result in antisocial action. In such situations, organizations can come to form their own moral communities, adopting a defensive stance towards the rest of their community and society more generally. Overall, the stronger the identification, the more loyal the employee, and hence the greater ease with which the employee can diverge from 'normal' moral decisions and carry out organizational injunctions, regardless of how unacceptable they might be. Studies such as Schwarz's adopt a psychoanalytic lens to reveal the darker side of employee loyalty and commitment, but can such an approach lead to a consideration of how the alternative, prosocial behaviours described by Fotaki can likewise be instilled and tied in with organizational loyalty? For example, if the organization with which the individual identifies possesses a strong culture of compassion and altruism, perhaps organizational identification will lead to compassionate behaviour, at least toward some of the organization's stakeholders. To explore this idea further, insights from psychoanalytic feminist theorist Judith Butler resonate. Butler's work represents a longstanding engagement with questions of how subjects identify with social norms and how relationships with others are implicated in this. 20 Her recent work on precarity invokes a new 'ontology of the subject' that is grounded in the idea that we are inescapably embodied beings, 21 and the bodies we inhabit eventually decay and die. To live an embodied life necessarily means to be vulnerable: to war, famine, poverty and physical hurt; 'to live is always to live a life that is at risk from the outset,' she notes, a life that can be 'expunged quite suddenly from the outside and for reasons that are not always under one's control.' 21
This embodied sense is shared by all, and the only thing, for Butler, that can ameliorate vulnerability is our acknowledgement of those others upon whom we depend. Our bodily, vulnerable beings are necessarily and inescapably interdependent. 20,21 This common and intersubjective condition of precarity necessitates our commitment to preserving the conditions in which life might flourish. This forms the basis for a future politics and ethics in which preservation of life, based on the generalizable condition of precarity we all share, might take precedence.
Prosocial Behaviour and Healthcare Organizations
How might such a perspective be fostered in healthcare organizations? One approach perhaps lies in recent investigations of 'prosocial organizational behaviour' (PSOB). This is described as positive behaviour undertaken at the discretion of the individual employee, which involves willingness not only to fulfill one's role but also to exceed 'normal' expectations and, for example, volunteer one's time to help others, suggest improvements to the organization, and assist coworkers. Recent studies suggest that human resource management (HRM) functions in healthcare organizations can play a role in fostering this kind of behaviour among clinicians and practitioners. 22 Before embarking on such interventions, however, a number of points are important to note. First, such interventions ought not to take the form of 'strong' cultural programming. As noted by Willmott 23 and others, even apparently benign efforts to increase employee loyalty and commitment can be seen as manipulative and exploitative. Second, in considering the concept of prosocial behaviour, it is helpful to draw, as Fotaki does here and in other work, 16,24 on feminist philosophy, not least the psychoanalytic idea that the aim of ridding the subject of all forms of aggression and exclusionary impulses towards the other is an illusory goal. 20,25 Under such a view, subjects (including those who work in healthcare) possess the potential for compassion and new forms of 'being-with' the other, just as they experience inherent impulses for domination and more negative effects. Similarly, vulnerability as described by Butler can make us sensitive to the needs of the other but equally, under conditions of psychological defense, the denial of our own vulnerability can blind us to the vulnerability of the other. Again we see the ambivalence with which the other is inescapably viewed. Any approach to fostering 'prosocial behaviour' must, therefore, accommodate this inescapable ambivalence and tension on the part of the subject. The question is how to enable an environment in which such compassionate, 'transubjective' encounters can nonetheless take place, 25 grounded in the primary affect that our shared precarity as vulnerable subjects engenders. 21 Such an approach would be valuable in the workplaces that play such an important role in our society, 24 not least our healthcare organizations.
Conclusion
Fostering compassion in healthcare is clearly a valuable goal. Rather than adopting targets and measures in order to achieve it, the development of 'prosocial' norms of intersubjective engagement might offer a valuable way forward. Such attempts must, however, incorporate an awareness of the ambivalence of the subject's impulses towards the other, and must likewise avoid the danger of imposing, from without, manipulative attempts at cultural programming.
Generalized scaling in fully developed turbulence
In this paper we report numerical and experimental results on the scaling properties of turbulent velocity fields in several flows. The limits of a new form of scaling, named Extended Self Similarity (ESS), are discussed. We show that, when a mean shear is absent, the self-scaling exponents are universal and do not depend on the specific flow (3D homogeneous turbulence, thermal convection, MHD). In contrast, ESS is not observed when a strong shear is present. We propose a generalized version of self-scaling which extends down to the smallest resolvable scales even in cases where ESS is not present. This new scaling is checked in several laboratory and numerical experiments. A possible theoretical interpretation is also proposed. A synthetic turbulent signal having most of the properties of a real one has been generated.
Introduction
In order to characterize the statistical properties of fully developed turbulence [1], one usually studies the scaling properties of the moments of velocity differences at scale r,

S_p(r) = < (δv(r))^p >,    (1)

where < · · · > stands for ensemble average and δv(r) is the difference, over a separation r, of the velocity component v parallel to r. At high Reynolds number Re = U_0 L/ν, S_p(r) satisfies the relation

S_p(r) ∝ r^{ζ(p)}    (2)

for L > r >> η_k, where L is the integral scale, η_k = (ν^3/ε)^{1/4} is the dissipative (Kolmogorov) scale, ε is the mean energy dissipation rate, ν the kinematic viscosity and U_0 the r.m.s. velocity of the flow. The range of lengths L > r >> η_k where the scaling relation (2) is observed is called the inertial range. The Kolmogorov (K41) theory [2] predicts ζ(p) = p/3, but experimental [3] and numerical [4] results show that ζ(p) deviates substantially from this linear law. This phenomenon is believed to be produced by the intermittent behaviour of the energy dissipation [5], which can be taken into account by rewriting eq. (2) in the following way:

S_p(r) ∝ < ε_r^{p/3} > r^{p/3},    (3)

where ε_r is the average of the local energy dissipation ε(x) on a volume of size r centered on a point x. A comparison of eq. (1) and eq. (3) leads to the conclusion that the scaling exponents τ(p/3) of the energy dissipation are related to those of S_p by ζ(p) = τ(p/3) + p/3. Since the Kolmogorov (K62) theory [5], many other models [6][7][8][9][10][11] have been suggested to describe the behaviour of the ζ(p). However, it turns out that the ζ(p) may not be universal in non-homogeneous, anisotropic flows and may depend on the location where measurements are done. Specifically, they may have different values if one measures either far away from boundaries, where turbulence is almost homogeneous and isotropic, or in locations of the flow where a strong mean shear is present. The ζ(p) also depend on the way in which turbulence is produced, for example 3D homogeneous turbulence, boundary layer turbulence, thermal convection and MHD. Thus there is the fundamental question of understanding in which way all these parameters influence the scaling laws. Furthermore, all the above mentioned models assume the existence of two well defined intervals of length, namely the inertial range and a dissipation range. According to the idea of multiscaling, these two ranges may eventually be connected by an intermediate region where the viscosity begins to act [12]. However, this idea of a well defined inertial range, where viscosity does not act at all, together with the idea of multiscaling, turns out to be incompatible with the recently introduced new form of scaling, which has been named Extended Self Similarity (ESS) [13][14] (see section 2 below).
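A minimal sketch of how the structure functions S_p(r) can be estimated from a one-dimensional velocity record; absolute moments are used here, a common choice in ESS analyses, and the separations are left in grid units rather than physical units.

```python
import numpy as np

# Sketch: longitudinal structure functions S_p(r) = <|dv(r)|^p> from a 1-D
# velocity record. A real analysis would convert separations to physical units.
def structure_functions(v, orders, separations):
    S = np.empty((len(orders), len(separations)))
    for j, r in enumerate(separations):
        dv = v[r:] - v[:-r]                        # velocity increments at separation r
        for i, p in enumerate(orders):
            S[i, j] = np.mean(np.abs(dv) ** p)     # absolute moments of the increments
    return S

v = np.random.default_rng(1).standard_normal(2**16)    # placeholder signal, not real data
S = structure_functions(v, orders=[2, 3, 6], separations=[2, 4, 8, 16, 32, 64])
```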
ESS has been observed in 3D homogeneous and isotropic turbulence both at low and at high Re, and over a much wider range of scales r than the scaling (2). In contrast, ESS is not observed when a strong mean shear is present [15]. All these experimental observations show that the mechanisms by which energy is actually dissipated in a flow are still poorly understood. Specifically, one would like to understand how viscosity acts on different scales. This is clearly an important point in order to safely use large eddy simulations in real applications.
The purpose of this paper is to rationalize all the above mentioned results on scaling, both in the presence and in the absence of a shear. We propose a generalized form of ESS which has been checked in many different flows. We have also generated a signal which has all the statistical properties of a real turbulent signal. Our interpretation of ESS and of this generalized scaling suggests that there is no sharp viscous cut-off in the intermittent transfer of energy.
The paper is organized as follows: in section 2 we recall the properties of ESS; in section 3 we discuss the systems where ESS is not observed; in section 4 the hierarchy of structure functions is described; in section 5 the generalized form of scaling is discussed; in section 6 a possible theoretical interpretation is proposed; in section 7 we discuss multiscaling. Finally, conclusions are given in section 8.
Extended Self Similarity
Extended Self Similarity (ESS) is a property of the velocity structure functions of homogeneous and isotropic turbulence [13,14]. It has been shown using experimental and numerical data [16] that the structure functions present an extended scaling range when one structure function is plotted against another, namely

S_n(r) ∝ S_m(r)^{β(n,m)},    (4)

where β(n, m) = ζ(n)/ζ(m). The details of ESS have been reported elsewhere [14]. In the following we describe only the main features.
As an example we consider two experimental data sets at different R_λ, the Reynolds number based on the Taylor scale (R_λ ≃ 1.4 Re^{1/2}) [1]. The two experiments are a jet at R_λ = 800 and the wake behind a cylinder at R_λ = 140. In both cases data have been recorded at about 25 integral scales downstream [14]. In fig. 1a, S_6/r^{ζ(6)}, computed for the two experiments, is plotted as a function of r. In fig. 1b we show S_3/r^{ζ(3)} as a function of r. In both figures a scaling region is observed only for the highest R_λ. In contrast, if the relative scaling (4) is used, see fig. 2, a clear scaling is present for both R_λ with β(6, 3) ≅ 1.78. The vertical dashed lines in fig. 2 correspond to r = 5η_k and they roughly indicate the extension of the scaling (4), that is 5η_k < r < L.
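In practice, the relative exponent β(n, m) of eq. (4) is obtained as the slope of log S_n versus log S_m over a chosen range of scales. A minimal sketch, reusing the structure functions from the earlier snippet:

```python
import numpy as np

# Sketch: estimate beta(n, m) = zeta(n)/zeta(m) as the slope of log S_n vs log S_m.
def ess_exponent(S_n, S_m, fit_slice=slice(None)):
    x = np.log(S_m[fit_slice])
    y = np.log(S_n[fit_slice])
    beta, _intercept = np.polyfit(x, y, 1)     # slope of the log-log fit = beta(n, m)
    return beta

# With the S array from the previous sketch (orders [2, 3, 6]):
# beta63 = ess_exponent(S[2], S[1])            # ~1.78 for real turbulence data
```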
The ESS scaling has been checked both on numerical data and in experiments, in the range 30 < R_λ < 2000. A direct consequence of the scaling (4) is that, for all p, S_p(r) can be written in terms of a rescaling function f_p(r/L), with U_0^3 = S_3(L), L = U_0^3/ε the integral scale, and C_p dimensionless constants selected in such a way that f(x) = 1 for x >> 1. Eq. (5) has been carefully checked by computing the functions f_p: if f_p is independent of p, this means that eq. (5) is satisfied. This is seen in fig. 3, where log(f_6/f_2) and log(f_4/f_2) are plotted as a function of r/η_k. We clearly see that both ratios are close to 1 within 2% for r > 5η_k. This result shows that eq. (4) is satisfied for 5η_k < r < L.
ESS has been also checked for the temperature and velocity fields in Rayleigh-Benard convection [17] and in the case of a passive scalar [18]. It turns out that ESS is a very useful tool in order to distinguish between Kolmogorov and Bolgiano scaling [17], [19]. In the case of the Bolgiano scaling it has been found that ζ(3) = 2.08 which is clearly very different from the Kolmogorov value ζ(3) = 1. In spite of this large difference between the values of the exponent ζ(3), using ESS one discovers that the ratio ζ(p)/ζ(3) in the case of Bolgiano are equal to those of homogeneous and isotropic turbulence. The same property is observed for the ζ(p) obtained from measurements done on the solar wind [20] for MHD. In Table I we compare the ζ(p) measured in different physical systems and the ratios β(p, 3) for MHD and Rayleigh-Benard convection.
Another interesting observation concerns the behaviour of β(p, 3) with respect to R_λ. The values of β(p, 3) reported in Table 1 have been measured in the range 30 < R_λ < 5 × 10^6. First, we note that, within error bars, any change or trend of β(p, 3) as a function of R_λ is absent. Second, we show in fig. 4 the dependence of β(6, 3) on R_λ (a similar result has been reported in ref. [21]). This means that, far away from boundaries, the β(p, 3) are constants which depend neither on Re nor on the way in which turbulence has been generated.
A final point regarding ESS concerns the generalization of the Refined Kolmogorov Similarity (RKS) hypothesis. The RKS hypothesis states that ε_r ∼ δv(r)^3/r, as far as the dependence on the scale r is concerned, and it supports eq. (3). We can generalize the RKS hypothesis by introducing an effective scale L(r) = S_3(r)/ε, as suggested by ESS, obtaining the relation ε_r = δv(r)^3 ε/S_3(r). The generalized RKSH then simply states that

S_p(r) ∝ < ε_r^{p/3} > [S_3(r)]^{p/3}.    (6)

In section 6 we give some theoretical support for eq. (6). Eq. (6) was first proposed in ref. [14] and carefully checked in ref. [22]. A typical experimental result is shown in fig. 5, where < ε_r^2 > S_3^2 is plotted as a function of S_6(r). The energy dissipation has been computed using the 1-dimensional surrogate

ε(x) = 15ν (∂v/∂x)^2.    (7)

In fig. 5 one can see a clear scaling extending over almost ten decades, from the integral scale to η_k. The slope of the straight line is 1.005, showing that eq. (6) is compatible with the experimental data. One can argue that eq. (6) is trivial, because for r < η_k, ε_r is constant and S_p ∝ r^p, so the scaling S_p ∝ S_3^{p/3} is obviously satisfied. Furthermore, for r in the inertial range, eq. (6) is certainly verified because S_3/ε ∝ r. However, in principle the proportionality constant of eq. (6) in the inertial and in the dissipative range could be different. The fact that experimentally they are found to be equal has several important consequences, which will be discussed in section 5.
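A sketch of the quantities entering fig. 5: the one-dimensional surrogate dissipation averaged over a window of size r, which can then be combined with S_3(r) and compared against S_6(r). The viscosity and grid spacing below are placeholder parameters.

```python
import numpy as np

# Sketch: coarse-grained 1-D surrogate dissipation eps_r for a velocity record.
# nu and dx are assumed illustrative values, not taken from any experiment.
def coarse_grained_dissipation(v, r, nu=1.0, dx=1.0):
    dudx = np.gradient(v, dx)                       # velocity derivative along the record
    eps = 15.0 * nu * dudx**2                       # 1-D surrogate of the local dissipation
    kernel = np.ones(r) / r
    return np.convolve(eps, kernel, mode="valid")   # eps_r: running average over a window of r points

# e.g. <eps_r^2> * S_3(r)^2 can then be plotted against S_6(r) to test eq. (6).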
Systems where ESS is not observed
In the previous section we have discussed several systems where not only the ESS works but also the exponents β(n, 3) are universal because they do not depend on the systems and on Re. We want to stress that this kind of universality, observed in different flows, disappears if the system is influenced by the presence of a strong mean shear. In this case ESS does not work, because an extended range of scaling is not present when S n is drawn as a function of S 3 . Violation of ESS has been observed experimentally in boundary layer turbulence [15] [23] and in the shear behind a cylinder [24]. In a recent numerical simulation [25] the effect of the shear on scaling laws has been carefully investigated using a Kolmogorov flow.
This simulation concerns a 3D fluid occupying a volume of V = L^3 grid sites with L = 160, forced in such a way that the stationary solution has a non-zero, spatially dependent mean velocity < v(x) > = x̂ sin(8πz/L), where x̂ is the unit vector in the x direction and L is the integral scale. In figures 6a and 6b we show the standard ESS analysis by plotting log(S_6) as a function of log(S_3) for two specific levels z_a and z_b, chosen at the levels of minimum and maximum shear respectively. The R_λ of the simulation was 40, and no scaling laws were present when examined as a function of r. Nevertheless, it is clear from figure 6a that ESS is observed in the case of minimum shear, while it is not observed in the case of maximum shear (figure 6b). In both figures, the dashed lines are the best fits done in the range between the 20th and 30th grid points and correspond to the slopes β(6, 3) = 1.78 and β(6, 3) = 1.43 for the minimum and maximum shear respectively. However, one finds that the generalized Kolmogorov similarity hypothesis, eq. (6), is satisfied also for values of r where ESS is no longer satisfied. In order to highlight this point we consider again the above mentioned Kolmogorov flow. In figs. 7a and 7b we show the scaling obtained by using eq. (6) at the corresponding z-levels of figures 6a and 6b for p = 6. As one can see, the generalized Kolmogorov similarity hypothesis is well satisfied in both cases, although for z_b ESS is not observed.
The other relationship which we have observed to hold from large to small scales even in absence of ESS is the moment hierarchy recently proposed in ref. [11] and rewritten in terms of velocity structure functions in ref. [26].
Hierarchy of structure functions
In a recent letter [11] She and Leveque have proposed an interesting theory to explain the anomalous scaling exponents of the velocity structure functions. The theory yields the prediction ζ(p) = p/9 + 2[1 − (2/3)^{p/3}], which is in very good agreement with the available experimental data [14]. The She-Leveque model is based upon a fundamental assumption on the hierarchy of the moments < ε_r^n > of the local energy dissipation. Specifically, they consider that

< ε_r^{n+1} > / < ε_r^n > = A_n [ < ε_r^n > / < ε_r^{n−1} > ]^β [ ε_r^{(∞)} ]^{1−β},    (8)

where the A_n are geometrical constants and ε_r^{(∞)} = lim_{n→∞} ( < ε_r^{n+1} > / < ε_r^n > ) is associated in ref. [11] with the filamentary structures of the flow. On the basis of simple arguments it is assumed that ε_r^{(∞)} ∝ r^{−2/3}. The value of β predicted in ref. [11] is 2/3. Notice that in eq. (8), for n = 1, taking into account that < ε_r > = ε is constant in r, one immediately finds eq. (9), where eq. (6) has been used. Equation (8), which has been experimentally tested in ref. [27], can be extended to the velocity structure functions [26]. Taking in eq. (8) the value n = p/3 and using equations (6) and (9), after some algebra one finds a corresponding hierarchy, eq. (10), for the velocity structure functions, where the C_p are geometry dependent constants and β' = β^{1/3}. Notice that eq. (10) is certainly valid for any β in the dissipative range, where S_p ∝ r^p. Equation (10) has been experimentally tested in ref. [26].
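For reference, the She-Leveque prediction quoted above can be evaluated and compared with the K41 value p/3 in a couple of lines:

```python
import numpy as np

# She-Leveque prediction zeta(p) = p/9 + 2*(1 - (2/3)**(p/3)) versus K41 p/3.
def zeta_she_leveque(p):
    p = np.asarray(p, dtype=float)
    return p / 9.0 + 2.0 * (1.0 - (2.0 / 3.0) ** (p / 3.0))

for p in range(1, 9):
    print(p, round(float(zeta_she_leveque(p)), 3), round(p / 3.0, 3))
```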
This can be seen in figures 8a and 8b, where the scaling obtained for various p using eq. (10) is reported for two different Re. As we have already observed in the case of eq. (6) (figure 5), the scaling extends from large to small scales even for values of r where ESS is no longer satisfied.
A generalized form of ESS
In sections 3 and 4 we have shown that the generalized RKSH, eq. (6), and the hierarchy of moments, eq. (10), are two relations which are satisfied even in flows where ESS is not observed. These results suggest that the concept of ESS could be generalized in such a way as to take the scaling relations (6) and (10) properly into account.
For this purpose we introduce the dimensionless structure function

G_p(r) = S_p(r) / [S_3(r)]^{p/3}.    (11)

According to Kolmogorov theory, (11) should be a constant both in the inertial and in the dissipative range, although the two constants are not necessarily the same. Because of the presence of anomalous scaling, the G_p(r) are no longer constants and, by using (6), we have

G_p(r) ∝ < ε_r^{p/3} >.    (12)

Thus the functions G_p(r) satisfy the hierarchies (8) and (10). Following the results of sections 3 and 4, equation (12) is valid for all scales, even in cases where ESS is not verified. Therefore, it seems reasonable to study the self-scaling properties of the G_p(r) or, equivalently, the self-scaling properties of the energy dissipation averaged on an interval of size r:

G_p(r) ∝ [G_q(r)]^{ρ(p,q)},    (13)

where we have by definition

ρ(p, q) = [ζ(p) − p/3] / [ζ(q) − q/3],    (14)

i.e. ρ(p, q) is given by the ratio between the deviations from the K41 scaling. It will play an essential role in our understanding of the energy cascade. Indeed, it is easy to realise that it is the only quantity that can stay constant along the whole cascade process, from the integral to the sub-viscous scales. It is reasonable to imagine that the velocity field becomes laminar in the sub-viscous range, S_p(r) ∝ r^p, while still preserving some degree of intermittency parametrized by the ratio between the corrections to the K41 theory. In order to check the validity of eq. (13) we have plotted in fig. 9 G_6(r) versus G_5(r) for many different experimental set-ups [24], [23], [17], [28], done at different Reynolds numbers, and for some direct numerical simulations with and without large scale shear. As one can see, the straight-line behaviour is very well supported: within experimental errors (of the order of 3%) no deviations from the scaling regime are detected. Similar results are obtained using different G_p(r) and G_q(r). There are two alternative ways to check (13). First of all, one can rewrite it in a fitted form, eq. (15), in which the measured slope is denoted r(p, q); if (13) is true, then ρ(p, q) should be equal to r(p, q). One can use (15) directly and perform a two-variable fit of ρ(p, q) and r(p, q). Then the quantity σ_{p,q}, defined in eq. (16), gives a measure of the accuracy of (13). We have computed σ_{p,q} for p and q in the range [1,8] for all the experimental and numerical results, the test having been done over all the experimental and numerical data available to us. The result tells us that the accuracy of (13) is extremely well verified.
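A sketch of the generalized-ESS quantities: G_p(r) computed from the structure functions, and ρ(p, q) estimated as the slope of log G_p versus log G_q, to be compared with the ratio of deviations from K41 in eq. (14).

```python
import numpy as np

# Sketch: generalized-ESS quantities G_p(r) = S_p / S_3**(p/3) and the relative
# exponent rho(p, q) from a straight-line fit of log G_p against log G_q.
def G(S_p, S_3, p):
    return S_p / S_3 ** (p / 3.0)

def rho_fit(G_p, G_q):
    slope, _intercept = np.polyfit(np.log(G_q), np.log(G_p), 1)   # slope = rho(p, q)
    return slope

def rho_theory(zeta_p, zeta_q, p, q):
    return (zeta_p - p / 3.0) / (zeta_q - q / 3.0)                # eq. (14)
```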
A second and independent check of (13) can be done by using (10). Indeed, (10) can be checked either for fixed p as a function of r, or for fixed r as a function of p. In the second case we may assume that the constants C_p in (10) are p-independent and, by plotting on a log-log scale F_{p+1} against F_p for fixed r and different p, we can estimate the exponent β'. If (13) is true, then we should observe the scaling (10) both at large scales and at very small scales with the same value of β'. Let us remark that the previous statement (on which our test of (13) is based) depends on the two assumptions that the log-Poisson hierarchy for the structure functions is true and that the constants C_p in (10) are p-independent. In figs. 10a and 10b we show a log-log plot of F_{p+1} against F_p for p = 1, ..., 6 and for r = 3η_k and r = 30η_k, for the case of two numerical simulations, namely Rayleigh-Benard thermal convection and channel flow. As one can see, a clear scaling is observed with the same scaling exponent β' both for small and for relatively large values of r. This confirms the quality of the generalized ESS scaling (13).
A theoretical interpretation
The aim of this section is to discuss a possible theoretical interpretation of the experimental and numerical results previously shown. Our starting point is to revise the concept of scaling in fully developed turbulence.
Let us consider three length scales r_1 > r_2 > r_3 and our basic variables for describing the statistical properties of turbulence, namely the velocity differences δv(r_i). We shall restrict ourselves to those statistical models of turbulence based on random multipliers. Thus we shall assume that there exists a statistical equivalence of the form

δv(r_i) = a_{ij} δv(r_j),    (18)

where r_i < r_j and a_{ij} is a random number with a prescribed probability distribution P_{ij}. By definition, we have

a_{13} = a_{12} a_{23}.    (19)

Equation (19) is true no matter what the ratios r_1/r_2 and r_2/r_3 are. Now we ask ourselves the following question: what is the probability distribution P_{ij} which is functionally invariant under the transformation (19)? This question can be answered by noting that equation (19) is equivalent to writing

log a_{13} = log a_{12} + log a_{23}    (20)

(we assume a_{ij} > 0). Thus our question is equivalent to asking which probability distributions are stable under convolution. For independently distributed random variables a solution of this problem can be given in complete form [29], [30]. If the variables are correlated, the situation becomes much more difficult to solve, as is well known from the theory of critical phenomena. For the time being we shall restrict ourselves to independent random variables. In this case, for instance, the Gaussian and the Poisson distributions are well known examples of probability distributions stable under convolution. These two examples correspond to two turbulence models proposed in the literature, namely the log-normal model [5] and the log-Poisson model [11], [31], [32], [33]. A more general description can be found in [30].
We can have a different point of view on our question which is fully equivalent to the above discussion. A simple solution to our question is given by all probability distribution P ij such that: for any functions g k (r i ) and γ k (p) ( · · · represents average over P ij ). Indeed we have: We want to remark that equation (21) represents the most general solution to our problem, independent of the scale ratio r i /r j . Let us give a simple example in order to link equation (21) to the case of probability distribution stable under convolution. Following [11], [31] and [32] let us consider the case of a random log-Poisson multiplicative process, namely: where x is a Poisson process P (x = N) = C N ij e −C ij N ! . By using (23) we obtain: Equation (24) is precisely of the form (21) if we write In order to recover the standard form of She-Leveque model we need to assume that (see also 31): This example highlights one important point in our discussion, i.e. the general requirement of scale invariant random multiplier (21) does not necessary imply a simple power law scaling as expressed by the equations (26)(27). Moreover, the general expression (21) is compatible only to infinitively divisible distribution. For instance, previous random multiplier model for turbulence, such as the β-random model or the p-model, cannot be expressed in the general form (21) independently of the ratio r i /r j . It is worthwhile to review the multifractal language at light of the previous discussion. In the multifractal language for turbulence, the two basic assumptions are: I) The velocity difference on scale r shows local scaling law with exponent h, i.e. δv(r) ∼ r h ; II) the probability distribution to observe the scaling δv(r) ∼ r h is given by In the multifractal language, therefore, there are two major ansatz: one concerns power law scaling of the velocity difference (assumption I) and the other one concerns a geometrical interpretation (the fractal dimension D(h)) of the probability distribution to observe a local scaling with exponent h. How is it possible to generalize the multifractal language in order to take into account equation (21)?
As we shall see, the theory of infinitively divisible distribution is the tool we need to answer the previous question. All published model of turbulence based on infinitively divisible distribution are equivalent to write D(h) in the form: where d 0 and h 0 are two free parameters while the function f (x) depends only on the choice of the probability distribution. For instance for log-normal distribution f (x) = x 2 . Equation (28) allows us to write: where We can see that equation (29) is equivalent to a random multiplicative process given by: Equation (31) can be generalized to the form (21) by allowing h 0 and d 0 to depend on r, i.e. where: Equation (32) is equivalent to (21) by using: The same results can be obtained by (29), i.e. we have Note that the saddle point evaluation of (29) is not spoiled by the dependence of h 0 and d 0 on r.
We have seen that (21) can be reformulated in terms of multifractal language for infinitively divisible distribution whose function D(h) can be rewritten as in (28). We can ask the following question: what is the physical meaning of (21) or its multifractal analogous (32-37)? It is precisely the multifractal language which allows us to answer this question. Indeed, the two basic assumption for the multifractal language can now be replaced in the following way: I) the velocity difference on scale r behaves as δv(r) ∼ g 1 (r)g 2 (r) x ; II) the probability distribution to observe I is g 2 (r) f (x) . Then we have by employing a saddle point integration. The most clear physical interpretation of (39) is that the probability to observe a given fluctuation of the velocity difference has no more geometrical interpretation linked to the fractal dimension D(h). The probability distributions are controlled by a dynamical variable g 2 (r) which at this stage we still need to understand. An insight on the dynamical meaning of g 2 (r) can be obtained by the following considerations. Let us define ǫ(r) the average of the energy dissipation on a scale r. We can define the eddy turnover time τ (r) on scale r as: We have seen that all experimental and numerical data suggest that the following relation is always ( see also eq.6): where = s means that all moments on the r.h.s. are equal to l.h.s. By using (40-41) we obtain the definition of length L(r): L(r) cannot be regarded as a real length scale in the physical space. Rather, L(r) should be considered as a dynamical variable entering into the statistical description of turbulence. This is precisely the idea behind ESS which reformulate the scaling properties of turbulence in terms of L(r). Indeed in order to obtain ESS from (39) it is sufficient to state that, within the range of scales where ESS is observed, g 1 (r) 1/h 0 ∼ g 2 (r) 1/d 0 ∼ L(r). The physical meaning of ESS is strictly linked to (42) and in particular to (41) which is a generalization of Kolmogorov Refined Similarity Hypothesis. Let us summarize all our previous findings: A) we have introduced the idea of scale invariant random multiplier satisfying equation (21); B) we have shown that infinitively divisible distributions are all compatible with(21); C) we have shown that the multifractal language specialized for the case of infinitively divisible distribution gives equation (21) (with n = 2 and γ 1 (p) linear in p) and it is equivalent to scale invariant random multiplier; D) finally we have argued that the correct scaling parameter to describe the statistical properties of small scale turbulent flows is not directly linked to a simple geometrical interpretation, rather it should be considered a dynamical variable. Our finding A-D enables us to have a unified theoretical interpretation of the experimental and numerical results presented at the beginning of this paper. Indeed equation (37) or (39) tells us that the anomalous part of the structure functions: satisfies the scaling properties: where ρ p,q ≡ ζp−p/3 ζq−q/3 . According to our analysis of the experimental and numerical results, the scaling (44) is observed down to the smallest resolved scale. We have shown that, in the theoretical framework so far exposed, we recover the ESS when g 1 (r) 1/h 0 ∼ g 1/d 0 2 ∼ L(r). If g 1 (r) 1/h 0 = g 2 (r) 1/d 0 we lose ESS, but its generalized version (44) is still valid.
Synthetic Turbulence
We can also use (37) and (39) to simulate a synthetic signal according to a random multiplicative process satisfying (32). This can be done by using the algorithm recently introduced in [34]. Let us consider a wavelet decomposition of the function φ(x): where ψ_{j,k}(x) = 2^{j/2} ψ(2^j x − k) and ψ(x) is any wavelet with zero mean. The above decomposition defines the signal as a dyadic superposition of basic fluctuations with different characteristic widths (controlled by the index j) and centred at different spatial points (controlled by the index k). For functions defined on N = 2^n points in the interval [0, 1], the sums in (45) are restricted to run from zero to n − 1 for the index j and from zero to 2^j − 1 for k [35].
In [34] it has been shown that the statistical behaviour of the signal increments: is controlled by the coefficients α_{j,k}. By defining the α coefficients in terms of a multiplicative random process on the dyadic tree it is possible to give an explicit expression for the scaling exponents ζ(p). For example, it is possible to recover the standard anomalous scaling by defining the α tree in terms of the realizations of a random variable η with a probability distribution P(η): α_{0,0}; α_{1,0} = η_{1,0} α_{0,0}; α_{1,1} = η_{1,1} α_{0,0}; α_{2,0} = η_{2,0} α_{1,0}; α_{2,1} = η_{2,1} α_{1,0}; α_{2,2} = η_{2,2} α_{1,1}; α_{2,3} = η_{2,3} α_{1,1}, and so on. Let us note that in the previous multiplicative process different scales are characterized by different values of the index j, i.e. r_j = 2^{−j}. If the η_{j,k} are i.i.d. random variables it is straightforward to see that the α_{j,k} are random variables with moments given by: where the "mother eddy" α_{0,0} has been chosen equal to one. In (47), ⟨···⟩ denotes averaging over the P(η) distribution. In [34] it has been shown that the signal φ(x) also has the same anomalous scaling as (47).
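As an illustration of the dyadic construction just described, the sketch below (our own, not the algorithm of [34] verbatim) builds the α_{j,k} tree with i.i.d. multipliers, checks the moment relation (47) numerically, and superposes Haar wavelets to synthesize a signal; the log-normal choice of P(η) and its unit-mean parameterization are assumptions made only for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 14                                                # dyadic levels, N = 2**n points

def build_alpha_tree(n_levels, sample_eta):
    """alpha[j] holds the 2**j coefficients of level j; each child is its parent
    times an independent multiplier eta (the 'mother eddy' alpha_{0,0} = 1)."""
    alpha = [np.ones(1)]
    for j in range(1, n_levels):
        parent = np.repeat(alpha[-1], 2)              # children 2k, 2k+1 of parent k
        alpha.append(parent * sample_eta(parent.size))
    return alpha

# one possible choice of P(eta): i.i.d. log-normal multipliers with unit mean
mu = 0.05
sample_eta = lambda size: np.exp(rng.normal(-mu, np.sqrt(2.0 * mu), size))

alpha = build_alpha_tree(n, sample_eta)

# check of eq. (47): <alpha_{j,k}^p> = <eta^p>^j (here p = 2, j = 10, up to sampling error)
eta = sample_eta(10 ** 6)
print("empirical <alpha^2> at j=10:", np.mean(alpha[10] ** 2))
print("predicted <eta^2>^10      :", np.mean(eta ** 2) ** 10)

# synthesize phi(x) = sum_{j,k} alpha_{j,k} psi_{j,k}(x) with Haar wavelets
N = 2 ** n
x = np.arange(N)
phi = np.zeros(N)
for j, level in enumerate(alpha):
    width = N >> j                                    # support of psi_{j,k} at level j
    k = x // width                                    # index of the wavelet containing x
    haar = np.where((x % width) < width // 2, 1.0, -1.0)
    phi += 2 ** (j / 2.0) * level[k] * haar           # random signs are often added; omitted here
```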
In order to generalize this construction to functions showing ESS or the generalized-ESS scaling of the form (37) and (44), it is now sufficient to take a probability distribution P_l(η) for the random multiplier with the appropriate scale dependency (21). This is implemented by allowing a dependency of P(η_{j,k}) on the scale r_j = 2^{−j}, i.e. the η random variables will still be independently distributed, but not identically distributed with respect to variations of the scaling index j. According to the previous discussion, ESS corresponds to having only one seed function defining the multiplicative process, i.e. g_1(r)^{1/h_0} = g_2(r)^{1/d_0} in the range of scales where ESS is valid (r ≳ 5η_k). On the other hand, at scales smaller than 5-6 Kolmogorov scales, ESS is no longer valid because g_2(r)^{1/d_0} begins to deviate substantially from g_1(r)^{1/h_0}: only G-ESS should be observed, and we need a multiplicative process defined in terms of two different seed functions. Following this recipe we define the signal such that: The function G(r) is defined in such a way that, for F(r) much greater than η_k/L, G(r) ∼ F(r), while for very small scales r we have G(r) ∼ η_k/L. In the following we choose the simplest ansatz: with B = η_k/L and A a dimensionless constant.
Let us now spend a few words clarifying the previous definitions. Relation (48) is defined in such a way that the experimental results are reproduced with good accuracy and the G-ESS scaling (44) is satisfied by definition. By assuming (50), the only unknown function is F(r) = ⟨δv(r)^3⟩/U_0^3. On the other hand, the function ⟨δv(r)^3⟩ is always very well fitted by the Batchelor parameterization:
From expression (48) it is immediate to extract the expressions for the two seed functions g_1(r) and g_2(r) used in the previous sections, namely: Let us note that g_1(r) goes smoothly from the intermittent value, g_1(r) ∼ r^{h_0} (h_0 = 1/9 for the case of the She-Leveque model), assumed in the inertial range, to the laminar value, g_1(r) ∼ r, characteristic of scales much smaller than the Kolmogorov scale. From the practical point of view, we have constructed our signal by using a random process for the multiplier η_{j,k}(r_j) with a scale-dependent Log-Poisson distribution. The scale dependency of the parameters entering the distribution has been fixed in terms of relations (52) and (25), and such that the ζ(p) exponents correspond to the She-Leveque [11] expression, namely: where x_{j,j+1} is a Poisson variable with mean C_{j,j+1} = log(g_2(r_{j+1})/g_2(r_j)), A_{j,j+1} = g_1(r_j)/g_1(r_{j+1}) and β = 2/3. This choice leads to the standard Log-Poisson scaling in the inertial range: and to the following expression for the ratios of deviations from the Kolmogorov law: The signal constructed according to this scenario will be referred to as signal A in the following. In fig. 11 we show the structure function of order 6 for such a signal plotted versus the separation scale r at moderate Reynolds number. Clearly, for this choice of Reynolds number there is no inertial range of scales where scaling exponents could be safely measured. On the other hand, our signal shows G-ESS scaling, as can be seen in fig. 12.
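The fragment below sketches how such a log-Poisson multiplier can be sampled in the inertial range. Since the explicit expressions (48-52) are not reproduced here, the constants A = 2^{-h_0}, C = 2 ln 2 and the jump factor β^{1/3} are our own parameterization, chosen only so that the one-octave moments reproduce the She-Leveque exponents; in signal A these constants would acquire an r-dependence through g_1(r) and g_2(r).

```python
import numpy as np

rng = np.random.default_rng(2)

def zeta_she_leveque(p, h0=1.0 / 9.0, beta=2.0 / 3.0):
    """She-Leveque exponents zeta(p) = p/9 + 2*(1 - (2/3)**(p/3))."""
    return p * h0 + 2.0 * (1.0 - beta ** (p / 3.0))

def octave_multiplier(size, h0=1.0 / 9.0, beta=2.0 / 3.0):
    """Log-Poisson multiplier eta for one cascade octave r -> r/2, eta = A * gamma**x
    with x ~ Poisson(C).  A, C and gamma are chosen so that <eta^p> = 2**(-zeta(p)),
    i.e. the inertial-range She-Leveque scaling of the velocity increments."""
    A = 2.0 ** (-h0)               # smooth part (g1 ~ r**h0 across one octave)
    C = 2.0 * np.log(2.0)          # Poisson mean (codimension-2 structures)
    gamma = beta ** (1.0 / 3.0)    # discrete jump factor
    return A * gamma ** rng.poisson(C, size)

# numerical check of the moment relation <eta^p> = 2**(-zeta(p))
eta = octave_multiplier(10 ** 6)
for p in (2, 4, 6):
    print(p, np.mean(eta ** p), 2.0 ** (-zeta_she_leveque(p)))
```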
Multiscaling
We now turn our attention to a different question connected to the theoretical results discussed so far, namely the role played by viscous effects. It is generally argued that anomalous scaling can be observed only for scales larger than a given viscous cutoff. The physical interpretation of this statement is that the non-linear, intermittent transfer of energy acts only at scales larger than the viscous scale. Below such a scale the structure functions are supposed to show a simple (regular) scaling δv^p(r) ∼ r^p.
Usually the viscous cutoff is introduced as the scale at which the local Reynolds number is of order one, namely: This condition can be obtained by requiring that the local energy transfer ǫ(r) ∼ δv^3(r)/r becomes equal to the energy dissipation ν δv^2(r)/r^2: which gives equation (54). There is a well-defined prediction, based on (54), formulated by Frisch and Vergassola using the multifractal language. Indeed, for any exponent h one can introduce the h-dependent viscous cutoff given by: where δv(r) ∼ r^h. It follows that r_d(h) is a fluctuating quantity. There are two consequences of this theory. The first one predicts that for the structure functions δv^p(r) there exists a cutoff scale r_p which depends on p and, moreover, r_p < r_q for p > q. The second prediction concerns the moments of the velocity gradients Γ, which are: Of the two predictions, the first is the more qualitatively distinctive consequence of (56). In particular, the first prediction states that between the end of the inertial range (i.e. the region where anomalous scaling of δv^p(r) with respect to r is detected) and the dissipation cutoff r_d, the local slope is controlled in a rather complicated way by D(h). The second prediction is somewhat weaker, because present experimental data do not distinguish among the several models proposed so far for the Re-dependence of Γ_p. In order to compare the multiscaling in the dissipation range with our experimental and numerical data, we have produced a synthetic turbulence signal (signal B hereafter) similar to the one already discussed, but with d_0 and h_0 independent of r. The effect of dissipation is introduced by using (54). In fig. 13 we compare the local scaling exponents d(log δv^p)/d(log r) for p = 6 between the two synthetic signals. In fig. 14 we plot the relation (41) for p = 6 for signals A and B. Finally, in fig. 15 we plot the ratio G_4/G_6^{ρ_{4,6}} for signal B. By comparing figures 13, 14 and 15 with the analogous experimental and numerical results discussed in the previous sections (see figs. 5, 7, 9), we can state that the quantitative and qualitative prediction based on (56) is not verified by the experimental data. On the other hand, signal A, based on an explicit r-dependence of d_0 and h_0, seems to be more closely related to what is observed experimentally. Let us remark that signal A has no cutoff effect imposed by condition (54).
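The local-slope diagnostic of fig. 13 can be reproduced with a few lines of Python; the sketch below (our own, using a placeholder record in place of signals A and B) estimates d(log S_p)/d(log r) by centred differences on a logarithmic grid of separations.

```python
import numpy as np

def local_slopes(v, seps, p):
    """Local scaling exponent d(log S_p)/d(log r), estimated by centred differences."""
    Sp = np.array([np.mean(np.abs(v[r:] - v[:-r]) ** p) for r in seps])
    logS, logr = np.log(Sp), np.log(seps)
    return seps[1:-1], (logS[2:] - logS[:-2]) / (logr[2:] - logr[:-2])

rng = np.random.default_rng(3)
v = np.cumsum(rng.standard_normal(2 ** 16))           # placeholder record
seps = np.unique(np.logspace(0, 3.5, 25).astype(int))
r_mid, slopes = local_slopes(v, seps, p=6)
for r, s in zip(r_mid, slopes):
    print(f"r = {r:5d}   d(log S_6)/d(log r) = {s:.2f}")
```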
The above discussion rules out the effect of multiscaling on the viscous cutoff (56). Previous claims of the validity of multiscaling effects should be considered either wrong or affected by experimental errors. On the other hand, our model, used to implement the synthetic signal A, should be considered a very accurate model even for scales close to the regular region where δv(r) ∼ r.
There is, however, a theoretical question concerning multiscaling which we are still not able to answer completely and which we shall try to formulate in the following. There are two possible scenarios in which a viscous cutoff may be considered.
In the first scenario (let us call it scenario I) we can imagine considering equation (54) as a fundamental relation, independent of any other theoretical considerations. The idea is that when the local Reynolds number is sufficiently small, non-linear effects must be neglected. In order to compute the viscous scale, one should make use of the relation: obtained from the two definitions (38) and (48). In equation (58), h is now the standard multifractal scale-independent exponent. The generalized scaling (58) should be considered to be realized with probability: Inserting (58) into (54) we obtain: where r_d(h) is the fluctuating cutoff. We now look for a solution of equation (60) in a region where After some algebra we obtain Thus r_d(h) is a fluctuating quantity as in (56). These fluctuations, however, occur in the region where δv(r) ∼ r, and therefore no effect on the scaling of the structure functions is produced. From (58-61) we can compute the scaling of Γ_p as a function of Re. The scaling is independent of r_d(h) and is: consistent with (58) in the limit r → 0. Note that Γ_2 ∼ Re^{(3/4)(2−ζ_2)}. This implies that for ζ_2 ≠ 2/3, Γ_2 does not scale as Re. If we want to recover the experimental fact that ǫ is constant with Re, we should allow for a Re-dependent constant in (58). At any rate, because ζ_2 − 2/3 is a small quantity, these effects are quite small over the full range of available Re numbers. We can summarize scenario I as follows: the scalings (44) and (58) are verified at all scales; the condition (54) introduces a viscous cutoff which fluctuates in the region where δv(r) ∼ r; the intermittency of the velocity-field gradients is prescribed by (58). The above conclusions imply that the scaling (41) must be violated near the viscous cutoff, as one can immediately check by an explicit computation. One can take the opposite point of view and assume that (41) is a fundamental relationship which must not be violated. This corresponds to the second scenario.
In the second scenario, (54) is disregarded and one generalizes (55) as: where Γ is the velocity gradient and r_d is the viscous cutoff. In order to compute r_d, one observes that Γ ∼ 1/τ(r_d), where τ(r_d) is the eddy turnover time at the viscous cutoff. We obtain: where, following (42), we used τ(r_d) δv(r_d) = δv^3(r_d)/ǫ. Once again, by using (58), (61) and (62) we can obtain an explicit formula for r_d(h): Thus, also in scenario II we have strong fluctuations of the viscous cutoff. The computation of the gradients is quite straightforward from (65). We have: where we have used (58). Finally, by using (59) and (62) we get: Note that in scenario II Γ_2 ∼ Re because ζ_3 = 1.
Because of (69), scenario II violates (58) and (44) for scales smaller than r_d(h), while (41) is always satisfied. It is quite difficult to understand which of the two scenarios is actually realized in the experimental and numerical data. In most cases the scale resolution does not reach the region where δv(r) ∼ r. At any rate, either (41) or (44) should be violated at very small scales as a result of viscous effects. This violation is rather small and may not be easily detectable at low or moderate Reynolds numbers. The common point of the two scenarios is that the viscous cutoff (if any) acts at scales where the velocity structure functions already behave in a regular way, i.e. δv(r) ∼ r.
Conclusions
In this paper we have presented several new results concerning the scaling behaviour of the small-scale statistical properties of turbulence. It is worth summarizing our main findings and outlining the questions which still remain to be answered. 1) We have reviewed the main results on ESS and, in particular, we have shown that in homogeneous and isotropic turbulence, in Rayleigh-Benard convection and in solar-wind magnetohydrodynamics, the ratio ζ(p)/ζ(3) seems to have a universal behaviour. This is a rather striking and unexpected result, which implies that the anomalous violation of dimensional scaling may be explained in a universal way. We do not know of any simple phenomenological explanation for this finding.
2) We have shown that ESS is not observed when relatively strong shear flows are present. A phenomenological analysis, based on the Kolmogorov equation, shows the relevance of a length scale based on the mean energy dissipation and the shear strength. This analysis should be refined in order to obtain more quantitative predictions. At any rate, our observation suggests that previously reported violations of ESS may be due to the presence of shear flows.
3) We have shown that the refined Kolmogorov similarity can be generalized by including ESS. This generalization is verified extremely well in both experiments and numerical simulations. More importantly, we have shown that the generalized refined Kolmogorov similarity also holds in cases where ESS is not observed.
4) Similarly to the previous point, we have shown that the hierarchy relation for the structure functions based on the Log-Poisson distribution is very well supported by experimental data, also at very small scales where ESS is not observed.
5) Based on our results in 1)-4) we have proposed a generalization of ESS. This generalization is supported by both experimental and numerical data and seems not to be affected by the viscous cutoff.
6) We have developed a theory which unifies the previous points. The theory is based on the assumption that the probability distribution is infinitely divisible and predicts the existence of the generalized ESS. The theory can also be used to generate artificial signals which display all the scaling features observed in real data.
7) We have shown that the original proposal of multiscaling for the viscous cutoff is incompatible with the turbulence data. The theory formulated in this paper removes this incompatibility and suggests that multiscaling acts at much smaller scales than previously proposed. The new point of the theory is a change of view on the probability distribution of the original multifractal model, which is no longer directly linked to a geometrical interpretation in terms of fractal dimensions.
8) Finally, we have shown that violations of either the generalized refined Kolmogorov similarity or the generalized ESS should occur at very small scales. Our present data analysis does not allow us to distinguish between the two possibilities.
Figure 2: Structure functions S_6 as a function of S_3 at R_λ = 800 (a) and R_λ = 140 (b), computed from the same data set as fig. 1. Vertical dashed lines indicate the value of S_3 at 5η_k.
Figure 3: Logarithm of the ratio of the universal functions f_n/f_2 for the two cases n = 6 (diamonds) and n = 4 (circles) for the wake behind the cylinder of fig. 1, as a function of r/η_k.
Figure 4: Dependence of the exponent β(6, 3) on Re (R_λ ≃ 1.4 Re^{1/2}). The last point is from ref. [3]. See also ref. [21].
Figure 5: Log-log plot of ⟨ǫ_r^2⟩ S_3(r)^2 against S_6(r) at R_λ = 500. The straight line refers to the slope 1.005. Data are from an experiment on turbulence behind a cylinder; the measurement point was at about 25 diameters downstream.
Figure 6: a) Log-log plot of the ESS scaling of the longitudinal structure function S_6(r) versus S_3(r). Data are taken from a numerical simulation of a shear flow at R_λ = 40. The dashed line is the best fit, with slope 1.79. Every point in the plot corresponds to a grid point and the lattice spacing is ∼ 1η_k wide. The structure functions are computed at points of the flow where the shear has a minimum. b) The same as (a) but for points where the shear is maximum. The dashed line is the best fit, with slope 1.43. In contrast with the previous case, ESS is not observed.
Figure 7: Check of eq. (6) for p = 6 using the same numerical simulation of the shear flow discussed in fig. 6 (log-log plot). The energy dissipation has been computed using the 1-dimensional surrogate in order to compare this result with laboratory experiments (see fig. 5). (a) Points of minimum shear. (b) Points of maximum shear.
The points refer to the scales at 2, 4, 5, 8, 10, 16, 20, 32 and 40 grid points, and the dashed line is the best fit over these points, corresponding to a slope of 0.99 for both the minimum- and maximum-shear data. Although in this case ESS is not observed (see fig. 6b), the generalized refined Kolmogorov hypothesis, eq. (6), works within 3%.
Figure 8: The function F_{p+1}(r) defined in eq. (10) is plotted, for several values of p, as a function of (F_p(r))^{β'} · F(r), with β = 2/3, β' = β^δ and δ = 1/3. R_λ = 140 in (a) and R_λ = 800 in (b). The 5 curves in (a) and (b) correspond to p = 1, 2, 3, 4, 5 starting from the bottom lines. They have been vertically shifted by −0.4, −0.2, 0, 0.2, 0.4 in order to separate them. The solid lines have slope 1. Logarithms are base 10.
Figure 9: Log-log plot of G_6(r) versus G_5(r) for different laboratory and numerical experiments. (+) Data taken in a wake behind a cylinder where standard ESS was not observed [24]. (•) Data taken from the log-profile region of a boundary layer (courtesy of G. Ruiz Chavarria), where standard ESS was not observed. (squares) Data taken from a numerical simulation of thermal convection [17] where standard ESS was observed. (∆) Data taken from a direct numerical simulation of a channel flow where standard ESS was not observed [28].
Figure 10: Log-log plot of F_{p+1} against F_p for p = 1, ..., 6 and r = 3η_k (squares) and r = 30η_k (circles) for two numerical simulations, namely Rayleigh-Benard thermal convection (a) and channel flow (b). As one can see, a clear scaling is observed with the same scaling exponent β' both for small and for relatively large values of r. This confirms the quality of the generalized ESS scaling (13).
Figure 11: Log-log plot of the 6th-order structure function for signal A with 19 fragmentation levels at small Reynolds number. Notice the absence of any scaling range.
Figure 12: G-ESS (log(G_6(r)) vs log(G_4(r))) for signal A at the same Reynolds number as figure 11. The slope is ρ_{4,6} = 0.241, in perfect agreement with the theoretical prediction obtained from (14), where for ζ(p) we have used the She-Leveque expression.
Figure 13: 6th-order local scaling exponents for signal A (circles) and signal B (squares). Notice that the qualitative behaviour of the two signals is almost the same: both go from an intermittent scaling (ζ(6) ∼ 1.8) at large scales to a laminar scaling (ζ(6) = 6) at small scales.
Figure 14: Generalized refined Kolmogorov hypothesis (41) for the 6th-order structure function for both signal A (squares) and signal B (circles). Notice the sudden jump at the Kolmogorov scale present when multiscaling is valid (signal B).
Figure 15: Compensated slope G_4(r)/G_6^{ρ_{4,6}}(r) for signal B. Notice deviations of the order of 10%, while for signal A the same quantity is constant by definition.
Table 1: We show some measured values of ζ(p) and β(p, 3) for 1 ≤ p ≤ 8. In the second column we report the ζ(p) measured in 3D homogeneous and isotropic turbulence (30 < R_λ < 2000), in the third column the ζ(p) measured in Rayleigh-Benard convection when the Bolgiano scaling is the relevant one (R_λ ≃ 30), and in the fourth column the ζ(p) obtained from measurements of the solar wind (R_λ ≃ 5 × 10^6). We note that the ζ(3) of the last two cases are clearly very different from 1, which is the value of ζ(3) in the second column. The ratios β(p, 3) computed from the values of the third and fourth columns are shown in the fifth and sixth columns, respectively.
The β(p, 3) are equal within error bars to those of the first column.
Enriching CCL3 in the Tumor Microenvironment Facilitates T cell Responses and Improves the Efficacy of Anti-PD-1 Therapy
Chemokines are key factors that influence the migration and maintenance of relevant immune cells in an infected tissue or a tumor microenvironment. Therefore, it is believed that the controlled administration of chemokines in the tumor microenvironment may be an effective immunotherapy against cancer. Previous studies have shown that CCL3, also known as macrophage inflammatory protein 1-alpha, facilitates the recruitment of dendritic cells (DCs) for the presentation of tumor Ags and promotes T cell activation. Here, we investigated the role of CCL3 in regulating the tumor microenvironment using a syngeneic mouse tumor model. We observed that MC38 tumors overexpressing CCL3 (CCL3-OE) showed rapid regression compared with wild type MC38 tumors. Additionally, these CCL3-OE tumors showed an increase in proliferative and functional tumor-infiltrating T cells. Furthermore, PD-1 immune checkpoint blockade accelerated tumor regression in the CCL3-OE tumor microenvironment. Next, we generated a modified CCL3 protein for pre-clinical use by fusing recombinant CCL3 (rCCL3) with a non-cytolytic hybrid Fc (HyFc). Administering a controlled dose of rCCL3-HyFc via subcutaneous injections near tumors effectively induced tumor regression and improved survival, along with activated myeloid cells and augmented T cell responses. Furthermore, combination therapy of rCCL3-HyFc with PD-1 blockade exhibited a prominent effect on tumor regression. Collectively, our findings demonstrate that appropriate concentrations of CCL3 in the tumor microenvironment would be an effective adjuvant to promote anti-tumor immune responses, and suggest that administering a long-lasting form of CCL3 in combination with PD-1 blockers can have clinical applications in cancer immunotherapy.
INTRODUCTION
Development of immune checkpoint blockades (ICBs) has revolutionized the field of cancer immunotherapy. Tumor cells escape immunosurveillance and evade the host immune system by employing diverse mechanisms, such as up-regulating inhibitory molecules that suppress anti-tumor immune responses (1). ICBs have exhibited significant clinical potential in the treatment of solid tumors and have extended patient life-span (2). Although these immunotherapies induce anti-tumor immunity in some patients, most cancer patients do not respond to ICB immunotherapy (3).
A number of studies have investigated the factors responsible for the ineffectiveness of ICB immunotherapy, and suggested that the magnitude and composition of the immune cells that infiltrate the tumor microenvironment are important factors influencing the efficacy of ICB immunotherapy (4). Consequently, this has led to the categorization of tumors into hot (inflamed) or cold (non-inflamed) tumors (5). Hot tumors have abundant numbers of CD8+ T cells and Ag-presenting cells (APCs), such as DCs, infiltrating the tumor microenvironment. These tumors display significantly increased responsiveness to ICBs due to the enrichment of immune cells (6). In contrast, cold tumors harbor high numbers of immunologically suppressive cells, such as regulatory T cells and myeloid-derived suppressor cells (MDSCs), instead of CD8+ T cells and APCs, and exhibit poor responsiveness to ICB therapy (7). Therefore, enrichment of tumor-infiltrating CD8+ T cells or DCs could be a promising strategy to elevate responsiveness to immunotherapy.
Chemokines, defined as chemotactic cytokines, are small secretory proteins that bind seven-transmembrane G protein-coupled receptors, which induce intracellular signaling pathways (8,9). Chemokines are required when immune cells migrate for homeostasis and development, and for protecting the host from infections or tumors (10,11). A diverse set of chemokines is expressed by tumor cells as well as by immune cells, endothelial cells, and stromal cells in the tumor microenvironment. Depending on the level of chemokine expression, the composition of tumor-infiltrating immune cells may change and eventually affect the immune response leading to tumor regression (12,13). There have been multiple pre-clinical attempts to modulate chemokines and the corresponding receptors to increase the responsiveness of anti-tumor effector immune cells, such as CD8+ T cells, to ICB therapy. For instance, the expression of CXCL9 and CXCL10 in the tumor microenvironment, chemokines that are well known for their ability to recruit T cells and NK cells to target sites via the CXCR3 chemokine receptor, was reported to correlate negatively with cancer metastasis (14)(15)(16). Additionally, epigenetic silencing of the CXCL9 and CXCL10 genes has been associated with reduced T cell infiltration. Treating mice with 3-deazaneplanocin A (DZNep), an epigenetic reprogramming drug and enhancer of zeste 2 polycomb repressive complex 2 subunit (EZH2) inhibitor, led to a CXCL9- and CXCL10-dependent increase in tumor infiltration of T cells and a subsequent increase in the efficacy of PD-L1 blockade (17). Another study showed that decitabine (DAC), a DNA methyltransferase inhibitor, enhanced CXCL10 expression and subsequently recruited NK cells and CD8+ T cells into the tumor microenvironment of a murine ovarian cancer model. Furthermore, CTLA-4 blockade therapy was potentiated in combination with DAC (18). In contrast, CXCL12, which binds CXCR4 and is expressed by tumor cells as well as by stromal cells in the tumor microenvironment, has been reported to promote tumor angiogenesis together with VEGF (19) and to sustain cancer cell proliferation and survival (20). These studies indicated that blockade of CXCR4 using neutralizing Abs repressed tumor growth and metastasis in lymphoma and brain tumor models (21,22). Furthermore, combination therapy with LY2510924, a CXCR4 peptide antagonist, and durvalumab, a PD-L1 blocking Ab, has been clinically tested for solid tumors (23).
CCL3, also known as macrophage inflammatory protein 1-alpha, is a cytokine belonging to the CC chemokine family. CCL3 is released when CD4+ T cells interact with DCs, and it recruits CCR5-expressing CD8+ T cells into specific sites for in vivo activation and proliferation (24,25). Moreover, CCL3 was demonstrated to play a dominant role in the activation, maturation, and migration of CD11c+CD11b+ cells to the cervical draining lymph nodes in murine hepatitis virus-infected mice, an infection model of the central nervous system. The migration of DCs into the LN was inhibited in the absence of CCL3, which resulted in diminished IFN-γ expression by Ag-specific T cells and increased in vivo levels of IL-10 (26). Additionally, CCL3 has been shown to be a potential adjuvant for DNA vaccines. Combining the HIV Gag DNA vaccine with CCL3 led to markedly enhanced infiltration of CD11c+ DCs, and injecting a CCL3-encoding plasmid protected against viral infection with a nearly 200-fold reduction in virus titers (27). The production of CCL3 by B cells in the tumor microenvironment of patients with human melanoma promoted immune responses, while depletion of CCL3-secreting B cells by anti-CD20 decreased tumor-associated inflammation and CD8+ T cell numbers. This indicated that the CCL3 secreted by B cells was important for augmenting immune responses in patients with melanoma (28). Additionally, in the HCmel12 murine tumor model, basophils expressed large amounts of CCL3 and CCL4, and depletion of basophils or blockade of these chemokines inhibited CD8+ T cell infiltration into the tumor microenvironment (29).
Taking into account the protective function of CCL3, we investigated the consequences of administering CCL3 in the tumor microenvironment in combination with PD-1 immune checkpoint blockers. We engineered MC38, a murine colon adenocarcinoma cell line, to overexpress CCL3, and then analyzed tumor growth and changes in the characteristics of immune cells. The CCL3-overexpressing MC38 tumors exhibited delayed growth. Additionally, CD8+ T cells proliferated vigorously in the CCL3-enriched tumor microenvironment, tumor-specific CD8+ T cells displayed the ability to produce IFN-γ, and there was an increase in the DC population. Furthermore, CCL3-mediated tumor regression was accelerated when mice were treated with Abs that block PD-1. For pre-clinical use, we generated a long-lasting form of CCL3, composed of recombinant CCL3 (rCCL3) and a non-cytolytic hybrid Fc (HyFc). Mice that were subcutaneously administered an appropriate dose of rCCL3-HyFc near tumors exhibited delayed tumor growth and enhanced survival compared to those administered the control HyFc. Also, mice treated with the appropriate dose of rCCL3-HyFc had more mature myeloid cells and functionally enhanced T cells compared with the other groups. Surprisingly, mice treated with the combination of rCCL3-HyFc and αPD-1 showed significantly delayed tumor growth compared to treatment with either agent alone. Therefore, our results indicate that CCL3 enrichment augments tumor regression and positively reshapes immune cell populations in the tumor microenvironment, suggesting that rCCL3-HyFc might have potential as an adjuvant for enhancing the effect of presently used αPD-1 immunotherapeutic agents.
Mice
All animal experiments were approved by the Institutional Animal Care and Use Committee at Yonsei University and were performed in accordance with the Laboratory Animal Act of the Korean Ministry of Food and Drug Safety, which improves the ethics and reliability of animal testing through the appropriate administration of laboratory animals and animal testing (permit No. IACUC-A-201907-921-03 and IACUC-A-202007-1097-01).
Tumor cell transfection
The MC38 cells were a gift from Seung-woo Lee's laboratory (POSTECH, Pohang, Korea). They were cultured in DMEM (Corning, New York, NY, USA) supplemented with 10% FBS (Thermo Fisher Scientific, Waltham, MA, USA) and 1% penicillin/streptomycin (Thermo Fisher Scientific). To overexpress CCL3, MC38 cells were transfected with the pCDNA3.1 vector (Addgene, Watertown, MA, USA) that encodes the mouse CCL3 gene under the cytomegalovirus promoter and were selected with hygromycin (200 μg/ml). Selected MC38 clones that overexpressed CCL3 were seeded into 6 well plates at a concentration of 10 6 cells/4 ml media, and the culture supernatants were collected after 48 h. CCL3 expression in the supernatants was analyzed using ELISA (Thermo Fisher Scientific).
Mouse tumor model
Wild type (WT), CCL3-overexpressing (CCL3-OE), or control (mock) MC38 cells (5×10^5 cells) in PBS were injected subcutaneously into age-matched (6-8 wk) B6 mice. Tumor sizes were measured at the indicated time points with a caliper, and the following formula was used for all calculations: volume = 1/2 × length × width^2. Mice bearing tumors larger than 2,000 mm^3 were considered dead.
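For illustration only, the volume formula and the 2,000 mm³ endpoint translate into the following small helper functions (our own convenience code, not part of the study's materials).

```python
def tumor_volume(length_mm: float, width_mm: float) -> float:
    """Ellipsoid approximation used in the study: V = 1/2 * length * width^2 (mm^3)."""
    return 0.5 * length_mm * width_mm ** 2

def reached_endpoint(length_mm: float, width_mm: float, limit_mm3: float = 2000.0) -> bool:
    """True when the calculated volume exceeds the 2,000 mm^3 scoring endpoint."""
    return tumor_volume(length_mm, width_mm) > limit_mm3

print(tumor_volume(18.0, 15.0), reached_endpoint(18.0, 15.0))   # 2025.0 True
```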
In vivo treatments
Mice were randomly divided into different treatment groups once the tumor size reached 30-50 mm^3. To ensure that the tumor sizes of the different groups were approximately equivalent before therapy, mice were stratified based on the size of the implanted tumor. Each mouse was treated intraperitoneally with 200 μg of isotype control or anti-PD-1 (RMP1-14, Bio X Cell) Abs every 3 days. For rCCL3-HyFc treatment, recombinant mouse CCL3-HyFc and its control HyFc were supplied by Genexine, Inc. (Seongnam, Korea). Each mouse was injected subcutaneously with 350 ng of HyFc or with rCCL3-HyFc at the indicated dose.
Statistical analysis
Statistical analysis was performed using Prism software version 7.0 (GraphPad, San Diego, CA, USA). A 2-tailed unpaired Student's t-test was performed to determine differences between 2 groups. Comparisons between multiple groups were performed using 1-way ANOVA with post hoc Tukey's test or 2-way ANOVA with post hoc Tukey's test. Kaplan-Meier survival curves were analyzed using the Mantel-Cox log-rank test with a 95% confidence interval. Details about the statistical tests, exact values of n, precision measures, and statistical significance for each experiment are reported in the figure legends.
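The same analyses can be reproduced outside Prism; the sketch below (our own, with hypothetical measurement arrays) uses SciPy for the unpaired t-test and the one-way ANOVA and the lifelines package for the log-rank comparison of Kaplan-Meier survival curves; Tukey's post hoc test would require an additional package such as statsmodels.

```python
import numpy as np
from scipy import stats
from lifelines.statistics import logrank_test

# hypothetical tumor volumes (mm^3) on day 20 for two groups
mock    = np.array([1450, 1620, 1380, 1710, 1550])
ccl3_oe = np.array([620, 540, 880, 710, 660])

# two-tailed unpaired Student's t-test (two groups)
print("t-test p =", stats.ttest_ind(mock, ccl3_oe).pvalue)

# one-way ANOVA for more than two groups
hyfc_500 = np.array([720, 810, 650, 700, 760])
print("ANOVA p  =", stats.f_oneway(mock, ccl3_oe, hyfc_500).pvalue)

# Mantel-Cox log-rank test on survival (1 = death observed, 0 = censored)
days_mock, event_mock = [22, 25, 27, 30, 33], [1, 1, 1, 1, 1]
days_oe,   event_oe   = [35, 40, 45, 60, 60], [1, 1, 1, 0, 0]
res = logrank_test(days_mock, days_oe,
                   event_observed_A=event_mock, event_observed_B=event_oe)
print("log-rank p =", res.p_value)
```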
Expression of CCL3 in the tumors delays tumor growth and improves survival
Since we aimed to investigate the effect of CCL3 on immune cells and not tumor cells, we first determined the in vitro expression of CCL3 receptors CCR1 and CCR5 in 6 murine tumor cell lines. To elaborate, the expression of CCR1 and CCR5 was measured in the C57BL6 derived MC38, LLC1, TC1, B16F10, and EG7 cell lines, and in the CT26 cell line from the BALB/c strain. We confirmed that MC38, TC1, and B16F10 rarely expressed CCR1 and CCR5 via flow cytometric analysis (Fig. 1A). In contrast, the LLC1 and CT26 cells did not express CCR5 but showed low levels of CCR1 surface expression. Interestingly, EG7 expressed high levels of CCR1 and low levels of CCR5. Additionally, we concluded that none of the tested tumor cells produced CCL3 because no CCL3 was detected (ELISA analysis) in the culture supernatants (Fig. 1B).
Among the 3 tumor cell lines that did not express CCR1 and CCR5, the MC38 tumor cell line was selected and engineered for CCL3 overexpression, as MC38 tumors have often been used to determine the efficacy of PD-1 blockers owing to the abundant expression of PD-L1 in MC38 (30). We transfected MC38 cells with either the plasmid designed to overexpress CCL3 or an empty plasmid, and selected MC38 clones expressing high levels of CCL3 (CCL3-OE) or carrying the mock plasmid (mock) in vitro (Fig. 1C). The tumor size and survival time of C57BL/6 mice were measured after inoculation with CCL3-OE, mock, or WT MC38 cells (Fig. 1D). Mice injected with CCL3-OE MC38 cells exhibited significantly delayed tumor growth compared to mice inoculated with WT or mock MC38 cells (Fig. 1E and F). Consistent with the reduction in tumor volume, mice inoculated with CCL3-OE cells displayed enhanced survival rates (Fig. 1G). We also analyzed the CCL3 levels in the blood of CCL3-OE MC38-bearing mice. Although serum CCL3 levels appeared to be higher in CCL3-OE MC38-inoculated mice than in WT or mock MC38-inoculated mice at 8 and 16 days post tumor inoculation, the CCL3 levels in the blood were below the limit of detection (Supplementary Fig. 1). Since CCL3 was produced locally by the tumors, it appears to be difficult to detect sufficient CCL3 in the blood of CCL3-OE MC38-inoculated mice. Together, these data suggest that CCL3 up-regulation may reshape the tumor microenvironment and enhance anti-tumor immune responses, leading to efficient control of in vivo tumor growth.
Increasing CCL3 expression reshapes the T cell and DC populations in the tumor microenvironment
To investigate the influence of CCL3 on immune cell populations, we sacrificed tumor-bearing mice 20 days after inoculation with MC38 cells. We defined CD11b+ cells, T cells, and NK cells after serial sub-gating of the different immune cell populations (Supplementary Fig. 2). While there was no difference in the frequency of CD11b+Ly6C+, CD11b+Ly6G+, and CD11b+F4/80+ cells between the mock and CCL3-OE groups, the frequency of CD4+ and CD8+ T cells was decreased in CCL3-OE mice (Fig. 2A). Regulatory T cells and NK cells displayed similar frequencies in the mock and CCL3-OE groups (Fig. 2A). Interestingly, there was a tendency toward an increased frequency of Ki67+ cells among tumor-infiltrating T cells in the CCL3-OE group compared to the mock group (Fig. 2B). Additionally, when tumor-infiltrating CD8+ T cells were re-stimulated ex vivo with the P15E peptide, an H-2Kb-restricted CD8+ T cell Ag epitope expressed in H-2b haplotype tumors such as MC38, we observed that CD8+ T cells from the CCL3-OE group exhibited a significantly enhanced capacity to produce IFN-γ compared to those from the mock group (Fig. 2C). Furthermore, the frequency of DCs co-expressing CD11c and MHC class II was also significantly elevated in the CCL3-OE group (Fig. 2D). Given the previous report showing the role of CCL3 in activating CD8+ T cells, our data suggest that CCL3 enrichment facilitates DC recruitment to the tumor microenvironment and enables tumor-specific CD8+ T cells to proliferate and function. Collectively, we hypothesized that CCL3-mediated rewiring of immune cells, such as T cells and DCs, in the tumor microenvironment might contribute to the delayed tumor growth and enhanced survival observed in vivo.
PD-1 blockade immunotherapy enhanced the effect of CCL3-induced tumor regression
Since CCL3 rewired the immune cells in the tumor microenvironment, we expected that PD-1 blockade in CCL3-enriched microenvironments would further accelerate anti-tumor T cell responses and tumor regression. To test this hypothesis, mice were intraperitoneally treated with monoclonal PD-1-blocking (αPD-1) or isotype control Abs, starting 5 days post tumor inoculation and then every 3 days (Fig. 3A). In the CCL3-enriched tumor microenvironment, tumor growth was significantly delayed by PD-1 blockade compared to isotype Ab treatment (Fig. 3B and C). Similarly, mice injected with the PD-1-blocking Ab lived longer than mice in the isotype-injected control group (Fig. 3D). Therefore, our data demonstrate that a tumor microenvironment in which CCL3 is stably maintained can support the PD-1 blockade-mediated control of tumor growth. This suggests that CCL3 could be used clinically in combination with PD-1 blockers.
Subcutaneous administration of long-lasting CCL3 near the tumors was therapeutically beneficial at a specific dose
Since our previous data demonstrated that the continuous expression and enrichment of CCL3 in the tumor microenvironment supported the inhibition of tumor growth, we attempted to generate an in vivo long-lasting form of rCCL3 and investigated the in vivo efficacy of the protein when administered exogenously near tumors. We first designed a plasmid to express mouse rCCL3 linked to HyFc (rCCL3-HyFc) and purified the recombinant fusion protein from cell supernatants after transient transfection of MC38 cells with the plasmid. Purified rCCL3-HyFc was injected subcutaneously near the tumors every 3 days to maintain the CCL3 concentration in the tumor microenvironment. Additionally, to test the dose-dependency of rCCL3-HyFc, tumor-bearing mice were injected with 2 different doses of rCCL3-HyFc or with HyFc as a control (Fig. 4A). In a previous report, mice subcutaneously injected with a 100 ng dose of rCCL3 exhibited slowed growth of established tumors (31). We therefore injected higher doses of rCCL3-HyFc in order to obtain a better effect than previously reported. Interestingly, treatment with 500 ng of rCCL3-HyFc led to a significant decrease in tumor growth, whereas treatment with 5,000 ng of rCCL3-HyFc did not have any beneficial effect on tumor growth inhibition compared to the HyFc control treatment (Fig. 4B and C). In addition to tumor growth, mice treated with 500 ng of rCCL3-HyFc also showed improved survival compared to those treated with HyFc or with 5,000 ng of rCCL3-HyFc (Fig. 4D). Furthermore, to optimize the appropriate concentration of rCCL3-HyFc, we injected 150 ng and 1,500 ng of rCCL3-HyFc into mice using the same experimental scheme and conditions. When mice were injected with 150 ng or 1,500 ng of rCCL3-HyFc, they exhibited tumor growth and survival rates similar to those of the control HyFc-injected group; however, in this repeated experiment, mice treated with 500 ng of rCCL3-HyFc again displayed delayed tumor growth in vivo (Supplementary Fig. 3). In summary, these data demonstrate that a continuous exogenous supply of CCL3 near tumors may have a therapeutically beneficial anti-tumor effect at a controlled and appropriate dosage, suggesting that rCCL3-HyFc may be used clinically as a therapeutic agent.
Therapeutic injection of rCCL3-HyFc augmented the maturation of myeloid cells and the function of T cells in vivo
In the previous results, only the administration of a specific dose of rCCL3-HyFc delayed tumor growth. To address the immunological differences between the mice injected with 500 ng and with 5,000 ng of rCCL3-HyFc, which showed disparate tumor growth, we analyzed the tumor-infiltrating immune cell populations and their characteristics. The myeloid cell populations showed similar numbers of CD11b+Ly6C+, CD11b+Ly6G+, and CD11b+F4/80+ cells among the 3 groups (Fig. 5A). In the HyFc- and 500 ng rCCL3-HyFc-injected groups, the myeloid cells consisted predominantly of CD11b+Ly6C+ and CD11b+F4/80+ cells (Fig. 5B). In addition, these dominant CD11b+Ly6C+ and CD11b+F4/80+ cells in the 500 ng rCCL3-HyFc-injected group displayed a more activated and mature phenotype, with up-regulation of MHCII and CD86, than the corresponding cells in the control groups (Fig. 5C). Next, we further analyzed the tumor-infiltrating T cell responses. Although the total numbers of CD4+ and CD8+ T cells were similar among the 3 groups (Fig. 5D), IFN-γ production by CD8+ T cells was elevated in the 500 ng rCCL3-HyFc-injected group when the cells were re-stimulated with the p15E peptide ex vivo (Fig. 5E). Also, when CD4+ and CD8+ T cells were re-stimulated with PMA/Ionomycin ex vivo, the T cells in the 500 ng rCCL3-HyFc-injected group exhibited augmented function, and this tendency was reduced in the 5,000 ng rCCL3-HyFc-injected group (Fig. 5F). In summary, when mice were administered the specific dose of 500 ng of rCCL3-HyFc, they had myeloid cells with increased MHCII and CD86 and functionally enhanced T cells. This immunological improvement might help to delay tumor progression. In contrast, in vivo treatment with a high amount of rCCL3-HyFc did not result in the maturation of myeloid cells or the functional enhancement of tumor-infiltrating T cells. It still remains to be investigated how different doses of rCCL3-HyFc differentially control the anti-tumor immune response and tumor progression in vivo.
Combination therapy of rCCL3-HyFc with PD-1 blockade substantially delayed tumor growth in vivo
In the CCL3-enriched tumor microenvironment, we confirmed that tumor growth was delayed when mice were additionally injected with a PD-1 monoclonal Ab (Fig. 3). Since rCCL3-HyFc also therapeutically reduced tumor growth, we expected that combination therapy of rCCL3-HyFc with PD-1 blockade would further delay tumor growth. To validate this hypothesis, mice were intraperitoneally injected with αPD-1, subcutaneously treated with rCCL3-HyFc, or given a combination of both agents in vivo (Fig. 6A). When mice were treated with rCCL3-HyFc or αPD-1 alone, they exhibited delayed tumor growth to a similar extent (Fig. 6B and C).
Interestingly, the combination of rCCL3-HyFc and αPD-1 led to a more dramatic control of tumor growth and better survival than rCCL3-HyFc or αPD-1 alone, which appeared to be synergistic (Fig. 6B-D). These data indicate that the anti-tumor effects caused by rCCL3-HyFc and αPD-1 are mechanistically distinct, suggesting the clinical use of rCCL3-HyFc combined with αPD-1 to improve the efficacy of current αPD-1 therapy.
DISCUSSION
Chemokines are important for the in vivo migration and homeostasis of immune cells.
Because of their ability to alter the profile of immune cells, they are involved in the protection of the host from infections or tumors. Here, we focused on the role of CCL3 in reshaping immune cell populations and in improving the anti-tumor immune response. We observed that a CCL3-enriched tumor microenvironment not only reduced tumor growth but also improved the survival rate compared to the parental tumor microenvironment, suggesting CCL3-mediated modulation of immune cell populations. Indeed, CD8+ T cells from tumor microenvironments in which CCL3 was overexpressed showed enhanced proliferation and function. Additionally, these tumor microenvironments exhibited an increase in DC numbers. Furthermore, we observed that PD-1 blockade had a synergistic effect on tumor growth inhibition in CCL3-enriched niches. To reproduce the in vivo CCL3-enriched tumor microenvironment, we generated long-lasting rCCL3-HyFc proteins and injected them near the tumors. An appropriate dose of rCCL3-HyFc could therefore augment tumor regression and the survival rate, along with matured myeloid cells and functionally enhanced T cells, suggesting that an optimized dosage of rCCL3-HyFc may have clinical applications as a potential adjuvant for boosting present immunotherapies. In our present study, we observed that treating mice with 5,000 ng of rCCL3-HyFc did not attenuate tumor growth or improve survival. However, simply increasing the dosage may not be an effective alternative for treating patients with tumors, because these chemokines may exert adverse effects on immune cells during infection or tumor metastasis. The CCL2-CCR2 axis promotes the migration of CCR2-expressing inflammatory monocytes and macrophages into the tumor niche. Moreover, CCL2 expression correlated with the infiltration of tumor-associated macrophages (TAMs), which was associated with poor prognosis in breast cancer patients (32,33). Additionally, CCL2 may also activate TAMs to secrete CCL3, which in turn may lead to the recruitment of additional TAMs and promote tumor metastasis (34). During simian immunodeficiency virus and HIV infections, CCL3 was reported to play a dominant role in the chemotactic recruitment of MDSCs (31). Furthermore, CCL2 and CCL3 can induce the production of matrix metalloproteinase 9 in monocytes. Matrix metalloproteinase 9 induces degradation of the matrix and facilitates tumor cell extravasation (35,36). Therefore, when treating mice with rCCL3-HyFc, an appropriate dose that has clear anti-tumorigenic effects on immune cells may be an important factor for successful therapy.
As mentioned above, the injection dosage may have important implications for treatment when chemokine therapy is used alone. Therefore, administering chemokines at low doses as adjuvants to induce immune activation, in combination with other therapeutic agents, might be a suitable alternative. Intravenous injection of CCL3 in mice increased the in vivo frequency of DCs (37). Another study revealed that, among the different DC populations, CD103+ DCs were the most effective in transporting tumor Ags to the draining lymph nodes and in activating tumor-specific CD8+ T cells in the murine melanoma environment. Furthermore, administering FLT3 ligand (FLT3L) and Poly I:C in combination with BRAF and PD-L1 immunotherapy expanded CD103+ DC populations and elevated the in vivo efficacy of BRAF and PD-L1 immunotherapy (38). Additionally, tumor cell-derived VEGF potently inhibits FLT3L activity and negatively affects the differentiation of conventional DCs (39). Therefore, blocking VEGF signaling might augment DC differentiation and FLT3L activity in the tumor microenvironment. The improved activity of PD-1 blockers in a CCL3-enriched microenvironment may be further elevated by administering FLT3L, Poly I:C, and VEGF blockers, which are strong activators of CD103+ DCs that are recruited into tumors by CCL3. However, a detailed analysis of the immunological factors that are altered would be required for such an investigation.
In summary, our study focused on the positive role of CCL3 in controlling tumor growth and examined the possibility of using rCCL3-HyFc as a clinical therapeutic for cancer. We conclude that rCCL3-HyFc can be a promising therapeutic agent if the appropriate dosage and the effectiveness of the combination therapy are verified in clinical phases. In addition to validating effectiveness, a detailed immunological analysis of patients being treated with rCCL3-HyFc proteins would be helpful in developing a more effective immunotherapy for cancer patients.
Halfway to Automated Feeding of Chinese Hamster Ovary Cells
This paper presents a comprehensive study on the development of the models and soft sensors required for the implementation of automated bioreactor feeding of Chinese hamster ovary (CHO) cells using Raman spectroscopy and chemometric methods. The study integrates various methods, such as partial least squares regression, variable importance in projection and competitive adaptive reweighted sampling, and highlights their effectiveness in overcoming challenges such as high dimensionality, multicollinearity and outlier detection in Raman spectra. The paper emphasizes the importance of data preprocessing and of the relationship between independent and dependent variables in model construction. It also describes the development of a simulation environment whose core is a model of CHO cell kinetics. The latter allows the development of advanced control algorithms for nutrient dosing and the observation of the effects of different parameters on the growth and productivity of CHO cells. All developed models were validated and demonstrated high robustness and predictive accuracy, reflected in a 40% reduction in the root mean square error compared to established methods. The results of this study provide valuable insights into the practical application of these methods in the field of monitoring and automated cell feeding and make an important contribution to the further development of process analytical technology in the bioprocess industry.
Introduction
Chemometrics deals with the application of various mathematical and statistical methods to chemistry; in its broadest definition, its most important part is the application of multivariate data analysis to data relevant to chemistry [1]. Multivariate statistical data analysis is a powerful tool for analysing and structuring data sets obtained from different measurement systems and for building empirical mathematical models that can predict, for example, the values of important properties that cannot be measured directly [2,3]. Multivariate calibration is often used in industry for the rapid online determination of important process parameters and critical quality characteristics and enables non-destructive measurements, online monitoring and process control.
In analytical chemistry, molecular spectroscopic methods, including infrared, nearinfrared and Raman spectroscopy, are widely used to determine the molecular structure of various substances [4][5][6]. These methods work by assessing the radiant energy that is either absorbed or scattered when excited by a high intensity monochromatic beam that induces a transient energy state in the molecule. The process of Raman scattering occurs when the material under investigation is exposed to monochromatic light, causing a tiny percentage of the light to be inelastically scattered at wavelengths other than the incident light.
Raman spectroscopy is an optical method that enables the non-destructive investigation of molecular structures and chemical compositions. However, due to its low intensity, the study of Raman scattering requires the use of sophisticated instruments [7]. A particularly important application area is the cultivation of CHO cells, which are used to produce 60-70% of biopharmaceuticals. These processes usually involve the delivery of glucose to the CHO cells [30][31][32][33].
By using non-invasive real-time PAT measurements in conjunction with closed-loop feedback control, feeding strategies can be optimised to improve yield [29,34,35]. Raman spectroscopy plays an important role in this, as it enables in-situ measurements and process control in real time. In-situ Raman measurements, first presented by [36], allow the simultaneous measurement of total cell density (TCD), viable cell density (VCD) and the concentrations of glucose, glutamate, lactate and ammonia. This method has proven successful in monitoring mammalian cell cultures in bioreactors, and several successful examples can be found in the recent literature [18,34,36]. Subsequent studies have extended this application from developmental scales of 3 to 15 L [27,34] to clinical production scales of 2000 L [37], demonstrating the scaling potential of this approach.
This manuscript represents a significant advance in the field of bioprocess technology by providing a comprehensive PLS model construction procedure for Raman spectroscopy that incorporates data preprocessing and outlier removal, thereby improving the understanding and control of bioprocess behaviour. In addition, the development of a simulator that incorporates CHO cell kinetics is an important contribution to the field. It paves the way for the development of a model predictive control system for the automated feeding of CHO cells, revolutionising the way we approach the automation and control of bioprocesses.
The paper is organized as follows. Sections 2 and 2.1 describe the process of data acquisition and introduce the process of spectra processing, which is the initial step of data analysis. Section 2.2 explains the development of the PLS models for soft sensor design and different methods for variable selection in spectroscopic multivariate calibration. This subsection also discusses the process of identifying and removing outlier spectra to improve the robustness and accuracy of the PLS model. Section 3 discusses the CHO cell kinetics model required to develop an advanced simulation environment. Section 4 presents the results of the model construction and simulator implementation. Sections 5 and 6 provide the discussion and concluding remarks.
Spectra Processing
The extensive research began with the systematic compilation of measurements and data obtained from the cultivation of CHO cells in a stainless steel bioreactor. The local pharmaceutical company, which was in charge of designing the experiment, played an important role. Our task, on the other hand, was to analyse the collected data, create the necessary models and establish a suitable simulation environment, which is described in this paper.
The cultivation of the CHO cells took place in a bioreactor with a volume of 10 L. To collect measurements (Raman spectra), the probe of a Kaiser RamanRXN2 spectrometer was inserted into the bioreactor. The RamanRXN2 spectrometer is a sophisticated analytical device that uses laser light with a wavelength of 532 nm. The resulting Raman spectrum is collected over a period of at least 30 min, a measure that improves the signal-to-noise ratio. It is important to note that Raman scattering, which is essentially inelastic photon scattering, is a rather small fraction compared to its elastic counterpart.
For data storage, a desktop computer with a Windows operating system was used, which was directly connected to the Raman spectrometer. Four different experiments were performed to grow the cells in the bioreactor, with each batch lasting about two weeks. The bioreactor contained CHO-S cell lines. This cell line is a sub-line of the original CHO-K1, with adaptations for suspension culture. CHO-S cells are commonly used in the industrial production of therapeutic proteins.
To maintain the optimal environment of the bioreactor, the pH and temperature were strictly controlled and nutrient dosing (glucose and glutamine) was conducted manually on a daily basis using reference measurements. A Roche Cedex Bio Analyzer, known for its reliability and precision, was used to record these reference measurements daily. This allowed for the accurate monitoring of parameters such as glucose and glutamine concentration, viable cell count and others.
The development of useful models depends on appropriate methods, but even more important is the selection of appropriate data. In our case, the raw data consist of the Raman spectra shown in Figure 1. For a first experiment, the choice between regression methods such as principal component regression, partial least squares or an artificial neural network may not be so important [27]. However, it is important that the selected independent variables (x-data) have a strong relationship with the dependent variables (y-data) to be modelled [38]. The choice of method then depends on the type and amount of data available. In cases where the x-data for objects represent time series or digitised data from a continuous spectrum (e.g., Raman spectra, see Figure 1), possible pre-processing strategies could include smoothing or a transition to a first or second derivative. Smoothing attempts to reduce random noise by eliminating sharp peaks in the spectrum, while differencing brings relevant data to light despite noise amplification. The first derivative achieves alignment of spectra with different absorbance values that are shifted in parallel by cancelling out an additive baseline. A second derivative removes a constant and linear baseline. Each object vector, referred to as x i , undergoes separate processes of smoothing and differentiation.
For both differentiation and smoothing, the Savitzky-Golay method is used, a technique widely used in chemistry. This local polynomial regression using the method of linear least squares requires x-values that are both exact and uniformly spaced. For each point, symbolised as j with value x_j, a linear combination is used to calculate the weighted sum of the neighbouring values. These weights determine whether smoothing or a derivative calculation is performed. Factors such as the number of neighbours and the polynomial order determine the strength of the smoothing. Choosing the right polynomial order is crucial, as incorrectly chosen higher-order polynomials could misinterpret significant Raman bands as mere background. In the Savitzky-Golay method, a vector component x_j is transformed by

x*_j = (1/N) · Σ_{h=-k}^{k} c_h · x_{j+h},    (1)

where x*_j is the new value (of a smoothed curve or a derivative), N is the normalisation constant, k is the number of neighbouring values (determining the size of the moving window) on each side of j and c_h are the coefficients, which depend on the degree of the polynomial used and the objective (smoothing, first or second derivative). For example, if a second-order polynomial is fitted through a window of five points (k = 2), the following coefficients c_{-2}, c_{-1}, c_0, c_1, c_2 can be used for smoothing: −3, 12, 17, 12, −3; for the first derivative: −2, −1, 0, 1, 2; and for the second derivative: 2, −1, −2, −1, 2 [19]. Figure 2 shows the Raman spectra to which the Savitzky-Golay filtering was applied.
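As an illustration only, the filtering step can be sketched with SciPy's Savitzky-Golay filter; the second-order polynomial and 15-point window match the settings reported later in Section 4, while the array of spectra is a placeholder.

```python
import numpy as np
from scipy.signal import savgol_filter

# Placeholder data: rows are Raman spectra, columns are wavenumber channels
spectra = np.random.rand(100, 3101)

# Smoothing with a second-order polynomial over a 15-point moving window
smoothed = savgol_filter(spectra, window_length=15, polyorder=2, deriv=0, axis=1)

# First and second derivatives computed with the same filter
first_deriv = savgol_filter(spectra, window_length=15, polyorder=2, deriv=1, axis=1)
second_deriv = savgol_filter(spectra, window_length=15, polyorder=2, deriv=2, axis=1)
```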
The process of pre-processing includes both filtering and normalisation, with the latter playing an important role. The reason for this is that even spectra recorded for the same material may demonstrate differences due to different recording times or unequal instrument conditions such as laser power and alignment. These variations can lead to different intensity values for spectra of the same material.
To compensate for these intensity differences, normalisation comes into play. This process ensures that the intensity of a given Raman band of a given material remains as similar as possible when the spectra are taken under nominally identical experimental parameters whose conditions nevertheless differ slightly. Various normalisation methods are explored in the literature, including min-max normalisation, vector normalisation and Standard Normal Variate (SNV) normalisation. Of these methods, SNV normalisation is the most commonly used [39,40]. SNV normalisation centres each spectrum and scales it by its own standard deviation according to Equation (2):

x_j^SNV = (x*_j − mean(x*)) / s,  with  mean(x*) = (1/N) · Σ_{j=1}^{N} x*_j,  s = sqrt( Σ_{j=1}^{N} (x*_j − mean(x*))² / (N − 1) )  and  j = 1, 2, ..., N.    (2)

Figure 3 shows the Raman spectra for which SNV normalisation was performed in addition to Savitzky-Golay filtering.
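A minimal SNV implementation is sketched below; the use of the sample standard deviation (ddof=1) and the variable names are assumptions made for illustration.

```python
import numpy as np

def snv(spectra):
    """Standard Normal Variate: give each spectrum zero mean and unit standard deviation."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, ddof=1, keepdims=True)
    return (spectra - mean) / std

filtered = np.random.rand(100, 3101)   # stands in for the Savitzky-Golay filtered spectra
normalised = snv(filtered)
```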
Model Construction
The construction of predictive models for bioprocesses, particularly for the cultivation of CHO cells in bioreactors, has made significant progress through the application of chemometric methods to Raman spectroscopic data [38]. These models can predict several key variables such as the concentrations of glucose, glutamine, lactate and other biochemical parameters, as well as cell growth metrics such as total cell count (TCC) and viable cell count (VCC). Raman spectroscopy, a non-invasive, label-free technique, provides detailed chemical information about the bioprocess by recording the molecular vibrations of the components. The resulting Raman spectra serve as input data for the prediction model and provide a comprehensive, high-dimensional data set.
Model construction begins with a calibration phase in which known samples are analysed using Raman spectroscopy and appropriate laboratory tests. This process generates a set of reference data that includes Raman spectra and associated concentrations of glucose, glutamine, lactate and cell counts. Another way to collect reference measurements is to use a device such as Roche's Cedex Analyzer. Once the reference data are prepared, multivariate analysis techniques such as Partial Least Squares Regression (PLSR) are used to build the predictive model. These methods work by identifying correlation patterns within the Raman spectra and relating them to the biochemical parameters of interest.
For more complex data sets or non-linear relationships, machine learning techniques such as Random Forest or SVM can be used. Advanced deep learning techniques such as Convolutional Neural Networks (CNN) are particularly effective for processing high-dimensional spectral data, as they can automatically extract meaningful features and improve prediction accuracy [18]. However, one must be aware that such an approach to model building requires a large database, which is not always available.
This approach not only improves our understanding of the bioprocess, but also our control over it. The real-time predictive capability of the model leads to optimised and consistent bioproduction outcomes by enabling rapid, data-driven decision-making and process adjustments, thereby increasing bioprocess performance, reducing costs and improving product quality. The model is continuously refined as more data become available, improving its predictive power over time.
Partial Least Squares
Partial Least Squares (PLS) is a statistical method that finds a linear regression model by projecting the predicted variables and the observable variables onto a new space. The method was first developed by Swedish statistician Herman Wold and has since been widely used in fields such as chemometrics, neuroimaging, bioinformatics and social sciences [41,42].
PLS simultaneously accounts for the covariance of both the independent variables (predictors) and the dependent variables (responses). This approach is advantageous when dealing with complex, multivariate data sets where the predictors are highly collinear or where there are more predictors than observations. The method can handle noisy and missing data, which makes it robust and flexible.
Partial Least Squares (PLS) regression is a multivariate technique that combines features of principal component analysis (PCA) and multiple linear regression. Although PCA is not explicitly used in the PLS method, the concept of extracting principal components or latent variables is central to both methods. In PCA, the goal is to find a small number of uncorrelated variables, called principal components, that explain most of the variation in the data. Each principal component is a linear combination of the original variables and is orthogonal to all other components. PLS works in a similar way, but instead of trying to explain as much of the variance in the predictor variables as possible, PLS tries to extract components that explain as much of the covariance between the predictor and response variables as possible. Essentially, PLS looks for directions in which the predictors not only explain a large part of their own variance (as in PCA), but are also highly correlated with the response. PLS regression can be summarised in the following steps:
• Standardisation of data: The first step in PLS regression is to standardise the predictor and response matrices. This ensures that the model is not overly influenced by variables that have large values or a large range of values.
• Extraction of PLS components: PLS decomposes the predictor and response matrices into a set of orthogonal components. These are linear combinations of the original variables that explain the maximum covariance between the predictors and the responses. The number of PLS components is chosen to optimise the predictive power of the model.
• Estimation of the PLS model: The PLS regression coefficients are estimated by relating the PLS components to the responses. These coefficients show the relationship between the changes in the predictor variables and the changes in the response variables.
• Prediction and validation: The PLS model can then be used to predict responses for new data. Cross-validation is often used to assess the predictive performance of the model and to determine the optimal number of PLS components.
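For orientation, the basic calibrate-and-predict cycle can be sketched with scikit-learn's PLSRegression, which centres and scales the data internally; the spectra, reference values and the choice of four latent variables are placeholders for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Placeholder data: preprocessed spectra X and reference glucose concentrations y
X = np.random.rand(100, 3101)
y = np.random.rand(100)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

pls = PLSRegression(n_components=4)   # scale=True by default: standardises X and y
pls.fit(X_train, y_train)
y_pred = pls.predict(X_test).ravel()

rmsep = np.sqrt(np.mean((y_test - y_pred) ** 2))
print(f"RMSEP: {rmsep:.3f}")
```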
In terms of its statistical properties, PLS is a form of regularised regression. Like other forms of regularisation, it can prevent overfitting by introducing some bias into the model, but it reduces the variance of the model and thus improves its predictive performance.
PLS has been extended to handle different types of data and different modelling scenarios. The most popular versions of PLS include PLS-DA (PLS Discriminant Analysis) [43] for classification problems and PLS-PM (PLS Path Modelling) [44] for structural equation modelling. These extensions have made PLS a versatile and powerful tool for multivariate analysis. When considering the use of PLS, it is important to understand its assumptions and limitations. Although PLS does not assume that predictors are independent or normally distributed, it does assume a linear relationship between predictors and responses. In addition, PLS may not work well with unrelated predictors because it attempts to use all predictors in the model, which can lead to overfitting. It is recommended to evaluate the performance of PLS against other multivariate methods such as principal component regression (PCR) or ridge regression to ensure that it is appropriate for a particular data set and research question.
The Nonlinear Iterative Partial Least Squares (NIPALS) algorithm is a common method for calculating PLS components. The goal is to find a set of components (also called latent vectors) that capture the covariance between the predictors and the responses. The algorithm of the simplified NIPALS method can be summarised in the following five points:
• Definition of the data matrices, where X is a predictor matrix and Y is a response matrix.
• Selection of an initial column vector. Typically, the first column of the Y matrix represents the vector u.
• Iterative computation of the weights w and scores t until convergence: compute the weights w = X'u / (u'u), normalise the weights, w = w / ||w||, compute the score vector t = Xw and the Y-loading q = Y't / (t't), and reassign u as u = Yq / (q'q). The iteration continues until the difference between the new and old score vectors falls below a certain threshold, indicating convergence.
• Deflation of X and Y: calculate the X-loading p = X't / (t't) and the outer product of t and p (the loading vector for X), then subtract it from X; do the same for Y with t and q (the loading vector for Y), i.e., X ← X − tp' and Y ← Y − tq'. The iterations end when X (or Y) can no longer be deflated or when the number of extracted latent variables is enough to describe the data according to some criterion.
• Calculation of the regression coefficients. Once all the latent vectors are extracted, the regression coefficients B can be calculated as B = W(P'W)⁻¹Q', where W is the matrix of weight vectors, P is the loading matrix of X and Q is the matrix of Y-loadings.
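The following NumPy sketch implements the simplified NIPALS steps above for centred (and scaled) data matrices; it is an illustrative implementation, not the code used in this work.

```python
import numpy as np

def nipals_pls(X, Y, n_components, tol=1e-10, max_iter=500):
    """Simplified NIPALS PLS for centred X (n x p) and Y (n x m); returns regression coefficients B."""
    X, Y = X.copy(), Y.copy()
    W, P, Q = [], [], []
    for _ in range(n_components):
        u = Y[:, [0]]                        # initial u: first column of Y
        t_old = np.zeros((X.shape[0], 1))
        for _ in range(max_iter):
            w = X.T @ u / (u.T @ u)          # X-weights
            w /= np.linalg.norm(w)           # normalise the weights
            t = X @ w                        # X-scores
            q = Y.T @ t / (t.T @ t)          # Y-loadings
            u = Y @ q / (q.T @ q)            # reassign u
            if np.linalg.norm(t - t_old) < tol:
                break
            t_old = t
        p = X.T @ t / (t.T @ t)              # X-loadings
        X -= t @ p.T                         # deflate X
        Y -= t @ q.T                         # deflate Y
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.hstack(W), np.hstack(P), np.hstack(Q)
    return W @ np.linalg.inv(P.T @ W) @ Q.T  # B, so that Y_hat = X_centred @ B
```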
The Root Mean Square Error of Cross-Validation (RMSECV), which is calculated during the creation of the PLS model, can be used as a criterion to find the right number of latent variables and prevent overfitting. For example, Figure 4 shows that in the case of a PLS model for glucose concentration, the most appropriate number of latent variables is four, as the RMSECV does not drop drastically after that.
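A cross-validation scan of this kind might look as follows; the sketch simply takes the number of components with the lowest RMSECV, whereas in practice (as in Figure 4) the smallest number of latent variables after which the error no longer drops appreciably is preferred. The spectra and reference values are placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold, cross_val_predict

X = np.random.rand(100, 3101)   # preprocessed spectra (placeholder)
y = np.random.rand(100)         # reference values (placeholder)

cv = KFold(n_splits=10, shuffle=True, random_state=0)
rmsecv = []
for a in range(1, 11):
    y_cv = cross_val_predict(PLSRegression(n_components=a), X, y, cv=cv).ravel()
    rmsecv.append(np.sqrt(np.mean((y - y_cv) ** 2)))

best_a = int(np.argmin(rmsecv)) + 1
print("RMSECV per number of latent variables:", np.round(rmsecv, 3), "chosen:", best_a)
```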
Selection of Key Variables
To further improve the PLS models and reduce the possibility of overfitting, the Variable Importance in Projection (VIP) and Competitive Adaptive Reweighted Sampling-Partial Least Squares (CARS-PLS) methods were used.
Variable Importance in Projection is a popular method for assessing the importance of variables in a Partial Least Squares (PLS) regression model, in which a set of dependent variables is predicted from a set of independent variables through latent-variable regression.
The VIP score for a variable is a measure of that variable's contribution to the model, taking into account its contribution to explaining both the dependent and the independent variables. A high VIP score indicates that the variable is highly significant in the model (Figure 5 shows an example of selecting key variables in a PLS model of glucose concentration). However, the VIP method also has some disadvantages:
• Overemphasis on highly collinear variables: If variables are highly collinear, the VIP score can overestimate their importance and result in a model that may not be as accurate as possible. This can be problematic in areas where variables may be highly correlated, such as genomics or metabolomics.
• Unreliable with small data sets: The VIP method can be unreliable with small data sets because it depends on having enough data to estimate the PLS model accurately.
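A common formulation of the VIP score, computed here from a fitted scikit-learn PLSRegression model, is sketched below; the VIP > 1 threshold is a widely used rule of thumb rather than a value taken from this work, and the data are placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls):
    """VIP scores for a fitted sklearn PLSRegression model (standard VIP formula)."""
    T = pls.x_scores_                     # X-scores, shape (n_samples, A)
    W = pls.x_weights_                    # X-weights, shape (n_features, A)
    Q = pls.y_loadings_                   # Y-loadings, shape (n_targets, A)
    p, A = W.shape
    ssy = np.sum(T ** 2, axis=0) * np.sum(Q ** 2, axis=0)     # Y-variance explained per component
    w_norm = W / np.linalg.norm(W, axis=0, keepdims=True)      # column-normalised weights
    return np.sqrt(p * (w_norm ** 2 @ ssy) / ssy.sum())

X, y = np.random.rand(100, 3101), np.random.rand(100)          # placeholders
pls = PLSRegression(n_components=4).fit(X, y)
vip = vip_scores(pls)
key_vars = np.where(vip > 1.0)[0]         # rule of thumb: VIP > 1 marks potentially important variables
```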
On the other hand, Competitive Adaptive Reweighted Sampling-Partial Least Squares is a more recent technique used for variable selection in spectroscopic multivariate calibration, and it has gained considerable attention in the field of chemometrics. CARS-PLS was developed to overcome two major challenges in the analysis of spectroscopic data: high dimensionality and multicollinearity. These problems can lead to overfitting of the model, poor generalisation ability and difficulties in interpretation. The CARS-PLS method consists of two main stages:
• Competitive Adaptive Reweighted Sampling: This is a Monte Carlo-based sampling technique that helps identify relevant variables (wavelengths) for building the model. Initially, CARS assigns equal weights to all variables. Then, a set of subsets of variables is generated, each subset containing each variable with a probability proportional to its weight. A PLS model is created for each subset and its performance is evaluated. Based on the evaluation, the weights of the variables are updated: variables that frequently contribute to good models are given higher weights, while those that contribute to poor models are given lower weights. This process is repeated many times (usually thousands of iterations) until the best subset of variables is found.
• PLS modelling: The final PLS model is then built using the subset of variables selected in the sampling stage.
The CARS-PLS method has been used successfully in many areas where spectroscopic data are used, such as pharmaceutical analysis, food quality control and environmental monitoring. However, like all methods, it has its limitations and assumptions. It assumes that there is a linear relationship between predictors and responses, and it may not work well if this assumption is not met. In addition, the performance of CARS-PLS may depend on the initial weights of the variables and the number of Monte Carlo iterations. Therefore, it is often advisable to make several runs of CARS-PLS with different initial settings and determine the consensus of the results.
Compared to VIP, CARS offers the following advantages:
• Better handling of collinearity: In contrast to the VIP method, CARS can better handle the problem of collinearity between variables.
• Simplicity and interpretability: CARS tends to lead to simpler and more interpretable models, which is of great importance in practical applications.
• Better performance on small data sets: CARS is not as reliant on large data sets as VIP and is therefore a more reliable method for variable selection on small data sets.
• More robustness: CARS is less prone to overfitting because it focuses on a subset of particularly relevant variables instead of considering all variables in the model.
• Adaptivity: CARS is an adaptive method, able to adjust its selection as more data become available or the nature of the data changes.
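A deliberately simplified, CARS-style selection loop is sketched below to illustrate the idea of exponentially shrinking the retained variable set and keeping the subset with the lowest RMSECV; the published CARS algorithm additionally uses adaptive reweighted sampling and far more Monte Carlo runs, so this is not a faithful reimplementation, and all data and settings are placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def cars_like_selection(X, y, n_components=4, n_runs=50, seed=0):
    """Illustrative, simplified CARS-style wavelength selection."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    retained = np.arange(p)
    best_subset, best_rmsecv = retained, np.inf
    keep_fracs = np.exp(np.linspace(0.0, np.log(0.05), n_runs))   # exponential shrinkage to 5%
    for frac in keep_fracs:
        cal = rng.choice(n, size=int(0.8 * n), replace=False)     # random calibration subset
        a = min(n_components, len(retained))
        pls = PLSRegression(n_components=a).fit(X[np.ix_(cal, retained)], y[cal])
        coefs = np.abs(pls.coef_).ravel()                         # rank variables by |coefficient|
        n_keep = min(max(n_components, int(frac * p)), len(retained))
        retained = retained[np.argsort(coefs)[::-1][:n_keep]]
        a = min(n_components, len(retained))
        y_cv = cross_val_predict(PLSRegression(n_components=a), X[:, retained], y, cv=5).ravel()
        rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
        if rmsecv < best_rmsecv:
            best_rmsecv, best_subset = rmsecv, retained
    return best_subset, best_rmsecv

X, y = np.random.rand(100, 3101), np.random.rand(100)             # placeholders
subset, err = cars_like_selection(X, y)
```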
Removal of Outlier Spectra
The PLS model can be further improved by searching for spectra representing outliers. Therefore, a resampling method commonly used in statistics and machine learning was used, which can also be referred to as Monte Carlo cross-validation or repeated random sub-sampling validation. The outlier detection method consists of the following steps:
• Partitioning: First, the original training dataset is randomly partitioned into a training dataset and a test dataset. For example, the partitioning could be 4:1, i.e., 80% of the data are used for training and 20% for testing. This partitioning is conducted many times, which is characteristic of a Monte Carlo approach.
• PLS modelling and prediction: A Partial Least Squares (PLS) regression model is built using the training data. This model is then used to make predictions for the test subset.
• Error calculation: The prediction errors for each spectrum in the test set are then calculated. Each spectrum will occur multiple times in different test sets; thus, an average error and standard deviation can be calculated for each spectrum across all iterations.
• Identification of outliers: Spectra that consistently produce high prediction errors (based on their average error or a combination of average error and standard deviation) can be considered outliers. These outliers represent spectra that are not well modelled by the PLS and thus affect the accuracy of the model. In Figure 6, for example, it can quickly be observed that the 25th and 58th spectra are outliers.
• Removal of outliers: The identified outlier spectra are removed from the original dataset, hopefully improving the robustness and accuracy of the model.
• Iterating: This entire process can be repeated as needed, each time recalculating the errors for each spectrum and identifying and removing outliers.
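The resampling scheme can be sketched as follows; the split ratio, number of iterations and the 3-sigma flagging rule are illustrative assumptions, and the data are placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def mc_prediction_errors(X, y, n_components=4, n_iter=500, test_frac=0.2, seed=0):
    """Collect per-spectrum prediction errors over repeated random train/test splits."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    errors = [[] for _ in range(n)]
    for _ in range(n_iter):
        test = rng.choice(n, size=int(test_frac * n), replace=False)
        train = np.setdiff1d(np.arange(n), test)
        pls = PLSRegression(n_components=n_components).fit(X[train], y[train])
        pred = pls.predict(X[test]).ravel()
        for idx, err in zip(test, np.abs(y[test] - pred)):
            errors[idx].append(err)
    mean_err = np.array([np.mean(e) for e in errors])
    std_err = np.array([np.std(e) for e in errors])
    return mean_err, std_err

X, y = np.random.rand(100, 3101), np.random.rand(100)                     # placeholders
mean_err, std_err = mc_prediction_errors(X, y)
outliers = np.where(mean_err > mean_err.mean() + 3 * mean_err.std())[0]   # flag unusually bad spectra
```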
The advantage of this method is that it can help to increase the robustness of the PLS models by removing outliers that would otherwise distort the model parameters. It is a relatively simple and intuitive approach that combines the robustness of resampling with the ability to identify and remove problematic data points. This method helps to further reduce the Root Mean Square Error of Prediction (RMSEP) and thus improve the overall performance of the model. However, as with any method, it should be used judiciously. Removing outliers too aggressively can lead to over-fitting, where the model becomes over-fitted to the "typical" data points and performs poorly on new, unknown data. This method is most useful if you have a large enough dataset so that removing some data points does not significantly reduce the overall size of the dataset.
Simulator Construction
In order to develop a predictive control algorithm for automated nutrient feeding in a bioreactor, a simulation environment based on a dynamic model was implemented. The latter describes the kinetics of the growth of a CHO cell culture in a fed-batch bioreactor. It is well known that the process parameters (temperature, pH, feeding, ammonia removal, etc.) have a significant impact on cell growth and especially on the quality of the monoclonal antibodies (mAbs) produced [45]. Therefore, the model is important not only for the development of control algorithms, but also for the observation and identification of the key factors (variables and parameters) that have the greatest influence on cell productivity. This is particularly important from the point of view of optimising protein production in a mammalian cell line.
Modelling CHO Cell Culture Kinetics
Chinese Hamster Ovary cells are the most commonly used mammalian hosts for the industrial production of therapeutic proteins, due to their capacity to perform human-like post-translational modifications. The growth kinetics of CHO cells can be studied using a mechanistic model [32]. A mechanistic model is a type of model used to describe biological processes based on underlying physiological mechanisms. These models allow us to interpret, predict and simulate biological phenomena by using mathematical equations to represent the interactions and transformations that occur in a system. In the context of CHO cell growth kinetics, a mechanistic model would include at least the following components.

One of the most important mechanisms determining the growth kinetics of CHO cells is cell division. The rate at which cells grow and divide depends on various influencing factors such as the availability of nutrients, the accumulation of waste products and the passage of time. Mathematical models such as the Gompertz model or the logistic growth model are often used to represent these complicated dynamics of cell growth. Another crucial determinant of cell growth is the assimilation and utilisation of nutrients such as glucose and glutamine. The rate at which these nutrients are consumed can have a significant impact on cell growth and is usually modelled using Monod or Michaelis-Menten kinetics, which provides essential insights into cell metabolism and growth patterns.

As cells grow and metabolise nutrients, they inevitably generate waste products such as lactate or ammonia. The accumulation of these waste products can have a suppressive effect on cell growth. To quantify this inhibitory effect, mathematical models are used to provide detailed insight into the relationship between the accumulation of waste products and cell proliferation. The loss of cells through mechanisms such as apoptosis, nutrient deprivation or the toxic effect of accumulated by-products is an inevitable aspect of cell culture. Mathematical models are used to express the rate of cell death as a function of various parameters, providing valuable insights into the factors that influence cell viability over time.

Finally, the growth kinetics of CHO cells are significantly influenced by external environmental factors such as temperature, pH and osmolality. These factors must be carefully incorporated into the mechanistic model to ensure its relevance and accuracy. These environmental influences represent an additional layer of complexity and require a comprehensive understanding of their effects on cell growth and survival. Each of these components is interconnected and forms a complex network of interactions that determine the growth kinetics of CHO cells. Together, they form a robust mechanistic model that allows the prediction, interpretation and simulation of the behaviour of CHO cells under different conditions. A mechanistic model of CHO cell growth kinetics would typically be a system of differential equations, where each equation represents a particular biological process (such as cell growth, nutrient consumption, production of waste products, etc.). These models can be quite complex and usually require a large amount of experimental data for their parameterisation.
However, despite their complexity, mechanistic models can provide valuable insights into the cell growth process and can be helpful in optimising cell culture conditions for maximum productivity. Many authors [45-48] who have worked on modelling the kinetics of CHO cell cultures have set up various dynamic models in the form of differential equations based on steady-state analysis. In most cases, these simple models only describe the variation of extracellular metabolite concentrations and the number of live/dead cells during the cell cycle. The models differ in the number of factors considered (number of variables and parameters), which are more or less relevant to describe what actually happens in a mammalian cell line (in a bioreactor). However, in order to have a practical and universally applicable simulator, a model was needed that took all the important variables into account. An example of such a model was developed by M. Ivarsson [48] in her PhD thesis, as it takes into account the four phases of the cell cycle, temperature, glutamine concentration, the number of dead cells, etc., in addition to the number of living cells and the concentrations of glucose, lactate and ammonia. For the development of a predictive controller for automated feeding, only a model prediction of glucose concentration would be required at this stage. However, as glucose concentration variations are also highly dependent on other variables, these should also be considered in the model.

As mentioned above, the chosen dynamic model [48] describes the four phases of the cell cycle, G_0, G_1, S and G_2/M, with a separate balance equation for the number of cells in each phase: X_G0, X_G1, X_S and X_G2/M. The equations include transition factors k, where, e.g., k_{G1-S} represents the transition from the G_1 phase to the S phase. The transition factors between subpopulations depend mainly on the growth rate, which in turn is determined by the times (t_G1, t_S and t_G2/M) required for the completion of each cellular phase. The transition from the G_1 to the G_0 phase is determined by the transition factor k_{G1-G0}, which represents the temperature stress. However, the transition to phase G_0 may also be caused by metabolic stress m_stress. The number of viable cells is calculated as the sum of the cells from each phase, where V represents the current volume of material in the bioreactor. The volume varies depending on the nutrient dosage (F_Glc and F_Gln) and the potential sampling F_OUT.

The glutamine concentration varies according to the consumption factor Q_Gln, the degradation to ammonia K_deg and the potential dose F_Gln. Glutamine consumption depends on the cell growth factor, the specific yield Y_Gln and the limiting function f_upt. The ammonia concentration depends largely on changes in the glutamine concentration, since the ammonia concentration increases with glutamine consumption (factors Y_Amn and K_deg). The glucose concentration varies according to the consumption factor Q_Glc, the minimum consumption needed to keep the cells alive m_Glc, and the amount of glucose added F_Glc. The consumption factor Q_Glc is influenced by temperature and by lactate acting as an inhibitor. The lactate concentration depends on the glucose consumption (Q_Glc and m_Glc). Finally, the change in monoclonal antibody concentration is determined by factors representing the productivity level (q_G1/G0, q_S and q_G2/M) per cell phase.
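The full set of balance equations from [48] is not reproduced here; purely for illustration, the general form of such a fed-batch model can be sketched as a heavily reduced system with one lumped cell population, Monod-type glucose uptake and a constant glucose feed. All symbols and parameter values below are assumptions made for the sketch and are not the parameters of the model used in this work.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not those of the CHO model used in this work)
mu_max, K_glc = 0.035, 0.3      # 1/h, g/L: maximum growth rate and Monod half-saturation constant
Y_xg, m_glc = 1.0e8, 1.0e-10    # cells per g glucose, maintenance uptake in g/(cell*h)
glc_feed, F_glc = 400.0, 1e-4   # feed concentration (g/L) and feed rate (L/h)

def fed_batch(t, state):
    Xv, glc, V = state                          # viable cells (cells/L), glucose (g/L), volume (L)
    mu = mu_max * glc / (K_glc + glc)           # Monod-type specific growth rate
    q_glc = mu / Y_xg + m_glc                   # specific glucose consumption
    dV = F_glc                                  # volume increases with the feed
    dXv = mu * Xv - Xv * dV / V                 # growth minus dilution by the added volume
    dglc = -q_glc * Xv + (glc_feed - glc) * dV / V
    return [dXv, dglc, dV]

# Two-week batch (336 h) starting from illustrative initial conditions
sol = solve_ivp(fed_batch, (0.0, 336.0), [3e8, 6.0, 5.0], max_step=1.0)
```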
Results
In order to monitor the process in the bioreactor in detail during the entire batch, which usually takes about 14 days, seven PLS models were developed in the Matlab environment. These models, which represent soft sensors, allow the monitoring of the most important process variables in CHO cell cultivation: glucose concentration, viable cell concentration (VCC), total cell count (TCC), glutamine, glutamate, lactate and ammonium.
Data from four different batches were available to us for the development of PLS models. Raman spectra were collected every half hour and reference measurements (offline) were performed once or twice a day with Cedex Analyzer. Thus, the first step was to find the pairs of spectra and reference measurements that matched best in terms of acquisition time. The Raman measurement takes about half an hour to obtain a good signal-to-noise ratio and to remove fluorescence interference.
As described in Section 2.1, two key initial steps in the development of PLS models are the preprocessing of the Raman spectra with the Savitzky-Golay filter and the normalisation with the Standard Normal Variate method (see Figures 2 and 3). Savitzky-Golay low-pass filtering was performed for all independent variables (Raman shift (cm −1 )) of each spectrum, with a quadratic function chosen for smoothing with the Savitzky-Golay filter and the window length (smoothing) set to 15 samples. In addition, a normalisation or Standard Normal Variate function is applied to the independent variables for all spectra, resulting in spectra with a mean of zero and a standard deviation of one.
As described in Section 2.2 and illustrated in Figure 4, careful consideration is also required in the selection of the parameter that determines the number of latent variables. For each PLS model, the optimal number of latent variables is determined based on cross-validation, aiming for the smallest RMSECV error. In general, it is preferred to keep the number of latent variables as small as possible.
Characteristic independent variables of the spectrum (i.e., energy shifts) at which a peak occurs can be extracted from the literature for the individual observed variables. Taking these characteristic energy shifts into account when calculating the PLS models is therefore considered useful, as it gives additional weight to the corresponding independent variables of the spectrum and improves the model in this way. If these characteristic energy shifts are not known, various methods are available to identify the more important independent variables and take them into account to a greater extent.
The Variable Importance in the Projection method, described in Section 2.2.2, was tested first. However, the prediction results were not improved by this simple method; thus, alternative approaches to selecting key variables were investigated. Attempts to select "key" intervals or several independent variables of the spectrum together also did not lead to better results.
It turned out that the Competitive Adaptive Reweighted Sampling method, which is also discussed in Section 2.2.2, gave the best results for selecting key variables when building PLS models. As can be observed in Figure 5, the CARS method identifies fewer key variables than the VIP method. Nevertheless, the validation results of the PLS model (using glucose concentration as an example) were better when the CARS method was used, as evidenced by the smaller Root Mean Square Error (see Figure 7 and Table 1). The reference values in Figure 7 represent offline measurements performed with the Cedex Analyzer. In some cases of glucose measurement, the VIP method even leads to worse results than not using any selection method at all, as shown in Table 1 (see RMSE).
Assuming that Cedex's offline measurements are reliable, the training set was examined for spectra representing outliers that could affect the parameters of the PLS model during the learning phase and consequently affect the prediction accuracy. Applying the Monte Carlo sampling method and calculating the mean error and standard deviation for each PLS model led to the identification of spectra within the dataset that represent outliers, as shown in Figure 6 and discussed in Section 2.2.3. This process allowed a further increase in the accuracy and robustness of the PLS models, as can be observed in Figure 7 and Table 1. In this case, the coefficient of determination for the PLS model for glucose is R² = 0.97, which means that the PLS model has been further improved compared to the CARS method alone (where R² = 0.96). The accurate prediction of glucose concentration can also be observed in Figure 8, which shows a comparison of experimental and predicted values using CARS and the outlier removal method. Ideally, all points should lie on a straight line.

Table 2 shows the RMSE and coefficient of determination (R²) for the other constructed PLS models in addition to the glucose PLS model: VCC, TCC, glutamine, glutamate, lactate and ammonium. The results demonstrate that all PLS models developed provide an accurate prediction of the main process variables (R² > 0.8); only the PLS model for glutamine has a noticeably worse prediction (R² = 0.33). The reason for this lies in the following fact. In Raman spectroscopy, glutamine and glutamate are related because they have a similar molecular structure and similar active Raman vibrational modes that produce similar spectral features. Glutamine and glutamate are structurally similar amino acids, both containing a carboxyl group (-COOH) and an amine group (-NH2). The main structural difference between them is that glutamate has an additional carboxyl group, while glutamine has an amide group (-CONH2) instead. It is important to note that while Raman spectroscopy is a powerful technique for identifying molecules, its resolution is often insufficient to distinguish between similar molecules in a mixture. In such cases, additional techniques, such as chromatographic separation or more sophisticated spectral analysis methods, are required.

Table 3 shows the best RMSE results for PLS models reported in the existing literature [11,37]. A comparison with the data in Tables 1 and 2 shows that our method for building PLS models excels at accurately predicting key variables from Raman spectra. This comparison essentially underlines the effectiveness of our approach. It is particularly noteworthy that our PLS models have an RMSE that is on average three times smaller than the RMSE published in recent research [11,37].

The learning process for the PLS models depended on a single offline measurement (Cedex) of each variable (e.g., glucose) per day. Therefore, only the Raman spectra that matched the offline measurements in time could be used. However, once the PLS models were built, all spectra collected every half hour could be used, giving an informative representation of the time course of each variable (see Figure 9). These data are then used in the optimisation to determine the parameters of the dynamic model of the CHO cell kinetics, as described in Section 3.1.
Careful examination of the time series signal for glucose and glutamine concentrations in Figure 9 reveals a sawtooth pattern due to the daily manual dosing of nutrients. This pattern is not conducive to the optimal growth of the CHO cells.
The problem can be solved by implementing an automated feeding system that continuously doses the nutrients according to a predefined reference signal. However, such a system requires not only the application of the previously developed soft sensors (PLS models), but also a simulation environment. In this environment, a control algorithm can be developed and different scenarios, such as different feeding regimes, the removal of inhibitors and the observation of important process variables, can be investigated. The heart of the simulator, represented by the Simulink scheme in Figure 10, is the dynamic model of CHO cell kinetics explained in Section 3.1. Figure 10 also shows the controller and optimisation blocks, the details of which will be explained in forthcoming scientific publications.
Based on known process parameters (temperature and pH) and the time series signals of the main process variables (VCC, glucose, glutamine, etc.), it is possible to optimise the parameters of the dynamic model of CHO cell kinetics (presented in Section 3.1). This optimisation aims at aligning the model results as closely as possible with the measurements of previous batches. For the parameter optimisation, the particle swarm optimisation (PSO) method was used, which makes it possible to find the global minimum of the chosen criterion function while optimising a large number of parameters. In this case, the criterion function was the RMSE, with the final values presented in Table 4. A comparison of glucose concentration measurements from one of the batches with the glucose concentration predicted by the mechanistic model of CHO cell kinetics is shown in Figure 11. The agreement was excellent in this case, with an RMSE of 0.18 g/L and R² = 0.99. Furthermore, Figure 12 shows the remarkable match between the measurements and the predicted values; ideally, all points should lie on a straight line. However, it is important to note that the available data were limited to only four batches. If a larger number of batches were included in the optimisation process, some deviation between individual batches in the process variables would be expected. In the future, it would be beneficial to cluster the data from the individual batches based on a criterion of mutual similarity and then determine the model parameters for the individual clusters.
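As an illustration of the optimisation step, a minimal particle swarm optimiser is sketched below; the inertia and acceleration constants are common textbook defaults rather than the settings used in this work, and objective() stands for a user-supplied function that simulates the kinetics model with a candidate parameter vector and returns the RMSE against the measured time series.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimisation over box-constrained parameters (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))      # particle positions = candidate parameter sets
    v = np.zeros_like(x)                                  # particle velocities
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                        # keep parameters within their bounds
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest, pbest_f.min()

# Usage (hypothetical): best_params, best_rmse = pso(objective, bounds=[(0.0, 1.0)] * 10)
```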
The predictions for the other process variables, as shown in Table 4, prove satisfactory when the CHO cell kinetics model is used. Only in the case of glutamine concentration does a somewhat larger error occur, which has already been pointed out. The reason for this is that when the PLS model predicts the time series signal for glutamine with less accuracy, the variance of the "measurements" (derived from the soft sensor) increases. Consequently, the time series signal of glutamine is predicted with lower accuracy by the mechanistic model.
Discussion
In developing models that allow the use of soft sensors to monitor key process variables (VCC, TCC, glucose, glutamine, glutamate, lactate and ammonium) in the bioreactor, it was found that using the PLS method alone did not provide the required accuracy and robustness of the models. In particular, with a limited data set (a few batches), the model can be overfitted, leading to a sharp drop in predictive performance compared to what the validation with limited data promises.
Since in our work only about 100 spectra with reference measurements were available during the learning phase and Raman spectra contain more than 3000 components, the phase of selecting key variables became crucial for model construction. By using the CARS method, better handling of collinearity between variables was observed, as well as better performance on small data sets and higher robustness compared to the VIP method. As a result, the RMSE was reduced by up to 30%.
It was found that the VIP method further impaired the predictive ability of the models in certain cases, indicating an overfitting problem, as the number of key variables selected was significantly larger than required by the CARS method. The VIP method also had stability problems, as the results may have become unstable with small samples. Minor variations in the data can lead to significant shifts in the scores, making it difficult to extrapolate the results to other data sets. When calculating the VIP scores based on the weighted sum of squares of the PLS loadings, high variability was found in small data sets.
In Raman spectroscopy, it is important to understand that outlier spectra can occur, influenced by various factors. For example, if the sample in the bioreactor is not evenly mixed, this can lead to deviations in the spectra obtained. Raman spectroscopy derives its readings from the average properties of the area illuminated by the laser. Therefore, a lack of homogeneity in the sample can lead to inconsistent measurements.
Moreover, the components of the sample can play an important role. If components fluoresce under the laser light of the Raman spectrometer, the resulting fluorescence could overshadow the Raman signal and distort the spectra. Additionally, bubbles or particles in the bioreactor can cause scattering or absorption of the laser light, resulting in unpredictable spectra.
Given these potential sources of error, it is important to carefully identify and remove outlier spectra during the modelling phase, as described in Section 2.2.3. This step reduced the root mean square error (RMSE) by a further 10%, in addition to the 30% reduction achieved with the CARS method.
The efficient growth and production of desired products by CHO cells requires specific, strictly controlled conditions in the bioreactor. These conditions include the regulation of pH and temperature, which affect cell metabolic rate, protein folding and expression levels. Equally important is the careful control of nutrient content, especially glucose and glutamine, according to a predetermined profile for the duration of the batch.
Another critical factor is the control of inhibitor concentrations. Metabolic by-products such as ammonia and lactate can potentially inhibit cell growth and protein production if they reach high concentrations. Since glucose is the primary source of energy, its concentration directly affects cell metabolism. Too little glucose can starve cells and inhibit growth, while too much glucose can cause osmotic stress or trigger overproduction of waste products such as lactate.
Given these complexities, the use of an automated bioreactor control system is essential for CHO cell cultivation. Such a system offers several advantages, including maintaining consistent conditions, real-time monitoring, reducing human error and improving efficiency and scalability. Given the significant costs associated with realistic bioreactor experiments, the development of a simulation environment is essential. This environment enables the creation of control algorithms and the evaluation of the effects of different parameters on cell growth and productivity.
The main reason for the lack of advanced automated control techniques in cell culture bioprocesses and bioreactor operations is that these techniques require robust and reliable measurement methods that are available on site. Concentrations of nutrients and metabolites, cell densities and viability are often not measured online; they remain uncontrolled or are only controlled manually with long sampling intervals (12-24 h, as shown in Figure 9). As a result, possible process disturbances may only be detected after long delays, making it difficult to take corrective action and increasing the risk of batch losses.
For the development of an advanced simulation environment, the choice of a CHO kinetic model is also crucial. The chosen model should represent the complex kinetics of CHO cells in sufficient detail. Simpler models based on the Monod equation, for example, are often inadequate in this respect. More complex models, however, pose the challenge of determining numerous parameters that can only be accurately determined with a suitable optimisation method and sufficiently heterogeneous data. In our study, the parameters of a dynamic model of CHO cell kinetics were successfully determined using the PSO method.
To enable the development of a predictive control algorithm, the complex kinetics model will be simplified and linearised, and online adjustment of the (adaptive) model parameters will be facilitated. This adjustment is made possible by an optimisation method that uses the measurements of the current batch to facilitate the online parameterisation.
Future efforts include the development of a model predictive control algorithm based on the simplified model of CHO cell kinetics. Subsequently, the monitoring and control system will be integrated into a real bioreactor. Finally, a practical test of the implemented system will be carried out.
Conclusions
This study demonstrates the significant advances towards fully automated feeding of CHO cells achieved through the development of advanced models, soft sensors and a novel simulation environment. The research required a thorough understanding of various chemometric methods and demonstrated their context-specific application in combination with Raman spectroscopy. It demonstrated the effectiveness of CARS-PLS and an outlier removal method in overcoming difficult challenges such as high dimensionality, multicollinearity and outlier detection. The models created are versatile and scalable and can be applied to a wide range of products, media and cell lines based on CHO host cells. They can be conveniently scaled up for use in large pilot studies and extensive manufacturing processes. However, the success of these methods depends not only on the right choice of techniques, but also crucially on the quality of the input data. Therefore, the preprocessing of the data to remove interfering signals is of the utmost importance. Raman spectra are of little direct use on their own, but when integrated with the appropriate models, they allow for the creation of a sophisticated measurement system. This system, which consists of soft sensors, is used for real-time monitoring and control of important process variables. The measurements reconstructed with these soft sensors play a crucial role in the design of the simulation environment, which significantly accelerates and reduces the cost of developing control algorithms and thus the automated nutrient dosing system. In essence, this study provides essential insights into the pragmatic application of Raman spectroscopy and innovative methods that form a solid foundation for further research and development in the field of automated cell feeding.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data is not publicly available due to trade secrets.
Conflicts of Interest:
The authors declare no conflict of interest.
Magnetic resonance imaging for assessment of cerebrovascular reactivity and its relationship to cognition: a systematic review
Background Cerebrovascular reactivity (CVR) refers to the responsiveness of cerebral vasculature to vasoactive stimuli. CVR is an indicator of brain health and can be assessed using vasodilatory techniques and magnetic resonance imaging (MRI). Using such approaches, some researchers have explored the relationship between CVR and cognition; here we systematically review this work. Results We extracted information pertaining to: (1) study location and design, participant characteristics, sample sizes, (2) design of vascular challenge, end-tidal CO2 (etCO2) concentrations (if applicable), (3) MRI protocol, (4) cognitive assessment, (5) CVR values, and outcomes of statistical analyses with cognitive tests. Five studies assessed participants with cognitive impairment compared to controls, one studied patients with multiple sclerosis with or without cognitive impairment compared to controls, one examined patients with moyamoya disease with or without cognitive impairment, two investigated patients with Type 2 diabetes mellitus (T2DM), and one was a cross-sectional study with younger and older healthy adults. Cognition was typically probed using the MMSE and tests of executive function, while a number of vasodilatory techniques were employed. Conclusion CVR was associated with cognition in six of ten studies, but heterogeneity of study samples, designs and vasodilatory methods may have a role in the inconsistent findings. We make recommendations for future research that includes use of a multi-domain cognitive assessment and standardised hypercapnic challenge with MRI.
Background
Rising life expectancies, together with declining fertility rates, are leading to rapid global ageing. It is estimated that by the year 2050 the proportion of people aged over 60 years will double from approximately 11 to 22% worldwide [1]. As the population ages, the number of older adults living with impaired cognition and dementia continues to increase. While a variety of mechanisms are thought to contribute to the genesis of cognitive impairment, there is emerging evidence that the signaling between various elements of the neurovascular unit becomes dysfunctional with increasing age, leading to neurovascular uncoupling and dysregulation of cerebral blood flow (CBF) in response to neuronal and metabolic demands [2,3]. Cerebrovascular reactivity (CVR) refers to the response of cerebral blood vessels to vasoactive stimuli. Dysfunctional CVR impairs blood delivery to brain regions requiring supply, which both precedes and contributes to neuropathology over time. Impaired CVR has been implicated in a wide range of disorders including stroke [4-6], multiple sclerosis [7], hypertension [8], diabetes [9,10], cardiovascular disease [11] and dementia [12-15]. Further, diminished reactivity has been found to contribute to mild cognitive impairment in the non-clinical general population [16].
This potential link between CVR and cognitive impairment is interesting as it suggests that optimal functioning of the cerebral circulatory system is important for maintaining cognitive functions. The relationship between cognitive decline and numerous vascular anomalies, including stiffness of the peripheral arteries and aorta [17,18], hypoperfusion [19,20], cerebrovascular disease [21], and pathology of the carotid arteries [22,23] has been well established in the literature. To date however, the relationship between CVR and cognitive functions has been poorly understood.
CVR is generally measured as a change in some index of blood flow (e.g., blood flow velocity measured with ultrasound or blood oxygen level dependent (BOLD) signal change measured with fMRI) in response to a vasoactive stimulus. Hypercapnia (increased blood carbon dioxide (CO 2 ) concentration) is the most often used stimulus to elicit increased blood flow via vasodilation. Hypercapnia can be induced in several ways including inhalation of CO 2 -enriched air, breath-holding, and rebreathing. While there are numerous vasoactive challenges that can elicit a change in blood flow required for the assessment of CVR, inhalation of CO 2 -enriched air is most suitable due to the practicality of its use and the ease with which it can be standardized [24]. Acetazolamide, a carbonic anhydrase inhibitor, has the same capacity to dilate the cerebral microvasculature via increasing carbonic acid in the arterial blood, and is often used to elicit vasodilation in studies of CVR [25,26].
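In its simplest form, a BOLD-based CVR estimate is the slope of the percent signal change regressed on end-tidal CO2; the sketch below illustrates this for a single voxel or region, ignoring haemodynamic delay and preprocessing, with placeholder inputs.

```python
import numpy as np

def cvr_percent_per_mmHg(bold, etco2):
    """CVR as the slope of percent BOLD signal change regressed on end-tidal CO2 (mmHg)."""
    bold_pct = 100.0 * (bold - bold.mean()) / bold.mean()   # percent signal change
    slope, _intercept = np.polyfit(etco2, bold_pct, 1)      # least-squares linear fit
    return slope                                            # % BOLD change per mmHg etCO2

# Example (hypothetical inputs): cvr = cvr_percent_per_mmHg(bold_timeseries, etco2_timeseries)
```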
Likewise, various tools can be employed to measure the change in blood flow. Most frequently used is the transcranial Doppler ultrasound (TCD). This method is inexpensive, easy to use, non-invasive, is viable for use with large cerebral vessels and has high temporal resolution. However, TCD has low spatial resolution; hence precise regional investigations cannot be performed. Single-photon emission computed tomography (SPECT), positron emission tomography (PET) and other computed tomography (CT)-based technologies also exhibit poor spatial resolution, but are further complicated by the necessity of exposing participants to ionizing radiation. Advances with MRI-based imaging have overcome these limitations whereby CVR assessments can be performed without the use of exogenous contrast agents, and with high spatial resolution so that the responsiveness of blood vessels within discrete brain areas may be studied independently.
Research investigating the relationship between vascular reactivity and cognitive performance has commonly used CT or TCD technology, demonstrating reduced CVR in cognitively impaired patients [12,14,27,28].
Studies using TCD have shown significant relationships between CVR and cognitive status assessed with the mini-mental state examination (MMSE) [28], and with tests of executive function, attention and memory [29]. However, the lack of regional specificity of TCD does not enable an examination of region-specific relationships between CVR and cognitive abilities. To address this apparent gap in the literature, the current work aims to systematically review all research articles investigating the association between cognitive performance and cerebrovascular reactivity to a vasoactive stimulus measured using MRI.
Search criteria
Searches were conducted using Pubmed and Scopus from earliest record until 15th July 2017. Search terms were entered as follows: Pubmed (cognition OR cognitive OR memory OR attention) AND ("cerebral vascular reactivity" OR "cerebrovascular reactivity" OR "cerebral vasoreactivity" OR cvr OR "cerebral vasomotor reactivity" OR "vasomotor responsiveness" OR "cerebrovascular responsiveness") AND (Humans [Mesh]); and Scopus (TITLE-ABS-KEY (cognition OR cognitive OR memory OR attention) AND TITLE-ABS-KEY ("cerebral vascular reactivity" OR "cerebrovascular reactivity" OR "cerebral vasoreactivity" OR cvr OR "cerebral vasomotor reactivity" OR "vasomotor responsiveness" OR "cerebrovascular responsiveness")) AND (LIMIT-TO (DOCTYPE, "ar")).
Only studies published in English using MRI-based CVR assessments and conducted with adult (> 18 years) humans were included. Exclusion criteria included animal studies, CVR assessed with imaging modalities other than MRI, not performing a cognitive/neuropsychological assessment, or not analysing the associations between CVR and cognition. The reference lists of the included studies were also searched.
Quality assessment and extracted information
Studies deemed eligible were checked for quality using the NIH Quality Assessment Tool for Case-Control Studies and the NIH Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies, following the PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) guidelines. Information extracted from the studies related to the country, year of publication, MRI technique and analysis, vasodilatory challenge, CVR values, cognitive/neuropsychological assessment, and participant demographics including age, gender, years of education, cognitive profile and other health status information where available.
Two studies investigated the differences in CVR and cognition between patients with type 2 diabetes mellitus (T2DM) versus healthy controls [10,32]. Metzger et al. [36] assessed cerebral vasoreactivity and cognitive status in multiple sclerosis patients versus healthy controls. Two studies included patient samples with mild cognitive impairment (MCI) and Alzheimer's disease (AD) matched with healthy controls [13,31], while two examined only MCI and healthy controls [35,37] and one paper examined only AD versus healthy controls [30]. Calviere et al. [34] investigated CVR and cognitive impairment in patients with moyamoya disease. The work by Gauthier et al. [33] was a cross-sectional study assessing differences between groups of healthy younger and older adults. All studies were single-visit examinations, with the exception of Chung et al. [32] which was longitudinal; participants were assessed at baseline and 2-year follow-up. Table 1 displays patient characteristics.
Vascular challenge paradigm
Vascular challenges varied between studies. Five elicited hypercapnia via fixed-level CO 2 -enriched gas inhalation. Concentrations varied between 5% CO 2 in medical air [30,35], 7% CO 2 in medical air [31], 7% CO 2 in 93% oxygen (carbogen) [13] and 8% CO 2 -enriched gas (BACTAL ® ) [36]. Two studies used CO 2 rebreathing as the hypercapnic manipulation, though both failed to report the length of the rebreathing period, and the size of the reservoir used [32,37]. Tchistiakova et al. [10] used the breath-hold technique, wherein participants performed a series of 6 × 15 s breath holds following 3 s of exhalation with 30 s of intermittent regular breathing. Gauthier et al. [33] used a computer-controlled gas delivery system which prospectively targeted the partial pressure of expired CO 2 (etCO 2 ) to 40 mmHg for normocapnia, and 45 mmHg for the hypercapnia period, whilst maintaining the expired O 2 (etO 2 ) at 100 mmHg throughout the procedure. The duration of gas delivery also varied across these studies, see Table 2 for details. The remaining study elicited vasodilation via injection of 15 mg/kg acetazolamide [34].
Cognitive/neuropsychological assessment
While the majority of the studies reported more than one cognitive assessment, we were primarily interested in the tests that were analysed in connection with CVR. All but three studies [10, 34,36] reported mini-mental state examination (MMSE) [38] scores. Of the seven studies that reported MMSE, four investigated the relationship between CVR and MMSE score [13,30,31,37]. Results are reviewed in the discussion section.
Assessments of executive function in connection with CVR were included in five studies, including Chung et al. [32]. The multiple sclerosis study used the BCcogSEP [39], a battery designed to evaluate cognitive impairment in multiple sclerosis. Tasks included assessments of verbal short-term memory, visual perception, digit spans, working memory, processing speed, a go/no-go test and verbal fluency. Cognitive status was defined by this evaluation: patients were classified as cognitively impaired if they scored below the 5th percentile of the normative mean of the BCcogSEP on at least 4 subtests. Other cognitive tasks that did not overlap between studies are outlined in Table 1.
MRI data acquisition
BOLD fMRI was used in six of the studies [10, 13,30,31,35,36]. Three papers employed the arterial spin labeling (ASL) MRI technique to measure changes in brain perfusion. Of these, one used pulsed ASL (PASL) [37], one used continuous ASL (CASL) [32] and the final employed pseudo-continuous ASL (pCASL) [33]. However, in the work of Gauthier et al. the images acquired with pCASL sequence were separated into BOLD and CBF time-series data, of which only the BOLD information was used in CVR analysis. Therefore, this work is considered to be a BOLD imaging study. The remaining study used dynamic susceptibility contrast-enhanced (DSC) MRI [34]. All but two studies [13,34] used an MR scanner with magnetic field strength of 3T. MR protocol information is displayed in Table 2.
Summary of regional CVR findings
The studies that employed BOLD imaging assessed CVR in various regions-of-interest (ROIs) with some contrasting findings. Results are shown in Table 2. Cantin et al. [13] observed regional impairment in CVR between healthy controls and patients with cognitive impairment, particularly in posterior brain areas, whereas Yezhuvath et al. [30] reported CVR deficits in more rostral regions in patients with AD compared to healthy controls. Cantin et al. [13] investigated CVR in several regions: frontal, parietal, temporal and occipital lobes, the cingulum, the insula, the striatum and the thalamus. Yezhuvath et al. performed a voxel-wise regression and region-of-interest (ROI) analysis using 6 regions: the occipital lobe, temporal lobe, frontal lobe, parietal lobe, insular cortex and subcortical grey matter. This is in contrast to the work by Thomas et al. [35], who found no differences in reactivity between adults with amnestic mild cognitive impairment compared to controls using a voxel-wise comparison of whole-brain grey matter CVR maps. Metzger et al. [36] calculated CVR in 8 regions of interest (ROIs): occipital, parietal, temporal, frontal, insula, cingulum, thalamus and striatum, as well as a global median. CVR in all ROIs was significantly reduced in MS patients with cognitive impairment compared to those who were not cognitively impaired.
Another study [31] analysed the BOLD data at the level of overall CVR effect, differences between lobes (7 lobes were delineated as per previous work), brain regions (88 cortical and subcortical regions included), and finally the associations between CVR velocity and the cognitive assessment scores. CVR velocity refers to the temporal dynamics of the CVR response, representing the rate of the vasodilation. It was observed that the largest differences in CVR between AD and healthy controls were seen in the frontal and occipital lobes.
Calviere et al. [34] used 22 ROIs manually drawn on the bilateral frontal and temporoparietal areas of the cerebral cortex, and reference areas in the cerebellum. The mean transit time (MTT) and cerebral blood volume (CBV) values from each area were estimated from perfusion weighted image analysis, and averaged to give one measure from each region. Ratios of CBV in the frontal and temporoparietal areas were calculated relative to the cerebellar CBV, which was used as a control region. The CVR values for each region were estimated from the CBV values relative to the cerebellum. Frontal CVR was lower in cognitively impaired patients with moyamoya disease than those without cognitive impairment.
Gauthier et al. examined CVR using a pseudo-continuous ASL (pcASL) sequence. BOLD data was acquired and intersected with areas of significant signal change in response to the vasoactive stimulus observed with ASL data using cluster analysis to define one frontal ROI [33]. This region was found to be slightly lower in reactivity in older adults compared to younger, yet this difference was not significant, nor was CVR in this region associated with cognitive function.
Chung et al. [32] investigated CVR in the frontal, temporal, parietal, occipital and insula lobes of the brain, as well as calculating a global CVR index. The frontal and parietal lobes were associated with change in executive function in patients with T2DM, but not in healthy controls.
Tchistiakova et al. [10] performed a functional ROI analysis, and reported reduced CVR in several regions in those with both hypertension and T2DM compared to hypertension alone: in the left hemisphere (pericalcarine cortex), the right hemisphere (inferior parietal, lateral occipital and precuneus) and bilaterally in the cuneus, lingual gyri and superior parietal lobes.
CVR and neuroimaging correlates of cognitive dysfunction
Seven studies explicitly mentioned correcting for partial volume effects, grey matter atrophy or white matter hyper-intensities (WMH) in their image analyses [10, 13,30,31,33,36,37]. In one remaining paper the authors made mention of normalising the perfusion signal for tissue volume, yet did not give further information on the specifics of this procedure [32].
Several studies examined the relationship of CVR to WMH, with some mixed results. Gauthier et al. [33] showed that age, gender and volume of WMH accounted for a significant amount of variance in frontal CVR. Similarly, Yezhuvath et al. [30] found that lower CVR was associated with greater volume of WMH in their cohort of AD and healthy controls. Yet another study investigating the association of grey matter CVR, cardiovascular risk factors and periventricular WMH found that these parameters were intercorrelated [37]. In contrast, Richiardi et al. [31] reported that there was no significant association between severity of WMH and CVR velocity in their cohort of AD, aMCI and healthy controls. Similarly Metzger et al. [36] found that there was no association between CVR and WMH in MS patients, healthy controls or the cohort as a whole.
Of the reviewed papers, only one investigated hippocampal atrophy in relation to reactivity, and it was found to negatively correlate with CVR in the occipital, parietal, striatum and temporal ROIs [13].
Calculation of CVR
Metzger et al. [36] did not monitor end-tidal CO2 (etCO2) throughout their experiments, thus they were unable to use this trace as a regressor in their modelling of CVR. This study used mean etCO2 obtained from a standard population as a regressor in their general linear model (GLM). Richiardi et al. [31] did not monitor etCO2 either. In this work, two CO2 regression coefficients were calculated analytically to reflect the CVR amplitude and velocity separately, though the authors only report velocity in this paper. These regression coefficients were calculated from mathematical models of nominal and slow etCO2 responses to a CO2 challenge, to represent the expected responses in healthy subjects and those with slower vessel dilation, respectively. CVR velocity was defined in this paper as the rate of vasodilation. While the method of CVR estimation here is acceptable, the unavailability of etCO2 data potentially limits the strength of these findings. Similarly, Tchistiakova et al. [10] did not record etCO2 throughout the hypercapnic procedure; these researchers calculated CVR as the % change in BOLD signal during 6 × 15 s breath-holds.
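To illustrate the modelled-regressor idea used when no etCO2 trace is available (a loose sketch only; the analytic regressors of Richiardi et al. [31] are constructed differently), one can low-pass a hypothetical CO2 block with a fast and a slow time constant and fit both regressors to the BOLD signal by least squares. All timings, time constants and the toy BOLD trace below are assumptions for illustration.

```python
import numpy as np

dt, n = 1.0, 300                                   # assumed 1 s sampling, 300 s scan
t = np.arange(n) * dt
block = ((t >= 60) & (t < 180)).astype(float)      # hypothetical CO2 challenge period

def modelled_response(block, tau):
    """Exponential low-pass of the CO2 block with time constant tau (s)."""
    out, y = np.empty_like(block), 0.0
    for i, b in enumerate(block):
        y += (b - y) * (dt / tau)
        out[i] = y
    return out

nominal = modelled_response(block, tau=10.0)             # 'healthy' fast dilation
slow = modelled_response(block, tau=40.0) - nominal      # captures sluggish responses

X = np.column_stack([np.ones(n), nominal, slow])         # GLM design matrix with intercept
bold = 0.8 * nominal + 0.3 * slow + np.random.default_rng(2).normal(0, 0.1, n)  # toy signal
beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
print("amplitude-like coefficient:", beta[1], " velocity-like coefficient:", beta[2])
```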
Calviere et al. [34] used the regional cerebral blood volume (rCBV) ratio from the regions of interest (ROIs), expressed relative to the CBV of the cerebellum (control region). No etCO2 was recorded in this study as vasodilation was elicited via injection of acetazolamide, thus the calculation used in this study was: CVR = ([rCBV ratio after acetazolamide − rCBV ratio before acetazolamide] / rCBV ratio before acetazolamide) × 100.
Chung [32] used a rebreathing paradigm to assess vasodilation, vasoconstriction and vasoreactivity separately. Vasodilation was measured as the perfusion increase from baseline during CO 2 rebreathing normalised to the change in etCO 2 between baseline and rebreathing. Vasoreactivity was defined as the best-fitting slope between normal breathing, vasodilation and vasoconstriction. It should be noted that the 'gold standard' for CVR measurement is more likely the whole vasodilatory range of hypocapnia (elicited by hyperventilation) to hypercapnia [40]. However, CVR is most commonly calculated as the difference in CBF (or surrogate) between baseline and during a vascular challenge divided by the change in etCO 2 between these conditions, thus the vasodilation measure is taken as CVR, not the vasoreactivity measure in this instance.
The remainder of the studies estimated CVR using the standard calculation:

CVR = ((MRIparameter_dil − MRIparameter_rest) / MRIparameter_rest) × 100 / ΔetCO2

where MRIparameter_dil is the CBF or BOLD signal measured during the vasodilated period; MRIparameter_rest is the CBF or BOLD signal measured at baseline; and ΔetCO2 is the difference in end-tidal CO2 in mmHg between the two conditions.
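For concreteness, this standard calculation can be written as a small helper; the numbers in the example are hypothetical, and the function applies equally to scalar ROI averages or voxel-wise arrays.

```python
import numpy as np

def cvr_percent_per_mmHg(signal_dilated, signal_rest, etco2_dilated, etco2_rest):
    """Standard CVR estimate: % signal change per mmHg change in end-tidal CO2.

    `signal_*` are mean BOLD or CBF values (scalars or voxel-wise arrays);
    `etco2_*` are end-tidal CO2 values in mmHg for the two conditions.
    """
    delta_etco2 = etco2_dilated - etco2_rest
    pct_change = (np.asarray(signal_dilated) - np.asarray(signal_rest)) / np.asarray(signal_rest) * 100.0
    return pct_change / delta_etco2

# Example: a 3% BOLD increase for a 5 mmHg rise in etCO2 gives CVR = 0.6 %/mmHg.
print(cvr_percent_per_mmHg(1030.0, 1000.0, 45.0, 40.0))
```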
Relationship between CVR and cognition
Of the four papers that analysed the association of CVR to MMSE score, two reported significant positive correlations [13,31], and two reported no relationship [30,37]. Metzger's work found that CVR was lower in MS patients with cognitive impairment compared to non-impaired patients [36], supporting the findings of Calviere et al. [34], who reported that CVR was significantly reduced in patients with moyamoya disease and dysexecutive cognitive syndrome (DCS) compared to patients without DCS. This is in contrast to the results of Thomas's study, which concluded that whole-brain grey matter CVR was not significantly different between MCI and healthy control groups [35].
Tchistiakova et al.'s [10] research involved three measures of cognitive function, none of which were found to correlate with CVR. These measures were tests of memory, processing speed and executive function. A second study [33] also reported no significant association between CVR and executive function as measured by a Stroop task, yet the work by Chung et al. [32] found that CVR decline was linked to a decrease in executive function in T2DM at 2-year follow-up. These results are further discussed below.
Discussion
This paper systematically reviewed research articles that examined the association between cognition and cerebrovascular reactivity (CVR) using MRI. Six out of ten studies described significant relationships between CVR and cognition, including a longitudinal study which reported that lower CVR was predictive of cognitive decline over a 2-year period. The association of CVR to cognition is more established in individuals with cognitive dysfunctions, while this link is less well-known in cognitively normal adults. There was an over-reliance on imprecise measures of cognition, and the vascular challenges used to measure CVR varied widely.
CVR is reduced in adults with cognitive dysfunction
CVR was consistently lower in cognitively impaired adults versus healthy controls, or patients without cognitive impairment in the reviewed research (6 of 10 studies). Two studies reported significant correlations between cognition measured by MMSE and CVR in multiple brain regions [13,31]. These investigations also observed that CVR was significantly reduced in AD and MCI patients, and that AD patients had significantly slower responses to hypercapnia (i.e. CVR velocity was reduced), compared to healthy controls. In contrast, Glodzik et al. and Yezhuvath et al. [30,37] reported that CVR was not directly related to cognition measured using the MMSE. However, in both of these studies patients with cognitive impairment had lower reactivity than matched healthy controls, seen in the hippocampus in Glodzik et al. [37] and in the prefrontal, anterior cingulate and insular cortices in the study by Yezhuvath et al. [30]. Two other studies reported that CVR was significantly reduced in participants with cognitive impairment compared to those who were cognitively normal [34,36].
These findings are supported by evidence using other modalities linking dementia severity with cerebrovascular responsiveness [27,28]. Transcranial Doppler (TCD) ultrasound is often used to measure changes in CBF velocity in investigations of CVR. This method, while temporally precise, lacks spatial resolution, thus its practicality in regional CVR examinations is limited. Nonetheless, research conducted into the relationship between CVR and cognition with TCD has shown interesting results. Silvestrini et al. [28] reported that CVR as measured using the breath-hold index and TCD was the sole predictor of cognitive decline in patients with AD. Moreover, breath-hold index has been found to be associated with early cognitive impairment [41], as well as an increased risk of conversion from MCI to AD [27]. A systematic review of TCD analyses found that CVR to hypercapnia was a good differentiator of dementia subtypes across multiple studies [42]. Overall, the results of the reviewed studies lend support to the hypothesis that CVR and cognitive functioning are linked, evidenced by findings of reduced reactivity in patients with cognitive impairment compared to cognitively healthy controls.
While the data reviewed is suggestive of reduced vascular reactivity in individuals with cognitive impairment, a definitive relationship between CVR and cognition in cognitively healthy adults was not identified. Only one study focused exclusively on cognitively normal adults without chronic health conditions [33], whilst five studies included healthy controls as compared to patients with cognitive impairment, and examined CVR and cognition within these participants [13,[30][31][32]35]. Two investigations compared patients with cognitive impairment to those without. Metzger studied MS in relation to healthy controls [36], whilst Calviere et al. [34] investigated only individuals with moyamoya disease (MMD). The remaining study investigated CVR and cognition in cognitively normal individuals with hypertension with or without co-morbid T2DM [10]. Within the reviewed studies, imprecise methods were used for evaluation of cognitive function and CVR. Reliance on the mini-mental state exam (MMSE) as the main assessment of cognition in several studies [13,30,31,37] necessitates some caution, as this measure may not be sufficiently sensitive to variation in cognitive capability, nor does it allow for distinction between different cognitive domains [38]. This is evidenced by the findings of Richiardi et al. [30] in which no evidence of a relationship between CVR and cognitive performance was found using measures of global cognitive function, yet a significant correlation was observed with language ability. The MMSE is specifically designed as a screening tool for distinguishing between individuals with and without gross cognitive impairment [38], and as such its usefulness for precise cognitive assessment is not ideal.
Among research that assessed cognitively healthy cohorts and those assessing the cognitive capabilities of individuals with MS or MMD, tests of executive function were used. However, the specific tests used to define this construct varied between the five studies, including tasks of inhibitory control, task-switching, verbal fluency, and processing speed, among others [10, 32-34, 36] (see Table 1 for details). In the two studies assessing patients [34,36], batteries of neuropsychological tests examining executive function (amongst others) were used to determine cognitive status. While results of two studies showed that executive function was not directly correlated with CVR in either frontal cortex [33] or averaged across the whole brain [10], the two patient studies both reported that CVR was significantly lower in individuals with cognitive impairment compared to those without. This was observed in the frontal region in MMD [34], and in the whole-brain grey matter, as well as in a region-of-interest analysis comprising multiple brain areas, in MS [36]. Similarly, Chung et al. [32] observed that global CVR was positively associated with executive function in patients with Type 2 diabetes mellitus (T2DM). In T2DM patients, decreased global, frontal and parietal vasodilation at 2-year follow-up was linked to accelerated declines in executive function. The executive function task was composed of separate tasks of verbal fluency and Trail Making Task A, which assesses task-switching and visual attention. The studies that failed to observe any association between CVR and executive function used tasks that assessed interference and flexibility in thinking (Stroop and the Wisconsin Card Sorting Task, respectively), whilst the patient studies that did observe an association defined cognitive status on the basis of multiple executive function tasks. Thus it could be that some aspects of executive function are more related to CVR than others. Notably, it is thought that there are from 3 to as many as 7 distinguishable executive abilities [43], hence a more comprehensive approach to assessment would be necessary to draw definitive conclusions.
Relationship between CVR and cognition may be mediated by cardiovascular risk factors in cognitively healthy adults
There is evidence that CVR is related to executive function in populations with cardiovascular risk [10, 32]. One study [10] reported that CVR was significantly lower in those with comorbid hypertension and T2DM versus participants with hypertension alone. Similarly, Chung et al. [32] reported that higher inflammatory markers in T2DM were linked to greater reductions in CVR, which resulted in accelerated cognitive decline over a 2-year period. Gauthier et al. [33] reported an association between cognitive performance and aortic pulse wave velocity (PWV) in their healthy cohort, yet no direct link between CVR and cognition was observed. This finding was interpreted as indicating that declining vascular health, even in primary stages, negatively impacts cognition. Due to the above-average health of the cohort only minor differences in cerebrovascular properties were seen, as compared to larger changes seen in aortic elasticity between younger and older adults. Small blood vessel changes, coupled with the known low signal to noise ratio (SNR) present in BOLD imaging was posited to explain the unexpected lack of relationship observed between CVR and cognition in this study.
The relationship between cardiovascular risk and CVR was more clearly demonstrated by Glodzik et al. [37], who reported moderate negative correlations between the two in the hippocampus (r = − 0.41) and cortical grey matter (r = − 0.46) in both patients and healthy controls. Likewise, there is evidence of a link between reduced CVR and increased vascular risk in previous studies using MRI [44] and TCD [45]. Together, these findings may indicate that decreased reactivity may be the result of poor vascular health in general, and this is the primary factor triggering neurocognitive decline. Extensive evidence indicates that risk factors for cardiovascular disease precede and facilitate cognitive deterioration in aging [46][47][48].
Cardiovascular factors can result in dysfunctional reactivity in specific brain regions, leading to hypoperfusion which may contribute to cognitive impairment. The discrepancies between these three studies are multifaceted, including the use of different executive tasks, methods of inducing hypercapnia and imaging techniques [10, 32, 33]. These discrepancies limit the generalisability of the findings to a wider cohort; however, there appears to be a possible association between cardiovascular risk factors and CVR which may mediate the relationship between CVR and executive function in cognitively healthy individuals. Further studies are needed to confirm these associations.
Similarly, there is the possibility that the observed relationships between CVR and cognition could be mediated by the presence of other cerebral pathologies known to disrupt cognition, such as white matter hyper-intensities (WMH) and hippocampal atrophy. It is understood that severity of WMH corresponds to cognitive decline [49,50], and evidence has shown that normal-appearing white matter that progresses to WMH has lower CVR than areas that do not progress [51]. Within the reviewed articles, the relationship of CVR to WMH was mixed, with three [30,33,37] of five studies reporting a significant correlation. Interestingly, all three of these papers observed significant relationships between cognition and CVR, thus it is apparent that continued research investigating these associations is necessary.
Methodological considerations
While the results of the reviewed studies are inconsistent, this is likely influenced by heterogeneous samples, imprecise cognitive testing instruments (as outlined above), varying procedures for inducing vasodilation and differences in imaging protocols.
Differences in vascular challenge
All studies induced an increase in cerebral blood flow; however, not all manipulations are equal in their capacity to elicit vasodilation. Whilst the breath-hold method is widely used, inexpensive and efficient in inducing CBF changes, this technique may produce less reproducible stimuli and/or data owing to variable participant compliance, as well as individual differences in breath-hold capacity. Breath-hold and re-breathing procedures during MR imaging also present a potential risk of motion artifacts, which may result in undesirable signal differences [52]. It is well established that the strength and duration of the stimulus affects the cerebrovascular response [53]. Inhalation of a CO2-enriched gas mixture has been shown to be a more reliable means to induce hypercapnia and stimulate the cerebral vasculature [54][55][56].
Prospective targeting of etCO2 has been deemed the most standardisable stimulus for measuring CVR in a recent review paper [24], yet only one study included in the current work employed this technique [33]. It should be noted, however, that the literature is far from a consensus on which vascular challenge is most appropriate for assessment of CVR.
Five studies used the more traditional method of inhalation of fixed-level CO2-enriched gas. Differences may appear somewhat minor: a discrepancy of 2% CO2 concentration between the 7% used by Richiardi et al. [31] and the 5% used by both Thomas [35] and Yezhuvath et al. [30]; and while Richiardi et al. and Cantin et al. [13] used the same concentration of CO2 (7%), the latter study mixed the gas with 93% O2, a mixture known as carbogen, rather than with medical air, which is balanced with N2. The gas concentration utilised by Metzger [36] was slightly higher again (8% BACTAL®), and it should be noted that the composition of this gas mixture was unreported and could not be identified from an internet search.
While these differences in CO 2 concentration may seem trivial, the evidence suggests that the relationship between BOLD signal and PaCO 2 is non-linear, thus CVR results may be dependent on the CO 2 concentration used, as well as baseline PaCO 2 [57]. The use of carbogen (and potentially, BACTAL ® ) as the vasoactive stimulus [13], rather than standard medical air, has implications for the measurement of CVR, particularly when combined with BOLD imaging [54]. The percentage of oxygen present in carbogen is greater than that in the atmosphere, which will result in an increase in arterial partial pressure of O 2 (PaO 2 ), and possible vasoconstriction, confounding the vasodilatory response intended for CVR measurement. By nature, BOLD imaging relies on the ratio of oxygenated to deoxygenated hemoglobin in the blood, and any increase in PaO 2 in the brain will elicit unwanted changes in the BOLD signal. BOLD is also sensitive to changes in blood flow, volume and oxygen metabolism, whilst ASL measures flow only, and is not affected by changes in blood oxygenation.
Two studies employed CO2 rebreathing [32,37]. Both of these papers lacked information regarding the volume of the respiration reservoirs used and the length of the rebreathing period in the paradigms, hampering comparability. Chung et al. [32] also failed to report the end-tidal CO2 values. The speed at which the partial pressure of CO2 (PaCO2) rises will be affected by respiration rates and the volume of the rebreathing reservoir, ultimately influencing the measured CVR value. Likewise, breath-holding may produce confounding variables, as the rise in PaCO2 during breath-holding varies between individuals due to differences in lung size and metabolic rate [24]. This method also relies heavily on participant compliance and may be difficult or uncomfortable for some to perform [58].
An acetazolamide challenge was used in one study [34]. Whilst this method is safe, does not rely on subject cooperation and is widely used in clinical settings, administration via injection is invasive, and a standardized dose may not produce the replicable stimulus necessary for CVR estimation due to individual variability [24]. For the purposes of participant comfort, less invasive stimuli would be preferable for measurement of CVR.
There are multiple options for inducing an increase in cerebral blood flow; however, future research in this area would benefit from a more standardized and reproducible approach, particularly if the purpose is a simple measure of cerebrovascular response amplitude. Inhalation of CO2-enriched gas is an easily implemented and standardizable method, with fewer contraindications than rebreathing and breath-holding. While computer-controlled etCO2 prospective targeting is the most clinically standardizable technique, it requires expensive equipment which is not readily available in most research facilities. At a minimum, researchers should take care to provide sufficient information regarding vascular challenge techniques so that comparisons may be made between studies.
Variations in imaging protocol
ASL, BOLD and DSC imaging methods are all considered valid for measurement of CVR, yet the results from these are not directly comparable. While MR imaging has a clear advantage of spatial specificity over ultrasound and CT-based methods, all three methods present possible drawbacks in regard to measuring CVR. BOLD, the most commonly used method, acquires images via the complex combination of blood volume, flow and oxygenation metabolism in the brain and thus is affected by subtle variations in any of these parameters, despite not directly measuring blood flow per se. BOLD is also known to be more sensitive to the baseline level of vascular tension than perfusion MRI [59]. ASL, while being more physiologically precise, has limited spatial coverage, lower signal to noise ratio, and is generally considered less sensitive for measures of CVR [33]. As BOLD is more commonly used and also more accessible on conventional MRI scanners, it is the currently preferred sequence for CVR measurements, although rapid development of new ASL pulse sequences enabling global brain coverage may render it the favored method in the future. Both ASL and BOLD MRI have been widely used in studies of hemodynamic function and cognitive performance in both healthy [60,61] and patient samples [62,63]. DSC is less commonly employed in CVR measurement studies, most likely due to the necessity of an injection of an exogenous contrast agent. Both BOLD and ASL are non-invasive, well tolerated and easily repeatable, thus either of these methods are considered preferable over DSC MRI for measurement of CVR in research studies.
Conclusion
The connections between hemodynamic dysfunction and cognitive impairment observed in the majority of these studies warrant further investigation. Those affected by cognitive impairment were more likely to exhibit decreased CVR compared to healthy controls, as were individuals with greater cardiovascular risk. Previous research using alternative methods has given strong indication of a causal relationship between dysfunctional CVR and cognitive deterioration. Given that vascular risk factors are often modifiable, development of vaso-protective therapies may prevent or slow the progression of cognitive decline.
Because much remains to be investigated regarding which type of vasoactive modulation and imaging protocol provides the richest data for assessing vascular function, recommendations for measurement of CVR response amplitude include the inhalation of a set concentration of CO2-enriched gas, in combination with either ASL or BOLD MRI, provided that the whole brain is imaged. Future research should also employ more comprehensive neuropsychological examination to further unravel the nature of the association between cerebrovascular reactivity and cognition.
Bistable Perception Modeled as Competing Stochastic Integrations at Two Levels
We propose a novel explanation for bistable perception, namely, the collective dynamics of multiple neural populations that are individually meta-stable. Distributed representations of sensory input and of perceptual state build gradually through noise-driven transitions in these populations, until the competition between alternative representations is resolved by a threshold mechanism. The perpetual repetition of this collective race to threshold renders perception bistable. This collective dynamics – which is largely uncoupled from the time-scales that govern individual populations or neurons – explains many hitherto puzzling observations about bistable perception: the wide range of mean alternation rates exhibited by bistable phenomena, the consistent variability of successive dominance periods, and the stabilizing effect of past perceptual states. It also predicts a number of previously unsuspected relationships between observable quantities characterizing bistable perception. We conclude that bistable perception reflects the collective nature of neural decision making rather than properties of individual populations or neurons.
Introduction
Certain visual displays are not perceived in a stable way but, from time to time and seemingly spontaneously, their phenomenal appearance wavers and settles in a distinctly different form. This phenomenon is called bistable perception and occurs with a variety of ambiguous visual displays (e.g., [1]), as well as with ambiguous stimuli in the auditory (e.g., [2]) and tactile domains [3]. The most extensively studied instance is binocular rivalry [4][5][6][7], where the phenomenal experience of an observer alternates between two images that are continuously presented to the left and right eye, respectively. In spite of the somewhat 'unnatural' method of stimulus delivery, there is good evidence that binocular rivalry shares the typical properties of other instances of bistable perception [8][9][10][11].
One typical property of bistable perception is that phenomenal appearance shifts irregularly, so that a particular appearance lasts for varying lengths of time. The average such ''dominance time'' varies by one or two orders of magnitude (typically seconds to tens of seconds) between individual observers [12,13] and between different bistable displays [10,11,14,15]. Even for the same observer and same display, dominance times vary substantially with stimulus intensity [16,17], with attention [18][19][20][21], and when a display is periodically interrupted [22][23][24]. In some cases, the average dominance time experienced by a given observer on a given display under different stimulus regimes may differ by two orders of magnitude [21].
Another typical property is that the statistical distribution of dominance times is well approximated by a Gamma function [14,25,26]. In general, the shape parameter r of the Gamma function falls into a surprisingly narrow range with values from 3 to 6 [25][26][27][28][29][30], although values from 2 to 20 have also been reported (e.g., [31]).
Whereas bistable perception was long considered a ''memoryless'' process [25,27,28,31], it has become clear that phenomenal appearance can be influenced by past perceptual states. For example, when the presentation of an ambiguous display is interrupted and later resumed, the dominant appearance often remains the same [22][23][24]. This persistence of the dominant appearance stabilizes perception considerably, slowing or even arresting perceptual reversals for intermittently presented displays. The 'memory' in question reflects a longer history of dominance periods, not merely the last dominance period before the stimulus interruption [32,33].
It is not known what mechanisms allow a 'memory' of perceptual appearance to persist and to influence the appearance of subsequent stimulation. One possibility is adaptation states at the level of perceptual representations, as such states are known to persist over short stimulation gaps and to influence subsequent appearance [32,34,35]. Another possible mechanism would be some kind of short-term or working memory at post-perceptual levels of processing [24,36]. Qualitatively, the effect of 'memory' can be summarized as follows: the longer an appearance has dominated perception in the recent past, the more likely it is to dominate perception again. The effect of 'memory' is evident for continuous and, more markedly, intermittent stimulation, and appears to be comparatively long-lasting (i.e., minutes rather than seconds [33,37]). We propose a model for the dynamics of bistable perception with two novel elements: (i) stochastic integration over multiple meta-stable populations and (ii) two separate levels of representation (sensory information and phenomenal experience). Our central intuition is that perceptual bistability reflects the collective properties of many meta-stable populations rather than specific biophysical properties of single neurons (see also [38]). Together, these two elements account for several hitherto puzzling aspects of bistable perception, including the wide range of time-scales of perceptual alternations, the existence and characteristics of memory effects, the highly conserved shape of dominance distributions, and others. Our model predicts the perceptual dynamics of bistable displays for a variety of stimulation regimes, including continuous and intermittent presentation. Although formulated at the level of abstract populations, our model could readily be extended to a biophysically detailed description of spiking neurons. As our model aims to account for comparatively slow processes (O(10 s)), it neglects phenomena such as fast adaptation.
Several computational accounts for binocular rivalry have been proposed previously. All postulate some form of reciprocal inhibition between two rivaling representations [39][40][41][42][43]. Some recent models are biophysically more realistic and are formulated in terms of spiking neurons. In addition to mutual inhibition, these models postulate some form of fast adaptation for the currently dominant population (in the firing rate, the synaptic efficacy, or both), which curtails dominance times and enforces perceptual reversals [44][45][46]. In yet other models, the effect of adaptation is complemented by noise-driven transitions [17,[47][48][49]. Some recent models have introduced an additional form of slow adaptation in order to account for memory effects [32,34,35]. Finally, to accommodate experimental evidence that several neural levels contribute to binocular rivalry, two recent models [45,50] postulate a feedforward hierarchy of competing levels.
Models
Our model is stochastic and follows the activity of many independent neural populations. Each population is assumed to possess two stable states -an 'inactive' state of low activity and an 'active' state of high activity -and to transition back and forth between these states under the influence of input and noise. Transitions are assumed to occur with certain rates (probabilities per unit time), which in turn will be seen to depend on visual input and on the phenomenal percept.
The model postulates two representational levels, one level of 'evidence populations' (EPs), which integrate visual inputs over short time-scales, and another level of 'memory populations' (MPs), which integrate phenomenal states over longer time-scales. To model the dynamics of binocular rivalry, where there are two possible phenomenal states, we assume two pools of EPs (each with N_EP populations) and two pools of MPs (each with N_MP populations), associating each pool with a different phenomenal state. The four pools and their interactions are shown schematically in Figure 1.
For a pool X (X = EP, MP) with N_X populations, P_X(n,t) denotes the probability that n populations are 'active' at time t, while the N_X − n remaining populations are 'inactive'. Further, ν_+^X denotes the rate of the inactive→active transition and ν_−^X that of the active→inactive transition. We assume that, in the time interval dt, at most one transition can occur, independently of any previous transitions (Poisson process).
Several transition events contribute to the total change dP_X(n,t) over dt. Negative contributions are occasioned by one of n active populations becoming inactive [n ν_− dt P(n,t)], or by one of N_X − n inactive populations becoming active [(N_X − n) ν_+ dt P(n,t)]. Positive contributions arise from one of n+1 active populations becoming inactive [(n+1) ν_− dt P(n+1,t)], or from one of N_X − (n−1) inactive populations becoming active [(N_X − n + 1) ν_+ dt P(n−1,t)]. All four contributions enter into the dynamic equation of pool X:

dP_X(n,t)/dt = −[n ν_−^{X,c} + (N_X − n) ν_+^{X,c}] P_X(n,t) + (n+1) ν_−^{X,c} P_X(n+1,t) + (N_X − n + 1) ν_+^{X,c} P_X(n−1,t)

Here, the superscript X denotes the four pools (evidence and memory populations for two percepts) and the superscript c indicates different transition rates (see below). As long as transition rates remain unchanged, the average number of active populations in a generic pool approaches the asymptotic value n_∞ = N_X ν_+ / (ν_+ + ν_−) with a characteristic time τ = 1/(ν_+ + ν_−). The asymptotic number of active populations is a binomially distributed random variable:

P_X(n, t → ∞) = C(N_X, n) p^n (1 − p)^(N_X − n), with p = ν_+ / (ν_+ + ν_−).

The phenomenal state (i.e., the currently dominant percept) is not represented explicitly in the model. Instead, the EPs and MPs associated with each percept are combined and their total number is compared with a threshold θ. Whenever this number comes to exceed the threshold and the stimulus is on, the associated percept is deemed to gain dominance (even when the other percept's total activity also exceeds θ at this moment of time). Once gained, dominance is lost only when a percept's total activity drops below threshold, or when the total activity of the other percept crosses the threshold, too.
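The relaxation of a single pool towards n_∞ with time constant τ can be checked directly with a short stochastic simulation. The pool size and rates below are illustrative values, not the model's working parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters: one pool of N meta-stable populations (rates in 1/s, dt in s).
N, nu_plus, nu_minus, dt = 30, 2.0, 1.0, 0.001

n = 0                      # number of currently 'active' populations
trace = []
for _ in range(200_000):   # 200 s of simulated time
    gains = rng.binomial(N - n, nu_plus * dt)     # inactive -> active transitions
    losses = rng.binomial(n, nu_minus * dt)       # active -> inactive transitions
    n += gains - losses
    trace.append(n)

trace = np.array(trace[50_000:])                  # discard the initial transient
print("simulated mean:", trace.mean())
print("predicted n_inf:", N * nu_plus / (nu_plus + nu_minus))        # = 20
print("predicted relaxation time tau:", 1 / (nu_plus + nu_minus), "s")
```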
Author Summary

The instability of perception is one of the oldest puzzles in neuroscience. When visual stimulation is even slightly ambiguous, perceptual experience fails to stabilize and alternates perpetually between distinct states. The details of this 'bistable perception' have been studied extensively for decades. Here we propose that bistable perception reflects the stochastic integration over many meta-stable populations at two levels of neural representation. While previous accounts of bistable perception rely on an oscillatory dynamic, our model is inherently stochastic. We argue that a fluctuation-driven process accounts naturally for key characteristics of bistable perception that have remained puzzling for decades. For example, our model is the first to explain why the statistical variability of successive dominance periods remains essentially the same, while the mean alternation rates of bistable phenomena range over two orders of magnitude. By postulating two levels of representation that are driven by stimulation and by perceptual state, respectively, our model further accounts for the stabilizing influence of past perceptual states, which are particularly evident in intermittent displays. In general, a fluctuation-driven process decouples the collective dynamics of bistable perception from single-neuron properties and predicts a number of hitherto unsuspected relations between behaviorally observable measures.

An essential aspect of the model is the choice of transition rates. We use transition rates to compactly represent the combined influence of feedforward input (i.e., visual stimulation), of recurrent input, and of the phenomenal percept. In developing the model, we realized that a handful of conditions, each with different transition rates, suffices to generate the rich dynamical behavior of bistable perception. Specifically, we assume an 'excitation' of EPs by the stimulus, an additional 'selective excitation' of EPs and MPs associated with the active percept, and a 'selective inhibition' of EPs associated with the other percept. Figure 2 illustrates the typical evolution of activity in the different pools, and the resulting perceptual alternations, when a bistable stimulus is periodically interrupted by blank periods. The dynamic evolution distinguishes 4 conditions, depending on the presence or absence of a stimulus and a dominant perceptual state:

Condition 1: After stimulus onset, but before a dominant percept has emerged. When a stimulus is present, but no dominant percept has yet emerged, the activity of EPs grows rapidly, mimicking 'excitation' by the visual stimulus (n_∞ = 15, τ = 50 ms). Any activity of MPs decays (τ = 5 s).
Condition 2: The first 200 ms after one percept (e.g., the 'butterfly') has gained dominance. When one percept becomes dominant (because the combined activity of its associated populations exceeds threshold), the now dominant EPs continue to charge, but with longer characteristic times (n_∞ = 25, τ = 1.5 s), whereas the now suppressed EPs discharge (τ = 50 ms). This short-lasting condition stabilizes the newly dominant percept and mimics a 'transient suppression' of the EPs associated with the other percept. In effect, this cross-inhibition implements a transient interaction between the active percept and the EPs associated with the other percept. Note that dominance is gained always by the most recent percept to cross θ. The rapid sequence corresponding to Condition 1 and Condition 2 explains the 'spikes' that are sometimes observed (in Figure 2) when stimulation resumes at the end of a blank period.
Condition 3: Continued dominance of the same percept. After the brief transition period, the EPs of the dominant percept continue to charge as before, but the EPs of the suppressed percept are now charging as well, albeit more slowly (n_∞ = 22, τ = 4 s). This condition mimics the combined effects of a 'sustained inhibition' by the phenomenal percept and an 'excitation' by the visual stimulus (see (1) above).
In addition to inhibiting EPs, the phenomenal state also excites MPs. Specifically, we assume that the MPs associated with the dominant percept charge slowly (n_∞ = 13, τ = 5 s), whereas the MPs associated with the suppressed percept discharge at the same rate. This ensures that the phenomenally dominant percept charges its associated memory while discharging the memory of the alternative percept.
Condition 2′: The first 200 ms after a reversal, in which the other percept (e.g., the 'tree') has gained dominance. This condition is symmetric to Condition 2.
Condition 4: Blank display. In the absence of a stimulus, any residual activity dissipates and both EPs and MPs become inactive (τ = 1 s and τ = 300 s, respectively). The rates for MPs correspond to characteristic times for the spontaneous decay of a percept-specific working memory. These assumptions (7 integration parameters for EPs, 3 integration parameters for MPs, pool sizes N_EP and N_MP) suffice to emulate a large body of empirical observations on the perceptual dynamics of continuous and intermittent displays. Moreover, the predicted behavior is robust over a considerable range of parameter values.
The interaction between total activity in EPs plus MPs and transition rates in EPs and MPs, combined with the stochastic activity dynamics in the four pools, produces an irregular sequence of phenomenal reversals that may be compared directly to experimental observations.
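To make the interplay of the four conditions concrete, the following sketch simulates the two-level race under continuous stimulation, using the n_∞ and τ values quoted above for each condition. The pool sizes, threshold θ, time step, and the treatment of memory populations during Condition 2 are illustrative assumptions (they are not specified in the text), so the sketch reproduces the qualitative behaviour rather than the published figures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions: pool sizes, threshold and time step are not given in the text.
N_EP, N_MP, THETA, DT = 30, 20, 18, 0.005

def step(n_active, n_total, n_inf, tau):
    """One DT of independent stochastic transitions in a pool of meta-stable populations.

    Rates are recovered from n_inf = N*nu_plus/(nu_plus+nu_minus), tau = 1/(nu_plus+nu_minus).
    """
    nu_plus = n_inf / (n_total * tau)
    nu_minus = 1.0 / tau - nu_plus
    gains = rng.binomial(n_total - n_active, min(nu_plus * DT, 1.0))
    losses = rng.binomial(n_active, min(nu_minus * DT, 1.0))
    return n_active + gains - losses

ep = {'L': 0, 'R': 0}            # active evidence populations per percept
mp = {'L': 0, 'R': 0}            # active memory populations per percept
prev_total = {'L': 0, 'R': 0}
dominant, t_since_switch, t_dom_start = None, 0.0, 0.0
dominance_times = []

T_TOTAL = 300.0                  # continuous presentation only (Condition 4 omitted)
for i in range(int(T_TOTAL / DT)):
    t = i * DT
    for p in ('L', 'R'):
        if dominant is None:                      # Condition 1: stimulus on, no dominant percept
            ep[p] = step(ep[p], N_EP, 15, 0.05)
            mp[p] = step(mp[p], N_MP, 0, 5.0)
        elif p == dominant:                       # Conditions 2 and 3, dominant side
            ep[p] = step(ep[p], N_EP, 25, 1.5)
            mp[p] = step(mp[p], N_MP, 13, 5.0)    # MP handling during Condition 2 is an assumption
        elif t_since_switch < 0.2:                # Condition 2: transient suppression of the other EPs
            ep[p] = step(ep[p], N_EP, 0, 0.05)
            mp[p] = step(mp[p], N_MP, 0, 5.0)
        else:                                     # Condition 3, suppressed side
            ep[p] = step(ep[p], N_EP, 22, 4.0)
            mp[p] = step(mp[p], N_MP, 0, 5.0)
    t_since_switch += DT

    totals = {p: ep[p] + mp[p] for p in ('L', 'R')}
    if dominant is not None and totals[dominant] < THETA:
        dominant = None                           # dominance lost from below (rare with these values)
    for p in ('L', 'R'):
        if p != dominant and totals[p] > THETA and prev_total[p] <= THETA:
            if dominant is not None:
                dominance_times.append(t - t_dom_start)
            dominant, t_dom_start, t_since_switch = p, t, 0.0
    prev_total = totals

if dominance_times:
    d = np.array(dominance_times)
    print(f"{len(d)} reversals, mean dominance {d.mean():.2f} s, CV {d.std() / d.mean():.2f}")
```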
Mean dominance times
The main evidence for a memory in bistable perception is the tendency of a percept to persist when stimulation is interrupted: before and after an interruption of stimulation, the subjective appearances are often the same. This persistence slows and perhaps even arrests perceptual reversals in intermittently presented displays [22][23][24]51]. In our model, the persistence of appearance arises from the existence of memory populations that influence perceptual dominance.
We define the dominance time T_dom of a percept as the total stimulated time between two reversals. In the case of continuous stimulation, this is simply the time between reversals. In the case of intermittent stimulation, it is the total time minus any blank periods.
Our model predicts a complex dependence of the mean dominance time ⟨T_dom⟩ on the stimulation period T_on and the blank period T_off (Figure 3A). Starting from T_on = ∞ (continuous display), ⟨T_dom⟩ rises slowly from the baseline ⟨T_dom^continuous⟩ = 4.5 s (dashed black lines), the increase becoming dramatic in the proximity of T_on ≈ ⟨T_dom^continuous⟩. At this point, MPs are maximally active and stabilize phenomenal experience. If perceptual reversals occur at all, they happen at the beginning of, rather than during, T_on. For even smaller T_on, phenomenal experience remains stable for a certain number of display cycles (see Perceptual persistence), and ⟨T_dom⟩ decreases trivially with T_on. The height and position of the peak in ⟨T_dom⟩ depends also on T_off, for the average activity of MPs (and, thus, their stabilizing effect) depends on the balance between T_on and T_off.
These predictions account qualitatively for the observation that intermittent stimulation slows perceptual reversals [22][23][24]. Especially for short T_on, it is known that dominance times grow very long and that perceptual reversals essentially cease [23]. Unsurprisingly, our model fails to predict the behaviour observed for short T_off (<1 s) [52], which is thought to reflect fast adaptation.
Raising stimulus intensity (i.e., luminance and/or color contrast) can be assumed to monotonically increase the parameter n_∞. When left- and right-eye images present different intensities, the evidence populations associated with the left- and right-image EPs will exhibit different parameter values, n_∞^Left and n_∞^Right. When n_∞^Right is increased while n_∞^Left is held constant, dominance times increase slightly for the right image but decrease dramatically for the left image (Figure 3B). When n_∞^Left is decreased, the intersection in Figure 3B shifts to the left (not shown), as reported by [17]. This confirms that n_∞ is a plausible substitute for stimulus intensity.
The qualitative behavior in Figure 3B is empirically well established and is known as "Levelt's second proposition" [5,17]. The reason for this behavior is that, in our model, reversals are triggered by the charging of the suppressed percept. As the charging rate increases with stimulus intensity (n_∞), greater stimulation of the suppressed percept shortens ⟨T_dom⟩ for the dominant percept.
Distribution of dominance times
Dominance times of both human and non-human observers in binocular rivalry and other types of bistable displays exhibit a Gamma-like distribution G(t) = λ^r t^(r−1) e^(−λt) / Γ(r), where λ is a rate constant and r is a shape parameter. The mean dominance time is ⟨T_dom⟩ = r/λ and the coefficient of variation of dominance times is CV = r^(−1/2). Empirically, the rate λ and the mean time ⟨T_dom⟩ range over almost two orders of magnitude, whereas the shape parameter r is largely preserved and varies only by half an order of magnitude [30,31]. One important aim of our model is to account for this uncoupling of the shape parameter r from the mean time ⟨T_dom⟩.
In our model, perceptual reversals reflect the rapid accumulation of stimulus evidence below the perceptual threshold by evidence populations (EPs). Only three parameters matter for the distribution of dominance times, namely, the total number of evidence populations, N_EP, the number of active evidence populations at equilibrium, n_∞, relative to the perceptual threshold θ, and the relaxation time τ. Of these three, the parameter n_∞, which represents stimulus intensity, proves the most consequential.
For continuous displays, our model replicates a Gamma-like distribution of dominance times for a wide range of parameter choices (see inset in Figure 4). Intuitively, this may be understood as follows: if n_∞ ≫ θ, EP+MP crosses the threshold almost deterministically, resulting in a Gaussian distribution of dominance times (r ≫ 1). On the other hand, if n_∞ ≪ θ, EP+MP will cross the threshold only in the event of rare fluctuations, producing an exponential distribution of dominance times (r ≈ 1). Intermediate situations with n_∞ ≈ θ lead to Gamma-like distributions with r ranging from 3 to 6.
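As a numerical illustration of these relationships (not taken from the paper), the snippet below draws dominance times from a Gamma distribution with an assumed shape r = 4 and rate λ = 1, then recovers the shape and rate by maximum-likelihood fitting, confirming ⟨T_dom⟩ = r/λ and CV = r^(−1/2).

```python
import numpy as np
from scipy import stats

r, lam = 4.0, 1.0                                  # hypothetical shape and rate
samples = stats.gamma.rvs(a=r, scale=1.0 / lam, size=5000, random_state=3)

r_hat, _, scale_hat = stats.gamma.fit(samples, floc=0)   # fix location at 0
print("fitted shape r:", r_hat, " fitted rate lambda:", 1.0 / scale_hat)
print("mean:", samples.mean(), " expected r/lambda:", r / lam)
print("CV:", samples.std() / samples.mean(), " expected r**-0.5:", r ** -0.5)
```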
For example, in Figure 3B, the shape parameter r varies in a comparatively narrow range (see inset), whilst the ratio of the two ⟨T_dom⟩ values varies over almost two orders of magnitude. Note that the 'left' values of r and ⟨T_dom⟩ exhibit strongly opposing trends. This marked anti-correlation is a sign of the stochastic mechanism for threshold crossing: with lower stimulus intensity n_∞, threshold crossings become rarer and the interval distribution becomes more Poisson-like.
Note also the (slight) positive correlation between the 'right' values of r and ⟨T_dom⟩ in the inset of Figure 3B (red curve). This constitutes a prediction that depends strictly on memory effects and that goes beyond "Levelt's second proposition" [5]. To understand this positive correlation, consider a situation where integration is driven by fluctuations and times-to-threshold are comparatively long and exhibit Poisson-like statistics (r ≈ 1). In this situation, the shape parameter r reflects the number of Poisson-like 'jumps' that are required to reach threshold θ. The primary consequences of an increase in n_∞^Right − θ are that 'left' dominance times decrease sharply while 'right' dominance times increase slightly. As a secondary consequence, the 'left' memory activity also decreases, which raises the number of 'jumps' required by the 'left' integration and thus also the 'right' value of r. This accounts for the parallel trends in the 'right' values of r and ⟨T_dom⟩.
In general, when the stimulus intensity n_∞ is varied either in one eye or in both, our model makes a qualitative prediction for the average dominance distribution (comprising dominance times of both percepts): the average values of r and ⟨T_dom⟩ should be anti-correlated. Interestingly, there seems to be some evidence for such a trend [31].
For intermittent displays (Figure 4, T_on = 5 s, T_off = 5 s), our model predicts a multi-peaked distribution: the integral probability of a perceptual switch between the n-th and the (n+1)-th T_on (darker bins in the background), for n > 2, is well approximated by an exponential (continuous line: best exponential fit for n > 2). The spikes in the distribution reflect the periodicity of the stimulation and are separated roughly by T_on. They comprise the probability of a perceptual switch at the onset and during continued presentation. Assuming that the MPs of the current winning percept have reached a stationary state, both these probabilities do not vary statistically from one T_on to the next, leading to an exponential decay for large enough T_dom (n > 2, or twice the characteristic time of MPs). During the first two T_on, the MPs are still charging after the last perceptual switch and a perceptual reversal is more likely than for n > 2. The first anomalous peak in the distribution is attributable to the very brief dominance intervals that usually occur during periods of 'uncertainty', when the level of the MPs is roughly equal for both percepts (see the central part of Figure 2 for an illustration).
There are few empirical reports of dominance distributions for intermittent displays. Both Gamma-shaped [37] and monotonically decreasing [51] distributions have been reported. However, further experiments are needed to establish the generality of these results.
However, the existence of memory representations predicts small but significant departures from sequential independence. Figure 5A shows the predicted correlation between a given dominance period and its n-th successor. Interestingly, the predictions differ for continuous and intermittent presentation. Figure 5B shows the correlation (c_1) between successive dominance periods of percept 'Left' (blue) and percept 'Right' (red), for continuous presentations, as functions of n_∞^Right (same simulations as in Figure 3B).
The non-monotonic behaviour observed is another consequence of MP dynamics. When one of the ⟨T_dom⟩ is much larger than the characteristic times of MPs (left part of the plot), the activity level of MPs is essentially constant (either low or high) and cannot provide correlation effects; if the average ⟨T_dom⟩ is much smaller than the characteristic times of MPs, memory effects do not have time to build up and again cannot sustain correlations (right part of the plot). Finally, whenever the distribution of dominance times becomes narrow (high r values), so that the variance is inherently small, sequential correlations will be negligible.
Taken together, Figure 3B and Figure 5B suggest that an experimental verification of Levelt's second proposition should reveal specific links between r, c_1 and ⟨T_dom⟩ that result, at bottom, from memory effects. Memory-induced correlations should be somewhat larger in intermittent displays, as the normal alternation of dominant percepts is suspended and the same percept dominates for several successive display intervals. In this situation, the differential activity between the MPs of dominant and suppressed percepts grows larger and stochastic fluctuations in this difference induce more noticeable correlations (Figure 5A).
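A minimal way to quantify such sequential dependencies from behavioural or simulated data is the lag-n Pearson correlation of dominance periods. The helper below uses a short, purely hypothetical series; for the quantity c_1 above one would pass the dominance times of a single percept.

```python
import numpy as np

def lag_correlation(dominance_times, lag=1):
    """Pearson correlation between each dominance period and its lag-th successor."""
    x = np.asarray(dominance_times, dtype=float)
    if len(x) <= lag + 1:
        raise ValueError("need more dominance periods than the requested lag")
    return np.corrcoef(x[:-lag], x[lag:])[0, 1]

# Toy example with hypothetical dominance times (seconds):
times = [3.1, 4.8, 2.9, 5.5, 3.7, 6.0, 2.5, 4.1, 3.3, 5.2]
print("c_1 =", lag_correlation(times, 1), " c_2 =", lag_correlation(times, 2))
```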
Perceptual persistence
In intermittent displays, the persistence of a percept across the stimulation gap is often measured in terms of a 'survival probability' P_s [23], viz. the probability of the same percept dominating before and after the gap. Our model predicts an interesting and complex dependence of P_s on stimulus duration T_on and blank duration T_off, which is illustrated in Figure 6A.
For short T_on, the MPs do not charge and the survival probability P_s is influenced only by differential activity in the EPs, which decays rapidly after stimulus termination. For this reason, P_s decreases rapidly with increasing T_off (Figure 6A, red curve). When T_on is long enough to charge MPs, but too short to permit spontaneous reversals, P_s is governed by memory and remains close to unity as long as the memory persists (Figure 6A, purple and blue curves). Finally, when T_on is long enough to permit spontaneous reversals, the memory activity of both percepts is comparable and P_s reflects differential activity in the EPs (Figure 6A, green curve).
Some of these predictions are borne out by published evidence. For example, Leopold and colleagues reported uniformly high P_s for intermediate values of T_on (400 ms; [23]). For longer T_on that permitted spontaneous reversals, survival probability P_s progressively decreased.
When T_on permits two dominance periods, survival probability P_s reflects the relative durations [23,32,33]: P_s > 0.5 when the most recent period lasted longer than the less recent period and P_s < 0.5 when the situation was reversed. Our model readily accounts for these observations (Figure 6B), provided T_off is sufficiently large. The regime of T_off < 1 s [34,52,53], where fast adaptation could become important, is again out of the scope of our model.
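The survival probability itself is straightforward to estimate from the dominant percept recorded around each interruption; the helper below is a minimal sketch with hypothetical percept labels.

```python
import numpy as np

def survival_probability(percepts_before, percepts_after):
    """Fraction of stimulus gaps across which the dominant percept is unchanged.

    `percepts_before[i]` / `percepts_after[i]` are the dominant percepts just before
    and just after the i-th blank period (e.g. 'L' or 'R').
    """
    before = np.asarray(percepts_before)
    after = np.asarray(percepts_after)
    return float(np.mean(before == after))

# Toy example: the same percept survives 3 of 4 hypothetical interruptions -> 0.75.
print(survival_probability(['L', 'L', 'R', 'L'], ['L', 'R', 'R', 'L']))
```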
Discussion
We propose that binocular rivalry, and other instances of bistable perception, reflect the stochastic integration of many meta-stable populations at two levels of neural representation, viz. sensory input and perceptual experience. While previous accounts of bistable perception rely on an oscillatory dynamic, our model is inherently stochastic. We argue that a fluctuation-driven process accounts naturally for key characteristics of bistable perception that have remained puzzling for decades.
One of these puzzling characteristics is the wide range of average times between perceptual reversals, which for different observers, display types, and stimulus properties can extend over two orders of magnitude [30,31]. Another unexplained finding is the preserved stochasticity of reversals, that is, the fact that the statistical distribution of times between reversals is Gamma-like and exhibits a shape parameter r with typical values from 3 to 6.
Taken together, these observations strongly suggest a fluctuation-driven escape process. In such a process, the system state fluctuates until it reaches an escape threshold, at which point it is reset some distance away from threshold. Depending on the asymptotic value of the integration process, the average frequency of threshold crossings can vary over more than one order of magnitude, while the distribution of times between threshold crossings will retain its Gamma-like shape. This uncoupling of mean dominance time and shape parameter is an important advance over previous models and is illustrated in Figure 3B.
Following this general insight, we model bistable perception as a 'race' between two independent processes of stochastic integration, each concerning multiple neuronal pools that are individually meta-stable between inactive and active states. We further assume an escape threshold and a competitive reset mechanism that resets each process whenever the other process reaches threshold.
Previous models of bistable perception postulate a deterministic process at the level of individual neurons (i.e., spike-frequency adaptation [32,34,35,54] or synaptic depression [44][45][46]) which drives the system towards a reversal threshold. The resulting oscillatory dynamic is typically perturbed by a suitable level of neural noise [17,35,[47][48][49]. In such an 'oscillator model', the average time between reversals is set by the deterministic process while the statistical distribution of these times directly reflects the level of noise. For a given set of parameters, oscillator models such as [32,35] produce either a realistic, Gamma-like distribution of dominance times or a realistic dependence of mean dominance times on stimulus properties (e.g., intensity or timing), but not both. For example, an oscillator model such as [35] accounts for the dependence of dominance times on stimulus timing only in the absence of noise. When the model is imbued with realistic levels of noise (so that r = 6), the dependence on stimulus intensity all but disappears.
Yet another puzzling characteristic of bistable perception is the hysteresis or memory effects that become evident when visual presentation is interrupted [23,24]. To summarize the available evidence, the history of percepts prior to an interruption biases perception once stimulation resumes. Memory effects are long-lasting and are characterized by time-scales an order of magnitude larger than those of perceptual reversals [23,33]. Memory effects are stabilizing in that they favor the recurrence of percepts that have dominated already in the past. Not only the most recent percept, but also less recent percepts that have dominated longer, leave a measurable bias [23,32,33]. Finally, the stabilizing influence of perceptual history is evident not only in the percept that dominates at a renewed stimulus onset but also in the duration of dominance phases following that onset [55].
To account for memory effects, several oscillator models have been extended to include an additional interaction or state variable [32,34,35]. However, none of these models captures the entire range of experimental findings. The model of Noest and colleagues [34] lacks a second, longer time-scale and does not account for observations with long interruptions of stimulation. The models of Wilson [35] and of Brascamp and colleagues [32] include multiple time-scales and do capture long-lasting memory effects. However, the Wilson model [35] does not account for the influence of the duration of dominance phases preceding the stimulus interruption [23,32,33]. Conversely, the model of Brascamp and colleagues [32] fails to predict the observed effect on dominance durations following the stimulus interruption [55].

Figure 6. Survival probability P_s and perceptual history. A: Joint dependence on T_on and T_off, see text for details. B: When T_on contains two dominance phases of durations T_1 and T_2, P_s decreases with T_1 (less recent phase) and increases with T_2 = T_on − T_1 (more recent phase). doi:10.1371/journal.pcbi.1000430.g006
Our stochastic-integration model incorporates two time-scales in the form of 'evidence populations' (EPs with higher transition rates) and 'memory populations' (MPs with lower rates). A material difference to other models [32,35] is that EPs are driven by sensory evidence and perceptual state, while MPs are driven only by perceptual state. This ensures that the memory of a perceptual state builds up while this state persists and correctly predicts all effects of and on dominance duration that have been reported so far [23,32,33,55]. The recurrent influence of perceptual state on both MPs and EPs distinguishes our model from other two-level models [45,50], which employ a strictly feedforward architecture.
With one major exception (see below), our model comprehensively predicts the dynamics of bistable perception for continuous and intermittent displays. For example, it predicts dominance times, dominance distribution shape, sequential correlations between dominance times, and perceptual persistence across blank periods, including, in the case of intermittent displays, the dependence of these quantities on T on and T off . Some of the predictions bear out past experimental observations: the degree to which phenomenal experience is stabilized with different values of T on and T off in an intermittent display [22][23][24], or the dependence of phenomenal experience on a history comprising several preceding dominance periods [23,32,33]. Several other predictions of interest are yet to be tested, however. For example, our model predicts how the shape of the dominance distribution ( Figure 4) and the size of sequential correlations ( Figure 5) should vary with T on and T off under conditions of intermittent presentation.
An important test for models of bistable perception is the opposite and unequal change in dominance times that results from an asymmetric change in stimulus intensity ("Levelt's second proposition") [5]. Our model correctly predicts the unequal dependence of dominance times on the intensity of a weaker stimulus and partially predicts the reversed dependence of dominance times on the intensity of a stronger stimulus [17].
In its current form, our model does not account for the well-known effects of visual adaptation [39,[56][57][58][59][60] on bistable perception. This omission is intentional and is meant to highlight the dynamic possibilities offered by stochastic integration on the longer time-scales at which adaptation effects are expected to be small. The absence of adaptation implies that our model cannot account for the phenomenon of "flash suppression" [61,62] and, more generally, for the perceptual effects of brief stimulus interruptions (<1000 ms) [22,34,52,53].
For the sake of simplicity, our model is formulated in terms of abstract, meta-stable populations governed by transition probabilities. The underlying idea is that each population represents a recurrently connected network of spiking neurons, with two meta-stable attractor states [63][64][65][66][67]. In such a 'working-memory-type' network, stochastic transitions between attractor states are driven by internally generated fluctuations in network activity [49,65,[68][69][70][71]. The transition probabilities ν+ and ν− are the escape rates from the two attractor states: the lower the attraction force, the higher the escape rate. Importantly, the transition rates depend less on the time-constants of individual neurons than on the average activity level and the amplitude of activity fluctuations in relation to the transition threshold. This is why small differences in recurrent connectivity can shift transition rates by some orders of magnitude [68][69][70].
Our model postulates that perceptual dominance reflects a collective decision on the basis of two distributed representations (viz., two pools of meta-stable populations). The stochastic integration of those representations provides the accumulated information for the perceptual decision; such a mechanism has also been proposed as a substrate for the perception of time [71,72]. In a detailed (spiking network) model, such a collective decision would require convergent synaptic projections to a readout stage, where competitive interactions could ensure that any decision is categorical [73,74]. In other words, our model predicts the existence of a competitive stage receiving projections from all evidence and memory populations. This hypothetical stage would somewhat resemble the "saliency map" that has been postulated by some authors [75,76].
Finally, excitatory and inhibitory projections between representational (evidence and memory populations) and readout levels could generate the facilitatory and suppressive interactions that are needed to start the stochastic integration process over and over again. Such competitive-cooperative interactions in a multi-level network have been studied in the context of visual attention modeling [77].
In conclusion, we suggest that bistable perception is a fluctuation-driven process and is best understood in terms of a progressive integration of, and a collective competition between, 'working-memory-type' populations at multiple neural levels.
Multiple Signals Govern Utilization of a Polysaccharide in the Gut Bacterium Bacteroides thetaiotaomicron
ABSTRACT The utilization of simple sugars is widespread across all domains of life. In contrast, the breakdown of complex carbohydrates is restricted to a subset of organisms. A regulatory paradigm for integration of complex polysaccharide breakdown with simple sugar utilization was established in the mammalian gut symbiont Bacteroides thetaiotaomicron, whereby sensing of monomeric fructose regulates catabolism of both fructose and polymeric fructans. We now report that a different regulatory paradigm governs utilization of monomeric arabinose and the arabinose polymer arabinan. We establish that (i) arabinan utilization genes are controlled by a transcriptional activator that responds to arabinan and by a transcriptional repressor that responds to arabinose, (ii) arabinose utilization genes are regulated directly by the arabinose-responding repressor but indirectly by the arabinan-responding activator, and (iii) activation of both arabinan and arabinose utilization genes requires a pleiotropic transcriptional regulator necessary for survival in the mammalian gut. Genomic analysis predicts that this paradigm is broadly applicable to the breakdown of other polysaccharides in both B. thetaiotaomicron and other gut Bacteroides spp. The uncovered mechanism enables regulation of polysaccharide utilization genes in response to both the polysaccharide and its breakdown products.
We now report the genetic basis for transcriptional control of arabinan and arabinose utilization genes. We determine that a regulatory protein activated by oligoarabinose in the periplasm controls transcription not only of arabinan utilization genes directly but also of arabinose utilization genes indirectly, by enabling the generation of arabinose, which allosterically inactivates a repressor of arabinose utilization genes in the cytoplasm. We establish that an inner membrane transporter encoded within the arabinose utilization locus is dispensable for growth on monomeric arabinose but is required for utilization of arabinooligosaccharides. Furthermore, we uncover a role of a pleiotropic transcriptional regulator in the expression of both arabinan and arabinose utilization genes, and we demonstrate its requirement for the utilization of several carbohydrates. Taken together with the genome analysis of polysaccharide utilization in other Bacteroides species, our findings argue that the use of multiple regulators responding to different signals constitutes a new paradigm for the utilization of complex polysaccharides.
RESULTS
A permease encoded in the arabinose utilization operon is necessary for arabinan catabolism. The BT0356-to-BT0350 operon specifies functions necessary for L-arabinose utilization because a polar transposon insertion in the BT0356 gene (Fig. 1) (12) prevented growth on L-arabinose (Fig. 2A) but not on fructose (see Fig. S1A in the supplemental material). Following BT0356 is the BT0355 gene, which encodes a putative permease designated AraP that is proposed to import arabinose from the periplasm into the cytoplasm (10,11). Surprisingly, deletion of the BT0355 (araP) gene resulted in a very modest decrease in growth rate on L-arabinose compared to the wild-type strain (average growth rate of 0.056 versus 0.061 ΔA595/h; P = 0.27 by Student's two-tailed t test) (Fig. 2A). These results imply that a transporter other than AraP can import arabinose into the cytoplasm.

[Fig. 1 legend, opening truncated] ... proteins and imported into the periplasm by two SusC-like transporters. Large arabinan polymers are broken down into polymers of six to eight subunits in chain length (oligoarabinose) and eventually smaller arabino-oligosaccharides, such as arabinobiose, which are transported into the cytoplasm by AraP. These oligosaccharides are broken down to arabinose in the cytoplasm by an unknown glycoside hydrolase. Oligoarabinose binds to and activates the transcriptional regulator BT0366, which, in turn, promotes transcription of the arabinan utilization genes BT0365 to -60, BT0366, and BT0367 to -69. L-Arabinose is transported into the cell by an unknown mechanism. Cytoplasmic arabinose prevents binding of the transcriptional repressor AraR to the promoters of the arabinan utilization gene BT0365 and the arabinose utilization gene BT0356 (araM). The transcriptional regulator BT4338 is necessary for full activation of arabinan and arabinose utilization genes. The signal controlling the activity of BT4338 is at present unknown. OM, outer membrane; IM, inner membrane.
Because arabinose is generated during the breakdown of arabinose-containing polysaccharides, we hypothesized that araP is necessary for utilization of arabinan, a component of pectin comprised primarily of 1,5-linked arabinosyl residues (13), which is commonly found in the mammalian diet (14). As hypothesized, the araP mutant exhibited a growth defect on arabinan (Fig. 2B). The growth rate of the araP mutant was 0.035 ΔA595/h, significantly lower (P = 2.8 × 10⁻⁵ by Student's two-tailed t test) than the wild-type growth rate of 0.067 ΔA595/h. However, the araP mutant reached the same final optical density as the isogenic wild-type strain by 24 h. By contrast, the araP mutant grew similarly to the wild-type strain in arabinogalactan (see Fig. S1B in the supplemental material), which contains terminal arabinofuranosyl side chains (13).
The results presented above suggested that AraP transports an intermediate in arabinan breakdown. In agreement with this notion, wild-type B. thetaiotaomicron grew on arabinobiose (the 1,5-linked α-L-arabinose disaccharide), albeit after a 24-h delay and to a lower growth yield than in L-arabinose or arabinan; however, the araP mutant did not grow on arabinobiose (Fig. 2C). Taken together, the results presented in this section establish that, despite being encoded in an arabinose utilization operon, AraP is necessary for normal arabinan catabolism and likely transports an arabino-oligosaccharide.
Arabinan promotes transcription of both arabinan and arabinose utilization genes. Because genes in the arabinose utilization locus are specifically involved in growth on arabinan (Fig. 2), we reasoned that the transcriptional activator of arabinan utilization genes, BT0366 (8), may control transcription of araP and other genes in the BT0356-to-BT0350 operon. Thus, we examined the mRNA levels of both arabinan and arabinose utilization genes in isogenic wild-type and BT0366 mutant strains following growth to mid-log phase in minimal medium containing 0.5% glucose as the sole carbon source and then switched to medium containing 0.1% arabinan.
The mRNA levels of the arabinan utilization genes BT0364 and BT0367 were >1,500-fold higher in the presence of arabinan than in the presence of glucose (Fig. 3A). Transcriptional activation of BT0367 was absent in the BT0366 mutant (Fig. 3A). By contrast, activation of BT0364 was dramatically decreased, but not abolished, in the BT0366 mutant (Fig. 3A). These results suggest that although BT0364 and BT0367 require BT0366 for complete transcriptional activation by arabinan, they are differentially regulated.
The mRNA levels of the BT0366 gene were 15-fold higher in arabinose than in glucose (Fig. 3A), which resulted in larger amounts of chromosomally encoded epitope-tagged BT0366 protein (Fig. 3B). These results suggested that the regulator of arabinan utilization genes positively regulates its own transcription.
The mRNA levels of the arabinose utilization gene BT0356 (araM) increased 90-fold in the presence of arabinan (Fig. 3C). However, in contrast to the results obtained with the BT0364 and BT0367 genes (Fig. 3A), deletion of the BT0366 gene decreased araM mRNA levels only 2-fold (Fig. 3C). The increase of araM mRNA levels observed in the BT0366 mutant upon exposure to arabinan (Fig. 3C) appears to result from arabinose-containing polysaccharide contamination, because an ~30-fold increase in araM mRNA levels was still observed in this strain upon exposure to dialyzed arabinan (see Fig. S2A in the supplemental material). Moreover, the BT0366 mutant retained wild-type growth on L-arabinose as the sole carbon source (see Fig. S2B). Cumulatively, these results, which are in agreement with previous reports (4,8), establish that arabinan promotes transcription of arabinan and arabinose utilization genes and that this activation is strictly dependent on BT0366 for the genes in the arabinan PUL but only moderately dependent on BT0366 for the arabinose utilization genes.
AraR represses transcription of arabinose utilization genes in the absence of arabinose. Investigation of the in vitro properties of the regulatory protein AraR showed that it binds to DNA sequences located upstream of arabinose utilization gene BT0356 and arabinan utilization gene BT0365 (11) (see Fig. S3 in the supplemental material) and that binding to these DNAs was prevented when L-arabinose was present in the reaction mixture (11), suggesting that L-arabinose is an allosteric regulator of AraR. However, the in vivo function of AraR has remained unknown.

[Fig. 3 legend, opening truncated] ... , and BT0366 genes in isogenic BT0366 (GT44) and wild-type (WT, GT23) B. thetaiotaomicron prior to the switch (−5) and after 1 and 2 h of exposure to minimal medium containing 0.1% arabinan. (B) Western blot of crude extracts from a strain specifying an HA-tagged BT0366 protein (NS204) collected from cultures grown to mid-log phase in minimal medium containing 0.5% glucose (−5) or 30, 60, and 120 min after switching to medium containing 0.1% arabinan. Data are representative of three independent experiments, which produced similar results. (C) mRNA levels of the araM gene in isogenic BT0366 mutant (GT44) and wild-type (GT23) B. thetaiotaomicron prior to the switch (−5) and after 1 and 2 h of exposure to minimal medium containing 0.1% arabinan. Graphed are the mean and standard error of the mean from at least three independent experiments. Asterisks indicate significant differences from the wild-type strain for BT0364 and BT0367 expression and significant difference from the −5 sample for BT0366 expression (*, P ≤ 0.05; **, P ≤ 0.01 by two-tailed Student's t test). Note log scale of y axis in panels A and C.
In wild-type B. thetaiotaomicron, araM mRNA levels were ~30-fold higher following growth in arabinose than in glucose (Fig. 4A). By contrast, an araR-deficient mutant displayed the same high mRNA levels during growth on arabinose and on glucose, which were similar to those observed in the wild-type strain grown in the presence of arabinose (Fig. 4A). Complementation of the araR mutant in trans restored araM transcription to the levels displayed by the wild-type strain (see Fig. S4A in the supplemental material). Taken together with the in vitro analysis of the AraR protein (11), our results indicate that AraR directly represses transcription of arabinose utilization genes and that this repression is antagonized by L-arabinose.
AraR controls the transcription dynamics of arabinan utilization genes in the presence of arabinan. Because arabinan catabolism generates arabinose, we wondered whether AraR might also regulate genes in the arabinan PUL, thereby providing feedback based on arabinose levels. However, the mRNA levels of the arabinan utilization genes BT0364 and BT0367 were not affected by inactivation of araR during growth on arabinose or glucose (Fig. 4A). Likewise, the araR mutant grew like the wild-type strain on arabinan (see Fig. S4B in the supplemental material).
The araR mutant exhibited altered expression dynamics of the BT0364 and BT0367 genes when B. thetaiotaomicron was switched from medium containing glucose to medium containing arabinan (Fig. 4B). In the wild-type strain, the mRNA levels of both BT0364 and BT0367 were >2-fold higher at 2 h than at 1 h after exposure to arabinan (Fig. 4B). By contrast, in the araR mutant, the mRNA levels of these two genes decreased between 1 and 2 h (Fig. 4B). These effects extend to genes necessary for the breakdown of arabinogalactan (see Fig. S4C in the supplemental material) but not rhamnogalacturonan I (see Fig. S4D), both of which contain arabinoyl residues. The mRNA levels of the arabinogalactan PUL gene BT0268, which encodes a SusC-like transporter similar to the BT0364 product, were ~1.5-fold higher in the araR mutant than in the isogenic wild-type strain following 1-h exposure to arabinogalactan (see Fig. S4C). By contrast, mRNA levels of the rhamnogalacturonan I PUL SusC-like protein-encoding gene BT4164 were nearly identical in the araR mutant and wild-type strains following exposure to rhamnogalacturonan I (see Fig. S4D). These arabinose-containing polysaccharides promoted an increase in the mRNA levels of the arabinose utilization gene araM (see Fig. S4C and D), presumably because arabinose is generated during their breakdown.
The mRNA levels of araM were only 3-fold higher in the araR mutant than in the wild-type strain 1 h after induction with arabinan, despite 30-fold-higher levels present during growth in glucose (Fig. 4C). However, the araR mutant displayed wild-type araM mRNA levels by 2 h (Fig. 4C). This is likely due to the breakdown of arabinan in the wild-type strain antagonizing AraR. The araR mutant initiated growth significantly faster than the isogenic wild-type strain following a switch from medium containing glucose to medium containing arabinan (Fig. 4D). This effect appears to be specific for arabinan because wild-type and araR strains grew similarly when switched to the unrelated polysaccharide chondroitin sulfate (see Fig. S4E in the supplemental material). Taken together, these results indicate that AraR is a transcriptional repressor that controls the expression kinetics of both arabinose and arabinan breakdown genes when B. thetaiotaomicron encounters arabinan.

[Fig. 4 legend, opening truncated] ... wild-type (WT, GT23) B. thetaiotaomicron strains following growth in minimal medium containing 0.5% of either arabinose or glucose. (B) mRNA levels of the arabinan PUL genes BT0364 and BT0367 in isogenic araR (NS367) and wild-type (GT23) B. thetaiotaomicron strains prior to the switch (−5) and after 1 and 2 h of exposure to minimal medium containing 0.1% arabinan. (C) mRNA levels of the arabinose utilization gene araM in isogenic araR (NS367) and wild-type (GT23) B. thetaiotaomicron strains after 1 and 2 h of exposure to minimal medium containing 0.1% arabinan and prior to the switch (−5) to medium containing arabinan. (D) Growth of isogenic araR (NS367) and wild-type B. thetaiotaomicron (GT23) strains after switching from minimal medium containing 0.5% glucose to minimal medium containing 0.1% arabinan. Graphed are the mean and standard error of the mean from at least three independent experiments. Asterisks indicate significant difference from the wild-type strain (*, P ≤ 0.05; **, P ≤ 0.01; ***, P ≤ 0.001 by two-tailed Student's t test). Note log scale of y axis in panels A, B, and C.
The arabinan-responsive BT0366 protein indirectly regulates arabinose utilization genes. The BT0366 protein appears to control transcription of arabinose utilization genes in the presence of arabinan because the mRNA levels of the araM gene were lower in the BT0366 mutant than in the isogenic wild-type strain (Fig. 3C). BT0366 may exert its regulatory effect directly, by binding to the araM promoter, or indirectly, by activating arabinan breakdown genes, thereby impacting cytoplasmic arabinose levels.
BT0366 does not appear to control araM mRNA levels directly (i.e., by binding to the BT0356-to-BT0350 promoter region), because the purified BT0366 protein did not shift a radiolabeled 207-bp fragment corresponding to the sequence immediately upstream of the araM start codon (see Fig. S3 in the supplemental material) in an electrophoretic mobility shift assay (EMSA) (Fig. 5A). By contrast, the BT0366 protein shifted the positive-control fragments (Fig. 5A) corresponding to the BT0365 promoter and the BT0367-BT0366 intragenic region (see Fig. S3).
In agreement with the notion that BT0366 controls araM mRNA levels indirectly (i.e., by generating the AraP substrate, which is broken down into arabinose), the araP mutant produced 5-fold-lower araM mRNA levels than the wild-type strain when bacteria were switched from medium containing glucose to medium containing arabinan (Fig. 5B). The mRNA levels of araM were similar in the araP mutant and wild-type strains when switched to medium containing arabinose (Fig. 5B). As expected, the araP mutant retained wild-type BT0364 mRNA levels upon induction with either arabinose or arabinan (see Fig. S5A in the supplemental material). These results suggest that BT0366 regulates transcription of arabinose utilization genes in the presence of arabinan by controlling the production of arabinose, which likely allosterically inactivates AraR.
The absence of accessible carbohydrates promotes sustained transcription of the arabinose utilization gene araM. We hypothesized that, when grown in arabinan, an araR BT0366 double mutant would exhibit slightly lower araM mRNA levels than the araR single mutant. This is because the araR mutant exhibited higher araM mRNA levels than the wild-type strain in arabinan (Fig. 4C) and also because the BT0366 mutant displayed 2-fold-lower araM mRNA levels under these conditions (Fig. 3C). To examine this possibility, we measured araM mRNA levels in isogenic araR and araR BT0366 strains following exposure to arabinose or arabinan. The mRNA levels were similar in the two strains in arabinose (Fig. 5C), in agreement with the notion that BT0366 is dispensable for arabinose utilization (see Fig. S3B in the supplemental material).
In arabinan, however, the araR BT0366 double mutant exhibited sustained araM expression compared to the araR single mutant (Fig. 5C). That is to say, araM mRNA levels decreased between 1 and 2 h postinduction in the araR mutant but not in the araR BT0366 double mutant. Because the araR single mutant grew on arabinan (see Fig. S4B in the supplemental material) but the araR BT0366 double mutant did not (see Fig. S5B), the sustained araM expression exhibited by the latter strain may be triggered by lack of growth. In agreement with this notion, araM mRNA levels increased 9-fold in the araR mutant and 3-fold in the wild-type strain following a 1-h exposure to minimal medium lacking a carbohydrate (Fig. 5D). By contrast, BT0364 mRNA levels remained essentially unchanged when bacteria were switched from medium containing glucose to medium lacking a carbohydrate (Fig. 5D). The latter results presumably reflect that in the absence of arabinan, the BT0366 protein does not promote BT0364 transcription. Taken together, these results indicate that transcription of arabinose utilization genes responds to arabinose via AraR and to a signal produced under nutrient-poor conditions.
The global regulator BT4338 controls araM transcription in the presence of arabinan. In E. coli, transcriptional activation of arabinose utilization genes requires binding of both the AraC-arabinose complex and the CRP-cyclic AMP (cAMP) complex to target promoters (3). The N-terminal region of the B. thetaiotaomicron BT4338 gene product contains a CRP-like effector domain, and its C terminus harbors a helix-turn-helix DNA-binding motif (15). Moreover, a bioinformatics analysis predicted BT4338 binding to the araM promoter region (10). Therefore, we hypothesized that BT4338, originally named MalR for its role in maltose utilization in the absence of the starch utilization regulator SusR (16), operates as an activator of arabinose utilization genes.
We examined araM mRNA levels in five isogenic strains (wild-type, BT4338, araR, araR BT4338, and araR BT0366 BT4338) following a switch from medium containing glucose to medium containing arabinan. Deletion of BT4338 decreased the basal araM mRNA levels produced in glucose and abolished the induction promoted by arabinan (Fig. 6A). The araR BT4338 double mutant and the araR BT4338 BT0366 triple mutant displayed a similar behavior, though araM mRNA levels were 3- to 4-fold higher than in the BT4338 single mutant (Fig. 6A).
The BT4338 mutant was unable to grow on arabinose (Fig. 6B), reflecting its essential role in transcription of the arabinose utilization gene araM (Fig. 6A). By contrast, the BT4338 mutant reached a wild-type growth yield in the rich tryptone-yeast extract-glucose (TYG) medium, albeit with slightly slower kinetics (see Fig. S6A in the supplemental material). Cumulatively, the results in this section establish that BT4338 is required for transcriptional activation of arabinose utilization genes and that its role is not simply to overcome repression by AraR.
The BT4338 gene is necessary for full transcription of arabinan PUL genes. Given the critical role that the BT4338 gene plays in transcription of arabinose utilization genes, we investigated whether BT4338 is also required for transcription of arabinan utilization genes. When B. thetaiotaomicron was exposed to arabinan, the mRNA levels of the arabinan PUL gene BT0364 were ~600-fold higher at 1 h and ~225-fold higher at 2 h in the wild-type strain than in the BT4338 mutant (Fig. 6C). The BT4338 araR double mutant displayed 3- to 4-fold-higher BT0364 mRNA levels than the BT4338 single mutant (Fig. 6C), analogous to the behavior of arabinose utilization genes (Fig. 6A). Furthermore, the BT4338 null mutant was defective for growth on arabinan (Fig. 6D). Expression of the BT4338 gene in trans from its native promoter and in single copy restored the ability of the BT4338 mutant strain to grow in both arabinan (see Fig. S6B in the supplemental material) and arabinose (see Fig. S6C), albeit with slightly decreased kinetics. Taken together, these results demonstrate that BT4338 is essential for B. thetaiotaomicron to utilize both monomeric arabinose and its polymeric form arabinan.

The BT4338 gene is required for growth utilizing a variety of carbohydrates. Because the BT4338 protein has a domain structure similar to that of CRP, we explored the possibility of the BT4338 gene being required for growth on carbohydrates other than arabinose and arabinan (Fig. 6B and D). The BT4338 mutant displayed limited or no growth on arabinogalactan, fucose, glucuronate, N-acetylgalactosamine, polygalacturonic acid, ribose, or xylose (see Fig. S7A to G in the supplemental material). No difference in growth was observed between the BT4338 mutant and the wild-type strain in glucose, heparin, mannose, or N-acetylglucosamine (see Fig. S7H to K). The BT4338 mutant exhibited a longer lag phase than the isogenic wild-type strain in all other carbohydrates tested: amylopectin, chondroitin sulfate, fructose, galactose, galacturonate, maltose, maltotriose, α-mannan, pectic galactan, and rhamnogalacturonan I (see Fig. S7L to V). The difference in lag phase was short, ~2.6 h to an A595 of ≥0.2, in galactose (see Fig. S7L) but extended to ~24 h to an A595 of ≥0.2 in amylopectin (see Fig. S7M) and ~32 h to an A595 of ≥0.2 in galacturonate (see Fig. S7N). Taken together, these results indicate that BT4338 both contributes to wild-type growth kinetics of B. thetaiotaomicron and is essential for growth on a variety of carbohydrates.
DISCUSSION
We have uncovered how a gut bacterium integrates multiple signals to control expression of genes mediating the utilization of a polysaccharide and the monosaccharide derived from that polysaccharide (Fig. 1). The BT0366 hybrid two-component system of B. thetaiotaomicron senses an intermediate in arabinan breakdown in the periplasm and activates transcription of the arabinan PUL, which encodes products that transport arabinan into the cell and degrade it into arabinose.
Arabinose binding to the repressor AraR in the cytoplasm prevents AraR binding to the promoters of genes required for arabinose utilization and a subset of genes within the arabinan PUL (Fig. 4). The regulatory activities of BT0366 and AraR are connected by AraP, which transports arabinobiose (and potentially other arabino-oligosaccharides) originating from arabinan catabolism into the cytoplasm ( Fig. 2 and 5B) and is encoded in the arabinose utilization locus (Fig. 1).
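The qualitative logic of this circuit can be summarized in a few rules, sketched below as a toy Boolean model (our own illustrative reduction, not an analysis performed in this study; it ignores kinetics, partial activation, and whatever signal controls BT4338, which is simply treated as present or absent):

```python
def transcription_state(arabinan=False, arabinose=False, bt0366=True,
                        araR=True, araP=True, bt4338=True):
    """Qualitative toy model of the arabinan/arabinose regulatory circuit.
    Returns whether the arabinan PUL and the arabinose (araM) operon are ON."""
    # Periplasmic oligoarabinose is generated only if arabinan is present.
    oligoarabinose = arabinan
    # BT0366 is activated by oligoarabinose and induces the arabinan PUL;
    # BT4338 is additionally required for full activation.
    arabinan_pul_on = bt0366 and oligoarabinose and bt4338
    # Cytoplasmic arabinose comes from free arabinose or from arabinan breakdown,
    # the latter requiring the PUL enzymes and the AraP transporter.
    cytoplasmic_arabinose = arabinose or (arabinan_pul_on and araP)
    # AraR represses araM unless inactivated by cytoplasmic arabinose;
    # BT4338 is required for araM transcription in any case.
    araM_on = bt4338 and (not araR or cytoplasmic_arabinose)
    return {"arabinan_PUL": arabinan_pul_on, "araM": araM_on}

# Example: wild-type cells switched from glucose to arabinan
print(transcription_state(arabinan=True))
# Example: BT0366 mutant in arabinan (araM induction largely lost)
print(transcription_state(arabinan=True, bt0366=False))
```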
We established that BT4338 is a global regulator required for full transcriptional activation of the genes necessary to metabolize both arabinan and arabinose (Fig. 6), utilization of several carbohydrates (see Fig. S7A to G in the supplemental material), and wild-type growth kinetics on other carbohydrates (see Fig. S7L to V). Our in vivo analysis provides direct genetic evidence for regulatory interactions suspected on the basis of biochemical (11) and bioinformatics (10) analyses.
Taken together, our findings establish that B. thetaiotaomicron coordinates the utilization of arabinan, arabinose, and other nutritional signals. These coordinated processes may play a critical role in gut colonization because BT4338 and several genes within the arabinan PUL and arabinose utilization operon are necessary for survival in the murine gut (6,12).
By contrast, arabinan breakdown in the Gram-negative bacterium B. thetaiotaomicron is regulated by an activator (BT0366) that senses a degradation product of arabinan in the periplasm, a repressor (AraR) that senses arabinose in the cytoplasm, and a global regulator (BT4338) that senses a yet-undescribed signal. The strategy for arabinan breakdown in B. thetaiotaomicron is reminiscent of the enzymatic capabilities of B. subtilis arabinan utilization and the regulatory framework governing arabinose utilization in E. coli, which relies on AraC operating both as an activator and as a repressor (3).
A new regulatory paradigm for polysaccharide utilization in Bacteroides. We present a new regulatory paradigm governing utilization of arabinan and arabinose that relies on sensing both a degradation product of the polysaccharide and the monosaccharide (Fig. 1). We propose that variations on this paradigm are more common in the Bacteroidetes than the existing paradigm where a single regulator senses monomeric fructose and activates transcription of genes necessary for utilization of both levan and its constituent fructose (7). This is because of the following. (i) The regulator that senses fructose (BT1754) is thus far unique among the Bacteroidetes in sensing a monomeric sugar in the periplasm (7). Indeed, several characterized regulators sense polymers of two to eight monosaccharides in length (4,20,21). (ii) Several genes in B. thetaiotaomicron and other Bacteroides spp. are predicted to encode regulators that sense cytoplasmic monosaccharides (10,11,22,23). (iii) The monosaccharides sensed by these regulators are components of complex dietary and mucosal polysaccharides encountered by Bacteroides in the gut (14). (iv) Bioinformatics analysis predicts that some of these regulators bind to promoters of genes not necessary for utilization of the monosaccharide that they sense (10).
The pleiotropic transcriptional regulator BT4338. BT4338 was previously designated MalR because a transposon insertion in the BT4338 gene in a strain lacking a functional copy of the starch utilization regulator SusR decreased the ability of B. thetaiotaomicron to metabolize maltose and maltotriose (16). The domain structure of the BT4338 protein is similar to that of the Proteobacteria CRP (despite low sequence identity) with an N-terminal CRP/Fnr-like ligand-binding domain and a C-terminal DNA-binding domain (15,24). CRP also controls expression of genes involved in adhesion (25,26) and virulence (27,28) in Proteobacteria. We have now determined that the pleiotropic regulatory protein BT4338 is necessary for utilization of multiple sugars (see Fig. S7 in the supplemental material). Therefore, BT4338 may connect expression of arabinan and arabinose utilization genes to a larger regulatory network that integrates polysaccharide utilization with additional metabolic signals and/or other physiological cues in B. thetaiotaomicron. The identification of the signal(s) controlling the levels and activity of the BT4338 protein may help us understand its critical role in the colonization of the mammalian gut (6).
MATERIALS AND METHODS
Bacterial strains and growth conditions. B. thetaiotaomicron strains were derived from strain VPI-5482 (15) and grown under anaerobic conditions at 37°C on brain heart infusion agar supplemented with 10% horse blood and in tryptone-yeast extract-glucose medium containing tetracycline (2 μg/ml), erythromycin (10 μg/ml), gentamicin (200 μg/ml), or 5-fluoro-2′-deoxyuridine (FUdR) (200 μg/ml), when needed. All experiments with B. thetaiotaomicron were performed with cells grown anaerobically in minimal medium (9) supplemented with the indicated carbon sources and antibiotics when required. E. coli strains were derived from S17-1 and grown in LB medium containing 100 μg/ml ampicillin. All chemicals were purchased from Sigma except arabinan (sugar beet, P-ARAB), arabinobiose (O-ABI), pectic galactan (P-PGAPT), and rhamnogalacturonan I (P-RHAM1), which were purchased from Megazyme, and beta-D-(−)-fructose (MP Biomedicals). Dialyzed arabinan was prepared by incubating 10 ml of 5% (wt/vol) arabinan within a 3,500-molecular-weight-cutoff (MWCO) Slide-A-Lyzer dialysis cassette (Thermo) in 1.5 liters distilled water overnight at 4°C. All strains and plasmids used in this study are listed in Table S1 in the supplemental material. All oligonucleotides used in this study are listed in Table S2 in the supplemental material.
Strain construction. Phusion high-fidelity polymerase was used to amplify all DNA fragments, which were ligated into vectors by T4 DNA ligase or NEBuilder HiFi DNA Assembly Master Mix (all products from NEB). Deletion mutants were generated using counterselectable allelic exchange (9).
Growth curve analysis. Growth of B. thetaiotaomicron strains was examined as follows. Following overnight incubation in tryptone-yeast extract-glucose (TYG) medium, bacteria were subcultured at a 1:500 dilution directly into the indicated medium. Growth proceeded anaerobically and was monitored by A595 measurement in a Tecan Infinite F200 Pro microplate reader. Growth rate was quantified by identifying the absorbance where growth increased by 15% over the baseline (Amin) and the maximum growth immediately after exponential growth (Amax). The time points corresponding to these absorbances, Tmin and Tmax, respectively, were used to calculate growth rate as (Amax − Amin)/(Tmax − Tmin). To examine the effects of the araR mutation on growth adaptation to arabinan, cultures were grown overnight in minimal medium with 0.5% glucose and subcultured 1:50 into the same medium, and cells were grown to an optical density at 600 nm (OD600) of 0.3 to 0.4. Cells were harvested by centrifugation, resuspended in minimal medium lacking a carbon source, and incubated with 0.1% (wt/vol) arabinan or chondroitin sulfate.
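A rough sketch of this calculation is given below (illustrative only; the exact handling of the baseline and of the post-exponential maximum is not specified in the text, so the choices here, such as using the first reading as baseline and the global maximum as Amax, are assumptions, and the example readings are invented):

```python
import numpy as np

def growth_rate(times_h, a595):
    """Estimate growth rate as (Amax - Amin)/(Tmax - Tmin), where Amin is the first
    reading at least 15% above baseline and Amax is the maximum reached after
    exponential growth (approximated here by the global maximum)."""
    times_h, a595 = np.asarray(times_h, float), np.asarray(a595, float)
    baseline = a595[0]
    above = np.nonzero(a595 >= 1.15 * baseline)[0]
    if above.size == 0:
        return 0.0                       # no detectable growth
    i_min = above[0]
    i_max = int(np.argmax(a595))         # crude stand-in for the post-exponential maximum
    t_min, t_max = times_h[i_min], times_h[i_max]
    if t_max <= t_min:
        return 0.0
    return (a595[i_max] - a595[i_min]) / (t_max - t_min)

# Hypothetical example readings (hours, A595)
print(round(growth_rate([0, 4, 8, 12, 16, 20, 24],
                        [0.05, 0.06, 0.12, 0.35, 0.70, 0.82, 0.83]), 3))
```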
Gene expression analysis and quantitative real-time PCR. Time course gene expression analysis was carried out as described previously (20), with the following modifications. Cells were grown to an OD600 of 0.35 to 0.5 prior to induction. Cells were harvested by centrifugation and resuspended in medium containing the indicated carbon sources. One-milliliter culture samples were collected before (−5-min time point) and at the indicated times after introduction to medium containing the indicated carbon sources. mRNA levels of genes were measured as described previously (29). mRNA levels are represented normalized to a 1,000-fold dilution of 16S rRNA abundance to account for cell density or as a fold change of values obtained from this normalization.
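For illustration, normalization of this kind could be computed from threshold-cycle (Ct) values roughly as follows (a generic sketch, not the exact procedure of reference 29; it assumes equal amplification efficiencies for both targets, and the Ct values shown are invented):

```python
def relative_mrna_level(ct_gene, ct_16s_diluted, efficiency=2.0):
    """Express a gene's mRNA level relative to a 1,000-fold dilution of 16S rRNA,
    assuming the same amplification efficiency for both targets."""
    return efficiency ** (ct_16s_diluted - ct_gene)

def fold_change(ct_gene_t, ct_16s_t, ct_gene_0, ct_16s_0):
    """Fold change of the normalized level at time t relative to the pre-switch sample."""
    return relative_mrna_level(ct_gene_t, ct_16s_t) / relative_mrna_level(ct_gene_0, ct_16s_0)

# Hypothetical Ct values: a target gene 2 h after a medium switch vs. the -5 min sample
print(round(fold_change(ct_gene_t=21.0, ct_16s_t=14.0, ct_gene_0=27.5, ct_16s_0=14.0), 1))
```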
Electrophoretic mobility shift assays. Electrophoretic mobility shift assays were carried out as described previously (30), with the following modifications. Fragments were amplified from B. thetaiotaomicron VPI 5482 genomic DNA. The BT0366 response regulator domain used for binding was purified as described previously (29).
Multifocal peliosis hepatis: MR and diffusion-weighted MR-imaging findings of an atypical case
Peliosis is a rare benign disorder that is characterized by the presence of diffuse blood-filled cystic spaces and can occur in the liver, spleen, bone-marrow, and lungs. We present a 10-year-old boy with Fanconi anemia who presented with peliosis hepatis due to androgen treatment. Magnetic resonance (MR) imaging revealed multiple non-enhancing masses. Some of the lesions revealed fluid-fluid levels and extrahepatic extension on MR images. Diffusion-weighted (DW) imaging showed restricted diffusion. Fluid-fluid levels and extrahepatic extensions are unusual findings for hepatic peliotic lesions. In addition, DW imaging findings of peliosis hepatis have not been reported previously.
Introduction
Peliosis hepatis is a rare benign entity, which is characterized by the presence of multiple blood-filled lacunar spaces within the liver. This disorder can mimic other hepatic masses, such as hemangioma, hepatocellular carcinoma, abscess, metastasis, adenoma, and focal nodular hyperplasia (1). Imaging findings of peliosis hepatis including ultrasonography (US), computed tomography (CT), and magnetic resonance (MR) imaging have been reported previously (1)(2)(3)(4)(5)(6)(7). Here, we present MR and DW-imaging findings of peliosis hepatis with fluid-fluid levels and extrahepatic extension, which developed due to prolonged androgen therapy in a boy with Fanconi anemia. In this report, we describe the DW-imaging findings of peliosis hepatis, for the first time.
Case report
A 10-year-old boy was admitted to our hospital with Fanconi anemia. After the diagnosis, the patient started treatment with oxymetholone. Eighteen months after the initiation of therapy, liver enlargement and abnormal liver function tests were noted. US revealed multiple solid-appearing hepatic lesions, predominantly hypoechoic, and intralesional small cystic areas. To further characterize the liver lesions, MR imaging was performed by using a 1.5-T MR unit. Routine MR imaging sequences, which included un-enhanced and gadolinium-enhanced axial and coronal T1-weighted fast low angle shot (FLASH) sequences and axial and coronal T2-weighted turbo spin-echo sequences, were obtained prior to DW imaging. A DW single-shot spin-echo sequence was performed with b-values of 0, 400, and 800 s/mm² during free breathing. An apparent diffusion coefficient (ADC) map was automatically calculated. The lesions were slightly to markedly hyperintense on both T2- and T1-weighted images with no contrast enhancement. Two of the lesions also showed fluid-fluid levels on T2-weighted images. In addition, one right lobe mass was extending outside the liver contours (Figure 1). On DW sequences and the ADC map the lesions were hyper- and hypointense, respectively (Figure 2). ADC values were variable in the mass lesions (e.g., 0.89-1.06 × 10⁻³ mm²/s), and normal-appearing liver had a mean ADC value of 1.21 × 10⁻³ mm²/s. Because of the clinical history of androgen usage, the diagnosis of peliosis hepatis was reached, and the androgen therapy was discontinued.
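For readers unfamiliar with ADC maps, the map is derived voxel-wise from the mono-exponential decay S(b) = S0·exp(−b·ADC); the sketch below illustrates such a fit for the three b-values used here (the scanner software performed the actual calculation in this case, and the example signal values are invented):

```python
import numpy as np

def fit_adc(b_values, signals):
    """Least-squares fit of ln(S) = ln(S0) - b*ADC; returns ADC in mm^2/s."""
    b = np.asarray(b_values, float)
    ln_s = np.log(np.asarray(signals, float))
    slope, _intercept = np.polyfit(b, ln_s, 1)
    return -slope

# Hypothetical voxel signals at b = 0, 400, 800 s/mm^2
adc = fit_adc([0, 400, 800], [1000.0, 680.0, 460.0])
print(f"ADC = {adc * 1e3:.2f} x 10^-3 mm^2/s")
```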
Discussion
In peliosis, blood-filled cavitations are seen within lung, bone-marrow, liver, and spleen. Several drugs (such as steroids, oral contraceptives, tamoxifen, and estrogens) and several disease entities (such as malignancies, solid organ transplantations, chronic infections, and Bartonella infection) have been reported in the etiology (3,6).
Peliosis hepatis is characterized by the presence of blood-filled cavities in the liver (2). The lesions are almost always multiple and vary in size from several millimeters to several centimeters (2-5, 8). Parenchymal and phlebectatic variants have been described. Both types may coexist and may result in thrombosis, hemorrhage, and occasionally calcification (8). Hepatocellular necroses, outflow obstruction at the sinusoidal level, and lesions of the sinusoidal barrier have been implicated in the etiopathogenesis (8).
The clinical course may range from asymptomatic to a progressive case with liver failure or fatal intraabdominal bleeding (1,3,6). If clinical and radiological findings are suggestive of peliosis, percutaneous liver biopsy should be avoided because of the significant risk of severe bleeding (2).
Imaging findings are variable and depend on the pathologic presentation and the various stages of the blood component of the lesion. The signal intensity of the lesion on MR imaging largely depends on the age and the status of the blood component. T1-weighted imaging can demonstrate hypo-, iso-, or hyperintense foci. On T2-weighted images the lesions have been reported as hyperintense (4,5). The lesions are typically surrounded by hepatic parenchyma; however, in our case, one of the lesions was atypically extending outside the liver contours and was not completely surrounded by hepatic parenchyma. In addition, we also found fluid-fluid levels in two of the lesions. There has been only one reported case of exophytic extension of a hepatic peliotic nodule in the literature (5).
In addition, only one case of fluid-fluid levels was described on CT images previously, by Hiorns et al. (9). Both of these findings are unusual for hepatic peliotic lesions.
With the advent of ultrafast sequences, DW-imaging is now feasible for abdominal studies, and several researchers have used this technique in the differential diagnosis of benign and malignant lesions. DW-imaging findings of peliosis hepatis have not been described previously. Although peliosis hepatis is a benign condition, ADC values of the lesions were lower than those of normal-appearing liver, probably due to their content of thrombus and hemorrhage. We speculate that both the fluid-fluid levels and the low ADC values on MR images were related to old and new blood products in the lesions.
In conclusion, peliosis hepatis must be included in the differential diagnosis of an atypical liver lesion with fluid-fluid levels and restricted diffusion. The use of MR imaging and DW imaging may be useful in the diagnosis of this challenging but rare disease entity. Extension outside the liver contours, fluid-fluid level inside the lesion, and restricted diffusion on MR images may help to differentiate peliosis hepatis from malignant lesions and prevent unnecessary and dangerous interventions.
Severe changes in colon epithelium in the Mecp2-null mouse model of Rett syndrome
Background Rett syndrome is best known due to its severe and devastating symptoms in the central nervous system. It is produced by mutations affecting the Mecp2 gene that codes for a transcription factor. Nevertheless, evidence for MECP2 activity has been reported for tissues other than those of the central nervous system. Patients affected by Rett presented with intestinal affections whose origin is still not known. We have observed that the Mecp2-null mice presented with episodes of diarrhea, and decided to study the intestinal phenotype in these mice. Methods Mecp2-null mice or mice bearing the conditional intestinal deletion of MECP2 were used. Morphometric and histologic analyses of the intestine, as well as RT-PCR, western blot, and immunodetection, were performed on intestinal samples of the animals. Electrical parameters of the intestine were determined by Ussing chamber experiments in freshly isolated colon samples. Results First we determined that MECP2 protein is mainly expressed in cells of the lower part of the colonic crypts and not in the small intestine. The colon of the Mecp2-null mice was shorter than that of the wild-type. Histological analysis showed that epithelial cells of the surface have abnormal localization of key membrane proteins like ClC-2 and NHE-3 that participate in the electroneutral NaCl absorption; nevertheless, electrogenic secretion and absorption remain unaltered. We also detected an increase in a proliferation marker in the crypts of the colon samples of the Mecp2-null mice, but the specific silencing of Mecp2 from intestinal epithelium was not able to recapitulate the intestinal phenotype of the Mecp2-null mice. Conclusions In summary, we showed that the colon is severely affected by Mecp2 silencing in mice. Changes in colon length and epithelial histology are similar to those observed in colitis. Changes in the localization of proteins that participate in fluid absorption can explain watery stools, but the exclusive deletion of Mecp2 from the intestine did not reproduce colon changes observed in the Mecp2-null mice, indicating the participation of other cells in this phenotype and the complex interaction between different cell types in this disease. Electronic supplementary material The online version of this article (doi:10.1186/s40348-016-0065-3) contains supplementary material, which is available to authorized users.
Background
Rett syndrome (RTT) is a devastating neurodevelopmental disorder affecting mainly girls from early childhood. The major cause of RTT is mutations affecting the methyl-CpG binding protein 2 (MECP2) coding gene [1]. MECP2 is a transcription factor with a dual role in gene expression. It was first described that MECP2 binds to methylated CpG dinucleotides and, by interacting with the histone deacetylase complex, compacts the chromatin and silences gene expression [2]. More recently, it was described that MECP2 is also able to induce the expression of some of its target genes, a mechanism mediated by its interaction with the transcription factor Creb1 [3].
The most significant efforts to understand alterations produced by MECP2 deficiency have been concentrated on neurons and, more recently, on microglia and astrocytes [4][5][6]. Nevertheless, cellular abnormalities affecting other tissues such as the intestinal tract are unknown. RTT patients have a high incidence of symptoms affecting the lower section of the gastrointestinal tract, mainly gastroparesis, chronic constipation, and gastric and intestinal perforations [7]. The first report of a RTT patient affected by colon cancer underscores the need to understand the impact of MECP2 inactivation on epithelial cell function and intestinal physiology [8].
During the breeding of the Mecp2-null mice in our animal facility, we observed that these animals presented episodes of diarrhea at around 2 months of age. Therefore, we decided to study changes on intestinal physiology and histology as a result of genetic inactivation of the Mecp2 gene in this RTT mouse model. Our analysis determined that Mecp2-null mice presented macroscopic abnormalities in the colon. Histological observation of intestinal samples allowed us to determine that the colon crypts are shorter in length and displayed altered histology of the surface epithelial cells. Immunohistochemical analysis showed that proteins of the electroneutral absorptive machinery were abnormally expressed in Mecp2-null mice, while electrogenic movement of ions was normal. Increased quantity of proliferating cells was observed in the colon epithelium of Mecp2-null mice. We were able to observe Mecp2 expression in epithelial cells from the lower part of the colon crypts, but the selective Mecp2 knockdown in intestinal epithelium did not recapitulate the observed features of the Mecp2-null mice, suggesting a complex interaction between different tissues expressing Mecp2 that participates in the development of the multisystemic complaint including intestinal symptoms in RTT patients.
Animals
Mecp2-null colony founders were obtained from The Jackson Lab stock #003890 in a C57BL/6J:129/SvJ genetic background. Colony founders were outbred with C57BL/6 wild-type males for at least eight generations. To generate the conditional KO mice for Mecp2 in the intestinal epithelium, we bred heterozygous Mecp2 female mice carrying one Mecp2 allele flanked by loxP sites with male mice carrying the Cre recombinase coding sequence under the control of the Villin promoter. Animals were obtained from The Jackson Laboratory (Bar Harbor, ME, USA). The mice were bred in the C57BL/6J strain for at least eight generations. All mice were housed in ventilated racks under specific-pathogen-free conditions at a room temperature of 20°C ± 2°C in a 12/12 h light/dark cycle with food and water ad libitum. All animal procedures were reviewed and approved by the local Institutional Animal Care and Use Committee. The animal facility of the Centro de Estudios Científicos (CECs) is accredited by AAALAC.
Morphological and histology analysis
Adult mice were sacrificed by cervical dislocation. The small intestine and colon were removed, and the distances from the duodenum to the ileum and from the cecum to the rectum, respectively, were measured. Tissues were fixed, embedded in paraffin and sliced into 4-μm-thick sections, followed by staining with hematoxylin and eosin (HE) or Periodic acid-Schiff (PAS). Digital images were captured with a light microscope (Olympus, CX31) attached to a computer using ×40 magnification and recorded using the MShot Digital Imaging systems program. The crypt length in the HE sections was measured (in ≥8 crypts per field) as the distance from the base to the apical side. The PAS sections were used in two different analyses: (1) counting PAS-positive cells per crypt, expressed as the percentage of PAS-positive cells normalized by the total number of epithelial cells lining the colonic crypt, and (2) PAS-positive pixels per micrometer, which was determined by converting the PAS-stained sections to grayscale by splitting them into the three color channels (green, red, and blue). The threshold was adjusted on the green channel, which has the best contrast, to obtain the PAS-positive pixels. To normalize our results to crypt length, three lines of known length (μm) were traced on each crypt and the PAS-positive pixels determined in this area. Both procedures were performed using the free software ImageJ.
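The pixel-counting step can be illustrated with a short script of the following kind (a sketch of the general approach rather than the exact ImageJ workflow used; the threshold value and the example data are arbitrary):

```python
import numpy as np

def pas_positive_pixels_per_um(rgb_image, line_mask, line_length_um, threshold=120):
    """Count PAS-positive pixels along a traced crypt line, normalized per micrometer.
    PAS staining (magenta) is dark in the green channel, so pixels whose green
    intensity falls below the threshold are scored as PAS-positive."""
    green = np.asarray(rgb_image)[..., 1].astype(float)
    positive = (green < threshold) & np.asarray(line_mask, bool)
    return positive.sum() / float(line_length_um)

# Hypothetical example: a 100x100 image with a 50-pixel line traced along one crypt
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(100, 100, 3))
mask = np.zeros((100, 100), bool)
mask[50, 25:75] = True
print(pas_positive_pixels_per_um(img, mask, line_length_um=40.0))
```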
Immunoblotting
To determine protein and gene expression in the intestine, section of the colon and small intestine from 8week-old mice were rinsed with PBS and opened at the mesenteric border, and the epithelium was stripped of muscle using a glass slide. Tissues were homogenized in M-Per buffer by sonication using a protocol of 6 cycles of 10-s pulses at a 100% intensity followed by 10 s of rest using a QSONICA LLC sonicator (Model Q125, USA). Homogenates were centrifuged at 20,000×g at 4°C in an eppendorf centrifuge (Model 5415R, Germany) for 30 min; the pellet was discarded and the proteins from the supernatant were quantified by Pierce BCA Protein Assay Kit (Thermo Scientific). Thirty micrograms of protein extract prepared in loading buffer was electrophoresed in denaturing polyacrylamide 4-12% gels (SDS-PAGE) in reducing conditions and transferred to PVDF membranes (Bio-Rad Laboratories, Hercules, CA, USA). Membranes were blocked in 0.05% non-fat milk and incubated with primary antibody against Mecp2 (Millipore 1:1000) and beta-actin (Santa Cruz 1:5000) for 2 h at 37°C. After five washes for 10 min each in TBST, membranes were incubated with HRP-conjugated secondary antibody for 1 h at room temperature and then washed six times in TBST. The bands for Mecp2 and beta-actin were visualized by chemiluminescence Pierce West Femto (Life Technologies) and analyzed by exposing membranes in a C-Digit® Blot Scanner (Licor Model 3600, USA).
Real-time qPCR
Ages of mice analyzed are given as days post conception (d.p.c), where the presence of vaginal plugs was considered as embryonic 0.5 d.p.c. The birth took place at 19 d.p.c (P0). Pregnant females (15.5 d.p.c and 18.5 d.p.c) were sacrificed by cervical dislocation. Embryos were dissected from the uteri and placed in PBS. Immediately following harvest, the colon was dissected and embedded in fresh Trizol Reagent (Invitrogen, CA, USA) and stored at −80°C until RNA extraction. RNA was reverse-transcribed using the ImProm-II Reverse Transcription System (Promega) to synthesize single-stranded complementary DNA (cDNA) at a concentration of 2 μg/μL. PCR reaction mixtures were prepared using the KAPA SYBR FAST qPCR kit. All amplification reactions were performed in triplicate using the Rotor Gene 6200, with a total volume of 10 μL, each reaction containing 1 μL of diluted cDNA. The real-time program used consisted of an initial denaturation period of 10 min at 95°C followed by 40 cycles at 95°C for 20 s, 58°C for 15 s, and 72°C for 30 s. The results were analyzed with the Rotor Gene-6000 series software 1.7 (Corbett), and all values were normalized to cyclophilin 1 messenger RNA (mRNA) expression levels. The primers used were the following: Mecp2 forward 5′-CTCCATAAAAATACAGACTCACCAGT-3′, Mecp2 reverse 5′-CTTAAACTTCAGTGGCTTGTCT-3′, cyclophilin forward 5′-GGCAATGCTGGACCAAACACAA-3′, and cyclophilin reverse 5′-GTAAAATGCCCGCAAGTCAAAAG-3′. Colon samples from adult animals were used for the quantification of mRNA for the ClC-2 accessory protein Glial-cam using the following primers: forward 5′-GGGAGAAGACCATCAACT-3′ and reverse 5′-TGAGCTCCAGCACAGTGGTT-3′.
RT-PCR
The PCR reaction mixture was prepared in a 20-μL total volume containing 2 μg of reverse-transcribed cDNA as described above. The amplification program consisted of an initial denaturation period of 10 min at 95°C followed by 30 cycles at 95°C for 20 s, 58°C for 20 s, and 72°C for 30 s. The primers used were the following: Mecp2e1 forward 5'-AACGGGGTAGAAAGCCTG-3' and reverse 5'-TGATGGGGTCCTCAGAGC-3', and for the Mecp2e2 isoform, forward 5'-CAGGTCATGGTGATCAAACG-3' and reverse 5'-AGTCCTTTCCCGCTCTTCTC-3'. Cyclophilin 1 was used as a loading control with the same primers described above. Amplicons were electrophoresed in a 1.5% acrylamide gel containing EtBr and visualized in an Ultraviolet GelDoc-it UVP Transilluminator photodocumenter.
For double immunohistochemistry, samples were first incubated with anti-ClC-2 antibody, and peroxidase activity was visualized using Vector SG (Vector Labs). At the end of the first labeling reaction, sections were washed in 50 mM Tris-HCl pH 7.8 and treated as described above for NHE3 immunolabeling. Counterstaining was performed with nuclear fast red (Vector Labs).
Immunofluorescence
Colonic and small intestine paraffin-embedded sections (4 μm) were dewaxed and hydrated before the immunofluorescence procedure. Microwave antigen retrieval was performed in 10 mM citrate buffer, pH 6. To permeabilize, the samples were incubated for 30 min with 0.2% Triton diluted in 5% BSA plus 2.5% normal goat serum (NGS). Then, sections were incubated with rabbit anti-MECP2 antibody (1:100, Millipore) at room temperature for 2 h, washed with 50 mM Tris-HCl pH 7.8, and incubated with the secondary antibody Goat anti-Rabbit Alexa Fluor 488 (Life Technologies) for 30 min. The reaction was visualized by confocal microscopy (Olympus BX61WI) using Olympus Fluoview FV1000 software.
Ussing chamber experiments
Experiments were performed as previously described [11]. Briefly, stripped colonic epithelium was mounted in P2303 tissue holders with a 0.1 cm² exposed surface and placed in modified Ussing chambers (Physiologic Instruments Inc., San Diego, CA, USA) bathed with bicarbonate-buffered solution (pH 7.4), gassed with 5% CO2-95% O2, and maintained at 37°C throughout the experiment. The transepithelial potential difference, referred to the serosal side, was measured using a VCCMC2 amplifier (Physiologic Instruments Inc., San Diego, CA, USA). Current was clamped to 0 μA, and 200-ms pulses of ±10 μA were passed across the tissues at 1-s intervals using the Acquire & Analyze 2.3v software and a DI-720 interface (DataQ Instruments, Akron, OH, USA).
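Because the tissue is held at open circuit (clamped to 0 μA) and probed with ±10 μA pulses, a transepithelial resistance and an equivalent short-circuit current can be derived from Ohm's law. The sketch below shows that arithmetic with invented numbers; the example deflection and example Vte are assumptions chosen only to produce values of plausible magnitude, and the acquisition software presumably performs an equivalent calculation.

```python
# Sketch of the open-circuit Ussing calculation: Rte from the Vte deflection caused by the
# +/-10 uA pulses, then equivalent Isc = Vte / Rte (Ohm's law). Values are illustrative only.

AREA_CM2 = 0.1    # exposed tissue area of the P2303 holder
PULSE_UA = 10.0   # amplitude of the injected current pulse (uA)


def transepithelial_resistance(delta_v_mV):
    """Rte in ohm*cm^2 from the Vte deflection (mV) produced by a current pulse."""
    delta_i_A = PULSE_UA * 1e-6
    r_ohm = (delta_v_mV * 1e-3) / delta_i_A   # R = dV / dI
    return r_ohm * AREA_CM2                   # normalize to the exposed area


def equivalent_isc(vte_mV, rte_ohm_cm2):
    """Equivalent short-circuit current in uA/cm^2 (I = V / R)."""
    return (vte_mV * 1e-3) / rte_ohm_cm2 * 1e6


if __name__ == "__main__":
    rte = transepithelial_resistance(delta_v_mV=6.3)      # hypothetical pulse deflection
    isc = equivalent_isc(vte_mV=-4.0, rte_ohm_cm2=rte)    # hypothetical open-circuit Vte
    print(f"Rte = {rte:.0f} ohm*cm2, equivalent Isc = {isc:.0f} uA/cm2")
```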
Plasmatic sodium
Blood was collected by retro-orbital plexus puncture from 60-62-day-old animals. Sodium was determined using the EasyLyte analyzer (Medica Corporation, Bedford, MA, USA) following the manufacturer's instructions.
Mecp2 is expressed in colonic epithelium
Immunodetection of Mecp2 was performed in intestinal tissue of the mice. As shown in Fig. 1a, Mecp2 was detected in both the colon and small intestine. Nevertheless, fractions of smooth muscle and epithelium demonstrate that Mecp2 is more abundant in the colon than in small intestine epithelium. To further determine the tissue distribution of Mecp2, we performed immunofluorescence detection. Mecp2-positive cells were found mostly at the base of the colonic crypts (Fig. 1b), while no staining was detected in small intestine samples (Fig. 1e). The observed immunoreactivity was specific, as no reaction was observed in colon samples from Mecp2-null mice (Fig. 1c) or wild-type mice when the primary antibody was omitted (Fig. 1d).
To determine which Mecp2 variants are expressed in the colon epithelium, we performed PCR detection of Mecp2-E1 and Mecp2-E2. As shown in Fig. 1f, colon tissue expressed both variants, but when compared to hypothalamus expression, colon tissue preferentially expressed Mecp2-E2 (Fig. 1g). Finally, real-time PCR experiments showed that Mecp2 was highly expressed at the end of fetal development, changing expression levels with time: decreasing after birth, increasing almost 2-fold after weaning (day 21), and maintaining relatively high levels of expression up to 60 days of age (Fig. 1h).
The Mecp2-null mice exhibit altered colon morphology and histology
Observation of intestinal anatomical features of 8-week-old Mecp2-null mice suggested changes in colon size (Fig. 2a). Measurement of intestine sections showed that the colon (Fig. 2b), but not the small intestine (Fig. 2c), was shortened in the Mecp2-null mice.
Histology of distal colon samples showed that, in some areas, the organization of cells at the surface seems to be altered (Fig. 2d). The surface-crypt axis appeared smaller in the Mecp2-null mice (Fig. 2d), an observation corroborated by direct measurements (Fig. 2e). The number of goblet cells (Fig. 2f) and the content of PAS-positive pixels (Fig. 2g) were unaltered by the absence of Mecp2 in the colon. The described alterations seemed to be age dependent, since 4-week-old animals showed unaltered colon and crypt length when compared to wild-type littermates (data not shown). Histology of the small intestine appeared normal.
Colon from Mecp2-null mice showed aberrant expression of ClC-2 and NHE-3 proteins
In order to obtain a more detailed view of the organization of the different cell types that compose the colon epithelium, we performed immunohistochemical detection of the NKCC1 and ClC-2 membrane proteins to differentiate and localize epithelial cells of the crypt base and surface, respectively [9]. We observed that the NKCC1 triple cotransporter was localized in the basolateral membrane of the lower and medial parts of the crypts in both wild-type and Mecp2-null colon (Fig. 3a). ClC-2, known to localize in the basolateral membrane of the surface cells, was accompanied by detection of NHE-3, the exchanger localized at the apical membrane of the surface cells. As can be observed in Fig. 3b, the distribution of ClC-2 and NHE-3 is altered in most areas of the distal colon of the Mecp2-null animal. ClC-2 was barely detected in the basolateral membrane and was retained in the cytoplasm of the surface cells. NHE-3 was mostly retained at the apical membrane of surface cells; nevertheless, some cells showed intracellular staining, and NHE-3-positive staining was also observed in cells at the entry of the crypts (Fig. 3b, right panels).
Fig. 2 (legend): b Summary of average colon and c small intestine length from wild-type and Mecp2-null mice. *P < 0.05 vs. wild-type (n = 6 each genotype). d Representative images of histology: H&E staining (upper) and PAS staining (lower) (bar 25 μm). e Surface-crypt axis measurements of wild-type and Mecp2-null colon samples. *P < 0.05 vs. wild-type (n = 6 for each genotype). f PAS-positive cells and g PAS-positive pixels in wild-type and Mecp2-null colon samples (n = 6 for each genotype).
Glial-cam is an accessory protein known to interact with ClC-2 and target the channel to cell-cell junctions [12]. To determine if Glial-cam expression is altered and could explain ClC-2 intracellular retention in Mecp2-null colon, we determined its mRNA expression level. Real-time PCR studies showed no differences in Glial-cam mRNA expression between wild-type and Mecp2-null colon (data not shown). Thus, the increased intracellular localization of ClC-2 is not associated with changes in Glial-cam mRNA expression.
Ussing chamber measurements showed no differences in absorption and secretion in Mecp2-null mice colon
To determine if the observed cellular alterations in Mecp2-null mice impact electrogenic absorption and secretion of ions in the colon, we performed Ussing chamber measurements of colon tissues dissected from wild-type and Mecp2-null mice. We observed no differences in basal electrical parameters between the two genotypes, obtained from experiments like those shown in Fig. 4a, b. Calculated values were as follows: Vte −3.7 ± 1.2 vs. −5.8 ± 1.2; Isc −63 ± 13 vs. −64 ± 16; and Rte 75 ± 8 vs. 63 ± 4 for wild-type and Mecp2-null mice, respectively. Calculation of short-circuit currents for electrogenic cAMP-dependent (IBMX + FSK-induced) or calcium-dependent (carbachol-induced) chloride secretion showed no significant differences among groups. A slight increase in electrogenic sodium absorption (amiloride-sensitive) was observed in the Mecp2-null colon (summarized in Fig. 4c), without affecting systemic sodium homeostasis (149 ± 3 mM vs. 148 ± 4 mM sodium in plasma for wild-type (n = 5) and Mecp2-null (n = 3) mice, respectively).
Fig. 4 (legend): Recorded Vte traces of a wild-type and b Mecp2-null colon samples. c Summary of the equivalent Isc calculation for amiloride-sensitive sodium absorption, cAMP-dependent anionic secretion (IBMX + FSK-induced), and calcium-activated anionic secretion (carbachol-induced). n = 5 for each group.
Conditional deletion of Mecp2 from intestinal tissue does not reproduce the Mecp2-null mouse phenotype
To study if Mecp2 silencing in the intestinal epithelium was able to reproduce the phenotype observed in Mecp2-null mice, transgenic animals expressing Cre recombinase under the control of the Villin promoter (Vil-cre) were crossed with Mecp2 flox/y female mice to induce the deletion of the loxP-flanked exons 3 and 4 of Mecp2. Silencing of Mecp2 in the colon epithelium was checked by immunodetection (Additional file 1: Fig. S1). As shown in Fig. 5a, the Mecp2 Δ3-4/Y mice presented significantly reduced lethality compared to Mecp2-null animals. Analysis of intestinal tissues showed that the lengths of the colon (Fig. 5b) and small intestine (data not shown) were not altered, nor was the colon crypt length (Fig. 5c), in the Mecp2 Δ3-4/Y mice.
Shortening of colon and surface-crypt lengths is normally related to changes in the proliferation rate of intestinal cells. We studied the expression of Ki-67 in colon samples and observed that Mecp2-null colon exhibited a larger number of Ki-67-positive cells than control tissues. Samples of Mecp2 Δ3-4/Y colon showed a number of Ki-67-positive cells comparable to that of control tissues (Fig. 6b). In all cases, Ki-67-positive cells were situated at the crypt base (Fig. 6a).
Discussion
Rett syndrome is a genetic disorder that severely affects normal development of the CNS. Nevertheless, there is evidence for symptoms affecting peripheral tissues, and efforts to understand the impact of MECP2 mutations on the function of other systems in the organism have recently led to more detailed descriptions of phenotypes affecting the respiratory, cardiovascular, and immune systems [4,6,13,14]. Symptoms affecting the gastrointestinal tract of RTT patients are common, but how the absence of MECP2 impacts intestinal function is currently unknown. We observed that Mecp2 mRNA expression in the colon decreases after birth and increases after weaning on day 21 up to day 60, mimicking what has been observed in the brain [15]. It has been reported that the major Mecp2 isoform expressed in the postnatal brain is Mecp2-E1 [16]; however, our analysis showed that the main isoform expressed in colon tissue of adult animals corresponds to Mecp2-E2. Even though some functional differences among isoforms have been proposed, this has been examined only partially and exclusively in neurons, where the absence of Mecp2-E2 protects cultured cerebellar granule neurons from neurotoxicity [17]; nothing is known about the impact of deleting this isoform in other tissues, since Mecp2-E2 silencing compromises embryo viability [18].
Examining Mecp2 distribution in intestinal tissues of adult mice revealed some differences between immunofluorescence (colon epithelial cells exclusively) and western blot (colon epithelium and, occasionally, smooth muscle from the colon and small intestine). The strongest signal observed by western blot was detected in colon epithelium, correlating with the immunofluorescence localization. The faint signal might be due to contamination of smooth muscle samples with epithelial cells after mechanical stripping, or to the presence of Mecp2-expressing cell types such as those of the enteric nervous system or macrophages that can be present in the smooth muscle layer [4,19]. MECP2 expression in the intestine has been previously documented in human, rat, and mouse tissues, and the different results obtained might reflect not only species differences but also handling of samples and heterogeneity of the techniques used for MECP2 detection. For example, western blot analysis of adult mouse tissues showed Mecp2 expression in the colon [20], and the same authors detected a northern blot signal for Mecp2 in the colon but not in the small intestine from humans. Nevertheless, expression of Mecp2 mRNA has been detected in epithelial cells of the small intestine of adult rats [21], and other immunodetection studies of MECP2 distribution determined that the protein is expressed in the smooth muscle [22] and in cells of the enteric nervous system of the mouse intestine [19]. Our immunofluorescence experiments showed no signal for MECP2 in the smooth muscle layer. We tested the specificity of the antibody in colon tissue from the Mecp2-null mouse, and no signal was detected in either the mucosa or the smooth muscle.
Fig. 5 (legend): Intestinal deletion of Mecp2 does not replicate the Mecp2-null intestinal phenotype. a A group of 14 Mecp2-null animals (solid line) was compared with a group of 11 Mecp2 Δ3-4/y conditional knock-out mice (dashed line). A Kolmogorov-Smirnov test demonstrated a better survival rate for the conditional knock-out animals over the Mecp2-null mice, P < 0.0005. Determinations of b colon length and c surface-crypt axis length showed no differences among groups (n = 5 for wild-type, n = 9 for Mecp2 Δ3-4/y, n = 4 for Mecp2 flox/y, and n = 6 for Vil-Cre).
Morphological analysis of the intestine showed a significant reduction of the colon and crypt length in Mecp2-null mice. Colon and crypt shortening has been observed in animal models of colitis and inflammatory bowel disease, as well as in mice in which dyslipidemia was induced with a high-fat diet [23][24][25][26]. Dyslipidemia has been observed in the Mecp2 −/− animals and in a significant number of Rett patients [27]. Interestingly, Mecp2 −/− animals subjected to pharmacotherapy to improve lipid metabolism showed amelioration of motor symptoms and decreased lethality [28], but the possibility that pharmacological restoration of lipid homeostasis is capable of ameliorating the intestinal phenotype in the Mecp2 −/− animals has not been explored.
Our results indicate that MECP2 is expressed in colon epithelial cells, and more specifically in the lower section of the crypts, where the stem cell niche and cells with fluid secretion capacity are housed. We observed that the absence of MECP2 produces shortening of surface-crypt axis lengths. Nevertheless, we observed histological changes in cells from the surface and not in the crypt where Mecp2 was localized. In fact, the localization and distribution of a main membrane component of electrolyte movement in the colon, the triple cotransporter NKCC1, appears to be normal in cells of the crypt in Mecp2 −/− colon. The latter observation is supported by Ussing chamber experiments that showed intact electrogenic anionic secretion in these animals. The Mecp2-null mouse showed aberrant expression of membrane proteins such as the chloride channel ClC-2 and the sodium-proton exchanger NHE3, both located in cells at the crypt surface and major players in electroneutral absorption of electrolytes in the colon. As we have previously demonstrated, the absence of ClC-2 produces a severe reduction of NaCl absorption [29], and Slc9a3 −/− (Nhe3-null) mice presented with slight diarrhea due to reduced sodium absorption in the colon [30]. Moreover, increased electrogenic sodium absorption via ENaC has been described in both Clc2 −/− and Slc9a3 −/− mouse colon as a compensatory mechanism for reduced electrolyte absorption. Our own examination of electrogenic movement of electrolytes in the colon showed a tendency toward increased amiloride-sensitive currents (reflecting ENaC-dependent sodium absorption) in the Mecp2-null colon that did not reach statistical significance, but that might reflect defective electroneutral NaCl absorption. Nevertheless, plasmatic sodium is unaffected, indicating that sodium homeostasis is normal in the Mecp2-null mice and that the intestinal malfunction might be triggered at a very late stage of the disease, for a period of time too short to be reflected as a sodium reduction in the circulation. It is also important to note that a significantly increased intestinal transit time has recently been reported in the Mecp2-null mice, which could also favor malabsorption and the appearance of the watery stools observed by us in the Mecp2-null animals [31].
After we described the effects of MECP2 absence in the colon, we asked whether, using the Cre-loxP system, the sole deletion of Mecp2 from the intestinal epithelium was sufficient to reproduce the intestinal phenotype of the Mecp2-null mouse. We first observed that intestinal deletion of Mecp2 did not produce a lethal phenotype and, most surprisingly, it did not reproduce the colon and surface-crypt axis length changes seen in the Mecp2-null mouse. Such absence of changes in colon morphology and histology was sustained up to 16 weeks of life.
We observed an increase of the proliferation marker protein Ki-67 in colon epithelial cells of Mecp2-null animals, a finding similar to what has been reported in colitis models in mice [32,33]. Even though Ki-67-expressing cells are found at the same location where we detected MECP2, in the cells of the lower part of the colon crypts, the exclusive silencing of Mecp2 from the intestine does not increase the number of Ki-67-positive cells, suggesting that the Rett phenotype in the colon can originate from factors released outside the colon or from altered activity of cells other than the colonic epithelium, such as the enteric nervous system, where MECP2 is expressed [31]. Another explanation might be that cells at the surface can be affected by alterations in the intestinal contents delivered from the small intestine into the colon. For example, bile acids are known to induce colitis in mice and humans [34,35], and even though there are no available data on fecal bile acid content in RTT patients or animal models, scattered reports of RTT patients hospitalized due to liver failure and many others subjected to gallbladder removal are available [27]. More recently, it has been published that Rett patients present with a less diverse microbiota than healthy subjects [36], which could affect gastrointestinal function, but this possibility has not yet been explored in the mouse model.
Conclusions
In summary, we have described for the first time the effects of Mecp2 silencing on the mouse intestine. Our results indicate that the absence of Mecp2 induces shortening of the colon length and of the depth of the crypts. Along with histological changes in the cells at the surface of the colon and mislocalization of key membrane proteins responsible for electroneutral fluid absorption, this might favor the fluid accumulation often seen in the Mecp2-null mice. Importantly, the sole silencing of Mecp2 in the intestinal epithelium did not account for the complete phenotype, suggesting the participation of other cells that express MECP2 and regulate intestinal function, as well as the profound complexity of this disease.
Additional file
Additional file 1: Figure S1. Intestinal MECP2 deletion using the villin-Cre Tg mouse. Immunofluorescence was performed to detect MECP2 protein in colon samples from (A) Mecp2 flox/y mouse and (B) Mecp2 Δ3-4/y . Positive signal was detected in the Mecp2 Δ3-4/y mouse at the bottom of the crypts mainly. Representative images of three animals per group. (DOCX 211 kb)
Study of mycoflora associated with seeds obtained from farmers as compared to certified seeds
In India, soybean (Glycine max (L.) Merrill) is a significant legume, widely accepted due to its high protein and oil content and on account of its nutraceutical and pharmaceutical values. The investigation was undertaken to determine the seed health status of farmers' own saved seeds and certified seeds of soybean, the associated mycoflora, its impact on seed sowing quality, and the management of soybean. During the period, 150 seed samples obtained from 09 districts spread over 4 agroclimatic regions were analyzed for the associated mycoflora through the Standard blotter method and visual observations on a diaphanoscope. The seed association of Macrophomina phaseolina, causal agent of charcoal rot, ranged from 3.0 to 15.0%, while Colletotrichum dematium, causal agent of anthracnose and pod blight, was in the range of 2.0 to 10.0%. The association of Fusarium oxysporum, causal agent of seed rot and seedling decay, was from 10.0 to 14.0%. The seed rot causing fungi Aspergillus niger (3.0 to 12.0%) and Aspergillus flavus (4.0 to 11.0%) were noticed. The purple stain of soybean seed, caused by Cercospora kikuchii, was in the range of 4.0 to 15.0%, and Soybean mosaic virus infection ranged from 1.0 to 4.0%.
The seed health status of farmers' own saved seed was compared with that of the certified category. The impact of associated mycoflora on sowing seed quality was investigated. The effectiveness of chemical fungicides and biopesticides against seed-borne mycoflora was also investigated in the present study. The materials used and methods followed are described herewith.
General Cleaning and sterilization of apparatus
The glassware used during the course of the investigation was of Corning and Borosil make. Prior to use, each piece of glassware was cleaned with chromic acid solution.
Preparation of chromic acid solution
Sulphuric acid: 300 ml
Potassium dichromate: 80 g
Distilled water: 400 ml
The glassware was cleaned with the acid solution, followed by thorough washing with detergent powder, and finally rinsed with normal tap water and/or distilled water as per need. The air-dried glassware was sterilized in an autoclave at 1.05 kg/cm² (15 lb per square inch) for 15 minutes, whereas sand and field soil were sterilized at 1.05 kg/cm² for 180 minutes. Plastic trays were disinfested with 0.1% copper sulphate solution and later washed with sterile water. The inoculation needle, forceps, and biological needle were surface disinfested by dipping in alcohol and thereafter heating over a flame. The inner surfaces of the growth chamber and clean-air flow system were disinfested by exposure to ultraviolet rays from UV lamps and a spray of formaldehyde solution. Prior to use, safety precautions were adopted while using the ultraviolet lamp and formaldehyde solution.
Media
The ingredients of the media used during the course of investigation are as follows:
Potato sucrose agar (PSA)
Peeled and sliced healthy potato: 200 g
Sucrose: 20 g
Agar-agar: 20 g
Distilled water: 1000 ml
Incubation chamber
The seeded petriplates were incubated under two sets of Philips 40-Watt daylight tube lights placed horizontally at a height of 40 cm. Alternate cycles of 12-hr light and 12-hr dark periods were maintained.
Collection of seed sample
Seed samples of soybean variety JS 335 were obtained from farmers of 09 districts covering 4 agroclimatic zones of Madhya Pradesh. Certified seed samples from the respective locations were also procured and used for comparison. From each district, 5 samples of certified seed and 5 samples of farmers' own saved seed were obtained through the Seed Technology Research Centre, JNKVV, Jabalpur. In all, 90 seed samples were obtained. The samples were numbered and stored in paper envelopes under low-temperature conditions to avoid further deterioration. The seeds were tested by dry seed examination and incubation techniques.
Detection of mycoflora associated with soybean seeds
Dry seed examination
The certified seed samples and the samples of farmers' own saved seeds were examined on a diaphanoscope. The diseased seed samples were sorted and identified on the basis of symptoms.
Standard Blotter method
Seeds collected from different sources were tested by the standard blotter method (ISTA, 1996) [21] for the associated mycoflora. Three circular blotter papers of the size of the petridish (90 mm) were cut and dipped in sterilized water. Excess water was removed, and the papers were placed in each sterilized petridish. In each petridish, 10 soybean seeds were placed with the help of a presterilized forcep (eight in the outer circle and two in the centre).
The seeded petridishes were incubated in the growth chamber. Seeds were pretreated with 0.1% NaOCl for 20 seconds. Petridishes were examined on the fifth day of incubation. Mycoflora were identified on the basis of colony and habit characters, subsequently confirmed by making slides of fungal structures, fruiting bodies, and spores.
Standard ragdoll method
The Standard Ragdoll method (ISTA, 1996) [21] was used for testing the effect of the associated mycoflora on the germination of soybean seed.
In the method, four hundred seeds of each category and sample were used. The towel (blotter) papers were moistened with sterilized water. Excess water was removed, the papers were stretched over a flat surface and kept over the clean surface of the working table. Fifty seeds were arranged on one half of the towel paper. The seeds were covered with the other half of the paper, which was then rolled over. A wax paper was wrapped around the rolled paper towel and both ends were tightened with rubber bands. This prevented the run-off of water and helped maintain the moisture required for the seed germination process. The rolled towel papers were kept in a slanting position in a plastic tray. The seeded towels were placed in a seed germinator at 25 °C with RH around 85%. The seedlings were examined on the 10th day of incubation and the germination per cent was calculated.
Standard agar plate method
In the method, potato sucrose agar medium was used. The medium was transferred into each pre-sterilized petridish. With the help of a forcep, pretreated seeds were placed on 18-20 ml of solidified and cooled PSA at equal distances. The seeds were placed in a manner such that 8 were in the outer ring and two in the center. Observations on the basis of colony and habit characters of the developing associated mycoflora were recorded on the 5th day of incubation. Mycoflora were identified on the basis of developing colony and habit characters, observed directly under a stereoscopic binocular microscope. Confirmation of the mycoflora was made by making slides and examining them under a compound microscope.
Impact of seed associated mycoflora
On seed germination
The effect of seed-associated mycoflora was determined by the Standard Ragdoll method (between the blotters) and the Standard Blotter method (top of the blotters).
On seed emergence
In the method, counted soybean seeds were sown in a plastic tray filled with sterilized soil and sand kept under laboratory conditions. Seed emergence was recorded after 5 and 10 days of incubation. The trays were irrigated with sterile water whenever required. Light was provided by horizontally hung daylight tube lights. Seed emergence was also determined under field conditions.
On seedling vigour index
Seedling vigour index was calculated based on the seedling length as per the formula recommended by Abdul-Baki and Anderson (1973) [1] .
Seedling vigour index = (Mean root length + mean shoot length) × percent seed germination
To determine the seedling vigour index, 10 seedlings of each sample grown between towel papers were used. Shoot length was measured from the collar region to the point of attachment of the cotyledons. Root length was measured from the collar region to the tip of the main root.
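The vigour index is simple arithmetic, so a worked example may help. The sketch below (with invented counts and lengths, not data from this study) computes a germination percentage from a ragdoll test and then the Abdul-Baki and Anderson index from the formula above.

```python
# Worked example of the seedling vigour index described above.
# The counts and lengths below are invented for illustration only.

def germination_percent(germinated, total_seeds):
    """Germination percentage from a ragdoll (between-the-blotters) test."""
    return 100.0 * germinated / total_seeds


def seedling_vigour_index(mean_root_len_cm, mean_shoot_len_cm, germination_pct):
    """Vigour index = (mean root length + mean shoot length) x percent seed germination
    (Abdul-Baki and Anderson, 1973)."""
    return (mean_root_len_cm + mean_shoot_len_cm) * germination_pct


if __name__ == "__main__":
    germ = germination_percent(germinated=312, total_seeds=400)  # e.g., a 400-seed ragdoll test
    svi = seedling_vigour_index(mean_root_len_cm=8.4, mean_shoot_len_cm=6.1, germination_pct=germ)
    print(f"Germination: {germ:.1f}%  Seedling vigour index: {svi:.0f}")
```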
On viability
The effect of seed mycoflora on seed viability was determined by tetrazolium salt test.
Procedure:
The seeds of all the categories were completely immersed in distilled water for 18 hr to initiate the activity of dehydrogenase enzymes and to facilitate penetration of the tetrazolium solution. The testa (seed coat) of each seed was removed with the help of a forcep, and the remaining part of the seed was then immersed in 1% tetrazolium solution for 3 hr at 20 °C in complete darkness. The seeds were then rinsed with water and examined. Each seed was evaluated as viable or dead on the basis of the staining pattern and the intensity of the red colour.
On seed coat cracking
Influence of mycoflora on seed coat cracking was recorded by Ferric-Chloride (FeCl3) test.
Chemical used: FeCl3 solution 20%
Procedure: In the test, the selected seed samples of all the categories were soaked in 20% ferric chloride solution in a beaker. After 15 minutes, the development of black staining on the seed surface was examined. Observations were recorded by the naked eye as well as with the help of a hand lens (10×). Qualitative observations were taken.
Management of seed associated mycoflora
Seeds of the pre-tested soybean variety JS 335 having maximum natural infection of the target pathogens were used. The seeds were treated with individual fungicides, and observations were recorded on the associated mycoflora adopting the Standard blotter method (ISTA, 1996) [21] and the Standard Ragdoll method (ISTA, 1996) [21].
Standard blotter method
Fungicide-treated seeds were used; untreated seeds served as control. In the method, 3 circular blotter papers of the size of the petridish were cut and dipped in sterilized water. Excess water was removed and the soaked sheets were placed in each petridish. Twenty-five soybean seeds were placed in each petridish with the help of a sterilized forcep under the aseptic conditions of the inoculation chamber. In each petriplate, 16 seeds were placed in the outer circle, 8 in the inner circle, and 1 in the center so as to allow equal distance between the seeds. Seeded plates were kept for incubation in the chamber. Fungi were identified by making slides and observing them under the microscope on the eighth day of incubation, with the help of identification manuals.
Dry seed treatment with fungicide and biopesticide
Each category of soybean seeds was taken in a polythene bag, and the required quantity of fungicide or biopesticide was sprinkled over the seeds. The bags were gently shaken so as to obtain a uniform coating on individual seeds. The treated seeds were sown at equal distances in sterile soil and sand media in a plastic tray, and the effect of the fungicides and biopesticides on seed germination, seed emergence, and seedling vigour was recorded. Observations were recorded at 6 and 9 days after sowing.
Result and Discussion
In the present investigation seed health status of farmers own saved seed was determined and compared with certified seed category. The impact of associated mycoflora on sowing seed quality was investigated and the observations on the effectiveness of fungicides and biopesticides were recorded and results are presented herewith.
Collection of seed sample
In all, 90 seed samples of soybean variety JS 335 were obtained from farmers of 09 districts covering 04 agroclimatic zones of Madhya Pradesh (Table 1). The seed samples were numbered and stored in paper envelopes under low-temperature conditions. The seed samples were subjected to seed health testing by different standard techniques (ISTA, 1996) [21].
Detection of mycoflora by Standard Blotter method
Association of mycoflora with soybean seeds obtained from farmer and certified seed samples was detected by the standard blotter method (ISTA, 1996) [21]. The results of the association of mycoflora detected in the seed samples belonging to 7 agroclimatic zones and 15 districts of Madhya Pradesh are presented.
Kymore plateau & Satpura Hills
In all, 15 seed samples from 3 districts (Jabalpur, Katni and Seoni) were analysed. Data presented in Table 02 indicate the association of 5 major mycoflora in variable proportions. Maximum association of Macrophomina phaseolina (10.0%) was recorded in the seed samples from Seoni. Seed samples from Katni showed maximum association of Colletotrichum dematium (6.0%). Association of Fusarium oxysporum was maximum (14.0%) in the farmer seed samples from Seoni and Jabalpur. It was noticed that the association of mycoflora was lower in certified seed samples as compared to farmer seeds (10.0%). Incidence of Macrophomina phaseolina was only 4.0% in certified seed, while Colletotrichum dematium was recorded at 4.0% in certified seed. Association of Fusarium oxysporum was 10.0%, while Aspergillus flavus was 5.0%, in certified seed. The germination percent ranged up to 77.0% in farmer seed as compared to 81.0% in certified seed (Table 02).
Central Narmada Valley
In Central Narmada Valley, seed sample from Hoshangabad and Narasinghpur district were analysed. Association of Macrophomina phaseolina was 12.0% in farmer seed as compared to 6.0% in certified seed whereas Colletotrichum dematium 5.0% in farmer seed and 4.0% in certified seed. In certified seed association of Fusarium oxysporum was 10.0%, Aspergillus niger 5.0%, Aspergillus flavus 3.0% as compared to 11.0%, 7.0%, 5.0% in farmer seed, respectively (Table 03).
Maximum association of Macrophomina phaseolina (12.0%) and Fusarium oxysporum (11.0%) was recorded in the farmer seed samples from Hoshangabad district. The seed germination ranged up to 75.0% in farmer seed samples, while it was 78.0% in certified seed. In certified seed samples the seed germination was higher and the association of mycoflora was lower as compared to farmer seed (Table 03).
Satpura Plateau
Soybean seed samples obtained from Chhindwara and Betul districts were analysed for seed health. Association of Macrophomina phaseolina was maximum (15.0%) in the farmer seed sample from Chhindwara district. Fusarium oxysporum (13.0%) was also higher in the farmer seed of Chhindwara. Association of Aspergillus flavus was higher in certified seed as compared to farmer seed, indicating the inappropriate threshing and harvesting techniques adopted. The germination percent of farmer seed samples ranged from 63.0 to 75.0%, as compared to 65.0 to 73.0% in the certified seed (Table 04).
Table 4: Association of mycoflora with soybean seeds obtained from farmers of Satpura Plateau and certified seed samples as detected by Standard blotter method (ISTA, 1996) [21]
Nimar Valley
Data presented in Table 05 indicate that seed samples obtained from Khandwa and Khargone districts were analysed, with maximum association of Macrophomina phaseolina (12.0%) from the farmers of Khargone district as compared to 7.0% in certified seed. Association of Fusarium oxysporum was 11.0%, Colletotrichum dematium 7.0%, Aspergillus niger 12.0% and Aspergillus flavus 10.0% in the farmer seed samples, as compared to 10.0, 6.0 and 7.0% in certified seed, respectively (Table 05).
Table 5: Association of mycoflora with soybean seeds obtained from farmers of Nimar Valley and certified seed samples as detected by Standard blotter method (ISTA, 1996) [21]
The results of the association of mycoflora with soybean seed samples are summarized in Table 06. Among the 4 agroclimatic zones, incidence of Macrophomina phaseolina was maximum (up to 15.0%) in farmer seed samples obtained from Satpura plateau, while in certified seed samples the incidence of Macrophomina phaseolina was comparatively low (10.0%) from Satpura plateau as compared to farmer seed. In farmer seed samples, Fusarium oxysporum was maximum (up to 14.0%) in seed samples from Kymore plateau and Satpura Hills as compared to the certified seed sample from Satpura plateau (11.0%).
Table 6: Association of mycoflora with soybean seeds obtained from farmers of the agroclimatic zones and certified seed samples as detected by Standard blotter method (ISTA, 1996) [21]
The present investigation was undertaken to determine the seed health status of farmers' own saved seed of soybean and its management. Certified seeds were also collected to determine the difference with farmer seed. Globally about 130 diseases have been reported affecting the soybean crop at various stages of crop growth (Hartman et al., 1999) [18], of which 35 diseases are economically important in India. There are about 13 diseases that are transmitted through seeds. Several pathogens have been observed that are responsible for causing diseases of soybean (Hartman et al., 1999) [18] and many of these are reported to be transmitted through seeds (Kilpatric, 1952; Wu et al., 1964; Hussain et al., 1989; Poharkar, 1992; Vishwadhar and Sarbhoy, 1987) [23,38,19,29,31][33,8,11,32]. Investigations were made to determine the current status of seed mycoflora associated with seed obtained from farmers' own saved seed samples as compared to certified seeds. Seed samples of soybean variety JS 335 were procured from the farmers of 09 districts covering 4 agroclimatic zones of Madhya Pradesh. Certified seed samples from the respective locations were also procured for comparison.
From each district, 5 samples of certified seed and 5 samples of farmers' own saved seed were obtained through the Seed Technology Research Centre, JNKVV, Jabalpur. A number of mycoflora have been observed on soybean seeds, and a good deal of recording has been made from Madhya Pradesh, India, and at the global level. Incidence of mycoflora in variable proportions was noticed; however, the incidence of major mycoflora was comparatively less in certified seeds as compared to farmers' own saved seeds. Major mycoflora found associated with soybean seeds were Macrophomina phaseolina, Colletotrichum dematium, Fusarium oxysporum, Aspergillus flavus, Aspergillus niger and Cercospora kikuchii. The Standard Blotter method was reported to be superior to the Standard agar plate method for detection of associated mycoflora; Tripathi and Singh (1991) and Vishunawat (2003) [3,36,37] also recorded its suitability. The incidence of seed and seedling diseases has been recorded to cause maximum reduction of seed germination (Hypperly et al., 1983) [20]. The seed samples having natural infection of Macrophomina phaseolina were tested for seed germination by the top of the paper and between the blotter paper methods. The samples having no infection of Macrophomina phaseolina resulted in maximum seed germination of 89 and 95% in the top of the paper and between the blotter paper methods, respectively. Seed having maximum infection of Macrophomina phaseolina showed 72% germination in the top of the paper and 70% in the between the blotter paper method. The seed samples having no infection of Colletotrichum dematium obtained from Hoshangabad district exhibited a maximum of 91% germination in the top of the paper method and 95% in the between the blotter paper method. The seed sample from Sagar having an association of 10.0% Colletotrichum dematium resulted in 96 to 75% germination as compared to 75 to 80% in the between the blotter paper method. The influence of target pathogens on seed emergence was investigated through sowing of seeds in sterile and unsterile soil and sand. A reduction of 12% in seed emergence was noticed when seeds were sown in unsterile soil and infected with Macrophomina phaseolina, whereas a 7% reduction due to Colletotrichum dematium and 5% due to Fusarium oxysporum was noticed. Mortality was higher in sterile soil, indicating the influence of the particular mycoflora. Seed samples having no infection of Fusarium oxysporum, Macrophomina phaseolina and Colletotrichum dematium resulted in better seed germination as compared to seed samples having variable infection of the selected mycoflora; a reduction in seed germination was recorded due to seed infection. Pre- and post-emergence losses have been reported by Chauhan and Gupta (2005) [17] and Anuja et al. (2000) [7]. The results indicate that seed mycoflora associated with soybean seed play a significant role in causing seed and seedling diseases (Hartman et al., 1999 [18]; Bhale et al., 2003 [12]).
In the present investigation, the influence of seed treatment with biopesticides and chemical fungicides was determined in selected seed samples. In seed samples having maximum infection of Macrophomina phaseolina (15.0%), elimination was observed in seed treated with Copper oxychloride and Carboxin. The association of Macrophomina phaseolina was 1 to 3% in fungicide-treated seed as compared to 15.0% in untreated seed.
In untreated seed, the association of Fusarium oxysporum was 14%, while it ranged from 1 to 3% in treated seed. The association of Fusarium oxysporum was higher in seed treated with biopesticide as compared to chemical fungicide.
Study of Some Properties of PbI2 Deposited on Porous Silicon Using Thermal Evaporation Technique for Many Applications
The present work is a study of some properties of PbI2 deposited on porous silicon (n-PSi) by using the thermal evaporation technique. X-ray diffraction, scanning electron microscopy, UV-Vis spectrophotometry, and FTIR analysis were used to characterize the structural, optical, and morphological properties of n-PSi. X-ray diffraction showed that the PbI2 film has a hexagonal polycrystalline structure, while FE-SEM images of the porous silicon prepared by photoelectrochemical etching showed an irregular pore distribution; the pores reflect the increased surface area of the silicon. SEM images of the PbI2 film showed that the particles were scattered and resembled gravel in size. The estimated optical energy gap of the PbI2 thin films was 2.6 eV. The PbI2 film has lower transmittance values at short wavelengths, but as the wavelength increases, the transmittance values gradually increase. The greatest transmittance value was 0.88. From FTIR analysis, the chemical bonds between porous silicon and PbI2 were determined.
Introduction
Lead iodide (PbI2) has emerged as a promising material with broad technical applications, including perovskite photovoltaic solar cells and X-ray and γ-ray detectors operating at room temperature [1][2][3][4][5][6]. Lead iodide is an important ingredient in the fabrication of perovskite solar cells, which have been extensively explored by numerous researchers across the world and have recently achieved a Power Conversion Efficiency (PCE) of more than 20% [4]. Lead iodide is a P-type semiconductor with a high energy gap of 2.3-2.6 eV, depending on the deposition process, and high atomic numbers (lead has atomic number 82 and iodine 53), making it useful as an ionizing radiation detector [3]; it can also be utilized in medical imaging, nuclear detection, and photosensitive semiconductor-metal applications [7]. Generally, PbI2 crystallizes in a hexagonal layered form. Its hexagonal close-packed (HCP) crystal is made up of covalently bound layers of I-Pb-I that are stacked on top of one another by weak van der Waals bonds perpendicular to the crystal c-axis [00l]. The anisotropic features of PbI2 crystals are due to this layered structure, which may lead to the formation of numerous polytypic structural variations [8]. Porous silicon (PS) has recently been a matter of significant investigation, owing to its photoluminescence features and possible uses in photovoltaic devices, chemical sensors, and biological sensors [9,10]. The idea of etching silicon surfaces has gained a lot of importance in semiconductors and solar cells, since it helps to improve and produce devices that have a wide range of applications. The etching process has progressed to nanotechnology, where the material acquires new properties as it reaches atomic dimensions and is governed by quantum rather than classical rules [11]. Porous silicon (PSi) is made up of a network of silicon wires and voids that are nanoscale in size. It is made in a variety of ways, including photoelectrochemical etching of the surface of crystalline silicon in an aqueous solution of HF acid, with careful control of the preparation conditions (etching time, slice specifications, acid concentration, and current density) to obtain a suitable crystal structure of porous silicon with various layers of porosity [12,13]. This work aims to prepare lead iodide films deposited on porous silicon by a vacuum evaporation method and to study some properties of this material, in order to present preliminary results for a variety of applications. In 2018, Rana Kadhim Abid alnabia, Malek A. H. Muhi et al. prepared lead iodide films deposited on glass substrates by a thermal evaporation process and studied their optical and electrical properties. The film showed a hexagonal crystalline structure. The value of the energy gap for a sample of 200 nm thickness was 2.9051 eV, with the maximum intensity at the (003) plane, corresponding to a crystallite size of 12.5 nm. The electrical properties studied comprised measurements of electrical conductivity, mobility, carrier concentration, and the Hall coefficient, with results of 1.038 × 10⁻⁵, 0.6727 × 10², 1.009 × 10¹² and 2.631 × 10⁶, respectively [14].
Experimental Procedures
Preparation of Porous Silicon
The substrate was a crystalline wafer of n-type silicon with a resistivity of 5 Ω.cm, a thickness of 500 μm, and a (100) orientation. The substrates were cut into pieces with areas of 1.5 × 1.5 cm². After chemical treatment, 0.1-μm-thick Al layers were formed on the backsides of the wafers using an evaporation method. Photoelectrochemical etching was then carried out at room temperature by employing a platinum electrode in a 1:1 mixture of HF (45%) and ethanol (99.99%). The light source was a halogen lamp with a light intensity of 20 mW/cm² to ensure homogeneity of the etched layer, and a current density of 14 mA/cm² was applied for 8 minutes, as shown in Figure 1. The etched area was 0.785 cm², as shown in Figure 2.
Preparation of PbI2 Thin Film
PbI2 films of 200 nm thickness were deposited on cleaned glass and PSi substrates by the thermal evaporation method at a vacuum of 10⁻⁵ Torr using a high-vacuum coating apparatus (Edwards type E306A). The distance between the source and the substrate inside the vacuum chamber was roughly 18 cm, and a molybdenum boat was utilized to hold the PbI2 powder. Figure 3 a & b shows the thermal evaporation procedure and the PbI2 thin film after it has been deposited on the PSi substrate, while Figure 4 shows the cross-section image of the prepared heterojunction.
Characterization
The PbI2 and PSi films were characterized by utilizing several characterization techniques, namely X-ray diffraction (XRD), scanning electron microscopy (SEM), UV-Vis spectrophotometry, and Fourier transform infrared spectroscopy (FTIR). Figure 5 shows the X-ray diffraction of the crystalline Si and n-PSi materials: blue plots for n-Si and red plots for PSi produced by anodization with a current density of 14 mA/cm² and an etching period of 8 minutes. When compared to the peak of n-type porous silicon (n-PSi), the crystalline Si peak has a higher intensity value. Even after the etching procedure, the etched silicon retains its single-crystal structure, but due to strain it shifted slightly to a smaller diffraction angle (2θ of 69.428° and 69.381°), oriented exclusively along the (400) direction, resulting in a modest expansion of the lattice parameter [17,18]. A Cu-Kα target with a wavelength of 1.54060 Å was used to obtain the X-ray diffraction of the PbI2 film produced on the PSi substrate (using an XRD-6000 system). Figure 6 reveals the principal diffraction peaks at the corresponding planes (001), (101), (002), (202), (003), (210), and (004) at diffraction angles of 12.73°, 23.98°, 25.58°, 28°, 38.65°, 39.7°, and 52.13°, respectively. Similarly, the figure reveals the sharp and narrow peak at an angle of 69.8° for the porous silicon layer with orientation (004). The PbI2 thin film is polycrystalline and has a hexagonal structure, as indicated by the JCPDS card [14]. Scherrer's formula (Eq. 1) was then used to calculate the average crystallite size, which was around 18.28 nm [14][15][16][17][18][19][20]:
D = kλ / (β cos θ)    (1)
where D is the crystallite size, λ is the X-ray wavelength (1.5406 Å), β is the full width at half maximum in radians, θ is the Bragg diffraction angle, and k is a shape factor equal to 0.9 [21].
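As a numerical check on Eq. (1), the short script below evaluates the Scherrer size and also converts the two Si(400) 2θ values quoted above into interplanar spacings via Bragg's law. The FWHM used here is an assumed value chosen only to give a size close to the reported ~18 nm, since the measured peak widths are not listed in the text.

```python
# Scherrer crystallite size (Eq. 1) and a Bragg d-spacing check for the XRD results above.
# The FWHM is an assumed value; the 2-theta angles are those quoted in the text.
import math

WAVELENGTH_A = 1.5406   # Cu K-alpha wavelength in angstroms
K_SHAPE = 0.9           # Scherrer shape factor


def scherrer_size_nm(fwhm_deg, two_theta_deg):
    """Crystallite size D = k*lambda / (beta*cos(theta)), with beta in radians."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    d_angstrom = K_SHAPE * WAVELENGTH_A / (beta * math.cos(theta))
    return d_angstrom / 10.0   # angstrom -> nm


def bragg_d_spacing_A(two_theta_deg):
    """Interplanar spacing from Bragg's law: lambda = 2*d*sin(theta)."""
    theta = math.radians(two_theta_deg / 2.0)
    return WAVELENGTH_A / (2.0 * math.sin(theta))


if __name__ == "__main__":
    # Assumed FWHM of ~0.44 deg for the PbI2 (001) peak near 2-theta = 12.73 deg.
    print(f"Scherrer size: {scherrer_size_nm(0.44, 12.73):.1f} nm")
    # Si (400) reflection before and after etching (angles quoted in the text).
    for two_theta in (69.428, 69.381):
        print(f"2theta = {two_theta:.3f} deg -> d = {bragg_d_spacing_A(two_theta):.4f} A")
```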
The optical properties were recorded by using a UV-Vis spectrophotometer (Metertech SP8001). Figure 8 shows the transmission spectra of PbI2 films deposited on a glass substrate using thermal evaporation as a function of wavelength (200-1000 nm). The film's structure, preparation, film thickness, and deposition conditions all have a significant impact on the transmission. The PbI2 film has lower transmittance values at shorter wavelengths, but as the wavelength increases, the transmittance values gradually increase. The greatest transmittance value was 0.88, which is ideal for optoelectronic devices, particularly solar cell window layers. Furthermore, a dramatic drop at the band edge reflects the PbI2 film's crystallinity, which is consistent with the XRD data.
Figure 8: The transmission spectrum of PbI2 films deposited on a glass substrate by thermal evaporation.
A Tauc plot was used to analyze the optical band gap. Figure 9 depicts a linear relationship between (αhν)² and photon energy. This behavior indicates that a direct allowed transition is possible in the PbI2 film. The optical band gap can be extracted by extrapolating the linear part of the curve to the photon energy axis [15]. For the PbI2 thin film, the extrapolation yielded a band gap of 2.65 eV. The energy gap was calculated by using Eq. (2), the Tauc relation for direct allowed transitions:
(αhν)² = A(hν − Eg)    (2)
where α is the absorption coefficient, hν is the photon energy, A is a constant, and Eg is the optical band gap.
The FTIR technique is a potent tool for determining the chemical species present in a substance. This approach measures how much radiation can be absorbed by the chemical bonds in the substance as the wavelength of the radiation changes in the infrared range; different chemical bonds absorb different frequencies of radiation [22]. The best way to determine the chemical composition of the PSi surface is to use Fourier transform infrared (FTIR) spectroscopy. Because PSi has a substantially bigger specific area than bulk Si, the FTIR signal is larger and easier to measure. The surface of the manufactured PSi layer oxidizes spontaneously after a few hours in ambient air [23]. The pore surface has a high density of dangling Si bonds and original impurities such as hydrogen and fluorine, which are residuals from the electrolyte. Figure 10a shows the FTIR spectra measured from samples prepared at a current density of 14 mA/cm² and an etching time of 8 minutes. The peaks at around 617 cm-1, 663 cm-1, and 891 cm-1 are from Si-Si, Si-H, and Si-O, respectively. The transmittance peaks at 1423-1465 cm-1 and 1519 cm-1 are due to C-H. The peaks at 1708 and 2380 cm-1 are due to C-O, and the peaks at 3614 and 3745 cm-1 are due to O-H and Si-OH [24][25][26][27][28]. Figure 10b shows the IR spectrum of the PbI2 sample deposited on the PSi substrate. The peak at 1627 cm-1 is assigned to the Pb-I band [29]. The peaks around 3412 and 3383 cm-1 are due to asymmetric and symmetric stretching vibrations of the Pb-I cluster [30]. It can be noted that the PbI2 peaks are weak, which can perhaps be attributed to the PSi substrate.
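Returning to the Tauc analysis above (Figure 9 and Eq. (2)), the extrapolation can be illustrated numerically: obtain the absorption coefficient from transmittance (here assuming α ≈ −ln(T)/t for a film of thickness t and neglecting reflection), plot (αhν)² against hν, fit the linear region near the absorption edge, and take the intercept with the energy axis. The sketch below uses synthetic data and an assumed fitting window, so it illustrates the procedure rather than the authors' exact analysis.

```python
# Sketch of a Tauc-plot band-gap extraction for a direct allowed transition.
# Synthetic transmittance data are used; a real analysis would use the measured spectrum.
import numpy as np

THICKNESS_CM = 200e-7    # 200 nm film thickness in cm
HC_EV_NM = 1239.84       # photon energy (eV) = 1239.84 / wavelength (nm)


def tauc_band_gap(wavelength_nm, transmittance, fit_window_eV=(2.7, 3.0)):
    """Estimate Eg by linear extrapolation of (alpha*h*nu)^2 versus h*nu."""
    energy = HC_EV_NM / wavelength_nm                  # photon energy in eV
    alpha = -np.log(transmittance) / THICKNESS_CM      # absorption coefficient (1/cm), reflection neglected
    tauc = (alpha * energy) ** 2                       # Tauc quantity for a direct allowed transition
    mask = (energy >= fit_window_eV[0]) & (energy <= fit_window_eV[1])
    slope, intercept = np.polyfit(energy[mask], tauc[mask], 1)  # linear fit near the edge
    return -intercept / slope                          # x-axis intercept = optical band gap


if __name__ == "__main__":
    wavelength = np.linspace(350, 800, 200)
    energy = HC_EV_NM / wavelength
    # Synthetic spectrum with an absorption edge near 2.6 eV, purely for demonstration.
    alpha_demo = 2e4 * np.sqrt(np.clip(energy - 2.6, 0.0, None)) + 50.0
    transmittance = np.exp(-alpha_demo * THICKNESS_CM)
    print(f"Estimated Eg: {tauc_band_gap(wavelength, transmittance):.2f} eV")
```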
Conclusions
We have prepared and characterized a nanocrystalline porous silicon layer and a PbI2 thin film deposited by the thermal evaporation technique, in order to study some of their properties. X-ray diffraction showed that the PbI2 film has a hexagonal polycrystalline structure. The band gap energy for PbI2 was estimated to be 2.6 eV. FE-SEM images of the porous silicon prepared via the photoelectrochemical etching method showed that the pore distribution is irregular. SEM images of the PbI2 film revealed that the particles were scattered and resembled gravel in size. The results indicate that PbI2 can be used in physical applications such as solar cells.
A Study on Developing Learner Autonomy Through the Reading Circle Method
Introduction
The concept of learner autonomy was introduced into language teaching and learning in the late 1960s through the adult education movement in Europe and North America (Benson, 2004). From then on, many countries in the world have regarded the cultivation of learner autonomy as one of the important goals of language teaching. According to Little (1991), the development of learner autonomy is gradually being recognized as the ideal goal of language education, and is the main manifestation of effective language teaching and learning.
Many scholars and researchers generally agree that the study of learner autonomy in language teaching dates back to the mid-1970s. Many empirical studies have been conducted ever since to explore the efficacy of fostering learner autonomy in different educational contexts. However, few studies have investigated the effect of applying the reading circle method to the cultivation of learner autonomy.
In the present study, based on a brief review of the reading circle method and the research on learner autonomy and autonomous learning in China's educational context, the focus is upon the effect of the application of the reading circle method to promoting learner autonomy in the English for Academic Purposes course at a comprehensive university in China.
Reading Circle Method
The reading circle is very popular in libraries and educational institutions in the United States, Canada, and some other countries. The reading circle is a student-led, group-based reading, sharing and discussion activity (Shelton-Strong, 2012). In most cases, the content of reading is literary works or works of the story genre. Therefore, reading circles are sometimes called "literature circles".
The idea of reading circles, also called literature circles, was first put forward by the renowned Brazilian educator Paulo Freire. It refers to small, heterogeneous, peer-led discussion groups in which all the members discuss the identical text that has been chosen and read (Daniels, 2002). Specifically, as a reading task, learners are usually divided into different groups, and each group member plays a different role. Commonly used reading circle roles include "discussion leader, summarizer, cultural collector, connector, word master, and passage person" (Daniels, 2002; Furr, 2003; Shelton-Strong, 2012). It can be seen from the names of the roles that each member pays attention to different aspects of the text based on the overall understanding, including language (word masters and paragraph analysts), cultural elements (cultural collectors), summary of what has been read (summarizers), and analysis and evaluation of the text based on their own experiences (life connectors).
The Bookworms Club's Reading Circle Task Manual provides a task list for each role, which specifically describes each role and its task requirements. The responsibilities of each role of the group are summarized in Table 1.
A Working Definition of Learner Autonomy
Since the introduction of the concept of learner autonomy into the field of foreign language teaching through the publication of Henri Holec's book Autonomy and Foreign Language Learning in 1981, language learner autonomy has become a buzzword and a hot topic in the field of foreign language education research. According to Holec (1981: 3), learner autonomy is defined as "the ability to take charge of one's own learning". Therefore, an autonomous language learner is someone who is able to learn independently.
Most scholars agree with Holec's (1981) "ability" theory, but some scholars use the term to refer to the situation in which learners conduct self-directed learning outside the traditional classroom. In fact, self-directed learning differs from learner autonomy in that it only emphasizes the learner's attitude towards learning; that is, the learner is responsible for his or her own learning, but does not necessarily undertake all the tasks related to learning independently (Dickinson, 1987).
Based on the different discussions of the definitions and connotations of learner autonomy, Han (2013: 19) proposes a working definition of learner autonomy as "the constructive process in which language learner develops one's learning capability and strategies within the supportive and conducive environment or context, during which learner's attitude, interest and motivation are enhanced".
The Significance of Cultivating Learner Autonomy in China's Educational Context
Since the concept of learner autonomy originated abroad, research in this area in China did not enter a flourishing stage until 2000. Published articles about learner autonomy are mainly theoretical introductions and discussions of how to foster learner autonomy in China's educational context (e.g., Gao, 2005; He, 2003; Hua, 2001; Li & Guo, 2015; Tan, 2001; Wang, 2002; Wei, 2002; Xu & Zhu, 2013; Yin, 2014).
In China, researchers often use "autonomous learning" and "autonomous learning ability" to refer to the concept of learner autonomy. Despite the different names, Chinese researchers and scholars generally believe that learner autonomy is the ability or learning behavior of learners to direct their learning independently, and the connotation is consistent with Holec's definition (Qu, 2017). In fact, Benson (2005, 2011) points out that there are differences in the conceptual connotations of learner autonomy and autonomous learning.
In the teaching reform of the college English curriculum in China, emphasis has always been placed on cultivating college students' autonomous learning ability. In 2020, the new College English Teaching Guide was issued, which points out that the teaching objective of college English is "to cultivate students' English application ability, enhance cross-cultural communication awareness and communicative ability, and at the same time develop autonomous learning ability and improve comprehensive cultural literacy...". Furthermore, the use of teaching methods should "pay attention to the cultivation of students' autonomous learning ability, guide and help them develop learning strategies…". It is clear that the curriculum reform of college English teaching still requires foreign language teachers to focus on promoting the development of students' foreign language autonomous learning ability, both in terms of teaching objectives and teaching methods.
The English for Academic Purposes Course at a Comprehensive University in China
The English for Academic Purposes course is a compulsory course for freshmen who mostly major in science and engineering. At the introduction of the course, the teacher explained the concept of the reading circle and how it works. In the present study, the reading circle involves a group reading activity and tasks in which students in a small group read the same text while each member plays a different role in the group's overall comprehension of the reading material.
Firstly, the students were divided into small groups of four to five members. Group members were assigned the respective roles of Word Master, Discussion Leader, Summarizer and Illustrator, and Connector.
Secondly, the teacher elaborated on the roles and tasks of each member of the group. The Word Master looks for key words and expressions in the reading article and explains the meaning, usage and, where possible, collocations of the new words and expressions. The Discussion Leader raises critical thinking questions based on the reading to check the other members' understanding of and beyond the text.
The Summarizer and Illustrator is responsible for creating a visual aid in the form of a concept map or mind map illustrating the central ideas and supporting details, and for explaining how the main ideas of the article are structured and organized. The Connector discusses with the other members the ways in which the article may relate to outside sources, records the group discussion, and makes a PowerPoint to report and share with the whole class.
Next, the teacher assigned the tasks and the reading article for each group. The tasks are as follows: 1) Skim the text and identify the key words, objective, and writing methods; 2) Summarize the main idea of each paragraph/section in one sentence, and make a mind map based on the analysis of the organization of the article; 3) Write a brief summary of about 120 words to summarize the main idea of the text; 4) Raise two to three critical thinking questions concerning the text/theme, and provide possible answers to the questions after discussion with the rest of the class; 5) Provide some outside sources relating to the theme of the text.
Finally, each group reports its study of the article in class. Upon the completion of the group report, the teacher and other students raise questions and make comments on the content and form of the group's report and performance.
Research Questions
To investigate the influence of the reading circle method on the development of learner autonomy, the following research questions are proposed.
1) Does the application of reading circle influence the cultivation of learner autonomy?
2) In what way(s) does the reading circle facilitate the cultivation of learner autonomy?
3) What are the strengths and weaknesses of the application of reading circle in the cultivation of learner autonomy?
Participants
The research was carried out among 116 freshman undergraduates from three classes who mostly majored in science and engineering. The three classes were taught by the same teacher. The teaching materials were the same, and the teaching periods were identical as well. At the end of the course, students were required to fill out an online questionnaire.
Survey Instrument
On the basis of the working definition of learner autonomy proposed by Han (2013), the researcher designed a questionnaire to explore the application of the reading circle method in promoting the cultivation of learner autonomy.
The questionnaire consists of 20 closed items. The closed items were designed to explore the changes in students' learning attitude and interest (5 items), students' learning capacity and strategies (9 items), and students' use of the learning environment and resources (5 items). A five-point Likert scale was applied to weigh each closed item in the questionnaire (1 = strongly disagree; 2 = disagree; 3 = unsure; 4 = agree; 5 = strongly agree).
In addition to the 20 closed items on the questionnaire, two open questions were designed to discover students' suggestions and comments on group reading circle activities.
After the questionnaires were collected, the data were analyzed using SPSS 25.0. The internal consistency of the 20 closed items was analyzed, and the Cronbach's alpha was 0.953, indicating that the questionnaire had good reliability.
From the KMO and Bartlett's test for factor analysis, shown in Table 2, the KMO value was 0.904 and the significance level was 0.000 (p < 0.05), indicating that the questionnaire also had good validity.
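To make the reliability analysis reproducible outside SPSS, the internal-consistency statistic can also be computed directly from the item responses. The sketch below is illustrative only: the file name, the respondents-by-items layout, and the use of pandas are assumptions, not part of the original study.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of Likert scores (1-5)."""
    k = items.shape[1]                           # number of items (20 closed items here)
    item_vars = items.var(axis=0, ddof=1)        # variance of each item across respondents
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical usage: 116 rows (respondents) x 20 columns (closed items)
responses = pd.read_csv("questionnaire_responses.csv")
print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")
```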
Results
At the end of the course, a survey was conducted in the three classes. A total of 116 students completed the online questionnaire, and the responses were all valid. Among the 116 students who completed the online questionnaire, 58 (50%) were male and 58 (50%) were female.
The responses to the 20 closed items about the effect of the reading circle method on the cultivation of learner autonomy are as shown in Table 3.
Analysis and Discussion
The overall objective of the present study is to explore how the application of the reading circle affects the cultivation of learner autonomy. Descriptive statistics were used to address the three research questions.
The Influences of Reading Circle on the Cultivation of Learner Autonomy
The first two research questions are to explore whether the application of the reading circle can influence the cultivation of learner autonomy, and in what way(s) it can facilitate the cultivation of learner autonomy.
Firstly, the data analysis shows that students' learning attitude and interest were enhanced through participation in the reading circle. According to the analysis, most of the students (77.59%) actively cooperated with other group members to complete the tasks of the reading circle. Besides, more than half of the students (52.86%) held that their interest in English language learning had increased by participating in group reading circle activities. In addition, many students (64.29%) agreed that doing group reading circle activities helped to promote their autonomous learning ability. In brief, students' learning attitude and interest were promoted through doing the reading circle tasks.
Secondly, students' learning capability and learning strategies were improved and developed through participation in the reading circle. For instance, 69.82 percent of students reported that their ability to make presentations in class had improved; 62.06 percent of students believed that their ability to use academic words had improved by participating in reading circle activities. Besides, 69.83 percent of students agreed that they had learned how to analyze the structure of a new article, and many students (63.93%) believed that their ability to draw mind maps had increased. As to reading strategies, most of the students (77.59%) believed that their skimming and scanning ability had increased. It is noted that 75 percent of students reported that their ability to write an article summary had also increased through the reading circle activities. In short, participation in the reading circle helped students to improve their learning capability in many respects.
Thirdly, students' cooperation, communication and reflection abilities were also improved through participation in the reading circle activities. Most of the students (68.96%) could discuss and complete the group tasks in the different roles of the reading circle, and many students (68.97%) reported that their communication ability had increased. Furthermore, more than half (54.29%) of the students reflected on the effect of the group report after completing it.
Finally, students' presentation-making and critical thinking abilities were promoted through participation in the reading circle activities. For example, 68.1% of students reported that their ability to make PowerPoint slides and deliver presentations had improved, and 73.27% of students believed that their critical thinking ability had improved as well.
The Strengths and Weaknesses of Applying Reading Circle in the Cultivation of Learner Autonomy
Research question 3 is designed to investigate the strengths and weaknesses of the application of the reading circle in the cultivation of learner autonomy.
From the above discussion in 5.1, the strengths of the application of the reading circle are as follows. First, the application of the reading circle contributes to the improvement of students' learning attitude and interest. Second, it facilitates the improvement of students' learning capability and learning strategies. Third, it can promote the development of students' cooperation, communication, reflection ability and critical thinking. Fourth, it increases students' language awareness in reading academic articles. As is illustrated in Table 3, most of the students (66.38%) reported that they can critically read an article as in a group reading circle in the future.
However, some points should not be neglected in the application of the reading circle. In the first place, some students (40.51%) did not use English to communicate with group members and complete tasks after class, whereas only 37.07 percent of students reported using English to communicate. This is probably because English is a foreign language in China, and some learners have not formed the habit of using English to communicate outside the classroom learning context. In the second place, some students (36.21%) believed that there were team members who did not actively participate in the discussion or prepare the materials for the report.
Effective measures should be taken to encourage and get all the members involved in fulfilling their respective roles and tasks in the reading circle.
As Benson and Voller (1997) put it, learner autonomy is by no means learning without teacher participation, and teachers play a vital role in promoting learners' self-actualization and providing them with assistance on a regular basis. Likewise, as is pointed out by some researchers (e.g., Arnold, 2000; Benson, 2011; Voller, 1997), language teachers should play the roles of "counselor" and "facilitator" in developing learner autonomy. Language teachers should understand the needs of learners and thus respond to the ongoing needs of each individual learner. Moreover, language teachers should provide "scaffolding": when students need help in the process of autonomous language learning, teachers should provide the necessary support.
In addition, Arnold (2000) and Aoki (2000) point out that factors such as mood, interest, attitude and anxiety can affect language learners' learning behavior and outcomes. Therefore, as "counselor", "facilitator" and provider of "scaffolding", language teachers should be conscious of the negative factors that may hinder language learning, and help learners to reduce and alleviate these negative effects in a timely manner.
Conclusion and Recommendations
In the present study, the reading circle method was applied to explore its effect on developing learner autonomy. It is concluded that the reading circle approach is effective in promoting the cultivation of learner autonomy, mainly by helping learners to increase their interest and motivation and by facilitating the improvement of their learning capability and strategies.
Based on the results of this study, the following recommendations are made. First, language teachers need to create a more supportive and conducive learning environment outside the classroom. As a natural communication context is lacking, language teachers can organize activities such as English corners, English speech contests and English debates to encourage and arouse learners' interest in using English to communicate outside the classroom.
Second, language teachers need to know what learner autonomy is, what roles teachers should play, and how to play these roles well to meet the needs of each individual learner. Only if language teachers have relevant knowledge, awareness, and a comprehensive understanding of the above issues can they effectively promote the cultivation of learner autonomy.
Third, language teachers need to guide and supervise learners in completing the reading circle tasks, so that they can offer assistance and support when learners have questions or problems. This helps to ensure the efficacy of applying the reading circle in teaching practice.
In brief, this research sheds light on developing learner autonomy through the reading circle method.
For teacher education and training programs, a series of courses, training programs and teaching practicums should be designed and organized to develop language teachers' knowledge and awareness, so that they can facilitate the development of learner autonomy with more innovative approaches, novel methods and practices.
Table 1. The main roles and task responsibilities of the Reading Circle
Connector: connect the content in the text that is similar to one's own life experience, and invite peers to share it.
Cultural Collector: explore the cultural elements in the text and compare them with the local culture, and invite peers to comment.
Word Master: appreciate and select good words, keywords, and difficult words in the text, explain the reasons and make an evaluation.
Passage Person: appreciate and select well-written sentences, good paragraphs or important paragraphs in the text, explain the reasons and make an evaluation.
Table 3 .
Responses about the influence of reading circle on the development of learner autonomy
Calderón's Reproducing Formula for Hankel Convolution
Calderón-type reproducing formula for Hankel convolution is established using the theory of Hankel transform.
Introduction
Calderón's formula [3], involving convolutions related to the Fourier transform, is useful in obtaining a reconstruction formula for the wavelet transform, besides many other applications in the decomposition of certain function spaces. It is expressed as identity (1.1), where φ : R^n → C and φ_t(x) = t^{-n} φ(x/t), t > 0. For conditions of validity of identity (1.1), we may refer to [3].
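For orientation, the classical identity is usually quoted in the form below. The exact statement of (1.1), including its normalization and the admissibility condition on φ, may differ in the cited source, so this should be read as the commonly used version rather than a reproduction of the original equation.

```latex
% Classical Calderón reproducing formula (commonly cited form; normalization assumed)
f(x) = \int_{0}^{\infty} \left( f \ast \varphi_{t} \ast \varphi_{t} \right)(x)\,\frac{dt}{t},
\qquad \varphi_{t}(x) = t^{-n}\,\varphi\!\left(\frac{x}{t}\right), \quad t > 0,
% which holds, for instance, when
% \int_{0}^{\infty} \bigl|\widehat{\varphi}(t\xi)\bigr|^{2}\,\frac{dt}{t} = 1
% for almost every \xi \neq 0.
```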
The Hankel convolution, introduced by Hirschman Jr. [5] in connection with the Hankel transform, was studied at length by Cholewinski [1] and Haimo [4]. Its distributional theory was developed by Marrero and Betancor [6]. Pathak and Pandey [8] used the Hankel convolution in their study of pseudodifferential operators related to the Bessel operator. Pathak and Dixit [7] exploited the Hankel convolution in their study of Bessel wavelet transforms. In what follows, we give definitions and results related to the Hankel convolution [5] to be used in the sequel.
Let γ be a positive real number. Set where J_{γ−1/2} denotes the Bessel function of order γ − 1/2.
Calderón's formula
In this section, we obtain Calderón's reproducing identity using the properties of the Hankel transform and Hankel convolutions.
Proof. Taking the Hankel transform of the right-hand side of (2.2) and then putting aξ = ω, we obtain (2.4); hence, the result follows.
The equality (2.2) can be interpreted in the following L²-sense.
Caspases from scleractinian coral show unique regulatory features
Coral reefs are experiencing precipitous declines around the globe with coral diseases and temperature-induced bleaching being primary drivers of these declines. Regulation of apoptotic cell death is an important component in the coral stress response. Although cnidaria are known to contain complex apoptotic signaling pathways, similar to those in vertebrates, the mechanisms leading to cell death are largely unexplored. We identified and characterized two caspases each from Orbicella faveolata, a disease-sensitive reef-building coral, and Porites astreoides, a disease-resistant reef-building coral. The caspases are predicted homologs of the human executioner caspases-3 and -7, but OfCasp3a (Orbicella faveolata caspase-3a) and PaCasp7a (Porites astreoides caspase-7a), which we show to be DXXDases, contain an N-terminal caspase activation/recruitment domain (CARD) similar to human initiator/inflammatory caspases. OfCasp3b (Orbicella faveolata caspase-3b) and PaCasp3 (Porites astreoides caspase-3), which we show to be VXXDases, have short pro-domains, like human executioner caspases. Our biochemical analyses suggest a mechanism in coral which differs from that of humans, where the CARD-containing DXXDase is activated on death platforms but the protease does not directly activate the VXXDase. The first X-ray crystal structure of a coral caspase, of PaCasp7a determined at 1.57 Å resolution, reveals a conserved fold and an N-terminal peptide bound near the active site that may serve as a regulatory exosite. The binding pocket has been observed in initiator caspases of other species. These results suggest mechanisms for the evolution of substrate selection while maintaining common activation mechanisms of CARD-mediated dimerization.
INTRODUCTION
Apoptotic cell death is thought to be a unique characteristic of metazoans, although its evolutionary origins are unclear. While caspases from human cells, and model organisms such as C. elegans and Drosophila, have been well-studied both biochemically and structurally (1-6), little is known about caspase activity and regulation from other species (7). Invertebrate caspases were first characterized in C. elegans (3,6) and Drosophila (8), but they have proven to be poor models for studying the evolution of the vertebrate apoptotic network as the networks in C. elegans and in Drosophila utilize fewer caspases and regulatory proteins compared to higher eukaryotes. In contrast, vertebrates have retained many characteristics of the apoptotic machinery found in sponges, sea anemone, and coral (9)(10)(11). Genomic studies of cnidarians, the sister group to the bilateria, revealed many genes that were previously thought to have been vertebrate innovations, demonstrating that the extensive gene loss in C. elegans and in Drosophila resulted in apoptotic pathways that do not reflect the characteristics of ancestral metazoans (12). C. elegans, for example, utilizes only one effector caspase (CED-3), which also bears a CARD-motif necessary for its activation (13). In contrast, humans have multiple caspases with discrete functions in the inflammatory and apoptotic pathways (14). Moreover, cytochrome C is not involved in the formation of the apoptosome in Drosophila, indicating that this organism lacks the intrinsic pathway found in humans (2). The limitations of these model organisms show that studies of basal metazoans, which appear to have a full complement of apoptotic signaling molecules, are more relevant to the evolutionary pathways of vertebrate apoptotic networks.
Disease susceptibility is one of several major stressors of coral communities, with over thirty-five coral diseases reported that affect over eighty coral species (8,15). Coral possess a rudimentary immune system that consists of innate immune pathways but no adaptive immune system (16). The invertebrate innate immune system is similar to that of vertebrates in utilizing physical and chemical barriers, cellular defenses, and humoral responses to pathogens (17), but in the relatively new field of ecological immunity, major knowledge gaps remain regarding the cellular defenses to disease (18). Although general response types have been outlined regarding receptor recognition, signaling pathways, and effector responses, such as metabolic changes, very few functional studies have been performed on the responses of coral to disease stressors (19).
Relatively more is known regarding coral responses to temperature stress, and the phenomenon known as bleaching, but there still remain large gaps in our understanding of cellular responses to stress. For example, coral activate cell death responses following expulsion of the algal symbiont (20-23), but cnidarian caspases and their regulation during stress responses have not been studied. In Aiptasia pallida, elevated temperatures were shown to induce early onset of apoptosis in the endoderm where algal symbionts reside (22). Heat-induced apoptosis has also been correlated with the upregulation of the anti-apoptotic protein Bcl-2 in Acropora millepora, suggesting that coral possess regulatory mechanisms to compensate for sudden environmental changes (24). Caspase-3-like activity was detected in the stony coral Pocillopora damicornis when exposed to heat-stress and high levels of ammonium (25), and caspase inhibitors have been shown to prevent the death of bleached coral (9). Recently, Fuess and colleagues showed an increase in expression of apoptosis-related genes, among others, in Caribbean coral with Eunicea Black Disease, which results in a heavily melanized appearance of gorgonian corals (18), but the response has not been elucidated in functional or biochemical studies. Collectively, the data show the potential for complex apoptotic signaling pathways in coral, but data on activation and control mechanisms, and how they compare to those in vertebrates, are lacking due to a dearth of biochemical characterization.
In order to examine caspase-3-like proteins in coral, we expressed and characterized two caspases each from the Caribbean reef-building corals Porites astreoides and Orbicella faveolata. The two coral species are found on opposite ends of the stress-tolerance spectrum, and the cellular mechanisms that are activated following an immune challenge correlate with disease sensitivity (20). For example, O. faveolata, a disease-sensitive species, activates caspase-mediated apoptotic pathways upon immune challenge, whereas P. astreoides, a disease-tolerant species, activates an adaptive autophagic response (20). These findings suggest that the downregulation of apoptotic genes increases stress tolerance, and upregulation of apoptotic genes exacerbates stress sensitivity. Two of the proteins (called PaCasp7a and OfCasp3a) contain CARD motifs at the N-terminus, an unusual combination that has not been observed in caspases-3 or -7 enzymes from higher eukaryotes. In contrast, PaCasp3 and OfCasp3b show canonical caspase-3/-7 structural organization, with short pro-domains. We describe the first biochemical characterization of the coral caspases and show that the PaCasp3 and OfCasp3b enzymes are not activated directly by the CARD-containing PaCasp7a and OfCasp3a, respectively. We also report the first X-ray crystal structure of a coral caspase, that of PaCasp7a determined at 1.57 Å resolution, which reveals an N-terminal peptide bound near the active site that may serve as a regulatory exosite.
Cloning, Protein Expression and Protein Purification
The codon optimized sequences of the four coral caspases, PaCasp3, PaCasp7a, OfCasp3a and OfCasp3b, were based on the sequences from previous transcriptomic data (20) and were cloned into pET11a vector (Genescript, USA). All proteins contained a C-terminal His6 tag and were expressed in E. coli BL21(DE3) pLysS cells and purified as previously described (26,27).
Phylogenetic Analysis
Caspase sequences of representative species were obtained from the CaspBase (caspbase.org) (28) along with BLAST top hits from HMMER (29), and multiple sequence alignments were obtained using MEGA 7 (30). The best model of evolution to construct a phylogenetic tree from our dataset was determined with ProtTest 3 (31) (https://github.com/ddarriba/prottest3), and the tree was computed with the maximum likelihood method in IQTREE, using the Jones-Taylor Thornton model (JTT) plus gamma distribution (32). The tree was bootstrapped 1000 times as a test of phylogeny. The accession numbers of all genes used for phylogenetic analysis are listed in Supplementary Tables S1 and S2.
Size Exclusion Chromatography
Proteins were examined using a Superdex75 Increase 10/300GL column on an AKTA-FPLC.
The proteins were concentrated to 1-5 mg/mL and dialyzed in a buffer of 30 mM potassium phosphate, pH 7.5, containing 1 mM DTT for 4 hours. The column was equilibrated with two column volumes (50 mL) of the same buffer. Protein (200 µL) was loaded onto the column, and the column was resolved at a flow rate of 0.5 mL/min. The column was calibrated using the gel filtration LMW calibration kit (GE Health Sciences) following the manufacturer's instructions.
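As an illustration of how apparent molecular weights are read off such a calibration, the short sketch below fits log10(MW) of the standards against elution volume, which is the usual linear relationship for SEC; the standard proteins, volumes, and the query value are placeholders rather than data from this study.

```python
import numpy as np

# Hypothetical calibration standards: elution volume (mL) versus known MW (kDa)
elution_ml = np.array([9.2, 10.5, 11.8, 13.1])
mw_kda = np.array([75.0, 44.0, 29.0, 13.7])

# Linear fit of log10(MW) against elution volume
slope, intercept = np.polyfit(elution_ml, np.log10(mw_kda), 1)

def apparent_mw(volume_ml: float) -> float:
    """Interpolate an apparent molecular weight (kDa) from the calibration line."""
    return 10.0 ** (slope * volume_ml + intercept)

print(f"Apparent MW at 11.0 mL: {apparent_mw(11.0):.1f} kDa")
```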
Mass Spectrometry
Matrix-assisted laser desorption/ionization (MALDI) analysis was done as described (33). In brief, proteins were resolved by SDS-PAGE on a 12.5% acrylamide gel, and then bands for the large and small subunits were excised. Each gel fragment was destained using a solution of acetonitrile and 50 mM ammonium bicarbonate (1:1 v/v) for 3 hrs. The gel fragments were then crushed in microcentrifuge tubes, and the proteins were extracted with 30 µL of a solution of formic acid/water/2-propanol (1:3:2 v/v/v) (FWI) for 8 hours at room temperature. After extraction, samples were centrifuged and supernatant was lyophilized then re-dissolved in 2 µL of MALDI matrix solution (FWI saturated with 4-hydroxy-α-cyano-cinnamic acid (4HCCA)).
Dissolved protein was then retrieved for MS analysis using dried-drop method of matrix crystallization then analyzed by MALDI-MS (Axima Assurance Linear MALDI TOF).
Whole-Protein Cleavage Assay
Enzyme specificity of the four coral caspases was first examined by cleavage of human procaspases-3 and -6 in time-course assays, as described previously (34). The procaspase substrate was diluted to a final concentration of 5 µM in a buffer of 150 mM Tris-HCl, pH 7.5, 50 mM NaCl, 1% sucrose, and 10 mM DTT at 37 °C. Reactions were started by the addition of respective coral caspase at a final concentration of 1 µM, and the total reaction volume was 2 mL. Aliquots of 100 µL were removed at times 30 sec, 1 min, 5 min, 15 min, 30 min, 45 min, 1 hour, 2 hour, 4 hour, 6 hour and 8 hour after the addition of active enzyme. Reactions were stopped by the addition of six-fold concentrated SDS-PAGE loading dye (20 µL) followed by incubation in boiling water for 5 minutes. Samples were loaded into a 16% resolving gel with a 4% stacking gel and electrophoresed for 1.5 hours at 80 volts followed by an increase in voltage to 190 V for an additional 4 hours. The change in density for the procaspase substrate over time as a result of cleavage was quantified using Image lab software (Bio-Rad), and the data were plotted with Kaleidagraph. As described previously (34), the data were fit to an exponential decay to determine the CF50 (cleavage of 50% of protein substrate), and the CF50 was used to calculate the rate of hydrolysis (M -1 sec -1 ) using the equation k = ((−ln(P))/(E·t)). In this case, k is the rate of hydrolysis, P is the fraction cleaved (50%), E is the concentration at which CF50 is achieved (in molar), and t represents time (in seconds).
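The rate calculation described above is straightforward to reproduce. The sketch below simply evaluates the stated equation k = (−ln P)/(E·t); the enzyme concentration and CF50 time used here are placeholder values, not measurements from this work.

```python
import numpy as np

def hydrolysis_rate(p_cleaved: float, enzyme_molar: float, t_seconds: float) -> float:
    """Rate of hydrolysis k (M^-1 s^-1) from k = (-ln(P)) / (E * t).

    p_cleaved    : fraction of substrate cleaved (0.5 at the CF50)
    enzyme_molar : enzyme concentration in molar
    t_seconds    : time (s) at which the fraction p_cleaved is reached
    """
    return -np.log(p_cleaved) / (enzyme_molar * t_seconds)

# Illustrative numbers only: 1 uM enzyme reaching 50% cleavage after one hour
print(f"k ~ {hydrolysis_rate(0.5, 1e-6, 3600.0):.0f} M^-1 s^-1")
```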
The total reaction volume was 200 µL, and the final enzyme concentration was 10 nM.
Following the addition of substrate (Ac-DEVD-AFC, Ac-VEID-AFC, Ac-LETD-AFC, Ac-LEHD-AMC, Ac-IETD-AMC), the samples were excited at 400 nm (AFC substrates) or 350 nm (AMC substrates), and fluorescence emission was monitored at 505 nm (for AFC substrates) or 450 nm (for AMC substrates) for 60 seconds using a PTI fluorometer (Photon Technology International, Edison, NJ, USA). The steady-state parameters, KM and kcat, were determined from plots of initial velocity versus substrate concentration.
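The steady-state parameters mentioned above follow from fitting initial velocities to the Michaelis-Menten equation. A generic fitting sketch is given below; the substrate concentrations and velocities are invented for illustration, and the conversion to kcat assumes velocities already calibrated to product concentration and the 10 nM enzyme concentration stated above.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """Initial velocity v = Vmax * [S] / (KM + [S])."""
    return vmax * s / (km + s)

# Hypothetical data: substrate concentration (uM) and initial velocity (uM product / s)
s = np.array([10.0, 25.0, 50.0, 100.0, 200.0, 400.0, 800.0])
v = np.array([0.008, 0.019, 0.034, 0.056, 0.080, 0.101, 0.116])

(vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=[0.12, 200.0])
kcat = vmax / 0.01  # enzyme concentration of 10 nM expressed in uM
print(f"KM = {km:.0f} uM, kcat = {kcat:.1f} s^-1, kcat/KM = {kcat / (km * 1e-6):.2e} M^-1 s^-1")
```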
Substrate phage display assays were performed as described (36,37). Briefly, phage libraries consisting of caspase recognition sequences were bound to Ni-NTA resin. Enzyme (10-100 nM) was added to initiate the reaction, and samples were incubated between 3 and 20 hours. E. coli ER2738 cells were used to amplify the cleaved phage from previous rounds by infecting cells with the supernatant after enzyme incubation. The cells were grown for 4 hours, removed by centrifugation, and the supernatant was collected and used as the library for the following round of selection. Plaque counting was used to determine the endpoint of the experiment, when the number of phage bound to the resin was similar to the number of phage released during the treatment. The number of phage released during the reaction versus the control (without enzyme) was monitored to ensure progress in substrate selectivity.
X-ray Crystallography
Protein structure predictions were performed using Swiss-Model (38) using human caspases-3, -6, and -7 as references (PDB ID 2J30, 3OD5, and 1F1J, respectively). For structure determination, the coral caspase proteins were dialyzed in a buffer of 10 mM Tris-HCl, pH 7.9, 100 mM NaCl, 1 mM DTT and concentrated to ∼7 mg/mL. The molar extinction coefficients for the proteins were determined by ProtParam under reduced conditions (Supplementary Table S3).
The inhibitor Ac-DEVD-CHO (reconstituted in DMSO) was added at a 5:1 (w/w) inhibitor/protein ratio, and DTT and NaN3 were added to final concentrations of 10 and 3 mM, respectively. Samples were incubated for 1 hour in the dark on ice. The hanging-drop vapor diffusion method was applied using 4 µL drops that contained equal volumes of protein and reservoir solutions using the PEG/Ion 2 screen (Hampton Research). PaCasp7a protein crystallized in a solution of 0.1 M sodium malonate, pH 5.0, 12% w/v polyethylene glycol (PEG) 3350, and conditions were optimized such that the best diffracting crystals of PaCasp7a were obtained at 18 °C in a solution of 0.1 M sodium malonate, pH 4.9-5.1, 15-17% PEG 3350 (w/v), 10 mM DTT, and 3 mM NaN3.
Crystals for PaCasp7a appeared within 3 to 5 days and were briefly immersed in a cryogenic solution containing 20% PEG 4000, 80% reservoir solution. Crystals were stored in liquid nitrogen. We were unable to obtain diffraction-quality crystals for the remaining coral caspases. Data sets were collected at 100 K at the SER-CAT synchrotron beamline (Advanced Photon Source, Argonne National Laboratory, Argonne, IL). Each data set contained 180 frames at 1° rotation. The protein crystallized in the space group P2₁2₁2₁ and was phased with a previously published HsCasp3 structure (PDB entry 2J30). Data reduction and model refinements were done using HKL2000, COOT, and Phenix, and a summary of the data collection and refinement statistics is shown in Supplementary Table S4. Molecular dynamics simulations were performed for 50 ns with GROMACS 4.5 (39) using the Amber99 force field (40) and the TIP3P water model (41), as previously described (42).
Data Deposition
The crystal structure for PaCasp7a has been deposited in the Protein Data Bank, www.wwpdb.org under PDB ID code: 6WI4.
Caspases in two coral species: Phylogenetic analysis and domain organization
We examined seven caspase genes from O. faveolata, based on sequences obtained from previous transcriptomic and genomic data (Supplementary Fig. S1 and Supplementary Table S1) (20). The caspases were named based on the E-value from BLAST as well as the sequence similarity to the human orthologs. Results from examining the sequence homology and domain organization suggest that three of the caspases are apoptotic initiators and four are apoptotic effectors in O. faveolata (Fig. 1A). The sequence identities of the seven caspases compared to most human caspases are low, only ~35% (Table 1), so it is difficult to determine the nature of each coral caspase based solely on sequence comparisons with human orthologs. In addition, two caspases from O. faveolata contain an N-terminal CARD (caspase activation and recruitment domain) motif, similar to those in HsCasp2 and HsCasp9, and one caspase contains tandem DED (death effector domain) motifs, similar to that found in HsCasp8 (Fig. 1A). The remaining four proteins show domain organization similar to the human effector caspases, with short prodomains (Fig. 1A).
In the case of P. astreoides, four caspase sequences consisted of two initiator-like caspases (called PaCasp7a and PaCasp2) and two effector-like caspases (called PaCasp7b and PaCasp3) ( Fig. 1A and Supplementary Fig. S1). Similar to the results for O. faveolata, the caspase sequences from P. astreoides also have only ~35% identity with human caspases, regardless of comparisons to initiator or effector caspases (Table 1). The sequences from the two coral species displayed much higher identity to putative homologs in the other coral species. For example, PaCasp7a has a 77% sequence identity with OfCasp3a, whereas PaCasp3 has 71 and 73% sequence identity, respectively, with OfCasp3b and OfCasp3c. Likewise, PaCasp2 demonstrates 76% sequence identity with OfCasp2, and PaCasp7b shares 60% identity with OfCasp7 ( Fig. 1B).
A phylogenetic analysis of cnidarian and vertebrate caspases demonstrated that cnidarian caspases cluster in separate groups ( Fig. 2A). All of the short pro-domain caspases, including PaCasp3 and OfCasp3b, cluster together between vertebrate effector (caspases-3/7) and initiator (caspases-8/10) caspases. Interestingly, the comparative genomics and phylogenetic analyses suggest that short cnidarian caspases, that is, those lacking a CARD or DED, share a common ancestor with vertebrate effector caspases-3 and -7 and with initiator caspases-8 and -10 ( Fig. 2A). Homologs of caspase-8 in coral share the same clade with vertebrate caspases-8 and -10, and the CARD-containing OfCasp2 and PaCasp2 clustered with vertebrate caspase-2. With the exceptions of OfCasp2 and PaCasp2, the other CARD-containing coral caspases cluster with OfCasp3a and PaCasp7a and segregate into a different clade, although they share a common ancestor with vertebrate caspases-2 and -9.
We analyzed the CARD motifs of cnidarian caspases independently of the protease domains and compared them to the CARD motifs of vertebrate caspases-2 and -9 as well as that of CRADD (caspase-2 and RIPK1 domain containing adaptor with death domain) motifs, which recruits caspase-2 to the PIDDosome (43) (Fig. 2B). The CARD motifs of coral caspases-3 and -7 cluster together but are more closely related to the CARD of caspase-2 than those of caspase-9 or CRADD. Based on this analysis, there appear to be many CARD-containing caspase-3-like proteins in cnidaria. At present, it is not clear why CARD-containing caspase-3-like proteins provide an advantage for coral development and/or symbiosis since the animals also contain initiator caspases that presumably activate the short pro-domain effector caspases. CARD-containing caspase-3-like proteins are rarely observed in vertebrate effector caspases. Fish-specific caspases have been found, such as the CARD-containing caspase-8 for example (44), but caspase-2 is, at present, the only characterized DxxDase with a CARD.
We chose two caspases from each species to characterize further, based on the sequence comparisons with human effector caspases-3, -6, or -7. In the case of O. faveolata, we chose two caspase-3-like proteins that showed 47% and 35% sequence identity, respectively, with HsCasp3, and we named the two proteins OfCasp3a and OfCasp3b, respectively ( Fig. 1A and Table 1). Interestingly, despite predicted similarity to HsCasp3, OfCasp3a also has an N-terminal CARD motif. One caspase from P. astreoides demonstrated the highest sequence identity with HsCasp7 (44%) and was named PaCasp7a, even though it also contains a CARD motif (Fig. 1A and Table 1). The second protein from P. astreoides showed similar sequence identity to human caspases-3, -6, -7, and -8 (36-37%) (Fig. 1A and Table 1), but the protein does not have a DED motif like caspase-8 and the domain organization is more similar to that of caspase-3.
Consequently, we named the protein PaCasp3. Overall, the low sequence identity between the vertebrate and invertebrate caspases show that the classification is somewhat arbitrary without further biochemical characterizations of the proteins. Together, the phylogenetic analysis shows that the caspases from P. astreoides and O. faveolata have relatively low sequence identity (~40%) to mammalian caspases as well as other vertebrate families, but the proteins had much higher sequence identities to caspases from other cnidarian species, such as Pocillopora damicornis, Stylophora pistillata, and Nematostella vectensis.
An analysis of the coral caspase sequences shows that the proteins contain all of the conserved features that define a caspase. For example, each protein contains the catalytic dyad, histidine (CP-075) and cysteine (CP-117) (Fig. 3), where "CP" refers to the common position defined previously for caspases (28). The conserved sequence that contains the catalytic cysteine, (CP-115)-QACRG-(CP-119), is found in the four coral caspases, although PaCasp7a and OfCasp3a contain QACQG as in human caspase-8. One of the most highly variable regions, the intersubunit linker (IL), is the same length in OfCasp3b and PaCasp3 compared to that of HsCasp3, while those of PaCasp7a and OfCasp3a have 1 and 2 amino acids fewer than HsCasp3, respectively (Fig. 3).
Biochemical characterization of coral caspases
We examined the four coral caspases by size exclusion chromatography (SEC) since CARD-containing human caspases are monomers or mixtures of weak protomer-dimer (45). Because the IL of the procaspase monomer is cleaved during activation, the protomer is defined as a single unit that contains a large and a small subunit and a single active site. Thus, the dimer consists of two protomers, or is more formally considered a dimer of heterodimers. The data show that the CARD-containing coral caspases, PaCasp7a and OfCasp3a, elute in a single peak with MW of 42.6 and 44 kDa, respectively. The sizes are larger than that of a protomer but smaller than a dimer (Supplementary Fig. S2). We also determined the mass of the large and small subunits by mass spectrometry. Caspase zymogens are cleaved in the IL, and the N-terminal CARD or pro-domain is removed during activation (45). The proteins also auto-process during overexpression in E. coli. The MW of the large and small subunits of each caspase, determined by MS, are shown in Supplementary Table S5. When compared to the sequences for each protein (Fig. 3), the data indicate the processing sites of OfCasp3a and PaCasp7a (Fig. 1A and Fig. 3). We note that there are potentially other cleavage sites in the CARD motifs, but in our assays the CARD motif was completely removed.
We characterized the substrate specificity for each of the four coral caspases using substrate-phage display assays, as described previously (37). In these assays, we utilize two substrate-phage libraries that determine the P5-P1' substrate preferences, with either aspartate fixed at the P1 position (P5-xxxxDx-P1') or random (called 6x), and the results were the same for both libraries. The data show that PaCasp7a and OfCasp3a have Group II specificity, with a preference for aspartate in the P4 position (DxxDase) (Fig. 4A and Fig. 4B). In contrast, PaCasp3 and OfCasp3b prefer valine in the P4 position (VxxDase) (Figs 4C and 4D), which is defined as Group III specificity like HsCasp6.
The activities of PaCasp7a and of OfCasp3a were also examined using DEVD-AFC and VEID-AFC substrates. In all cases, however, the activity against the tetrapeptide substrates was very low due to KM values >500 µM, so we could not reliably determine the steady-state catalytic parameters kcat or KM from the small peptide activity assays. In caspases, the KM is thought to correlate with substrate binding (KD), so the high KM suggests poor binding of the small peptide.
Because of the low activity in small peptide assays, we tested the coral caspases for their ability to hydrolyze full-length (FL) human procaspases-3 and -6, which were made catalytically inactive due to mutation of the catalytic cysteine to serine (26). Thus, the proteins are incapable of undergoing self-proteolysis. As shown in Fig. 3, HsCasp3 is cleaved once in the intersubunit linker at CP-130 (IETD), while HsCasp6 contains two cleavage sites at CP-130 (DVVD) and at GP7-D17 (TEVD). Each procaspase substrate was incubated separately with an active coral caspase, and the reaction was monitored over eight hours. Aliquots were removed and analyzed by SDS-PAGE (Fig. 5A). The results show that procaspase-3 was cleaved by PaCasp3 and by OfCasp3b, with little to no cleavage by PaCasp7a or by OfCasp3a. In contrast, procaspase-6 was cleaved by PaCasp7a and by OfCasp3a, but there was little to no cleavage by PaCasp3 or by OfCasp3b. Together, the data corroborate our results from substrate-phage display (Fig. 4) that identify PaCasp3 and OfCasp3b as VxxDases and PaCasp7a and OfCasp3a as DxxDases, respectively.
As described previously (34), we quantified the rate of hydrolysis of the two procaspase substrates by assessing the disappearance of the full-length procaspases-3 and -6, both ~32 kDa in size, and the appearance of the large (~20 kDa) and small (~10 kDa) subunits over the time course of the assay (Fig. 5B and Fig. 5C). The data were fit to a single exponential decay to approximate kcat/KM. The results show that PaCasp3 and OfCasp3b cleave procaspase-3 with hydrolysis rates of 31 M -1 s -1 and 84 M -1 s -1 , respectively (Fig. 5B), while PaCasp7a and OfCasp3a cleaved procaspase-6 with hydrolysis rates of 159 M -1 s -1 and 231 M -1 s -1 , respectively (Fig. 5C).
We note that, although not quantified, both PaCasp3 and OfCasp3b cleave the procaspase-6 propeptide (TETD) at a much slower rate than that observed for cleavage of the intersubunit linker of procaspase-3 (IETD). Together, the biochemical data show that the coral caspases are weak enzymes, at least in the in vitro assays, with kcat/KM values ~10 2 M -1 s -1 .
Crystal structure of PaCasp7a
We attempted to crystallize all four of the coral caspases, and we were successful in obtaining diffraction-quality crystals of PaCasp7a with an inhibitor (DEVD-CHO) bound in the active site.
The crystals diffracted in the P2₁2₁2₁ space group, and we determined the structure to high resolution at 1.57 Å (Supplementary Table S4). The data show that PaCasp7a is very similar to human caspases, with an RMSD of <1 Å compared to HsCasp3 (Fig. 6A). In the active site, the carboxylate group of the P4 aspartate hydrogen bonds to Asn 315 (CP-162) on active site loop 3 (L3), the backbone amide of Arg 356 (GP9-02) on L4, and through-water hydrogen bonds to Trp 321 (CP-168) (on L3) as well as the backbone carbonyl of Arg 356 (GP9-02) (on L4) (Fig. 6B).
In general, the active site provides hydrophilic binding pockets for the P3 glutamate and P4 aspartate of the substrate, and a more hydrophobic binding pocket for the P2 valine side-chain ( Fig. 6C), similarly to that of HsCasp3.
Both PaCasp7a and OfCasp3a contain a two-residue insertion in loop 1 (L1) of the active site (Fig. 3). The structure of PaCasp7a with inhibitor bound shows that the insertion extends the loop compared to HsCasp3 and results in an "RYP" motif in L1 (Fig. 3) near the catalytic histidine (Fig. 6D) (GP2-05) in L1 (Fig. 6E). Altogether, the models suggest that in the absence of substrate, rotation in L1 may stabilize an inactive conformation of the enzyme. We note, however, that MD simulations (50 ns) of the structural models show that the region of L1 that contains the RYP motif is very mobile, so if the RYP motif is indeed autoinhibitory, then the "RYP-In" conformation appears to be transient ( Supplementary Fig. S3).
The structure of PaCasp7a also reveals a peptide bound on the protein surface near α-helices 1 and 4. The structure shows that amino acids in the N-terminus of PaCasp7a (N'-AKLFSFGG-C') (N'-PD-A025 to PD-G018-C' in the common position numbering) comprise the peptide, where the two phenylalanine side-chains bind in a hydrophobic pocket between helices 1 and 4 (Fig. 6F). The binding pocket on the protein is formed by five hydrophobic residues on the two helices (L 187 (CP-031), A 190 (CP-034), L 191 (CP-035), F 330 (CP-177), A 334 (CP-181)) as well as F 381 and F 382 (CP-217 and CP-218) at the C-terminus (Fig. 6F). The peptide also forms several hydrogen bonds with charged groups on the protein surface. We do not observe electron density for amino acids G 128 (PD-017) - N 141 (PD-004) (Fig. 3), but extensive interactions downstream of D 142 (PD-003) result in an ordered structure that moves into the core of the protein. The fourteen disordered residues would provide ample distance to connect the peptide with the protease domain, and the data suggest that the intervening amino acids may hinder dimerization since they would be anchored near the dimer interface when the peptide is bound on the protein surface. The N-terminal end of the peptide is immediately downstream of the DQAD cleavage site that removes the CARD motif (Fig. 3), suggesting that the binding pocket on the protein surface may be used to position the N-terminal linker (between the CARD and protease domains) in the active site. Interestingly, the region of the peptide that is disordered in PaCasp7a (G 128 PD-017 - N 141 PD-004) forms a short α-helix in caspases-1 and -2 and in CED-3 (Supplementary Figs. S4A-S4B and S4D). The short helix does not make contacts across the dimer interface but rather makes extensive intra-protomer contacts with the C-terminus of the protein. In the case of DRONC, the intervening peptide forms an extended structure that extends beyond the dimer interface and would clash with the second protomer of the dimer (Supplementary Fig. S4C). In all cases, the binding pocket between helices 1 and 4 is hydrophobic, and the peptide binds through insertion of one or more hydrophobic amino acids into the binding pocket as well as hydrogen bonds between side-chains on the protein surface and backbone atoms of the peptide. Therefore, the structures show a common theme in which the N-terminal peptide downstream of the prodomain cleavage site binds to a hydrophobic pocket on the protein surface. The interactions likely stabilize the peptide in the binding pocket for cleavage.
Finally, we observed a similar hydrophobic pocket in human effector caspases (HsCasp3, PDB: 2J30; HsCasp6, PDB: 3S70; HsCasp7, PDB: 1F1J) (50-52) (Supplementary Figs. S4E-S4G). There is no evidence, however, from biochemical or structural data, that the N-terminal peptide of the short pro-domain caspases binds in the hydrophobic pocket. A comparison of the N-terminal sequences (Fig. 3) shows significant divergence in the peptide of human effector caspases, so although the binding pocket is similar to that of PaCasp7a, the binding interactions with the peptide sequences are not similar. In HsCasp3 and HsCasp6, for example, the cleavage site is downstream of the putative binding sequence, so the entire peptide is removed from the N-terminus. Interestingly, in HsCasp7 the cleavage site is upstream of the binding region, but the sequence evolved into a tetra-lysine motif that has been shown to be an exosite for substrate selection in caspase-7 (53).
DISCUSSION
Coral reefs are facing a significant decline due to increasing local and global stressors from disease, climate change, and pollution (54). While coral possess a robust innate immune system, the cellular responses to disease have not been elucidated, beyond generalized major response categories. In addition, elevated ocean temperatures have emerged as key threats to the long-term survival of coral reefs and are leading to a collapse of the coral-algal symbiosis (21,54). The algal symbionts are responsible for about 90% of the coral metabolic needs, so the mortality rate of bleached coral is high (55,56). Despite the environmental consequences posed by coral disease and coral bleaching, and resultant changes to reef communities, the molecular physiology behind coral immune responses is not well understood (57)(58)(59)(60). A better understanding of coral stress and immune mechanisms, including cell signaling and biochemical response mechanisms, will improve our understanding of coral declines. The two coral species described here, Orbicella faveolata and Porites astreoides are both reef-building coral, but the two species lie on opposite ends of the disease response spectrum. Where O. faveolata is sensitive to disease and activates apoptotic responses to stress, P. astreoides is resistant to disease and activates autophagic responses to stress. Thus, the two stony coral represent intriguing systems to characterize the biochemical responses to stress (18).
We show here that the coral caspases have relatively low sequence identity to human caspases, so the designation of the caspase function is somewhat arbitrary without further biochemical characterization. The data suggest that PaCasp7a and OfCasp3a may function similarly to caspase-2 since they exhibit DxxDase activity and contain an N-terminal CARD motif. In contrast, PaCasp3 and OfCasp3b share characteristics with effector caspase-6, with a short prodomain and VxxDase activity. Moreover, a phylogenetic analysis showed that OfCasp3a and PaCasp7a are close to vertebrate initiator caspases, whereas PaCasp3 and OfCasp3b are closer to effector caspases. Although the caspases exhibited low activity against peptide substrates, we were able to confirm the selection through cleavage assays of protein substrates. The results showed that the DxxDases (PaCasp7a and OfCasp3a) processed procaspase-6, which has a DVVD cleavage sequence in the intersubunit linker, but not procaspase-3, which contains a more hydrophobic recognition sequence recognized by caspase-8 (IETD). The opposite was true for PaCasp3 and OfCasp3b. In those cases, the enzymes processed procaspase-3 but not procaspase-6.
Taken together, the biochemical data show that the two short prodomain caspases (OfCasp3b and PaCasp3) are not activated directly by the CARD-containing coral caspases. Alternatively, OfCasp3a and PaCasp7a may be activated on apoptosome complexes, and the DxxDase activities could be used to activate downstream caspases or to execute apoptosis. The latter suggestion is consistent with the presence of coral caspase-2-like proteins that also contain CARD motifs (Fig. 1A). Caspase-8-like proteins containing DED motifs would also activate the executioners (PaCasp3 or OfCasp3b) indirectly. The putative caspase-8 and caspase-3 executioner proteins have been identified, but not yet characterized, in coral (25,61-63). In addition, PIDDosome complex proteins are also present in coral (64). Together, the data suggest that coral are responsive to death ligands as well as metabolic changes in the cell. The peptide-binding pocket observed in the PaCasp7a structure further suggests selection of substrates with both a P1-P4 cleavage sequence and the downstream sequence that binds to the pocket. We also showed that the hydrophobic pocket is conserved in a wide range of species, with similar size and properties. The short pro-domain caspases appear to have retained the binding pocket on the protease domain, but the N-terminal peptide sequence diverged, suggesting that effector caspases may utilize the binding pocket as an exosite for substrate selection. In this case, for example, substrates with sequences that bind in the pocket, and are downstream of the cleavage site, may exhibit better binding compared to substrates that contain only the P1-P4 recognition sequences.
Conclusions
Coral have complex apoptotic signaling cascades, similar to those of vertebrates. We have identified OfCasp3a and PaCasp7a as initiator caspases that appear to function similarly to human caspase-2.
(Figure legend) All cleaved products are labeled along with the enzyme itself (SL, large subunit of substrate; SS, small subunit of substrate; EL, large subunit of enzyme; ES, small subunit of enzyme; SL-P, large subunit with prodomain cleaved; S, substrate; E+S, enzyme and substrate). Bands marked "*" indicate that only prodomains were removed from the full-length substrate. (B, C) Quantification of procaspase bands relative to the control (substrate without enzyme after 8 hours of incubation). Data were fit to a single exponential decay to calculate the CF50 used to calculate the hydrolysis rates of the coral caspases (solid line): procaspase-3 (B), procaspase-6 (C). Error bars represent the standard deviation from three different experiments.
Computational methods to simulate molten salt thermophysical properties
Molten salts are important thermal conductors used in molten salt reactors and solar applications. To use molten salts safely, accurate knowledge of their thermophysical properties is necessary. However, it is experimentally challenging to measure these properties and a comprehensive evaluation of the full chemical space is unfeasible. Computational methods provide an alternative route to access these properties. Here, we summarize the developments in methods over the last 70 years and cluster them into three relevant eras. We review the main advances and limitations of each era and conclude with an optimistic perspective for the next decade, which will likely be dominated by emerging machine learning techniques. This article is aimed to help researchers in peripheral scientific domains understand the current challenges of molten salt simulation and identify opportunities to contribute.
Introduction to thermophysical properties. The aim of computational simulations of molten salts is the accurate prediction of thermophysical properties. These properties are required to simulate complex fluid dynamics and chemical interactions within molten mixtures. Highly accurate and fast computational explorations of different molten salts could replace expensive and hazardous experiments. The properties of interest are categorized into two classes: (1) Properties that can be derived from short simulations of small systems with only a few explicitly modeled atoms, like heat capacity, density, etc.; (2) Properties that require long simulation times and large system sizes before converging to experimental accuracies, like diffusion, thermal conductivity, etc. Figure 2 summarizes reported properties from reviewed articles and the simulation sizes and durations used to calculate them. Modeling the second class of properties has driven the exploration of more efficient simulation techniques, albeit often at the cost of accuracy.
The derivation of thermodynamic properties for a molten salt requires the atomistic simulation of a system consisting of a specified number (N) of atoms from each chemical component (for example, Flibe has been simulated with 400 F, 200 Li, and 100 Be atoms). Various methods exist for simulating the movement of atoms in these systems. Common elements of most simulations are: (1) Each atom is modeled as a point in space with a given position, velocity, and charge attributes; (2) The interaction between atoms is calculated to yield a potential energy term; (3) Forces per atom are calculated as the negative gradient of the potential energy term at a given position; (4) An integrator is used to solve Newton's equations of motion for each atom and to update the positions. Executing such simulations under defined constraints-such as constant pressure (P), temperature (T), energy (E), or volume (V)-results in dynamic trajectories of the system. A trajectory is an ordered collection of frames, where each frame represents a snapshot in time of the atomic positions of the simulated system. Figure 3 shows a decision flow chart that can be used to select the right simulation for a desired thermophysical property.
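To make the four elements above concrete, the following is a minimal, illustrative Python sketch (not taken from any reviewed study): a toy Born-Mayer-like pair potential with point charges, no periodic boundaries or thermostat, and a velocity-Verlet integrator. All function names and parameter values are invented purely for illustration.

```python
import numpy as np

def pair_forces(pos, charges, A=1000.0, rho=0.3):
    """Forces and energy from a toy pair potential U = A*exp(-r/rho) + q_i*q_j/r (illustrative only)."""
    n = len(pos)
    forces = np.zeros_like(pos)
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r = np.linalg.norm(rij)
            energy += A * np.exp(-r / rho) + charges[i] * charges[j] / r
            # force on atom i is -dU/dr along the unit vector rij/r
            dudr = -A / rho * np.exp(-r / rho) - charges[i] * charges[j] / r**2
            f = -dudr * rij / r
            forces[i] += f
            forces[j] -= f
    return forces, energy

def velocity_verlet(pos, vel, charges, masses, dt=1e-3, steps=1000):
    """Integrate Newton's equations of motion and return the trajectory (list of position frames).
    pos, vel: (N, 3) numpy arrays; masses: (N,) numpy array."""
    traj = [pos.copy()]
    forces, _ = pair_forces(pos, charges)
    for _ in range(steps):
        pos = pos + vel * dt + 0.5 * forces / masses[:, None] * dt**2
        new_forces, _ = pair_forces(pos, charges)
        vel = vel + 0.5 * (forces + new_forces) / masses[:, None] * dt
        forces = new_forces
        traj.append(pos.copy())
    return traj
```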
The theories of statistical mechanics provide the recipes to derive thermophysical properties from such simulation trajectories. Structural properties are far easier to obtain accurately from experiments; they are derived experimentally through neutron scattering experiments and Extended X-ray Absorption Fine Structure. The most commonly derived structural properties from MD are the partial radial distribution function, coordination number, angular distribution function, bond angle, and the structure factor. A comprehensive summary of all thermophysical and structural properties is given below.
Overview of the mathematics behind thermophysical properties

Temperature.

T = Σ_i m_i |v_i|² / (3 N k_B)

where N is the number of atoms, m is the mass, and k_B is Boltzmann's constant. Velocities, v_i, can be calculated for each particle by taking the change in their position between successive trajectory frames.
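As a simple illustration of how such an estimate is obtained from a trajectory, a possible Python sketch is shown below (SI units assumed; function and variable names are ours, not from the literature).

```python
import numpy as np

def kinetic_temperature(velocities, masses, k_B=1.380649e-23):
    """Instantaneous temperature from the kinetic energy: T = sum_i m_i |v_i|^2 / (3 N k_B).
    velocities: (N, 3) array in m/s; masses: (N,) array in kg."""
    kinetic = np.sum(masses[:, None] * velocities**2)
    return kinetic / (3.0 * len(masses) * k_B)

# velocities can be estimated from two consecutive frames: v = (r(t + dt) - r(t)) / dt
```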
Density.

ρ = N M / (N_A V)

where N is the number of atoms, M is the molar mass, N_A is Avogadro's number, and V is the equilibrated volume of the simulation cell at the given temperature in the isothermal-isobaric (NPT) ensemble. An alternative to determining the density through the NPT ensemble is to use previously measured experimental values for the density. Using experimental densities is in some cases more accurate for the calculation of other properties such as diffusion, viscosity, thermal, or electrical conductivity. In a nuclear system, high-density molten salts increase neutron production, which causes the system to approach criticality; however, if the density is too low the salt will only sustain the system for a short period. An alternative method to find the equilibration volume is to run canonical-ensemble (NVT) simulations at different volumes and then fit the pressures and volumes with a Murnaghan equation of state to find the equilibration volume and bulk modulus 4 .
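The NVT/Murnaghan route mentioned above amounts to a simple curve fit. The following illustrative Python snippet (with made-up placeholder data and initial guesses) fits average pressures from several fixed-volume runs to the Murnaghan equation of state to recover the equilibrium volume and bulk modulus.

```python
import numpy as np
from scipy.optimize import curve_fit

def murnaghan_pressure(V, V0, B0, B0p):
    """Murnaghan equation of state: P(V) = (B0 / B0') * ((V0 / V)**B0' - 1)."""
    return (B0 / B0p) * ((V0 / V) ** B0p - 1.0)

# average volumes (A^3) and pressures (GPa) from a series of NVT runs -- placeholder data
volumes = np.array([5200.0, 5000.0, 4800.0, 4600.0, 4400.0])
pressures = np.array([-0.35, -0.1, 0.2, 0.6, 1.1])

# initial guesses: equilibrium volume near the zero-pressure point, bulk modulus of a few GPa
popt, _ = curve_fit(murnaghan_pressure, volumes, pressures, p0=[4900.0, 5.0, 4.0])
V0, B0, B0p = popt
print(f"equilibrium volume {V0:.1f} A^3, bulk modulus {B0:.2f} GPa")
```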
Equation (3) is more commonly used than (4). The results of the NPT ensemble can be used to fit the relation between density and temperature, which in turn can be used to find the thermal expansion at a specific density using Eq. (3). Equation (4) can be used by running many NVT ensembles and then using the Murnaghan equation of state to find the equilibration volume for different temperatures 5 . Fitting a line to the relation between the equilibration volumes and temperatures yields the thermal expansion at a specific volume.
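For example, the volumetric thermal expansion coefficient can be estimated from a linear fit of the equilibrated volumes (or, equivalently, densities) against temperature; the Python sketch below uses invented placeholder numbers purely to show the procedure.

```python
import numpy as np

# equilibrated NPT volumes (A^3) at several temperatures (K) -- placeholder values
temperatures = np.array([900.0, 1000.0, 1100.0, 1200.0])
volumes = np.array([4750.0, 4810.0, 4875.0, 4945.0])

# linear fit V(T) = a*T + b, then alpha = (1/V) * dV/dT evaluated at a reference temperature
a, b = np.polyfit(temperatures, volumes, 1)
T_ref = 1000.0
alpha = a / (a * T_ref + b)
print(f"volumetric thermal expansion at {T_ref} K: {alpha:.2e} 1/K")
```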
Heat capacity.
C_P = (∂H/∂T)_P

where C_P is the heat capacity at constant pressure, H = U + PV is the specific enthalpy, T is temperature, and U is the system's total internal energy, including both kinetic and potential energy. This can be simulated in the NPT or the NVT ensemble for a range of target temperatures.
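In practice this often amounts to a finite-difference or linear fit of the mean enthalpy against temperature over a set of runs; the following Python fragment illustrates the idea with placeholder values (not real data).

```python
import numpy as np

# mean enthalpy per mole (kJ/mol) from NPT runs at several temperatures (K) -- placeholder values
temperatures = np.array([900.0, 1000.0, 1100.0, 1200.0])
enthalpies = np.array([-560.0, -553.2, -546.3, -539.5])

# C_P = dH/dT, estimated here as the slope of a linear fit over the sampled range
slope, intercept = np.polyfit(temperatures, enthalpies, 1)
print(f"C_P ~ {slope * 1000:.1f} J/(mol K)")
```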
Equation (6) is the heat capacity at constant volume. For molten salts, C_V is usually close to C_P and has very little temperature dependence.
Equations (7) and (8) are other approximations commonly used, where δE² = ⟨E²⟩ − ⟨E⟩² and ⟨E⟩ denotes the average energy; T is the temperature and k_B is the Boltzmann constant.
Equation (9) is another approximation similar to Eq. (7), where ΔE is the fluctuation in energy 6 .
Diffusion.

D_α = ⟨|δr_i(t)|²⟩ / (6 t)   (evaluated in the long-time, linear regime of the MSD)

where ⟨|δr_i(t)|²⟩ is the mean square displacement of the element α, meaning: square the displacements δr of each particle i at various times t and then take the average. The six in the denominator results from the diffusion being three-dimensional. After calculating diffusion coefficients, it is common to fit an Arrhenius relationship between temperature and diffusion coefficient,

D_α = D_0 exp(−E_α / (R T))

where E_α is the activation energy of diffusion, R is the ideal gas constant, and T is the temperature. This relationship is used to find the diffusion activation energy. Alternatively, the solubility follows an analogous relation, where H_α is the enthalpy of mixing and S_α is the solubility as the temperature goes to infinity.
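A possible sketch of both steps (extracting D from the linear regime of the MSD, then fitting the Arrhenius relation) is given below in Python; the choice of the fitting window and the unit conventions are assumptions made for illustration.

```python
import numpy as np

def diffusion_coefficient(frames, dt):
    """Self-diffusion coefficient from the mean square displacement.
    frames: (n_frames, N, 3) array of unwrapped positions; dt: time between frames."""
    disp = frames - frames[0]                       # displacement of every atom relative to frame 0
    msd = np.mean(np.sum(disp**2, axis=2), axis=1)  # average over atoms, one value per frame
    times = np.arange(len(frames)) * dt
    # fit the (assumed) linear regime, here the second half of the trajectory; MSD = 6 D t
    half = len(times) // 2
    slope = np.polyfit(times[half:], msd[half:], 1)[0]
    return slope / 6.0

def arrhenius_activation_energy(temperatures, diffusivities, R=8.314):
    """Fit ln D = ln D0 - Ea/(R T) and return the activation energy Ea."""
    slope, _ = np.polyfit(1.0 / np.asarray(temperatures), np.log(diffusivities), 1)
    return -slope * R
```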
Viscosity.

Fig. 3: Simulation strategies for molten salts to obtain thermophysical properties. Each property is followed by the equation number listed in the text that can be used to derive the property from the simulation data. The arrows connecting the properties with the simulation ensembles (NPT, NVE, and NVT) are again marked with the equation numbers. For example, thermal expansion can be calculated from an NVE simulation with Eq. 3 or from an NPT simulation with Eq. 2; however, the NPT simulation is the preferred route. Reverse non-equilibrium molecular dynamics (RNEMD) and equilibrium molecular dynamics (EMD) are explained further in the text.
η = (V / (k_B T)) ∫_0^∞ ⟨σ_αβ(0) σ_αβ(t)⟩ dt

uses the Green-Kubo relation through the integration of the shear stress autocorrelation function under an NVT ensemble, where σ is the virial pressure tensor. It is averaged with respect to the off-diagonal components (i.e., α ≠ β).
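As an illustration of this Green-Kubo route, the sketch below integrates the autocorrelation of one off-diagonal stress component; in practice one would average over the xy, xz and yz components and check the convergence of the running integral. Units, array layout and function names are assumptions for this example.

```python
import numpy as np

def green_kubo_viscosity(stress_xy, dt, volume, temperature, k_B=1.380649e-23):
    """Shear viscosity from the Green-Kubo relation,
    eta = (V / (k_B T)) * integral of <sigma_xy(0) sigma_xy(t)> dt,
    using one off-diagonal component of the pressure tensor (SI units assumed)."""
    n = len(stress_xy)
    # stress autocorrelation function, evaluated up to half the trajectory length
    acf = np.array([np.mean(stress_xy[:n - lag] * stress_xy[lag:]) for lag in range(n // 2)])
    integral = np.trapz(acf, dx=dt)
    return volume / (k_B * temperature) * integral
```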
A second expression can be used to calculate viscosity with a reverse non-equilibrium molecular dynamics (RNEMD) method in the microcanonical (NVE) ensemble, where L is the size of the simulation box, T is the temperature, v is the velocity of the particles, m is their mass, and x, y, z are the coordinates. A third option, η = k_B T / (2 π D λ), is more of an approximation than the others, where λ is the step length of ion diffusion, usually assumed to be the diameter of the ion.
Viscosity also follows the Arrhenius relationship in Eq. (11), just like diffusion, and the activation energy of the viscosity can be found from it.

Electrical conductivity.
The most frequently used method uses the Green-Kubo relation and the autocorrelation function of the charge current J_z(t) = Σ_{i=1}^{N} z_i e v_i(t). Here z_i e is the charge of particle i, V is the volume, k_B is the Boltzmann constant, and T is temperature.
A second expression resembles the diffusion equation and employs the mean squared displacement (MSD) of the particle α; notice this is an EMD method using the NVT ensemble. Equation (16), using the autocorrelation function, is considered more accurate than using the MSD. A third, uncommon approximation is also available.

Thermal conductivity.
The most common approach uses the Green-Kubo relation through the integration of the autocorrelation of the energy current J_E, simulated in the NVT ensemble, where J_E is a summation of kinetic and potential energy contributions; r_ij and f_ij are the position and force on a particle, V is the volume, v is the velocity, m is the mass, k_B is Boltzmann's constant, and T is the temperature.
There is also a useful RNEMD method for calculating the thermal conductivity in the NVE ensemble, similar to Eq. (14), where L is the size of the simulation box, T is the temperature, v is the velocity of the particles, m is their mass, and x, y, z are the coordinates.
Partial radial distribution function (PRDF). PRDF, sometimes referred to as RDF, is defined 7 as

g_αβ(r) = (1 / (4 π r² ρ_β)) dN_αβ(r)/dr

or alternatively 8 in terms of the count N_αβ(r) within a spherical shell, where ρ_β is the number density of species β, N_α is the number of the respective species α, and N_αβ(r) is the mean number of β ions lying in a sphere of radius r centered on an α ion. This function can be thought of as measuring the correlation between two atoms.
Coordination number (CN). The CN is defined as

CN = 4 π ρ_β ∫_0^{r_min} g_αβ(r) r² dr

where r_min is the position of the first valley in the PRDF.
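The two quantities above are straightforward to compute from a single frame; the following Python sketch (cubic periodic box assumed, minimum-image convention, invented function names) histograms pair distances into g_αβ(r) and integrates it up to r_min for the coordination number.

```python
import numpy as np

def partial_rdf(pos_a, pos_b, box_length, r_max, n_bins=200):
    """Partial radial distribution function g_ab(r) for two species in a cubic periodic box."""
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts = np.zeros(n_bins)
    for ra in pos_a:
        d = pos_b - ra
        d -= box_length * np.round(d / box_length)     # minimum image convention
        r = np.linalg.norm(d, axis=1)
        r = r[(r > 1e-10) & (r < r_max)]               # drop the self-distance when the species overlap
        counts += np.histogram(r, bins=edges)[0]
    rho_b = len(pos_b) / box_length**3
    shell_volumes = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    g = counts / (len(pos_a) * rho_b * shell_volumes)
    r_centres = 0.5 * (edges[1:] + edges[:-1])
    return r_centres, g

def coordination_number(r, g, rho_b, r_min):
    """CN = 4 pi rho_b * integral_0^{r_min} of g(r) r^2 dr, with r_min the first valley of g(r)."""
    mask = r <= r_min
    return 4.0 * np.pi * rho_b * np.trapz(g[mask] * r[mask]**2, r[mask])
```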
Angular distribution function. The angular distribution function can be thought of as the correlation between three particle types and can be derived from the bond angle shown below.
Bond angle. The bond angle is defined as

θ_ijk = arccos( (r_ij · r_ik) / (|r_ij| |r_ik|) )

where i, j, k are the indices of the central atom and two arbitrary atoms, respectively. This function can be thought to encode the atomic configuration types and the correlation between three-particle types 7 .
Structure factor (S). The structure factor is defined as the Fourier transform of the PRDF, where ρ_α is the number density of the α ions 9 .
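One common way to evaluate this numerically is a direct radial Fourier transform of g(r) − 1. The prefactor and normalisation conventions differ between references, so the snippet below should be read as an illustrative sketch of one such convention rather than as the exact definition used in ref. 9.

```python
import numpy as np

def structure_factor(r, g, rho, q_values):
    """Partial structure factor from the PRDF (one common convention; prefactors vary by reference):
    S(q) = 1 + 4*pi*rho * integral of r^2 (g(r) - 1) sin(qr)/(qr) dr."""
    q_values = np.asarray(q_values, dtype=float)
    s = np.empty_like(q_values)
    for i, q in enumerate(q_values):
        integrand = r**2 * (g - 1.0) * np.sinc(q * r / np.pi)  # np.sinc(x) = sin(pi x)/(pi x)
        s[i] = 1.0 + 4.0 * np.pi * rho * np.trapz(integrand, r)
    return s
```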
Outline of this manuscript. This review presents the evolution of computational simulations in three eras, culminating in the combination of molecular dynamic simulations with machine learned potentials that are trained on ab-initio data. This new breed of simulation techniques enables the accurate simulation of large molten salt systems for extended periods of time, sufficient to predict thermophysical properties otherwise not accessible. Current techniques, however, rely on outdated neural network approaches that are insufficient to capture the interactions between complex molten salt mixtures. We present the strengths and weaknesses of the different methods and provide insights where new methods could overcome the existing challenges.
Method
We followed a systematic literature review process to gather the relevant publications for this review. The keywords ionic liquid, molten salt, and molten salt reactor were used in the advanced search of Google scholar to identify a list of 138 potentially relevant papers. After reading the abstracts, the papers were sorted into potentially relevant and not-relevant papers. Each potentially relevant paper was read in detail and summarized by the authors. This process resulted in 57 papers found to be of relevance. In a second step, the cited references of each relevant paper and the citing references were collected and checked for relevance. At the end of this process, 95 total papers were identified. An additional search for machine learning papers was conducted, which added 24 papers to the total relevant number of papers for this review. Figure 4 shows a breakdown of related publications per year and is indicative of the wave-like interest in molten salts throughout the different eras.
Discussion
Early simulations 1933-1990. The development of quantum mechanics and the Born-Oppenheimer approximation led to the first potentials for alkali halides that were fitted semi-empirically in the condensed phase. Starting in 1933, computationally derived properties were being reported alongside the development of the Born-Huggins-Mayer (BHM) model 10 . However, limited by the potential form and the lack of computational power, the first set of estimated parameters was only a rough approximation to experiment 11 . The development of the Monte Carlo method in 1953 opened new simulation possibilities, and Tessman et al. found polarizabilities of the alkali halides, which were essential for more accurate simulations 12 . New ionic salt models were developed to improve the accuracy of derived behavior. In 1958, Dick and Overhauser developed the shell model as the first attempt to capture induced polarization effects in ions 13 . By 1959, the Molecular Dynamics method was introduced, which enabled the simulation of trajectories for molten salts, and consequentially the development of interatomic potentials continued to grow 14 . Tosi and Fumi modified the Born-Mayer-Huggins potential (BHMTF) and some parameters in 1964 using semi-empirical methods and improved predicted properties, such as densities, diffusion, and heat capacity 11,15 . Besides the still limited computational resources, many of the developed potentials suffered from inaccuracies in fitted parameters (such as van der Waals coefficients), lack of treatment of many-body interactions, and, especially for the shell model, arbitrary assumptions made about potential forms, all of which left much room for improvement 5 .
A breakthrough for atomistic simulation was the development of density functional theory (DFT) by Hohenberg and Kohn in 1964 16 . With the ability to calculate accurate potentials, ab-initio calculations became possible and opened the door to computational (albeit not very feasible) first-principles simulations.
During the same time, molten salt experiments climaxed with the execution of the Oak Ridge Molten Salt Reactor Project between 1965 and 1969. Consequent publications in 1970 described the chemical considerations that motivated the choice of specific salts in Oak Ridge; such salts being of the alkali halide family as well as others 17,18 .
The first simulation of molten salts used KCl and was reported in 1971 by Wood, Cock, and Singer using Monte Carlo with the BHMTF potential 19 . Others continued through 1976 developing various rigid and polarizable models to investigate alkali halides 11 . From these investigations, it became apparent that many-body interactions would need to be included in some explicit manner. In 1983, Tang and Toennies introduced a universal dispersion damping function that could be applied to Born-Mayer type potentials, which was one of the first attempts towards the inclusion of many-body effects 20 . In 1985, Madden and Fowler expanded this idea using their asymptotic model of polarization, which extended terms beyond the dipole-induced-dipole model and was comparable to LiF simulations obtained from Hartree-Fock calculations 21 . It was also during 1985 that ab-initio MD (CPMD), by Car and Parrinello, was introduced; it coupled molecular dynamics with a DFT potential 22 . The tools of this era laid the framework upon which the next would be built.
Progression of DFT-based methods 1990-2021. Increased computational power and theoretical insights advanced interatomic potentials and density functional theory to the level of usefulness and reliability. Both fields were developed by different research groups whose insights influenced each other. Here, we will first review the progression of DFT-based methods, resulting in dispersion corrected ab-initio molecular dynamics (AIMD) methods. Afterward, we will discuss the advancements of interatomic potentials, as they were strongly influenced by the increase in accuracy and feasibility of DFT models.
In 1993, Barnett and Landman introduced Born-Oppenheimer molecular dynamics with DFT, which enabled larger simulation timesteps resulting in better temporal sampling of still very restricted simulations 23 . Later that year, Kresse and Hafner introduced the method of initializing Car-Parrinello Molecular Dynamics with energy minimization schemes for metals, resulting in additional speed-up 24 . In 1998, Alfe extracted the diffusion coefficients using CPMD for liquid aluminum and thereby demonstrated the feasibility of DFT methods to extract transport properties in condensed materials 25 . In 2003, Aguado et al. developed an ab-initio process using many condensed phases of MgO with CASTEP DFT to parametrize coefficients for the aspherical ion model (AIM), representing a breakthrough for polarizable models 26 . In 2005, Hazebroucq used a tight-binding density functional to calculate and investigate diffusion in NaCl and KCl for a specific experimental volume, a step towards full AIMD 27 .
In 2006 Madden et al. highlighted the need to control the dispersion interaction as it-despite being only a tiny fraction of the interaction energies of an ion pair-strongly impacted phase transition behavior, such as transition pressures 28 . It had been a well-known problem up to this time that dispersion interactions in DFT calculations were not accurate. Grimme tackled the problem of dispersion and published a first empirical correction in 2006 for DFT followed by a second in 2010, practically resolving the issue 29 . In 2006, Klix investigated the diffusion of tritium in Flibe, using CPMD, and named the method ab-initio molecular dynamics 30 . This simulation was the first time that a molten salt had dynamical behavior derived using ab-initio methods.
In 2014, Corradini investigated dispersion in LiF with both DFT and molecular dynamics and found that dispersion is significant in NPT simulations: when omitted, it strongly affected melting point calculations and resulted in equilibrium densities underestimated by 15%, thereby verifying the importance of dispersion in molten salt calculations 31 . In 2015, Anderson used AIMD to model FLiNaK and Flibe and extracted thermodynamic properties such as density, diffusion, and thermal expansion, which were validated by experimental measurements 32 . From 2016-2021 multiple papers investigated the thermophysical behaviors of various salts using AIMD 3,33-43 . Few of the papers investigated thermodynamical quantities and even fewer investigated kinetic properties (diffusion and viscosity) 34,37,42 . Short timescales and a small number of ions render such calculated properties unreliable, a limitation which most, if not all, DFT-based calculations had in common.
Optical properties such as vibrational spectra are more accessible experimentally than computationally and only a small number of DFT studies investigated them 33,44 . In 2021, Khagendra et al. reported mechanical properties of Flibe using DFT suggesting that these properties could validate models 44 . However, as these properties are not consistently presented throughout the literature there is currently limited development in this domain.
The development of interatomic potentials for molten salts between 1990 and 2008 converged in the dipole-induced polarizable ion model (PIM). The basic idea of PIM is that a sufficiently complex additive forcefield can be fitted to DFT data to result in accurate energy estimates. These energies can in turn be used to propagate the atoms in the model forward in time. Accurate DFT models allowed for parametrization of complex forcefields; for example, in 2003 the first fitted dipole model was produced 26 . In 2008, Salanne developed PIM and defined the most frequently used interatomic potential for the next decade 45 . Throughout 2008-2020, PIM was used to derive various properties for many salts, which were often validated by experiments 1-3,6,7,9,31,32 . From 2018 until today, a revival of sorts has led to the development of alternative potentials, such as the Sharma-Emerson-Margulis (SEM) Drude oscillator model 81 .
The quest for the PIM began in 1993 when Madden and Wilson introduced the method of parametrization via CPMD simulation for halides on a rigid ion model (RIM) potential plus a dipole term (see Section "Summary of Potentials" for an overview of interatomic potentials). This initial model was a response to the earlier shell model, intended to correct the apparent lack of justification for the potential form 82 . Since oxides could not be accurately described by polarization effects alone, the Compressible Ion Model was developed in 1996 83 . In the same year, Madden and Wilson enhanced the 1993 model with the addition of a quadrupole using the asymptotic model of polarization and applied it to AgCl systems 84 . Wilson and Madden suggested that ionic salt interactions can be described by four terms: induction/polarization, dispersion, compression, and shape effects 85 .
Another stride was taken in 1998 when Rowley and Jemmer published the Aspherical Ion model which combined the Compressible Ion Model with polarization effects from the asymptotic model of induced polarization 25 . This model was used to investigate the polarizabilities and hyperpolarizabilities of LiF, NaF, KF, LiCl, NaCl, KCl, LiBr, MgO, CaO by Jemmer, Madden, Wilson, and Fowler 86 .
The importance of polarization was further evidenced in a 2001 study by Hutchinsons, describing trichloride system phase transitions 87 . In 2002, Domene used AIM as a starting point to derive an ion model including dipole and quadrupole moments resulting in another polarizable ion model which they used to investigate properties of MgF2, CaF2, LiF, and NaF 88 .
Computationally less demanding interatomic potentials based on the Born-Huggins-Mayer-Tosi-Fumi (BHMTF) model, or rigid ion model (RIM), also found application in this century. In 2004, Galamba used BHMTF and Born dispersion parameters to calculate the thermal conductivity of NaCl and KCl, although overpredicting it by 10-20% 89,91 .
By 2007, Galamba had investigated the theory of thermal conductivity for alkali halides and derived the thermal conductivity of NaCl based on BHMTF including non-equilibrium molecular dynamics, but still overestimated measured properties 92 . In a series of papers from 2008 to 2009, Salanne reported polarizabilities of LiF, NaF, KF, and CsF using the DFT condensed-phases method, produced the modern PIM, and applied it to LiF-NaF-KF, NaF-ZrF4, and LiF-NaF-ZrF4 to derive various thermodynamic properties, including experimentally validated electrical conductivities and diffusion coefficients 45-47 .
Others built on the works by Salanne and Madden, such as Merlet et al. who reported in 2010 multiple diffusion and transport coefficients for various LiF-KF compositions 49 . Also in 2010, Olivert reported the structural factors of LiF-ZrF4 from a combination of experiments and PIM-based simulations 50 . In 2011, Salanne and Madden reviewed the importance of polarization effects and promoted the use of PIM 52 .
In 2012, Salanne described a method for calculating thermal conductivity using the Green-Kubo formalism and applied it to NaCl, MgO, and Mg2SiO4 53 . Another review by Salanne in 2012 compared AIM with PIM and published reusable parameters for alkali halides 54 . In an interesting 2013 study by Benes, heat capacity was determined experimentally for CsF and used to validate DFT and PIM models. While the heat capacity was in good agreement with the DFT results, MD simulations did not agree with experimentally determined values for the enthalpy of formation 55 . In 2014, Liu investigated LiF-ThF4, an important MSR salt, using PIM 60 .
In 2018, Abramo deployed RIM to model the NaCl-KCl liquid-vapor phase, concluding that RIM works well for alkali halides 69 .
In 2019, Guo used PIM to derive thermodynamic properties for Li, Na, K-ThF4 74 . In 2020 Sharma introduced the SEM-drude model which offered an alternative to PIM with 30 times faster execution times 81 .
As computational power grew, more accurate DFT-based AIMD simulations became available and spawned sustained interest in the fitting of interatomic potentials with functional forms that could allow rapid evaluation for longer simulation times. The literature up to this point suggests many successful proofs of principle, but a limited number of computational assays that could replace experiments. With Moore's law working in its favor, AIMD might one day become feasible for relevant simulation times and system sizes to derive all thermodynamic potentials, but until then, alternative approaches with higher computational turnaround will remain of high interest.
Machine Learning Era 2020-ongoing

Setting the stage-the promises of machine learning potentials. Machine learning originated from the work by McCulloch and Pitts, who in 1943 proposed artificial neural networks (ANNs) as biologically inspired computation architectures 93 . Many decades passed before useful ANNs could be constructed, mainly because sufficient computational power and datasets only became available much later. Today, old theories are frequently rediscovered and used to design the next breakthrough algorithm, such as ANNs, convolutional neural networks (CNNs), or graph neural networks (GNNs). Deep learning, a subset of machine learning, serves as a general function approximator that can be trained to mimic any expensive analytical or empirical function, often with substantially reduced execution time. For molten salt simulations, a deep learning method can potentially address the two main problems of existing methods: (1) The limited scalability of ab-initio methods due to their complexity and computational cost for large systems and long timescales; (2) The poor accuracy of efficiently parametrized forcefields due to their limited expressibility.
Novel machine learning tools are frequently evaluated on wellestablished benchmarks before they find broad adoption. Traditional computer science tasks with well-respected benchmarks, such as data clustering, image annotations, or natural language processing tend to serve as the battleground where new machine learning algorithms prove themselves. Fields like chemical simulations, typically only deploy these models after a knowledge transfer phase. A recent example for this pattern is the attention mechanism 94 that substantially improved natural language processing models in a transformer architecture 95 and has since influenced models like AlphaFold 96 .
Managing expectations-the first applications of neural networks to molten salts. A review of machine learning models in the field of molten salt simulations is not expected to identify novel machine learning architectures, but rather to show the application of well-established methods in a new context. Indeed, before 2020, only one machine-learning-inspired method was used to predict the saturation pressure of pure ionic liquids: Hekayati et al. trained a simple ANN on 325 experimental vapor pressure points and showed that the resulting model could reproduce the experimental values 97 . However, this model was not generalizable and serves only as a historical footnote. The successful applications of machine learning techniques to molten salt simulations began in 2020, using ideas from Behler-Parrinello 98 and Bartok 99 . Another related discipline that recently started using ML models is that of nanofluid simulations; however, these typically deal with bulk solvent instead of atomistic simulations and are out of scope of this review 100-107 .
The methods based on Behler-Parrinello 98 and Bartok 99 have common elements, depicted in Fig. 5. Both methods require a set of atomistic configurations and associated energies, typically obtained from ab-initio molecular dynamics of small systems (~100 atoms) and short simulation times (<100 ps). Both are trained to predict the energy of each atom in a configuration so that their sum equals the total energy of the system. The differences and current adoptions follow.
Behler and Parrinello proposed a simple feed-forward neural network, or multilayer perceptron, that predicts the total energy of a molten salt system as the sum of individual energy contributions of each atom 98 . This idea is like that of the empirical potentials discussed in section "Progression of DFT-based methods 1990-2021". However, instead of pre-defining a functional form, this approach allows the network to be dynamically optimized during training, potentially considering many-body effects and polarization without explicitly defining them. The main contribution of this work was the definition of a symmetry function that translated the neighborhood of each atom into an input vector for the neural network. For atomistic problems, one of the biggest challenges is to find an appropriate way to represent a collection of atoms so that a trained neural network will obey the expected invariances (translation, rotation, atom replacement) of the real world. The original work was only applied to pure silicon systems, so it did not need to deal with different atomic species.
Going deeper-extension of basic neural networks for molten salt simulations. The Behler-Parrinello 98 concept was further extended by Han et al. 108 and Zhang et al. 109 in 2018 to yield what they coin a deep potential. In mainstream machine learning literature, the word deep corresponds to a large number of fitting parameters in a network, frequently inside the hidden layers of a network, as opposed to shallow networks with few fitting parameters. The default architecture of deep potential uses five hidden layers with a decreasing number of nodes (240, 120, 60, 30, and 10). Instead of using a symmetry function like Behler-Parrinello, deep potential expresses cartesian coordinates of a configuration in local reference frames of each atom.

Fig. 5: Overview of typical methods used in recent machine learning based studies of molten salts. A short ab-initio molecular dynamics simulation of a small system generates a training set. One of four commonly used models is fitted or trained. A larger system is then simulated using the custom trained potentials for extended periods. The resulting trajectories are analyzed and thermophysical properties are extracted.
The main concern with the deep potential method is that a single neural network must be trained for each new configuration and that this network does not explicitly support different atom types. This means two things: (1) It is not possible to train a deep potential for one molten salt mixture and then to transfer it to a slightly different one (LiF to Flibe, for example); (2) The neural network is trained on the average distribution of atomic species within a cutoff distance. If on average a NaCl system has 10 Na and 10 Cl atoms as closest neighbors to each atom, then the deep potential will have optimized the weights for the first 10 input nodes for Cl and the last 10 for Na, as it simply sorts the neighbors by atom type. If an atom has by chance 8 Na and 12 Cl closest neighbors, the weights for input nodes 9-10 would treat Cl atoms like Na atoms.
Surprisingly, this systematic error has not been more thoroughly discussed by the authors in this or their two influential follow-up papers DeepMD-Kit 110 and DP-Gen 111 , which both use this same approach. For users of deep potential, it is especially important to highlight the compounding effect of this setup for complicated molten salt mixtures. With an increasing number of atomic species in the system, the fluctuations in neighborhoods will prohibit accurate energy assessment for individual atoms, while providing decent predictions on average.
The problem of generalizability of the original Behler-Parrinello 98 approach was already addressed in 2017 by Smith et al. in their highly influential paper that introduced the ANI-1 network for biochemical simulation 112 . Here, the authors used heavily modified symmetry functions that support encoding of recognizable features in the molecular representations and differentiation between atomic species. Most notably, they train separate neural networks for each atomic species, instead of using a single network for all atoms. While this increases the generalizability of the models, which were trained on small molecules but tested with good accuracy on a larger test set, these networks need to be retrained simultaneously whenever a new atomic species is added to the data set. The first ANI-1 model supported only 4 atom types (H, O, C, N); the ANI-2 model 113 expanded this to 7 atom types (adding S, F, and Cl). While the ANI-1x dataset 114 is available, the ANI-2 dataset has not been made publicly available. The authors report empirical evidence "that discriminating between atomic numbers allows for training to much lower error on diverse multi-molecule training sets and permits better transferability" 114 .
The molten salt community has access to the optimized symmetry functions of Smith et al. through the Properties from Artificial Neural Network Architectures (PANNA) 115 software suite that was developed to support the training of Behler-Parrinello models. Similar to DeepMD-kit, PANNA integrates with the popular Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) 116
MD package.
A popular alternative to training custom neural networks for molten salt simulations is the fitting of a Gaussian Approximation Potential (GAP), as introduced by Bartok et al. 99 This method uses ab initio configurations to fit a Gaussian potential for each atom in the configuration. While these potentials have hyperparameters that can be optimized, the actual fitting of the potentials is fully deterministic, as opposed to the stochastic processes used in the training of Behler-Parrinello ANNs. While GAP therefore does not fully qualify as a machine learning model, the current literature often describes the fitting of the potentials with the same vocabulary (hyperparameter tuning, test/validation/training datasets, etc.). Contrary to the empirical potentials discussed earlier, GAP is theoretically able to model any complex potential energy landscape, given a sufficiently large dataset of ab initio configurations and energies for fitting.
First proofs of principle-review of applications of neural networks to molten salts. The following overview covers recent publications that use either GAP- or ANN-based methods. All of them follow the same strategy of first running short and small AIMD simulations, fitting a potential specific to their current salt mixture, and then running MD with that potential. Due to limited integrations, all simulations listed here use the LAMMPS package. This strategy overcomes the prohibitive cost of running long AIMD for large systems and holds the promise of higher accuracy than empirical potentials with their limited expressibility.
The Y. Lu lab used the DeepMD-kit to train a deep potential for ZnCl2 on AIMD data with 108 atoms simulated for 30 ps 122 . A larger MD run with 1980 atoms and 100 ps duration was compared with a PIM model that simulated 768 atoms for 500 ps. Extracted properties aligned very well between all models, and the authors suggested a deeper comparison between ML and PIM accuracies. In a follow-up work by the Lu lab, the potential was extended to ZnCl2 mixtures. The resulting ML trajectories showed reasonable agreement with experimental values: within 26% for thermal conductivity using the RNEMD method, within 6.6% for specific heat capacity, and within 4.2% for density. These larger uncertainties might well be related to the previously discussed problem of deep potential only training a single ANN. For increasingly complex mixtures, the assignment of atom types to correct input nodes becomes error-prone and should be carefully evaluated in such studies.
The G. Lu lab has embraced the deep potential method and published a series of experiments between 2020 and 2022, first to simulate MgCl 2 123 and then MgCl 2 -KCl 124 . Their strategy is always to run a short AIMD simulation of <100 atoms, use DeepMD-Kit to fit a deep potential, and then follow up with a longer MD simulation using the trained potential. They report consistently good agreement with experimental values for similar studies investigating LiCl-KCl mixture 125 , alkali chlorides 126 , lanthanum chlorides 127 , KCl-CaCl 2 128 , Li 2 CO 3 -Na 2 CO 3 129 , and SrCl2 130 .
Rodrigues et al. used DeepMD-Kit to fit existing AIMD data for LiF and Flibe to a deep potential before conducting MD that resulted in good agreement with the AIMD data 131 . Li et al. investigated the interactions of uranium in NaCl 132 . They found that a deep potential model trained on AIMD data outperformed classical PIM models. Lee et al. also showed that FLiNaK can be simulated with higher accuracy using a deep potential model than with a RIM model trained on AIMD data 133 . It will be interesting to see if additional reports will substantiate the evidence for deep potentials being superior to the current state-of-the-art empirical potentials and supersede them in the near future.
Evaluating the status quo-how far can current neural networks go? This comprehensive review of machine learning methods for molten salt simulations clearly shows one shortcoming of the applied methodology: none of these papers has reused a model fitted by another group. Compared with PIM parameters that allow potential reuse by other groups, this breed of ANNs or GAPs always requires new potential fitting before investigating a novel salt mixture. The core problem is not the available algorithms, as demonstrated by the extensible ANI/ANI-2 networks that build off a modified Behler-Parrinello approach. It is rather that the recent studies are conducted by users and not developers of ML software. Promising new ML models, like graph neural networks such as the SE(3)-transformer 134 , are likely going to outperform the current deep potential and Behler-Parrinello methods as soon as they become easily trainable and integrated with favorite simulation tools. Especially for more complex mixtures, it will be necessary to make the shift to more extensible architectures, as current approaches suffer too much from the limited expressibility of atom or isotope types. In general, it is worthwhile to remember that machine learning can be optimized through three different approaches: (1) increasing datasets, (2) improved training strategies, (3) better network architectures. While some recent papers try to improve accuracies through increased datasets 120 , others apply strategies such as active learning 135 to reduce the necessary number of training data.
It would be beneficial if a shared database of AIMD configurations of multiple salts were made available (similar to the ANI dataset), ideally in conjunction with experimentally collected thermophysical properties, to create community benchmarks.
As these first studies prove the value of machine learning for replacing ab-initio MD methods and empirical force fields, the current breed of applications only scratches the surface of what is possible. Deep learning does not need to be limited to a specific set of atom species, but novel network architectures, such as graph-based neural networks like the SE(3)-transformer 134 , could potentially generalize over the entire periodic table of elements and the various isotopes encountered in molten salt reactor simulations. Training of such a general-purpose machine learning model would however require a more community-oriented sharing of training data through open databases and reproducible benchmarks.
Once a more general neural network is available, integration with high-performance MD codes, such as OpenMM 136 or Gromacs 137 , could increase usage beyond PANNA's or DeepMD-kit's current scopes. The support for machine learning potentials in LAMMPS is currently restricted to TensorFlow implementations; the popular PyTorch library, however, can more easily be integrated with OpenMM. Coupled with the right simulation tools, accurate potentials could be used not only to investigate defined salt mixtures but also to propose ideal mixtures for desired properties. In an unsupervised optimization process, molten salt constituents and ratios could be developed that outperform known mixtures and result in safer and more efficient reactors and solar power systems.
Summary of potentials. This section shows the functional forms of all commonly used potentials in molten salt simulations in order of first publication.
The Born-Huggins-Mayer (BHM) 10 model, which was developed in 1933, is a pair potential for alkali halides. α is the Madelung constant for the associated ionic crystal. C, D are van der Waals constants calculated by Mayer 138 . β_ij is the Pauling factor 139 . b is an arbitrarily chosen factor. σ_i, σ_j are the radii of the ions. r_ij is the separation distance of the ions. ρ is empirically determined.
Shell Model 13 . The Shell Model, which was developed in 1958, represents an ion as composed of a core and a shell of opposing charge. The pair potential consists of three terms: electrostatic, short-range, and self-energy interactions, which depend on four generalized coordinates. p is the polarization, x+, x− is the distance of the core center to the respective lattice site, d+, d− is the distance of the core site to the shell center, e is the elementary charge, E is the macroscopic electric field, P = N p is the definition of polarization where n+, n− is the number of electrons, N is the number of ions per unit volume, R_0 is the lattice separation, and ρ is the same as in the BHM model. D is the exchange charge polarization coefficient.
BHM-Tosi-Fumi (BHMTF) 16 . This model, developed in 1964, is an effective pair potential that differs from the original BHM model by allowing ρ to vary from salt to salt (determined semi-empirically) and by using the effective charges Z_i Z_j of the ion pairs. n_i, n_j are the numbers of electrons in the outer shell. All other parameters are the same as in the BHM model 11 .
Tang-Toennies damping function for dispersion 20 . The damping function, developed in 1984, represents a generalized way to damp the polarization and dispersion energies. Its accuracy was verified against several ab-initio calculations.
The Madden-Wilson model developed in 1993 includes the original BHM terms with a change of the first coulombic charge to include the variable charge, ν, of the ion and a dipole potential. The dipoles are approximated with the form given, where d i is the rod length of the dipoles.
This pair potential, developed in 1996, is for oxide-type salts. It is the BHM potential with Tang-Toennies dispersion damping functions and a modification of the repulsion term. The ionic radii are allowed to vary and are described through the relation σ_i = σ̄_i + δ_i, where σ̄ is the average ionic radius and δ_i describes instantaneous changes based on the environment. The first two terms account for the repulsion term in BHM while the third term, u--, accounts for a frozen oxide-oxide interaction.
This potential, developed in 2002, depends on the dipole moment μ, the quadrupole moment θ, and Q, the formal charge of the ion. The T^i_αβγ are the interaction tensors, while the k_i are proportional to polarizabilities.
This model, developed partially in 1998 and finished in 2006, combines the Compressible Ion Model and the Polarizable model.
The model, developed in 2008, reduces to a dipole contribution plus the BHMTF model. The dipole moments are calculated self-consistently.
where w^k_ij is the weight parameter connecting node j in layer k with node i in layer k−1, and w^k_0j is a bias weight that is used as an adjustable offset for the activation functions f^k_a: hyperbolic tangent for hidden layers and a linear function for the output. Weights are initially chosen randomly, but error functions can be minimized to obtain correct weights. G^μ_i is a symmetry function of the cartesian coordinates R^α_i of the local environment of atom i. The locality is determined by the cutoff function

f_c(R_ij) = 0.5 [cos(π R_ij / R_c) + 1] for R_ij ≤ R_c, and 0 otherwise,

where R_c is some predetermined cutoff radius. The radial symmetry function is

G^1_i = Σ_j exp(−η (R_ij − R_s)²) f_c(R_ij),

where η, R_s are parameters. The angular symmetry function involves triplets of atoms, where ξ, λ ∈ {−1, 1} are new parameters and cos θ_ijk = (R_ij · R_ik) / (|R_ij| |R_ik|), with i being the central atom and R_αβ = R_α − R_β.

Deep potential 108 .
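As a small illustration of how these descriptors are evaluated, the Python sketch below implements the cutoff function and a radial symmetry function in the standard Behler-Parrinello form; the parameter values and function names are placeholders, not taken from any specific implementation.

```python
import numpy as np

def cutoff(r, r_c):
    """Behler-style cutoff function: 0.5*(cos(pi*r/r_c) + 1) for r <= r_c, else 0."""
    return np.where(r <= r_c, 0.5 * (np.cos(np.pi * r / r_c) + 1.0), 0.0)

def radial_symmetry_function(positions, i, eta, r_s, r_c):
    """Radial symmetry function G_i = sum_j exp(-eta*(R_ij - R_s)^2) * f_c(R_ij).
    positions: (N, 3) array; i: index of the central atom."""
    d = np.linalg.norm(positions - positions[i], axis=1)
    d = d[d > 1e-10]                      # exclude the central atom itself
    return np.sum(np.exp(-eta * (d - r_s) ** 2) * cutoff(d, r_c))
```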
where f and its weights w come from a fully-connected feedforward neural network. I_i is the input vector of the function: it takes in the positions of all the atoms (in cartesian or some polar-like coordinate system {1/r, cos θ, cos φ, sin φ}) centered on atom i, with a cutoff to the nearest N_c neighbor atoms determined from the maximum number of atoms within the cutoff radius R_c. The weights are constrained to be the same for atoms of the same type α. The ordering of the inputs into I_i is, at the first level, by atom type α and, at the second level, by ascending distance from atom i.
where E_i comes from Deep Potential, except that I_i is replaced by D_ij, where R_ij is the separation between atoms i and j and x^α_ij are the cartesian components of the separation vector R_ij; it is ambiguous which case of D should be used in which situation. The parameters are determined by the Adam method; see the associated paper for more details. The parameters come from fitting functions of a similar form to those described for BPNN.
The total energy of the system is represented as a sum of atomic energy functions ε({r_ij}), where r_ij is the separation vector. GAP uses a localized version of this expression, replacing the separation vectors with a truncated spectrum space: ε({r_ij}) → ε(b). The spectrum space is constructed from the local atomic density of each atom i, with a cutoff radius determining the locality, and projected into 4D spherical harmonics, which encapsulate the 3D spherical coordinates. The projections form a basis set, with the Clebsch-Gordan coefficients determining the intensity of each projection.
where the index i is dropped in the equations above, and with the following definitions: U^j_{m'm} are the Wigner matrices (4D spherical harmonics), ρ is the local atomic density of atom i, C^{jm}_{j1m1,j2m2} are the Clebsch-Gordan coefficients, and the total angular momentum is truncated, j, j_1, j_2 ≤ J_max, to the associated atomic neighborhood.
The atomic energy function is approximated as a series of Gaussians, where n, l run over the reference configurations and spectrum components, α_n is the fitting parameter, and θ_l is a hyperparameter. The function is fit by least-squares fitting using the covariance matrix, where δ, σ are hyperparameters and y is the set of reference energies.
Comparing accuracy of models based on derived properties. The sections "Early simulations 1933-1990", "Progression of DFT-based methods 1990-2021", and "Machine Learning Era 2020-ongoing" have summarized the various computational methods deployed to simulate molten salts: ab-initio molecular dynamics (AIMD), the rigid ion model (RIM), the polarizable ion model (PIM), the Gaussian Approximation Potential (GAP), and deep potential learning methods (DeePMD). A comprehensive review of all reported computational potentials has been provided in the section "Summary of potentials". Here, we use meta-analysis to provide a comparison between the different methods.
Comparing the accuracy of models quantitatively is impractical due to the lack of consistency in reporting properties across the literature and the nature of the derived properties. For thermophysical properties, the data are primarily in graphical form, capturing the relationship between temperature and the property; for structural properties, some numerical values, such as the coordination number and the positions of local minima and maxima, are reported in the literature.
We compare the methods using the first peak position of the PRDF of Li-Cl ion pairs, as shown in Table 1. This value represents the average bond length of the Li-Cl pair and presents a reasonable way to compare the accuracy of models. AIMD and experiment agree well, with AIMD capturing the descending behavior with respect to increasing temperature. PIM, GAP, and DeePMD agree reasonably well with AIMD (the value for GAP was not directly reported but extracted from a graph by visual approximation). RIM does not capture the temperature-dependent behavior, which underlines the importance of polarization effects, as widely described in the literature.
Another frequently simulated salt is Flibe; for the first peak position of the partial radial distribution function we find good agreement across PIM 80 , RIM 140 , AIMD 35,44 , X-ray diffraction experiment 141 , and ML methods 121,131 . Only the modified RIM shows a disagreement of 0.1 Å for the fluoride-fluoride pair (see Supplementary Data 1, Table S1).
Multiple thermophysical properties of Flibe have been reported across different methods. For density: RIM agrees to 0.5% with experiment 142 ; PIM did not report density but demonstrated good agreement with experimental molar volumes 80 ; similarly, the DFT methods reported agreement with experimental equilibrium volumes 143,144 to within 9-18%, with better agreement (2% inaccuracy) when using PBE with dispersion corrections (vdW-DF 145 ) 146 ; ANI-1 underpredicted density by 6%, potentially because this study did not use a training dataset that corrected for dispersion 121 ; DeePMD underpredicted density by 14% compared to experiment 131 . For diffusion, it is difficult to define an accurate benchmark as there have not been many experimental measurements and the accuracy of AIMD is limited due to short time scales. Compared to AIMD, ANI-1 and DeePMD show agreement to within 10-15%, while PIM shows agreement to within 30%. There is no report for RIM on diffusion coefficients, but we do not expect it to be better, based on the viscosity and electrical conductivity data.
For electrical conductivity: RIM is close to experimental values for higher temperatures but deviates much further for lower temperatures, PIM shows better agreement with experimental data, and DeePMD appears to do worse than PIM, but not by much. Thermal conductivity was overestimated by 20% with PIM compared to accepted experimental results; DeePMD finds better agreement than PIM. ANI-1 did not report thermal or electrical conductivities; however, we expect a similar or even better performance than DeePMD.
For viscosity, DeePMD performs best, followed by PIM and then by RIM; other methods did not report viscosities. Lam et al. compared the computational times of PIM, AIMD, and ANI-1 and found that the computational time in ascending order was PIM, ANI-1, then AIMD, demonstrating that PIM is faster than ANI-1 121 .
It is unclear what may definitively be said about the best model due to the inconsistency in reported values across all thermophysical properties. The ML methods show promise, with ANI-1 and GAP demonstrating the greatest accuracy, but they are suspected to suffer from limited transferability across compositions, where PIM may outperform them because its parameters originate from a larger composition space 91 .
Conclusions and outlook
Accurate prediction of thermophysical properties of molten salts can have an immeasurable impact on our society as new reactor and solar power systems are being developed. Throughout the last 80 years, breakthroughs in theory, computational power, and experiments have substantially advanced our ability to extract the necessary properties from simulations. Today, we witness the advent of machine learning in molten salt simulations and foresee unprecedented improvements in our abilities to design and use new molten salt mixtures. Machine learning models present a valuable middle ground between the accuracy and efficiency of classical force fields and ab-initio calculations. Thermophysical properties derived from extended simulations will be able to fit more accurate computational fluid dynamic models and help in designing superior molten salt systems.
To support the development of next-generation machine learning methods, we identify the following concrete needs: (1) Existing data from DFT calculations should be made freely accessible and transparent to enable data science groups to train novel models; (2) high-performance integrations of machine learning forces into existing molecular dynamics toolkits should be extended beyond the current LAMMPS integrations; (3) reusability of machine learning models should be increased by proper sharing and documentation of models; (4) a set of experimental benchmarks should be defined for a representative set of molten salts to allow for reproducible assessment of the quality of predicted thermophysical properties; (5) an open library for the analysis of trajectories and extraction of thermophysical properties should be made available to render results amongst studies more comparable.
Additional opportunities that may lead to improved machine learning models include the following: (1) Design of custom machine learning architectures for molten salts that incorporate inductive biases, which will require close collaboration between chemists and computer scientists; (2) introduction of superior active learning techniques to develop optimal training strategies; (3) development of architectures that can generalize beyond a few atom types, ideally to the space of all relevant molten salts and solvents.
The grand challenge of molten salt simulations is optimizing salt mixtures for desired thermophysical properties. Current machine learning models are still limited to predictions for mixtures that were used to train them and small variations in experimental conditions, such as temperature. We expect that better datasets, training strategies, and architectures will soon overcome these limitations.
Important adverse events to be evaluated in antidepressant trials and meta-analyses in depression: a large international preference study including patients and healthcare professionals
Background Non-serious adverse events (NSAEs) should be captured and reported because they can have a significant negative impact on patients and treatment adherence. However, the reporting of NSAEs in randomised controlled trials (RCTs) is limited. Objective To identify the most important NSAEs of antidepressants for patients and clinicians, to be evaluated in RCTs and meta-analyses. Methods We conducted online international surveys in English, German and French, including (1) adults prescribed an antidepressant for a depressive episode and (2) healthcare professionals (HCPs) prescribing antidepressants. Participants ranked the 30 most frequent NSAEs reported in the scientific literature. We fitted logit models for sets of ranked items and calculated for each AE the probability to be ranked higher than the least important AE. We also identified additional patient-important AEs not included in the ranking task via open-ended questions. Findings We included 1631 patients from 44 different countries (1290 (79.1%) women, mean age 39.4 (SD 13), 289 (37.1%) with severe depression (PHQ-9 score ≥20)) and 281 HCPs (224 (79.7%) psychiatrists). The most important NSAEs for patients were insomnia (95.9%, 95% CI 95.2% to 96.5%), anxiety (95.2%, 95% CI 94.3% to 95.9%) and fatigue (94.6%, 95% CI 93.6% to 95.4%). The most important NSAEs for HCPs were sexual dysfunction (99.2%, 95% CI 98.5% to 99.6%), weight gain (98.9%, 95% CI 97.7% to 99.4%) and erectile problems (98.8%, 95% CI 97.7% to 99.4%). Participants reported 66 additional NSAEs, including emotional numbing (8.6%), trouble with concentration (7.6%) and irritability (6%). Conclusions These most important NSAEs should be systematically reported in antidepressant trials. Clinical implications The most important NSAEs should contribute to the core outcome set for harms in depression.
BACKGROUND
Depression is experienced by up to 18% of individuals in the general population during their lifetime, with a high morbidity and mortality burden worldwide. 1 The global incidence of depression has increased from 172 million in 1990 to 258 million in 2017, representing a total increase of 49.9%. 2 International guidelines recommend antidepressant treatment as the first-line treatment for moderate and severe depression and for persisting mild depression. 3 4
WHAT IS ALREADY KNOWN ON THIS TOPIC
⇒ Adverse events (AEs) are predictors of lower patient adherence to antidepressant treatment and of poor depression outcomes.
⇒ Despite the large number of clinical trials on antidepressants, there is still a lack of reliable evidence about the safety profile of these medications.
WHAT THIS STUDY ADDS
⇒ We conducted online international surveys to rank the most important AEs according to patients and healthcare professionals (HCPs).
⇒ Overall, 1631 patients from 44 different countries and 281 HCPs were included.
⇒ Patients reported insomnia as the most important AE, followed by anxiety and fatigue. HCPs ranked sexual dysfunction as the most important AE, followed by weight gain and erectile problems.
⇒ We also identified 66 additional clinically important AEs, such as emotional numbing, trouble with concentration, irritability and withdrawal syndrome.
HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY
⇒ This study identifies non-serious AEs that should be routinely evaluated in trials and meta-analyses of antidepressants in major depression, and calls for the development of a core outcome set for harms outcomes in depression.
⇒ These results highlight the importance of engaging in discussions with patients and service users' caregivers in real-world clinical settings.
⇒ Clinicians should have access to appropriate tools that help them engage in collaborative deliberation.
In the Organisation for Economic Co-operation and Development (OECD) countries, the average use of antidepressants doubled between 2000 and 2017, going from 31 to 63 defined daily doses/1000 people/day. 5 Adverse events (AEs) are defined as 'any untoward medical occurrence in a patient administered a pharmaceutical product and which does not necessarily have to have a causal relationship with this treatment'. 6 Despite the large number of randomised controlled trials (RCTs) on antidepressants, most meta-analyses have focused on their comparative efficacy, with only a minority focusing on specific AEs. 7 This lack of information is problematic because AEs are predictors of poorer adherence to antidepressant treatment and poorer clinical outcomes. 8 Among AEs, a distinction is made between serious and non-serious adverse events (NSAEs). Serious AEs are defined by the Food and Drug Administration as AEs resulting in death, being life-threatening, requiring hospitalisation or prolonging an existing hospitalisation, resulting in persistent or significant disability, causing a congenital anomaly/birth defect, or requiring specific intervention to prevent permanent impairment or damage. 9 All other AEs are labelled as non-serious, yet patients and clinicians can still find them troublesome, with consequent implications for the choice of intervention. NSAEs are not systematically collected and reported in RCTs, which is likely to bias the interpretation of treatment effects, limiting the ability to reliably synthesise information about AEs in meta-analyses. 10 11

Objective
This study aimed to rank NSAEs of antidepressants according to people with lived experience of depression and antidepressant treatments, as well as healthcare professionals (HCPs) with direct experience of prescribing antidepressants, to inform the selection of AEs to be investigated in clinical studies and meta-analyses of AEs. 12
METHODS
We conducted an online survey (available in English, German and French) asking patients and HCPs to rank the NSAEs according to their perceived importance. In this study, NSAEs were considered important if patients considered them as not tolerable from their personal perspective or if HCPs reported them as troublesome for patients according to their clinical experience. Full information about the methods of this study is reported in the published protocol. 13
Participants and recruitment
Participants (patients and HCPs) had to speak English, French or German to participate, regardless of their nationality. The surveys recruited participants on a voluntary basis and without payment. All participants provided online informed consent. Both surveys were approved by the Institutional Review Board CERHUPO.5 (Paris, France, IRB 00011928) and registered in the INDS (Institut National des Données de Santé, which is the regulatory body for health data in France) in accordance with the European General Data Protection Regulation.
We recruited adult patients (>18 years old) who were treated with antidepressant medication for unipolar depression. 14 Patients were excluded if they (1) had a diagnosis of bipolar disorder, (2) reported no current or previous antidepressant exposure or (3) did not use any of the antidepressants listed in the survey. HCPs were recruited if they had experience of prescribing and monitoring antidepressants in patients with depression (eg, psychiatrists, general practitioners/family doctors, hospital doctors, prescribing nurses or prescribing pharmacists). Patients and HCPs were invited to participate through (1) advertisements on social media networks or posted by professional associations and in scientific journals, (2) recruitment campaigns coordinated by the Mental Elf (https://oxfordhealthbrc.nihr.ac.uk/susanasurvey/), (3) patient associations or professional networks, and (4) invitations from the ComPaRe e-cohort (https://compare. aphp.fr/) and the MoodNetwork (https://moodnetwork.org/). Full information about the recruitment strategy is reported in online supplemental material 1.
Development of the survey and survey content
We developed two versions of the international online survey: a patient version and an HCP version. Both versions of the survey had three sections: the collection of descriptive data, the ranking task and an open-ended question to identify further important NSAEs not included in the ranking task.
In the first section of the survey, patients provided information on their sociodemographic and clinical status (eg, age, gender, country, education, severity and duration of symptoms, history of suicidality, current or previous antidepressant treatment, total duration of exposure to antidepressant treatment, and change of treatment due to AEs). HCPs provided demographic, personal and professional information (eg, gender, age, country, profession, experience, including history of depression, experience of antidepressants and AEs, if any).
In the second section, all participants performed the ranking task and were asked to rank the 30 most frequent NSAEs reported in antidepressant trials, identified using an existing list of antidepressant drugs from the scientific literature (online supplemental material 2). 14 First, from the initial list of the 30 most frequent NSAEs, each participant selected the 15 AEs they felt were the most important. Second, the participants sorted these 15 AEs within a constrained template in a tiered system: only one AE could be ranked as the most important (first position), two AEs in the second position, three in the third, four in the fourth and the last five AEs in the fifth position (ie, these are the five least important AEs among the 15 initially selected). This method is validated and aims to shortlist the number of significant items to be ranked by each participant, reducing the burden and increasing the reliability of the evaluation process. 15 In the third section, all patients, and only those HCPs who had taken antidepressants, were asked to answer an open-ended question to identify additional important AEs not included in the 30 most common NSAEs.
The surveys (originally written in English and then translated into German and French) were codeveloped by clinicians, epidemiologists, social scientists and people with lived experience of depression, and double-checked for clarity and appropriate wording by the Patient and Public Involvement Group from the Oxford Health Biomedical Research Centre and by French-, English- and German-speaking patients and HCPs. Surveys were available on a secured online platform (http://clinicalepidemio.fr/proceed2/en/). The surveys (in English) are reported in online supplemental material 3 and 4 (the French and German translations are available on request from the authors).
Analysis of survey data
We separately analysed data from patients and HCPs. As per protocol, 13 data from HCPs who had taken antidepressants for depression were analysed within the HCP group because they participated in the HCP version of the survey.
We used logit models for sets of ranked items to obtain a general ranking of the AEs from the survey data. 16 The logit models calculate the odds for an individual AE to be ranked above an arbitrary reference, here selected as the AE considered the least important by patients ('cold symptoms'). As odds are not very intuitive, we then transformed odds into percentages (ie, the probability for each specific AE to be ranked higher than the least important AE, in this case cold symptoms).
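As a minimal illustration of the odds-to-percentage transformation described above (not the fitting of the ranked-items logit model itself, which would require a dedicated routine), the sketch below converts a log-odds coefficient into the probability of outranking the reference AE; the coefficient value is hypothetical and purely for demonstration:

```python
import numpy as np

def prob_ranked_above_reference(log_odds):
    """Convert a log-odds coefficient from the ranked-items logit model
    (estimated relative to the reference AE, here 'cold symptoms') into the
    probability of being ranked higher than that reference."""
    return 1.0 / (1.0 + np.exp(-np.asarray(log_odds, dtype=float)))

# Hypothetical coefficient for one AE, e.g. insomnia; the value is illustrative only.
print(prob_ranked_above_reference(3.15))  # ~0.959, i.e. ~96% probability of outranking the reference
```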
We used four alternative methods to rank AEs. First, we calculated the mean rank of each AE. Then, we assessed, for each AE, the proportion of participants having ranked it as the most troublesome (top 1), in the top 3 or in the top 6. We compared the ranking obtained with these four alternative methods (mean rank, top 1, top 3 and top 6) to the ranking obtained with the logit model by calculating, for each AE, the absolute value of the difference in the rank obtained with the logit model and with another method, and finally by averaging these values.
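A sketch of these alternative summaries, under an assumed data layout in which each participant's ranking is stored as a tier position (1-5, per the constrained template) for each selected AE and missing otherwise; the column layout and the tier-to-"top 3"/"top 6" mapping are assumptions for illustration:

```python
import numpy as np
import pandas as pd

def summarise_rankings(ranks: pd.DataFrame) -> pd.DataFrame:
    """ranks: one row per participant, one column per AE; entries are the tier
    position (1-5) given to the AE, NaN if the AE was not among the 15 selected."""
    return pd.DataFrame({
        "mean_rank": ranks.mean(),    # lower = more important on average
        "top1": (ranks == 1).mean(),  # proportion ranking the AE as the most troublesome
        "top3": (ranks <= 2).mean(),  # tiers 1-2 hold 1 + 2 = 3 AEs
        "top6": (ranks <= 3).mean(),  # tiers 1-3 hold 1 + 2 + 3 = 6 AEs
    })

def avg_abs_rank_difference(order_a, order_b):
    """Average absolute difference in the position each AE receives under two
    ranking methods (e.g. the logit model vs the mean-rank method)."""
    pos_a = {ae: i for i, ae in enumerate(order_a, start=1)}
    pos_b = {ae: i for i, ae in enumerate(order_b, start=1)}
    return float(np.mean([abs(pos_a[ae] - pos_b[ae]) for ae in pos_a]))
```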
To evaluate how patients' characteristics could impact their ranking, we fitted models involving an interaction with the characteristics tested: gender (men vs women), severity of disease (PHQ-9 score ≥15 vs <15) and status of treatment (currently under antidepressant versus previously under antidepressant). We adjusted for multiple testing by using a Bonferroni correction.
Recognising that our recruitment methods may have led to a non-representative sample of patients taking antidepressants for depression, we performed a sensitivity analysis to evaluate the impact of weighting our sample so as to obtain a similar distribution of gender, age and education as in the European Health Interview Survey using a method of calibration on margins (online supplemental material 5). 17 Statistical analyses were performed with R V.3.6.1 (http://www.R-project.org).
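The calibration itself was performed in R; the snippet below is only a rough Python sketch of the underlying idea (iterative proportional fitting, or raking, of respondent weights to target margins), not the exact procedure used in the study, and the column names and margin values are placeholders:

```python
import numpy as np
import pandas as pd

def rake_weights(df: pd.DataFrame, margins: dict, max_iter: int = 100, tol: float = 1e-6) -> np.ndarray:
    """Iteratively adjust respondent weights so that the weighted sample margins
    for each variable (e.g. gender, age group, education) match target proportions."""
    w = np.ones(len(df))
    for _ in range(max_iter):
        max_change = 0.0
        for col, targets in margins.items():
            for category, target_prop in targets.items():
                mask = (df[col] == category).to_numpy()
                current_prop = w[mask].sum() / w.sum()
                if current_prop > 0:
                    factor = target_prop / current_prop
                    w[mask] *= factor
                    max_change = max(max_change, abs(factor - 1.0))
        if max_change < tol:
            break
    return w * len(df) / w.sum()  # normalise so the weights average to 1

# Example call with placeholder margins (not the EHIS figures):
# weights = rake_weights(patients_df, {"gender": {"women": 0.6, "men": 0.4}})
```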
Open-text data were analysed using an inductive qualitative content analysis approach. 18 Three independent clinicians/researchers with professional experience of depression and antidepressants double-coded each participant's responses; that is, they assigned a code to each word or expression describing any AE's manifestation and/or impact on the participant's life. Following this, and with the help of a person with lived experience of depression and antidepressant treatment (ST), three independent clinicians/researchers categorised the codes inductively into a list of AEs, taking into account linguistic considerations and clinical judgements. 19
Findings
From 23 May 2019 to 16 October 2019, 5430 individuals visited the website hosting the online surveys, with 3600 patients and 551 HCPs providing consent to participate. After exclusion of participants who did not complete the ranking task, patients who reported bipolar disorder and/or patients who never took antidepressants, 1631 (45.3%) patients and 281 (51%) HCPs were included in the final analyses (figure 1).
Among the 281 HCPs from 27 countries, 224 (79.7%) were psychiatrists and 35 (12.4%) were general practitioners. Their mean professional experience was 14 (SD 11.4) years (see table 2 and online supplemental material 7 for full information). Among HCPs, 93 (33%) had direct personal experience of depression, and 63 of these (67.7%) were taking or had taken antidepressants; of these, 40 (63.5%) had experienced AEs. Figure 2 represents each of the 30 NSAEs according to the probability of being ranked above cold symptoms, the least important AE for both patients and HCPs. Use of alternative ranking methods provided similar results for the patients: the average difference in ranking between the logit method and a global method based on (1) the mean rank of each AE and (2) the proportion of patients selecting the AE among their top three most troublesome AEs was 1.3 and 3.7, respectively. We found heterogeneity in the definition of the most troublesome AEs; in particular, no AE was selected among the top three most troublesome AEs by more than 25% of patients in the sample, except insomnia (32.8% of the patients) (online supplemental material 8).
Results of the ranking task of the patients
Gender had a significant impact on the ranking of the following AEs: weight gain, erectile disorder, nausea and sweating (see online supplemental material 9). However, neither the severity of depression nor whether participants were currently taking an antidepressant at the time of the survey had an impact on the ranking (online supplemental material 10 and 11).
The sensitivity analysis using the weighted data set comparable to the European Health Interview Survey on gender, age and education showed no difference in the first three AEs (insomnia, anxiety and fatigue) and small differences with the next four AEs: weight gain (ranked fourth in the raw data set and seventh in the weighted data set), agitation (ranked fifth in both data sets), sexual dysfunction (ranked sixth and fourth, respectively) and dizziness (ranked seventh and fifth, respectively) (online supplemental material 12). 17
Comparison of the ranking of patients and HCPs
The average difference in the rank of AEs between the ranking of patients and that of HCPs (both using the logit model) was 5.4 (online supplemental material 14).
Additional AEs important for patients and HCPs but not included in the ranking task
In total, 1283 patients and 40 HCPs answered the open-ended question: 'If you have ever experienced any side effects while taking antidepressant medication, could you please describe these side effects and their impacts on your life, in regard to the potential benefits of the treatment?' Overall, 66 additional important AEs were identified (online supplemental material 15, Table A). The most frequently cited were emotional numbing (n=154, 11.6%), trouble with concentration (101, 7.6%), irritability (79, 6%) and withdrawal symptoms (78, 5.9%) (table 3). When reporting emotional numbing, participants noted a reduction and/or suppression of all positive and negative feelings. While some patients argued that no longer feeling intense negative emotions could be a relief in the first weeks of treatment, others complained about persisting effects, even after treatment cessation.
DISCUSSION
In this study, 1631 patients and 281 HCPs ranked the 30 most common NSAEs reported in RCTs of antidepressants in depression. Among the top 15 most important AEs for patients and HCPs, 11 were common to both groups: insomnia, anxiety, fatigue, weight gain, agitation, sexual dysfunction, dizziness, sleepiness, sweating, headache and nausea. This list was consistent among subgroups of patients, with the exception of 'erectile problems', which was ranked as most important by men. A sensitivity analysis using a weighted sample representative of the depressed population in Europe found similar results. Using freetext responses, we also identified additional important AEs not included in the ranking task, such as emotional numbing, trouble with concentration, irritability and withdrawal symptoms.
To our knowledge, this is the first study that ranked the importance of AEs of antidepressants for depression using a large and international sample of patients and prescribing HCPs. Previous studies focused only on the prevalence of AEs, did not evaluate the importance of AEs in patients' lives, and were limited to individual countries. [20][21][22] A review of studies investigating patients' preferences for medication-associated outcomes in mental disorders identified no studies dedicated to AEs of antidepressants in depression. 23 This study has limitations. First, our sample may not be representative of depressed patients taking antidepressants. We found no European data describing the characteristics of people
taking antidepressants for depression. There are epidemiological studies about people with depression that did not report data on antidepressant consumption, and studies about the use of antidepressants without information about the patients' diagnoses (eg, depression, anxiety and chronic pain). 24 Therefore, we decided to use the data from the European Health Interview Survey to weight our sample (on age, gender and education), although it does not entirely represent people taking antidepressants for depression. 17 This sensitivity analysis found minor differences in the top six AEs ranked by patients. Other characteristics may have affected the representativeness of our sample and thus limit the generalisability of the ranking results. For instance, we cannot rule out the possibility that patients who perceive themselves as harmed by antidepressants, or who are more vulnerable to AEs, may have been more likely to participate in the survey than patients who were not. However, we were limited by the scarcity of external data describing patients taking antidepressants for depression. There is also a lack of data regarding the determinants of patients' preferences, such as whether having experienced a given AE has an impact on the relative preference for this AE. Second, the surveys were open to any English-speaking, French-speaking or German-speaking patient/HCP without geographical limitations but ultimately involved mostly patients from Western countries. Since the results concern the preferences of individuals, they may be sensitive to sociocultural determinants; there is a need to conduct such surveys in other contexts or at least to identify whether preferences toward AEs are determined by cultural aspects. Third, the ranking task investigated 'stated preferences' of patients, that is, choices involving hypothetical scenarios, which could differ from 'revealed preferences', that is, actual choices made by people in real situations. We chose to evaluate stated preferences to be closer to the reality of the therapeutic decision (before having experienced the treatment/taken the antidepressant), as this is what happens in routine care. 25 Fourth, we chose to study AEs of antidepressants together as a drug class, even if it is likely that individual antidepressants have different profiles of AEs in children, adolescents, adults and older adults. 26 Other factors can affect the experience of antidepressant treatment, including the dose and duration of the drugs, whether the AEs occur during treatment or are residual symptoms, or whether people are taking antidepressants as monotherapy or combination treatment. Fifth, our results must be interpreted in the context of antidepressants in depression only. To obtain a ranking of patients' preferences toward AEs of antidepressants prescribed for other conditions (such as neuropathic pain or urinary incontinence), we would first need to extract NSAEs from trials of antidepressants in these conditions. Finally, the population of the trials in which we identified the list of AEs may be different from the population of patients included in our survey, not only because some of the antidepressants investigated in the older studies may no longer be in use but also because the population of randomised trials differs from the real-world population, due to their strict eligibility criteria. 27

Discrepancies between patients and HCPs in the ranking of AEs, and the important AEs identified with the qualitative content analysis, highlight the importance of engaging in discussions with patients and service users' caregivers. In our survey, 16.2% of patients reported having changed their treatment without medical advice because of AEs, and some patients described the difficulties they have talking about AEs with HCPs, with the feeling of not being heard, in particular about sexual dysfunction and emotional numbing. Clinicians need training in a genuinely shared decision-making approach. 28 They should have access to appropriate tools that help them engage in collaborative deliberation, and their clinical practice generally needs to be reorganised around the principles of patient engagement. 28

Figure 2 Ranking of AEs of patients and HCPs. Each AE is plotted with the corresponding probability and CI for HCPs (x coordinate) and patients (y coordinate). Cold symptoms are the reference. For instance, the probability of insomnia being ranked over cold symptoms was 98.3% (95% CI 96.6 to 99.2) for HCPs and 95.9% (95% CI 95.2 to 96.5) for patients. For the sake of visibility, we used a logit scale. The vertical line is the median probability for HCPs (96.6%), and the horizontal line is the median probability for patients (88.3%). The upper right portion contains the AEs considered most important by both patients and HCPs. AE, adverse event; HCP, healthcare professional.
CLINICAL IMPLICATIONS
Nowadays, it is recommended to include the perspective of patients in the selection of efficacy outcomes to be included in trials. 29 30 Several initiatives such as OMERACT (Outcome Measures in Rheumatology; https://omeracthandbook.org/handbook) and COMET 31 have updated their recommendations to do the same for harms outcomes. To our knowledge, this is the first study to provide a proof of concept of a method to select AEs to be included in trials and to provide a ranking of AEs of antidepressants based on their importance from the point of view of patients and HCPs. These findings could set the foundation of a core outcome set for harms 32 to be measured in all primary and secondary studies involving antidepressants for depression. These important AEs should be systematically evaluated and properly reported in RCTs and their meta-analyses, in order to provide evidence-based information to support treatment decisions in clinical practice. 33
Real-Time Classification of Patients with Balance Disorders vs. Normal Subjects Using a Low-Cost Small Wireless Wearable Gait Sensor
Gait analysis using wearable wireless sensors can be an economical, convenient and effective way to provide diagnostic and clinical information for various health-related issues. In this work, our custom designed low-cost wireless gait analysis sensor that contains a basic inertial measurement unit (IMU) was used to collect the gait data for four patients diagnosed with balance disorders and additionally three normal subjects, each performing the Dynamic Gait Index (DGI) tests while wearing the custom wireless gait analysis sensor (WGAS). The small WGAS includes a tri-axial accelerometer integrated circuit (IC), two gyroscopes ICs and a Texas Instruments (TI) MSP430 microcontroller and is worn by each subject at the T4 position during the DGI tests. The raw gait data are wirelessly transmitted from the WGAS to a near-by PC for real-time gait data collection and analysis. In order to perform successful classification of patients vs. normal subjects, we used several different classification algorithms, such as the back propagation artificial neural network (BP-ANN), support vector machine (SVM), k-nearest neighbors (KNN) and binary decision trees (BDT), based on features extracted from the raw gait data of the gyroscopes and accelerometers. When the range was used as the input feature, the overall classification accuracy obtained is 100% with BP-ANN, 98% with SVM, 96% with KNN and 94% using BDT. Similar high classification accuracy results were also achieved when the standard deviation or other values were used as input features to these classifiers. These results show that gait data collected from our very low-cost wearable wireless gait sensor can effectively differentiate patients with balance disorders from normal subjects in real time using various classifiers, the success of which may eventually lead to accurate and objective diagnosis of abnormal human gaits and their underlying etiologies in the future, as more patient data are being collected.
Introduction
Analysis of human gaits has long been an active area of research, and many systems have been proposed for observing and differentiating different gait patterns and their irregularities in the literature. Many of these existing systems use appearance-based approaches by extracting features and/or positions from the images captured from different video sequences using high-speed video cameras with frame rates of 50-200 Hz. Many studies have reported that image feature extraction using biomechanical models can allow quantitative analysis of specific gait characteristics, such as joint moments and powers (i.e., kinetic analysis), joint angles, angular velocities and angular accelerations (i.e., kinematic analysis) [1]. The testing protocol of these systems includes placing the optical markers near anatomical landmarks of the body and features related to various gait patterns are extracted from video sequences. Parametric models have been used extensively to describe a set of successive image observations. A vision-based three-dimensional (3D) modeling of motions of a human subject can be achieved by using volumetric bodies (e.g., using cylinders) that represent the flesh of the human subject and a simple collection of segments with joint angles representing the skeletal structure of the human body [2]. Alternatively, 2D models that represent the projection of 3D data onto an imaging plane can also be used (i.e., 2D/3D contour modeling [3]). However, since different kinetic and kinematic methods have been developed from these sophisticated and expensive visual gait analysis systems, it can be rather challenging to directly compare the gait analysis results from different systems/methods, as there is no standardization in the visual gait analysis methodology.
In addition, the variables that can be measured during gait analysis depend on the technique and sensors selected, which makes the direct comparison of gait analysis results derived from different gait sensing systems even much more difficult. The most commonly-reported gait measurement data include the temporospatial parameters, which include walking speed, body rotations, step time, step length and the durations of the stance phase and the swing phase [4]. A basic inertial measurement unit (IMU) that includes 3D gyroscopes and accelerometers can measure angular velocity and linear acceleration for each of the X/Y/Z axes, respectively, and these inexpensive IMUs have been used as wearable sensors that provide a powerful option for human gait analysis [4][5][6]. For example, Aminian et al. [4] and Selles et al. [5] reported methods of measuring both terminal contact (TC) that defines the beginning of the swing phase, as well as the initial contact (IC) that defines the beginning of the gait cycle timing information using those body-worn sensors. On the other hand, Yoshida et al. [6] used an accelerometer/IMU sensor attached to the patient's waist and observed frequency peaks in the anterior plane to detect leg injury. Boutaayamou et al. [7] developed a signal processing algorithm to automatically extract, on a stride-by-stride basis, four consecutive fundamental events of walking, i.e., heel strike (HS), toe strike (TS), heel-off (HO) and toe-off (TO), from wireless accelerometers applied to the right and left foot. This accelerometer-based event identification was validated in seven healthy volunteers and a total of 247 trials against reference data provided by a force plate, a kinematic 3D analysis system and a video camera. An ambulatory monitoring method using an IMU sensor for patients with Parkinson's disease has also been developed [8,9].
We have, therefore, designed a custom low-cost wireless gait analysis sensor (WGAS), which can be used for both fall detection and gait analysis. We previously demonstrated fall detection classification accuracies of 99% among young volunteers using a similar WGAS with the BP-ANN and SVM classification algorithms [10][11][12]. This paper, however, studies and reports our new WGAS used specifically for gait analysis, to distinguish patients with balance disorders from normal subjects while using various classification algorithms and input features to check the speed and accuracy of each classifier. This WGAS is placed and tested at the T4 position on the back of each subject, as shown in Figure 1. For the training of the classification algorithms, we first use the following six input features for the X/Y/Z axes extracted from the raw WGAS data (i.e., Rω, the range of angular velocity, and RA, the range of acceleration, as shown in Equations (1) and (2)). They are used as one of the input feature sets for these classifiers, which yield excellent classification accuracies; the details of the WGAS and our experimentation and analysis are explained next.
Wireless Gait Analysis Sensor
The custom-designed WGAS consists of a three-axis linear accelerometer IC, a single-axis gyroscope IC and a dual-axis gyroscope IC to measure 3D human body translations and rotations during a gait pattern; these ICs are built with micro-electro-mechanical system (MEMS) sensors. Our WGAS measures the linear acceleration and angular rotations of the body movements, and there is no need for a magnetometer for our application of gait analysis and gait classification. Furthermore, the sensor has a dual- and single-axis gyroscope instead of a tri-axial gyroscope, because the particular analog MEMS tri-axial gyroscope was not available on the market during the design of the WGAS, but it is now. A future design of our WGAS can have a more compact IMU with a single tri-axial accelerometer IC and a single tri-axial gyroscope IC. However, we do not believe this will affect the sensing results whether one IC or two ICs are used. The details of these ICs with their manufacturers' information can be found in [10,11]. This WGAS system is supported by a Texas Instruments (TI) MSP430 microcontroller (Texas Instruments, Dallas, TX, USA) and a wireless 2.4 GHz USB transceiver using the TI SimpliciTI™ protocol, with a wireless communication range of ~12 meters (40 ft). An overall simplified system block diagram for the WGAS analysis system is shown in Figure 2.
The two AAA batteries used in our earlier wired sensor [13] were replaced by a single rechargeable Li-ion coin battery, providing a battery lifetime of ~40 h of continuous operation with each recharge. The PCB, coin battery and the microcontroller are placed in a specially-designed 3D-printed plastic box (2.2″ × 1.5″ × 0.8″) with a total weight of 42 g. The design of the box was done with the 3D modeling software Rhinoceros (Rhino, Robert McNeel & Associates, WA, USA) and printed using a 3D printer with acrylonitrile butadiene styrene (ABS) plastic. The box has a sliding lid and is shown in Figure 1. The accelerometer data are sampled at 160 Hz and digitized to eight bits, with the output of each axis scaled to ±6 g at ΔV = ±6 g/VDD (VDD, supply voltage = 3.6 V). The gyroscope data are also sampled at 160 Hz and digitized to eight bits, with the output scaled to ±300°/s (dps) at ΔV = ±300 dps, multiplied by the sensitivity So (typical So values are 3.752 mV/dps for the accelerometer and 3.33 mV/dps for the gyroscopes). The sensor orientation and position on the body during testing are also shown in Figure 1. The sensor is carefully secured to the subjects during testing to avoid artifacts: a cloth is used to secure the WGAS by tying it tightly at the T4 position of the subject's back, the cloth is attached to the subject's shirt with plastic tape, and Velcro attached to the sensor box fastens it firmly to the shirt as well, so that the sensor does not move around during the DGI tests shown in Table 1. This method significantly reduced the outliers in the measured datasets from the sensor. The microcontroller and the transceiver unit enable real-time wireless transmission of the six-dimensional gait data to a nearby PC, where a LABVIEW™ program provides the graphical user interface (GUI), as shown in Figure 3.

Figure 3. An example output on the LABVIEW GUI of the data wirelessly transmitted from the WGAS to a nearby PC, showing the real-time data for the six monitored signals. For example, the "GYRO X" signal is the angular velocity measured from the gyroscope centered on the X axis, and the "ACC X" signal is the acceleration measured from the accelerometer along the X axis, respectively.

The DC values for the six signals are not the same as the battery charges and discharges, and calibration to a common DC level was not done in order to simplify the WGAS design; we have ascertained that the detected AC signals from the WGAS are the ones that contain the most useful gait information, as will be shown in the remainder of this paper.
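As a rough sketch of how the 8-bit samples might be converted back to physical units, assuming a linear mapping of the mid-scale code to zero and of the full code range to the stated full-scale values (±6 g and ±300 dps); since the channel DC levels differ in practice, each channel's own baseline would be subtracted rather than the nominal mid-scale code:

```python
import numpy as np

ADC_BITS = 8
MID_CODE = 2 ** (ADC_BITS - 1)      # assume code 128 corresponds to 0 g / 0 dps
FULL_SCALE_ACC_G = 6.0              # ±6 g accelerometer full scale (from the text)
FULL_SCALE_GYRO_DPS = 300.0         # ±300 dps gyroscope full scale (from the text)

def counts_to_g(raw_counts, baseline=MID_CODE):
    """Convert raw 8-bit accelerometer samples to acceleration in g."""
    raw = np.asarray(raw_counts, dtype=float)
    return (raw - baseline) / MID_CODE * FULL_SCALE_ACC_G

def counts_to_dps(raw_counts, baseline=MID_CODE):
    """Convert raw 8-bit gyroscope samples to angular velocity in degrees per second."""
    raw = np.asarray(raw_counts, dtype=float)
    return (raw - baseline) / MID_CODE * FULL_SCALE_GYRO_DPS

# Example: acc_z_g = counts_to_g(acc_z_counts, baseline=np.mean(acc_z_counts))
# (using each channel's own mean as the baseline removes the channel-specific DC level)
```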
Table 1. The seven Dynamic Gait Index (DGI) tests performed by each subject.

1 Gait on level surface: walk at your normal speed
2 Change in gait speed: walk with a normal pace up to 5′; walk fast for the next 5′; walk slowly for the next 5′; and walk normally for the last 5′
3 Gait with horizontal head turns: walk normally with horizontal head turns up to the 20′ mark
4 Gait with vertical head turns: walk normally with vertical head turns up to the 20′ mark
5 Gait and pivot turn: walk normally, but at the end, turn around with a pivot turn
6 Step over obstacle: walk normally, and when you come across the obstacle, step over, but not around it
7 Step around obstacles: walk normally, and when encountering the 1st obstacle, walk around the right side; when encountering the 2nd obstacle, walk around the left side
Experimental Section
All four patients and the three normal subjects performed the 7 Dynamic Gait Index (DGI) tests [10,14], with the details shown in Table 1. Therefore, a total of 28 dynamic gait tests were performed on the 4 patients and 21 tests on the three normal subjects. During all of the tests, the WGAS was placed at the T4 position (on the back) of each subject. Our WGAS sensor study has been approved by the Texas Tech University Health Sciences Center (TTUHSC) Internal Review Board (IRB) under the study title "Fall risk identification and assessment using body worn sensors (CRI12-030 Fall study)".
The six range features extracted from the raw gyroscope and accelerometer data of all testing subjects form the inputs for training the classification algorithms. These include the range of angular velocity in the X/Y/Z directions (i.e., Rωx from "GYRO X", Rωy from "GYRO Y" and Rωz from "GYRO Z") and the range of acceleration in the X/Y/Z directions (i.e., RAx from "ACC X", RAy from "ACC Y" and RAz from "ACC Z"). The data were processed using a 1.7 GHz PC with 4 GB of RAM, running Windows 8 and MATLAB R2015b. We compared the box plots of the four patients and three normal subjects to show several key test data statistics: the median, mean, range (highest to lowest values) and interquartile range (IQR) of DGI Tests 2, 3, 4 and 6, as shown in Figures 4-7. These four DGI tests are selected and plotted here because they show clearer visual comparisons than the other three DGI tests; some detailed statistics of all DGI tests are also listed later in this paper, at the end of Section 2 (e.g., Figures 8 and 9).
Moreover, the averages of the STDEV (standard deviation), range, mean, median and IQR of DGI Tests 2 and 7 are calculated and shown in Tables 2 and 3.

As shown in Figure 4, the box plots of the patients' data appear to be mostly larger than those of the normal subjects for DGI Test 2 for all extracted features, and the normal subjects' box plots look significantly tighter for all of the tri-axial acceleration data for all DGI tests shown in Figures 4-7 (i.e., "ACC X", "ACC Y" and "ACC Z"). The tighter box plot distributions suggest that the normal subjects walk more steadily, with less wobbling or sway, than the patients during the DGI tests.
Besides the larger box plots associated with larger STDEV and IQR for the patients, in all box plots we have also observed that the median/mean values of the acceleration measured on the X and Z axes (i.e., "ACC X", "ACC Z") of the normal subjects are significantly larger than those of the patients for all of the DGI tests, and we have shown this difference in Tables 2 and 3 for DGI Tests 2 and 7 as examples for better illustration. This contrast might be explained by the fact that the walking gaits of normal subjects are considerably different from those of patients: normal subjects walk freely, with larger accelerations of their center of mass (CoM) along the X/Z directions, while patients with balance disorders typically walk with decreased speed, shortened stride length and other associated factors [4,5]. It is also interesting to notice that the median/mean values of ACC Y of the normal subjects are not so different from those of the patients. This might be because, during the DGI tests, the Y axis is parallel to the walking direction while the X and Z axes are perpendicular to it, and therefore we did not see much change in the median/mean values along the Y axis. We will need to examine these effects related to the median/mean accelerometer values more closely once more patient data have been collected, to hopefully understand them better. Moreover, as mentioned before, when the gait data's STDEV is smaller, one would expect the walking to be more steady or stable. We can indeed see in the box plots and tables that the normal subjects' STDEV values for Gyro X and Y and ACC X, Y and Z are all smaller than those of the patients. However, the normal subjects' STDEV values for Gyro Z are actually larger than those of the patients, suggesting that normal subjects may naturally rotate their bodies around the Z axis more, and with more variation, than patients with balance disorders; their faster walking speed may contribute to this effect as well. Note that we would really need to collect more patient gait data to improve the statistics and the analysis details, especially for the gyroscope data, as we can see that for all seven DGI tests the STDEVs of the Gyro Z data for Patients No. 1 and No. 2 are quite different from those of Patients No. 3 and No. 4 (see Figures 4-7). To see this better, we have also shown the STDEV values for normal subjects and patients for DGI Test 2 in Table 4. One can see from Table 4 that the STDEVs of Gyro Z for the normal subjects are actually slightly lower than those of Patient 1 and Patient 2, but much greater than those of Patient 3 and Patient 4; therefore, the average STDEV of Gyro Z among normal subjects becomes larger than that of the patients. Finally, not surprisingly, Table 3 shows that the average range values of Gyro X, Y, Z and ACC X, Y, Z for the patients' gait data are greater than those of the normal subjects for all of the DGI tests, except for the range of the Gyro Z data in DGI Test 7, probably for the reasons stated before: the normal subjects may naturally rotate their bodies around the Z axis more, with faster walking speeds and therefore more variation and a larger range, than patients with balance disorders. Moreover, Figures 4-7 show the box plots of DGI Tests 2, 3, 4 and 6, grouping the normal subjects as one group and the patients as another. From these plots, DGI Tests 2, 3, 4 and 6 appear to be better tests than DGI Tests 1, 5 and 7 for differentiating patients from normal subjects.
Having checked those basic data statistics, we decided to first use the range values of the patients and normal subjects from all six sensor axes, for all 7 DGI tests, as the input features for the classification algorithms. We will later also use STDEV and other statistics as input features to these classifiers and compare the results, as well.
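A minimal sketch of the per-axis feature extraction, assuming each DGI trial is stored as an array with one column per monitored channel; the channel ordering and label conventions below are assumptions for illustration:

```python
import numpy as np

CHANNELS = ["GYRO X", "GYRO Y", "GYRO Z", "ACC X", "ACC Y", "ACC Z"]

def extract_features(trial, feature="range"):
    """trial: array of shape (n_samples, 6), one column per channel in CHANNELS order.
    Returns a 6-element vector of per-axis range (Rwx..RAz) or standard deviation."""
    trial = np.asarray(trial, dtype=float)
    if feature == "range":
        return trial.max(axis=0) - trial.min(axis=0)
    if feature == "std":
        return trial.std(axis=0, ddof=1)
    raise ValueError("feature must be 'range' or 'std'")

# Building the design matrix for the classifiers:
# X = np.vstack([extract_features(t, "range") for t in trials])   # shape (49, 6)
# y = np.array(labels)                                            # 1 = patient, 0 = normal
```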
Finally, we show in Table 5 the STDEV, range, median, mean and IQR for normal subjects vs. patients for all seven DGI tests. To see the range distributions better, histograms of the data with fitted normal distributions are also shown for each of the six features, for both patients and normal subjects, in Figures 8 and 9, together with the box plots of the normal subjects and patients for these six features. It can be clearly seen that the features related to the acceleration data (i.e., the "ACC" data) have tighter distributions for the normal subjects than for the patients, especially for ACC X and ACC Z. Having checked the raw gait data carefully to ensure data integrity, we are now ready to present the classification algorithms and classification results.
Classification Algorithms
Classification algorithms were used on the 6 features extracted from all 7 DGI tests and for all testing subjects to differentiate patients vs. normal subjects from all of the data collected.
Back Propagation Artificial Neural Network
An artificial neural network (ANN) can be seen as a machine that is designed to mimic how the brain performs a particular task or function of interest. Using an ANN as a classifier has several advantages: (1) neural networks are data-driven, self-adaptive methods that can adjust themselves to the input data without requiring explicit mathematical functions; (2) they are nonlinear models that can represent most real-world problems; (3) neural networks are able to estimate the Bayesian posterior probability, which provides a basis for estimating classification rules [10]. To train the feed-forward ANN classifier in this work, back propagation (BP) was applied to the input features with scaled conjugate gradient (SCG) learning [10,11]. Similar to the neural network in [10], our gait classification neural network also has three layers, in which the input layer has six neurons that correspond to the six input feature values. The hidden layer can automatically extract the features of the input patterns. There is no definite rule to determine the number of neurons in the hidden layer; it is found by trial and error. Here, in our classification study using the ANN, there is one hidden layer holding 10 hidden neurons, a number that was optimized by adjusting the size of the hidden layer (from 1 to 15 neurons), as shown in Figure 10. For the hidden layer, a hyperbolic tangent sigmoid transfer function is used for each neuron to calculate the layer output from its net input. The hyperbolic tangent function and its fast approximation are given by

$a_i^1 = \tanh(n_i^1) = \frac{2}{1 + e^{-2 n_i^1}} - 1$

where $a_i^1$ is the i-th element of the $a^1$ vector containing the outputs of the hidden neurons and $n_i^1$ is the i-th element of the $n^1$ vector containing the net inputs to the hidden units, with $n^1$ calculated as

$n^1 = w^{10} p + b^1$

where $p$ is the input pattern, $b^1$ is the bias vector and $w^{10}$ is the weight matrix between the input layer and the hidden layer. The output layer is designed based on the required output of the neural network. Here, we have used two output neurons corresponding to the two target classes the network needs to differentiate (i.e., the features of all patients are considered as Class 1, and all features of normal subjects as Class 2). A pure linear activation function is selected for the output, given by

$a^2 = n^2$

where $a^2$ is the column vector coming from the output layer and $n^2$ contains the net inputs going into the output layer, calculated as

$n^2 = w^{21} a^1 + b^2$

Note that $b^2$ is the bias at the output layer, $w^{21}$ is the matrix of synaptic weights between the hidden layer and the output layer, and $a^1$ is the column vector containing the outputs from the hidden layer.

The back propagation learning algorithm consists of two paths: the forward path and the backward path. The forward path includes creating the feed-forward network, initializing the weights, simulation and training of the network. The network weights and biases are updated in the backward path. Here, we have used the scaled conjugate gradient (SCG) training algorithm, which uses the gradient of the performance function to determine how to adjust the weights so as to minimize the performance function. An iteration of this algorithm can be written as

$x_{k+1} = x_k - \alpha_k g_k$

where $x_k$ is the vector of current weights and biases, $g_k$ is the current gradient and $\alpha_k$ is the learning rate.
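A self-contained sketch of the 6-10-2 network described above (tanh hidden layer, linear output layer). The layer sizes are taken from the text; the learning rate, initialization and the use of plain stochastic gradient descent on a mean-squared-error cost (rather than the scaled conjugate gradient training actually employed) are simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_network(n_in=6, n_hidden=10, n_out=2):
    """Initialise weights/biases for a 6-10-2 feed-forward network."""
    return {
        "w10": rng.normal(scale=0.5, size=(n_hidden, n_in)),   # input -> hidden weights
        "b1": np.zeros(n_hidden),
        "w21": rng.normal(scale=0.5, size=(n_out, n_hidden)),  # hidden -> output weights
        "b2": np.zeros(n_out),
    }

def forward(net, p):
    n1 = net["w10"] @ p + net["b1"]   # net input to the hidden layer
    a1 = np.tanh(n1)                  # hyperbolic tangent sigmoid activation
    a2 = net["w21"] @ a1 + net["b2"]  # pure linear output layer
    return a1, a2

def train(net, X, T, lr=0.01, epochs=500):
    """X: (n_samples, 6) feature vectors; T: (n_samples, 2) one-hot class targets."""
    for _ in range(epochs):
        for p, t in zip(X, T):
            a1, a2 = forward(net, p)
            e2 = a2 - t                                   # output error (MSE gradient, linear output)
            e1 = (net["w21"].T @ e2) * (1.0 - a1 ** 2)    # error back-propagated through tanh
            net["w21"] -= lr * np.outer(e2, a1)
            net["b2"] -= lr * e2
            net["w10"] -= lr * np.outer(e1, p)
            net["b1"] -= lr * e1
    return net

def predict(net, X):
    """Assign each sample to the output neuron with the largest activation."""
    return np.array([int(np.argmax(forward(net, p)[1])) for p in X])
```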
Support Vector Machine
The support vector machine (SVM) method is popular for performing pattern recognition/classification on two categories of data with supervised learning. In our work, SVM was implemented similarly to [10,15,16] to classify patients vs. normal subjects using the gait data from our WGAS system. A linear kernel was used for training the SVM classifier, which finds the maximum-margin hyper-plane from the given training dataset D, described as

$D = \{(x_i, y_i) \mid x_i \in \mathbb{R}^p,\ y_i \in \{-1, 1\}\}, \quad i = 1, \dots, n$

where $y_i$ is either 1 or −1 and n is the number of training data. Each $x_i$ is a p-dimensional vector holding the feature quantities R. Any hyper-plane can be written as

$\vec{w} \cdot \vec{x} - b = 0$

where $\vec{w}$ is the normal vector to the hyper-plane. If the training data are linearly separable, the two margin hyper-planes can be described as [15]

$\vec{w} \cdot \vec{x}_i - b = 1 \quad \text{and} \quad \vec{w} \cdot \vec{x}_i - b = -1$

The distance between these two hyper-planes is $2/\|\vec{w}\|$, so the purpose is to minimize $\|\vec{w}\|$. In general, it is hard to separate the training data linearly. When the training data are not linearly separable, the soft-margin formulation can be written as

$\min_{\vec{w}, b, \varepsilon} \ \frac{1}{2}\|\vec{w}\|^2 + C \sum_i \varepsilon_i \quad \text{subject to} \quad y_i(\vec{w} \cdot \vec{x}_i - b) \ge 1 - \varepsilon_i, \ \varepsilon_i \ge 0$

where the parameter C determines a trade-off between the error on the training set and the separation of the two classes, and $\varepsilon$ is the set of slack variables. The dual problem lies in maximizing the following function with respect to the Lagrange multipliers $\alpha$ [15]:

$L(\alpha) = \sum_i \alpha_i - \frac{1}{2} \sum_i \sum_j \alpha_i \alpha_j y_i y_j (\vec{x}_i \cdot \vec{x}_j), \quad \text{subject to} \quad 0 \le \alpha_i \le C \ \text{and} \ \sum_i \alpha_i y_i = 0$
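A brief sketch of a linear-kernel SVM classifier using scikit-learn (an assumption; the paper does not state which SVM implementation was used), with random placeholder features standing in for the real range-feature vectors, so the printed accuracy is illustrative only:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(49, 6))        # placeholder for the 49 range-feature vectors
y = np.array([1] * 28 + [0] * 21)   # 1 = patient trial, 0 = normal-subject trial

svm = SVC(kernel="linear", C=1.0)   # linear kernel; C is the soft-margin trade-off
svm.fit(X[:35], y[:35])             # illustrative train/test split only
print("test accuracy:", svm.score(X[35:], y[35:]))
```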
K-Nearest Neighbor
The K-nearest neighbors algorithm (KNN) is a simple, efficient non-parametric method used for classification and regression in pattern recognition, object recognition, etc. [17]. In both cases, the input consists of the K closest training examples in the feature space.
In KNN classification, an unknown sample is classified by assigning to the test pattern the class label of its K nearest neighbors. The object is classified based on the category of its nearest neighbors through a voting procedure: the majority vote of its neighbors is considered during classification, with the object assigned to the most common class among its K nearest neighbors (K is a positive integer, typically small). If K = 1, there is only one nearest neighbor, and the object is simply assigned to that class. Because of their simplicity and efficiency, KNN-based algorithms are widely used in many applications.
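A small, hedged sketch of this voting procedure on synthetic gait features follows; the data and the choice K = 3 are illustrative assumptions, not values taken from the paper.

```python
# Hedged sketch: KNN majority-vote classification on synthetic gait features.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(49, 6))                   # 49 six-dimensional feature vectors (synthetic)
y = (X[:, 3:].sum(axis=1) > 0).astype(int)     # synthetic labels: 1 = patient, 0 = normal

knn = KNeighborsClassifier(n_neighbors=3)      # majority vote among the 3 nearest neighbors
knn.fit(X[:40], y[:40])                        # "training" simply stores the labelled examples
print(knn.predict(X[40:]))                     # class of each unknown sample
print(knn.score(X[40:], y[40:]))               # fraction classified correctly
```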
Binary Decision Tree
Binary decision trees (BDT), also called decision trees or classification trees, can predict responses to data as a classifier. To predict a response, one needs to follow the decisions in the tree from the root (beginning) node down to a leaf node, where the leaf node contains the response. A BDT gives responses that are nominal, such as "true" or "false". Here, we have used BDT for the classification of the patients' and normal subjects' gait data. In data mining, BDT can also be used for regression and generalization of a given set of data [18]. Data come in records of the form (x, Y) = (x_1, x_2, x_3, ..., x_k, Y) (Equation (14)). The dependent variable in Equation (14), Y, is the target variable that we are trying to predict, classify or generalize. The vector x consists of the input variables, x_1, x_2, x_3, etc., that are used for classification.
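The following hedged sketch fits such a classification tree to synthetic records of the form described above; the feature values, labels and tree depth are illustrative assumptions only.

```python
# Hedged sketch: a binary decision (classification) tree on records (x1..x6, Y).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(49, 6))                   # input variables x1..x6 (synthetic)
Y = (X[:, 3:].sum(axis=1) > 0).astype(int)     # nominal target: 1 = "patient", 0 = "normal"

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X, Y)                                 # learn the split decisions from root to leaves
print(tree.predict(X[:5]))                     # each record is routed to a leaf holding its class
```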
The overall system flow for our real-time gait classification system is shown in Figure 11. We will now present the classification results from our WGAS system using these aforementioned classifiers next.
Figure 11. Overall data acquisition, feature extraction and data classification analysis flow for the gait analysis system using our WGAS.
Results and Discussion
The training and testing datasets were divided 70:30 (in percentage) for the BP-ANN algorithm. The other three algorithms used the K-fold cross validation method with K = 6 [19]. The K-fold cross validation method generalizes the approach by segmenting the data into K equally-sized partitions. During each run, one of the partitions is chosen for testing, while the rest of them are used for training. The procedure is repeated K times so that each partition is used for testing exactly once. Again, the total error is found by summing up the errors for the K runs. A parallel coordinate plot is shown in Figure 12, illustrating the six features for all 49 datasets (seven subjects with seven DGI tests). The blue lines refer to the extracted features for patients, and the brown lines refer to the extracted features for normal subjects.
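A hedged sketch of the 6-fold cross validation procedure described above is given below; the classifier, feature data and labels are illustrative placeholders, not the paper's actual pipeline.

```python
# Hedged sketch: 6-fold cross validation; each partition is used for testing
# exactly once and the per-fold accuracies are averaged.
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(49, 6))                   # 49 feature vectors (synthetic)
y = (X[:, 3:].sum(axis=1) > 0).astype(int)     # synthetic labels

cv = KFold(n_splits=6, shuffle=True, random_state=0)
scores = cross_val_score(KNeighborsClassifier(n_neighbors=3), X, y, cv=cv)
print(scores)          # accuracy of each of the 6 runs
print(scores.mean())   # average accuracy over the 6 folds
```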
From Figure 12, we can clearly see that the features F4, F5 and F6 of the normal subjects (i.e., brown lines; range of ACC Z, range of ACC X and range of ACC Y, respectively) are almost concentrated at 0.1 Volts in the feature space, but the F4, F5 and F6 of the patients (i.e., blue lines) are varying from 0.2 to 1 in the feature space. Therefore, the linear acceleration features (i.e., F4, F5 and F6) from the accelerometer data can be used to differentiate patients from normal subjects. However, the gyroscopic features (i.e., F1, F2 and F3) vary rather differently, and they appear not that different for normal subjects vs. patients. Nevertheless, we found that the gyroscopic features are still important for the classification algorithms to possibly increase their classification accuracy.
Before we show the classification results, we would like to introduce some straight-forward graphical representation and figures-of-merit used to evaluate a classifier. For example, a "confusion matrix" is a useful and simple tool for analyzing/representing how well a classifier can recognize input features of different classes [20]. It tabulates the outcomes performed by a classifier for both correctly- and incorrectly-classified cases. Its definition is shown in Figure 13.
Specificity: This is also known as the true negative rate. It is defined as the fraction of total negative examples that are predicted correctly by the model/classifier.
Precision (positive predictive value): Precision determines the fraction of records that actually turns out to be positive in the group the classifier has declared as positive class.
The higher the precision is, the lower the number of false positive errors committed by the classifier.
Negative predictive value (NPV): This is the proportion of samples that do not belong to the class under consideration and that are correctly identified as non-members of the class.
F-measure: Precision and sensitivity are two widely-used metrics for evaluating the correctness of a classifier or a pattern recognition algorithm. Building a model that maximizes both precision and sensitivity is the key challenge for classification algorithms. Precision and sensitivity can be summarized into another metric known as the F-measure, which is the harmonic mean of precision and sensitivity, given by F-measure = 2 × (precision × sensitivity)/(precision + sensitivity).
Accuracy: Accuracy is used as a statistical measure of how well a binary classification test identifies or excludes a condition. It is a measure of the proportion of the true results, as defined by Equation (20): accuracy = (TP + TN)/(TP + TN + FP + FN).
Now, we are ready to discuss the classification results. Considering the importance of all six features from the range data, each classifier was trained with all six features as inputs. The back propagation artificial neural network (BP-ANN) achieved an impressive 100% accuracy with SCG learning. The SCG algorithm performs the search and chooses the step size by using information from the second-order error function of the neural network. The SCG training is tuned by the parameters sigma σ (which determines the change in weight for the second derivative approximation) and lambda λ (which regulates the indefiniteness of the Hessian). The values of σ and λ were taken as 5 × 10⁻⁵ and 5 × 10⁻⁷, respectively.
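To make these figures-of-merit concrete, the hedged sketch below computes them from the four entries of a binary confusion matrix; the TP/TN/FP/FN counts are made-up example values, not results from the paper.

```python
# Hedged sketch: figures-of-merit derived from a binary confusion matrix.
def classifier_metrics(tp, tn, fp, fn):
    sensitivity = tp / (tp + fn)                 # true positive rate
    specificity = tn / (tn + fp)                 # true negative rate
    precision = tp / (tp + fp)                   # positive predictive value
    npv = tn / (tn + fn)                         # negative predictive value
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "NPV": npv,
            "F-measure": f_measure, "accuracy": accuracy}

# Example: 48 of 49 cases classified correctly (illustrative numbers only).
print(classifier_metrics(tp=27, tn=21, fp=1, fn=0))
```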
The confusion matrix of the BP-ANN classifier is shown in Figure 14, which gives the TP, TN, FP and FN values.
Next, the SVM classifier with a linear kernel achieved 98% overall accuracy with only one misclassification from the 49 feature datasets. The misclassified data are shown in Figure 15, and the corresponding confusion matrix is shown in Figure 16. The only misclassification is Case 7 of the first normal subject, which was misclassified as the patient class. From the classification results of the trained algorithms, we can clearly see that our WGAS system is robust enough to differentiate patients and normal subjects with the simple features extracted from the raw accelerometer and gyroscope data, as each classifier exhibits 94%-100% accuracy. The comparison and the performance of the algorithms are shown in Table 6 below, where the range data have been used as the input features to the classifiers. Next, we also tried different statistical parameters as the input features to check the impact on classifier accuracy. When only STDEV is used as the input feature, Table 7 suggests that the classification accuracies of all classifiers degraded, while the SVM classification accuracy is now 94%, slightly better than the other classifiers. When both range and STDEV are used as the input features, the SVM's accuracy improves to 98%, and all classifiers have higher than 94% accuracy. The best gait classification accuracy still occurs when only range data are used as input features; in that case, the BP-ANN outperformed all other classifiers with 100% accuracy. Finally, we need to compare the running time of each of the classifiers to see which algorithm is the fastest at performing this gait classification in real time; we also compared the speed when different input features are used to train the classification algorithms, and the results are shown in Table 8. It is apparent that the simple but very fast BP-ANN classifier appears to be the best classifier to differentiate patients with balance disorders vs. normal subjects in real time. SVM also achieved 100% precision and specificity, but 98% accuracy; however, it is significantly slower than BP-ANN, KNN or BDT. We are now collecting more gait data from patients and normal subjects to improve the data statistics and to ascertain if BP-ANN and SVM would still be the best algorithms for real-time patient classification. Table 9 shows the results of this work compared with previously-published work in the literature.
Mannini et al. [22] used three IMUs, each featuring a tri-axial accelerometer and a tri-axial gyroscope, and collected raw data from the testing subjects. They achieved 90.5% classification accuracy using the RBF (radial basis function) kernel SVM classification algorithm by including time-domain features, such as the mean value, STDEV, maximum, minimum and range, in their feature extraction. Tahir et al. [23] used both ANN and SVM classifier algorithms for gait classification in Parkinson's disease patients. They used the SVM classifier with the RBF kernel to distinguish normal subjects and patients based on kinetic features. They also used an ANN with the Levenberg-Marquardt training algorithm and achieved 98.2% and 96.9% classification accuracy, respectively.
Bregg et al. [24] applied an ANN and SVM for the automatic recognition of young-old gait types from their respective gait patterns. Minimum foot clearance (MFC) data of young and elderly participants were analyzed using a PEAK-2D motion analysis system during a 20-min continuous walk on a treadmill at a self-selected walking speed. Gait features extracted from Poincaré plot images were used to train the SVM and ANN. Cross-validation test results indicate that the generalization performance of the SVM was on average 83.3% (±2.9) in recognizing young and elderly gait patterns, compared to a neural network's accuracy of 75.0% (±5.0). The same research group of [24] used a synchronized PEAK 3D motion analysis system and a force platform during normal walking for young and elderly subjects and achieved 83.3% vs. 91.7% generalization performance for ANN and SVM, respectively, as reported in [25]. Hasin et al. [26] used both SVM and ANN for gait recognition by extracting geometry and texture features from the frame sequence of a video of the person walking. They used a polynomial SVM of order three and a BP-ANN with SCG training and achieved an overall accuracy of 98% for both classifiers. Huang et al. [27] built intelligent shoes for human identification under the framework of capturing and analyzing human gait. The data in that work were collected from different sensors, such as a pressure sensor, a tilt angle sensor, three single-axis gyros, one tri-axial ACC and a bend sensor installed in the shoe. Principal component analysis (PCA) was used for feature generation, and SVM was applied for training and classifier generation. They were successful in achieving a 98% human identification rate. Lugade et al. [28] used an ANN to determine dynamic balance control, as defined by the interaction of the center of mass (CoM) with the base of support (BoS), during gait in the elderly using clinical gait evaluations.
Subjects were asked to walk at a self-selected comfortable speed across a 10 m walkway in that work. During ambulation, 29 retro-reflective markers were placed on bony landmarks of the body, and 3D marker trajectories were captured with an eight-camera motion analysis system (Motion Analysis Corp, Santa Rosa, CA, USA). The BP-ANN was able to correctly identify the interaction of the CoM with the BoS of elderly subjects with an 89% accuracy rate. Muhammad et al. [29] used 25 reflective markers placed on the body, and data were acquired from the Vicon Nexus 3D motion capture system to analyze the gait patterns and the kinetic and kinematic parameters of the hip, knee and ankle joints of patients and normal subjects. They used an ANN to predict the gait patterns with approximately 95% accuracy. Ahlrichs et al. [30] used one tri-axial accelerometer device worn on the waist of the testing subjects to detect freezing of gait (FOG) symptoms in people suffering from Parkinson's disease. The acceleration signals from the waist-mounted sensor are split into equally-sized windows (i.e., a sliding window is applied to the time series), and features are extracted from those windows and fed to an SVM for training or classification. The RBF kernel SVM achieved 98% accuracy in detecting the symptoms for patients with Parkinson's disease.
We have attempted to validate the measured WGAS data against data taken from a video-based kinematic reference system, i.e., a Vicon motion capture system with cameras. However, so far, we have found it very difficult to validate the WGAS data directly this way, as the Vicon camera-based system measures the movements of markers on the body, but not the force-based acceleration data measured by our WGAS. Therefore, a limitation of the current work is the lack of a direct validation of the values of the extracted features against measurements provided by another reference system (e.g., a kinematic system). However, we plan to continue the next stage of the research by including this validation step against another kinematic system and/or by using a different validation method. In addition, to improve the data statistics of our work, we compared the performance of the classifiers with additional data from normal subjects, as shown in Table 10. The total dataset has four patients and twelve normal subjects, and the overall accuracies are still improved compared with the results in Table 6. Table 10. Comparison of the classification algorithms using performance metrics when the range data are used as the input features (4 patients and 12 normal subjects).
Conclusions
The results presented in this work using our custom WGAS gait analysis system with artificial neural networks (ANN) and other classifiers, such as SVM, suggest that our low-cost system can successfully classify/detect patients with balance disorders from normal subjects with very high accuracy and very low latency. In this study, we used six simple features from our raw WGAS data collected during the DGI tests for seven subjects with the SCG-trained BP-ANN and/or linear SVM classifiers, which achieved impressive 100% and 98% overall accuracies, respectively. In addition, the KNN and binary decision tree (BDT) algorithms obtained good overall accuracies of 96% and 94%, respectively. We also studied the performance of these classification algorithms when different input features (i.e., STDEV and range + STDEV) are used to train the classifiers; the results shown in Tables 7 and 8 indicate that BP-ANN appears to have the best combined overall performance of classification accuracy and speed. We also compared our gait classification results against prior works in the literature and confirmed that our work achieved very high classification accuracies, albeit with a limited subject size. We are in the process of collecting more data from patients with balance disorders. One of the goals of our study is to provide physicians and patients with a cost-effective means to identify dynamic balance issues and the possible risk of falls from data routinely collected in clinical examinations, such as the DGI tests reported in this work. In the future, we also plan to use our low-cost WGAS and classifiers, such as BP-ANN, to differentiate patient-specific gait issues and to potentially form a powerful expert system capable of real-time gait analysis to assist quantitative diagnosis, monitoring and assessment of fall risks, and also to potentially help suggest effective fall prevention schemes.
Therapeutic management of botulism in dairy cattle
Aim: To report the successful recovery of a few dairy cattle from botulism in response to a modified therapeutic strategy. Materials and Methods: Seventy-four naturally-occurring clinical cases of bovine botulism encountered during the period 2012-2014 and confirmed by the mouse lethality test became the material for this study. Affected animals were divided into three groups based on the treatment modifications made during the course of the study. Results and Discussion: With the modified therapeutic regimen, 17 animals recovered after 7-10 days of treatment. Clinical recovery took 2-30 days. Animals which were not given intravenous fluid and calcium recovered uneventfully. Cattle which had already been treated with intravenous fluids, calcium borogluconate, and antibiotics did not recover. They either died or were slaughtered for salvage. Conclusion: In cattle with botulism, administration of Vitamin AD3E and activated charcoal aids clinical recovery. Besides, strictly avoiding anti-clostridial antibiotics, fluid therapy, and calcium therapy may facilitate clinical recovery. Upon fluid administration, the pulmonary congestion present in the ailing cattle might have worsened the anoxia. Administration of antibiotics like penicillin, aminoglycosides, and tetracyclines further worsens the neuronal paralysis by increasing the availability of botulinum neurotoxin. Cattle in early botulism have a fair chance of recovery with the modified therapy.
Introduction
Botulism is an exotoxin-induced flaccid paralysis affecting animals and humans. Cattle are no exception to this peripheral neuronal paralysis. Being a bio-warfare agent and the most potent toxin known to date, botulinum neurotoxin (BoNT) plays havoc in living beings. The causative agent, Clostridium botulinum, is a ubiquitous soil-borne pathogen that prefers to grow in decaying organic matter [1]. There have also been a few sporadic and unconfirmed reports of bovine botulism from India [2,3]. Infrequent isolations of the organism are reported from many regions. From aquatic environments, Types C and D have been isolated and reported [4,5]. In the recent past, sporadic incidences and cluster outbreaks of poultry-litter-associated bovine botulism have been reported from different parts of the world [6,7].
The botulism usually ends in fatality as the neuronal paralysis cannot be reversed by available therapeutic options. Conventionally, administration of antitoxin was suggested as the first line of management. Human botulism cases have been treated successfully with antitoxin, mechanical ventilation, and other symptomatic therapeutic measures [8]. But, the availability of antitoxin in developing countries is still a distant dream. However, antitoxin therapy would be successful if it is initiated before the toxin reaches motor-end plate [9].
Many researchers have attempted to save cattle from botulism through symptomatic management. Rare reports of clinical recovery are also available in the literature. In this study, 74 clinical cases of bovine botulism were encountered and treated during the period March 2012-February 2014. Among them, 17 animals recovered clinically following a modified therapeutic strategy. The details of the therapy and its sequel are elaborated and discussed herewith.
Ethical approval
With due approval from Institute Animal Ethics Committee, KMCH College of Pharmacy, Coimbatore, Tamil Nadu, the mouse lethality test was conducted.
Sample collections
Seventy four dairy cattle reported/referred to Teaching Veterinary Clinical Complex, Veterinary College and Research Institute, Namakkal with clinical signs of botulism were included for this study. All these animals were owned by different farmers and reared at different locations. Animals were at different stages of the production cycle and maintained under semi-intensive system. After a detailed clinical examination, clinical materials including blood, feces, and rumen fluid were collected and routine hematological and serum biochemical parameters were estimated [10,11]. Rumen fluid, fecal samples, and environmental samples (poultry droppings, swabs collected from carcasses, and soil swabs) were subjected for bacteriological culture and mouse lethality test.
Mouse lethality test
Swiss albino mice of either sex weighing 20-30 g were inoculated with serum and with inocula prepared from dung and rumen liquor samples of the affected animals. Serum and culture supernatants were injected intra-peritoneally as such, whereas the dung and rumen fluid samples were prepared as inocula in the following manner: 1 ml of cold (4°C) gelatin diluent (0.2% gelatin, 0.4% Na2PO4; pH 6.4) was added to each gram of dung/rumen liquor. It was mixed well to obtain a uniform suspension. The obtained suspension was held at 4°C for 30 min. Clarification of the supernatant was done in a refrigerated centrifuge at 12,000 rpm for 20 min. Trypsin solution (0.25 ml of a 1:250 solution) was added to each milliliter of clarified supernatant. Inoculation involved a single intraperitoneal injection of the suspected material (0.5 ml) after proper preparation as an inoculum. Inoculated mice were observed at 1, 2, 4, 8, 12, 18, and 24 h intervals on the 1st day and thereafter daily for 4 days (96 h) for the development of ruffling of fur, labored abdominal breathing, weakness of limbs, and total paralysis. The death of inoculated mice after exhibiting signs such as ruffling of fur, abdominal breathing, "wasp-waist" appearance, and total motor paralysis confirmed the presence of BoNT [12,13] in the suspected materials. The results of the mouse lethality test are given in Table-1.
Treatments
Treatment for Group I (n=34) was not instituted by the authors, as those animals had already been treated by practicing veterinarians with a variety of drugs and regimens before being referred to the authors. On reception of the case, they were treated with intravenous isotonic fluids and activated charcoal at 1 g/kg BW P.O. The adopted treatment strategy for Group II (n=13) was the administration of intravenous isotonic saline according to the degree of dehydration, along with activated charcoal and B-complex vitamin injections [14]. The treatment for Group III (n=27) was modified based on the response to therapy and necropsy findings of Groups I and II. The treatment regimen for Group III was as follows: activated charcoal (at 1 g/kg body weight PO) for 2 consecutive days; Vitamin AD3E (commercial product composition per ml: Vitamin A: 2.5 lakh IU; Vitamin D3: 25,000 IU; Vitamin E: 100 mg; biotin: 15 mcg) at 10 ml IM for a period of 4-5 days. Antibiotics and intravenous fluids were strictly avoided. Along with these, a trace mineral bolus at 1 PO s.i.d (composition: copper 250 mg; zinc 500 mg; selenium 3 mg; cobalt 60 mg; iodine 50 mg; manganese 600 mg; Vitamin A 8000 I.U; and Vitamin E 500 I.U) and a probiotic bolus containing Saccharomyces cerevisiae at 1 b.i.d P.O were given for a period of 7-10 days. Besides nursing, the response to treatment and clinical recovery characteristics were recorded for each group.
Results
Anamnesis of Group I (n=34) animals revealed that they had been treated with intravenous solutions containing calcium borogluconate and magnesium, antibiotics (gentamicin, streptomycin-penicillin combination, and enrofloxacin), intravenous fluids (isotonic saline, dextrose [5%], dextrose [10%], and Ringer's lactate), and meloxicam. All Group I animals had been treated by practitioners for a period of 2-3 days.
The posture of the animals before initiation of treatment is given in Table-2. The clinical signs manifested were abdominal breathing, scanty unformed dung, tripping gait, rumen atony, sternal recumbency progressing to lateral recumbency in a span of 4-12 h, frequent pedaling of limbs, reduced retractile strength of the tongue, and salivation. The rectal temperature, heart rate, and other vital parameters were within the physiological range, although there were insignificant changes associated with the stage of disease. Hematology and serum biochemical analysis revealed erythrocytosis, increased hematocrit, leukocytosis, mild neutrophilia, lymphocytosis, and monocytosis, though inconsistently across animals. Hemoglobin concentration, mean corpuscular volume, mean corpuscular hemoglobin (MCH), and MCH concentration were unremarkable. A reduction in the blood pH was observed. Elevated serum urea nitrogen and creatinine were observed in ailing animals. Except for hypokalemia, all other biochemical parameters were within the physiological range. The results of the mouse lethality test are given in Table-1. None of the animals in Groups I and II recovered. Moribund animals were either salvaged for slaughter or died after a period of 2-10 days. Among the Group III animals, 17 animals recovered (Figures-1 and 2). Recovery from botulism was characterized by the quality of dung returning to normalcy with colon marks, resumption of voluntary feed intake, reduction in hind limb tripping, restored rumen motility, and rumination. The duration of recovery and the course of treatment ranged from 2 to 30 days and 7 to 10 days, respectively.
Discussion
BoNT remains the most potent neurotoxin known to date for all living beings. The degree of susceptibility of various species is reviewed at length by Num and Useh [15]. Anamnesis of the previous treatment given to the recumbent animals revealed that 34 (45.95%) cattle had been treated with intravenous calcium borogluconate with magnesium solutions, antibiotics, fluids, and anti-inflammatory drugs. Many practitioners opted for intravenous calcium, as botulism mimicked hypocalcemia and other metabolic downers which are listed as differential diagnoses for botulism [16]. However, intravenous calcium therapy in botulism was of no use, as absorption and dissemination of the toxin are favored by calcium [17]. The clinical signs were manifestations of the peripheral neuronal flaccid paralysis caused by BoNT, but laboratory findings were neither pathognomonic nor of any diagnostic significance in this study. Less significant hematological changes were reported by many authors [6,14,16].
In this study, cattle which had already been treated intravenously with calcium and magnesium containing solutions did not recover. Death or salvage for slaughter was the sequel after calcium borogluconate administration in cattle suspected for botulism [18]. Some authors have reported recovery of cows from botulism after 1 week [19,20]. In this study, 17 animals recovered over a period of 2-30 days. In Holstein cows, as stated by Martin, the average duration of illness was 4 days (3-14 days). Besides, clinical recovery was noticed after 7 days and weakness persisted up to 1 month [16].
Many researchers have attempted symptomatic management of cattle with botulism [21]. Jean et al. used injectable Vitamin E and selenium and isotonic fluid therapy without much success [17]. In this study, Group III cattle with botulism were treated with activated charcoal and Vitamin AD3E injections (10 ml IM) daily. No antibiotic was used in Group III, as antibiotics like aminoglycosides, tetracyclines, and procaine penicillin tend to worsen the flaccid paralysis caused by BoNT [14]. As a palliative measure, oral administration of activated charcoal and sodium sulfate, and subcutaneous administration of neostigmine, were the treatment adopted by Senturk and Cihan [22]. Early cases in standing posture with restlessness and in sternal recumbency with alert mentation were treated successfully. One pregnant cow recovered after 30 days and calved successfully. Kummel et al. (2012) reported maintenance of pregnancy and recovery from botulism in a cow [20]. Despite contradicting the recommendations of intravenous fluid administration by earlier researchers, the symptomatic management adopted in this study aided recovery from botulism [14]. This contradiction could be due either to the previous treatment with antibiotics or to compromised pulmonary ventilation caused by respiratory paralysis. This observation was well supported by the presence of severe pulmonary congestion at necropsy (Figure-3). Nevertheless, as reported by many authors, use of antibiotics like penicillins and aminoglycosides further worsened botulism, and administration of intravenous calcium did not favor recovery in botulism, as evidenced by the response in Group I animals [14].
Conclusions
Fatal botulism in cattle can be managed if treated at an early stage. Early signs like tripping gait, passing unformed dung, abdominal breathing, and recumbency should bring botulism into the differential diagnoses. Cattle which assumed lateral recumbency were found to be unsuitable for therapeutic management. Although successful, the modified therapy cannot be claimed superior to antitoxin administration, as the latter was not attempted in this study. It is concluded that symptomatic management with activated charcoal, Vitamin AD3E, microminerals, and probiotic supplementation brings about good clinical recovery in cattle with botulism. Based on the response to therapy, it can be inferred that intravenous fluid therapy did not aid clinical recovery in these animals, as the pre-existing respiratory paralysis and consequent pulmonary congestion aggravated the hypoxia. It is again proved that use of anti-clostridial antibiotics will further reduce the chance of recovery from botulism.
Authors' Contributions
SJP: Experimental design, execution, manuscript preparation and correspondence; MS and GV: Experimental design and guidance; GAB: Contributed in clinical-pathology part; KS: Contributed in microbiological and toxicological part of the study. All authors read and approved the final manuscript.
Effect of Organic Additives on Silicon Combustion in Nitrogen
The work shows some peculiarities of silicon combustion in nitrogen in the presence of additives of organic compounds. Organic compounds of different composition were used. Combustion of the samples was carried out in a constant-pressure bomb. The combustion products are composite powders containing α- and β-Si3N4, SiC, and Si2N2O. The addition of organic additives suppressed coagulation of Si particles, improved the extent of conversion, and promoted combustion of coarse Si powders which cannot be ignited by any other methods. Introduction of organic dopants to Si powders was found to intensify their combustion without significant influence on the combustion temperature. Active transition of silicon to the gas phase occurs in the low-temperature zone of the combustion wave at a temperature lower than the melting point of silicon. At temperatures lower than the melting point of Si, the quenched combustion products contain two types of crystals, SiC and Si3N4. SiC is formed within the low-temperature zone of the combustion wave. SiC is formed by fine crystals and by large spherical particulates composed of bunches of very thin web-like crystals. SiC and Si3N4 formation provides a protective coating on silicon particles, which prevents coagulation as the temperature increases. The experiments have proved that it is enough to introduce 1-7 M of the organic additive to 1000 M of silicon for combustion initiation. Meanwhile, adding different inorganic salts, including ammonium chloride, did not promote combustion. For combustion to continue after initiation, the Si powder must contain organic additives or carbon black; carbon black is necessary for sustaining the combustion.
Introduction
Synthesis of silicon nitride Si3N4 by combustion of silicon powder in nitrogen is known to require dilution of the starting Si powder with a refractory compound [1][2][3][4]. Otherwise, the yield of Si3N4 is normally low. Due to the high combustion temperature Tc attained in the rapidly propagating combustion wave, Si undergoes melting and thus forms an obstacle for penetration of nitrogen gas into the reaction zone. In order to suppress the coagulation of melted Si particles, the starting Si powder is diluted, up to 30-70 wt.%, with the end product (Si3N4). Analysis of the literature and preliminary experiments allowed us to suppose that there exists an alternative approach to the SHS of Si3N4: instead of strong dilution with the final product, we suggest low dilution with organic compounds.
The experiments result in the conclusion that introduction of small additives of organic compounds allows one not only to prevent silicon coagulation and increase nitriding degree but also to carry out combustion of coarse-grained silicon which can't be ignited by any other methods [5,6]. Combustion of silicon powders containing organic dopants in nitrogen gas under pressure was found to yield a mixture of α-Si 3 N 4 -up to 60%, β-Si 3 N 4 -up to 100%, SiC -up to 80% and Si 2 N 2 O -up to 70%. Relative amount of these compounds in combustion product was found to depend on the pressure of nitrogen gas, type and concentration of dopants, combustion geometry, and cooling rate.
The work shows some peculiarities of silicon combustion in nitrogen in the presence of additives of organic compounds.
Experimental
Combustion of silicon powders containing organic additives in nitrogen was performed under nitrogen pressures P(N2) ranging between 20 and 150 atm. In the experiments we used nitrogen gas of commercially pure grade (containing up to 2 vol.% oxygen). Two different Si powders were used in the experiments: powder 1 with a particle size d < 50 μm, and powder 2 with d < 130 μm. Figure 1 shows the size distribution of particles in powders Si-1 and Si-2. Organic and inorganic additives (D) of different composition were used: tetraphenyl silane C24H20Si, tetraphenyl silanol C18H15SiOH, naphthalene C10H8, dipyridyl (C6H4N)2, and naphthol C10H8. Powders (Si+D) were mixed in a porcelain mortar and then charged into a quartz tube of 35 mm in diameter. Combustion of the thus-prepared bulk-density samples was carried out in a constant-pressure bomb preliminarily purged with nitrogen gas. Vertically placed samples were ignited from the top with a tungsten coil via a thin layer of Ti powder (about 1 g in weight). The combustion temperature Tc was determined (with an accuracy of ± 50 K) by the thermoelectric method [7] using W-Re thermocouples (VR-5/20) 100 or 200 μm in diameter placed at the sample center. Combustion products were characterized by XRD (DRON-3M, Cu-Kα radiation) using the Powder Diffraction File (PDF-2) database and by EDS (LEO-1450 electron microscope equipped with an INCA ENERGY 350 analyzer).
Results and Discussion
Preliminary experiments have shown that powder Si-1 can burn in nitrogen gas of P(N 2 ) > 70 atm, but incompletely and with product sintering. The values of combustion temperature Tc measured in the presence of different additives D ([D] = 5 wt.%) are presented in Fig. 2. As it could be expected, Tc values are nearly independent of the type of dopant and fall within the range of 2270 ± 50 K.
In these experiments, combustion of Si + 4 wt.% C 24 H 20 Si mixture was arrested upon rapid depressurization of the reactor. The fracture surface of quenched samples exhibited several colored concentric zones: an outer black zone about 1 mm wide was followed by a narrow blue zone, a wide green zone, and a grey central zone. The data of XRD (Fig. 3) suggest that the blue zone should correspond to SiC (Fig. 3a). Then the phases of α-Si 3 N 4 and β-Si 3 N 4 appear (in nearly equal amounts), the SiC phase remaining predominant (fraction of the green zone adjacent to the blue one) (Fig. 3b). On going to the sample center, the amounts of α-Si 3 N 4 and β-Si 3 N 4 initially grow at nearly the same pace, after which the reflexes from β-Si 3 N 4 and SiC get stronger, along with a decrease in the intensity of the signal from unreacted silicon. Within the grey zone (Fig. 3d), β-Si 3 N 4 is predominant, while the reflexes from α-Si 3 N 4 , SiC, and Si get weaker.
It was established that the outer black layer of starting Si is covered with a thin layer of carbon black (Fig. 4a). The blue zone is formed by fine crystals and large spherical particulates (Fig. 4b,c). As seen in Fig. 4d, the spheres are formed by bunches of very thin web-like crystals. In Fig. 3f, there is the beginning of the green zone, where silicon nitride appears. Here we can see an individual particle of silicon. Some erosion traces are obvious; they are explained by evaporation. Melting traces are not seen. Active transition of silicon to the gas phase occurs in the low-temperature zone of the combustion wave at a temperature lower than the melting point of silicon. At temperatures lower than the melting point of Si, the quenched combustion products contain two types of crystals, of silicon carbide and silicon nitride. Silicon carbide is formed within the low-temperature zone of the combustion wave. SiC and Si3N4 formation provides a protective coating on silicon particles, which prevents coagulation as the temperature increases. The experiments with arresting the combustion front of Si-1 containing the oxygen-containing additives (NH4)2(COO)2 × 2H2O, (COOH)2 × 2H2O and NH4HCO3 were carried out in a quartz glass. These additives are characterized by low decomposition temperatures, so we used samples pressed in the glass. When the first two additives were used, black carbon was observed in the quenched combustion products in the layer next to the starting mixture, and in the following layer SiC was discovered, analogous to the experiments with C24H20Si, though silicon carbide was absent in the final product. The absence of silicon carbide in the final products in the case of oxygen-containing additives was mentioned in paper [8]. When NH4HCO3 was used, black carbon and silicon carbide were not observed.
Powder Si-2 could not be ignited even at P(N2) = 150 atm, nor upon preheating to 573 K or upon an increase in the amount of igniting Ti powder. But in the presence of organic dopants (up to 5 wt.%), powder 2 could be ignited already at P(N2) = 70 atm. For Si-2 powder, the minimal amount of additives necessary for ignition at P(N2) of 90 atm was found to be 1.3 wt.% in the case of C24H20Si and 3 wt.% in the case of (COOH)2 × 2H2O. Therefore, it is enough to introduce 1-7 M of the organic additive to 1000 M of silicon for combustion initiation. Meanwhile, the addition of 5 wt.% H2O, CaCO3, C, NaCl, NH4Cl, NH4HSO4, or NH4HCO3 did not promote combustion. For comparison, 5 mass % of NH4HCO3 and 3-5 mass % of (NH4)2(COO)2 × 2H2O were introduced into Si-2 powder. From the viewpoint of quality and quantity, their elemental composition is the same. At the same amount of gaseous products, combustion could be initiated in the case of 3% of oxalate; in the case of 5 mass % of NH4HCO3 it was not observed. The different effect of the additives on combustion can be connected with their different behavior at high temperatures. E.g., on heating, NH4HCO3 decomposes with the formation of molecular compounds, while on heating of (NH4)2(COO)2 × 2H2O hydrocarbon radicals can be formed, leading to the black carbon formation which was observed in the above-mentioned experiments.
The following experiments were carried out at P(N2) of 70 and 90 atm. The amount of the introduced additives (C24H20Si, C6H4Cl, C10H8, (COOH)2 × 2H2O) was 5 mass %. In this case the combustion temperature did not depend on the additive composition and was equal to that of silicon (Fig. 2). ...powder without the additive. The combustion was realized completely only in the upper layer. In the lower layer the combustion penetrated to some depth, and the combustion wave narrowed to a cone. 5. The sample consisted of two layers. The upper layer of 10 mm in height consisted of Si-1 powder without additives. The lower layer of 50 mm consisted of Si-2 powder without additives. After the combustion of the upper layer, the lower layer failed to burn.
As established, Si-2 powder without additives could not be ignited under the conditions studied; it was not ignited even when black carbon was introduced (Exp. 2) or after the complete combustion of the layer consisting of Si-1 powder (Exp. 5). However, experiment 3 proved that the mixture of Si-2 powder and black carbon can be ignited by the burning layer of silicon and the additive. It is also possible to ignite Si-2 powder without additives in the same way (Exp. 4). The depth of penetration of the combustion front into the lower layer depended on the dispersion of the starting silicon. If silicon of 20 to 40 µm (which could not be ignited at 70 atm) is used instead of Si-2 powder, the combustion penetrates deeper. So, we can conclude that the additives initiate combustion, but black carbon is required to sustain it. In the presence of black carbon, combustion penetrates to the entire depth (Exp. 3). We can suppose that during combustion of the upper layer consisting of silicon and the additive (Exp. 3, 4), active intermediate gaseous products are formed and are carried into the lower layer with the gas flow, where they initiate combustion which can be sustained only in the presence of black carbon. It is well known that different compounds can create ion-radical forms on the black carbon surface by sorption; these forms provide fast, chain-like reactions, and perhaps, due to black carbon, active products are formed in the lower layer. So, black carbon provides continuation of the combustion process. In Experiment 4 the active products which passed from the upper layer to the lower one were consumed and ignition did not start; that is why the combustion which started in the lower layer damped away.
Conclusions
The addition of organic additives suppressed coagulation of Si particles, improved the extent of conversion, and promoted combustion of coarse Si powders. The experiments have proved that it is enough to introduce 1-7 M of the organic additive to 1000 M of silicon for combustion initiation. Carbon black is necessary to sustain the combustion. Active transition of silicon to the gas phase occurs in the low-temperature zone of the combustion wave. Two types of crystals, SiC and Si3N4, are formed in the combustion wave at a temperature lower than the melting point of silicon; they create a protective coating preventing coagulation of silicon particles as the temperature increases.
Analytical Model of Day-ahead and Real-time Price Correlation in Strategic Wind Power Offering
In this paper, the model of strategic wind power offering in the day-ahead (DA) market is proposed considering the uncertainties of wind power production and the price forecasting of the DA and real-time (RT) markets. The wind power deviation in the RT market is settled with the two-price mechanism based on the deviation direction and the relation between the locational marginal prices (LMPs) of DA and RT. Instead of using point forecasting for the DA and RT LMPs, the uncertainties of LMP forecasting are modeled. In addition, the correlation between the forecasting errors of the DA and RT LMPs is directly modeled instead of generating correlated scenarios. Finally, the optimal offered quantity of wind power in the DA market is derived using probability theory based on the probabilistic wind power forecasting. The case study using actual DA and RT price data from the Midcontinent Independent System Operator (MISO) validates the effectiveness of the proposed model. It shows that the correlation of the forecasting errors of the DA and RT LMPs has a significant impact on the wind power quantity offered in DA and on the revenue results.
I. INTRODUCTION
Wind power is substantially increasing in power systems because of environmental policies and the reducing capital cost of wind technology [1]. In the United States, most of the wind power plants are connected in deregulated electricity markets such as the Midcontinent Independent System Operator (MISO) and the Electric Reliability Council of Texas (ERCOT) [2]-[4]. One important issue for wind power producers in these deregulated electricity markets is to maximize their revenue [5]. Most of the electricity markets in the United States are organized with a day-ahead (DA) forward market and a real-time (RT) deviation settlement market which settles the deviations between the actual demand and the DA forecasted amount. A two-price mechanism is used by several European electricity markets to settle the wind power deviations between the DA and RT markets to reduce the stochastic arbitrage potential for wind power producers. More details about the two-price mechanism can be found in [6].
In the market operation, wind power producers utilize the probabilistic wind power forecasting shown in Fig. 1 to reduce their financial loss [7] due to wind power volatility. The percentage value on the right side of Fig. 1 is the probabilistic quantile value in the probabilistic forecasting. In addition, during the DA market offering, not only the actual wind power production is uncertain, but the DA locational marginal prices (LMPs) and the RT LMPs are also uncertain. The DA LMPs are hourly prices, and the RT LMPs in most Independent System Operators (ISOs) in the United States are 5-minute prices. The joint probability distribution function of the DA and RT forecasted LMPs is depicted in Fig. 2. Therefore, in a wind power DA offering method, both the wind production and the price uncertainties from the DA and RT markets should be considered. Previous literature has dealt with wind power offering problems, e.g., [8]. In these studies, a set of probabilistic scenarios was generated to represent the uncertainties of the wind power production and the market prices, in which the computation burden dramatically increases with the number of scenarios [9], [10]. In [11], the optimal wind offer quantity was derived directly from the wind power probabilistic forecasting. In that work, only the wind production uncertainty was considered, and the forward prices and deviation penalty prices were deterministic.
In this paper, an analytical method to obtain the optimal wind offering for the DA market is proposed considering both the uncertainties of wind power production and the LMP forecasting of the DA and RT markets. The main contributions of this paper are twofold: ① both the uncertainties of wind power forecasting and electricity price forecasting are considered using the probabilistic density functions of the forecasting; ② the correlation between the DA and RT prices is analytically modeled instead of generating a set of scenarios.
The rest of this paper is organized as follows: Section II proposes the method to obtain the optimal wind offered quantity considering the uncertainties of wind production and LMP forecasting; Section III performs the case study with the actual MISO historical LMP data; and Section IV concludes the paper.
II. STRATEGIC WIND OFFERING IN TWO-PRICE MECHANISM MARKETS WITH CORRELATED UNCERTAINTIES
The wind power deviation is settled with the two-price mechanism [6]. The power shortage and power excess are settled with the DA or RT LMPs based on the deviation directions. The expected revenue of this mechanism is shown in (1).
where R is the revenue of the wind power owner; π_DA and π_RT are the DA and RT forecasted LMPs, respectively; E(X) is the expectation of random variable X; P_DA is the wind power quantity in the DA market; P_RT is the wind power output in the RT market; f(P_RT) is the probability distribution function (PDF) of the forecasted wind power output; π_RT^+ and π_RT^− are the penalty prices for the positive and negative wind power deviations, respectively; and f_{π_DA,π_RT}(π_DA, π_RT) is the joint PDF of the DA and RT LMPs. Assume that the DA and RT LMPs are independent of the wind power. The first-order derivative of (1) with respect to the offered DA wind power quantity, ∂E(R)/∂P_DA, is given as (4), where F(P_DA) is the cumulative probability function. The optimal condition for the wind power offering is formulated as (5). Based on (2) and (3), (5) can be reformulated, and the offered optimal wind quantity is finally decided by (8). If π_RT and π_DA are Gaussian distributed random variables, E(π_RT^−) is determined by (9) [12] and E(|π_RT − π_DA|) is shown in (11),
where μ_{π_DA} and μ_{π_RT} are the means of the DA and RT LMPs, respectively; σ_{π_DA} and σ_{π_RT} are the standard deviations of the DA and RT LMPs, respectively; ρ is the correlation coefficient of the LMPs; Φ and ϕ are the cumulative distribution function (CDF) and PDF of the Gaussian distribution, respectively; and θ is the standard deviation of π_RT − π_DA considering the correlation. Note that the Gaussian distribution assumption for π_RT and π_DA means that the forecasting errors of the electricity prices follow a Gaussian distribution; it does not mean that the actual historical market prices follow a Gaussian distribution. This assumption for price forecasting is widely used in the literature.
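A hedged numerical sketch of the correlated-Gaussian price model is given below: it evaluates the closed-form mean absolute DA-RT price difference for jointly Gaussian prices (the folded-normal mean) and checks it against Monte Carlo sampling. The price means, standard deviations and correlation are illustrative values, and the sketch is not claimed to reproduce the paper's exact Equations (9)-(11).

```python
# Hedged sketch: E(|pi_RT - pi_DA|) for jointly Gaussian DA/RT prices with
# correlation rho, via the folded-normal mean, checked by Monte Carlo.
import numpy as np
from scipy.stats import norm

mu_da, mu_rt = 30.0, 33.0          # price means ($/MWh), illustrative
sd_da, sd_rt = 3.0, 9.0            # e.g., 10% and 30% of the means
rho = 0.5                          # correlation of the forecasting errors

mu_d = mu_rt - mu_da                                        # mean of pi_RT - pi_DA
theta = np.sqrt(sd_da**2 + sd_rt**2 - 2*rho*sd_da*sd_rt)    # its standard deviation
e_abs = (theta*np.sqrt(2/np.pi)*np.exp(-mu_d**2/(2*theta**2))
         + mu_d*(1 - 2*norm.cdf(-mu_d/theta)))              # E|pi_RT - pi_DA|

# Monte Carlo check of the closed form
rng = np.random.default_rng(0)
cov = [[sd_da**2, rho*sd_da*sd_rt], [rho*sd_da*sd_rt, sd_rt**2]]
da, rt = rng.multivariate_normal([mu_da, mu_rt], cov, size=200_000).T
print(e_abs, np.mean(np.abs(rt - da)))                      # the two values should agree closely
```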
III. CASE STUDY
In this section, the proposed wind power DA offering method is tested using the historical DA hourly and RT 5-minute price data from MISO and the Michigan Hub data [13]. A 115-MW wind power plant assembled from Wind Toolkit [14] is used. The quantile regression probabilistic forecasting method [15] is used to obtain the wind power probabilistic forecasting results. The tests were performed from December 11 to 15 in 2016. The expectations of the DA and RT prices are shown in Fig. 3. The standard deviations of the forecasted DA and RT prices are 10% and 30% of their means, respectively. The optimal quantile values for the DA offering and the actual wind power offering are shown in Fig. 4 and Fig. 5 with different correlation coefficients between the DA and RT LMP forecasting errors. After obtaining the DA offering, the actual wind power output is used to calculate the wind revenue with 20000 samples for the uncertain DA and RT LMPs. Figure 6 demonstrates the revenue results such as the value at risk (VaR), the conditional VaR (CVaR) under the 95% confidence level [8], [16] and the expected revenue with different correlation coefficients. Figure 4 and Fig. 5 show that the wind power is prone to offer a high quantity in the DA market when the DA price expectation is higher than the RT price. In contrast, when the RT market has a higher price expectation, the wind power DA offering is low. Both Fig. 4 and Fig. 5 demonstrate that the wind power offering changes with the correlation coefficient between the DA and RT LMP forecasting errors. In a higher correlation scenario, the wind power will offer a higher amount during the 8th to 17th, 28th to 47th, and 61st to 86th hours. In contrast, during the remaining hours, the wind power offering decreases with the correlation coefficient. The first-order derivative of (8) with respect to the correlation coefficient ρ, ∂F(P_DA)/∂ρ, is shown in (12). When the sign of ∂F(P_DA)/∂ρ is positive, the wind offering increases with ρ; when this sign is negative, the wind offering decreases with ρ. Figure 6 shows that the VaR, CVaR and the expectation of revenue increase with the correlation coefficient between the DA and RT LMP forecasting errors. For instance, the revenue expectation increases by 30.18% from $140241.20 to $182569.50 when the correlation coefficient increases from -1 to 1. Figure 3 shows that there is a positive correlation (ρ = 0.5) between the DA and RT prices. Thus, in the wind power offering, this price correlation between the DA and RT markets should be considered to obtain the optimal wind power offering, which improves the revenue of wind power producers.
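For illustration only, the hedged sketch below shows how VaR and CVaR at the 95% confidence level can be computed from a set of simulated revenue samples; the revenue distribution and all numbers are synthetic and are not taken from the paper's case study.

```python
# Hedged sketch: VaR and CVaR of simulated revenue samples at 95% confidence.
import numpy as np

rng = np.random.default_rng(0)
revenue = rng.normal(160_000, 20_000, size=20_000)   # 20000 synthetic revenue samples ($)

alpha = 0.95
var = np.quantile(revenue, 1 - alpha)                 # VaR: the 5% worst-case revenue threshold
cvar = revenue[revenue <= var].mean()                 # CVaR: mean revenue of the tail below VaR
print(var, cvar, revenue.mean())                      # risk metrics and expected revenue
```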
IV. CONCLUSION
In this paper, a model of strategic wind power offering in the DA market is proposed that considers the uncertainties of both the wind power production and the price forecasts of the DA and RT markets. The optimal wind power offering is derived based on the wind power probabilistic forecast and the Gaussian distribution assumption for the DA and RT price forecasting errors. The simulation results demonstrate that the correlation coefficient between the DA and RT LMP forecasting errors has a significant impact on the DA wind offering. Therefore, in market operation, wind power producers should account for the correlation between the DA and RT price forecasts through the proposed method.
Expression Profile and Potential Roles of EVA1A in Normal and Neoplastic Pancreatic Tissues
The pancreas is a mixed gland with exocrine and endocrine functions. Endocrine dysfunction is associated with diabetes, whereas abnormal exocrine function can lead to pancreatitis. There are various types of pancreatic neoplasms; these include pancreatic ductal adenocarcinoma (PDAC), intraductal papillary mucinous neoplasm (IPMN), solid papillary tumor (SPT), mucinous cystadenoma (MCN), pancreatic neuroendocrine tumor (pNET) and pancreatic acinar cell carcinoma (pACC). However, the pathogenesis of pancreatic cancer is unclear. PDAC has a very poor prognosis, with a 5-year survival rate under 5% (Poruk et al., 2013); IPMN, SPT and MCN have the potential for malignant change, and early detection can be difficult because of their hidden pathogeneses. Although patients with pNET and pACC have a better prognosis than those with PDAC, the rates of metastasis and recurrence after surgery are high (Kuo et al., 2013; Sumiyoshi et al., 2013). Pancreatic islet cells secrete various hormones, including the counter-regulatory hormones insulin and glucagon, which maintain normal blood glucose levels.
Introduction
Glucagon is produced in alpha cells and insulin is produced by beta cells. Type 1 and type 2 diabetes are characterized by a deficiency in islet beta cells and an increase in beta-cell apoptosis.
Autophagy is one of the processes involved in lysosomal degradation of endogenous protein substrates. Dysregulated autophagy has been implicated in the pathogenesis of diverse diseases, including diabetes (Jung et al., 2008). Rivera et al. showed that increased expression of human islet amyloid polypeptide (IAPP) could impair autophagy, but suggested that the scaffold protein p62, which delivers polyubiquitinated proteins to autophagy, may play a protective role against human-IAPP-induced apoptosis (Rivera et al., 2011). Quan et al. reported that beta-cell-specific ATG7-knockout mice developed hyperglycaemia and glucose intolerance (Quan et al., 2013). As a deficiency in autophagy is a proinflammatory condition in beta cells, it may contribute to the development of diabetes by promoting the recruitment of inflammatory cells and enhancing inflammatory activity (Jung et al., 2008).
EVA1A (eva-1 homolog A), which is also known as TMEM166 (trans-membrane protein 166) or FAM176A (Family with sequence similarity 176, member A), is a novel human gene that was originally characterized by Peking University Health Science Center (Beijing, China) (Wang et al., 2007). It is expressed in most normal human tissues and organs in a cell-type and tissue-type specific manner and is involved in cell autophagy and apoptosis (Sun et al., 2012;Chang et al., 2013;Xu et al., 2013;Xie et al., 2014). EVA1A is strongly expressed in the glomerular zona of the adrenal cortex, chromophil cells of the pituitary gland, pancreatic islet cells, the squamous epithelium and esophageal mucosa, the fundic glands, and hepatocytes (Xu et al., 2013). In contrast, underexpression of EVA1A has been reported in several types of human cancer, including gastric cancer and esophageal cancer (Sun et al., 2012;Xu et al., 2013). Restoring EVA1A expression has been shown to significantly inhibit the proliferation of tumor cells through autophagic and apoptotic mechanisms (Wang et al., 2007;Chang et al., 2013;Xie et al., 2014). Together, these reports suggest that EVA1A is a novel positive regulator of programmed cell death.
We previously reported that EVA1A is expressed in pancreatic islet cells (Xu et al., 2013), however its precise location in the islets was unclear, and its role in human pancreatic neoplasms remained to be investigated in detail. In this study, we examined the protein expression profiles of EVA1A protein in normal human pancreatic tissue and tissues from different pancreatic diseases, including PDAC, IPMN, SPT, MCN, pNET, pACC and chronic pancreatitis (CP). We further characterized the location of EVA1A in islet alpha cells. Our findings have provided a foundation for further studies on the specific functions of EVA1A in normal and neoplastic pancreatic tissues.
Human pancreatic tissue samples
Paraffin-embedded pancreatic tumor tissues and adjacent non-tumor tissues were obtained from patients who had undergone surgery at the Peking University Third Hospital, Beijing, China, and had been diagnosed by an experienced pathologist. They included five cases of PDAC, five cases of IPMN, five cases of SPT, five cases of pNET, four cases of pACC and two cases of CP. This study complied with the ethical standards of the Chinese Medical Association (IRB00001052-08044) and national legislation.
Immunofluorescence analysis
The paraffin-embedded sections of normal pancreatic tissues (5 μm thick) were deparaffinized and rehydrated following standard procedures. Antigen retrieval was performed using a pressure cooker at 100°C for 2 min in 0.01 M sodium citrate (Zhongshan Golden Bridge Biotechnology; Beijing, China). After nonspecific blocking in 1% goat serum, the sections were incubated with monoclonal anti-glucagon (Sigma-Aldrich; St. Louis, MO, USA), anti-insulin (Sigma-Aldrich), or polyclonal anti-EVA1A at 4°C overnight. The slides were washed three times in phosphate-buffered saline (PBS) before being incubated with Alexa Fluor 594-conjugated AffiniPure anti-rabbit IgG or Alexa Fluor 488-conjugated AffiniPure anti-mouse IgG (Zhongshan Golden Bridge Biotechnology) for 1 h at room temperature. Nuclei were counterstained with 4′,6-diamidino-2-phenylindole (DAPI; Sigma-Aldrich). After the slides had been washed three times in PBS, they were sealed with coverslips and visualized under a Zeiss confocal system (LSM 510 META; Jena, Germany).
Immunohistochemistry (IHC)
The paraffin-embedded sections of pancreatic tumor tissues and adjacent normal tissues were deparaffinized and rehydrated following standard procedures. Endogenous peroxidase activity was blocked with 3% hydrogen peroxide and nonspecific binding was blocked with 1% goat serum. The sections were incubated with anti-human EVA1A polyclonal antibody at 4°C overnight. After washing three times in PBS, the slides were incubated with EnVision peroxidase/DAB, rabbit/mouse detection kit (Dako Diagnostics; Glostrup, Denmark). The nuclei were counterstained with ammonia water. The specimens were dehydrated and sealed with coverslips. Hematoxylin and eosin (H and E) staining was also performed, and control samples were prepared. The slides were visualized under a Leica microscope (Wetzlar, Germany).
EVA1A is specifically localized in islet alpha cells
We have previously reported that EVA1A protein is strongly expressed in pancreatic islet cells (Xu et al., 2013). In this study, its location was examined in different types of islet cells. Immunofluorescent staining revealed that the fluorescence from EVA1A completely overlapped that from glucagon (Figure 1A), demonstrating that EVA1A colocalized with glucagon. In contrast, EVA1A was not colocalized with insulin (Figure 1B). Furthermore, EVA1A fluorescence was primarily observed in the cytoplasm of alpha cells (Figure 1C). These findings suggested that constitutive expression of EVA1A in islet cells may be essential to maintain the architecture and function of normal alpha cells. Consistent with our earlier report (Xu et al., 2013), we found that EVA1A expression was negative in normal pancreatic acinar and duct cells.
The distribution of EVA1A in pancreatic neoplasms
The expression patterns of EVA1A protein in a variety of human pancreatic tumor tissues and adjacent non-tumor tissues were examined by IHC. A breakdown in the architecture of the tumor tissues was observed, however the staining patterns showed that EVA1A was not expressed in the islet cells of MCN, SPT, IPMN and pNET (Figure 2, left panel). Conversely, it was expressed in their adjacent non-tumor islet cells (Figure 2, right panel). In the PDAC specimens, EVA1A was not detected in either the tumor or adjacent non-tumor islet cells. It was also not detected in pancreatic insulinoma tissues (data not shown).
We also detected EVA1A staining distributed throughout PDAC tissue that invaded the peripheral nerves, suggesting that EVA1A was expressed in neural cells ( Figure 3A). However, this observation requires more samples for further investigation.
The specimens of pancreatic acinar cell carcinoma displayed a different staining pattern (Figure 3B) from those of normal pancreas (Figure 3D) and the other tested pancreatic tumor tissues (Figure 2), as moderate or weak EVA1A immunostaining was detected in the plasma membranes (Figure 3B). The biological significance of this difference in pACC requires further exploration.
The expression of EVA1A in pancreatitis tissue
Our observations had revealed that EVA1A was expressed in the adjacent normal tissues of both benign and malignant pancreatic tumors. Furthermore, the extent of EVA1A staining in the islets of adjacent normal tissues appeared greater than that in normal pancreatic tissue.
Patients with pancreatic neoplasms often present with preoperative pancreatitis or diabetes, both of which can lead to abnormal islet cell morphology and function. This could also account for the absence of islet cells in the tissue specimens observed by H and E staining. Therefore, we examined EVA1A expression in pancreatitis tissue specimens ( Figure 3C). Our results showed that although the islet morphology was altered, the alpha cells displayed strong EVA1A immunoreactivity.
Discussion
EVA1A is a lysosomal and endoplasmic reticulum-associated protein that has been associated with the regulation of autophagy and apoptosis (Wang et al., 2007; Chang et al., 2013; Xie et al., 2014); however, the precise role of autophagy in alpha-cell function is unknown. Therefore, the aim of this study was to evaluate the expression and function of EVA1A in pancreatic cells. Our results showed that the expression profiles of EVA1A protein in normal and diseased human pancreatic tissues had the following characteristics: EVA1A was specifically expressed in islet alpha cells in normal pancreatic tissue; it was strongly expressed in chronic pancreatitis; it was absent in acinar cells in both tumor and non-tumor pancreatic tissues, but was moderately or weakly expressed in acinar cells in pACC tissue; EVA1A was not expressed in IPMN, MCN, SPT and pNET tumor tissues, but was expressed in the islet alpha cells of their adjacent normal tissues; in contrast, it was not expressed in tumor or matched non-tumor tissues in PDAC, but was detected in the perineural invasion of PDAC.
The overexpression of EVA1A, and its specificity in islet alpha cells, suggested it may be implicated in blood glucose dysregulation in diabetes and several types of pancreatic neoplasms. However, some preoperative patients with pancreatic neoplasms also have diabetes, which can cause an imbalance in the transformation between islet alpha and beta cells, leading to an increase in the proportion of alpha cells. Chronic systemic inflammation plays an important role in the pathogenesis of insulin resistance and type 2 diabetes (T2DM). The histology of islets in patients with T2DM exhibits many typical features associated with tissue inflammation, including immune cell infiltration, decreased insulin staining and beta-cell apoptosis (Donath, 2013;Wu et al., 2013). Recurrent pancreatitis is also known to disrupt pancreatic endocrine function. Our results revealed strong cytoplasmic expression of EVA1A and nuclear EVA1A immunoreactivity in islet cells from patients with pancreatitis. The imbalance of alpha-beta cells implied that EVA1A might be involved in the inflammatory reaction.
PACC is a rare form of pancreatic cancer and its pathogenesis is unclear. Pathological diagnosis of pACC is generally performed by trypsin and chymotrypsin staining, which lacks sensitivity and specificity as pACC cells do not always secrete these two enzymes leading to errors in diagnosis (Armstrong et al., 2011;La et al., 2012). Therefore, the expression of EVA1A in the plasma membrane of pACC cells may be a potential marker for the diagnosis of pACC.
The specificity of EVA1A expression in pancreatic endocrine cells, nervous tissue, pituitary adenoma and pheochromocytoma has indicated that EVA1A protein is expressed in a cell-type- or tissue-type-specific manner (Xu et al., 2013). We found that EVA1A was strongly expressed in the perineural invasion of PDAC. A role for EVA1A in cerebral ischemic injury was reported by Li et al., who demonstrated that cell loss could be prevented by blocking EVA1A activity with siRNA (Li et al., 2012). ARX/Arx is a homeodomain-containing transcription factor that is essential for brain development and necessary for the specification and early maintenance of islet alpha cells (Wilcox et al., 2013). The similarity between ARX/Arx and EVA1A in the development of islet alpha cells and the nervous system suggests that EVA1A may also have a role in nervous system development and in the neuroendocrine system; however, this requires further investigation. The development of EVA1A-knockout mice should help to further clarify the role of EVA1A in pancreatic islet alpha cells.
In conclusion, this study has provided compelling evidence that EVA1A is specifically expressed in human pancreatic islet alpha cells. The overexpression of EVA1A protein in the pancreatic alpha cells of matched non-tumor tissues and pancreatitis tissues implied that EVA1A might be involved in the inflammatory response. The specificity of EVA1A expression in the plasma membrane of pACC cells and in the perineural invasion of PDAC suggested that EVA1A could be a potential diagnostic and pathogenic marker in these neoplasms. Further studies of EVA1A expression and function in the islet cells could provide valuable insights into the pathophysiology of the pancreas and its disorders.
Modeling Safflower Seed Productivity in Dependence on Cultivation Technology by the Means of Multiple Linear Regression Model
The results of a study devoted to the evaluation of the reliability of a multiple linear regression model for safflower seed yield prediction are presented. The regression model reliability was assessed by direct comparison of the modeled yield values with the true ones, which were obtained in field trials with safflower during 2010–2012. The trials were dedicated to the study of the effect of various cultivation technology treatments on safflower seed productivity on the irrigated lands of the South of Ukraine. The agrotechnological factors investigated in the experiments included: A – soil tillage: A1 – disking at the depth of 14–16 cm; A2 – plowing at the depth of 20–22 cm; B – time of sowing: B1 – 3rd decade of March; B2 – 2nd decade of April; B3 – 3rd decade of April; C – inter-row spacing: C1 – 30 cm; C2 – 45 cm; C3 – 60 cm; D – mineral fertilizers dose: D1 – N0P0; D2 – N30P30; D3 – N60P60; D4 – N90P90. Regression analysis allowed us to create a model of the crop productivity, which looks as follows: Y = –1.3639 + 0.0213Х1 + 0.0017Х2 – 0.0121Х3 + 0.0045Х4, where: Y is the safflower seed yield, t ha-1; Х1 – soil tillage depth, cm; Х2 – sum of the positive temperatures above 10°С; Х3 – inter-row spacing, cm; Х4 – mineral fertilizers dose, kg ha-1. A direct comparison of the modeled safflower seed yield values with the true ones showed only a very slight inaccuracy of the developed model. The maximum amplitude of the residuals was 0.27 t ha-1. Therefore, we conclude that multiple linear regression analysis can be successfully used for agricultural modeling purposes.
INTRODUCTION
Mathematical modeling is a widely used practice in almost all branches of modern science. One of the most popular mathematical methods of statistical analysis and development of simple models is linear regression, which finds application in solving diverse practical and theoretical tasks [Kutner et al. 2004; Neter et al. 1996]. Agricultural science is not an exception, and linear regression models are also successfully used for statistical data evaluation and forecasting [Mead 2017]. However, linear regression is nowadays considered to be an out-of-date and insufficiently accurate method of modeling natural processes [Lykhovyd 2018]. Most scientists tend to use more modern and complicated methods of non-linear and spatial statistics, for example, artificial neural networks, multiple non-linear fuzzy regression analysis with improved calculation algorithms, etc. [Cheng, Lee 2001; Cross et al. 1995; Gelfand et al. 2010]. However, we should take into account that the above-mentioned methods may often not be available and understandable for everyone. Therefore, we decided to demonstrate the efficiency of linear regression analysis in agricultural science using the example of modeling safflower seed productivity in dependence on the crop cultivation technology.
Methodology of the field trials conduction
The field trials devoted to the investigation of safflower productivity in dependence on the cultivation technology treatments were carried out in the period from 2010 to 2012 at the experimental field of the Institute of Rice of the National Academy of Agrarian Sciences of Ukraine. The coordinates of the experimental field are: latitude 46°08′34″N, longitude 32°57′15″E, altitude 8 m. The trials were carried out in accordance with the common recommendations on scientific work in agronomy [Ushkarenko et al. 2016] in four replications using the randomized split-plot design method. The study investigated the effects of the following cultivation technology treatments on the safflower seed productivity: A – soil tillage: A1 – disking at the depth of 14–16 cm; A2 – plowing at the depth of 20–22 cm; B – time of sowing: B1 – 3rd decade of March; B2 – 2nd decade of April; B3 – 3rd decade of April; C – inter-row spacing: C1 – 30 cm; C2 – 45 cm; C3 – 60 cm; D – mineral fertilizers dose: D1 – N0P0; D2 – N30P30; D3 – N60P60; D4 – N90P90.
The cultivation technology of the crop was common for the irrigated conditions of the South of Ukraine, except for the studied factors. The previous crop was winter barley. Primary soil tillage was performed in accordance with the experimental design. The safflower cultivar Soniachnyi was sown by means of a seed drill at a depth of 5–6 cm. The inter-row spacing width was set according to the design of the trials. The crops were rolled immediately after sowing. Harrowing was performed before the sprouting stage of the crop and repeated at the 2-leaf stage. Two inter-row cultivations were carried out on the plots with wide (60 cm) inter-row spacing. The irrigation of safflower in the trials was performed with a frontal irrigation machine, maintaining the soil moisture at 75–80% of the field water-holding capacity. The safflower seed yields were harvested with a self-propelled combine harvester "Sampo-130". The yield volumes were recorded at standard seed moisture content.
The climate of the zone where the trials were carried out is coastal, moderately continental, and is strongly influenced by the nearby Black Sea. The weather conditions and meteorological indices were recorded at the local meteorological station installed directly on the experimental field of the Institute. The years of the study were characterized as follows: 2010 – extremely wet, 2011 – moderately dry, 2012 – extremely dry. The weather conditions during the studied period are presented in Table 1.
Data processing
The multi-factor analysis of variance (ANOVA) of the crop yield data was performed using the standard methodology within the AgroStat add-on for Microsoft Excel [Kim 2014; Rosner 2006; Ushkarenko et al. 2016]. Statistical evaluation was performed at the 95% reliability level (p < 0.05). The safflower seed productivity was modeled using the results of linear regression analysis, which was conducted with the common least-squares calculations within Microsoft Excel [Draper, Smith 2014; Gelfand et al. 2010; Seber, Lee 2012]. The model of safflower yields was developed as a common linear function Y = b0 + b1X1 + b2X2 + b3X3 + … + bnXn. The accuracy and reliability of the developed regression model were checked by direct comparison of the true crop yield values with the modeled ones.
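For readers who prefer a scripted workflow, the least-squares fit described above (performed in Microsoft Excel by the authors) can be reproduced as follows; the small data array is purely illustrative, since the actual trial observations are reported in Table 2.

```python
# Sketch: ordinary least-squares fit of Y = b0 + b1*X1 + b2*X2 + b3*X3 + b4*X4,
# equivalent to the Excel calculation described above. The data below are
# illustrative only -- the real trial data are given in Table 2 of the paper.
import numpy as np

# Columns: tillage depth (cm), sum of positive temperatures (deg C),
# inter-row spacing (cm), fertilizer dose (kg/ha)
X = np.array([
    [15.0, 1850.0, 30.0,   0.0],
    [15.0, 1850.0, 45.0,  60.0],
    [21.0, 1900.0, 60.0, 120.0],
    [21.0, 1950.0, 30.0, 180.0],
    [15.0, 1950.0, 45.0, 120.0],
    [21.0, 1900.0, 60.0,  60.0],
])
y = np.array([1.10, 1.35, 1.30, 1.75, 1.55, 1.25])   # seed yield, t/ha (illustrative)

# Add an intercept column and solve by least squares
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("b0..b4 =", np.round(coef, 4))
```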
RESULTS AND DISCUSSION
The mathematical processing of the safflower yield data allowed us to determine the effect of the different cultivation technology treatments on the crop yields, as well as to define the coefficients of multiple and pair correlation, regression, and determination (Table 3). We should mention that we calculated the sum of the positive temperatures for the different times of sowing in order to express this factor in mathematical form for further statistical analysis. The statistical analysis proved the significant effect of the studied cultivation technology treatments on the crop yields (Table 2).
The results of the regression analysis showed a strong relationship between the safflower seed productivity and the cultivation technology (Table 3). The coefficient of multiple correlation was 0.8277, and the coefficient of determination was 0.6851. However, the factors of soil tillage and inter-row spacing width had a very slight effect on the seed productivity of safflower, because their coefficients of determination were less than 0.1. Besides, it should be mentioned that inter-row spacing had a negative correlation coefficient (−0.0566). This fact shows that an increase in the inter-row spacing width has a negative impact on the crop productivity. The sum of the positive temperatures (r = 0.6647) and the mineral fertilizer application (r = 0.4525) had the strongest effects on the crop productivity.
The model shows that an increase in the tillage depth by 1 cm, an increase in the sum of the positive temperatures by 1°С, and an increase in the mineral fertilizer application dose by 1 kg ha-1 lead to increases in the safflower seed yield of 21.3, 1.7, and 4.5 kg ha-1, respectively. However, an increase in the inter-row spacing width by 1 cm causes a decrease in the crop productivity of 12.1 kg ha-1.
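A minimal sketch of applying the reported model is given below; the coefficients are taken from the equation quoted in the abstract, while the input values are illustrative rather than trial data.

```python
# Sketch: applying the reported regression model
# Y = -1.3639 + 0.0213*X1 + 0.0017*X2 - 0.0121*X3 + 0.0045*X4
# (coefficients from the abstract; the example inputs below are illustrative).
def safflower_yield(tillage_depth_cm, temp_sum_c, row_spacing_cm, fert_dose_kg_ha):
    return (-1.3639
            + 0.0213 * tillage_depth_cm
            + 0.0017 * temp_sum_c
            - 0.0121 * row_spacing_cm
            + 0.0045 * fert_dose_kg_ha)

# Plowing at 21 cm, 1900 deg C temperature sum, 45 cm rows, 120 kg/ha fertilizer
base = safflower_yield(21, 1900, 45, 120)
# Marginal effect of 1 extra cm of tillage depth: +0.0213 t/ha = +21.3 kg/ha
print(round(base, 2), round(safflower_yield(22, 1900, 45, 120) - base, 4))
```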
The results of the statistical data processing allowed us to determine the peculiarities of the influence of the agrotechnological treatments on the safflower seed yields. It was established that the factor of effective temperatures (X2), with a share of 60.2%, had the strongest influence on the crop productivity. The mineral fertilizer dose (X4) can also be considered a determinant factor for safflower yields, with a share of 27.9%. At the same time, soil tillage (X1) and inter-row spacing (X3) had the least effect on the crop, with a total share of only 5.2%, which is even lower than the share of occasional influence of other factors not accounted for in the study (6.7%).
A comparison of the true and modeled safflower yields showed sufficiently high relevance and accuracy of the developed linear regression model of the crop productivity (Table 4).
The residuals of the seed productivity ranged from −0.22 to 0.27 t ha-1, which is a relatively small discrepancy between the crop yields. Moreover, most of the trial variants showed a much smaller discrepancy between the modeled and true safflower productivity values (within 0.15 t ha-1).
Therefore, the model showed good reliability and accuracy, and it can be used for prediction of the crop yields in dependence on the cultivation technology treatments. We should mention that the model has limitations because it was created for specific climatic and soil conditions; thus, it might be successfully used only for modeling under the conditions of the South of Ukraine.
CONCLUSIONS
The developed multiple linear regression model of safflower seed yields depending on the cultivation technology treatments showed sufficient reliability and accuracy. Testing of the model allows us to conclude that it is suitable for making an approximate forecast of the crop productivity according to the cultivation technology parameters, such as soil tillage depth, inter-row spacing, and mineral fertilizer application dose, taking into account the time of sowing, which can be expressed as the sum of the effective temperatures needed for seed ripening.
Table 1. Weather conditions during the period of the field trials with safflower. Notes: AT – air temperature, AH – air humidity, PA – precipitation amounts.
Table 2. Safflower seed yields in t ha-1 depending on soil tillage, time of sowing, inter-row spacing and mineral fertilizer application doses (average for the studied period).
Table 3. The results of regression analysis of the average safflower seed yields for the studied period depending on the cultivation technology treatments. Х1 – soil tillage depth, cm; Х2 – sum of the effective temperatures above 10°С; Х3 – inter-row spacing, cm; Х4 – mineral fertilizers dose, kg ha-1.
Table 4. A comparison of the true and multiple linear regression predicted values of safflower seed yields depending on the cultivation technology treatments, t ha-1.
Effect of age, sex and physiological stages on hematological indices of Banni buffalo (Bubalus bubalis)
Aim: To determine the physiological baseline values for hematological indices of Banni buffalo (Bubalus bubalis) as well as to assess their alteration due to age, sex and physiological stages. Materials and Methods: A total of 42 clinically healthy Banni buffaloes were categorized into seven groups (n=6): Group I (male calves ≤1 year), Group II (bulls >1 year), Group III (female calves ≤1 year), Group IV (pregnant lactating buffaloes), Group V (non-pregnant lactating buffaloes), Group VI (pregnant dry buffaloes), and Group VII (non-pregnant dry buffaloes). Blood samples collected aseptically from all the experimental groups were analyzed employing automated hematology analyzer. The data obtained were statistically analyzed; the mean and standard deviations were calculated and set as the reference values. Results: The erythrocytic indices viz. total erythrocytes count (TEC), hemoglobin, and packed cell volume (PCV) were significantly higher in bulls as compared to that of male calves unlike mean corpuscular volume, mean corpuscular hemoglobin (MCH), and MCH concentration. The female calves had higher TEC and PCV than the adult buffaloes irrespective of sex. The total leukocyte count (TLC) and neutrophil counts in male calves were significantly lower than the bulls unlike the eosinophil, while monocyte and basophil remained unchanged with age. The TLC, differential leukocyte count and platelet count varied non-significantly among the adult female groups at different physiological stages. However, neutrophils were found to be apparently higher in lactating buffaloes. Conclusion: The present study would be helpful for physiological characterization of this unique buffalo breed of Gujarat. Further, data generated may be a tool for monitoring the health and prognosis as well as diagnosis of diseases.
Introduction
Blood indices are gaining increasing importance in veterinary medicine as indicators of the physiological, nutritional, metabolic, and clinical status of farm animals [1]. Hence, reference values for such indices become imperative for any disease diagnosis, prevention and control program, as they form the very basis for clinical interpretation of laboratory data [2]. It is well known that various factors, viz. age, sex, breed and physiological stage, affect cellular hemodynamics [3,4]. However, pregnancy and lactation are the two most important stages in the life of dairy animals that affect metabolism, resulting in alteration of the hematological parameters [5,6].
Although various studies have been conducted on the blood profile of buffaloes, the "Banni," a unique indigenous breed of Gujarat, India, remains untouched in this aspect. The breed originated from the Banni area of Kutch district, and the total population in the state is about 5.25 lakhs, which is highest in Kutch followed by Sabarkantha, Surendranagar, Kheda and Banaskantha, respectively [7]. It is a black-coated hardy breed with characteristic inverted double-coiled horns, known for its regular breeding cycle and high yielding capacity in harsh climatic conditions. The average milk yield is 10.05 kg/day when maintained under a typical zero-input, locally adapted production system in its breeding tract, which signifies the productive potential of this unique germplasm [8]. The importance of the "Banni" buffalo may also be gauged by the fact that it has been recognized as the 11th buffalo breed of India by the Indian Council of Agricultural Research, New Delhi. Further, with the ever-increasing interest in indigenous livestock breeds as a possible solution to increase the efficiency of production in harsh conditions, there is an urgent need for a basic physiological database on such breeds so as to plan and implement health and disease monitoring programs to ensure the sustainable development of buffalo husbandry and the economic viability of commercial buffalo farming in the country [9].
In view of above considerations, the present study was undertaken to determine reference values for complete blood count of clinically healthy Banni buffaloes as well as to study the influence of age, sex, pregnancy and lactation on different blood parameters of this breed.
Ethical approval
The present study was performed as a part of PG research work, which was approved by Director of Research, Sardarkrushinagar Dantiwada Agricultural University, Sardarkrushinagar, Gujarat.
Place of study
The study was conducted between April 2014 and March 2015 at College of Veterinary Science and Animal House, Sardarkrushinagar in Banaskantha district of North Gujarat, India.
Experimental animals
For the investigation, the experimental animals were selected from Cattle Breeding Farm, Bhuj. The region of the farm stretches between 22° to 24°N and 68° to 71°E and falls in the arid tract of Gujarat with the maximum annual average temperature of 39-45°C and 63% relative humidity.
A total of 42 clinically healthy animals of the farm were randomly categorized into seven groups (n=6): Group I (male calves ≤1 year), Group II (bulls >1 year), Group III (female calves ≤1 year), Group IV (pregnant lactating buffaloes), Group V (non-pregnant lactating buffaloes), Group VI (pregnant dry buffaloes), and Group VII (non-pregnant dry buffaloes). All the buffaloes of the experimental groups were reared under standard feeding and husbandry conditions. The health of the selected animals was regularly examined based on behavior, rectal temperature, pulse rate, respiratory rate and fecal consistency. It should be noted that since seven experimental groups had to be formed at the same time according to age, sex, pregnancy and lactation, the sample size was restricted to six per group because more animals of each category were not available on the farm during the experimental period.
Collection of blood samples
About 10 ml of blood sample was collected aseptically from each animal of all the experimental groups by jugular vein puncture in vials containing tripotassium ethylene-diamine-tetra-acetic acid (P.H. Polyplast, Thane, India).
Data analysis
The results were statistically analyzed using two-way ANOVA as per the method of Snedecor and Cochran [10]. Values of p<0.05 were considered statistically significant.
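For illustration only, a comparable analysis can be scripted as follows; the data frame below is a made-up subset, not the study data, and the original analysis followed the methodology of Snedecor and Cochran rather than this simplified two-factor layout.

```python
# Sketch: an analysis-of-variance comparison of a hematological index across
# experimental groups, roughly equivalent to the two-way ANOVA described above.
# All values are illustrative, not the study data.
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "group": ["I"]*3 + ["II"]*3 + ["III"]*3 + ["IV"]*3,          # illustrative subset
    "sex":   ["male"]*6 + ["female"]*6,
    "age":   ["calf"]*3 + ["adult"]*3 + ["calf"]*3 + ["adult"]*3,
    "tec":   [7.1, 6.8, 7.3, 8.0, 8.2, 7.9, 7.4, 7.2, 7.5, 6.4, 6.3, 6.6],  # 10^6/uL
})

# Two-factor ANOVA on sex and age (a simplified stand-in for the full design)
model = ols("tec ~ C(sex) + C(age)", data=df).fit()
print(anova_lm(model, typ=2))
```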
Results
The mean±standard error values of the erythrocytic indices, leukocytic indices and PLT counts of the experimental groups are presented in Tables 1–3, respectively. It was observed that the erythrocytic indices viz. TEC, hemoglobin (Hb) and PCV were significantly (p<0.05) higher in bulls than in male calves, unlike MCV, MCH, and MCHC. The female calves had higher TEC (p<0.05) and PCV than the adult buffaloes irrespective of physiological stage. Conversely, adult females were found to have significantly (p<0.05) higher MCH than the calves. However, none of the erythrocytic indices differed significantly between male calves and female calves, although TEC, Hb and PCV were apparently higher in female calves. Similarly, the TEC, Hb and PCV values of pregnant buffaloes were also non-significantly higher than those of non-pregnant buffaloes. Further, non-significant variation was also observed in the case of MCH, MCV and MCHC among the adult female buffalo groups during different physiological stages.
Table 2 indicates that the TLC and neutrophil counts in male calves were significantly (p<0.05) lower than in bulls. The eosinophil counts of adult buffaloes were also numerically higher than those of calves. On the contrary, lymphocyte and PLT counts were apparently higher in calves than in adults, while monocyte and basophil counts remained unchanged with age. The TLC and DLC showed no significant variation among the adult female groups at different physiological stages. However, neutrophils were found to be higher in both groups of lactating buffaloes than in the non-lactating groups, while lymphocyte counts were relatively higher in young animals than in adults. The eosinophil count was apparently lower in female calves than in male calves. Conversely, the pregnant buffaloes had numerically higher eosinophils than the non-pregnant ones irrespective of lactation. Nonetheless, the eosinophil and monocyte counts of the experimental groups were within the physiological range specified for buffalo.
Table 3 shows that the PLT count ranged from 390.33±28.73 (pregnant dry buffaloes) to 648.33±108.40 (male calves). However, the PLT count of male calves was non-significantly higher than that of bulls and female calves. The PLT count in female calves was numerically higher than in the adult female buffaloes. Conversely, there was no significant difference between the groups of adult female buffaloes.
Discussion
The importance of the indigenous gene pool of different livestock species has been acknowledged across the globe. In recent years, realizing the importance of indigenous buffalo germplasm for their high milk yield potential and ability to adapt in harsh agro-climatic condition of Gujarat, the breed improvement program on a massive scale is being implemented throughout the state. The enhancement of milk yield through improved management practices and effective disease prevention, control and treatment program have also been given priority. This denotes the significance of reference values on hematological indices as these are useful in determining the general health status of the animals [11] besides being an aid for differential diagnosis of clinical conditions as well as for monitoring response to therapy [12].
The present study revealed that the mean TEC, Hb and PCV of calves and adult buffaloes were in agreement with Mohan et al. [13] in Murrah buffalo calves and Paul et al. [14] in adult buffaloes. However, relatively lower levels of these indices were recorded by Chandra et al. [15] in Murrah buffaloes at different ages. The mean values of MCV, MCH, and MCHC were found to be in accordance with the reports of Haque et al. [16] and Ellah et al. [17]. Nonetheless, significantly (p<0.05) higher TEC was observed in bulls as compared to male calves. This may be attributed to the role of androgens in enhancing the growth of erythroid progenitor cells in the presence of erythropoietin, leading to a higher rate of erythropoiesis, which in turn results in increased levels of Hb and PCV in bulls, as observed in the current study [18]. Further, the significantly (p<0.05) higher TEC of female calves than of adult female buffaloes was in accordance with previous reports [19,20]. Similarly, significant alteration of TEC, Hb and PCV between male and female calves was also reported by Beechler et al. [11].
The pregnant buffaloes in both the lactating and dry groups had apparently higher TEC, Hb and PCV as compared to the non-pregnant buffaloes of the same groups. Higher levels of TEC in pregnant buffaloes were also reported by Patil et al. [21], unlike Kumar et al. [22], who recorded the opposite trend. Similar to our findings, Mbassa and Poulsen [23] also recorded higher TEC in pregnant lactating animals than in non-pregnant lactating and non-pregnant dry animals and concluded that this might be due to maternal adaptation to pregnancy in order to meet the requirements of the growing fetus. Further, the fetal growth that occurs during pregnancy produces greater oxygen demands. This greater need for oxygen is compensated by the endocrine system, which stimulates the release of erythropoietin by renal tissue [24]. The secretion of this circulating glycoprotein stimulates increased production of erythrocytes in the bone marrow, resulting in a rise of TEC during pregnancy [25]. Moreover, the apparently higher Hb and PCV values observed during pregnancy may be correlated with the higher TEC in this group of buffaloes. Kopp and Hetesa [26] and Chineke et al. [27] documented that a high PCV reading may indicate an increase in the number of circulating RBCs. Nonetheless, the values of MCV, MCH, and MCHC did not vary significantly among the experimental groups of adult female Banni buffaloes. Randhawa et al. [28] also reported that the values of MCV and MCH were not influenced by the lactation and dry stages in crossbred cows. The circulating TLC generally represents the outcome of dynamic production by the bone marrow, the release of the cells into the peripheral circulation and their storage in different organs or pools. Sex differences in immune function are well established in vertebrates [29]. As far as the leukocytic indices of Banni buffaloes are concerned, the recorded data were comparable with those of earlier studies in different breeds of buffaloes [13,16,17,30,31].
The mean values of TLC and neutrophil counts were found to be significantly (p<0.05) higher in the bulls than in the male calves, which was comparable with the study of Jacob [32], in which Gir bulls were found to have a higher TLC than male calves of different ages. Higher lymphocyte counts in calves than in crossbred cows were reported by Shil et al. [33], which corroborates the finding of the current study. Eosinophils were apparently higher in adult buffaloes than in calves, which was consistent with the study of Canfield et al. [34], who also found higher eosinophil counts in adult female buffaloes than in female calves. The increase in the number of eosinophils recorded in the current study with the advancement of age might be an adaptive response of the body to the various parasitic loads and allergens to which it is exposed over a period of time. However, there was no obvious clinical sign of disease in the experimental animals when they were sampled. Lactating buffaloes were found to have higher neutrophils than the dry buffaloes. The rise in neutrophil counts at lactation might be due to lactational stress leading to the release of endogenous corticosteroids [35]. The monocyte counts recorded in this study were in accordance with those of Ellah et al. [17] in heifers and Ali and Shukla [30] in normal cyclic post-partum buffaloes.
Current findings on PLT counts were in tune with those recorded by Das et al. [31] in lactating Mehsani buffaloes. Male calves had non-significantly higher PLT values than bulls, which was in line with the study of Mikniene et al. [36] in horses, where foals were found to have significantly higher PLT than the adults.
Conclusion
It may be concluded that age, sex and physiological stage alter the hematological indices of Banni buffaloes. The data generated during the current investigation may be useful as reference values for the scientific community, as this is the first study of its kind in this breed of buffalo. Further, it may assist clinicians in assessing the health status of buffaloes as well as in the differential diagnosis of clinical conditions.
Mutations in Exons 8 and 11 of c-kit Gene in Canine Subcutaneous Mast Cell Tumors and Their Association with Cell Proliferation
Simple Summary Mast cell tumors (MCTs) are one of the most common skin tumors in dogs with variable clinical behavior ranging from benign lesions to those causing widespread metastasis. Prognostic factors have been intensively studied in cutaneous MCTs but are less commonly investigated in subcutaneous MCTs as the majority are benign. Activating mutations in exons 8 and 11 of c-kit, a gene that regulates proliferation and differentiation of mast cells, occur commonly in canine cutaneous MCTs and are strong predictors of prognosis. c-kit mutations have rarely been reported in subcutaneous MCTs. The goal of this study was to identify the prevalence of c-kit mutations in exons 8 and 11 in 216 canine subcutaneous MCTs and to investigate their association with other prognostic factors, including mitotic count, histologic grade, KIT pattern and proliferation markers. We detected c-kit mutations in exons 8 and 11 in 23 (10.6%) and 12 (5.56%) subcutaneous MCTs, respectively. c-kit mutations in exon 11 were associated with histologic high grade and a high mitotic count, suggesting that these parameters can predict the biological behavior of subcutaneous MCTs in a similar manner as in their cutaneous counterparts. Abstract The prognostic significance of internal tandem duplication (ITD) mutations in exons 8 and 11 of c-kit has been well-described for canine cutaneous mast cell tumors (MCTs), but c-kit mutations have rarely been reported in subcutaneous MCTs. The objective of this study was to identify the prevalence of ITD mutations in exons 8 and 11 of c-kit in canine subcutaneous MCTs and to investigate its association with histologic grade, KIT pattern, and proliferation markers. ITD mutations in exons 8 and 11 of c-kit, mitotic count, Ki67 index, AgNOR number, Ki67xAgNOR score, KIT pattern, and histologic grade (two-tier system) were retrospectively recorded for 216 dogs with subcutaneous MCTs. ITD mutations in exons 8 and 11 of c-kit were detected in 23 (10.6%) and 12 (5.56%) subcutaneous MCTs, respectively. Exon 11 mutations were significantly associated with Kiupel high grade (p < 0.001) and increased mitotic count (p < 0.001) compared to subcutaneous MCTs with no mutations in exons 8 or 11 (p = 0.002) or subcutaneous MCTs with a mutation in exon 8 (p = 0.001). There was no significant association of either c-kit mutation with KIT patterns or proliferation activity. This study identified a higher prevalence of ITD mutations in exons 8 and 11 of c-kit in subcutaneous MCTs than previously reported. Like their cutaneous counterpart, subcutaneous MCTs with exon 11 mutations were more likely to be histologically high grade and have a higher mitotic count, whereas such associations were not observed in subcutaneous MCTs with exon 8 mutations.
Introduction
Mast cell tumors (MCTs) are one of the most common skin tumors in dogs with variable biological behavior, ranging from benign localized masses, for which surgical excision is curative, to aggressive tumors with increased recurrence, metastasis, and mortality [1,2]. Based on their location within the skin, MCTs have been divided into cutaneous and subcutaneous tumors. This anatomical division is important, as more than 90% of subcutaneous MCTs are considered benign and can be controlled with complete surgical excision [3][4][5][6]. However, this still leaves almost 10% of dogs diagnosed with subcutaneous MCTs succumbing to MCT-related disease [6]. Based on the published literature, approximately 8-9% of subcutaneous MCTs will recur, and 4-6% will metastasize [5,6]. Lastly, 11% of dogs with subcutaneous MCTs will develop another MCT [6].
Histologic grading has been the primary tool to prognosticate canine MCTs [1,2,[7][8][9]. While the Patnaik grading system applied the histologic grading to both cutaneous and subcutaneous MCTs, the more recent two-tier system has been shown to be prognostically significant for cutaneous MCTs only [1,[7][8][9]. However, some criteria that are being evaluated as part of the two-tier grading system, such as a mitotic count (>4 in 10 HPFs) and multinucleated giant cells (at least one cell in 10 HPFs), have been shown to be prognostically significant in subcutaneous MCTs, albeit with different established cut-off values [6,10]. Studies evaluating the prognostic significance of the two-tier system for canine subcutaneous MCTs are lacking. In addition to histologic grading, proliferation markers, such as argyrophilic nucleolar organizing regions (AgNORs) and Ki67, have been successfully utilized to prognosticate subcutaneous MCTs [5,10]. A Ki67 labeling index of 21 and a combined AgNORxKi67 score above 54 have been associated with more aggressive behavior in cutaneous MCTs [1,11].
The tyrosine kinase receptor KIT plays an important role in mast cell proliferation, differentiation, migration, and survival [12–15]. As such, tyrosine kinase inhibitors are a popular targeted treatment modality in MCT treatment [12,16–18]. Aberrant expression of the KIT protein as determined by immunohistochemistry (IHC) has been associated with a negative prognosis [10,11,15]. Activating mutations, predominantly internal tandem duplications (ITD) and, less often, small insertions/deletions, of the c-kit gene have been identified in exons 8, 9, 11, 12, and 17, with the highest frequency in the first three [9,13–15,19–25]. ITD mutations in exon 11 of c-kit have been reported to occur in approximately 20–45% of canine cutaneous MCTs and have been associated with aberrant KIT protein localization, higher grade, and higher proliferation activity as evidenced by a higher Ki67 index and AgNORxKi67 score [9,12,13,15,21,25,26]. Dogs with cutaneous MCTs with ITD mutations in exon 11 of c-kit have an increased incidence of recurrent disease, decreased survival times, and a high risk for MCT-related mortality [2,15,27]. This is in stark contrast to ITD mutations in exon 8 of c-kit, which have been detected in up to 33% of canine cutaneous MCTs but have not been associated with poor prognosis [1,13,27]. Mast cell tumors with exon 8 c-kit mutations were associated with longer overall survival times in dogs with cutaneous MCTs than those without exon 8 or 11 mutations [1]. MCTs with exon 8 mutations of c-kit had a lower histologic grade and proliferation activity and, less often, an aberrant KIT localization [27].
Interestingly, c-kit mutations have rarely been reported in canine subcutaneous MCTs [22,24]. An ITD mutation in exon 8 has recently been reported in a single subcutaneous MCT [22], while previous studies did not detect such mutations [9,10].
Case Selection
The study population was comprised of subcutaneous MCTs submitted as surgical biopsies to the Michigan State University (MSU) Veterinary Diagnostic Laboratory (VDL) between April 2011 and December 2019. Inclusion criteria required the submitted MCT to be located in the subcutis with no involvement of the overlying dermis or deeper fascial planes. All subcutaneous MCTs included in this study were submitted for a prognostic MCT panel that included histologic grading, KIT expression pattern and Ki67 index both determined by immunohistochemistry, and AgNOR count, as well as the calculated combined AgNORxKi67 score.
Signalment
The breed and age of the dog at histologic diagnosis were recorded for each biopsy specimen.
Histologic Grading
The histologic grade was recorded even though it has not been validated for subcutaneous MCTs. Histologic grading was performed on HE-stained sections of all 216 MCTs in accordance with the 2-tier grading system by Kiupel et al. for cutaneous MCTs [7]. A high histologic grade was assigned to subcutaneous MCT that met at least one of the following published criteria: at least 7 mitotic figures in 10 high-power fields (hpfs, equal to 2.37 mm 2 ), or karyomegaly (i.e., nuclear diameters of at least 10% of neoplastic cells vary by at least two-fold), or at least 3 bizarre nuclei in 10 hpfs, or at least 3 multinucleated (3 or more nuclei) cells in 10 hpfs [7]. MCTs that did not meet these criteria were diagnosed as histologically low grade.
KIT Expression Patterns
Cellular localization of the KIT protein was determined by IHC labeling as previously described [11]. Cellular localization patterns are classified as follows: 1. KIT pattern I: predominantly membranous labeling; 2. KIT pattern II: focal to stippled cytoplasmic labeling with decreased membrane-associated labeling; and 3. KIT pattern III: diffuse cytoplasmic labeling [1,11].
Ki67 Index, AgNOR Count, and Combined AgNOR x Ki67 Score
Ki67 expression was determined by IHC labeling as previously described [11], and Ki67-positive cells were quantified as the average number of positively labeled neoplastic nuclei per area of a 1 cm² optical grid reticle at a magnification of 40× (5 grid areas counted) in the highest-labeling area [1,11]. Histochemical staining for AgNORs was performed as previously described [1,11]. AgNORs were counted in 100 randomly selected neoplastic mast cells throughout the tumor at 1000× magnification. Individual AgNORs were resolved by focusing up and down while counting within individual nuclei. Average AgNOR counts per cell were then determined by averaging the counts within these 100 randomly selected neoplastic cells [1,11]. Quantification of tumor proliferation (combined AgNOR x Ki67 score) was performed by multiplying the Ki67 index by the AgNOR count.
Screening for Mutations in Exons 8 and 11 of c-kit
ITD mutations in exons 8 and 11 of c-kit were determined by polymerase chain reaction (PCR) using previously described primer pairs [19]. For exon 8, the primers were located in introns 7 and 8 and amplified the previously reported 12-bp duplication mutation in canine MCTs [13,27]. For exon 11, the primers flanked exon 11 and the 5′ end of intron 11, which amplifies the previously described region of the ITD mutation in canine MCTs [14,15]. Amplifications for both primer sets were performed using the Type-it Mutation Detection PCR Kit (Qiagen) as previously described [15,27]. PCR products were visualized on the QIAxcel Capillary Electrophoresis System (Qiagen) [19].
Statistical Analyses
Categorical variables were summarized as frequency (percentage), and numerical variables were summarized as median (range). For each case, the histologic grade (2-tier Kiupel system), mitotic count, Ki67 index, AgNOR numbers, combined AgNOR x Ki67 score, KIT pattern (1, 2, 3), and c-kit mutation status for exons 8 and 11 were determined. The distribution of the aforementioned tumor markers and signalment were compared in subcutaneous MCTs with or without c-kit mutations. Specifically, MCTs with exon 8 and exon 11 mutations were compared to each other and to subcutaneous MCTs without exon 8 or 11 mutations.
Mann-Whitney U test was used to determine differences in continuous variables (i.e., age, mitotic count, Ki67 index, AgNOR numbers, and AgNOR x Ki67 score), whereas categorical variables (i.e., histologic grade, breed, and KIT pattern) were assessed by means of chi-squared/Fisher's exact test. Significance was set at p < 0.05.
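As a rough illustration of the two kinds of tests described above, the sketch below applies a Mann–Whitney U test to made-up mitotic counts and a Fisher's exact test to a 2×2 grade-by-mutation table approximately reconstructed from the counts reported in the Results; it is not a re-analysis of the study data.

```python
# Sketch: the two kinds of tests described above, applied to illustrative data.
from scipy.stats import mannwhitneyu, fisher_exact

# Mitotic counts for tumors with an exon 11 mutation vs. no mutation (illustrative)
mc_exon11 = [2, 1, 3, 10, 2, 4, 1, 5, 2, 6, 3, 2]
mc_no_mut = [1, 0, 1, 2, 1, 0, 1, 3, 1, 0, 2, 1]
u_stat, p_mc = mannwhitneyu(mc_exon11, mc_no_mut, alternative="two-sided")

# High- vs. low-grade counts by mutation status, reconstructed from the Results
#                 high grade   low grade
table = [[ 8,            4],   # exon 11 mutation (12 tumors, 8 high grade)
         [32,          149]]   # no mutation in exons 8 or 11 (181 tumors)
odds_ratio, p_grade = fisher_exact(table)

print(f"Mann-Whitney p = {p_mc:.4f}, Fisher's exact p = {p_grade:.4f}")
```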
Results
Two hundred and sixteen samples diagnosed as subcutaneous MCTs were included in this study. Represented breeds included 67 mixed-breed dogs and 37 Labrador Retrievers, among others. Labrador Retrievers had a high prevalence of subcutaneous MCTs with c-kit mutations, with 8 of 37 (22%) MCTs having exon 8 mutations and 4 of 37 (11%) MCTs having exon 11 mutations. A distinction between black and yellow Labradors was not available from this database. No other breed predilections were identified with our sample size.
The median age of dogs with subcutaneous MCTs with c-kit mutations in exon 8 was 5 years, which was significantly lower than the median age of dogs with no mutations in exons 8 or 11 (8 years, p = 0.001). Dogs with subcutaneous MCTs with c-kit mutations in exon 11 also had a significantly higher median age (9 years, p = 0.017).
The median mitotic count was 1 for both low-grade and high-grade subcutaneous MCTs (range, 0–49 for high-grade and 0–6 for low-grade). Of the high-grade subcutaneous MCTs, 29/43 (67.4%) had a mitotic count of 7 or higher, while none of the low-grade tumors had a count of 7 or higher. Exon 11 c-kit mutations were also associated with an increased mitotic count (median mitotic count, 2, range 1–10; Figure 2) compared to mutations in exon 8 (median mitotic count, 1, range 0–11; p = 0.001) or dogs with no mutations in exons 8 or 11 (median mitotic count, 1, range 0–49; p < 0.001). There were no statistically significant differences in histologic grades between subcutaneous MCTs with a mutation in exon 8 and those with no mutations in exons 8 or 11.
Figure 1. Histogram of the percentage of low-grade (blue) and high-grade (orange) subcutaneous mast cell tumors associated with each of the three groups of different mutation status. A high grade was significantly associated with an internal tandem duplication mutation in exon 11 (p < 0.001), but not with a mutation in exon 8 or with no mutations in exons 8 and 11.
Figure 2. Boxplot of mitotic count per 10 high-power fields in subcutaneous mast cell tumors with different mutation status. Internal tandem duplication mutations in exon 11 (gray) were significantly associated with a higher mitotic count compared to exon 8 (orange) mutations (p = 0.001) and no (blue) mutations (p < 0.001), but there was no difference between the latter two groups.
Discussion
This is the first large-scale study investigating the c-kit mutation status in canine subcutaneous MCTs. In contrast to previous studies, we identified ITD c-kit mutations in exon 11 and exon 8 in 5.6% and 10.6% of examined cases, respectively. Compared to the study population, Labrador Retrievers had a higher prevalence at 11% and 22%, respectively. As Labrador Retrievers have been reported to have a higher risk for developing MCTs, and low-grade MCTs in particular, larger-scale studies are necessary to determine whether MCTs in this breed have a higher prevalence of c-kit mutations than MCTs in other breeds [28,29]. The high prevalence of subcutaneous MCTs with a mutation in exon 8 is especially unusual, as approximately 4% of cutaneous MCTs carry this mutation [13]. Furthermore, the prevalence of subcutaneous MCTs with either mutation was less than half the prevalence that has been reported for cutaneous MCTs [1,13,15,19,27]. Interestingly, the ratio of subcutaneous MCTs with a mutation in exon 11 to subcutaneous MCTs with a mutation in exon 8 (1:2) was the inverse of what has been reported for cutaneous MCTs (2.4:1) [1,27]. The relatively lower number of subcutaneous MCTs with a mutation in exon 11 correlates with the less aggressive biological behavior of these tumors, as ITD mutations in exon 11 of c-kit have been associated with a poor prognosis for dogs with cutaneous MCTs [2,6,9,10,15,27].
While the current study lacked clinical outcome data, both a high mitotic count and a high histologic grade based on the two-tier grading system were significantly associated with exon 11 mutations, whereas no such association was detected for subcutaneous MCTs with no mutations in exons 8 and 11 or with a mutation in exon 8. As an increased mitotic count has been demonstrated to predict higher local recurrence and metastasis in subcutaneous MCTs [5,6,10], we surmise that the detection of an ITD c-kit mutation in exon 11 also indicates a more aggressive biological behavior in subcutaneous MCTs, similar to their cutaneous counterparts.
Both a high mitotic count and an exon 11 mutation were significantly associated with a higher grade. A prospective study should confirm the prognostic significance of the two-tier grading system and the detection of ITD mutations in exon 11 of c-kit in subcutaneous MCTs. A total of 8/43 (18.6%) subcutaneous MCTs classified as high-grade had an exon 11 ITD mutation. The prevalence of exon 11 mutations tends to be much higher in high-grade cutaneous MCTs [9,21,26], and a recent study reported such a mutation in 62/75 (82%) cutaneous MCTs [27]. The much lower prevalence of aggressive subcutaneous MCTs compared to high-grade cutaneous MCTs, and the lower likelihood of such high-grade subcutaneous MCTs carrying an exon 11 mutation, may explain why previous studies were unable to identify exon 11 mutations in subcutaneous MCTs [10]. While 32/43 (74.4%) subcutaneous MCTs classified as high-grade had no mutations in exons 8 and 11, a mitotic count above 7 in 67.4% of tumors in this group still supports an aggressive behavior [6,10]. These data may also suggest that exon 11 mutations of c-kit are less often the driver of aggressive behavior in canine subcutaneous MCTs compared to cutaneous MCTs. Interestingly, only 53.5% of these high-grade subcutaneous MCTs had a Ki67 index above 23 or an AgNOR x Ki67 score above 54. The larger number of cases with a high mitotic count than those with a Ki67 index above the threshold established for cutaneous MCTs may reflect karyologic abnormalities rather than increased proliferative activity. Regardless, a prospective study will be necessary to fully establish the prognostic significance of histologic grading for subcutaneous MCTs.
Exon 11 c-kit mutations play a crucial role in the oncogenesis of cutaneous MCTs, especially in mast cell proliferation, and we expected that the various proliferation markers investigated in this study would be significantly higher in subcutaneous MCTs with a mutation in exon 11 than in subcutaneous MCTs with no mutation in exons 8 and 11, similar to what has been reported for cutaneous MCTs [1,11,27]. Furthermore, aberrant expression of KIT is more commonly observed in cutaneous MCTs with an exon 11 mutation [6,15,19,27,30]. Neither finding could be confirmed for the subcutaneous MCTs with an exon 11 mutation in this study. As we detected only 12 subcutaneous MCTs with such a mutation, the low number may have limited the statistical power of these comparisons.
Similar to recently published data for cutaneous MCTs, a mutation in exon 8 of c-kit was not associated with any parameter investigated in this study that predicts a poor prognosis [27]. Based on these previously published data and the data presented here, it seems unlikely that a c-kit mutation in exon 8 causes a gain-of-function of KIT that would thereby stimulate increased tumor proliferation.
Although the overall number of mutations detected in exons 8 and 11 of c-kit in subcutaneous MCTs was low, this retrospective study suggests a prognostic significance for detecting mutations. Furthermore, the histologic grade using the two-tier system may also be helpful in identifying subcutaneous mast cell tumors with a more aggressive biological behavior. Future prospective studies investigating the role of c-kit mutations and histologic grade in subcutaneous MCTs for predicting clinical disease progression and risk for metastatic disease and MCT-associated mortality are needed to confirm these hypotheses.
Conclusions
Mutations in exons 8 and 11 of c-kit have been reported in up to 45% of canine cutaneous MCTs [9,13-15,19-22,24,25,27], but only a single case of an exon 8 mutation has been reported in a subcutaneous MCT [22]. This study of 216 dogs identified a higher than anticipated prevalence of ITD mutations in exons 8 and 11 of c-kit in subcutaneous MCTs, at approximately 11% and 6%, respectively. Similar to cutaneous MCTs, our study demonstrated that c-kit exon 11 mutations in subcutaneous MCTs were significantly associated with a high histologic grade (p < 0.001) and an increased mitotic count (p < 0.001) compared to subcutaneous MCTs with no c-kit mutations in exons 8 or 11 (p = 0.002) or subcutaneous MCTs with a c-kit mutation in exon 8 (p = 0.001). The larger number of exon 8 than exon 11 c-kit mutations in subcutaneous MCTs is the inverse of the relationship reported for cutaneous MCTs and correlates with the less aggressive biological behavior of subcutaneous MCTs. There was no significant association of either c-kit mutation with KIT patterns or proliferation activity. These results provide future directions for prospective studies to confirm c-kit mutations in exon 11 and a histologic high grade as prognostic factors for subcutaneous MCTs.
Author Contributions: Conceptualization, M.K.; Acquisitions, analysis, or interpretation, P.C., S.S. and M.K.; Drafted manuscript, P.C. and M.K.; Substantively revised and edited the manuscript, L.M., S.S. and M.K.; Provided final approval and agrees to be accountable for all aspects of work ensuring integrity and accuracy, P.C., L.M., S.S. and M.K. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The data presented in this study are available on request from the corresponding author.
Comparing Outcomes of Bicruciate-Stabilized and Cruciate-Retaining Total Knee Arthroplasty
Background Bicruciate-stabilized (BCS) total knee arthroplasty (TKA) aims to restore normal kinematics by replicating the function of both cruciate ligaments. Conventional cruciate-retaining (CR) design in TKA has shown previous clinical success with lower complication rates. This study compared the patient-reported outcomes between the BCS and CR TKA designs. Methods This retrospective study examined patients who underwent primary TKA using a CR or a BCS implant. Patient demographics, Knee Injury and Osteoarthritis Outcome Score for Joint Replacement (KOOS, JR), and Forgotten Joint Score (FJS) were compared between two cohorts. Patient-reported outcome measures were analyzed using independent samples t-tests. Results There were no significant preoperative demographic differences between groups. The CR cohort (n = 756) had significantly higher average KOOS, JR Scores compared to the BCS cohort (n = 652) at 3 months (59.7 ± 3.8 vs. 53.0 ± 3.9, p < 0.001) and 2 years (62.6 ± 8.0 vs. 53.8 ± 6.7, p = 0.001) after TKA. Within the cohort, KOOS, JR delta differences were not significant for CR when comparing patient scores 3 months to 1 year after surgery. Meanwhile, the BCS patients did show significant delta improvement (4.1 ± 1.9, p = 0.030) when compared 3 months to 1 year after surgery. One year postoperatively, the BCS cohort (n = 134) showed a significantly higher average FJS score (49.5 ± 31.4, vs. 36.8 ± 28.5, p = 0.028) than the CR cohort (n = 203). Both cohorts displayed a significant difference in delta improvements within their respective cohort when measuring FJS from 3 months to 1 year, 2 years, and 3 years after surgery. Conclusions The CR cohort performed better on average, compared to the BCS cohort in measures of KOOS, JR scores at the 2-year follow-up. The BCS cohort performed marginally better regarding FJS only at 1-year follow-up.
The CR design retains the posterior cruciate ligament (PCL), preserving a degree of femoral rollback. 8) However, CR TKA may not be recommended in cases of PCL insufficiency, previous trauma, or severe deformity. 6) Despite the ability to produce femoral rollback, the CR design cannot fully reproduce native knee kinematics due to the resection of the anterior cruciate ligament (ACL); the same is true for the posterior-stabilized (PS) TKA. 9) Furthermore, both conventional TKA bearings tend to result in paradoxical sliding, 10) which has been suggested to result in the loss of knee flexion. 11)

The BCS TKA design differs from previous conventional designs in that it attempts to reproduce the patient's native knee kinematics, including ligament tensioning and knee rollback, 12) thereby reducing mediolateral instability in the mid-flexion range and improving patient satisfaction. 10) Furthermore, sacrificing the PCL in the BCS design may increase the conformity of the prosthesis and can lead to decreased contact stresses and polyethylene wear. 8) The tibial insert in the BCS TKA is designed with a concave medial and convex lateral shape that increases anterior-posterior stability throughout knee flexion, favoring a native kinematic pattern. 13) The physiological matching that the BCS implant aims to achieve may yield promising and, in some cases, superior clinical findings. 9,14)

A novel method of quantifying patient satisfaction is the Forgotten Joint Score (FJS), which allows patients to report subjectively how natural their implant feels after the operation. 19) The FJS has previously been used to compare different TKA designs, showing no significant differences between CR and PS implants. 20) In order to compare the two implant designs more fully, we aimed to compare differences using the Knee Injury and Osteoarthritis Outcome Score for Joint Replacement (KOOS, JR) as well as patient satisfaction using the FJS. We hypothesized that there would be no significant differences between the BCS and CR implant designs.
METHODS
Before any study procedures took place, NYU School of Medicine Institutional Review Board (FWA# 00004952) approval was obtained with a waiver of consent due to the retrospective nature of the study.
Patient Selection
We conducted a retrospective review of all patients who received a primary, elective TKA at our hospital from 2015 to 2021 and had at least 2 years of clinical follow-up. Patients were separated into two cohorts based on the utilized implant design: those who received a cruciate-retaining TKA system (Journey II system, Smith & Nephew) formed the CR implant group and those who received a bicruciate-stabilized TKA (Smith & Nephew) formed the BCS group.
In all reported cases in this study, surgeons opted for a standard medial parapatellar approach with the goal to recreate neutral mechanical alignment whilst causing the least amount of constraint to achieve stability. The surgical indications for CR or BCS at our institution are derived from the physical exam, radiographic evaluation, intraoperative findings, and surgeon preference. For the purposes of this study, the primary indication for patients who received a TKA was osteoarthritis. The use of CR or BCS was determined by the surgeon. Ultimately, the physician's intraoperative assessment, preference, pertinent patient history, physical exam, and radiographic findings resulted in the decision between implants.
For this study, 1,855 patients who received a TKA with osteoarthritis as their primary indication were identified. Three hundred and fifty-three patients (19%) were excluded because they received a different implant design not analyzed in the scope of this study. One hundred patients (5.3%) were lost to follow-up, leaving a total of 1,402 patients included in the study. Of those, 646 patients (46%) received the CR implant, 55 of whom received bilateral TKA, resulting in a total of 756 knees included in the analysis. Specifically, 112 knees were included from patients who had completed their 2-year follow-up and 115 from those who had completed their 1-year follow-up. Five hundred and eighty-six (42%) patients received the BCS implant, 33 of whom received bilateral TKA, resulting in a total of 652 knees included in the analysis. Specifically, 118 knees were analyzed from patients who had completed their 2-year follow-up and 112 from those who had completed the 1-year follow-up.
Data Collection
Demographic data including sex, age, smoking status, race, American Society of Anesthesiologists (ASA) score, body mass index (BMI; kg/m2), Charlson Comorbidity Index, length of stay (LOS; days), and surgical time (hours) were collected for all patients from the electronic patient medical record system (Epic Caboodle ver. 15) using Microsoft SQL Server Management Studio 2017. LOS is described as the total number of hours in the hospital after surgery, and surgical time was the difference between initial skin incision and closure. Patients were followed up postoperatively at a series of time points: 2 weeks, 12 weeks, 1 year, 2 years, and 3 years. Data from the 3-year follow-up were not included for KOOS, JR scores.
Outcome Measures
In this study, patient-reported outcomes (PROS) were measured by the Knee Injury Osteoarthritis Survey (KOOS, JR) and FJS. PROS were collected preoperatively and again during subsequent follow-up visits at 12 weeks, 1 year, 2 years, and 3 years postoperatively. Other assessments such as radiological evaluation and range of motion (ROM) were not in the scope of this study and as such were not included as outcome measures, partly because they are not documented in an objective manner that can be compared across surgeons.
Statistical Analysis
All data were organized using Microsoft Excel software (Microsoft Corp.). A binary variable was created to identify patients who underwent TKA with BCS or CR prostheses. Binary variables were also created to group patients' postoperative dates. Study participants' demographic and clinical baseline characteristics were described as means with standard deviations for continuous variables and frequencies with percentages for categorical variables. KOOS, JR and FJS scores were described as means with a standard error of difference. Statistical differences in continuous variables were detected using independent samples t-tests. A p-value less than 0.05 was considered statistically significant.
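For illustration only, the between-cohort comparison described above could be expressed as in the following sketch; the scores below are simulated placeholders rather than the study data.

```python
# Minimal sketch of an independent samples t-test on patient-reported outcome scores;
# the KOOS, JR values here are simulated placeholders, not the study data.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
koos_cr = rng.normal(loc=60, scale=8, size=100)   # hypothetical CR cohort scores
koos_bcs = rng.normal(loc=54, scale=8, size=100)  # hypothetical BCS cohort scores

t_stat, p_value = ttest_ind(koos_cr, koos_bcs)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant = {p_value < 0.05}")
```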
RESULTS
At baseline, there were no significant differences regarding sex, age, smoking status, race, ASA score, or BMI (Table 1). While there was no significant difference in average surgical time between cohorts, the CR cohort had a significantly shorter LOS compared to the BCS cohort (2.1 ± 1.6 vs. 2.4 ± 1.9, p = 0.002). There were also no significant differences between groups regarding all-cause and aseptic revision rates: the CR group had an all-cause revision rate of 2.9% and the BCS group had a revision rate of 4% (Table 1).
Preoperatively, there were no significant differences between the CR and BCS cohorts in terms of KOOS, JR scores. The CR cohort had a significantly higher average KOOS, JR score than the BCS cohort postoperatively at the 3-month (59.7 vs. 53.0, p = 0.0001) and 2-year (62.6 vs. 53.8, p = 0.001) postoperative follow-ups (Table 2). When comparing delta improvements in KOOS, JR between groups, the CR cohort had a significantly higher improvement than the BCS group (14.5 ± 1.4 vs. 9.5 ± 1.6, p = 0.0196) when comparing their preoperative scores to 3-month postoperative scores (Table 2).
In terms of KOOS, JR scores, the CR cohort experienced significant improvement from their preoperative values to 3 months (p < 0.0001) and 1 year (p < 0.0001) of postoperative follow-up (Table 3). The BCS cohort also experienced significant improvement from their preoperative values to their 3-month (p < 0.0001) and 1-year (p < 0.0001) postoperative follow-ups (Table 3). As opposed to the CR cohort, the BCS cohort also showed a significant improvement when comparing their 3-month and 1-year postoperative values (p = 0.030).
There was no significant difference in the FJS scores between the groups at the 3-month follow-up. However, the BCS cohort experienced a significantly (p = 0.028) higher average score 1 year postoperatively compared to the CR cohort (Table 4). Furthermore, the BCS group also showed a significantly greater delta improvement when comparing their 3-month to 1-year postoperative follow-up (Table 4). Both cohorts experienced significant improvements within their respective cohorts when comparing 3 months to 1 year, 2 years, and 3 years after operation (Table 5).
DISCUSSION
This study is novel in that it examined whether implant design, CR or BCS, had any considerable effect on PROS and satisfaction at multiple postoperative time points. We found that design choice did not provide substantial differences, as both designs showed similar KOOS, JR improvements from baseline to postoperative scores. In terms of patient satisfaction, the BCS cohort showed a significantly higher average FJS score at 1-year follow-up than the CR cohort; no difference was found between the two cohorts at 2-year and 3-year follow-ups.
In a small cohort study comparing 10 CR and 10 BCS TKAs of the same implant, Moewis et al. 21) reported a general improvement 2 years postoperatively in both cohorts in terms of their Knee Society Scores (KSS), adequate levels of passive knee flexion, and full extension. Furthermore, the BCS cohort showed significantly higher mean KSS scores than the CR cohort. By comparison, the CR cohort in our study showed significantly higher mean KOOS, JR scores at the same 2-year postoperative time point. This incongruence between clinical outcome scores and PROS could be due to a small sample size measured exclusively at 2 years postoperatively, whereas our present study noted differences across 12 weeks, 1 year, and 2 years postoperatively. Yet, in agreement with our study, the BCS cohort reported higher FJS scores. The findings of Moewis et al. 21) concur with the present study in that both the CR and BCS cohorts displayed general improvement from their preoperative to postoperative values.
The BCS implant is thought to provide better outcomes and a more natural-feeling knee by reproducing native knee physiology. 12,13,21) This is supported by Mugnai et al., who found that patients who received a first-generation model of the BCS implant did experience higher mean KOOS scores at a mean follow-up of 29 months compared to a Non-Restrictive Geometry knee system; however, they were unable to perform a comparison of pre- and postoperative differences due to the lack of preoperative KOOS data. 22) In terms of reproducing normal kinematics, Inui et al. 23) demonstrated that BCS did have a more normal-like kinematic pattern than other TKA designs. Elaborating on that, Kiyohara et al. 24) performed an in vivo comparison of different TKA implants and found that the BCS designs achieved significantly greater posterior femoral rollback and axial rotation. However, that study did not report clinical outcomes or measures of patient satisfaction. Moewis et al. 21) evaluated patient satisfaction using FJS and found that the BCS cohort also showed higher FJS scores, which was thought to be attributable to reduced anterior shift and a high lateral rollback. Yet, when the BCS design was compared to a PS design over time, no evidence of clinical superiority was demonstrated at the 2-year follow-up. 25) Ishibashi et al. 26) compared in vivo kinematics and PROS between 17 BCS and 18 CR knees with the same anatomical surface geometry. They found that BCS knees achieved a higher maximum flexion angle, and knee kinematic differences became apparent as patients entered deep knee flexion. Namely, the BCS knees demonstrated rollback in flexion, whereas the CR knees demonstrated paradoxical anterior motion. They hypothesized that posterior impingement may therefore be reducing the maximum flexion angle in CR knees. Despite differences in kinematics, PROS in their analysis did not differ between BCS and CR knees. 26) While our study compared BCS to CR implants, similarly, the BCS cohort improvements tended to plateau at 2 years postoperatively, in congruence with previous findings. A study by Kawakami et al. 27) found no significant differences in PROS or satisfaction among patients who received a CR or PS implant. Furthermore, a large retrospective study comparing CR and PS implant designs found that there were no significant postoperative differences in PROS between 3, 5, and 8 years. It was also concluded that there were no significant differences in patient satisfaction when the CR design was compared to the PS design. 28) While our study compared CR and BCS designs, we found that KOOS, JR scores were higher for the CR group, yet FJS scores were higher for BCS. This is consistent with the literature in that, when compared to each other, neither implant demonstrates a clear superiority over the other.
In a large retrospective study on patient satisfaction regarding customized CR implants, Schroeder et al. 29) found high KOOS, JR scores and high patient satisfaction, with 89% of their patients reporting being either satisfied or very satisfied. This agrees with the present study, in which patients who received a non-customized CR design reported a higher average KOOS, JR score, with positive trends in patient satisfaction as measured by FJS scores.
While CR implants are linked with excellent patient satisfaction in the literature, there is scarce literature that utilizes FJS specifically. FJS has been previously positively correlated with patient satisfaction. 30) When compared to a PS implant, patients who received a CR implant reported no differences in FJS scores at 1 and 2 years postoperatively. 20) When CR was compared to BCS, BCS displayed a significantly higher FJS score 1 year postoperatively in the present study. Furthermore, BCS displayed higher FJS scores 2 years postoperatively in a previous study reported by Moewis et al. 21) This is not congruent with our results, which showed no significant differences in FJS scores for either CR or BCS at 2 or 3 years postoperatively.
It is important to note that the declining trends for BCS point to a larger issue with patient satisfaction following TKA, which future research should evaluate. The present study also did not evaluate the appropriateness of the surgeon's selection of CR or BCS. Proper TKA design and constraint must be chosen in order to achieve a favorable outcome as well as long-term satisfaction. Improper implant selection and execution would likely lead to unsatisfactory results. It would also be prudent to consider long-term patient satisfaction in relation to specific implant options throughout the life of said implant, as such differences, if there were any, could provide more considerations to surgeons and patients alike.
As this was a retrospective study, there was no randomized selection; thus, there was a possible selection bias in allocating patients into the CR or BCS TKA group. Furthermore, there could have been errors in the data that could not be controlled for. All PRO scores were collected via self-reported survey measures completed by the patients. Furthermore, preoperative FJS scores are not yet validated, thereby not allowing for a pre- and postoperative comparison as was done with KOOS, JR scores. Additionally, while this study comprises the largest cohort comparing BCS and CR TKA designs, the mean follow-up time of our investigation is limited, and future research should look at PROS in long-term follow-up. Nevertheless, the present findings are congruent with the previous literature, so these limitations likely did not alter our conclusions. This study focuses on the differences in PROS exclusively; therefore, complications and revisions were not within the scope of this study.
Contrary to the predicted hypothesis, this study demonstrated that the CR cohort performed better, on average, compared to the BCS cohort in measures of KOOS, JR scores at the latest follow-up. However, the BCS cohort performed better in measures of FJS scores. Another noteworthy finding of this study is that PRO trends for BCS implant recipients decreased in the long-term follow-up, which is in line with previous findings that a certain percentage of patients are dissatisfied with their TKA. Future studies should focus on patient satisfaction following TKA, specifically in the long term. Surgeons should rely on a variety of factors, their experience, and their patients' expectations to determine which implant design is most suitable.
Workforce capacity to address obesity: a Western Australian cross-sectional study identifies the gap between health priority and human resources needed
Background The disease burden due to poor nutrition, physical inactivity and obesity is high and increasing. An adequately sized and skilled workforce is required to respond to this issue. This study describes the public health nutrition and physical activity (NAPA) practice priorities and explores health managers' and practitioners' beliefs regarding workforce capacity to deliver on these priorities. Methods A workforce audit was conducted including a telephone survey of all managers and a postal survey of practitioners working in the area of NAPA promotion in Western Australia in 2004. Managers gave their perspective on workforce priorities, current competencies and future needs, with a 70 % response rate. Practitioners reported on public health workforce priorities, qualifications and needs, with a 56 % response rate. Results The top practice priorities for managers were diabetes (35 %), alcohol and other drugs (33 %), and cardiovascular disease (27 %). Obesity (19 %), poor nutrition (15 %) and inadequate physical activity (10 %) were of lower priority. For nutrition, managers identified lack of staff (60.4 %), organisational and management factors (39.5 %) and insufficient financial resources (30.2 %) as the major barriers to adequate service delivery. For physical activity services, insufficient financial resources (41.7 %) and staffing (35.4 %) and a lack of specific physical activity service specifications (25.0 %) were the main barriers. Practitioners identified inadequate staffing as the main barrier to service delivery for nutrition (42.3 %) and physical activity (23.3 %). Ideally, managers said they required 152 % more specialist nutritionists and 131 % more physical activity specialists in the workforce to meet health outcomes, in addition to other generalist staff. Conclusion Human and financial resources and organisational factors were the main barriers to meeting obesity, public health nutrition and physical activity outcomes. Services were being delivered by generalists rather than specialists, which may reduce service effectiveness. Although conclusions from this research need to take into account the fact that the audit was conducted in 2004, the findings suggest that there was a need to equip health services with an adequately skilled workforce of sufficient capacity to deliver an effective public health response to the obesity epidemic, particularly addressing poor nutrition and physical inactivity.
Background
The increasing prevalence of obesity and noncommunicable chronic disease in Australia requires a range of actions and interventions to enable effective prevention policy and programs [1]. The health and economic costs of poor nutrition and physical inactivity contributing to obesity are greater than those of smoking and harmful and hazardous alcohol consumption [2]. Healthy eating and regular physical activity at any age can substantially protect against weight gain, obesity and diet-related chronic illness, and therefore reduce preventable chronic disease and associated healthcare costs [3]. It is acknowledged that public health services designed to improve NAPA are essential to reduce the increasing prevalence of chronic disease [4].
Effective interventions require a sufficiently sized and skilled workforce to achieve prevention targets [5]. An appropriately trained workforce to implement healthy eating and physical activity disease prevention strategies is a priority public health infrastructure needed to impact on rising obesity rates [6]. It is not easy to quantify the size of the workforce required, but there is no doubt that an appropriate workforce will have a profound impact on the ability to achieve effective outcomes [7]. A critical mass in the workforce is required for effective service delivery [8]. To foster workforce adequacy there is a need, firstly, to consider workforce development through appropriate training and curricula and, secondly, to consider the capacity of the existing workforce to design and deliver effective obesity prevention programs, including planning considerations to address future challenges.
Australian public health policy asserted that a range of professionals in public and primary health are required to support population and community based activities and indicated that public health nutritionists and health promotion officers specializing in physical activity are important health professionals to deliver these services [1]. Research suggests that the prevention workforce in other countries lacks practitioners with specific skills and responsibility for effective public health NAPA action [9,10]. Little is known about Australia's obesity prevention workforce or the public health workforce more broadly. However, there has been concern since 2009 that the level of capacity in the specialist obesity prevention workforce is lacking in most jurisdictions, including local government, state government and non-government organisations across Australia [1]. It is likely that the promotion of healthy eating and physical activity is relegated to generalist staff who lack additional resources and have variable levels of training, and/or that services are simply not delivered. The lack of workforce capacity has been identified as the result of several factors, including a lack of specific workforce development efforts and limited workforce effectiveness in relation to population health outcomes [11].
Public health nutrition is a discipline defined as the promotion and maintenance of nutrition-related health and wellbeing of populations through the organised efforts and informed choices of society [12,13]. Workforce development is a key strategic domain for building capacity for public health nutrition practice; it has therefore been necessary to define the role and scope of the workforce and the competencies required [9,14-16]. There is international agreement that whilst public health work is multi-sectorial and multidisciplinary, the most effective programs to achieve public health nutrition goals are those facilitated by a specialist workforce identified by specific competencies [17]. Australia's 10-year national agenda for action for public health nutrition, Eat Well Australia 2001-10, provided the mandate for capacity building priorities to consider workforce development as a central strategy [18].
Global efforts for public health action in physical activity have also recognised the opportunistic nature of past workforce development and the recurrent need for systematic workforce development [19]. The International Society for Physical Activity and Health was formed in 2009 with a view to moving physical activity into mainstream public health services [20]. The physical activity workforce broadly includes practitioners from health, education, sport and recreation, planning, transport and other disciplines such as medicine [21]. Whilst this broad range of sectors can be mobilised to engage in physical activity promotion, the variability in knowledge, skills and training may hinder population-based program development efforts. The range of programs provided spans, for example, the medicalisation of physical activity risk, exercise physiology where athletic performance is the target, and physiotherapy where rehabilitation is the target [21]. A public health physical activity workforce is emerging with specific positions created; however, there is an imperative to develop a physical activity promotion workforce across a range of disciplines [22].
Little is known about the priority placed on NAPA public health programs or the workforce size needed to support effective efforts to build workforce capacity [23]. As the policy environment continues to focus on reducing obesity in Australia, there is an urgent need to profile the obesity prevention workforce. Workforce composition, practice methods, resource allocation and organisational issues are all likely to impact on workforce capacity to address obesity. An audit of the NAPA workforce was carried out in Western Australia in 2004 to explore the policy environment and future workforce needs. This audit was commissioned by the Nutrition and Physical Activity Branch (NPAB) of the Department of Health in Western Australia. The Western Australian Health Promotion Foundation (Healthway) funded Curtin University's Food Law, Policy and Communications to Improve Public Health Research Translation Project to enable results to be published. The specific objectives of the audit were to describe the current priorities for NAPA and the structure of the WA NAPA workforce, as determined by health managers and practitioners. This paper reports on the 2004 workforce audit to determine the appropriateness of priorities and the size of the workforce to meet the challenges of obesity prevention as an important function of workforce capacity. These results are significant as they are the only workforce data for both workforce areas to be published for Australia, and the findings enable the retrospective exploration of factors impacting on workforce capacity and development in relation to policy directives, so as to inform future strategies.
Methods
NAPA services were defined for the purposes of the audit as any service offered in the form of education, program delivery, community or policy development that seeks to improve the food intake and physical activity levels of specific target groups or the population in general. The audit consisted of two surveys; the first was a telephone survey of managers of the obesity prevention workforce in NAPA services and the second a postal survey of the existing workforce (practitioners).
Workforce definitions
To describe the current workforce it was necessary to elucidate the types of workers, with a variety of qualifications, currently employed in obesity prevention. Workforce definitions describing different paradigms in the nutrition workforce were used as the basis to describe key workforce areas for consideration. Workforce positions in public health nutrition, community nutrition or dietetics, and clinical dietetics formed the specialist nutrition workforce. In Australia, people working in these positions would have Bachelor and/or postgraduate university nutrition and/or dietetics qualifications. It is expected that other professionals working in health, such as health promotion officers and Aboriginal health workers, would also have some role in the delivery of nutrition services. This workforce may have little or no training in nutrition but be experts in other areas, for example health promotion program delivery. For the purpose of this audit this section of the workforce is described as the generalist nutrition workforce. Detailed descriptions of the specialist and generalist nutrition workforce, representing a spectrum of roles, were adapted from Hughes and Somerset (1997) [24]. These descriptions were then applied to definitions of the physical activity workforce, as no previous literature had identified a taxonomy for defining that workforce at the time of the survey.
Definitions of delineated service delivery, describing the different features of methods and processes, were also adapted and defined for the purpose of the study [8]. Community and public health delivery are usually differentiated in Australia by intended reach, prevention level, and the wellness or illness paradigm of operation [24]. One way to consider the workforce was to differentiate between the multiple workforce tiers by the determinant driving the service delivery. Determinants such as community development, needs assessment and policy directives indicate that different workforce competencies are required for community and public health NAPA approaches.
Questionnaire development
Separate manager and practitioner surveys were developed to measure the research objectives, based on previous surveys including an unpublished state government Review of Allied Health Professionals Recruitment and Retention Taskforce Survey (1999), the Dietitians Association of Australia's professional competencies [25], general health promotion competencies [26] and a public health nutrition workforce development study [27]. Questions selected aimed to measure the priority placed on NAPA services based on required service reporting areas and were mostly closed-ended. The Department of Health's required service reporting areas and potential national health priority and target areas were listed, and managers could select those applicable. Other questions required the enumeration of the specialist and generalist workforce and perceptions of the current workforce in relation to adequacy, competency and training needs and perceived ability to meet NAPA service goals. Workforce profiling was conducted using the position title, fractional appointment and location of specialist and generalist workforce and a description of services provided. Current and future workforce requirements were then calculated for each region and totalled for the state.
The practitioner postal survey included the questions described above, with additional details on years working in their current position, methods of training and continuing professional development, and perceived barriers to service delivery. Both questionnaires were developed in conjunction with NPAB staff for content validity and were piloted on university staff for comprehensibility and face validity. Ethics approval was granted by Curtin University's Human Research Ethics Committee. All participants signed consent to participate and data were anonymised and aggregated for regions and then the state. Confidentiality was maintained at all times and all participants consented to the publication of the results in various formats for the Department of Health's purposes.
Recruitment
Western Australia is a geographically large state (2,532,400 square kilometres) with a population in 2004 estimated to be just under 2 million [28]. There were four metropolitan government health regions including public and community health, seven regional public and community health units, and several non-government organisations and welfare organisations involved in prevention service delivery at the time of the survey. Fifteen medical general practices were also organised in geographic areas across the state with a mandate for promoting NAPA [29]. A manager was defined as a person who directly or indirectly line-managed practitioner/s with a functional responsibility to deliver nutrition and/or physical activity services for an area/region or organisation in community, public and population health. The term 'services' was used to broadly cover interventions and strategies designed to improve the risk factors of interest (public health NAPA). During the audit several revisions were made to the recruitment list due to restructuring and people on leave or acting in positions, with 69 managers identified by the end of the survey period. An email was sent to managers with an introductory letter explaining the aim of the audit, including research consent, and a copy of the questionnaire and workforce descriptions. Managers were asked to respond with details of their current NAPA workforce, and a telephone interview was arranged to complete the questionnaire and elicit other comments regarding the workforce. A $20 gift voucher was sent at the completion of the interviews as an incentive to encourage a high response rate. Managers' interviews were carried out over 3 months and lasted between 30 and 60 minutes. All interviews were carried out by the primary author.
A practitioner was defined as a person who delivers nutrition and/or physical activity services as part of their employment. Practitioners were identified from contact lists of the NPAB and professional organisation mailing lists. As well as the original lists, a snowball approach was used to identify additional practitioners by asking survey participants to nominate other specialist or generalist practitioners. All 185 practitioners identified at the start of the survey were mailed an introductory letter, research consent form and questionnaire with a reply-paid envelope. They were also asked to send a copy of their job description form outlining organisational structure, key duties and competencies required for the position.
Analysis
Responses to closed-ended questions were coded directly onto the questionnaire and responses to open-ended questions were summarised and then coded according to a pre-established coding protocol developed after the interviews. Both sets of questionnaires were analysed using SPSS version 11 (SPSS Inc., Chicago, IL, USA), using descriptive statistics and the chi-square test of association to assess relationships between data.
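As an illustration of the same analysis approach in open-source tools (the original analysis used SPSS), the sketch below applies descriptive statistics and a chi-square test of association to hypothetical survey responses; the variable names and values are assumptions for demonstration only.

```python
# Minimal sketch: frequencies/percentages plus a chi-square test of association,
# applied to hypothetical survey data (not the audit dataset).
import pandas as pd
from scipy.stats import chi2_contingency

responses = pd.DataFrame({
    "role": ["manager", "manager", "practitioner", "practitioner", "manager", "practitioner"],
    "goals_met": ["no", "no", "yes", "no", "yes", "no"],
})

# Descriptive statistics: frequency (percentage) for a categorical item.
freq = responses["goals_met"].value_counts()
pct = (freq / len(responses) * 100).round(1)

# Chi-square test of association between respondent role and perceived goal attainment.
table = pd.crosstab(responses["role"], responses["goals_met"])
chi2, p, dof, expected = chi2_contingency(table)

print(pct.to_dict())
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```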
Results
Forty-eight managers were interviewed (a 70 % response rate) and 101 of the 185 practitioners identified participated (a 56 % response rate). The representative spread across all WA health regions and organisations enabled enumeration of the current NAPA workforce.
Demographic characteristics
Over half of the managers (55.8 %) had been in their current position for 2 years or less. Their main service delivery was in population services (37.5 %), community and clinical (29 %), solely community (25 %), public health (6.3 %) and clinical only (2.1 %). The majority of managers (62.5 %) were located in country areas, with 87.5 % having regional service delivery. There was variability in the highest qualification held, with only 10.4 % having attained a Master of Public Health qualification. NAPA practitioners were mostly female (97 %) with a mean age of 36.5 years. Most (90 %) delivered nutrition services and 54.5 % delivered physical activity services. The nutrition workforce was more experienced: 41 % had over 10 years' experience compared to 8.9 % of the physical activity practitioners. Most practitioners (76.2 %) had nutrition and/or dietetic qualifications, 4.9 % had health promotion qualifications and 13.8 % had diabetes educator qualifications. The main employers were the Department of Health (70.3 %) and non-government organisations (9.9 %), with the remainder employed in private business. Two thirds (65.4 %) of practitioners were employed in the metropolitan area, reflecting the population distribution.
Services and health priority
All managers had some responsibility for nutrition and/or physical activity service delivery. Table 1 shows that managers rated NAPA services as priority service delivery areas along with many other competing priorities, particularly in regional areas where alcohol and other drugs and injury prevention (including assault and suicide) were rated higher. Diabetes (35.4 %), reducing harm from alcohol and other drugs (33.3 %), cardiovascular disease (27.1 %) and injury prevention (25 %) were the priority health risks. As key risk factors for chronic disease, poor nutrition was ranked 11th and inadequate physical activity 13th in priorities.
These health issues were reflected in the ranking of the top five intervention strategies used by managers for their region or organisation. Eight key interventions were predetermined based on expected Department of Health service reporting, and the top five listed by managers were improving physical activity (75 %), improving nutrition (70.8 %), capacity building (68.7 %), reducing drugs and alcohol (68.7 %) and addressing obesity (62.5 %). Indigenous people were a key target group identified by three-quarters of managers for NAPA services. The second key target groups for managers were women and children; however, the focus for practitioners was adults in general for both areas of service delivery.
Size and type of NAPA workforce
One quarter of managers had no direct management of positions involved in physical activity service delivery and 10 % had no direct responsibility for staff delivering nutrition services. Table 2 shows the 18 different job titles identified as delivering nutrition services. The total specialist nutrition workforce was estimated to be 53.1 full-time equivalents (FTE) state-wide, or 9 % of the total workforce, with the majority having a dietetic qualification as reflected by job descriptions. Practitioners who identified with community delivery roles also had position descriptions that required delivering clinical services (35 %). The majority of managers' capacity to deliver nutrition services fell to a generalist workforce of Aboriginal health workers and community nurses without explicit public health or community nutrition skills in their job descriptions (528.8 FTE in total). The majority of physical activity services were delivered by health promotion officers, community physiotherapists, nurses and/or Aboriginal health workers in a preventive role (see Table 2). The specialist workforce was estimated at 47.5 FTE, or 14 % of the total physical activity workforce, and the generalist physical activity workforce was estimated to be 335.1 FTE.
NAPA service delivery
Managers and practitioners were in agreement about the achievement of service delivery against policy goals or strategic plan objectives. Few managers (4.3 %) and practitioners (5.9 %) thought that physical activity goals were being met, while 10.4 % of managers and 9.0 % of practitioners indicated nutrition goals were being met. Implications of not meeting goals included the recognition that services were stretched, limited ability to use capacity building or community development approaches to respond to the issues, and a lack of ability to service disadvantaged groups. The major barriers to full nutrition service delivery identified by managers were a lack of staff (60.4 %), organisational and management factors (39.5 %) and financial resources (30.2 %). The major barriers to full physical activity service delivery were financial (41.7 %), lack of staff (35.4 %) and physical activity not being clearly identified in service specifications (25.0 %).
Recruitment and retention of staff to deliver nutrition services were barriers to service delivery reported by managers, particularly in relation to attracting staff to regional areas (20.8 %) and staff burnout (10.4 %). Lack of funding (14.5 %) and the limited number of dietetics-trained professionals applying for public health nutrition (PHN) positions (14.5 %) were also considered barriers to delivering nutrition services. There were similar issues with the recruitment and retention of staff to deliver physical activity services; however, physical activity was viewed by some managers (10.4 %) as being a newer or untested area for service delivery.
Future workforce requirements
Three quarters of managers said more staff were needed to fully deliver on nutrition service goals, particularly from the specialist workforce. An additional 81 FTE of specialist workforce (152 % more) and 62 FTE (12 % more) of generalist workforce, such as health promotion officers, were identified as necessary, which included filling currently vacant positions. Ideally, the additional specialist workforce would be dietitians (45 %), health promotion officers (17 %), and public health nutritionists (13 %). Figure 1 illustrates the comparison between the current workforce and the estimated additional specialist workforce required by managers to fully deliver on nutrition service goals. In relation to full physical activity service delivery, the majority of managers said that an additional 56.6 FTE (131 % more) of specialist physical activity workforce and 52 FTE (16 % more) of generalist workforce were required, including filling currently vacant positions.
Discussion
The 2004 WA nutrition and physical activity (NAPA) workforce audit described and quantified the priority and capacity for service delivery from a public health perspective. Even though poor nutrition and physical inactivity are key risk factors for preventable chronic disease and obesity, they were considered a low service delivery priority in 2004. Broad policy priorities did not always reflect practice priorities, particularly in regional areas. Increasing decision makers' awareness of the health, economic and social benefits of improving NAPA appears to be warranted. Human and financial resources were identified as major weaknesses in health service delivery; only 9 % of positions responsible for delivering nutrition services were specialist positions.
Organisational and managerial workforce support
Organisational and managerial support directed the services provided, as did requirements mandated by the state-based Department of Health and/or other organisations. Managers' focus was on the seven chronic disease outcomes reflected in Government policy priorities at the time. The program delivery focus in WA at the time was promoting increased fruit and vegetable consumption with the Go for 2&5® social marketing campaign [30,31]. In some instances other immediate local issues, for example reducing alcohol and other drug usage, were higher priorities than poor nutrition, physical inactivity or obesity. The policy priority of preventing obesity continues to increase [32], as does the need for an appropriately sized and skilled public health and primary health care workforce to deliver programs [18]. In 2004, addressing obesity was approached by encouraging employers to ensure a healthy workforce rather than by building the workforce to implement actions to improve diet and physical activity [18]. Australia's public health nutrition strategic plan of action, Eat Well Australia, expressed uncertainty about whether the current workforce was large enough to undertake the tasks required to address obesity and highlighted the lack of a specific workforce development strategy [18]. The first action, "Investigating workforce requirements, including training needs and the systems necessary to deliver activities in light of current funding arrangements, workforce capacity and composition", was never undertaken ([18]:26).
The policy priority assigned to specific health issues has the potential to limit service delivery. Unsupported, low-priority issues result in an undersized and underqualified workforce; alternatively, an undersized and underqualified workforce can influence the priority managers place on a health issue and subsequent service delivery, because they have limited capacity to act. Addressing poor nutrition is complex: there are multiple stakeholders, numerous dietary targets (e.g., increasing fruit and vegetable consumption) and multiple approaches needed [8].
Managers' and practitioners' opinions differed in regard to meeting NAPA expectations, with potential misalignment between practice and the work needed. The Indigenous population was an important target for managers, yet practitioners focussed on adults in general, suggesting that disadvantaged groups with great health need could be left out of service delivery.
Workforce profiling
A specialist workforce is critical to obesity prevention program success [18]. Findings showed an urgent need to increase the size of the specialist NAPA workforce in WA to develop the critical mass of human resources required. Managers estimated 152 % more specialist nutrition and 131 % more specialist physical activity workforce was required to achieve policy/program goals. The findings are consistent with research in California which found 70 % of local public health department managers rated their staff capacity for obesity prevention in NAPA environments as less than effective [33].
Fig. 1 Comparison between current and additional specialist and generalist NAPA workforce required to fully meet goals

Benchmarks for staffing public health in NAPA areas are limited. The type of workforce required depends on the size, training and experience of staff and on the work to be achieved in the target population or the socioecological interventions needed. Just prior to the audit, the Australian advanced-level public health nutrition workforce was estimated at 20 % of the specialist capacity required [14], implying that WA needed to increase to 265 FTE. The only other published figures, from United States (US) planning models for workforce enumeration in government-funded programs, set the US ratio at 1 FTE public health nutritionist to 133,000 head of population in the 1990s [34,35]. This was updated in 2000 by the US Association of State and Territorial Public Health Nutrition Directors to 1 FTE for every 50,000 head of population, in consideration of the complexity of addressing obesity and the nutrition of vulnerable population groups [36,37].
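To give a sense of scale, the short calculation below applies those two US planning ratios to WA's 2004 population of roughly 2 million; this is an illustrative back-of-envelope estimate only, assuming the ratios transfer directly to the WA context.

```python
# Rough benchmark calculation, assuming WA's 2004 population (~2 million) and the
# US planning ratios cited above; illustrative only, not a figure from the audit.
wa_population = 2_000_000

fte_1990s_ratio = wa_population / 133_000  # 1 FTE per 133,000 people (1990s US ratio)
fte_2000_ratio = wa_population / 50_000    # 1 FTE per 50,000 people (2000 US update)

print(f"~{fte_1990s_ratio:.0f} FTE under the 1990s ratio")  # ~15 FTE
print(f"~{fte_2000_ratio:.0f} FTE under the 2000 ratio")    # ~40 FTE
# For comparison, the audit enumerated 53.1 specialist nutrition FTE, and managers
# indicated 134.1 FTE (current plus additional) were needed to meet service goals.
```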
Australian nutrition workforce enumeration demonstrates variability amongst states [28]. Matching Queensland's investment would require an additional 95.2 FTE, similar to the 134.1 FTE (current plus additional) indicated by WA managers as needed to fully deliver on nutrition service goals. The exemplar Queensland workforce was disbanded in 2012 following a newly elected State Government restructure, which resulted in the devolution of public health and a 90 % reduction to 14 FTE in total [38].
Physical activity workforce human resource requirements are more challenging to estimate as there are no clear professional recommendations. The mixture of health promotion-, physiotherapy- and nursing-trained practitioners highlights the need to develop a specialist workforce by defining both the competencies and the enumeration requirements to contribute to effective physical activity program delivery [11].
Consistent with the 2008 National Preventive Health Taskforce recommendation to expand the supply and support the training of relevant primary health workers, health promotion workers, nutritionists and dietitians, the findings suggest that an obvious way to increase workforce capacity is to invest in workforce growth [1]. In Victoria, developing workforce capacity, including the number of FTE, benefited obesity prevention strategies [39]. The variety of position titles and selection criteria used to recruit workers may lead to variability in the WA workforce. Whilst there has been growth in dietetics as a profession, this has predominantly been in clinical services [40]. The WA NAPA workforce has not grown substantially since 2004, a worrying implication for achieving obesity targets.
The importance of a diverse generalist workforce for service delivery was demonstrated, but there were skill deficits in the respective areas. Reliance on a generalist workforce with limited or no training in NAPA to deliver interventions is likely to be problematic. Existing WA programs, e.g., FoodCents® [41], required dietetic input, and future interventions needed to address the obesogenic environment require a coordinated and skilled workforce. Whilst it is important to work in a multidisciplinary and intersectoral way to reach the whole population, the lack of training and of a specialist workforce to deliver targeted workforce training is also a problem.
These challenges are not confined to the Australian workforce. Studies in the US identified a lack of understanding of the complexity of the dietary change process by other practitioners and managers, a lack of resources, training and mentoring to do the work, job insecurity, and expectations that nutritionists would assume a variety of other roles [34,42,43]. Several European countries identify major constraining factors to public health nutrition workforce development [44]. Variable expectations about work roles and differences in the priority placed on NAPA by managers may be due to their own preferences and/or past work experience. Many managers were clinically trained in disciplines such as nursing, suggesting that practitioners were reporting to managers without NAPA qualifications or experience in delivering community public health interventions. Other workforce development issues were the impending shortage of experienced workers as many are approaching retirement age, overall staff shortages, and workforce instability due to high turnover or unfilled positions. Short-term funding, the high proportion of female staff and dissatisfaction with career pathways were reasons identified. Interruptions to service delivery, loss of partnerships, and loss of experience when staff leave without positions being filled are priority workforce issues [8]. Professional isolation is a challenge in rural areas [43], and difficulties in working effectively with peers due to competing pressures or risk factors were also identified in this study.
Policy implications for building workforce capacity
Obesity prevention requires a strategic approach to workforce planning within governments and organisations. An appropriately trained and skilled workforce can help improve diet quality and physical activity to reduce obesity and improve population health [45]. Policy-level support, organisational-level workforce management, and continued competency and capacity building in the existing workforce are required. Workforce development is often not part of the range of policy options for public health nutrition [46]. Although human resource capacity and training were identified in strategic Australian policy as essential to build capacity to achieve Australian obesity outcomes, the policies have since been rescinded and not replaced. The emphasis on chronic disease or obesity prevention, rather than a direct focus on addressing poor nutrition and physical inactivity, may contribute to this. Governments are focussing on educating the individual rather than on environmental, organisational, policy, legislative and economic approaches [47]. Efforts to reduce budget expenditure, such as moving to contract, part-time, generalist or less experienced practitioners, also have a negative impact on overall service delivery [34].
Although there is now a mandate for implementing a workforce development strategy [48], amid growing concern about the lack and potential loss of NAPA workforce capacity there have been no subsequent workforce audits. More research is also required on how best to train and maintain a NAPA workforce to meet current challenges and future needs.
Limitations of the research
The survey was conducted over 10 years ago and a follow-up survey is timely and urgently needed. Caution should be taken when interpreting the results of this workforce audit, as the interventions delivered by the Department of Health in Western Australia at the time of the audit were largely directed by national health priority areas. This study's findings show that managers' recognition of nutrition and physical activity as major health issues was a lower priority than other factors such as obesity, social impacts and mental health (see Table 1), which may have changed since the audit. Obesity remains a public health priority, and research into effective public health policy options and interventions has progressed [49,50], emphasising the need for intersectoral action and approaches. For example, there is increasing recognition of mental health issues and stigma related to body weight [51,52]. Further government workforce audits are recommended and would need to consider the current policy and intervention context and the broader workforce involved in prevention.
However, the findings may be valuable for future workforce development given the lack of evidence on the NAPA workforce in Australia, and may contribute to evidence on the lack of progress in addressing issues such as obesity at present. The sampling was designed to target all NAPA service providers; however, individual practitioners in other settings who may have been involved with promotion in their clinical roles may not have been captured. The relatively low response rate among practitioners compared to managers is a limitation; however, other workforce audits have reported response rates of less than 50 % [38]. The use of snowball sampling and the uniqueness of the WA context may limit the generalisability of findings. In addition, it should be noted that the practitioner survey relied on self-report data. Also, several managers were unable to estimate some of their generalist workforce's time dedicated to nutrition and/or physical activity service delivery. The variable size of organisations meant some had more managers and practitioners included, although this was taken into account when enumerating the workforce so that positions were only counted once. The Department of Health NPAB manager (second author), who commissioned the audit, was not included in the survey, but the workforce at the NPAB has been included in enumeration estimates.
Conclusion
Workforce development needs to be a key strategic determinant for obesity prevention. The 2004 WA NAPA workforce audit highlighted a lack of responsibility for workforce development, an unclear and fragmented strategy, and the lack of a fit-for-purpose workforce to deliver interventions. There is no doubt the programs required to effectively influence NAPA are challenging and complex, yet there is little evidence of workforce considerations.
Funding
The Department of Health in Western Australia funded the 2003-2004 Western Australian Audit of NAPA, and Healthway, the Health Promotion Foundation, funded Curtin University to assist the translation of research into practice through the "Food Law, Policy and Communications to Improve Public Health Project". http://foodpolicy.org.au/.
Availability of data and materials
The data that support the findings of this study are available from Department of Health, Western Australia but restrictions apply to the availability of these data, which were used under license for the current study, and so are not publicly available. Data are however available from the authors upon reasonable request and with permission of Department of Health, Western Australia.
Authors' contributions AB designed the study instruments and conducted the data collection and carried out the collation of information and contributed extensively to drafting and reviewing the manuscript. CP conceptualized and designed the study and assisted with the development of the instruments, and conceptualizing and drafting the manuscript, and edited and approved the final manuscript as submitted. Both authors read and approved the final manuscript.
Authors' information AB is currently a Senior Lecturer and the Course Coordinator for the Master of Dietetics at Curtin University. She has been at Curtin as a permanent staff member since 1996 and is a highly experienced lecturer who has won teaching awards. She is responsible for teaching the public health nutrition content for a variety of courses. Her research interests are in food literacy capabilities and effective programs, nutrition during the lifecycle, and food and nutrition policy. In 2012 she was awarded Fellow of the Public Health Association of Australia in recognition of significant contribution to PHAA and the field of public health. In 2014 the Dietitians Association of Australia recognised AB as an Advanced Accredited Practising Dietitian in recognition of leadership, education, supervision, teaching and health professional training. CP works part-time for Curtin University and the Western Australian Department of Health to try to build the capacity for nutrition epidemiology in Western Australia to inform policy and practice. CP is best known for managing the Go for 2&5® fruit and vegetable social marketing campaign. She has been awarded the International Fellow of the World Cancer Research Fund, bestowed September 2012, and has achieved Fellowship of the Public Health Association of Australia, appointed September 2012. CP has a particular interest in improving nutrition for population groups who are vulnerable to poor nutrition due to their social, environmental or economic circumstances. CP was manager of the former state-wide Nutrition and Physical Activity Branch of the Department of Health at the time of the survey.
|
2018-04-03T05:52:22.102Z
|
2016-08-25T00:00:00.000
|
{
"year": 2016,
"sha1": "cfabf0838b0b7a5ceb096d13f643984c4a34258c",
"oa_license": "CCBY",
"oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/s12889-016-3544-5",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cfabf0838b0b7a5ceb096d13f643984c4a34258c",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
55325339
|
pes2o/s2orc
|
v3-fos-license
|
Latitudinal and radial variation of >2 GeV/n protons and alpha-particles at solar maximum: ULYSSES COSPIN/KET and neutron monitor network observations
Ulysses, launched in October 1990, began its second out-of-ecliptic orbit in September 1997. In 2000/2001 the spacecraft passed from the south to the north polar regions of the Sun in the inner heliosphere. In contrast to the first rapid pole-to-pole passage in 1994/1995 close to solar minimum, Ulysses now experiences solar maximum conditions. The Kiel Electron Telescope (KET) also measures protons and alpha-particles in the energy range from 5 MeV/n to >2 GeV/n. To derive radial and latitudinal gradients for >2 GeV/n protons and alpha-particles, data from the Chicago instrument on board IMP-8 and the neutron monitor network have been used to determine the corresponding time profiles at Earth. We obtain a spatial distribution at solar maximum which differs greatly from the solar minimum distribution. A steady-state approximation, which was characterized by a small radial and a significant latitudinal gradient at solar minimum, was interchanged with a highly variable one with a large radial and a small – consistent with zero – latitudinal gradient. A significant deviation from a spherically symmetric cosmic ray distribution following the reversal of the solar magnetic field in 2000/2001 has not been observed yet. A small deviation has only been observed at northern polar regions, showing an excess of particles instead of the expected depression. This indicates that the reconfiguration of the heliospheric magnetic field, caused by the reappearance of the northern polar coronal hole, starts dominating the modulation of galactic cosmic rays already at solar maximum.
Cosmic ray measurements within a wide range of heliographic latitudes in the inner heliosphere were performed by detectors on board the Ulysses spacecraft in 1994-1996. This time period was characterized by low solar activity and weak modulation of cosmic rays. As displayed in Fig. 1, Ulysses reached a maximum heliographic latitude of 80.2° in the Southern and Northern Hemispheres, and has an orbital period of about half a solar cycle. As a consequence, solar activity was high when Ulysses performed the second rapid pole-to-pole transition in 2000 and 2001. Figure 1 shows the heliolatitude and radial distance of Ulysses from 1993 to 2004. Black and blue solid circles mark the start of each year during the first and second out-of-ecliptic orbit. The red and green histograms show the evolution of the maximum latitudinal extent of the heliospheric current sheet α during the first (solar minimum) and second (solar maximum) orbit. Hoeksema (http://quake.stanford.edu/~wso/) calculates α using two different magnetic field models: (1) The "classical" model uses a line-of-sight boundary condition at the photosphere and includes a significant polar field correction.
(2) The newer model uses a radial boundary condition at the photosphere, has a higher source surface radius (3.25 solar radii), and requires no polar field correction. In Fig. 1 we used the "classical" model. Note that the figure would not be altered qualitatively by using the newer model, whereas the absolute numbers would be different.
For the purpose of this paper, the exact value of α is not crucial, as α is used as a proxy for solar activity. It is low and high during solar minimum and maximum, respectively. All large modulation effects in solar cycle 22 during the first Ulysses orbit occurred while the spacecraft was at low latitudes in 1990 to 1993. Ulysses was again close to the heliographic equator by the time of the onset of solar activity in solar cycle 23, at the end of 1997 and beginning of 1998. Since then α and also solar activity remained high.
The evolution of the maximum latitudinal extent of the heliospheric current sheet α (a) and the solar polar magnetic field strength for the Southern and Northern Hemisphere (b) are displayed in Fig. 2 (from http://quake.stanford.edu/~wso/), together with the daily averaged count rate of 100-125 MeV protons (c) and the 26-day averaged "quiet time" count rates of >2 GeV protons and >2 GeV/n alpha-particles (d) from Ulysses' launch in 1990 to mid 2002. The corresponding 20 nHz smoothed solar polar magnetic field strength is superimposed. The 20 nHz low pass filter is used by the Wilcox Solar Observatory to eliminate yearly geometric projection effects. From the time profiles it follows that the two hemispheres reversed their polarities in 1990 and 2000. Hence, the heliospheric magnetic field is expected to reverse its polarity accordingly. In 2001 the northern polar coronal hole was formed (McComas et al., 2001a), showing the corresponding signatures in the heliospheric magnetic field (Smith et al., 2001), indicating the decline towards solar minimum. It is important to note that such interplanetary signatures had not been observed by Ulysses in 2000, when the spacecraft was at 80° S heliographic latitude.
The 26-day averaged "quiet time" counting rates in panel (d) of Fig. 2 are presented as percentage changes with respect to the maximal rates C_max measured in mid 1997 at solar minimum. "Quiet time" profiles have been determined by using only time periods when the 100-125 MeV proton channel (panel (c) of Fig. 2) showed no contribution of solar or interplanetary particles. Marked by shading in (c) are the two rapid pole-to-pole passages in 1994/1995 and 2000, and the ecliptic crossing in 1998 (EC). The observed variations in the particle intensities are caused by temporal and spatial variations due to the Ulysses trajectory. Therefore, the variation from solar minimum to solar maximum in 2000 does not reflect the total modulation amplitude at these rigidities. The two rapid pole-to-pole transitions at ∼1.5 AU should provide the best "snapshot" of the spatial distribution of cosmic rays in the inner heliosphere at solar minimum and maximum, respectively.
The 3-dimensional heliosphere at solar minimum
Around solar minimum there is a clear separation between low and high latitudes: (1) While the region close to the heliographic equator is embedded in slow solar wind, polar regions are dominated by the high speed solar wind, originating from the polar coronal holes (McComas et al., 2001a).
(2) The heliospheric current sheet, the thin layer separating both magnetic polarities of the heliospheric magnetic field, is embedded in the slow solar wind regime and stable around solar minimum (McKibben et al., 1998). In an A > 0 solar magnetic cycle, like in the 1990s, the heliospheric magnetic field pointed outwards and inwards in the Northern and Southern Hemispheres, respectively. In that case drift models predict that positively charged cosmic rays drift predominantly inward through the solar polar regions and then outward through the equatorial regions along the heliospheric current sheet (Jokipii et al., 1977). (3) The latitudinal distribution of high energy cosmic rays measured by Ulysses showed the expected behavior (Paizis et al., 1995; Heber et al., 1998): the count rate of >2 GeV/n protons and helium increased towards high latitudes, and was nearly symmetric with respect to the equator (Belov et al., 1999). The observed time profile at these rigidities during Ulysses' first fast latitude scan in 1994/1995 is dominated by the latitudinal gradient (Belov et al., 1999).
The 3-dimensional heliosphere at solar maximum
The second fast latitudinal scan in 2000/2001 was the first exploration of the inner 3-dimensional heliosphere around solar maximum. The heliosphere was completely different from that during the first rapid pole-to-pole passage in 1994/1995. A relatively quiet, stable and well-structured heliosphere was interchanged with a highly variable solar wind and heliospheric magnetic field showing a large number of irregularities (McComas et al., 2001a; Smith et al., 2001). In contrast to solar minimum, when a clear separation existed between the low-latitude slow solar wind and the fast solar wind, the heliospheric current sheet, which had a "simple" and stable configuration during solar minimum, became a much more complex structure and was observed at polar regions (Smith et al., 2001) (see Fig. 1). During Ulysses' second out-of-ecliptic orbit, the polar coronal holes disappeared and were interchanged with short-lived coronal holes originating at low latitudes. This situation became even more complex due to (1) the increasing number of coronal mass ejections, causing large Forbush effects, which were nearly absent during solar minimum, and (2) the reversal of the solar magnetic field, as described above.
Data analysis
As described in the previous section, Fig. 2 displays the daily averaged count rates of 100-125 MeV protons and 26-day "quiet time" averages of >2 GeV/n protons and alpha-particles from the Kiel Electron Telescope (KET) on board Ulysses. Figure 3 shows the high energy KET channels with higher time resolution along with a 1-AU baseline measurement, which is derived from neutron monitor and IMP-8 observations.
Fig. 3. Daily and 26-day running mean averaged variations of >2 GeV/n protons and helium as measured by the KET, and the 10 GV cosmic ray variation at Earth, inferred from neutron monitor and IMP-8 observations.
A simple inspection of Figs. 2 and 3 shows the differences between the first (solar minimum) and the second (solar maximum) orbit. While the spatial variation dominates the temporal variation during Ulysses' first orbit from 1994 to fall of 1997, the observed count rate variation from 1998 to 2001 is determined by the increasing temporal modulation in the inner heliosphere, reaching solar maximum conditions in 2000/2001. Although numerous solar particle events have been observed, the "quiet time" count rates of galactic cosmic ray protons and alpha-particles are continuously increasing during the second fast latitude scan, indicating that none of these events gives rise to a modulation barrier, like the March to July activity in 1991 (McDonald et al., 2000). Ulysses' measurements alone are not sufficient to infer a concept of the spatial distribution of the cosmic ray phase space density during the rising phase of the solar cycle because of its temporal variation. However, Fig. 2 indicates that, because of the continuous increase during the rapid pole-to-pole passage in 2000/2001, no significant latitudinal gradients at solar maximum could be present. If such latitudinal gradients had been present, then the temporal and Ulysses' radial variation must have canceled them exactly. We find such a symmetric temporal variation, centered around day 136 of 2001, unlikely, and, therefore, reject this scenario. Since Ulysses moved from a distance of ∼2 AU at southern polar regions inward to 1.34 AU close to the heliographic equator and then back to ∼2 AU over the north pole, a radial gradient of ∼3%/AU would lead to a 1.021 times higher flux at polar regions. A negative latitudinal gradient of the order of 0.026%/degree might be masked by the radial variation.
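As a quick check of the magnitudes quoted in this paragraph, the short calculation below (a sketch using the ~0.66 AU radial excursion and ~80° latitude span described above) reproduces the roughly 2% radial effect and the size of the latitudinal gradient it could mask.

```python
import numpy as np

# Radial excursion of Ulysses during the 2000/2001 fast latitude scan:
# ~2 AU over the poles, 1.34 AU near the heliographic equator (values from the text).
dr = 2.0 - 1.34            # AU
g_r = 0.03                 # assumed radial gradient, 3 %/AU expressed as a fraction

# Flux ratio between the polar and equatorial parts of the orbit for a purely
# radial dependence I ~ exp(g_r * r): about 1.02, i.e. ~2 % higher flux at the poles.
flux_ratio = np.exp(g_r * dr)
print(f"radial flux ratio: {flux_ratio:.3f}")

# Size of a (negative) latitudinal gradient, spread over ~80 degrees of latitude,
# that this radial increase could mask: ~0.025 %/degree, of the order quoted above.
g_theta_masked = (flux_ratio - 1.0) / 80.0 * 100.0
print(f"masked latitudinal gradient: {g_theta_masked:.3f} %/degree")
```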
In order to derive mean radial and latitudinal gradients around solar maximum, a model describing the temporal and time-dependent spatial parameters is applied. The temporal variations can be described by appropriate 1 AU observations, as given by the neutron monitor network, and displayed together with the Ulysses observations in Fig. 3. The radial and latitudinal gradients are then derived from a fit to the KET data using Ulysses' trajectory parameters as displayed in the upper panel of Fig. 4. We assume that the temporal, radial and latitudinal dependencies of the cosmic ray intensities are separable, so that the cosmic ray intensity at Ulysses can be described as:

I^i(t, r, θ) = I^i_0(t_0, r_0, θ_0) · δ^i(t) · f^i_r(t, r) · f^i_θ(t, θ),   (1)

where I^i_0(t_0, r_0, θ_0) is the particle intensity at time t_0 at a distance r_0 from the Sun and at a heliographic latitude θ_0, and δ^i(t) is the intensity variation at time t. Note that δ^i(t) < 1. The index i is used for the type of particle (p for protons and h for alpha-particles).
Radial variation
Since Ulysses' radial variation is small, we can write f^i_r(t, r) = exp(g^i_r(t) · (r − r_0)), with g^i_r(t) a time-dependent radial gradient. It is reasonable to relate g^i_r(t) to the depth of modulation δ^i(t): g^i_r(t) = g^i_{0,r} + g^i_{1,r} δ^i(t), where g^i_{0,r} describes the radial gradient at solar minimum and g^i_{1,r} its changes within the solar cycle. In what follows we will use both the stationary and the time-dependent approximation of the radial gradient.
Latitudinal gradient
Although Ulysses' observations over the whole latitude range from the heliographic equator to southern and northern polar regions indicate that two different modulation regions exist, with different latitudinal gradients in the fast and "slow" solar wind regimes (Heber et al., 2002), we assume that f^i_θ(t, θ) = exp(g^i_θ(t) · (θ − θ_0)). Herein g^i_θ(t) is the time-dependent latitudinal gradient.
Temporal variations
Unfortunately, there is no 1-AU baseline instrument for the KET cosmic ray measurements available. The time variation I^i_0(t_0, r_0, θ_0) has to be estimated from observations of high energy particles by the neutron monitors of the world-wide network on Earth. From these data a rigidity spectrum of the cosmic ray density variations can be derived on a daily basis (Belov et al., 1999, 2001). Since a simple power law rigidity spectrum is not sufficient to describe the long-term variations, we assumed a rigidity dependence f(R) = 1/(β + R^γ). The expected neutron monitor counting rate variation can then be expressed in terms of a_0 and c_0, the amplitude and coupling coefficients of the isotropic cosmic ray variation, respectively, the geomagnetic cutoff rigidity R_c, and R_0 = 10 GV. The parameters a_0, β, and γ can be found for every day by comparing the expected variations with the real variations observed by the neutron monitor network (more than 30 stations). While the approach of deriving the rigidity spectrum of the temporal variation is useful at solar minimum (Belov et al., 1999), it is not reliable around solar maximum because of the large uncertainties, as described by Belov et al. (2001). In this paper we determine the temporal modulation for >2 GeV/n protons and alpha-particles, assuming that the modulation depth δ^i(t) is proportional to the modulation depth δ_10(t) at 10 GV. The latter can be determined from the neutron monitor network observations with high accuracy. Taking into account all the assumptions listed above and introducing l^i_mod = ln(I^i(t, r, θ)), we can rewrite Eq. (1) as:

l^i_mod = ln(I^i(t, r, θ)) = a^i + b^i_δ · δ_10 + g^i_θ · θ + (g^i_{0,r} + g^i_{1,r} · δ^i) · r.   (2)

Herein the explicit time dependence of the five parameters, especially of g_θ, has been neglected. We used the least-squares method to obtain the four or five unknown parameters in Eq. (2) from the observations.
It is important to note that the cosmic ray spatial distribution during high solar activity is more complicated than at solar minimum. For example, during Ulysses' solar minimum orbit from 1994 to 1997 the cosmic ray observations were dominated by (1) latitudinal, (2) radial, and (3) temporal variations. Therefore, we could determine the latitudinal gradient during Ulysses' fast latitude scan, and the radial gradient when Ulysses was back at the heliographic equator in 1997 (Belov et al., 1999). At solar maximum one should take into consideration that the parameters g_r and g_θ in Eq. (2) become dependent on time, radial distance and heliographic latitude (Fujii and McDonald, 1997). The cosmic ray time profile, which should not be related to Ulysses' position, correlates occasionally with the spacecraft distance and/or latitude. Such a correlation can be essentially high on relatively small time intervals (less than a year). To obtain a reliable and stable fit of Eq. (2) to the data we need to analyze the data sets over long time periods.
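As an illustration of this fitting procedure, the sketch below performs an ordinary least-squares fit of the linearised model in Eq. (2). It is a schematic example only: the Ulysses/KET intensities, the 10 GV modulation depth δ_10 from the neutron monitor network, and the trajectory (r, θ) are replaced here by synthetic stand-ins, and the four-parameter version with a constant radial gradient (g_{1,r} = 0) is used.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic stand-ins for the real inputs --------------------------------
# delta10: modulation depth at 10 GV from the neutron monitor network (per day)
# r, theta: Ulysses radial distance [AU] and heliographic latitude [deg]
n = 1200
delta10 = 0.7 + 0.3 * np.sin(np.linspace(0, 4 * np.pi, n))
r = 1.3 + 1.2 * np.abs(np.sin(np.linspace(0, 2 * np.pi, n)))
theta = 80.0 * np.sin(np.linspace(-np.pi / 2, np.pi / 2, n))

# "True" parameters used to generate fake Ulysses intensities in the Eq. (2) form
a_true, b_true, g_theta_true, g_r_true = 2.0, 0.8, 0.0005, 0.04
ln_I = (a_true + b_true * delta10 + g_theta_true * theta + g_r_true * r
        + 0.01 * rng.standard_normal(n))          # measurement noise

# --- Least-squares fit of Eq. (2) --------------------------------------------
# Design matrix: constant, delta10, theta, r
X = np.column_stack([np.ones(n), delta10, theta, r])
params, *_ = np.linalg.lstsq(X, ln_I, rcond=None)
a_fit, b_fit, g_theta_fit, g_r_fit = params

print(f"g_r     = {g_r_fit * 100:.2f} %/AU")
print(f"g_theta = {g_theta_fit * 100:.4f} %/degree")
```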
Results and discussion
To determine the mean radial and latitudinal gradients we analyzed the time period from January 1998 to mid 2002. This period is characterized by (1) increasing solar activity from 1998 to 2000, (2) solar maximum activity of cycle 23 in 2000/2001, and (3) declining activity in 2002. It is important to note that the latter period includes the reversal of the heliospheric magnetic field. The total observed cosmic ray modulation for 10 GV particles at Earth exceeded 30%. In what follows we will discuss the observations excluding the time period of the heliospheric magnetic field reversal, which correspond to the times when Ulysses was in the Southern Hemisphere, the observations during the second fast latitude scan, and the latest data, taken in the Northern Hemisphere. The cosmic ray measurements are dominated by the large temporal variation, leading to a high correlation between the cosmic ray behavior measured at Earth and on board Ulysses.
Due to the different heliographic latitudes and longitudes of Ulysses and Earth, not all short-term cosmic ray decreases caused by, for example, coronal mass ejections or corotating interaction regions are seen both at Earth and at Ulysses. Since Ulysses is at a larger radial distance than Earth, an outward moving disturbance will reach the spacecraft later. In order to account for these effects, we applied the following two corrections to the data:
1. Measurements at Earth have been "shifted" to the Ulysses position. If we take into account a propagation speed of 400 km/s for a disturbance moving from 1 AU to Ulysses at a radial distance r_u, the time profiles at Earth and at Ulysses are better correlated; the correlation coefficient increased from 0.954 to 0.972 (a numerical illustration of the shift follows below).
2. Solar rotation averaged running means have been used to minimize the longitudinal differences.
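As a rough illustration of the size of this shift, the calculation below assumes the 400 km/s propagation speed quoted above and an example Ulysses distance of 2 AU (an illustrative value, not a specific epoch from the data).

```python
AU_KM = 1.495978707e8      # one astronomical unit in km
v_sw = 400.0               # assumed propagation speed of the disturbance, km/s
r_ulysses = 2.0            # example Ulysses radial distance, AU

# Travel time of a radially propagating disturbance from 1 AU (Earth) to Ulysses
delay_s = (r_ulysses - 1.0) * AU_KM / v_sw
print(f"propagation delay: {delay_s / 86400:.1f} days")   # ~4.3 days
```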
If we assume that the radial gradient is constant over the time period of interest (from January 1998 to May 2001), i.e. g_{1,r} = 0, the fit of Eq. (2), as displayed in Fig. 4, leads to the following results for >2 GeV/n protons and alpha-particles (for comparison, the values obtained at solar minimum are given too; Belov et al., 1999): Using these parameters we obtain for protons and alpha-particles correlation coefficients of 0.972 and 0.81. Note that the poor correlation coefficient for the alpha-particles results from the large statistical uncertainties. If we take into account the variation of the radial gradient with the modulation depth, we find for >2 GeV protons that the radial gradient varies between 3.4 and 4.2%/AU. However, in that case the correlation coefficient is statistically not significant, and it is important to note that no temporal dependence of the latitudinal gradient has been taken into account; therefore, our results are not in contradiction with the result at lower rigidities from Heber et al. (2002), who attributed the higher count rates of the data compared to the expectation in mid 1999 to the existence of latitudinal gradients. These gradients vanish later. It is important to keep in mind that Ulysses was below 20° S before 1999 and around 30° S when this latitudinal gradient was observed. In Heber et al. (1997, 1998) it was shown that latitudinal gradients are significantly smaller close to the heliographic equator than at higher latitudes, so that a significant contribution of the latitudinal gradient is not expected to occur before 1999. If we attribute the excess of the observations in mid 1999 in Fig. 4 to a latitudinal gradient, then we find g_θ ≅ 0.17%/degree. This value is in good agreement with the value g_θ = 0.17 ± 0.02%/degree found by Belov et al. (1999)
at solar minimum.
In what follows we will neglect the latitudinal dependence, because the mean latitudinal gradient was found to be close to zero for the high latitude observations (see Fig. 4). We can rewrite Eq. (2) as:

l^i_mod = a^i + b^i_δ · δ_10 + (g^i_{0,r} + g^i_{1,r} · δ^i) · r.   (3)

Our analysis showed that this approximation is valid even until July 2001, and we can extend our analysis to that period. We obtain the following parameters for >2 GeV/n protons and alpha-particles: The red curves in Fig. 4 correspond to the result of this spherically symmetric approximation. It is important to note that from mid 1999 to mid 2001, corresponding to 2 years around solar maximum, the cosmic ray distribution is in good agreement with a spherically symmetric one, which is characterized by large radial and nearly no latitudinal gradients. In contrast, Belov et al. (1999) and Heber et al. (1997) determined radial and latitudinal gradients g^p_r = 0.5%/AU and g^p_θ = 0.19 ± 0.02%/degree for the first Ulysses orbit. As mentioned above, they also found that the latitudinal gradient was small only within the narrow region of the streamer belt.
To visualize the differences between the mean cosmic ray distribution obtained by Ulysses and the neutron monitor network at solar minimum in 1994 to 1996 and around solar maximum from mid 1999 to mid 2001, Fig. 5 (left) and Fig. 5 (right) display these distributions within a sphere of 5 AU radius. To obtain the solar minimum distribution we used g_r = 0.5%/AU, g_θ = 0 for |θ| < 15°, else g_θ = 0.19%/degree, gradually decreasing to zero above 70°. To obtain the solar maximum distribution, a constant radial gradient of 4%/AU has been applied. In contrast to solar minimum, our analysis indicates a spherically symmetric distribution of cosmic rays around solar maximum. The intensities in the inner heliosphere depend on the radial distance from the Sun only, while in 1994 to 1996 the latitude dependence outside of the streamer belt (∼15°) dominates the observations at solar minimum. Since the radial gradient was increasing in 1997/1998 (Belov et al., 1999; McDonald et al., 2001), we suggest that the transformation from the minimum to the maximum distribution must have occurred around mid 1999, when the spacecraft was well below the heliographic equator, allowing for a good determination of latitudinal effects.
Another important conclusion can be made by comparing the spatial distributions displayed in Fig. 5. Since latitudinal gradients were positive at solar minimum in the last cycle and vanishing thereafter, the total modulation is higher at polar latitudes than in the ecliptic. While the observations at solar minimum in an A > 0 solar magnetic cycle confirm the results from advanced modulation models (Potgieter, 2001), the distribution obtained by Ulysses during the next A < 0 solar minimum will be a crucial test for such models.
Cosmic ray gradients around solar maximum in the Northern Hemisphere
The results discussed in the previous section relate to the Ulysses observations in the Southern Hemisphere. The fit of Eqs. (2) and (3) to the Ulysses data from mid 2001 to the most recent data was not successful. This might have several causes:
- In contrast to the Southern Hemisphere observations, all available observations were performed during the declining phase of solar cycle 23;
- In contrast to the high southern latitudes, a polar coronal hole was observed in the Northern Hemisphere, indicating the reconfiguration at the Sun. If latitudinal gradients are tied to the fast solar wind region, then the fit with both equations will fail, because of the spatial dependence of the latitudinal gradient;
- As argued by Heber et al. (2002), the temporal changes of the radial and latitudinal gradients do not occur simultaneously.
In order to determine the gradients by fitting Eq. (2) to the data, more data at low latitudes are needed to determine the radial gradient with better precision. Due to the reconstruction of the heliospheric magnetic field towards a well-ordered structure, the galactic cosmic ray distribution is expected to change (Heber et al., 2003), which might not be expressed by the modulation depth, so that Eq. (2) is not applicable.
The solar maximum fast latitude scan
The second fast latitude scan occurred early in the declining phase of solar cycle 23, when the cosmic ray intensities started to recover. Besides the relatively short time period (it took Ulysses 11 months from the southern polar cap to the northern one), the radial variations were small. We selected the part of the Ulysses orbit when the spacecraft was within 2.5 AU from the Sun, which covers the time period from December 2000 to December 2001 and the whole latitudinal range from 80° S to 80° N. Hence, we used the radial and temporal parameters as determined in the previous section. The residual profile should, therefore, reflect the latitudinal dependence of the cosmic ray fluxes, as displayed in Fig. 6 for 2 GeV/n protons and alpha-particles as a function of Ulysses' latitude. A simple inspection of Fig. 6 shows that the proton and helium intensities depend only weakly on latitude, with the exception of the increase above ∼50° N for both species. The drop in the Southern Hemisphere at about 60° S is only observed in 2 GeV protons and might rather be caused by the increased solar activity in March 2001, with large short-term cosmic ray variations, than by the latitudinal distribution. In contrast, the increase in the Northern Hemisphere is seen in both channels and has a consistent trend. If we attribute this trend to a latitudinal gradient, then values of g_θ ∼ 0.12%/degree and g_θ ∼ 0.1%/degree for protons and alpha-particles are obtained. It is important to note that this latitudinal gradient is smaller than the one observed at solar minimum in 1994/1996, but it is still positive. At first view this is a surprise, because the solar magnetic field reversed in 2000/2001, and drift should operate in this solar cycle such that positively charged particles stream in along the heliospheric current sheet and out in polar directions, leading to negative latitudinal gradients. However, the latitudinal dependence of cosmic rays is not only determined by drifts, but also by diffusion, convection and adiabatic deceleration. These mechanisms depend differently on the heliospheric conditions. In this context it is important to note that significant latitudinal gradients were observed mainly in the fast solar wind regime (Belov et al., 1999). If the particle transport depends on such structures, one expects no dependence on the solar magnetic epoch. The reappearance of the northern polar coronal hole (McComas et al., 2001b) would consequently lead to larger cosmic ray intensities at polar regions than close to the streamer belt. At the same time the structure of the heliospheric current sheet was still very complicated. Thus, drifts, which would cause negative latitudinal gradients, were not fully "operational", leading to positive latitudinal gradients. This is in agreement with the constancy of the e/p-ratio at 2.5 GV as measured by Ulysses (Heber et al., 2003). In order to analyze and interpret the observations, further measurements are needed. A detailed analysis of the Northern Hemisphere data will be possible when the spacecraft has returned close to the ecliptic plane.
Summary and conclusion
In this paper we determined important modulation parameters, like the radial and latitudinal gradients, as well as the modulation depth from solar minimum to maximum, by using Ulysses' KET and neutron monitor network observations. We could show that the mean spatial distribution at solar minimum from 1994 to 1996 is remarkably different from the one at solar maximum from 1998 to mid 2001. While the positive latitudinal gradient dominates the picture at solar minimum, the distribution is spherically symmetric around solar maximum, with large radial gradients in the inner heliosphere. The increase in solar activity is accompanied by an increase in the radial gradient. When Ulysses was at high heliographic latitudes above 30° S from mid 1999 on, no significant latitudinal structure could be found until July 2001, when Ulysses went above ∼50° N and the tilt angle α fell sharply. It is interesting to note that, as a consequence of the change of the cosmic ray distribution from a latitude-dominated to a spherically symmetric one at solar maximum, the magnitude of the 11-year cosmic ray cycle is substantially larger at polar regions than close to the heliospheric equator, in particular near Earth. Unfortunately, the observations during the slow northern descent of Ulysses in 2001/2002 are difficult to interpret, and we have not been successful in determining the gradients and temporal variations independently from each other. However, we investigated the second fast latitude scan, assuming that the radial gradient as well as the parameters describing the temporal variation stay constant during these 11 months. As a result of this analysis we find higher cosmic ray intensities in the northern polar region than close to the heliographic equator. If we interpret these as latitudinal gradients, we can determine g_θ to be 0.12%/degree and 0.1%/degree for 2 GeV/n protons and alpha-particles, respectively, with a lower accuracy for the helium channel. It is important to note that this interval is correlated with the time period when Ulysses is embedded in the recently developing northern polar coronal hole. From mid 1999, when Ulysses was above 30° S, to mid 2001, only a highly variable and slow solar wind was observed by Ulysses (McComas et al., 2001b). In Heber et al. (1998) and Belov et al. (1999) we showed that latitudinal gradients are small in the streamer belt dominated region. Therefore, we argue here that the expansion, described by the tilt angle, of the structure of the streamer belt to high heliographic latitudes leads to a strong increase in modulation and determines the form of the cosmic ray spatial distribution in the inner heliosphere. As the tilt angle has been decreasing towards solar minimum, with the development of the northern polar coronal hole since 2001, fast solar wind emanating from that hole has been observed by Ulysses. Nearly simultaneously an increase in the particle intensities at high northern polar latitudes can be observed. This is in agreement with the concept of a close correlation of the cosmic ray modulation with the HMF configuration.
Topical Editor R. Forsyth thanks two referees for their help in evaluating this paper.
|
2018-12-05T02:37:43.432Z
|
2003-06-30T00:00:00.000
|
{
"year": 2003,
"sha1": "a21cbfd5cc5da4049967d41d4fb3545cff43279c",
"oa_license": "CCBY",
"oa_url": "https://angeo.copernicus.org/articles/21/1295/2003/angeo-21-1295-2003.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "bac34595d54f448baec079972c884811db4014b3",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
55535325
|
pes2o/s2orc
|
v3-fos-license
|
Exposure Analysis of Drinking and Dietary Contaminants in a Selected Population, Padaviya, Anuradhapura
This study focused on analysing the exposure to selected drinking water and dietary contaminants and on assessing the health risk for a selected population in Padaviya, Anuradhapura. Thirty families were randomly selected: fifteen families with CKDu patients present and fifteen families without CKDu patients. A questionnaire-based social survey was conducted and relevant data were collected for the risk analysis. Water, rice and soil samples were collected on a family basis for quality assessment. Nitrate-N, total hardness and fluoride varied within the ranges of 1.01-23.4 mg/L, 40.04-644.58 mg/L and 0.47-1.92 mg/L respectively. All physiochemical parameters were significantly different among the wells (P<0.05). Water pH, conductivity and TDS in well water were below the Sri Lankan standard for potable water level (SLPWL). However, values exceeding the SLPWL for NO3-N (C10, C14, C15 & N3), hardness (C12 & C13) and fluoride (C7, C15 & N3) were observed in some wells. Both iron and copper concentrations in well water were lower than the provisional maximum tolerable daily intake (PMTDI) of WHO (Fe: 2 mg/L and Cu: 2 mg/L). Dietary iron and copper concentrations in rice were higher than the PMTDI of WHO (0.5 mg/kg) except for family N7. Copper and iron varied within the ranges of 1.55-48.4 mg/kg dw and 467.08-893.61 mg/kg dw in soil respectively. The probable exposure concentration was higher than the probable non-exposure concentration in the selected population. Therefore, the Relative Risk for CKDu was greater than 1 for all selected contaminants, indicating a possible risk from drinking water and eating rice for the selected contaminants. Non-cancer risk values in the selected families were higher than the risk criterion (1×10⁻⁶), and therefore the contaminants in drinking water and rice in the Padaviya area can be considered risk factors for the prevailing chronic kidney disease.
INTRODUCTION
Contamination from both natural and anthropogenic sources has become a major issue responsible for serious health problems worldwide. The European Food Safety Authority recommends that women should drink about 1.6 litres and men about 2.0 litres of water per day, and a person consumes about two or three meals per day. Therefore, water and food consumption is the major exposure route for many human contaminants, and degradation of the resource caused by various anthropogenic activities increases the availability of contaminants. Sri Lanka is ranked in the eighth position in the world on the list of countries that use the highest quantity of chemical fertilizer (Mudalige, 2014). The highest usage of fertilizers and agro-chemicals has been reported in the North Central Province (NCP), where they are applied over about 128,000 ha (Weeraratna, 2013). Recent findings show that some toxic heavy metals, introduced with fertilizers and agro-chemicals, are major contaminants in Sri Lanka (Jayasumana, 2014; Bandara et al., 2008).
More than 80% of the rural drinking water supply needs are met from groundwater by means of dug wells and tube wells (Panabokke & Perera, 2005). High fluoride levels (above 1.5 mg/L) in well water in the dry zone had been observed as far back as 1976. Subsequent studies have shown that 40% of wells in the NCP were rich in fluoride, and 456 deep tube wells in the Anuradhapura district have also been found with fluoride ranging from 0.78 to 2.68 mg/L (Lasanntha et al., 2008). The same study reported that 34% of the wells exceeded the maximum desirable level of 100 mg/L of calcium in drinking water and that 8% of the wells exceeded the maximum permissible level of 240 mg/L in the Anuradhapura District. Electrical conductivity in the Anuradhapura district has been reported as 350 µS/cm, indicating the abundance of electrolytes in the water (Dissanayake, 2005).
Nowadays, chemical contaminants are a major concern for food safety because of the increased role of man-made chemicals in our modern lifestyles. Rice is the main diet of Sri Lankans all over the country, and rice and several other popular food items have been contaminated with heavy metals such as cadmium (Bandara et al., 2008). Cadmium contamination has been observed in rice, Nelumbo nucifera (lotus) rhizomes, cow's milk and Tilapia (Oreochromis niloticus) (Bandara et al., 2008). The overuse and misuse of agrochemicals is thought to have contributed to heavy metal contamination (Cd, Cr, Ni, Pb, As, etc.) and to both acute and chronic renal failure (Bandara et al., 2010). Therefore, this study aimed to identify the major contaminants in drinking water and in selected food types in the Padaviya area and to follow a risk assessment model in order to assess the exposure levels of the identified contaminants.
Study Area
The study area was selected according to data on the prevalence of CKDu obtained from the Office of the Provincial Director of Health Services, North Central Province, Anuradhapura (Fig 1). Padaviya is 93 km away from Anuradhapura and receives a mean annual rainfall of 1450 mm, mostly falling from October to March. A questionnaire-based social survey and laboratory analyses were carried out to collect data for the human health risk assessment. Thirty families were selected, fifteen with CKDu patients present and fifteen without CKDu patients.
Collection of Samples
Paranagama (2013) has reported that the sources of drinking water of CKDu patients are dug wells (92 %) and tube wells (8 %). Therefore, water samples were collected from the water sources, such as dug wells and tube wells, belonging to the selected families. On-site measurements of pH, conductivity, Total Dissolved Solids (TDS), salinity, Dissolved Oxygen (DO) and temperature were taken with a Multi-Parameter Meter (HACH sensION 156). The filtered water samples were stored with ice in a heat-insulated box for transportation. Rice samples were collected from the selected families, while soil samples were collected from their own paddy fields.
Analysis of chemical parameters
The filtered water samples were analyzed for Nitrate-N according to the sodium salicylate method, water-soluble salts of phosphoric acid were measured by the automated ascorbic acid reduction method (Rand et al., 1975, p. 481-482), and the organic phosphorus content of each water sample was measured by the persulphate digestion method followed by the automated ascorbic acid reduction method (Rand et al., 1975, p. 481-482). The fluoride concentration of each filtered water sample was measured directly with a HACH DR 4000U Spectrophotometer at a wavelength of 580 nm using SPADNS solution (Rand et al., 1975, p. 393-394). The water samples were also analysed for total hardness by the EDTA titration method.
Analysis of heavy metals
Filtered water samples were acidified with 10% conc. nitric acid (Analar grade). Collected rice samples were thoroughly washed several times with deionized water and dried at 105°C in an oven until a constant weight was obtained. Dried samples were ground to a fine powder using a mortar and pestle, and the fine powder was sieved using a 250 µm sieve. Then, 5 mL of concentrated HNO3 (65%) and 2.5 mL of H2O2 (30%) were added to each 0.5 g of sieved rice sample, and the solution was heated on a Kjeldahl heating digester under a fume hood at 80°C for 2-3 h, until clear transparent solutions were obtained. The final solutions were filtered through Whatman No. 41 filter paper (Jalbani et al., 2014). All soil samples were air dried on polypropylene sheets at room temperature for several days until they were deprived of moisture. The soil samples were then well ground using a porcelain mortar and pestle and sieved through a 250 µm mesh sized sieve. The pre-digestion step was done at room temperature for 24 h with 10 mL of a (3:1) mixture of 12 M HCl and 17 M HNO3. The suspension was digested on a GK 06 Kjeldahl heating digester under a fume hood at 130°C for 15 min. The obtained suspension was cooled at room temperature and filtered through Whatman No. 41 filter paper (Peña-Icart et al., 2011).
A series of standard metal solutions was prepared for Cu using a stock solution of 1,000 mg/L (BDH chemicals). Concentrations of Cu were measured using an Atomic Absorption Spectrophotometer (SpectrAA 220 AAS). The iron concentrations of rice, soil and water samples were measured according to the thiocyanate colorimetric method (University of Canterbury, 2011).
Human Health Risk Assessment
The data taken from the questionnaire-based social survey were prepared for use in the risk assessment.
Exposure Analysis of water and dietary intake
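To make the exposure and risk quantities used in this study concrete, the sketch below implements a conventional chronic-daily-intake and hazard-quotient calculation, together with a simple PEC/PNEC ratio as a relative-risk indicator. It is a generic illustration only: the formulation assumed here is the standard one (intake = concentration × ingestion rate / body weight; hazard quotient = intake / reference dose), and all parameter values are illustrative assumptions rather than the study's survey data.

```python
def chronic_daily_intake(conc, ingestion_rate, body_weight):
    """Chronic daily intake (mg/kg/day) of a contaminant.

    conc           : contaminant concentration (mg/L for water, mg/kg for rice)
    ingestion_rate : daily intake of the medium (L/day or kg/day)
    body_weight    : body weight (kg)
    """
    return conc * ingestion_rate / body_weight

def hazard_quotient(cdi, reference_dose):
    """Non-cancer hazard quotient: intake divided by the reference dose (RfD)."""
    return cdi / reference_dose

# Illustrative values only (not taken from the study's survey data):
fluoride_water = 1.9        # mg/L, upper end of the measured well-water range
water_intake = 2.0          # L/day
body_weight = 60.0          # kg
rfd_fluoride = 0.06         # mg/kg/day, assumed reference dose

cdi = chronic_daily_intake(fluoride_water, water_intake, body_weight)
hq = hazard_quotient(cdi, rfd_fluoride)
print(f"CDI = {cdi:.3f} mg/kg/day, HQ = {hq:.2f}")

# Relative risk indicator from probable exposure (case group, PEC) versus
# probable non-exposure (non-case group, PNEC): a ratio > 1 suggests a
# possible risk for the case group.
pec, pnec = 0.063, 0.041    # illustrative intakes, mg/kg/day
print(f"Relative risk = {pec / pnec:.2f}")
```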
Statistical analysis
One-way ANOVA was carried out to determine the differences in water quality parameters among the selected wells. Multiple comparisons of the water quality parameters were performed using the Tukey HSD test (SPSS, Version 16.0).
RESULTS AND DISCUSSION
The physical parameters pH, conductivity and total dissolved solids (TDS) did not exceed the Sri Lankan standard for potable water level (SLPWL; SLS 614, 1983), and there were significant differences in these parameters among the 30 wells (P < 0.05). Water pH and electrical conductivity (EC) varied within the ranges of 6.86-7.95 and 16.25-755 µS/cm respectively (Table 1). Jayawardana et al. (2010) reported that pH values changing from weakly acidic to weakly basic (4.0 to 8.2, with an average of 7.2) with very low EC values (2.40 mS/m) indicated that dissolved ionic species are very low in the water. They also explained that, in the dry zone, fluoride values increase under conditions of slightly alkaline pH (7.5-8.2) and relatively low EC (1.0-2.5 mS/m) (Jayawardana et al., 2010). Mechenich & Andrews (2004) have reported that much greater hardness may indicate the presence of contaminants which may occur naturally or be influenced by human activity. In the current study, the TDS values of most of the wells were much greater than two times the hardness (e.g., well no. C3).
Many researchers have stated that water hardness, high levels of fluoride and contamination with heavy metals can affect human health (Bandara et al., 2010; Bandara et al., 2008; Dissanayake, 2005). In the present study, total hardness and fluoride varied within the ranges of 40.04-644.58 mg/L and 0.47-1.92 mg/L respectively. The concentration of nitrate-N varied from 1.01 to 23.4 mg/L among the 30 wells. Paranagama (2013) observed nitrate-N varying from 0.3×10⁻⁶ to 5.82×10⁻⁶ mg/L in Padaviya. All chemical parameters were significantly different among the wells (P<0.05). Some parameters, namely NO3-N (C10, C14, C15 & N3), hardness (C12 & C13) and fluoride (C7, C15 & N3), exceeded the SLPWL (SLS 614, 1983) in some wells (Table 2). Both iron and copper concentrations in well water were lower than the provisional maximum tolerable daily intake (PMTDI) of WHO (Fe: 2 mg/L and Cu: 2 mg/L). Dietary iron concentrations in rice were higher than the PMTDI of WHO (0.8 mg/kg), and copper concentrations in rice were higher than the PMTDI of WHO (0.5 mg/kg) except in the sample taken from family N7. Copper and iron varied within the ranges of 1.55-48.4 mg/kg dw and 467.08-893.61 mg/kg dw respectively in the soil.
It was noted that the wells located closer to the paddy fields had nitrate concentrations exceeding the SLPWL (SLS 614, 1983). Gunatilake and Iwao (2010) have also reported that, during the period of fertilizer application to paddy fields, drinking water wells located near paddy fields are more vulnerable to nitrate contamination.
Organic phosphorus in well water varied from 0.1 to 0.39 mg/L. Paranagama (2013) reported that phosphate varied within the range of 61.1×10⁻⁶-80.25×10⁻⁶ mg/L in well water. Generally, organic matter decomposition is lower in well water than in surface water, and microbial decomposition is the main way orthophosphate enters well water (Young et al., 2010). Therefore, a low concentration of orthophosphate was observed in the well water. It is also said that phosphate concentrations in wells found in different soil formations show distinctive variations. Copper and iron in soil varied within the ranges of 1.55-48.4 and 467.08-893.61 mg/kg (dw) respectively. The iron-rich soil of the dry zone has the ability to retain arsenic ions and can accumulate them (Jayasumana et al., 2014). Recently, arsenic has been suggested to play a main role in chronic renal failure in the dry zone of Sri Lanka (Paranagama, 2013). This implies that arsenic is not present naturally in soils but has been introduced from the surface, most probably due to anthropogenic activities. Many researchers have found that agrochemicals and fertilizers are a major source of arsenic (Berg et al., 2001). Jayasumana et al. (2014) reported that a chemical contained in "Roundup", glyphosate, is a major causative agent for chronic kidney disease. Hard water in the dry zone, with high concentrations of calcium, magnesium and strontium, combines with glyphosate and easily forms complexes. Ferric ions also play a significant role in the process of adsorption of glyphosate in soil. In the current study, the high values of total hardness and iron content in soil are correlated with the case group of CKDu.
The information collected from the social survey was important for the exposure analysis of each drinking and dietary contaminant. The total exposure in the case group (PEC) was higher than that in the non-case group (PNEC) for all contaminants (Figure 2), implying that the daily intake of the relevant contaminant in the case group was high. The relative risk of each drinking and dietary contaminant was greater than 1 (one) according to the PEC and PNEC of the selected families (Table 3). This indicates that there is a relative risk from each contaminant in the selected population. The non-cancer risk of each contaminant in the selected families was higher than the risk criterion (1×10⁻⁶) (Table 4). This reveals that, in a population of one million people, an additional person or persons would be expected to develop risk from the contaminants considered in the study. When comparing the risk values separately for groups, considering the highest and lowest concentrations, the male group was the most vulnerable to drinking water (Table 4) and dietary (Table 5) contamination compared with the other two groups.
CONCLUSION
Since all physiochemical parameters were significantly different among the wells, individual attention should be paid to the quality of the water in each well. However, pH, conductivity and TDS in well water were below the Sri Lankan standard for potable water level (SLPWL). Values exceeding the SLPWL were observed for NO3-N, hardness and fluoride in some wells. Both iron and copper concentrations in well water were lower than the provisional maximum tolerable daily intake (PMTDI) of WHO.
The total exposure in the case group (PEC) was higher than that in the non-case group (PNEC) for all contaminants, which indicates a relative risk from each drinking and dietary contaminant; this should be assessed further.
Imbalance Trouble: Revisiting Neural-Collapse Geometry
Neural Collapse refers to the remarkable structural properties characterizing the geometry of class embeddings and classifier weights, found by deep nets when trained beyond zero training error. However, this characterization only holds for balanced data. Here we thus ask whether it can be made invariant to class imbalances. Towards this end, we adopt the unconstrained-features model (UFM), a recent theoretical model for studying neural collapse, and introduce Simplex-Encoded-Labels Interpolation (SELI) as an invariant characterization of the neural collapse phenomenon. Specifically, we prove for the UFM with cross-entropy loss and vanishing regularization that, irrespective of class imbalances, the embeddings and classifiers always interpolate a simplex-encoded label matrix and that their individual geometries are determined by the SVD factors of this same label matrix. We then present extensive experiments on synthetic and real datasets that confirm convergence to the SELI geometry. However, we caution that convergence worsens with increasing imbalances. We theoretically support this finding by showing that unlike the balanced case, when minorities are present, ridge-regularization plays a critical role in tweaking the geometry. This defines new questions and motivates further investigations into the impact of class imbalances on the rates at which first-order methods converge to their asymptotically preferred solutions.
Introduction
What are the unique structural properties of models learned by training deep neural networks to zero training error? Is there an implicit bias towards solutions of certain geometry? How does this vary across training instances, architectures, and data? These questions are at the core of understanding the optimization landscape of deep-nets. Also, they are naturally informative about the role of models since different parameterizations might affect preferred geometries. Ultimately, such understanding makes progress towards explaining generalization of overparameterized models.
Recently, remarkable new progress in answering these questions has been made by Papyan et al. [26], who empirically discover and formalize the so-called Neural-collapse (NC) phenomenon. NC describes geometric properties of the learned embeddings (aka last-layer features) and of the classifier weights of deep-nets, trained with cross-entropy (CE) loss and balanced data far into the zero training-error regime. The NC phenomenon produces a remarkably simple description of a particularly symmetric geometry: (i) The embeddings of each class collapse to their class mean (see (NC) property); and (ii) The class means align with the classifier weights and they form a simplex equiangular tight frame (see (ETF) property). Importantly, as noted by Papyan et al. [26], this simple geometry appears to be "cross-situational invariant" across different architectures and different balanced datasets.
In this paper, we study neural collapse with imbalanced classes: Is there an (ideally equally simple) description of the geometry that is invariant across class-imbalanced datasets? [24,3,41]. Motivated by deep-learning practice and by studies on the implicit bias of gradient descent (GD) for unregularized CE minimization, we analyze the geometry of solutions to an unconstrained-features support vector machine (UF-SVM) problem. We prove that, for STEP-imbalanced data, any solution of the UF-SVM follows the SELI geometry. Thus, the learned end-to-end model always interpolates a simplex encoding of the labels. We show that (ETF) ⇒ (SELI); however, (SELI) does not imply (ETF) unless classes are balanced or there are just two of them (k = 2).
Next, we analyze training of the UFM with ridge-regularized CE. Unlike previous studies, we find that, in the presence of imbalances, regularization matters, as it changes the geometry of the solutions. In fact, we show that there is no finite regularization that leads to the SELI geometry. However, we also show that as regularization vanishes, the solutions do interpolate the SEL matrix (after appropriate normalization). Finally, we show that the SELI geometry differs from the minority-collapse phenomenon [3], since the latter does not correspond to solutions with zero training error. In fact, we show for minority collapse that: (i) it does not occur for small finite regularization and finite imbalance ratio, and (ii) it occurs for vanishing regularization, but only asymptotically as the imbalance ratio grows.
We numerically test convergence to the proposed SELI geometry on both synthetic and real class-imbalanced datasets. For different imbalance levels, the learned geometries approach the SELI geometry significantly faster than the ETF geometry; see Fig. 2. However, we also observe that convergence worsens with increasing level of imbalance. A plausible theoretical justification is that, as we show, regularization plays a critical role under imbalances. We also consistently observe better convergence rates for the classifiers. We believe our observations strongly motivate further investigations into potential frailties of "asymptotic" implicit-bias characterizations and how these might vary in multiclass and possibly imbalanced settings.
Related works
The original contribution by Papyan et al. [26] has attracted a lot of attention, resulting in numerous follow-ups within a short time period, e.g., [41,12,3,9,21,24,5,40,32]. (See also [9, Sec. E] for a review of the recent literature.) Several works have proposed and/or used the UFM with CE training to analyze theoretical abstractions of NC [41,12,3,5]. Other works analyze the UFM with square loss [24,9,40,32], and recent model extensions accounting for additional layers and nonlinearities are studied in [32]. Here, we drew particular inspiration from Zhu et al. [41], who presented a particularly transparent and complete analysis of the optimization landscape of ridge-regularized CE minimization for the UFM under balanced data. In the same spirit, we also relied on the UFM. However, our work is, to the best of our knowledge, the first explicit geometry analysis for class-imbalanced data. See also Table 1 for a comparison.
The only previous work on neural collapse with imbalances is [3], which was the first to note that collapse of the embeddings is preserved, but otherwise the geometry might skew away from ETF. Also, Fang et al. [3] first proposed studying the new geometry using the UFM and appropriate convex relaxations. With this setup, they presented an intriguing finding, which they termed minority collapse: for large imbalance levels, the minorities' classifiers collapse to the same vector. As mentioned above, our focus is on the, particularly relevant for deep-learning practice, zero-training error scenarios. This excludes minority collapse by definition. More importantly, we derive an explicit geometric characterization of both embeddings and classifiers for both majorities and minorities and for all imbalance levels. Specializing these findings to vanishing regularization and imbalance ratio growing to infinity recovers and gives new insights to minority collapse.
Our results also draw from and relate to the literatures on implicit bias, matrix factorization, and imbalanced deep-learning. We defer a detailed discussion on these to Sec. I of the Supplementary Material.
Organization
In Sec. 2 we set up the necessary terminology and introduce the unconstrained-features model. In Sec. 3 we formally introduce the SELI geometry as the geometry of the global minimizers of the non-convex max-margin minimization over the UFM. We also relate the new SELI geometry to the ETF and derive closed-form expressions for the former in terms of the level of imbalance. Next, in Sec. 4, we investigate the impact of data imbalances on the structure of the ridge-regularized CE minimization with the UFM. In Sec. 5 we present experimental results corroborating our theoretical findings. Specifically, we conduct experiments on both synthetic data under the UFM and on benchmark class-imbalanced datasets. Concluding remarks and some directions for future research are included in Sec. 6.
The proofs of our results are presented in Secs. A-C of the Supplementary Material (SM). Several additional experimental results on the UFM and on real data are included in Secs. F and G, respectively. Sec. H of the SM discusses in detail the implications of our findings for minority collapse. Finally, Sec. I includes additional remarks and comparisons to previous works.
Notation. For a matrix V ∈ R^{m×n}, V[i, j] denotes its (i, j)-th entry, v_j denotes its j-th column, V^T its transpose and V^† its Moore-Penrose pseudoinverse. We denote by ||V||_F, ||V||_2, and ||V||_* the Frobenius, spectral, and nuclear norms of V. tr(V) denotes the trace of V. ⊙ and ⊗ denote Hadamard and Kronecker products, respectively. V ⪰ 0 denotes that V is positive semidefinite, and V ≥ 0 that V has nonnegative entries. ∇_V L ∈ R^{m×n} is the gradient of a scalar function L(·) with respect to V. We use 1_m to denote an m-dimensional vector of all ones and I_m for the m-dimensional identity matrix. For vectors/matrices with all zero entries, we simply write 0, as dimensions are easily understood from context. e_j denotes a column vector with a single non-zero entry of 1 in the j-th position.
Problem setup
We adopt the unconstrained feature model (UFM) [24,3] in a k-class classification setting. Let W = [w_1, w_2, . . . , w_k] ∈ R^{d×k} be the matrix of classifier weights corresponding to the k classes, where d is the feature dimension. We assume throughout that d ≥ k − 1. Next, we let H = [h_1, h_2, . . . , h_n] ∈ R^{d×n} denote a matrix of n feature embeddings, each corresponding to a different example in the training set. We assume each class c ∈ [k] has n_c ≥ 1 examples (thus, n_c embeddings) so that Σ_{c∈[k]} n_c = n. Without loss of generality, we assume examples are ordered: examples i = 1, . . . , n_1 have labels y_i = 1, examples i = n_1 + 1, . . . , n_1 + n_2 have labels y_i = 2, and so on. The UFM trains the features h_i, i ∈ [n] (jointly with the weights w_c, c ∈ [k]) without any further constraints, i.e., by minimizing the ridge-regularized cross-entropy (CE) loss as follows [41]:

min_{W,H} L(W^T H) + (λ/2) ||W||_F^2 + (λ/2) ||H||_F^2,   (1)

where L(W^T H) := Σ_{i∈[n]} log(1 + Σ_{c≠y_i} e^{−(w_{y_i} − w_c)^T h_i}) is the CE loss.
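For concreteness, a minimal numpy sketch of this objective is given below. The helper name and conventions are ours (not from any released code); it simply spells out Eqn. (1) for a given (W, H, y), writing the CE term through a log-sum-exp.

```python
# Minimal numpy sketch of the ridge-regularized UFM objective in Eq. (1).
# Helper name and conventions are ours; labels y are integers in {0, ..., k-1}.
import numpy as np

def ufm_ce_objective(W, H, y, lam=0.0):
    """L(W^T H) + (lam/2) * (||W||_F^2 + ||H||_F^2) for W: (d,k), H: (d,n), y: (n,)."""
    Z = W.T @ H                                   # (k, n) logit matrix
    n = Z.shape[1]
    # log(1 + sum_{c != y_i} exp(-(w_{y_i} - w_c)^T h_i)) = logsumexp(z_i) - z_{y_i, i}
    z_max = Z.max(axis=0, keepdims=True)
    lse = z_max[0] + np.log(np.exp(Z - z_max).sum(axis=0))
    ce = np.sum(lse - Z[y, np.arange(n)])
    reg = 0.5 * lam * (np.linalg.norm(W) ** 2 + np.linalg.norm(H) ** 2)
    return ce + reg
```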
UFM as a two-layer linear net. The formulation above does not explicitly specify inputs for each example. An alternative view follows by considering training a 2-layer linear net with hidden dimension d, first layer H, and second layer W, over n examples with n-dimensional inputs x_i = e_i ∈ R^n, i ∈ [n], and labels y_i as above: y_i = 1 for i = 1, . . . , n_1, y_i = 2 for i = n_1 + 1, . . . , n_1 + n_2, and so on.
Unconstrained-features SVM (UF-SVM)
Since neural collapse is observed when training with small / vanishing regularization [26], it is reasonable to consider an unregularized version of (1). In this special case, gradient descent (with sufficiently small step size) on (1) produces iterates that diverge in norm, but converge in direction [22,12]. In fact, it has been recently shown that the GD solutions converge in direction to a KKT point of the following max-margin classifier [22,12]:

(Ŵ, Ĥ) ∈ arg min_{W,H} (1/2) (||W||_F^2 + ||H||_F^2)  subject to  (w_{y_i} − w_c)^T h_i ≥ 1 for all c ≠ y_i, i ∈ [n].   (2)

For convenience, we refer to the optimization problem in (2) as the unconstrained-features SVM (UF-SVM). This minimization (unlike 'standard' SVM) is non-convex. Hence, KKT points (thus, GD convergence directions) are not necessarily global minimizers; see discussion in Sec. 6.
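As a small sanity-check utility (ours, not the paper's), the margin quantity that the UF-SVM constrains can be computed directly from a candidate pair (W, H); at a UF-SVM optimum all of these margins are active and equal to 1.

```python
# Sketch: minimum multiclass margin min_{i, c != y_i} (w_{y_i} - w_c)^T h_i of a pair (W, H).
import numpy as np

def min_margin(W, H, y):
    Z = W.T @ H                                   # (k, n) logits
    n = Z.shape[1]
    correct = Z[y, np.arange(n)]                  # w_{y_i}^T h_i
    margins = correct[None, :] - Z                # (w_{y_i} - w_c)^T h_i for every c
    mask = np.ones_like(Z, dtype=bool)
    mask[y, np.arange(n)] = False                 # exclude c = y_i
    return margins[mask].min()
```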
Class-imbalance model
To streamline the presentation, we focus on a setting with (R, ρ)-STEP imbalances: a fraction ρ of the k classes are minority classes with n_min training examples each, while the remaining (1 − ρ)k majority classes have R · n_min examples each, for an imbalance ratio R ≥ 1. This includes balanced data as a special case by setting R = 1.
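Under our reading of this setup (the ordering of classes below is our own convention, with majorities listed first), an (R, ρ)-STEP imbalanced label vector can be generated as follows.

```python
# Sketch of an (R, rho)-STEP imbalanced label vector: a fraction rho of the k classes are
# minorities with n_min examples each; the rest have R * n_min examples each (R = 1 is balanced).
import numpy as np

def step_imbalanced_labels(k, n_min, R, rho=0.5):
    n_minor = int(round(rho * k))                               # number of minority classes
    counts = [R * n_min] * (k - n_minor) + [n_min] * n_minor    # majorities first (our choice)
    y = np.concatenate([np.full(c, lab) for lab, c in enumerate(counts)])
    return y, counts

y, counts = step_imbalanced_labels(k=4, n_min=2, R=10)   # n = ((R + 1)/2) * k * n_min = 44 for rho = 1/2
```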
Global structure of the UF-SVM: SELI geometry
In this section, we characterize the global minimizers of the non-convex program in (2). Perhaps surprisingly, we show that they take a particularly simple form that is best described in terms of a simplex-encoding of the labels.
We define the simplex-encoded-label (SEL) matrix Ẑ ∈ R^{k×n} as the matrix with columns ẑ_i = e_{y_i} − (1/k)1_k, i ∈ [n]. Each column ẑ_i ∈ R^k of Ẑ represents a class-membership encoding of datapoint i ∈ [n]. This differs from the vanilla one-hot encoding ŷ_i = e_{y_i} in that ẑ_i = ŷ_i − (1/k)1_k. Specifically, Ẑ has exactly k different and affinely independent columns, which together with the zero vector form a k-dimensional simplex, motivating the SEL name. Finally, note that Ẑ^T 1_k = 0; thus, rank(Ẑ) = k − 1. We gather useful properties about the eigenstructure of Ẑ in Sec. A.
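The SEL matrix is straightforward to materialize; the short sketch below (our own helper) builds it from integer labels and checks the two properties just mentioned.

```python
# Sketch: build the SEL matrix with columns z_i = e_{y_i} - (1/k) 1_k and verify its basic properties.
import numpy as np

def sel_matrix(y, k):
    n = len(y)
    Y = np.zeros((k, n))
    Y[y, np.arange(n)] = 1.0       # one-hot columns e_{y_i}
    return Y - 1.0 / k             # subtract (1/k) 1_k from every column

y = np.array([0, 0, 0, 1, 1, 2])   # toy labels, k = 3
Z = sel_matrix(y, k=3)
print(np.allclose(Z.T @ np.ones(3), 0.0))   # Z^T 1_k = 0
print(np.linalg.matrix_rank(Z))             # rank k - 1 = 2
```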
Moreover, the following statements characterize the geometry of global minimizers in terms of the SEL matrix and its SVD.
Theorem 1. Suppose d ≥ k − 1 and an (R, ρ)-STEP imbalanced setting, and let Ẑ = VΛU^T denote the compact SVD of the SEL matrix. Then any global minimizer (Ŵ, Ĥ) of the UF-SVM (2) satisfies the following:
(i) Ŵ^T Ĥ = Ẑ;
(ii) Ŵ^T Ŵ = VΛV^T and Ĥ^T Ĥ = UΛU^T;
(iii) For some partial orthonormal matrix R ∈ R^{(k−1)×d} with RR^T = I_{k−1}, Ŵ = R^T Λ^{1/2} V^T and Ĥ = R^T Λ^{1/2} U^T.
We outline the theorem's proof in Sec. 3.3 and defer the details to Sec. C.1. The theorem provides an explicit characterization of the geometry of optimal embeddings and classifiers, centered on the key finding that the optimal logit matrix is always equal to the SEL matrix. We highlight the following key features of this characterization.
Simplicity. The lack of symmetry in the imbalanced setting makes it a priori unclear whether a simple geometry description is still possible, as in the balanced case. But the theorem shows this to be the case. The key observation is that the optimal logit matrix Ŵ^T Ĥ equals Ẑ (cf. Statement (i)). Then, the Gram matrices of embeddings and classifiers are given simply in terms of the singular factors of the SEL matrix (cf. Statements (ii),(iii)).
Invariance to imbalances. The theorem's characterization is valid for all types of (R, ρ)-STEP imbalances. In particular, equality of the optimal logit matrix to the SEL matrix is the key invariant characterization across changing imbalances. This also implies that, at optimality, all margins are equal irrespective of the imbalance type. The description of the Gram matrices in terms of the SVD of Ẑ is also invariant. Of course, the particular arrangement of columns of Ẑ itself depends on the values of (R, ρ). In turn, the singular factors determining the geometry of embeddings and classifiers depend implicitly on the same parameters. Thus, as we show next, the geometry differs for different imbalance levels; see Fig. 1 for an example.
Invariant properties: NC and SELI
Here, we further discuss the geometry of embeddings and classifiers induced by the SVD of the SEL matrix. The first realization is that the embeddings collapse under all settings.
This statement can be inferred from Theorem 1 (specifically from Statement (iii) and the fact that U has repeated columns). A more straightforward argument is possible by directly inspecting the UF-SVM optimization in (2). For any fixed (say optimal) Ŵ, the minimization over h_i is: (i) separable and identical for all i : y_i = c in the same class c, and (ii) strongly convex. Hence, for all i : y_i = c, there is a unique minimizer corresponding to the fixed Ŵ; this must be their class mean.
Beyond (NC), Theorem 1 specifies the exact geometry of solutions. The corollary below is a restatement of Theorem 1 under the following formalization of what we call the SELI geometry.
Definition 3 (SELI geometry). The embedding and classifier matrices H ∈ R^{d×n} and W ∈ R^{d×k} follow the simplex-encoded-labels interpolation geometry when, for some scaling α > 0,
W^T W = α VΛV^T,   H^T H = α UΛU^T,   W^T H = α Ẑ,   (SELI)
where Ẑ = VΛU^T is the SVD of the SEL matrix.
Corollary 1.2. The UF-SVM solutions follow the SELI geometry, irrespective of imbalance.
The (SELI) geometry characterization specifies (up to a global positive scaling) the Gram matrices G_W := W^T W and G_H := H^T H, and the logit matrix Z := W^T H. Specifically, the diagonals of the two Gram matrices specify the norms of the classifiers and of the embeddings. These, together with their off-diagonal entries, further specify the angles between different classifiers and between different embeddings. Because of the (NC) property, the norms and angles of the embeddings h_i, i ∈ [n] are uniquely determined in terms of the norms and angles of the mean embeddings μ_c, c ∈ [k]. In other words, the Gram matrix G_H is determined by the Gram matrix G_M := M^T M of the mean embeddings M = [μ_1, . . . , μ_k]. Finally, the norms, together with the entries of the logit matrix, determine the angles between the two sets of: (a) the k classifiers and (b) the k mean embeddings. Thus, they specify the degree of alignment between the two sets of vectors. In the next section, we show that it is in fact possible to obtain explicit closed-form formulas describing the norms, angles and alignment of classifier and embedding vectors in terms of the imbalance characteristics and the number of classes. Remark 3.1 (Why "SELI"?). For the UFM, Z = W^T H is the learned end-to-end model. According to its definition, the SELI geometry implies W^T H = αẐ. Thus, the learned model interpolates (a scaling of) the SEL matrix, which motivates the naming in Definition 3.
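The SELI targets can be materialized numerically from the SVD of the SEL matrix, and a scale-invariant distance to them can be measured for any learned (W, H). The sketch below is our own instrumentation (note that numpy's left and right singular vectors play the roles of the paper's V and U, respectively).

```python
# Sketch: SELI reference Gram matrices from the SVD of the SEL matrix, plus a
# scale-invariant distance of learned (W, H) to the SELI geometry.
import numpy as np

def seli_targets(Z_sel, k):
    V, s, Ut = np.linalg.svd(Z_sel, full_matrices=False)    # Z_sel = V diag(s) Ut
    V, s, U = V[:, :k - 1], s[:k - 1], Ut[:k - 1, :].T       # keep the k-1 nonzero factors
    GW = V @ np.diag(s) @ V.T                                # V Lam V^T
    GH = U @ np.diag(s) @ U.T                                # U Lam U^T
    return GW, GH, Z_sel

def normalized_distance(A, B):
    return np.linalg.norm(A / np.linalg.norm(A) - B / np.linalg.norm(B))

# usage, given learned W (d x k), H (d x n) and labels y:
#   GW_t, GH_t, Z_t = seli_targets(sel_matrix(y, k), k)
#   dW = normalized_distance(W.T @ W, GW_t)
#   dH = normalized_distance(H.T @ H, GH_t)
#   dZ = normalized_distance(W.T @ H, Z_t)
```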
Special case: Balanced or binary data
For the special cases of balanced or binary data, Theorem 1 recovers the ETF structure, i.e. (SELI) ≡ (ETF). Let M̂ = [μ̂_1, . . . , μ̂_k] denote the matrix of mean embeddings. Corollary 1.3 (R = 1 or k = 2). Assume balanced data (R = 1) or binary classification (k = 2). Then, any UF-SVM solution (Ŵ, Ĥ) follows the ETF geometry as defined in [26]: the classifiers align with the mean embeddings and, up to a global scaling,
Ŵ^T Ŵ ∝ M̂^T M̂ ∝ I_k − (1/k) 1_k 1_k^T.   (ETF)
Thus, when data are balanced or binary: (i) the norms of the classifiers and of the embeddings are all equal; (ii) the angles between any two classifiers or any two embeddings are all equal, with cosine −1/(k − 1); and (iii) the set of classifiers and the set of embeddings are aligned.
How the SELI geometry changes with imbalances
For k ≥ 3 and R > 1, (SELI) does not reduce to (ETF). In this general case, the norms and angles specifying the geometry are determined in terms of the SVD factors of the SEL matrix as per Definition 3. In Sec. A, we give an explicit characterization of these SVD factors. Notably, this allows us to obtain explicit closed-form formulas for the norms, angles and alignment of the SELI geometry in terms of R, ρ, k and n_min. The following lemma is an example: it gives a formula for the ratio of majority and minority norms for the classifiers and embeddings. For simplicity, we focus on the default case of equal numbers of minorities and majorities, i.e. ρ = 1/2.
Thus, ||w_maj||_2 ≥ ||w_minor||_2 and ||h_maj||_2 ≤ ||h_minor||_2, with equalities if and only if R = 1 or k = 2.
The fact that CE learns classifiers of larger norm for majorities (i.e. ||w_maj||_2 ≥ ||w_minor||_2) has been empirically observed in the deep imbalanced-learning literature, e.g. by Kang et al. [17], Kim and Kim [18]. Lemma 3.1 not only provides a theoretical justification for this empirical observation, but also precisely quantifies the ratio. Moreover, it specifies the norm ratio between majorities and minorities not only for the classifiers, but also for the learned embeddings.
Sec. B includes additional closed-form formulas for the angles and alignment of classifiers and embeddings. Thanks to these, it is easy to precisely quantify the changes in the geometry as a function of the imbalance ratio and of the number of classes.

Figure 3: Illustration of how norms and angles of the SELI geometry vary with the number of classes k and the imbalance ratio R. The minorities fraction is set to ρ = 1/2. Unless R = 1 or k = 2, the geometry differs from the ETF geometry. For example, majority classifiers have larger norms than minorities (see Fig. 3a), minority classifiers are less aligned with their corresponding class mean-embeddings (see Fig. 3b), and the angle between minority classifiers decreases (see Fig. 3c). The depicted values are computed thanks to closed-form formulas. See Sec. 3.2 and Sec. B.

As another example of this, besides Lemma 3.1, the SELI geometry leads to closed-form expressions for the angles between majority classifiers and between minority classifiers (the latter given in Eqn. (4); with (R, 1/2)-STEP imbalance, see Sec. B.1.2 for details). Here, w_maj, w'_maj are two classifier vectors corresponding to any two majority classes (similarly for the minorities). It is easy to see that both these formulas evaluate to −1/(k − 1) for R = 1. Furthermore, the cosine of majorities is strictly decreasing, while the cosine of minorities is strictly increasing. That is, with increasing imbalance ratio, majority classifiers move further away from each other, while minority classifiers come closer. These properties, together with the fact from Lemma 3.1 that majority classifiers have larger norms compared to minorities, are visualized in Fig. 1 for R = 10 and k = 4. Additionally, Fig. 3 shows how all the norms and angles of the geometry vary with the imbalance ratio R for several values of k = 2, 4, 10, 20.
Remark 3.2 (Asymptotics).
While we focus on understanding the geometry at finite values of R, it is possible to evaluate limits for our formulas giving asymptotic characterizations as R → ∞. The asymptotic behaviors can also be observed in Fig. 3. As an example, it is easy to see from (4) that the angle between the minority classifiers collapses to zero in that limit. This phenomenon is called "minority collapse" by Fang et al. [3]. Here, we recover it as a special case of Theorem 1 and of the SELI characterization. Note also that the rate at which the minority angle collapses is rather slow (e.g. see Fig. 3c). Additional details and discussion on the SELI geometry are included in Sec. B.
Proof sketch
We start from the following (by now, relatively standard) convex relaxation of the UF-SVM [31,8,41,3]:

Ẑ ∈ arg min_Z ||Z||_*  subject to  Z[y_i, i] − Z[c, i] ≥ 1 for all c ≠ y_i, i ∈ [n].   (5)

The relaxation follows by setting Z = W^T H; thus, Z is the logit matrix (also, the end-to-end model) of the non-convex UF-SVM, and the nuclear norm arises because min_{W^T H = Z} (1/2)(||W||_F^2 + ||H||_F^2) = ||Z||_*. Our key technical innovation is proving that Ẑ is the unique minimizer of (5). There are three key ingredients in this. First is a clever re-parameterization of the dual program to (5), introducing the SEL matrix Ẑ in the dual (program (6) in Sec. C.1). Second, we prove that B̂ = UV^T is the unique maximizer of the re-parameterized dual problem in (6). While it is not hard to check that B̂ optimizes a relaxation of (6), it is far from obvious that B̂ is unique, and, even more, that it satisfies the third constraint. The key technical challenge here is that the third constraint acts entry-wise on B. In fact, to proceed with the proof we need that the constraint is not active, i.e. B̂ ⊙ Ẑ^T > 0, or equivalently, that the sign pattern of the entries of B̂ agrees with the sign pattern of the transposed SEL matrix Ẑ^T. We prove this by an explicit construction of the singular factors U, V exploiting the structure of the SEL matrix.
Once we have shown that B̂ is the unique maximizer and is strictly feasible, we use the KKT conditions to prove that Ẑ is the unique minimizer of the nuclear-norm relaxation in (5). To do this, we leverage that strict feasibility of B̂ implies, by complementary slackness, that all constraints in the primal (5) must be active at the optimum. The proof of the theorem completes by arguing that the relaxation is tight, i.e., the optimal value of (5) (equivalently, of its dual (6)) is attained by a feasible pair (Ŵ, Ĥ) of the original non-convex program (2).
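For small instances, the nuclear-norm relaxation (5) can also be solved directly and its minimizer compared against the SEL matrix. The sketch below uses cvxpy as a stand-in for the CVX setup mentioned later in the experiments; the toy imbalance is our own choice.

```python
# Sketch: solve the nuclear-norm relaxation (5) with cvxpy and compare its minimizer to the SEL matrix.
import cvxpy as cp
import numpy as np

k = 3
y = np.array([0, 0, 0, 0, 1, 2])                 # toy STEP imbalance: one majority, two minorities
n = len(y)
Z_sel = np.eye(k)[:, y] - 1.0 / k

Z = cp.Variable((k, n))
cons = [Z[y[i], i] - Z[c, i] >= 1 for i in range(n) for c in range(k) if c != y[i]]
cp.Problem(cp.Minimize(cp.normNuc(Z)), cons).solve()
print(np.linalg.norm(Z.value - Z_sel))           # expected ~0 up to solver tolerance
```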
The role of regularization
In this section, we focus on the ridge-regularized CE loss minimization in (1). Specifically, we study the geometry of solutions (Ŵ λ ,Ĥ λ ) as a function of both the imbalance and the regularization parameter λ.
Global minimizers as solutions to a convex relaxation
The regularized CE minimization in (1) is non-convex. Yet, its landscape is benign and the global solution can be described in terms of the solution to a convex relaxation program [41].
Theorem 2 (Reformulated from [41]). Let λ > 0, d > k − 1 and an (R, ρ)-STEP imbalance setting. Let Ẑ_λ ∈ R^{k×n} be the unique minimizer of the convex nuclear-norm-regularized loss minimization

Ẑ_λ ∈ arg min_Z L(Z) + λ ||Z||_*,   (7)

and denote by Ẑ_λ = V_λ Λ_λ U_λ^T its SVD. Any stationary point of (1) satisfies exactly one of the following two: either it is a strict saddle, or it is a global minimizer (Ŵ_λ, Ĥ_λ) and satisfies

Ŵ_λ^T Ŵ_λ = V_λ Λ_λ V_λ^T,   Ĥ_λ^T Ĥ_λ = U_λ Λ_λ U_λ^T,   Ŵ_λ^T Ĥ_λ = Ẑ_λ.   (8)

First, the statement ensures that any first-order method escaping strict saddles finds a stationary point that is a global minimizer [41]. Moreover, it describes the structure of the global minimizers of (1) in terms of Ẑ_λ, the solution to the convex minimization in (7). Structurally, the characterization in (8) resembles the characterization in Theorem 1 regarding the UF-SVM. However, Theorem 1 goes a step further and gives an explicit form for the logit matrix, namely the SEL matrix Ẑ. Instead, Ẑ_λ in Theorem 2 is given implicitly as the solution to a convex program. In the remainder of this section, we ask: how does Ẑ_λ compare to Ẑ for different values of the regularizer? Also, how does the answer depend on the imbalance level?
Regularization matters
For balanced data, previous works have shown that the minimizersŴ λ ,Ĥ λ of (1) satisfy the (NC) and (ETF) properties (up to scaling by a constant) for every value of the regularization parameter λ > 0 [41] (see also [5,3,21].) In our language, for all λ > 0, there exists scalar α λ such that a scaling (α λŴλ , α λĤλ ) of any global solution of the regularized CE minimization in (1) satisfies the ETF geometry. Thus, for balanced data, up to a global scaling, the geometry is: (i) insensitive to λ > 0 and (ii) the same as that of the UF-SVM minimizers.
Here, we show that the situation changes drastically with imbalances: the regularization now plays a critical role and the solution is never the same as that of UF-SVM for finite λ. Proposition 1 (Imbalanced data: Regularization matters). Assume imbalanced data and k > 2. There does not exist finite λ > 0 and corresponding scaling α λ such that the scaled solution (α λŴλ , α λĤλ ) of (1) follows the (SELI) geometry. Equivalently, there does not exist λ > 0 and α λ such that a scaling of the UF-SVM solution solves (1).
Vanishing regularization
As λ vanishes, it is not hard to check that minimizers diverge in norm and the relevant question becomes: where do they converge in direction? The following answers this.
Proposition 2 (Regularization path leads to UF-SVM). Suppose d > k − 1 and an (R, ρ)-STEP imbalance. Then, as λ → 0, the minimizers (Ŵ_λ, Ĥ_λ) of (1) align in direction with the UF-SVM solutions: the normalized logit matrix Ŵ_λ^T Ĥ_λ / ||Ŵ_λ^T Ĥ_λ||_F converges to Ẑ / ||Ẑ||_F, and the normalized Gram matrices converge to their (SELI) counterparts. Put together with the content of the previous section: for balanced data, the solution is always the same up to global scaling; however, for imbalanced data, the solution changes with λ and only in the limit λ → 0 does it align with that of the UF-SVM.
Regarding the proof of the proposition, we note that, thanks to Theorems 1 and 2, it suffices to show that the solution Ẑ_λ of (7) converges in direction to the SEL matrix Ẑ; see Proposition 3 in Sec. D. To show this, we critically use from Theorem 1 that Ẑ is the unique minimizer of (5). See Sec. E.3 for details. The most closely related results are those of [28,16], who studied the regularization path of p-norm regularized CE.
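Proposition 2 can also be probed numerically on a toy instance by solving (7) for a decreasing sequence of λ and tracking the directional distance of Ẑ_λ to Ẑ. A small cvxpy sketch is given below (again a stand-in for CVX, with the CE loss written through log-sum-exp; the toy data and λ grid are ours).

```python
# Sketch: regularization path of the nuclear-norm-regularized CE program (7) as lambda -> 0,
# measuring directional convergence of Z_lambda to the SEL matrix.
import cvxpy as cp
import numpy as np

k = 3
y = np.array([0, 0, 0, 0, 1, 2])
n = len(y)
Z_sel = np.eye(k)[:, y] - 1.0 / k

for lam in (1.0, 1e-1, 1e-2, 1e-3):
    Z = cp.Variable((k, n))
    ce = cp.sum(cp.hstack([cp.log_sum_exp(Z[:, i]) - Z[y[i], i] for i in range(n)]))
    cp.Problem(cp.Minimize(ce + lam * cp.normNuc(Z))).solve()
    Zl = Z.value
    gap = np.linalg.norm(Zl / np.linalg.norm(Zl) - Z_sel / np.linalg.norm(Z_sel))
    print(lam, gap)                               # gap should shrink (slowly) as lambda decreases
```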
Imbalance emphasizes the impact of non-convexity
Recall the interpretation of the UFM as a two-layer linear net trained on the standard basis inputs e_i ∈ R^n. Suppose instead that we train a simple k-class linear classifier Ξ ∈ R^{k×n} on the same data by minimizing the ridge-regularized CE: min_Ξ L(Ξ) + (λ/2) ||Ξ||_F^2. It is easy to check that (after scaling) Ẑ satisfies the first-order optimality conditions. Thus, the optimal linear classifier is such that for all λ > 0, there exists α_λ such that Ξ̂_λ = α_λ Ẑ. Contrasting this to Proposition 1, we find that the end-to-end models minimizing ridge-regularized CE for a linear versus a two-layer linear network are the same (in direction) when data are balanced, but differ under imbalances.
Experiments
In our experiments we choose (R, 1/2)-STEP imbalances with varying imbalance ratio R. In all cases we measure convergence to either the (SELI) or the (ETF) geometry, in terms of three metrics corresponding to classifiers, embeddings, and logits, respectively. Denote by Ā := A/||A||_F the Euclidean normalization and by G_A := A^T A the Gram matrix of a matrix A. For the classifiers, we compare the normalized G_W to its normalized ETF / SELI counterpart Ĝ_W; for the embeddings, we compare the normalized Gram matrix of class means G_M = M^T M to Ĝ_M, which we calculate as described above for the ETF and SELI geometries, respectively. On the other hand, for the logits calculations, we compute Z = W^T M without centering.

The discrepancy between the UFM solutions being already centered, while deep-net embeddings require centering before computation of geometric measures, is a common denominator in all previous works on the UFM, e.g. [41,12,3,9,21,24,5,40]. Here, the chosen centering μ_G = (1/k) Σ_{i∈[n]} (1/n_{y_i}) h_i is in general different (and the same only when classes are balanced) from the global centering h_G := (1/n) Σ_{i∈[n]} h_i used by Papyan et al. [26]. Our choice of the former is motivated by the fact that the SELI geometry (as predicted by the UFM) satisfies Σ_{c∈[k]} μ_c = 0 (see Eqn. (20) in Sec. B.1.4), but not always (1/n) Σ_{i∈[n]} h_i = 0.

UFM: regularization path
We numerically solve (7) with CVX [6] for varying λ and R, and then use (8) to infer the Gram matrices G_W, G_M and the logits W^T H. Fig. 4c shows that the distance to ETF is large and not approaching zero for any value of λ. On the other hand, Fig. 4a numerically validates Propositions 1 and 2: the distance to SELI for all three metrics is non-zero for any finite λ > 0, but converges to zero as λ → 0. However, this convergence is slow and the rate becomes even worse as R increases. Finally, Fig. 4b depicts the minimum margins of solutions across λ. Note that for all sufficiently small λ values, the minimum margin is strictly positive. We prove this in Lemma D.4 in Sec. D.2. As a byproduct, this shows that the "minority collapse" of [3] can only possibly occur for large λ; see also Sec. H.

UFM: SGD solutions
For each R, we select the sample size n_min for the minorities so that the total number of samples n = ((R + 1)/2) k n_min is ≈ 400. The weights of the UFM are optimized using SGD with constant learning rate 0.4, batch size 4 and no weight decay. We train for 10^5 epochs, much beyond zero training error, and plot the distance to the SELI and ETF geometries for classifiers, embeddings and logits over time. We highlight the following three observations: (i) SGD iterates favor the SELI, instead of the ETF, geometry. As a matter of fact, the distance to SELI is decreasing with epochs, suggesting an implicit bias of SGD towards global minimizers of the UF-SVM. (ii) However, convergence is rather slow and rates get worse with increasing imbalance. (iii) Also, the embeddings' convergence is more elusive compared to that of the classifiers. Interestingly, the last two observations are reminiscent of the trends we observed in Fig. 4a, suggesting connections between the regularization path and (S)GD iterates, worth investigating further.
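A miniature version of this experiment needs no deep-learning framework: full-batch gradient descent (rather than SGD) on the unregularized CE loss over (W, H) already exhibits the drift towards SELI. In the sketch below, the toy dimensions, step size and epoch budget are illustrative choices of ours, not the paper's settings.

```python
# Sketch: gradient descent on the unregularized UFM CE loss, tracking the normalized
# distance of the logit matrix to the SEL matrix (a proxy for convergence to SELI).
import numpy as np

rng = np.random.default_rng(0)
k, R, n_min, d = 4, 10, 2, 8
counts = [R * n_min] * (k // 2) + [n_min] * (k // 2)
y = np.concatenate([np.full(c, lab) for lab, c in enumerate(counts)])
n = len(y)
onehot = np.eye(k)[:, y]                              # (k, n)
Z_sel = onehot - 1.0 / k

W = 0.1 * rng.standard_normal((d, k))
H = 0.1 * rng.standard_normal((d, n))
lr = 0.1
for epoch in range(30001):
    Z = W.T @ H
    P = np.exp(Z - Z.max(axis=0, keepdims=True))
    P /= P.sum(axis=0, keepdims=True)                 # column-wise softmax
    G = P - onehot                                    # dCE/dZ
    gW, gH = H @ G.T, W @ G                           # dCE/dW, dCE/dH
    W -= lr * gW
    H -= lr * gH
    if epoch % 10000 == 0:
        dist = np.linalg.norm(Z / np.linalg.norm(Z) - Z_sel / np.linalg.norm(Z_sel))
        print(epoch, dist)                            # expected to decrease with epochs
```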
We refer the reader to Sec. F for additional numerical results on the UFM, such as experiments with varying weight-decay and regularization choices.
Deep-learning experiments
We investigate convergence to the proposed SELI geometry in deep-net training on (R, ρ)-STEP imbalanced MNIST, Fashion-MNIST and CIFAR10 datasets. For concreteness, we consider here equal numbers of minorities and majorities, i.e. ρ = 1/2. Additional experimental results for other values of the minority ratio are deferred to Sec. G.3. For all datasets, we keep the same total number of n = 100 × 50 × 5 + 50 × 5 = 25250 examples across all different imbalance ratios R = 1, 5, 10 and 100. No data augmentation was used, following [26]. We train two deep architectures, ResNet-18 [10] and VGG-13 [29], and optimize the models using the CE loss with SGD over 350 epochs. In all experiments, the initial learning rate is set to 0.1 and decreased by a factor of 10 at epochs 120 and 240. For ResNet training on MNIST, we choose a smaller initial learning rate of 0.05, which we empirically find interpolates the data much faster. Weight decay and momentum are set to 5 × 10^-4 and 0.9, respectively. Models are trained on a single GPU with a dataloader batch size of 128. To evaluate the learnt geometries, we track the three metrics defined at the beginning of this section. Recall also that, following [26], we first perform a global centering by subtracting from the mean embeddings their global average. The convergence to SELI and ETF for the classifiers, (centered) mean embeddings, and logits is illustrated in Figs. 2 and 6 for the ResNet and VGG models, respectively. The vertical dashed lines mark the epoch at which the model reaches zero training error under all imbalance ratios; see Sec. G.1.2 for details. Note that in all plots, the distance to the SELI geometry decreases as training evolves. Also, convergence to the SELI geometry is consistently better compared to the ETF geometry. However, convergence slows down for increasing imbalance (see R = 100). Another interesting observation is that convergence is worse for the embeddings compared to the classifiers. In Sec. G.1.6 we compare individual quadrants of the (normalized) G_W and G_H matrices, which facilitates understanding the individual behavior of majorities and minorities.
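For reference, the sketch below shows one way to assemble such a STEP-imbalanced CIFAR-10 training subset with torchvision; the indexing helper and the choice of which five classes act as majorities are our own, and the paper's exact data pipeline may differ.

```python
# Sketch: build an (R, 1/2)-STEP imbalanced CIFAR-10 subset (e.g. R = 100, n_min = 50 gives 25,250 examples).
import numpy as np
import torch
import torchvision

def step_imbalanced_indices(targets, R, n_min):
    targets = np.asarray(targets)
    k = int(targets.max()) + 1
    keep = []
    for c in range(k):
        cls_idx = np.where(targets == c)[0]
        n_keep = R * n_min if c < k // 2 else n_min   # first half of the classes act as majorities
        keep.append(cls_idx[:n_keep])
    return np.concatenate(keep)

train = torchvision.datasets.CIFAR10(root="./data", train=True, download=True)
idx = step_imbalanced_indices(train.targets, R=100, n_min=50)
subset = torch.utils.data.Subset(train, idx.tolist())
```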
In Fig. 7 we focus on predicting the norms of the classifiers. In particular, we measure the norm ratio of the classifiers during training and compute its distance from the predicted closed-form SELI characterization in Lemma 3.1. We see that the (relative) magnitude of the classifiers agrees with the value predicted by the SELI geometry. Analogous plots for the embeddings' norms are given in Sec. G.1.4.
Outlook: Imbalance troubles and opportunities
We propose (SELI) as the class-imbalance-invariant geometry of classifiers and embeddings learnt by overparameterized models when trained beyond zero training error. We arrive at it after showing that the UF-SVM global minimizers follow this geometry. Subsequently, we conjecture that: (C1) GD on the UFM leads to solutions approaching the SELI geometry asymptotically in the number of epochs; (C2) training of deep-nets beyond zero training error learns models that approach the SELI geometry. Statement (C1) is a conjecture and does not follow from Theorem 1. This is because GD is only known to converge to KKT points of the non-convex UF-SVM [22,12], which are not necessarily global minima. Thus, while Proposition 2, combined with the benign landscape of the corresponding ridge-regularized minimization, as well as our experiments, suggests its validity, the conjecture remains to be further investigated. We also note that convergence rates appear slower with increasing imbalance levels; a feature which is interesting to study further.
Regarding conjecture (C2), we deem our experiments encouraging: the classifiers' and embeddings' geometries get closer to the SELI geometry as neural-network training progresses. Specifically, the Gram matrices G W and G H of the classifiers and embeddings, respectively, align increasingly better with their SELI counterparts VΛV T and UΛU T ; see Fig. 2. Also, Fig. 7 shows that Lemma 3.1 is able to make predictions regarding the norms of majorities versus minorities. We note that, similar to the UFM experiments, convergence appears slower for larger imbalance ratios. Also, we observe better convergence for classifiers compared to embeddings and for norm predictions compared to angle predictions. Overall, we hope our results motivate further theoretical and experimental investigations, especially since data imbalances appear frequently across applications.
Beyond that, we believe that further similar studies on identifying geometric structures of learned embeddings and classifiers could offer new perspectives on generalization. Our results could pave the way, since they uncover different geometries (aka SELI for different R values), each leading to different generalization (worse for increasing R [1]). Relatedly, we envision that further such studies will lead to algorithmic contributions in imbalanced deep-learning, as they can facilitate studying the implicit-bias effect of CE adjustments and post-hoc techniques tailored to imbalanced data [2,23,39,18,17,19,20].
Roadmap to the Supplementary material
The SM contains the following. In Section A we derive several useful properties of the simplexencoded-label (SEL) matrix, which we then use in Section B to present closed-form characterizations of the embeddings and classifiers geometries. Notably, these include explicit expressions for the norms and angles of both majority and minority classes, which we accompany with numerical illustrations shedding further light on the features of the proposed (SELI) geometry. Next, in Section C, we use the derived properties of the SEL matrix to prove our main Theorem 1 and its corollaries. In Section D we derive several useful properties of the nuclear-norm penalized CE minimization, which we then use in Section E to prove the statements of Section 4. Section F contains additional numerical results on the UFM, such as experiments with varying weight-decay and regularization choices. Additional experiments on real data (such as, isolated minority/majority geometry investigations, and classifiers/embeddings norm ratios), as well as further implementation details, are included in Section G. In Section H we show how results relate to minority collapse providing additional theoretical justifications and novel perspectives to empirical observations reported by previous work. Finally, an elaborate discussion on additional related works is contained in Section I.
Proof. Proof of (i): Follows directly from the definition. Proof of (ii): The fact that Ẑ^T 1_k = 0 is easy to check. Hence, the rank is at most k − 1. The fact that the rank is exactly k − 1 follows by noting that the vectors e_c − (1/k)1_k, c ∈ [k], span the (k − 1)-dimensional subspace orthogonal to 1_k. Proof of (iii): Follows directly from Statement (ii). Proof of (iv): Follows easily by direct calculations.
It is easy to see that for Z = αẐ, the matrix A can be written as The desired then follows by recalling Statement (i).
and, for c ≠ y i ,
A.2 Eigen-structure
In this section, we explicitly compute the eigenstructure of the SEL matrix for (R, ρ)-STEP imbalanced data. To simplify the expressions, we assume n min = 1. 3 Also, we need the following definitions. For m ∈ [k], let P m ∈ R m×(m−1) denote an orthonormal basis of the subspace orthogonal to 1 m , i.e. P m P T m = I m − 1 m 1 m 1 T m and P T m P m = I m−1 . We will also denote . Assume (R, ρ)-STEP imbalanced data and n min = 1. Also, denote ρ = 1 − ρ and recall that the total number of examples is n = (ρ + Rρ)k. Then, the SVD factors ofẐ = VΛU T are given as follows: 3 It is rather easy to derive all formulas without this requirement. Concretely, the singular values in (9) Proof. The challenging part is coming up with the formulas in (9), (10) and (11) for the SVD factors. The lemma already does this for us. Hence, proving that the formulas are correct involves a few tedious calculations, which we present below. Let us define for convenience: Similarly, let With same argument as above, it is easy to check that U T U = I k−1 .
Here, we also use that Thus, it suffices to show that VΛU T =Ẑ. The key observation here is thatẐ can be written in block-form as followsẐ With these, we have the following direct calculations:
A.2.1 Special case: Balanced data
When classes are balanced, i.e. R = 1, the following simple description of the SVD factors is immediate to see from Lemma A.3.
Corollary 2.1.
Assume balanced data and n_min = 1. Recall that P_k ∈ R^{k×(k−1)} denotes an orthonormal basis of the subspace orthogonal to 1_k. Then, Ẑ = P_k P_k^T; that is, V = U = P_k and Λ = I_{k−1}.

A.2.2 Special case: Equal minorities / majorities (ρ = 1/2)
Another special case of interest is when the numbers of minorities and majorities are the same, i.e. ρ = 1/2. In this case, we get the following simplification of Lemma A.3.
Corollary 2.2.
Consider the setting of step imbalance with even number k = 2m, m ≥ 1 of classes. Let n min = 1. Then, the SVD ofẐ is as follows:
A.3 A useful property of the singular spaces
The following result is particularly important for the proof of Theorem 1. It shows that the singular spaces V, U ofẐ are such that the matrix UV T has entries that agree on their sign with the sign of the entries ofẐ T . Proof. From Lemma A.3, we have explicit expressions for the SVD factors U and V. From these, we can directly compute that To continue, recall again that for any integer m: Hence continuing from the display above we find that Finally, recall from (12) the block-form ofẐ repeated here for conveniencê By inspection, the signs ofB 12 ,B 21 are negative, same as the signs ofẐ T 21 ,Ẑ T 12 . To see that the signs of the diagonal blocks also agree it suffices to check that the following strict inequalities always hold This completes the proof of the lemma.
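The sign-agreement property is also easy to confirm numerically on small instances; the check below (ours) builds B̂ = UV^T from the SVD of a STEP-imbalanced SEL matrix and verifies entrywise sign agreement with Ẑ^T.

```python
# Sketch: numerical check that B_hat = U V^T agrees in sign, entrywise, with the transposed SEL matrix.
import numpy as np

k, R, n_min = 4, 10, 1
counts = [R * n_min] * (k // 2) + [n_min] * (k // 2)
y = np.concatenate([np.full(c, lab) for lab, c in enumerate(counts)])
Z = np.eye(k)[:, y] - 1.0 / k                        # SEL matrix, k x n
V, s, Ut = np.linalg.svd(Z, full_matrices=False)     # paper's V = left factors, paper's U = Ut.T
B = Ut[:k - 1, :].T @ V[:, :k - 1].T                 # U V^T, shape n x k
print(np.all(B * Z.T > 0))                           # strict entrywise sign agreement
```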
B The SELI geometry
As mentioned the SELI geometry is described in terms of the SVD of the SEL matrixẐ. In this section, we show that it is in fact possible to get explicit closed-form expressions describing the SELI geometry in terms of the parameters R, ρ, k. Key to this is the explicit construction of the SVD factors in Sec. A.2.
For concreteness, we focus on the case of equal numbers of minorities and majorities (i.e. ρ = 1 2) since the formulas are somewhat simpler and the setting is of sufficient interest to convey main messages. Extension to the general case can be done in a similar fashion.
All results in this section hold under the following assumptions (assumed throughout without further explicit reference): • An (R, 1/2)-STEP imbalanced setting.
• The classifiers w c , c ∈ [k] and the embeddings h i , i ∈ [n] follow the SELI geometry in Definition 3.
B.1.1 Norms
The following two lemmas are essentially restatements of Lemma 3.1. where we let w maj 2 / w minor 2 denote the majority / minority norm of an arbitrary class of the corresponding type.
(ii) It holds that Thus, w maj 2 ≥ w minor 2 with equality if and only if R = 1 or k = 2.
Proof. Recall that, since the classifiers follow the SELI geometry, it holds that W^T W = VΛV^T. Hence, it suffices to compute the diagonal of the matrix VΛV^T; this computation is given in (14). From this, the statements of the lemma follow readily, where we let ||h_maj||_2 / ||h_minor||_2 denote the majority / minority norm of an arbitrary example of the corresponding type.
Lemma B.2 (Norms of embeddings
(ii) It holds that Thus, h maj 2 ≤ h minor 2 with equality if and only if R = 1 or k = 2.
Proof. Recall that, since the embeddings follow the SELI geometry, it holds that H^T H = UΛU^T. Hence, it suffices to compute the diagonal of the matrix UΛU^T. Recalling that n = k(R + 1)/2, this computation is given in (16). From this, the statements of the lemma follow readily by reading out the diagonal.
Proof. Recall under the SELI geometry that W T W = VΛV T . Hence, inspecting the offdiagonal entries of the matrix computed in Equation (14) gives Statement (i) and the following inner-product relations: Combine these with the norm calculations in Lemma B.1 to prove Statement (ii).
(ii) It holds that
Proof. Recall under the SELI geometry that H T H = UΛU T . Hence, inspecting the off-diagonal entries of the matrix computed in Equation (16) gives Statement (i) and the following innerproduct relations: Combine these with the norm calculations in Lemma B.2 to prove Statement (ii).
B.1.3 (Non)-alignment
In the previous lemmas we compute the angles between classifiers of different classes and between embeddings of different classes. Here, we also also compute the angles between classifiers and embeddings. Specifically, for each c ∈ [k], we compute the angle Cos(w c , h i ) for an example i ∶ y i = c that belongs to the same class. These values can be thought of as the degree of alignment between classifiers and embeddings, as Cos(w c , h i ) = 1 corresponds to exact alignment between the two. (ii) It holds that Proof. The proof is immediate by recognizing that under the SELI geometry for all c ∈ [k] and i ∶ y i = c it holds that w T c h i = 1 − 1 k.
B.1.4 Centering
It is easy to see that the classifiers w c , c ∈ Note that this reduces to ∑ i∈[n] h i for balanced data, but is not true otherwise. To see (19) recall first that H T H = UΛU T . Second, check that ∑ i∈[n] . Thus, U T ω = 0. Combining these two it follows that Hω = 0, which gives the desired. Now, suppose that the (NC) property also holds, i.e. the embeddings collapse to their class means, that is, Then, (19) implies the following about the class means: Therefore, the class means are always centered around zero.
B.2 Illustrations and discussion on dependence on R and k
In the previous section, we derived closed-form expressions for the features describing the SELI geometry. Here, we use these expressions to study how varying values of class-number k and imbalance-ratio R change the geometry. We use the numerical illustration in Fig. 8 to guide the discussion. Specifically, in Fig. 8 we compute and plot the norm ratios, alignment and angles between embeddings and classifiers for k = 2, 4, 10, 20 classes and imbalance ratio varying from 1 (aka balanced) to 100. Norms. Fig. 8a shows the ratios of majority vs minority norms for both classifiers and embeddings. The values are computed using Lemmas B.1 and B.2. For binary problems (aka k = 2), the norm ratio is always equal to one irrespective of imbalance. For larger values of k, the norm ratio is equal to one only for balanced classes (aka R = 1). Recall that equal norm ratios is a feature of the ETF geometry [26]. On the other hand, for k ≠ 2 and R > 1, the majorities have strictly larger norms for the classifiers and strictly smaller norms for the embeddings. Thus, in general the SELI geometry is very different from the ETF geometry. Interestingly, the difference is already evident when going from k = 2 to k = 4 classes. Also, the change in the ratios for classifiers is more pronounced than that for embeddings, which is changing progressively slower as k increases (see how close are the green and blue curves in the right plot).
Alignment. Fig. 8b shows the degree to which the geometries of classifiers and embeddings are aligned to each other. Specifically, we plot the cosine between any majority (left) / minority (right) classifier and corresponding embeddings belonging to the same class; see also Lemma B.5. For k = 2 and R = 1 the cosines are equal to one indicating that classifiers and embeddings align for both majorities and minorities. This is consistent with the ETF geometry. On the other hand, the alignment property breaks when k > 2 and R > 1. The effect is more drastic for the minorities (right plot), while for majorities the alignment is approximately preserved (note the y-axis scale is different in the two plots). Interestingly, alignment is in fact favored for larger number of classes, but deteriorates with increasing R consistently for all values of k.
Angles. Fig. 8c shows the angles between majority/majority (left), minority/minority (center), and minority/majority (right) for both classifiers (top) and embeddings (bottom). The values are computed using Lemmas B.3 and B.4. For binary problems (aka k = 2), there is only one majority and one minority class. Thus, we only plot the minority/majority cosines, which are always equal to −1 (k − 1) irrespective of imbalance. For larger values of k, the cosines are equal to that same value −1 (k − 1) only for balanced classes (aka R = 1). Recall that cosine value equal to −1 (k − 1) is a unique feature of the ETF geometry. On the other hand, for k ≠ 2 and R > 1, the cosines are different. For reference, we plot the values of −1 (k − 1) in dashed lines. For the classifiers, the majority angles increase, while the minority angles decrease. The rate of change is more drastic for minorities. In both cases, the rate of change across R is more pronounced for smaller k. The majority-minority angles also increase with R. The trend is reversed for embeddings. For example, the angles between minority embeddings become larger with increasing R. Again, the effect of imbalance (at least for the values of R shown) is more pronounced here for smaller values of k.
B.3.1 Balanced classes and binary classification
The
B.3.2 Asymptotics
We can also use the results of Sec. B.1 to understand the SELI geometric features asymptotically as R increases. These are included in Corollary 2.4 below. See also Fig. 3 for a numerical illustration of the limiting behavior for large imbalance ratios.
Proof. The expressions that appear in the lemmas in Sec. B.1 hold for any value of R. Take the limit of R → ∞ to yield the expressions above. We omit the details for brevity.
Several interesting conclusions are immediate from the formulas above. For example, Statement (iv) shows that, asymptotically in R, the minority classes collapse to the same vector. This is a manifestation of the minority collapse phenomenon discovered by Fang et al. [3]. Asymptotic minority collapse had not been shown before for the UF-SVM. See also Sec. H for an extended discussion. We note that beyond minority classifiers, Corollary 2.4 is further conclusive about the behavior of majority classifiers, as well as, minority and majority embeddings. For example, Statement (viii) suggests that the minority and majority embeddings become orthogonal to each other asymptotically. Further investigations of the validity of such asymptotic conclusions in deep-net training is beyond our scope.
C Proofs for UF-SVM Section 3 C.1 Proof of Theorem 1
We consider a relaxation of the non-convex SVM in (2) by setting With this consider the following semidefinite program: It is not hard to see that q * ≤ p * . In what follows, we will compute the optimal set of (22) and use this to show that the relaxation is in fact tight. This will allow us to characterize the solution of the original problem. (22) is convex and satisfies Slater's conditions. (Since constraints are affine, it suffices to check feasibility, which is easily verified.) Hence, strong duality holds and KKT conditions are necessary and sufficient for optimality. The dual of (22) is written as follows:
Dual of the convex relaxation: The optimization in
sub. to Also, the complementary slackness conditions are Instead of working with the dual in the above standard form, it is convenient to work with an alternative representation by (re)-defining dual variables β ic , i ∈ [n], c ∈ [k] such that 4 sub. to Analogously, the complementary slackness conditions in (24) are equivalent to the following: To see the equivalence of the objective of (26) denote A = ∑ i∈[n] ∑ c≠y i α ic the objective in (23) and note that we get simultaneously A = ∑ i∈[n] β iy i = ∑ i∈[n] ∑ c≠y i − β ic following the definition in Equation (25). Then, we have Solution to the dual: To continue, we consider the following relaxation of the dual problem (26) (by removing the constraint in Equation (27)): Here, recall that B 2 denotes the spectral norm, hence the first constraint is equivalent to the first constraint in the maximization in (26) by Schur-complement argument. Using standard arguments, it can be shown thatd ≤ Ẑ * and equality holds by settinĝ where we recalled the compact SVDẐ = VΛU T . It is also not hard to see thatB is feasible in (29) (recall here that V T 1 k = 0). Hence,B is a maximizer andd = Ẑ * . The following lemma, the proof of which we defer to the end of this section, proves something stronger:B is in fact the only maximizer in (29). (29) is Ẑ * andB = UV T is its unique maximizer.
Lemma C.1. The optimal cost of the maximization in
Next, we use Lemma C.1 to show thatB also satisfies the inequality constraints in (27). To do this, we use an explicit construction of the singular factors U and V presented in Sec. A.2, which is possible thanks to the special structure ofẐ. Specifically, we prove in Lemma A.3 that Hence,B is feasible in (26). In fact, the feasibility inequalities are strict, which we will use soon. For now, note that feasibility ofB in (26) guarantees that it is its unique maximizer (since it is the unique maximizer of the program's relaxation, as established in Lemma C.1.) Therefore, andB in (30) is dual optimal for the semidefinite program in (22).
Solution to the primal relaxation: By strong duality, this implies q* = d* = ||Ẑ||_* and that any primal minimizer X̂ = [X̂_11, X̂_12; X̂_12^T, X̂_22]
satisfies the complementary slackness conditions in (28) with B̂ = UV^T. Combining these conditions shows that X̂_12 = VDU^T for some D ∈ R^{(k−1)×(k−1)}. Next, we use X̂_12 = VDU^T in (33) to compute D. For convenience, denote by e_{k,c} ∈ R^k the c-th standard basis vector in R^k and by e_{n,i} ∈ R^n the i-th standard basis vector in R^n. Then, starting with (33), we get a chain of implications showing D = Λ, where the second and third equalities used V^T V = U^T U = I_{k−1}. To conclude, we have shown that any optimal point X̂ of (22) satisfies X̂_11 = VΛV^T, X̂_12 = Ẑ and X̂_22 = UΛU^T. Solving the original problem: Now, we show that the convex relaxation in (22) is tight. For some partial orthonormal matrix R ∈ R^{(k−1)×d} (recall that d ≥ k − 1) with RR^T = I_{k−1}, let Ŵ = R^T Λ^{1/2} V^T and Ĥ = R^T Λ^{1/2} U^T. By construction, Ŵ^T Ĥ = X̂_12. Hence, (Ŵ, Ĥ) is feasible in (2). Thus, p* ≤ (1/2) tr(X̂) = q*. But we have already argued that p* ≥ q*. Hence, p* = q*.
Proof of Lemma C.1: It only remains to prove Lemma C.1, which we do here. Any feasible B satisfies B 2 ≤ 1. Hence, also recalling thatẐ has rank k − 1 (because V T 1 k = 0): The inequality above is tight if and only if which is indeed satisfied byB = UV T . Clearly,B is also feasible. Hence,B is a maximizer of (29). Next, we will show that there is no other maximizer, sayB. Indeed, sinceB is optimal, it must satisfy (40). Hence, it has rank at least k − 1. But sinceB1 k = 0, we find thatB has rank exactly k − 1.
with columns p i , q i , i ∈ [k − 1]. By Equation (40) we have the following chain of inequalities for all i ∈ [k − 1]: Inspecting this, we note that all inequalities must be equalities. The first inequality in the second line follows because B 2 ≤ 1 ⇒ ∀j ∈ [k − 1] ∶ σ j ≤ 1. Hence, σ j = 1 for all j ∈ [k − 1]. Equivalently Σ B = I k−1 . The second inequality in that same line is Cauchy-Schwarz and equality implies The last inequality follows because Since U B (resp. V B ) has orthonormal columns, equality in (43) holds if and only if for all i ∈ [k − 1], u i (resp. v i ) is a column of U B (resp. V B ). Then, it must be that P and Q are permutation matrices. Combined with (42) this gives for some permutation matrix Π. Continuing from this, Putting things together, we conclude that This concludes the proof of the lemma. 1.1, 1.2, 1.3
C.2 Proofs of Corollaries 1.1, 1.2, 1.3 and of Lemma 3.1
The proofs of the rest of the results of Sec. 3 are presented in the following sections.
C.3 On different regularization hyperparameters between embeddings and classifiers
Thus far in our analysis of the UFM, we assumed same regularization strength λ for the embeddings and classifiers; see Eqn. (1). Here, we discuss a slight generalization allowing different regularizations λ W and λ H for the classifiers and embeddings, respectively. This is motivated by previous studies of the UFM in the literature, e.g. [41]. As we show, this modification does not change the SELI geometry modulo a relative scaling factor between the embeddings and classifiers. Concretely, consider the following slight generalization version of Eqn. (1): which can be more conveniently reparameterized as follows: where β ∶= λ H λ W and λ H , λ W > 0. Now, consider the limit of vanishing regularization λ W → 0, λ H → 0, with a fixed finite and non-zero ratio β 2 = λ H λ W . Entirely analogous to Eqn. (2), this leads to the a β-parameterized UF-SVM as follows: Note that the UF-SVM we study in Eqn. (2) is a special case of the above for β = 1. For general values of β > 0, it is not hard to see that there is a one-to-one mapping of global solutions (Ŵ β ,Ĥ β ) of (45) to global solutions (Ŵ β ,Ĥ β ) of (2) as follows: Hence, from Theorem 1, it follows that the solutions of (45) satisfy for any β > 0 the following: Therefore, different regularization between embeddings and classifiers only affects the geometry of global minimizers of the corresponding UF-SVM (i.e., at vanishing regularization) up to introducing an extra scaling factor between the Gram matrices of embeddings and classifiers. Specifically, this only affects the relative scaling between the norms of embeddings and classifiers.
D Nuclear-norm relaxations of the UFM
In this section, we gather useful properties of the nuclear-norm-regularized CE minimization (7), repeated here for convenience: Ẑ_λ ∈ arg min_Z L(Z) + λ ||Z||_*. These properties are useful in the proofs of the results that appear in Sec. 4. As a reminder, the nuclear-norm-regularized CE minimization is relevant to us because of Theorem 2, i.e. it forms a tight convex relaxation of the non-convex ridge-regularized CE minimization for the UFM.
The following lemma gathers some basic properties, which we use later to study the behavior of the solutions to (7) for different regularization strengths.
Lemma D.1 (Basic properties of (7)). The following statements are true.
(i) There is a unique minimizerẐ λ .
(ii)Ẑ λ satisfies the following necessary and sufficient first-order optimality conditions: Proof. We prove each statement separately below.
Proof of (ii): This is straightforward from first-order optimality of the convex program (7). For example, see [41,Lemma C.3] for details.
Proof of (iii): Start from Statement (ii) and use from Lemma A.2 that 1_k^T ∇_Z L(Ẑ_λ) = 0. It must then be that the corresponding identity for 1_k^T holds, where the last equality uses optimality of Ẑ_λ. The above display contradicts optimality of Ẑ_λ + ∆ and completes the proof.

The per-example loss ℓ_y is strictly convex along any direction in the (k − 1)-dimensional subspace orthogonal to 1_k. That is, v^T ∇²_z ℓ_y(z) v > 0 for all non-zero v ∈ R^k such that v^T 1_k = 0.

Proof. Define the univariate function g(t) = ℓ_y(z + tv). From Taylor's expansion, for some θ ∈ (0, 1), the second-order term involves the quadratic form of the Hessian at an intermediate point. Hence, it suffices to show that v^T ∇²_z ℓ_y(z) v > 0 for any z. If this failed, v would lie in the null space of the Hessian, forcing v to be proportional to 1_k. But then v^T 1_k ≠ 0, which violates the lemma's assumption.
The key observation here is that ∇_Z L(0) = −Ẑ (see Lemma A.2 for a formula evaluating the gradient). Thus, ‖∇_Z L(0)‖_2 = ‖Ẑ‖_2, which completes the proof.
We note that, for (R, ρ)-STEP imbalance the necessary and sufficient condition of the lemma
In Sec. H we combine Theorem 2 with Lemma D.4 above to show that the solution (Ŵ_λ, Ĥ_λ) of (1) cannot satisfy the minority collapse property.
The proposition is an extension of [28, Theorem 3.1] to nuclear-norm regularization. Its proof follows the exact same steps as in Rosset et al. [28], who studied ℓ_p-regularization. The critical observation allowing this is that Ẑ is the unique minimizer of (2) thanks to Theorem 1. Because of that, we can get the following max-margin formulation for (the normalized) Ẑ, which is key in the proof of Proposition 3. Lemma D.5. Assume (R, ρ)-STEP imbalance, so that Ẑ is the unique solution of (2) (see Theorem 1). Then, the stated max-margin characterization holds. We present the proof of Proposition 3 together with the proof of Lemma D.5 in the next section.
D.3.1 Proof of Proposition 3
First, by the KKT conditions (see Lemma D.1(ii)), as λ → 0 we have ‖∇L(Ẑ_λ)‖_2 → 0. This, in turn, implies that the minimum margin m_λ of Ẑ_λ diverges. To see this, denote G_λ := ∇L(Ẑ_λ) and note from Lemma A.2 that, for all examples, the columns of G_λ take the softmax-minus-one-hot form; hence G_λ → 0 forces all logit gaps of Ẑ_λ to grow without bound. Assume for the sake of contradiction that lim_{λ→0} Ẑ_λ/‖Ẑ_λ‖_* = Z̄ for some Z̄ ≠ Ẑ/‖Ẑ‖_*. Then, by Lemma D.5, the margin m̄ of Z̄ must be strictly smaller than that of Ẑ/‖Ẑ‖_*. Moreover, since m_λ → +∞, we also have that m̄ > 0. The rest of the argument follows mutatis mutandis the proof of [28, Theorem 2.1]. We repeat it here for completeness. By continuity of the minimum margin in Z, there exists an open neighborhood of Z̄ on the nuclear-norm sphere and an ε > 0 bounding the minimum margin min_{i∈[n]} of every Z in this neighborhood. To continue, we use the following lemma. Lemma D.6. Assume Z_1, Z_2 such that ‖Z_1‖_* = ‖Z_2‖_* = 1, with respective minimum margins m_1 > m_2 > 0. Then, there exists T := T(m_1, m_2) such that for all t > T: L(tZ_1) < L(tZ_2).
We finish the proof by showing how to obtain Lemmas D.5 and D.6.

Proof of Lemma D.5. Clearly, Ẑ/‖Ẑ‖_* is feasible in the max-margin formulation of the lemma. Suppose, for the sake of contradiction, that some other feasible Z̄ attains at least as large a margin; a suitable rescaling of Z̄ is then feasible in (2). But then it must be optimal therein, since ‖Ẑ‖_* ‖Z̄‖_* ≤ ‖Ẑ‖_*. From Theorem 1, Ẑ is the unique minimizer of (49). Thus, we have shown that ‖Ẑ‖_* Z̄ = Ẑ, which contradicts the assumption on Z̄ and completes the proof.
Proof of Lemma D.6. Define ε := m_1/m_2 − 1 > 0 and let T be large enough such that e^{−T m_2} n(k − 1) < 1/2. We then have the stated chain of inequalities for t > T. The inequality in the third line uses that t > T and ε > 0, m_2 > 0. The next inequality follows by our choice of T. Throughout, we also used both sides of the elementary inequality x/2 ≤ log(1 + x) ≤ x, valid for x ∈ [0, 1].
E Proofs for Regularized CE (Section 4)
E.1 Proof of Theorem 2
As mentioned in Remark 4.1 the theorem is drawn from [41, Theorem 3.2] with the following three small adjustments. Since the main proof argument remains essentially unaltered, we refer the reader to [41, Sec. C] for detailed derivations. Instead here, we only overview the necessary adjustments. First, Theorem 2 holds for imbalanced classes. Technically, Zhu et al. [41] only consider balanced data. However, a close inspection of their proof shows that such a restriction is not necessary.
Second, Theorem 2 further shows that the nuclear-norm CE minimization in (7) has a unique solution. We prove this in Lemma D.1(i) in Sec. D.
Finally, Theorem 2 relaxes an assumption d > k in [41, Theorem 3.2] to d > k − 1. (In fact, Zhu et al. [41] conjecture that this relaxation is possible. We close the gap.) The assumption d > k is only used by Zhu et al. [41] to show there exists a nonzero a ∈ R^d such that Ŵ_λ^T a = 0 for a stationary point (Ŵ_λ, Ĥ_λ) of (1). (This step is necessary to construct a negative-curvature direction at stationary points for which ‖∇_Z L(Ŵ_λ^T Ĥ_λ)‖_2 > λ; see [41, Sec. C.1].) Indeed, if d > k, then existence of a is guaranteed because Ŵ_λ has k columns, implying rank(Ŵ_λ) ≤ k. To relax this requirement to d > k − 1, we show in Lemma E.1 below that Ŵ_λ 1_k = 0. Hence, rank(Ŵ_λ) ≤ k − 1.
E.2 Proof of Proposition 1
We prove the proposition right below its statement in Sec. 4.2. The only remaining thing to show is that for R > 1 and k > 2, not all eigenvalues of the SEL matrix are the same. This follows immediately from Lemma A.3 in Sec. A.2. Specifically, we show in (9) that if R > 1 and k > 2, then the maximum eigenvalue of Λ is √R and the minimum one is 1 (thus, they are different).
E.3 Proof of Proposition 2
The proposition follows by combining Proposition 3 and Theorem 2. First, thanks to Theorem 2, for all λ > 0 the solution of (1) satisfies Ŵ_λ^T Ĥ_λ = Ẑ_λ, where Ẑ_λ is the solution to (7). Here, we also used the fact from Equation (8). Next, from Proposition 3, Ẑ_λ converges in direction to Ẑ as λ → 0. The desired result follows by combining these two facts.
E.4 Comparison to one-layer linear model
In Sec. 4.4, we compared the solution to the non-convex minimization (1), corresponding to a two-layer linear model, to the solution found by a one-layer linear model. Specifically, the one-layer linear model trains a k-class linear classifier Ξ ∈ R^{k×n} by solving the convex ridge-regularized CE minimization in (51). The following lemma computes the solution of this minimization.
According to the lemma above, the one-layer linear model always finds (a positive scaling of) the SEL matrix, irrespective of imbalances and for any value of λ (including vanishing ones). On the other hand, by Proposition 1, we know that the end-to-end models minimizing ridge-regularized CE for a two-layer linear network correspond to the SEL matrix only if the data are balanced, or there are two classes, or the regularization is vanishing. Specifically, the solution is different when k > 2, R > 1 and λ > 0.
It is also worth noting the following connection between the one- and two-layer models. Thanks to the convex relaxation of Theorem 2, the end-to-end model W^T H found by the two-layer model solves CE minimization with nuclear-norm regularization (cf. (7)), compared to the ridge regularization in (51). Correspondingly, for vanishing regularization, the two-layer model corresponds to the "nuclear-norm SVM" in (5), compared to the vanilla SVM in (52). Notably, Theorem 1 proves that the solution to the former is always Ẑ, i.e., the same as that of the latter.
Proof of Lemma E.2. The minimization in (51) is convex. Hence, it suffices to prove that there exists α_λ such that ∇_Z L(α_λ Ẑ) + λ α_λ Ẑ = 0. Thanks to Lemma A.1(v), it suffices that there exists α_λ such that λ α_λ = k/(e^{α_λ} + k − 1). It is easy to check that this equation always has a positive solution α_λ > 0, since the LHS is increasing on (0, ∞) and takes all values in (0, ∞), while the RHS is decreasing on the same interval and takes values in (0, 1); hence the two curves cross at exactly one point. The fact that following the regularization path λ → 0 leads (in direction) to the SVM solution follows from [28]. These two facts combined also show that the solution to (52) is Ẑ.
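The fixed-point equation in the proof is easy to solve numerically. A minimal sketch using bisection is given below; the tolerance and the initial bracketing interval are arbitrary illustrative choices.

import numpy as np

def alpha_lambda(lam, k, hi=50.0, tol=1e-12):
    """Solve lam * alpha = k / (exp(alpha) + k - 1) for alpha > 0 by bisection."""
    f = lambda a: lam * a - k / (np.exp(a) + k - 1)   # increasing in a, negative near a = 0
    lo = 0.0
    while f(hi) < 0:          # enlarge the bracket if needed
        hi *= 2
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

print(alpha_lambda(lam=1e-3, k=4))   # grows as lam -> 0, consistent with the SVM limit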
F.1 Experiments with weight-decay
In this section, we show experiments on the UFM supporting the claim of Theorem 2: the global solution (W_λ, H_λ) of ridge-regularized CE in (1), call it "λ-SELI" for convenience (with some abuse of the term SELI, as the logit matrix does not represent the simplex encoding anymore for λ > 0), satisfies (8). Moreover, any first-order method that avoids strict saddles converges to that global optimum [41]. Fig. 9 investigates the above claims in a setting of (R = 10, ρ = 1/2)-STEP imbalance, k = 4 classes and n_min = 1, where we ran SGD on the ridge-regularized CE with λ/n = 10^-3. We set the learning rate to 1 and implement ridge-regularization as weight-decay on the parameters. We observe the following. Fig. 9b, 9c, and 9d verify convergence to λ-SELI, while Fig. 9a verifies that the (NC) property also holds. Also, the solution is clearly away from the ETF geometry (see green lines in Fig. 9). This is a noteworthy difference in the behavior of learning with imbalanced data, compared to that with balanced data. With balanced data, the geometry with ridge-regularization λ > 0 is always ETF. On the contrary, the geometry for learning from imbalanced data is sensitive to λ, as discussed in Sec. 4.2. While the distance from λ-SELI is the smallest, the distance to SELI is smaller compared to ETF. Thus, while SELI is not the "true" characterization when training with finite ridge-regularization, it is nevertheless a significantly better approximation than the ETF. In addition, compared to the λ-SELI solution, the SELI one admits explicit closed-form expressions (see Sec. B), rather than requiring the numerical solution of a nuclear-norm CE minimization. Finally, note that, up to a certain point of time during the training, the distances from SELI and λ-SELI are comparable. Interestingly, the divergence between the two distances becomes more prominent at an epoch count that also corresponds to a sharp fall of the NC error curve in Fig. 9a.
Remark F.1 (A note on λ scaling). In Equation (1), the CE loss is not normalized by the number of examples. On the other hand, in all our experiments, we normalize the CE loss by 1/n. This is why, when denoting the regularization used in experiments, we write λ/n in the axis-labels of all figures.
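For reference, a minimal PyTorch sketch of the UFM experiment of this section is given below. The embedding dimension, the number of steps, and the use of full-batch gradient steps are illustrative assumptions, and the exact correspondence between the weight-decay coefficient and λ/n depends on the factor-of-1/2 convention used in (1).

import torch

# (R=10, rho=1/2)-STEP imbalance with k=4 classes and n_min=1: two majority classes
# with 10 examples each and two minority classes with 1 example each.
y = torch.tensor([0] * 10 + [1] * 10 + [2] + [3])
k, n, d = 4, len(y), 8

W = torch.randn(d, k, requires_grad=True)   # classifiers (free parameters)
H = torch.randn(d, n, requires_grad=True)   # embeddings (free parameters in the UFM)

opt = torch.optim.SGD([W, H], lr=1.0, weight_decay=1e-3)  # weight-decay plays the role of lambda/n
loss_fn = torch.nn.CrossEntropyLoss()                     # CE normalized by n

for step in range(10000):
    opt.zero_grad()
    logits = (W.T @ H).T          # n x k view of the logit matrix Z = W^T H
    loss = loss_fn(logits, y)
    loss.backward()
    opt.step()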
F.2 Logit regularization and Ridge-decay
Our Proposition 2 proves that the optimal logits of CE minimization for the UFM with vanishing ridge-regularization form the SEL matrix. However, we also find in Fig. 5 that the convergence of SGD to the SEL matrix is rather slow. Motivated by this slow convergence, we discuss here logit-regularized CE minimization, i.e., the (non-convex) program in which the logit matrix Z = W^T H is penalized directly with strength λ_L. Note that when λ = 0, the global solutions of the above minimization give a logit matrix that aligns with the SEL matrix Ẑ. This is easy to see by noting the resemblance to the convex program in (51) and invoking Lemma E.2. Empirically, we observe that the above logit-regularization helps SGD converge much faster to solutions whose logit matrix aligns with the SEL matrix, even without additional ridge-regularization. Specifically, for (R = 10, ρ = 1/2)-STEP imbalance, k = 4 classes and n_min = 1, we run SGD on the logit-regularized CE with λ_L/n = 10^-3 and with zero ridge-regularization (λ = 0). We also set the learning rate to 1. Fig. 10 depicts convergence of the logit matrix Z in the direction of the SEL matrix Ẑ. However, we find that this does not ensure convergence of the individual geometries of W and H towards SELI, although their inner product W^T H aligns well with the SEL matrix.
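A sketch of how such a logit-regularization can be added to the UFM training loop is shown below. The helper name and the precise penalty convention are ours: since the display defining the program is missing from our source, we assume a squared Frobenius penalty on Z, which matches the stated resemblance to (51).

import torch

def logit_regularized_ce(W, H, y, lam_logit):
    """CE on Z = W^T H plus a (lam_logit / 2) * ||Z||_F^2 penalty, both normalized by n."""
    Z = W.T @ H                                       # k x n logit matrix
    ce = torch.nn.functional.cross_entropy(Z.T, y)
    return ce + 0.5 * lam_logit * (Z ** 2).sum() / len(y)

# Usage in the previous training loop: loss = logit_regularized_ce(W, H, y, lam_logit=1e-3)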
On the other hand, we also find that logit-regularized ERM on the UFM yields the SELI geometry simultaneously for logits, classifiers and embeddings when we follow a specific decaying schedule for the ridge-regularization parameter. Specifically, in the experiments shown in Fig. 11 and 12 below, we start with a large initial value λ/n = 10^-2 and progressively decay λ by a factor of 10 after every few epochs. We also set a finite strength of logit-regularization λ_L/n = 10^-3, although the convergence direction is not sensitive to that choice. We term this scheduling "ridge-decay". While the exact dynamics followed by SGD with such a scheme require further analysis, "ridge-decay" can be thought of as emulating the regularization path of ridge-regularization with λ → 0. Fig. 11 shows convergence of the GD solution with the above-described ridge-decay to the SELI geometry. Here, we choose k = 4 classes, with (R = 10, 100, ρ = 1/2)-STEP imbalance and n_min = 1. The learning rate is again fixed to 1. In this experiment, we use GD instead of SGD since the number of examples is small. This is also useful as it shows that GD is able to drive the solution towards the SELI geometry without stochastic updates being necessary.
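The "ridge-decay" schedule itself is simple to implement by updating the optimizer's weight-decay field during training. In the sketch below, the decay interval ("every few epochs") is an assumption, since the exact schedule is only loosely specified in the text.

import torch

def apply_ridge_decay(optimizer, epoch, lam0=1e-2, decay_every=100, factor=10.0):
    """Start from lambda/n = lam0 and divide it by `factor` every `decay_every` epochs."""
    lam = lam0 / (factor ** (epoch // decay_every))
    for group in optimizer.param_groups:
        group["weight_decay"] = lam
    return lam

Called at the top of each epoch of the GD loop, together with the logit-regularized loss sketched above, this reproduces the qualitative setup described in the text.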
In summary, we make the following observations from Fig. 11. First, GD iterates favor the SELI, instead of the ETF geometry, suggesting an implicit bias towards global minimizers of the UF-SVM. Second, with logit-regularization and ridge-decay, convergence towards SELI is achieved at a faster rate than in the case of unregularized CE. Third, while Fig. 10 showed that the logit matrix converges fast in the direction of the SEL matrix Ẑ with only logit-regularization, Fig. 11b and 11c show that ridge-decay promotes convergence of classifiers and embeddings to their respective SELI geometries as well. Finally, for completeness, we also present the NC convergence in Fig. 11a. The metric used to measure NC is as described in Sec. G.1.3.
The corresponding margins for the 4 classes are shown in Fig. 12. For examples belonging to class y, we define the average margin with respect to another class c ≠ y as the average of the logit gaps (w_y − w_c)^T h_i over all examples i with y_i = y. Note that this is an average over examples from class y, since by the NC property h_i ≈ µ_y for all i with y_i = y.
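A sketch of this margin computation is given below, assuming the logit-gap definition stated above (the exact display is missing from our source).

import torch

def average_margins(W, H, y, k):
    """avg_margin[y, c] = mean over {i : y_i = y} of (w_y - w_c)^T h_i, for c != y."""
    Z = W.T @ H                                       # k x n logits
    margins = torch.zeros(k, k)
    for cls in range(k):
        idx = (y == cls).nonzero(as_tuple=True)[0]
        gaps = Z[cls, idx].unsqueeze(0) - Z[:, idx]   # k x n_cls: (w_cls - w_c)^T h_i
        margins[cls] = gaps.mean(dim=1)
    return margins                                    # diagonal entries are zero by construction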
We make the following remarks regarding Fig. 12. First, as training progresses, the (average) margins are positive, thus zero training error is achieved for all classes. Second, the average margins for a class y with respect to classes c ∶ c ≠ y converge to a common value, even though their initial values differ. This can be seen from the convergence of the four different colored curves within a plot (e.g. Fig. 12a). Third, all margins for all pairs of classes converge to the same quantity, irrespective of being majority or minority classes. Note, for instance, from Fig. 12a,12b,12c, and 12d that the final value of all graphs is the same. Finally, the value of margins stagnates to a level that is governed by the strength of the logit-regularization λ L .
G Additional results on real data
This section complements Sec. 5.3 with additional experiments.
G.1.1 Implementation details
Our experiments on deep models build on the code provided by [26]. We train ResNet-18 [10] and VGG-13 [29] models on three 10-class datasets: CIFAR10, MNIST and Fashion-MNIST. Following [26], we use batch normalization in place of the dropout layers in VGG-13. For both models, we disable the biases of all the fully-connected layers, similar to the experiments on the UFM. We adopt the same training strategy as [26], namely SGD on the CE loss, with momentum (0.9), small weight-decay (5 × 10^-4), and learning rate 0.1 decayed at two stages (epochs 120 and 240) by a factor of 10. We train the network in an (R, 1/2)-STEP imbalance setting. To create imbalanced data, we use the data sampler provided by Cao et al. [2]. Following [26], we do not use any data augmentation. In all the experiments, we fix the first 5 classes to be majorities and the rest as minorities. To have a fair comparison between the models with different imbalance ratios R, we sample the datasets to have n = 25250 training images in all cases. While the training set is imbalanced, when measuring test performance of a trained model we do so on a balanced test set, just like [1,2]. We measure the metrics at certain epochs and, similar to [26], we sample epochs more frequently at the start of training, as the network parameters change more quickly in the beginning.
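For concreteness, the optimizer and schedule described above correspond to the following PyTorch configuration (a sketch only: the imbalanced sampler of Cao et al. [2], the bias-free fully-connected layers, the CIFAR-style ResNet variant and the data pipeline are omitted or assumed available).

import torch
import torchvision

model = torchvision.models.resnet18(num_classes=10)   # 10-class head; the CIFAR variant of [26] differs

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.1,                 # initial learning rate
    momentum=0.9,
    weight_decay=5e-4,      # small weight-decay, as in [26]
)
# Decay the learning rate by a factor of 10 at epochs 120 and 240; training runs until epoch 350.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[120, 240], gamma=0.1)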
G.1.2 Model accuracies
Consistent with the requirements of the neural collapse phenomenon by Papyan et al. [26], all models are trained well beyond the point of reaching zero training error. Specifically, as illustrated in Fig. 2 and 6, most of the models achieve zero error around epoch 120, while training continues until epoch 350. Table 2 presents the first epoch at which each model achieves 100 percent training accuracy for each value of the imbalance ratio R. For practical purposes, a majority class is declared to have achieved zero training error when its error falls below 0.2%; for minority classes (n_min ≤ 500) we require exactly 0.00% error.
Balanced test errors are reported in Table 3. Majority and minority errors are calculated by averaging the per-class majority and minority errors respectively. The total error is calculated
G.1.3 NC property
From the (NC) property, we expect that the embeddings collapse to their class means. In order to quantify the validity of this property, we follow [26]. Specifically, we compute the within-class covariance Σ_W = (1/n) Σ_{c∈[k]} Σ_{i: y_i=c} (h_i − µ_c)(h_i − µ_c)^T and the between-class covariance Σ_B = (1/k) Σ_{c∈[k]} (µ_c − µ_G)(µ_c − µ_G)^T, where µ_c = (1/n_c) Σ_{i: y_i=c} h_i is the mean embedding of class c and µ_G = (1/k) Σ_{c∈[k]} µ_c is their (balanced) global mean. We can then measure NC by computing tr(Σ_W Σ_B^†)/k. Fig. 13 illustrates how this quantity indeed decreases as training evolves. This confirms that feature embeddings converge to their class means, regardless of the imbalance ratio R.
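A sketch of this metric, using the standard definitions from [26] stated above (the averaging conventions are the usual ones, filled in where the extracted formulas are incomplete):

import numpy as np

def nc_metric(H, y, k):
    """tr(Sigma_W @ pinv(Sigma_B)) / k for embeddings H (d x n) and labels y."""
    mu = np.stack([H[:, y == c].mean(axis=1) for c in range(k)], axis=1)  # d x k class means
    mu_G = mu.mean(axis=1, keepdims=True)                                  # balanced global mean
    centered = H - mu[:, y]                                                # h_i - mu_{y_i}
    Sigma_W = centered @ centered.T / H.shape[1]
    Sigma_B = (mu - mu_G) @ (mu - mu_G).T / k
    return np.trace(Sigma_W @ np.linalg.pinv(Sigma_B)) / k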
G.1.4 Norms of classifiers / embeddings
Here, we further investigate the geometry of learned embeddings and classifiers by focusing on their norms. In particular, we study the ratios τ_h := ‖h_maj‖_2/‖h_minor‖_2 and τ_w := ‖w_maj‖_2/‖w_minor‖_2. Assuming the classifiers and embeddings follow the SELI geometry, these ratios admit explicit closed-form expressions thanks to Lemmas B.1 and B.2. To determine deviations of the measured norm-ratios τ_w and τ_h from those reference closed-form expressions, we calculate and report the relative deviation of τ_w(c, c′) = ‖w_c‖_2/‖w_{c′}‖_2 from the reference value τ̂_w, where τ̂_w is given by Lemma B.1 for SELI and is equal to 1 for ETF, and c is a majority and c′ a minority class. Similarly, we compute distances for the norm-ratios of the centered mean embeddings µ_c. Fig. 7 and 14 depict these metrics during training of the ResNet and VGG networks. The results confirm once more that the SELI geometry accurately captures features of the learned geometries. On the contrary, this is not the case for ETF when data are imbalanced. We observe that convergence to the SELI geometry (and the respective deviation from ETF) is more pronounced for the classifier weights and becomes elusive for the embeddings, particularly for large imbalance ratios (R = 100).
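The norm-ratio computation can be sketched as follows. Since the display defining the reported deviation is missing from our source, the aggregation over majority-minority pairs below (a simple average of relative deviations) is an assumption on our part.

import numpy as np

def norm_ratio_deviation(W, maj, minor, tau_ref):
    """Average relative deviation of tau_w(c, c') = ||w_c|| / ||w_c'|| from a reference ratio."""
    norms = np.linalg.norm(W, axis=0)            # column norms of the d x k classifier matrix
    ratios = [norms[c] / norms[cp] for c in maj for cp in minor]
    return np.mean([abs(r - tau_ref) / tau_ref for r in ratios])

# tau_ref would be taken from Lemma B.1 for SELI, or set to 1 for ETF.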
G.1.5 Non-alignment of classifiers and embeddings
While from previous empirical results on balanced datasets [26] we expect an alignment between the classifiers and embeddings, Lemma B.5 suggests that these two geometries deviate as data become more imbalanced. To verify this property, we compute the angle between mean embeddings and their corresponding classifiers, and measure the deviation from the SELI geometry. Namely, let θ_c = Cos(w_c, h_c). Then, similar to the previous section, we compute the deviation of θ_c from the reference value θ̂, where c ranges over minority classes and θ̂ is given by (18). Fig. 15 shows how this quantity evolves during training. From Fig. 8b, we know that the embeddings and classifiers of the majority classes remain aligned even for highly imbalanced data, thus we only analyze the impact of the imbalance ratio on the minority classes.
G.1.6 Majority vs minority geometry
In order to better understand the individual behavior of majorities and minorities, we now compare individual quadrants of the (normalized) G_W and G_H matrices. Concretely, we partition the normalized G_W/‖G_W‖_F into (k/2) × (k/2) sub-blocks. Comparing the quadrants G_W^{maj-maj}, G_W^{maj-minor} and G_W^{minor-minor} to the corresponding quadrants of the reference SELI/ETF matrix Ĝ_W allows us to "zoom in" on the majority-majority, majority-minority and minority-minority structures. Entirely analogous calculations allow the same for the embeddings.
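A sketch of the quadrant comparison is given below. As in the rest of the experiments, the first k/2 classes are assumed to be the majorities; the per-block distance measure (Frobenius distance between normalized blocks) is our assumption where the text does not spell it out.

import numpy as np

def quadrant_distances(G, G_ref, k):
    """Compare maj-maj, maj-minor and minor-minor blocks of normalized Gram matrices."""
    Gn, Gr = G / np.linalg.norm(G), G_ref / np.linalg.norm(G_ref)
    m = k // 2                                   # first k/2 classes are majorities
    blocks = {"maj-maj": (slice(0, m), slice(0, m)),
              "maj-minor": (slice(0, m), slice(m, k)),
              "minor-minor": (slice(m, k), slice(m, k))}
    return {name: np.linalg.norm(Gn[rows, cols] - Gr[rows, cols])
            for name, (rows, cols) in blocks.items()}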
Classifiers Geometry. Fig. 16 confirms that both majority and minority geometries converge to SELI properly. Interestingly, we see that the minorities diverge the most from the equiangular structure of ETF geometry.
Embeddings Geometry. We find thanks to Fig. 17 that the "error" in convergence of embeddings to SELI geometry (compare to the better convergence for classifiers) shown previously in Fig. 2 and 6 is primarily due to the minority class geometries. However, an overall inspection of the subfigures shows that embeddings also tend to align better to SELI compared to ETF. This alignment property is pronounced in the case of majority geometries (see G M maj−maj ).
G.2 Capturing weight-decay with regularized UFM
In Fig. 18 we compare, for R = 10, the distances of the G_W and G_M matrices to various λ-SELI geometries for different values of the regularization λ. Specifically, for the λ-SELI geometries, we obtained the reference matrices Ĝ_W^λ, Ĝ_M^λ as follows. For each value of λ, we solve the nuclear-norm CE minimization in (7) to find Ẑ_λ for k = 10 and R = 10 (to match the CIFAR10 settings). We then form Ĝ_W^λ, Ĝ_M^λ as described in the first paragraph of Sec. 5, only now using Ẑ_λ instead of the SELI matrix Ẑ. For comparison, we also plot the distances to the SELI and ETF geometries.

Figure 18: Distances of ResNet learned classifiers and mean-embeddings trained on CIFAR10 data with imbalance ratio R = 10 from the λ-SELI geometry (i.e., the solution to (7) as defined in Theorem 2) for different values of λ, as well as from the SELI and ETF geometries.

Note that we have experimented with a wide range of values for the regularization λ. Among those is the value 5e-4, which matches the choice of weight-decay parameter in the deep-net experiments. Recall also from the discussion in Sec. 4 that Ẑ_λ is sensitive to λ in this setting where R = 10, k = 10.
We make the following interesting observations from Fig. 18. First, note that for classifiers the minimum distance is that to SELI. The distance of the embeddings' geometry to SELI is also among the lowest ones. Specifically, despite being slightly larger than that from λ-SELI for a few values of λ, the difference is very small. This suggests that SELI is indeed a good approximation for the learned geometry even when training with finite weight-decay. Besides, there are two key advantages of the SELI over the λ-SELI geometry. First, it is unclear what the mapping ought to be (if such a mapping exists) between training-implementation choices (such as weight-decay) and λ. Second, even if such a mapping was known, the SELI geometry has the unique advantage of being expressed simply in terms of the (SVD of the) SEL matrix. In fact, as we show in Sec. B it is possible to get closed-form expressions for the norms and angles describing the geometry. This not only makes calculations much easier, but also it allows further analysis of the properties (e.g. quantifying norm-ratios as in Fig. 7).
G.3 Additional experiments for minority ratios ρ ≠ 1/2
Thus far, in our previous experiments we considered imbalanced data with minority ratio ρ = 1/2, i.e., the same number of minority and majority classes. However, note that our theoretical results hold for any value of ρ. Specifically, Theorem 1 shows that the solution of the UF-SVM follows the SELI geometry irrespective of ρ. Here, we empirically study convergence to the SELI geometry for a ResNet-18 model on (R, ρ)-STEP imbalanced CIFAR10 data, for two values of the minority ratio, ρ = 0.3 and 0.7. These experiments complement our previous demonstrations for ρ = 0.5. Specifically, we create training sets of the same size, n = 15350, in all experiments for a fair comparison. All other experimental settings are as described in Section G.1.
H Implications on minority collapse
In this section, we further elaborate on how our results relate to the minority collapse phenomenon, which is defined by Fang et al. [3] as the phenomenon during which minority classifiers become completely indistinguishable. Notably, Fang et al. [3] discover its occurrence both in the UFM and in real datasets trained with deep-nets. Below, we first state their concrete findings and then we discuss how our results extend them.
Summary of findings by Fang et al. [3].
Fang et al. [3] make the following key findings.
FHLS(2) They also find numerically that the solution to the same constrained UFM gives Cos(w_minor, w′_minor) = 1 for any R > R_0, for some finite threshold R_0.

FHLS(3) Their experiments, specifically [3, Fig. 3], suggest that for a fixed number of classes k, the value of the threshold R_0 increases as the constraint parameter gets relaxed and also as the ratio ρ of minority classes increases.

FHLS(4) (a) Consistently, as R increases, the cosine similarity between minority classes increases until it reaches one. (b) The value of R after which the cosine becomes one (i.e., minority collapse is reached) depends critically on the weight-decay. For small weight-decay (∼5e-4), it takes R > 1000 to reach minority collapse. It is only for larger weight-decay (∼5e-3) that minority collapse occurs for R ∼ 100.
Our novelties. Before discussing implications of our results for minority collapse, we highlight the following key features of our study.
• Entire geometry: We describe the entire geometry of classifiers and embeddings for both majority and minority classes (not only the geometry of minority classes).
• Finite imbalance levels: Our geometric characterizations (aka SELI geometry) hold for all finite values of the imbalance ratio R (not only asymptotically).
• Vanishing regularization: We focus on CE training with vanishing regularization. (As such, our geometry characterizations result from analyzing the UF-SVM.)
Contact points: What do our results say about minority collapse?
• For zero regularization (aka the UF-SVM), there is no minority collapse for any finite value of R. Specifically, we show in Lemma B.3 that for (R, 1/2)-STEP imbalance, Cos(w_minor, w′_minor) = (R − 7)/(R − 7 + 2k(2 + (R + 1)/2)) < 1. This does not contradict Finding FHLS(2), since here we consider zero regularization; all their numerical evaluations are with finite regularization.
• Put in terms of the imbalance ratio, we prove that the minority-collapse threshold satisfies a bound of the form R_0 ≥ f(λ, ρ). It is straightforward to check that f(λ, ρ) is increasing in ρ and decreasing in λ. Thus, we show that the minority collapse threshold R_0 (see Finding FHLS(3)) increases with the minority ratio ρ and with the inverse regularization parameter 1/λ. This finding explains the behavior reported empirically in [3, Fig. 3] for the UFM and in [3, Fig. 2, 4] for real data.
• The angle between minority classifiers decreases monotonically with the imbalance ratio R for the UF-SVM. Specifically, it can be checked easily by direct differentiation that the formula in Lemma B.3 giving Cos(w_minor, w′_minor) is increasing in R. This can be seen as a theoretical justification of the empirical Finding FHLS(4)(a).
I Additional related work
As discussed in Sec. 1.1, our work is inspired and is most closely related to the recent literature on Neural Collapse. In Sec. 1.1 we reviewed the most closely related of these works. A few others are referenced in Sec. I.1 below. Beyond Neural collapse, our results and analysis tools are also related to the literatures on implicit bias, matrix factorization and imbalanced deep-learning. We elaborate on these connections below.
I.1 Additional works on Neural Collapse
Beyond CE minimization, a series of recent works study and analyze the neural collapse phenomenon when training with square loss. Interestingly, Graf et al. [5], Fang et al. [3] discover and analyze neural collapse for similarity-type losses, such as the self-supervised contrastive loss, which trains only for embeddings. To the best of our knowledge, all these works restrict attention to balanced classification. Our work shows that it is possible to obtain explicit geometric characterizations in class-imbalanced settings when training with CE. Hence, it also opens the way to extending the analysis to square-loss minimization.
Potential connections of neural collapse to generalization and transferability are a less well-understood topic. Some initial investigations appear in the recent works [11,4,9]. Our results are not immediately conclusive about generalization. However, as mentioned in Sec. 6, our results have the potential to offer new perspectives on generalization, since they uncover different geometries (aka SELI for different R values), each leading to different generalization (worse for increasing R [1,2]).
Remark I.1 (Last-layer peeled model (LPM)). Around the same time that the UFM was proposed by Mixon et al. [24] (see also Lu and Steinerberger [21], Graf et al. [5]), the same model was independently formulated and analyzed by Fang et al. [3] under the alternative name of "last-layer peeled model (LPM)".
Remark I.2 ((NC) and (ETF) properties in [26]). The formalization of Neural collapse by Papyan et al. [26] involved four NC properties. The first property concerns the collapse of class embeddings to their corresponding means. Properties two and three concern the geometry of class-means and classifier-weights, specifically their alignment and convergence to a simplex ETF geometry. Property four is a consequence of the other three properties, hence is less important in the formalization and we do not discuss it further. Motivated by our findings, we propose and use here a regrouping/renaming of the aforementioned three properties. We refer to the first property as the (NC) property, and, to the second and third properties as the (ETF) property. We argue that this distinction is important towards a formulation that is invariant across class-imbalances, by showing that the ETF property is not invariant and replacing it with the (SELI) property. For balanced data, the latter simplifies to the ETF property.
I.2 Implicit bias
Neural collapse is intimately related to the recent literature on implicit bias, which started from a series of influential works [30,13,7,14,25]. (This connection is already recognized by the seminal work of Papyan et al. [26].) For example, [7, Theorem 7, Remark 2] concerns a bilinear non-convex SVM-type minimization that bears similarities to the UF-SVM and establishes a connection to a convex nuclear-norm minimization problem. However, the two factors in their bilinear formulation are the same (unlike the UF-SVM) and also they restrict attention to binary classification. Another very closely related work is that by Lyu and Li [22] who show that gradient descent on deep homogeneous networks converges (in direction) to a KKT point of a corresponding non-convex max-margin classifier. While their focus is again on binary classification, they briefly discuss extension to multiclass settings in their appendix. Recently, Ji et al. [12] leveraged their results and formally showed that gradient descent on (1) with zero regularization converges to a KKT point of the UF-SVM. The max-margin classifiers corresponding to multi-layer linear networks (with the UF-SVM being a special case) are nonconvex. Hence there is no guarantee that the KKT point where gradient descent converges to is a global optimum. Whether this is the case or not is investigated for various settings by Vardi et al. [33]. Specifically for linear fully-connected networks, which are of interest to us, they show that, when trained on binary data, the point of convergence is always a global optimum [33,Theorem 3.1]. Their proof uses another nice result on implicit bias by Ji and Telgarsky [15]. In fact, their results are more intimately connected to neural collapse as it can be checked that [15,Proposition 4.4] provides a direct proof that gradient descent on unregularized CE for the UFM and binary data finds embeddings and a classifier that satisfy the NC and ETF properties (irrespective of imbalance). Note here that all these works focus almost exclusively on binary settings. A salient message of our results is that rich and possibly complicated behaviors can occur in multiclass (k > 2) settings. There are several findings supporting this. For example, we show that regularization in the UFM only matters when data are multiclass and imbalanced. Similarly, it is only then that the model found by the two-layer UFM can differ from what a one-layer convex network would find. We also show empirically for the UFM that convergence rates in the absence of regularization can be heavily impacted by imbalances. Related to this, we highlight a missing piece in our analysis: we characterize the global optimum of the UF-SVM, but we do not prove, or are aware of a proof, that gradient descent converges to this global optimum. Proving or disproving this can be of great interest in its own way. On the one hand, if the conjecture holds, then our results warn that imbalance levels can severely impact convergence rates. On the other hand, if the conjecture is refuted, then this would be the simplest model to have been discovered where convergence to global optimum fails.
I.3 Matrix factorization and low-rank recovery
The connection between the study of the UFM for the purpose of neural collapse analysis and the literatures on matrix factorization and low-rank matrix recovery (see for example [36]) was uncovered and first exploited by Zhu et al. [41] and Fang et al. [3]. Thus, we refer the interested reader to those papers for a list of references and detailed discussion (specifically, see Zhu et al. [41, Sec. 3.2]). Specializing this discussion to the UF-SVM that is of main interest to us, we note its close ties to the formulation of the hard-margin matrix factorization problem as studied by Srebro et al. [31]. The authors formulated the problem of fitting a binary target matrix Y (i.e., with entries ±1) with a low-rank matrix W^T H as the minimization (56), where S is a given subset of observed entries of Y. Despite being non-convex, they derived a convex reformulation based on duality and a corresponding procedure for finding a solution to (56) via essentially solving an SDP and an appropriate system of linear equations corresponding to the active constraints. The non-convex max-margin problem (2) that we investigate bears similarities to (56). Specifically, for a one-hot encoding and fully observed Y, (56) is essentially the binary analogue of (2). Importantly, in our setting we are able to calculate the solution to (2) in closed form, that is, without requiring the numerical solution of an SDP. Finally, as mentioned in Sec. E.4, a byproduct of Theorem 1 is that the nuclear-norm max-margin minimization (5) has the same solution as the vanilla max-margin with Frobenius norm. Although different, somewhat related settings where Frobenius-norm penalized problems give the same solutions as nuclear-norm penalized ones have been studied in the low-rank representation literature, e.g. [34,27].
I.4 Class-imbalanced deep learning
The past few years have seen a surge of research activity towards substituting the vanilla CE empirical risk minimization, which leads to poor accuracy for minorities, with better alternatives that are particularly suited for training large models, e.g. [17,18,2,23,39]. Among the many solutions suggested in the recent literature, most closely related to the topic of neural collapse are [17,18]. Interestingly (as it happened chronologically before the conception of neural collapse by Papyan et al. [26]), Kang et al. [17] and Kim and Kim [18] observed that the classifier weights found by deep-nets trained with CE on class-imbalanced data have larger norms for majority than for minority classes. This empirical observation led them to propose post-hoc schemes that normalize the logits before deciding on the correct class, thus leading to better performance on the minorities. Our Lemma B.1 not only proves this behavior for the unconstrained feature model, but it also precisely quantifies the norm-ratio between minorities and majorities. Interestingly, our deep-net experiments in Fig. 7 confirm the predicted behavior. Evidently then, Lemma 3.1 offers a plausible theoretical justification of the empirical findings by Kang et al. [17], Kim and Kim [18] and also quantifies the norm ratio. It is conceivable that this
The strategy of cardiopulmonary bypass for total aortic arch replacement and the frozen elephant trunk technique with aortic balloon occlusion.
Objective To investigate the use of the aortic balloon occlusion technique to assist total aortic arch replacement (TAR) with frozen elephant trunk (FET) to shorten the lower body circulatory arrest (CA) time and raise the nadir temperature during cardiopulmonary bypass. Methods This retrospective study reviewed consecutive patients that underwent aortic balloon occlusion to assist TAR with FET and patients that received conventional TAR with FET procedures. Preoperative characteristics, perioperative characteristics and postoperative outcomes were compared between the two groups. Results The study included 130 patients treated with aortic balloon occlusion and 230 patients treated with conventional TAR with FET. The 30-day mortality rate was similar between the aortic balloon occlusion and conventional groups (4.62% versus 7.83%, respectively). Multivariate analysis showed that aortic balloon occlusion reduced the incidence of acute kidney injury, hepatic injury and red blood cell transfusion. The application of aortic balloon occlusion reduced the mean ± SD CA time from 17.24 ± 4.36 min to 6.33 ± 5.74 min, with the target nadir nasal temperature being increased from 25°C to 28°C. Conclusion The aortic balloon occlusion technique achieved significant improvements in reducing complications, but this did not translate into lower 30-day mortality.
Introduction
Total aortic arch replacement (TAR) with the frozen elephant trunk (FET) technique is the standard surgical technique used for treating complex aortic diseases in many aortic and vascular surgery centres. [1][2][3][4][5][6][7][8][9][10] This surgical procedure is associated with more risks as the lower body undergoes circulatory arrest (CA) for 20-40 minutes, 1,3,11,12 which requires complex cardiopulmonary bypass (CPB) and anaesthesia management in addition to the surgeon's technique. The procedures to assist this complicated operation include: (i) setting a suitable deep-to-moderate hypothermic state to adapt the required period of CA time; (ii) exerting a moderate perfusion pressure/rate and perfusion timing that provide sufficient protection for the end organs, especially the brain and the kidneys; (iii) assuring the maintenance of body circulation before and after CPB with drug, fluid, transfusion and/or other measures; (iv) protecting the organs from severe damage caused by the operation and CPB. All these procedures aim to ensure the safe performance of the operation and successful postoperative management in order to achieve satisfactory outcomes. [11][12][13][14][15][16][17] The aortic balloon occlusion technique uses an inflated balloon to occlude the descending aorta and therefore allow continuous perfusion through the femoral artery for the lower body 18 and the subsequent raising of the target nadir temperature setting point during CPB. Improved surgical procedures have resulted in new requirements for CPB management with regard to formulating a higher temperature setting point, modifying the perfusion rate during the aortic balloon occlusion so that it provides simultaneous flow to the brain and the lower body, and maintaining both the body circulation and protecting the damaged organ during the operation in order to accomplish the whole operation process. This study summarizes specific aspects of the adjusted operative strategy of patient management during and after CPB with the aortic balloon occlusion technique as undertaken at the Fuwai Hospital, National Centre for Cardiovascular Diseases, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China. The current study will provide a report of the early outcomes for this procedure at this centre.
Study population and data collection
This retrospective study reviewed the inhospital records of all consecutive patients that underwent the TAR and FET operation in Fuwai Hospital, National Centre for Cardiovascular Diseases, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China between August 2017 and February 2019; during which time the aortic balloon occlusion technique was initially applied, tested for safety by the authors and then gradually introduced to other surgeons in Fuwai Hospital. The indications for TAR with FET in these patients included: (i) for type A aortic dissection with a primary intimal tear distal to the aortic arch that needed to be sealed off by the FET to avoid postoperative development of dissection and prevent aneurysmal formation in the distal aorta; (ii) for complicated type B aortic dissection contraindicated for thoracic endovascular aortic repair; (iii) serving as one of the stages for the treatment of concomitant distal thoracic or thoraco-abdominal aortic replacement. 9 The authors and surgeons who were introduced to the aortic balloon occlusion technique during the study period would apply this technique unless it could not be achieved technically. The other surgeons who were in another team in our centre performed the conventional TAR with FET. All consecutive patients that underwent TAR with FET between August 2017 and February 2019 were included in this study. There were no patients excluded during this study period.
The institutional board of Fuwai Hospital, National Centre for Cardiovascular Diseases, Chinese Academy of Medical Sciences and Peking Union Medical College approved this study (no. 20180528001) and waived the need for individual patient consent.
Surgical technique and perfusion strategy for the aortic balloon occlusion technique
Arterial cannulation of the right axillary and femoral arteries was achieved using a bifurcated arterial line with a Y-connector from a single central perfusion (Figure 1a).
Regional cerebral oxygenation saturation (rScO2) of the forehead was monitored by near infrared reflectance spectroscopy (Suzhou Engin Biomedical Electronics, Jiangsu, China). The axillary arterial line blood flow was monitored using an ultrasound flowmeter probe (Transonic Systems Inc, New York, NY, USA). Systemic CPB was discontinued at a target nasopharyngeal temperature of 28°C. When the lower body CA needed to start, the femoral line was clamped with a decrease of the perfusion rate. After clamping the proximal innominate artery, antegrade selective cerebral perfusion (ASCP) was started at a rate of 5-10 ml/kg per min through the right axillary artery cannula. The Cronus stent elephant trunk (diameter 26 or 28 mm, length 100 or 120 mm; Cronus; MicroPort Endovascular Shanghai Co., Ltd., Shanghai, China) was released in the true lumen of the descending aorta. Then the aortic balloon (40 ml Coda Balloon Catheter; Cook Incorporated, Bloomington, IN, USA) in a sheath (16 or 18F; W.L. Gore & Associates, Inc., Flagstaff, AZ, USA) was deployed into the metal part of the stent graft and inflated by injection of 40 ml saline to compress the stent graft. Immediately after the balloon was fixed, perfusion of the lower body was resumed through the right femoral artery cannula, along with ASCP through the right axillary artery cannula from the single central perfusion. The CPB flow was restarted and gradually increased to one-half of the total flow. The lower body CA lasted for approximately 5 min (Figure 1b). Immediately after distal anastomosis, the balloon was withdrawn and the graft and its branches were clamped. The left common carotid artery was anastomosed first, after which perfusion through both carotid arteries could be obtained. Then, rewarming of the patient was started, with the CPB flow gradually increased to the total flow (Figure 1c). The anastomosis of the left subclavian artery (LSCA) and the innominate artery completed the surgery (Figure 1d).
Statistical analyses
All statistical analyses were performed using IBM SPSS Statistics for Windows, Version 19.0 (IBM Corp., Armonk, NY, USA). Continuous variables are presented as mean ± SD. For univariate comparisons, normally distributed continuous variables were evaluated with Student's t-test and continuous variables that were not normally distributed were evaluated using the Mann-Whitney U-test. Categorical variables were compared using the χ²-test. Binary logistic regression was performed on clinical events and checked with the Hosmer-Lemeshow goodness-of-fit test before drawing any conclusions. The aortic balloon occlusion group had both the conditions of shorter lower body CA time and higher CPB temperature during CA, so the group variable in the risk factor analysis already encompassed both lower body CA time and CPB temperature. Neither lower body CA time nor CPB temperature could therefore be considered again in the risk factor analysis, because the group variable already contained both of these factors. Similarly, an emergency operation in the risk factor analysis described the emergency status of the patient on admission to the hospital, in which the operation was performed on the day of admission or the next morning. Most cases were aortic dissection at an acute stage, and thus being at an acute stage of aortic dissection could not be considered in the risk factor analysis because they were basically the same subgroup of patients. As a result, each multivariable analysis considered the following independent variables: categorical variables included aortic balloon occlusion group or conventional group, sex, cardiac surgery history, coronary artery disease history, concomitant coronary artery bypass graft, heavy smoker, emergency operation and aortic dissection extended to the total abdominal aorta; continuous variables included age (years), body mass (kg), height (cm), preoperative haemoglobin (g/l), preoperative platelet count (10⁹/ml), preoperative leukocyte count (10⁹/ml), preoperative percentage of neutrophils (%), preoperative D-dimer (µg/ml), preoperative fibrinogen degradation product (FDP; µg/ml), preoperative aspartate transaminase (U/l) and preoperative serum creatinine (µmol/l). For total transfusion during the in-hospital stay, the re-examination values on postoperative day 1 of haemoglobin (g/l), platelet count (10⁹/ml), leukocyte count (10⁹/ml), percentage of neutrophils (%), D-dimer (µg/ml) and FDP (µg/ml) were also considered in addition to the previous independent variables. Major complications and clinical endpoints were reported according to the consensus statement from the International Aortic Arch Surgery Study Group. 19 In particular, acute kidney injury (AKI) was defined as serum creatinine (normal range 44-133 µmol/l) increased by >1.5 times the baseline value (>200 µmol/l). Liver injury was defined as aspartate transaminase (normal range 15-40 U/l) increased by >1.5 times the baseline value (>60 U/l) for more than 48 h. A P-value <0.05 was considered statistically significant.
Results
This retrospective study analysed data from 130 patients that received the new TAR with FET (aortic balloon occlusion group) procedure and 230 patients that underwent TAR with FET with lower body CA during distal aortic anastomosis (conventional group). The preoperative demographic and clinical characteristics of the two groups are presented in Table 1. The mean ± SD ages of the patients in the aortic balloon occlusion and conventional groups were 49.84 ± 12.21 and 48.00 ± 10.27 years, respectively. The aortic balloon occlusion and conventional groups were similar in terms of age, body mass, height, body surface area, proportion of males, proportion of heavy smokers, proportion with a history of coronary artery disease, time from symptom onset to the time of the operation and aortic pathology. Significantly more patients in the conventional group had a prior history of cardiac surgery and significantly fewer had aortic pathology at the subacute stage compared with patients in the aortic balloon occlusion group (P < 0.05 for both comparisons). The aortic branches were similarly involved by dissection in the aortic dissection cases (Figure 2).
The perioperative characteristics of the two groups are presented in Table 2. The aortic balloon occlusion group had a significantly shorter mean ± SD lower body CA time than the conventional group (6.33 ± 5.74 min versus 17.24 ± 4.36 min; P < 0.001). The mean ± SD total operation time (from the first skin incision to the time in the operating room before being transferred back to the intensive care unit [ICU]) was similar in both groups. The mean ± SD primary CPB time was significantly longer in the aortic balloon occlusion group compared with the conventional group (P < 0.05), but the mean ± SD operation time after CPB was significantly shorter in the aortic balloon occlusion group (P < 0.001). A significantly higher proportion of patients required a secondary CPB in the conventional group compared with the aortic balloon occlusion group (P = 0.006).
Univariate comparisons showed that the 30-day mortality rate was similar in the aortic balloon occlusion group compared with the conventional group (4.62% versus 7.83%, respectively) (Table 3). The patients in the aortic balloon occlusion group were revived significantly earlier from anaesthesia than those in the conventional group (P < 0.001), but the total mechanical ventilation support time was not significantly different between the two groups. The ICU stay and the postoperative in-hospital stay were similar between the two groups. The aortic balloon occlusion group had significantly lower rates of prolonged mechanical ventilation (>72 h), low cardiac output syndrome, severe lung infection and hepatic injury compared with the conventional group (P < 0.05 for all comparisons). The proportion of patients with AKI was significantly lower in the aortic balloon occlusion group compared with the conventional group (P = 0.013), although the rates of dialysis were similar in the two groups.
With regard to changes in temperature during the operation, systemic cooling was started at a mean ± SD of 3.29 ± 2.38 min in the aortic balloon occlusion group and at 3.17 ± 4.47 min in the conventional group (Figure 3). The aortic balloon occlusion group reached the target nasal temperature of 28°C at a mean ± SD of 24.18 ± 6.88 min of CPB and the conventional group reached the target nasal temperature of 25°C at a mean ± SD of 33.50 ± 10.59 min of CPB, after which time lower body CA could be started if necessary. For the aortic balloon

With regard to changes in perfusion rate during the operation, Figure 4 presents the results recorded from the CPB reports. Right axillary and femoral arterial cannulations were provided from a bifurcated arterial line from one central perfusion with a controllable perfusion pressure, but the exact perfusion rate allocation between the axillary and femoral arterial cannulations was previously unknown, and this was recorded in the present study. Before the aortic arch was resected, the axillary/total perfusion rate ratio was approximately one-third. During the TAR procedure, when the aortic arch was completely resected, axillary perfusion proceeded with the innominate artery clamped and the axillary/total perfusion rate ratio was approximately one-quarter. When the TAR procedure was completed, the axillary/total ratio was approximately one-third again, similar to the value before the lower body CA started.
With regard to changes in CPB pressure during the operation, Figure 5 presents the monitored blood pressure (BP) of the left radial artery and the foot dorsal artery. The BP measured by left radial artery catheterization represents the perfusion pressure of the upper part of the body from the right axillary artery. The foot dorsal artery was catheterized on the non-cannulated side (mostly the left side) and measured the BP of the lower part of the body (Figure 1b). During aortic arch reconstruction, perfusion of the organs in the upper body was only from the right axillary artery cannulation, perfusion of the organs in the lower body was only from the right femoral artery cannulation, and all of the blood was mixed and drained back to the CPB machine for oxygenation. Figure 5b shows a greater discontinuity of the lower body perfusion gap in the conventional group compared with the aortic balloon occlusion group. Under normal conditions, the foot dorsal artery BP was higher than the radial artery BP, but when aortic dissection involved and occluded the LSCA or left external iliac artery, the monitored radial artery BP and foot dorsal artery BP would drop. The mean ± SD radial artery BP monitored before CPB was 66.22 ± 13.14 mmHg and 67.49 ± 12.01 mmHg in the aortic balloon occlusion and conventional groups, respectively, while the mean ± SD foot dorsal artery BP before CPB was 66.60 ± 13.16 mmHg and (Table 4).

The conventional surgical method was one of the significant risk factors for postoperative acute kidney injury, hepatic injury and having a conscious revival time >12 h (P < 0.05 for all three outcomes), but it was not related to the other clinical outcomes. The total CPB time was shown to be a risk factor for 30-day mortality, dialysis, prolonged mechanical ventilation (>72 h) and severe lung infection.
No risk factors for postoperative stroke or temporary paraplegia were identified in this model. Thus, all significant risk factors (OR > 1 for a positive risk factor) and all risk factors with a non-significant P-value but P < 0.1 are shown in the appropriate tables.
Patients undergoing surgery requiring CPB are at high risk of bleeding. 20 Red blood cell (RBC) transfusion during CPB was triggered if the haematocrit fell below 20% (an approximate haemoglobin threshold). RBC transfusions were inversely related to body mass, preoperative blood cell count (operative transfusion) and postoperative blood cell count (total transfusion). Since CPB destroyed all coagulation factors, the plasma transfusion requirement was positively related to body mass. Whenever body mass appeared as a risk factor in relation to RBC and platelet transfusion, the OR was <1, which suggests that the higher the body mass, the less likely RBC transfusion was to be performed. The opposite was true for plasma transfusion. For blood cell counts: (i) the higher the preoperative haemoglobin, the less operative RBC transfusion was required (haemoglobin OR < 1); (ii) the level of postoperative haemoglobin could result from less operative damage or more operative transfusion; nevertheless, the higher the postoperative haemoglobin, the less RBC transfusion was required during the in-hospital stay (postoperative haemoglobin OR < 1); (iii) the difference between pre- and postoperative platelet count was a sensitive indicator of operative damage. Thus, the higher the preoperative platelet count, the more RBC transfusion was required during the in-hospital stay (platelet count OR > 1), while the greater the postoperative platelet count, the less RBC transfusion was required during the in-hospital stay (postoperative platelet count OR < 1). These results came out in pairs; if one of the pair was removed from consideration, the other one would no longer be significant. The platelet transfusion results were almost the same as the RBC transfusion results. In particular for point (iii), the difference between pre- and postoperative haemoglobin also reflected the operative damage, and this was reflected in the platelet transfusion risk factor analysis. Some additional intraoperative clinical events that were analysed in the same model are presented in Table 7.
Discussion
Over the past 10 years of performing TAR with FET, the lower body CA time has been gradually shortening due to improvements in surgical skill and the CA temperature has been rising. 1,11,12,16,23 The current study institution uses 25-26°C as the safe standard CA setting point, which was predicated on an expected mean of 17 min of lower body CA. This routine 25-26°C was already a much higher temperature level than that which was initially reported (18-20°C). 23 For conventional TAR with FET, the CA temperature could not be higher if the lower body CA time could not be further shortened by the surgical method. Raising the CA temperature was reported to have a very limited effect on AKI. 16 The aortic balloon occlusion technique offers consistent lower body perfusion and provides a sufficient time window to perform aortic arch procedures. To ensure the safety of the procedure, the CA temperature setting point must be synchronized with an expected shorter lower body CA time as well as ensuring cerebral protection. A nasal temperature of 28°C was strictly selected as the cooling setting point to make sure the brain could tolerate the 5 min of ASCP and lower body CA. The 28°C setting point significantly shortened the cooling and rewarming times in the aortic balloon occlusion group compared with the conventional group. The temperature trajectories also showed that the nasal temperature rewarmed faster than the rectal temperature due to the uneven distribution of the arterial lines, despite the fact that the surgeons at this study institution routinely use an ice hat to slow the cerebral rewarming speed to achieve parallel upper and lower rewarming processes. Since the speed of rewarming is limited by the rectal temperature, the point when the rectal temperature had reached 35°C was defined as the end of rewarming. This was also the earliest time that CPB could be stopped, otherwise rewarming would resume despite the procedure being finished.
With the natural distribution of the perfusion rate from one single perfusion pump, the quantity of perfusion that the brain received in relation to the total perfusion rate was crucial for judging the safety of the cerebral protection during the operation, so this current study measured the precise perfusion distribution of the cerebral and systemic arterial lines. The recommended ASCP rate was 10-15 ml/kg per min and the bilateral cerebral perfusion rate was 20-30 ml/kg per min. 24,25 Before CA was commenced, the axillary perfusion accounted for one-third of the total perfusion. During lower body CA, ASCP, which was then the total flow, was continuously set at 10-15 ml/kg per min and the femoral perfusion was clamped. When aortic balloon occlusion was set up, the femoral perfusion clamp was released. If the total flow was set at 1/2-3/4 of the target full perfusion rate, the axillary perfusion rate was approximately 10-15 ml/kg per min. When the left carotid artery was reconstructed, the total flow returned to the target full perfusion with the axillary perfusion accounting for one-quarter of the total perfusion, approximately 15 ml/kg per min. When all of the surgical procedures were completed, the axillary perfusion accounted for one-third of the total perfusion, similar to the preoperative level. For normal cases, the current authors recommend a thinner cannula for the axillary artery than for the femoral artery to achieve an appropriate flow distribution. A Hoffman clamp could be used to alter the resistance difference between the two lines, for example, if rScO2 drops >15% of baseline or ASCP falls below 5 ml/kg per min.
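As a worked illustration of the flow fractions just described, the short sketch below converts a single-pump total flow into per-kilogram perfusion rates. The 70 kg patient weight and the 4.2 l/min target full flow are hypothetical values chosen only so the arithmetic lands within the ranges quoted above; they are not taken from the study.

```python
# Illustrative arithmetic only: how the axillary (cerebral) share of a single-pump
# total flow maps onto ml/kg per min. Weight and full flow are hypothetical.
def per_kg(flow_l_min: float, weight_kg: float) -> float:
    """Convert an absolute pump flow (l/min) into ml/kg per min."""
    return flow_l_min * 1000.0 / weight_kg

weight_kg = 70.0
full_flow_l_min = 4.2   # hypothetical target full perfusion rate (not from the paper)

# Before CA: the axillary line carries ~1/3 of the total flow
print(per_kg(full_flow_l_min / 3, weight_kg))                  # ~20 ml/kg/min

# After balloon occlusion: total flow at 1/2-3/4 of target, axillary share still ~1/3
for fraction in (0.5, 0.75):
    print(per_kg(full_flow_l_min * fraction / 3, weight_kg))   # ~10-15 ml/kg/min

# Left carotid reconstruction: full flow restored, axillary ~1/4 of the total
print(per_kg(full_flow_l_min / 4, weight_kg))                  # ~15 ml/kg/min
```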
The perfusion of the cerebral and systemic arterial lines was controlled by the perfusion pressure. However, perfusion would not flow to an occluded branch or even to an occluded aortic true lumen. Thus, intraoperative monitoring of the upper and lower limbs was crucial for validating the proper aortic pressure, which equalled the end-organ perfusion pressure. The arterial catheter pressure of the radial artery and the dorsalis pedis artery should be kept between 50 and 70 mmHg to ensure proper perfusion of the brain and kidneys. In the case of diagnosed or confirmed cerebral or renal arterial disease, the monitored pressure should be kept at a higher value (in the 70s mmHg) to prevent postoperative stroke and AKI.
The present study retrospectively reviewed consecutive patients that underwent TAR with FET to study the effect of the aortic balloon occlusion technique. The present study had several limitations. First, it only presented data from within a recent time period (i.e. since the invention of the technique in August 2017). All consecutive data from this period were used to perform the multivariate analysis, which made the findings of the multivariate analysis more convincing; the findings of the univariate comparisons were less convincing because they contained bias. As a result, the number of patients was sufficient to identify risk factors for many, but not all, of the clinical outcomes. For example, the study was not able to identify risk factors for postoperative stroke or paraplegia. Secondly, the study lacked long-term follow-up because it focused on the early postoperative outcomes only.
In conclusion, the aortic balloon occlusion technique achieved significantly lower rates of postoperative AKI and hepatic injury as well as less need for RBC transfusion, but these changes did not translate into significantly less dialysis or lower 30-day mortality. In our opinion, this technique remains highly recommended for patients with predisposing risk factors in order to prevent postoperative complications. The aortic balloon occlusion technique is not only very friendly for new surgeons who are adopting the TAR with FET procedure, but it is also suitable for experienced surgeons who are performing TAR with FET on more severely ill patients. Balancing and maintaining sufficient cerebral and lower body perfusion is crucial during the CPB procedure. Continued research into CPB management of the aortic balloon occlusion technique should be directed toward determining the precise temperature, CPB flow and pressure that optimize intraoperative care.
Recent Advances on Nanomaterials to COVID‐19 Management: A Systematic Review on Antiviral/Virucidal Agents and Mechanisms of SARS‐CoV‐2 Inhibition/Inactivation
Abstract The current pandemic of coronavirus disease 2019 (COVID‐19) is recognized as a public health emergency of worldwide concern. Nanomaterials can be effectively used to detect, capture/inactivate or inhibit coronavirus cell entry/replication in the human host cell, preventing infection. Their potential for nanovaccines, immunoengineering, diagnosis, repurposing medication, and disinfectant surfaces targeting the novel coronavirus (SARS‐CoV‐2) is highlighted. In this systematic review the aim is to present an unbiased view of which and how nanomaterials can reduce the spread of COVID‐19. Herein, the focus is on SARS‐CoV‐2, analyzing 46 articles retrieved before December 31, 2020. The interface between nanomaterials is described, and the main mechanisms to inhibit SARS‐CoV‐2 pathogenesis and viral inactivation are also discussed. Nanocarbons, biopolymeric, copper, and silver nanoparticles are potential antiviral and virucidal agents toward self‐cleaning and reusable filter media and surfaces (e.g., facial masks), drug administration, vaccines, and immunodiagnostic assays. Trends in toxicology research and safety tests can help fill the main gaps in the literature and overcome health surveillance's challenges. Phytochemicals delivery by nanocarriers also stand out as candidates to target and bio‐friendly therapy. Nanocellulose might fill in the gaps. Future research using nanomaterials targeting novel therapies/prophylaxis measures to COVID‐19 and future outbreaks is discussed.
repurposing antimicrobial drugs, [35][36][37] including therapeutic approaches against viruses [38][39][40] and coronaviruses (CoVs). [41,42] However, only after the COVID-19 emergence has a considerable number of papers been published on nanomaterials-based platforms and coronaviruses, although the first studies began after the first coronavirus pandemic, the 2002-2004 severe acute respiratory syndrome (SARS) outbreak. Figure 1 shows the number of publications retrieved in a simple search in some databases related to the topics "nano" and "coronavirus" between January 1, 2004 and December 31, 2020.
Since several studies are still ongoing, some overviews have discussed nanomaterials' potential as antiviral candidates for COVID-19, but most took into account studies on other CoVs. [43][44][45] A meta-analysis study has recently demonstrated the overall efficacy of nanoscale materials against other coronaviruses that appeared before the SARS-CoV-2 virus. [45] The interactions between several viruses and graphene have prospected this nanostructure as a future possibility against COVID-19, [46,47] for example as disinfectants and antiviral coatings for personal protective equipment (PPE) for health workers. [46] Since copper (Cu)-based surfaces have also inactivated coronaviruses such as human coronavirus 229E (HCoV-229E) [48] and SARS-CoV-2 in a short time compared to other surface materials, [49] nanostructures containing copper, silver, and zinc have also been prospected to inactivate SARS-CoV-2 and manage COVID-19. [10,50,51] Moreover, due to its potent antiviral activity, the enrichment of plasma copper levels was recently hypothesized to boost innate and adaptive human immunity to prevent and treat COVID-19. [52] The primary aim of this systematic review (SR) is i) to identify, through a rigorous literature search with a predefined strategy protocol, research using nanomaterials-based approaches against the Coronaviridae family and the latest findings until December 31, 2020 toward the novel SARS-CoV-2 virus. Another objective of our study is ii) to critically analyze nanomaterials' role as both a primary strategy (when the nanomaterials interact directly with the virus) and a secondary approach (when the nanomaterials improve the efficacy of another antiviral agent) against coronaviruses. Thus, this review also addresses the main findings on the anti-coronavirus activity promoted by nanocarriers, to identify potential candidates with minimum off-target effects for COVID-19 control. Some challenges and knowledge gaps remain in the design of antiviral nanoagents whose purposes are therapy, disinfectants, and antiviral surfaces (e.g., masks and coatings for public use). [53,54] Nanomaterials' modes of action to inhibit infection by, or inactivate, SARS-CoV-2 have yet to be understood. Therefore, our study also contributes to understanding the interactions, types, and mechanisms of action between nanomaterials and SARS-CoV-2 inside and outside host cells.
Systematic Search Methods
This SR recovered and assessed all the data available in literature databases about nanomaterials' antiviral potential to manage the COVID-19 pandemic, from other coronaviruses to the novel SARS-CoV-2. To improve our SR quality, we followed a four-phase flow diagram and the Preferred Reporting Items for Systematic Review and Meta-Analyses (PRISMA) statement guidelines [55] supported by the SR management StArt tool. [56]
Research Question
The focus question follows the problem, intervention, comparison, outcome, and study type (PICOS) strategy. As study types, we considered theoretical, hypothetical, computational, pre-clinical, and clinical research. The research questions were as follows: Which nanoscale structures showed anti-coronavirus potential for COVID-19 control? Which nanocarriers have been reported to improve the use of antiviral agents against coronaviruses? What are the modes of action to inhibit viral infection and the types of interactions between the viral surface and the nanosystems hypothesized/proposed for viral inactivation?
Search Sources and String Definition
Our search protocol used search strings constructed and adapted for six electronic databases: Web of Science, PubMed, Embase, Scopus, SciFinder, and Science Direct. In addition, we performed additional searching on the reference lists of relevant articles/reviews identified through the initial screening. Papers were recovered from the search sources using a search string that summarizes the research questions. The string was based on pre-determined groups of keywords related to coronaviruses, nanoscale-based structures, and their use as antiviral/virucidal agents, as follows:
• Search component 1: nanoparticle* OR nanomaterial* OR nanostructure* OR CNT OR graphene OR graph* OR "silver nano particle" OR AgNp OR liposome OR "gold nanoparticles" OR silica OR "self-assembly" OR nanocellulose OR hydrogel OR "nanoparticle-based RNA" OR "copper nanoparticle"
• Search component 2: "SARS-CoV-2" OR CoV OR "nCoV-2019" OR "COVID-19" OR "enveloped viruses" OR virus* OR coronavirus OR Coronaviridae OR "SARS-CoV" OR SARS
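As an illustration of how the two keyword groups combine into a boolean query, here is a minimal sketch; the exact field tags and truncation syntax differ between the six databases, so the assembled string below is a generic assumption rather than the literal query submitted to each engine.

```python
# Generic sketch: combine the two search components above into one boolean query.
nano_terms = [
    'nanoparticle*', 'nanomaterial*', 'nanostructure*', 'CNT', 'graphene',
    'graph*', '"silver nano particle"', 'AgNp', 'liposome',
    '"gold nanoparticles"', 'silica', '"self-assembly"', 'nanocellulose',
    'hydrogel', '"nanoparticle-based RNA"', '"copper nanoparticle"',
]
virus_terms = [
    '"SARS-CoV-2"', 'CoV', '"nCoV-2019"', '"COVID-19"', '"enveloped viruses"',
    'virus*', 'coronavirus', 'Coronaviridae', '"SARS-CoV"', 'SARS',
]

def or_block(terms):
    """Join a keyword group with OR and wrap it in parentheses."""
    return "(" + " OR ".join(terms) + ")"

query = or_block(nano_terms) + " AND " + or_block(virus_terms)
print(query)
```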
Search Strategy, Selection Process, and Study Selection Criteria
The advanced search in the databases was carried out considering research articles published in English between 2000 and 2020. The search started on July 14, 2020 and was updated from August 3 to December 31, 2020 to cover the maximum number of peer-reviewed papers available, focusing directly on the novel SARS-CoV-2. The results of the screening were uploaded to the StArt tool. The authors first conducted the preliminary selection and extraction of data independently. Table 1 summarizes the inclusion/exclusion criteria adopted for the eligibility of studies. Further details can be seen in our previous work. [57]
Data Extraction Process
For the data extraction of articles included in the qualitative synthesis, the authors independently extracted and summarized the following information: nanomaterial type, size and shape, preparation strategy, coronavirus species, significant results, type of interaction between nanomaterial and coronavirus, mechanism of action, and potential application field.
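A minimal sketch of how such an extraction record could be structured is shown below; the field names simply mirror the items listed above, and the example values paraphrase the Ag nanomaterial/TGEV study discussed later in this review, so neither reflects the authors' actual extraction form.

```python
# Hypothetical structured record for the data-extraction step described above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExtractionRecord:
    nanomaterial_type: str
    size_and_shape: str
    preparation_strategy: str
    coronavirus_species: str
    significant_results: str
    interaction_types: List[str] = field(default_factory=list)
    mechanism_of_action: str = ""
    potential_application: str = ""

# Example entry, paraphrased from one of the retrieved studies
record = ExtractionRecord(
    nanomaterial_type="Ag nanowires / Ag nanoparticles",
    size_and_shape="nanowires of 20-400 nm; spherical nanoparticles",
    preparation_strategy="(not reported here)",
    coronavirus_species="TGEV",
    significant_results="inhibitory effect on viral infection and replication",
    interaction_types=["direct interaction with the spike glycoprotein"],
    mechanism_of_action="interference with viral attachment and entry",
    potential_application="antiviral agent",
)
print(record)
```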
Sources of Bias
We did not consider the draft/final form of papers deposited directly online at preprint servers that are not peer reviewed. Besides, the eligibility criteria and the impact of missing data might be considered sources of bias.
The next section presents the review of our main findings on coronavirus and nanomaterial types, focusing on antiviral properties, as well as the chemical interfaces and cellular mechanisms associated with them.
Nanomaterials against Causative Agents of Animal Coronaviruses
Several emerging viruses have been known to be transmitted from animals to humans. The scientific community argues that the most probable cause of the COVID-19 pandemic that started in the South China seafood market (Wuhan, Hubei) is zoonotic spillover. [1,2] Although it is still unclear which animals transmitted COVID-19 to humans, recent findings show that SARS-CoV-2 has 96% genomic similarity with a bat coronavirus. [101] These facts motivated us to select and discuss, in this section, articles on the anti-coronavirus properties of nanomaterials against animal coronaviruses, given the possibility of extending these strategies to the SARS-CoV-2 virus to overcome the challenges of COVID-19 control.
The types of nanomaterials found included nanoparticles, nanowires, nanostars, nanospheres, nanocapsules, and nanoclusters (Table 2). Most of them are inorganic, polymeric, and carbon-based nanomaterials with varied morphology. These include spherical particles, rods, layers, triangular star shapes, and spherical particles capped with a corona-like structure. The average size ranged from 5 to 140 nm.
The high energy surfaces of synthetic nanoparticles were exploited to induce protein corona formation in sVLPs based on gold nanoparticles incubated with IBV spike protein, obtained from recombinant protein expression of IBV, as model antigens. [59] The core-shell morphology binds bulbous surface projections to mimic natural viral particles and can be applied as a vaccine against AvCoV-IBV with an improved humoral and cell-mediated immune response (Figure 4A). Compared to the free antigen protein, the mechanism of the potent immune response of sVLPs against AvCoV-IBV was attributed to primary lymphatic delivery and the multivalent antigen display, the higher antibody titers, and the milder infection-like symptoms. [59] Although this approach presented poor control of polydispersity (i.e., Ag nanowires varied from 20 to 400 nm), direct contact between Ag nanowires, Ag nanoparticles, and the TGEV virus (a high mortality virus in seronegative suckling piglets) caused an inhibitory effect on viral infection and replication because of interference with viral infection during attachment and entry (Figure 4B). [58] The mechanism of action was studied and correlated with i) inhibition by Ag nanomaterials of host cell apoptosis proceeding through the p38/mitochondria-caspase-3 signaling route; ii) inhibition of the infection initiation through the direct interaction between Ag nanomaterials and the spike glycoprotein; and iii) the possibility that Ag nanomaterials alter the structure of the surface proteins of TGEV/PEDV, inhibiting their recognition by and adhesion to the cellular receptor porcine aminopeptidase N. [58] The insights provided by Ag nanomaterials into the antiviral therapy of coronaviruses inspired the development of a nanocomposite by anchoring spherical silver nanoparticles (AgNPs) onto thin layers of GO sheets as an antiviral against the lipid-enveloped FCoV, with 26% viral inhibition at the minimum concentration, at a non-cytotoxic level. [60] Regarding the mechanism, in a general way, it was suggested that the unique structure of graphene oxide contributes as follows (Figure 4C): i) negatively charged GO might interact with positively charged lipid membranes, inducing their rupture; ii) then, the exposed lipid tail could strongly bind with the aromatic plane of the GO sheets; and finally iii) this GO sheet-lipid membrane interaction can attract more lipid membranes. [60,102] More recently, core-shell bimetallic nanorods were obtained by deposition of a silver shell on gold nanorods (Au-AgNRs), followed by release of Ag+ and exposure of Ag nanowires after endogenous reactive oxygen species (ROS) stimulation. [69] The system inhibited PEDV replication by a multisite mechanism that included inhibition of PEDV entry.
Table 2. Nanomaterials with anti-coronavirus activity against animal coronaviruses found in the retrieved papers. (Table columns: Nanomaterial, Average size, Shape, Strategy, Coronavirus, Application, Ref.)
Heparan sulfate (HS) proteoglycans are often used as a cellular attachment receptor that mediates the adhesion and internalization of viral infections. Inspired by this mechanism, and with a view to mimicking the cell surface receptor HS, its analog mercaptoethane sulfonate (MES) was employed as the chemical modifier to synthesize Te/BSA nanostars, with the advantage of upregulating their high antiviral activity against PEDV. [65] The mechanism of this inorganic nanomaterial was correlated with inhibition of ROS generation, highlighting its potential as a broad-spectrum antiviral agent. [65] Additionally, a meta-analysis of a series of nanoscale materials in the publications retrieved demonstrated a positive inhibition efficacy against animal coronaviruses in vitro and in vivo. [78] We retrieved another six original research articles with nanomaterials and natural compounds for treatment purposes against animal coronaviruses, which we decided to discuss later (see Section 4.4). Table 3 shows the articles retrieved in this SR targeting the antiviral properties of CQDs, gold nanoparticles, titanium dioxide nanoparticles, and biopolymeric nanoparticles against HCoV-NL63, HCoV-OC43, SARS-CoV, and MERS-CoV.
Nanomaterials against Causative Agents of Human Coronaviruses Identified before SARS-CoV-2
Cationic chitosan nanospheres crosslinked by natural genipin (HTCC) presented high affinity for the spike protein of coronaviruses and showed a more significant adsorbent effect against HCoV-NL63 than against HCoV-OC43. [61] These biopolymeric nanospheres were appointed as convenient candidates to remove coronaviruses from biological matrices and water with higher selectivity. Among other details that will be discussed later (see Section 4.4), the tremendous adsorptive capacity was attributed to HTCC-virus electrostatic interactions and the high ionic strength caused by the HTCC cationization process.
Three types of CQDs containing boronic acid groups were synthesized; these groups proved to be essential, in a dose-dependent manner, to the higher antiviral activity presented against the human coronavirus HCoV-229E-Luc. [71] These nanomaterials were suggested as potential candidates to replace the standard ribavirin/α-interferon (α-IFN), with fewer side effects. Concerning the mechanism of action, it was suggested that CQDs act in multiple steps: i) at the initial stages of viral infection, inhibiting viral entry, probably due to the inhibition of the interactions between the spike protein receptor and the host cell membrane, caused by interactions between functional groups of the CQDs and viral entry receptors; and ii) an equally extensive inhibition activity at the viral replication step. [71] Aiming to develop an antiviral drug and vaccine against MERS-CoV, gold nanorods [72] and biopolymeric hollow nanoparticles made from poly(lactic-co-glycolic acid) (PLGA) with core-shell morphology [73] were studied based on HR1 peptides and viromimetic STING agonists, respectively. In the first study, a series of HR1 peptides was developed by molecular docking to inhibit HR1/HR2-mediated membrane fusion between MERS-CoV and host cells. [72] Further, the nanoparticle-based vaccine inhibited viral infection, was immunogenic, and prevented the induction of undesirable lung disease in immunized human DPP4 enzyme transgenic mice. [73] Recently, preliminary results showed a virucidal effect of TNP against HCoV-NL63 and HCoV-OC43, using photoactive TNP deposited on glass coverslips and UVC radiation. The authors also mentioned the potential to control the spread of COVID-19 by self-cleaning surfaces, and hence they are extending the concept to SARS-CoV-2 in work currently underway. [70]
Table 3. Nanomaterials with anti-coronavirus activity as potential candidates against human coronaviruses identified before SARS-CoV-2 found in the retrieved papers. (Recovered table footnotes include: angiotensin-converting enzyme inhibitor 1 or 2; lisinopril covalently grafted onto poly(lactic-co-glycolic acid) nanoparticles; RNA-dependent RNA polymerase.)
Table 4 displays computational approaches recently published repurposing medications, therapies, and antiviral and immunologic agents to treat COVID-19 patients. By molecular docking, the United States Food and Drug Administration (FDA)-approved magnetic nanoparticles (Fe2O3 and Fe3O4) are repurposed to treat and control COVID-19, based on their efficient hydrophobic interactions and hydrogen bonding with the chimeric S-receptor-binding domain (S1-RBD) of SARS-CoV-2, forming a more stable complex [75] (Figure 5A). Likewise, novel gold nanoparticles functionalized with peptides formed a more stable complex with the RBD than angiotensin-converting enzyme 2 (ACE2) did. [77]
Nanomaterials and the Novel SARS-CoV-2: Recent Advances to Reduce the Spread of COVID-19
A non-toxic approach with SiNPs showed that SiNPs-encapsulated PolyP (used to stabilize PolyP against alkaline phosphatase) inhibited the binding of ACE2 to the S1-RBD of SARS-CoV-2 at physiological concentration, due to interactions between the PolyP nanoparticles and amino acids on the surface of the S1-RBD. [76] Moreover, this strategy was suggested to prevent and treat SARS-CoV-2 infection in the oropharynx and to boost the immune system of thrombocytopenic COVID-19 patients. [76] Another study reported the potential of synthesized nano-formazans as antiviral agents to manage COVID-19 infection through docking simulations in a physiological solution: the results showed that formazan analogs could bind the active site of the 3CL protease of SARS-CoV-2, inhibiting viral replication. [78] Polymer nanoparticles were used to optimize Remdesivir and repurpose it as an antiviral therapy associated with lisinopril (a molecule with a therapeutic, lung-protective effect through ACE), in the form of Remdesivir-loaded lisinopril-functionalized PLGA. [79] Thus, the potential of Remdesivir-optimized nanoparticles was reported in a docking study, which confirmed interactions between lisinopril and ACE, and the binding of Remdesivir to RNA-dependent RNA polymerase (RdRp), an enzyme involved in replication and transcription of the SARS-CoV-2 genome. [79] Therefore, all these nanotechnology approaches were predicted to interfere with viral adhesion to human host cell receptors and with viral replication, thus inhibiting the viral infection. Table 5 displays 19 recently published research articles focusing on the novel coronavirus through theoretical, hypothetical, and applied research based on nanomaterial approaches to manage COVID-19. Among these were metal nanoparticles, carbon nanomaterials (e.g., carbon nanotubes, graphene layers, and graphene oxide), inorganic/organic polymeric nanoparticles, nanolipids, and nanodecoys proposed as antiviral agents and surfaces, virucidal agents, drug delivery systems, vaccines, immunological agents, and viral detection platforms.
Carbon nanotubes were present in an innovative theoretical proposal to develop vaccines and drugs for COVID-19, exploiting coronavirus physical-chemical properties and constructing a targeted acidification environment (Figure 5B). MWCNTs with targeting functions, RNA lyase- and acid-functionalized and combined with photothermal effects, were hypothesized to block viral infection and replication routes in the host cells. [83] Regarding modes of action, acidizing the cell environment, generating a photodynamic thermal effect by irradiation of the functionalized carbon nanotubes, and smart drug delivery of a viral RNA lyase for RNA destruction were proposed. [83] Free-energy calculations by DFT were performed on carbon-based nanomaterials and were proposed to develop nanodevices useful in COVID-19 and future pandemics management. [82,100] Exploiting the harmfulness of ROS molecules to coronaviruses, nanofilters based on metal-decorated (Pt, Cu, Rh, and Ru) SWCNTs, combined with H2O2, were presented as an effective platform for future experiments on SARS-CoV-2 absorption. [100] The nanocomposite could be used for elastomeric respirators and self-cleaning surfaces in hospitals during future pandemics. [100] Further experiments combined electrostatic composite films (coagulated GO-PMMA) with water aerosols to create a nanocomposite surface that generated a negative voltage from water evaporation and, hence, an electrostatic bond with the coronavirus spike protein. [82] In the absence of the novel coronavirus to test, the study used model brewer's yeast cells. The mechanism appears to be linked to electrostatic interactions between the nanosystems (due to their nanometric size and chemical character) and the negatively charged spike proteins of coronaviruses. DFT calculations showed a generous capacity of metal-decorated SWCNTs for peroxide and hydroxyl radical capture, with a very long recovery time. [100] Once in contact, the functional groups of the GO layers and water molecules interact, thus generating an extra electric field induced through the heterostructure formation, with an enhanced dipolar redistribution at the interface. [82] The potential of Cu to neutralize infectious viruses and induce viral killing mediated by ROS [52] has been hypothesized to boost the immune system against COVID-19 by Cu supplementation [52] and has motivated researchers to formulate copper into nanodevices as antiviral, self-cleaning, and reusable filter media/surfaces to prevent the spread of COVID-19. [81,92,94,95]
Table 5 (partial listing): nanomaterial | shape | strategy | application [ref.]
GO | layers | GO-PMMA film | PPE design [82]
MWCNTs | multi-walled cylinders | acidizing and RNA lyase-modified carbon nanotubes | vaccines and drugs [83]
Copper | (not reported) | enrichment of plasma copper levels | immunotherapy [52]
Silica-copper nanoparticles | spherical | silica-copper/polymer (silicone or epoxy) nanocoating | superhydrophobic self-cleaning surfaces [81]
CuNPs/GO nanocomposite | nanosheets, nanofibers | electrospinning: multilayers of CuNPs/GO-PLA and CuNPs/GO-CA nanofibers | respirator filter for an antiviral face mask [92]
Cu-ZIF-8/copolymer nanocapsule | (entry truncated in the source)
Graphene sheets | layers | PBASE-modified graphene sheets, FET sensor with graphene sheets conjugated to SARS-CoV-2 spike antibody | immunodetection [84]
Polymeric nanoparticles | spherical | bioinspired DNase-I-coated melanin-like nanospheres: recombinant DNase-I/PEG coating | therapy for ARDS or sepsis in severe COVID-19 patients [85]
Polymeric nanoparticles | spherical, 220 nm | DNase-I-coated polydopamine-PEG nanoparticles (exogenous administration) | therapy for ARDS or sepsis in severe COVID-19 patients [86]
Polymeric nanoparticles | (not reported) | ivermectin delivery by (PLGA-b-PEG-Mal) copolymer nanoparticles (orally administrable) | antiviral drug [87]
Nanostructured lipid carriers | spherical | pulmonary delivery of salinomycin by nanostructured lipid carriers | drug delivery [88]
Manganese nanodepot | (not reported) | droplet-confined nanoprecipitation in a water-in-oil micro-emulsion plus thin-film dispersion method | vaccine adjuvant [89]
Decoy nanoparticles | (not reported) | fusing genetically engineered cell membrane nanovesicles (293T/ACE2 and THP-1 cells) | therapeutic vaccines [90]
Table 5 abbreviations (as given in the table footnotes): CuNPs, copper nanoparticles; GO, graphene oxide; PLA, polylactide (as matrix); CA, cellulose acetate (as matrix); Cu-ZIF-8, Cu2+-doped zeolitic imidazolate framework-8; PPE, personal protection equipment; PMMA, poly(methyl methacrylate) (as matrix); MWCNTs, multi-walled carbon nanotubes; PBASE, 1-pyrenebutyric acid N-hydroxysuccinimide ester; FET, field-effect transistor; DNase-I, deoxyribonuclease I; PEG, poly(ethylene glycol); ARDS, acute respiratory distress syndrome; PLGA-b-PEG-Mal, poly(lactide-co-glycolide)-block-poly(ethylene glycol)-maleimide nanoparticles. Other footnotes expand to copper nanowires, a PEG-PPG-PEG block copolymer (PPG, poly(propylene glycol)), silver nanoclusters, polyvinylpyrrolidone, decanesulfonic acid, and single-walled carbon nanotubes.
A superhydrophobic coating based on silica/copper nanoparticles dispersed in silicone or epoxy polymer (flexible, superhydrophobic, and regenerative monolith surfaces) was
hypothesized as a self-cleaning surface for implementation in public and healthcare work environments to eradicate SARS-CoV-2 spread and protect against COVID-19 by a three-step strategy: i) virus encapsulation, ii) contamination suppression, and iii) virus elimination. [81] Although face masks can limit transmission, the increased demand for disposable masks also demands many resources and has raised concerns about the generation of waste. [94] Thus, some strategies using sustainable polymers have been studied to reduce transmission and the impact of waste. A filtration system for a nanofibrous respirator facial mask was made of multilayers of Cu nanoparticles/GO nanosheets dispersed in a nanofibrous matrix of biodegradable polylactic acid (PLA) or cellulose acetate (CA). [92] Interestingly, the use of thermoplastic polymers such as PLA and CA can provide a stable fit with the face anatomy. [92] Likewise, a low-cost scalable synthesis of Cu nanowires/ZIF-8 stabilized by an amphiphilic block copolymer (Pluronic F-127) in a core-shell structure was directly deposited onto a reusable face mask system and produced 55% inhibition of SARS-CoV-2 replication after 48 h at a concentration of 1 µg. [94] A dual-channel spray-assisted nanocoating hybrid of shellac/CuNP for a photoactivated antiviral facial mask with self-sterilizing capability and reusability was reported with virucidal effects. [95] Another nanotechnology-based air filter was reported: a silver nanocluster/silica composite sputtered coating applied to an FFP3 mask showed a virucidal effect and completely reduced the SARS-CoV-2 titer to zero, under the tested conditions, for the sample with the highest content of Ag.
Other research that addressed the antiviral and virucidal effect of silver discussed the interaction between coating-based colloidal AgNPs and SARS-CoV-2 by a luciferase-based pseudovirus entry assay and revealed that Ag nanomaterials potently block the viral entry step via disrupting viral integrity. [97] Furthermore, SARS-CoV-2 was wholly inactivated after 6 h of exposure to a nanostructured aluminum alloy surface obtained by a wet-etching technique. [98] Gold nanoparticles coated by decanesulfonic acid ligands inhibited the activity of authentic SARS-CoV-2 in the nanomolar range, and, contrary to most of the strategies that targeted the inhibition of SARS-CoV-2 cell entry by blocking spike protein-ACE2 receptor binding, this sulfonated nanomaterial can inhibit SARS-CoV-2 attachment by blocking spike protein-HS receptor binding and was suggested as a simply reversible and potent antiviral agent. [99] A simple, highly selective, sensitive, and rapid method for detecting the SARS-CoV-2 virus in nasopharyngeal swab samples from COVID-19 patients, without sample pre-treatment/labeling, was performed on a field-effect transistor (FET)-based biosensor using functionalized graphene sheets as a receptor [84] (Figure 5C). The sensor detected the target SARS-CoV-2 antigen protein, cultured SARS-CoV-2 virus, and SARS-CoV-2 from clinical samples. [84] The authors proposed a high dependence between the SARS-CoV-2 spike protein and its specific binding to the immobilized SARS-CoV-2 antibody, and the chemically modified graphene surface promoted this binding affinity through a pyrene backbone with an electron-withdrawing group. [84] Due to the biocompatible nature and size of nanostructured lipid carriers (NLCs), a study hypothesized the pulmonary delivery of salinomycin (SAL) carried by NLCs as a promising candidate to treat COVID-19 patients [88] (Figure 5D). The encapsulation of SAL by NLCs is a potential strategy to increase its absorption at the local site of infection due to the good aerodynamic properties of NLCs, which could be aerosolized as droplets of antiviral drugs. Besides, the hypothesis was based on evidence that SAL has the potential to prevent viral entry into the cytosol, prevent membrane fusion in a pH-dependent way, interact with the spike protein, inducing ACE2 binding, and prevent the release of viral nucleic acid into the cytoplasm. [88,103,104] Ivermectin is a clinically approved antiviral drug and was repurposed against SARS-CoV-2 using orally administrable PLGA-grafted-PEG-maleimide nanoparticles, an amphiphilic and biodegradable block copolymer system, which was capable of delivering a more potent therapeutic dose. [87] The system demonstrated potential as a therapeutic drug for COVID-19 by multisite inhibition, decreasing viral uptake and transmission through i) inhibition of the viral spike protein level and its entry rate by downregulation of ACE2 expression, and ii) possibly, inhibition of nuclear transport activities mediated by proteins (e.g., the importin α/β1 heterodimer). [87] A recent study hypothesized that excessive neutrophil extracellular traps (NETs) and extracellular DNAs (eDNAs) could activate NETosis, a neutrophil-specific programmed cell death that might be associated with COVID-19 pathogenesis. [85,105] Nowadays, there are no FDA-approved antiviral medications that can effectively suppress SARS-CoV-2-mediated neutrophil activities, cytokine storm, acute respiratory distress syndrome (ARDS), and sepsis, thus promoting widespread patient improvement. [85,86]
Therefore, different strategies using polymeric nanoparticles to deliver antiviral agents have been evaluated for drug repurposing. [85][86][87] An in vivo study with a septic mouse model showed the potential of bioinspired DNase-I-coated melanin-like nanospheres using PEG to reduce neutrophil counts and modulate sepsis-associated NETosis dysregulation in the plasma of COVID-19 patients, alleviating inflammation and mortality. [85] Further research showed that exogenous administration of a long-acting DNase-1, a recombinant DNase-1-coated polydopamine-PEG nanoparticle formulation, can reduce SARS-CoV-2-mediated neutrophil activities and cytokine storm as a potential treatment for COVID-19-related illnesses. [86] Manganese nanodepots (nanoMn) and decoy nanoparticles were proposed as simple, safe, and robust vaccine adjuvants and antiviral agents to manage COVID-19. [89,90] Although manganese can induce the IFN response, a central host response against viruses, there is a challenge in its applicability due to non-specific distribution and neurotoxicity. [89] Thus, manganese was repurposed in nanoMn with enhanced cell uptake and persistent release of Mn2+ in a pH-sensitive manner; it boosted the IFN response, showed broad-spectrum in vitro and in vivo antiviral effects and macrophage polarization, no neuroinflammation effects were observed, and nanoMn acted as a vaccine adjuvant to boost host adaptive immunity. [89] Alternatively, a decoy nanoparticle made by genetic engineering can protect host cells against COVID-19 infection by a two-step neutralization approach: i) virus neutralization in the first step, followed by ii) inflammatory cytokine neutralization in the second step (e.g., interleukin 6 and granulocyte-macrophage colony-stimulating factor). [90] Thus, the authors reported stabilization of ACE2 expression; protection of the host cell against infection by competition between the nanodecoy and the host cell for SARS-CoV-2 binding; and suppression of immune disorder and lung injury in an acute pneumonia mouse model by the nanodecoy in an in vivo assay. [90]
Natural Bioactive Compounds and Nanoparticles against Coronavirus and SARS-CoV-2: A Promising, Healthy, and Bio-Friendly Strategy for Drug Delivery
Eleven of the 45 articles retrieved highlighted strategies based on nanomaterials as a secondary approach to enhance the use of natural compounds, improving their bioavailability, solubility, and antiviral activity. Phenolic compounds, glycosides, terpenes, saponins, peptides, and proteins can play an essential role against viruses from the Coronaviridae family. Of these, we identified genipin, diphyllin, curcumin, glutathione, glycyrrhizic acids, PolyP nanoparticles, and griffithsin. [61,62,64,[66][67][68] Expert opinion has reported that morphological characteristics (size and shape) are a fundamental aspect of nanoparticle design, since they are directly connected with pharmacokinetics and cell uptake for drug delivery purposes, especially for carriers intended to have minimal side effects. [106] In the case of drug carriers in the blood or lymphatic channels, the geometry and aspect ratio of the shape play a crucial role in how nanoparticles will be transported. From Table 6, we note that most of the nanoparticles studied are spherical, with average sizes varying from 1.5 to 11.4 nm in the attempts reported over the last 3 years. [63,64,[66][67][68]91,93] Figure 6 shows a schematic representation of some of the reviewed strategies on active phytochemical (a broad-spectrum "host-targeted" antiviral)/nanoparticle systems as promising candidates against coronaviruses, acting as virus adsorbents, antiviral agents, and immunomodulatory drugs. The amide coupling reaction is often used in medicinal chemistry [107] to generate novel compounds for antiviral drug discovery. [19,108] Carboxyl quantum dots, QD650, have carboxylic acid terminal groups that can efficiently and covalently conjugate with amine groups of biomolecules (e.g., proteins, nucleic acids), forming an amide bond through carbodiimide-mediated coupling reactions. Herein, we reviewed a nanostructure approach against the beta-coronavirus SARS-CoV to rapidly screen natural inhibitors of the SARS-CoV nucleocapsid (N) protein, based on an optical method using a carboxyl quantum dot-conjugated RNA nucleotide system, with applicability for imaging analysis on a biochip [74] (Figure 6A). The system was used as a platform to screen natural inhibitors based on several polyphenols, among which (-)-catechin gallate and (-)-gallocatechin gallate, obtained from green tea (Camellia sinensis), presented the highest inhibition effect against the SARS-CoV N protein (more than 40% inhibition at 0.05 µg L−1). [74] Vacuolar-ATPase (v-ATPase) dysregulation has already been discussed and linked with drug resistance in viral infections [109] and cancer therapies. [110] Hence, v-ATPase blocking could be an attractive target for antiviral approaches. A nanoformulation with diphyllin (a lignan obtained from the plant Cleistanthus collinus) encapsulated by biocompatible block copolymer nanocapsules of PEG-PLGA exhibited a potent inhibitory effect against FCoV in fcwf-4 cells [62] (Figure 6B). The strategy was studied using a nanoparticulated system with diphyllin as a novel v-ATPase blocker, as an alternative to bafilomycin A1, given the latter's compromised clinical applicability due to its high cytotoxicity, low water solubility, and potent off-target effect. [109] Besides that, genipin from Gardenia jasminoides was used as a crosslinker agent for chitosan nano/microspheres (HTCC), which were further cationized and studied as adsorbents (in aqueous suspension medium) with high selectivity against human and animal coronaviruses (HCoV-NL63 and MHV) [61] (Figure 6C).
An advantage of this attractive water-soluble nanoparticle strategy with genipin and chitosan as an antiviral is that both are non-toxic materials.
Curcumin pyrolysis was applied to prepare uniform and stable CDs with abundant hydrophilic groups; [68] these, together with glycyrrhizic acid-based carbon dots, [67] glutathione-capped Ag2S nanoclusters, [64] and glutathione-modified zinc-sulfide nanoparticles, [66] were developed with high effectiveness against the PEDV coronavirus. Curcumin is a polyphenol obtained from Curcuma longa that can play a vital role in antiviral activity due to its phenolic hydroxyl groups. Since low bioavailability impairs its therapeutic application, curcumin was encapsulated in chitosan nanoparticles by the ionic gelation technique, with improved in vitro antiviral activity and bioavailability by oral administration in cats infected with FIPV. [63] PolyP, a non-toxic and non-immunogenic inorganic polymer derived from marine bacteria, was added into a mucin/collagen-based hydrogel to simulate the mucus of the nasopharynx and bronchial epithelium on human alveolar basal epithelial A549 cells. [91] This strategy was suggested to stimulate an innate antiviral response through an improved mucin barrier (rich in antimicrobial proteins) and as a potential antiviral agent by exerting protection against SARS-CoV-2 cell attachment. [91] Griffithsin is a small lectin derived from red algae (Griffithsia spp.) with broad-spectrum antiviral activity against coronaviruses. [111,112] Thus, poly(2-hydroxyethyl methacrylate) hydrogel lenses with nanoparticles releasing griffithsin were hypothesized as therapeutic contact lenses to protect the ocular surface of healthcare workers as extra protection in daily practice against COVID-19 infection. [93] The synergistic potential between antimalarial drugs such as artemisinin, artemether, and artesunate (natural sesquiterpene lactones from the wormwood plant Artemisia annua), coated with antiviral AgNPs, was optimized in a structure by molecular dynamics and suggested to improve the permeability and retention time of these drugs, enhancing the therapeutic action against malaria and COVID-19. [80] Related to the mechanism of action, green tea polyphenols such as epicatechin gallates were reported as potent antiviral entry inhibitors capable of blocking the binding of the host CD4 cell glycoprotein with glycoprotein gp120 of HIV-1 and, hence, preventing the viral infection. [113] The antiviral activity of green tea catechins against other enveloped viruses was suggested in previous studies. It was mainly attributed to their hydroxyl, galloyl, and pyrogallol groups, [114] as well as phenolic OH groups on the B-ring, [115] which can act at several stages of viral entry: [113] i) affecting the expression of viral antigens or ii) inhibiting genome replication. The adsorption properties of HTCC were explained in terms of i) electrostatic Coulomb attraction of the genipin-chitosan derivative nanospheres with the spike protein of HCoV-NL63, which can form a protein-polymer complex, resulting in virus neutralization; and ii) the high ionic strength promoted by chitosan cationization. [61] In a dose-dependent way, diphyllin inhibited endosomal acidification, affecting viral cellular susceptibility, and inhibited the downstream coronavirus replication. [62]
Furthermore, multisite inhibition mechanisms were found for all approaches with GSH, curcumin, and glycyrrhizin tested against PEDV: i) inhibition of viral entry by changing the structure of the viral surface protein, preventing viral RNA synthesis and budding; ii) suppression of ROS generation; and iii) suppression of viral reproduction by activation of IFN-stimulating genes and the expression of pro-inflammatory cytokines. Likewise, it was hypothesized that griffithsin has the capacity to block viral entry [93] due to its high affinity for glycoprotein sites of MERS-CoV [111] and SARS-CoV. [112] PolyP has already been suggested to block SARS-CoV-2 cell attachment by blocking the binding of the RBD of the SARS-CoV-2 spike protein to the ACE2 cell receptor in vitro. [76,91] Density functional theory (DFT) calculations predicted the highest affinity of the antimalarial drugs for AgNP surfaces in the order artesunate > artemisinin > artemether, due to the more negative charges on the O6 atom of artesunate, the O5 atom of artemisinin, and the O3 atom of artemether. [80]
Conclusions and Outlook
In this systematic review, the papers retrieved and analyzed show silver, copper, and polymer-based nanomaterials as the primary candidates with efficient anti-SARS-CoV-2 properties. The strong virucidal potential of copper and silver nanomaterials with varied morphology and fabrication forms was prospected to make reusable and self-cleaning surfaces (e.g., respirator facial masks and coating surfaces) in healthcare work environments to reduce the spread of COVID-19. Biopolymeric and biodegradable polymers in nanoformulations have been prospected for drug delivery against SARS-CoV-2, and nanodecoys and manganese nanoparticles were suggested as simple, safe, and robust technologies for vaccine adjuvants or antiviral agents, since they increased the immune response in in vivo assays. Virucidal surfaces and adsorbents to capture/inactivate SARS-CoV-2 and other beta-CoVs were also proposed for coronavirus inactivation. Cutting-edge nanobiosensor technologies, nanostructured carriers for pulmonary drug delivery, sVLPs, and polymeric hydrogels were pointed out as potential agents for antiviral drugs, therapeutic vaccines, and immuno-based therapies. A study with real clinical samples from COVID-19 patients demonstrated the use of graphene for an immunodiagnostic assay with high specificity and celerity, potentially useful for serological tests and detection of infection.
Molecular docking and dynamics simulations are powerful tools to study receptor-ligand binding affinity in drug discovery using nanomaterials. Therefore, all the nanotechnologies studied by computational tools were predicted to interfere with SARS-CoV-2 adhesion to human host cell receptors and with viral replication, thus inhibiting the viral infection.
Primary and secondary metabolites of plants and microorganisms (e.g., phenolics, terpenes, glycosides, polysaccharides, and PolyP), well-known antimicrobial compounds, delivered by spherical nanoparticles, were a bio-friendly and potential strategy to produce antiviral therapies for coronaviruses, since nanomaterials enhanced their solubility, bioavailability, and antiviral activities. The phenolic phytochemicals acted as multisite inhibitors at several stages of viral entry, affecting the expression of viral antigens or inhibiting genome replication. Biodegradable polymeric nanoparticles are non-toxic and biocompatible options for drug delivery. Due to their high surface energy, the immense possibility of functionalization, and their strong amino acid-binding character, silver and carbon-based nanomaterials showed high potential to be used in different segments to control the spread of COVID-19. Thus, they played both a fundamental role and a secondary action (as carriers and as antivirals).
Understanding the interface between the nanomaterials and coronaviruses reviewed is fundamental to designing targeted antivirals for COVID-19 infection. Their varied morphology, chemical diversity, excellent physical-chemical properties, and the possibility of binding several types of compounds with target functions on their surface, as well as the synergism between them (e.g., nanocapsules and nanocomposites), justify nanomaterials as potential nanomedicine and prophylactic tools against COVID-19. Nanocarbons can act against coronaviruses through multivalent interactions (e.g., electrostatic interactions, hydrogen bonds, and hydrophobic interactions) with the spike protein and lipid tails, destroying the membrane and blocking cell entry and viral replication. The primary mechanism to block viral cell infection was the inhibition of SARS-CoV-2 cell entry by blocking the binding between the RBD of the spike protein and the human ACE2 receptor. However, sulfonate ligands on the gold nanoparticle surface can inhibit SARS-CoV-2 cell attachment by inhibiting the binding between the spike protein and HS receptors. Many strategies can suppress ROS generation, inhibit host cell apoptosis, and inhibit endosomal acidification, acting as multistep inhibitors of viral entry and replication. Photoactivated copper, silver, and TNP showed the highest potential as virucidal agents.
No study using the eco-friendly nanocellulose was retrieved. That was surprising, considering that nanocellulose is sustainable, non-toxic, antimicrobial, biocompatible, relatively cheap, and a suitable carrier due to its nonspherical shape in the nanofibrous form, which is attractive to the pharmaceutical/biomedical industries. Additionally, nanocellulose has hydroxyl groups that might form hydrogen bonds with spike glycoproteins and stabilize the ligand-receptor complex. As future opportunities, we believe in new attempts with non-toxic nanomaterials and more efforts with nanomaterials that have already been reported at non-cytotoxic levels as antiviral agents. Thus, trends in toxicology evaluation and safety tests of the strategies reviewed can help fill the main gaps in the literature and overcome nanomaterials' main challenges for health surveillance. Encapsulated nanosystems with v-ATPase inhibition could be a promising targeted therapeutic to overcome the challenge of antiviral drug resistance. Future directions can be identified in the opportunity to study the pharmacokinetics of the phytochemicals delivered by the cited nanomaterials, to evaluate the effects on targeting, circulation time, and the ability to overcome biological barriers for drug repurposing with healthier options. Beyond that, the absence of attempts with nonspherical nanoparticles (e.g., filamentous shapes) could encourage new future efforts. We hope that our study's notes addressing the role of nanotechnology approaches with anti-coronavirus properties can help researchers with insights to overcome the challenges associated with SARS-CoV-2 virus control, give direction to develop novel antiviral therapies, and prevent future pandemics similar to the current COVID-19.
Test of \textit{Topmetal-${II}^-$} In Liquid Nitrogen For Cryogenic Temperature TPCs
\textit{Topmetal-${II}^-$} is a highly pixelated direct charge sensor that contains a 72${\times}$72 pixel array of 83${\mu}$m pitch size. The key feature of \textit{Topmetal-${II}^-$} is that it can directly collect charges via the metal nodes of each pixel to form two-dimensional images of charge cloud distributions. \textit{Topmetal-${II}^-$} was proved to measure charged particles without amplification at room temperature. To measure its performance at cryogenic temperature, a \textit{Topmetal-${II}^-$} sensor is embedded into a liquid nitrogen dewar. The results presented in this paper show that \textit{Topmetal-${II}^-$} can also operate well at this low temperature, with a noise (ENC) of 12 e$^-$, lower than that at room temperature (13 e$^-$). From the noise perspective, \textit{Topmetal-${II}^-$} is a promising candidate for the next generation readout of liquid argon and xenon Time Projection Chambers (TPCs) used in experiments searching for neutrinoless double beta decay and dark matter.
Introduction
Over the years, people have been interested in rare event experiments like neutrinoless double beta decay [1] and dark matter [2] searches. Liquid argon and xenon are considered to be good media for low rate TPC detectors such as ICARUS [3] and LANNDD [4]. The traditional readouts of liquid argon or xenon TPCs are basically multi-wire electrodes. It is challenging to reduce the distance between two wires to a few hundred microns or tens of microns for a large scale liquid argon or xenon detector [5].
On the other hand, a direct charge CMOS sensor with a large pixel array is a good choice of readout for such low rate and large scale detectors, since CMOS sensors offer fine resolution and high granularity. The Timepix [6] chip is a good example of a direct charge readout sensor that has a good performance at -125 °C (148 K) with a noise of 99 e− [7]. Coupled with an aluminum mesh [8] where gas amplification occurs, Timepix can be applied in a dual phase argon TPC with high detection efficiency [9].
We have designed a CMOS sensor named Topmetal-II − with rather low noise and high spatial resolution. It can be applied in a TPC detector as a charge collector to measure single electrons generated by alpha particles [10] at room temperature, without any charge multiplier being necessary. This result prompted us to test whether Topmetal-II − can work at the temperature of liquid argon (83.8 K).
In this paper, we mainly study how Topmetal-II − works in liquid nitrogen (77 K) and compare the performance of the sensor at room temperature and in liquid nitrogen.
Topmetal-II −
Topmetal-II − is a direct charge sensor with high spatial resolution. Charges can be collected via the metal nodes on each pixel when an electric field is applied above the top of the sensor. A wire-bonded Topmetal-II − sensor is shown in Figure 1. It consists of a 72×72 square pixel array. Each pixel size is 83×83 µm². The total structure of Topmetal-II − has been described in an earlier paper [10]. Here the internal structure of the single-pixel analogue readout is briefly presented in Figure 2. Each pixel consists of a charge collection electrode, a charge sensitive amplifier (CSA), an analogue readout channel and a digital readout channel (not shown in the figure). Analogue and digital readout channels operate independently. There is a guard ring at the periphery of the metal nodes of each pixel, and the sensor's performance is measured by injecting a pulse into the CSA through the guard ring. The capacitance between the guard ring and the top metal of the pixels (C_d) extracted by the IC design software is about 5.5 fF.
Test Results of Topmetal-II −
In order to compare the performance of Topmetal-II − at room temperature and at cryogenic temperature, the operating setup of this sensor embedded inside liquid nitrogen has to be well designed. The Topmetal-II − sensor is connected to a PCB board, and the PCB board is fixed on a lifting platform. We apply no special protection to the Topmetal-II − sensor. The sensor is slowly moved into a liquid nitrogen dewar to about 4 cm below the surface level. After 15 minutes, the sensor is powered on and configured, and then the signal is observed through an oscilloscope. This process is repeated many times. Three sensors were inspected under a microscope before and after the experiment, and no significant difference was found.
We first tested the performance of Topmetal-II − at room temperature in ambient air and then put it into a liquid nitrogen dewar to measure its performance. The following results show that Topmetal-II − works properly in liquid nitrogen and performs better than at room temperature with lower electronic noise for a single pixel in the absence of drift field.
Decay time constant
For measuring the decay time constant, a square wave of 200 mV (±100 mV) peak-to-peak amplitude is applied to the guard ring (the internal test circuit of the sensor) of Topmetal-II − in the absence of drift field. An equivalent charge of C_d × 200 mV = 6.8×10³ e− is injected into each pixel, where positive equivalent charges are injected at rising edges of the square wave and negative equivalent charges are injected at falling edges. The sensor chip can be tested at the single pixel level and the result is shown in Figure 5.
Figure 3: A single pixel's reset voltage (V_reset) dependence of the decay time constant of Topmetal-II − at room temperature (solid square) and in liquid nitrogen (triangle). The sensor is proved to be able to work in liquid nitrogen with a higher reset voltage (V_reset) than that at room temperature.
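The equivalent injected charge quoted above follows directly from Q = C_d × ΔV divided by the elementary charge. The short check below uses the C_d ≈ 5.5 fF given earlier; the small difference from the quoted 6.8×10³ e− comes from rounding of C_d.

```python
# Worked check of the injected equivalent charge Q = C_d * dV / e,
# using the design value C_d ~ 5.5 fF quoted earlier.
C_d = 5.5e-15        # guard-ring to top-metal capacitance [F]
dV = 200e-3          # square-wave step applied to the guard ring [V]
e = 1.602e-19        # elementary charge [C]

q_electrons = C_d * dV / e
print(f"{q_electrons:.2e} e-")   # ~6.9e3 e-, consistent with the quoted 6.8e3 e-
```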
The decay time constant of a single Topmetal-II − pixel varies with temperature, so different reset voltages V_reset should be applied at room temperature and in liquid nitrogen to measure the variation of the decay time constant with V_reset, and V_reset should be adjusted to achieve similar desired mean values. In liquid nitrogen, the Topmetal-II − sensor can work with a higher reset voltage (V_reset) than at room temperature. As shown in Figure 3, the decay time constant of a pixel changes much faster with V_reset in liquid nitrogen than at room temperature.
Since the whole pixel array of a sensor is connected to the same V_reset, uniformity of the pixels within one sensor is very important. The decay time constant distributions of the whole pixel array, both at room temperature and in liquid nitrogen, are shown in Figure 4. The distribution tends to be broader in liquid nitrogen than at room temperature, but both cases show good uniformity among pixels. For our measurements, pixels with a decay time constant of less than 3.3 ms are treated as dead pixels; there are about 500 of them. The presence of these dead pixels degrades the accuracy of the total collected charge. Therefore, in the design of the next Topmetal sensors, we will improve the uniformity of the decay time constants among pixels.
Noise Test
A pulse of 200 mV (peak-to-peak) is injected into the guard ring, as shown in Figure 5. The baseline voltage shifts by about 123 mV, from 733 mV to 856 mV, and the electronic noise is much lower in liquid nitrogen (σ ≈ 1.3 mV) than at room temperature (σ ≈ 2.2 mV). Since there is no shaper within Topmetal-II−, a digital trapezoidal filter [11] is applied to the raw data to extract the signal amplitude, and the mean (µ) and standard deviation (σ) of the amplitudes are then measured. The equivalent noise charge (ENC) is calculated using the formula ENC = C_d × V_injection × σ/µ. We configure a reset voltage for a single pixel and then repeatedly measure the decay time constants and amplitudes to obtain the ENC and the mean decay time constant. Figure 6 shows the dependence of the single-pixel ENC on the decay time constant from several milliseconds to one second. Overall, the noise of a single pixel in liquid nitrogen is slightly smaller than at room temperature.
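The amplitude extraction and the ENC formula can be illustrated with a minimal sketch. The shaper below is a generic moving-sum trapezoidal filter, not necessarily the exact filter of [11], and the waveform, window lengths and noise level are toy values chosen only to mimic the numbers quoted above.

```python
import numpy as np

def trapezoidal_filter(x: np.ndarray, k: int, m: int) -> np.ndarray:
    """Simple trapezoidal shaper for step-like pulses: difference of two length-k
    moving sums separated by a gap of (m - k) samples, normalized by k so that
    the flat-top value estimates the step amplitude."""
    kernel = np.zeros(m + k)
    kernel[:k] = 1.0          # one averaging window
    kernel[m:m + k] = -1.0    # the other averaging window, before the gap
    return np.convolve(x, kernel, mode="valid") / k

def enc_electrons(c_d: float, v_injection: float, amplitudes: np.ndarray) -> float:
    """ENC = C_d * V_injection * sigma / mu, expressed in electrons."""
    e = 1.602e-19
    return c_d * v_injection * amplitudes.std() / amplitudes.mean() / e

# Toy waveforms: a 123 mV baseline step with 1.3 mV RMS noise, repeated pulses.
rng = np.random.default_rng(0)
amps = []
for _ in range(200):
    wf = np.concatenate([np.zeros(500), np.full(500, 0.123)])
    wf += rng.normal(0, 1.3e-3, wf.size)
    amps.append(trapezoidal_filter(wf, k=100, m=200).max())
print(f"ENC ~ {enc_electrons(5.5e-15, 0.200, np.array(amps)):.1f} e-")
```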
The uniformity of the ENC across Topmetal-II− is important, so we measured the ENC of the whole pixel array using the method described above, setting V_reset to 800 mV at room temperature and 1.36 V in liquid nitrogen. The ENC distribution of the whole pixel array (Figure 7) is wider in liquid nitrogen than at room temperature. This is consistent with the observation that the decay time constants of different pixels span a much larger range in liquid nitrogen, since the ENC is affected by the decay time constant when the trapezoidal filter is applied to shape the signal. The most probable values (MPV) of the ENC are 12 and 13 e⁻ in liquid nitrogen and at room temperature, respectively, while the mean (σ) values are 19.4 (10.4) and 13.5 (2.5) e⁻. The inset in the top right corner of Figure 7 shows that the ENC in liquid nitrogen is slightly lower than at room temperature when the decay time constant is in the range of 10 to 50 ms.
Linearity
Good linearity of Topmetal-II− is essential for reconstructing the injected charge. It was measured and compared at room temperature and in liquid nitrogen. The amplitude of the pulses injected into the guard ring was varied from 10 mV to 100 mV in steps of 10 mV. As shown in Figure 8, the linearity of Topmetal-II− is excellent. In liquid nitrogen, the output amplitude is about 10% higher than at room temperature for the same injected pulse voltage.
Summary
We demonstrate that Topmetal-II− can operate well in liquid nitrogen. The most probable value of the ENC of the sensor is about 13 e⁻ at room temperature and 12 e⁻ in liquid nitrogen after a digital trapezoidal shaper is applied. The ENC has a much wider spread in liquid nitrogen than at room temperature because of the wider spread of decay time constants. The sensor responds linearly to the injected pulse signal. Topmetal-II− is therefore a promising candidate for use in liquid argon or xenon TPCs for low-rate experiments that do not require charge amplification. We will explore applications based on Topmetal-II− at cryogenic temperature.
In the design of the next version of Topmetal sensors, aimed at a cryogenic-temperature TPC, we will further improve the uniformity of the decay time constants among pixels and reduce the noise of the charge sensitive amplifier.
Grain protein content variation and its association analysis in barley
Background Grain protein content (GPC) is an important quality determinant for barley used as malt, feed as well as food. It is controlled by a complex genetic system. GPC differs greatly among barley genotypes and is also variable across different environments. It is imperative to understand the genetic control of barley GPC and identify the genotypes with less variation under the different environments. Results In this study, 59 cultivated and 99 Tibetan wild barley genotypes were used for a genome-wide association study (GWAS) and a multi-platform candidate gene-based association analysis, in order to identify the molecular markers associated with GPC. Tibetan wild barley had higher GPC than cultivated barley. The significant correlations of GPC with diastatic power (DP) and malt extract confirmed the importance of GPC in determining malt quality. Diversity arrays technology (DArT) markers associated with barley GPC were detected by GWAS. In addition, GWAS revealed two HvNAM genes as the candidate genes controlling GPC. No association was detected between HvNAM1 polymorphism and GPC, while a single nucleotide polymorphism (SNP) (798, P < 0.01), located within the second intron of HvNAM2, was associated with GPC. There was a significant correlation between haplotypes of HvNAM1, HvNAM2 and GPC in barley. Conclusions The GWAS and candidate gene-based association study may be effectively used to determine the genetic variation of GPC in barley. The DArT markers and the polymorphism of HvNAM genes identified in this study are useful in developing high quality barley cultivars in the future. HvNAM genes could play a role in controlling barley GPC.
Background
Grain protein content (GPC) is an important quality determinant in cereal crops. In barley, GPC is closely associated with feed and malt quality. Higher protein content is favorable for feed quality, while lower or moderate protein content is expected for malt barley. GPC affects malting quality in many ways, including yeast nutrition, haze formation in beer and enzyme activities [1,2].
Barley GPC is under polygenic control, with many quantitative trait loci (QTLs) having been mapped on all seven chromosomes, mainly on 2H, 4H, 5H and 6H [3,4]. All these loci were identified by QTL mapping. Recently, genome-wide association study (GWAS) has been developed to dissect a variety of complex traits in plants [5,6]. GWAS has the advantage over conventional QTL mapping that it can be performed on a large number of genotypes, whereas a population used for conventional QTL mapping is developed from a bi-parental cross, which only allows the detection of a subset of loci/alleles and offers limited resolution due to insufficient recombination between linked genetic loci. Hence, GWAS may capture wider genetic variation and provide higher mapping resolution for phenotypes and traits at the population level than conventional QTL mapping [6]. In barley, seven malt quality traits and some important agronomic traits have been effectively analyzed using GWAS [7][8][9].
Qinghai-Tibet Plateau, considered as one of the original centers of cultivated barley in the world, is rich in barley germplasm [10]. The polymorphism information content (PIC) value of Tibetan wild barley is higher than that of Chinese landraces according to analysis of SSR markers, and the wild barley has more unique alleles than the cultivated barley [11][12][13]. Thus, Tibetan wild barley is assumed to have wider variability in the genes controlling GPC [11][12][13]. Therefore, the population derived from Tibetan wild barley and cultivated barley worldwide could provide high resolution for GWAS in barley GPC.
A wheat QTL controlling GPC, named Gpc-B1, was cloned, and a transcription factor (NAM-B1) was shown to affect GPC by regulating senescence and protein remobilization [14]. Two orthologous genes (GenBank accession numbers DQ869678 and DQ869679) of TtNAM-B1 in barley were identified on chromosomes 6H and 2H, respectively [14]. Single nucleotide polymorphism (SNP) analysis showed that allelic variation of the NAM-1 gene could be associated with GPC variation within the genus Hordeum. Differences in the expression of HvNAM-1 or other genes among barley cultivars or species could contribute to GPC variation [15]. However, little research has been done on barley HvNAM2 to date, except that the sequence of HvNAM2 has been published [16].
The objectives of the current study are (1) to examine the correlation between GPC and malt quality; (2) to identify molecular markers associated with GPC in a barley mapping population by GWAS and determine the candidate genes controlling GPC; and (3) to analyze the association between HvNAM genes and GPC.
Plant materials
A collection of 158 barley accessions was used for association mapping and GPC analysis. These accessions included 59 barley cultivars (H. vulgare L.) from different areas of the world and 99 Tibetan wild barley accessions (H. spontaneum L.). All barley cultivars and accessions were planted at the Huajiachi campus of Zhejiang University (Hangzhou, China, 120.0°E, 30.5°N) in the early winter of 2008 and 2009. Each accession was sown in a two-row plot, 2 m long with a 0.24 m interval between the rows, and 40 seeds were planted in each row. All plots were supplied with 150 kg/ha of N, including 40 kg/ha of N as compound fertilizer applied before seeding and 110 kg/ha of N as urea supplied in equal amounts at the two-leaf stage and the booting stage. In addition, 180 kg/ha of potassium chloride was applied prior to seeding. The experiments were arranged in a block design with two replications, and within each block the 158 barley accessions were arranged randomly. All other agronomic management, including weed and disease control, was the same as applied locally. At the seedling stage, leaves of each genotype were collected for DNA extraction. The harvested seeds were stored at 4°C prior to malting. GPC and malt quality were measured for all samples, with three measurements per sample.
GPC measurement
Mature grains were ground in a Cyclotec 1093 sample mill (Tecator AB, Hoganas, Sweden) and passed through a 0.5 mm screen. GPC was measured using the Kjeldahl method [17]. Protein content was calculated by multiplying the N content by a factor of 6.25.
Malting and quality analysis
Grain samples (around 200 g) were micro-malted in a Micro-malting Apparatus (Phoenix System, Adelaide, Australia) using the following regime: 6 h steep, 14 h air-rest, 8 h steep, 14 h air-rest and 4 h steep, followed by 96 h germination, all performed at 15°C. The malts were then kilned at 65°C for 24 h, de-rooted and milled using a Tecator Cyclone mill fitted with a 0.5 mm screen. The soluble and total protein contents (SPC and TPC) in malt and the malt quality parameters (malt extract, Kolbach index, viscosity and DP) were determined according to the Analytica EBC Official Methods (European Brewery Convention, 1975).
DNA extraction and genotypic analysis
Genomic DNA samples from young leaves of the barley seedlings were isolated as described by Uzunova et al. [18]. In brief, the leaf tissues were ground, and the resulting powder was re-suspended with CTAB (Hexadecyl trimethylammonium bromide) buffer (pH 5.0). To purify the DNA, insoluble particulates were removed through centrifugation. DNAs were precipitated from the aqueous phase and were washed thoroughly to remove contaminating salts.
Whole-genome DArT profiling of all DNA samples was performed with the Barley PstI (BstNI) version 1.7 array [19] at Diversity Arrays Technology Pty Ltd in Australia. There are around 1,500 DArT markers that are polymorphic in a wide range of barley cultivars, and 1,000 markers detected in wild barley accessions (http://www.triticarte.com.au/content/barley_diversity_analysis.html). Among the 1,576 reported markers, the 1,319 polymorphic DArT markers with P value < 0.05 were used in the current study.
Data analysis
Pearson correlation analysis was conducted between GPC, SPC, TPC and malt quality parameters using SPSS 13.0 and SigmaPlot 10.0. Alignment of all sequences was performed with ClustalW [21]. Genetic diversity was examined with 1319 barley DArT markers randomly distributed over the genome at Diversity Arrays Technology Pty Ltd, Australia. The genetic polymorphism data from the 1319 DArT markers were used to detect population structure with the STRUCTURE software version 2.3.3 using an admixture model and five independent replicates of 100,000 Markov Chain iterations [22,23]. K values ranging from 1 to 10 were tested with a burn-in of 100,000 iterations and 100,000 Markov Chain Monte Carlo (MCMC) iterations according to the software's instructions. The effect of population structure on GPC was tested using SAS GLM (SAS Institute, Cary, North Carolina, USA); the model included the components of the Q matrix obtained with STRUCTURE 2.2.3, which was used to describe population structure, and R² (variance explained by the model) was taken as an estimate of the proportion of phenotypic variation explained by population structure. Principal component analysis (PCA) was performed on the genotype data derived from the 1319 DArT markers, which were first standardized using Unscrambler 9.7 (CAMO PROCESS AS, Oslo, Norway). TASSEL 2.01 was used to calculate linkage disequilibrium (LD) based on the parameter r², which is a measure of the correlation between a pair of variables [23]. The pair-wise relationship matrix (K-matrix), which was further employed for population correction in the association models, was calculated with the 1319 DArT markers using TASSEL 2.01 [23]. The two-year GPC data were averaged for the subsequent association analysis. The structure-based association analysis with a K-matrix between DArT markers, HvNAM genes and GPC was calculated using TASSEL 2.01 [23]. Association between DArT markers and the total trait variation was tested using mixed linear models (MLM) implemented in TASSEL 2.01. The P values were adjusted with a permutation test using a step-down MinP procedure implemented in TASSEL 2.01, and an adjusted P value < 0.05 or < 0.01 was considered as the criterion for association. The Manhattan plot of DArT markers and P values was drawn with the R software version 2.14.2 (http://www.r-project.org/). The association map was constructed using MapDraw version 2.1 [24].
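As a rough illustration of the LD statistic mentioned above, r² for dominant 0/1 DArT scores can be approximated by the squared correlation of marker scores across accessions; the sketch below is a simplified stand-in for the TASSEL calculation and uses a made-up toy matrix.

```python
import numpy as np

def pairwise_ld_r2(markers: np.ndarray) -> np.ndarray:
    """Pairwise LD (r^2) between biallelic markers, computed here as the squared
    Pearson correlation of the 0/1 marker scores across accessions.
    `markers` is an (n_accessions x n_markers) array of 0/1 DArT scores."""
    centered = markers - markers.mean(axis=0)
    cov = centered.T @ centered / (markers.shape[0] - 1)
    sd = np.sqrt(np.diag(cov))
    corr = cov / np.outer(sd, sd)
    return corr ** 2

# Toy example: 8 accessions x 3 markers (0 = absent, 1 = present).
toy = np.array([[1, 1, 0], [1, 1, 1], [0, 0, 1], [0, 0, 0],
                [1, 1, 0], [0, 0, 1], [1, 0, 0], [0, 1, 1]])
print(np.round(pairwise_ld_r2(toy), 2))
```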
Sequences of HvNAM1 and HvNAM2 were aligned using VectorNTI 10.0 (Invitrogen Corporation, Carlsbad, USA) or CLC Main Workbench 5 (CLC bio, Aarhus, Denmark), and alignments were edited manually using the BioEdit software. Haplotypes were inferred using the software TASSEL 2.01 [23]. One barley accession was inferred to carry a rare haplotype and was excluded from further analysis. GPC variation among the 59 cultivated and 99 Tibetan wild barley accessions, grouped according to haplotypes of the HvNAM genes, was analyzed using SAS 9.0 (SAS Institute, Cary, North Carolina, USA). For further association analysis between haplotypes and GPC in all 158 accessions, SAS 9.0 (SAS Institute, Cary, North Carolina, USA) was used to conduct analysis of variance (ANOVA) and multiple-comparison analyses with least significant differences (LSD); mean differences were considered significant at the 0.05 level.
The variation of protein content and Kolbach index
The GPC in the 59 cultivated and 99 Tibetan wild barley accessions ranged from 8.02% to 13.50% with a mean of 10.56% in 2008 and from 8.28% to 14.45% with a mean of 10.87% in 2009 (Figure 1). Overall, Tibetan wild barley had higher GPC than cultivated barley (Figure 2). Moreover, GPC showed a normal distribution pattern (Figure 1), suggesting that multiple genes/QTLs control GPC in barley. There was also large variation in SPC, TPC and Kolbach index among the 158 accessions (Figure 3).
The values of GPC, SPC and TPC in 2008 and 2009 were significantly and positively correlated (R² = 0.4435** for GPC; R² = 0.3937** for SPC; R² = 0.3937** for TPC) (Figure 3), while the Kolbach index data from 2008 accounted for 55.11% of the variation in 2009. This suggests that GPC, SPC, TPC and Kolbach index are mainly controlled by genetic factors but are also affected by environmental variation.
The relationship between GPC, SPC, TPC and four malt quality parameters
GPC showed similar results in both years (Figure 3), so the mean values of the two years were used in the correlation analysis. The results showed that GPC was significantly and positively correlated with SPC (0.628, P < 0.01), TPC (0.847, P < 0.01) and DP (0.340, P < 0.05), and negatively correlated with malt extract (−0.347, P < 0.01) (Table 1). There was no significant correlation between GPC and viscosity or Kolbach index. SPC was positively correlated with TPC (0.759, P < 0.01), Kolbach index (0.626, P < 0.01) and DP (0.456, P < 0.01), and negatively correlated with viscosity (−0.356, P < 0.01), indicating the significance of SPC in determining malt quality. Moreover, TPC was positively correlated with DP (0.465, P < 0.01) and negatively correlated with malt extract (−0.326, P < 0.01) (Table 1).
Population structure and its impact on GPC variation
One of the primary objectives of the current study was to determine whether GWAS could be used for association analysis between barley GPC and genetic markers. Hence, we estimated the LD (r²) of the population used in this experiment. The estimated LD extended over 0.40 cM (Additional file 1: Figure S1), and the 1319 DArT markers were distributed randomly over the whole barley genome, ensuring good marker coverage of the barley genome. The presence of population stratification and an unequal distribution of alleles within these groups could result in nonfunctional and spurious associations [25,26].
Thus, the population structure was taken into account in this study. The 1319 DArT markers were used to evaluate the set of 59 cultivated and 99 Tibetan wild barley genotypes. Stratification within the barley population was detected with STRUCTURE and PCA. The highest likelihood for the number of sub-populations (K) calculated with the STRUCTURE software was obtained at K = 7 (Additional file 2: Table S1, Additional file 3: Figure S2), indicating that seven subpopulations give the most stable variance. In addition, a PCA of the population structure was conducted. Interestingly, the cultivated and Tibetan wild barley were clearly separated into two groups by PCA (Figure 4). The cultivated barley accessions showed a more distinct membership in subpopulations 4 and 6, while the Tibetan wild barley accessions belonged to subpopulations 1, 2, 3, 5 and 7 (Figure 4 and Additional file 4: Table S2). Collectively, these seven components accounted for 65.18% of the genetic variation; the first component accounted for 32% and the second component for 11% of the genetic variation (Figure 4). A Q matrix with 7 sub-populations was then used in the further analysis. Variance analysis of the GPC data from 2008 and 2009 showed that population structure explained 10.6% of the total variation, indicating an impact of population structure on GPC.
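The PCA step on the standardized marker matrix can be sketched as follows; the toy 0/1 matrix stands in for the real 1319-marker DArT data, and the SVD-based scores are a generic substitute for the Unscrambler computation.

```python
import numpy as np

def marker_pca(markers: np.ndarray, n_components: int = 2) -> np.ndarray:
    """PCA scores of accessions from a 0/1 marker matrix (accessions x markers).
    Columns are standardized first, then scores are obtained via SVD."""
    std = markers.std(axis=0, ddof=0)
    std[std == 0] = 1.0                      # avoid dividing by zero for monomorphic markers
    z = (markers - markers.mean(axis=0)) / std
    u, s, _ = np.linalg.svd(z, full_matrices=False)
    return u[:, :n_components] * s[:n_components]

rng = np.random.default_rng(1)
toy_markers = rng.integers(0, 2, size=(10, 50))  # 10 accessions, 50 markers
print(marker_pca(toy_markers).shape)             # (10, 2)
```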
Association of DArT markers with GPC and determination of candidate genes
Generally, a stringent model may cause less spurious background association. In the current study, the structure-based association analysis with a K-matrix was calculated using TASSEL 2.01 [23]. The two trials in 2008 and 2009 showed similar results; therefore, we combined the two-year data and used the means for association analysis. The adjusted P values, obtained from the step-down MinP procedure, were used for the permutation test [27]. As markers with adjusted P values < 0.05 are considered significant, the probability of rejecting a single true null hypothesis across the entire set of hypotheses is held below 0.05. This test takes dependence between hypotheses into account and does not assume that hypotheses are independent, as other multiple test correction procedures do [23]. Here, the association of DArT markers and GPC is shown in a Manhattan plot (Additional file 5: Figure S3). When the adjusted P value was <0.01, there were 3, 8, 1, 1 and 7 DArT markers associated with GPC on 1H, 2H, 3H, 5H and 7H, respectively. Interestingly, five molecular markers in this study were close to genetic markers of GPC reported previously (Table 2). Of them, bPb-1628 and bPb-1072 were close to the marker HVBKASI, which was identified as HvNAM2 [14]. In addition, bPb-8986 and bPb-3412 were close to the markers HVM36 and Bmag0751.
It has been reported that the results of association analysis are affected by environmental factors [28,29]. Thus, a stringent criterion for significance may bias studies against detection of causal associations that show significant genotype-environment interactions [30]. The correlation analysis of GPC between 2008 and 2009 showed that GPC was mainly controlled by genetic factors, but also affected by environmental conditions. Hence, we set the threshold of the association analysis to 0.05, so as to detect possible markers associated with GPC. When the adjusted P value was <0.05, we found that GPC in barley was under polygenic control, and the relevant genes/QTLs were located on almost all chromosomes except 4H, mainly on chromosomes 2H and 7H (Figure 5). There were 10 DArT markers associated with GPC on chromosomes 1H and 5H, 20 on 2H, 13 on 3H, 11 on 6H, and 20 on 7H. The associated markers accounted for between 2.2 and 18.0% of the GPC variance. Several DArT markers associated with GPC were closely localized within the genome; we therefore considered associated DArT markers within 10 cM to be the same locus. As a result, there were 5, 7, 6, 5, 6, and 8 loci on chromosomes 1H, 2H, 3H, 5H, 6H, and 7H, respectively. Among those, a major QTL for GPC, which accounted for 40% of the total variation, was quite close to the markers abg458, hvm74, and mwg2029, and it could be orthologous to the Gpc-B1 gene located on wheat chromosome 6BS. Gpc-B1 was associated with increased grain protein in wheat [1], and this QTL was identified as HvNAM1 in barley [14] (Table 2). Similarly, in our study, the markers bPb-7179, bPb-5822 and bPb-9522 associated with GPC in barley were close to the markers abg458, hvm74, and mwg2029. In addition, the best Neighbor Joining tree showed that the HvNAM genes have the closest distance to the wheat NAM genes [14], and the colinearity of the NAM locus between barley and wheat has also been revealed [16]. We therefore inferred that HvNAM genes could be related to GPC in barley. Thus, HvNAM1 and HvNAM2 were chosen as candidate genes for further association analysis of GPC.
Association of HvNAM genes with GPC
The sequences of the HvNAM1 and HvNAM2 genes were analyzed against the reference sequences from NCBI (accession numbers DQ869678 and DQ869679). The structures and SNPs of HvNAM1 and HvNAM2 are shown in Figure 6. The amplified length of the HvNAM1 gene was 1585 bp, containing 3 exons, 2 introns and a NAM super-family domain from amino acid 35 to 165. In comparison with the reference sequence (DQ869678), HvNAM1 in this study had five SNPs, located on bases 234, 544 and 1433 in cultivated barley, and on bases 544, 1190 and 1427 in Tibetan wild barley (Figure 6 and Additional file 6: Figure S4). All of the SNPs were within the coding region and resulted in 5 amino acid substitutions, where Trp, Ala, Gly, Gly, and Ala were replaced with Cys, Pro, Ser, Ala, and Thr, respectively. There was no association between HvNAM1 polymorphism and GPC. Because no SNP of the HvNAM1 gene was found to be associated with GPC, a haplotype-based association analysis was performed. Using the software TASSEL 2.01 to infer haplotypes of the HvNAM1 gene among all accessions, we found five haplotypes within this gene. Three and four haplotypes were found in the 59 cultivated and 99 Tibetan wild barley genotypes, respectively, with one and two unique haplotypes in cultivated and Tibetan barley, respectively (Table 3).
To analyze the possible differential effects of haplotypes on GPC, the population structure was taken into account. The haplotypes of HvNAM1 explained 20.6% of the GPC variance in the tested population. For the whole panel of accessions, the accessions carrying haplotype 4 of HvNAM1 had the highest GPC, whereas accessions with haplotype 3 of HvNAM1 had the lowest GPC in both years (Figure 7). The amplified HvNAM2 gene contained 3 exons and 2 introns with a NAM super-family domain between amino acids 28 and 157, and its length was 1528 bp. The polypeptide sequence of HvNAM2 showed 80% sequence identity (Table 3). The presence of new polymorphisms in Tibetan wild barley indicated that it could provide a new genetic resource for the genetic improvement of barley. However, only one SNP (798, P < 0.05), located within the second intron of HvNAM2 (Figure 6 and Additional file 7: Figure S5), was associated with GPC as determined in two consecutive years in the 59 cultivated and 99 Tibetan wild barley genotypes. Moreover, in order to analyze the effect of HvNAM2 haplotypes on GPC in barley, one haplotype represented by a single accession was excluded from the six haplotypes of HvNAM2. The haplotypes of HvNAM2 explained 7.2% of the GPC variance in our population. We observed that haplotype 3 of HvNAM2 was associated with higher GPC, while haplotype 5 of HvNAM2 had the lowest GPC in both years (Figure 7).
Discussion
Barley used for malting should have a GPC lower than 11.5%. GPC is influenced to a large extent by both Figure 5 The association map for grain protein content (GPC) in barley. The map was constructed using MapDraw version 2.1 [20]. The asterisks denote the diversity arrays technology (DArT) markers associated with GPC. The brackets denote the DArT markers associated with GPC within 10 cM.
genotype and environment [31,32]. In the current study, phenotyping of the diversity panel provided some valuable information about the range and distribution of GPC in barley. Genotype and environment interactions are indeed apparent when comparing the GPC data over the two consecutive years. Our results showed that some Tibetan wild accessions with higher GPC could be useful for breeding both feed and food barley cultivars. Although there were significant differences in GPC, SPC and TPC among genotypes over the two consecutive years, the traits were mainly controlled by genetic factors as indicated by their high consistency over the two years. A negative correlation between GPC and malt extract and a positive correlation between GPC and DP have been reported [33]. Similarly, in the current study, we found that TPC was negatively correlated with malt extract and positively correlated with DP. Interestingly, SPC was correlated with all malt quality parameters except malt extract. Obviously, the protein content in both grain and malt is closely related to malt quality. Therefore, it is imperative for us to develop barley varieties with stable GPC in malt barley breeding.
The advantages of GWAS over the conventional QTL mapping, based on a population from a bi-parental cross have been confirmed [34]. Compared to QTL mapping, GWAS increases the range of natural variation that can be surveyed in a single experiment and the number of significant regions that are likely to be identified [12]. Hence, GWAS could provide higher resolution than QTL mapping, and facilitate fine-mapping and gene discovery. The materials used in our GWAS study, included 59 worldwide cultivated and 99 Tibetan wild barley accessions, which cover representative accessions from most of the barley-growing regions in the world. GPC were mainly controlled by genetic factors and also affected by environmental variation according the correlation analysis. However, a stringent criterion for significance, may bias studies against detection of causal associations that show significant Genotype-Environment interactions [30]. Thus, we chose 0.01 and 0.05 as the threshold of association analysis, in order to detect possible markers associated with GPC. As a result, GWAS identified as many as 5, 7, 6, 5, 6 and 8 loci to be associated with barley GPC on chromosomes 1H, 2H, 3H, 5H, 6H and 7H, respectively. These results showed that many more molecular markers associated with GPC could be detected by GWAS than by conventional QTL mapping.
In addition to the discovery of the DArT markers for GPC, the completion of the association map for GPC is a significant step towards the cloning of GPC related genes. The identified markers for GPC will be very useful in the evaluation and screening of barley accessions with reasonable GPC. In comparison with previous studies [1,4,15,31], we found more markers in this study, including 3, 3, and 1 marker(s) on chromosome 6H, 2H and 5H, respectively (Table 2). Three major QTLs were identified on chromosomes 6H and 2H using a barley mapping population developed from a cross between 'Karl' , a low grain protein six-rowed variety and 'Lewis' , a high grain protein two-rowed variety. The three QTLs could explain 56% of the total heritable variance of GPC [1]. Two of them were identified as the HvNAM1 and HvNAM2 genes in barley, the homologs of a NAC transcription factor (NAM-B1) that increases GPC by regulating senescence in wheat [14]. Therefore, we considered HvNAM1 and HvNAM2 as the candidate genes controlling GPC. Due to the effect of gene-target association to identify SNP markers for use in barley [35], the association between two candidate genes, HvNAM1 and HvNAM2, and GPC was analyzed, in order to examine the genetic architecture of GPC and to identify GPC loci in barley. Jamar et al. found that allelic variation of the functional NAM-1 gene could be associated with GPC variation within the genus Hordeum [15], and the 13 genotypes used in their study could be classified into three haplotypes: 11 European varieties of H. vulgare being gathered as haplotype 1, one H. spontaneum (Hs) and one Hordeum bulbosum (Hb) being classified as haplotype 2 (Genbank accession number EU908210) and haplotype 3 (Genbank accession number EU908211), respectively. By comparing to the reference sequence (DQ869678), 3 SNPs were identified on bases 355, 483 and 554 of HvNAM1. However, we did not identify these SNPs in the current study. Instead, we found 3 SNPs located on bases 234, 544 and 1433 in the cultivated barley and 3 SNPs on bases 544, 1190 and 1427 in Tibetan wild barley. No association was detected between the polymorphisms of HvNAM1 and GPC, however there was significant correlation between HvNAM1 haplotypes and GPC. Moreover, eight SNPs within HvNAM2 were located on bases 307, 732, 798, 962, 979, 991, 1034 and 1289 in the Tibetan wild barley, but only 4 SNPs were present on bases 307, 798, 979 and 991 in the cultivated barley. Interestingly, a single SNP (798, P < 0.05) within HvNAM2 gene, located on the second intron, was associated with GPC. To gain further insight, the correlation between HvNAM2 haplotypes and GPC was analyzed in barley, where The DArT markers close to HvNAM1 and HvNAM2 explained 18% and 6.4% GPC variance, while the haplotypes of HvNAM1 and HvNAM2 accounted for 20.6% and 7.2% of GPC variance, respectively. The comprehensive analysis, including the primary GWAS, the colinearity of NAM locus between barley and wheat, the best Neighbor Joining tree of NAM genes in Arabidopsis and other crops and the association analysis of HvNAM genes, indicated that HvNAM genes could drive the variation in barley GPC. Moreover, the results also showed that the adjusted P value < 0.05 could be reasonable for finding the molecular markers associated with traits which are greatly affected by environmental factors. In fact, the threshold with P <0.05 used in our primary GWAS of GPC ensured identification of the DArT markers, which were not detected in the analysis with the threshold of P <0.01. 
One of the candidate genes, HvNAM1, detected in the association analysis with adjusted P values <0.05, was found to be associated with GPC. The current results indicate the suitability of the adjusted P value <0.05 for identifying molecular markers associated with GPC. Similarly, adjusted P values <0.05 have been used as the criterion for association analysis in other research [36].
Ultimately, the identification of SNPs and haplotypes of HvNAM genes could enable the development of useful molecular markers for GPC. Here, the association analysis may provide some molecular markers of HvNAM genes with potential importance for the early selection in malt barley breeding.
More importantly, it will shed some light on the molecular mechanisms responsible for the genotypic differences in GPC between cultivated and wild barley. Furthermore, the exact chromosome regions of these markers will be of interest for understanding the genetics of GPC, since most of these regions have not been annotated in terms of their function. However, association mapping only provides statistical and indirect evidence for the function of the identified genes, so we will seek direct evidence on the underlying molecular mechanisms of GPC and malting quality in future research.
Conclusions
This study has demonstrated close correlation between protein content and malt quality parameters, indicating that it is imperative for us to develop barley varieties with a stable GPC. The identified markers for GPC in this study will be very useful in evaluation and screening of barley germplasm with reasonable GPC. Moreover, the haplotypes of HvNAM1 and HvNAM2, SNP and DArT markers, which were associated with GPC in barley, could provide key molecular markers for the selection of malt quality traits. In addition, GWAS is very useful for finding candidate genes and may provide a powerful tool for identifying the different loci influencing GPC in barley.
Novel bounds for causal effects based on sensitivity parameters on the risk difference scale
Unmeasured confounding is an important threat to the validity of observational studies. A common way to deal with unmeasured confounding is to compute bounds for the causal effect of interest, that is, a range of values that is guaranteed to include the true effect, given the observed data. Recently, bounds have been proposed that are based on sensitivity parameters, which quantify the degree of unmeasured confounding on the risk ratio scale. These bounds can be used to compute an E-value, that is, the degree of confounding required to explain away an observed association, on the risk ratio scale. We complement and extend this previous work by deriving analogous bounds, based on sensitivity parameters on the risk difference scale. We show that our bounds can also be used to compute an E-value, on the risk difference scale. We compare our novel bounds with previous bounds through a real data example and a simulation study.
Introduction
The estimation of causal effects in observational (non-randomized) studies is often hampered by unmeasured confounding. A common way to deal with unmeasured confounding is to compute bounds for the causal effect of interest, that is, a range of values that is guaranteed to include the true effect, given the observed data. Such bounds have, for instance, been derived for causal effects in randomized trials with non-compliance [1], causal effects in the presence of truncation by death [2], controlled and natural direct effects [3,4] and causal interactions [5,6].
A common feature of these bounds is that they are typically "assumption-free," in the sense that they make no parametric assumptions about the relations between observed and unobserved variables. As a consequence, the bounds are often relatively wide and may thus not be very informative. In contrast, Ding and VanderWeele (DV) [7] proposed to parametrize the degree of unmeasured confounding, using two sensitivity parameters that quantify the maximal strength of association between the exposure and the confounders, and between the outcome and the confounders. DV derived bounds for the causal exposure effect, as functions of the sensitivity parameters and the observed data distribution. By using subject matter knowledge to set the sensitivity parameters to plausible values, bounds are obtained that can be substantially narrower than the assumption-free bounds. Many other sensitivity analyses have been proposed in the literature, but these typically require rather special conditions, such as a single binary confounder [e.g., refs 8-10] or no exposure-confounder interaction [e.g., refs 11-13]; we refer to ref. [7] for a thorough review.
The sensitivity parameters proposed by DV are defined on the risk ratio scale. However, if the target causal effect is a risk difference, then it may be natural to specify the sensitivity parameters on the risk difference scale as well. In their eAppendix, DV [7] provided bounds for the causal risk difference using sensitivity parameters on the risk difference scale. However, these bounds are restricted to a categorical confounder with a known number of levels. This is an important limitation, since in practice one would almost never know the dimension of the unknown confounders, and these may also contain a combination of categorical and continuous variables. Furthermore, some subject matter experts may find it more intuitive to speculate about the degree of unmeasured confounding on the risk difference scale, regardless of the chosen scale for the target causal effect.
In this article we address these limitations. We derive bounds for causal effects that can be written as a contrast between the counterfactual probability of the outcome if the exposure is present and absent, respectively, for everybody in the population. The bounds that we derive are functions of sensitivity parameters that quantify the degree of unmeasured confounding on the risk difference scale.
The article is organized as follows. In Section 2, we establish basic notation, definitions and assumptions and define the target causal effect. In Section 3, we derive assumption-free bounds in our setting; these serve as a benchmark to which other bounds can be compared. In Section 4, we briefly review the key results of ref. [7], and in Section 5 we present our novel bounds. Both DV's and our bounds are functions of several sensitivity parameters. In Section 6 we discuss how the parameter space can be reduced, to facilitate interpretation and communication. In Section 7, we illustrate the theory with a real data example on smoking and mortality, and in Section 8, we compare DV's and our bounds in a small simulation study. We provide concluding remarks in Section 9.
Notation, definitions and assumptions
We adopt the notation of ref. [7], with some modifications. Let E and D denote the binary exposure and outcome, respectively. Let p_e = P(E = e) and q_e = P(D = 1 | E = e); the observed data distribution p(D, E) consists of three free parameters: p_1, q_0 and q_1. The statistical association between E and D is defined as some contrast between q_1 and q_0, e.g., ψ = g(q_1) − g(q_0), where g is a monotone link function. The identity, log and logit links give the risk difference, log risk ratio and log odds ratio, respectively. Let D(e) be the potential outcome for a given subject, with the exposure set to level E = e for that subject [14,15]. The potential outcome D(e) is connected to the observed outcome D through the relation

D = D(E),   (1)

which is often referred to as "consistency" [15]. Let q*_e = p{D(e) = 1} be the counterfactual probability of the outcome, with the exposure set to E = e for everyone [14,15]. The target parameter is the causal effect of the exposure, which is defined as some contrast between q*_1 and q*_0, e.g.,

ψ* = g(q*_1) − g(q*_0).   (2)

Generally, ψ* ≠ ψ due to unmeasured confounding. Let U denote a set of unmeasured confounders, which (together with the measured covariates C) is assumed to be sufficient for confounding control. Formally, we assume that

D(e) ⊥ E | U,   (3)

which is often referred to as "conditional (given U) exchangeability" [15]. In terms of the joint distribution p(D, E, U), the parameter q*_e is given by

q*_e = Σ_u p{D(e) = 1 | U = u} p(U = u) = Σ_u p{D(e) = 1 | E = e, U = u} p(U = u) = Σ_u p(D = 1 | E = e, U = u) p(U = u),   (4)

where the first equality follows from the law of total probability, the second from conditional exchangeability (3) and the third from consistency (1).
Assumption-free bounds
Manski [16] and Robins [17] derived assumption-free bounds for causal effects as follows. Using the law of total probability we can decompose q*_e into

q*_e = p{D(e) = 1 | E = e} p_e + p{D(e) = 1 | E = 1 − e}(1 − p_e) = q_e p_e + p{D(e) = 1 | E = 1 − e}(1 − p_e),

where the second equality follows from consistency (1), cf. also ref. [18]. In this expression only p{D(e) = 1 | E = 1 − e} is unknown. Setting this counterfactual probability to its lower and upper limits, 0 and 1, gives
q_e p_e ≤ q*_e ≤ q_e p_e + 1 − p_e.   (6)
These bounds are assumption-free in the sense that they are valid regardless of the relations between U and (D, E); we refer to these bounds as the "AF bounds." Replacing q*_0 and q*_1 in (2) with their upper and lower bounds, respectively, gives a lower bound for ψ*. Similarly, replacing q*_0 and q*_1 in (2) with their lower and upper bounds, respectively, gives an upper bound for ψ*. For instance, the implied AF bounds for the causal risk difference q*_1 − q*_0 are q_1 p_1 − q_0 p_0 − (1 − p_0) ≤ q*_1 − q*_0 ≤ q_1 p_1 + (1 − p_1) − q_0 p_0. The width of the AF bounds for the causal risk difference is constant and equal to 1, regardless of the observed data distribution. This is not the case for other effect measures, e.g., the causal risk ratio.
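To make the AF bounds concrete, the sketch below is a direct transcription of (6) and of the implied risk-difference bounds; the numerical inputs are hypothetical.

```python
def af_bounds_q(q_e: float, p_e: float) -> tuple[float, float]:
    """Assumption-free bounds for the counterfactual probability q*_e, eq. (6)."""
    return q_e * p_e, q_e * p_e + 1.0 - p_e

def af_bounds_rd(p1: float, q0: float, q1: float) -> tuple[float, float]:
    """Implied assumption-free bounds for the causal risk difference q*_1 - q*_0."""
    lo0, hi0 = af_bounds_q(q0, 1.0 - p1)
    lo1, hi1 = af_bounds_q(q1, p1)
    return lo1 - hi0, hi1 - lo0

# Hypothetical observed data: P(E=1)=0.3, P(D=1|E=0)=0.1, P(D=1|E=1)=0.2.
print(af_bounds_rd(0.3, 0.1, 0.2))  # the width is always exactly 1
```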
DV's bounds
We notify the reader that several of the results reviewed below were not presented in the main text of ref. [7], but in their eAppendix. Some of the results were also not stated explicitly, but follow more or less implicitly from the derivations and arguments in that eAppendix. DV proposed to compute bounds for q*_0, q*_1 and ψ* by specifying sensitivity parameters that quantify the maximal strength of association between E and U, and between U and D, respectively. Formally, these parameters are defined as

RR_EUe = max_u p(U = u | E = e) / p(U = u | E = 1 − e)   (7)

and

RR_UDe = max_u p(D = 1 | E = e, U = u) / min_u p(D = 1 | E = e, U = u).

Sjölander [19] showed that, given p_1, q_0 and q_1, the parameters RR_EUe and RR_UDe may take any value in the range [1, ∞). Define

BF_e = RR_EUe × RR_UDe / (RR_EUe + RR_UDe − 1).   (8)

Ding and VanderWeele [7] showed (Section 5 of their eAppendix) that, given RR_EUe and RR_UDe, q*_e is bounded by

q_e {p_e + (1 − p_e)/BF_e} ≤ q*_e ≤ q_e {p_e + (1 − p_e) BF_e}.   (9)

We refer to the bounds in (9) as the "DV bounds." By providing guesses of RR_EUe and RR_UDe for e ∈ {0, 1}, the analyst can use the relation in (9) to compute bounds for q*_0 and q*_1. Notably, the bounds in (9) are not sharp. To see this, note that the upper bound is monotonically increasing in BF_e; as RR_EUe and RR_UDe grow, BF_e and the upper bound for q*_e approach infinity as well. However, since q*_e is a probability it is logically bounded above by 1. In fact, Sjölander [19] showed that the DV bounds in (9) may occasionally be wider than the AF bounds in (6), even when the DV bounds do not exceed the a priori possible range [0, 1]. As noted by Sjölander [19], these problems can easily be solved by replacing the DV bounds with the AF bounds whenever the latter are narrower. We thus obtain the modified bounds

max{q_e p_e, q_e (p_e + (1 − p_e)/BF_e)} ≤ q*_e ≤ min{q_e p_e + 1 − p_e, q_e (p_e + (1 − p_e) BF_e)},

which we refer to as the "DVS bounds." As before, replacing q*_0 and q*_1 in (2) with their upper and lower bounds, respectively, gives a lower bound for ψ*. Similarly, replacing q*_0 and q*_1 in (2) with their lower and upper bounds, respectively, gives an upper bound for ψ*. For instance, the implied DVS bounds for the causal risk difference q*_1 − q*_0 follow by combining the bounds for q*_1 and q*_0 in this way. In many situations, one may observe a positive association between the exposure and the outcome (q_1 > q_0), and one may wonder how much confounding is required to fully "explain away" this association. VanderWeele and Ding [20] labeled this degree of confounding as the "E-value." Formally, VanderWeele and Ding [20] defined the E-value as the smallest common value RR_EU1 = RR_UD0 = RR_UD1 such that the lower bound for q*_1 in (9) is equal to the upper bound for q*_0 in (9). They showed that the E-value thus defined is equal to q_1/q_0 + √{(q_1/q_0)(q_1/q_0 − 1)}.

Ding and VanderWeele [7] also derived bounds that use sensitivity parameters on the risk difference scale (Section 6 of their eAppendix). However, these bounds are restricted to the causal risk difference, and they require that the confounder U is categorical with a known number of levels. In the special case when U is binary, the bounds in (11) and (12) are expressed in terms of the sensitivity parameters RD_EU^binary and RD_UDe^binary, which quantify the exposure-confounder and confounder-outcome associations on the risk difference scale. We refer to the bounds in (11) and (12) as the "binary DV" bounds.
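A small numerical sketch of these quantities is given below; it assumes the bounding-factor expressions for (9) quoted above and the closed-form E-value, and uses hypothetical parameter values.

```python
import math

def bounding_factor(rr_eu: float, rr_ud: float) -> float:
    """Ding-VanderWeele bounding factor BF = RR_EU * RR_UD / (RR_EU + RR_UD - 1)."""
    return rr_eu * rr_ud / (rr_eu + rr_ud - 1.0)

def dvs_bounds_q(q_e: float, p_e: float, rr_eu: float, rr_ud: float) -> tuple[float, float]:
    """DV bounds for q*_e as in (9), truncated by the AF bounds (the 'DVS' modification)."""
    bf = bounding_factor(rr_eu, rr_ud)
    dv_lo = q_e * (p_e + (1.0 - p_e) / bf)
    dv_hi = q_e * (p_e + (1.0 - p_e) * bf)
    af_lo, af_hi = q_e * p_e, q_e * p_e + 1.0 - p_e
    return max(dv_lo, af_lo), min(dv_hi, af_hi)

def e_value(rr_obs: float) -> float:
    """Closed-form E-value on the risk ratio scale: RR + sqrt(RR * (RR - 1))."""
    return rr_obs + math.sqrt(rr_obs * (rr_obs - 1.0))

print(dvs_bounds_q(q_e=0.2, p_e=0.3, rr_eu=2.0, rr_ud=2.0))
print(round(e_value(1.5), 3))  # ~2.366
```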
Novel bounds based on sensitivity parameters on the risk difference scale
Similar to DV we propose to compute bounds for q*_0, q*_1 and ψ* by specifying sensitivity parameters that quantify the maximal strength of association between E and U and between U and D. However, our parameters measure these associations on the risk difference scale rather than the risk ratio scale, and our bounds are not restricted to the causal risk difference and do not require the confounder U to be categorical with a known number of levels. We define

RD_EU = max_u p(E = 1 | U = u) − min_u p(E = 1 | U = u)

and

RD_UDe = max_u p(D = 1 | E = e, U = u) − min_u p(D = 1 | E = e, U = u).

The larger the value of RD_EU, the larger the potential for a strong association between E and U. In this sense, RD_EU measures the "maximal association" between E and U, on the risk difference scale. In a similar sense, RD_UDe measures the maximal conditional association between D and U, given E.

The parameter RD_UDe is an obvious analogue to RR_UDe and a natural generalization of RD_UDe^binary. However, the parameter RD_EU differs from RR_EUe and RD_EU^binary in that it compares conditional probabilities of the form p(E | U). This modification is mainly for technical reasons, to make the calculation of bounds feasible. However, specifying (functions of) p(E | U) may also be easier for a subject matter expert than specifying (functions of) p(U | E), since the direction of causality goes from U to E, not the other way around. We note that RD_EU is not a function of e, since the range of p(E = 1 | U = u) over u equals the range of p(E = 0 | U = u) over u. By definition, RD_EU and RD_UDe are restricted to the range [0, 1]. However, when speculating about plausible values for these parameters it is also important to know whether the observed data further restrict the range of possible parameter values, or whether the parameters restrict each other. As a simple example of how such restrictions may arise, consider the multinomial distribution. Even though the individual cell probabilities in a multinomial distribution can be anywhere between 0 and 1, they cannot simultaneously be anywhere between 0 and 1, since they sum to 1. For instance, knowing that one cell probability is equal to 0.9 restricts the other cell probabilities to the range [0, 0.1]. In contrast, the mean and the variance in the normal distribution do not impose any such restrictions on each other. The following theorem, which we prove in Appendix A, establishes that the observed data distribution does not restrict the sensitivity parameters, and that the sensitivity parameters do not restrict each other; this property is referred to as "variation independence" [21,22].

Theorem 1. RD_EU, RD_UD0 and RD_UD1 are variation independent of each other, and of the observed data distribution parameters p_1, q_0 and q_1.
The following theorem, which we prove in Appendix B, shows how our proposed sensitivity parameters can be used to construct bounds for q*_e.
We refer to the bounds in (13) as the SH bounds. Minimizing and maximizing the expressions in (13) is a constrained optimization problem, which can be solved with standard software, e.g., the optim function in R. However, since unconstrained optimization is computationally simpler than constrained optimization, it may be preferable to transform (b_e, d_e) into unconstrained parameters; one example of such a transformation expresses (b_e, d_e) in terms of unconstrained parameters through the expit function, where expit is the inverse of the logit function and the limits of (b_e, d_e) are given in Theorem 2. As before, replacing q*_0 and q*_1 in (2) with their upper and lower bounds, respectively, gives a lower bound for ψ*. Similarly, replacing q*_0 and q*_1 in (2) with their lower and upper bounds, respectively, gives an upper bound for ψ*. For instance, the implied SH bounds for the causal risk difference q*_1 − q*_0 are obtained in this way. When setting the sensitivity parameters RD_EU and RD_UDe to the extreme value 0 (minimal confounding), the upper and lower limits of b_e and d_e collapse into 0, so that a_e and c_e become equal to 0 as well. Hence, both the lower and upper bounds in (13) become equal to q_e, so that the statistical association ψ becomes equal to the causal effect ψ*, as expected. At the other extreme, when setting RD_EU and RD_UDe to 1 (maximal confounding), the lower and upper limits of b_e collapse into 1 − q_e and the lower and upper limits of d_e collapse into 1 − p_e, so that a_e and c_e become equal to −q_e and −p_e, respectively. After a little algebra the bounds in (13) become equal to the AF bounds in (6). This is also expected, since the AF bounds can never be exceeded, regardless of the amount of confounding.
Using the bounds in (13) we may define the E-value analogously to ref. [20]. Specifically, for a positive association between E and D (q_1 > q_0) we define the E-value as the smallest common value RD_EU = RD_UD0 = RD_UD1 such that the lower bound for q*_1 in (13) is equal to the upper bound for q*_0 in (13). This E-value does not have a simple analytic expression, but it can easily be found numerically.
Both the DVS bounds, the binary DV bounds and the SH bounds require specification of certain sensitivity parameters, intended to quantify the degree of unmeasured confounding. Specifying such parameters becomes challenging, even for a subject matter expert, if the number of parameters is large. Furthermore, one may often want to vary the sensitivity parameters over plausible ranges, and present the bounds as functions of the parameters within these ranges, to convey the sensitivity of the observed associations with various degrees of confounding. In order for such sensitivity analysis to be feasible and transparent, the number of sensitivity parameters has to be small.
Computing the DVS bounds for both q*_0 and q*_1 requires specification of four sensitivity parameters (RR_EU0, RR_EU1, RR_UD0 and RR_UD1), whereas the binary DV bounds and the SH bounds require only three sensitivity parameters (RD_EU^binary, RD_UD0^binary and RD_UD1^binary for the binary DV bounds, and RD_EU, RD_UD0 and RD_UD1 for the SH bounds). This may be viewed as a relative advantage of the binary DV bounds and the SH bounds. However, even three sensitivity parameters may be awkward to handle from a practical perspective. To further reduce the dimensionality of the sensitivity parameter space one may replace RR_EU0 and RR_EU1 with RR_EU = max(RR_EU0, RR_EU1), RR_UD0 and RR_UD1 with RR_UD = max(RR_UD0, RR_UD1), RD_UD0^binary and RD_UD1^binary with RD_UD^binary = max(RD_UD0^binary, RD_UD1^binary), and RD_UD0 and RD_UD1 with RD_UD = max(RD_UD0, RD_UD1). In fact, even though Ding and VanderWeele [7] used the parameters RR_UD0, RR_UD1, RD_UD0^binary and RD_UD1^binary in their derivations (e.g., in Sections 2.2 and 6 of their eAppendix), they replaced these with RR_UD and RD_UD^binary when presenting their final result. The replacements above reduce the number of sensitivity parameters to two, for the DVS bounds, the binary DV bounds and the SH bounds alike. It is easy to show that the bounds remain valid under these replacements, but that they may become wider.
Real data example
We use the same example as in ref. [19] to illustrate the theory. The data for this example are borrowed from the studies of Hammond and Horn [23][24][25], who studied the association between smoking and mortality. These authors carried out several different analyses; in particular, they found an extremely strong association between smoking and lung cancer. We focus here on the association between smoking and total mortality, which is more moderate. We provide R code for the analysis in Appendix C.
Hammond and Horn stratified all their analyses on age. We here re-analyze the data from their oldest stratum, which consists of 29,105 subjects who were between 65 and 69 years old at enrollment into the study. Among these, 6,287 subjects reported that they had never smoked. We consider these 6,287 subjects as being unexposed (E = 0) and the remaining 22,818 subjects as being exposed (E = 1). During a total follow-up period of 44 months, the number of subjects who died (D = 1) was 613 and 2,837 among the unexposed and exposed subjects, respectively. This gives q_0 = 613/6,287 = 0.098, q_1 = 2,837/22,818 = 0.124 and an observed risk difference of q_1 − q_0 = 0.027; hence, smoking appears to elevate the risk of death during follow-up by 2.7 percentage points. However, this statistical association is only adjusted for (e.g., stratified by) age and may thus be partly or fully explained by unmeasured confounding. To appreciate the maximum possible impact of confounding, we compute the AF bounds for the causal risk difference q*_1 − q*_0, which are equal to (−0.708, 0.292). Hence, without making any assumptions about the degree of unmeasured confounding, the causal risk difference can be as small as −0.708 and as large as 0.292.
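The numbers quoted here follow directly from the reported counts; the short sketch below reproduces the observed risk difference and the AF bounds.

```python
# Hammond & Horn oldest stratum, counts quoted in the text.
n0, d0 = 6287, 613      # never-smokers (E=0): subjects, deaths
n1, d1 = 22818, 2837    # smokers (E=1): subjects, deaths

p1 = n1 / (n0 + n1)     # P(E=1)
q0, q1 = d0 / n0, d1 / n1
rd_obs = q1 - q0        # observed risk difference, ~0.027

# Assumption-free bounds for the causal risk difference (eq. (6) applied to both arms).
lower = q1 * p1 - (q0 * (1 - p1) + p1)
upper = q1 * p1 + (1 - p1) - q0 * (1 - p1)
print(round(rd_obs, 3), round(lower, 3), round(upper, 3))  # 0.027 -0.708 0.292
```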
The AF bounds are very wide. To narrow them down we may consider a range of plausible values for the degree of unmeasured confounding. The contour plots in the top row of Figure 1 illustrate the DVS lower (left panel) and upper (right panel) bounds for the causal risk difference, as functions of RR_EU and RR_UD. The contour plots in the bottom row of Figure 1 illustrate the SH lower (left panel) and upper (right panel) bounds for the causal risk difference, as functions of RD_EU and RD_UD. The E-value, in terms of RD_EU and RD_UD, is equal to 0.13; up to this degree of unmeasured confounding the observed data imply the presence of a positive causal effect.
Simulation
We carried out a small simulation study to compare the AF, DVS, binary DV and SH bounds. We considered a categorical confounder U with K levels and generated distributions p(D, E, U) by drawing the probabilities defining the model at random; technically, p(U = u) has a Dirichlet distribution with K parameters all equal to 1, so that the distribution {p(U = u)} is drawn uniformly from the K-dimensional unit simplex [26]. We initially considered a binary confounder, i.e., K = 2. In this special case, p(U = 1) is drawn uniformly over the interval (0, 1). We generated 10,000 distributions p(D, E, U). For each generated distribution we computed the AF, DVS, binary DV and SH bounds for the causal risk difference q*_1 − q*_0, and the AF, DVS and SH bounds for the causal risk ratio q*_1/q*_0. We used the true values of all sensitivity parameters in the computation of the bounds. The simulation was carried out twice, without and with the parameter reduction described in Section 6.
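The uniform draw from the simplex mentioned above corresponds to a flat Dirichlet. A minimal sketch is given below; the uniform draw for the remaining conditional probabilities is an assumption, since the text does not fully specify the conditional model.

```python
import numpy as np

rng = np.random.default_rng(2021)

def random_joint_distribution(k: int) -> dict:
    """Draw one random p(D, E, U): p(U) ~ flat Dirichlet (uniform on the simplex);
    p(E=1|U=u) and p(D=1|E=e,U=u) are drawn uniformly on (0, 1) (an assumption)."""
    return {
        "p_u": rng.dirichlet(np.ones(k)),
        "p_e1_given_u": rng.uniform(size=k),
        "p_d1_given_eu": rng.uniform(size=(2, k)),  # rows: E=0, E=1
    }

dist = random_joint_distribution(k=2)
print(dist["p_u"], dist["p_u"].sum())  # the two probabilities sum to 1
```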
The top row of Figure 2 shows the width distribution of the bounds for the causal risk difference (left panel) and risk ratio (right panel), without parameter reduction. We observe that the DVS, binary DV and SH bounds are generally much narrower than the AF bounds. We further observe that the width distributions of the DVS and SH bounds are quite similar. Thus, with correct specification of their respective sensitivity parameters, the DVS and SH bounds seem to be equally informative. The binary DV bounds are generally narrower than both the DVS bounds and the SH bounds for the causal risk difference. This is not surprising, given that the binary DV bounds use the strong additional information/assumption that U is binary, and are only valid under this assumption.
The bottom row of Figure 2 shows the results with parameter reduction. We observe that the DVS, binary DV and SH bounds are now wider, but still generally much narrower than the AF bounds. We further observe that the SH bounds tend to be narrower than the DVS bounds. This is expected, since the parameter reduction eliminates two sensitivity parameters from the DVS bounds, but only one sensitivity parameter from the SH bounds; one would thus expect the parameter reduction to have a stronger influence on the DVS bounds than on the SH bounds. We finally repeated the simulation for each K in 2, ..., 10. For this part of the simulation we omitted the binary DV bounds, since these only apply when the confounder U is binary. Figure 3 shows the median width of the AF bounds (solid lines), DVS bounds (dashed lines) and SH bounds (dotted lines) for the causal risk difference (left panel) and the causal risk ratio (right panel), when not using the parameter reduction (top row) and when using the parameter reduction (bottom row). We observe that the width of the AF bounds is constant for the causal risk difference (=1) and appears fairly constant for the causal risk ratio as the number of confounder levels increases. However, the width of the DVS and SH bounds tends to increase with the number of confounder levels, both for the causal risk difference and the risk ratio. This is not surprising, given that the sensitivity parameters for these bounds are defined by contrasting the confounder levels that are most extreme, in the sense that these levels maximize/minimize the exposure and outcome prevalences. The more levels there are, the larger the discrepancy between the most extreme values (in a probabilistic sense), and hence the larger the sensitivity parameters and the wider the bounds.
Conclusions
In this article, we have derived bounds for causal effects, using sensitivity parameters that quantify the degree of unmeasured confounding on the risk difference scale. These bounds can subsequently be used to compute an "E-value," that is, a minimal amount of confounding required to explain away an observed statistical association. Thus, our work complements and extends the work of Ding and VanderWeele [7], who derived similar bounds using sensitivity parameters on the risk ratio scale.
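For orientation, the risk-ratio E-value of Ding and VanderWeele has a simple closed form; a minimal sketch follows (the formula is the standard risk-ratio one, not derived in this article, and the bounds proposed here would play the analogous role on the risk difference scale).

```python
import math

def e_value(rr):
    """Risk-ratio E-value: the minimum strength of association, on the risk-ratio
    scale, that an unmeasured confounder would need with both exposure and outcome
    to fully explain away an observed risk ratio rr."""
    rr = max(rr, 1.0 / rr)            # symmetrise protective associations
    return rr + math.sqrt(rr * (rr - 1.0))

print(e_value(1.8))   # 3.0: confounding associations of RR ~3 would be required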
Our work has important practical implications. When the target causal effect is a risk difference, it may be more natural to use sensitivity parameters on the risk difference scale than on the risk ratio scale. Furthermore, some subject matter experts may find it more intuitive to speculate about the degree of unmeasured confounding on the risk difference scale, regardless of the chosen scale for the target causal effect. Our proposed bounds are functions of the observed data distribution parameters (p_1, q_0, q_1). In practice, these are not known but have to be estimated from data, which gives estimated bounds. An important extension would thus be to develop methods that take the sampling variability of the estimated bounds into account. This could either be done with analytic arguments, as in ref. [3], or with bootstrap simulation, as in refs [5,6].
From (4), minimizing/maximizing q*_e is equivalent to minimizing/maximizing E(Δ_e U); using (14), (15), (19), (20) and Bayes' rule, this in turn is equivalent to maximizing/minimizing the covariance cov(Δ, δ). The claim then follows from Theorem 1 of [27] together with (19). Considering first the constraint in (22), let x_1 and x_2 be the solutions to equations (16) and (17).
|
2021-09-15T13:08:35.852Z
|
2021-01-01T00:00:00.000
|
{
"year": 2021,
"sha1": "98baddd7fa62afdb34d14e09aec07b9dd4ea0f79",
"oa_license": "CCBY",
"oa_url": "https://www.degruyter.com/document/doi/10.1515/jci-2021-0024/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "0515aa262e21db97cda421979805ff6869b3d6e8",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
}
|
8517764
|
pes2o/s2orc
|
v3-fos-license
|
Ethacrynic Acid Inhibits Sphingosylphosphorylcholine-Induced Keratin 8 Phosphorylation and Reorganization via Transglutaminase-2 Inhibition
Sphingosylphosphorylcholine (SPC) is significantly increased in the malignant ascites of tumor patients and induces perinuclear reorganization of keratin 8 (K8) filaments in PANC-1 cells. The reorganization contributes to the viscoelasticity of metastatic cancer cells, resulting in increased migration. Recently, we reported that transglutaminase-2 (Tgase-2) is involved in SPC-induced K8 phosphorylation and reorganization. However, the effects of Tgase-2 inhibitors on SPC-induced K8 phosphorylation and reorganization have not been clearly studied. We found that ethacrynic acid (ECA) concentration-dependently inhibited Tgase-2. Therefore, we examined the effects of ECA on SPC-induced K8 phosphorylation and reorganization. ECA concentration-dependently suppressed the SPC-induced phosphorylation and perinuclear reorganization of K8. ECA also suppressed the SPC-induced migration and invasion. SPC induced JNK activation through Tgase-2 expression, and ECA suppressed the activation and expression of JNK in PANC-1 cells. These results suggested that ECA might be useful to control Tgase-2-dependent metastasis of cancer cells such as pancreatic and lung cancers.
INTRODUCTION
Metastasis is the ability of cancer cells to spread from its origin to distant locations within the body and to continue its growth (Valastyan and Weinberg, 2011). The high mortality rates associated with cancer are caused by the metastatic spread of tumor cells away from the site of their origin (Park et al., 2013a). In fact, metastases are the cause of 90% of cancer deaths (Steeg, 2006). Therefore, several researchers are trying to develop new anti-metastatic compounds. Recently, novel approaches have been proposed to characterize the properties of metastatic cancer cells, such as cell elasticity or mechanical properties (Beil et al., 2003;Suresh, 2007). The clinical importance of viscoelasticity or cell stiffness was reported by Cross et al. (2007). In particular, the importance of cell elasticity or viscoelasticity in several metastatic cancer cell lines has also been reported (Beil et al., 2003;Rolli et al., 2010). For example, sphingosylphosphorylcholine (SPC)-induced keratin phosphorylation and reorganization of human epithelial pancreatic cancer cells combined with the resulting changes in viscoelasticity of the cells have been suggested as a possible pathway that facilitates the migration and increased metastatic competence of pancreatic tumor cells (Beil et al., 2003;Rolli et al., 2010).
Tgase-2 is a multifunctional protein. In addition to catalyzing Ca2+-dependent transamidation reactions, it can bind
Ethacrynic acid (ECA) is a diuretic that inhibits cellular ion flux, leading to an increase in intracellular Na concentrations (Fig. 1A) (Vivas and Chiaraviglio, 1989; Li and El-Mallakh, 2004). ECA is rarely used as a diuretic because other potent agents have been introduced. Nevertheless, ECA still has a place in the modern practice of medicine (Wall et al., 2003; Han et al., 2005). ECA has shown new pharmacological activities independent of its diuretic activity. For example, ECA suppressed all-trans retinoic acid-induced monocyte chemoattractant protein-1 production and is known to reduce retinoid-induced ear edema in mice (Kim et al., 2010).
In this study, we found that ECA inhibited Tgase-2 and confirmed the involvement of Tgase-2 in SPC-induced K8 phosphorylation and reorganization. Our finding suggested the possibility that ECA might be used as an antimetastatic drug.
Cell culture
The human pancreatic carcinoma cell line, PANC-1 (ATCC CRL 1469), was obtained from the American Type Culture Collection (ATCC, Rockville, MD, USA). The cells were cultured in Dulbecco's modified Eagle's medium (DMEM) supplemented with L-glutamine (2 mM), penicillin-streptomycin (10,000 IU/ml and 10,000 μg/ml, respectively), and sodium pyruvate (1 mM). The PANC-1 cells were maintained in medium containing 10% (v/v) fetal calf serum (FCS). The cells were incubated at 37°C in a humidified atmosphere containing 10% CO2. The cells were washed twice in serum-free DMEM and incubated in serum-free DMEM 18 hours before the respective experiments.
In vitro Tgase-2 inhibition assay
The inhibitory effect of each compound was determined by measuring the incorporation of [1,4-14C] putrescine into succinylated casein (Park et al., 2013a). Following 10 min of preincubation of 2.5 milliunits (mU) of Tgase-2 from the guinea pig liver with each concentration of chemicals in 0.1 ml of reaction buffer solution without 10 mM CaCl2, we added 0.4 ml of substrate solution containing 5 mg of succinylated casein and 100 nCi of [1,4-14C] putrescine. After further incubation at 37°C for 1 h, the reaction was terminated by the addition of 4 ml of cold (4°C) 7.5% (w/v) TCA. TCA-insoluble precipitates were collected on GF/A glass fiber filters (Millipore Co.), washed with cold 5% (w/v) TCA, dried and assessed for incorporation of radiolabel using a scintillation counter (Beckman Coulter Co.). The resultant data represent the means of three independent experiments.
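The readout of this assay reduces to simple arithmetic on the scintillation counts; the sketch below (Python, with made-up counts and concentrations purely for illustration, not data from this study) shows how percent inhibition relative to the uninhibited control is obtained.

```python
import numpy as np

# Hypothetical counts per minute of 14C-putrescine incorporated into casein.
background_cpm = 120.0          # no-enzyme blank
control_cpm = 5200.0            # Tgase-2 without inhibitor
eca_cpm = {5: 4300.0, 10: 3100.0, 20: 1900.0, 40: 950.0}  # ECA (uM) -> cpm

def percent_inhibition(sample_cpm, control_cpm, background_cpm):
    """Inhibition relative to the uninhibited control after blank subtraction."""
    activity = (sample_cpm - background_cpm) / (control_cpm - background_cpm)
    return 100.0 * (1.0 - activity)

for conc, cpm in eca_cpm.items():
    pct = percent_inhibition(cpm, control_cpm, background_cpm)
    print(f"{conc} uM ECA: {pct:.1f}% inhibition")
```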
Western blot
The PANC-1 cells were harvested and lysed in 50 mM Tris-Cl (pH 7.5), 150 mM NaCl, 1% Triton X-100, 1% sodium deoxycholate, 0.1% SDS, 2 mM EDTA, sterile solution and protease inhibitors (Gendepot, Barker, TX, USA) (Park et al., 2011). The protein concentrations of the supernatants were determined using Coomassie Plus (Pierce Biotechnology Inc., Rockford, IL, USA), as recommended by the manufacturer. The protein lysates were loaded onto a precast 4% to 12% polyacrylamide gradient gel (Invitrogen, Carlsbad, CA, USA). The proteins were separated by SDS-PAGE and transferred to a polyvinylidene difluoride membrane (Pall, Pensacola, FL, USA). The membranes were blocked in 5% nonfat milk and probed with the appropriate primary antibodies such as anti-Tgase-2, keratin-8, and JNK. After incubation with the primary antibody, the membranes were washed with TBS + 0.1% Tween 20 and incubated with the appropriate peroxidase-conjugated secondary antibodies, followed by development with a chemiluminescence substrate (Pierce Biotechnology Inc., Rockford, IL, USA) and exposure to X-ray film (Kodak, Rochester, NY, USA).
Confocal microscopy
PANC-1 cells were grown on coverslips and fixed 24 hours later with fresh 4% paraformaldehyde, pH 7.0, for 10 min at room temperature. Fixed cells were permeabilized with a 10 min wash in 0.1% Triton X-100 at room temperature, followed by several washes in PBS with 3% bovine serum albumin (PBS/BSA). The phosphospecific primary antibody detecting K8 Ser431 (Abcam, Cambridge, MA, USA) was incubated with the coverslips overnight at 4°C (Park et al., 2011). Excess antibody was removed with four washes in PBS/BSA. Species-specific secondary antibodies, goat anti-rabbit IgG antibody (Alexa Fluor 488, 1:500, Molecular Probes) or goat anti-mouse IgG antibodies (Alexa Fluor 594, 1:500, Molecular Probes), were then reacted with the coverslips for 1 hour at room temperature, followed by four washes in PBS/BSA. The final samples were mounted onto slides and visualized using a Zeiss Axiophot confocal microscope.
Migration
Migration of PANC-1 cells through 8-μm size-limited pores was assessed in response to SPC according to Park's report (Park et al., 2011). PANC-1 cells (5×10⁴ cells per well) were treated with the indicated concentrations of SPC for 1 hour. PANC-1 cells plated in the upper chamber were allowed to migrate for 5 hours to establish the temporal kinetics of migration. The transwell membranes were then fixed and stained with a Diff-Quik® staining kit (Kobe, Japan). Membranes were removed from transwells, and cells on the under surface of the Millipore membrane were counted under a light microscope (average of 5 semi-random non-overlapping fields at 200× magnification). All treatments were performed in triplicate wells.
Invasion assay using transwell plates
Cell invasion was studied using Matrigel-coated (0.5 μg/ml) transwell inserts, as described previously (Park et al., 2013a). Trypsinised cells were suspended in serum-free medium, and 2×10⁵ cells were added to the upper chamber of the transwell inserts. Medium with 10% serum was added to the lower chamber. After a 16 h incubation with PANC-1 cells, the non-migrated cells on the upper surface of the membrane were removed, and the cells on the lower surface were stained using the Hema 3 staining system (Fisher Scientific, Houston, TX, USA), photographed (200× magnification) and counted in 10 randomly selected fields. All experiments were repeated at least three times with two replicates each.
Statistical analysis
The data are expressed as the mean ± S.E.M. of at least three independent experiments performed in triplicate. A p value <0.05 was considered significant.
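A minimal sketch of these summary statistics (Python with SciPy assumed; the values are placeholders, not data from this study):

```python
import numpy as np
from scipy import stats

# Hypothetical migrated-cell counts from three independent experiments.
control = np.array([42.0, 38.0, 45.0])
spc = np.array([96.0, 88.0, 102.0])
spc_plus_eca = np.array([55.0, 61.0, 49.0])

def mean_sem(x):
    """Mean and standard error of the mean."""
    return x.mean(), x.std(ddof=1) / np.sqrt(len(x))

for name, x in [("control", control), ("SPC", spc), ("SPC+ECA", spc_plus_eca)]:
    m, sem = mean_sem(x)
    print(f"{name}: {m:.1f} ± {sem:.1f}")

# Two-sided t-test; p < 0.05 is taken as significant.
t, p = stats.ttest_ind(spc, spc_plus_eca)
print(f"SPC vs SPC+ECA: t = {t:.2f}, p = {p:.4f}")
```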
ECA inhibits transglutaminase-2
We have shown that Tgase-2 is involved in SPC-induced K8 phosphorylation and reorganization by JNK activation, leading to migration of metastatic pancreatic cancer cells (Park et al., 2011). Therefore, a Tgase-2 inhibitor may be effective for metastasis treatment. To obtain a Tgase-2 inhibitor, we first screened a single-compound library comprised of used drugs and natural extracts. We found that ECA has a concentration-dependent Tgase-2 inhibitory effect (Fig. 1B).
ECA suppressed the SPC-induced K8 phosphorylation and reorganization in PANC-1 cells
In a previous report, cystamine (CTM), a well-known Tgase inhibitor, suppressed the SPC-induced K8 phosphorylation and reorganization (Park et al., 2011). We therefore examined whether ECA, a newly found Tgase-2 inhibitor, could suppress the SPC-induced K8 phosphorylation and reorganization. SPC induced phosphorylation of serine 431 of K8, and ECA concentration-dependently inhibited the SPC-induced K8 phosphorylation (Fig. 2A). SPC also induced ring-like perinuclear reorganization of K8 in PANC-1 cells, and Tgase-2 is involved in this event. ECA inhibited the SPC-induced perinuclear reorganization of K8 (Fig. 2B).
ECA suppressed the SPC-induced migration and invasion of PANC-1 cells
The expected final outcome of SPC-induced reorganization of the keratin network in PANC-1 cells is increased migratory properties (Beil et al., 2003). In a previous report, we therefore demonstrated, using CTM and gene silencing, that Tgase-2 is involved in the SPC-induced migration of PANC-1 cells (Park et al., 2011). SPC treatment induced increased migration and invasion of PANC-1 cells (Fig. 3). ECA concentration-dependently inhibited the SPC-induced migration and invasion of PANC-1 cells (Fig. 3). No cytotoxic effects of ECA were observed in our experimental setting of migration and invasion.
ECA suppressed the SPC-induced JNK activation and expression
Tgase-2 is involved in SPC-induced K8 phosphorylation via JNK activation (Park et al., 2011). We therefore examined whether ECA suppressed the Tgase-2-dependent JNK activation. SPC treatment increased the phosphorylation of JNK, and ECA treatment suppressed the phosphorylation and expression of JNK (Fig. 4A).
DISCUSSION
Metastatic cancer cells are reported to have unique mechanical characteristics, such as soft stiffness and elasticity (Cross et al., 2007). Keratins are one of the main intermediate filaments that control the mechanical characteristics of cells (Bordeleau et al., 2008). This study focused on ECA, a Tgase-2 inhibitor that modulates the SPC-induced keratin phosphorylation and reorganization in PANC-1 cells, which controls the viscoelasticity and migratory properties of cancer cells.
MAP kinase is involved in keratin reorganization through the phosphorylation of keratin (Ku et al., 2002; Park et al., 2011; Busch et al., 2012), but there are few studies on other proteins affecting keratin reorganization, except plectin (Cheng et al., 2008). Recently, we reported that Tgase-2 is involved in SPC-induced keratin reorganization via JNK activation (Park et al., 2011). Tgase-2 mediates the metastasis and chemoresistance of several cancer cells and is a new and interesting target (Kim, 2011). However, effective Tgase-2 inhibitors are not yet available for clinical application, although several approaches have revealed promising Tgase-2 inhibitors (Lee et al., 2013; Park et al., 2013a). We therefore examined the inhibitory effects of some drugs on Tgase-2, since drugs can be readily applied to cancer treatment. We found that ECA concentration-dependently inhibited Tgase-2 (Fig. 1B). The inhibitory mechanism of ECA against Tgase-2 is not clear, but the molecular structure of ECA contains an exo-methylene group conjugated to a carbonyl group (Fig. 1A). This electrophilic "enone" moiety can alkylate thiol groups in proteins or glutathione via a Michael-type addition reaction (Han et al., 2005). Interestingly, one of the key residues of Tgase-2 is the cysteine residue at amino acid 277 (Lee et al., 1993). Thus, ECA might modify this critical thiol residue at position 277 of Tgase-2.
The results showed that ECA suppressed the phosphorylation of K8 and perinuclear keratin reorganization (Fig. 2). These observations confirmed that Tgase-2 is involved in SPC-induced K8 phosphorylation and perinuclear reorganization of K8 (Park et al., 2011).
SPC-induced keratin phosphorylation and reorganization led to increased migration of PANC-1 cells, and Tgase-2 inhibition by ECA suppressed the SPC-induced migration and invasion (Fig. 3). ECA is known to have diverse effects, such as glutathione-S-transferase (GST) inhibition and thiol-adduct formation, and these diverse effects might also be involved in the inhibition of migration and invasion. However, to our knowledge, we could not find reports about suppression of cancer cell migration via GST inhibition. Thiol-adduct formation by ECA might instead contribute to the inhibition of Tgase-2, since Tgase-2 has a cysteine residue at position 277 in its active site. In a previous paper, we showed that SPC induced migration of PANC-1 cells via Tgase-2 expression (Park et al., 2011). Therefore, ECA might suppress the SPC-induced migration by inhibition of Tgase-2. Tgase-2 is involved in SPC-induced JNK activation, and ECA, a Tgase-2 inhibitor, suppressed the JNK activation in PANC-1 cells (Fig. 4A). Notably, ECA also suppressed JNK expression (Fig. 4A). These results suggested that ECA inhibited JNK expression via Tgase-2 inhibition.
Our findings confirmed the role of ECA as a Tgase-2 inhibitor in the suppression of SPC-induced K8 phosphorylation and reorganization in PANC-1 cells via JNK (Fig. 4B). Therefore, ECA might be helpful in modulating the Tgase-2-mediated metastasis of cancer cells such as pancreatic and lung cancers.
|
2017-10-03T09:09:10.604Z
|
2013-09-30T00:00:00.000
|
{
"year": 2013,
"sha1": "051b4cf9a985681b620a13c3345a4c709ce06e69",
"oa_license": "CCBYNC",
"oa_url": "http://society.kisti.re.kr/sv/SV_svpsbs03V.do?cn1=JAKO201330258586823&method=download",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "68c677c614bc540308c4493bde70f1fa2275a410",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
204052406
|
pes2o/s2orc
|
v3-fos-license
|
Large osteochondroma of the coronoid process of left mandible: clinical and imaging findings of a rare case
Osteochondroma (OC) is the most common benign tumor in the axial and appendicular skeleton. It accounts for 35-50% of all benign bone tumors and 8-15% of all primary bone tumors. The World Health Organization describes the OC as a cartilage-capped bony protrusion on the external surface of bone.1 Although Von Langenbeck defined coronoid process hyperplasia in 1853, Jacob first described OC of the coronoid process forming a pseudojoint between the coronoid process and the zygoma in 1899; it is now known as Jacob disease.2
Introduction
Osteochondroma (OC) is the most common benign tumor in the axial and appendicular skeleton. It accounts for 35-50% of all benign bone tumors and 8-15% of all primary bone tumors. The World Health Organization describes the OC as a cartilage-capped bony protrusion on the external surface of bone. 1 Although Von Langenbeck defined coronoid process hyperplasia in 1853, Jacob first described OC of the coronoid process forming a pseudojoint between the coronoid process and the zygoma in 1899; it is now known as Jacob disease. 2 This benign skeletal tumor is frequently located at the level of the metaphysis of long bones and occurs rarely in the oral and maxillofacial region. Within the facial bones, OC is commonly seen in the coronoid and condylar processes of the mandible. 3 The tumour grows slowly, and it causes progressive limitation of mouth opening and facial asymmetry when it is located in the coronoid process. 4 Other symptoms are remodeling, destruction or expansion of the zygoma and/or zygomatic arch, or pain with mouth opening. 5 Clinical examination, radiography and cone beam computed tomography (CBCT) are useful diagnostic tools for OC. In the radiologic diagnosis, panoramic radiography and the Waters' projection are useful, as they can show an expanded coronoid process. 6 CBCT is the gold standard for exact diagnosis because it provides anatomical details. Facial asymmetry and limitation of mouth opening are determinative of the need for surgical removal of the lesion. Excision of the coronoid process together with the tumor is the definitive treatment. 7 After excision, the recurrence rate is very low (2%). 8 In this case report, we present details of a patient with OC of the coronoid process presenting with limited mouth opening.
Case presentation
A 69-year-old male patient was referred to the radiology clinic of Atatürk University with a complaint of limited mouth opening, with no evidence of pain or joint sounds. There was no history of facial trauma, and the medical history was not significant. On clinical examination, limitation of mouth opening and deviation to the left side during mouth opening were found (Figure 1). Panoramic radiography showed an expanded, irregularly shaped radiopaque lesion on the left coronoid process (Figure 2). A CBCT scan was performed. Three-dimensional (3D) reconstruction of the CBCT images showed a mushroom-shaped hyperplastic enlargement of the coronoid process causing resorption under the bones of the skull base (Figure 3). The size of the lesion was measured as 33.1 × 25.5 × 30.2 mm. The patient was referred to the surgery clinic with a prediagnosis of osteochondroma.
Discussion
Although OC is rarely seen in the facial bones and skull base, it has been reported in the maxillary sinus and in different parts of the mandible, such as the condyle, ramus, body, and symphyseal region. 8,9 It is a stemless lesion composed of bone coated with a cartilaginous capsule that is commonly seen in the coronoid and condylar processes within the facial bones. 3 There are different hypotheses regarding the etiology, but none of them has been proved. The most popular etiologic hypotheses are previous childhood trauma and hyperactivity of the temporalis muscle. Although OC is known to arise from metaplastic cartilage formed by the periosteum, it has been reported that excessive stress caused by tension of the temporalis muscle might be the reason for this condition. 10,11 This may explain the tendency to grow in the coronoid process of the mandible.
Restricted mouth opening is the main clinical symptom of OC. Lateral deviation toward the affected side is frequently found. Pain and disocclusion are uncommon clinical symptoms. 12 In our case, there were no complaints or other clinical findings apart from the limitation of mouth opening.
In the radiologic diagnosis, panoramic radiography and the Waters' projection are useful. Although panoramic radiography is a widely used screening modality, its interpretation may be difficult due to superimposition of the bone lesion and the posterior part of the maxillary bone. Three-dimensional reconstruction by CBCT is the gold standard test. CBCT provides anatomical details and visualizes abnormalities of tissues, and it clarifies the shape, composition and location of the lesion and its relationships with neighboring structures. CBCT is essential to determine the exact extent of the lesion. 7,13 Totsuka et al. 14 reported that computed tomography (CT) demonstrated enlargement of the coronoid process and deformity of the surrounding bones; additionally, it revealed the shape of the enlarged coronoid process and that of the displaced surrounding bones. Kerscher et al. 12 reported that only computed tomography could display the exact shape of the enlarged coronoid process and the space between the coronoid process and the surrounding bones. 12 Besides this, various authors have reported the usefulness of CT imaging. 7,8 For this reason, in the present case we could not reach the diagnosis of OC based on panoramic radiography alone. We also used CBCT imaging to confirm the presence of a mass with the same morphological characteristics as those described in previous reports. CBCT allowed us to determine the size of the tumor and its relationship to nearby structures.
While CBCT is the gold standard for preoperative diagnosis, histopathological analysis is required for the final diagnosis. Removal of the enlarged portion of the coronoid process is the definitive treatment, and recurrence after surgery is rare. 8 OC should be distinguished from benign conditions such as condylar hyperplasia, osteoma, chondroma, chondroblastoma, giant cell tumour and benign osteoblastoma, and from malignant lesions such as fibrosarcoma and chondrosarcoma. 15 Consequently, clinical features such as occurrence at an advanced age, rapid growth and invasion of the surrounding structures may be suggestive of malignant change. A careful assessment of the patient's history supplies important information for the diagnosis and treatment of OC. CBCT is a very useful method for the diagnosis of lesions of the coronoid process and is essential for completing the diagnosis, for determining the size of the tumor and its relationship to nearby structures, and for surgical planning.
|
2019-01-24T14:05:50.757Z
|
2017-08-01T00:00:00.000
|
{
"year": 2017,
"sha1": "d236f2147da80ec0d42e7571d9ce95eb9042f2c2",
"oa_license": "CCBYNC",
"oa_url": "https://medcraveonline.com/MOJCR/MOJCR-07-00193.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "615136ee3eea3a8a1b9201e85ab69187442839c0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
3568525
|
pes2o/s2orc
|
v3-fos-license
|
Optimal decay estimates for the general solution to a class of semi-linear dissipative hyperbolic equations
We consider a class of semi-linear dissipative hyperbolic equations in which the operator associated to the linear part has a nontrivial kernel. Under appropriate assumptions on the nonlinear term, we prove that all solutions decay to 0, as t → +∞, at least as fast as a suitable negative power of t. Moreover, we prove that this decay rate is optimal in the sense that there exists a nonempty open set of initial data for which the corresponding solutions decay exactly as that negative power of t. Our results are stated and proved in an abstract Hilbert space setting, and then applied to partial differential equations.
Introduction
The present work has its origin in the search for decay estimates of solutions to some evolution equations of the general form u ′′ (t) + u ′ (t) + Au(t) + f (u(t)) = 0, (1.1) where H is a real Hilbert space, A is a nonnegative self-adjoint linear operator on H with dense domain, and f is a nonlinearity tangent to 0 at the origin.
When f ≡ 0, then for rather general classes of strongly positive operators A it is known that all solutions decay to 0 (as t → +∞) exponentially in the energy norm. Therefore, by perturbation theory it is reasonable to expect that also all solutions of (1.1) which decay to 0 have an exponential decay rate. The situation is different when A has a non-trivial kernel. In this case solutions tend to 0 if f fulfils suitable sign conditions, but we do not expect all solutions to have an exponential decay rate. Let us consider for example the hyperbolic equation u_tt + u_t − Δu + |u|^p u = 0, (1.2) with homogeneous Neumann boundary conditions in a bounded domain Ω. In [12], by relying on the so-called Lojasiewicz gradient inequality [15,16], it was established that, for any sufficiently small integer p, all solutions of this problem tend to 0 in the energy norm at least as fast as t^(−1/p). Showing the optimality of this estimate means exhibiting a "slow solution", namely a solution decaying exactly as t^(−1/p).
The existence of slow solutions for the Neumann problem was proved in [12] in the special case p = 2. The main idea is that each solution v(t) to the ordinary differential equation v'' + v' + |v|^p v = 0 (1.3) corresponds to the spatially homogeneous solution u(t, x) := v(t) of (1.2), so that it is enough to exhibit a family of solutions of (1.3) decaying exactly as t^(−1/2). It was later shown in [10] that actually any solution of (1.2) tends to 0 either exponentially or exactly as t^(−1/p). This is the so-called "slow-fast alternative". Moreover at this occasion the set of initial data producing exponentially decaying solutions was shown to be closed with empty interior. In particular the set of "slow" solutions corresponds to an open set of initial data, but apart from the spatially homogeneous solutions no explicit condition on the initial data was found in [10]. The proofs of these results seem to exploit in an essential way the fact that the kernel of the linear part (in this case the set of constant functions) is an invariant space for (1.2). Without this assumption, both the alternative and the optimality of decay rates remained open problems.
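The t^(−1/p) rate for the model ODE (1.3) can be checked numerically; the sketch below (Python with SciPy assumed; initial data chosen arbitrarily) integrates v'' + v' + |v|^p v = 0 for p = 2 and compares |v(t)| with t^(−1/2).

```python
import numpy as np
from scipy.integrate import solve_ivp

p = 2.0  # exponent of the nonlinearity |v|^p v

def rhs(t, y):
    v, w = y                         # w = v'
    return [w, -w - abs(v) ** p * v]

sol = solve_ivp(rhs, (0.0, 1.0e4), [1.0, 0.0],
                t_eval=np.logspace(0, 4, 9), rtol=1e-10, atol=1e-12)

for t, v in zip(sol.t, sol.y[0]):
    print(f"t = {t:10.1f}   |v(t)| = {abs(v):.3e}   t^(-1/p) = {t ** (-1.0 / p):.3e}")
# Generically |v(t)| decays like t^(-1/p), the "slow" rate; exponentially
# decaying solutions of (1.3) are the exception.
```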
Indeed let us consider, as a model case, the hyperbolic equation (1.4) with homogeneous Dirichlet boundary conditions (here λ_1 denotes the first eigenvalue of −Δ in H^1_0(Ω)). Now the kernel of the operator is the first eigenspace, which is not invariant under the nonlinear term, and even the existence of a slow solution decaying exactly as t^(−1/p) was unknown until now.
In this paper we consider a general evolution equation of type (1.1), with f a gradient operator satisfying some regularity and structure conditions. Our aim is twofold. To begin with, in Theorem 2.2 we establish a general upper estimate of the energy, valid for all solutions. This estimate is proved in a quite general context through a modified Lyapunov functional, without any analyticity assumption on f . Then in Theorem 2.3 we prove the existence of slow solutions. This is the main result of this paper.
Our abstract theory applies to both (1.2) and (1.4). This shows in particular that the natural upper energy estimate for solutions of these problems is in general optimal, thereby settling an open problem raised in [12] and not solved, even for the special case (1.4), by the results of [10].
The problem of slow solutions has already been considered in the parabolic setting, and in particular in the case of the equation (1.5) with homogeneous Neumann boundary conditions in a bounded domain Ω, and in the case of the equation (1.6) with homogeneous Dirichlet boundary conditions. In the case of (1.5), an easy application of the maximum principle shows that all solutions decay to 0 in L^∞(Ω) at least as fast as t^(−1/p) as t → +∞. The same property is true for (1.6) but is more delicate to establish (see for example [13]). With Neumann boundary conditions, the optimality of this decay rate can be confirmed by looking at spatially homogeneous solutions, as in the hyperbolic setting. With Dirichlet boundary conditions, a comparison with suitable sub-solutions proves that all solutions with nonnegative initial data are actually slow solutions (see [13] for the details), which verifies the optimality of the upper estimate also in this second case. Moreover, in the case of Neumann boundary conditions, the slow-fast alternative is known (see [2]), fast solutions are known to be "exceptional", and some explicit classes of slow solutions with a sign-changing initial datum were found in [3]. On the contrary, in the case of Dirichlet boundary conditions, even the slow-fast alternative is presently an open problem.
All results for these parabolic problems rely on the existence of special invariant sets, or on comparison arguments. Neither tool extends easily to second order equations of the general form (1.1). For this reason, in this paper we follow a different path. The main idea is to look for slow solutions in the place where they are more likely to be, namely close to the kernel of A. Thus, under the assumption that |f(u)| ∼ |u|^(p+1), we look for solutions of (1.1) satisfying (1.7) for a suitable constant C. Roughly speaking, under this condition the term Au(t) in (1.1) can be neglected, and the dynamical behavior is decided by the nonlinearity only. Thus we are in a situation analogous to the ordinary differential equation (1.3), for which the existence of slow solutions can be easily established. In order to prove (1.7), one is naturally led to consider the quotient Q_p(t), which seems to be a p-extension of the Dirichlet quotient (the same quantity with p = 0), well known in many questions concerning parabolic problems (see for example the classical papers [1,5] or the more recent [14]). The Dirichlet quotient is nonincreasing in the case of linear homogeneous parabolic equations. This could naively lead one to guess the monotonicity, or at least the boundedness, of Q_p(t) also in the case of the second order problem (1.1). Of course this is not true as stated, but it is true for a hyperbolic version of Q_p(t) with a kinetic term in the numerator. Thus we obtain the energy G(t) defined by (3.22), which in turn we perturb by adding a mixing term, in such a way that the final energy G(t) given by (3.23) satisfies a reasonable differential inequality. This strategy is inspired by similar modified Dirichlet quotients introduced in [6], and then largely exploited in [7,8] in the context of Kirchhoff equations. In those papers the setting is different (quasi-linear instead of semi-linear), the goal is different (in [6] the main problem is the existence of global solutions), but the strategy is the same (comparing solutions of partial differential equations with solutions of ordinary differential equations), thus similar tools can be applied.
Our method produces not only some special slow solution, but an open set in the basic energy space. This is the first step towards proving that slow solutions are in some sense generic, in accordance with the general idea that the slowest decay rate is dominant, and faster solutions are somewhat atypical. We plan to consider this issue in a future research. This paper is organized as follows. In section 2 we clarify the functional setting, we recall the notion of weak solutions, and we state our main abstract results. In section 3 we prove them. In section 4 we present some applications of our theory to dissipative hyperbolic equations.
Functional setting and main abstract results
We consider the semilinear abstract second order equation We always assume that H is a Hilbert space, and A is a self-adjoint linear operator on H with dense domain D(A). We assume that A is nonnegative, namely Au, u ≥ 0 for every u ∈ D(A), so that for every α ≥ 0 the power A α u is defined provided that u lies in a suitable domain D(A α ), which is itself a Hilbert space with norm We assume that F : D(A 1/2 ) → R. When we write ∇F (u), we mean that there exists a function ∇F : 3) The existence of ∇F (u) in the sense of (2.3) is enough to guarantee the continuity of F with respect to the norm of D(A 1/2 ). Moreover, for every u ∈ C 1 ([0, +∞); H) ∩ C 0 ([0, +∞); D(A 1/2 )) we have that the function t → F (u(t)) is of class C 1 , and its time-derivative can be computed with the usual chain rule We always assume that ∇F : D(A 1/2 ) → H is locally Lipschitz continuous, namely for every u and v in D(A 1/2 ), for a suitable function L : R 2 → R which is bounded on bounded sets. Under these hypotheses, one obtains the following result concerning global existence, regularity and derivatives of energies. In addition the functions are of class C 1 , and their time-derivative is given by The first main result of this paper is an upper energy estimate, valid for all weak solutions of (2.1).
Theorem 2.2 (Upper decay estimate for weak solutions)
Let us assume that (Hp1) H is a Hilbert space, and A is a self-adjoint nonnegative operator on H with dense domain D(A), (Hp4) ∇F is locally Lipschitz continuous in the sense of (2.4), (Hp5) there exists a constant K > 0 such that (Hp6) there exist p > 0, and a function R 1 : R → R which is bounded on bounded sets, such that Let (u 0 , u 1 ) ∈ D(A 1/2 )×H, and let u(t) be the unique global weak solution of problem (2.1)-(2.2) provided by Proposition 2.1.
Then there exist constants M 1 and M 2 such that Our second main result is the existence of an open set of slow solutions, namely solutions for which (2.11) is optimal. Theorem 2.3 (Existence of slow solutions) Let us assume that hypotheses (Hp1) through (Hp4) of Theorem 2.2 are satisfied. In addition, let us assume that
13)
and that there exist real numbers ρ > 0, R > 0, α > 0 such that Then there exist a nonempty open set S ⊆ D(A 1/2 ) × H and a constant M 3 such that, for every (u 0 , u 1 ) ∈ S, the unique global solution of problem (2.1)-(2.2) provided by Proposition 2.1 satisfies Let P : H → ker A denote the orthogonal projection on ker A, and let Q = I − P denote the orthogonal projection on R(A). From (2.13) and (2.10) it follow that Since |u| 2 = |P u| 2 + |Qu| 2 for every u ∈ H, comparing with (2.15) we obtain that there exists a constant M 4 such that In other words, the range component decays faster, and the slow decay of u(t) is due to its component with respect to ker A. This extends to the general abstract setting what previously observed in the special case studied in [10].
Proof of Proposition 2.1
Local existence We consider the Hilbert space H := D(A 1/2 ) × H, endowed with the norm defined by and the operator It is easy to check that A is a skew-adjoint linear operator, hence in particular a maximal monotone linear operator on H with dense domain D(A), and F : H → H is a locally Lipschitz continuous operator. Introducing U(t) := (u(t), u ′ (t)), one can rewrite problem (2.1)-(2.2) in the form [4]. More precisely, we obtain the following.
• (Continuation) The local solution can be continued to a solution defined in a maximal interval [0, T * ), with either T * = +∞, or lim sup Differentiation of energies We show that for all weak solutions the functions E 0 (t) and F 0 (t) defined by (2.6) are of class C 1 , and their time-derivative is given by (2.7) for every t ∈ [0, T ). Indeed for the first result we can consider the isometry group generated on H by A. Then Lemma 11 of [9] (see also [17] for an earlier more general result in the same direction) gives yielding the proper result for E 0 . The result for F 0 follows also since ∇F (u(t)), u ′ (t) is the derivative of the C 1 function F (u(t)) as a consequence of the chain rule, as already observed.
Global existence Thanks to the "continuation" result, all we need to show is that E 0 (t) is bounded uniformly in time. This follows at once from the nonincreasing character of F 0 and our assumption that F (u) ≥ 0.
A basic a priori estimate
The next simple a priori estimate will be useful in the proof of both main theorems.
Proposition 3.1 Let H be a Hilbert space, let A be a self-adjoint nonnegative operator on H with dense domain D(A), and let F : Let us assume that Proof Let us consider the two different energies Due to assumption (i) and inequality it is easy to see that The function E(t) is of class C 1 , even in the case of weak solutions, and its time- From assumption (iii) we see that which is exactly (3.1). ✷
Proof of Theorem 2.2
Let us describe the strategy of the proof before entering into details. We consider the energies where ε > 0 is a parameter and β := p p + 2 . (3.5) Now we claim three facts (from now on, all positive constants ε 0 , ε 1 , c 0 , . . . , c 10 depend on p, |u 0 |, E(0), K, and on the function R 1 ).
Proof of second claim From (3.7) and (3.3) we have
hence Since p > 0, with the help of (3.6) we deduce This implies that (3.8) holds true provided that c 5 ε 0 ≤ 1/2.
Proof of Theorem 2.3
Let us describe the strategy of the proof before entering into details. Let ν, ρ, R, α be the constants appearing in (2.13) and (2.14). First of all, let us choose δ > 0 such that Note that this condition implies in particular that Let Q denote the orthogonal projection from H to (ker A) ⊥ . Assuming (u 0 , u 1 ) ∈ D(A 1/2 ) × H and u 0 = 0, we set Let S ⊆ D(A) × D(A 1/2 ) be the set of initial data such that It is clear that these smallness assumptions define an open set. This open set is nonempty because it contains at least all pairs (u 0 , u 1 ) with u 1 = 0 and u 0 ∈ ker A with u 0 = 0 and |u 0 | small enough. This is the point where assumption (2.12) and the fact that F (0) = 0 are essential. Now we claim that, for every pair of initial data (u 0 , u 1 ) ∈ S, the global weak solution of (2.1)-(2.2) satisfies u(t) = 0 ∀t ≥ 0, (3.18) and This is enough to prove (2.15). Indeed, setting y(t) := |u(t)| 2 , we observe that and in particular Since y(0) > 0, this inequality concludes the proof. So we are left to prove (3.18) and (3.19). To this end, we set and T := sup t ≥ 0 : ∀τ ∈ [0, t], u(τ ) = 0 and G(τ ) ≤ 2σ 1 .
Since u(0) = 0, and G(0) < σ 1 (because of our definition of σ 1 ), we have that T > 0. We claim that T = +∞, which is equivalent to (3.18) and (3.19). Let us assume by contradiction that this is not the case. Due to the maximality of T , this means that either u(T ) = 0 or G(T ) = 2σ 1 . Now we show that both choices lead to an impossibility.
So it remains to show that G(T ) < 2σ 1 . To this end, we introduce the perturbed energy Due to the second condition in (3.16), the energy G(t) is a small perturbation of G(t) in the sense that The correcting term u ′ (t), Qu(t) appears frequently when looking for boundedness or decay properties for equations whose generator has a non-trivial kernel (see [18] or [11]).
The time-derivative of G is (3.25) Let us estimate I 3 , I 4 , and I 5 . First of all, from Proposition 3.1 we obtain that Therefore, from the first smallness condition in (3.17) and assumption (2.14), it follows that On the other hand, from assumption (2.13) and the fact that δ ≤ √ ν, it follows that (3.28) From (3.27) and (3.28) it follows that From the second smallness assumption in (3.17) we finally conclude that As for I 4 , we exploit that |Qu ′ (t)| ≤ |u ′ (t)| and |Qu(t) Thus from (3.15) we deduce that In order to estimate I 5 , we exploit once again (3.26) and we obtain Since G(t) ≤ 2σ 1 for every t ∈ [0, T ), the third smallness condition in (3.17) gives Plugging (3.29) through (3.31) into (3.25) we obtain Due to the first inequality in (3.16), this implies hence by (3.24) Integrating this differential inequality we easily deduce that Since we already know that u(T ) = 0, we have that G(t) and G(t) are defined and continuous at least up to t = T . Letting t → T − in (3.32), and exploiting (3.24) and our definition of σ 1 , we deduce that This excludes that G(T ) = 2σ 1 , thus completing the proof. ✷ 4 Applications to partial differential equations 4.1 Some equations with a local nonlinearity of power type The following statement represents a bridge between the abstract theory and partial differential equations. Here H is a space of real valued functions, and we explicitly write |u| H for the norm of the function u ∈ H (not to be confused with the absolute value |u| of the same function). Now the abstract assumptions on ∇F are replaced by suitable inequalities between norms, which are going to become Sobolev type inequalities in the concrete settings. Let us assume that µ), and there exists a constant K 1 such that Then we have the following conclusions.
(1) (Decay for all weak solutions) For every (u 0 , u 1 ) ∈ D(A 1/2 ) × H, problem (4.1), (2.2) has a unique global weak solution with the regularity prescribed by (2.5). Moreover there exists a constant M 1 such that Proof Let us set We claim that is the gradient of F in the sense of (2.3), and that all the assumptions of our abstract results (Theorem 2.2 and Theorem 2.3) are satisfied. All constants c 1 , . . . , c 8 in the sequel depend only on µ(X), p, K 1 , K 2 , and on the coerciveness constant ν which appears in (2.13). Assumption (Hp1) is trivial, so that we can concentrate on the remaining ones.
Verification of (Hp2) Assumption (i), and the fact that µ(X) < +∞, imply the following inclusions Thus F is finite at least for every u ∈ D(A 1/2 ). Moreover, it is trivial that F (0) = 0 and F (u) ≥ 0 for every u ∈ D(A 1/2 ).
Verification of (Hp3) Assumption (i) implies that ∇F (u), as defined by (4.5), is in H for every u ∈ D(A 1/2 ). Now we show that for every u and v in D(A 1/2 ) we have that which clearly implies (2.3). To this end, we start from the inequality which follows from the second order Taylor's expansion of the function |σ| p+2 . Setting a := u(x), b := v(x), and integrating over X, we obtain that Plugging this estimate into (4.8), we obtain (4.7).
Verification of (Hp4) We prove for every u and v in D(A 1/2 ) the inequality which implies (2.4). To this end, we start from the inequality which easily follows from the mean value theorem applied to the function |σ| p σ. Setting a := u(x), b := v(x), and integrating over X, we obtain that From (4.4) we infer Plugging this estimate into (4.10), we obtain (4.9).
Verification of (Hp5) It is trivially satisfied.
Verification of assumption (2.14) From (4.6) we find for every u ∈ D(A 1/2 ), which proves (2.14) with α = p for any ρ > 0. ✷ We are finally ready to apply our theory to hyperbolic partial differential equations. We concentrate on the model examples presented in the introduction. We recall that in the Dirichlet case even the existence of a single slow solution was an open problem. Also in the Neumann case, where existence of slow solutions was already known, the method of this paper gives the explicit conditions (3.17) for a solution to decay slowly, conditions which were not known before. Let us consider the damped hyperbolic equation with homogeneous Neumann boundary conditions 12) and initial data u(0, x) = u 0 (x), u t (0, x) = u 1 (x) ∀x ∈ Ω. (4.13) Then we have the following conclusions.
Theorem 4.3 (Dirichlet problem)
Let Ω ⊆ R n be a bounded open set with the cone property, and let λ 1 be the first eigenvalue of −∆ in Ω, with Dirichlet boundary conditions. Let p be a positive exponent, with no further restriction if n ∈ {1, 2}, and p ≤ 2/(n − 2) if n ≥ 3.
Then we have the following conclusions. In both cases, the norms |u| H and |u| D(A 1/2 ) are equivalent to the norms u L 2 (Ω) and u H 1 (Ω) , respectively, and the coerciveness assumption (2.13) is satisfied because Ω is bounded and eigenvalues are an increasing sequence. Now we proceed to the verification of the assumptions of Theorem 4.1, which is the same in both cases. The cone property and the boundedness of Ω guarantee the usual Sobolev embeddings (4.18) All constants c 1 , c 2 , c 3 in the sequel depend only on p, and on the Sobolev constants.
Verification of (4.3) Let u and v be in D(A 1/2 ). If n ≤ 2, we apply Hölder's inequality with three terms and exponents 4, 4, 2, and we obtain Thus from (4.18) with q = 4p and q = 4 we conclude that which is exactly (4.3). If n ≥ 3, we apply Hölder's inequality with three terms and exponents n, 2 * , 2, and we obtain Thus from (4.18) with q = np (note that np ≤ 2 * ) and q = 2 * we conclude that which proves (4.3) also in the case n ≥ 3.
Verification of (4.4) Let u and v be in D(A 1/2 ). If n ≤ 2, we apply Hölder's inequality with exponents 2 and 2, and then (4.18). We derive which proves (4.4) in this case. If n ≥ 3, we apply Hölder's inequality with exponents n/2 and n/(n − 2), and then (4.18). Since np ≤ 2 * , we find which proves (4.4) also in the case n ≥ 3. ✷ Remark 4.4 For the sake of simplicity and shortness, we limited ourselves to the model nonlinearity g p (σ) = |σ| p σ. On the other hand, all results can be easily extended, with standard adjustments (such as the restriction to L ∞ -small initial data in low dimension), to equations with nonlinear terms which behave as g p (σ) just in a neighborhood of the origin.
Some nonlocal equations involving projection operators
The following result is suited to nonlocal partial differential equations where a power nonlinearity is applied to some integral of the unknown, and not to the unknown itself. where M ⊥ denotes the space orthogonal to M. Let P M : H → M denote the orthogonal projection. Let p > 0, and let us consider the second order equation Then we have the following conclusions. Assumptions (Hp1) and (Hp2) are trivial in this case. Assumptions (Hp3) and (Hp4) require a completely standard verification, based on the simple fact that the real function |σ| p σ is of class C 1 when p > 0. We omit the details for the sake of shortness.
Assumption (Hp5) follows from the equality ∇F (u), u = |P M u| p P M u, u = |P M u| p |P M u| 2 ∀u ∈ H.
We have now to verify (Hp6). This requires three steps. Let P : H → ker A denote the orthogonal projection on ker A. The first step is just observing that assumption (2.13) is equivalent to |A 1/2 u| 2 ≥ ν|u − P u| 2 ∀u ∈ D(A 1/2 ). (4.22) The second step consists in proving that there exists c 1 > 0 such that To this end we set c 2 := min |P M v| 2 : v ∈ ker A, |v| = 1 , and we observe that the minimum exists because of assumption (4.19), and it is positive because of assumption (4.20). This is enough to prove that (4.23) holds true with c 1 = c −1 2 . Applying (4.23) with v := P u, we obtain |P u| 2 ≤ c 1 |P M (P u)| 2 = c 1 |P M u − P M (u − P u)| 2 ≤ 2c 1 |P M u| 2 + 2c 1 |P M (u − P u)| 2 ≤ 2c 1 |P M u| 2 + 2c 1 |u − P u| 2 .
Then we have the same conclusions as those of Theorem 4.2.
Then we have the same conclusions as those of Theorem 4.2.
One can state similar results also for the Dirichlet problem, namely by replacing the nonlinear term in Theorem 4.3 with the nonlinear terms appearing in Theorem 4.6 or Theorem 4.7. The only difference is that in the Dirichlet case the non-orthogonality condition (4.24) becomes Ω ϕ(x)e(x) dx = 0 for every nonzero function e(x) in the first eigenspace of the Dirichlet Laplacian.
|
2013-06-16T09:20:43.000Z
|
2013-06-16T00:00:00.000
|
{
"year": 2013,
"sha1": "44c2c1a6ee57ee36a925dae73a839e53102416d2",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1306.3644",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "44c2c1a6ee57ee36a925dae73a839e53102416d2",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
}
|
226234040
|
pes2o/s2orc
|
v3-fos-license
|
Primary care practitioners’ diagnostic action when the patient may have cancer: an exploratory vignette study in 20 European countries
Objectives Cancer survival rates vary widely between European countries, with differences in timeliness of diagnosis thought to be one key reason. There is little evidence on the way in which different healthcare systems influence primary care practitioners’ (PCPs) referral decisions in patients who could have cancer. This study aimed to explore PCPs’ diagnostic actions (whether or not they perform a key diagnostic test and/or refer to a specialist) in patients with symptoms that could be due to cancer and how they vary across European countries. Design A primary care survey. PCPs were given vignettes describing patients with symptoms that could indicate cancer and asked how they would manage these patients. The likelihood of taking immediate diagnostic action (a diagnostic test and/or referral) in the different participating countries was analysed. Comparisons between the likelihood of taking immediate diagnostic action and physician characteristics were calculated. Setting Centres in 20 European countries with widely varying cancer survival rates. Participants A total of 2086 PCPs answered the survey question, with a median of 72 PCPs per country. Results PCPs’ likelihood of immediate diagnostic action at the first consultation varied from 50% to 82% between countries. PCPs who were more experienced were more likely to take immediate diagnostic action than their peers. Conclusion When given vignettes of patients with a low but significant possibility of cancer, more than half of PCPs across Europe would take diagnostic action, most often by ordering diagnostic tests. However, there are substantial between-country variations.
I am concerned about the representativeness of the PCP respondents with respective to the entire PCP population. This is not just about the response rate, but the characteristics of the responders compared to those of non-responders. This is acknowledged in lines 363-369, but this limitation does affect the interpretation and implications of the findings.
With the above in mind and the likelihood of important confounding factors, I would be cautious about the interpretation of the univariable analyses, in particular the correlation between respondent PCP actions in 4 selected non-red flag scenarios and national cancer survival rates (the 3rd aim of this study-is this feasible to include?). For a large study with large amounts of data, why were multivariate analyses not used more?
I can find no mention of the statistical software package used.
In the questionnaire, PCPs were given 5 action options. However, the main text focuses entirely on the results of only 2 of the options (labelled as immediate action-and I am not sure that they can be conflated, as this does not take into account the longitudinal nature of primary care). Bearing in mind the relatively low PPV of the 4 scenarios, watchful waiting (2 of the options) is often entirely appropriate. Why were the results of the other 3 options not presented?
The varying structures and processes of primary care between the participant countries makes it difficult to compare results between countries (aim 2). This is acknowledged, but again collated results have to be interpreted cautiously.
I appreciate the huge amount of work and organisation that went into this study, but the omissions and assumptions discussed above and with 2 of the 3 research questions not being robustly answered do risk undermining the potential value of this worthy effort. I feel that there is a huge amount of further work that needs to be done and I think that some of the fundamental flaws are not readily remediable.
Overall, this is an important study that uses vignettes to make cross national comparisons. Comparative data are especially helpful by highlighting the importance of care quality variation and opportunities for health care systems to support better practice decisions and associated policy priorities.
General Points
-The scope of this study is broad, as cancer is a disease of multiple etiologies and types. It should be specified that the study is about delayed diagnosis for 4 types of cancers: lung, ovarian, breast and CRC.
- The results would be much stronger if the authors would make use of regression analyses to make the associations between cancers and delays.
- The authors refer interchangeably to this as a survey or giving them a vignette. Overall the description of the data collection is a bit muddled. Starting with line 176 the reader needs to understand that there is one set of questions about who they are and another about what they do (i.e., the vignettes). There are really two data collection exercises: doing a survey to learn about their characteristics and having the participants care for a simulated vignette patient. This needs to be cleared up throughout the paper.
- The issues contributing to diagnostic delay need to be spelled out:
  o Diagnostic inaccuracy
    ▪ Could further delineate reasons for this, e.g. cognitive/heuristic biases such as anchoring onto an incorrect diagnosis, failure to consider alternate diagnoses, etc.
  o Inadequate work-up or timely referral
  o Lack of knowledge or inadequate training
  o Lack of time or resources to properly evaluate patients
  o Misaligned provider incentives (e.g. if paid on volume, rather than quality of care)
- There needs to be a careful copy-edit of the entire document. Some errors we noted:
  o Line 13 "varies vary"
  o Line 33 in the appendix: Second clinical case (it is the third).
  o Write out numbers in prose less than ten (two-tailed t test, one-year survival... but Table 1 okay)
  o 331: Write sentence in active voice "PCPs' stated actions markedly varied..."
  o 333: Comma after "countries"
  o 383: Run-on sentence
Abstract
The conclusion is not that there are links between health care organization but between variations in health care practice.
Strengths and limitations
These are correct in our estimation except to note that: Others have used vignettes in large systems and cross-national settings, with even more participants, and regularly achieved 90% participation rates. This applies to line 234 as well. The authors should tell us how often reminders were sent out to complete the survey and do the vignettes. See and consider referencing:
Background
-The authors note the high variation in cancer survival rates across the 20 countries, ranging from 58.2% to 81.1%. However, no mention is made about how much of this variation is attributable to infrastructure versus individual physician's clinical practice. This should be delineated for the reader. It would also suit the paper more if the authors were to reference the variation in survival rates with other disease conditions, such as stroke, pneumonia/sepsis, myocardial infarction, etc. This might be a good indicator of how much variation is attributable to cancer itself versus the healthcare system in general. -The authors make an excellent point regarding the challenges in achieving timely diagnosis not due to just provider factors but to patient and infrastructure factors as well. The case-mix variation alone demonstrates why the authors used simulated patients where the diagnosis and evidence-based treatment course are both known. -The call for studies is not to compare pathways (line 125) but to compare practice. See XXX (DN add a reference here) -The other major factors which this study could address are not referenced in line 130: misdiagnosis and incorrect treatment. That is the enormous advantage of using vignettes. Methods.
-See comment above about the data collection (ref line 176). This is a major weakness of the paper for any reader familiar with this approach. Failing to do this misaligns the data collection with the way the analyses (line 244) were done, too.
-Vignettes vary greatly in their content and their validation. More description is needed of the vignettes, namely: 1) placing this in a table in this publication and 2) in an appendix for interested readers to access.
-Additionally, there needs to be a discussion in the methods or (better in the discussion) about the use of vignettes to do cross-national studies. This is, in our experience, the only realistic and best way to do these studies.
-In Description of the questionnaire (lines 190-197)
  o See also the comments below on response rates: please address here if and how recruitment efforts varied by country, especially high- vs low-response countries.
-How did having an online questionnaire affect the ability to recruit physicians in hard-to-reach areas, especially in areas such as rural Central Europe?
-Regarding the correlation between taking diagnostic action and cancer survival rates, what attempts were made to make the vignettes relative to the country's population, both by presentation and by availability of services?
-We would like to see their models run with dummy variables for the countries.
Results
-Did the authors perform any multivariate regressions to determine any characteristics that made taking diagnostic action more likely? I saw that they did a number of univariate regressions, but I wonder if regional/economic differences would generate more significant results.
-Referring back to our general comment above: was any effort made to only look at physicians' responses versus survival rates for the four types of cancers covered in the study?
-Similarly, was there any effort made to normalize cancer survival rates with stage at initial presentation/timeliness of diagnosis/etc.? Even a rough normalization effort might improve the predictive ability of physicians taking action to monitor for cancer.
-We find the negative correlation between per capita healthcare expenditure and PCPs' taking immediate diagnostic action interesting. Did the authors attempt to ascertain why this might be? For example, are there different national standards that dictate what action a PCP should take for patients presenting with possible early stage cancer? Or do high healthcare expenditure countries have better patient adherence rates and therefore there is less concern with a patient not following a doctor's orders for follow up?
-Line 312: why were female PCPs compared to male GPs? Also, we find it unusual that there was not a gender gap in practice patterns: female PCPs tend to outperform male PCPs.
-Line 314: did this control for location or size of practice?
-Line 323: This needs to be done as a multi-variate regression. Correlation analysis is not helpful given the breadth of data collected in this study. We suggest asking an experienced statistician to look through all of the data with the lead authors.
-Did PCPs from urban practices behave more similarly to one another or did they behave more like other providers in their respective countries?
Discussion
-Several of the conclusions in the Discussion are suspect until the issues raised above are addressed. We would like to see this section re-written once the above issues are addressed.
In addition:
-In the limitations section, please add as a potential bias the varying response rates, especially given the apparent correlation between response rate and a country's score.
-The goal of 50 PCPs per country needs to be better explained. Wouldn't a sample be more representative if the target was to have some fixed percentage of the total number of PCPs in the country? Aren't there power calculations that have to be considered (ref also our suggestion of using dummy variables in multi-variate model)?
-On page 18, line 56, there should be a citation for "greater reliance on poorly funded secondary care services" as a factor for the lack of significant correlation between likelihood of immediate diagnostic action and cancer survival rates.
  o ...improvement in primary diagnosis accuracy and 15.3% improvement in treatment accuracy using serial clinical vignettes in the field of oncology, which then aligns with real-world improvements in care in the health system studied, including reductions in unnecessary imaging to better align with guidelines.
  o The second of these publications references 22% improvement in diagnostic accuracy, as well as increased adherence to preferred chemo regimens in breast cancer using a comprehensive QI strategy leveraging clinical vignettes. This maps onto real-world increases in cardiac pretreatment assessment (+20%) and surveillance testing (+11%).
-Line 418 seems to be very bold given the limitations of the study. This is a surprising result given 1) how much variation in practice they found and 2) that they did not correlate delayed diagnosis and diagnostic accuracy, workup and treatment. More analyses with the existing data are required.
-Availability of the data: The identifying information of the participants should be redacted.
VERSION 1 -AUTHOR RESPONSE
Reviewer: 1
Reviewer Name: Louis S Levene, University of Leicester

This is an important and interesting topic, with potentially major implications for planning of health care service delivery.
Background is set out clearly and is well-referenced.
The aims are explicit, but are multiple (not necessarily a criticism) and not prioritised; thus, the authors should consider describing this as an exploratory study.
Thank you, we have done that.
I am concerned about the representativeness of the PCP respondents with respect to the entire PCP population. This is not just about the response rate, but the characteristics of the responders compared to those of non-responders. This is acknowledged in lines 363-369, but this limitation does affect the interpretation and implications of the findings.
We have now explained in the manuscript that it was not possible to compare the characteristics of the responders with those of non-responders due to lack of equivalent national data on PCP demographics in many of the participating countries.
With the above in mind and the likelihood of important confounding factors, I would be cautious about the interpretation of the univariable analyses, in particular the correlation between respondent PCP actions in 4 selected non-red flag scenarios and national cancer survival rates (the 3rd aim of this study: is this feasible to include?).
Because of the reviewer's concerns about the feasibility of this correlation, we have removed this outcome measure and the related results.
For a large study with large amounts of data, why were multivariate analyses not used more?
We have now done this.
I can find no mention of the statistical software package used.
We have done that.
In the questionnaire, PCPs were given 5 action options. However, the main text focuses entirely on the results of only 2 of the options (labelled as immediate action, and I am not sure that they can be conflated, as this does not take into account the longitudinal nature of primary care).
We do give the data for those 2 options separately, as well as the combined levels. Because the questionnaire already had more than 50 questions, we decided not to make it even longer (and increase the risk of non-completion) by adding questions relating to possible follow-up consultations. We chose to maximise the honesty of responses to potentially sensitive questions by anonymising the questionnaire, but this meant that we were unable to see how respondents to this questionnaire would have handled follow-up survey rounds.
Bearing in mind the relatively low PPV of the 4 scenarios, watchful waiting (2 of the options) is often entirely appropriate. Why were the results of the other 3 options not presented?
While our prime interest was about actions that would be likely to diagnose a cancer or exclude one, we now also show the results for the other three action options.
The varying structures and processes of primary care between the participant countries makes it difficult to compare results between countries (aim 2). This is acknowledged, but again collated results have to be interpreted cautiously.
We believe that we do interpret them cautiously.
I appreciate the huge amount of work and organisation that went into this study, but the omissions and assumptions discussed above and with 2 of the 3 research questions not being robustly answered do risk undermining the potential value of this worthy effort. I feel that there is a huge amount of further work that needs to be done and I think that some of the fundamental flaws are not readily remediable.
We believe that the changes we have made answer the reviewer's criticisms. We have modified the paper to cover only two research questions, which we believe are now robustly answered.
Reviewer: 2
Reviewer Name: Dr. John Peabody
Please state any competing interests or state 'None declared': None declared

The authors have created a study which examines primary care practice across 20 countries in Europe and determines the variation in care across these countries. This study uses standardized patient vignettes as a proxy for real-world practice to measure the care given by primary care providers and attempts to correlate measured practice against real-world cancer outcomes. The focus of this investigation is on using the vignettes to determine if there are delays in making the diagnosis and not, more broadly, diagnostic actions, as stated. Simply rewriting line 69 will make it clear, from the outset, that this is what this paper is investigating. The same change should be made on line 157.
The focus of our research was not, as the reviewer implies, that the vignettes were patients with cancer, and we wanted to know whether there was a diagnostic delay. It was that the vignettes were patients with a low but significant (3%) risk of cancer, and we wanted to know whether or not GPs took action that would, if the patients did have cancer, lead to a diagnosis.
In particular, please define a diagnostic action as either performing the key diagnostic test for the condition in question, or referring to the appropriate specialist, throughout.
As the reviewer suggests, we now define a diagnostic action as 'a key diagnostic test and/or referral to a specialist', and, for clarity, repeat this definition in each section.
Overall, this is an important study that uses vignettes to make cross national comparisons. Comparative data are especially helpful by highlighting the importance of care quality variation and opportunities for health care systems to support better practice decisions and associated policy priorities.
General Points
-The scope of this study is broad, as cancer is a disease of multiple etiologies and types. It should be specified that the study is about delayed diagnosis for 4 types of cancers: lung, ovarian, breast and CRC.
We have now done this at the end of the 'Background' section.
-The results would be much stronger if the authors would make use of regression analyses to make the associations between cancers and delays.
We now use regression analysis to study the effect of PCPs' demographics on their diagnostic action rates.
-The authors refer interchangeably to this as a survey or giving them a vignette. Overall the description of the data collection is a bit muddled. Starting with line 176 the reader needs to understand that there is one set of questions about who they are and another about what they do (i.e., the vignettes). There are really two data collection exercises: doing a survey to learn about their characteristics and having the participants care for a simulated vignette patient. This needs to be cleared up throughout the paper.
We have done that.
-There needs to be a careful copy-edit of the entire document. Some errors we noted:
  • Line 13 "varies vary"
  • Line 33 in the appendix: Second clinical case (it is the third).
Abstract
The conclusion is not that there are links between health care organization but between variations in health care practice.
We have made this change.
Strengths and limitations
These are correct in our estimation except to note that: Others have used vignettes in large systems and cross-national settings, with even more participants, and regularly achieved 90% participation rates. This applies to line 234 as well. The authors should tell us how often reminders were sent out to complete the survey and do the vignettes?

We have done that.

See and consider referencing:

We thank the reviewer for recommending his excellent studies as possible references with regard to participation rates. We now cite these as examples of high participation rates for mixed physician populations, while retaining our citations for papers that show that low survey response rates are common in primary care and that our study compares favourably to those.
The study does not look at the complete evaluation of the patient and take advantage of all of the opportunities that measuring by using simulation offers. See and consider referencing:

We have looked very carefully at the reviewer's papers, but we are still unclear what he means by 'the complete evaluation of the patient and take advantage of all of the opportunities that measuring by using simulation offers'.
Another limitation is that the PCPs are given a choice of 5 management decisions. Ideally these forced choice responses should be validated against actual practice beforehand.
These forced responses were carefully developed and piloted by GPs and other PCPs, and therefore grounded in their clinical experience. The two levels of piloting gave physicians the opportunity to comment on the survey design, and none commented that they felt that other responses were needed. It was therefore felt unnecessary to validate separately against actual practice before the study.

Background
-The authors note the high variation in cancer survival rates across the 20 countries, ranging from 58.2% to 81.1%. However, no mention is made about how much of this variation is attributable to infrastructure versus individual physician's clinical practice. This should be delineated for the reader.
There is no evidence on how much of this variation is attributable to infrastructure versus individual physician's clinical practice, and we now state this in the Background section.
-The authors make an excellent point regarding the challenges in achieving timely diagnosis not due to just provider factors but to patient and infrastructure factors as well. The case-mix variation alone demonstrates why the authors used simulated patients where the diagnosis and evidence-based treatment course are both known.
We thank the reviewer for this compliment.
-The call for studies is not to compare pathways (line 125) but to compare practice. See XXX (DN add a reference here)

Thanks for suggesting the correction, which we have made.
-The other major factors which this study could address are not referenced in line 130: misdiagnosis and incorrect treatment. That is the enormous advantage of using vignettes.
We are not sure what the reviewer is suggesting here. Perhaps he means using vignettes as an educational tool, but our study was to explore what PCPs actually do, and not to address misdiagnosis and incorrect treatment.

Methods
-See comment above about the data collection (ref line 176). This is a major weakness of the paper for any reader familiar with this approach.
Line 176 just says 'Development of the questionnaire', so we are unsure what weakness the reviewer thinks we should address.
Failing to do this misaligns the data collection with the way the analyses (line 244) were done, too.
-Vignettes vary greatly in their content and their validation. More description is need of the vignettes, namely: 1) placing this in a table in this publication and 2) in an appendix for interested readers to access.
1) We are unsure what description of the vignettes the reviewer would like to see in a table.
2) The vignettes are given in the appendix.
-Additionally, there needs to be a discussion in the methods or (better in the discussion) about the use of vignettes to do cross-national studies. This is, in our experience, the only realistic and best way to do these studies.
We now discuss the use of vignettes to do cross-national studies, both in the Methods section and in the Discussion section.
-In Description of the questionnaire (lines 190-197), the authors note there are 47 items, divided into four sections. However, the number of questions listed in the section does not appear to come to 47. It might be instructive to add the details of the questionnaire in an Appendix.
We now give the complete questionnaire as an appendix.
-Under sample size (lines 225-226), it is unclear why 1000 PCPs is the target number. Was this done for power, and if so, how much difference would there need to be to be able to reject a false negative?

Please see comments above.
-In recruitment of participants (lines 228-238), how representative the physicians recruited for the study were compared against each nation's "average" primary care physician is not clear.

This is an excellent idea, but for many participating countries there were no data for the PCP demographics that we collected.
-Although snowball sampling (line 237) is a recognized technique, there is no mention of how this was accounted for in the results.
Although we said that our 'local leads' could do this, none actually used snowballing, so we have taken this sentence out.
o See also the comments below on response rates: please address here if and how recruitment efforts varied by country, especially high-vs low-response countries.
We did not collect data on how recruitment efforts varied by country.
-How did having an online questionnaire affect the ability to recruit physicians in hard-to-reach areas, especially in areas such as rural Central Europe?
An interesting question, but we have no way of answering this.
-Regarding the correlation between taking diagnostic action and cancer survival rates, what attempts were made to make the vignettes relative to the country's population, both by presentation and by availability of services?
-We would like to see their models run with dummy variables for the countries.
At Reviewer 1's suggestion, we have removed the sections in the paper correlating diagnostic action with cancer survival rates.
Results
o It is suspicious that the highest-responding country (Romania) is also the best-scoring in Fig 1, while the lowest-responding (Netherlands) is also the lowest-scoring in Fig 1.

We respectfully disagree with the reviewer's implication that a high diagnostic action rate is 'best' in this study. There were no 'right' or 'wrong' answers to the vignette questions, so we do not make a value judgement as to whether a high diagnostic action rate in these vignettes is 'good' or 'bad'.

o Convince the reader this is a real finding and not just an artifact of who responded to the survey, perhaps by adding a scatterplot of diagnostic action likelihood vs country's response rate, analogous to the scatterplots on pg 42 and 43,

We have done this, and commented on it in the Discussion section.

and formally evaluating for any relationship with regression analysis.
We have evaluated this.
- Table 2: Given the gender disparity, would be curious how many female vs male providers were invited, and if their response rates varied.
We did not collect these data. However, some of our national study leads commented that, in their countries, female PCPs far outnumber male PCPs.
-In lines 281-294 and figures 1-3, the authors have provided an overall percentage across all cases, but they did not indicate if there were any differences by case. That is, were there any cases where PCPs were more likely to take immediate action? Or did providers respond similarly to all case types?
We have done this.
-In lines 303-308, the authors note variation in the correlation between likelihood of immediate action and 1-year cancer survival. However, I would have liked to see any correlations performed by case type and 1-year cancer survival for that particular cancer. For example, in the colorectal cancer vignette, was there greater correlation if cancer survival was restricted to colorectal cancer only?
At Reviewer 1's suggestion, we have removed the sections in the paper correlating diagnostic action with cancer survival rates.
-In Figure

At Reviewer 1's suggestion, we have removed these sections from the paper.
-Did the authors perform any multivariate regressions to determine any characteristics that made taking diagnostic action more likely? I saw that they did a number of univariate regressions, but I wonder if regional/economic differences would generate more significant results.
We have not examined regional or economic differences in this study.
-Referring back to our general comment above. Was any effort made to only look at physicians responses versus survival rates for the four types of cancers covered in the study?
As above, we have done this.
-Similarly, was there any effort made to normalize cancer survival rates with stage at initial presentation/timeliness of diagnosis/etc.? Even a rough normalization effort might improve the predictive ability of physicians taking action to monitor for cancer.
We are not sure what the reviewer means here, but at Reviewer 1's suggestion, we have removed the sections relating to survival rates from the paper.
-We find the negative correlation between per capita healthcare expenditure and PCPs' taking immediate diagnostic action interesting. Did the authors attempt to ascertain why this might be? For example, are there different national standards that dictate what action a PCP should take for patients presenting with possible early stage cancer? Or do high healthcare expenditure countries have better patient adherence rates and therefore there is less concern with a patient not following a doctor's orders for follow up?
At Reviewer 1's suggestion, we have reduced the number of research questions covered by the paper and so have removed this section from the paper.
-Line 312: why were female PCPs compared to male GPs?
We now cite evidence that female and male PCPs have been found to have different referral rates in other contexts.
Also, we find it unusual that there was not a gender gap in practice patterns: female PCPs tend to outperform male PCPs.
However, as we comment above, the reviewer's comment implies that particular responses to the vignettes implied better performance, but there were no 'right' or 'wrong' answers to the vignette questions, so we cannot make this value judgement: whether or not a PCP referred cannot be used as a quality indicator.
-Line 314 did this control for location or size of practice?
We now use a multivariate regression analysis for the effect of gender, years since graduation and size of practice on likelihood of taking immediate diagnostic action.
-Line 323: This needs to be done as a multi-variate regression. Correlation analysis is not helpful given the breadth of data collected in this study. We suggest asking an experienced statistician to look through all of the data with the lead authors.
At Reviewer 1's suggestion, we have removed this section from the paper.
-Did PCPs from urban practices behave more similarly to one another or did they behave more like other providers in their respective countries?
An interesting question, and one that we are looking at separately, but it was not one of the outcome measures for this study.
Discussion
-Several of the conclusions in the Discussion are suspect until the issues raised above are addressed. We would like to see this section re-written once the above issues are addressed.
We have done this.
In addition: -In the limitations section, please add as a potential bias the varying response rates, especially given the apparent correlation between response rate and a country's score.
We have done this.
-The goal of 50 PCPs per country needs to be better explained. Wouldn't a sample be more representative if the target was to have some fixed percentage of the total number of PCPs in the country? Aren't there power calculations that have to be considered (ref also our suggestion of using dummy variables in multi-variate model)?
Please see our comments as above.
-On page 18, line 56, there should be a citation for "greater reliance on poorly funded secondary care services" as a factor for the lack of significant correlation between likelihood of immediate diagnostic action and cancer survival rates.
At Reviewer 1's suggestion, we have removed this section from the paper.
-In lines 392 to 416, reference should be made to:

These references seem to be about the use of vignettes to standardise cancer diagnostic care, whereas our study is about describing the national differences in decision-making.
-Line 418 seems to be very bold given the limitations of the study. This is a surprising result given 1) how much variation in practice they found and 2) that they did not correlate delayed diagnosis and diagnostic accuracy, workup and treatment. More analyses with the existing data are required.
At Reviewer 1's suggestion, we have removed this section from the paper.
-Availability of the data: The identifying information of the participants should be redacted.
We have done this.
REVIEWER
Louis S Levene, University of Leicester, United Kingdom
REVIEW RETURNED: 25-Mar-2020
GENERAL COMMENTS
I appreciate that a huge amount of work has gone into this study and am pleased that most of my concerns about the 1st submission have been addressed.
Regretfully, however, I recommend rejection, as this submission has major flaws: 1. Overall low response rate, with some countries as low as 7% (acknowledged), but with no mention of potential non-respondent bias.
2. Although the questionnaire has been validated, I still have concerns about its design, in particular the absence of any indication of timescale in the arrange follow up option.
3. Merging of vignettes in analyses - was this validated?
4. There is no adjustment in the regression for country (responses nested within PCPs nested within country - should have used a multilevel model) - this is hugely important and has a bearing on interpreting the responses.
5. The regression model is inadequately specified - no details of DV, what were the hypothesis, type of regression (was DV treated as categorical, ordinal), absence of residuals.
6. Research questions are more explicit, but the 2nd has not been answered properly.
GENERAL COMMENTS
I have reviewed the revision, and the authors did an excellent job in responding to my comments, clarifying their points, and making needed edits to the original manuscript.
VERSION 2 -AUTHOR RESPONSE

Reviewer: 1

1. Overall low response rate, with some countries as low as 7% (acknowledged), but with no mention of potential non-respondent bias.
Author response: The difference in national response rates is one of the inevitable limitations of carrying out this type of study. We now explain that this may be partly due to national variations in PCPs' willingness to take part in online survey research.
As the reviewer points out, the overall low response rate is something that we have recognised and acknowledged. We explain that low survey response rates are common in primary care, are known to vary between countries, and that those in our study compare favourably with those of a recent ICBP survey.
We already mention potential non-respondent bias: 'the recruitment method used in this study resulted in variable response rates, leading to a risk of non-response bias'.
We respectfully suggest that this is a limitation of the study, not an invalidation of it.
2. Although the questionnaire has been validated, I still have concerns about its design, in particular the absence of any indication of timescale in the arrange follow up option.
Author response: There are, as far as we are aware, no evidence-based recommendations on the timing of follow-up and reassessment for these scenarios. In the survey we were seeking to capture decision-making by individual practitioners who will, quite reasonably, vary in the time-intervals at which they would arrange follow-up and reassessment. The optimal timing will depend on each health-care system. Thus, our goal was not to find the exact preferred time interval, but to know whether follow-up and reassessment would happen at all.
While beyond the scope of the current study, the variation in timing of reassessment and follow-up between individual practitioners is an interesting research question. We thank the reviewer for encouraging us to think about this in our future work.
While the reviewer implies having other concerns about the questionnaire design, she/he does not state what they are, so we are unable to comment on them.
3. Merging of vignettes in analyses - was this validated?
Author response: We thank the reviewer for this suggestion. We now explain that merging the vignette data for the analyses has face validity as we aim to explore PCPs' diagnostic actions in patients with symptoms that could be due to cancer, and these four cancers between them account for 35% of new cases of cancer in Europe. We also now explain that other authors have merged vignette data where the aim is to compare the action of different groups of healthcare professionals, as we are doing in this study, as opposed to comparing the effect of different vignettes on their action.
While we could give the results and the analyses separately for each of the four vignettes, we already have 7 figures showing the results from the combined data. If we show the results separately, we would need to have 28 figures and 4 regression analyses. So, as well as having face validity, merging was necessary to provide a comprehensible paper.
4. There is no adjustment in the regression for country (responses nested within PCPs nested within country - should have used a multilevel model) - this is hugely important and has a bearing on interpreting the responses.
5. The regression model is inadequately specified - no details of DV, what were the hypothesis, type of regression (was DV treated as categorical, ordinal), absence of residuals.
Author response: We thank the reviewer for these helpful suggestions. One of our co-authors, a professor of medical statistics at a UK medical school, advised that the best way to do this is to fit a mixed effects model adjusted for country, to investigate the relationship between PCP demographics and likelihood of immediate diagnostic action. He has now done this and the output, including details of the DV as requested, is now clearly described in the manuscript.
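Purely as an illustration of the kind of analysis described here, the sketch below fits a mixed-effects model with a random intercept for country to synthetic data; the variable names, coding and linear specification are assumptions made for the example and are not the authors' actual model or dataset.

```python
# Minimal sketch with synthetic data; variable names, coding and the linear
# specification are assumptions for illustration, not the authors' actual model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "gender": rng.choice(["female", "male"], n),
    "years_since_graduation": rng.integers(1, 40, n),
    "practice_size": rng.integers(1, 10, n),
    "country": rng.choice(["UK", "RO", "NL", "SE", "HR"], n),
})
# Proportion of the four vignettes answered with an immediate diagnostic action
df["action"] = rng.choice([0.0, 0.25, 0.5, 0.75, 1.0], n)

# Linear mixed model with a random intercept for country; a logistic mixed model
# could be used instead if each vignette response is modelled as a binary outcome.
model = smf.mixedlm(
    "action ~ gender + years_since_graduation + practice_size",
    data=df,
    groups=df["country"],
)
print(model.fit().summary())
```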
We now explain that this was an exploratory analysis to investigate the relationship between PCPs' demographics and their diagnostic action rates. We are unclear what the reviewer was looking for when asking for 'the hypothesis', apart from the implicit null hypothesis that there is no difference between the effect of the sub-categories in each demographic factor. In Table 3 we give the margin and 95% confidence interval (CI) for each sub-category. In the case where there is no overlap, the null hypothesis has been refuted, and in the main text we state that this is a significant difference.

6. Research questions are more explicit, but the 2nd has not been answered properly.
Author response: We stated three research questions:
  • 'We therefore aimed to explore the diagnostic action rates of PCPs for patients with symptoms that could be due to four types of cancer (lung, ovarian, breast and colorectal),
  • how they compared across European countries, and
  • to explore the effect of PCPs' demographics on their diagnostic action rates for these cancers.'
So we assume that the reviewer is referring to the between-country comparison. The reviewer does not say in which way it 'has not been answered properly', so we are unable to comment on this, but we do compare the diagnostic action rates across the countries in Figure 3 and discuss it in the paper.
Reviewer: 2 I have reviewed the revision, and the authors did an excellent job in responding to my comments, clarifying their points, and making needed edits to the original manuscript.
Author response: We thank the reviewer for this acknowledgement.
REVIEWER
Dr Louis S Levene, University of Leicester, United Kingdom
REVIEW RETURNED: 29-Jun-2020
GENERAL COMMENTS
Reviewing the revised submission in light of my comments about the previous version and the authors' responses:
1. I accept now that the aims have been addressed.
2. The low response rate still concerns me. However, the authors do acknowledge this important limitation and the risk of non-respondent bias. I will accept that this does not invalidate the study, but the authors need to ensure that they cautiously present their interpretation of the results. I appreciate that this is a difficult problem and there are limits to what can be done to overcome it, but generally low response rates in other surveys should not be a criterion for assessing the robustness of their methodology.
3. The authors' further explanations about the questionnaire design have addressed my concerns. Their description on merging the vignettes is fine.
4. I am pleased that the authors have now fitted a mixed effects model adjusted for country and have provided details of the DV and the analyses undertaken.
VERSION 3 -AUTHOR RESPONSE
We are pleased that the reviewer felt that we satisfied all his requests apart from one further, minor change that he would like to see: 'the authors need to ensure that they cautiously present their interpretation of the results'. We proposed to Amy Branch-Hollis, Assistant Editor, that we add this sentence to the 'Interpretation of the results' section, and she replied to say that this should be acceptable: 'However, the response rates were low in most of the countries surveyed, so the results should be interpreted with caution.' We have therefore made this change.
Selective Neck Dissection for Clinically Node-Positive Oral Cavity Squamous Cell Carcinoma
Purpose: The treatment of a clinically node-positive (cN+) neck is important in the management of oral cavity squamous cell carcinoma (OSCC). However, the extent of neck dissection (ND) remains controversial. The purpose of our study was to evaluate whether level IV or V can be excluded in therapeutic ND for cN+ OSCC patients.
Materials and Methods: We performed a retrospective chart review of 92 patients who underwent a comprehensive or selective ND as a therapeutic treatment of cN+ OSCC from January 1993 to February 2009.
Results: The incidence rate of metastasis to level IV or V was 22% (16 of 72) on the ipsilateral neck. Of 67 cases without clinically suspicious nodes at level IV or V, 11 cases (16%, 11 of 67) had pathologically proven lymphatic metastasis to level IV or V. Only a nodal staging of N2b or above was significantly associated with a higher rate of level IV or V lymph node metastasis (p=0.025). In this series, selective ND, combined with proper adjuvant therapy, achieved regional control and survival rates comparable to comprehensive ND in patients under the N stage of cN2a OSCC.
Conclusion: Level IV and V can be excluded from therapeutic ND in patients with cN2a or lower OSCC without compromising regional control or survival.
INTRODUCTION
The primary goals of treatment of oral cavity squamous cell carcinoma (OSCC) are to control the local disease, eliminate the neck node metastasis, and prevent distant metastasis. Of these, the importance of regional control cannot be overemphasized because cervical lymph node metastasis is the single most potent prognostic factor of OSCC, and locoregional control is also known as a strong predictor of distant metastasis. 1,2 About half of the patients with OSCC had pathologically positive lymph node metastases at the time of diagnosis, either clinically or subclinically. 3,4 Therefore, the treatment of a clinically node-positive (cN+) neck is very important in the management of OSCC.
However, the extent of neck dissection (ND) remains controversial. Since the time when the surgical treatment of the neck had started as the classical radical ND (RND), there has been a lot of improvement in the aspect of reducing morbidity or extent of treatment. Comprehensive ND (CND) including all levels of the neck has been accepted as the standard treatment for cN+ OSCC in many countries, but selective ND (SND) followed by adjuvant therapy such as radiotherapy or chemo-radiotherapy also has been performed in other institutes. [5][6][7] In order to decide the extent of ND, we investigated the incidence of level IV or V lymph node metastases, attempted to identify the predictive factors of metastasis in cN+ OSCC patients, and compared the survival rate of cN+ OSCC patients treated with SND or CND. The purpose of our study was to determine whether level IV or V could be excluded in therapeutic ND for cN+ OSCC patients.

MATERIALS AND METHODS

We retrospectively reviewed the charts of previously untreated OSCC patients who were treated at the Yonsei Head and Neck Cancer Clinic between January 1993 and February 2009. Only patients who met the following criteria were included: 1) the initial treatment was a simultaneous curative surgery on the primary tumor and the neck and 2) therapeutic ND (radical/modified radical ND or SND) was performed for the treatment of a clinically node-positive neck. We excluded patients 1) whose initial treatment was radiotherapy or chemotherapy, 2) had presence of other simultaneous primary tumors, or 3) had distant metastasis at the time of initial presentation.

We proceeded with this study in two categories. First, we investigated the incidence of level IV or V lymph node metastases and identified the predictive factors of level IV or V metastasis in cN+ OSCC patients who were treated with CND. Second, we compared the survival rate or regional recurrence of cN+ OSCC patients who were treated with SND including level I, II and III (SND I-III) or CND. Prior to 2005, authors performed CND for the treatment of cN+ OSCC patients, but since 2005, the policy changed so that for the cases with cN1 or cN2a, we performed SND I-III. Therefore, we compared the oncologic outcome of cN1 or cN2a OSCC patients who were treated before 2005 with patients after 2005. Consequently, a total of 92 patients were included in this study-72 in the CND group and 20 in the SND group. Seventy-two patients of the CND group also included 34 patients with cN1 or cN2a necks.

Patients' charts were reviewed regarding age, gender, origin of primary tumor, T stage, N stage based on the criteria of the American Joint Committee on Cancer (2009), and the type of surgery performed. All patients were preoperatively determined as clinically N+, with the clinical N+ neck defined as cervical lymph nodes detected at the physical examination, imaging studies (either a computed tomography scan, magnetic resonance imaging or positron emission tomography scan), or fine needle aspiration cytology. The Institutional Review Board of Yonsei University College of Medicine approved this retrospective study.

This study included 57 males and 15 females in the CND group and 17 males and 3 females in the SND group. The mean age was 53 years (range: 20-79) in the CND group and 56 years (range: 35-73) in the SND group. The distribution of primary sites is shown in Table 1. Table 2 and 3 summarize the clinical stages of all the cases. The age, sex and clinical stage were not different in both groups (data not shown).

During ND, the contents of the level IV and V specimen were dissected, labeled and processed separately from the main neck dissection specimen. Surgical specimens were then sent to the Pathology Department for permanent section analysis. Histopathologic examination of the metastases included identifying the number and location of the nodes containing metastatic disease. The relationship between level IV or V lymph node metastasis and clinicopathologic predictive factors was assessed.

We compared the survival rates of cN1 or cN2a patients treated by CND or SND. Thirty-four patients were treated with CND and 20 patients with SND. The follow-up period ranged from 5 to 182 months (mean follow-up: 47 months). Patients were followed up for the minimum of two years, or until death. The actuarial 2-year disease-free survival rates were generated.

Statistical analysis was done with the Fisher's exact test, Mann-Whitney U test, Kaplan-Meier method and log-rank test. Statistical significance was defined as p<0.05.

Treatment of cervical lymph nodes

In the CND group, ipsilateral CNDs were performed in 72 patients; radical, modified radical, or extended radical NDs in 15, 49, or 8 necks, respectively. The contra-lateral neck was managed with RND in 5, SND I-III in 42, and observation in 25 patients. An average of 39.3 (range 18-75) lymph nodes were collected from each neck. The mean number of lymph nodes harvested from each level was as follows: 4.1 (range 0-15) from level I, 12.5 (range 2-24) from level II, 7.7 (range 1-20) from level III, 7.2 (range 1-22) from level IV and 8.3 (range 1-19) from level V.

In the SND group, SND I-III was performed in 20 patients for the treatment of cN1 or cN2a neck. The contralateral neck was managed with SND I-III in 12 and observation in 8 patients. The clinical stage of the SND group was as follows:

        T1  T2  T3  T4  Total
N1       3   6   1   1     11
N2a      1   2   2   4      9
Total    4   8   3   5     20
ND, neck dissection.

Patients received postoperative radiotherapy if they had at least one of the following criteria: 1) multiple lymph node metastases, 2) extracapsular spread of the metastases on histopathological evaluation of the material from the neck dissection, 3) the resected primary tumor demonstrated a positive or very close margin, or 4) the primary tumor was stage 3 or 4. In the CND group, 28 out of 34 cN1, cN2a patients received postoperative radiotherapy. Postoperative radiotherapy was performed after initial surgical treatment in 17 out of 20 SND patients. The mean dose in the CND and SND group were 58.4 Gy (range, 50.4-66.8 Gy) and 58.9 Gy (range, 50.4-74.4 Gy), respectively.
The incidence of level IV or V metastases and predictive factor
From the 72 patients that were evaluated, 62 (86%) were revealed to have lymph node metastases by pathologic examination. Of these 62 patients, 60 had ipsilateral positive lymph nodes, while the remaining two had contra-lateral lymph nodes only. Forty-five percent (27 of 60) of patients had ipsilateral metastatic lymph nodes at a single level and 55% (33 of 60) at multiple levels. Level I and II were most frequently affected on the ipsilateral side, with a similar prevalence of 45.8% (33 of 72). The distribution of pathologically positive lymph nodes by level is described in Table 3.

[Table 3, fragment - combinations of positive levels (no. of patients): I, IV 2; II, III 3; II, IV 2; II, IV + CIII 1; II, V 1; I, II, III 2; I, II, III + CI, CII, CIII, CIV 1; I, II, IV 1; I, III, V 1; I, IV, V 1; I, II, III, IV 2; I, II, III, V 1; I, II, III, IV, V 1; I, II, III, IV, V + CI, CII, CIV, CV 1; Total 72 12. No., number; Pt, patient; LN, lymph node; C, contra-lateral.]
The incidence rate of metastasis to level IV or V was 22% (16 of 72) on the ipsilateral neck. Eleven patients were revealed to have nodal metastasis in level IV, 1 in level V, and 4 in level IV and V. Of those 16 patients, 5 had clinically suspicious metastatic nodes on level IV and the remaining 11 did not. Therefore, all five cases that were preoperatively suspected to have metastatic nodes at level IV eventually were revealed to have pathologic positive nodes in level IV. There was no patient who was suspected to have level V metastasis pre-operatively. Of the 67 cases that were believed to have a clinically node-negative level IV neck, 11 cases were histologically proven to have level IV metastases. Therefore, the occult metastasis rate of level IV was 16% (11 of 67). In level V, there were five cases with pathologically positive lymph nodes, but none were detected pre-operatively. The occult metastasis rate of level V was 7% (5 of 72), and the clinical N stage of these patients was cN2b in four cases and cN2c in one.

Six patients were clinically suspected to have bilateral metastasis (Table 2). One patient who had a suspicious node only in contra-lateral level I was treated with SND (I-III), while the remaining five cases underwent comprehensive ND. Contra-lateral level IV or V involvement was confirmed in three cases (Table 3). No patients had been detected to have contra-lateral level IV or V metastasis pre-operatively, and all patients who had contra-lateral level IV or V metastasis were diagnosed with advanced T stage (all T4a) and had multiple neck nodes in the contra-lateral neck. However, we found no statistically significant predictive factor for contra-lateral level IV or V metastasis because of the small number of patients.

We analyzed the relationship between level IV or V lymph node metastasis and several clinicopathologic factors in the 67 patients with clinically positive nodes within neck level I, II or III. There were no statistically significant differences in age, sex, clinical T stage, or histologic grade. Only a nodal staging of N2b or above was significantly associated with level IV or V lymph node metastasis (p=0.025).
Survival rate and regional recurrence
In the CND group, nine patients (26.4%) died of tumors, five (14.2%) died of causes unrelated to OSCC, and 20 patients (59%) were alive and free of cancer at the time of final follow-up. Of the nine patients who died of tumors, local failure developed in three patients, regional or locoregional failure in three, and distant failure in the remaining three. In the SND group, six patients (30%) died of tumors, two (10%) died of causes unrelated to OSCC and 12 patients (60%) were alive and free of cancer. Local failure occurred in two patients, regional failure in three, and distant failure in one.
Regional recurrence (RR) occurred in four and three patients in the CND and SND group, respectively. In the CND group, RR developed in the ipsilateral dissected neck in three patients and in the contralateral undissected neck in one, while all recurrences occurred within the ipsilateral field of dissection in the SND group. There was no RR in level IV or V in either group. The CND patient who had recurred in the contralateral undissected neck was reoperated successfully with salvage ND followed by postoperative chemoradiotherapy, but the remaining six patients died of recurred neck disease. The clinical information of these patients is described in Table 6.
Calculated by the Kaplan-Meier method, the 2-year actuarial disease-free survival rate was 71.8% in the CND group and 69.2% in the SND group, and the difference was not statistically significant (p=0.823). Similar to the disease-free survival, the 2-year neck control rates of the two groups were not statistically different (CND 88.0% vs. SND 84.0%, p=0.719) (Fig. 1).
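As a purely illustrative aside (not part of the original study), the kind of comparison reported in this paragraph, a Kaplan-Meier estimate read off at 24 months together with a log-rank test between the two treatment groups, could be sketched as follows; the follow-up times, event indicators and group sizes below are hypothetical placeholders rather than the study data.

```python
# Hedged sketch only: hypothetical follow-up times (months) and event flags,
# not the actual CND/SND cohorts from this study.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

t_cnd = np.array([5, 12, 24, 36, 47, 60, 96])   # months of follow-up (placeholder)
e_cnd = np.array([1, 1, 0, 1, 0, 0, 0])          # 1 = recurrence/death, 0 = censored
t_snd = np.array([8, 18, 24, 30, 47, 55, 80])
e_snd = np.array([1, 1, 1, 0, 0, 0, 0])

km_cnd = KaplanMeierFitter().fit(t_cnd, event_observed=e_cnd, label="CND")
km_snd = KaplanMeierFitter().fit(t_snd, event_observed=e_snd, label="SND")

# 2-year (24-month) actuarial disease-free survival estimates
print(km_cnd.predict(24), km_snd.predict(24))

# Log-rank comparison of the two curves (significance threshold p < 0.05)
res = logrank_test(t_cnd, t_snd, event_observed_A=e_cnd, event_observed_B=e_snd)
print(res.p_value)
```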
DISCUSSION
It is well known that the surgical removal of cancer is one of the most important treatments in OSCC and that locoregional control is closely connected with survival. 8,9 In addition, the most common cause of treatment failure of OSCC is known to be nodal failure. As it is extremely difficult to salvage a recurrence after initial surgery, 8 the first surgical management of the neck should be of proper extent if indicated.
Classical RND was accepted as the treatment of choice in neck management of head and neck cancers since 1906, when it was first described by Crile. However, performing RNDs risks causing surgical morbidities. Regarding level IV or V, there could be spinal accessory nerve injury, 10 phrenic nerve paralysis 11 or chylous leakage. 12,13 Therefore, there has been a general effort to reduce morbidity and overtreatment when performing neck dissections. In OSCC, it has been reported that lymph node metastases usually occur in level I, II or III in several post-surgical pathologic studies. 14,15 SND I-III is widely accepted as an elective treatment for clinically node-negative OSCC patients, while comprehensive ND removing every neck node from level I to level V is still regarded as the standard treatment for clinically node-positive OSCC patients in many institutes. 9,14 Several authors have reported equivalent regional control and survival rates with protocols of SND followed by adjuvant therapy, such as radiotherapy or chemo-radiotherapy, compared to RND. 5,6,16 However, most patients who had pathologically proven metastatic lymph nodes received high dose post-operative adjuvant therapy; therefore, it is difficult to assess whether the control of neck disease was accomplished by proper surgery or by adjuvant therapy.

However, there are many widely known long-term morbidities caused by chemo-radiotherapy, such as xerostomia, dysphagia or neck fibrosis. 17,18 If we could accurately predict the possibility of nodal metastasis with the patients' clinical characteristics and remove the suspicious area appropriately, we could decrease the severity of adjuvant therapy. However, this study is only a retrospective study, so there is still a long way to go until we are able to precisely predict nodal metastasis and subsequently reduce the dosage or extent of adjuvant therapy. Future multi-center studies might be helpful in overcoming these obstacles.

Many authors have suggested that patients with a greater than 20% risk of occult metastases, based on the anatomic location and the T stage of the primary tumor, should undergo elective ND. 19,20 In this study, the occult metastasis rate of level IV in patients who have suspicious nodes in level I, II or III was 16% (11 of 67). However, the occult metastasis rate for level IV in patients under the N stage of cN2a was 6% (2 of 33) and 26% (9 of 34) in patients over cN2b. The difference between the two groups was statistically significant in this series (p=0.025). Hence, we should consider a removal of level IV lymph nodes in patients with an N stage of cN2b or higher. However, the occult metastasis rate to level V was 7% (5 of 72) and 13% (5 of 38) even in patients over cN2b. This result was similar to our previous report 21 and other studies. 22,23 Based on these studies and our own clinical data, the policy of neck treatment for OSCC changed in 2005 and since then, patients under the N stage of cN2a have been treated with SND I-III rather than modified radical ND or RND. Therefore, we were able to compare the oncologic outcome of cN1 or cN2a OSCC patients according to neck treatment, CND or SND I-III. In this series, SND combined with proper adjuvant therapy achieved a survival rate comparable to CND in patients under the N stage of cN2a OSCC. In addition, regional recurrence did not occur in level IV or V in either group.
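As a worked illustration of the occult-rate comparison quoted above, a Fisher's exact test can be run on a 2x2 table assembled from the stated counts (2 of 33 in the cN2a-or-lower group versus 9 of 34 in the cN2b-or-higher group); this is a sketch for illustration only, and the published p-value may derive from a differently constructed table.

```python
# Illustrative only: 2x2 table assembled from the occult level IV metastasis
# counts quoted in the text (<=cN2a: 2/33; >=cN2b: 9/34), not the authors' dataset.
from scipy.stats import fisher_exact

table = [
    [2, 33 - 2],   # <=cN2a: occult level IV metastasis vs. none
    [9, 34 - 9],   # >=cN2b
]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.3f}, p = {p_value:.3f}")
```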
In conclusion, due to the low occult metastasis rate, the equivalent survival rate and regional control, and the absence of regional recurrence in levels IV and V in patients with cN2a or lower OSCC, we suggest that levels IV and V can be excluded from therapeutic ND in these patients; the exclusion of level IV or V in therapeutic ND for patients with an N stage of cN2b or higher needs further investigation.
Understanding parental views of adolescent sexuality and sex education in Ecuador: a qualitative study
Parents' contribution to sex education is increasingly receiving research attention. This growing interest stems from recognition of the influence that parental attitudes may have both on young people's sexual attitudes and behaviour, and on school-based sex education. Studies regarding parental attitudes towards sexuality are, however, still rare. The two main objectives of this study were to explore parental views about sexuality and to understand parental attitudes towards sex education. Four focus group discussions were conducted with parents from high schools in Cuenca, Ecuador. Data were analysed using thematic analysis. The study revealed that parents held a restricted view about sex education, grounded in traditional religious ideas about sexuality, which led parents to understand it as a morally and physically dangerous activity. Although parents expressed a willingness to make good quality sex education available to their children, they reported having insufficient personal resources to fulfil that objective. The results of this study provide important information about the need to develop and adapt sex education to each specific cultural context, thereby confirming the importance of knowing about the cultural traditions and religious beliefs that may form obstacles to effective sex education for young people in Ecuador.
Introduction
Sexuality is a vital aspect of human development with biological, psychological and social components, which may facilitate identity, well-being, pleasure, affectivity, relationships and reproduction (Formenti 2005; Ahmadi 2010). Sexuality also refers to the human potential of consciousness and specific forms of behaviour that are likely to change at different stages of life (Tiefer 1995; Zubarew 2006). Such a broad-based understanding of sexuality implies that a broad perspective is also needed for sex education. In particular, sex education should go far beyond providing information within a biology or social science course, and should include the nurturing of skills, attitudes and behaviours, as well as critical reflection on personal experiences in the arena of relationships and sexuality (Halstead and Reiss 2006). However, when sex education is understood in such a holistic manner, it presupposes something more than the provision of school-based sex education. It becomes necessary to understand the contribution of a [...]

[...] contributes to a sexual double standard, according to which greater freedom is assumed for men than for women in terms of sexual behaviour (Sierra et al. 2007).
Latin American cultures are also characterised by the strong influence of religion and the church on social policy, especially in the field of (sex) education. More generally, the impact of religion and religious ideas on sexuality has long been recognised (Haffner 2011). Current views about sexuality in Latin America are still strongly based on definitions of sexuality, and on religious teachings that emphasise procreation as the only purpose of sex. These stress sexual behaviour as exclusively reserved for married couples, virginity as highly valued and masturbation and homosexuality as practices to be admonished (Hubbard 1990;Daniluk and Brown 2008).
In Ecuador, in recent years there has been a rejection of formal sex education programmes because the content of these programmes was thought to be incongruous with religious teaching. Moreover, implementing such programmes has been seen as overruling parents' right to be primary sex educators (Ecuadorinmediato.com 2006;Dupret 2007;Clavijo 2011), a right that has been recognised in international agreements (Convención Iberoamericana de Derechos de la Juventud 2005; IPPF 2009) and is guaranteed by the National Constitution (Constitución de la República del Ecuador 2008). In the same Constitution, however, the right to autonomous decision-making about sexuality and sexual orientation, and the right to free access to sex education taking into account a gender and empowerment approach, are also recognised. In this respect, it is striking that the last National Census held in Ecuador revealed high rates of teenage pregnancy with consequent hasty marriages, as well as high divorce rates, which are all indirect indications that there is room for improving the sex education young people receive. Of all the cities in Ecuador, Cuenca appeared to perform especially badly in terms of adolescent sexual health indicators (INEC 2012). This may possibly be linked with the fact that Cuenca has also been identified as a city with strong cultural traditions that are reflected in the attitudes and beliefs of its population. This raises questions about how these same traditions might be reflected in the sex education given to young people.
Therefore, the aims of the present study were to (a) explore the views of parents in Cuenca about sexuality; (b) examine parental perceptions and emotions about sex education; and (c) understand the problems parents face in providing sex education within the family.
Methods
Because of the exploratory nature of the study, a qualitative research strategy was chosen. A qualitative approach allows for the examination of a broad range of parental ideas and perspectives on this topic, without imposing strongly preconceived notions from the researchers. This study's primary data were collected during the academic year 2008 -2009. The study was based on focus group sessions conducted with both male and female parents of young people attending high schools in Cuenca.
Participants
Since the study was part of a broader research project aimed at improving current sex education programmes in schools, the parents who participated in this study were members of local Parents Committees. In Ecuador, every class has a parents committee constituted by four elected parents who represent parents' views to the school authorities. In three high schools -two public and one private -participating in the broader study, parents were recruited through initial contact with the high school authorities and were included based on the fact that they had at least one child in the 12-19 year age range at the time of the focus group sessions.
The group of participants consisted of 20 parents: 8 men and 12 women. The average age of participants was 44 years old (ranging from 33 to 64). The average number of children they had was two, with an average age of 16 years old. The majority of parents were married (n = 14), some were divorced (n = 5) and one of the participants was widowed. In terms of educational level, 3 parents had only completed primary school, 11 had completed secondary school and 6 had completed higher education. With regard to religious affiliation, most of the parents reported themselves to be Catholic (n = 19), with only one reporting herself to be Evangelical, thereby reflecting the distribution of religions in accordance with the national distribution in Ecuador (INEC 2012).
Data collection
The use of focus groups as a method of data collection provided the opportunity to study individual beliefs, attitudes, values, norms and experiences within the context of group interaction (Wagstaff, Abramson, and Pinkerton 2000) as well as to explore more latent attitudes, opinions and behavioural patterns (Byers, Zeller, and Byers 2002). Focus groups of participants included male and female parents, with some participants from each school also being married to one another and taking part in the same focus group. The focus group discussions were conducted in Spanish and were moderated by a female developmental psychologist in her early 40s, although the principal researcher also attended all groups. All discussions were recorded, transcribed and checked for accuracy. After participating in the focus group discussions, parents were invited to attend a meeting about sexuality and sex education in which they could discuss their main concerns about their children's sexual development with members of the research team.
Data analysis
Data were processed using inductive thematic analysis (Braun and Clarke 2006), which was performed using Atlas-ti software (Muhr and Friese 2004). Transcripts were reviewed by three independent coders. First, the data were carefully read to identify the codes relevant to this research topic, and a list of codes was created. Second, codes were sorted and collated into themes based on which thematic maps of analysis were built. Third, the internal homogeneity and external heterogeneity of codes and themes were analysed and, for each individual theme, a detailed analysis was conducted in order to identify subthemes.
Throughout the entire process of analysis, constant comparative method was used, continually comparing the codes against each other, and the codes and themes with the data. After four focus group sessions, data saturation was reached.
Different strategies were used to optimise the reliability of the findings, including coordinated communication between the coders through regular meetings in which analyses were shared, the cross-checking of codes through the comparison of independently derived researcher results, peer debriefing with other project members reviewing the outcome of the study, and member checking with the final report being returned to participants for comment.
The quotations used in this paper were translated into English by the principal researcher and then translated back into Spanish by one of the co-investigators.
The English language quotations were modified where necessary to ensure that the original meanings from the Spanish quotations were retained.
Ethics
Approval for this project was granted by the Institutional Research Board of the University of Cuenca. Parents were informed about the objectives, procedures and benefits of the study. Participation was voluntary and each participant signed an informed consent form in which confidentiality and the anonymous analysis of data were assured.
Findings
Parents were first asked to express their views about sexuality, since understanding their perceptions about sexuality would facilitate a better exploration of their views about sex education.
Parents' views about sexuality
Parents stated that talking about their children's sexuality generates concern and anxiety, especially when referring to current indicators of sexual behaviour among young people in Cuenca, e.g. sexual initiation at increasingly younger ages, the high number of unplanned teenage pregnancies and the consequential requirement to marry.
Parents' narratives revealed that their views regarding sexuality are closely related to the cultural forces and societal traditions that have shaped their experiences. For participants, sexuality was related to moral values and social mores that change neither over time nor from place to place. Of the different features informing parental views, religion and machismo were the most salient factors. Parents recognised that religionboth Catholic and Evangelical -had exerted, and still exerts, a fundamental role in setting the parameters of behaviour in the area of sexuality. In fact, parents referred to religion, and the rules they had learned in, and promoted by, the church, as their immediate point of reference for their understanding of sexuality.
In conjunction with these religious undertones, parents mentioned strong links between sexuality, love and commitment. This demonstrates a somewhat narrow view of sexuality equating sexuality with sexual intercourse, restricted to adult heterosexual couples within the institution of marriage and with reproduction being the main reason for this union.
Parents considered that premarital sex could potentially be seen as debauchery, or would even amount to what they considered to be prostitution: For me, sexuality should be taken seriously, so I tell my daughter that if you give yourself to your boyfriend [which means having sex with him], and then you break up with him later and another boyfriend comes along and you give yourself to him too . . . that's obscene! That, for me, is like prostitution! And I do not agree with that! (Teresa, age 55) Furthermore, virginity until marriage and fidelity were viewed by parents as the most valuable qualities to instil in their offspring. This view of sexuality - as something exclusively for adults - implies a lack of acceptance of young people's sexuality and, therefore, parents refused to talk about the use of contraceptives. Most usually, when parents were asked about contraceptive use, the conversation was redirected towards emphasising the importance of abstinence as the only 'fool-proof' method: Participant: In relation to that I have said: 'men can also wait until marriage, [it] is not obligatory to have sexual relations before marriage, men can also be chaste until they get married and then the two of you can learn together. In that way, you'll also prevent so many diseases that there are nowadays . . . sexually transmitted diseases and so on . . . '. (Catherine, age 39) The conception of sexuality as an expression of love within a partnership was reflected in the rejection of other sexual practices such as masturbation. Some parents indicated that masturbation was believed to cause brain damage in young people and sexual problems in their future marriage. Somewhat paradoxically, other parents suggested that masturbation might be acceptable for single men as a strategy for avoiding sexual intercourse, but it would not be acceptable for married men, whereas for women, masturbation was not even a topic for discussion. The influence of religious admonition was also reflected in the opposition that parents showed towards abortion and homosexuality, although parents had different reactions to the two topics. Parents avoided discussing the issue of abortion by expressing their clear and firm rejection of it; they considered it to be a crime and therefore not acceptable under any circumstances: From the moment you get pregnant, even if the woman has been raped, you could not do anything like this . . . because life already exists there! (Mercedes, age 44) While discussion of abortion was limited, the issue of homosexuality was raised several times during the focus group discussions and generated a series of questions and concerns. Parents expressed a distinct lack of awareness by associating homosexuality with mental illness.
Another important cultural factor that parents brought up was machismo and its relationship to traditional gender role expectations and norms. Machismo implies a double standard with clear differences and inequalities between men and women in terms of sexuality, and this was expressed as a clear dichotomy in the parents' judgements. Women were considered to be vulnerable and in need of protection, while men were presented as being guided by an uncontrollable sex drive and irresponsible in terms of sexual behaviour. As such, abstinence before marriage was expected of women, while for men sexual experience was acceptable and even encouraged. Correspondingly, the fears and concerns expressed by parents differed according to their children's gender. For daughters, parents expressed a permanent fear of the possibility of an unwanted pregnancy and the negative consequences that this could bring, especially because of the social rejection involved. In this context, the social devaluation of single mothers became clear, linked as it is to economic factors affecting their quality of life: I am afraid of two things: that my daughter would become a single mother, and that my daughter would not become a professional. Because women, wherever we are, are still in a situation where a single mother is rejected. In my opinion, a single mother will no longer have the opportunity to pursue a career. (Yolanda, age 34) For sons, parental concerns were more focused on avoiding contracting sexually transmitted diseases and the possibility that they might 'damage a girl' by not controlling their sexual urges. Participants did, however, recognise that society has not been fair to women due to the existence of machismo, and they expressed an intention to change society for future generations. This intention to change was, however, mainly expressed by those parents who had daughters: I think that we live in a machista society and we are fighting to change it. This is necessary, at least when you have a daughter... that gives you a reason to reflect. (Daniel, age 55)
Parents' views about sex education
In discussing sex education, parents were first questioned about their own experiences of the sex education they themselves had received and, thereafter, they were asked about their experiences of sex education as parents.
Sex education in the previous generation
Parents stressed that during their own adolescence, formal sex education had been almost non-existent: I am thirty-six years old and nobody has talked to me about it [sexuality], or at least I don't remember. (Edison, age 36) With regard to sex education at home, participants did not recognise this as a formal process, although they did mention that some topics had been addressed by their parents, and these showed clear gender differences. Women reported that they had received information about menstruation, virginity, fidelity and marriage. In contrast, men reported that they had not received any specific information at home.
Parents indicated that their experiences of sex education at home had been based on an informal process of control and repression of sexuality, resulting in feelings of loneliness and frustration: My life was very repressed in my teens. My father was very strict and controlling in that respect. Back then, my life was very limited, my youth was like a lonely field . . . that was a life that I've never lived. (Luis, age 55) With regard to sex education at school, participants recognised that they had received sex education, but they also said that it had been largely focused on biological aspects and on the prevention of sexually transmitted diseases and pregnancy.
Sex education at home nowadays
When parents were asked about current efforts to address the topic of sexuality with their children, their comments ran along two lines: on the one hand, they expressed their desire and willingness to provide sex education at home, but on the other hand, they mentioned several significant obstacles to doing so. Parents indicated that their main limitation was their own lack of knowledge. In addition, parents mentioned that their children had received so much information already and, as a consequence, they refused to talk about the topic anymore, which gave parents the feeling that 'they already know everything!' Furthermore, parents also indicated that talking with their children about sexuality generated feelings of shame and anxiety.
Broaching the issue of sexuality was a polarised problem for parents based on their gender. Informants conveyed that, in most cases, it was the mother who talked about the issue with the children, and that daughters received more attention than sons. The role of the father seemed to be restricted to that of an authority figure, warning children about risky situations, preventing the transgression of rules and giving reprimands or punishments in the case of transgression. Even though some parents considered the father to be the best person to talk to their sons, they also recognised the fact that fathers do not generally talk or communicate about this topic with them. Moreover, and quite strikingly, communication between father and daughter in the area of sexuality was thought to be disrespectful and an intrusion into a daughter's privacy.
Gender differences are thus indicative of cultural patterns whereby girls and boys receive disparate forms of sex education. For daughters, parents reported that they seek intimate communication and aim to be careful and somewhat repressive with them, while for sons, a repressive approach was less evident and communication was less frequent.
Parents expressed clear aims for communicating about sexuality. It was clear that their main objective for sex education was to prevent premarital sex and to promote abstinence. To reach this goal, parents described using different strategies to forestall the onset of sexual intercourse and especially to protect their daughters. These strategies included formal discussion, threats, life project promotion and efforts to develop values, while other strategies focused on more informal processes such as modelling and controlling.
The majority of strategies took the form of a kind of intimidation. Sex education in families appeared to be mainly based on informing children about the risky aspects of sexuality and creating a climate of fear around sexuality: [When] I talk to my daughters, [I do] not just tell them that sex is bad, but I tell them about venereal diseases, that the worst things that can hurt a woman are AIDS and HPV, which is a major cause of cancer of the uterus, and many other things . . . I also tell her that she might be affected psychologically . . . . (Magui, age 44) This fear of sexuality was also reinforced by threatening young people in order to deter them from taking sexual risks: Once my daughter asked me: 'Daddy, what would you do if I got pregnant?' and I told her: 'You would have to have your child and then you would need to sell candies to bring him up. You would have to stop studying and start working!' (Miguel, age 54) It was important for some parents to emphasise long-term life goals, stressing that sex would interfere with the attainment of these goals. Parents stated that they stressed that an unwanted pregnancy would mean a lost opportunity to enter a profession, and that early commencement of sexual activity might therefore undermine success in later life.
Thus, parents often framed their method of sex education as developing values in their children, stressing that the best strategy to help their children to face reality was moral education. Values mentioned by parents were chastity, fidelity and respect for self and others. These values were often emphasised through some form of modelling as a way of teaching their children about sexuality: We can be a role model for them. To create the environment where they grow up and where they learn from the time they are children. It is not only necessary to talk about sex, and to say: 'sex . . . and sex!' I think if we have defined moral and ethical principles, that is enough. (Silvia, age 52) As common as modelling were efforts to more directly control children's behaviour. Parents' comments showed that they used a variety of tactics to control their children, and they cited several examples of strategies they undertook, such as setting strict curfews, giving them rides to and from parties, getting to know their friends and activities, checking their belongings - even diaries - and establishing rules for social interaction with romantic partners (e.g. teenagers are not allowed to spend time alone with a romantic partner).
Sex education in schools
Parents indicated that they trusted teachers as sex educators, believing that they do have the correct information and skills needed to address sexuality in a professional way. Despite the fact that parents saw schools as the best place to provide sex education, they also mentioned that the education their children currently received in schools merely provided biological information without focusing on values education. In addition, sex education at school was perceived as being given too late when young people already have too much information from various other sources that are not considered to be the best influence.
Parents expressed a wish for the chance to gain more knowledge to enable them to address this issue. They wondered whether schools could help them to become better sex educators, suggesting that schools should offer sex education for parents as part of the curriculum: I would really like to take lessons about sexuality, it would allow us to educate and guide them [the children]. All schools should have these programmes. Courses for parents of high school students would be ideal. (Francisco, age 54)
Discussion
This study has explored parents' views about sexuality and sex education for young people in a Latin American context. It revealed that parents hold a restricted view of sexuality in which cultural values such as conservatism, religious beliefs and machismo are reflected, and where the societal pressure to maintain tradition seems to create a situation of uncertainty, confusion and concern. In line with tradition, parents understood sexuality as being connected to moral and physical dangers. This was clearly illustrated by their positive attitudes towards virginity and their use of strategies to protect their children, especially their daughters, from the risks that expressing sexuality was perceived to involve. In the current study, parents related sexuality to the moral values promoted by the church. As a result, sexuality could only be accepted if properly channelled into and within a marriage (Osborne 1995;Caricote 2006). This vision fits with the understanding that sexuality is intended exclusively for procreation, implying that all forms of sexual expression and enjoyment other than heterosexual intercourse are considered 'bad' or 'sinful' (Hubbard 1990;Daniluk and Brown 2008). The denial and rejection of certain behaviours, such as masturbation and contraceptive use, lead to an avoidance of parent-child communication, a gap that may currently be filled by other sources (e.g. friends, peers, Internet, media) that may not always offer accurate information.
Parents in this study reported not having received a satisfactory sex education themselves when they were young (Ingham et al. 1998). The fact that parents had not received sex education -in terms of formal discussion at home -was reflected in their concerns about assuming their role as their own children's sex educators (Walker 2004;Meschke and Dettmer 2012). In the current study, parents associated a lack of understanding about sexuality with feelings of fear and repression and expressed the hope that their children would be spared from such feelings, mentioning their intention to provide good sex education at home. Despite these intentions, though, parents described several obstacles to reaching this goal. The obstacles mentioned by this group of parents are in line with those previously reported, and included a lack of knowledge and sources of information (Kakavoulis 2001;Gonzalez-Lopez 2004;Kirana et al. 2007), lack of communicative skills (Soper and Tristán 2004;Walker 2004), embarrassment (Ingham et al. 1998;Rosenthal and Feldman 1999;Fogarty and Wyatt 2006) and lack of a formal sex education model to follow (Soper and Tristán 2004;Walker 2004).
In the present study, mothers appeared to be responsible for talking with their children about sexuality. This also confirms findings of previous studies (Miller et al. 1998;Miller, Benson, and Galbraith 2001;Caricote 2006;Kirana et al. 2007;El-Shaieb and Wurtele 2009;Lagina 2010;Meschke and Dettmer 2012). Although fathers participated enthusiastically in the focus groups and showed interest in getting involved in the process of their children's sex education, their current actual involvement seems to be limited (Meschke and Dettmer 2012).
Also in line with previous research (Moore, Peterson, and Furstenberg 1986;Miller et al. 1998;Rosenthal and Feldman 1999;Walker 2001, 2004;Gonzalez-Lopez 2004;Caricote 2006), it appears that formal discussion about sexuality in the family more often takes place with daughters than with sons. This finding may be explained by the double standard present in Latino cultures, which implies that in cases of unwanted pregnancy, the woman is the one who would assume responsibility and be confronted with the direct consequences (Miller et al. 1998;Gonzalez-Lopez 2004). However, in this study, participants also discussed how unfair society has been for women, which also seems to be an important factor in understanding the emphasis given to sex education for their daughters.
The onset of sexual relationships and the possibility of an early pregnancy were seen as losing the chance to lead a successful life and to attain socio-economic independence. Parents linked the importance of preserving virginity with their daughters' future socioeconomic success in the context of a society where there are clear power imbalances between the sexes. Virginity maintained until marriage was seen as a requirement for improving living conditions in a society with large differences between socio-economic strata, in which an early pregnancy is linked with the cycle of poverty. This interpretation of parental concerns in terms of their socio-economic significance rather than the preservation of virginity per se has been previously identified in Mexican emigrant fathers (Gonzalez-Lopez 2004) and goes beyond the validation of virginity as a religious concern. More exploratory research is needed to better understand this shift in the meaning of virginity.
In accordance with the view of sexuality being exclusively for adult couples within marriage, young people's sexuality needs to be repressed, regulated and continuously watched over. As a consequence, strategies used for bringing up offspring take the form of warning, threatening and controlling, an educational style that has previously been identified in Latino families (Raffaelli and Ontai 2001). Although it has been reported that parental control is associated with a lower frequency of risky sexual behaviours among young people, it has also been shown that excessive or coercive parental control is associated with negative outcomes in behaviour (Miller, Benson, and Galbraith 2001). Moreover, sex education based on a limited view of sexuality, with a clear double standard and a focus on control, restriction and warning about risky situations, implies a lack of recognition and non-acceptance of the sexual rights of young people. These rights are, however, recognised in different international agreements (Convención Iberoamericana de Derechos de la Juventud 2005; IPPF 2009) and are stated in the National Constitution, which mentions the right for free, informed, voluntary and responsible decision-making about sexuality and sexual orientation (Constitución de la República del Ecuador 2008).
In line with their views about sexuality, parents reported providing sex education at home that could best be characterised by an abstinence-only approach. Parental support for an abstinence-only model may conflict with the contents of programmes that are currently implemented in high schools, creating a confusing environment for young people. Furthermore, parents' refusal to provide information about contraceptive methods limits young people's access to sexual health, which goes against the furtherance of the achievement of universal access to reproductive health as stated in the Millennium Development Goals (United Nations 2000).
Faced with uncertainties and confusion as primary sex educators, parents turn to schools for help, expressing their confidence in the capability of teachers and schools to provide sex education. However, in common with previous studies, parents had several suggestions for improving sex education in schools, including the fact that school education should go beyond a limited biological approach and include a focus on values and the relational aspects of sexuality (Kakavoulis 2001), and that sex education at school should start at an early stage (Kakavoulis 2001;Kirana et al. 2007).
Finally, as a strategy for promoting and improving their participation in the sex education of their children, parents suggested that schools should offer specific parent training programmes. Such programmes should not only provide parents with information and knowledge, but also help to develop the skills needed to better approach the topic of sexuality and improve communication with their children. This suggestion fits well with previous research that emphasised the potential of schools to help parents improve communication about sexuality (Miller et al. 1998;Walker 2001, 2004;Fogarty and Wyatt 2006;Kirana et al. 2007;Ballard and Gross 2009;Lagina 2010;Meschke and Dettmer 2012).
This study has several limitations that should be mentioned. First, there was a rather broad age range among the young people whose parents participated. For this reason, it is not possible to identify specific concerns or ideas that parents might have in relation to the age of the children and their specific phase of development. A second limitation is that parents who participated in this study were members of the parents committee at each high school, meaning that they were elected by the other parents to be representatives for the school authorities. This procedure could create bias as these parents may be especially involved and concerned about the education of their children in general and sex education in particular. Linked with this limitation is the fact that the group of parents who participated in the focus groups was somewhat heterogeneous in terms of age and level of education.
Conclusion
The results of this study offer important insights that may facilitate the improvement of educational strategies and educational programmes about sexuality for young people in Cuenca. After a long history of failed attempts to provide sex education programmes, these results highlight the importance of understanding cultural traditions and religious views on sexuality that may constitute obstacles to the success of such programmes. These results also emphasise the need to identify opportunities for sex education in which the cultural characteristics of the population to which it is oriented are recognised. Sex education based on scientifically appropriate knowledge may help to overcome myths and taboos about sexuality. The study also articulates some of the confusion and uncertainty of young people's parents being caught between traditions and their wish for a better future for their own children, an uncertainty that reflects the ignorance of the parents regarding the sexual rights of young people and the need for a learning space for parents. The findings from this study may inspire school authorities and social scientists to improve the environment in which young people develop, incorporating a sexual rights approach as required by national and international legal frameworks. Policy-makers may also find the findings helpful as a means of making future decisions more evidence-informed.
Adelic Ahlfors-Bers theory
The universal arithmetic one dimensional solenoid $S^1_{\mathbb{Q}}$ is the Pontryagin dual of the additive rationals $\mathbb{Q}$ and it is isomorphic to the adèle class group ${\mathbb{A}}_{\mathbb{Q}}/{\mathbb{Q}}$. It is also isomorphic to the algebraic universal covering of the unit circle $S^1$ obtained as the inverse limit of the tower of its finite coverings. It is the boundary of the surface lamination with boundary obtained as the algebraic universal covering of the punctured closed disk $\overline{\Delta}-\{0\}\subset{\mathbb{C}}$. The interior of this lamination is the inverse limit of the tower of finite coverings of the open punctured disk $\Delta-\{0\}$. The latter is a Riemann surface lamination denoted $\mathbb{H}_{\mathbb{Q}}$ and it is foliated by densely embedded copies of the hyperbolic plane $\mathbb{H}$. The boundaries of the leaves are densely embedded copies of $\mathbb{R}$ in $S^1_{\mathbb{Q}}$. In this framework the pair $(S^1_{\mathbb{Q}},{\mathbb{H}}_{\mathbb{Q}})$ is the adelic version of the pair $(\mathbb{R},\mathbb{H})$. The stage is set to develop the adelic theory of Beltrami differentials, Ahlfors-Bers theory, quasi-symmetric homeomorphisms of $S^1_{\mathbb{Q}}$ and Teichmüller theory. This paper is a first step towards this goal, in parallel with the work by Dennis Sullivan on the universal Teichmüller spaces of Riemann surface laminations.
Introduction
Since its creation by Claude Chevalley and André Weil, the ring of adèles of the rationals A_Q = R × ∏'_p Q_p (the restricted product over the primes p) has played a fundamental role in number theory, for instance in class field theory [RV], [Ta] and the Langlands program. The canonical diagonal inclusion i : Q → A_Q embeds Q as a discrete cocompact subgroup of A_Q, which we identify with Q. The quotient A_Q/Q is the adèle class group; with its additive structure it is a compact abelian group, and its Pontryagin dual is the additive group of the rationals (Q, +) with the discrete topology. There is another description of A_Q/Q as the inverse limit of all finite coverings p_n(z) = z^n (z ∈ S^1, n ∈ Z) of the circle S^1. This is a one-dimensional solenoidal compact abelian group in the sense of Pontryagin. It is a sort of "diffuse circle" (a lamination, a current or a foliated cycle in the sense of Sullivan [Su2]) and in fact we denote this group by S^1_Q = Q^∨, the Pontryagin dual of Q, to convey the idea that it is a generalization of a circle. This solenoid is a lamination with dense leaves which are embedded copies of the real line. Considering S^1_Q × [0, ∞) one obtains a 2-dimensional lamination with boundary S^1_Q whose leaves are densely embedded copies of the closed upper half-plane H. The interior H_Q = S^1_Q × (0, ∞) is a two-dimensional Riemann surface lamination with hyperbolic dense leaves isometric to the upper half-plane with the Poincaré metric. The metric is given explicitly by ds^2 = (dx^2 + dt^2)/t^2, where dx is the natural flat metric on the one-dimensional solenoid S^1_Q. The laminated space H_Q is the adelic hyperbolic upper half-plane. This lamination can also be obtained as the inverse limit of the coverings p_n(z) = z^n, |z| ≤ 1, n ∈ Z, of the punctured closed unit disk. The interior of this lamination is the inverse limit of the tower of coverings of the punctured open unit disk ∆* = ∆ − {0}. Another important locally compact abelian group is the inverse limit of the tower of coverings of C*, the algebraic solenoid C*_Q. As a group it is isomorphic to S^1_Q × R_•, where R_• = {t ∈ R : t > 0} is the multiplicative group of the positive reals. We endow C*_Q with its Haar measure η. The 2-dimensional solenoid C*_Q is a Riemann surface lamination foliated by densely embedded copies of C. The leaves are the orbits of a free holomorphic action of C.
The corresponding notions of the operators ∂_z and ∂_z̄ and the notion of quasiconformal mappings can be introduced on C*_Q. Given µ ∈ L^∞(C*_Q, η) with ||µ||_∞ < 1, one can define the Beltrami equation
∂_z̄ f = µ ∂_z f.     (1)
Now we have the perfect setting to study the Ahlfors-Bers theory and the corresponding Teichmüller theory. This is the main subject of this paper.
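To fix conventions, the following LaTeX sketch spells out what a solution of equation (1) is taken to mean on each leaf. The dilatation bound is the standard consequence of ||µ||_∞ < 1, and the normalization at 0, 1, ∞ is the one used below for the periodic case; the precise phrasing here is ours and not a quotation from the text.
\[
  \mu_f \;:=\; \frac{\partial f/\partial \bar z}{\partial f/\partial z} \;=\; \mu
  \quad\text{leafwise},
  \qquad
  \|\mu\|_{\infty} = k < 1
  \;\Longrightarrow\;
  K(f) \;\le\; \frac{1+k}{1-k},
\]
\[
  f(0)=0, \qquad f(1)=1, \qquad f(\infty)=\infty .
\]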
We will restrict the set of Beltrami differentials considered to those whose restrictions to every leaf and to every fiber belong to L^∞. A necessary condition for the existence of a quasiconformal solution of equation (1) is that µ should be uniformly vertically L^∞-continuous. Consider the canonical left action m : Ẑ → Aut(C*_Q) such that m(a) is left multiplication by a. We say that µ ∈ L^∞(C*_Q) is uniformly vertically L^∞-continuous if the map Ẑ → L^∞(C*_Q), a ↦ µ ∘ m(a), is continuous at zero. For every fiber F_z = π_1^{-1}(z) (z ∈ C*), the restriction µ_z of a vertically L^∞-continuous µ to F_z can be represented by a continuous function Ẑ → C. The function of representatives C* → C^0(Ẑ, C), z ↦ µ_z, is not necessarily continuous; in fact it can be quite bizarre. This transversal continuity condition is indeed necessary: if there exists a quasiconformal solution f_µ : Ĉ_Q → Ĉ_Q of equation (1), then in particular f_µ is uniformly continuous along the fibers, and so must be µ. Here Ĉ_Q denotes the adelic Riemann sphere: it is the inverse limit of the projective system of branched coverings {Ĉ, p_{n,m}}_{n,m≥1, n|m}, where p_{n,m}(z) = z^{m/n} and Ĉ is the Riemann sphere.
However, this condition is not sufficient to ensure the existence of quasiconformal solutions. An example is given in section 4.
Among these Beltrami differentials we have the periodic ones: we say that µ ∈ Per_1 is a periodic adelic Beltrami differential if there is some natural number n and a Beltrami differential µ_n ∈ L^∞(C)_1 such that µ = π_n^*(µ_n). The importance of periodic adelic Beltrami differentials is that they trivially admit a quasiconformal solution of their respective Beltrami equation: consider the periodic adelic Beltrami differential µ = π_n^*(µ_n) and the quasiconformal solution f_n of the µ_n-Beltrami equation fixing 0, 1, ∞, and define f to be the corresponding leaf and orientation preserving lift of f_n (a sketch of the defining relation is given below). Then f is the quasiconformal solution of the µ-Beltrami equation (1).
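A natural way to make the lift explicit is to require semi-conjugacy with f_n through the projection π_n, as in the following LaTeX sketch; this is a reconstruction consistent with µ = π_n^*(µ_n), not a formula quoted from the text.
\[
  \pi_n \circ f \;=\; f_n \circ \pi_n \quad\text{on } \hat{\mathbb{C}}_{\mathbb{Q}},
  \qquad f \ \text{leaf and orientation preserving},\quad f(0)=0,\ f(1)=1,\ f(\infty)=\infty,
\]
\[
  \text{so that, } \pi_n \text{ being leafwise holomorphic, }\quad
  \mu_f \;=\; \pi_n^{*}(\mu_{f_n}) \;=\; \pi_n^{*}(\mu_n) \;=\; \mu .
\]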
At this point it is natural to ask for a topology T such that the interior of the closure of these Beltrami differentials consists of new Beltrami differentials for which there exist quasiconformal solutions of the respective Beltrami equations; i.e.
Bel(C_Q) = Interior( closure_T(Per_1) ).
The first natural guess would be the metric topology T_∞, but this does not work, since
Interior( closure_{T_∞}(Per_1) ) = L^vert_∞(C*_Q)_1,
and, as we said before, this is not a sufficient condition. We find a family of complete metric topologies T_{Ren,S} solving this problem. This is one of the main results of the paper. However, the optimality of these solutions remains an open problem. We would like to have necessary and sufficient conditions as well.
Compact solenoidal laminations by Riemann surfaces (solenoidal surfaces) appear in various branches of mathematics. For instance, following an original idea of Dennis Sullivan [Su], the paper [BNS] in Acta Mathematica constructs the universal Teichmüller space of the solenoidal surface Σ obtained by taking the inverse limit of all finite pointed covers of a compact surface of genus greater than one with a chosen base point. The sequence of chosen base points upstairs in the covers determines a point and a distinguished leaf L in the inverse limit solenoidal surface. In this space, the commensurability automorphism group of the fundamental group of any higher genus compact surface acts by isometries. By definition, this group is independent of the genus.
The space of hyperbolic structures up to isometry preserving the distinguished leaf on this solenoidal surface Σ is non Hausdorff and any Hausdorff quotient is a point.
The proof of this result relies on the recent deep results due to Jeremy Kahn and Vladimir Marković on the validity of the Ehrenpreis Conjecture [KM]. The remark by Sullivan is that the action of the commensurability automorphism group of the fundamental group is not only by isometries but also minimal. This action is described in the paper in Acta Mathematica [BNS] mentioned before.
Concerning dynamical systems theory, D. Sullivan [Su] studies the links between the universalities of Milnor-Thurston, Feigenbaum (quantitative) and Ahlfors-Bers. As he points out, S^1_2 × S^1 (his second example) is the basic solenoidal surface required in the dynamical theory of Feigenbaum's universality [Fe]. Here S^1_2 is the 2-adic solenoid. This work was continued, for instance, in the use of 3-dimensional hyperbolic laminations by Misha Lyubich and Yair Minsky in [LM]. Another important application of solenoidal surfaces follows from the fact that they parametrize tessellation spaces [Gh].
We hope that the theory of adelic Beltrami differentials developed in this work sheds some new light on these universalities.
In this paper we also describe different equivalent Teichmüller models. This is a straightforward generalization of the classical models. The relation with Sullivan's work is the following: there is a canonical continuous injective map relating the two constructions. Finally, we construct the p-adic generalization of the Nag-Verjovsky map of [NV]:
ι : Diff_P(S^1_p)/R_BL → T_P(1).
We prove that this map is differentiable and analytic. This is another main result of the paper. We would like to point out that one of the motivations for this map was its relation to string theory [HR], [BR1], [BR2], [Pe]. We believe that the theory developed here could be applied to p-adic string theory and its relationship with number theory.
Adelic solenoid
2.1 Adelic solenoid
In what follows we will identify the group U(1) with the unit circle S^1 = {z ∈ C : |z| = 1} and the finite cyclic group Z/nZ with the group of n-th roots of unity in S^1.
By covering space theory, for any integer n ≥ 1 there is the unbranched covering space of degree n, p_n : S^1 → S^1, given by z ↦ z^n. If n, m ∈ Z^+ and n divides m, then there exists a covering map p_{n,m} : S^1 → S^1 such that p_n ∘ p_{n,m} = p_m, namely p_{n,m}(z) = z^{m/n}. We also denote by the same letters the restrictions of p_n and p_{n,m} to the n-th roots of unity. In particular we have the relation p_{n,m} ∘ p_{m,l} = p_{n,l}. This determines a projective system of covering spaces {S^1, p_{n,m}}_{n,m≥1, n|m} whose projective limit is the universal one-dimensional solenoid or adelic solenoid S^1_Q = lim← (S^1, p_{n,m}). Thus S^1_Q consists of sequences (z_n)_{n∈N}, z_n ∈ S^1, which are compatible with the p_{n,m}, i.e. p_{n,m}(z_m) = z_n if n divides m.
The canonical projections of the inverse limit are the functions π_n : S^1_Q → S^1 defined by π_n((z_j)_{j∈N}) = z_n. Each π_n is an epimorphism. In particular each π_n is a character which determines a locally trivial Ẑ-bundle structure, where the group Ẑ := lim← Z/mZ is the profinite completion of Z, a compact, perfect and totally disconnected Abelian topological group homeomorphic to the Cantor set. Being the profinite completion of Z, Ẑ admits a canonical inclusion Z ⊂ Ẑ with dense image. We have an inclusion φ : Ẑ → S^1_Q and a short exact sequence 0 → Ẑ → S^1_Q → S^1 → 1, where the first map is φ and the second is π_1. The solenoid S^1_Q can also be realized as the orbit space of the Q-bundle structure Q → A → A/Q, where A is the adèle group of the rational numbers, which is a locally compact Abelian group, Q is a discrete subgroup of A and A/Q ≅ S^1_Q is a compact Abelian group (see [RV]). From this perspective, A/Q can be seen as a projective limit whose n-th component corresponds to the unique covering of degree n ≥ 1 of S^1_Q. The solenoid S^1_Q is also called the algebraic universal covering space of the circle S^1. The Grothendieck Galois group of the covering is Ẑ, the algebraic fundamental group of S^1_Q. By considering the properly discontinuous and free action of Z on Ẑ × R given by n · (a, x) = (a + n, x − 2πn), the solenoid S^1_Q is identified with the orbit space Ẑ ×_Z R. Here Z acts on R by covering transformations and on Ẑ by translations. The path-connected component of the identity element 1 ∈ S^1_Q is called the baseleaf [Od] and will be denoted by R_BL.
Clearly, R_BL is the image of {0} × R under the canonical projection exp : Ẑ × R → S^1_Q defined below, and it is a densely embedded copy of R. Hence S^1_Q is a compact, connected, Abelian topological group and also a one-dimensional lamination where each "leaf" is a simply connected one-dimensional manifold, homeomorphic to the universal covering space R of S^1, and a typical "transversal" is isomorphic to the Cantor group Ẑ. The solenoid S^1_Q also has a leafwise C^∞ Riemannian metric (i.e., C^∞ along the leaves) which renders each leaf isometric to the real line with its standard metric dx. So it makes sense to speak of a rigid translation along the leaves. The leaves also have a natural order equivalent to the order of the real line, hence also an orientation.
Summarizing the above discussion, we have a commutative diagram, where Ẑ is the adelic profinite completion of the integers and the image of the group monomorphism φ : (Ẑ, +) → (S^1_Q, ·) is the principal fiber. We notice that π_n(x) = π_n(y) implies π_n(y^{-1}x) = 1 and therefore y^{-1}x = φ(a), where a ∈ nẐ for some n ∈ Z ⊂ Ẑ.
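Since the maps exp, φ and ν are used repeatedly below, the following LaTeX sketch records one consistent choice of explicit coordinates. These formulas are our reconstruction, chosen so that π_1 ∘ exp = e^{ix}, ν(2πx) = φ(x) for integer x, and the Z-action is translation by ι(Z); they are not quoted from the text.
\[
  \exp(a, x) \;=\; \bigl(e^{\,i(x + 2\pi a_n)/n}\bigr)_{n\ge 1},
  \qquad a = (a_n)_{n\ge 1}\in\hat{\mathbb{Z}},\ a_n\in\mathbb{Z}/n\mathbb{Z},\ x\in\mathbb{R},
\]
\[
  \varphi(a) \;=\; \exp(a,0) \;=\; \bigl(e^{\,2\pi i a_n/n}\bigr)_{n\ge 1},
  \qquad
  \nu(z) \;=\; \bigl(e^{\,iz/n}\bigr)_{n\ge 1},\ \ z\in\mathbb{C}.
\]
With these choices exp(a + m, x − 2πm) = exp(a, x) for every integer m, so exp descends to the orbit space Ẑ ×_Z R, and π_1(exp(a, x)) = e^{ix}.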
• For the second item, the second exact sequence follows from exactly the same arguments as the first. Because π_1 = z^n ∘ π_n, we have the right commutative square. The left square is trivial (diagram chasing).
• π_1 : S^1_Q → S^1 is a fiber bundle with fiber isomorphic to Ẑ and monodromy the shift T(x) = x + 1.
• exp is a local homeomorphism.
• Restricted to a leaf, π_1 is a local homeomorphism.
• S^1_Q is the dynamical suspension of the shift T(x) = x + 1.
• S^1_Q is foliated by dense R-leaves.
Proof:
• If diam(U) < 2π then U is a trivializing neighborhood of S^1.
• Z acts as translations by ι(Z), and because ι(Z) is discrete in Ẑ × R, the action of Z is properly discontinuous. We conclude that exp is a local homeomorphism.
• By definition π_1 is an open continuous epimorphism. Restricted to a leaf and a trivializing neighborhood, π_1 is one to one.
• The foliation of Ẑ × R is invariant under translation by ι(a) for every integer a, hence it induces a foliation of the solenoid. Z is dense in its profinite completion Ẑ, and so is every coset of Ẑ/Z. By the preceding item, every R-leaf is dense in the solenoid.
Geometrically, the fiber decomposes as a disjoint union. As an example, consider the subsystem n_i = 2^i and the dyadic solenoid S^1_2 with fiber Z_2, the dyadic profinite completion of the integers. The dyadic solenoid is illustrated in Figure 1.
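As a concrete instance of the inverse limits above, the dyadic case can be written out explicitly; the following LaTeX display is a standard description included only as a worked example.
\[
  \mathbb{Z}_2 \;=\; \varprojlim_i\ \mathbb{Z}/2^i\mathbb{Z},
  \qquad
  S^1_2 \;=\; \varprojlim\bigl(S^1 \xleftarrow{\ z\mapsto z^2\ } S^1 \xleftarrow{\ z\mapsto z^2\ }\cdots\bigr)
  \;=\; \bigl\{(z_i)_{i\ge 0} : z_{i+1}^{2} = z_i\bigr\},
\]
and the first coordinate projection S^1_2 → S^1 is a fiber bundle with fiber Z_2.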
Tensoring the adelic solenoid with the group C* we get the algebraic solenoid C*_Q = lim← (C*, p_{n,m}). All the properties discussed before are shared by the algebraic solenoid with the natural extensions, and the proofs carry over verbatim. For clarity we state them once again for the algebraic solenoid.
Lemma 2.4. There is a short exact sequence 0 → Ẑ → C*_Q → C* → 1, where the first map is φ and the second is π_1, together with a corresponding commutative diagram.
We define the principal baseleaf ν : C → C*_Q; in particular, the immersion ν is a group morphism such that ν(2πx) = φ(x) for every integer x.
Lemma 2.5. There is a short exact sequence 0 → Z → Ẑ × C → C*_Q → 1, where the first map is ι, with ι(a) = (a, −2πa), and the second is exp.
• π_1 : C*_Q → C* is a fiber bundle with fiber isomorphic to Ẑ and monodromy the shift T(x) = x + 1.
• exp is a local homeomorphism.
• Restricted to a leaf, π_1 is a local homeomorphism.
• C*_Q is the complex dynamical suspension of the shift T(x) = x + 1.
• C*_Q is foliated by dense C-leaves.
Because ν(2πx) = φ(x) for every integer x, we have equivalent descriptions of C*_Q for every natural n. These are the appropriate descriptions for lifting the homeomorphisms z^{p/q}.
Lemma 2.7. We have a commutative diagram in which ι(a) = (a, −2πa).
Remark 2.1. Because the conjugate of z^n equals the n-th power of the conjugate of z, and conjugation extends continuously to the Riemann sphere Ĉ → Ĉ, there is a homeomorphism of Ĉ_Q sending x to its conjugate x̄, with π_n(x̄) equal to the conjugate of π_n(x) for every x ∈ Ĉ_Q. Because z̄ = z^{-1} on S^1, this relation extends to the solenoid S^1_Q, and by continuity the resulting composition is a leaf preserving homeomorphism fixing 0, 1, ∞.
As defined by D. Sullivan [Su]: A two dimensional solenoid is hyperbolic if every leaf is conformally covered by the disk.
Corollary 2.8. Consider the solenoid H_Q = π_1^{-1}(∆*), where ∆* is the open unit disk minus the origin. Then H_Q is a hyperbolic solenoid.
Proof: By equation (2) we have the covering exp : Ẑ × U → H_Q, where U is the hyperbolic upper half-plane.
Continuous maps and degree theory
The following lemmas and propositions tell us how continuity properties of solenoidal maps are related to limit periodic properties of their restrictions to the baseleaf. For pedagogical reasons, we introduce the notion of limit periodic functions as a particular case of almost periodic functions.
The following definition is due to Harald Bohr in 1924 [Bo]:
Definition 2.2. A function f : R → C is almost periodic if for every ε > 0 there is a relatively dense subset A ⊂ R such that |f(x + τ) − f(x)| ≤ ε for every x ∈ R and τ ∈ A.
There is a beautiful discussion of almost periodic functions in the context of constructive mathematics in [Br]. Restricting the relatively dense subsets to be of the form NZ for some natural N, we have:
Definition 2.3. A function f : R → C is limit periodic if for every ε > 0 there is a natural number N such that |f(x + 2πn) − f(x)| ≤ ε for every x ∈ R and n ∈ NZ.
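A simple concrete example may help fix ideas; the following lacunary series is limit periodic in the sense just described (this example is ours, chosen for illustration, and does not appear in the text).
\[
  f(x) \;=\; \sum_{k\ge 0} 2^{-k}\, e^{\, i x / 2^{k}} .
\]
Each partial sum f_K(x) = Σ_{k=0}^{K} 2^{-k} e^{ix/2^k} is 2π·2^K-periodic and ||f − f_K||_∞ ≤ 2^{-K}, so given ε > 0 one can take N = 2^K with 2^{-K+1} ≤ ε and obtain |f(x + 2πn) − f(x)| ≤ ε for every x ∈ R and n ∈ NZ.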
An interesting discussion relating limit periodic functions, solenoids and adding machines can be found in [Be]. The following generalization is the one needed for our subsequent theory:
Definition 2.4. A function f : C → C is limit periodic with respect to x if for every ε > 0 and compact set K ⊂ R there is a natural number N such that |f(z + 2πn) − f(z)| ≤ ε for every z ∈ R × K and n ∈ NZ.
Lemma 2.9.
• Consider a limit periodic function f : R → C. Then the map n ↦ f(2πn), Z → C, is uniformly continuous with respect to the relative adelic topology on Z. In particular, this map extends uniquely to a continuous map on Ẑ.
• Consider a continuous limit periodic respect to x function f : C → C. Then the map h : Z × C → C such that h(n, z) = f (z + 2πn) extends uniquely to a continuous map onẐ × C. Proof: • Consider an > 0. There is a natural N such that • Consider a compact set K ⊂ R and the map l : Z → C(R × K, C) such that l(n)(z) = h(n, z). Consider an > 0. There is a natural N such that for every z ∈ R × K and n N . In particular, if n − m ∈ N Z then hence there is a unique continuous extensionl :Ẑ → C(R × K, C). Finally, we have the unique continuous extensionĥ such thatĥ(a, z) =l(a)(z). Because the real line is σ-compact and continuity is a local property, we have the result.
The following Lemma justifies the name of limit periodic maps.
• For every limit periodic map f : R → C there is a sequence (f_n)_{n∈N} such that f_n is 2πn-periodic and (f_n) converges pointwise to f with respect to the divisibility net.
• A map f : C → C is continuous and limit periodic with respect to x if and only if there is a sequence (f_n)_{n∈N} of continuous maps such that f_n is 2πn-periodic with respect to x and (f_n) converges uniformly to f on bands R × K, where K ⊂ R is a compact set, with respect to the divisibility net. Moreover, the sequence (f_n)_{n∈N} can be assumed to be equicontinuous. Proof: • Consider F : Z × R → C such that F(n, x) = f(x + 2πn). Because f is limit periodic, by Lemma 2.9 for every x ∈ R the function F(·, x) is uniformly continuous, hence there is an extension F̂ : Ẑ × R → C such that for every x ∈ R the extension F̂(·, x) is continuous on the compact group Ẑ. Consider the inverse limit morphisms π_n : Ẑ → Z/nZ and define f_n(x) = n ∫_{nẐ} F̂(a, x) da, where da denotes the normalized Haar measure on the compact abelian group Ẑ. Note that for every x ∈ R the extension F̂(·, x) is integrable, since it is continuous.
Consider the shift T :Ẑ →Ẑ such that T (a) = a + 1. Because the Haar measure is invariant under the shift, T n is an automorphism of Ker(π n ) andF (T (a), x) = F (a + 1, x) =F (a, x + 2π) we have that f n is 2πn-periodic: Finally, for every > 0 and every x ∈ R there is a natural N ,x such that for every n N we haveF (nẐ, x) ⊂ U (F (0, x), ). In particular, for every n N .
• Consider F : Z × C → C such that F (n, z) = f (z + 2πn). Because f is limit periodic respect to x, by Lemma 2.9 there is a unique continuous extensionF :Ẑ × C → C of F . Consider the inverse limit morphisms π n :Ẑ → Z/nZ and define where da denotes the normalized Haar measure on the compact abelian groupẐ. Again, see that that for every z ∈ C the extensionF ( , z) is integrable for it is continuous.
Because F (n + 1, z) = F (n, z + 2π) andF is the continuous extension, we have the relationF (a + 1, x) =F (a, x + 2π) hence there is a continuous functionf such that: Consider the annulus D r,R where r and R denote the inner and outer radius respectively. Because the solenoid is compact,f is uniformly continuous there henceF is uniformly continuous. Then,for every > 0 there is a δ > 0 and a natural N such thatF (NẐ × U (z, δ)) ⊂ U (F (0, z), ) for every z ∈ R × [a, b]. In particular, for every n N and every z ∈ R × [a, b]; i.e. (f n ) n∈N uniformly converges to f in bands R × K where K ⊂ R is a compact set, respect to the divisibility net. By the same argument as before, f n is 2πn-periodic.
Conversely, consider a compact set K ⊂ R and let > 0. There is a natural N such that n N implies ||f − f n || ∞ < /2 on the band R × K. Define T : C → C such that T (z) = z + 2π. Because f n = f n • T n we have: for every n N ; i.e. f is limit periodic respect to x. Because every f n is continuous and the convergence is uniform on compact sets, we have that f is continuous.
The first item of the above Lemma is surprising for a non-continuous limit periodic could be quite bizarre. However, it can always be approximated by periodic functions.
Definition 2.5. Define the baseleaf topology on C as the topology for which ν : C → C*_Q becomes an embedding (instead of just an immersion), and denote this new topological space by C_BL. The baseleaf topology on R is defined analogously and the resulting space will be denoted by R_BL.
Because of the relation π m • ν = e iz/n and the fact that, by definition, the topology of C * Q is the coarser topology such that every π m is continuous, we have that the following sets U = U + 2πmZ where U ⊂ C is a usual open set and m is a natural number, constitute a basis for the baseleaf topology. In particular, we have the homeomorphism: Another form of the above homeomorphism is the following one: Remark 2.2. The space C BL is not a topological vector space for the vector space action of R or C with the usual topologies is not continuous. However, (C BL , +) is a topological group. Because (C * , ·) is a complete topological group and the inverse limit of such groups is again a complete topological group, the algebraic solenoid (C * Q , ·) is also a complete topological group. Because ν : C BL → C * Q is a dense embedding we conclude that the topological completion of (C BL , +) is the algebraic solenoid (C * Q , ·): as topological groups. A similar discussion holds for the solenoid and R BL . It is interesting to see that formulating the problem backwards is much more difficult: Question: Given the topological group (R BL , +) with the explicit topology described before, what is its completion? Answer: The adelic solenoid.
Lema 2.11. Consider a continuous baseleaf preserving function f : C * Q → C * Q . Then, there is a unique rational number q and a unique continuous limit periodic respect to x function h such that f 0 (z) = qz + h(z) where f 0 is defined by the commutative diagram: Proof: Endow C with the baseleaf topology. We have the commutative diagram: Remark 2.3. Because the baseleaf topology is coarser than the usual one, every connected subset in the usual sense is also connected in the baseleaf sense.
Consider an annulus D r,R where r and R denote the inner and outer radius respectively.
is compact and f is continuous, the restrictions of f and therefore f 0 are uniformly continuous; i.e for every > 0 and natural λ there is a real number δ ,λ > 0 and a natural number N ,λ such that Define g m such that g m (z) = f 0 (z + 2πN m) − f 0 (z) for every integer m. Consider < π/2. We will prove that there is an integer k ,λ such that g m (R × [a, b]) ⊂ U (2πkm, ) for every integer m. We will prove it in the following steps: • Base case: Because g 1 is continuous and Because g M +1 (z) = g M (z + 2πN ) + g 1 (z) and the inductive hypothesis, we have that for every natural m.
Let's see that the quotient k ,λ /N ,λ is independent of the and λ chosen. Consider another 0 < < π/2 and λ . There is a real number δ ,λ > 0 such that δ < δ, a natural number N ,λ and an integer k ,λ such that for every z ∈ R × [a, b] an every integer m . Choose m and m such that N m = N m. Then, and because , < π/2 we have that k.m = k .m hence k/N = k /N . Denote this , λ-independent rational by q.
In particular, because the compact [a, b] was arbitrary, we have proved that where h is continuous limit periodic respect to x: Because f 0 is continuous we have that h is continuous. It rest to show that it is limit periodic respect to x. Because we proved that the rational q was , λ-independent, equation (5) reads as follows: For every compact set K ⊂ R and real number > 0 there is a real number δ K, > 0 and a natural number N K, such that: for every z ∈ R × [a, b] and every integer m. This proves the claim. Moreover, this decomposition is unique for a linear limit periodic function must be zero.
Corollary 2.12. For every uniformly continuous map f : R_BL → C_BL there is a unique rational number q and a unique continuous limit periodic function h such that f(x) = qx + h(x). In particular, f is continuous with respect to the usual topologies; i.e. f : R → C is continuous.
Proof: Because f is uniformly continuous it extends continuously to the completions and by remark 2.2 we have the commutative diagram: By Lemma 2.11, we have the result.
Definition 2.6. The rational number q of the above lemma will be called the degree of f and will be denoted deg(f ).
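As an illustration of the degree, consider the lifted power maps mentioned earlier; the computation below is our worked example (under the assumption that the baseleaf preserving lift of z^{p/q} acts on the baseleaf by rescaling), not a statement taken from the text.
\[
  f = z^{p/q} : \mathbb{C}^{*}_{\mathbb{Q}} \to \mathbb{C}^{*}_{\mathbb{Q}},
  \qquad
  f(\nu(z)) \;=\; \nu\!\Bigl(\tfrac{p}{q}\, z\Bigr),
  \qquad
  f_0(z) \;=\; \tfrac{p}{q}\, z,\ \ h \equiv 0,
  \qquad
  \deg\bigl(z^{p/q}\bigr) \;=\; \tfrac{p}{q} .
\]
In particular each character z^q has degree q, consistent with Corollary 2.15 below.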
A continuous map f : C*_Q → C*_Q can be assumed to be baseleaf preserving simply by multiplying it by f(1)^{-1}. The following proposition gives the converse of Lemma 2.11. Proposition 2.13. There is a continuous (holomorphic) baseleaf preserving map f : C*_Q → C*_Q making the corresponding diagram commute if and only if there is a continuous (holomorphic) map g, limit periodic with respect to x, with f_0(z) = deg(f) z + g(z), where deg(f) ∈ Q is the degree of f. Proof: By Lemma 2.11 there is such a g. If f is holomorphic then it is holomorphic on every leaf. In particular it is holomorphic on the baseleaf, and we have that g is holomorphic.
For the converse, suppose that deg(f ) = p/q such that p and q are coprime natural numbers.
for every integer n. Because g is continuous and limit periodic with respect to x, by Lemma 2.9 the function h : qZ × C → C such that h(n, z) = g(z + 2πn) admits a unique continuous extension ĥ : qẐ × C → C such that ĥ(a, z + 2πq) = ĥ(a + q, z). Then, there is a unique continuous extension F̂ which satisfies the same structural condition as F: By Lemma 2.7, there is a continuous map f such that the following diagram commutes: Let a ∈ qẐ and consider a sequence of integers (n_i)_{i∈N} such that (q·n_i) converges to a. If f_0 is holomorphic then by equation (7) f_{n_i} is holomorphic for every natural i. By Lemma 2.9 the sequence of continuous maps (f_{n_i}) converges uniformly to F̂(a, ·) on compact sets, hence F̂(a, ·) is holomorphic because every f_{n_i} is holomorphic. Then f is holomorphic on every leaf and by Remark 2.4 we conclude that f is holomorphic.
We have proved that for every uniformly continuous map f : R_BL → R_BL there is a unique rational number q and a continuous limit periodic map h such that f(x) = qx + h(x). In particular, every uniformly continuous map f : R_BL → R is limit periodic. Because the baseleaf topology is coarser than the usual topology, we have the natural inclusion with cokernel the rational numbers. We have proved the following topological characterization of the rational numbers: Lemma 2.14. Consider a pair of continuous (holomorphic) baseleaf preserving maps f, g : C*_Q → C*_Q. Then f and g are homotopic (conformally isotopic) if and only if deg(f) = deg(g). Conversely, given a rational q = deg(f) = deg(g), write the baseleaf expressions as qz + h(z) and qz + l(z), where h and l are continuous (holomorphic) and limit periodic with respect to x. Because every linear combination of continuous (holomorphic) functions limit periodic with respect to x is again continuous (holomorphic) and limit periodic with respect to x, the map Ĥ(t, z) = qz + (1 − t)h(z) + t·l(z) is limit periodic with respect to x for every t (and is holomorphic for every t when h and l are). An almost verbatim construction to the one given in Proposition 2.13 gives a continuous map H such that: Then f and g are homotopic. If f and g are holomorphic, by Proposition 2.13 H(t, ·) is holomorphic for every t, hence H is a conformal isotopy.
Corollary 2.15. For every baseleaf preserving continuous (holomorphic) map f : C * Q → C * Q there is a unique rational number q such that f is homotopic (conformal isotopic) to z q . In particular, every character of the group C * Q is of the form z q for some rational number q.
Proposition 2.16. There is a continuous (holomorphic) map f : C*_Q → C such that: if and only if g is continuous (holomorphic) and limit periodic with respect to x.
The following Lemma shows that degree zero functions map the whole solenoid into a single leaf.
Lemma 2.17. Consider a continuous (holomorphic) baseleaf preserving map f : C*_Q → C*_Q of degree zero. There is a continuous (holomorphic) map g such that f = ν ∘ g: Proof: By Proposition 2.16 there is a continuous (holomorphic) map, limit periodic with respect to x. Because the maps are continuous and the image of ν is a dense embedding, we have that f = ν ∘ g. Corollary 2.18. Consider a continuous (holomorphic) baseleaf preserving map f :
Differentiable structure and derivatives
Now we discuss the differentiable structure and derivatives.
Definition 2.7. Because π_1 restricted to a leaf is a local homeomorphism, we define the complex and differentiable structures of every leaf of the algebraic solenoid as the pullbacks of the respective structures of C* by π_1 : C*_Q → C*.
Remark 2.4. Because π_1 is a group morphism, for every a ∈ Ker(π_1) ≅ Ẑ we have π_1(a·ν(z)) = e^{iz}, hence the complex and differential structures induced by π_1 and a·ν on the leaves coincide, for e^{iz} is holomorphic. In particular, a function is holomorphic on C*_Q if and only if it is holomorphic on every leaf.
Thinking of the leaves a·ν : C → C*_Q as coordinate charts, we have the following definition: for every a ∈ Ker(π_1); i.e. for every leaf. We say that f is of class C^n if the corresponding condition holds on every leaf. Proposition 2.19. Consider a continuous baseleaf preserving map f : C*_Q → C*_Q such that: Then, the continuous derivative ∂^i_z ∂^j_z̄ f exists if and only if ∂^i_z ∂^j_z̄ g exists and is continuous and limit periodic with respect to x. In particular, f is C^n if and only if ∂^i_z ∂^j_z̄ g exists and is continuous and limit periodic with respect to x for every i, j ≥ 0 such that i + j ≤ n.
Proof: By definition, there are continuous maps ∂ i z ∂ j z f : C * Q → C for every i, j > 0 and i + j ≤ n such that: for every a ∈ Ker(π 1 ); i.e. for every leaf. In particular, it is verified for the baseleaf (a = 0) and by Lemma 2.16 the functions ∂ i z ∂ j z g : C * Q → C are continuous limit periodic respect to x for every i, j > 0 and i + j ≤ n Conversely, suppose that ∂ z g is continuous limit periodic respect to x and deg(f ) = p/q such that p and q are coprime natural numbers. In the proof of proposition 2.13 we constructed the commutative diagram: for every integer n. Because ∂ z f 0 is continuous limit periodic respect to x, by Lemma 2.9 there is a unique continuous extensionF z : qẐ × C → C of F z such thatF z (a, z + 2πq) = F z (a + q, z). By Lemma 2.7, there is a continuous map f z such that the following diagram commutes: Let a ∈ qẐ and consider a sequence of integers (n i ) i∈N such that (q.n i ) converges to a. BecauseF (n i , ) converges uniformly toF (a, ) on compact sets and ∂ zF (n i , ) = (0,F z (n i , )) converges uniformly to (0,F z (a, )) on compact sets we conclude that ∂ zF (a, ) = (0,F z (a, )) and it is continuous limit periodic respect to x. We have proved that there exist the partial derivative ∂ z f = f z .
In the case that f_0 is of class C^m with m = 1, 2, . . . , ∞, an analogous inductive argument shows that all the other continuous partial derivatives of f exist; i.e. f is of class C^m.
We have a completely analogous proposition for functions with almost verbatim proof: Proposition 2.20. Consider a continuous function f : C * Q → C such that: Then, the continuous derivative ∂ i z ∂ j z f exists if and only if ∂ i z ∂ j z g exists and is continuous limit periodic respect to x. In particular, f is C n if and only if ∂ i z ∂ j z g exists and is continuous limit periodic respect to x for every i, j ≥ 0 such that i + j ≤ n.
We have an improved version of Lemma 2.14: Lemma 2.21. Consider a pair of C^n baseleaf preserving maps f, g : C*_Q → C*_Q. Then, f and g are C^n-isotopic if and only if deg(f) = deg(g).
Proof: Almost verbatim to the proof of Lemma 2.14.
Picard theorem
Proposition 2.22. There is a continuous (holomorphic) map f : C*_Q → C* such that: if and only if g is continuous (holomorphic) and limit periodic with respect to x and q is a rational number.
Proof: Almost verbatim to the proof in Lemma 2.13.
Definition 2.9. We will call the above rational number the degree of f and denote it by deg(f). The following corollary justifies this notation: Corollary 2.23. For every continuous (holomorphic) map f : C*_Q → C* there is a unique continuous (holomorphic) baseleaf preserving map f̂ such that: Proof: By Proposition 2.22, there is a unique rational number q = deg(f) and a continuous (holomorphic) map g, limit periodic with respect to x, such that: By Proposition 2.13, there is a unique baseleaf preserving continuous (holomorphic) map f̂ of degree q such that: Then π_1 ∘ f̂ ∘ ν = f ∘ ν and, because the image of ν is dense and the maps are continuous, we have that π_1 ∘ f̂ = f.
The following corollary shows the relation of the degree introduced here with the classical degree: Corollary 2.24. For every continuous (holomorphic) map f : C * → C * there is a unique continuous (holomorphic) baseleaf preserving mapf such that: Proof: Because the map π 1 is holomorphic and the degree is multiplicative under composition we have that f • π 1 is a continuous (holomorphic) map with the same degree as f ; i.e.
where deg(f) is an integer and g is a continuous (holomorphic) map periodic with respect to x. Then deg(f) = deg(f ∘ π_1). By the previous corollary there is a unique continuous (holomorphic) baseleaf preserving map f̂ such that: We only prove that V = C_Q is trivializing, for the other case is completely similar. Consider a holomorphic function f : C_Q → C*. In particular, its restriction to C*_Q is holomorphic and by Lemma 2.22 there is a map h such that: Restricted to the real line, h has the form: and because h is holomorphic we have: In particular, its imaginary part is the following: Because f has a continuous extension at zero such that f(0) ∈ C* and |f(ν(z))| = |e^{ih(z)}| = e^{−Im(h(z))}, the limit of Im(h) when y tends to +∞ must be finite for every x. We conclude that deg(f) = 0 and a_q = 0 for every q < 0; i.e.
f(z) = e^{i Σ_{q≥0} a_q z^q}. Define the conformal isotopy f_t(z) = e^{it Σ_{q≥0} a_q z^q}. We have proved that every holomorphic function f : C_Q → C* is conformally isotopic to a constant function and we have the claim. Then, the bundle L is determined by its holomorphic clutching function f : C*_Q → C* and by Lemma 2.25 there is a unique rational number q such that f is conformally isotopic to z^q, hence L is isomorphic to the complex holomorphic line bundle O(q) with clutching function the character z^q. Because O(p) ⊗ O(q) ≅ O(p + q), the result follows.
Remark 2.5. It is tempting to argue just that U and V are contractible, hence trivializing, but this is true in the continuous category and we are in the holomorphic one.
Proof: For every complex line bundle π : L → Ĉ we have its pullback: and because π̂_1 is onto we have that π̂*_1 is a monomorphism: Take the trivializing cover U = Ĉ − {∞} and V = Ĉ − {0} of the Riemann sphere Ĉ. For every clutching function f : C* → C*, the clutching function of the pullback of its associated bundle with respect to the trivializing cover U = Ĉ_Q − {∞} and V = Ĉ_Q − {0} is f ∘ π_1. Then the pullback π̂*_1 is injective, for whenever two clutching functions satisfy f ∘ π_1 = g ∘ π_1 we have f = g. Because the tensor product of bundles with clutching functions f and g has the clutching function f·g, the pullback π̂*_1 is a group morphism, for (f·g) ∘ π_1 = (f ∘ π_1)·(g ∘ π_1). Definition 3.2. Consider the normalized Haar measure η on Ẑ and the induced measure on the fibers π_1^{−1}(x). We define the n-th renormalization map as the linear operator I_n: the n-th renormalization map is the average with respect to the n-th level π_n : C*_Q → C* of the algebraic solenoid, renormalized so that its operator norm is one; i.e. ||I_n||_∞ = 1. This is illustrated for the dyadic solenoid in Figure 2.
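To get a concrete feel for this averaging on the baseleaf, here is a minimal numerical sketch (an illustration of ours, not part of the construction): for a continuous limit periodic function pulled back to the baseleaf, the level-n average can be approximated by averaging translates by 2πn, which keeps exactly the frequencies q with nq ∈ Z.

```python
import numpy as np

def level_average(f, x, n, terms=30000):
    """Approximate the level-n average on the baseleaf by averaging
    the translates x -> x + 2*pi*n*k over many integers k."""
    ks = np.arange(terms)
    return np.mean(f(x + 2.0 * np.pi * n * ks))

# A limit periodic test function with frequencies 1, 1/2 and 1/3.
f = lambda x: np.cos(x) + 0.5 * np.cos(x / 2.0) + 0.25 * np.cos(x / 3.0)

x = 0.7
# For n = 2 the frequencies 1 and 1/2 satisfy q*n in Z and survive the average,
# while the frequency 1/3 is averaged out.
print(level_average(f, x, n=2))
print(np.cos(x) + 0.5 * np.cos(x / 2.0))  # agrees up to the averaging error
```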
Remark 3.1. See that, by definition, I_n(µ) factors through π_n; i.e. there is a µ̂_n ∈ L_∞(C*) such that: Because π_1 ∘ φ = 1 we have π_1 ∘ m_a = id for every a ∈ Ẑ. This way m_a : π_1^{−1}(x) → π_1^{−1}(x) for every x ∈ C* and a ∈ Ẑ; i.e. the fibers are invariant under the action m_a for every a ∈ Ẑ. To get a feel for this notion of continuity, see that every uniformly continuous function is L_∞-continuous. For a less trivial example, consider the Dirichlet function D(x). If µ is uniformly vertical L_∞-continuous, then I_n(µ) converges uniformly to µ with respect to the divisibility net.
Corollary 3.2. Consider a uniformly vertical L ∞ -continuous µ ∈ L vert ∞ (C * Q ). Then, for almost every fiber F x :Ẑ → C * Q of the fiber bundle π 1 : C * Q → C * the pullback F * x (µ) : Z → C can be represented by a continuous function.
Proof: For every natural n and almost every fiber F_x : Ẑ → C*_Q, the map F*_x(I_n(µ)) is locally constant (see Remark 5.1). In particular these maps are continuous, and by the previous Lemma they converge uniformly to F*_x(µ), and we have the result. The above corollary can be written in the following way: Definition 3.4. We say µ ∈ Per(C_Q) if there is some natural n and µ_n ∈ L_∞(C) such that µ = π*_n(µ_n).
Lemma 3.4. • If f : C*_Q → C is continuous then the family of functions I_n(f) is equicontinuous and the sequence (I_n(f))_{n∈N} converges uniformly to f on compact sets.
• If f : C*_Q → C is of class C^m then the sequence (I_n(f))_{n∈N} converges to f in the C^m-topology on compact sets.
Proof:
• An analogous construction to the one given in the proof of Lemma 2.10 gives the commutative diagram: such that F is continuous, F (n, z) = f 0 (z + 2πn) and F (a + 1, z) = F (a, z + 2π) for every integer n, z ∈ C and a ∈Ẑ. Define the function I n (F ) such that: Because of the relation: there is a function conjugated to F by the exp map. It is clear that this map is I n (f ) and we have the commutative diagram: See that these maps coincide with the maps defined in the proof of Lemma 2.10 and by the same proof we have that they are periodic respect to x and equicontinuous. By Proposition 2.16, the family of functions I n (f ) is equicontinuous and the sequence (I n (f )) n∈N converges uniformly to f on compact sets.
• Suppose that there is a continuous derivative ∂_z f such that: An analogous construction to the one given in the proof of Lemma 2.10 and Proposition 2.19 gives the commutative diagrams: and analogous relations for ∂_z F for every n ∈ Z, z ∈ C and a ∈ Ẑ. In the same way as before, we have the commutative diagrams: It only remains to show that ∂_z I_n(F) = I_n(∂_z F): Because ∂_z F is continuous we can interchange the integral and the derivative: and this proves the claim.
Because ∂ z (I n (f ) 0 ) = I n (∂ z f ) 0 by the above item these functions are periodic respect to x and equicontinuous. By Proposition 2.20 and the above item, the equicontinuous derivatives ∂ z I n (f ) exists and Finally, by the above item again and the last relation, the sequence (∂ z I n (f )) n∈N converges uniformly to ∂ z f on compact sets.
An inductive argument shows that the result holds for every derivative of order less than or equal to m and we have the result.
Pontryagin series
To motivate the following discussion, recall the proof of uniform convergence of the Fourier series of a C^1 function: consider the Fourier series of f; a priori the series converges in L^2. However, by the Cauchy-Schwarz inequality and the Weierstrass M-test, the Fourier series actually converges uniformly.
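In displayed form, the classical estimate being recalled is (our reconstruction of the standard argument): for f ∈ C^1(S^1) with Fourier coefficients a_n, the coefficients of f' are b_n = i n a_n, so
$$
\sum_{n\neq 0}|a_n| \;=\; \sum_{n\neq 0}\frac{|b_n|}{|n|}
\;\le\; \Big(\sum_{n\neq 0}\frac{1}{n^{2}}\Big)^{1/2}\Big(\sum_{n\neq 0}|b_n|^{2}\Big)^{1/2}
\;=\; \frac{\pi}{\sqrt{3}}\,\|f'\|_{2} \;<\; \infty,
$$
and the Weierstrass M-test upgrades the a priori L^2 convergence to uniform convergence.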
When we try to reproduce the above argument for a C^1 function on the solenoid it breaks down: the rational frequencies accumulate at zero, so the corresponding majorant is no longer summable. If l|m|n then R_{l,m} ∘ R_{m,n} = R_{l,n}, and R defines an inverse system of complex vector spaces over the divisibility net with inverse limit the complex vector space lim_← (C(S^1, C), p_n).
Consider the inverse limit morphisms π_n : S^1_Q → S^1. By Remark 3.1 the functions I_n(f) factor through π_n: If m|n, by definition (π_n)^{n/m} = π_m, hence: for every x ∈ S^1. Then, for every x ∈ S^1; i.e. R_{m,n}(f̂_n) = f̂_m. We have a natural linear morphism I: Actually it is a monomorphism: Lemma 3.5.
• The linear morphism I is a monomorphism; i.e. I is injective. • (g_n) = I(f) if and only if (g_n ∘ π_n)_{n∈N} converges uniformly to f.
Proof:
• Consider a pair of functions f 1 , f 2 ∈ C(S 1 Q , C) such that I(f 1 ) = I(f 2 ) = (g n ). By definition I n (f 1 ) = I n (f 2 ) = g n • π n and because of Lemma 3.4 f 1 = f 2 for the sequence (I n (f i )) n∈N uniformly converges to f i and the limit is unique.
• By definition I_n(f) = g_n ∘ π_n and because of Lemma 3.4 (g_n ∘ π_n)_{n∈N} converges uniformly to f. For the converse, consider a natural n and let ε > 0. There is a natural N, divisible by n, such that the corresponding approximants are ε-close. By the fact that ||R_{n,N}||_∞ = 1 we have ||f̂_n − g_n||_∞ < ε, and because ε > 0 was arbitrary we conclude that f̂_n = g_n.
Proposition 3.6. For every C m+1 function f : S 1 Q → C such that m ≥ 0 its Pontryagin series converges in the C m -topology.
Proof: Let's see how the operator R_{m,n} acts on monomials: for every x ∈ S^1, hence R_{m,n}(z^{λn/m}) = z^λ. Consider a natural r such that 1 ≤ r ≤ (n/m) − 1. Choose a solution y of the equation y^{n/m} = x. The set of quotients y'/y such that y'^{n/m} = x is the set of (n/m)-th roots of unity. If r|(n/m) then the set of points (y'/y)^r such that y'^{n/m} = x is the set of (n/mr)-th roots of unity, otherwise the set of points is the set of (n/m)-th roots of unity as before. Either way, because the sum of all k-th roots of unity is zero for arbitrary k, we have that: hence R_{m,n}(z^{λn/m+r}) = 0 for r = 1, 2, . . . , (n/m) − 1. By Remark 3.1 and Lemma 3.4, I_n(f)_0 is C^{m+1} and 2πn-periodic, hence its Fourier series converges in the C^m-topology; i.e. we have that I_n(f)(z) = Σ_{q∈(1/n)Z} a^{(n)}_q z^q, equivalently f̂_n(z) = Σ_{i∈Z} a^{(n)}_{i/n} z^i, and it converges in the C^m-topology for every natural n. Claim: The coefficients a^{(n)}_q are independent of n. Indeed, because the linear operator R_{m,n} is bounded (i.e. continuous; actually ||R_{m,n}|| = 1), applying it termwise gives R_{m,n}(f̂_n) = f̂_m, so a^{(n)}_{i/m} = a^{(m)}_{i/m} for every pair of naturals m, n such that m|n and every integer i. We proved the claim.
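As an aside, a small numerical check of the monomial computation just carried out, under the assumption (suggested by the proof) that R_{m,n} averages a function over the (n/m)-th roots of its argument:

```python
import numpy as np

def average_over_roots(g, x, d):
    """Average g over the d preimages of x under y -> y**d (x on the unit circle)."""
    theta = np.angle(x)
    roots = np.exp(1j * (theta + 2.0 * np.pi * np.arange(d)) / d)
    return np.mean(g(roots))

d = 6
x = np.exp(1.3j)
# A mode whose exponent is a multiple of d is sent to the corresponding power of x ...
print(average_over_roots(lambda z: z ** (2 * d), x, d), x ** 2)
# ... while every other mode is annihilated (the d-th roots of unity sum to zero).
print(abs(average_over_roots(lambda z: z ** (2 * d + 3), x, d)))
```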
Then, there are coefficients a_q ∈ C indexed by the rationals such that I_n(f)(z) = Σ_{q∈(1/n)Z} a_q z^q, and the series converges in the C^m-topology for every natural n. By Lemma 3.4, the sequence (I_n(f))_{n∈N} converges to f in the C^{m+1}-topology and we conclude that f(z) = Σ_{q∈Q} a_q z^q, where the series converges in the C^m-topology. Because the solenoid is compact, in particular it also converges in L^2, and because the Pontryagin series is unique, we have the result.
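For orientation, the coefficients used above can be written, assuming the standard Pontryagin duality formalism for the compact abelian group S^1_Q (whose characters are the maps z^q, cf. Corollary 2.15), as
$$
a_q \;=\; \int_{S^{1}_{\mathbb{Q}}} f(x)\,x^{-q}\,d\eta(x), \qquad q\in\mathbb{Q},
$$
with η the normalized Haar measure; orthonormality of the characters is what gives the Parseval identity and the uniqueness of the Pontryagin series invoked at the end of the proof.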
Corollary 3.7. For every C ∞ function f : S 1 Q → C its Pontryagin series converges in the C ∞ -topology.
Remark 3.2. Actually we have proved that the renormalization maps act in the following way: I_n(Σ_{q∈Q} a_q z^q) = Σ_{q∈(1/n)Z} a_q z^q, where the series converge at least uniformly.
Proof: We already have the result for p = ∞. Because the operator I_n acts as a projection onto modes, by Proposition 3.6 we have that, restricted to the C^1 functions, the linear operator I_n : C^1(S^1_Q, C) → C^1(S^1_Q, C) has operator norm ||I_n||_p = 1 for every p > 1. Because C^1(S^1_Q, C) is dense in L^p(S^1_Q, C) for every p > 1, there is a unique extension of I_n with the same norm. Now, with these new tools at hand, we are able to tackle the problem we discussed at the beginning as motivation.
Lemma 3.9. Every C^1 function on the solenoid has an L^1 Pontryagin transform.
Proof: Consider a C^1 function f with its Pontryagin series f(z) = Σ_{q∈Q} a_q z^q and its derivative along the solenoid f': For every natural n consider the 2π-periodic function I_n(f) ∘ ν ∘ (n·) and its Fourier series: and see that its derivative with respect to x coincides with I_n(f') ∘ ν ∘ (n·): Because a_{j/n} = b_j for every integer j, by Cauchy-Schwarz and the Parseval identity we have: A simple direct calculation shows that ||I_n(f) ∘ ν ∘ (n·)||_2 = ||I_n(f)||_2 and, because ||I_n||_2 = 1 by the previous corollary, we have: Taking the limit on the left hand side we finally have: Remark 3.3. Because the solenoid has unit area by definition, the last useful identity can be written as:
Ahlfors-Bers theory
Introduction and Preliminaries
We define the adelic Riemann sphere Ĉ_Q as the inverse limit of the ramified coverings: The points 0 and ∞, being inverse limits of the ramification points, have a topological nature quite different from that of the other points. They are cusps. In particular, every homeomorphism of Ĉ_Q must fix these new points or permute them. In the following theory, this fixation will be a constraint of the theory and no longer a choice as in the classical theory. Now we turn to the question of whether continuous maps and differentials on the algebraic solenoid C*_Q can be extended to the adelic sphere Ĉ_Q. Lemma 4.1. Consider a continuous (holomorphic) map f and a continuous (holomorphic) function g, limit periodic with respect to x, such that: Then: • f can be continuously extended to C_Q if and only if there is a complex number a such that: lim_{y→+∞} ||a − g|_{Im(z)≥y}||_∞ = 0. Moreover, the extension is f(0) = a.
• f can be continuously extended to C*_Q ∪ {∞} if and only if there is a complex number b such that: lim_{y→−∞} ||b − g|_{Im(z)≤y}||_∞ = 0. Proof: We prove the first item, for the second one is completely analogous. It is a simple calculus exercise to see that the extension f(0) = a is continuous if and only if: Because f is continuous and the image of the baseleaf ν is dense, the above condition is equivalent to the one in the statement, for π_1 ∘ ν(z) = e^{iz}, hence |π_1(ν(z))| ≤ r if and only if Im(z) ≥ −ln(r), and we have the result.
Lemma 4.2. Consider a differential µ ∈ C(C*_Q) dπ_1 ⊗ (dπ_1)^{−1} and a differential η ∈ C(C) dz ⊗ (dz)^{−1} such that η = ν*(µ), where ν is the baseleaf. Then, as a function, µ has a continuous extension to the whole adelic sphere Ĉ_Q if and only if there are constants a, b such that: Lemma 4.3. Consider a continuous (holomorphic) map f such that deg(f) ≥ 0 and a continuous (holomorphic) map g, limit periodic with respect to x, such that the following diagram commutes: If ||Im(g)||_∞ < ∞ then f has a continuous extension fixing 0, ∞ to the whole adelic sphere Ĉ_Q.
Proof: Define M and m such that m ≤ Im(g(z)) ≤ M for every z ∈ C. Because Im(deg(f )z + g(z)) > y if Im(z) > (y − m)/deg(f ) we have that f is continuous at zero. Analogously, because Im(deg(f )z + g(z)) < y if Im(z) < (y − M )/deg(f ) we have that f is continuous at ∞. By Lemma 2.17, the degree zero case in the above Lemma is Lemma 4.1.
Lemma 4.4. Hol(Ĉ_Q) ≅ C. Proof: Consider a holomorphic function f : Ĉ_Q → C. Its restriction to the solenoid (equator) is f(ν(x)) = Σ_{q∈Q} a_q e^{iqx}, and because it is holomorphic we have: Because f is continuous on the adelic sphere Ĉ_Q, by Lemma 4.1 a_q = 0 for every nonzero rational q; i.e. f = a_0. Let's see how a homeomorphism permutes leaves. Consider a homeomorphism h : C*_Q → C*_Q homotopic to z^{p/q}. Because exp is a local homeomorphism, there is a homeomorphism ĥ : qẐ × C → pẐ × C such that the corresponding diagram commutes for every a ∈ qẐ and z ∈ C.
Because Ẑ is totally disconnected, ĥ maps leaves to leaves; i.e. there are homeomorphisms s : Ẑ → Ẑ and f_a : C → C such that ĥ(a, z) = (s(a), f_a(z)). The structural condition implies s(a + q) = s(a) + p for every a ∈ qẐ. In particular we have that s(qn) = s(0) + pn for every integer n, and because s is continuous we have s(qa) = s(0) + pa for every a ∈ Ẑ. We have proved the following lemma: Lemma 4.5. Consider a homeomorphism h : C*_Q → C*_Q homotopic to z^{p/q} and a homeomorphism ĥ : qẐ × C → pẐ × C as above, where ĥ(a, z) = (s(a), f_a(z)). Then there is λ ∈ pẐ such that s(qa) = λ + pa. Corollary 4.6. A homeomorphism is leaf preserving if and only if it is homotopic to the identity. Proof: Under the notation of the above Lemma, if h is leaf preserving then s(a) = a + λ, where λ is now an integer. In particular, deg(h) = 1 and by Lemma 2.14 h is homotopic to z. For the converse, z is leaf preserving and, because the space of leaves Ẑ/Z is totally disconnected, h is leaf preserving too. Since we want to build a theory of continuous deformations of the identity, the above corollary shows that we only need leaf preserving homeomorphisms in our theory.
Definition 4.1. A leaf preserving homeomorphism h of Ĉ_Q is quasiconformal if it fixes 0, ∞ and h_a is quasiconformal for every a ∈ Ẑ, where ĥ(a, z) = (a, h_a(z)); i.e. h restricted to every leaf is quasiconformal.
Definition 4.2. We say µ ∈ P er is a periodic adelic differential if there is some natural n and differential µ n ∈ L ∞ (C) dz ⊗(dz) −1 such that µ = π * n (µ n ). We say that µ is a periodic Beltrami adelic differential if µ ∈ P er and ||µ|| ∞ < 1. We denote these differentials as P er 1 .
The importance of the periodic adelic Beltrami differentials is that they trivially have a quasiconformal solution to the respective Beltrami equation: Consider the periodic adelic Beltrami differential µ = π*_n(µ_n) and the quasiconformal solution f_n to the µ_n-Beltrami equation fixing 0, 1, ∞. Define the leaf and orientation preserving homeomorphism f such that: One would like to define Bel(C_Q) as the interior of the closure of Per_1 with respect to some topology T. The first natural guess would be the metric topology T_∞, but this will not do, for the interior of the ||·||_∞-closure of Per_1 is L^vert_∞(C*_Q)_1, and there are L_∞-vertical Beltrami differentials µ for which there is no solution to the Beltrami equation (see Example 4.1). The rest of the chapter is devoted to this problem, and we will find a family of complete metric topologies T_{Ren,S} solving it. However, the optimality of these solutions remains an open problem.
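The simplest instance (an illustrative example of ours): for a real constant k ∈ (−1, 1), take the periodic differential µ = π_1^*(k dz̄ ⊗ (dz)^{−1}) ∈ Per_1. The affine map
$$
w(z)\;=\;\frac{z+k\bar z}{1+k}
$$
is quasiconformal, fixes 0, 1, ∞ and satisfies w_{z̄} = k·w_z; lifting it leafwise through π_1 as in the construction above (choosing compatible normal solutions at every level) yields a leaf preserving quasiconformal homeomorphism of Ĉ_Q whose Beltrami differential is µ.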
Adelic Beltrami differentials
In what follows, we will make the following abuse of notation. Remark 4.1. Every leaf ν_a : C → C*_Q is a translation surface modeled on π_1 and we will consider differentials in the space L_∞(C*_Q) dπ_1 ⊗ (dπ_1)^{−1}. In particular, see that the space of Beltrami differentials L_∞(C) dz ⊗ (dz)^{−1} embeds in this space via π*_1, for: To ease the notation, unless confusion arises, in what follows we will write a differential µ dπ_1 ⊗ (dπ_1)^{−1} just as µ and identify L_∞(C*_Q) dπ_1 ⊗ (dπ_1)^{−1} with L_∞(C*_Q); i.e. we will use the same notation to denote the differential and the function. Unless explicitly written, the context will make clear which one we are using.
Definition 4.4. Define the vector subspaces of adelic differentials Per_n = π*_n(L_∞(C)) for every natural n. See that Per_m ⊂ Per_n if m|n. Consider a cofinal totally ordered divisibility subsystem S = (n_j)_{j∈N}. Then Per_S = ∪_j Per_{n_j} ⊂ Ren_S. The space of periodic adelic Beltrami differentials Per_{S,1} is the set of periodic adelic differentials µ ∈ Per_S such that ||µ||_∞ < 1: Lemma 4.7.
• The closure of Per_S with respect to ||·||_{Ren,S} is Ren_S. In particular, Per_{S,1} ⊂ Bel_S(C_Q) is a dense subset.
• Bel S (C Q ) is closed under multiplication by functions λ ∈ L ∞ (C Q ) such that ||λ|| ∞ ≤ 1. In particular, Bel S (C Q ) is star shaped respect to zero. Proof: • By Lemma 3.1 we have: Then: n j+1 ||I n j+1 (µ) − I n j (µ)|| ∞ = ||µ|| Ren,S and we have that the inclusion is continuous. Let's see that the inverse is not: Consider µ n such that: n! e − y 2 n! 2 dz dz where ν is the baseleaf and z = x + iy. By Lemma 4.2 they are continuous in C Q hence uniformly continuous in C * Q . In particular µ n ∈ L vert ∞ (C * Q ) and because ||µ n || Ren,S = 1 we have that µ n ∈ Ren S for every n. However, ||µ n || ∞ = 1/n! tends to zero where S = (n!) n∈N .
Proof: By the previous Lemma, Bel_S(C_Q) ⊂ Ren_S is an open set and it only remains to show that (Ren_S, ||·||_{Ren,S}) is complete. Consider a Cauchy sequence (µ_n)_{n∈N} in (Ren_S, ||·||_{Ren,S}). Again by the previous Lemma, the inclusion is continuous and (µ_n)_{n∈N} is a Cauchy sequence in (L^vert_∞(C*_Q), ||·||_∞). Because this space is complete, there is a unique µ ∈ L^vert_∞(C*_Q) such that the sequence converges to it with respect to the ||·||_∞ norm. Because the norm ||·||_{Ren,S} is a series of positive terms we can interchange the limit and the series and we have: where we have used that the I_n are bounded linear operators with respect to the ||·||_∞ norm. By the same argument and the fact that (µ_n)_{n∈N} is a Cauchy sequence, we have lim_m ||µ − µ_m||_{Ren,S} = 0, and we conclude that µ ∈ Ren_S and it is the limit of the Cauchy sequence.
Ahlfors-Bers theorem
The following is the adelic version of Ahlfors-Bers theorem: Theorem 4.9. For every adelic Beltrami differential µ there is a unique quasiconformal leaf preserving solution f :Ĉ Q →Ĉ Q to the µ-Beltrami equation such that f fixes 0, 1, ∞.
Remark 4.3. If f is a solution to the µ-Beltrami equation then µ must be uniformly vertical continuous, for µ = ∂_z̄ f / ∂_z f. This is why we ask for the L_∞-vertical continuity condition in the adelic differential definition (see Corollary 3.2).
Before presenting the proof of the adelic version of the Ahlfors-Bers theorem, it is important or at least pedagogical to describe some problems within and understand the capricious nature of the adelic Beltrami differential definition.
By the Ahlfors-Bers theorem, there is a quasiconformal solution h_a for every leaf, modulo postcompositions with affine transformations. In particular, the solutions can be chosen so that they verify the structural constraint h_{a+1}(z) + 2π = h_a(z + 2π) for every z ∈ C and a ∈ Ẑ, defining this way a leaf preserving map h : Ĉ_Q → Ĉ_Q fixing 0, ∞. However, there is a priori no reason to expect that the resulting map is continuous. It is clear that it will be continuous along the leaves but in general not across them. It is like drawing a picture separately on every piece of a puzzle and expecting to get a coherent picture after putting the pieces together. We have decomposed the foliated object into leaves and solved the problem for each leaf. To ensure a continuous solution we need a global structural constraint.
The natural guess is that imposing some notion of vertical continuity (L ∞ -vertical continuity) to the Beltrami differential would give the desired continuity of the solution across the leaves. Although this is a necessary condition, it is not enough. As the next example shows, even for a continuous Beltrami differential there is no need to continuity of the solution across the leaves. where z = x + iy. Let's see that it is a Beltrami differential; i.e. ||µ|| ∞ < 1. Define: h ± (n)(z) = 1 2n! e ±ix/n! (1 ∓ 2y/n!) e − y 2 n! 2 2 where z = x + iy. Because ||h ± (n)|| ∞ < 1/2n! and the identity: we have that each term of the sum has supremum norm less than 1/n! hence the supremum norm of the sum is less than e − 1. We conclude that: Because it is limit periodic respect to x and decays to zero when y tends to ±∞, by Lemmas 2.16 and 4.2 there is a continuous adelic Beltrami differentialμ on C * Q with a continuous extension to the whole adelic sphereĈ Q as a function such that µ = ν * (μ) where ν is the baseleaf. However, the quasiconformal solution of the µ-Beltrami equation: is not of the type z + g(z) with g limit periodic respect to x and by lemma 2.13 it is not the conjugation of any continuous map of the adelic sphereĈ Q by the baseleaf ν; i.e. There is no continuous mapŵ such that The above example shows that we still need some other global structural condition on the Beltrami differential to assure the continuity of its solution. This is precisely the convergence of the renormalized average series: There is a cofinal totally ordered divisibility subsequence S = (n j ) j∈N such that the following series converge: The following definitions and Lemmas are the prelude to the Ahlfors-Bers theorem: Theorem 4.10. Consider k such that 0 ≤ k < 1. Then, there exists a real number p > 2 only depending on k such that: For every Beltrami differential µ ∈ L ∞ (C) 1 with ||µ|| ∞ ≤ k and compact support there is a unique quasiconformal map f : C → C such that f (0) = 0 and f z − 1 ∈ L p (C) (globally and not merely locally) verifying: fz = µf z on C in the sense of distributions.
A map verifying the conditions of the theorem is called a normal quasiconformal solution.
The previous Theorem can be found in [Ah], [IT].
Lemma 4.11. If f is a normal solution of the µ-Beltrami equation such that µ has compact support, then there are constants A and p > 2 such that: Moreover, the constant A is monotone with respect to the area of the support of µ and depends also on p.
Lemma 4.12. If f is a normal solution of the µ-Beltrami equation such that µ has compact support, then there are constants B and p > 2 such that: Moreover, the constant B is monotone with respect to the area of the support of µ and depends also on p.
Proof: Consider the inverse normal homeomorphism f −1 with Beltrami differential µ f −1 such that: Then, Hölder's inequality gives: C||µ|| p 2 = C 2 ||µ|| p p and we conclude that: In the same way as in the previous proof, there is a constant C (actually C = C) such that: and proceeding just as in the above Lemma, we have: Substitution of z = f (ζ) and triangular inequality gives the desired result. Consider µ ∈ L ∞ (C * Q ) 1 . For every natural n define the Beltrami differential µ n ∈ L ∞ (C * ) 1 dz ⊗ (dz) −1 such that (Recall remark 4.1): Consider the quasiconformal normal solution f n :Ĉ →Ĉ of the µ n -Beltrami equation such that f (0) = 0 and f z − 1 ∈ L p (C), p > 2. If n|L, define the maps f ↑L n andf n such that:Ĉ • The map f ↑L n is a quasiconformal normal solution of the µ ↑L n -Beltrami equation such that µ ↑L n = (z L/n ) * (µ n ). Moreover, (f ↑L n ) z − 1 ∈ L p (C) with the same p > 2 as the one we used for f n .
• The composition of quasiconformal normal maps is a quasiconformal normal map. Proof: • Define n = L/n. First, let's see that f ↑L n is quasiconformal. Indeed, it verifies Ahlfors quasiconformal definition A [Ah]: -It is an orientation preserving homeomorphism: Because z n is a covering and f n is an orientation preserving homeomorphism then f ↑L n is so. -It is ACL, absolutely continuous on lines: Because f n is absolutely continuous respect to any finite length rectifiable curve and the covering is C 1 we have that f ↑L n is ACL. -It has bounded maximal dilatation: Locally, the map z 1/n is defined outside zero and we have f ↑L n = z 1/n • f n • z n . Then, is a solution to the Beltrami equation with Beltrami differential µ ↑L n . In particular, it has bounded maximal dilatation. Now, let's see that it is normal. Consider the normal quasiconformal solution g to the µ ↑L n -Beltrami equation. Because g and f ↑L n are quasiconformal solutions of the same equation and both fix the origin, there is a non zero λ such that g = λf ↑L n . Locally it means that: g(z) n = λf n (z n ) for every z ∈ C. Because µ ↑L n has compact support, they are both univalent outside a disk of sufficiently large radius R and we can write: and an analogous expression for g outside the disk. Substituting these expansions in equation (9) and comparing the leading term we get λ = 1. Because of the following relation: ||µ ↑L n || ∞ = ||(z n ) * µ n || ∞ = ||µ n || ∞ ≤ k by Theorem 4.10 (f ↑L n ) z − 1 ∈ L p (C) with the same p > 2 as the one we used for f n and the claim is proved.
• Consider the quasiconformal normal maps f 1 and f 2 with respective Beltrami differentials µ 1 and µ 2 . Their composition is a quasiconformal map fixing the origin with Beltrami differential µ with compact support. Consider the quasiconformal normal solution g to the µ-Beltrami equation. Again, because g and f 1 • f 2 are quasiconformal solutions of the same equation and both fix the origin, there is a non zero λ such that: In the same way as before, because µ, µ 1 and µ 2 have compact support, they are univalent outside a disk of sufficiently large radius R and we can write: and an analogous expressions for f 1 and f 2 outside the disk. Substituting these expresions in equation (10) and comparing the leading terms we get λ = 1 just as before. This proves the claim.
Remark 4.4. The above Lemma explains why we choose this normalization. The Douady-Hubbard normalization f (z) − z ∈ O(1/|z|) is easy to work with but doesn't necessarily fix the origin and as we said before, this is no longer a choice but a constraint of the new theory. The normalization f (0) = 1 and f (1) = 1 is compatible with the maps z → z n of the inverse system. However, it is very difficult to control the growth of the maps in terms of their respective Beltrami differentials. The above Lemma shows that the chosen normalization is compatible with the inverse system with the advantage of having some control on the maps. If The quasiconformal normal map f ↑L n,m is the solution of the µ ↑L n,m -Beltrami equation such that: where µ ↑L n and µ ↑L m on the right side denote the functions and not the differentials (recall remark 4.1). Because ||I n || ∞ = 1 for every natural n, we have ||µ ↑L n || ∞ = ||I n (µ)|| ∞ ≤ ||µ|| ∞ for every natural L such that n|L. We also have: for every m|n|L, where the last step follows from the following calculus: See that the right hand side of relation (11) doesn't depend on L.
Lema 4.14. Consider a vertical essentially bounded µ ∈ L ∞ (C Q ) 1 with compact support. Suppose there is a subsequence (n i ) i∈N of the divisibility net such that: Let L = n J . There are constants A and A such that: Proof: By hypothesis, equation (11) and the fact that: for every natural i, we conclude that: Hence, by Lemma 4.10 we can take the same value p > 2 for all the maps f ↑L n i and f ↑L n i ,n i−1 . Because the supports of all µ ↑L n i and µ ↑L n i ,n i−1 are uniformly bounded: by Lemma 4.11 we can also take the same constant A for all the maps f ↑L n i and f ↑L n i ,n i−1 . Then we have: For the second: Finally, for the third assertion we have: |π n j •f n j ,n j−1 (x)| = |f n j ,n j−1 (π n j (x))| ≤ A ||I n j (µ) − I n j−1 (µ)|| ∞ |π n j (x)| 1−2/p + |π n j (x)| ≤ A a n j n j + 1 max{ 1, |π n j (x)| } where a n j = n j ||I n j (µ) − I n j−1 (µ)|| ∞ . Because π n j /L n j = π L we have: |π L •f n j ,n j−1 (x)| = |π n j •f n j ,n j−1 (x)| n j /L ≤ A a n j n j + 1 In particular, because the right hand side of the above equation is greater than or equal to one, then: and by the same argument, relation (12) implies: Becausef . . •f n J+1 ,n J •f L induction on relation (13) and relation (14) imply: and the result follows.
Corollary 4.15. Consider a vertical essentially bounded µ ∈ L_∞(C_Q)_1 with compact support. If there is a subsequence (n_i)_{i∈N} of the divisibility net such that the renormalized average series converges, Σ_{j=1}^{∞} n_{j+1} ||I_{n_{j+1}}(µ) − I_{n_j}(µ)||_∞ < ∞, then, for every natural i ≥ J: where L = n_J and A, A' are constants.
Proof: The convergence of the series implies the hypothesis of the previous Lemma 4.14. Taking the limit i → ∞ on the right hand side of the relation gives the result.
Lemma 4.16. Under the same hypotheses as Corollary 4.15 we have: where L = n_J and B, B' are constants.
Proof: The proof is almost verbatim to the proof of Lemma 4.14, with reference to Lemma 4.12 instead of 4.11.
Lemma 4.17. Under the same hypotheses as Corollary 4.15, for every natural L there is a constant M_L ≥ 1 such that: for every i ≥ J, where L = n_J and A is a constant.
Proof: We take the same values of k < 1, p > 2 and constants A and A = A/(1 − k 2 ) as those in the proof of Lemma 4.14. Denote n = n i and m = n i−1 . By Lemma 4.11 and relation (11) we have: where A = A/(1 − k 2 ) and µ ∞ = k. Define n = n/L. Lagrange Theorem implies: for some ξ in the interior of the segment joining π n •f n,m (x) and π n (x). In particular, Equations 15, 16 and 17 imply: In particular, becausef n =f n,m •f m we have: By the previous corollary 4.15 there is a constant M L such that: for every i ≥ J where L = n J . This bound implies: where we have used that M L max{ 1, |π L (x)| } ≥ 1 and the formula is proved.
Lemma 4.18. Under the same hypotheses as Corollary 4.15 and with the previous definition of the maps f̂_{n_i}, there is a continuous leaf preserving map f̂ : Ĉ_Q → Ĉ_Q such that (f̂_{n_i})_{i∈N} converges pointwise to f̂.
Proof: For each L = n J , Lemma 4.17 implies that (π L •f n i ) i∈N is a uniform Cauchy sequence on compact sets so there is a continuous function g L : C Q → C such that the sequence (π L •f n i ) i∈N converges uniformly to g L on compact sets. Consider another L = n J such that J > J. Because z L /L • π L •f n i = π L •f n i for every i ≥ J and the continuity of z L /L we have that z L /L • g L = g L . By the universal property of inverse limits there is a unique functionf : C Q → C Q such that π n i •f = g n i for every natural i. Because every g n i is continuous we have thatf is continuous and verifies that (π L •f n i ) i≥J converges uniformly to π L •f on compact sets. In particular, (f n i ) i∈N converges pointwise tof .
Let's see thatf is proper: Consider a compact set K ⊂Ĉ Q . The compact K is closed for every compact subset of a Hausdorff space is also compact and becausef is continuous, f −1 (K) is closed. By Lemma 4.16 and the fact that (f n i ) i∈N converges pointwise tof , we have that for every L = n J there is a constant M L such that: Choose some natural L = n J . Define R such that d(0, π L (K)) = R < ∞ for π L is continuous; i.e. π L (K) is compact. By the above relation we have that d(0, π L (f −1 (K))) ≤ M L R and because π L is proper, the closed setf −1 (K) is contained in the compact π −1 L (D(0; M L R)) hencef −1 (K) is compact and we have the claim.
In particular, the extension f̂ : Ĉ_Q → Ĉ_Q such that f̂(∞) = ∞ is continuous and, because f̂_{n_i}(∞) = ∞ for every natural i, we have that (f̂_{n_i})_{i∈N} converges pointwise to f̂ on Ĉ_Q. In particular, by definition every f̂_{n_i} is leaf preserving and so is f̂. This finishes the proof.
Theorem 4.19. For every adelic Beltrami differential µ there is a unique quasiconformal leaf preserving solution f :Ĉ Q →Ĉ Q to the µ-Beltrami equation such that f fixes 0, 1, ∞.
Proof: (Uniqueness) Suppose that f and g are quasiconformal solutions to the µ-Beltrami equation fixing 0, 1, ∞. Then, f • g −1 is leaf preserving 1-quasiconformal fixing 0, 1, ∞. By Lemmas 2.13, 2.14 and Corollary 4.6 there is a holomorphic limit periodic respect to x function h such that ν −1 • f • g −1 • ν(z) = z + h(z) where ν is the baseleaf. On the other hand, by Weyl's Lemma ν −1 • f • g −1 • ν is a holomorphic homeomorphism of C; i.e. an affine transformation. Because it fixes zero, we have that ν −1 • f • g −1 • ν = id hence f • g −1 = id and f = g. (Existence) First suppose that µ has compact support in C Q . Consider an arbitrary leaf ν : C → C * Q ⊂Ĉ Q . Under the same notation and definitions as before, by Lemma 4.18 there is a continuous baseleaf preserving mapf :Ĉ Q →Ĉ Q such that (f n i ) i∈N converges pointwise tof . By Lemmas 4.15 and 4.16, actually we have the restrictionf : C * Q → C * Q and becausef n i andf are leaf preserving, the compositions ν −1 •f n i • ν and ν −1 •f • ν are well defined. Moreover the sequence (ν −1 •f n i • ν) i∈N converges pointwise to ν −1 •f • ν. Claim: The maps ν −1 •f n i •ν are quasiconformal solutions of the respectives ν * (I n i (µ))-Beltrami equations. We have the following diagram: Because the front, behind , left and right faces commute, the bottom face commutes. We already know that ν −1 •f n i • ν is continuous by Lemma 2.13 (or just because it is conjugated to a continuous function by a covering map). Locally the inverse of the map e iz/n i exists and we have Because ∂ z f n i ∈ L 2,loc (C), by the above identity the same holds for ∂ z ν −1 •f n i • ν . An analogous result holds for the other derivative. Finally, ν −1 •f n i • ν is the solution of the Beltrami equation with Beltrami differential (e iz/n i ) * (µ n i ) = (π n i • ν) * (µ n i ) = (ν) * (π n i ) * (µ n i ) = (ν) * (I n i (µ)) This proves the claim.
Define the affine maps A i (z) = a i z + b i such that A −1 i • ν −1 •f n i • ν is the quasiconformal solution of the ν * (I n i (µ))-Beltrami equation fixing 0, 1, ∞ (See remark 4.5 below). Concretely: Because (f n i ) i∈N converges pointwise tof , the sequence of affine maps (A i ) i∈N converges locally uniformly to the map A(z) = az + b such that: A priori a could be zero. Define the map g as the quasiconformal solution of the ν * (µ)-Beltrami equation fixing 0, 1, ∞. Because I n i (µ) tends to µ in L ∞ (C Q ) we have that ν * (I n i (µ)) tends to ν * (µ) in L ∞ (C) and by Lemma 4.20 we conclude that: locally uniformly. Then: and we conclude that: Becausef is continuous and fixes 0, ∞ it cannot be constant. In particular a = 0 and we have that ν −1 •f • ν is a quasiconformal solution of the ν * (µ)-Beltrami equation for every leaf ν. Finally,f is a homeomorphism for every continuous bijective map between compact sets is a homeomorphism. We have proved thatf is quasiconformal. Multiplying byf (1) −1 we have the quasiconformal solution fixing 0, 1, ∞. Now we remove the hypothesis of the compact support of µ by the standard well known trick: Define µ 1 = µ.χ |π 1 (z)|≥1 and consider the Möbius inversion γ :Ĉ Q →Ĉ Q such that γ(z) = 1/z. Because γ * (µ 1 ) has compact support on C Q , by the previous part there is a unique quasiconformal leaf preserving solution g :Ĉ Q →Ĉ Q to the γ * (µ 1 )-Beltrami equation such that g fixes 0, 1, ∞. Define f 1 such that the following diagram commutes: The map f 1 is the quasiconformal solution of the µ 1 -Beltrami equation fixing 0, 1, ∞: Because γ and g are homeomorphisms fixing 0, 1, ∞ so is f 1 . For every leaf ν a we have the diagram: Because every ν a is injective and the left, right, top, bottom and front sides commute we have that the back face also commutes. By definition We have: and this proves the claim. Define the adelic differential µ 2 such that: where µ and µ 1 on the right side denote the functions and not the differentials (recall remark 4.1). Under the same abuse of notation we have: and a similar expression and definition for ν * a (µ 1 ): A similar calculation gives: Because µ 2 has compact support on C Q there is a unique quasiconformal leaf preserving solution f 2 to the µ 2 -Beltrami equation fixing 0, 1, ∞. Define the map f = f 2 • f 1 . It is clearly quasiconformal leaf preserving and fixes 0, 1, ∞ for it is the composition of maps of the same kind. Because: and the following fact: we conclude that ν * a (µ) = µ a dz ⊗ (dz) −1 is the Beltrami differential of ν −1 a • f • ν a for every leaf ν a ; i.e. f is the unique quasiconformal solution to the µ-Beltrami equation fixing 0, 1, ∞.
Remark 4.5. At first sight it seems there is something terribly wrong in the above proof: while f̂ fixes 0, ∞ and has only one degree of freedom as a solution of the µ-Beltrami equation, its conjugated map ν^{−1} ∘ f̂ ∘ ν has two degrees of freedom. Why does the conjugated map have an extra degree of freedom? Let's see: the conjugated map has the same freedom as f̂ plus the property of being uniformly limit periodic on horizontal bands. Once this last property is destroyed by an affine transformation, an extra degree of freedom comes out.
Remark 4.6. We have also proved that: • The map f̂ in Lemma 4.18 is in fact quasiconformal.
The following proposition is Proposition 4.36 of [IT]: Proposition 4.20. If µ_n tends to µ in L_∞(C) then f^{µ_n} tends to f^µ locally uniformly.
As a final remark, consider the morphism φ : Ẑ × C → S^1_Q × S^1 given by φ(a, x + iy) = (exp(a, x), e^{iy}). Because Ker(φ) = Ker(exp), the morphism factors through exp and we have the commutative diagram: The deck transformation group of the covering p : C*_Q → S^1_Q × S^1 is generated by multiplication by ν(2πi) on the algebraic solenoid, where ν is the baseleaf: T(z) = ν(2πi)z for every z ∈ C*_Q. Consider a group G acting on the algebraic solenoid C*_Q. Define the space of G-invariant S-adelic Beltrami differentials Bel_S(C*_Q; G) as the set of differentials µ ∈ Bel_S(C*_Q) such that g*(µ) = µ for every g ∈ G. In particular, we have an Ahlfors-Bers theory on the torus S^1_Q × S^1: Bel_S(S^1_Q × S^1) = Bel_S(C*_Q; T), and a Teichmüller space of this adelic torus: The 2-adic case of the above construction, S^1_2 × S^1, is the second example in [Su]. As it was commented there, this is the basic solenoidal surface required in the dynamical theory of Feigenbaum's universality [Fe]. We hope that our theory of adelic Beltrami differentials could shed some new light on the link between this universality and the Ahlfors-Bers one.
Infinitesimal deformations
Now we turn the discussion to infinitesimal deformations. We discuss it in the general case and then apply it to the p-adic case. We need an appropriate notion of ḟ[η], where η ∈ T_0 Bel_S(C_Q) = Ren_S (see Remark 4.8).
Proof: Consider the quasiconformal solutions f^{µ(t)} and f^{(z^k)*µ(t)} of the µ(t)- and (z^k)*µ(t)-Beltrami equations respectively, fixing 0, 1, ∞, such that µ(t) = tν + O(t^2). Uniqueness of the solutions implies the following commutative diagram: Differentiating with respect to t gives: for every ζ ∈ C, where we have used on the right side that ḟ[ν] is actually a derivation and not a function. The above Lemma motivates the following definition: for dπ_1 = n π_n^{n−1} dπ_n.
See that ḟ[I_n(ν)] is continuous on the whole adelic sphere. Now we define ḟ[ν] as the uniform limit of its periodic approximations just defined. The following Lemma is Theorem 4.37 in [IT]: Lemma 4.24. Consider a family of Beltrami coefficients {µ(t)} depending on a real parameter t such that: where η ∈ L_∞(C). Then ḟ[η](ζ) = lim_{t→0} (f^{µ(t)}(ζ) − ζ)/t exists for every ζ ∈ C and the convergence is locally uniform on C. Moreover, for every ζ ∈ C.
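For the reader's convenience we recall the classical form of this statement (a hedged sketch, to be compared with [IT, Theorem 4.37]): for µ(t) = tη + o(t) in L_∞(C), the limit above exists and
$$
\dot f[\eta](\zeta)\;=\;-\frac{1}{\pi}\iint_{\mathbb{C}}\eta(z)\,
\frac{\zeta(\zeta-1)}{z(z-1)(z-\zeta)}\,dx\,dy .
$$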
Lemma 4.25. Consider ν ∈ T_0 Bel_S(C_Q) = Ren_S. There is a continuous derivation ḟ[ν] such that the sequence of continuous derivations (ḟ[I_{n_i}(ν)])_{i∈N} converges uniformly to it, where (n_i)_{i∈N} = S.
Proof: Consider ν ∈ T 0 Bel S (C Q ) = Ren S , the tangent space of the S-adelic Beltrami differentials at zero. Recall formula (4.24). For every x ∈ S 1 Q and naturals m, n such that m|n we have: where we used that |π n (x)| = 1 for every x ∈ S 1 Q and the constant C: In particular, we have: Because the following renormalized average series converges: where S = (n i ) i∈N is cofinal totally ordered divisibility subsystem under consideration, the sequence (ḟ [I n i (ν)]) i∈N is a Cauchy sequence in the complete space of continuous derivations with the supremum norm. Then, there is a continuous derivationḟ [ν] such that the sequence (ḟ [I n (ν)]) n∈N converges uniformly to it respect to the divisibility net.
Remark 4.8. In particular we have the following property: There is a continuous function g : C_Q → C such that g(0) = 0 and ḟ[η](z) = g(z) d/dπ_1 for every z ∈ C*_Q. An analogous result holds for the whole adelic sphere. Although the expression in Corollary 4.26 below would be the natural definition of ḟ[ν], this approach makes clear the continuity at zero of the map g previously defined, which otherwise would be difficult to prove.
Corollary 4.26. Consider η ∈ T 0 Bel S (C Q ) = Ren S . Then, where ν is the baseleaf and the derivative is evaluated at zero.
for every z ∈ C Q such that π 1 (z) > 1. It is easy to see thatμ ∈ Bel S (C Q ) is actually an S-adelic Beltrami differential. By theorem 4.19 there is a unique quasiconformal solution f µ to theμ-Beltrami equation fixing 0, 1, ∞. Becauseμ = (1/z) * (μ) we have that By remark 2.1, the solenoid S 1 Q and hence H Q are invariant under f µ . Define the following equivalence relation on The universal Teichmüller space is defined as the quotient: with the quotient topology induced by (Bel S (H Q ), || · || S ). Although we have the usual group structure µ ν = η if f µ • f ν = f η , unfortunately the space Bel S (H Q ) is not closed under this product. The following Lemma shows the relation between this Teichmüller space with the Sullivan's one [Su].
Proposition 5.1. We have a continuous canonical injective map: Proof: In the same way as in the classical case, by Weyl's Lemma every adelic Beltrami differential defines a complex structure on every leaf of H Q = ∆ * ∞ in Sullivan's notation and we have a map ϑ : Bel S (H Q ) → T Sullivan (∆ * ∞ ). Because there is a bounded homotopy respect to the hyperbolic metric between ϑ(µ) and ϑ(η) if and only if f µ | S 1 Q = f η | S 1 Q , there is an injective map such that: By the same reason as before and Corollary 4.22, ϑ is continuous for pointwise convergence of continuous maps to a continuous map on a compact set (the solenoid S 1 Q in our case) is actually uniform. In particular, if (µ n ) n∈N converges to µ in the Banach space (Bel S (C Q ), ||·|| Ren,S ), then d(ϑ(µ n ), ϑ(µ)) tends to zero where d is the Teichmüller metric defined in [Su]. Because the topology of T S (1) is induced by the one in Bel S (H Q ), we have that is continuous. Model B: This is the model of quasisolenoids fixing the unit . Define the compact H * Q = π −1 1 (D(0; 1) c ) ⊂ C Q . Now, for every µ ∈ Bel S (H Q ) consider the quasiconformal solution f µ to the µ-Beltrami equation fixing 0, 1, ∞. See that f µ | H * Q is univalent on every leaf. In fact, the application of the theory of univalent functions in Teichmüller theory is one of Bers great accomplishments.
Define the following equivalence relation on Bel_S(H_Q): The universal Teichmüller space is defined as the quotient, with the quotient topology induced by (Bel_S(H_Q), ||·||_S).
Proof: (Modulo technicalities, the proof is almost verbatim the one in [IT].) Define the map g : C_Q → C_Q such that: The map g is clearly leaf preserving and continuous. For every leaf ν_a, the map ν_a^{−1} ∘ g ∘ ν_a is explicitly given by: and by definition A it is quasiconformal, and we have that g is quasiconformal.
The map (f^µ)^{−1} ∘ g ∘ f^η is 1-quasiconformal and fixes 0, 1, ∞, hence by Remark 4.6 it is the identity. Then f^µ|_{H*_Q} = f^η|_{H*_Q}. For the converse, by continuity f^µ = f^η on H*_Q ∪ S^1_Q and we have a 1-quasiconformal map; by the same argument as before, h = id and we have f^µ|_{S^1_Q} = f^η|_{S^1_Q}.
Corollary 5.3. Teichmüller space models A and B are homeomorphic.
Model C: Here we present the quasisymmetric model.
where ν a is a leaf. Define w a : U → U such that: Because every f a is quasisymmetric the map w a is quasiconformal for every leaf ν a . Claim: There is a homeomorphismŵ of H Q such that w a = ν −1 a •ŵ • ν a for every leaf ν a . A direct calculation shows that there are continuous limit periodic maps g a respect to x such that w a (z) = z + g a (z) and g a+1 (z) = g a (z + 2π). By Lemma 2.9 there is a continuous map w :Ẑ × U →Ẑ × U such that w(a, z) = (a, w a (z)) and w(a + 1, z) = w(a, z+2π)+(1, −2π). By Lemma 2.7 we have a continuous mapŵ such that the following diagram commutes: It rest to show thatŵ can be extended to a homeomorphism on H Q . Consider the continuous extension to the boundary w :Ẑ × R × [0, +∞) →Ẑ × R × [0, +∞) and define w(+∞) = +∞. Define the following neighborhood basis at +∞: {+∞} ∪Ẑ × R × (y, +∞) such that y ≥ 0 See that {+∞} ∪Ẑ × R × [0, +∞) with the above basis at +∞ is compact. Because every w 0 : R × [0, +∞) → R × [0, +∞) is a homeomorphism, for every y ≥ 0 the preimage of the compact R × [0, y] is a compact set hence there is some y ≥ 0 such that w 0 (R × (y , +∞)) ⊂ R × (y, +∞). Because w n (z) = w 0 (z + 2πn) − 2πn we have that w n (R × (y , +∞)) ⊂ R × (y, +∞) for every integer and because the sequence (w n )n ∈ N converges locally uniformly on horizontal bands respect to the divisibility net, we have that w a (R × (y , +∞)) ⊂ R × (y, +∞) for every a ∈Ẑ taking a bigger y if necessary; i.e. w is continuous at +∞. Then w is a homeomorphism on {+∞} ∪Ẑ × R × [0, +∞) for a continuous bijective map between compact sets is a homeomorphism. Define exp(+∞) = 0. Is clear that exp : Q is continuous hence a homeomorphism for the same reason as before. Because the following diagram commutes: the mapŵ is a homeomorphism. This proves the claim. A complete analogous construction to the one before gives the commutative diagram: where the maps on the top square are homeomorphisms and every w a is quasiconformal. By equation (23) we have the relation w a (z) = w a (z) henceŵ is a quasiconformal map fixing 0, ∞ such that:ŵ (1/z) = 1 w(z) In particular, the map ψ : T S (1) → Bel S (H Q ) such that ψ(f ) = µŵ defines a section of the proyection B: B • ψ = id.
For the converse, consider a map f : S 1 Q → S 1 Q such that there is a quasiconformal mapŵ : H Q → H Q with continuous extension f . Becauseŵ is a leaf preserving homeomorphism of H Q then the continuous extension f is so and because ν −1 a •ŵ • ν a is quasiconformal then by Lemma 5.4 ν −1 a • f • ν a is quasisymmetric for every leaf ν a . Because the baseleaf ν is a morphism it defines leaf preserving left action ρ : R BL → S 1 Q such that ρ(a)(x) = ν(a)x where a ∈ R and x ∈ S 1 Q . This action is the translation along the leaves. By conjugation, it defines a left action on QS(S 1 Q ) such that a.f = ρ(a) • f • ρ(a) −1 . Consider a cofinal totally ordered divisibility subsequence S and the subspace QS S (S 1 Q ) of restrictions of quasiconformal solutions of S-adelic Beltrami differentials. Define the Teichmüller space T S (1) as the quotient: Q ) 1 denote those maps fixing the unit. By the previous Lemma we have: and is the double covering of the projective space: It seems that the p-adic solenoid S 1 p has the boundary ∂Z p × S 1 or some dynamical suspension of ∂Z p with some monodromy map. We wonder whether these ideas connect to tropical geometry.
In the same way we defined a notion of vertical uniform continuity in L ∞ we define the notion of vertical uniform derivative: As an example, consider the p-adic integers Z p with its p-adic norm ||·|| p . By definition, the p-adic norm is continuous on this space. Let's see that it has a directional derivative at every point. If x = 0 then there is some natural number N such that a ∈ N Z p imply ||x + a|| p = ||x|| p hence its derivative is zero at x. However, its directional derivative at zero is one for every direction. We have proved that the derivative exists and equals the delta Kronecker: d ||x|| p dx = δ 0 x Lema 5.7. Consider a vertical L ∞ -continuous µ ∈ L ∞ (C * p ) such that there is a converging S-renormalized average series. Then and dµ/dZ p exists along the direction determined by S and it is zero; i.e: dµ dZ p S = 0 Proof: There is a subsequence (n j ) j∈N of (p n ) n∈N such that 8 holds. In particular, the non-renormalized average series converges: ∞ j=1 ||I n j+1 (µ) − I n j (µ)|| ∞ < ∞ and because of Lemma 3.1 we have: µ = I n 1 (µ) + +∞ j=1 (I n j+1 (µ) − I n j (µ)) Recall that m a (x) = φ(a)x and φ(a) ∈ Ker(π n ) imply π n (φ(a)x) = π n (x) for π n is a group morphism. Because φ(a) ∈ Ker(π n ) if and only if a ∈ nZ p , we have that π −1 n (π n (x)) is invariant under the action m a such that a ∈ nZ p . In particular, because the measure is invariant under m a , we have that I n (µ • m a ) = I n (µ) for every a ∈ nZ p . Then, lim J n J ||µ • m n J − µ|| ∞ = lim J n J || +∞ j=J (I n j+1 (µ • m n J ) − I n j (µ • m n J )) − +∞ j=J (I n j+1 (µ) − I n j (µ))|| ∞ ≤ 2 lim J n J +∞ j=J ||I n j+1 (µ) − I n j (µ)|| ∞ ≤ 2 lim J +∞ j=J n j+1 ||I n j+1 (µ) − I n j (µ)|| ∞ = 0 Finally we have: Moreover, we conjecture the following: Conjecture 1. In the p-adic case, the renormalized average series condition in the adelic Beltrami differential definition can be substituted by the condition dµ/dZ p = 0. We don't mean these conditions are equivalent, we mean that the whole theory that follows, in particular Ahlfors-Bers theorem, holds with this alternative condition.
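Returning to the p-adic norm computation above, here is a minimal numerical check of its local constancy away from zero (illustrative code of ours; the helper name and parameters are not from the text):

```python
def padic_norm(x, p):
    """p-adic norm of an integer: p**(-v) where p**v exactly divides x; the norm of 0 is 0."""
    if x == 0:
        return 0.0
    v = 0
    while x % p == 0:
        x //= p
        v += 1
    return float(p) ** (-v)

p = 3
x = 18  # 18 = 2 * 3**2, so |18|_3 = 3**(-2)
print(padic_norm(x, p))
# Perturbing x by any multiple of p**N with N larger than the valuation of x
# leaves the norm unchanged: the discrete analogue of "derivative zero at x != 0".
for u in (1, 5, -7):
    print(padic_norm(x + u * p ** 5, p) == padic_norm(x, p))  # True each time
# At x = 0 the norm of the perturbation itself survives, which is why the
# directional derivative at zero behaves differently.
print(padic_norm(p ** 5, p))  # 3**(-5)
```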
In what follows, we will take the sequence P = (p n ) n∈N 0 and, to lighten the notation, when there is no confusion we will make the following abuses of notation: • Denote || · || P = || · || Ren,P .
• Denote I n = I p n .
• For every complex continuous function f from the solenoid, algebraic solenoid or the adelic sphere, denote I n (f ) = I n (f • ν) and ||f || P = ||f • ν|| P where ν is the baseleaf.
• For degree zero maps f , by Lemma 2.17 there is a complex function g such that f = ν • g. Denote ||f || P = ||g|| P .
• We will omit the notation making reference to orientation and leaf preserving p-adic solenoidal diffeomorphisms: Dif f + lp (S 1 p ) = Dif f (S 1 p ). Instead, we will denote the orientation and leaf preserving p-adic solenoidal C m -diffeomorphisms fixing the unit by Dif f m (S 1 p ) 1 . The same abuses hold for the p-adic solenoidal quasisymmetric maps QS(S 1 p ).
for every m ≥ i ≥ 0. We conclude that the functions considered are actually metrics. In the case m = ∞, the C ∞ -topology is defined as the one generated by the union of all the C m -topologies for finite m. In particular, the inclusions Dif f m (S 1 p ) 1 ⊂ Dif f n (S 1 p ) 1 are continuous for m ≥ n, including m = ∞.
Lemma 5.9. The space Dif f m (S 1 p ) 1 is complete for m = 0, 1, 2, . . . , ∞. Proof: Consider a C m -Cauchy sequence (f n ) n∈N . By the above relation, the corresponding sequences of limit periodic (with respect to x) functions are Cauchy sequences with respect to || · || P for every 0 ≤ i ≤ m. By Lemma 4.8, for every 0 ≤ i ≤ m there is a function g i , limit periodic with respect to x, such that the corresponding Cauchy sequence converges to it with respect to || · || P . In particular, by the first item of Lemma 4.7, the convergence is uniform for every 0 ≤ i ≤ m and we have g i = (d i /dx i ) g 0 for such i. By Lemma 2.13 there is a continuous leaf preserving map f such that ν −1 • f • ν = id + g 0 and we conclude that the sequence (f n ) n∈N converges to f in the C m topology. We define the set Dif f m P (S 1 p ) 1 of p-adic solenoidal leaf and orientation preserving C m diffeomorphisms f fixing the unit with the property that there is an adelic Beltrami differential µ ∈ Bel P (H p ) such that f = f µ | S 1 p . Because every diffeomorphism is quasisymmetric, we have the Nag-Verjovsky map [NV] ι : Dif f m P (S 1 p ) 1 → QS P (S 1 p ) 1 → T P (1). See that Dif f 0 P (S 1 p ) 1 = QS P (S 1 p ) 1 . Lemma 5.10. The Nag-Verjovsky map is differentiable for m ≥ 2.
Proof: Because the inclusions Dif f m (S 1 p ) 1 ⊂ Dif f n (S 1 p ) 1 are continuous for m ≥ n, it is enough to prove the result for m = 2. Using the Ahlfors-Beurling extension formula (23), given the C 2 diffeomorphism f in Dif f 2 P (S 1 p ) 1 such that: where a −q = ā q and a 0 = 0, we have the quasiconformal extension w f such that: w f (z) = z + Σ q∈Q a q e iqx l(qy), where z = x + iy and l is the real function: See that the above expression makes explicit the relation w f (z̄) = \overline{w f (z)}. By Lemma 4.2 there is a continuous adelic Beltrami differential µ f on the p-adic sphere Ĉ p such that: ν * (µ f )(z) = [Σ q∈Q iq a q e iqx (l(qy) + l′(qy))/2] / [1 + Σ q∈Q iq a q e iqx (l(qy) − l′(qy))/2] The above expression is well defined for (1/2)|l(x) ± l′(x)| < 1, and because the solenoid is compact and f is a diffeomorphism, there is a constant k > 0 such that: for every x ∈ R. Now we have an explicit expression for the Nag-Verjovsky map: where ϕ(f) = µ f and ι(f) = [ϕ(f)] = [µ f ]. We claim that (recall the abuse of notation 5.2): || Σ q∈Q iq a q e iqx (l(qy) ± l′(qy))/2 || P ≤ (π/√3) ||f || C 2 This is just a calculation. Because: || Σ q∈Q iq a q e iqx (l(qy) ± l′(qy))/2 || P = Σ n∈N 0 p n ||I n (. . .) − I n−1 (. . .)|| ∞ and because of the fact (1/2)|l(x) ± l′(x)| < 1 and remark 3.3, for each term we have the required bound, and we have the claim. In particular, ϕ is continuous at the identity, for: ||ϕ(f)|| P ≤ || Σ q∈Q iq a q e iqx (l(qy)+l′(qy))/2 || P / (1 − || Σ q∈Q iq a q e iqx (l(qy)−l′(qy))/2 || P ) ≤ [ (π/√3) d C 2 (id, f) ] / [ 1 − (π/√3) d C 2 (id, f) ] for f sufficiently close to the identity in the C 2 topology. In particular, ι is continuous. ι is differentiable: again, by the same argument as before, it is enough to prove it at the identity. For every degree zero solenoidal map h such that id.h is a C 2 diffeomorphism in Dif f 2 P (S 1 p ) 1 (h is a perturbation of the identity in the C 2 topology), define the linear map d id ϕ such that: ν * (d id ϕ(h))(z) = Σ q∈Q iq a q e iqx (l(qy) + l′(qy))/2 dz̄/dz Actually, d id ϕ is the differential at the identity for: This concludes the Lemma, for d id ι = [d id ϕ].
Remark 5.2. In the above proof, several different quasiconformal extensions can be defined with appropriate functions l, for example: This function has the property of decaying to zero when x tends to ±∞, l(0) = 1 and (1/2)|l(x) ± l′(x)| ≤ 1 for every x ∈ R. Actually, this is the extension used in example 4.1.
From now on we will consider m ≥ 2. We will define complex structures on both ends of the Nag-Verjovsky map and conclude that it is analytic with respect to these structures. Consider the tangent space at the identity T id Dif f m P (S 1 p ) 1 of C ∞ -vector fields v such that: where a −q = ā q and a 0 = 0. Because of the monomorphism: d id ϕ : T id Dif f m P (S 1 p ) 1 → T 0 Bel P (H p ) = Ren P these vector fields can be characterized as those for which there is some η ∈ Ren P such that v = ḟ[η]. Define an (a priori) almost complex structure Ĵ such that: ν * (Ĵv) = Σ q∈Q −i sg(q) a q e iqx and translate it by conjugation with the adjoint map Ad to the whole tangent bundle T Dif f m P (S 1 p ) 1 . Although Dif f m P (S 1 p ) 1 is not a group, this translation of structure can be done.
On the other hand, the canonical linear complex structure of L ∞ (H p ) induces a complex structure J on the Banach manifold of solenoidal quasisymmetric maps QS P (S 1 p ) 1 .
Proof: Consider η ∈ T 0 Bel P (H Q ) such that ḟ[η] is a C ∞ derivation and its restriction to the solenoid is determined by: where a −q = ā q and a 0 = 0. Define the derivation F such that: See that F is continuous on the whole adelic sphere (see remark 4.8), in particular at zero. Because of the following fact: ∂̄ḟ[η] = η in the distributional sense on every leaf, where ∂̄ = dz̄ ∂ z̄ , we have that ∂ z̄ F = 0 on every leaf of H Q in the distributional sense. By Weyl's lemma on every leaf, F is holomorphic on H Q such that the real part of its continuous extension to the solenoid is the function defining ḟ[η] with respect to d/dπ 1 . The only derivation F satisfying this property is the continuous one such that: ν * (F)(z) = 2 Σ q>0 a q e iqz d/dz Because the imaginary part of its continuous extension to the solenoid is the function defining ḟ[iη] with respect to d/dπ 1 , we conclude that: Corollary 5.12. The automorphism Ĵ defines a complex structure.
The afterglow of the short/intermediate-duration gamma-ray burst GRB 000301C: A jet at z=2.04
We present Ulysses and NEAR data from the detection of the short or intermediate duration (2 s) gamma-ray burst GRB000301C (2000 March 1.41 UT). The gamma-ray burst (GRB) was localised by the Inter Planetary Network (IPN) and RXTE to an area of 50 arcmin^2. A fading optical counterpart was subsequently discovered with the Nordic Optical Telescope (NOT) about 42h after the burst. The GRB lies at the border between the long-soft and the short-hard classes of GRBs. If GRB000301C belongs to the latter class, this would be the first detection of an afterglow to a short-hard burst. We present UBRI and JHK photometry from the time of the discovery until 11 days after the burst. Finally, we present spectroscopic observations of the optical afterglow obtained with the ESO VLT Antu telescope 4 and 5 days after the burst. The optical light curve is consistent with being achromatic from 2 to 11 days after the burst and exhibits a break. A broken power-law fit yields a shallow pre-break decay power-law slope of a_1=-0.72+-0.06, a break time of t_b=4.39+-0.26 days after the burst, and a post-break slope of a_2=-2.29+-0.17, which is best explained by a sideways expanding jet in an ambient medium of constant mean density. In the optical spectrum we find absorption features that are consistent with FeII, CIV, CII, SiII and Ly-a at a redshift of 2.0404+-0.0008. We find evidence for a curved shape of the spectral energy distribution of the observed afterglow. It is best fitted with a power-law spectral distribution with index b ~ -0.7 reddened by an SMC-like extinction law with A_V~0.1 mag. Based on the Ly-a absorption line we estimate the HI column density to be log(N(HI))=21.2+-0.5. This is the first direct indication of a connection between GRB host galaxies and Damped Ly-a Absorbers.
Introduction
The discovery of the first X-ray afterglow (Costa et al. 1997) and optical counterpart (van Paradijs et al. 1997) to a longduration gamma-ray burst (GRB) have led to a revolution in GRB research. The determination of a redshift of 0.835 for GRB 970508 (Metzger et al. 1997), and the subsequent determination of redshifts of 13 bursts with a median redshift of ∼1.0, have firmly established their cosmological origin (Kulkarni et al. 2000a;This work;Bloom et al. 2000).
The intriguing case of an association of the peculiar supernova SN1998bw with GRB 980425 (Galama et al. 1998) was the first indication of a possible connection with supernovae. Evidence for supernova signatures in the late light curves of GRB 970228 (Reichart 1999;Galama et al. 1999) and GRB 980326 (Castro-Tirado & Gorosabel 1999;Bloom et al. 1999) suggests that at least some long-duration GRBs may be related to the collapse of massive (> 25 M ⊙ ) stars. Breaks in the power-law declines of GRB 990123 ) and GRB 990510 (Harrison et al. 1999) are interpreted as evidence for collimated outflows ('jets') (see also Holland et al. 2000). Further evidence for this collapsar + jet model (e.g., MacFadyen & Woosley 1999) comes from the light curve of GRB 980519 which is best interpreted as a jet expanding into a preexisting circumburst stellar wind (Jaunsen et al. 2001).
The high-energy properties of GRBs show a bi-modal distribution of burst durations (Kouveliotou et al. 1995) which, in the simplest scenario, may indicate the existence of binary compact mergers as the progenitors of the short-duration bursts (T 90 < 2 s). From an analysis of the Third BATSE Catalog, Mukherjee et al. (1998) have shown that, in addition to the short (T 90 < 2 s) and long (T 90 > 5 s) classes, there may exist a third, intermediate soft-spectrum class of GRBs with duration 2 s < T 90 < 5 s.
In this paper we report the discovery and subsequent observations and analysis of the afterglow of the short-to-intermediate duration GRB 000301C (Fynbo et al. 2000a).
Sect. 2 reports the detection, IPN localisation and the high-energy data of the GRB obtained from Ulysses and NEAR. Sect. 3 describes the discovery of the optical counterpart and our subsequent optical and infrared observations. Sect. 4 details the optical and infrared photometry and Sect. 5 describes the VLT spectroscopy. Sect. 6 describes the results obtained on the spectroscopy and spectral energy distribution and Sect. 7 is devoted to the discussion and interpretation, with Sect. 8 presenting our conclusions.
Send offprint requests to: B.L. Jensen
⋆ Based on observations made with the Nordic Optical Telescope, operated on the island of La Palma jointly by Denmark, Finland, Iceland, Norway, and Sweden, in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias.
⋆⋆ Based on observations collected at the European Southern Observatory, La Silla and Paranal, Chile (ESO project No. 64.H-0573)
⋆⋆⋆ Based on observations at the German-Spanish Astronomical Centre, Calar Alto, operated by the Max-Planck-Institute for Astronomy, Heidelberg, jointly with the Spanish National Commission for Astronomy.
Correspondence to: brian j@astro.ku.dk
Detection and localisation of the gamma-ray burst
GRB 000301C was recorded by the Ulysses GRB experiment and by the NEAR X-Ray/Gamma-Ray Spectrometer. Because this burst was relatively weak, it did not trigger the Ulysses Burst Mode, and the only data available from Ulysses is the Observation Mode 1 0.25 s resolution 25-150 keV light curve (Hurley et al. 1992b). NEAR records the light curves of bursts in the 150-1000 keV energy range with 1 second resolution, but takes high-energy spectra only with 40 min resolution. Analysis of the Ulysses and NEAR relative timing data yields an annulus centred at (α, δ) 2000 = (20 h 34 m 7.56 s , +20 • 32 ′ 19.62 ′′ ), with a radius of 57.520±0.083 degrees (at 3σ full-width). This annulus intersected the error-box of the All-Sky Monitor (ASM) on the RXTE spacecraft, at near-right angles to create a composite localisation of a parallelogram of area 50 arcmin 2 (see Fig. 2).
Since no high-energy spectra are available, we have estimated the peak fluxes and fluences for trial power-law spectra with indices between 1 and 4 using the Ulysses data. For a typical power-law index of 2, we find a 25-100 keV fluence of 2.1 × 10 −6 erg cm −2 , and a peak flux over the same energy range, and over 0.25 s, of 6.3 × 10 −7 erg cm −2 s −1 . The uncertainties in these numbers are partly due to the lack of a high-energy spectrum. For example, the fluence estimates range from 1.45 × 10 −6 to 2.24 × 10 −6 erg cm −2 as the spectral index is varied from 4 to 1. The statistical uncertainty is approximately 30%. From the NEAR data we estimate the 150-1000 keV fluence to be approximately 2 × 10 −6 erg cm −2 .
To date, the only GRBs with identified long-wavelength counterparts have been long-duration bursts. As measured by both Ulysses and NEAR, in the >25 keV energy range, the duration of this burst was approximately 2 s. (Note that the earlier estimate of a 10 s duration of GRB 000301C by Smith et al. (2000) was based on the <10 keV energy range). Thus it falls in the short class of bursts, though it is consistent with belonging to the proposed intermediate class or the extreme short end of the distribution of long-duration GRBs (Hurley et al. 1992a; Mukherjee et al. 1998). Although we do not have any measurements of the high-energy spectra above 25 keV, it is possible to derive a crude estimate of the spectral index, and therefore the hardness ratio (the 100-300 keV fluence divided by the 50-100 keV fluence), from the Ulysses and NEAR count rates. We obtain a hardness ratio of 2.7±0.6(cutoff)±30%(statistical error) from fitting a power-law, with the index as a free parameter, to the count rates from NEAR and Ulysses, assuming a range of cut-off energies. Fig. 1 shows the location of GRB 000301C in a hardness vs. duration plot. The contour plot contains 1959 GRBs for which data on fluence and duration were available in the Fourth BATSE GRB Catalog (revised) (Paciesas et al. 1999) and the BATSE Current GRB Catalog 1 . The symbols represent the 10 GRBs included in the BATSE catalogs for which an afterglow has been identified, with GRB 000301C located near the center of the plot. Triangles are bursts where a break has been found in the optical light curve. From this sparse set of data there does not appear to be any marked difference in the distributions of bursts with, or without, an identified break.
Fig. 1. [...] (Paciesas et al. 1999) and the Current BATSE GRB Catalog. The triangle with an error-bar near the center of the plot represents GRB 000301C. Other symbols represent 10 other BATSE bursts with identified counterparts for which data on fluence and duration are available. Triangles are bursts which have a break in their optical light curves. Errors in the BATSE data are smaller than the symbol size. Contour levels scale linearly. The centroid in the lower left corner indicates the resolution.
Of the 1959 BATSE bursts in Fig. 1, the ratio between bursts with a duration of T 90 ≥ 2.0 s and with T 90 < 2.0 s is 3:1. To date, at least 23 GRB optical afterglows have been discovered (Kulkarni et al. 2000a;Andersen et al. 2000;this work;Klose et al. 2000;Fynbo et al. 2001a;Fynbo et al. 2001b;Henden 2001; grb-webpage of J. Greiner 2 ). If the distribution of the 23 GRBs with identified counterparts follows the general BATSE distribution, one would expect that 17 ± 4 bursts were in the long class, and 6 ± 2 bursts were in the short class. However, GRB 000301C is the only GRB with a duration consistent with the short-duration class. The expected number of identified short burst counterparts is moderated by the strong selection bias caused by the technical difficulties of obtaining precise localisations for the short GRBs.
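The expectation quoted here follows from a simple binomial split; the short check below is only a back-of-the-envelope verification (the exact uncertainties quoted in the text may come from a slightly different error model).

```python
# Rough check: 23 counterparts split 3:1 between long and short bursts.
from math import sqrt

n, p_short = 23, 0.25                                   # 1 in 4 BATSE bursts has T90 < 2 s
mean_short = n * p_short
sigma = sqrt(n * p_short * (1 - p_short))
print(f"short: {mean_short:.1f} +/- {sigma:.1f}")       # ~5.8 +/- 2.1
print(f"long : {n - mean_short:.1f} +/- {sigma:.1f}")   # ~17.2 +/- 2.1
```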
Discovery and observations of the afterglow
The IPN/RXTE error-box of GRB 000301C (Smith et al. 2000) was observed with the 2.56-m Nordic Optical Telescope (NOT) (Fynbo et al. 2000a). A finding-chart of the IPN-errorbox and the two ALFOSC pointings used to cover the field can be seen in Fig. 2. The transient nature of the candidate was subsequently confirmed at optical and infrared wavelengths Stecklum et al. 2000;Veillet et al. 2000;Fynbo et al. 2000b;Kobayashi et al. 2000). At the time of discovery the magnitude of the OT was R=20.09±0.04 (see Sect. 4 for a detailed discussion of the photometry and Fig. 3 for a finding chart and UBRI-images of the OT on 2000 March 3 UT).
We obtained subsequent optical observations using the NOT, the Antu telescope (UT1) of ESO's Very Large Telescope (VLT), the USNOFS 1.0-m telescope, the 2.2-m telescope at Calar Alto (CAHA) and the Wide Field Imager (WFI) on the ESO 2.2-m telescope. In addition we obtained near-infrared (NIR) data from the United Kingdom Infra-Red Telescope (UKIRT).
The journal of our optical and NIR observations, including the derived magnitudes, is given in Table 1.
The OT was also observed from several other optical and infrared telescopes, and the counterpart was subsequently detected in the mm-band at 250 GHz (Bertoldi 2000), and at radio (8.46 GHz) wavelengths 3 . Papers detailing the properties of the counterpart of GRB 000301C have been presented at radio and mm wavelengths and in the infrared (Rhoads & Fruchter 2001), and optical bands (Masetti et al. 2000; Sagar et al. 2000).
Fig. 3. [...] Lower panel: A region of 30×30 arcsec 2 centred on the OT from the combined U, B, R and I frames obtained on 2000, March 3.14-3.28 UT, about 42 hours after the burst.
Table 1. Journal of our observations of GRB 000301C with NOT+ALFOSC, USNO, CAHA 2.2-m, ESO VLT+FORS1, ESO 2.2-m+WFI in the optical bands, and the UKIRT observations in the infrared bands. The magnitudes were obtained from PSF photometry of the OT using DAOPHOT II, except for the VLT observation of 2000 March 6.39 UT, which has been derived from the March 6 VLT spectra (see Table 2).
Optical data
To avoid contamination from the nearby star A (located at a separation of 6 ′′ west and 1 ′′ south of the OT in Fig. 3), we measured the magnitude of the OT relative to stars in the field by performing Point Spread Function (PSF) photometry, using DAOPHOT II (Stetson 1987, 1997). There are several bright and unsaturated stars in the field from which a good PSF could be determined. For the data presented here, there is no indication of a contribution from a host galaxy to the emission at the position of the OT (see Sect. 7.2 for a discussion of the host galaxy). Hence, extended emission from a faint galaxy at the position of the OT will not affect the PSF photometry appreciably (much less than observational errors). The quality of the PSF photometry was checked by subtracting the PSFs from the images of star A and the OT. In all frames the residuals are consistent with being shot-noise.
To avoid errors due to colour terms or colour differences in our photometry (the conditions during most of the observations at the NOT were possibly non-photometric due to increasing amounts of Saharan dust in the atmosphere over the telescope at the time of observations), the magnitudes of the OT for all our optical photometry were calibrated relative to stars of similar colours in the field.
The photometric standard UBVR C I C calibration of the field was performed at the USNOFS 1.0-m telescope and is available in Henden (2000). This calibration has an estimated zero-point uncertainty of 2 percent, which is well below the errors in the relative magnitudes. The results of the PSF photometry are presented in Table 1. The 2000 March 6.39 VLT R-band data point has been derived from the March 6 combined VLT spectrum (Table 2).
Based on this photometric calibration we conclude that star A showed no sign of variability within observational errors throughout our observations, and that it had the following magnitudes: U = 20.427±0.133, B = 19.837±0.030, V = 18.767±0.018, R = 18.084±0.043, and I = 17.526±0.044.
Near-infrared data
The UKIRT images were processed using the ORAC imaging data reduction routines developed for UKIRT (Bridger et al. 2000). The J, H and K magnitudes of the OT were then measured from the UKIRT data as follows. First we measured the magnitude of the OT relative to star A using DAOPHOT II PSF photometry as described above. Then, in order to transform this magnitude to the standard UKIRT system, we performed aperture photometry in an aperture with a diameter of 2. ′′ 7 on calibration images obtained of the standard stars S868-G and p389-d from the list of UKIRT faint standards 4 and on star A. The estimated error in the zero-point is about 0.05 in each of J, H and K. We have assumed negligible extinction difference between standard and program field. The results of these measurements are presented in Table 1.
Spectroscopy
Spectroscopic observations were carried out on 2000 March 5 and 6 UT with VLT-Antu equipped with FORS1 (for details see the observing log in Table 2). We used the GRIS 300V+10 grism and the GG375 order separation filter, which provide a spectral coverage from 3600Å to 8220Å and a dispersion of 2.64Å/pixel. The effective exposure time was 800 s on March 5.39 UT and 1200 s on March 6.38 UT. Standard procedures were used for bias and flat field correction, and the optimal extraction procedure for faint spectra (described in Møller (2000)) was used to extract one dimensional spectra.
The position angle of the long slit was chosen such that both the OT and star A were centred onto the slit. From the magnitude of star A we calibrated the flux of the trace of the optical counterpart. The spectral flux calibration derived for the OT on March 5 is consistent with the optical photometry displayed in Table 1. We derive a value for the spectral index of β = −1.15 ± 0.26 on March 5 and β = −1.43 ± 0.28 on March 6, corrected for interstellar extinction of E(B − V) = 0.053 ± 0.020, using the dust maps of Schlegel et al. (1998).
Fig. caption (fragment): [...] Table 3. The spectrum is binned to 7Å pixels, and the lower curve shows the noise (per pixel).
GRB absorption lines and redshift determination
The combined spectrum, reproduced in Fig 3, has a resolution of 14Å FWHM, and signal-to-noise (S/N) in the range 15-30 per resolution element redwards of 4000Å. From 4000Å to 3600Å the S/N drops rapidly. Due to the poor resolution, only very strong absorption lines can be detected individually. In Table 3 we list the only four absorption features which were detected at a S/N in excess of 4.5. Two of the features were found bluewards of 4000Å, and were initially ignored. The line at 4712Å is broader than the resolution profile, and we tentatively identified it as a possible C IV absorption complex with redshifts in the range 2.038 to 2.042. With this identification, the other three features would fit the proposed identifications given in Table 3. Note, however, that the Si II 1260 line is far too strong and wide to be a single line, and we hence assume that it is a blended feature. The Fe II 2600 line is strong and narrow, but also this line seems excessively strong given the lack of other strong Fe II lines. In order to provide a more strict test of our proposed identification, and to obtain an accurate value for the redshift, we proceeded as follows. First we shifted and stacked pieces of the spectra where we would expect common low ionization lines. We selected the singly ionized species of Si, C and Fe, all of which are known as strong absorbers in quasar absorption line systems. In total our spectrum covers positions of the lines Fe II 1608, Fe II 2344, Fe II 2374, Fe II 2382, Fe II 2586, Fe II 2600, Si II 1260, Si II 1304, Si II 1526, and C II 1334. Si II 1260 (at 3832Å) was in the very low S/N part of spectrum, and almost certainly blended, so it was not included. Treating each ion separately, a weighted mean absorption feature was calculated using the oscillator strength of each line as statistical weight.
The regions of the co-added spectra for each of the ions Fe II, Si II and C II, transformed into redshift space, are shown in the left panel of Fig. 5. A combined "Low Ionization" absorption feature (bottom of left panel of Fig. 5) was obtained by coaddition of the three sets of features, but using the number of lines as statistical weights. The redshift range searched for low ionization absorption systems by this method was 1.95-2.14 and no other candidate systems were found in this range. For comparison of the redshifts, we plot in the right panel of Fig. 5 again the combined "Low Ionization" absorption feature together with the C IV absorption trough. The bottom panel here shows the combination of all the lines. Given the significance of this combined "Metal Absorption Feature", we conclude that the tentative identification of this system is confirmed.
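The stacking procedure can be sketched as follows; this is a schematic illustration only — the spectrum arrays are placeholders, and the Fe II rest wavelengths and oscillator strengths are approximate literature values, not necessarily those used by the authors.

```python
# Schematic oscillator-strength-weighted stack of absorption lines in redshift space.
import numpy as np

fe2_lines = {2344.21: 0.114, 2374.46: 0.031, 2382.77: 0.320,   # approximate f-values
             2586.65: 0.069, 2600.17: 0.239}

def stack_in_redshift(wave, flux, lines, z_grid):
    """Weighted mean of the normalized flux, resampled onto a common redshift grid."""
    stack, wsum = np.zeros_like(z_grid), 0.0
    for lam0, f_osc in lines.items():
        z_of_pixel = wave / lam0 - 1.0            # redshift each pixel would imply for this line
        stack += f_osc * np.interp(z_grid, z_of_pixel, flux)
        wsum += f_osc
    return stack / wsum

# wave, flux = ...                                # continuum-normalized spectrum (placeholders)
# profile = stack_in_redshift(wave, flux, fe2_lines, np.linspace(1.95, 2.14, 400))
```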
It is commonly seen in quasar absorption line systems that the low ionization species, tracing the cold dense gas, have a more well-defined redshift than the high ionization species. Hence, we shall adopt the redshift z abs = 2.0404 ± 0.0008, measured from the combined Low Ionization feature, as the systemic redshift. It is seen from the bottom right panel of Fig. 5 that inclusion of C IV would result in a slightly higher redshift. Note that z abs = 2.0404 ± 0.0008 is consistent with the redshift z=1.95±0.1 based on the Lyman break Feng et al. 2000), but is significantly higher than the value z=2.0335±0.0003 reported by Castro et al. (2000).
Fig. 5. [...] The combined "Low Ionization" absorption feature (weighted mean of the three sections of spectrum shown above). Right: Upper panel: The combined "Low Ionization" absorption feature lined up with the C IV trough. Lower panel: Combined "Metal Absorption Feature". All the sections of spectra have been normalized to 1 in the continuum, but shifted along the abscissa by suitable off-sets. Note that none of the figures start at 0, instead the scale on the abscissa provides the proper reference.
The oscillator strength weighted mean observed equivalent width of the Fe II lines is 2.56Å, which is strong enough that by comparison to known quasar absorbers one would expect this to likely have a column density of neutral Hydrogen in excess of 2 × 10 20 cm −2 . Such absorbers are known as Damped Lyα Absorbers (DLAs), and hold a special interest because of the large amounts of cold gas locked up in those objects (Wolfe et al. 1995; Storrie-Lombardi et al. 1997). It is commonly assumed that the DLAs are the progenitors of present day disk galaxies, but they have proven extremely difficult to identify (see e.g. Møller & Warren 1993; Kulkarni et al. 2000b). Observational evidence has been accumulating (Møller & Warren 1998; Fynbo et al. 1999) which suggests that a likely reason why DLA galaxies are so hard to identify is their small gas cross-sections and faint magnitudes, causing them to stay hidden under the point spread functions of the bright quasars. A GRB selected DLA galaxy sample would not be hampered by this problem once the OT has faded sufficiently, and could as such help greatly in understanding the nature of DLA galaxies. We shall therefore now briefly consider the low S/N part of the GRB spectrum below 4000Å, to investigate if any information concerning the H I column density can be extracted. In Fig. 6 (lower panel) we have plotted the spectral region around Lyα, and for comparison of redshifts, the "Combined Metals" feature (upper panel). Also plotted on the lower panel is the noise per bin (for redshift binsize = 0.003). It is clearly seen that the spectrum drops steeply before the expected central position of the Lyα line, and well before the S/N drops below detection. One likely explanation for this is the presence of a very broad Lyα absorption line. To quantify this we have modelled several Lyα absorption lines, all at redshift 2.0404, and calculated the χ 2 of their fit to the data in the range 3700Å to 3750Å. For N(H i) = 0 the χ 2 per degree-of-freedom (DOF) is 6.46 which confirms that an absorption feature is indeed present. The formal χ 2 minimum is found at N(H i) = 1.5 × 10 21 cm −2 (χ 2 per DOF = 0.86), but any value within a factor 3 of this is acceptable.
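For orientation, the kind of model comparison described here can be set up as in the hedged sketch below. It uses approximate Lyα atomic data (f ≈ 0.4164, Γ ≈ 6.265 × 10^8 s^−1, λ0 = 1215.67 Å) and ignores instrumental resolution and continuum placement, so it is an illustration of the method rather than the authors' actual fit.

```python
# Simplified damped Ly-alpha absorption model and chi-square comparison (illustrative).
import numpy as np
from scipy.special import voigt_profile

C = 2.998e10                       # speed of light, cm/s
SIGMA0 = 0.02654                   # pi e^2 / (m_e c), cm^2 Hz (cgs)
F_LYA, GAMMA, LAM0 = 0.4164, 6.265e8, 1215.67e-8   # oscillator strength, damping const, cm

def lya_transmission(wave_obs_A, z, logN, b_kms=30.0):
    """exp(-tau) for an absorber at redshift z with column density 10**logN cm^-2."""
    nu_rest = C / (np.asarray(wave_obs_A) * 1e-8) * (1.0 + z)   # rest-frame frequency (Hz)
    nu0 = C / LAM0
    sigma_d = (b_kms * 1e5 / C) * nu0 / np.sqrt(2.0)            # Gaussian core width (Hz)
    gamma_l = GAMMA / (4.0 * np.pi)                              # Lorentzian HWHM (Hz)
    phi = voigt_profile(nu_rest - nu0, sigma_d, gamma_l)         # normalized line profile
    return np.exp(-(10.0**logN) * SIGMA0 * F_LYA * phi)

def chi2(wave_obs_A, flux, err, continuum, z, logN):
    model = continuum * lya_transmission(wave_obs_A, z, logN)
    return np.sum(((flux - model) / err) ** 2)

# e.g. compare chi2(..., z=2.0404, logN=0) with chi2(..., z=2.0404, logN=21.2)
# over the 3700-3750 A region, as in the text.
```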
It should be recalled that the above estimate, in a strict sense, only applies in the case the OT lies "behind" the absorbing cloud. In case the OT is in fact embedded "inside" a DLA cloud, resonant scattering of Lyα photons may alter the profile of the absorption line somewhat. In the present case this is a detail which the quality of our data will not allow us to discern.
The multi-wavelength spectrum around March 4.5 UT
The wide wavelength coverage obtained around 2000 March 4.5 UT allows us to construct the multi-wavelength spectrum of the afterglow, by using our USNO and UKIRT data. Most fireball models (e.g. Sari et al. 1998; Piran 1999; Mészáros 1999 and references therein) and observations of previous afterglows suggest a power-law Spectral Energy Distribution (SED). However, a global fit to the broadband SED of GRB 000301C demonstrates that the mm-optical range cannot be described by a single power-law. Thus, we have only considered wavelengths shorter than IR when fitting the SEDs presented in this section. This gives eight measurements in the period March 4.39-4.55 UT, namely UBVRIJHK. The eight measurements, plotted against log ν together with the VLT spectrum in Fig. 7, allow us to constrain the SED of the OT at this epoch. As extinction is highly wavelength dependent, a progressive deviation from a pure power-law fit can be explained as being due to dust extinction in the GRB host galaxy. In order to test this possibility, we first fit a power-law to the data in the NIR-optical range with the addition of an intrinsic extinction law, i.e., using the expression F ν ∝ ν β × 10 (−0.4Aν) , where A ν is the rest frame extinction in magnitudes at the rest frame frequency ν. Four extinction laws have been applied in order to establish a relationship between A ν and ν. Following Rhoads & Fruchter (2001), we have fitted the extinction curves for the Galaxy and Magellanic Clouds published by Pei (1992). Pei (1992) provides extinction laws for the Milky Way (MW), the Large Magellanic Cloud (LMC), and the Small Magellanic Cloud (SMC) based on their different proportions in the dust-to-gas ratio (1:1/5:1/10) and in the abundance of heavy elements (1:1/3:1/8). The most significant difference is the sequential change in the strength of the 2175Å extinction feature, prominent for the MW, moderate in the LMC, and nonexistent for the SMC extinction curve. It is important to note that for the redshift of GRB 000301C this extinction feature falls in the observed R-band. So, the presence of this feature would result in a clear decrease in the R-band flux compared to the I and V-bands. The MW extinction curve requires an equal amount of graphite and silicate grains, while the SMC extinction curve can be explained by silicate grains only, with the LMC extinction law as an intermediate stage.
We have applied these four extinction laws to our data in order to infer qualitative information about the dust-to-gas ratio, the abundance of heavy elements and the composition of the dust in the host galaxy of GRB 000301C. We leave A ν as a free parameter, so fitting a function like F ν ∝ ν β × 10 (−0.4Aν ) allows us to determine β and A ν simultaneously. The values of χ 2 are displayed in Table 4 for each case.
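A minimal sketch of such a fit is given below; the SMC-like extinction curve used here is a featureless stand-in for the Pei (1992) law (its power-law shape and normalisation are assumptions for illustration), and the data arrays are placeholders rather than the measurements in Table 1.

```python
# Illustrative power-law + host-extinction fit, F_nu ~ nu^beta * 10^(-0.4 A_nu).
import numpy as np
from scipy.optimize import curve_fit

NU_REF = 3.0e14                       # Hz, arbitrary normalization frequency
NU_V = 3.0e18 / 5500.0                # Hz, rest-frame V band

def smc_like_extinction(nu_rest, A_V):
    """Schematic, featureless extinction curve in magnitudes (stand-in for Pei 1992 SMC)."""
    return A_V * (nu_rest / NU_V) ** 1.2

def model_flux(nu_obs, norm, beta, A_V, z=2.0404):
    nu_rest = nu_obs * (1.0 + z)
    return norm * (nu_obs / NU_REF) ** beta * 10.0 ** (-0.4 * smc_like_extinction(nu_rest, A_V))

# nu_obs, flux, err = ...   # UBVRIJHK frequencies and Galactic-dereddened fluxes (placeholders)
# popt, _ = curve_fit(model_flux, nu_obs, flux, sigma=err, p0=[1.0, -0.7, 0.1])
# norm_fit, beta_fit, A_V_fit = popt
```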
A pure power-law fit (F ν ∝ ν β ) to the eight data points (after correction for Galactic extinction) leads to a reduced χ 2 of 1.69 (see Table 4), making an acceptable description of the NIR-optical range of the SED. However, the fit can be improved if a modest amount of extinction is introduced. This is because the SED is slightly bending down towards higher frequencies (see Fig. 7). The eight data points show that there is no sign of a redshifted 2175Å absorption bump in the R-band at all. In short, the near-IR SED of GRB 000301C can be described as a curved power-law but with no broad absorption features.
Fig. 7. [...] The frequency scale has been redshift corrected to rest frame values assuming z = 2.0404 (see Sect. 6.1). The data points include our optical (UBVRI) and near infrared (JHK) data (see Table 1) from March 4.5 UT. In the plots, a range of extinction models have been fitted to the data, cf. the text and Table 4. Upper panel: UBVRIJHK photometry with the VLT spectrum (dotted line) of March 5.4 included (the offset from the photometry is due to the fading of the OT between March 4.5 and 5.4). The solid line represents the fitted SED when the empirical SMC extinction law is applied. A fit (long-dashed) assuming no extinction of the host is shown for comparison. Lower panel: shown here are the effects on the LMC and MW-like SED fits when reasonable values of β and A V are considered. The solid line, as in the upper panel, shows the SED when a SMC-like extinction law is fitted. Then, the values obtained for β and A V (β = −0.7, A V = 0.09) are applied to plot power-law afterglow SEDs with MW (long dashed) and LMC-like (dashed) extinction. Circles: USNO UBVRI-photometry, Triangles: UKIRT JHK-photometry.
As expected from the lack of the absorption bump in the R-band, the MW and the LMC extinction laws are completely inconsistent with our data. In fact, both fits imply an unphysical negative extinction (see Table 4). This is because the R-band flux is slightly over the linear interpolation between the I-band and V-band fluxes (in a Log-Log space), and both of these two extinction laws fit the 2175Å bump as an emission feature instead of an absorption bump. To illustrate the problem with the MW and LMC extinction curves, we have, in the lower panel of Fig. 7, plotted the effect of having the MW and LMC extinction laws with the parameters derived for the SMC extinction law (β=-0.7, A V =0.09). As seen, the shapes of both SEDs are incompatible with our UBVRIJHK measurements. The quality of the MW, LMC and SMC and unextincted SEDs can also be compared by checking the flux predicted at 250 × (1+z) GHz (rest-frame), where a flux of 2.1 ± 0.3 mJy at 250 GHz on March 4.29 UT has been reported. Extrapolating the four extinction curves, we obtain the following fluxes in increasing order; 5.0 mJy (SMC), 18.1 mJy (No extinction), 33.9 mJy (MW) and 51.6 mJy (LMC). As with the NIR-optical range, the SMC extinction provides the most reasonable results. The actual measured flux in mm is below the value predicted by the mm-optical extrapolation, because the pure power-law assumption is not correct in the mm-NIR spectral range and an additional curvature effect is present in the SED, as has been demonstrated previously (see their Fig. 1). Thus, the value of A V = 0.09 ± 0.04 for the SMC-fit given in Table 4 should be taken as a good indication of the real extinction, although strictly speaking it is just an upper limit.
Table 4. Result of fitting different extinction law models to observations of the afterglow on 2000 March 4.5 UT. The extinction curves include the models by Pei (1992) for the Milky Way (MW), the Large Magellanic Cloud (LMC) and the Small Magellanic Cloud (SMC) (see Fig. 7 and text for further details).
No extinction: χ 2 /DOF = 1.69
MW, Pei (1992): A V < 0
LMC, Pei (1992): A V < 0
SMC, Pei (1992): χ 2 /DOF = 0.91, β = −0.70 ± 0.09, A V = 0.09 ± 0.04
In conclusion, the featureless SMC extinction law provides the best fit to our data, improving the quality of the fit obtained for an unextincted afterglow (see Table 4). It is interesting to note the dramatic dependence of the quality of fit on the existence of the 2175Å absorption bump. Extinction laws with high to moderate dust-to-gas ratios that produce such an absorption feature do not provide good fits to our data points. Therefore, the spectral energy distribution of GRB 000301C supports a scenario where the host is in the early stages of chemical enrichment.
The power-law + extinction fits to the SED in the NIR-optical range allow us to predict the UV flux at the spectral range of MAMA-HST when the UV spectrum was obtained at 2000 March 6.375 UT, assuming that the shape of the SED has not changed between the two epochs. This assumption is supported by the imaging data (Sect. 6.3). First, we consider the best fit to the SED at 2000 March 4.39-4.55 UT and calculate the flux at March 4.39-4.55 UT at 3000Å. Then, making use of the light curve models (presented in Table 6), we estimate the value of the flux at 3000Å for March 6.375 UT. The predicted flux ranges from 5.9 ×10 −18 erg cm −2 s −1 Å −1 to 7.9 ×10 −18 erg cm −2 s −1 Å −1 , depending on the light curve model. A final analysis (Smette et al. 2001) of the MAMA-HST data revealed a flux of ∼ 7.3 +0.8 −1.8 × 10 −18 erg cm −2 s −1 Å −1 , consistent with our extrapolation.
Table 5. Best fits for the colours of the OT from 2000 March 3 to 11 UT, assuming an achromatic evolution. V−R is for March 4.4 only. P(χ 2 ) is the probability to obtain a lower value of χ 2 /DOF for the given model (constant colour). Colours are not corrected for galactic or intrinsic extinction. See also Fig. 8.
Evolution of the spectral energy distribution
From our UBRI photometric data (presented in Table 1) we have multi-band optical coverage from 2 to 10 days after the burst-trigger (on March 1.41 UT). When analysing the colours, we find that the simplest reliable fit is for constant colours. Thus we find no evidence for optical chromatic evolution for the afterglow during the period of observations (see Fig. 8).
For these constant fits, we obtain the values presented in Table 5. These values are not corrected for Galactic or intrinsic extinction.
The light curve
According to the simple fireball model the optical afterglow should follow a power-law decay, F ν ∝ ν β t α (Sari et al. 1998). However, a single power-law is excluded at more than the 99.9% confidence level. The parameters for this power-law are given in Table 6. The photometry suggests that the optical afterglow follows a shallow power-law decay for the first few days and then steepens. This behaviour has been seen previously in GRB 980519 (Jaunsen et al. 2001), GRB 990123 , GRB 990510 (Harrison et al. 1999), GRB 991208 (Castro-Tirado et al. 2001) and GRB 000926 (Fynbo et al. 2001b) and is predicted by many models for gamma-ray bursts (see below). Sagar et al. (2000) report that there are seven components to the R-band light curve. Here we are primarily interested in the overall structure of the light curve, not the structure at small time scales. Therefore, we fit a broken power-law of the form
f ν (t) = f ν (t b ) (t/t b ) α 1 for t < t b , and f ν (t) = f ν (t b ) (t/t b ) α 2 for t ≥ t b ,
to the UBRI data presented in Table 1. The flux, in µJy, at time t days after the burst is denoted by f ν (t). The time of the break in the decay is denoted t b . The slope before the break is α 1 , and the slope after the break is α 2 . The flux at the time of the break is f ν (t b ). We used CERN's MINUIT function minimization package, and a chi-square minimization scheme, to simultaneously solve for the four free parameters (α 1 , α 2 , t b , and f ν (t b )) and their formal 1-σ errors in the fit for each parameter. The data was corrected for Galactic reddening and extinction before the fits were made. No corrections were made for reddening or extinction in the host galaxy. The photometry was transformed to the R band using the colours given in Table 5 and then converted to units of flux using a photometric zero point of f ν,0 = 3.02 × 10 −20 erg cm −2 s −1 Hz −1 (Fukugita et al. 1995).
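For illustration, an equivalent fit can be set up with a generic least-squares routine in place of MINUIT; the variable names, starting values and use of scipy below are assumptions, not the authors' actual setup.

```python
# Sketch of the four-parameter broken power-law fit to the R-band light curve.
import numpy as np
from scipy.optimize import curve_fit

def broken_power_law(t, f_b, t_b, alpha1, alpha2):
    """f_nu(t) in microJy; decay slope alpha1 before the break at t_b (days), alpha2 after."""
    t = np.asarray(t, dtype=float)
    return np.where(t < t_b,
                    f_b * (t / t_b) ** alpha1,
                    f_b * (t / t_b) ** alpha2)

# t_days, flux_uJy, err_uJy = ...            # R-band-shifted, dereddened photometry (placeholders)
# p0 = [100.0, 4.0, -0.7, -2.0]              # rough starting guesses
# popt, pcov = curve_fit(broken_power_law, t_days, flux_uJy, p0=p0,
#                        sigma=err_uJy, absolute_sigma=True)
# f_b, t_b, a1, a2 = popt
# perr = np.sqrt(np.diag(pcov))              # formal 1-sigma errors
```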
The best fit to the combined UBRI photometry is listed in Table 6, and shown in Fig. 9. To test the sensitivity of the results to the fitting function, we also fit our data with the smooth function used by Stanek et al. (1999, their Eq. 1) on GRB 990510. The results are given in Table 6. The broken power-law gives the smallest chi-square value, and the errors in the individual parameters are smaller for the broken power-law fit than they are for the smooth function. The correlation coefficient between t b and α 1 is −0.39 and the coefficient between t b and α 2 is −0.84. The broken power-law fit is consistent with the data at the 43% confidence level. Even though we, from the theory of fireballs, would expect that the light-curve evolution is a smooth function, we find in the case of GRB 000301C, in agreement with , that the broken power-law provides the most reliable fit. Additionally, a broken power-law provides the most reliable method of determining the time when the decay of the light has steepened, and thus is a useful way of parameterising the data. We combined all our UBRI-data (Table 1) with those from the literature. Fig. 10 shows the best-fitting broken power-law for all of the UBRI data in the literature (Sagar et al. 2000 and references therein) and Table 1. This data was shifted to the R band in the manner described above. The parameters of the fit are shown in Fig. 10 and are not significantly different from the parameters of the broken power-law that was fit to our data (see Table 6). The conspicuous short-term behaviour of the light-curve has been detailed by Masetti et al. (2000), Sagar et al. (2000) and . Garnavich, Loeb & Stanek (2000) find that the variation of the light curve can be interpreted as a microlensing event, peaking about 3.5 days after the burst, superposed on a power-law broken at t b = 7.6 days. This superposed event peaks at a more sparsely sampled period in our data, coinciding partly with where we identify the break. Thus it is not possible, from our data, to further constrain the existence of such an event. We choose here to work only with the data presented in Table 1 as it represents a consistently derived set.
Table 6. The parameters of the best-fitting functions to the optical decay of GRB 000301C. The number of degrees-of-freedom (DOF) in each fit is the number of data points minus the number of parameters. The number of parameters is the number of free parameters in each model plus the number of colours that the data was adjusted with, in order to bring it into the R band.
Interpretation of the light curve
The fit to the multi-colour light curves shows a break at t b = 4.39 ± 0.26 days, with the light curve steepening from α 1 = −0.72 ± 0.06 to α 2 = −2.29 ± 0.17, i.e., by ∆α = α 1 − α 2 = 1.57 ± 0.18. A broken light curve can arise in a number of circumstances: i) If the frequency separating fast cooling electrons from slow cooling ones moves through the optical at t b , the resulting light curve would steepen by ∆α ∼ 0.25 (Sari et al. 1998). ii) The light curve may also steepen if a spherical fireball slows down to a non-relativistic expansion (Dai & Lu 1999), resulting in ∆α = −(α 1 +3/5) = 0.12 for our value of α 1 . iii) If the outflow is collimated with a fixed opening angle, the break in the light curve occurs when the relativistic beaming of the synchrotron radiation becomes wider than the jet opening angle (Mészáros & Rees 1999). In this case the break is a geometrical effect and the steepening is ∆α = 3/4. iv) If the afterglow arises in a sideways expanding jet, the steepening will be ∆α = (1 − α 1 /3) = 1.24 (Rhoads 1999) for our value of α 1 . The above estimates all assume a constant mean density distribution of the ambient medium. We note that collimated outflows in general result in faster decaying light curves than the spherically symmetric ones. If the mean density distribution is not constant, e.g. it has a stellar wind density profile, the light curves also decay faster, but the break will be less pronounced (Panaitescu et al. 1998).
Fig. 9. The upper panel shows the UBRI photometry from Table 1 and the best-fitting broken power-law fit to the data. The photometry was offset to the R band using the colours given in Sect. 6.3 (see the text). The horizontal line at the location of the break shows the 1-σ uncertainty in the time of the break. The lower panel shows the residuals of the fit in the sense R obs − R fit . The uncertainties in the residuals are the uncertainties in the photometry.
Based on the light-curve properties alone, the model that best fits the observations is that of a sideways expanding jet in an ambient medium with a constant mean density distribution. In that interpretation, the observed light curve indices imply an electron energy distribution index of p = 2.13 ± 0.09 that results in a theoretical spectral index of β = −(p − 1)/2 = −0.56±0.05. This is in agreement with the spectral index of β = −0.70±0.09 inferred from our spectroscopic observations when correcting for extinction in the host galaxy (Sect. 6.2), independently strengthening the described model for the afterglow.
With a combined fluence in the 25-1000 keV range of 4 × 10 −6 erg cm −2 , and a redshift of z = 2.0404, the isotropic energy release of GRB 000301C is E = 4.6 × 10 52 erg. Following Rhoads (1999), the energy estimate and the light curve break time, t b = 4.39 ± 0.26 days, imply a jet opening angle, at that time, of θ ≈ 15 • n 1/8 , where n is the number density of the ambient medium (in units of cm −3 ), and the break is assumed to occur when the opening angle equals the inverse of the bulk Lorentz factor, θ = 1/Γ.
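The quoted isotropic energy can be cross-checked roughly as below; the cosmological parameters are an assumption (they are not stated here), so the result is only expected to agree with E = 4.6 × 10^52 erg to within tens of percent.

```python
# Back-of-the-envelope isotropic-equivalent energy from the 25-1000 keV fluence.
import numpy as np
from astropy import units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=65, Om0=0.3)             # hypothetical cosmology choice
z = 2.0404
fluence = 4e-6 * u.erg / u.cm**2
d_L = cosmo.luminosity_distance(z).to(u.cm)
E_iso = (4 * np.pi * d_L**2 * fluence / (1 + z)).to(u.erg)   # (1+z) corrects to the rest frame
print(f"E_iso ~ {E_iso:.2e}")                     # of order a few times 10^52 erg
```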
This interpretation is similar to that of GRB 990510 which was almost 5 times more energetic, but had a jet opening angle of approximately 5 • , leading to an earlier break in the light curve. The best fit to GRB 990510 was a smooth function (e.g. Stanek et al. 1999;Harrison et al. 1999;Holland et al. 2000), as compared to a broken power-law in the case of GRB 000301C.
The host galaxy
The host galaxy appears to be very faint. There is no evidence for any extended emission from a host galaxy in any of the data presented in this paper. Deep images obtained with HST+STIS about 1 month after the GRB indicate that any host galaxy must be R≥27.8±0.25 . Hence, we can safely conclude that the host galaxy of GRB 000301C is very faint compared to other known populations of galaxies at high redshifts. From fitting extinction curves to our photometry, we have found evidence for some extinction (A V ∼ 0.1) in the host galaxy. The A V derived from the best fit corresponds to an absorption at rest-frame 1500Å of about 0.4 magnitudes.
For comparison, Lyman Break Galaxies (LBGs) at slightly higher redshifts (z ∼ 2.5-3.5) on average have values of extinction at rest frame wavelength 1500Å of approximately 1.7 magnitudes, and, in rare cases up to 5 magnitudes (Steidel et al. 1999). Hence, the faint optical appearance of the host galaxy relative to the star-forming LBGs at high redshift is most likely not imposed by massive extinction, but is rather due to a lower overall star formation rate of the host galaxy.
As the host galaxy furthermore has a very high H I column density, log(NH I)=21.2±0.5 as derived from the Lyα absorption feature and supported by the strong Lyman break , it is interesting to compare with the population of galaxies identified as Damped Lyα Absorbers (Wolfe et al. 1986) in the spectra of background QSOs. These galaxies have H I column densities higher than log(NH I)=20.3. Based on the luminosity function of LBGs and the typical impact parameters of DLAs, Fynbo et al. (1999) show that the majority of DLAs at z = 3 must be fainter than the current flux limit for LBGs of R=25.5 and that there hence is a very abundant population of galaxies fainter than the LBG flux limit. A similar conclusion has been reached by Haehnelt et al. (2000). The dust-to-gas ratio towards the line-of-sight of GRB 000301C gives a value of A V /N(H i) ≤ 0.1/(1.6 × 10 21 cm −2 ) = 0.6 × 10 −22 mag cm 2 . This upper limit is, within errors, consistent with the expected A V /N(H i) for DLAs. The corresponding value for the Milky Way is 2 × 10 −22 mag cm 2 (Allen 2000).
It is still uncertain what fraction of the integrated star formation rate at high redshift is accounted for by the LBGs and what fraction has to be accounted for by galaxies further down the luminosity function. The relative occurrence of GRBs in a given population of galaxies is expected to be proportional to its relative contribution to the total star formation rate (Totani et al. 1997;Wijers et al. 1998;Mao & Mo 1999;Blain & Natarajan 2000). However, so far only one GRB (GRB 971214 at z = 3.418) is confirmed to have occurred in a galaxy similar to the faint members of the LBGs selected in ground based surveys (Odewahn et al. 1998). The fact that GRB 000301C occurred in an intrinsically very faint galaxy and that most GRBs with identified OTs have occurred in L * or sub-L * galaxies, suggest that a large fraction of total star formation at high redshift occurs in a population of galaxies that is further down the luminosity function than the bright LBGs found in ground based surveys and that is likely to have a large overlap with the DLAs.
Conclusion
GRB 000301C is so far the GRB of shortest duration for which a counterpart has been detected. The high-energy properties of the burst are consistent with membership of the short-duration class of GRBs, though GRB 000301C could belong to the proposed intermediate class of GRBs or the extreme short end of the distribution of long-duration GRBs. Our VLT-spectra show that GRB 000301C occurred at a redshift of 2.0404±0.0008. The light curve of the optical transient is well-fitted by a broken power-law and it is consistent with being achromatic. From the light-curve properties we find that the best model for GRB 000301C is that of a sideways expanding jet in an ambient medium of constant density. This interpretation is further supported by the achromatic light-curve evolution, and by the agreement between the theoretically predicted and observationally derived spectral indices. The spectral energy distribution at March 4.5 reveals SMC-like extinction in the host galaxy at a level of A V < 0.10, which is significantly lower than for the strongly star-forming LBGs. Hence, the extreme faintness of the host galaxy indicates a low overall star-formation rate in the host galaxy, raising the possibility that the host may be a chemically less evolved, relatively low-luminosity galaxy containing SMC-type dust. We argue that there may be a connection between the host galaxy of GRB 000301C and DLAs, suggesting that substantial star-forming activity at high redshift takes place in relatively faint galaxies. Future studies of high redshift GRBs will further help explore this connection.
Ameliorating Child poverty through Connecting Economic Services with child health Services (ACCESS): study protocol for a randomised controlled trial of the healthier wealthier families model in Sweden
Background: Sweden is often held up as an example of a country with low child deprivation; yet, rates of relative deprivation are rising. Every municipality in Sweden is required to provide free, timely and accessible budget and debt counselling under the Social Services Act. The services have been encouraged to perform preventative practice with families; however, this has not been realised. The Healthier Wealthier Families (HWF) model embeds universal screening for economic hardship into child health services and creates a referral pathway to economic support services. Given the universal child health system in Sweden, which is freely available and has excellent coverage of the child population, implementation of the HWF model has potential to support families to access the freely available municipal budget and debt counselling and ultimately improve rates of child deprivation in Sweden.
Methods/design: We will conduct a two-arm randomised waitlist-control superiority trial to examine the effectiveness and cost-effectiveness of the HWF model in Sweden. A longitudinal follow-up with the cohort will explore whether any effects are maintained in the longer term.
Discussion: HWF is a collaborative and sustainable model that could maximise the effectiveness of current services to address child deprivation in Sweden. The study outlined in this protocol is the first effectiveness evaluation of the HWF model in Sweden and is a crucial step before HWF can be recommended for national implementation within the child health services.
Trial registration: Clinicaltrials.gov; NCT05511961. Prospectively registered on 23 August 2022. https://clinicaltrials.gov/ct2/show/NCT05511961
Strengths and limitations of this study
• This study will assess the effectiveness of the Healthier Wealthier Families (HWF) model on self-reported child deprivation in Sweden by randomised controlled trial
• The study design includes measures on a number of hypothesised mediators, namely self-reported financial knowledge, financial control, readiness for change, parental mental health and financial stigma, which will give insight into the intervention logic model
• Data on income, and sources of income, will allow the amount of financial gain that can be attributed to the HWF model to be described
• Inclusion of personal goal attainment as an outcome will allow participants to express intervention expectations in their own words and report on how far these expectations have been met
• The study design includes a cost-effectiveness evaluation that will consider group differences in capability-adjusted life-years
• An internal pilot will assess the feasibility of the RCT processes
• A 12-month follow-up will allow for exploration of whether any effects are maintained in the longer term, albeit without the ability to make group comparisons at that stage
Background
Sweden is often held up as an example of a country with low child deprivation; yet, rates of relative deprivation are rising. Since 2000, Sweden has witnessed a two-fold increase in the proportion of children living in relative poverty according to the European Union (EU) standard, i.e. living in a household with an income below 60 per cent of the national median [21]. The latest statistics place this figure at 17% [21]. When removing the aspect of relativity and considering families receiving social benefits and/or with a low-income standard, i.e. the household disposable income is lower than considered necessary for living expenses, the most recent figure stands at 9% [19]. Certain families are at greater risk than others. For instance, family constellation affects economic vulnerability, with increased risk of poverty [18] and elevated concerns regarding savings and managing unexpected costs [1] among single-parent households. Parental level of education and immigrant background have also been linked to higher risk of exposure to child poverty in Sweden ( [1,17]). The impact of relative child poverty has been widely documented, with research reporting negative impacts across various outcomes including mental health problems, obesity, long-standing illness and mortality [10,22]. The Family Stress Model [4] describes how parental psychological stress can mediate poor child outcomes; economic hardship affects parental mental health, causing parental conflict and/or difficulties with parenting, which in turn negatively affect child outcomes.
Budget and debt counselling, covering aspects such as budgeting, saving and debt management, can offer necessary support to families. Budget and debt counsellors work to improve financial knowledge and enable greater financial control. Not only has the ability to exert control over one's finances been associated with financial wellbeing, it is actually a stronger predictor than income [23]. In some cases, the ability to meet costs can be improved even in the absence of being able to increase household income, through greater control over the existing income. Every municipality in Sweden is required to provide free, timely and accessible budget and debt counselling under the Social Services Act. The services have been encouraged to perform preventative practice with families; however, this has not been realised. Financial stigma could be playing a role in this. Research shows that Swedish people tend not to share their situation of economic vulnerability with governmental services, even though the services could improve their economic situation [6]. When parents share information about their economic situation, the moral dimension is something that is emphasised. Often, parents want to demonstrate they are financially responsible people, that they prioritise necessary costs and that they put the needs of their children first. Another important aspect to consider is readiness for change. Michie, Atkins, and West [12] describe that in order for behaviour change to take place, one must feel capable, have the opportunity to change, and be motivated to make the change.
Systematic integration of financial counselling and income maximisation services into routine health services can provide an opportunity for families to seek support and shift the focus away from the individual, instead emphasising need at the societal level. A recent systematic review identified examples targeting parents of young children across New Zealand, the UK and USA [3]. One such example is the Healthier Wealthier Children project, funded by the Scottish Government, which created information and referral pathways between the National Health Service (NHS) early years workforce and financial counselling services [16]. Pre-post evaluation of the model demonstrated monetary gain, as well as improved health, housing and quality of life [16]. The model has been integrated into Scottish Government policy and adapted to international contexts, including Australia where it has been given the name Healthier Wealthier Families (HWF). Whilst the existing evidence is promising, more rigorous evaluation of the model is warranted to evaluate its effectiveness [3]. Given the universal child health system in Sweden, which is freely available and has excellent coverage of the child population [24], implementation of the HWF model has potential to support families to access the freely available municipal budget and debt counselling. This paper sets out the study protocol for a randomised controlled trial (RCT) evaluation of the HWF model in Sweden. The logic model for HWF in Sweden is illustrated in Fig. 1.
Objectives
The objectives of the trial are:

1. To evaluate whether allocation to municipal budget and debt counselling services via the HWF model and provision of a financial guidance book (intervention arm) has an effect on self-reported child deprivation in comparison to similar families who only receive the financial guidance book (waitlist-control arm).
2. To evaluate whether allocation to the intervention arm has an effect on self-reported financial knowledge, financial control, readiness for change, attainment of personal goals to improve one's financial situation, parental mental health and financial stigma, which relate to the model theory of change.
3. To describe the amount of financial gain, and sources of income, that can be attributed to the HWF model.
4. To assess whether effects on self-reported child deprivation, financial control, financial knowledge, readiness for change, personal goals, parental mental health and financial stigma are maintained 12 months post-randomisation.
5. To estimate the cost-effectiveness of the HWF model.
It is hypothesised that, when compared with the waitlist-control arm 3 months post-randomisation, families in the intervention arm will report a lower rate of child deprivation, measured as necessary items lacking on a standardised list due to inability to cover costs. It is further hypothesised that, when compared with the waitlist-control arm, the intervention arm will report greater financial control and knowledge, and lower levels of mental health difficulties and financial stigma.
Design
A two-arm randomised waitlist-control superiority trial (1:1 allocation ratio) will be conducted to evaluate the effectiveness of the HWF model in improving the financial situation of families who have self-reported economic difficulties. The intervention arm will be referred to budget and debt counselling immediately after randomisation and the waitlist-control arm 3 months later. Informed by Patient and Public Involvement (PPI), and so that all participants receive some form of support at randomisation, both arms will be offered a freely available financial guidance book. RCT assessments will take place at two points: pre-intervention (T1) and post-intervention (T2; 3 months after randomisation).
A longitudinal cohort study will be conducted with all participants, from both trial arms (intervention and waitlist-control). Longitudinal assessment will take place 12 months after randomisation (T3), to assess any changes in measures between T2 and T3. See Fig. 2 for the participant timeline.
Setting
The screening for the RCT will take place at children's health care centres, known in Swedish as barnavårdscentraler (BVCs). BVCs examine the health of children (0-5 years) to monitor growth and development and provide advice and support to parents, e.g. about the child's development, breastfeeding, food and diseases. Parents are also offered vaccinations for their children at the BVC. Visits take place from just after birth until the child begins school. These services are offered free of charge and achieve excellent reach (99%) [24]. The RCT will focus on BVCs in municipalities experiencing high rates of children living in relative poverty. The 290 municipalities in Sweden have been categorised according to the proportion of children in households with income less than 60 per cent of the median [20]. BVCs in the most severe categories (19.7-26.9% and 27.0-46.0%) will be targeted for participation in the RCT. For the larger municipalities, a more fine-grained categorisation will be conducted using local statistics to identify the proportion of children in households with income less than 60 per cent of the median at the residential-area level.
Participants
The target population is families with children in contact with BVCs. To be eligible for the project, the parent or caregiver needs to score on one or more items that indicate economic difficulties and not have been in contact with a financial advisory service in the last month. The screening questions were developed as part of an initial adaptation and pilot study, and include items used by the Swedish Inspection of Finance. The screening includes six items covering financial worry, ability to meet costs, occupation, handling unexpected costs, and current use of financial advisory service(s):

1. During the last 12 months, have you been worried that your family will run out of money at the end of the month?
2. During the last 12 months, have you been able to pay current expenses such as rent, bills, insurance?
3. During the last 12 months, have you had the opportunity to buy necessary items such as clothes and food for yourself and your children?
4. Is there at least one person in your household who currently has a paid job?
5. Would you be able to handle an unexpected expense of 20,000 SEK (approx. £1,500) without asking for help or borrowing money?
6. Have you used a financial advisory service within the last month? (By that we mean if you have had a scheduled meeting with a budget and debt counsellor to get help with your economy)
Recruitment
During a routine BVC visit, the nurse will ask the parent/caregiver if they consent to answering questions about their economy. If they agree, the child health nurse will systematically work through an electronic screening form. The form has been co-designed with child health nurses, financial counsellors and families. It is hosted on a secure online platform called Research Electronic Data Capture (REDCap), which is specifically geared to support online and offline data capture for research studies and operations. It is fully customizable to meet the needs of this project and allows secure multi-site access with authentication and data logging that meet General Data Protection Regulation (GDPR) requirements. It is a dynamic form that prompts the nurse to show informational material (e.g. a short video about the financial counselling service) at the appropriate moments during the screening discussion. The data are securely transferred to the research group. Basic data are captured for all families screened and personal details shared for those who consent to be contacted about study participation. A member of the research team then contacts the parent/caregiver to complete the informed consent procedure and the study questionnaire, which can be completed via REDCap, on paper, or orally. All parents/caregivers have the right to decline participation in the research study and can still gain access to the municipal budget and debt counselling service.
Sample size
The primary analysis for the RCT will be a comparison of the unweighted mean number of child material and social deprivation (MSD) items lacked (T2). The target sample size has been calculated based on this primary analysis. The power calculation has been informed by child MSD data for Sweden collected via the European Union Statistics on Income and Living Conditions (EU-SILC) survey, which reports an unweighted mean of 4.5 items lacked among the materially-deprived population. Using a significance level of 0.05 and 80% power, a total of 142 families (71 per group) will be required to detect a 1-item group difference in the unweighted mean number of child MSD items lacked, assuming a standard deviation of 3. As the trial will be conducted in areas with high rates of socioeconomic disadvantage, we estimate that up to 30% of families will experience financial hardship and be eligible for inclusion. The estimated rate of uptake is 50% among eligible families, based on a pilot in the Sandviken municipality, Sweden. Loss to follow-up is projected at 50%. See Fig. 2 for the participant timeline with projected numbers.
Randomisation
A computer-generated randomisation sequence will be used to assign the participants to the intervention and waitlist-control arms in a 1:1 ratio. Block randomisation will be generated in a computerised randomisation schedule. Separate randomisation lists will be created for each BVC site. Randomisation will take place after pre-intervention data collection. The allocation sequence will be concealed using an online central randomisation service set up and maintained by a professional third party (www.sealedenvelope.com) that will conceal the sequence until group assignment. The randomisation process will require the research team to log into a password-protected website and enter the relevant data of each newly recruited participant in order to receive the allocation. The research team will inform the participant of the randomisation outcome: that they will be referred to the municipal budget and debt counselling straight away or after a period of 3 months.
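For illustration only, a minimal sketch of how a per-site blocked 1:1 sequence could be generated is shown below; the block size and site labels are hypothetical assumptions, and in the trial itself the sequence is generated and concealed by the third-party service described above.

```python
# Illustrative sketch of per-site block randomisation with a 1:1 allocation ratio.
# Block size (4) and site labels are hypothetical; the trial's actual sequence is
# generated and concealed by a professional third-party service.
import random

def blocked_sequence(n_participants, block_size=4, seed=None):
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_participants:
        block = ["intervention"] * (block_size // 2) + ["waitlist-control"] * (block_size // 2)
        rng.shuffle(block)                 # permute allocations within each block
        sequence.extend(block)
    return sequence[:n_participants]

# A separate list is produced for each BVC site, mirroring the stratification above.
site_lists = {site: blocked_sequence(40, seed=i) for i, site in enumerate(["BVC_A", "BVC_B"])}
print(site_lists["BVC_A"][:8])
```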
Blinding
Randomisation will take place at the research institution, conducted by the research team, directly after T1 data collection. Allocation status will be recorded on the randomisation website, and emailed to the research team member conducting the randomisation (NJ) and the trial manager (GW). Participants will not be blinded to group allocation, and data collection will not be blinded. Given that neither participants nor budget and debt counsellors are blinded, there is no requirement for an unblinding procedure. Questionnaire data spreadsheets will use participant identity numbers; however, group status will be apparent due to the inclusion of counselling session attendance data for the intervention group.
Intervention arm
All participants will be given a copy of 'Your child, your money', a freely available financial guidance book for new parents, at the time of screening. The book has been developed by Finansinspektionen, Sweden's financial supervisory authority, and covers topics such as planning finances, consumer rights, private insurance, family law, parental leave, pensions, saving, and budgeting (see Table 1 for an overview of content). Participants randomised to the intervention arm will be immediately referred to the local budget and debt counselling service. After receiving the participants' details, a budget and debt counsellor will make first contact within a few weeks. Budget and debt counselling will involve at least one meeting with a counsellor. Examples of assistance the counsellors can offer include: suggestions on ways to improve a participant's financial situation; checking eligibility and helping to apply for social welfare support; helping to organise finances and develop a budget; and assistance with debt management, threatening letters or harassment by debt collectors, or imminent house eviction. The municipal budget and debt counselling services involved in the trial will receive guidance on how to work preventatively with families, which prompts discussion on needs, personal goals, financial behaviour and income maximisation. Participants may end counselling at any time, and there will be no restrictions on access to other support during the study period.
Control arm
As described above, all participants receive a copy of the book. Those randomised to the waitlist control arm will be referred to the local financial counselling service after a period of 3 months. Figure 2 provides an overview of the participant timeline. Parents/caregivers are screened for eligibility at a routine BVC appointment. Screening data will be automatically transferred to the research team via a secure online platform. When a parent screens positive and indicates they are interested in participating in the project, a member of the research team will contact them to complete the informed consent procedure and the study questionnaire. A case will be randomised once all pre-intervention data has been collected. Follow-up data will be collected from all participants at scheduled group meetings at two points: 3 months after randomisation (T2) and 12 months after randomisation (T3).
Measures
The primary outcome measure for the RCT is child material and social deprivation (MSD). Secondary measures include household income and sources of income, financial knowledge, financial control, readiness to change, personal goal attainment, parental mental health, and financial stigma. These measures will be administered at all time points and assessed at T2 (for the RCT) and T3 (for the longitudinal cohort study). See Fig. 3 for an overview of the enrolment, intervention and assessment schedule. Because the target population includes both Swedish-speaking and non-Swedish-speaking families, translated measures will be available in the data collection process. Translated measures will be provided for the larger language groups at each project site. For some measures, translated versions are readily available; for the others, we will use a certified translation service. For validation, back-translations will be conducted.
Child MSD
The European Union (EU) measure of child MSD [8] contains 17 items: 13 child-specific items (e.g. Two pairs of properly fitting shoes; Celebrations on special occasions) and 4 household items (e.g. Home adequately warm; Access to the Internet). Parents are asked to indicate whether or not they have the item. If not, they are asked if it is because they cannot afford it (enforced lack) or for another reason (simple lack). Data are collected at the household level; if one child does not have an item it is assumed that all the children in the household lack that item. Cronbach's alpha indicates adequate reliability of the 'enforced lack' concept in Sweden (α = 0.76) and across EU countries (α = 0.76-0.94) [8]. The items have been shown to be additive, i.e. a household with a score of "2" is in reality suffering from more severe MSD than a household with a score of "1" or a score of "0" [8]. As recommended by Gui et al. [8], an unweighted sum of the 17 MSD items will be calculated for each household in the trial. The primary analysis for the RCT will be a comparison of the unweighted mean number of child MSD items lacked at T2. A secondary outcome of the trial will be a comparison of the proportion of children classified as experiencing deprivation at the 3-month follow-up. As reported by Gui et al. [8], a threshold of 3 items will be applied to the data. In other words, a household lacking two or fewer items will be classified as 'not experiencing deprivation' and a household lacking three or more items will be classified as 'experiencing deprivation'. Both the unweighted mean number of child MSD items lacked and the proportion of children classified as experiencing deprivation will also be assessed at T3 as part of the longitudinal follow-up.
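As a minimal sketch of this scoring rule, the following assumes the count refers to items lacked because they cannot be afforded (enforced lack), as described for the primary outcome; the item keys are placeholders rather than the actual EU item labels.

```python
# Sketch of child MSD scoring: an unweighted count of items lacked because they
# cannot be afforded ("enforced lack"), with deprivation classified at 3 or more items.
def msd_score(responses):
    """responses: dict mapping item -> 'has', 'enforced_lack' or 'simple_lack'."""
    return sum(1 for status in responses.values() if status == "enforced_lack")

def deprivation_status(responses, threshold=3):
    if msd_score(responses) >= threshold:
        return "experiencing deprivation"
    return "not experiencing deprivation"

example = {f"item_{i}": "has" for i in range(1, 18)}            # 17 placeholder items
example.update({"item_1": "enforced_lack", "item_2": "enforced_lack", "item_3": "simple_lack"})
print(msd_score(example), deprivation_status(example))           # 2 'not experiencing deprivation'
```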
Household income and sources of income
Participants will be asked to report their overall household income on an incremental scale up to the median salary for Sweden, as well as their fixed outgoings (e.g. rent and bills) to enable calculation of the net income for the household. They will also be asked to select all applicable sources of income from a list, which includes all available benefits in Sweden.
Financial knowledge
A selection of questions covering financial knowledge (e.g. invoice payment periods, mortgages, debt collection) and where to turn for help for better control over finances and debt management will be taken from a survey conducted by the Swedish Consumer Agency [9].
Financial control
Respondents will be asked to indicate the extent to which they agree or disagree with a series of nine statements (e.g. My financial situation is largely outside of my control) using a 5-point scale, ranging from 1 (strongly disagree) to 5 (strongly agree).
Readiness for change
To assess readiness for change, participants will be asked to complete a Readiness Ruler [14] regarding changing their financial situation. The ruler will employ a 0-10 visual analogue scale. Scores of 1-3 represent non-readiness to change, 4-6 uncertainty, 7-8 readiness, and 9-10 ongoing attempts at changing.
Personal goal
Participants will be asked to write a personal goal for improving their financial situation (Over the next 3 months, what is your personal goal for improving your financial situation?) at T1. Their own words will be presented back to them at T2 and T3, and they will be asked to rate how far they have achieved their goal (On a scale of 0-10, how far have you achieved your goal: [parent's own words inserted here]?) A visual analogue scale ranging from anchors "not at all" for the value 0 to "extremely" for the value 10 will be used.
Parental mental health
The 12-Item General Health Questionnaire (GHQ-12) [7] consists of 12 items (e.g. Able to concentrate, Loss of sleep over worry, Capable of making decisions). Respondents are asked to rate the degree to which they have experienced a symptom during the last week using four response categories (e.g. Less than usual, No more than usual, Rather more than usual, or Much more than usual). The questionnaire primarily includes depression symptoms, but also some anxiety symptoms. The minimum score is 0, and the maximum is 36. A higher score indicates a higher level of distress. The Swedish version of the GHQ-12 performed excellently in a case-control study assessing discriminant validity (sensitivity = 85.5; specificity = 83.2) [11].
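The 0-36 range implies Likert-type scoring of 0-3 per item; a minimal sketch of the total score under that assumption (with responses already coded so that 3 indicates the greatest distress) is shown below.

```python
# Sketch of the GHQ-12 Likert scoring implied by the stated 0-36 range: each of the
# 12 items is coded 0-3 (3 = greatest distress) and the item codes are summed.
def ghq12_total(item_scores):
    """item_scores: iterable of 12 integers in 0..3, higher = more distress."""
    scores = list(item_scores)
    assert len(scores) == 12 and all(0 <= s <= 3 for s in scores)
    return sum(scores)

print(ghq12_total([1, 2, 0, 3, 1, 1, 2, 0, 1, 2, 1, 1]))   # 15
```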
Perceived financial stigma
An 8-item measure of perceived financial stigma will be used [13]. Respondents are asked to indicate the extent to which they agree or disagree with a series of eight statements using a 5-point scale, ranging from 1 (Definitely disagree) to 5 (Definitely agree). The items cover two dimensions of perceived stigma (each consisting of four items): internalized stigma (e.g., I feel that I am odd or abnormal because of my financial situation) and experienced stigma (e.g., I feel that others look down on me because of my financial situation).
Demographic information and consumption of societal resources
The study will gather demographic information about the parent and their child(ren), including variables such as employment status, child age, and gender. Based on previous research [1,17], we anticipate an over-representation of participants with cognitive difficulties, non-Swedish-speaking participants and single-caregiver households. Because of this, demographic questions on disabilities in the household, native language and living arrangements will be included. The demographic data will be used to describe the trial sample, examine the extent to which demographic characteristics are balanced between trial arms, carry out attrition analyses, and inform the cost-effectiveness evaluation. The demographics questions will be administered at T1. Items for which the response may change (e.g. employment status, contact with health care professionals) will be administered at T2 and T3.
Capability index
A 6-item measure of capability [15] will be used to estimate capability-adjusted life-years (CALY) for the cost-effectiveness evaluation. Respondents are asked to indicate whether they totally agree, partly agree, or do not agree with a series of six statements. The items cover health, occupation, social relations, security, political and civil rights, and financial situation (e.g. I have an economy (salary, other income or savings) that allows me to always have a permanent home and for the most part (at least 8 times out of 10) allows me to buy what I think I need). These questions will be administered at all time points and assessed at T2 and T3.
Intervention fidelity
Session attendance and the topics covered during sessions will be recorded by the budget and debt counsellors on a brief fidelity form, which will be shared with the research team to inform individual participant dose and adherence to the preventative family counselling guidance. Further to this, participant identification numbers will be checked against budget and debt counselling records. This process will identify any group contamination within the trial i.e. participants allocated to the intervention arm that do not receive counselling, or participants allocated to the waitlist control that do receive counselling.
Data collection
T1 data collection and randomisation is planned to take place between September 2022 and June 2024. T2 data collection occurs around 3 months after T1 data collection, and is therefore due to take place between December 2022 and September 2024. T3 data collection occurs 12 months after T1 data collection and is due to take place between September 2023 and June 2025.
Data will primarily be collected using REDCap. The study questionnaire can be completed via an electronic link. Alternatively, it can be completed over the phone with a member of the research team, or on paper via post. This inclusive data collection practice allows for participant preferences to be taken into account and overcomes potential literacy difficulties that could be present in the target population. The screening form includes a question on need for interpretation, which will be consulted prior to the research team contacting the parent/caregiver to allow for necessary interpreter arrangements to be made. Participants will be offered shopping vouchers at T2 and T3 to compensate for their time completing the study questionnaire. Data will be exported into the Statistical Package for the Social Sciences (SPSS) for analysis. Participant identity numbers will be used. The file will be saved on the university server, which is automatically backed up. All data management procedures comply with current regulations on personal data management.
Statistical methods
Baseline and demographic characteristics will be summarised using means and standard deviations (or medians and interquartile ranges) for continuous variables and percentages for categorical variables. For items that align with national survey questions, population-level comparisons will be made. Descriptive data will also be reported on process outcomes, e.g. number of families screened, proportion eligible for inclusion, and proportion consenting to participate.
The primary comparison of the trial arms will use an intention-to-treat framework. A regression analysis will be undertaken to quantify the extent to which a potential intervention effect on the primary outcome is determined by allocation to the intervention. Per-protocol comparison will be performed as a secondary analysis. Further moderation analyses will examine the associations between outcomes and participants' characteristics. Differences in the means and proportions on measures between T2 and T3 will be assessed using parametric and non-parametric tests.
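For illustration (this is not the trial's pre-specified analysis code), the primary intention-to-treat comparison could be expressed as a regression of the T2 MSD count on trial arm, adjusting for the baseline count; the variable names, the ordinary least squares specification and the toy data below are assumptions.

```python
# Illustrative sketch of the intention-to-treat regression of the primary outcome
# (unweighted child MSD count at T2) on trial arm, adjusted for the baseline count.
# Variable names, model form (OLS) and data are hypothetical assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "msd_t2": [4, 2, 5, 1, 3, 6, 2, 0, 3, 1],
    "arm":    ["intervention", "control"] * 5,
    "msd_t1": [5, 3, 5, 2, 4, 6, 3, 1, 4, 2],
})
fit = smf.ols("msd_t2 ~ C(arm, Treatment('control')) + msd_t1", data=df).fit()
print(fit.params)       # the arm coefficient estimates the adjusted group difference
print(fit.conf_int())   # with its 95% confidence interval
```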
The cost-effectiveness analysis will compare the two arms of the trial at T2 follow-up using capability-adjusted life-years (CALY) gained as the outcome of interest. To analyse the cost differences between the two arms, we will collect information about intervention costs (i.e. personnel time, material, etc.) and societal resource use. We will express the final output as a cost per CALY. Results will be presented as incremental cost-effectiveness ratios (ICER, expressing the cost per additional CALY gained). Uncertainty will be demonstrated using a cost-effectiveness acceptability curve, using bootstrapped regression estimates [5].
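As a simplified illustration of these quantities, the sketch below computes an ICER and a bootstrapped acceptability curve from hypothetical per-family costs and CALYs; it bootstraps group means rather than regression estimates, and all numbers are invented.

```python
# Sketch of the cost-effectiveness quantities described above: the ICER and a
# bootstrapped cost-effectiveness acceptability curve (CEAC). All data are
# hypothetical, and group means are bootstrapped as a simplification of the
# planned regression-based approach.
import numpy as np

rng = np.random.default_rng(0)
cost_i, cost_c = rng.normal(1200, 300, 70), rng.normal(400, 150, 70)     # per-family costs (SEK)
caly_i, caly_c = rng.normal(0.80, 0.10, 70), rng.normal(0.75, 0.10, 70)  # per-family CALYs

icer = (cost_i.mean() - cost_c.mean()) / (caly_i.mean() - caly_c.mean())

def ceac(thresholds, n_boot=1000):
    """Probability that the intervention is cost-effective at each willingness-to-pay value."""
    probs = []
    for wtp in thresholds:
        wins = 0
        for _ in range(n_boot):
            bi = rng.choice(len(cost_i), len(cost_i))   # bootstrap resample, with replacement
            bc = rng.choice(len(cost_c), len(cost_c))
            delta_e = caly_i[bi].mean() - caly_c[bc].mean()
            delta_c = cost_i[bi].mean() - cost_c[bc].mean()
            wins += (wtp * delta_e - delta_c) > 0       # incremental net monetary benefit > 0
        probs.append(wins / n_boot)
    return probs

print(round(icer), ceac([0, 20_000, 50_000]))
```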
Internal pilot
An internal pilot RCT will be conducted. The target N is 20 eligible families (10 per arm). The primary objective of the pilot study is to assess the feasibility of the RCT processes as a qualitative pilot evaluation has already taken place. Effectiveness will not be evaluated at this stage, but descriptive statistics will be reported for the trial measures, as well as adherence rates and reasons for nonadherence. A process for Decision-making after Pilot and feasibility Trials (ADePT) [2] will be used to support systematic decision-making in moving forward with the trial.
Discussion
HWF is a collaborative and sustainable model that could maximise the effectiveness of current services to address child deprivation in Sweden. The study outlined in this protocol is the first effectiveness evaluation of the HWF model in Sweden and is a crucial step before HWF can be recommended for national implementation within the child health services. However, several operational challenges are anticipated. Given that parental level of education and immigrant background have been linked to higher risk of exposure to child poverty in Sweden, the accessibility of the study questionnaire is a potential issue. The trial will operate across multiple sites and there is likely to be variation in the municipal budget and debt counselling services. As the services are freely available and open for self-referral, there is also a relatively high risk of trial arm contamination. However, we have put actions in place to maintain oversight of, and try to mitigate, these risks. The study information materials and questionnaire will be made available in the languages commonly spoken in the targeted areas, and data collection can take place online, by post, or via phone. Brief counselling guidance has been developed specifically for the HWF project and will be provided to all trial sites in an effort to standardise the intervention, without compromising the personalised nature of budget and debt counselling. A fidelity checklist will be administered to record adherence to the guidance topics. Further to this, participant identification numbers will be checked against budget and debt counselling records to identify trial arm contamination.
Organisational structure and responsibilities
The Principal Investigator (AS) and lead researcher (GW) are responsible for the design and conduct of the ACCESS study; the preparation of the protocol and revisions; organising advisory panel meetings; and publication of trial reports. The advisory panel is formed of all other authors and patient and public involvement (PPI) representatives. The advisory panel is responsible for reviewing the progress of the study and, if necessary, agreeing changes to the protocol to facilitate the smooth running of the trial. A data management committee has not been formed and no independent auditing will take place. Interim data monitoring will take place to inform aspects of trial conduct, such as recruitment. For instance, an internal pilot will be conducted (see Internal pilot section for more details). The ultimate decision on any amendments to the trial protocol or conduct will be made by the Principal Investigator (AS).
Patient and public involvement
The development of the ACCESS study design was supported by a group of parents who have faced economic hardship (MM, SÅ & JV). The PPI contributors attended research design planning meetings and will continue to attend meetings throughout the duration of the study. The advisors made several important contributions to the research development, including the selection of measures. Specifically, the PPI contributors emphasised the importance of including measures on parental mental health and perceived financial stigma.
Trial status
This is the first version of the protocol. The trial had received ethical clearance and recruitment was due to commence at the point of submitting this paper to the journal for publication (September 2022).
|
2022-11-26T14:44:43.792Z
|
2022-11-25T00:00:00.000
|
{
"year": 2022,
"sha1": "766a4ec6210743d174b5eeaae400538af2739249",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "766a4ec6210743d174b5eeaae400538af2739249",
"s2fieldsofstudy": [
"Medicine",
"Political Science",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
253125788
|
pes2o/s2orc
|
v3-fos-license
|
Relationship between estimated desaturase enzyme activity and metabolic syndrome in a longitudinal study
Desaturase enzyme activities (DEA) are associated with several metabolic diseases. The aim of the present study was to assess the relationship between estimated plasma DEA and the metabolic syndrome (MetS), as well as their relationship with individual components of the MetS. We conducted a longitudinal study of 148 participants recruited at random from the PREDIMED trial (Hospital Clinic site). At baseline and after 1 year of follow-up, DEA were estimated from product/precursor ratios of individual plasma fatty acids. Logistic regressions were used to assess the relationship of estimated DEA with MetS, adjusted for potential confounders. Estimated Δ5 desaturase (D5D) activity was associated with lower risk of MetS, whereas stearoyl-CoA desaturase (SCD)-16 and SCD-18 were adversely associated with MetS status. SCD-16, SCD-18, and Δ6 desaturase (D6D) were positively associated with triglycerides, and SCD-18 was inversely associated with HDL-cholesterol. Estimated D6D activity was found to be associated with increases in diastolic blood pressure. In contrast, D5D was negatively associated with triglycerides, diastolic blood pressure and waist circumference. The present longitudinal study suggests that estimated SCD-16, SCD-18, and D6D have a negative impact on MetS and its components, whereas D5D may have beneficial effects for metabolic health.
Introduction
The prevalence of metabolic syndrome (MetS) has increased in the last three decades, and the global prevalence has been estimated at about one quarter of the world population (1). The prevalence in Spain reached 30% in 2012 (2), and this number is estimated to increase by approximately 94,000 cases every year (3). MetS is defined as a set of criteria that, when grouped together, represent a risk for developing cardiovascular disease (CVD) and type 2 diabetes (T2D), such as elevated blood triglycerides (TG) or glucose (4). The development of MetS can be promoted by unmodifiable risk factors, including genetics or aging, but also by modifiable lifestyle habits, such as physical activity or diet (5). The incidence of MetS is particularly high in men aged over 45 years with an educational level below university studies. Healthier lifestyle therapies in the management of MetS focus on reducing weight and sedentarism and improving the diet. It has been reported that the incidence of MetS can be reduced with a higher adherence to the Healthy Lifestyle Score, which includes never smoking, moderate to high physical activity, higher adherence to the Mediterranean diet, or moderate alcohol consumption, among others (6). Other studies recommend reaching a greater adherence to the Mediterranean diet to reduce its development (3).
The traditional Mediterranean diet has been recognized as protective against the development of MetS and other chronic diseases, such as T2D, CVD, and hypertension (7,8). This healthy dietary pattern is characterized by a high intake of fruits, vegetables, legumes, nuts, whole grains, and olive oil (9). The Mediterranean diet provides a high content of healthy fats, mostly from olive oil, and favors a better lipid profile.
The plasma fatty acid (FA) profile is considered a more reliable biomarker of dietary fat intake than food frequency questionnaires (FFQ) (10), but it may also be affected by non-dietary factors, such as endogenous metabolism. FAs can be synthesized, elongated, or desaturated in reactions catalyzed by the enzymes stearoyl-CoA desaturase (SCD-1), Δ6 desaturase (D6D), and Δ5 desaturase (D5D) (11), as shown in Figure 1. Altered desaturase enzyme activities (DEA), calculated as the ratios of the FAs involved in each reaction, are associated with cardiometabolic risk factors, such as T2D, obesity and MetS (12,13). However, studies assessing the effect of estimated DEA on MetS and its individual components remain scarce.
We hypothesized that altered ratios of estimated DEA would be associated with MetS and the individual components that constitute it. Thus, the aim of this substudy was to assess the relationship of estimated DEA with MetS and its components after 1 year of follow-up in a Mediterranean population.
Study design
The PREDIMED (PREvención con DIeta MEDiterránea) study was a 5-year large, parallel-group, multicenter, randomized, controlled, clinical trial conducted in Spain from October 2003 to December 2010 with the aim of assessing the effect of a Mediterranean diet on the primary prevention of CVD. In summary, 7,447 participants aged 55-80 years at high cardiovascular risk were included. Eligible participants were men and women with T2D, dyslipidemia, hypertension, overweight/obesity or family history of premature CVD. Exclusion criteria included severe chronic illnesses, alcohol or drug abuse, and BMI > 40 kg/m². A detailed description of methods and participants has been published elsewhere (14).
For the current analysis, we used a randomly selected subsample of participants from the PREDIMED-Hospital Clinic recruitment center. To estimate DEA, a total of 148 participants with available data on plasma FA profiles at baseline and after 1 year of follow-up were included.
The protocol was approved by the Research Ethics Committees at the Hospital Clinic recruiting center and all participants signed a written informed consent form.
Covariate assessment
A validated semi-quantitative 137-item FFQ was collected by trained dietitians to assess dietary intake at baseline and after 1 year (15). Nutrient intakes were calculated from Spanish food composition tables (16). One female participant who reported implausible energy intakes (>3,500 and <500 Kcal/day for females, and >4,000 and <800 Kcal/day for males) was excluded from the analysis (17). Mediterranean diet adherence was assessed with a 14-item questionnaire with a value of 0 or 1 for each dietary component (18).
Trained personnel carried out anthropometric measurements at baseline and 1-year follow-up. Physical activity was assessed with a validated Spanish version of the Minnesota physical activity questionnaire (19). The anthropometric measurements used in this study were body mass index (BMI), calculated as weight in kg divided by height squared in m² (kg/m²), and waist circumference (WC). Diastolic and systolic blood pressure (DBP and SBP, respectively) were measured in triplicate with a validated semi-automatic sphygmomanometer after a minimum of 5 min of rest in a seated position.
Laboratory measurements
Blood samples were collected after an overnight fast, coded, and stored at −80 °C until analysis. Biochemical analyses [glucose, triglycerides, total cholesterol, and high-density lipoprotein cholesterol (HDL-c)] were performed by standard enzymatic procedures. The FA profile in plasma was determined in total lipids by fast gas chromatography with a flame ionization detector (GC-FID) after derivatization to the corresponding FA methyl esters (FAMEs) (20). Fast analyses were performed on a Shimadzu GC-2010 Gas Chromatograph (Shimadzu, Kyoto, Japan) equipped with an FID and Shimadzu AOC-20i Autoinjector. Separation of FAMEs was carried out on a capillary column (10 m × 0.10 mm i.d.), coated with an SGE-BPX70 cross-linked stationary phase (70% cyanopropyl polysilphenylene-siloxane, 0.20 µm film thickness) from SGE (SGE Europe Ltd., United Kingdom). Methyl ester peaks were identified by comparison of their relative retention times with the standards Supelco 37 component FAMEs mix and PUFA No. 2 (Animal source), purchased from Merck (Darmstadt, Germany). Results were expressed as relative percentages of total FAs.
Estimation of desaturase activities
Plasma FA levels at baseline and changes after 1 year of follow-up are detailed in Supplementary Table 1 according to MetS status.
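The product/precursor ratios used to estimate each activity are not restated in the text above; the sketch below uses the ratios conventionally applied in the literature (SCD-16 = 16:1n-7/16:0, SCD-18 = 18:1n-9/18:0, D6D = 18:3n-6/18:2n-6, D5D = 20:4n-6/20:3n-6), which should be read as an assumption rather than a quotation of this study's definitions.

```python
# Sketch of desaturase activity estimation from plasma fatty acid percentages.
# The product/precursor ratios below are conventional literature estimates and
# are assumed here, not quoted from this study's methods.
def estimated_dea(fa):
    """fa: dict of fatty acids as relative % of total FAs, keyed by shorthand notation."""
    return {
        "SCD-16": fa["16:1n-7"] / fa["16:0"],     # palmitoleic / palmitic
        "SCD-18": fa["18:1n-9"] / fa["18:0"],     # oleic / stearic
        "D6D":    fa["18:3n-6"] / fa["18:2n-6"],  # gamma-linolenic / linoleic
        "D5D":    fa["20:4n-6"] / fa["20:3n-6"],  # arachidonic / dihomo-gamma-linolenic
    }

example = {"16:0": 22.0, "16:1n-7": 1.8, "18:0": 7.0, "18:1n-9": 21.0,
           "18:2n-6": 30.0, "18:3n-6": 0.4, "20:3n-6": 1.5, "20:4n-6": 7.5}
print(estimated_dea(example))
```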
Definition of metabolic syndrome
For the present work we applied the definition of MetS proposed by six major organizations and societies (IDF, NHLBI, AHA, WHF, IAC, and IASO) (21). Accordingly, participants who presented at least 3 of the following 5 risk factors were included in the MetS group: elevated TG (>150 mg/dL or drug treatment for elevated TG), reduced HDL-c (<40 mg/dL in men and <50 mg/dL in women), elevated blood pressure (SBP > 130 and/or DBP > 85 mmHg, or antihypertensive drug treatment), elevated fasting glucose (>100 mg/dL or drug treatment of elevated glucose), and elevated WC (>102 cm for men and >88 cm for women).
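A minimal sketch of this 3-of-5 classification rule, using the thresholds as written above (with medication or treatment flags counting as meeting the corresponding criterion), is shown below; the variable names are assumptions.

```python
# Sketch of the harmonised MetS definition: a participant is classified with MetS
# when at least 3 of the 5 criteria below are met. Thresholds follow the text;
# treatment flags count as meeting the corresponding criterion.
def has_mets(p):
    criteria = [
        p["tg"] > 150 or p.get("tg_treatment", False),
        p["hdl"] < (40 if p["sex"] == "male" else 50),
        p["sbp"] > 130 or p["dbp"] > 85 or p.get("bp_treatment", False),
        p["glucose"] > 100 or p.get("glucose_treatment", False),
        p["wc"] > (102 if p["sex"] == "male" else 88),
    ]
    return sum(criteria) >= 3

participant = {"sex": "female", "tg": 170, "hdl": 45, "sbp": 128, "dbp": 88,
               "glucose": 105, "wc": 90}
print(has_mets(participant))   # True: all five criteria are met in this example
```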
Statistical analysis
Baseline characteristics of the participants with and without MetS are presented as means ± SD for continuous variables and percentages for categorical variables. T-tests were used to assess differences in continuous variables and Chi-square tests were used for categorical variables. T-tests were also used to assess differences in plasma FA profile between participants with and without MetS, as well as within-group differences between baseline and 1 year of follow-up. Baseline values and 1-year changes of estimated DEA were normalized and scaled in multiples of 1 SD with the Blom inverse normal transformation (22). Changes in estimated DEA and MetS components were calculated as the 1-year value minus the baseline value.
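For reference, a minimal sketch of the Blom rank-based inverse normal transformation (using Blom's constant of 3/8) is shown below; this is a generic implementation, not the study's own code.

```python
# Sketch of the Blom rank-based inverse normal transformation: observed values
# are replaced by standard normal quantiles of their (offset) ranks.
import numpy as np
from scipy.stats import norm, rankdata

def blom_transform(x):
    x = np.asarray(x, dtype=float)
    ranks = rankdata(x)                               # average ranks for ties
    return norm.ppf((ranks - 3/8) / (len(x) + 1/4))   # Blom's constant c = 3/8

values = np.array([0.11, 0.09, 0.35, 0.22, 0.18, 0.27])
print(blom_transform(values).round(2))
```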
The associations between the prevalence of MetS and estimated DEA were assessed with a logistic regression analysis to calculate the odds ratios (OR) and 95% confidence interval (CI) adjusting for potential confounders (sex, age, physical activity, BMI, smoking status, educational level, and total energy intake) and stratifying for sex. Multinomial logistic regression was employed to assess the relative risk ratio (RRR) of 1-year changes in MetS status and in estimated DEA, also stratifying for sex, and incorporating the intervention group into the adjustment models.
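An illustrative version of such a logistic model is sketched below; the data and variable names are hypothetical, and the actual analysis adjusts for the full set of confounders listed above and stratifies by sex.

```python
# Illustrative logistic regression for the odds of MetS per 1-SD increase in an
# estimated DEA, adjusted for a subset of the named confounders. Data and variable
# names are hypothetical; exponentiating the coefficients gives ORs and 95% CIs.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "mets": rng.integers(0, 2, 148),
    "d5d_z": rng.normal(0, 1, 148),                  # Blom-transformed, 1-SD units
    "age": rng.normal(67, 6, 148),
    "sex": rng.choice(["male", "female"], 148),
    "bmi": rng.normal(29, 4, 148),
})
fit = smf.logit("mets ~ d5d_z + age + C(sex) + bmi", data=df).fit(disp=0)
or_table = np.exp(fit.conf_int())                    # 95% CI bounds on the OR scale
or_table["OR"] = np.exp(fit.params)
print(or_table)
```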
Multivariable adjusted linear regression models were used to assess differences between estimated DEA per 1-SD increase and MetS components (TG, HDL-c, DBP, SBP, glucose, and WC). The adjustment model for potential confounders included: sex, age, physical activity, smoking status, educational level, total energy intake, and BMI (except for WC). In addition, related medication was added in the adjustment model for each MetS component: TG and HDL-c were further adjusted for cholesterol-lowering drugs; DBP and SBP were further adjusted for antihypertensive medication; glucose was further adjusted for insulin and other hypoglycemic drugs; SCD-16 and SCD-18 were further adjusted for PUFA intake. The longitudinal analysis considering 1-year changes in estimated DEA and MetS components was carried out using the same models, further adjusted for the intervention groups.
For all analyses, two-sided significance was determined at a p < 0.05. Analyses were performed with Stata 16.0 (Stata-Corp LP, TX, USA).
Results
General characteristics

Table 1 shows the baseline characteristics of the 148 participants according to MetS status. Approximately two thirds of the participants had MetS, whereas 47 volunteers were considered not to suffer from this syndrome. Among the participants with MetS, the majority were women (62.4%), whereas among those without MetS, the majority were men (57.4%). As expected, more participants with MetS suffered from T2D (83.2%), had higher BMI (29.5 ± 3.8 kg/m²) and performed less physical activity (249 ± 237.3 METs-min/day). Surprisingly, a higher percentage of participants without MetS had dyslipidaemia (82.98%).
Dietary intake of all participants, overall and stratified by MetS status, is shown in Table 2. The mean energy intake was 2,515.6 ± 541.1 kcal/day and the most consumed type of fat was monounsaturated fatty acids (MUFA). The two groups of participants with and without MetS were well-balanced and there were no differences in any nutrient except for polyunsaturated fatty acid (PUFA) intake, as those with MetS reported significantly lower consumption (17.1 ± 6.6 g/day).
Associations of desaturase activities and metabolic syndrome at baseline
The ORs of having MetS according to estimated DEA per 1-SD increase at baseline are shown in Table 3. The associations between estimated DEA per 1-SD increase and MetS components after full adjustment are shown in Table 4.
Associations of desaturase activities and metabolic syndrome after 1 year of follow-up
The relationship between 1-year changes in MetS status and estimated DEA per 1-SD increase is presented in Table 5.
Discussion
In the present longitudinal substudy of the PREDIMED trial, we observed that higher estimated activities of SCD-16, SCD-18, and D6D had an adverse effect on MetS status and its components after 1 year of follow-up. In contrast, estimated D5D activity showed a protective effect against MetS and its components, particularly TG and DBP. To our knowledge, this is the first study to assess the effect of estimated DEA on MetS and its components after 1 year of follow-up in a Mediterranean population.
The activity of these desaturases is known to be related to metabolic health. Differences in the plasma FA profile and estimated DEA have been previously described between metabolically healthy and unhealthy individuals (23,24). On this basis, Svendsen et al. proposed that these enzymatic activities could serve as novel biomarkers of metabolic health (13,25). This is in accordance with the results of the present study, as we found that estimated DEA were associated with MetS at baseline and after 1 year of follow-up.
The analysis of plasma estimated DEA related to FA metabolism showed a beneficial effect of estimated D5D activity on the prevalence of MetS. These results are consistent with previous studies that report a positive influence of D5D on cardiovascular health. For example, higher D5D activity has been favorably associated with stroke risk factors, T2D, and abdominal obesity (26)(27)(28). D5D is the rate-limiting enzyme that catalyzes the transformation of eicosatetraenoic and dihomo-gamma-linolenic acid into eicosapentaenoic acid (EPA) and arachidonic acid (AA), respectively. Therefore, lower D5D activity leads to the accumulation of precursors and other intermediate FAs that increase cardiometabolic risk, such as gamma-linolenic or dihomo-gamma-linolenic acid (29). Despite all these findings, Mayneris-Perxachs et al. did not observe any association between D5D and the odds of having MetS in a cross-sectional sub-analysis with baseline data of the PREDIMED study (30). However, they did find that D6D and SCD-18 were adversely associated with MetS, which is consistent with our results.

The activity of D6D, the key enzyme in the conversion of linoleic and alpha-linolenic acid, and of SCD-1, which catalyzes the synthesis of MUFA from saturated fatty acids (SFA), is inhibited by PUFA intake (31). In this regard, evidence obtained in clinical trials has shown that diets with high intakes of PUFA down-regulate SCD-1 activity (32), particularly PUFA resulting from fish consumption (33). However, in relation to our findings, including PUFA intake as a confounder variable in the analyses of SCD-16 and SCD-18 minimally altered the results, suggesting that their associations with MetS and its components were not dependent on PUFA intake. Overall, our findings confirm the results of previous studies in which elevated D5D and reduced D6D and SCD-1 activities positively impacted cardiometabolic risk factors (24,34). SCD-16, SCD-18, and D6D are generally known to exert negative effects on metabolic health and other CVD risk factors. SCD-16 and SCD-18 have also been positively associated with BMI, blood pressure, and total cholesterol (35,36), which is in accordance with our results. Other studies have found that D6D is related to higher TG, blood pressure, BMI and total cholesterol (37,38). Moreover, D6D has shown a positive association with inflammatory biomarkers, such as ICAM-1 or C-reactive protein (37,39), which suggests that this enzyme has a negative effect on metabolic health due to the activation of inflammatory pathways.
In contrast, estimated D5D activity has been favorably associated with MetS components, as it has been related to higher HDL-c, lower blood pressure, and lower BMI (36,40).
Several mechanisms may explain the associations found between estimated DEA and MetS components. PUFA synthesized by D5D and D6D can modulate the expression of transcription factors that participate in lipogenesis and FA oxidation, such as PPAR. In addition, these FAs also produce eicosanoids, which are inflammatory mediators that play major roles in lipogenesis or insulin resistance (41). Taken together, these results suggest that D5D products are involved in anti-inflammatory responses and upregulation of transcription factors that lead to a better lipid profile and decreased CVD risk, whereas D6D and SCD products may have the opposite effect.
The main strength of the present study is its longitudinal nature, as this is considered the most rigorous method to establish a cause-effect relationship. Other strengths include the analysis of biological samples. Among the limitations of the study is that all the participants were >55 years and at high risk of CVD, thus the results may not be representative of other populations. Additionally, the sample size was relatively small compared to similar studies.
The present study shows that, in a Mediterranean population aged over 55 years and at high cardiovascular risk, estimated SCD-16, SCD-18, and D6D activities were adversely associated with MetS, whereas D5D was associated with a protective effect. Among the components that constitute the MetS, TG, HDL-c, DBP, and WC were adversely affected by estimated activities of SCD-16, SCD-18, and D6D. In contrast, D5D was associated with beneficial changes in TG and DBP. Therefore, our results contribute to the hypothesis that FA metabolism influences metabolic health and that desaturase dysregulation may be indicative of metabolic alterations. Further research is needed to confirm the current findings in the general population.
Data availability statement
The data analyzed in this study is subject to the following licenses/restrictions: The datasets presented in this article are not readily available because there are restrictions on the availability of data for the PREDIMED trial, due to the signed consent agreements around data sharing. Requestors wishing to access the PREDIMED-dataset generated and/or analyzed during the current study can make a request to the PREDIMED trial Steering Committee chair. Requests to access these datasets should be directed to RL-R, lamuela@ub.edu.
Ethics statement
The studies involving human participants were reviewed and approved by the Research Ethics Committees at the Hospital Clinic recruiting center and all participants signed a written informed consent form. The patients/participants provided their written informed consent to participate in this study.
Author contributions

ID-L: conceptualization, investigation, formal analysis, and writing - original draft. CA-R: methodology and writing - original draft. AT-R, RC, and ZV-R: writing - review and editing. SC-B: methodology and writing - review and editing. ER, MF, and RE: investigation and writing - review and editing. ML-S: formal analysis and writing - review and editing. RL-R: conceptualization, investigation, and writing - review and editing. All authors contributed to the article and approved the submitted version.
Funding
This research has been supported by the CICYT [PID2020-114022RB-I00], CIBEROBN from the Instituto de Salud Carlos III, ISCIII from the Ministerio de Ciencia, Innovación y Universidades (AEI/FEDER, UE), and Generalitat de Catalunya (GC) [2017SGR 196]. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Acknowledgments
The PREDIMED trial was supported by the official funding agency for biomedical research of the Spanish government (Instituto de Salud Carlos III) through grants provided to research networks specifically developed for the trial: RTIC G03/140 (Coordinator: RE) and RTIC RD 06/0045 (Coordinator: Miguel Ángel Martínez-González). All investigators of the PREDIMED trial belong to Centro de Investigación Biomédica en Red (CIBER), an initiative of Instituto de Salud Carlos III. ID-L and SC-B thank the Spanish Ministry of Science Innovation and Universities for the Formación de Profesorado Universitario (FPU20/02478 and FPU17/00785) contract. AT-R is a Serra Húnter Fellow.
Conflict of interest
Author ER reports grants, personal fees, non-financial and other from the California Walnut Commission while the study was carried out; grants, personal fees, non-financial support and other from Alexion; and personal fees and other from Amarin, outside the submitted work. Author RL-R reports personal fees from Cerveceros de España, personal fees and other from Adventia, Wine in Moderation, Ecoveritas S.A., outside the submitted work. Author RE reports grants from the Fundación Dieta Mediterránea (Spain), and Cerveza y Salud (Spain), and personal fees for given lectures from Brewers of Europe (Belgium), the Fundación Cerveza y Salud (Spain), Pernaud-Ricard (Mexico), Instituto Cervantes (Alburquerque, United States), Instituto Cervantes (Milan, Italy), Instituto Cervantes (Tokyo, Japan), Lilly Laboratories (Spain), and the Wine and Culinary International Forum (Spain), as well as nonfinancial support for the organization of a National Congress on Nutrition and feeding trials with products from Grand Fountain and Uriach Laboratories (Spain).
The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
|
2022-10-27T14:23:40.145Z
|
2022-10-26T00:00:00.000
|
{
"year": 2022,
"sha1": "c105d649e2aafb252702f35cace1d42175a33ea5",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "c105d649e2aafb252702f35cace1d42175a33ea5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
|