Fields per record: id, source, version, text, added, created, metadata.
87941884
pes2o/s2orc
v3-fos-license
Comparative assessment of sire evaluation by univariate and bivariate animal model for estimation of breeding values of first lactation traits in HF cross cattle The aim of the present investigation was to study the superiority of bivariate over univariate sire evaluation. Data were collected on 1,988 first parity Karan Fries cows, spread over 31 years. The (co)variance components estimated using average information restricted maximum likelihood (AIREML) were fitted into univariate and bivariate animal models for prediction of breeding values. Low heritability estimates were obtained for fertility traits, ranging from 0.02 (FDPR) to 0.19 (AFC), indicating a lesser role of additive gene action in the fertility of dairy cattle. Comparative analysis revealed that the breeding values estimated using the bivariate animal model had lower error variance and greater range in comparison to the univariate animal models. The mean sire breeding values for production traits estimated by bivariate analysis ranged from 3055.50 to 3063.15 kg and were higher than the mean sire breeding values estimated by the univariate animal model. The inclusion of fertility traits along with production traits improved the differentiating ability of the bivariate animal model with respect to production performance. In most of the genetic improvement programmes in the country, selection has focused on production traits, whereas the fertility performance of the animal has not been given due emphasis. Therefore, there is a need to consider fertility traits in addition to production traits during selection. Selection considering fertility along with production performance has been advocated under Indian conditions owing to the small number of daughters per sire, as such selection will improve the accuracy and efficiency of sire evaluation (Sahana and Gurnani 1999). Therefore, including fertility along with production traits in sire evaluation would enable genetic improvement in production potential along with improvement in fertility traits. The Karan Fries (KF) crossbred dairy cattle was developed by crossing Holstein Friesian (H), Brown Swiss (B) and Jersey (J) bulls with Tharparkar cows under a crossbreeding project at NDRI, Karnal, in 1971. The level of Holstein inheritance was fixed at around 62.5% (Gurnani et al. 1986). Sire evaluation for the progeny testing programme of the Karan Fries is done by the contemporary comparison method, a univariate sire evaluation method that considers the first lactation milk production performance only (Singh and Gurnani 2004). The present investigation aims at studying the efficiency of bivariate over univariate sire evaluation. Comparative analysis was carried out using different combinations of two-trait models in first-parity cows, considering production along with fertility traits. MATERIALS AND METHODS The present study was carried out on Karan Fries cows maintained at the National Dairy Research Institute (NDRI), Karnal, Haryana. Data on first lactation fertility and production performance of 1,988 Karan Fries cows sired by 186 bulls, spread over a period of 31 years (1982 to 2012), were utilized for the study. 
The indicator traits for fertility performance of the Karan Fries cows considered were age at first calving (AFC), first service period (FSP), first calving interval (FCI) and first lactation daughter pregnancy rate (FDPR), and the production traits considered for analysis were first lactation 305 day milk yield (F305MY) and first lactation total milk yield (FTMY). FDPR was calculated following VanRaden et al. (2004). Breeding values of the sires for production and fertility traits were estimated by both univariate and bivariate animal models. The (co)variance components were estimated by the average information restricted maximum likelihood (AIREML) algorithm in the WOMBAT genetic analysis tool (Meyer 2007). Sires were ranked on the basis of breeding values estimated by both types of animal model. Efficiency of univariate and bivariate animal models was adjudged on the basis of Spearman's rank correlation between the rankings by univariate and bivariate animal model (Table 1) as well as by the standard deviation (SD) and error variance of estimated breeding values (Tables 2, 3). Spearman's rank correlation estimates: Sires were ranked for both production and fertility traits on the basis of EBVs estimated by both univariate and bivariate animal models. Comparison of rankings was done on the basis of Spearman's rank correlation estimates. The rank correlation estimates of production traits indicated a very strong and highly significant correlation between the univariate and bivariate rankings, indicating that EBVs of production traits estimated by the bivariate animal model, in which a fertility trait was considered in addition to a production trait (F305MY or FTMY), were similar to those estimated by the univariate animal model. The rank correlation estimates for fertility traits such as AFC and FSP indicated lesser variation between univariate and bivariate rankings. However, for traits such as FCI and FDPR the estimates indicated greater variation of sire rankings between the univariate and bivariate animal models. The fertility traits were also considered in combination with FTMY in the bivariate animal model. The Spearman's rank correlation estimates for AFC and FDPR indicated moderate correlation between univariate and bivariate rankings. High and significant rank correlation estimates were obtained for FSP and FCI. The rank correlation estimates varied greatly when fertility traits were considered in association with FTMY. For fertility traits, the univariate sire rankings varied greatly in comparison to the bivariate rankings, which may be attributed to the weaker additive genetic nature of fertility traits as well as their negative association with production traits. The findings were in agreement with the results of multi-trait sire evaluation reported by Raheja et al. (2000), Sun et al. (2010) and Divya et al. (2014). Standard deviation (SD) and error variance of EBVs: The breeding values estimated by the bivariate animal model for F305MY or FTMY, with AFC, FSP, FCI and FDPR as fertility traits in the model, had higher standard deviation (SD) in comparison to the breeding values estimated by the univariate model. The error variance of breeding values estimated by the bivariate animal model was lower than that of the univariate animal model for both F305MY and FTMY. 
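To make this kind of comparison concrete, the following minimal Python sketch (hypothetical EBV and prediction-error-variance arrays, not the study's data or WOMBAT output) shows how sire rankings from a univariate and a bivariate model can be compared with Spearman's rank correlation, together with the SD and mean error variance of the EBVs.

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical EBVs for the same 50 sires from two models (placeholder values).
ebv_univariate = rng.normal(loc=3055.0, scale=120.0, size=50)
ebv_bivariate = ebv_univariate + rng.normal(scale=25.0, size=50)  # correlated re-estimates

# Hypothetical prediction error variances (PEV) reported alongside each EBV.
pev_univariate = rng.uniform(900.0, 1200.0, size=50)
pev_bivariate = pev_univariate - rng.uniform(50.0, 150.0, size=50)

# Spearman's rank correlation between the two sets of sire rankings.
rho, p_value = spearmanr(ebv_univariate, ebv_bivariate)

print(f"rank correlation = {rho:.3f} (p = {p_value:.3g})")
print(f"SD of EBVs: univariate {ebv_univariate.std(ddof=1):.1f}, "
      f"bivariate {ebv_bivariate.std(ddof=1):.1f}")
print(f"mean error variance: univariate {pev_univariate.mean():.1f}, "
      f"bivariate {pev_bivariate.mean():.1f}")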
The results indicated that the bivariate model, in which a production trait was analyzed together with a fertility trait, had greater ability to differentiate superior and inferior sires with respect to F305MY or FTMY than the univariate animal model. This may be attributed to the correlation between the production and fertility traits that was accounted for by the bivariate animal model. Kadarmideen et al. (2003) recommended bivariate genetic evaluation and selection of dairy cattle on the basis of both fertility and production performance. Similar observations were reported by Mukherjee (2005) and Kumar (2007). Sun et al. (2010) reported that models combining milk production traits showed better stability and predictive ability than single-trait models for all the fertility traits. Divya et al. (2014) reported the bivariate animal model to be superior to the univariate animal model on the basis of the standard deviation of EBVs for first lactation fertility and production traits. Zink et al. (2012) used the bivariate animal model to estimate genetic parameters with the AIREML algorithm; these parameters were further utilized in the computation of selection indices, which had higher accuracy when fertility traits were combined with production traits. An overview of the results of sire evaluation for first lactation fertility and production traits indicated that the two-trait models were superior to single-trait animal models for estimation of breeding values. The heritability estimates obtained using AIREML indicated that fertility traits were less affected by additive gene action than production traits. The inclusion of fertility traits along with production traits improved the differentiating ability of the bivariate animal model with respect to production performance, as the bivariate model accounted for the correlation between the production and fertility traits. Therefore, fertility traits need to be given due importance in sire evaluation along with production performance.
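For readers unfamiliar with how EBVs arise from an animal model, the sketch below is a schematic univariate BLUP solved through Henderson's mixed model equations on a tiny hypothetical data set, with an identity relationship matrix and an assumed variance ratio. The actual study estimated (co)variance components with AIREML in WOMBAT and fitted univariate and bivariate animal models; this toy example does not reproduce that analysis.

import numpy as np

# Toy data: 6 lactation records, a single fixed effect with 2 levels (e.g. period),
# and 4 animals with records. All numbers are illustrative placeholders.
y = np.array([3100.0, 2950.0, 3300.0, 3050.0, 2800.0, 3150.0])
period = np.array([0, 0, 0, 1, 1, 1])          # fixed-effect level per record
animal = np.array([0, 1, 2, 3, 0, 2])          # animal per record

n_fixed, n_animal = 2, 4
X = np.zeros((len(y), n_fixed)); X[np.arange(len(y)), period] = 1.0
Z = np.zeros((len(y), n_animal)); Z[np.arange(len(y)), animal] = 1.0

# Assumed variance components (in practice estimated by AIREML):
# lam = sigma_e^2 / sigma_a^2; identity numerator relationship matrix for simplicity.
lam = (1.0 - 0.25) / 0.25                      # corresponds to h^2 = 0.25
A_inv = np.eye(n_animal)

# Henderson's mixed model equations:
# [X'X  X'Z            ] [b]   [X'y]
# [Z'X  Z'Z + A^-1*lam ] [u] = [Z'y]
lhs = np.block([[X.T @ X, X.T @ Z],
                [Z.T @ X, Z.T @ Z + A_inv * lam]])
rhs = np.concatenate([X.T @ y, Z.T @ y])
sol = np.linalg.solve(lhs, rhs)

fixed_effects, ebv = sol[:n_fixed], sol[n_fixed:]
print("fixed effects:", np.round(fixed_effects, 1))
print("animal EBVs:  ", np.round(ebv, 1))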
2019-03-31T13:41:38.064Z
2016-02-11T00:00:00.000
{ "year": 2016, "sha1": "61738168b770bc9e4b94bac4ef458fce8c4b6161", "oa_license": "CCBYNCSA", "oa_url": "https://epubs.icar.org.in/index.php/IJAnS/article/download/55803/23472", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e49b32e057ace0a2caa4869e14c59a911675b38e", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
267685279
pes2o/s2orc
v3-fos-license
Revisiting the Solar Wind Deceleration Upstream of the Martian Bow Shock Based on MAVEN Observations The solar wind deceleration upstream of the Martian bow shock is examined using particle and magnetic field measurements obtained by the Mars Atmosphere and Volatile EvolutioN (MAVEN). Mars lacks a strong intrinsic magnetic field, so its upper atmosphere extends beyond the Martian bow shock and interacts directly with the solar wind. Neutral atoms in the Martian upper atmosphere can be ionized through several physical processes and then start to move with the solar wind flow to form pickup ions. In return, the solar wind is expected to slow down due to the momentum transfer to the pickup ions. The present study surveys the MAVEN solar wind measurements between 2015 and 2019 to evaluate the solar wind deceleration upstream of the Martian bow shock. Different from previous studies of solar wind deceleration, our analysis carefully excludes the solar wind deceleration in the shock foot region. The average solar wind deceleration obtained is about 0.7% of the initial solar wind speed, much smaller than the values given by previous studies. Further calculation using several reasonable Martian upper atmosphere density profiles demonstrates that the deceleration observed is consistent with the pickup ion mass-loading scenario. Introduction The solar wind deceleration is a physical phenomenon commonly observed upstream of planetary and cometary bow shocks. It was first observed upstream of the Earth's bow shock in the 1960s (Formisano & Amata 1976). A series of studies have shown that the solar wind deceleration upstream of the Earth's bow shock is due to the interactions between the solar wind ions and the ultra-low-frequency waves excited by the shock-reflected ions (Bame et al. 1980; Bonifazi et al. 1980, 1983; Fu et al. 2009). The wave-particle interactions lead to an effective momentum exchange between the solar wind and shock-reflected ions. On the other hand, the solar wind deceleration upstream of the Martian bow shock was first observed by Phobos 2 in 1991 and exhibits different characteristics from that at Earth (Verigin et al. 1991). Verigin et al. (1991) found the solar wind deceleration to be about 100 km s−1 by analyzing the first three orbits of Phobos 2 measurements around Mars. A statistical study of 70 bow shock crossings showed that the average values of the solar wind deceleration in different regions upstream of the Martian bow shock are about 4%-7% of the undisturbed solar wind speed (Kotova et al. 1997). The values are higher than those observed upstream of the Earth's bow shock. In addition, solar wind deceleration is a common feature upstream of the Martian bow shock and occurs for both quasi-parallel and quasi-perpendicular shocks (Barabash & Lundin 1993; Dubinin et al. 1994; Kotova et al. 1997). In contrast, upstream of the Earth's bow shock, the solar wind speed decreases mostly in front of quasi-parallel shocks (Zhang et al. 1995). Therefore, it is difficult to explain the solar wind deceleration upstream of the Martian bow shock using the aforementioned momentum transfer between the shock-reflected ions and the solar wind ions. Instead, the mass-loading mechanism was proposed to explain the difference in the solar wind deceleration between Mars and Earth (Verigin et al. 1991; Kotova et al. 1997; Zhang et al. 1997). The mass-loading mechanism was first proposed to explain the solar wind deceleration upstream of comets (Biermann et al. 
1967). Neutral atoms originating from comets are ionized through processes such as photoionization, charge exchange, and electron impact in the solar wind. After ionization, the newborn ions start to move together with the solar wind due to the motional electric field and the interplanetary magnetic field (IMF) in the solar wind and/or wave-particle interactions. They are, therefore, referred to as pickup ions (PUIs). The PUIs gain momentum from the solar wind during this mass-loading process and cause the solar wind to slow down. The mass-loading mechanism is believed to operate upstream of the Martian bow shock as well (Verigin et al. 1991; Kotova et al. 1997; Zhang et al. 1997). The Martian bow shock is relatively close to Mars because it does not have a strong intrinsic magnetic field. The Martian upper atmosphere (corona) extends beyond the Martian bow shock and interacts directly with the solar wind. PUIs are expected to arise when the neutral atoms in the Martian corona are ionized and, subsequently, slow down the solar wind flow. However, there are also studies showing that the mass-loading mechanism plays at most a minor role upstream of the Martian bow shock, because the densities of the hot oxygen corona and the hydrogen corona of Mars are too low to provide the observed values of solar wind deceleration. Some evidence indicates that rather than mass loading, large-amplitude Alfvén waves play the primary role in generating the observed signatures of solar wind deceleration from Phobos 2 (Dubinin et al. 2000a, 2000b). Zhang et al. (2006) used a gas dynamic model to estimate the solar wind deceleration caused by the PUI mass loading from the hot oxygen corona. Even considering extreme oxygen density profiles, the deceleration along the solar wind streamline was found to be only about 10-15 km s−1 (∼2%-3%). Halekas et al. (2017) analyzed data from the Solar Wind Ion Analyzer (SWIA) on board MAVEN, covering the period from 2014 to 2016, to study the solar wind deflection caused by the PUI mass loading. They found that the solar wind deflection ranges from −2.9 to −4.9 km s−1 (with a reference value of 0 if no deflection occurs), indicating a very weak mass-loading effect upstream of the Martian bow shock. Clearly, whether the PUI mass loading can produce the solar wind deceleration observed upstream of the Martian bow shock is still an open question. The present study revisits this problem by examining the MAVEN measurements between 2015 and 2019. In the rest of the paper, Section 2 describes the relevant MAVEN instruments and the data selection criteria. Section 3 presents the observation results of solar wind deceleration in comparison with the previous studies. In Section 4, the solar wind deceleration due to the PUI mass loading is estimated along the MAVEN orbit using both the momentum conservation equation and the gas dynamic model of Zhang et al. (2006), and the results are then compared with the observations. Finally, Section 5 concludes the study and provides further discussions. Instrumentation and Data Selection Criteria MAVEN was launched by NASA in 2013 November and entered its Mars orbit in 2014 September. It is a polar orbit satellite with an orbital inclination of 75° and an orbital period of 4.5 hr. The apoapsis and periapsis of the MAVEN orbit are 6200 km and 150 km above the Martian surface, respectively. Thus, MAVEN can cross the Martian bow shock twice during each orbit and flies for a long time in the upstream solar wind (Jakosky et al. 
2015), which makes it suitable for studying the solar wind deceleration upstream of the Martian bow shock. The present study investigates the solar wind deceleration using the measurements of the Magnetometer (MAG) and SWIA instruments on board MAVEN. MAG provides three-component magnetic field data with a time resolution of 32 Hz or 1 Hz (Connerney et al. 2015). SWIA measures ions in the energy range of 25 eV-25 keV. Depending on the operating mode, SWIA returns three-dimensional ion velocity distribution data with different energy and angular resolutions. Additionally, SWIA measurements can be used to derive ion moment parameters (density, velocity, and temperature). Note that SWIA typically operates in the "fine" and "coarse" modes when MAVEN is in the solar wind and the Martian magnetosheath, respectively (Halekas et al. 2015). The mode transition can lead to variations in the obtained plasma moment data. The changes in the measured solar wind velocity caused by the mode transition will contaminate the calculation of the solar wind deceleration upstream of the Martian bow shock if SWIA happens to switch its operating mode in the solar wind. These events need to be carefully excluded from our analysis. The moment parameters with a time resolution of 4 s from SWIA and magnetic field data with a resolution of 1 s from MAG between 2015 January 1 and 2019 December 31 were used in the present study. Each orbit of the spacecraft was divided into two segments: the outbound segment from the periapsis to the apoapsis and the inbound segment from the apoapsis to the periapsis. The data of different segments were then selected according to the following criteria: (1) the spacecraft has flown in the upstream solar wind for a duration greater than 70 minutes; (2) there is no SWIA mode switch within the interval for the subsequent solar wind deceleration analysis; (3) there are no abrupt solar wind velocity drops caused by interplanetary shocks or other transient phenomena; and (4) the apoapsis angle (the angle between the MAVEN apoapsis direction and the x-direction in the Mars Solar Orbital (MSO) coordinates) should be less than 45°. Note that the apoapsis angle varies with the MAVEN orbital precession. When the apoapsis angle is small, MAVEN moves a relatively long distance along the x-direction of the MSO coordinates during the orbital segment. Since the solar wind flows mostly along the negative x-direction, the solar wind deceleration due to the PUI mass loading would be more pronounced when the apoapsis angle is small. The above selection criteria yielded a total of 310 segments of data for the subsequent statistical analysis. 
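The following minimal Python sketch (hypothetical segment records, field names, and a 50 km/s jump threshold of my choosing; not the actual MAVEN processing pipeline) illustrates how the four selection criteria above could be applied to candidate orbital segments.

import numpy as np

def apoapsis_angle_deg(apoapsis_mso):
    """Angle between the apoapsis direction and the +x MSO axis, in degrees."""
    r = np.asarray(apoapsis_mso, dtype=float)
    return np.degrees(np.arccos(r[0] / np.linalg.norm(r)))

def keep_segment(seg):
    """Apply the four selection criteria to one candidate orbital segment.

    `seg` is a hypothetical dict with keys:
      'sw_minutes'   - time spent in the upstream solar wind (minutes)
      'mode_switch'  - True if SWIA switched fine/coarse mode in the solar wind
      'speed'        - solar wind speed time series (km/s) in the solar wind interval
      'apoapsis_mso' - apoapsis position vector in MSO coordinates
    """
    if seg['sw_minutes'] <= 70:                    # criterion (1)
        return False
    if seg['mode_switch']:                         # criterion (2)
        return False
    speed = np.asarray(seg['speed'], dtype=float)  # criterion (3): no abrupt drops
    if np.any(np.abs(np.diff(speed)) > 50.0):      # 50 km/s jump threshold (assumed)
        return False
    return apoapsis_angle_deg(seg['apoapsis_mso']) < 45.0   # criterion (4)

# Example candidate segment (placeholder values).
segment = {'sw_minutes': 85, 'mode_switch': False,
           'speed': 400 + 5 * np.random.randn(1200),
           'apoapsis_mso': [5000.0, 2000.0, 1500.0]}
print(keep_segment(segment))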
Figure 1 illustrates two example cases to demonstrate how the solar wind deceleration is quantified from the MAVEN measurements in the present study. The left and right columns present one outbound segment and one inbound segment, respectively. The first and second rows give the total magnetic field (Figures 1(a) and (d)) and the solar wind speed (Figures 1(b) and (e)) in terms of universal time. Figures 1(c) and (f) further display the solar wind speed during the 70 minute intervals after and before the shock crossing for the outbound and inbound cases, respectively. The moment of shock crossing is defined as when B = (B_u + B_d)/2, where B is the total local magnetic field measured, and B_u and B_d represent the average upstream and downstream magnetic fields. In the present study, B_u and B_d are computed as the magnetic fields averaged over 10-25 minutes upstream and 5-15 minutes downstream of the first shock overshoot, respectively. Note that the numbers in the first line below the horizontal axis represent the relative times with respect to the moment of shock crossing, and the relative time has been flipped to positive values for the inbound case shown in Figure 1(f) for convenience. The numbers in the second line below the horizontal axis still give the original universal times. Additionally, to mitigate the influence of short-period fluctuations in the solar wind speed, the speed data were further smoothed with a 1 minute (15 data points) averaging window in Figures 1(c) and (f). Both Figures 1(c) and (f) demonstrate a gradual deceleration of the solar wind as it approaches the Martian bow shock. Statistical Results Since the solar wind speed far upstream is certainly not a constant and varies by itself in reality, the solar wind speeds obtained for different orbital segments (as shown in Figures 1(c) and (f)) have been averaged over all the 310 segments selected to make the solar wind deceleration signal of interest better stand out. The solid black line in Figure 2 shows the resultant average solar wind speed versus the relative time (flipped to positive values for the inbound segments). In the study of Kotova et al. (1997), the solar wind deceleration has been quantified as Δv/v_r = (v_r − v_s)/v_r, where v_r is the undisturbed solar wind speed far upstream (defined as the average speed between 20 and 50 minutes of the relative time) and v_s is the speed at the moment of shock crossing. The average solar wind deceleration calculated according to this definition is ∼7.8% in our data set. This value is slightly larger than but consistent with the numbers given by Kotova et al. (1997) based on Phobos 2 observations. It is important to note that the solar wind deceleration defined in Kotova et al. (1997) contains the solar wind slowing down inside the magnetic foot of the Martian bow shock, which is more due to the shock dynamics related to the shock-reflected ions rather than the PUI mass loading (Woods 1971). This probably explains why the PUI mass loading estimated in the previous studies (e.g., Zhang et al. 2006) was not sufficient to account for the solar wind deceleration calculated this way. The width of the shock foot varies with several plasma parameters. Generally, a smaller shock normal angle (the angle between the shock surface normal vector and the upstream background magnetic field) allows the shock-reflected ions to return further upstream, resulting in a wider foot (Balikhin & Gedalin 2022). Additionally, shock-reflected ions with larger gyroradii can move further away from the shock front (Liu et al. 2022), so the width of the foot is also influenced by the upstream ion temperature. For the Martian bow shock, MAVEN observations indicate that the shock foot width is smaller than the upstream local proton convected gyroradius (r_ci = v_sw/ω_ci, where v_sw is the solar wind speed and ω_ci is the upstream proton cyclotron frequency) when the shock is quasi-perpendicular (Burne et al. 
2021). In order to focus on the solar wind deceleration related to the PUI mass loading, the present study chooses to define the solar wind deceleration as the difference between the average speeds during 65-70 minutes and 20-25 minutes of the relative time with respect to the shock crossing moment. The relative time interval of 20-25 minutes has been chosen to ensure that MAVEN is at a distance exceeding r_ci upstream of the bow shock for most of the data segments. With the new definition, the average solar wind deceleration is approximately 0.7% of the undisturbed solar wind speed. This is significantly smaller than the values given by Kotova et al. (1997), but consistent with the weak mass-loading results revealed by Halekas et al. (2017) in terms of the observed solar wind deflection. On the other hand, it needs to be clarified that the solar wind deceleration value would vary if one chooses different relative time intervals to evaluate the solar wind deceleration. This is indeed expected in the mass-loading mechanism because the mass-loading effect accumulates along the solar wind streamline. Momentum Conservation and Gas Dynamic Models In the mass-loading mechanism, neutral atoms initially at rest (in the MSO reference frame) are ionized and then picked up by the solar wind, thereby gaining momentum. Due to the conservation of momentum, the solar wind must lose momentum and thus slows down. The law of conservation of momentum is first used to estimate the solar wind velocity change from the PUI mass loading. The calculation starts with the undisturbed solar wind of a certain density n_sw,0 at x = 5 R_M in the MSO coordinates, where R_M is the Mars radius. PUIs are then gradually generated in the solar wind as it flows along the negative x-direction with an initial speed of v_sw,0. The solar wind region upstream of the Martian bow shock is numerically divided into uniform cubes of 20 km × 20 km × 20 km. For each cube, the incoming flow exchanges momentum with the PUIs newly generated inside the cube, and they then leave the cube with the same bulk flow velocity. Thus, the law of conservation of momentum leads to

m_sw n_sw,0 v_sw,0 = (m_sw n_sw,i + Σ_s m_PUI^s n_PUI,i^s) v_sw,i.   (1)

Here, m_sw is the mass of the solar wind proton, n_sw,i and v_sw,i are the solar wind density and velocity flowing out of the ith cube, n_PUI,i^s is the PUI density of species s in the ith cube, m_PUI^s is the PUI mass of species s, and the summation is over the different PUI species involved. It should be noted that the calculation assumes an instantaneous momentum exchange between the newborn PUIs in a cube and the incoming plasma flow. As will be further discussed in Section 5, this assumption is overly simplistic but should provide a reasonable upper limit for the solar wind deceleration caused by the PUI mass loading. PUIs in the solar wind upstream of the Martian bow shock come from the neutral atoms in the Martian corona after they are ionized through photoionization, charge exchange, and electron impact. The PUI number density of species s newly generated in the ith cube is given by

Δn_PUI,i^s = n_neu,i^s (f_ph^s + f_ex,i^s + f_el^s) Δx / v_sw,i,   (2)

where n_neu,i^s is the neutral corona density of species s in the cube, Δx = 20 km is the size of the cube (along the x-axis), and f_ph^s, f_ex,i^s, and f_el^s are the photoionization, charge exchange, and electron impact frequencies, respectively. Only hydrogen and oxygen PUIs are considered in the present calculation because previous studies have shown that the mass loading is mainly contributed by these two species upstream of the Martian bow shock (e.g., Kotova et al. 
1997). The hydrogen and oxygen corona densities are taken from Modolo et al. (2016). As shown in Figure 3(a), they decrease rapidly with altitude and differ between solar maximum and minimum. In addition, the photoionization frequency also changes between solar maximum and minimum. In the present calculation, the photoionization frequency values adopted for solar maximum and solar minimum are those given by Modolo et al. (2005). Moreover, the charge exchange between the solar wind protons and the neutrals in the Martian upper atmosphere leads to the increase of n_PUI,i^s (and the decrease of n_sw,i). The charge exchange frequency is given by f_ex,i^s = σ_s n_sw,i v_sw,i, where σ_s represents the charge exchange cross section of species s. In this study, the charge exchange cross sections for oxygen and hydrogen are taken as 8 × 10^−16 cm^2 and 2 × 10^−15 cm^2, respectively (Stebbings et al. 1964; Mott & Massey 1965). Finally, the electron impact frequency is usually much lower than the other ionization frequencies. In the upstream region of the Martian bow shock, the electron impact frequencies of oxygen and hydrogen are typically in the range of 10^−9-10^−7 s^−1 (Cravens et al. 1987), and the present study assumes a value of 10^−8 s^−1 for both species. Equation (2) can be summed over the cubes along a streamline of the solar wind, which is simply along the negative x-direction in the present study, to get the PUI density in a certain cube. The PUI density obtained can then be fed to Equation (1) to calculate the solar wind velocity v_sw,i. The calculation is performed for all the cubes upstream of the Martian bow shock, and Figure 3(b) presents the solar wind velocity calculated with n_sw,0 = 2.5 cm^−3, v_sw,0 = 400 km s^−1, and other parameters corresponding to the solar minimum conditions. Since the solar wind has been simply assumed to flow along the negative x-direction and the neutral densities vary only with the altitude above the surface of Mars, the resultant solar wind deceleration due to mass loading is strictly axisymmetric about the x-axis through the subsolar point. Therefore, Figure 3(b) only displays the result in the x-z plane. The white area is the region downstream of the Martian bow shock, whose location is provided by Gruesbeck et al. (2018). As expected, the maximum solar wind deceleration occurs immediately upstream of the bow shock. Interestingly, the maximum deceleration values are similar (∼2%) at the subsolar location and in the flank regions. Although the deceleration rate (the deceleration over a certain distance along the negative x-direction) is most significant at the subsolar location, the deceleration occurs over a longer distance in the flank regions. On the other hand, the maximum deceleration of 2% in Figure 3(b) seems larger than the value of 0.7% when the deceleration is defined as the difference between the average speeds during 65-70 minutes and 20-25 minutes of the relative time with respect to the shock crossing moment (as described in Section 3). This is because 20-25 minutes of the relative time corresponds to regions at some distance from the bow shock upstream, so the deceleration ratio obtained is naturally smaller. 
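The cube-by-cube calculation of Equations (1) and (2) can be sketched as follows. This Python fragment is a simplified, hypothetical implementation: the exponential corona profiles, scale heights, and photoionization frequencies are placeholder values standing in for the Modolo et al. profiles and frequencies (which are not reproduced here), and the geometry is reduced to a single subsolar streamline along −x.

import numpy as np

M_P = 1.67e-27                       # proton mass [kg]
R_M = 3390e3                         # Mars radius [m]
DX = 20e3                            # cube size along x [m]

# Placeholder exponential corona profiles n(h) = n0 * exp(-h/H); the real calculation
# uses the Modolo et al. (2016) hydrogen and oxygen density profiles.
CORONA = {                           # species: (n0 [m^-3], scale height [m], ion mass [kg])
    "H": (1e11, 2.0e6, 1 * M_P),
    "O": (1e10, 6.0e5, 16 * M_P),
}
F_PH = {"H": 5e-8, "O": 2e-7}        # photoionization frequencies [s^-1] (assumed values)
F_EL = 1e-8                          # electron impact frequency [s^-1]
SIGMA_EX = {"H": 2e-19, "O": 8e-20}  # charge exchange cross sections [m^2] (= 2e-15, 8e-16 cm^2)

def mass_load_streamline(n_sw0=2.5e6, v_sw0=400e3, x_start=5 * R_M, x_stop=1.6 * R_M):
    """March 20 km cubes along -x on the subsolar line, applying Eqs. (1) and (2)."""
    x = np.arange(x_start, x_stop, -DX)
    v = np.empty_like(x)
    n_sw, v_i = n_sw0, v_sw0
    pui_mass = 0.0                               # accumulated PUI mass density [kg m^-3]
    flux0 = M_P * n_sw0 * v_sw0                  # undisturbed momentum density of the flow
    for i, xi in enumerate(x):
        h = xi - R_M                             # altitude on the subsolar line
        for s, (n0, H, m_s) in CORONA.items():
            n_neu = n0 * np.exp(-h / H)
            f_ex = SIGMA_EX[s] * n_sw * v_i      # charge exchange frequency
            n_pui_new = n_neu * (F_PH[s] + f_ex + F_EL) * DX / v_i   # Eq. (2)
            pui_mass += m_s * n_pui_new
            if s == "H":
                n_sw = max(n_sw - n_pui_new, 0.0)   # protons lost to charge exchange
        v_i = flux0 / (M_P * n_sw + pui_mass)       # Eq. (1), cumulative form
        v[i] = v_i
    return x, v

x, v = mass_load_streamline()
print(f"speed decrease along the subsolar streamline: {100 * (1 - v[-1] / v[0]):.2f} %")

With these placeholder profiles the sketch yields a deceleration of order a few percent at the bow shock, which is the right order of magnitude compared with Figure 3(b), but the exact number depends entirely on the assumed corona densities and frequencies.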
In order to be better compared with the MAVEN measurements, the solar wind velocity along the MAVEN trajectory needs to be derived. However, the undisturbed solar wind density (n_sw,0) and velocity (v_sw,0) first need to be determined for each of the 310 segments of data. In this regard, the most upstream point (with the largest x-value in the MSO coordinates) of the MAVEN trajectory during a certain data segment is identified. With the observed solar wind density and velocity at this location, Equation (1) is then used to trace backward along the solar wind streamline to the location of x = 5 R_M to get n_sw,0 and v_sw,0. Using the n_sw,0 and v_sw,0 derived, the solar wind velocities at the other locations along the MAVEN trajectory during this data segment can be calculated, similar to how the result shown in Figure 3(b) has been obtained. Figure 3(c) shows the calculation result for the segment of 15:24:57-16:34:57 on 2019 March 24 as an example. Here the black curve represents the spacecraft trajectory, and the color indicates the solar wind speed variation upstream. The same calculation illustrated by Figure 3(c) has been done for all the 310 data segments. The solar wind velocities obtained, after being averaged over all the segments according to the relative time with respect to the shock crossing moment, are shown as the two blue lines in Figure 2. The solid and dashed blue lines correspond to the solar maximum and minimum conditions, respectively. The 5 yr data period from 2015 to 2019 falls between solar maximum and minimum (Petrovay 2020; Courtillot et al. 2021). Therefore, as expected, the black line obtained from observation lies between the solid and dashed blue lines except between 0 and 10 minutes of the relative time. Thus, the mass-loading mechanism can largely explain the solar wind deceleration observed. The deviation from the theoretical results given by the simple conservation of momentum between 0 and 10 minutes of the relative time is not surprising, because the solar wind deceleration there is more due to the shock dynamics related to the shock-reflected ions rather than the PUI mass loading. On the other hand, the comparison of the two blue curves with the black curve between 20 and 70 minutes of the relative time suggests that the observation result (the black curve) agrees better with the theoretical result under the solar minimum conditions. This could be due to the assumption of instantaneous momentum exchange in the momentum conservation model, which likely overestimates the solar wind deceleration. In addition, we also utilized the gas dynamic model given by Zhang et al. (2006) to estimate the solar wind deceleration caused by mass loading and compared the results with the simple momentum conservation model. The gas dynamic method still assumes that momentum exchange occurs instantaneously but takes into account the changes in energy and pressure of the plasma fluid during the mass-loading process. The calculation details of this model are consistent with the momentum conservation model described earlier, but the ratio of the solar wind speeds flowing into and out of a certain cube is determined by Equation (4) in Zhang et al. 
(2006; with the adiabatic index γ = 5/3 in our calculation). The results are shown in Figure 2 as the solid and dashed red lines, corresponding to the solar maximum and minimum conditions, respectively. Similar to the results from the momentum conservation model, the general trend matches well with the observation result. However, compared to the momentum conservation model, the gas dynamic model predicts higher deceleration values. Conclusion and Discussion Using the SWIA data from MAVEN between 2015 and 2019, the present study analyzes the solar wind deceleration upstream of the Martian bow shock. The observation results are consistent with previous studies in that the deceleration measured at the bow shock crossing is ∼7.8% of the undisturbed solar wind velocity. However, this deceleration value contains the solar wind slowing down inside the magnetic foot of the Martian bow shock, which is more due to the shock dynamics related to the shock-reflected ions rather than the PUI mass loading. Excluding the influence of the magnetic foot, the deceleration is approximately 0.7%. Furthermore, both the simple momentum conservation model and the gas dynamic model are used to estimate the solar wind speed changes caused by the PUI mass loading. The results demonstrate good agreement with the observation in the range where the PUI mass loading is expected to dominate the solar wind deceleration. In Section 4, the momentum exchange between the newborn PUIs and the solar wind has been assumed to complete instantaneously (or in a very short time). When the IMF in the solar wind is perpendicular to the solar wind velocity, newly ionized particles can be quickly accelerated by the motional electric field and the IMF in the solar wind, so momentum exchange can happen quickly. In the case that the IMF is parallel to the solar wind velocity, the momentum exchange between the solar wind and the newborn ions can only be achieved gradually through wave-particle interactions. This process takes minutes or tens of minutes (Cowee & Gary 2012; Cowee et al. 2012; Cheng et al. 2023), longer than the time it takes for the solar wind to transit the region upstream of the Martian bow shock. In the more general situation that the IMF is at an angle to the solar wind velocity, the exchange of momentum perpendicular to the IMF might occur quickly, while the momentum exchange in the parallel direction takes time to complete. This implies that the real solar wind deceleration caused by the PUI mass loading would be smaller than the results presented in Section 4. Indeed, Dubinin et al. (2000b) have provided a correction factor for the solar wind deceleration caused by the PUI mass loading considering the IMF direction. For the typical conditions upstream of Mars, the correction factor is estimated to be ∼0.5-0.8. Including the correction factor will shift the theoretical curves in Figure 2 upwards and make the model and observational results agree better with each other. 
A newborn PUI initially at rest is first accelerated along the motional electric field in the solar wind and then gyrates around the local IMF to form a cycloidal trajectory. Over a time interval much longer than the PUI cyclotron period, the average PUI velocity is approximately the solar wind velocity (along the negative x-direction). However, since the scale of the PUI cycloidal trajectory can be large in comparison with the effective solar wind deceleration region upstream of the Martian bow shock and more PUIs are produced closer to the bow shock, the PUI velocities on average should have both a component in the negative x-direction and a component along the motional electric field. While the former can cause the solar wind to slow down in the negative x-direction, the latter is expected to lead to a lateral solar wind deflection (Halekas et al. 2017). Indeed, similar (but larger) plasma flow deflections have been shown to occur in the Martian magnetosheath and were explained using a two-fluid model with the assumption that the PUIs have velocities along the motional electric field (Dubinin et al. 2018; Romanelli et al. 2020). Such a two-fluid model should be generalized to assess the solar wind deflection and deceleration upstream of the Martian bow shock. It can probably give better results than the simple momentum conservation model used in the present study, but the assumption of the PUI velocities being along the motional electric field needs to be improved. Moreover, the role that wave-particle interactions play in causing the momentum exchange between the PUIs and the solar wind also warrants further investigation. It should be mentioned that our analysis only discusses the different solar wind decelerations under the solar maximum and minimum conditions. A series of studies using Mars Express and MAVEN observations have shown that the neutral densities in the Martian atmosphere, particularly hydrogen, are strongly modulated by seasons on Mars (Zou et al. 2011; Dong et al. 2015; Yamauchi et al. 2015; Halekas 2017). These seasonal variations also have an influence on the solar wind deceleration. Furthermore, the apoapsis altitude of the MAVEN orbit is only about 6200 km. This has forced us to derive the unperturbed solar wind density and velocity from the observation at the most upstream point of the MAVEN trajectory. In contrast, the orbit of the Tianwen-1 mission has an apoapsis altitude of 12,500 km (Zou et al. 2021). The measurements made by Tianwen-1 should be more suitable for studying the solar wind deceleration upstream of the Martian bow shock, as the solar wind deceleration caused by the PUI mass loading is a cumulative effect over long distances. 
Figure 1. Example outbound segment (left column) and inbound segment (right column): (a) and (d) the total magnetic field, (b) and (e) the solar wind speed, (c) the solar wind speed within the 70 minute interval after the shock crossing (corresponding to the time interval 03:24:04-04:35:04 in (b)), (f) the solar wind speed within the 70 minute interval before the shock crossing (corresponding to the time interval 09:37:46-08:27:46 in (e)). The horizontal axis represents the universal time in the first two rows, and the vertical red dashed lines mark the shock crossing moments. The horizontal axis in the last row is the relative time with respect to the shock crossing (the numbers in the second line still give the corresponding universal times). Note that the relative time has been flipped to positive values for the inbound segment in (f) for convenience.
Figure 2. The average solar wind speed vs. the relative time with respect to the shock crossing. The solid black line is from MAVEN observations, while the red and blue lines represent theoretical results as labeled.
Figure 3. (a) The hydrogen (black lines) and oxygen (red lines) corona density profiles adopted. The solid and dashed lines represent the results at solar maximum and minimum, respectively. (b) The solar wind velocity upstream of the Martian bow shock calculated using the law of conservation of momentum (Equation (1)) with typical parameters under the solar minimum conditions. The black curves are the contour lines and the color represents the solar wind speed according to the right color bar. (c) The solar wind velocity calculated for the MAVEN trajectory from 15:24:57 to 16:34:57 on 2019 March 24. The black curve represents the MAVEN trajectory and the color gives the solar wind velocity upstream.
2024-02-16T16:21:37.756Z
2024-02-01T00:00:00.000
{ "year": 2024, "sha1": "d16907b4a7e8bc0ecb8e3b5e4e98cd8762b36af0", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.3847/1538-4357/ad1f56/pdf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "4ca7fdca5a0e0074c178da1679f17bbfaff632d5", "s2fieldsofstudy": [ "Physics", "Environmental Science" ], "extfieldsofstudy": [ "Physics" ] }
13961566
pes2o/s2orc
v3-fos-license
Tropicalization of group representations In this paper we give an interpretation to the boundary points of the compactification of the parameter space of convex projective structures on an n-manifold M. These spaces are closed semi-algebraic subsets of the variety of characters of representations of the fundamental group of M in SL_{n+1}(R). The boundary was constructed as the tropicalization of this semi-algebraic set. Here we show that the geometric interpretation for the points of the boundary can be constructed searching for a tropical analogue to an action of the group on a projective space. To do this we need to construct a tropical projective space with many invertible projective maps. We achieve this using a generalization of the Bruhat-Tits buildings for SL_{n+1} to non-archimedean fields with real surjective valuation. In the case n = 1 these objects are the real trees used by Morgan and Shalen to describe the boundary points for the Teichmuller spaces. In the general case they are contractible metric spaces with a structure of tropical projective spaces. Introduction Let M be a closed oriented n-manifold such that π 1 (M ) is virtually centerless, it is Gromov-hyperbolic and it is torsion-free. We denote by T c RP n (M ) the parameter space of marked convex projective structures on M . If S is an orientable hyperbolic surface of finite type, we denote by T cf H 2 (S) the Teichmüller space of S. In [A2] we showed that the space T c RP n (M ) can be identified with a closed semi-algebraic subset of the character variety Char(π 1 (M ), SL n+1 (R)). Then we applied the Maslov dequantization to this semi-algebraic set (see also [A1]) and, using an inverse limit of logarithmic limit sets of this space, we constructed the tropical counterpart of T c RP n (M ). The spherical quotient of this tropical counterpart, denoted by ∂T c RP n (M ), can be glued to T c RP n (M ) "at infinity", defining a compactification T c RP n (M )∪∂T c RP n (M ) of the parameter space. The same construction applied to the Teichmüller space T cf H 2 (S) gives back the Thurston boundary ∂T cf H 2 (S). The aim of this paper is to give a geometric interpretation of the points of these tropical counterparts. We are guided by the idea that the points of the tropicalization of a parameter space should be related with the tropical counterparts of the parametrized objects. Every point of T c RP n (M ) corresponds to a conjugacy class of representations of π 1 (M ) in SL n+1 (R). Geometrically such a representation corresponds to an action of π 1 (M ) on a vector space R n+1 , or, equivalently, on a projective space RP n . In this paper we introduce the tropical counterparts of these actions, i.e. actions of a group on tropical modules and tropical projective spaces. This is the notion we propose of tropicalization of a group representation. There is a naif notion of tropical projective space, the projective quotient of a free module T n , but these spaces have few invertible projective maps, hence they have few group actions. We give a more general notion of tropical modules and, correspondingly, of tropical projective spaces. We show that these objects have an intrinsic metric, the tropical version of the Hilbert metric, that is invariant for tropical projective maps, and that the topology induced by this metric is contractible. 
Then we construct a special class of tropical projective spaces, denoted by P n , by using a generalization of the Bruhat-Tits buildings for SL n+1 to non-archimedean fields with a surjective real valuation. In the usual case of a field F with discrete valuation, Bruhat and Tits constructed a polyhedral complex of dimension n with an action of SL n+1 (F). In the case n = 1, Morgan and Shalen generalized this construction to a field with a general valuation, and they studied these objects using the theory of real trees. We extend this to general n, and we think that the proper structure to study these objects is the structure of tropical projective spaces. The paper [JSY], developed independently from this work, contains a similar approach to the Bruhat-Tits buildings. Tropical geometry is used there to study the convexity properties of the Bruhat-Tits buildings for SL n (F), for a field F with discrete valuation. With every point of the boundary we can associate a class of representations of π 1 (M ) in SL n+1 (F), where F is real closed non-archimedean field with a surjective real valuation. Every representation of π 1 (M ) in SL n+1 (F) induces an action by tropical projective maps on our tropical projective spaces P n . We compute the length spectra of these actions on P n , and we show that the length spectrum of an action identifies a boundary point in ∂T c RP n (M ). Then we use the fact that tropical projective spaces are contractible to show that for every action of the fundamental group of the manifold on a tropical projective space there exists an equivariant map from the universal covering of the manifold to the tropical projective space. This theorem can hopefully lead to interesting consequences about the interpretation of the boundary points. For example in the case n = 1, where P 1 is a real tree, the equivariant map induces a duality between actions of the fundamental group on P 1 and measured laminations on the surface (see [MS84], [MS88] and [MS88']). It would be very interesting to extend this result to the general case. For example an action of the fundamental group of the surface on a tropical projective space P n induces a degenerate metric on the surface, and this metric can be used to associate a length with each curve. Anyway it is not clear up to now how to classify these induced structures. This is closely related to a problem raised by J. Roberts (see [Oh01,problem 12.19]): how to extend the theory of measured laminations to higher rank groups, such as, for example, SL n (R). A brief description of the following sections. In section 2 we give elementary definitions of semifields, semimodules and projective spaces over a semifield, and we give some examples of semimodules. In section 3 we discuss invertibility of linear maps in T n and the tropicalizations of linear maps on a vector space F n over a non archimedean field F. With every such map f we associate a linear map f τ on T n , and we discuss the relations between f τ and (f −1 ) τ : globally they are not inverse one of the other, but this happens on a specific "inversion-domain". In section 4 we define the structure of tropical projective space we put on the generalization of the Bruhat-Tits buildings, and we give a description of this space. Tropical modules T n can be seen as the tropicalization of a vector space F n over a non-archimedean field F, but this tropicalization depends on the choice of a basis of F n . 
Our description with tropical charts, one for each basis of F n , can be interpreted by thinking of the Bruhat-Tits buildings as a tropicalization of F n with reference to all possible bases. In section 5 we define in a canonical way a metric on tropical projective spaces making tropical segments geodesics and tropical projective maps 1-Lipschitz. This metric is the transposition to tropical geometry of the Hilbert metric on convex subsets of RP n . The topology induced by this metric is shown to be contractible. Finally, in section 6 we consider a representation of a group Γ in SL n+1 (F), and we study the induced action by tropical projective maps on our Bruhat-Tits building. First we compute the length spectrum of the action with reference to the canonical metric, and, if Γ = π 1 (M ), we show how we can recover the information characterizing a boundary point. Then, by using the fact that tropical projective spaces are contractible, we show that every action of π 1 (M ) on a tropical projective space has an equivariant map from the universal cover of M to the tropical projective space. First definitions 2.1 Tropical semifields We need some linear algebra over the tropical semifield. By a semifield we mean a quintuple (S, ⊕, ⊙, 0, 1), where S is a set, ⊕ and ⊙ are associative and commutative operations S × S−→S satisfying the distributivity law, and 0, 1 ∈ S are, respectively, the neutral elements for ⊕ and ⊙. Moreover we require that every element of S * = S \ {0} has a multiplicative inverse. We will denote the inverse of a by a ⊙−1 . Given an element b ≠ 0 we can write a ⊘ b = a ⊙ b ⊙−1 . Note that 0 is never invertible and ∀s ∈ S : 0 ⊙ s = 0. A semifield is called idempotent if ∀s ∈ S : s ⊕ s = s. In this case a partial order relation is defined by a ≤ b ⇔ a ⊕ b = b. We will restrict our attention to the idempotent semifields such that this partial order is total. In this case (S \ {0}, ⊙, ≤) is an abelian ordered group. Vice versa, given an abelian ordered group (Λ, +, <), we add to it an extra element −∞ with the property ∀λ ∈ Λ : −∞ < λ, and we define a semifield T = T Λ = Λ ∪ {−∞}, with the tropical operations ⊕, ⊙ defined as a ⊕ b = max(a, b) and a ⊙ b = a + b. We will use the notation 1 T = 0, as the zero of the ordered group is the one of the semifield, and 0 T = −∞. If a ∈ T and a ≠ 0 T , then a ⊙ (−a) = 1 T . Hence −a = a ⊙−1 , the tropical inverse of a. The order on Λ ∪ {−∞} induces a topology on T that makes the operations continuous. Semifields of the form T = T Λ will be called tropical semifields. The semifield that in the literature is called the tropical semifield is, in our notation, T R . We are interested in tropical semifields because they are the images of valuations. Let F be a field, Λ an ordered group, and v : F−→Λ ∪ {+∞} a surjective valuation. Instead of using the valuation, we prefer the tropicalization map τ : F−→T, τ (x) = −v(x). The tropicalization map satisfies the properties of a norm: τ (xy) = τ (x) ⊙ τ (y), τ (x + y) ≤ τ (x) ⊕ τ (y), and τ (x) = 0 T if and only if x = 0. For every element λ ∈ T we choose an element t λ ∈ F such that τ (t λ ) = λ. We will denote the valuation ring by O = {z ∈ F | τ (z) ≤ 1 T }, its unique maximal ideal by m = {z ∈ F | τ (z) < 1 T }, its residue field by D = O/m and the projection by π : O−→D. Tropical semimodules and projective spaces Definition 2.2. Given a semifield S, an S-semimodule is a triple (M, ⊕, ⊙, 0), where M is a set, ⊕ and ⊙ are operations ⊕ : M × M −→M and ⊙ : S × M −→M ; ⊕ is associative and commutative and ⊙ satisfies the usual associative and distributive properties of the product by a scalar. 
We will also require that Note that the following properties also holds: The first follows as a⊙0⊕b = a⊙0⊕a⊙(a −1 ⊙b) = a⊙(0⊕a −1 ⊙b) = a⊙a −1 ⊙ b = b. And then the second follows as 0 Most definitions of linear algebra can be given as usual. Let S be a semifield and M a S-semimodule. A submodule of M is a subset closed for the operations. If v 1 , . . . , v n ∈ M , a linear combination of them is an element of the form c 1 ⊙ v 1 ⊕ · · · ⊕ c n ⊙ v n . If A ⊂ M is a set, it is possible to define its spanned submodule Span S (A) as the smallest submodule containing A or, equivalently, as the set of all linear combinations of elements in A. A linear map between two S-semimodules is a map preserving the operations. The image of a linear map is a submodule, but (in general) there is not a good notion of kernel. If S is an idempotent semifield, then M is an idempotent semigroup for ⊕. In this case a partial order relation is defined by Linear maps are monotone with reference to this order. Let S be a semifield and M be an S-module. The projective equivalence relation on M is defined as: x ∼ y ⇔ ∃λ ∈ S * : x = λ ⊙ y This is an equivalence relation. The projective space associated with M may be defined as the quotient by this relation: The quotient map will be denoted by π : The image by π of a submodule is a projective subspace. The linear map induces a map between the associated projective spaces provided that the following condition holds: We will denote the induced map as f : P(M )−→P(N ). Maps of this kind will be called projective maps. The condition does not imply in general that the map is injective. Actually a projective map f : P(M )−→P(M ) may be not injective nor surjective in general. Examples From now on we will consider only semimodules over a tropical semifield T = T Λ . The simplest example of T-semimodule is the free T-semimodule of rank n, i.e. the set T n where the semigroup operation is the component wise sum, and the product by a scalar is applied to every component. If x ∈ T n we will write by x 1 , . . . , x n its components: These modules inherit a topology from the order topology of the tropical semifields: the product topology on the free modules and the subspace topology on their submodules. The partial order on these semimodules can be expressed in coordinates as ∀x, y ∈ T n : x y ⇔ ∀i : Other examples are the submodules The projective space associated with T n is P(T n ) = TP n−1 , and the projective space associated with F T n is P(F T n ) = F TP n−1 . We will denote its points with homogeneous coordinates: These projective spaces inherit the quotient topology, and projective maps are continuous for this topology. TP 1 = P(T 2 ) can be identified with Λ ∪ {−∞, +∞} via the map: With this identification TP 1 inherits an order: given a = [a 1 : All tropical projective maps TP 1 −→TP 1 are never increasing or never decreasing with reference to this order. We give a name to three special points: When Λ = R, T R P n−1 may be described as an (n − 1)-simplex, whose set of vertices is {π(e 1 ), . . . , π(e n )} (e i being the elements of the canonical basis of T n ). Given a set of vertices A, the face with vertices in A is the projective subspace π(Span T (A)). F TP n−1 is naturally identified with the interior of the simplex TP n−1 . Tropical matrices As before let T = T Λ be a tropical semifield. Let e i be the element of T n having 1 as the i-th coefficient and 0 as the others. These elements form the canonical basis of T n . Let f : T n −→T m be a linear map. 
Then we can define the matrix A = [f ] = (a i j ) as a i j = (f (e j )) i . The usual properties of matrices and linear maps hold in this case: 4. There is a binary correspondence between linear maps and matrices with entries in S. 5. The matrix of the composition of two maps is the product matrix, i.e. The identity matrix, corresponding to the identity map Id T : T n −→T n , will be also denoted by Projective maps f : TP n−1 −→TP m−1 are induced by matrices mapping no non-zero vector to zero. These are precisely the matrices such that every column contains a non-zero element. Tropical linear maps are very seldom surjective. This depends on the following property: Hence a tropical linear map is surjective if and only if it has, among its columns, all the elements of the canonical basis of the codomain. Let f : T n −→T m be a linear map, with matrix [f ] = (a i j ). Suppose that every column of [f ] contains a non-zero element. We will denote by f pi : T m −→T n the map defined by: (in the previous formula, by −0 T we mean an element greater than every other element in T. This value is never the minimum, thanks to the condition on the columns). In [CGQ04] this map is called residuated map. Theorem 3.1. Let y ∈ T m . Then y ∈ Im f if and only if exists a sequence ǫ : {1, . . . , m}−→{1, . . . , n} such that Moreover we have This implies that f −1 (y) is a single point if and only if every function ǫ as before is surjective. The function f pi plays the role of a pseudo-inverse function, as it sends every point of the image in one of its pre-images, in a continuous way. It has the following properties: Proof. The point y is in the image if and only if exists x ∈ T n such that f (x) = y. Then In this case x ǫ k = y k − a k ǫ k . All the claims of the theorem follows from the calculations above. Simple tropicalization of linear maps Let F be a valued field, with tropicalization map τ : F−→T. An F-vector space F n may be tropicalized through the componentwise tropicalization map, again denoted by τ : F n −→T n . Let f : F n −→F m be a linear map, expressed by a matrix [f ] = (a i j ). Its tropicalization is the map f τ : T n −→T m defined by the matrix [f τ ] = (α i j ) = (τ (a i j )). Proposition 3.2. The following properties hold: Anyway it has the property that every column and every row contains a non-zero element, hence it has a pseudo-inverse function, and it induces a linear map F T n −→F T n , and projective maps TP n−1 −→TP n−1 and F TP n−1 −→F TP n−1 . Now let B = A −1 , the inverse of A. We will write β = B τ . We would like to see β as an inverse of α, but this is impossible, as α is not always invertible. 1. It follows from: 2. It follows from the previous statement. 3. This is equivalent to ∀i : This always holds as, from the first statement, we The reversed inequalities always holds. If α and β are tropicalizations of two maps A, B ∈ GL n (F) such that Proposition 3.4. The inversion domains have this name because of the following property: The set D αβ is a tropical submodule, and we can write explicit equations for it: Note that the matrices α and β are not one the inverse of the other, but, in the hypothesis D αβ = ∅, then ∀i : (α ⊙ β) i i = 1 T . 
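As a concrete illustration of tropical matrix algebra over T_R, the following Python sketch (my own illustrative code, not taken from the paper) implements the max-plus matrix-vector product and a residuated (pseudo-inverse) map in the standard form (A^# y)_j = min_i (y_i − a_ij), and then checks membership in the image by verifying that applying the map to the residual returns y. The identification of this min-type residuation with the map f^pi described above is my assumption; its role (sending each point of the image to a preimage) matches the text.

import numpy as np

def trop_matvec(A, x):
    """Max-plus (tropical) matrix-vector product: (A @ x)_i = max_j (A_ij + x_j)."""
    return np.max(A + x[None, :], axis=1)

def residuated(A, y):
    """Residuated (pseudo-inverse) map: (A^# y)_j = min_i (y_i - A_ij)."""
    # Entries equal to the tropical zero (-inf) are not handled in this toy sketch.
    return np.min(y[:, None] - A, axis=0)

def in_image(A, y):
    """y lies in the tropical image of A iff A applied to its residual gives back y."""
    return np.array_equal(trop_matvec(A, residuated(A, y)), y)

A = np.array([[0.0, 3.0],
              [2.0, 1.0]])

y1 = trop_matvec(A, np.array([5.0, 1.0]))   # a point of the image by construction
y2 = np.array([0.0, 5.0])                   # an arbitrary target vector

print(y1, in_image(A, y1))   # [5. 7.] True
print(y2, in_image(A, y2))   # False: y2 is not in the image of A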
The map β|_{D_αβ} is the composition of a permutation of coordinates and a tropical dilatation: there exist a diagonal matrix d and a permutation matrix σ such that β|_{D_αβ} = (d ⊙ σ)|_{D_αβ}.

4 Tropical projective structure on Bruhat-Tits buildings

Definition

Given a non-archimedean field F with a surjective real valuation, we are going to construct a family of tropical projective spaces we will call P^{n-1}(F), or simply P^{n-1} when the field is well understood. This family arises as a generalization of the Bruhat-Tits buildings for SL_n to non-archimedean fields with surjective real valuation. In the usual case of a field with integral valuation, Bruhat and Tits constructed a polyhedral complex of dimension n − 1 with an action of SL_n(F). In the case n = 2, Morgan and Shalen generalized this construction to a field with a general valuation, and they studied these objects using the theory of real trees. We want to extend this to general n, and we think that the proper structure to study these objects is the structure of tropical projective spaces.

Let V = F^n, an F-vector space of dimension n and an infinitely generated O-module. We consider the natural action GL_n(F) × V → V. Every finitely generated O-submodule L of V is free: if a minimal set of generators e_1, ..., e_m satisfied a relation Σ a_i e_i = 0, we could suppose τ(a_1) ≤ ... ≤ τ(a_m), and then the elements b_i = a_i/a_m would lie in O and express e_m in terms of the others, contradicting minimality; hence the generators are F-independent. An element of L is an O-linear combination of {e_1, ..., e_m} because they are generators, and the linear combination is unique because they are F-independent. Hence L is free.

If L is a finitely generated O-submodule of V, its rank is a number from 0 to n. We denote by U_n(F) (or simply U_n) the set of all O-lattices of V = F^n, and by F U_n(F) (or simply F U_n) the subset of all maximal O-lattices together with the O-lattice {0}. U_n and F U_n can be turned into T-semimodules by means of the natural operations L ⊕ M = L + M (the O-lattice generated by L ∪ M) and λ ⊙ L = t^λ · L, where t^λ denotes any element of F with τ(t^λ) = λ. The associated tropical projective spaces will be denoted by P(U_n(F)) = P^{n-1}(F) and P(F U_n(F)) = F P^{n-1}(F). We will simply write P^{n-1} and F P^{n-1} when the field F is understood.

As we said there is a natural action GL_n(F) × V → V. Every element A ∈ GL_n(F) sends O-lattices to O-lattices, hence we have an induced action GL_n(F) × U_n → U_n. This action preserves the rank of a lattice, and in particular it sends F U_n into itself. Among the O-lattices of the same rank this action is transitive; for example there exists an A ∈ GL_n(F) sending every maximal O-lattice of V to the standard lattice O^n ⊂ V. Hence the group SL_n(F) acts naturally on U_n and F U_n by tropical linear maps, and on P^{n-1} and F P^{n-1} by tropical projective maps.

Description

Let E = (e_1, ..., e_n) be a basis of V. We denote by ϕ_E: T^n → U_n the map ϕ_E(y) = ϕ_E(y_1, ..., y_n) = I_{y_1} e_1 + ... + I_{y_n} e_n = Span_O(t^{y_1} e_1, ..., t^{y_n} e_n), where I_y = {a ∈ F : τ(a) ≤ y}.

Proposition 4.4. Let ⟨e_1, ..., e_m⟩ be an O-basis of an O-lattice L, and let p_i ∈ F. Then Span_O(p_1 e_1, ..., p_m e_m) depends only on the values τ(p_1), ..., τ(p_m).

Proof. It follows from the properties of valuations.

This proposition implies that ϕ_E is injective and ϕ_E(F T^n) ⊂ F U_n. For every basis E we have a different map ϕ_E. The union of the images of all these maps is the whole U_n, and the union of all the sets ϕ_E(F T^n) is equal to F U_n. We will call the maps ϕ_E tropical charts for U_n. Theorem 4.7 will justify this name. Note that the charts respect the partial order relations on T^n and on U_n: x ⪯ y ⇔ ϕ(x) ⊂ ϕ(y).

Lemma 4.5. Let L, M ⊂ V be two O-lattices, and suppose that L is maximal. Then there is a basis v_1, ..., v_n of L and scalars a_1, ..., a_n ∈ F such that a_1 v_1, ..., a_n v_n is a basis of M.
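A small sketch of the chart ϕ_E restricted to "diagonal" lattices may help here. The representation below is ours: a lattice Span_O(t^{y_1}e_1, ..., t^{y_n}e_n) is stored as its exponent vector y in a fixed basis E, assuming τ(t^y) = y, so the semimodule operations on U_n reduce to max-plus operations on exponents.

```python
# Diagonal O-lattices in a fixed basis, stored as exponent vectors.
def lat_sum(y, z):
    # L ⊕ M = L + M: t^a O + t^b O = t^{max(a,b)} O (larger τ = larger lattice)
    return [max(a, b) for a, b in zip(y, z)]

def lat_scale(lam, y):
    # λ ⊙ L = t^λ · L: every generator gets multiplied by t^λ
    return [lam + a for a in y]

def lat_contains(y, z):
    # ϕ_E(z) ⊆ ϕ_E(y)  ⇔  z_i ≤ y_i for all i (the order-compatibility above)
    return all(b <= a for a, b in zip(y, z))

y, z = [0.0, 2.0, -1.0], [1.0, 1.0, 3.0]
s = lat_sum(y, z)
assert lat_contains(s, y) and lat_contains(s, z)   # the sum is the join
assert lat_sum(lat_scale(2.0, y), lat_scale(2.0, z)) == lat_scale(2.0, s)
# On this chart ϕ_E is thus tropically linear and order-preserving.
```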
Corollary 4.6. Given two points x, y ∈ U_n, there is a tropical chart containing both of them in its image.

Given two bases E = (e_1, ..., e_n) and F = (f_1, ..., f_n), we have two charts ϕ_E, ϕ_F, and we want to study the intersection of their images. We set I_E = ϕ_E^{-1}(Im ϕ_E ∩ Im ϕ_F) and I_F = ϕ_F^{-1}(Im ϕ_E ∩ Im ϕ_F), and we want to describe the sets I_F, I_E and the transition functions ϕ_FE = ϕ_F^{-1} ∘ ϕ_E: I_E → I_F and ϕ_EF = ϕ_E^{-1} ∘ ϕ_F: I_F → I_E. The transition matrices between E and F are denoted by A = (a^i_j) and B = A^{-1} = (b^i_j). We will write α = A^τ and β = B^τ, i.e. α = (α^i_j) = (τ(a^i_j)) and β = (β^i_j) = (τ(b^i_j)).

Theorem 4.7 (Description of the tropical charts). We have that I_F = D_αβ and I_E = D_βα, the inversion domains described in proposition 3.4. Moreover ϕ_FE = α|_{I_E} and ϕ_EF = β|_{I_F}, the tropicalizations of the transition matrices.

Proof. First, one proves that ϕ_F(α(x)) ⊇ ϕ_E(x) and ϕ_E(β(y)) ⊇ ϕ_F(y) for all x, y. The map ϕ_E is injective, hence, given a fixed x, if a y with ϕ_F(y) = ϕ_E(x) exists it has to be unique, and the interval in which its coordinates are free to vary must degenerate to a single point; this forces y = α(x) with x = β(y). We can prove the symmetric equalities by reversing the roles of E and F, and then conclude by looking at ϕ^{-1}.

5 Tropical projective spaces as metric spaces

Finitely generated semimodules

Free semimodules have the usual universal property: let M be a T-semimodule and v_1, ..., v_n ∈ M. Then there is a linear map T^n → M defined by (c_1, ..., c_n) ↦ c_1 ⊙ v_1 ⊕ ... ⊕ c_n ⊙ v_n. This map sends e_i to v_i and its image is Span_T(v_1, ..., v_n). Hence every finitely generated T-semimodule is the image of a free T-semimodule. In the following we will need some properties of finitely generated semimodules over T_R, so for this section we will suppose T = T_R.

First we want to discuss a pathological example we prefer to neglect. Consider the following equivalence relation on T^2: (x_1, x_2) ∼ (y_1, y_2) if and only if either x_1 < x_2, y_1 < y_2 and x_2 = y_2, or x_1 ≥ x_2, y_1 ≥ y_2 and x_1 = y_1. The quotient by this relation will be denoted by B. If a ∼ a′ and b ∼ b′, then a ⊕ b ∼ a′ ⊕ b′ and λ ⊙ a ∼ λ ⊙ a′. Hence the operations ⊕, ⊙ induce operations on B, turning it into a finitely generated T-semimodule. We will denote the equivalence classes in the following way: if (x_1, x_2) satisfies x_1 < x_2 we will denote its class by [(·, x_2)]; if x_1 ≥ x_2 we will denote its class by [(x_1, ·)]. The ⊙ operation acts as λ ⊙ [(x_1, ·)] = [(λ ⊙ x_1, ·)], and analogously for the other classes. The ⊕ operation acts as [(x_1, ·)] ⊕ [(·, y_2)] = [(x_1, ·)] if x_1 ≥ y_2 and [(·, y_2)] otherwise; on classes of the same kind it acts by taking the maximum of the marked coordinates. If we put on the quotient a topology making the projection continuous, then the point [(x_1, ·)] is not closed, as its closure must contain the point [(·, x_1)].

We define a T-semimodule to be separated if it does not contain any submodule isomorphic to B. We will see in the following section that every separated T-semimodule has a natural metrizable topology making all linear maps continuous. Examples of separated T-semimodules are all free semimodules (as there exists no submodule in T^n whose associated projective space has exactly two points) and the semimodules U_n (as every two points in U_n are in the image of the same tropical chart, hence in a submodule isomorphic to T^n).

Lemma 5.1. Let M be a T-semimodule and let f: T^2 → M be a linear map such that f(x_1, x_2) = f(y_1, x_2) = m for some y_1 ≤ x_1. Then f(y, x_2) = m for every y ≤ x_1.

Proof. Case 1): If y_1 ≤ y ≤ x_1, then (y_1, x_2) ⪯ (y, x_2) ⪯ (x_1, x_2); linear maps are monotone with reference to ⪯, hence m ⪯ f(y, x_2) ⪯ m. Case 2): If y = y_1 − (x_1 − y_1), consider the points a = (y_1 − x_1) ⊙ (x_1, x_2) and their sums with (y, x_2); a short computation reduces this case to case 1. Case 3): General case. Iterating the proof of case 2 we can prove the lemma for y = y_1 − n(x_1 − y_1); then by case 1 we can extend the result to every y.

Corollary 5.3. Let M be a separated T-semimodule, f: TP^1 → P(M) a projective map and p ∈ P(M). Then f^{-1}(p) is either empty, a closed segment, or the whole TP^1 (the last case occurring only if f is constant).

Proof. The map f is associated with a linear map f̄: T^2 → M.
There exist lifts x̄, ȳ ∈ T^2 such that f̄(x̄) = f̄(ȳ) = p̄, where p̄ is a lift of p. Now: Case 1) If x̄ ⪯ ȳ, then one of their coordinates is equal: otherwise there is a scalar λ < 1_T such that x̄ ⪯ λ ⊙ ȳ, and then p̄ ⪯ λ ⊙ p̄, a contradiction. Then we can apply the previous lemma, and we have that f(z) = p for every z ⪯ y. Case 2) If ȳ ⪯ x̄, as before we have f(z) = p for every z ⪰ x. Case 3) If they are not comparable, then both are smaller than their sum x̄ ⊕ ȳ, and f̄(x̄ ⊕ ȳ) = p̄. Then, by the previous cases, we have f(z) = p for every z ∈ TP^1.

Suppose that M is a separated T-semimodule, f̄: T^n → M is a linear map and f: TP^{n-1} → P(M) is the induced projective map. As usual we denote by e_1, ..., e_n the points of the canonical basis of T^n, and we put v_i = f̄(e_i) ∈ M. We want to describe the set V_i = f^{-1}(π(v_i)). It is enough to describe V_1. As Span_T(e_j, e_1) is isomorphic to T^2, we know that S_j = V_1 ∩ π(Span_T(e_j, e_1)) is a closed initial segment of π(Span_T(e_j, e_1)), with extreme point π(w_j). We can suppose that w_j = a_j ⊙ e_j ⊕ e_1.

Lemma 5.4. The set V_1 is a closed subset of TP^{n-1} admitting an extremal point: there is a point h_1 = e_1 ⊕ a_2 ⊙ e_2 ⊕ ... ⊕ a_n ⊙ e_n such that π(h_1) is an extremal point of V_1. Moreover, the restriction of f̄ to the submodule Span_T(h_i, h_j) is injective.

Definition of the metric

Any convex subset C of a real projective space RP^n has a well defined metric, the Hilbert metric. This metric is defined by using cross-ratios: if x, y ∈ C, the projective line through x and y intersects ∂C in two points a, b. The distance is then defined as d(x, y) = ½ log[a, x, y, b] (order chosen such that ax ∩ yb = ∅). If C, D are convex subsets of RP^n and f: C → D is the restriction of a projective map, then d(f(x), f(y)) ≤ d(x, y). In particular any projective isomorphism f: C → C is an isometry. Moreover this metric has straight lines as geodesics. See [Ki01] for a reference on the Hilbert metric in relation with projective structures.

We can give an analogous definition for separated tropical projective spaces over T_R. In the following we will assume Λ = R and T = T_R. If M is a separated T-semimodule there is a canonical way of defining a distance on P(M).

Let T be a tropical semifield and let a = [a_1 : a_2], b = [b_1 : b_2], d = [d_1 : d_2] ∈ TP^1 with a ⪯ b ⪯ d. There is a unique tropical projective map A satisfying A(0_T) = a, A(1_T) = b, A(∞_T) = d; this map is described by a 2 × 2 matrix whose entries are determined, up to a scalar, by the three conditions above.

Consider a tropical projective map B: TP^1 → TP^1 such that B(0_T) = b and B(∞_T) = c. This map is described by a matrix of the form (1_T, m; b, c ⊙ m) for a parameter m ∈ T*. The inverse images B^{-1}(b) and B^{-1}(c) are, respectively, an initial segment and a final segment of TP^1 with reference to the order of TP^1. These segments have extremal points, b_0 and c_0 respectively, and the restriction of B to the interval [b_0, c_0] is an isometry onto its image. When we define the Hilbert metric we don't need to take logarithms, as coordinates in tropical geometry are already in logarithmic scale. Hence the Hilbert metric on TP^1 is simply the Euclidean metric of Λ ∪ {−∞, +∞}: d([a_1 : a_2], [b_1 : b_2]) = |(a_2 − a_1) − (b_2 − b_1)|.

This definition can be extended to every separated tropical projective space P(M). If a, b ∈ P(M), we can choose two lifts ā, b̄ ∈ M. Then there is a unique linear map f̄: T^2 → M such that f̄(e_1) = b̄, f̄(e_2) = ā. The induced projective map f: TP^1 → P(M) sends 0_T to a and ∞_T to b. By corollary 5.3 the sets f^{-1}(a) and f^{-1}(b) are closed segments, with extremal points a_0, b_0. We can define the distance as d(a, b) = d(a_0, b_0). It is easy to verify that this definition does not depend on the choice of the lifts ā, b̄.
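The behaviour of a projective map TP^1 → TP^1, including the flat initial and final segments with extremal points b_0 and c_0 used in the definition of the metric, can be computed explicitly. Here is a sketch, assuming a 2 × 2 matrix with finite entries and the identification [a_1 : a_2] ↦ a_2 − a_1 of TP^1 with R ∪ {±∞}; the closed-form expressions for b_0 and c_0 are derived by us from that identification.

```python
# A tropical "Moebius" map on TP^1 given by a 2x2 matrix M = [[m11, m12],
# [m21, m22]]: the point t lifts to (0, t), and the image is read off again.
from math import inf

def moebius(M, t):
    (m11, m12), (m21, m22) = M
    if t == -inf:
        return m21 - m11                        # B(0_T) = b
    if t == inf:
        return m22 - m12                        # B(∞_T) = c
    return max(m21, m22 + t) - max(m11, m12 + t)

M = [[0.0, -2.0],
     [1.0,  3.0]]
b, c = moebius(M, -inf), moebius(M, inf)
# B is constant = b for t ≤ b0 and constant = c for t ≥ c0, where:
b0 = min(M[1][0] - M[1][1], M[0][0] - M[0][1])
c0 = max(M[1][0] - M[1][1], M[0][0] - M[0][1])
assert moebius(M, b0) == b and moebius(M, b0 - 5) == b   # initial segment
assert moebius(M, c0) == c and moebius(M, c0 + 5) == c   # final segment
assert moebius(M, 0.0) - b == 0.0 - b0   # translation (isometry) on [b0, c0]
```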
Now we have to verify the triangular inequality, but it is more comfortable to give an example first. For the projective spaces associated with the free modules we can calculate this distance explicitly. It is a well known distance: the Hilbert metric on the simplex in logarithmic coordinates.

Proposition 5.5. Let x, y ∈ TP^{n-1}. Then, for all lifts x̄, ȳ ∈ T^n: d(x, y) = max_i (x̄_i − ȳ_i) + max_i (ȳ_i − x̄_i).

Proof. The map f̄ as above is defined in this case by the 2 × n matrix whose columns are ȳ and x̄. A point f̄(s, t) is equal to y exactly when, for every i, the maximum defining its i-th coordinate is attained on the ȳ-column; writing out these conditions and the symmetric ones for x, the extremal points a_0, b_0 can be computed, and by changing signs inside the absolute value we have the thesis.

From this explicit computation we can deduce easily that the triangular inequality holds for the distance we have defined on TP^{n-1}, and that the topology induced by this distance on TP^{n-1} is the quotient of the product topology on T^n. Once we know that the triangular inequality holds for TP^{n-1}, we can use this fact to prove it for all separated tropical projective spaces.

Proposition 5.6. The triangular inequality holds for the distance defined on P(M), for every separated T-semimodule M.

Proof. Fix lifts x̄, ȳ, z̄ ∈ M. We can construct a map f: T^3 → M such that f(e_1) = x̄, f(e_2) = ȳ, f(e_3) = z̄. By lemma 5.4 there exist points h_1, h_2, h_3 ∈ T^3 such that f is injective over Span_T(h_i, h_j). Then d(π(h_i), π(h_j)) = d(π(f(h_i)), π(f(h_j))). As the triangular inequality holds in TP^2, it holds for x, y, z.

The metric we have defined for separated tropical projective spaces can achieve the value +∞. Given a T-semimodule M we can define the following equivalence relation on M \ {0}: x ∼ y ⇔ d(π(x), π(y)) < +∞. The union of {0} with one of these equivalence classes is again a T-semimodule, and its projective quotient is a tropical projective space with an ordinary (i.e. finite) metric. For example, if M = T^n, the equivalence class of the point (1_T, ..., 1_T) is the T-semimodule F T^n, and its associated projective space is F TP^{n-1}, a tropical projective space in which the metric is finite. For the T-semimodule U_n an equivalence class is F U_n, and its associated projective space is F P^{n-1}, a tropical projective space in which the metric is finite.

We can calculate the metric for F P^{n-1} more explicitly. Let x, y ∈ F P^{n-1} and let x̄, ȳ ∈ U_n be their lifts. By lemma 4.5 there exists a basis E = (e_1, ..., e_n) of x̄ such that a_1 e_1, ..., a_n e_n is a basis of ȳ. In the tropical chart ϕ_E, the point x̄ has coordinates (1_T, ..., 1_T), while the point ȳ has coordinates (τ(a_1), ..., τ(a_n)). Hence d(x, y) = max_i τ(a_i) − min_i τ(a_i).

Homotopy properties

In this section we will show that every separated tropical projective space with a finite metric is contractible. If (X, d) is a metric space, we denote by C^0([0, 1], X) the space of continuous curves in X, with the metric defined by d(γ, δ) = max_{t ∈ [0,1]} d(γ(t), δ(t)). Note that the evaluation pairing C^0([0, 1], X) × [0, 1] → X, (γ, t) ↦ γ(t), is continuous.

Lemma 5.7. Let (X, d) be a metric space and suppose we can construct a continuous map X × X → C^0([0, 1], X), (x, y) ↦ C_{x,y}, such that 1. C_{x,y}(0) = x and C_{x,y}(1) = y, and 2. C_{x,x} is a constant curve. Then X is contractible.

Proof. We can construct a retraction H: X × [0, 1] → X retracting X onto one of its points {x̄} as H(y, t) = C_{y,x̄}(t). By definition of C we have that H(y, 0) = y and H(y, 1) = x̄, and H is continuous as it is a composition of continuous functions.

Lemma 5.8. Let x, y, a, b ∈ T^n and let φ_{x,a} and φ_{y,b} be, respectively, the linear maps T^2 → T^n defined by the matrices with columns (x, a) and (y, b). Then for every v ∈ T^2 we have max_i ((φ_{x,a}(v))_i − (φ_{y,b}(v))_i) ≤ max_i max(x_i − y_i, a_i − b_i).

Proof. Without loss of generality we can suppose that v = (t, 1_T), so that (φ_{x,a}(v))_i = max(x_i + t, a_i). It is easy to check that max(x_i + t, a_i) − max(y_i + t, b_i) ≤ max(x_i − y_i, a_i − b_i) by analyzing the four cases.
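The explicit formula of proposition 5.5 is easy to check numerically. The sketch below, with helper names of our own, verifies on random data the triangle inequality, the independence of the lift, and the fact that tropical projective maps do not increase this distance.

```python
# The tropical Hilbert metric on (finite points of) TP^{n-1}.
import random

def d_hilbert(x, y):
    # d(x, y) = max_i (x_i - y_i) + max_i (y_i - x_i), lift-independent
    return max(a - b for a, b in zip(x, y)) + max(b - a for a, b in zip(x, y))

def t_apply(A, x):
    return [max(a + v for a, v in zip(row, x)) for row in A]

rng = random.Random(0)
for _ in range(1000):
    x, y, z = ([rng.uniform(-5, 5) for _ in range(4)] for _ in range(3))
    # triangle inequality
    assert d_hilbert(x, z) <= d_hilbert(x, y) + d_hilbert(y, z) + 1e-12
    # invariance under the scalar action (well defined on TP^3)
    assert abs(d_hilbert([v + 7 for v in x], y) - d_hilbert(x, y)) < 1e-9
    # projective maps are 1-Lipschitz for this metric
    A = [[rng.uniform(-3, 3) for _ in range(4)] for _ in range(4)]
    assert d_hilbert(t_apply(A, x), t_apply(A, y)) <= d_hilbert(x, y) + 1e-9
```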
Proposition 5.9. For every separated T-semimodule M, its associated projective space P(M) is contractible with reference to the topology induced by the canonical metric.

Proof. We have to construct a map C as in lemma 5.7. We will use tropical segments, rescaling their parametrization to the interval [0, 1]. If x, y ∈ P(M), we take lifts x̄, ȳ ∈ M and the map f̄ with f̄(e_1) = x̄, f̄(e_2) = ȳ. As usual f: TP^1 → P(M) is the induced map. By corollary 5.3 the sets f^{-1}(x) and f^{-1}(y) are closed segments, with extremal points x_0, y_0, hence f restricted to the interval [x_0, y_0] is a curve joining x and y. Let φ be the affine map from the interval [x_0, y_0] to the interval [0, 1]. We define C_{x,y} as the reparametrization of f by φ. Properties 1 and 2 of lemma 5.7 hold for C. To prove continuity we can show that ∀x, y, z, w ∈ P(M), ∀t ∈ [0, 1]: d(C_{x,y}(t), C_{z,w}(t)) ≤ 3 max(d(x, z), d(y, w)). To do this we take lifts x̄, ȳ, z̄, w̄ ∈ M and a map f̄: T^4 → M sending the canonical basis to these four points. By lemma 5.4 there are points h_1, ..., h_4 such that f̄ is injective on the spans of pairs of them; moreover f is 1-Lipschitz on π(Span_T(h_1, ..., h_4)), hence our property on M follows from the same property on T^4, and this follows from lemma 5.8.

6 Tropicalization of group representations

6.1 Length spectra

Let F be a non-archimedean field with surjective real valuation, and let Γ be a group and ρ: Γ → GL_{n+1}(F) a representation of Γ in the general linear group of F. The group GL_{n+1}(F) acts by linear maps on the tropical modules U_{n+1}(F) and F U_{n+1}(F), and by tropical projective maps on the tropical projective spaces P^n(F) and F P^n(F). The representation ρ therefore defines an action of Γ on F P^n(F). For every matrix A ∈ GL_{n+1}(F), we can define the translation length of A as l(A) = inf_{x ∈ F P^n} d(x, Ax).

Proposition 6.1. Let x ∈ F P^n, and let L ⊂ V be a lift of x in F U_{n+1}. We denote by e_1, ..., e_{n+1} a basis of L, and by Ã the matrix corresponding to A in this basis. Then d(x, Ax) = max_i τ(λ_i) − min_i τ(λ_i), where the λ_i are the diagonal entries of a decomposition Ã = M_1 ∆ M_2 with M_1, M_2 ∈ GL_{n+1}(O) and ∆ diagonal.

Proof. By lemma 4.5 applied to the O-modules L and A(L), there exist a basis v_1, ..., v_{n+1} of L and scalars λ_1, ..., λ_{n+1} ∈ F such that λ_1 v_1, ..., λ_{n+1} v_{n+1} is a basis of A(L). Then d(x, Ax) = max_i (τ(λ_i)) − min_i (τ(λ_i)). We denote by M_1 the transition matrix from e_1, ..., e_{n+1} to v_1, ..., v_{n+1}. As they are bases of the same O-module, M_1 is in GL_{n+1}(O). We denote by M_2 the transition matrix from λ_1 v_1, ..., λ_{n+1} v_{n+1} to A(e_1), ..., A(e_{n+1}); it is again in GL_{n+1}(O). Let ∆ be the diagonal matrix with entries λ_1, ..., λ_{n+1}. Then Ã = M_1 ∆ M_2, and the stated formula follows.

The case n = 1 has been studied in [MS84]. If A ∈ SL_2(F), we have l(A) = 2 max(0, τ(tr(A))) (see [MS84, prop. II.3.15]). In the following we give an extension of this result for generic n.

Let F be a non-archimedean real closed field of finite rank extending R, with a surjective real valuation v: F* → R such that the valuation ring is convex. The field K = F[i] is an algebraically closed field extending C, with an extended valuation v: K* → R. We will use the notation τ = −v. We will also use the complex norm |·|: K → F_{≥0} defined by |a + bi| = √(a² + b²) and the conjugation a + bi = a − bi. Note that the function |A| = (n+1) · max_{i,j} |a^i_j| is a consistent norm on M_{n+1}(K), hence, by the spectral radius theorem, we have r(A) ≤ |A|.

Proposition 6.2. Suppose the field K is as above, so that a matrix A ∈ GL_{n+1}(K) acts on F P^n(K). Then the inf in the definition of l(A) is a minimum, and it is equal to τ(λ_1/λ_{n+1}), where λ_1, ..., λ_{n+1} are the eigenvalues of A, ordered so that |λ_i| ≥ |λ_{i+1}|.

Proof. By proposition 6.1 we have that, for every x ∈ F P^n(K), d(x, Ax) ≥ τ(λ_1/λ_{n+1}); in other words, this quantity is a lower bound for l(A). We only need to show that the lower bound is actually achieved.
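The translation-length formula is simple enough to state in code. The sketch below, ours, works directly with the values τ(λ_i) ∈ R rather than with field elements (e.g. for F = R((t^R)) an eigenvalue c·t^{-a} with c ≠ 0 has τ = a), and checks consistency with the Morgan-Shalen SL_2 formula quoted above in the generic case τ(tr A) = max(τ(λ_1), τ(λ_2)).

```python
# Translation length from the valuations of the eigenvalues.
def translation_length(tau_eigenvalues):
    taus = sorted(tau_eigenvalues, reverse=True)
    return taus[0] - taus[-1]     # l(A) = τ(λ_1) - τ(λ_{n+1}) ≥ 0

# SL_2 check: det A = λ_1 λ_2 = 1 gives τ(λ_1) + τ(λ_2) = 0, hence
# l(A) = 2 τ(λ_1) = 2 max(0, τ(tr A)) when τ(tr A) = τ(λ_1) > 0.
tau1 = 3.5
assert translation_length([tau1, -tau1]) == 2 * max(0.0, tau1)
assert translation_length([0.0, 0.0]) == 0.0   # l(A) = 0: minimum attained
```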
The Jordan form of A is upper triangular, with the eigenvalues λ_1, ..., λ_{n+1} on the diagonal and entries marked by * on the superdiagonal, where each * is 0 or 1. Let v_1, ..., v_{n+1} be a Jordan basis, and let L = Span_O(v_1, ..., v_{n+1}) ∈ U_{n+1}. By proposition 6.1, d(π(L), Aπ(L)) = τ(λ_1/λ_{n+1}).

Now suppose that A ∈ GL_{n+1}(F), with F a non-archimedean real closed field as above. Hence A acts on F P^n(F), and now we want to study the translation length of A over F P^n(F). As before, we denote by λ_1, ..., λ_{n+1} ∈ K its eigenvalues, ordered such that |λ_i| ≥ |λ_{i+1}|.

Proposition 6.3. Suppose that F is as above, and that A ∈ GL_{n+1}(F). We consider the translation length l(A) with respect to the action of A on F P^n(F). Then the inf in the definition of l(A) is a minimum, and it is equal to τ(λ_1/λ_{n+1}).

Proof. As F P^n(F) ⊂ F P^n(K), by proposition 6.2 we have the inequality l(A) ≥ τ(λ_1/λ_{n+1}). To prove that this lower bound is achieved, we will choose a suitable basis, as above. Consider the decomposition of K^{n+1} into the sum of generalized eigenspaces. For every λ_i ∈ F, the generalized eigenspace ker((A − λ_i Id)^{n+1}) has a basis of generalized eigenvectors in F^{n+1}. If λ_i ∈ K \ F, then λ̄_i is also an eigenvalue, and if v_1, ..., v_s is a basis of generalized eigenvectors of ker((A − λ_i Id)^{n+1}), then v̄_1, ..., v̄_s is a basis of generalized eigenvectors of ker((A − λ̄_i Id)^{n+1}).

Boundary points

Here we give a geometric interpretation of the points of the boundaries of the spaces of convex projective structures. Let M be a closed n-manifold such that the fundamental group π_1(M) has trivial virtual center, is Gromov hyperbolic, and is torsion free (note that every closed hyperbolic n-manifold whose fundamental group is torsion-free satisfies the hypotheses). In [A2, subsec. 6.4], we considered the family G = {e_γ}_{γ ∈ π_1(M)}, and we constructed a compactification of T_c RP^n(M). The cone over the boundary, C(∂_G T_c RP^n(M)), can be identified with a subset of R^G = R^{π_1(M)}.

Theorem 6.4. Let F = R((t^{R^r})), where r is the dimension of T_c RP^n(M) (see the definition in [A2, subsec. 3.3]). The points of C(∂_G T_c RP^n(M)) are length spectra of actions of the fundamental group π_1(M) on the tropical projective space F P^n(F).

Proof. The semi-algebraic set T_c RP^n(M) has an extension to the field F, which we will denote by T_c RP^n(M)_F ⊂ Char(π_1(M), SL_{n+1}(F)). Every element of T_c RP^n(M)_F is a conjugacy class of a representation ρ: π_1(M) → SL_{n+1}(F). Let x ∈ C(∂_G T_c RP^n(M)) ⊂ R^G. As we said in [A2, subsec. 3.3], there exists a representation ρ ∈ T_c RP^n(M)_F such that, for every γ ∈ π_1(M), the matrix ρ(γ) satisfies x_{e_γ} = τ(λ_1/λ_{n+1}). Consider the action of π_1(M) on F P^n(F) induced by the representation ρ. By proposition 6.3, the translation length of an element γ is l(ρ(γ)) = τ(λ_1/λ_{n+1}), hence the length spectrum of this action is x.

This result is an extension of the interpretation of the boundary points of the Teichmüller spaces given by Morgan and Shalen in [MS84]. Here we review their result in our language. Let S = Σ_k be the closed orientable surface of genus k. Suppose moreover that π_2(M) = ... = π_{n-1}(M) = 0, and let Z be a contractible space with an action of π_1(M); then there is a π_1(M)-equivariant map φ̃: M̃ → Z.

Proof. The group π_1(M) acts diagonally on the space M̃ × Z: γ(x, z) = (γ(x), γ(z)). This action is free and proper, M̃ × Z is simply connected, hence P: M̃ × Z → K = (M̃ × Z)/π_1(M) is a universal cover, and π_1(K) = π_1(M). As M is a manifold it is homeomorphic to a CW-complex of dimension n with only one 0-cell. Hence the hypothesis that π_2(M) = ... = π_{n-1}(M) = 0 implies that the isomorphism π_1(M) → π_1(K) is induced by a map ψ: M → K, well defined up to homotopy. Consider the lift γ̃ of the path γ in M̃ starting from the point y.
The other extreme of γ̃ is the point γ(y). In the same way, the lift of the path ψ_*(γ) in M̃ × Z starting from the point φ̃(y) is the image φ̃(γ̃), hence the other extreme of this path is the point φ̃(γ(y)). This is precisely the definition of γ(φ̃(y)), so φ̃ is equivariant.

Suppose that M is as above, and that we have an action of π_1(M) on the tropical projective space P^m. As P^m is a contractible space, there is a π_1(M)-equivariant map φ̃: M̃ → P^m. An interesting open problem is to understand the dual structure this equivariant map induces on M. The case where M is a hyperbolic surface and m = 1 has been studied by Morgan and Shalen in [MS88] and is well understood: P^1 is a real tree and the equivariant map induces a measured lamination on M that is dual to the action. This work can possibly lead to the discovery of analogous structures for the general case. For example, an action of π_1(M) on P^m induces a degenerate metric on M, and this metric can be used to associate a length with each curve. Anyway it is not clear up to now how to classify these induced structures. This is closely related to a problem raised by J. Roberts (see [Oh01, problem 12.19]): how to extend the theory of measured laminations to higher rank groups, such as, for example, SL_n(R).
2014-10-01T00:00:00.000Z
2007-03-20T00:00:00.000
{ "year": 2007, "sha1": "bafb23e985685ad5b5ca11be6399621102f81344", "oa_license": null, "oa_url": "http://msp.org/agt/2008/8-1/agt-v8-n1-p10-s.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "eb5c88f64462bfe59ae6fdc77d7d318952f4c8a8", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
76661194
pes2o/s2orc
v3-fos-license
In vivo expansion and regeneration of full-thickness functional skin with an autologous homologous skin construct: Clinical proof of concept for chronic wound healing

A new cell-tissue technology uses a patient's skin to create an in vivo expanding and self-organising full-thickness skin autograft derived from potent cutaneous appendages. This autologous homologous skin construct (AHSC) is manufactured from a small full-thickness skin harvest obtained from an uninjured area of the patient. All the harvested tissue is incorporated into the AHSC, including the endogenous regenerative cellular populations responsible for skin maintenance and repair, which are activated during the manufacturing process. Without any exogenous supplementation or culturing, the AHSC is swiftly returned to the patient's wound bed, where it expands and closes the defect from the inside out with full-thickness, fully functional skin. AHSC was applied to a greater than two-year-old large (200 cm2) chronic wound refractory to multiple failed split-thickness skin grafts. Complete epithelial coverage was achieved in 8 weeks, and complete wound coverage with full-thickness functional skin occurred in 12 weeks. At 6-month follow-up, the wound remained covered with full-thickness skin, grossly equivalent to surrounding native skin and qualitatively and quantitatively equivalent across multiple functions and characteristics, including sensation, hair follicle morphology, bio-impedance and composition, pigment regeneration, and gland production.

| INTRODUCTION Wound healing has traditionally been defined through four distinct physiological phases: haemostasis, inflammation, proliferation, and tissue remodelling. Each phase involves complex and interdependent signalling and coordination of diverse cellular populations including inflammatory, endothelial, stromal, and progenitor, or stem, cells. Notable cell populations present in skin include cells expressing leucine-rich repeat-containing G protein-coupled receptor 6 (Lgr6), CD34, and keratin 15 that reside within dermal appendages including the follicular bulge, sebaceous glands, and inter-follicular epidermis.
[1][2][3] In large skin defects, many cell populations are lost, and the remaining stem cells along the wound's periphery are unable to adequately regenerate the lost tissues. Failure to achieve early and appropriate wound coverage can result in refractory, non-healed wounds. As the population ages and comorbidities such as diabetes become more prevalent, chronic wounds are expected to increase; they already account for 3% of total health care expenditure in developed countries, with in excess of $5 and $50 billion spent annually in the United Kingdom and United States, respectively. [4][5][6][7] While full-thickness skin grafts and split-thickness skin grafts (STSGs) can achieve early autologous wound coverage, both have measurable rates of failure approaching 30%, especially in the setting of chronic wound treatment. 8 Graft failure can result from numerous factors including systemic comorbidities, the inability of the wound bed to support the metabolic demand of intact tissue, traumatic detachment of nascent vasculature, and physical shearing. 8,9 Commercially available skin substitutes can replicate the hierarchical morphology of skin, but they do not replace functional appendages, such as follicular and glandular structures, and they too are vulnerable to loss from shear stress loading on the graft (Table 1).

A novel, commercially available cell-tissue therapy derived from a patient's own skin can expand and regenerate appendage-bearing skin and be used to heal chronic wounds. This autologous homologous skin construct (AHSC) treatment is created from a small full-thickness skin harvest (epidermis, dermis, and hypodermis) taken from an unaffected healthy area, which is shipped to a biomanufacturing facility. The AHSC is manufactured without ex vivo expansion and swiftly returned to the provider. A single application of AHSC is applied to the properly debrided chronic wound and covered by a standard wound dressing. Once engrafted, the AHSC self-propagates and expands into full-thickness skin that contains all the critical components for native tissue function, including dermal appendages such as hair follicles and sweat glands. Here, we present this technique and report the first outcome of AHSC-treated tissue compared with native skin and STSG in a chronic lower extremity wound that had repeatedly failed the clinical standard of care (autograft) and other advanced wound care skin substitutes.

| METHODS Patient authorisation and consent were obtained for use of all photographs, images, and figures contained within this manuscript, in accordance with institutional policies. Following production and a single application of AHSC, the patient was followed for 6 months and assessed for AHSC safety, efficacy, donor site morbidity, graft take, time to wound closure, pigmentation, hair follicle development, sweat and sebaceous gland production, sensation, contracture, and pliability.

| Preparation of AHSC therapy A full-thickness harvest is taken from an unaffected area of the patient, such as the groin or thigh, and the site is closed primarily. The tissue is shipped in normal saline at 4 °C to a Food and Drug Administration-regulated biomedical manufacturing facility (PolarityTE, Salt Lake City, Utah). All the tissue is processed into AHSC (SkinTE; PolarityTE), which involves processing of the tissue to improve the surface area to volume ratio and activation of the endogenous regenerative cellular populations, akin to the activation that occurs when native skin is injured.
The AHSC can be returned to the clinical site the same day depending on the location, or up to 11 days following tissue harvest, with the goal of providing the patient with their own AHSC as expeditiously as possible. The wound bed is sharply debrided and the AHSC is spread evenly across the wound bed, analogous to distributing a skin graft. It is dressed with a non-absorbent, non-adherent dressing such as silicone, with regular dressing changes until mature epithelialisation, in a manner consistent with STSG dressing.

Key Messages • The growing clinical and financial burden of chronic non-healing wounds mandates effective wound coverage options that result in robust permanent skin. • Skin grafts are one standard of care, but they require specialised surgeons, create a large and painful donor site defect, and can fail because of shear stress on the graft. • An autologous homologous skin construct (AHSC) can be derived from a patient's own skin. Innate regenerative cellular populations from a small full-thickness healthy skin harvest are fractionated and activated during manufacturing. The product expands within the wound bed using the natural healing environment created by the patient's body to heal the wound from the inside out. • Compared with native uninjured skin, neo-regenerated skin from the AHSC was found to be equivalent across multiple functions and characteristics, including sensation, hair follicle composition, pigment regeneration, and gland production.

| Wound Healing and Functional Tissue Assessment The pre-treatment cutaneous defect and post-treatment wound healing were documented with high-resolution digital single-lens reflex (DSLR) photography (Canon, Melville, New York). Pain was subjectively rated by the patient. Baseline static two-point discrimination was performed on AHSC-treated areas, native skin, and STSG-treated areas as previously described, taking the average of 5 random locations for each group. 10 Bio-impedance analysis (RJL Systems, Clinton Township, Michigan) was performed using the two-electrode method as previously described on AHSC-treated areas, native skin, and STSG-treated areas to assess water content, oil content, and pliability. Briefly, electrodes are placed 4 cm apart in the region of skin to be tested, a time-varying sinusoidal 1 V signal is applied in a 200 Ω resistance circuit, and the voltage drop is measured with current passage less than 10 μA. [11][12][13][14] Differences between means were measured using two-way analysis of variance with a Tukey's post hoc test, with a P-value of <0.05 considered significant.

| Tissue architecture and compositional analysis Molecular composition of skin and hair follicles from AHSC-treated areas and native skin was analysed using Raman spectroscopy (ThermoFisher Scientific, Madison, Wisconsin). Hair follicles removed from uninjured skin and AHSC-treated skin were whole-mount imaged with a digital compound microscope (Zeiss V16, Oberkochen, Germany), a confocal microscope (Leica TPS SP8, Wetzlar, Germany) following labelling with wheat germ agglutinin and phalloidin (ThermoFisher Scientific), an environmental scanning electron microscope (ESEM, Zeiss EVO LS10, Oberkochen, Germany), and a second-harmonic multi-photon microscope (Leica SP, Wetzlar, Germany).
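To illustrate the statistical comparison described above, here is a minimal sketch of the Tukey post hoc step on hypothetical two-point-discrimination values; the numbers are invented for illustration and are not the study's data.

```python
# Tukey HSD comparison across the three tissue groups (hypothetical data, mm).
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

values = np.array([4.1, 4.4, 3.9, 4.2, 4.0,      # native skin
                   4.3, 4.6, 4.1, 4.5, 4.2,      # AHSC-treated
                   8.9, 9.4, 8.7, 9.1, 9.0])     # STSG-treated
groups = np.repeat(["native", "AHSC", "STSG"], 5)

result = pairwise_tukeyhsd(endog=values, groups=groups, alpha=0.05)
print(result)   # expect native vs AHSC: not significant; STSG vs both: significant
```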
| CLINICAL CASE A 31-year-old previously healthy African American male suffered polytraumatic injuries from a motorcycle accident 24 months earlier. His acute traumatic injuries included wounds in both lower extremities, with a large traumatic soft tissue avulsion covering the majority of the anterior right lower extremity (RLE), resulting in bone exposure and a large avulsion flap that later failed and resulted in complete necrosis. The flap was debrided and a STSG was placed 1 month post-injury. The STSG failed with complete graft loss, likely because of the inadequacy of the wound bed to support the graft. Over the subsequent 2 years, the wound chronically failed to heal despite application of multiple advanced wound care products, skin substitutes, and additional STSG efforts. The patient initially presented for AHSC therapy with a pretibial defect >200 cm2 in size with drainage, exposed bone, and reported absent or impaired light touch sensation and increased pain throughout the wound and peri-wound bed surfaces (Figure 1: A, image of the left lower extremity fasciotomy wound resurfaced with a split-thickness skin graft 30 months previously; B, image of the debrided chronic right lower extremity wound 24 months following injury, with granulation tissue and exposed tibia). The patient elected to undergo AHSC treatment to avoid another STSG.

Two days prior to the AHSC application procedure, a small (6 cm long) elliptical full-thickness skin harvest was performed in clinic by a sterile technique from the patient's right groin for the creation of AHSC. At 48 hours following harvest, the wound bed was prepared using direct-contact low-frequency ultrasonic debridement (SonicOne; Misonix, Farmingdale, New York), followed by topical application of the AHSC product into the full-thickness wound bed (200 cm2). AHSC was spread evenly across the entire wound surface, and the treated wound was covered with an occlusive, non-adherent, non-absorbent silicone dressing similar to a skin graft and wrapped with multi-layer compression dressings. The patient was discharged home with instructions to return for weekly follow-up for 8 weeks, followed by monthly follow-up for a total of 6 months.

| Wound healing and donor site AHSC had complete (100%) graft take and resulted in complete epithelial coverage within 8 weeks and full-thickness functional skin coverage within 12 weeks (Figure 2). At the last follow-up at 6 months post-application of AHSC, the wound remained completely closed and covered with full-thickness functional skin, which was grossly equivalent to the surrounding native skin. Serial examination of the wound following application of AHSC demonstrated regeneration of full-thickness skin tissue and associated cutaneous appendages (hair follicles, sweat/oil glands). In addition to wound closure by epithelialisation, serial follow-up assessments showed progressive restoration of tissue volume, melanin pigment deposition, and improvement of gross cutaneous sensation. The patient subjectively reported decreased pain and an improved functional and aesthetic outcome compared with the contralateral STSG, which he reported continually felt dry and required moisturisation.

| Functional and compositional analysis Digital single-lens reflex photography demonstrated focal expansion of AHSC within the full-thickness wound bed, with progressive melanocyte pigmentation and full-thickness skin regeneration throughout the wound, with minimal contracture (Figure 2). Static 2-point discrimination demonstrated no difference between AHSC and native skin (P = 0.076).
In contrast, the healed STSG placed on the contralateral extremity demonstrated a reduction in sensation relative to native skin and AHSC (P < 0.0001) (Figure 3). Bio-impedance analysis of AHSC-regenerated skin relative to native skin showed no significant difference in moisture, oil, or pliability (P = 0.25), whereas the healed STSG of the LLE had significantly reduced features along all three parameters (P < 0.05) (Figure 3). Hair follicles that were regenerated from AHSC demonstrated normal cellular and structural architecture, with complex hierarchical dermal papilla morphology and properly oriented keratinised hair shaft growth similar to that of native hair follicles by compound microscopy, fluorescent confocal microscopy, second-harmonic multiphoton microscopy, and ESEM (Figure 4). Notably, the complex cellular architecture of the follicular bulge, where regenerative cellular populations are located, was completely recapitulated with AHSC treatment. Additionally, Raman spectroscopy demonstrated no significant difference in the molecular composition of AHSC-regenerated and native hair follicles (Figure 5).

| DISCUSSION True successful healing of chronic wounds requires complete epithelialisation with the regeneration and replacement of normal skin end organs and function. Advanced skin substitutes and STSGs are limited by their inability to fully recapitulate the cellular physiology of skin. AHSC uses autologous cells derived from intact skin that can activate in situ the complex coordination of epidermal and dermal cellular populations, extracellular matrix, and repair pathways required for successful healing. 15 These differences improve the quality of complete healing beyond epithelialisation and represent a desired state for clinical wound care. The quest to develop a fully recapitulative topical healing modality has been pursued for years. Chronic wounds are characterised by multiple impaired physiological processes that perturb normal healing, including ischaemia, inflammation, infection, reduced levels of growth factors, proteinase imbalance, and cellular senescence, as well as local factors such as foreign bodies and tissue insult. 15 Impairment of cellular activity and efficacy has been attributed to a decreased mitogenic response of wound fibroblasts to growth factors and the persistence of a hyperproliferative but less differentiated state of keratinocytes. 15,16

FIGURE 4 Comparative multi-modality imaging of hair follicles (HFs) harvested from either native skin or autologous homologous skin construct (AHSC) at 14 weeks post-application. Correlative fluorescent probe imaging was conducted using confocal microscopy to determine the relative quantity and colocalisation of structures including nuclei (blue), F-actin (red), and collagen (green) analytes. Environmental scanning electron microscopy (ESEM) and dark-field stereoscopic imaging were conducted to determine the relative surface/subsurface microanatomy of the structures.

FIGURE 5 Comparative Raman spectroscopic fingerprinting of hair follicles (HFs) harvested from (A) autologous homologous skin construct (AHSC)-regenerated tissues and (B) native skin to determine the relative molecular composition, quantity, structure, and energy state of the specimens, shifted to show their respective signal peaks. C, Direct comparison of the spectra by subtraction creates a flat line, demonstrating minimal differences in the molecular fingerprint of the tissues.
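The spectral comparison in Figure 5C amounts to normalising two spectra and subtracting them. Here is a minimal sketch of that step on synthetic spectra; the peak positions and the normalisation choice are ours, standing in for the real Raman data.

```python
# Normalised subtraction of two Raman-like spectra (synthetic data).
import numpy as np

wavenumbers = np.linspace(400, 1800, 1400)          # cm^-1, fingerprint region

def synthetic_spectrum(noise_seed):
    rng = np.random.default_rng(noise_seed)
    peaks = [(855, 30, 1.0), (1003, 8, 1.4), (1450, 25, 0.9), (1655, 20, 1.2)]
    s = sum(h * np.exp(-((wavenumbers - c) / w) ** 2) for c, w, h in peaks)
    return s + rng.normal(0, 0.01, wavenumbers.size)

native = synthetic_spectrum(1)
ahsc = 0.92 * synthetic_spectrum(2)                 # same peaks, different scale

norm = lambda s: (s - s.min()) / (s.max() - s.min())  # min-max normalisation
residual = norm(ahsc) - norm(native)
print(f"max |residual| = {np.abs(residual).max():.3f}")  # small => "flat line"
```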
Within this setting of compromised wound biology, conventional therapies and advanced skin substitutes commonly fail, with reported failure rates approaching 30% in clinical trials and no specific skin replacement therapy demonstrating clinical superiority over another in a Cochrane Database Systematic Review. 17,18 Similarly, skin grafts, with their relatively large tissue mass and metabolic demands, have reduced graft take in the setting of the ischaemia and poor tissue perfusion found in chronic wounds. 9

The AHSC technology was developed to survive austere tissue environments, and in vitro, in vivo, and emerging clinical data support these capabilities. The entirety of the harvested tissue is used during the manufacturing of AHSC, so it contains all necessary structural and cellular elements, including the endogenous stem cell populations. The manufacturing of AHSC activates these cells through the segmentation of the skin tissue. The processing of AHSC improves the surface area to volume ratio of the reimplanted tissue, which aids cell survival via plasmatic imbibition (the passive diffusion of oxygen, nutrients, and metabolites) until inosculation and blood vessel formation can take place. Because AHSC is swiftly returned to the patient and the cells are not cultured in vitro, they expand physiologically in the wound bed using the body's endogenous wound repair support pathways, in contrast to cells produced in tissue culture, which can show altered gene expression and cell behaviour. 19

In the clinical case presented in this report, the functional capability of the resulting AHSC-regenerated tissue was similar to that of uninjured native skin across all parameters tested, including digital single-lens reflex photography, microscopic imaging, sensory examination, and bio-impedance analysis. In contrast, STSG-treated areas, including the healed STSG on the contralateral extremity, were found to be significantly different from native skin (Table 1). Patient-reported outcomes demonstrated a strong preference by the patient for treatment with AHSC compared with STSG for the parameters of pain, function, and cosmesis.

We have thus presented a clinical proof of concept that a patient-derived autologous cell-tissue therapy, AHSC, can achieve regenerative healing in a chronic wound, complete with replacement of skin end organs. Remarkably, the successful healing occurred in the setting of multiple failed prior STSGs and use of a xenograft skin substitute, through a single delivery of AHSC. Repeated AHSC applications were not required. The achievement of complete wound closure by AHSC, with full-thickness skin regeneration with functional appendages (hair, glands, and light touch sensation), is unique and has not been previously reported. We point out that this patient was a young, otherwise healthy patient without the comorbidities frequently encountered in patients with chronic wounds, such as diabetic lower extremity wounds, venous stasis ulcers, and arterial ulcers. Therefore, further translational and clinical research of AHSC is warranted to explore its full potential for successful, high quality wound healing under more challenging host environments.
2019-03-15T02:58:04.754Z
2019-03-13T00:00:00.000
{ "year": 2019, "sha1": "3b4e4d9b56c1985bc0cb82ae7630e378e27d42ab", "oa_license": "CCBYNCND", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/iwj.13109", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "67d049cc6ed9c83cda1d4757a35679bfd2a5ccd5", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
252696012
pes2o/s2orc
v3-fos-license
A Novel CTLA-4 affinity peptide for cancer immunotherapy by increasing the integrin αvβ3 targeting

Immune checkpoint inhibitors (ICIs) are changing all aspects of malignant tumour therapy as a disruptive immunotherapy in oncology. However, current ICIs can induce systemic immune activation in other tissues and organs, since they are not tumour-specific, causing the immune system to attack some normal tissues and organs of the human body. The toxicity can also be greatly amplified, even though combined immunotherapy for cancer has increased curative efficacy. The LC4 peptide, obtained through phage display peptide library screening and able to block the CTLA-4/CD80 interaction, was modified to improve its tumour-targeting ability and reduce peripheral immune system activation. Like other ICIs, the LC4 peptide exerts anti-tumour effects by refreshing T cell function, but it also activates the peripheral immune system. We used the PLGLAG peptide as a linker at the C-terminus of LC4 to connect it with the tumour-targeting peptide RGD, in order to increase tumour tissue targeting, and obtained LC4-PLG-RGD. Further experiments demonstrated that the anti-tumour activity of LC4-PLG-RGD was better than that of LC4 in vivo, while its ability to activate the peripheral immune system was weakened. In conclusion, LC4-PLG-RGD can increase ICI tumour-targeting and reduce excessive peripheral tissue immune activation, thereby reducing the side effects of ICIs while increasing their anti-tumour efficacy. This study confirmed that enhanced ICI tumour targeting can effectively reduce the occurrence of immune-related adverse reactions.

Supplementary Information The online version contains supplementary material available at 10.1007/s12672-022-00562-6.

For example, treatment with a combination of ipilimumab and nivolumab increased severe side effects two- to fourfold compared with monotherapies alone [2]. The immune activation underlying most immune-related adverse reactions (irAEs) is related to the activity required for an anti-tumour immune response. Therefore, how to enhance anti-tumour ICI activity while reducing irAE occurrence is an urgent drug development problem. Immune checkpoint therapy can inhibit immune checkpoint activity, release the immune brakes in the tumour microenvironment, and reactivate the T cell immune response against the tumour, thus achieving an anti-tumour effect. Therefore, increasing ICI enrichment in the tumour microenvironment while decreasing peripheral immune system activation is an effective way to realise this idea. It has been reported that dual-targeted antibodies improve T lymphocyte infiltration in tumour tissues and thus achieve a stronger anti-tumour effect than single-targeted antibodies [3], while there are few reports on reducing peripheral immune system activation.

CTLA-4 plays a negative regulatory role in the initial T cell activation stage; it is mainly expressed on activated CD8+ T cells and CD4+ T cells, and constitutively expressed on Treg cells [4,5]. Blocking CTLA-4 can reverse and restore depleted T cell function, improve proliferation and T cell effector capacity, and significantly inhibit tumour growth [6]. The CTLA-4 monoclonal antibody ipilimumab was officially approved by the FDA for the treatment of unresectable stage III/IV metastatic melanoma. IrAEs from anti-CTLA-4 agents are dose-dependent and occur more frequently, which limits their clinical application.
Integrin is an important cell adhesion receptor that is highly expressed on tumour vascular endothelial cells and some tumour cells [7]. The RGD peptide has dual targeting: it can simultaneously target tumour cells and tumour endothelial cells through its specific affinity for integrin αvβ3. It has been used to deliver anti-tumour drugs or contrast agents in tumour therapy and diagnosis [8,9]. Matrix metalloproteinase-2 (MMP-2) is a protease involved in ECM degradation in tumours, and PLGLAG is its restriction site. MMP-2 is highly expressed in almost all tumour tissues; it is sensitive and highly specific in tumour tissues, which gives it a good application prospect in cancer treatment [10,11]. Hong Xia Wang et al. confirmed that PLGLAG is cut off when it enters tumour tissue [12]. Therefore PLGLAG, the restriction site of MMP-2, is a widely used linker that can release the modified drug in tumour tissue.

In this study we first obtained the CTLA-4 affinity peptide LC4 by phage display peptide library screening. The LC4 peptide was demonstrated to effectively block CTLA-4/B7 protein interactions and has a good anti-tumour effect in vitro and in vivo. PLGLAG was used to connect LC4 with RGD, a tumour-targeting peptide sequence with high affinity for integrin αvβ3, to obtain the modified peptide LC4-PLG-RGD. As a result, LC4-PLG-RGD has a better anti-tumour effect, reduces peripheral immune system activation, and reduces the irAEs produced in CTLA-4 treatment.

The affinity peptide interferes with CTLA-4 binding to its ligands CD80 and CD86 by specifically binding to CTLA-4

Affinity peptides specifically bind CTLA-4 molecules on cells, impairing the binding of CTLA-4 to its ligand. We constructed a CHO-K1/hCTLA-4 cell line with high CTLA-4 expression to determine whether the affinity peptides specifically bind to CTLA-4 molecules on the cell membrane. CTLA-4 affinity peptides with a biotin label were synthesised and co-incubated with the CHO-K1/hCTLA-4 cell line. LC4, LC7 and LC8 showed affinity for the CHO-K1/hCTLA-4 cell line compared with the GA peptide group, which proved that these three peptides could specifically bind hCTLA-4 at the cell membrane (Fig. 2).

Affinity peptides impaired the CTLA-4 and CD80 interaction

Targeting CTLA-4 to treat tumours requires blocking the interaction between CTLA-4 and its CD80/CD86 ligands. The affinity between CTLA-4 and CD80 is higher than that between CTLA-4 and CD86 [13], and only one of the four amino acids at the binding site is different. Therefore, the next step was to identify whether LC4, LC7 and LC8 could block the CTLA-4 and CD80 interaction. Cell-based blocking assays showed that LC4, LC7 and LC8 could all block the interaction between CTLA-4 and CD80 (Fig. 3). LC8 was abandoned because it had the weakest blocking effect, and only LC4 and LC7 were retained.

LC4 activated CD8+ T cells and CD4+ T cells in vitro

CTLA-4 is inducibly expressed on peripheral CD8+ T cells and CD4+ T cells upon activation (Fig. 4A-D). Peripheral blood mononuclear cells (PBMCs) from healthy donor blood were isolated to verify whether LC4 and LC7 can activate T cells in vitro; CD3 and CD28 antibodies were added to stimulate the T cells, which were incubated with PBS, LC4 or LC7. The amount of IFN-γ secreted by the cells was measured using ELISA. IFN-γ production in PBMCs was significantly enhanced when treated with LC4 (Fig. 4E-G). At the same time, we detected the percentage of CD8+ T cells and CD4+ T cells producing IFN-γ in each T cell subset by flow cytometry.
As shown in Fig. 4, LC4 significantly increased both the percentage of CD8+IFN-γ+ cells and that of CD4+IFN-γ+ cells compared with the control. LC7 had a similar effect, but LC4 was generally better at activating the ability of T cells to secrete cytokines.

LC4 could inhibit CT26 tumour growth but activated the peripheral immune response

To explore the interaction between the human CTLA-4 binding peptide LC4 and mouse CTLA-4, we compared the Ig V region amino acid sequences of the human and mouse CTLA-4 proteins and found that their identity and similarity reached 66% and 79%, respectively. It has been reported that there are 11 binding sites between human CTLA-4 and its ligand CD86, while the mouse CTLA-4 binding site differs only at position 105, where Tyr changes to Phe [14]. This indicates that the human and mouse CTLA-4 proteins have high identity and similarity. Biotin-labelled LC4 was co-incubated with the CHO-K1/mCTLA-4 cell line, and the results showed that LC4 also has affinity for the CHO-K1/mCTLA-4 cell line, which may be because the interaction site between CTLA-4 and its ligand is highly conserved. LC4 also significantly increased the percentage of CD8+IFN-γ+ T cells and CD4+IFN-γ+ T cells in mouse spleen, and increased IFN-γ secretion in the culture supernatant (data shown in the supplementary information). Therefore, it is feasible to verify the anti-tumour effect of LC4 in vivo in a tumour-bearing mouse model.

CT26 tumour-bearing mice were treated with LC4 (1 mg/kg/day or 4 mg/kg/day) for 14 days to identify the in vivo effect of LC4. The results showed that LC4 could significantly inhibit CT26 tumour growth in a dose-dependent manner, but had no significant effect on the weight of the mice (Fig. 5A, B). CD8+ T cell and CD4+ T cell infiltration was detected in tumours: infiltration in the LC4 group (4 mg/kg/day) significantly increased compared with the control group (Fig. 6A). The percentages of IFN-γ-producing CD8+ T cells and CD4+ T cells in the spleen and draining lymph nodes were also determined. Both the high-dose and low-dose LC4 groups significantly increased the ratio of CD8+IFN-γ+ T cells and CD4+IFN-γ+ T cells compared with the control group (Fig. 6B-E; data are expressed as the mean ± SD (n = 5), and statistical significance between groups was determined by Student's t test: *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001; ns = no significance). All these results indicated that, although peritumoral administration was selected, LC4 could still activate peripheral immune organs through tissue infiltration and diffusion. Therefore, we modified the LC4 peptide to improve its targeting, so as to improve its anti-tumour activity and reduce the impact on peripheral immune organs.

LC4-PLG-RGD inhibits CT26 tumour growth more efficiently by refreshing CD8+ T cells and CD4+ T cells in the tumour

The tumour targeting of LC4 can be improved by connecting the tumour-targeting peptide RGD, but a simple RGD connection cannot guarantee release of the LC4 peptide in the tumour microenvironment. Therefore, LC4-PLG-RGD and RGD-PLG-LC4 were obtained by linking RGD and LC4 with the PLGLAG sequence: RGD brings LC4 into tumour tissue and, after PLGLAG digestion by matrix metalloproteinases, LC4 is exposed, realising targeted delivery.
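The construct design just described can be sketched as simple string manipulation. The LC4 sequence is not disclosed in this text, so LC4_SEQ below is a hypothetical placeholder; RGD and the MMP-2-cleavable linker PLGLAG are as named, and MMP-2 is commonly reported to cleave PLGLAG between Gly and Leu (PLG | LAG).

```python
# Fusion-peptide assembly and simulated MMP-2 cleavage (illustrative only).
LC4_SEQ = "XXXXXXXXXXXX"   # hypothetical placeholder, not the real LC4 sequence
LINKER = "PLGLAG"
RGD = "RGD"

lc4_plg_rgd = LC4_SEQ + LINKER + RGD     # C-terminal RGD construct
rgd_plg_lc4 = RGD + LINKER + LC4_SEQ     # N-terminal RGD construct

def mmp2_cleave(peptide):
    """Split at the PLG|LAG scissile bond, returning the fragments."""
    site = peptide.find("PLGLAG")
    if site == -1:
        return (peptide,)                # no linker: construct stays intact
    return peptide[:site + 3], peptide[site + 3:]

released, residue = mmp2_cleave(lc4_plg_rgd)
assert released == LC4_SEQ + "PLG"       # LC4 freed in the tumour, PLG stub
assert residue == "LAG" + RGD            # LAG-RGD remains with the target
```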
The N-terminal RGD-modified peptide RGD-PLG-LC4 and the C-terminal RGD-modified peptide LC4-PLG-RGD were synthesised (the amino acid sequences are shown in Table 1). The infiltration of CD8+ T cells, CD4+ T cells, and FOXP3+ Tregs was detected in tumours. The infiltration of CD8+ T cells and CD4+ T cells significantly increased in both the LC4 and LC4-PLG-RGD groups, while the percentage of CD4+ T cells in the LC4-PLG-RGD group was significantly higher than that in the LC4 group (Fig. 7E and F). Meanwhile, there was a tendency towards reduced Treg infiltration in the tumour. This might be because the RGD sequence improves peptide targeting: LC4-PLG-RGD targets tumour tissue and reaches a higher concentration in the tumour than LC4, so although LC4-PLG-RGD and LC4 were given at the same molar concentration, the anti-tumour effect of LC4-PLG-RGD is better than that of LC4 in vivo.

LC4-PLG-RGD exhibited lower peripheral immune activation activity

CD8+ T cells can kill tumour cells by secreting granzyme B (GrzB), perforin, and IFN-γ, and CD4+ T cells can produce direct anti-tumour effects by secreting IFN-γ. We next examined the ability of T cells in the spleen and draining lymph nodes to proliferate and secrete cytokines by flow cytometry, to determine whether LC4-PLG-RGD activates peripheral immunity (Fig. 8A-F). The results showed that the percentages of CD8+GrzB+ T cells, CD8+Perforin+ T cells, CD8+IFN-γ+ T cells, and CD4+IFN-γ+ T cells in the spleen increased in the LC4 group, while the ability to stimulate splenic T cells to secrete cytokines was weaker in the LC4-PLG-RGD group than in the LC4 group. A similar phenomenon was observed in the draining lymph nodes (data not shown). Compared with LC4, the ability of the RGD-modified LC4-PLG-RGD to stimulate T cell proliferation and cytokine secretion in peripheral immune organs was weakened, which proved that RGD modification successfully reduced the systemic effect of LC4 administration on peripheral immune organs and achieved specific targeting of tumour tissue.

Discussion

While ICI therapy has improved melanoma patient outcomes, it has also led to a rise in unique immune-related adverse events (irAEs) [15,16]. The marketed CTLA-4 antibody drug ipilimumab can produce strong anti-tumour effects in vivo, but antibody drugs are expensive and strongly immunogenic, and systemic immune activation by CTLA-4 antibody causes irAEs in a variety of peripheral tissues [17,18]. IrAEs are the main obstacle limiting the application of ICI therapy. Enhancing the anti-tumour effects of checkpoint inhibitors while reducing irAEs is an urgent problem that needs a solution. Peptide drugs have the advantages, compared with antibody drugs, of convenient synthesis, a lower immunogenic response, and easy modification. In this study, the peptide LC4, with specific affinity for the human CTLA-4 protein, was identified by phage display. The LC4 peptide could effectively block CTLA-4/B7 protein interactions and activated peripheral immune organ activity while activating T cell activity in tumour tissues. Like ipilimumab, the LC4 peptide can effectively block CTLA-4/B7 binding to activate the immune response.
LC4 increased the CD8+ T cell percentage in tumour tissue and activated the function of tumour-infiltrating T cells in vivo, which shows that the LC4 peptide has ICI activity. The spleen and draining lymph nodes are important peripheral immune organs: on the one hand, their activation supports the body's anti-tumour response; on the other hand, generation of systemic immunity may cause immune-related adverse reactions [19,20]. Both the high-dose and low-dose LC4 groups significantly increased the CD8+ IFN-γ+ and CD4+ IFN-γ+ T cell percentages in the spleen and draining lymph nodes, which is consistent with the reported effect of ICI antibodies [19]. In 2018, Pan Zheng and Yang Liu showed that the main molecular mechanism of CTLA-4 antibody drugs is clearance of intratumoural Tregs through ADCC mediated by the Fc segment of the CTLA-4 antibody, rather than blockade of the CTLA-4/B7 interaction [21,22]. However, our results do not support this view: LC4 is a peptide drug without an Fc segment and therefore cannot trigger ADCC in vivo, yet LC4 can still cause CD8+ T cell chemotaxis, which means its mechanism needs further study. (Figure legend: data are mean ± SD, n = 5; Student's t test: *p < 0.05, **p < 0.01, ***p < 0.001.) The CT26 colon tumour model, which has the highest immunogenicity and is the model most responsive to CTLA-4 inhibitor treatment, was used to verify the anti-tumour activity of LC4 in vivo. We harvested the heart, liver, lung, and kidney of the CT26 tumour-bearing mice and examined tissue sections under the microscope after H&E staining; no obvious pathological changes were found in the organs of mice treated with LC4 (data not shown). Cha E et al. have shown that patients who develop irAEs after treatment with CTLA-4 antibody are characterised by proliferation of autoreactive CD4+ T cells [23], and Dardalhon V et al. have shown that IFN-γ-secreting CD4+ T cells (Th1) are related to the occurrence of autoimmune responses [24]. In our study, the ratios of IFN-γ-secreting CD8+ and CD4+ T cells in the spleen and draining lymph nodes of CT26 tumour-bearing mice increased significantly after treatment with the LC4 peptide. This suggests that, although no obvious pathological changes were observed by H&E staining, the increase in IFN-γ-secreting CD4+ T cells indicates that blocking the CTLA-4/B7 interaction with LC4 still carries a risk of increased irAEs. At the same time, the increase in IFN-γ-secreting CD8+ T cells suggests that the peripheral immune organs were indeed activated, possibly with the assistance of activated CD4+ T cells. Based on these results, we next improved the targeting of LC4 so that it could carry out local immunotherapy in the tumour tissue and thereby reduce peripheral immune system activation. We coupled RGD to LC4 through the PLGLAG linker: RGD brings LC4 into tumour tissue, and matrix metalloproteinases, which are highly expressed in tumour cells, cleave the PLGLAG linker to expose the LC4 peptide. Our studies demonstrated that the anti-tumour activity of LC4-PLG-RGD is better than that of LC4 in vivo; it recruits more CD8+ and CD4+ T cells into the tumour tissue, reduces the proportion of tumour-infiltrating Treg cells, and has a weaker ability to activate T cells in peripheral immune organs.
A limitation of this study is that we only observed functional changes of T lymphocytes in different tissues after stimulation and inferred the likely immune toxicity from these data; we did not examine actual toxic effects such as colitis, and therefore could not directly prove that increasing tumour targeting improves the anti-tumour activity of immune checkpoint inhibitors while reducing their peripheral toxicity. This aspect needs to be addressed in further work. In conclusion, a novel CTLA-4-targeting peptide, LC4, was identified by phage display bio-panning; it has specific affinity for CTLA-4, blocks the interaction between CTLA-4 and its ligand CD80, and inhibits tumour growth. Because LC4 activates T cells in peripheral immune organs as well as in tumour tissue, we modified it to obtain the LC4-PLG-RGD peptide, whose ability to stimulate T cell activation in peripheral immune organs is weakened and which targets tumour tissue while inhibiting tumour growth; its anti-tumour activity is better than that of LC4. The results of this study suggest that enhancing the tumour targeting of checkpoint inhibitors can not only enhance their anti-tumour effect but also effectively reduce their side effects, offering a new strategy in cancer immunotherapy.

Mice

Female 6-week-old BALB/c mice were purchased from Beijing Vital River Laboratory Animal Technology Co., Ltd. (Beijing, China). The animals had free access to food and water and were maintained in a specific pathogen-free facility (24 °C ± 1 °C). Animal welfare and experimental procedures were carried out in accordance with, and approved under, the Ethical Regulations on the Care and Use of Laboratory Animals of Zhengzhou University (Zhengzhou, China).

Tumour model and treatments

Female BALB/c mice were subcutaneously injected with 1 × 10⁵ syngeneic CT26 cells to establish a colorectal tumour model. Tumour sizes were measured using a digital caliper, and tumour volumes were calculated according to Eq. (1). Treatment was initiated after the tumours had grown for 8-10 days and reached a palpable size of 40-80 mm³. Tumour-bearing mice were randomly grouped; LC4 was injected peritumorally and LC4-PLG-RGD was administered via the tail vein, for 14 days. Normal saline was used as the negative control. Tumour volume was measured every 2 days and body mass was weighed.

Peptide binding assay by flow cytometry

In brief, 5 × 10⁵ CHO-K1 cells transfected with pLVX-Puro/hCTLA-4 or pLVX-Puro/mCTLA-4 were used. For all flow cytometry assays, cell suspensions were incubated with rat serum to block Fc receptors, stained with the corresponding antibody or biotin-labelled peptides at 4 °C for 30 min, washed twice, and then analysed by flow cytometry.
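The tumour-volume equation referenced as Eq. (1) above is not reproduced in this text. A common convention for two-axis caliper measurements is the modified-ellipsoid formula V = (length × width²)/2; the short sketch below uses that formula as an assumption, with entirely hypothetical measurements, to show how the volumes tracked every 2 days would be computed.

```python
# Minimal sketch of a caliper-based tumour volume calculation of the kind referenced as Eq. (1).
# The equation itself is not recoverable from the text; the widely used modified-ellipsoid
# convention V = (length * width^2) / 2 is assumed here, and all measurements are hypothetical.

def tumour_volume_mm3(length_mm: float, width_mm: float) -> float:
    """Approximate tumour volume from two perpendicular caliper measurements (mm)."""
    return 0.5 * length_mm * width_mm ** 2

# Hypothetical longitudinal measurements for one mouse, (length, width) in mm, every 2 days.
measurements = [(4.0, 3.5), (5.2, 4.1), (6.8, 5.0), (8.5, 6.2)]
volumes = [round(tumour_volume_mm3(l, w), 1) for l, w in measurements]
print(volumes)  # [24.5, 43.7, 85.0, 163.4] mm^3 for these made-up values
```

Under this convention, treatment would typically begin once the computed volume enters the palpable 40-80 mm³ window described above.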
Selection on MHC class II supertypes in the New Zealand endemic Hochstetter’s frog The New Zealand native frogs, family Leiopelmatidae, are among the most archaic in the world. Leiopelma hochstetteri (Hochstetter’s frog) is a small, semi-aquatic frog with numerous, fragmented populations scattered across New Zealand’s North Island. We characterized a major histocompatibility complex (MHC) class II B gene (DAB) in L. hochstetteri from a spleen transcriptome, and then compared its diversity to neutral microsatellite markers to assess the adaptive genetic diversity of five populations (“evolutionarily significant units”, ESUs). L. hochstetteri possessed very high MHC diversity, with 74 DAB alleles characterized. Extremely high differentiation was observed at the DAB locus, with only two alleles shared between populations, a pattern that was not reflected in the microsatellites. Clustering analysis on putative peptide binding residues of the DAB alleles indicated four functional supertypes, all of which were represented in 4 of 5 populations, albeit at different frequencies. Otawa was an exception to these observations, with only two DAB alleles present. This study of MHC diversity highlights extreme population differentiation at this functional locus. Supertype differentiation was high among populations, suggesting spatial and/or temporal variation in selection pressures. Low DAB diversity in Otawa may limit this population’s adaptive potential to future pathogenic challenges. Background The frog family Leiopelmatidae contains four species of the genus Leiopelma [1]. These frogs are among the most archaic in the world [2] and only found in New Zealand. Leiopelma hochstetteri is a small, semi-aquatic species [3]. It is the most widespread and common species within this genus, but populations are fragmented and scattered over an extensive area of the North Island ( Figure 1) [4,5]. Subfossil evidence indicates that L. hochstetteri was historically more widely distributed throughout the North Island and in the northern half of the South Island [6]. The introduction of mammalian predators and habitat modifications following human settlement of New Zealand in the 17th century have been major contributors to the modern-day fragmentation and population declines [6]. Populations show significant genetic and cytogenetic distinctiveness [7][8][9], with 13 evolutionarily significant units (ESUs) defined using mitochondrial and nuclear genetic markers [10]. Molecular dating has estimated that this genetic differentiation originated from the early Pleistocene, a geographically turbulent period in New Zealand that would have impacted population connectively [10]. The overall population size of L. hochstetteri is estimated at greater than 100,000 mature individuals, but a population decline of at least 10% in total population or area of occupancy over the next three generations has been predicted [5]. As such, L. hochstetteri has been classified as "Vulnerable" on the IUCN Red List of Threatened Species [11] and as ' At Risk: Declining' under the New Zealand threat status criterion [5]. In its current fragmented state, L. hochstetteri faces many threats, most significantly the alteration of rocky stream ecosystems by land management processes such as logging, farming or by feral stock, as well as predation by introduced mammals [12][13][14]. High population structuring in L. hochstetteri, combined with these threatening processes, has significant conservation implications [5,10]. 
Previous genetic studies employed a variety of neutral genetic markers, but information on functional diversity is lacking. In the current study, we characterize diversity in an adaptive genetic region, the major histocompatibility complex (MHC). The MHC is a large gene family with a vital role in the vertebrate immune response [15]. The class II molecule is a heterodimer formed by an α chain and a β chain that is expressed on the surface of antigen-presenting cells [15]. MHC class II molecules present extracellular peptides to T-helper cells, with residues encoded by the α1 and β1 domains contributing to peptide recognition and binding [15]. These domains interact with extracellular peptides, such as those derived from bacterial, parasitic or fungal pathogens, and are usually highly polymorphic, driven by pathogen-mediated selection and mate-choice [16][17][18][19][20]. In natural populations, diversity of functional genomic regions, such as the MHC, is affected by both selective and neutral evolutionary forces. By contrast, non-functional genetic regions will reflect primarily only neutral forces, such as genetic drift and gene flow. Comparing functional and neutral diversity allow researchers to infer the relative influence of selection on this adaptive gene region [21][22][23][24][25]. For conservation aims, MHC has increasingly been used as an indicator of adaptive genetic variation, and has been recently employed to evaluate immunogenetic health [26], delineate conservation units [27], evaluate genetic restoration [28], and evaluate the genetic impacts of translocations [29]. In its application to infer immunogenetic health, populations with high MHC diversity may be better able to adapt to future pathogenic challenges as the chance of resistance alleles being present is greater [19], thus with decreased potential extinction risk relative to less-diverse populations. This conjecture has raised some debate [25], with multiple examples of long-term survival of populations with low MHC diversity [30][31][32]. Nevertheless, multiple empirical examples of associations between MHC variation and disease susceptibility [33][34][35] highlight the potential susceptibility of populations of low MHC diversity to disease epidemics. In this study, we generated a L. hochstetteri spleen transcriptome to identify a MHC class II B gene (DAB). We then characterized diversity in the β1 domain (exon 2) and compared the results to diversity from nine microsatellite markers across five ESUs. Our results showed high MHC polymorphism with extremely high differentiation between studied ESUs, a pattern that was not reflected in the microsatellites. Samples One L. hochstetteri individual (collected from a mountain stream in the Pukeamaru region on the east coast of the North Island, New Zealand, 37°38"S, 178°15"E, Figure 1) was sacrificed for the spleen transcriptome preparation under Department of Conservation Authority OT-29713-FAU. Spleen tissue was collected and fixed in liquid nitrogen and subsequently stored at −80°C. The tissue was disrupted under liquid nitrogen in a mortar and pestle then RNA extracted using TRIzol reagent (Invitrogen) according to manufacturer's instructions. RNA quality was assessed on a 2100 Bioanalyzer (Agilent Technologies Genomics) and extractions were stored at −80°C. Toe clip samples for genetic analyses were collected under ethics approval granted by the Department of Conservation New Zealand Animal Ethics Committee (permit no. 181). 
This study used samples from six sites, representing five of the 13 ESUs described by Fouquet et al. [10]: the Brynderwyn Range; Northern, Central and Southern Coromandel; and Otawa ( Figure 1). Genomic DNA was extracted using an AquaPure Genomic Tissue Kit (Bio-Rad), following the manufacturer's protocol, and stored at −20°C. Microsatellite markers A total of 168 individuals ( Figure 1) were genotyped for 11 polymorphic microsatellite loci following the reaction and thermocycling protocols of Clay et al. [36]. Full microsatellite genotyping protocols, including multiplex details, are provided in Additional file 1: Supplementary Methods section "Microsatellite genotyping". Characterization and genotyping of MHC class II DAB gene To characterize L. hochstetteri MHC sequences, we generated a spleen transcriptome, sequenced on a Roche GS Junior 454 Sequencer (Landcare Research, Auckland). These data allowed us to sequence a 745-bp fragment of MHC IIB (designated DAB; Genbank accession: KP892996) incorporating partial exon 2 through to the 3' UTR, as predicted from alignment with X. laevis MHC class II beta sequence (Genbank accession: D13684 [37]). Full details of the specific protocol used to identify and characterize L. hochstetteri MHC DAB are provided in Additional file 1: Supplementary Methods section "Transcriptome sequencing and MHC class II DAB characterization′. To assay population levels of diversity, PCR primers (LehoIIBUpper: 5΄− GCGAAGTCTCAGTGTT −3΄ and LehoIIBLower: 5΄− CTTGTCTACAGTGTAAGGTT −3΄) were designed using Oligo6 (Molecular Biology Insights, Inc), targeting a 249-252 bp fragment within exon 2 of the MHC DAB gene (full length of exon 2 predicted to be 282 bp). These primers were designed to anneal to the most evolutionarily conserved regions, as predicted from multiple sequence alignments with anuran class II beta genes, while including the maximal number of putative peptide binding sites predicted from X. laevis MHC class II beta sequence. We carried out two PCRs per individual and cloned these using the pGEM-T Easy Vector System II (Promega), following the manufacturer's recommendations. PCRs were carried out at Landcare Research Auckland laboratories (New Zealand) prior to export of the synthetic DNA products to the Australian Wildlife Genomics Group laboratories at the University of Sydney (Australia) where all further protocols were carried out. Full PCR and cloning methods are provided in Additional file 1: Supplementary Methods section "Amplification and cloning"). Twelve clones per PCR product were sequenced using the T7 primer at the Australian Genome Research Facility, Ltd (AGRF, Sydney, Australia). A maximum of two allele variants was obtained per individual, indicating that our primers were specific for a single locus. A clone sequence was accepted as a DAB allele if it was isolated from two independent PCR reactions (either two reactions from the same individual, or from two different individuals). Polymorphism analyses Cloned DAB sequences were checked for quality, trimmed, and aligned using ClustalW, all within BioEdit v7 [38]. For each individual, duplicate sequences were summarized into genotypes with two alleles retained for analysis. Individuals that either returned new alleles (previously unobserved in other individuals) or more than two alleles from the first PCR product were retyped using the second PCR product. All except one individual were confirmed to have no more than two alleles using this cloning and sequencing approach. 
One individual from Northern Coromandel (sample ID: CorC327) showed 4 alleles as isolated from 2 independent PCR products. As this could either represent contamination of the DNA sample or duplication of the DAB gene within the individual, we removed the individual from further analysis. DAB sequence polymorphism statistics (haplotype diversity, Hd; number of polymorphic sites, S; nucleotide diversity, π; average number of nucleotide differences, k) were calculated using DnaSP v4.10 [39]. GenAlEx v6.5 [40] was used to calculate microsatellite polymorphism statistics (observed, H O , and expected heterozygosity, H E ). Arlequin v3.5 [41] was used to test for Hardy Weinberg Equilibrium for both microsatellite marker and DAB genotype data, and used to test for linkage disequilibrium between microsatellite markers. For both marker types, allelic richness for each ESU was calculated using FSTAT v2.9.3.2 [42]. To control for differences in sampling sizes, 95% CIs of the expected number of alleles based on sample size alone were calculated using permutation tests in R 3.0.2 (R Core Team). To investigate the contribution of genetic drift on the DAB gene, we performed a linear regression for allelic richness of microsatellite markers and the DAB gene in R, under the assumption that our microsatellite loci were selectively neutral. Traditional measures of genetic differentiation such as F ST and G ST can give misleading results when calculated for highly polymorphic genes, such as the MHC [43]. These statistics approach zero when gene diversity is high even when populations are completely differentiated (no shared alleles). As such, using F ST to compare populations for microsatellites, loci which have limited variability, versus our MHC gene, which has extremely high variability, would be inappropriate. Therefore, we used G' ST [44] and D EST [43] to measure both MHC and microsatellite differentiation between our populations, calculated using SMOGD 1.2.5 [45]. Isolation by distance at each marker type was investigated by Mantel tests as implemented by the ade4 1.6-2 package in R. Tests of recombination and positive selection A global test for positive selection on MHC was carried out in MEGA using a codon-based Z test for selection across the DAB alleles (test statistic: d N -d S ). We used the HyPhy package [46] implemented on the Datamonkey webserver [47] for model selection, to test for recombination and to detect sites under selection. The model selection tool [48] was used to identify the optimal nucleotide substitution model for further analyses. Evidence for recombination among L. hochstetteri DAB alleles was detected using single breakpoint (SBP) analysis using small sample AIC (AIC C ) [49]. Recombination was taken into account in the implementation of three separate models of codon-based positive selection: singlelikelihood ancestor counting (SLAC) [50], random-effects likelihood approach (REL) [50], and mixed effect model of evolution (MEME) [51], which use different methods to detect sites under selection. We adopted a conservative approach whereby amino acid sites identified by two or more models were retained as sites under positive selection for further analyses. Identification of DAB supertypes Amino acid sites under positive selection as identified above were used for cluster analysis to define DAB supertypes, following Doytchinova and Flower [52]. All other amino acid sites (i.e. 
those that were not found to evolve under positive selection) were excluded during supertype definition. Each retained site was characterized according to five physiochemical descriptor variables: z1 (hydrophobicity), z2 (steric bulk), z3 (polarity), z4 and z5 (electronic effects) [53]. Discriminant analysis of principle components (DAPC) was implemented to define DAB gene clusters using adegenet 1.4-0 package in R [54,55]. This analysis implements a k-means clustering algorithm based on Bayesian Information Criterion (BIC); we used a ΔBIC ≤2 to identify optimal numbers of clusters. DAPC was then performed on retained principal components to assign LehoDAB alleles to a supertype. Population differentiation based on supertypes was estimated using SMOGD and isolation by distance analyzed by Mantel testing, as described above. Genetic diversity at microsatellite markers and MHC class II-DAB Two microsatellite markers (Lhoc10 and Lhoc26) showed variable amplification success across sampled ESUs and were therefore removed from the analysis. All individuals were successfully genotyped at the remaining nine microsatellite markers. A total of 34 alleles were observed (2-5 alleles per marker). The microsatellite markers showed an average observed heterozygosity of 0.164 across the ESUs (ranging from 0.073-0.327). We observed eight occurrences where microsatellite allele frequencies deviated from Hardy-Weinberg Equilibrium (HWE) but these were not statistically significant after Holm-Bonferroni correction for multiple comparisons (Additional file 2: Table S1). From 36 comparisons, linkage disequilibrium was observed between six microsatellite marker pairs, but these were not statistically significant after Holm-Bonferroni correction. At the MHC class II-DAB locus, 74 sequence variants (hereafter referred to as "alleles") were characterized from 121 L. hochstetteri samples (Genbank accessions: KP892997-KP893070). DAB alleles were between 216-219 nucleotides, encoding a 72-73 amino acid product that varied in length on account of a single amino acid indel at position 69. These 74 alleles contained 76 polymorphic nucleotide sites with an average of 19.6 nucleotide differences (k), gene diversity (Hd) of 0.958, and nucleotide diversity (π) of 0.0906 (Table 1). Global tests for positive selection provided evidence for historical selection on the DAB region with a significant test statistic (d N -d S ) of 2.08 (P-value = 0.020). Lower DAB diversity was observed in the Southern Coromandel and Otawa ESUs and lower microsatellite diversity was observed in Northern Coromandel, Southern Coromandel and Otawa (Table 1). By linear regression analysis, we observed a correlation between allelic richness of microsatellites and DAB (slope = 12.085; P-value = 0.009, N = 5 populations). Statistically significant homozygous excess (P-values <0.001) was observed at the DAB locus for all ESUs, except Otawa (P-value = 1). We could not rule out the presence of null alleles in these four ESUs. If null alleles are present, they may indicate a single common allele or cluster of alleles that were not amplified by our DAB primers, despite employing a low annealing temperature during PCR (53°C) and designing primers for conserved regions of the DAB exon 2. If the homozygote excess is due to null alleles, there may be more DAB alleles present in Brynderwyn, Northern Coromandel, Central Coromandel, and Southern Coromandel, in excess to those characterized here. The Otawa population appeared to be in Hardy-Weinberg equilibrium. 
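The supertype definition described in the Methods above (each positively selected residue encoded with five physicochemical z-descriptors, then the allele vectors clustered with the number of clusters chosen by BIC) can be sketched as follows. The original analysis used DAPC and k-means in the R package adegenet; in this illustrative Python analogue, a Gaussian-mixture BIC is substituted for that step, and both the z-scale values and the toy alleles are placeholders rather than the study's data or code.

```python
# Illustrative analogue of the DAB supertype definition: encode each positively selected
# residue with five physicochemical z-descriptors, cluster the allele vectors, and pick the
# number of clusters by BIC. The study used DAPC/k-means in adegenet (R); scikit-learn's
# Gaussian mixture BIC is substituted here, and the z-scale values and alleles are placeholders.
import numpy as np
from sklearn.mixture import GaussianMixture

# Placeholder z-scale table: residue -> (z1 hydrophobicity, z2 steric bulk, z3 polarity, z4, z5).
# A real analysis would use the published z-scale values for all 20 amino acids.
Z_SCALES = {
    "A": (0.24, -2.32, 0.60, -0.14, 1.30),
    "R": (3.52, 2.50, -3.50, 1.99, -0.17),
    "N": (3.05, 1.62, 1.04, -1.15, 1.61),
    "D": (3.98, 0.93, 1.93, -2.46, 0.75),
}

def encode(selected_sites: str) -> list[float]:
    """Concatenate the five descriptors of every positively selected site of one allele."""
    return [z for aa in selected_sites for z in Z_SCALES[aa]]

# Toy alleles reduced to their positively selected sites only (placeholders).
alleles = ["ARD", "AND", "RND", "DRA", "NAR", "DNA", "RAD", "NRD"]
X = np.array([encode(a) for a in alleles])

# Fit mixtures over a range of cluster numbers and keep the BIC-optimal solution
# (the published analysis used a deltaBIC <= 2 rule rather than a simple minimum).
fits = {k: GaussianMixture(n_components=k, covariance_type="diag", random_state=0).fit(X)
        for k in range(1, 5)}
best_k = min(fits, key=lambda k: fits[k].bic(X))
supertype_of_allele = fits[best_k].predict(X)
print(best_k, supertype_of_allele)
```

In the study's actual DAPC procedure this step identified four supertypes of 10-33 alleles each; the sketch only mirrors the general shape of the computation.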
Recombination and Positive selection Recombination was detected in the DAB sequence dataset with strong AIC C support for breakpoint located at nucleotide site 162. With this recombination taken into account, positive selection was detected at 11 sites from at least two tests (SLAC/REL/MEME) ( Table 2). Six of these sites overlapped with peptide binding residues predicted from alignment with Xenopus laevis [37] (Figure 2). Differentiation between populations Genetic differentiation estimated across microsatellite markers ranged from G' ST 0.077-0.587 and D EST 0.039-0.397 (Table 3). Otawa showed the highest levels of differentiation from all other ESUs (D EST 0.174-0.397), while the four ESUs from the Coromandel Peninsula displayed the lowest differentiation from one-another (D EST 0.039-0.098). For MHC, ESUs were highly differentiated at the DAB locus, with only three alleles (4.1%) observed in more than one ESU (LehoDAB*49 and LehoDAB*55 present in Northern Coromandel and Central Coromandel; LehoDAB*73 present in Southern Coromandel and Otawa). Correspondingly, population differentiation measures for the DAB locus were very high (Table 3). Northern Coromandel and Central Coromandel also shared amino acid products encoded by different sequence variants (LehoDAB*15 and Leho-DAB*44; LehoDAB*21 and LehoDAB*45; and Leho-DAB*29 and LehoDAB*50). DAPC analysis revealed an optimum of four supertype clusters based on a ΔBIC ≤ 2 (Additional file 3: Figure S1). Each supertype cluster contained between 10-33 DAB alleles (Additional file 2: Table S2). All supertypes were represented in every ESU, except Otawa, which had only two LehoDAB alleles present ( Figure 3). Interestingly, these two alleles were assigned to different supertypes, implying that this ESU, despite having relatively low MHC diversity, does have some functional variability present. Population differentiation at the level of DAB supertype ranged from 0.044-0.835 (D EST ) and 0.012-0.397 (G' ST ) ( Table 3). The Northern and Central Coromandel ESUs showed the lowest supertype differentiation from one-another, with D EST of 0.044 and G' ST of 0.012. Mantel tests found no significant association between genetic differentiation and geographic distance at microsatellite markers, MHC or MHC supertypes (Additional file 2: Table S3). Although we implemented a conservative approach to identifying 11 positively selective sites for our supertyping analysis, we note that the MEME approach of predicting sites under pervasive versus episodic selection might be more appropriate to infer balancing selection on the MHC. As such, we repeated our supertyping analysis using the 16 sites predicted under just MEME, for comparison. Similar to our main analysis, the MEME-only results gave three supertypes, which were similarly represented in each population with the exception of Otawa, which contained two supertypes (Additional file 3: Figure S2). Population differentiation estimates were qualitatively similar to our main analysis, with lower differentiation between ESUs on the Coromandel Peninsula (Additional file 2: Table S4). Discussion We characterized a MHC II-DAB locus in this threatened species, with 74 alleles identified across five populations. MHC diversity present in wild populations is the result of the interplay between neutral evolutionary forces, such as genetic drift and gene flow, and selective forces such as mate choice and host-pathogen co-evolution [16,17,56,57]. 
Generally, positive selection may act on beneficial mutations arising within the MHC, and this variation is hypothesized to be maintained by selective forces mediated by pathogen variability and disassortative mating [16,56,57]. We found evidence for positive selection acting on the DAB in L. hochstetteri and extreme population differentiation between ESUs: every population was nearly unique in their DAB sequence variation. In four of our five populations, we found a high excess of homozygosity; we cannot rule out null alleles as a possible driver of this pattern. Nevertheless, the patterns we observed at the detected alleles are still informative for inferring patterns of diversity and selection across populations. As selection acts on phenotypes, not genotypes, we examined the functional properties of the DAB alleles characterized in L. hochstetteri, to investigate whether ESUs retained similar DAB peptide binding functions despite lacking shared DAB alleles. Supertyping approaches have been employed in several other MHC studies in wild populations for both studying population diversity and for investigating associations between MHC supertypes and disease [34,35,58,59]. We identified four DAB supertypes that were all represented in each ESU, excluding Otawa, which had only two supertypes present. Despite nearly all sites sharing the same DAB supertypes, considerable population differentiation was observed, resulting from differences in supertype frequencies (Table 3; Figure 3). A notable exception to this pattern is the ESU pair of Northern and Central Coromandel, which were very weakly differentiated (Table 3). In general, amphibians are regarded to be poor dispersers with high site fidelity [60,61]. Leiopelma hochstetteri is a habitat specialist; preferring rock piles in unsilted streambeds in which they have micro-territories [4] with limited daily movements [12]. Dispersal behavior and ability within the species is still unknown; however the extreme genetic structuring seen at adaptive genetic markers (current study), neutral genetic markers [7,10] and cytogenetic distinctions [9,62] does imply minimal dispersal. If this study were expanded to other L. hochstetteri ESUs, it is likely that additional DAB alleles would be characterized, and that each ESU could have a unique pattern of DAB variation with very few shared DAB alleles. If ESUs lack connectivity, each unit would experience distinct pathogen diversities over their evolutionary histories and the spatially and/or temporally variable selection would contribute to the observed differentiation of DAB supertypes [18]. The low differentiation observed between Northern and Central Coromandel for DAB supertype suggests the presence of similar environmental factors across the continuous habitat of the Coromandel Peninsula, resulting in similar selective pressures in both ESUs. We are unaware of any studies of disease prevalence in L. hochstetteri on the Coromandel Peninsula, but such investigation would shed more light into the potential pathogen-mediated selection pressures. We observed a correlation between allelic richness of microsatellites and DAB, implying the contribution of genetic drift in shaping MHC variation in our studied ESUs (Table 1). Notably, the Southern Coromandel and Otawa ESUs possess lower genetic diversity at both marker types, compared to the other ESUs. This may reflect the predominance of genetic drift over balancing selection acting on the MHC in these ESUs. 
In particular, Otawa had the lowest DAB diversity of all studied ESUs: only two allele variants were present, with the predominant allele (LehoDAB*74) occurring at a frequency of 0.947, and the alternate allele (LehoDAB*73) only present in two heterozygotes (n = 19 individuals). This could reflect strong directional selection acting in this ESU drawing the LehoDAB*74 allele close to fixation, or a depletion in DAB diversity due to population decline or bottleneck. A meta-analysis of the relative roles of genetic drift and balancing selection on MHC variation revealed that there is may be greater loss of MHC than neutral diversity during population bottlenecks [21]. Furthermore, simulations have shown that balancing selection acting in small populations can deplete MHC variation faster than drift [63]. We suggest that the Otawa ESU, with its limited MHC variation, may be at a greater risk from disease outbreaks. Investigations into L. hochstetteri disease and pathogens are lacking, but have attracted high priority in the recently published native frog recovery plan [1] and would greatly improve our understanding of the contribution of MHC variation to pathogen resistance/susceptibility in this species. The DAB gene is not, however, the only locus involved in pathogen recognition and immune response [64], and further immunogenetic investigation into other functionally significant genes, such as toll-like receptors or anti-microbial peptides, in this population will improve our risk estimates. A recent assessment of the threat status of Leiopelma spp proposed that the Otawa L. hochstetteri population was conservation dependent [5]. Human interventions, such as translocation of individuals with DAB alleles spanning different supertypes to this site, may increase genetic diversity. However, our results do not rule out the possibility that limited DAB diversity in this ESU represents local adaptation; close genetic monitoring of translocation success could reveal this. Leiopelma hochstetteri provides an important benchmark for future MHC studies in other Leiopelma species, which are substantially more vulnerable, with fewer numbers and fewer populations [1]. Leiopelma archeyi occurs in two natural populations on the New Zealand North Island: on the Coromandel Peninsula and in the Whareorino forest [65,66]. Between 1996 and 2001 the Coromandel population experienced a rapid decline [67] and is now persisting at severely reduced numbers. Leiopelma pakeka occurs in a stable, natural population in remnant forest on Maud Island, and in two introduced populations, in another habitat on Maud Island and on Motuara Island [68,69]. Leiopelma hamiltoni is found in a single natural population in a small rock tumble on Stephens Island, with a population estimate of only 300 mature individuals, and an introduced population established in 2004 on Nukuwaiata Island [1,[70][71][72]. The MHC primers developed herein could be used in these related species to help understand functional genetic variation in these populations and assist in translocation planning and monitoring to ensure adequate supertype variation is retained in both donor and translocated populations. Longitudinal studies could provide insights into selective pressures acting in introduced populations, where frog-naïve environments may not harbor co-evolved pathogens. Finally, cross-species studies would be valuable for identifying long-term selection pressures that may have shaped MHC diversity within the ancient Leiopelma genus. 
Conclusions Our study found high MHC-DAB allelic diversity in L. hochstetteri as a result of positive selection and extremely high population differentiation. Nearly every population possessed a unique DAB allele pool. DAB-supertype differentiation was high among ESUs suggesting that selection pressures vary spatially and/or temporally. Northern and Central Coromandel were exceptions to this, with lower differentiation of DAB supertype frequencies, which may imply similar selective pressures as a result of shared environmental characteristics. Very low DAB diversity in Otawa, with only two alleles present, may contribute to a greater extinction risk from disease outbreaks in this ESU. for their support in this research. We would also like to thank Amanda Haigh, previously of Department of Conservation NZ, who was instrumental in obtaining approval for DNA sampling and coordinating collections. Thanks to Richard Jacob-Hoff of Auckland Zoo, and Alice Dennis and Duckchul Park of Landcare Research, Auckland. Funding support was from Landcare Research and the University of Sydney. CEG acknowledges the support of San Diego Zoo Global. Author details
Combined actions of Na+/K+-ATPase, NCX1 and glutamate-dependent NMDA receptors in ischemic rat brain penumbra

The instrumental role of Na+ and Ca2+ influx via the Na+/K+ adenosine triphosphatase (Na+/K+-ATPase) and the Na+/Ca2+ exchanger 1 (NCX1) is examined in the N-methyl-D-aspartate (NMDA) receptor-mediated pathogenesis of the penumbra after focal cerebral ischemia. An experimental model of 3, 6, and 24 h focal cerebral ischemia by permanent occlusion of the middle cerebral artery was developed in rats. Changes in the protein expression of Na+/K+-ATPase and NCX1, as well as of the functional NMDA receptor subunits 2A and 2B (NR2A and NR2B), in the penumbra were assessed using quantitative immunoblotting. The most prominent changes of Na+/K+-ATPase (78±6%, n=4, *P<0.05) and NCX1 (144±2%, n=4, *P<0.05) in the penumbra developed 24 h after focal cerebral ischemia. The expression of NR2A in the penumbra was significantly increased (153±9%, n=4, *P<0.05), whereas the expression of NR2B was significantly decreased (37±2%, n=4, *P<0.05), compared with sham-operated controls 3 h after focal cerebral ischemia. However, this pattern of NR2A and NR2B expression in the penumbra was reversed 24 h after focal cerebral ischemia (NR2A: 40±7%; NR2B: 120±16%, n=4, *P<0.05). Moreover, the decrease in expression of neuronal nuclei (NeuN) in the penumbra was more prominent than that of glial fibrillary acidic protein (GFAP) 24 h after focal cerebral ischemia. These findings imply that intracellular Na+ accumulation caused by decreased Na+/K+-ATPase exacerbates Ca2+ overload in cooperation with the increased NCX1 and NR2B-containing NMDA receptors, which may play an important role in the pathogenesis of the penumbra.

Introduction

Depending on the duration of the ischemia, permanent occlusion of a cerebral artery has been shown to result in characteristic pathophysiological events. In general, during focal cerebral ischemia, neuronal damage evolves over time and space and is not limited to the lesion itself but also extends to the surrounding penumbra, where it is closely associated with intracellular Na+ overloading and ischemic depolarization of neuronal cells (Fuller et al., 2003). There is general agreement that the ischemic depolarization during ischemia is probably due to depression of Na+/K+-ATPase activity and the resultant elevation of [K+]o and interstitial accumulation of glutamate (Glu) from excitatory synaptic terminals (Ben-Ari 1990; Martin et al., 1994). It has been suggested that Glu accumulation in the interstitial space, resulting from reverse operation of the Glu transporter during ischemia or anoxia, may lead to cell death due to NMDA receptor-induced Ca2+ overload (Madl & Burgesser, 1993). The Na+/Ca2+ exchanger 1 (NCX1) is a transmembrane protein that is not only expressed in the brain and heart but has also been found in many other tissues and cells, including kidney, skeletal muscle, smooth muscle, lung, and spleen (Quednau et al., 1997). NCX1 has been reported to catalyze the extrusion of one intracellular Ca2+ and the influx of three extracellular Na+ in each reaction cycle, depending on the Na+ gradient generated by the Na+/K+-ATPase. It has been revealed that NCX1 can function in the forward and reverse directions, and that its activity is regulated by many factors including Na+, Ca2+, intracellular pH, and ATP (Boscia et al., 2006).
In general, NCX1 plays a role in glial and neuron damage induced by ischemia, glucose deprivation, and excitotoxicity, although controversy remains as to whether net NCX1 activity is beneficial or detrimental (Matsuda et al., 2001;Pignataro et al., 2004). Recent studies demonstrated that inhibition of NCX1 by substitution of Na + with Li + and Cs + affects NMDA-induced intracellular Ca 2+ increase in glucosedeprived and depolarized cerebellar granule cells (Blaustein & Lederer, 1999;Kiedrowski 1999). The N-methyl-D-aspartate (NMDA) type of ionotrophic glutamate receptor has been demonstrated to play a key role in neuronal plasticity, learning, and memory in the central nervous system due to its high Ca 2+ permeability (Mori & Mishina, 1995). Although inappropriate activation of the NMDA receptor and neurotoxicity has been well described (Lipton & Rosenberg, 1994), little is known regarding the modulation of individual subunits that make up the NMDA receptors after ischemia. Recent studies demonstrated that treatment with the NMDA-antagonist MK-801 in ouabain, Na + /K + -ATPase inhibitor, -induced excitotoxicity attenuated the infracted volume of brain tissue exhibiting the ouabaininduced injury is indeed excitotoxic in nature which needs overestimation of glutamate receptors such as the NMDA receptors (Lees & Leong, 1996;Veldhuis et al., 2003). In particular, ouabain-induced drop in the driving force of Ca 2+ influx via NMDA channels was offset by an increased driving force of reverse NCX1 (Czyz et al., 2002). Structurally, NMDA receptors are hetero-oligomeric proteins formed by obligatory NMDA receptor 1 subunit (NR1) interacting with NMDA receptor 2A-2D subunits (NR2A-D), conferring functional variability (Monyer et al., 1992;Ishii et al., 1993). The prominent NR2 subunits in adult brain are reported to be NR2A and NR2B. Considerable interest has been placed on the potential involvement of NMDA receptors in the neurodegenerative process that follows ischemia or hypoxia. Given that glutamate receptors, and in particular the NMDA receptor subtype, allow an influx of extracellular Ca 2+ after stimulation, changes in the properties or numbers of these receptors could lead to the presentation of inappropriate amounts of intracellular Ca 2+ to the neurons (Besancon et al., 2008). Recent studies demonstrated that accumulation of glutaric acid (GA), analogue of glutamate, in the glutaric aciduria type I (MIM 231670) and chronic stimulation of NMDA simultaneously down-regulate the NR2B subunit and decreases Na + /K + -ATPase activity (Resink et al., 1996;Kölker et al., 2002). However, NR2B was up-regulated by tetrodotoxin (Audinat et al., 1994), suggesting a contribution of spontaneous electrical activity to block the fast Na + current in the neuronal cells. The present study therefore aimed at examining whether the protein expression of Na + /K + -ATPase, NCX1, and functional NMDA receptor subunits (NR2A and NR2B) in the ischemic penumbra were altered. This was done because neurons in the penumbra undergo acute and delayed elevations of intracellular Na + and Ca 2+ levels through the significant interactions and feedback between the glutamatedependent NMDA receptors and Na + and Ca 2+ ion channels (Na + /K + -ATPase and NCX1), which have been reported to directly or indirectly, lead to cell death after focal cerebral ischemia (MacDonald et al., 2006;Besancon et al., 2008). 
Induction of focal cerebral ischemia in rats All studies were carried out in a 9-week-old male Sprague-Dawley rats (n=16, 250~280 g) that had free access to drinking water and standard rodent food pellets. The experimental procedures were reviewed and approved by the Animal Care and Use Committee of Dongguk University (IRB: 09-45). Further, animal care and use were in accordance with the guidelines of the National Institutes of Health (Bethesda, MD). Focal cerebral ischemia was induced by occlusion of the left middle cerebral artery as described previously (Hasegawa et al., 1994). Anesthesia was induced with 3% isoflurane in a mixture of oxygen/nitrous oxide (30 : 70) and rats were maintained with 1% isoflurane in the oxygen/nitrous oxide gas mixture. A catheter was inserted and positioned in the femoral artery and arterial blood pressure was measured and recorded continuously throughout the procedures. Body temperature was monitored continuously during all procedures using a rectal thermometer probe. Temperature control was accomplished with the aid of a heating pad which was kept at 37 o C. Under the dissecting microscope, left middle cerebral artery was occluded for 3 h, 6 h, and 24 h using a 4-0 mono filament (3 cm in length) coated with a mixture of silicone resin. Sham-operated controls rats were subjected to middle cerebral artery surgery without occlusion. After 3 h, 6 h, and 24 h of occlusion of the middle cerebral artery, rats were anesthetized with isoflurane again and the brain tissues were removed for 2% 2,3,5-triphenyltetrazolium chloride (TTC; Sigma Aldrich Corp., St Louis, MO) staining. TTC staining for infarction and penumbra zones Rats were sacrificed and their brains were quickly removed and sectioned into 2-mm-thick slices starting from the frontal pole using a Brain Matrix Slicer (Vibratome Co.) (n=4). Slices were then immersed in TTC in a Petri dish and incubated at 37 o C for 20 minutes. Slices were flipped at the 10-minute mark to ensure staining of anterior and posterior faces. Cresyl violet staining At the scheduled time, sham-operated (n=3) and ischemic rats (n=3) were reanesthetized and their brain were fixed with a transcardiac infusion of 4% paraformaldehyde following perfusion with isotonic saline to remove blood from the cerebral vasculature. The brain was removed and post-fixed in the same fixative for 12 hours. Perfused brains were then paraffin-embedded and serial coronal sections 5 μm thick were obtained at the level of dorsal third ventricle (bregma-4.16 mm). Paraffin wax was removed in xylene over night at room temperature (RT) and the sections were rehydrated with ethanol (99%, 96%, 70%). After washing in distilled water, the sections were then stained with cresyl violet for 30 minutes at RT. The sections were then treated successively with ethanol (50%, 70%, 95%, 100%) and a differentiator (glacial acetic acid and 95% ethanol). SDS-PAGE and immunoblotting Penumbral or control tissues were removed from the TTC-stained brains of ischemic (n=4) and sham-operated rats (n=3), respectively for immunoblotting analysis. For protein extraction, the tissue was homogenized in homogenizing buffer (0.32 mM sucrose, 25 mM imidazole, 1 mM ethylenediaminetetraacetate (EDTA), pH 7.2 containing 8.5 mM leupeptin, 1 mM phenylmethylsulfonyl fluoride). 
Samples of homogenates were run on 9~15% sodium dodecyl sulfate-polyacrylamide gel electrophoresis (Bio-Rad Mini Protean II) in duplicates in which one gel was run in parallel and subjected to Coomassie blue (Coomassie brilliant blue 0.3 g, 2-propanol 200 ml, acetic acid) staining to assure identical loading. The other gel was subjected to immunoblotting. Presentation of data and statistical analysis Quantitative data are presented as mean±standard error of the mean (SEM). Comparisons between groups were made by unpaired student t-test. P values<0.05 were considered significant. Infarction and penumbra after focal cerebral ischemia In the current study, experimental focal cerebral ischemia was induced in rats by the permanent middle cerebral artery occlusion for 3, 6, and 24 hours. Slices were divided into two zones, i.e., infarction zone (marked in black arrow) and penumbral zone (marked in white arrows) in the ipsilateral hemisphere according to the TTC staining pattern (Fig. 1A, pMCAO-3h). The penumbra was defined in static terms as the cellular interface between the infracted core cells that were committed to die and unaffected area of normal blood flow. The -numbers represent distances from the bregma. Cresyl violet staining was undertaken to examine whether the severity of neuronal injury in the penumbra was associated with duration of focal cerebral ischemia. Viable cells (arrows) were significantly decreased from 3 to 6 h after focal cerebral ischemia (Fig. 1B, C). Moreover, viable cells were not detected 24 h after focal cerebral ischemia which was very similar to that of the ischemic core (Fig. 1D). This finding indicated that the extent of neuronal damage was associated with the Fig. 1. TTC (2,3,5-triphenyltetrazolium chloride) staining of brain slice from bregma -2.30 mm 3 h after permanent middle cerebral artery occlusion (pMCAO). Tissues of the penumbra (marked with white arrows) represented the red zone near the infarction zone in the ipsilateral hemisphere over a series of brain sections (A). Cresyl violet staining demonstrated that viable cells in the penumbra (marked with short black arrows) were significantly decreased 3 to 6 h after pMCAO (B, C). Moreover, viable cells were not detected 24 h after pMCAO which resembled the ischemic core (D). Scale bar=50 μm. Altered expression of Na + /K + -ATPase and NCX1 in the penumbra after focal cerebral ischemia To evaluate the effect of focal cerebral ischemia in the penumbra, immunoblotting analyses of Na + /K + -ATPase were performed ( Fig. 2A). The expression of Na + /K + -ATPase was not significantly altered as compared with that of the shamoperated controls at 3 or 6 h following focal cerebral ischemia. However, the expression of Na + /K + -ATPase was significantly decreased at 24 h of focal cerebral ischemia (78±6% of shamoperated controls, n=4, *P<0.05) (Fig. 2B). Furthermore, the expression of NCX1 was significantly increased 24 h after focal cerebral ischemia as compared with sham-operated controls (144±2% of sham-operated controls, n=4, *P<0.05) (Fig. 3A, B). 
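The group comparisons above (for example the 78±6% and 144±2% values relative to sham) rest on an unpaired Student's t test applied to band densities normalised to the sham-operated mean. The snippet below is a minimal sketch of that comparison using hypothetical densitometry values, not the study's raw data.

```python
# Minimal sketch of the unpaired Student's t test used for the immunoblot comparisons above.
# The densitometry values are hypothetical (n = 4 per group, expressed as % of the
# sham-operated mean), not the study's raw data.
from scipy import stats

sham_controls = [100.0, 104.0, 97.0, 99.0]   # normalized Na+/K+-ATPase band density, sham
ischemia_24h  = [80.0, 72.0, 76.0, 84.0]     # normalized band density, 24 h pMCAO penumbra

t_stat, p_value = stats.ttest_ind(sham_controls, ischemia_24h)  # unpaired, two-sided
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")  # P < 0.05 would be flagged as significant
```

With only four animals per group, as here, normality is hard to verify, which is one reason such immunoblot statistics are usually reported alongside the representative blots themselves.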
Discussion The present study revealed that 1) the altered expression of Na + /K + -ATPase and NCX1 in the penumbra after focal cerebral ischemia indicates that deranged transport of Na + , K + , and Ca 2+ which is closely associated with the NMDA receptor-mediated Ca 2+ influx ; 2) the different expression of NR2A-containing-and NR2B-containing glutamatedependent NMDA receptors in the penumbra may play different roles depending on the duration of ischemia; 3) prominent decrease of NeuN than that of GFAP in the penumbra may suggest that neurons in the penumbra are likely to be more susceptible to ischemic injury than astroglia. Reduction of Na + /K + -ATPase in the penumbra indicates the disruption of intracellular Na + and K + homeostasis after focal cerebral ischemia Middle cerebral artery occlusion produces regions of brain with near complete and incomplete ischemia (reduced blood flow). In general, areas of mild ischemic injury occur where Na + /K + ATPase is preserved, while in areas of more severe ischemia, ATP levels are low and Na + /K + -ATPase activity is reduced (D' Ambrosio et al., 2002). Therefore, significant reduction of Na + /K + -ATPase in the penumbra at 24 h as compared with those of 3 and 6 h indicates a deleterious effect of 24 h of ischemia. Furthermore, the current study verified the histological findings demonstrating that the extent of neuronal damage depends on the duration of ischemia. Several studies have examined the role of Na + /K + ATPase ion channels in hypoxic-ischemic neuronal damage and have concluded that Na + influx is an important initiating event leading to anoxic damage (Stys et al., 1992;Tasker et al., 1992). The decreased Na + /K + ATPase expression in the present study postulates the disturbance of NCX1 and/or NMDA-induced Ca 2+ influx by enhanced cellular K + efflux and Na + influx which result in ischemic depolarization in neurons and astrocytes. In general, Ca 2+ influx or extrusion of NCX1 Depends on the ability of Na + /K + ATPase to pump K + and Depolarized the plasma membrane (Silver et al., 1997). The reduction of Na + /K + ATPase on NMDAinduced Ca 2+ influx also might be related to an enhancement of Ca 2+ permeation of NMDA channels (Czyz et al., 2002). Furthermore, intracellular Na + accumulation in astrocytes can contribute to glutamate release, which occurs by reversal of the Na + /glutamate cotransporter (Anderson & Swanson, 2000). This cotransporter normally mediates the entry of two Na + ions along with one molecule of glutamate and represents an important mechanism of glutamate uptake by astrocytes, which ensures neuronal survival (Storck et al., 1992). If it is reversed due to excessive intracellular Na + , astrocytes begin to promote glutamate release and might contribute to neuronal damage during ischemia. Therefore, intracellular Na + accumulation plays a critical role in NCX1 and/or NMDA-induced neuronal cell death by participating in mechanisms that brings about Ca 2+ overload and accelerates glutamate release from the astrocytes. In a normal brain, NCX1 is thought to be important in buffering neuronal intracellular Ca 2+ by transporting one Ca 2+ out of cells and three Na + into the cells (Blaustein & Lederer, 1999). However, based on vitro and in vivo studies, it appears that during and following cerebral ischemia, it is likely that under depolarizing conditions, NCX1 can contribute to Ca 2+ influx and neuronal injury. 
After NCX1 reverses, any further depolarization of the plasma membrane increases the electrochemical driving force of Ca 2+ influx via this pathway (Hansen & Zeuthen, 1981;Benveniste et al., 1984). The role of reverse NCX1 in mediating toxic Ca 2+ influx is supported by neuroprotective effects of NCX1 inhibitors (Schröder et al., 1999;Matsuda et al., 2001). The neurotoxic mechanism that leads to penumbra cell death in response to elevated intracellular Na + in the current study represent the reverse operation of the plasma membrane NCX1, engaged by plasma membrane depolarization and intracellular Na + overload through a decreased Na + /K + -ATPase. However, reverse NCX1 does not significantly contribute to Ca 2+ influx after inactivation of NMDA receptors (Kiedrowski 2001). The mechanism of Na + -dependent Ca 2+ influx requires open NMDA channels, because occlusion of the channels with MK-801 almost completely inhibited Ca 2+ accumulation. Our results suggest that increased reverse NCX1 in the penumbra participates in Na + /K + -ATPase-dependent amplification of NMDA-induced Ca 2+ influx in ischemic depolarized neuronal cells. Altered expression of NR2A and NR2B, which depends on the duration of focal cerebral ischemia, could play variable roles in secondary brain cell injury in the penumbra Our results indicate that increased NR2B in the penumbra after 24 h of focal cerebral ischemia may be comprised of combinations of NR2B subunit, along with the NR1 subunit (Monyer et al., 1992). Pharmacologic studies show that NR2B-containing NMDA receptor channels, expressed in Xenopus oocytes, exhibits a higher affinity for L-glutamate and considerably longer offset decay time courses following brief application of L-glutamate than the NR1-NR2A channel (Meguro et al., 1992). In particular, the offset decay time course is thought to be crucial for the determination of intracellular Ca 2+ concentration (Perkel et al., 1993). These reports suggest that NR2B-containing NMDA receptors are more efficient than receptors containing NR2A in the process of Ca 2+ influx. Chronic incubation with GA resulted in a down-regulation particularly of the NR2B subunit and reduced the NMDA receptor-mediated increase in intracellular Ca 2+ (Kölker et al., 2002). In the present study, we demonstrated a reduction of Na + /K + ATPase and increased expression of NCX1 and NR2B in the penumbra by focal cerebral ischemia. Because the Na + /K + ATPase is particularly important to evoke the ischemic depolarization, decrease in Na + /K + ATPase would result in a relief of the voltage-dependent Mg 2+ block of NMDA receptors (Gegelashvili & Schousboe, 1997) and further increase of Na + -dependent Ca 2+ influx mediated by NR2B-containing NMDA receptors and NCX1. In addition, extrasynaptic NR2B-containing NMDA receptors antagonize nuclear signaling to cAMP response element binding protein (CREB), block induction of brain derived neurotrophic factor (BDNF) expression, and are involved in mitochondrial dysfunction and cell death (Hardinghan et al., 2002). Significant increase of NR2A 3 h after focal cerebral ischemia might couple with the compensatory response to diminish the Ca 2+ which is associated with lower affinity for glutamate and considerably shorter offset decay time for Ca 2+ compared with those of NR2B-containing NMDA receptors. This result suggests the possibility that the other Ca 2+ ion channels beyond the NMDA receptors play a major role in the neuronal injury 3 h following focal cerebral ischemia. 
Moreover, depending on the type of NMDA receptor, Ca2+ entry can determine the biological outcome of Ca2+ signaling, as shown by site-specific differences in the regulation of CREB-mediated transcription (Hardinghan et al., 2002). The increase in the synaptic NMDA receptor subunit NR2A 3 h after focal cerebral ischemia may be associated with neuronal survival in the penumbra, given that synaptic NMDA receptors promote nuclear signaling to CREB, induce BDNF gene expression, and activate an anti-apoptotic pathway (Rumbaugh & Vicini, 1999).

Differential expression of NeuN, GFAP, and CNPase in the penumbra depending on the duration of ischemia

There are many instances in which focal brain lesions also seem to have an impact on the function of surrounding or remote brain areas, because the brain can be considered a network with multiple and intricate connections (Beck et al., 1996). Therefore, it is very important to analyze in detail to what extent the outcome of focal cerebral ischemia is a direct consequence of the lesion itself, of the perilesional area, or of a reaction of the surrounding brain to the lesion. The marked decrease of the neuronal (NeuN) and astroglial (GFAP) proteins 24 h after focal cerebral ischemia supports the evidence that increased NCX1 and NR2B-containing NMDA receptors are closely associated with Ca2+-dependent neuronal injury in the penumbra. Moreover, the greater decline of NeuN than of GFAP in the penumbra may suggest that neurons in the penumbra are more susceptible to ischemic injury than astroglia. The underlying mechanisms for the susceptibility of neurons to ischemic insults may be explained in terms of glucose metabolism. Recent studies using multiphoton microscopy demonstrated that neurons rely primarily on oxidative metabolism, whereas astrocytes are glycolytic (Kasischke et al., 2004). In addition, glucose uptake in primary cultured astroglia increased more in response to elevated extracellular K+ than did that of neurons (Yu et al., 1989). In conclusion, the current study suggests a new intracellular Ca2+-overloading mechanism after focal cerebral ischemia: deranged Na+ transport resulting from decreased Na+/K+-ATPase triggers Ca2+ influx through the reverse mode of NCX1, and the intracellular Ca2+ overload is positively reinforced by glutamate-dependent, NR2B-containing NMDA receptors.
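The reverse-mode argument summarised in this conclusion is ultimately an electrochemical one: for an exchanger with 3 Na+ : 1 Ca2+ stoichiometry the reversal potential is E_NCX = 3E_Na − 2E_Ca, and whenever the membrane potential sits positive to E_NCX the exchanger imports Ca2+. The sketch below illustrates how intracellular Na+ loading shifts E_NCX, using textbook-style ion concentrations; the numbers are assumptions for illustration, not measurements from this study.

```python
# Illustrative calculation of the NCX reversal potential (E_NCX = 3*E_Na - 2*E_Ca for a
# 3 Na+ : 1 Ca2+ stoichiometry). Ion concentrations are textbook-style assumptions, not
# measurements from this study; they only illustrate why intracellular Na+ accumulation
# after Na+/K+-ATPase failure pushes NCX1 toward reverse (Ca2+-importing) mode.
import math

R, T, F = 8.314, 310.0, 96485.0          # J/(mol*K), K, C/mol

def nernst(conc_out_mM: float, conc_in_mM: float, z: int) -> float:
    """Nernst equilibrium potential in mV."""
    return 1000.0 * (R * T) / (z * F) * math.log(conc_out_mM / conc_in_mM)

def e_ncx(na_out, na_in, ca_out, ca_in):
    return 3 * nernst(na_out, na_in, 1) - 2 * nernst(ca_out, ca_in, 2)

# Resting neuron: [Na+]i ~ 10 mM, [Ca2+]i ~ 100 nM -> membrane potential (~ -70 mV) is
# negative to E_NCX, so NCX1 runs forward (Ca2+ extrusion).
print(round(e_ncx(145, 10, 1.2, 1e-4), 1))

# Ischemic, Na+-loaded neuron: [Na+]i ~ 30 mM, [Ca2+]i ~ 1 uM -> E_NCX falls, so a
# depolarized membrane is now positive to E_NCX, favouring reverse-mode Ca2+ influx.
print(round(e_ncx(145, 30, 1.2, 1e-3), 1))
```

With these assumed concentrations E_NCX drops from roughly −37 mV in the resting case to roughly −63 mV after Na+ loading, so an ischemically depolarised membrane is increasingly likely to sit positive to E_NCX and drive Ca2+ entry, which is the scenario the expression data above are interpreted to support.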
Study to Test and Operationalize Preventive Approaches for CKD of Undetermined Etiology in Andhra Pradesh, India

Introduction: High prevalence of chronic kidney disease (CKD) not associated with known risk factors has been reported from coastal districts of Andhra Pradesh. The Study to Test and Operationalize Preventive Approaches for Chronic Kidney Disease of Undetermined Etiology in Andhra Pradesh (STOP CKDu AP) aims to ascertain the burden (prevalence and incidence) of CKD, the risk factor profile, and the community perceptions about the disease in the Uddanam area of Andhra Pradesh.

Methods: Study participants will be sampled from the Uddanam area using multistage cluster random sampling. Information will be collected on the demographic profile, occupational history, and presence of conventional as well as nonconventional risk factors. Glomerular filtration rate (GFR) will be estimated using the Chronic Kidney Disease Epidemiology Collaboration equation, and proteinuria will be measured. All abnormal values will be confirmed by repeat testing after 3 months. Cases of CKD not associated with identified etiologies will be identified. Biospecimens will be stored to explore future hypotheses. The entire cohort will be followed up every 6 months to determine the incidence of CKD and to identify risk factors for decline in kidney function. Qualitative studies will be performed to understand the community perceptions and expectations with respect to the interventions.

Implications: CKD is an important public health challenge in low- and middle-income countries.
This study will establish the prevalence and determine the incidence of CKD not associated with known risk factors in a reported high-burden region, and will provide insights to help design targeted health systems responses. The findings will contribute to the policy development to tackle CKD in the region and will permit international comparisons with other regions with similar high prevalence.

Diabetes and hypertension are the leading causes of CKD worldwide. 1 Forms of progressive kidney injury not associated with any of the known causes or risk factors are being recognized, especially among the rural working-age populations in some low- and middle-income countries (LMICs), and have been dubbed "CKD of uncertain etiology" (CKDu). This disease has emerged as an important problem in parts of El Salvador, 2 Nicaragua, Costa Rica, Mexico, Guatemala, Egypt, 3 Sri Lanka, 4 and India. 5 Adult men, primarily outdoor agricultural field workers in their third to fifth decade, are most affected. CKDu typically remains asymptomatic in early stages. Features usually associated with progressive CKD, such as hypertension, edema, and oliguria, are conspicuous by their absence. By the time patients report to the health system, the need for renal replacement therapy is often imminent. The kidneys are usually small, often making biopsy impossible. In the small number of cases in which it has been done, findings are dominated by bland interstitial fibrosis. 6 There is no definitive evidence for a specific etiological pathway, 7 and the proposed causes include heat stress, dehydration, pesticides, infections, and water contamination. 8 In India, diabetes, chronic glomerulonephritis, and hypertension are the most common known etiological categories of CKD. In the report of the pan-India CKD Registry that included data on 52,273 adult patients with CKD, 9 "CKD-cause unknown" emerged as the second most frequent etiology (16%) after diabetes (30%). Patients with CKD of unknown origin were younger, poorer, and more likely to present in more advanced stages than were patients with CKD of known causes. Geographic clusters with a high burden of CKD have been reported from Andhra Pradesh, Odisha, Maharashtra, and Goa. 10 The best known of these hot-spots is the Uddanam region of Srikakulam District, Andhra Pradesh, a geographically distinct rural coastal area with rich cashew and coconut plantations. As in other world geographies where CKDu is endemic, young men have been reported to be most frequently involved. 11 An estimated 34,000 persons are reported to have kidney disease, with more than 4500 deaths in the last 10 years in this region. Despite extensive coverage in the lay press, there have been few studies of the prevalence and natural history of the so-called "Uddanam nephropathy." The few available surveys 5 have been nonsystematic and have used different methodologies, disease definitions, subject selection criteria, and nonvalidated creatinine and proteinuria assays. Unpublished cross-sectional data have indicated the existence of clusters with CKD prevalence ranging from 30% to 60%. A recent study estimated the prevalence of CKD at 18.3% in the area when proteinuria and/or decreased estimated glomerular filtration rate (eGFR) were taken as markers, and the cause could not be established in 13%. 12 This study, however, did not use the standard definition of CKD, which calls for confirmation after repeat testing. Lay opinion puts the blame for the genesis of CKD on contaminated drinking water and pesticides.
However, the few studies of chemical analyses of drinking water and cultivated rice from the region have failed to show any impurities, and there are few data on pesticides. 13,14 A cross-sectional study from another CKD-endemic village, noncontiguous with Uddanam, showed raised silica and strontium levels in drinking water. 15 A recently published, standardized protocol (the Disadvantaged Populations eGFR Epidemiology Study [DEGREE]) provides a framework to identify and characterize communities where there is a high prevalence of reduced eGFR, and to undertake international comparisons, by mandating a population-representative sample and standardized collection of information on sociodemographic factors, occupational and environmental exposures, body composition, and kidney function. 7 We describe the protocol of a study designed to investigate the prevalence of various types of CKD in the Uddanam region, including CKDu, and to determine the age-specific incidence and natural history of CKD in the region, with the goal of sharing expertise across disciplines and countries to accelerate knowledge dissemination and to guide research priorities toward establishing the causes. 16

STUDY OBJECTIVES

The STOP CKDu AP aims to do the following: (i) conduct representative surveys to estimate the proportion of individuals with reduced eGFR, including those without any known cause, in the Uddanam region of Andhra Pradesh; (ii) measure the prevalence and describe the clinical presentation of CKDu in the region; (iii) establish a community-based cohort to determine the incidence of CKDu and identify risk factors for decline in kidney function over time; (iv) develop a methodological framework for the establishment of etiological factors behind the development of CKDu; and (v) understand the community perceptions around CKD, its burden, and community expectations with respect to interventions.

STUDY SETTING

The STOP CKDu AP will be undertaken in the Uddanam region of Srikakulam District, the extreme northeastern district of Andhra Pradesh. The District is skirted by the mountains of the great Eastern Ghats. Vizianagaram District flanks it on the south and west, Orissa on the north, and the Bay of Bengal on the east. Srikakulam had a population of 2,703,114 as per the 2011 census, 17 and is divided into 38 administrative divisions (mandals).

DESIGN

The study will be implemented in 2 phases. In phase 1, the study will measure the prevalence and describe the pattern of CKD in the affected regions, in particular in those with reduced eGFR but without any known cause. We aim to establish whether there is a clustering of disease. In case of clustering, we will evaluate its relationship to living space, lifestyle, and dietary habits. This phase will facilitate setting up a population cohort that will be followed up to determine incidence and risk factors for decline in kidney function. In addition, qualitative studies will be performed to understand stakeholder and community perspectives. In phase 2, we will follow up this cohort at 6-month intervals, with a focus on studying participants at risk for developing CKD, and CKDu in particular, that is, participants who do not already have CKD or factors that would exclude a CKDu diagnosis. At the same time, the cohort with established CKD at baseline will be followed up to determine the rate of kidney function decline. Age-specific incidence will be determined over 3 years.
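The sampling plan in the next section draws 40 village clusters with probability proportionate to size (PPS). For orientation, the sketch below implements the standard systematic PPS selection over a cumulative population list; the village names and populations are fabricated for illustration and this is not the study's own sampling code.

```python
import random

def pps_systematic(villages, n_clusters, seed=0):
    """Systematic PPS sampling: walk the cumulative population list with a
    fixed interval and a random start, so each village's chance of being
    selected is proportional to its population."""
    total = sum(pop for _, pop in villages)
    interval = total / n_clusters
    start = random.Random(seed).uniform(0, interval)
    points = [start + i * interval for i in range(n_clusters)]
    chosen, cum, idx = [], 0.0, 0
    for name, pop in villages:
        cum += pop
        while idx < len(points) and points[idx] <= cum:
            chosen.append(name)  # a very large village can be selected twice
            idx += 1
    return chosen

# 118 fabricated villages with populations between 500 and 5,000.
rng = random.Random(42)
villages = [(f"village_{i+1}", rng.randint(500, 5000)) for i in range(118)]
clusters = pps_systematic(villages, n_clusters=40)
print(len(clusters), clusters[:3])
```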
STUDY POPULATION AND SAMPLE SIZE CONSIDERATIONS

The study will be conducted in a geographically contiguous area comprising 118 villages among the 7 administrative regions (mandals) that constitute the Uddanam region. A cross-sectional study will be undertaken using a cluster random sampling technique with probability proportionate to size (PPS) methodology. 18 A total of 2400 subjects will be sampled from 40 clusters (villages) selected in the defined study area. As the population prevalence of CKD in the area is unknown, the sample size was estimated assuming a prevalence of CKD of 10% in the low-prevalence clusters, a relative precision (acceptable error in the estimate) of 20% (2% absolute precision, that is, ±2% on either side of 10%, i.e., 8%-12%), and a design effect of 2, and was inflated to account for an estimated 25% loss to follow-up in the prospective component of the study. The formula used was n = Z²p(1−p)/d², where n is the sample size, Z is the standard normal deviate for 95% confidence (1.96), p is the prevalence, and d is the absolute precision. If we assume that the prevalence of CKDu is around 5%, then 2400 samples will allow us to estimate the true prevalence with 30% relative precision, that is, to within ±1.5%, with 95% confidence, accounting for a design effect of 2 and a 25% loss to follow-up. In each of the selected clusters, households will be identified based on hand-drawn structural maps of the cluster. A total of 60 households will be selected within each cluster by a systematic random sampling technique. Within each household, the individual participant will be randomly selected among the members of the household based on preassigned quota-based, age-stratified groups and sex, to ensure adequate representation from both sexes and age subgroups. The disease CKDu has been reported typically to affect young people. However, a recent study from the Uddanam region showed that only a minority of individuals (<20%) were <40 years of age, and that women are equally affected. 12 We anticipate, therefore, that changes in GFR occurring in all age groups will need to be carefully determined. To ensure inclusion of all participants old enough to experience an identifiable decline in kidney function, we will include all individuals >18 years of age, with equal representation from men and women. Moreover, women with CKDu are of scientific interest in that their inclusion may suggest alternative risk factors or may help to rule out some that have been previously proposed. To ensure that we have adequate representation of each of the age groups, we will use preassigned quotas with respect to sex and age groups. All the subjects in the cohort thus created will be followed up every 6 months for a period of 3 years. The sample size calculations in the study have been based on estimating the prevalence of CKD and not CKDu. However, including a design effect of 2 will help to identify area-level clustering, presumably because of CKDu.

STUDY PROCEDURES

Community-level meetings will be conducted to inform the population about the study before initiating screening. The design includes a preparatory phase during which wide-ranging discussions and qualitative interviews will be conducted to understand the prevailing perceptions and practices around CKD in the study communities. Informed by the qualitative interviews, we will develop appropriate socio-cultural strategies for community acceptance and consent for study implementation.
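As a sanity check on the sample-size arithmetic described above, the short sketch below reproduces n = Z²p(1−p)/d² with the stated design effect and anticipated attrition; it illustrates the published formula and is not the study's own calculation script.

```python
import math

def sample_size(p, d, deff=2.0, loss=0.25, z=1.96):
    """Cluster-survey sample size: Cochran's n = z^2 * p(1-p) / d^2,
    multiplied by the design effect and inflated for loss to follow-up."""
    n = z**2 * p * (1 - p) / d**2      # simple-random-sample size
    n *= deff                          # account for cluster sampling
    return math.ceil(n / (1 - loss))   # inflate for anticipated attrition

# 10% prevalence, +/-2% absolute precision, design effect 2, 25% attrition:
print(sample_size(0.10, 0.02))  # ~2305; the protocol rounds up to 2400
                                # (40 clusters x 60 participants)
```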
After securing informed consent, the survey questionnaire will be administered, followed by clinical measurements and collection of blood and urine samples. In the case of refusal to participate in the study, the survey questionnaire will not be administered; however, the listing of members of the household and their ages will be collected. Focus group discussions and in-depth interviews will be conducted to understand the factors associated with CKD in the study area.

SURVEY QUESTIONNAIRE

The questionnaire (Supplementary Combined CKDu Subject Questionnaire) will elicit the basic demographic profile, socio-economic status, occupational history, medical history, and health-seeking behavior. The questionnaire was developed by adapting the DEGREE Protocol, 7 the WHO STEPwise approach to noncommunicable disease risk factor surveillance (STEPS) survey, 19-24 and the EuroQoL (EQ) 5D for quality of life assessment. 25 The survey will include questions that seek to identify the type of habitation; the water supply for drinking, cooking, and other household activities; water drinking patterns; the type of cooking and ventilation in each of the households; household practices in terms of disposal of domestic waste; sourcing of food ingredients; use of any indigenous/local produce in the cooking; work habits; and consumption of herbs, indigenous medications, tobacco, or alcohol. In addition to income, we will capture household living standards through questions on the household's ownership of selected assets, such as televisions and bicycles; materials used for housing construction; and types of water access and sanitation facilities. Furthermore, we will determine the number of hours that the study participants spend at home and at their place of work, in the case of work that involves direct exposure to the sun or high temperatures (e.g., industrial ovens, brick kilns, large-scale cooking). The number of hours of such exposures and the habits around hydration and rest in between these intense work periods will be determined. Intake of any medications for symptoms developed due to these occupational exposures will be documented, and the duration of exposure will be recorded. Calorie and protein intake will be estimated by a standardized dietary diary validated for use in the Indian context.

CLINICAL MEASUREMENTS

Blood pressure will be measured after 5 minutes of rest in the sitting position using an automated, clinically validated sphygmomanometer (OMRON model HEM 7121, Kyoto, Japan). An average of 3 readings will be recorded. Height and weight will be measured (without footwear) using a stadiometer (SECA model 213, Hamburg, Germany) and digital calibrated scales (OMRON Model HN 865).

BIOSAMPLES

A 20-ml sample of nonfasting venous blood will be collected in 2 Vacutainers (Becton Dickinson, Franklin Lakes, NJ), 1 vial containing ethylenediamine tetraacetic acid (EDTA) and the other plain. A 5-ml quantity of first morning void urine will be collected in sample collection cups distributed a day prior to the collection. Samples will be stored in the field in coolers with iceboxes (4 °C) for no more than 6 hours and shipped to the processing laboratory at the study coordinating center. Plasma, serum, and buffy coat will be separated and stored at −80 °C in barcoded cryovials. Creatinine will be measured using the modified Jaffe assay (traceable to isotope dilution mass spectrometry [IDMS] reference standards).
Urine protein will be measured using the pyrogallol test and corrected for creatinine. Blood counts will be performed on a fully automated, 3-part differential hematology analyzer (Sysmex XP 100, Sysmex Asia Pacific, Singapore), and glycosylated hemoglobin will be determined on an automated analyzer (Hb Vario, Erba Mannheim, London, United Kingdom) in compliance with the National Glycohemoglobin Standardization Programme (NGSP). All tests will be performed on the same day as collection. Glomerular filtration rate will be estimated using the Chronic Kidney Disease Epidemiology Collaboration equation.

FOLLOW-UP AND RETENTION

All subjects who are found to have an eGFR of ≤60 ml/min per 1.73 m2 and/or a urine protein-to-creatinine ratio of ≥0.15 on initial testing will undergo repeat testing after 3 months. Follow-up visits will be conducted in all subjects at 6-month intervals for 3 years. All new values of eGFR <60 ml/min per 1.73 m2 and urine protein-to-creatinine ratio >0.15 will be confirmed on repeat testing 3 months apart. We will carry out a series of activities to increase participant engagement and retention. These will include distribution of reading materials, text and voice messages, and home visits by the study staff. Annual testing will be done to determine the eGFR and protein-to-creatinine ratio in all subjects. We will also visit the households of those who refused participation and record whether any of the household members have had any major health events or hospitalizations, or have died. If there have been any major health outcomes, their relationship to kidney disease will be explored. CKD will be diagnosed and staged according to the Kidney Disease: Improving Global Outcomes (KDIGO) criteria. 26 CKDu will be defined according to the criteria proposed by Wijewickrama et al., 27 with some modifications: an eGFR of <60 ml/min per 1.73 m2 and/or urine protein-to-creatinine ratio of >0.15 at 2 time points 3 months apart in individuals who do not have diabetes, long-standing hypertension (>5 years' duration), or a urine protein-to-creatinine ratio of >3. Subjects with other known causes of CKD will be excluded.

QUALITATIVE STUDY

Focus group discussions and in-depth interviews will be conducted to understand the community perceptions around CKD, its burden, and community expectations with respect to the interventions. We will conduct 14 in-depth interviews and 12 focus group discussions. The focus group discussion and in-depth interview participants will be community members, physicians, nephrologists, patients diagnosed with kidney disease, government officials, officials from industry, and experts in environmental issues. A purposive sampling method will be used to recruit participants. Written informed consent will be obtained from the participants. The project team will brief the participants on the purpose and aims of the discussion and interactions. The Consolidated Criteria for Reporting Qualitative Research (COREQ) guidelines 28 for reporting qualitative research will be followed for data collection and processing. Each focus group discussion or interview will last for 45 to 90 minutes and will be conducted by a team of 2 individuals, a moderator and a note taker. The discussions will be audio-recorded and transcribed. Transcripts will be translated into English, and the quality will be checked by the study team members.
Data will be analyzed based on the essential principles of qualitative research 29,30 to understand the perceived extent of the problem, referral patterns, and perceived barriers to kidney health, as well as opportunities for improvement. The focus group discussions and in-depth interviews will be analyzed using NVivo 9 qualitative data analysis software (2008; QSR International Pvt Ltd, Melbourne, Australia). Analysis will be based on the thematic framework approach to identify common emerging themes. 31 All data will be reviewed by 2 members of the research team to identify the recurrent themes, to minimize the risk of subjectivity, and to establish validity. Researchers will familiarize themselves with the data and will identify broad thematic areas. A coding scheme will be formulated using an inductive approach, and the responses based on the codes will be grouped under each theme. Discrepancies in coding will be identified, and consensus will be obtained through discussion among study team members.

DATA MANAGEMENT

Questionnaires and samples will be labeled using unique participant identifiers and bar codes. A customized electronic data capture system, built on an open source framework and hosted on secure servers with end-to-end data encryption, will host the questionnaire database. Validation and data quality monitoring will be undertaken to eliminate transcription errors. Data management procedures will follow the standard operating practices and guidelines of the George Institute (Data Management SOP/DM-SOP-32 Version 3.0) and the Indian Council of Medical Research. 32

DATA ANALYSIS

For the population prevalence component of the study, we will use descriptive statistics, whereby categorical variables will be reported as proportions and continuous variables will be reported as means and standard deviations. For the risk factor analysis, the exposure variables under consideration are age, sex, level of education, occupation, presence of diabetes and hypertension, exposure to heat and agrochemicals, and tobacco, alcohol, painkiller, and indigenous medication use. The unadjusted relationships between the exposure variables and the outcomes of interest, that is, CKD and CKDu, will be examined in univariate analyses. Multiple logistic regression will be used to examine the simultaneous effects of various exposure variables while adjusting for potential confounders. During follow-up, newly detected cases of CKD among individuals with an eGFR of >60 ml/min per 1.73 m2 and a urine protein-to-creatinine ratio of <0.15 at baseline will be the incident cases. All newly identified abnormal values will be confirmed by repeat testing after 3 months. The risk factors from baseline and the ongoing risk exposure variables collected during the follow-up phase will be analyzed to determine any associations. Change in eGFR over time will be analyzed as a continuous variable, as well as in tertiles of renal function decline. 33

ETHICS

The STOP CKDu AP has been approved by the Institutional Ethics Committee of The George Institute for Global Health, New Delhi, India, and will be conducted in accordance with the principles of the Declaration of Helsinki. Written informed consent will be obtained from all participants for use of the data and stored biosamples for future research. Findings will be disseminated widely by publication in peer-reviewed journals and presentations/representations to relevant local stakeholders.
Interim findings of public health importance will be communicated through appropriate local administrative offices and, with their approval, through media channels to inform the population at large. Any subject identified to have an abnormality will be referred to the appropriate public health facility. The study team will work in close partnership with local health providers/healthcare systems to handle participants needing referral for medical care.

GOVERNANCE

A Technical Advisory Group (TAG), comprising representatives from the Indian Council of Medical Research, the Government of Andhra Pradesh (GoAP), and subject experts, regularly reviews and provides guidance on the design, implementation, and progress of the study. An external Advisory Board will provide scientific oversight and will guide the study team in incorporating emerging evidence from CKDu research globally. Detailed information about the progress of the study, field activities, and protocols is hosted on a dedicated study Web link on the George Institute website (https://www.georgeinstitute.org/projects/stop-ckdu-study-to-test-operationalize-preventive-approaches-for-ckdu-in-ap).

DISCUSSION

The STOP CKDu AP study protocol has been developed in response to the public health need to define the epidemiology and to start investigations to establish the cause(s) of CKD affecting the Uddanam area in Andhra Pradesh. The existing studies of CKDu in India have not yet defined the epidemiology of this condition, the first step needed to understand the disease burden, natural history, and risk factors. This protocol aims to provide a framework to address this, and is designed to capture the entire at-risk population by recruiting adult subjects of both sexes and all age groups. The protocol goes farther than administering only a 1-time cross-sectional survey: we will repeat the appropriate tests after 3 months to confirm the diagnosis, and will have a longitudinal component to understand the natural history of the disease by observing change in eGFR over time. Following up all enrolled subjects will allow us to define disease incidence, to capture the earliest stages of disease, and to identify associations with possible exposures and potential risk factors. Our protocol uses a questionnaire that combines questions from a number of standard validated survey instruments aimed at capturing a variety of exposures, including sociodemographic data, occupational and environmental exposures, lifestyle factors, and health-seeking behavior in the population of interest. The qualitative component of the study will help in understanding the community perceptions about the burden of CKD and noncommunicable diseases in the study area, prevailing belief systems around the causes of CKD, occupational patterns, dietary habits and household practices, health-seeking patterns for chronic ailments including various stages of CKD, and popular expectations from the health delivery systems for tackling the disease.

The study area was selected to ensure that both reported high- and low-prevalence villages are included in the geographic areas defined for the random selection of clusters using the PPS methodology. The sample size calculations include a design effect of 2 to account for the clustering.

Study of biosamples will allow future exploration of additional hypotheses through collaborative research. A bio-repository is embedded in the study, and specific consent for future analysis, including genetic studies, is being obtained. The samples will be stored in the India Chronic Kidney Disease Registry bio-repository, 34 which follows all mandatory requirements for a biorepository. Tackling the problem of CKDu requires global cooperation. This and other such studies will allow identification of unique CKD risk factors to help the development of locally appropriate screening and prevention strategies. This study represents an example of cooperation among government and academic centers, and is of relevance to other state governments with similar hot-spots, so that the menace of CKDu can be fought with greater political and administrative resolve and public health goals can be achieved. During the development phase, we anticipated possible operational challenges and designed the protocol carefully to overcome those challenges (Table 1).

Table 1. Anticipated operational challenges and mitigation strategies
- Challenge: Resistance by the communities and nonconsent for study participation and collection of biological samples, as several research teams had collected biological samples in the Uddanam region prior to this study and had not provided any reports or feedback to the communities. Mitigation: The design includes a preparatory phase during which wide-ranging discussions and qualitative interviews will be conducted to understand the prevailing perceptions and practices around CKD in the study communities. Informed by the qualitative interviews, we will develop appropriate socio-cultural strategies for community acceptance and consenting for study implementation. The sample size calculation took into consideration a 33% loss to follow-up/noncompliance with the biological sampling within the study-defined time points.
- Challenge: The reduction in eGFR and/or proteinuria being transient. Mitigation: Only confirmed cases of CKD as per KDIGO guidelines, based on 2 independent assessments 3 months apart for eGFR and proteinuria, will be considered.
- Challenge: Risk factors/environmental exposure factors being missed due to seasonal variations. Mitigation: The environmental exposures/potential risk factors will be assessed at defined time points taking into consideration the seasonal variations, with special attention to summer and pre- and post-monsoon analysis.
- Challenge: Difficulties in ascertaining painkiller use due to over-the-counter purchase of medicines. Mitigation: Use of visual cue cards with the blister packs of common NSAIDs available in the study area, for respondents to identify the medications and to record how long they have been used, with or without prescriptions.
- Challenge: Subjects traveling away from the study site for work. Mitigation: All subjects will be tracked and followed up when they return during festivals and holidays. They will be asked about specific work-related risk factors.
CKD, chronic kidney disease; eGFR, estimated glomerular filtration rate; KDIGO, Kidney Disease: Improving Global Outcomes; NSAIDs, nonsteroidal anti-inflammatory drugs; PPS, probability proportionate to size.

There are a few limitations, mainly related to the way that GFR is estimated in the study population. We are using the Chronic Kidney Disease Epidemiology Collaboration formula, which can potentially overestimate GFR in Indian subjects. An independent study, however, is ongoing to develop a correction factor for the existing equation, and we will reanalyze the data. Also, our study is not designed to evaluate the impact of dietary factors, namely protein intake, on GFR.
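For reference, below is a minimal sketch of the 2009 CKD-EPI creatinine equation discussed above, the version current when this protocol was written; it includes the 2009 race coefficient (since retired in the 2021 refit) and, as the text notes, may overestimate GFR in Indian subjects.

```python
def ckd_epi_2009(scr_mg_dl, age, female, black=False):
    """2009 CKD-EPI creatinine equation (ml/min per 1.73 m2)."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = 141 * min(ratio, 1) ** alpha * max(ratio, 1) ** -1.209 * 0.993 ** age
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # 2009 race coefficient, dropped in the 2021 equation
    return egfr

# e.g., a 60-year-old man with serum creatinine 1.2 mg/dL -> ~65 ml/min per
# 1.73 m2, just above the <=60 screening threshold used for repeat testing.
print(round(ckd_epi_2009(1.2, 60, female=False)))
```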
In conclusion, the STOP CKDu AP will aim to determine the incidence, prevalence, and rate of decline of kidney function in subjects with CKD in the Uddanam area of Andhra Pradesh, as well as to provide key insights that will help to establish the cause(s) of the disease.

DISCLOSURE

The funding agencies have no role in the design, implementation, analysis, and reporting of the study. VJ reports consulting or paid advisory board fees from Baxter Healthcare, Zydus Cadilla, and NephroPlus, and grant support from Baxter Healthcare, GlaxoSmithKline, the Department of Biotechnology (India), the Medical Research Council, United Kingdom, and the Indian Council of Medical Research, New Delhi, India. All the other authors declared no competing interests.
Contrast-enhanced ultrasound imaging features and clinical characteristics of combined hepatocellular cholangiocarcinoma: comparison with hepatocellular carcinoma and cholangiocarcinoma

Purpose: The purpose of this study was to retrospectively compare the clinical characteristics and imaging features on contrast-enhanced ultrasonography (CEUS) of combined hepatocellular cholangiocarcinoma (CHC) with those of hepatocellular carcinoma (HCC) and cholangiocarcinoma (CC).

Methods: The clinical information and CEUS features of 45 patients with CHC from 2015 to 2019 and 1-to-1-matched control subjects with HCC and CC (45 each) were compared.

Results: Simultaneous elevation of α-fetoprotein (AFP) and cancer antigen (CA) 19-9 was more common in CHC than in HCC and CC. In the arterial phase, hyperenhancement (homogeneous and heterogeneous) was more common in CHC (73.3%) and HCC (100%), while peripheral rim-like enhancement was more common in CC (55.6%). In the portal phase, marked washout was significantly more frequent in CHC and CC than in HCC (42.2% and 53.3% vs. 6.7%). In the delayed phase, marked washout was more common in CHC (82.2%) and CC (93.3%) than in HCC (40.0%). The washout time (WT) was much shorter in CHC and CC than in HCC (33.8±13.1 seconds and 30.1±11.6 seconds vs. 58.4±23.5 seconds). Using the combination of simultaneous elevation of AFP and CA 19-9 with marked washout in the delayed phase and a WT <38 seconds, or with arterial hyperenhancement, to differentiate CHC from HCC or CC, the accuracy, sensitivity, and specificity were 74.4%, 93.3%, and 55.6% and 71.1%, 80.0%, and 62.2%, respectively.

Conclusion: Although some CEUS imaging features of CHC, HCC, and CC overlap, the combination of tumor markers and CEUS features can be helpful in differentiating CHC from HCC and CC.

Introduction

Hepatocellular carcinoma (HCC) and cholangiocarcinoma (CC) account for the vast majority of primary liver malignancies, while combined hepatocellular cholangiocarcinoma (CHC), also referred to as a "biphenotypic" tumor, comprises a distinct minority, accounting for 0.4%-14.5% [1]. Although CHC has been categorized into three distinct types based on the relative separation of the different CC and HCC components, only those consisting of intermixed components and transitional cell types appear to represent true combination tumors, whereas other types may exist along the spectrum of collision tumors [2]. The prognosis of CHC appears to be worse than that of HCC and similar to that of CC, due to the high frequency of vascular invasion and lymph node metastasis [2][3][4]. At present, according to guidelines, surgical resection, transplantation, and percutaneous ablation constitute the treatment strategy for HCC [5], while for CC, resection has been regarded as the first-line approach [6]. However, the most appropriate treatment for CHC remains unclear, even though radical liver resection likely yields the greatest survival benefit in limited-stage patients [7]. Therefore, the conundrum of the preoperative diagnosis is a potential factor shaping treatment selection. A few studies have evaluated the imaging characteristics of CHC based on contrast-enhanced computed tomography (CECT) [8,9] and magnetic resonance imaging (MRI) [8,10,11]. Theoretically, because CHC comprises HCC and CC components, the imaging features of both HCC and CC would be visualized, with either an HCC-like or a CC-like appearance predominating [8].
Contrast-enhanced ultrasonography (CEUS) has been found to be clinically valuable in the diagnosis of focal liver lesions for years, as it can non-invasively reflect the blood perfusion of tumor tissue in real time [12]. However, only sporadic reports have investigated the CEUS features of CHC [13][14][15][16]. Furthermore, because of its rarity, limited studies have assessed the diagnostic performance of CEUS in the differential diagnosis of CHC from HCC and CC. Regarding the clinical features, cancer antigen (CA) 19-9 and α-fetoprotein (AFP), which are useful adjuncts to imaging in patients with CC and HCC, respectively, are the main tumor markers of interest. Prior studies have suggested that once both AFP and CA 19-9 are simultaneously elevated, or are elevated in discordance with the imaging features (mainly CECT or MRI), a diagnosis of CHC should be considered [8,17]. However, the diagnostic performance of this widely accepted algorithm for the differential diagnosis of CHC and HCC or CC has not yet been investigated. Therefore, the purpose of this study was to evaluate the diagnostic performance of CEUS and clinical features in distinguishing CHC from HCC and CC, and to identify preoperative clues that may indicate the diagnosis and better guide clinical management decisions.

Patients

Institutional review board approval with a consent waiver was obtained for this retrospective study. Our institutional pathology database was searched for consecutive CHC tumors between January 2015 and June 2019, and these results were cross-referenced with the radiology database, excluding any patients without preoperative CEUS. The pathology and radiology databases were also searched for HCC and CC cases over the same period. Due to the relative rarity of CHC, the number of HCC and CC cases far exceeded that of CHC cases; therefore, a random number generator software tool was used to randomly choose HCC and CC cases in a 1:1 proportion. As a result, 135 patients, including 45 with CHC, 45 with HCC, and 45 with CC, were included in this retrospective study. All patients underwent hepatectomy, and the diagnosis was confirmed through a postoperative pathology report.

CEUS Examinations

The CEUS examinations were performed by an experienced sonologist (R.F.H.) with more than 20 years of liver ultrasonography experience, using two scanners (C1-5, 1-5 MHz, Logiq E9, GE Healthcare, Chicago, IL, USA; C5-1, 1-5 MHz, IU22, Philips Medical Systems, Foster City, CA, USA). On grayscale ultrasonography, the tumor number, location, and size were recorded. Then, CEUS was performed with a low mechanical index of <0.1, and 2.4 mL of contrast agent (SonoVue, Bracco, Switzerland) was antecubitally injected as a bolus, followed by a 5-mL saline flush. The timer was started at the contrast agent injection (0 seconds), and the lesion was scanned continuously for up to 3 minutes. As a routine examination procedure, the technical settings were fixed for CEUS: dynamic range, 65-70 dB; frame rate, 12-15 fps; gain, 75%; and one focus below the lesion. The entire vascular phase was recorded on a hard drive for further analysis. In this study, a per-patient analysis was performed. In patients with more than one hepatic lesion, only the largest and best-visualized lesion was targeted, because CEUS cannot be used to scan multiple nodules simultaneously after a single injection of contrast agent.

CEUS Interpretation

All the CEUS videos were reviewed and evaluated by two experienced sonologists (T.Z. and L.W.)
in consensus. The entire vascular phase consisted of three phases: arterial (0-30 seconds after the injection), portal (31-120 seconds after the injection), and delayed (>120 seconds after the injection) [18]. In the arterial phase, the enhancement pattern was defined by a comparison of enhancement behavior between the tumor and the liver parenchyma, and was classified as follows (Figs. 1-5): (1) homogeneous hyperenhancement: entirely hyperenhanced without any defects compared with the liver parenchyma; (2) heterogeneous hyperenhancement: mixed hyperenhancement in both the peripheral and central parts, with enhancement defects; (3) peripheral hyperenhancement: irregular rim-like hyperenhancement at the periphery of the lesion, with sparse filiform and punctiform internal enhancement; and (4) isoenhancement/hypoenhancement: enhancement of the lesion to a similar or lesser degree compared with the liver parenchyma. In the portal and delayed phases, the presence of washout and the washout degree were evaluated. Washout was defined as hypoenhancement of the lesion in the portal or delayed phase preceded by arterial hyperenhancement. In patients with arterial peripheral or heterogeneous hyperenhancement, washout was confined to the hyperenhanced portion of the lesion. The washout degree in the portal and delayed phases was classified as marked washout (obviously lower echogenicity than that of the liver parenchyma), mild washout (slight hypoechogenicity compared to the surrounding liver parenchyma), and no washout (similar or slightly higher echogenicity relative to the liver parenchyma preceded by hyperenhancement in the arterial phase). Furthermore, time-related CEUS parameters were visually recorded. Enhancement time (ET) was defined as the time interval between the contrast agent injection (0 seconds) and its emergence within the lesion; time to peak (TTP) was defined as the time interval between the emergence of contrast agent within the lesion and peak enhancement; and washout time (WT) was defined as the time interval between the emergence of contrast agent and the time point of hypoechogenicity within the lesion.

[Figure: A. On grayscale ultrasonography, a small hypoechoic lesion measuring 22 mm is demonstrated in the right lobe. B. In the arterial phase, the lesion shows homogeneous hyperenhancement (21 seconds). C, D. In the portal and delayed phases, the lesion shows mild washout (95 seconds at C, and 166 seconds at D).]

Statistical Analysis

All statistical analyses were performed using SPSS version 24.0 (IBM Corp., Armonk, NY, USA). The clinical and CEUS characteristics of the patients were expressed as mean±standard deviation and range, or as count and proportion. The chi-square test or Fisher exact test was applied to compare differences in categorical variables. The independent-sample t test was used to compare differences in time-related parameters, including ET, TTP, and WT. P-values of <0.05 were considered to indicate statistically significant differences. We calculated the accuracy, sensitivity, and specificity of the features that played a statistically significant role in the differential diagnosis.

Clinical Characteristics

The comparisons of the clinical characteristics of CHC, HCC, and CC are summarized in Table 1. The percentage of hepatitis B infections and the percentage of patients with a fibrotic or cirrhotic hepatic background showed no significant differences among the three entities. Elevated AFP was more common in CHC (55.6%) and HCC (71.1%) than in CC (2.2%), while elevated CA 19-9 was more common in CHC (28.9%) and CC (40.0%) than in HCC (2.2%). Simultaneous elevation of AFP and CA 19-9 was observed in 17.8% (8 of 45) of CHCs and 2.2% (1 of 45) of HCCs, and in no CCs. The tumor size of CHCs was comparable with that of HCCs (P=0.247), while it was significantly smaller than that of CCs (P=0.035).

CEUS Imaging Features

The CEUS imaging features of the three entities are summarized in Table 2. In the arterial phase, hyperenhancement (either homogeneous or heterogeneous) was much more common in CHC and HCC than in CC (73.3% and 100% vs. 37.8%), while peripheral enhancement was predominantly displayed in CC (62.2% vs. 0% in HCC and 26.7% in CHC). In the delayed phase, the frequency of marked washout in CHC and CC was comparable (82.2% vs. 93.3%, P=0.108), and significantly higher than in HCC (40.0%). The majority of HCCs displayed mild washout (57.8%). Therefore, the most common enhancement pattern of CHC was hyperenhancement (homogeneous or heterogeneous) in the arterial phase followed by marked washout in the delayed phase, and the second most common enhancement pattern of CHC was peripheral hyperenhancement in the arterial phase followed by marked washout in the delayed phase. However, based on the enhancement pattern, 8 and 12 CHCs were misdiagnosed as HCCs and CCs, respectively, because they showed a typical HCC enhancement pattern (hyperenhancement with mild washout in the delayed phase) or a typical CC enhancement pattern (peripheral rim-like hyperenhancement with marked washout). Of the time-related CEUS parameters, ET and TTP showed no significant differences among the three entities. The WT in CHC (33.8±13.1 seconds) was comparable to that in CC (30.1±11.6 seconds) (P=0.229), but was much shorter than that in HCC (58.4±23.5 seconds) (P=0.002).

Diagnostic Efficacy of CEUS and Clinical Features

The diagnostic efficacy of CEUS features and clinical tumor markers is presented in Table 3. For differentiating between CHC and HCC, the combination of the enhancement pattern, tumor markers, and WT showed the highest diagnostic value, with accuracy, sensitivity, and specificity of 74.4%, 93.3%, and 55.6%, respectively. For differentiating between CHC and CC, this combination also showed higher efficacy than CEUS enhancement features and tumor markers alone, with accuracy, sensitivity, and specificity of 71.1%, 80.0%, and 62.2%, respectively.

Discussion

With the wide application of ultrasound contrast agents in clinical practice, CEUS has notably improved the diagnostic performance of ultrasonography for several diseases, especially for the differentiation of focal liver lesions, with diagnostic performance that is even comparable to that of CECT and MRI [18]. Thus, in this retrospective study, the value of CEUS in combination with clinical features for differentiating between CHC, HCC, and CC was evaluated, thereby making a novel contribution to the literature. Regarding clinical features, although AFP and CA 19-9 are useful adjuncts for the diagnosis of HCC and CC, respectively [19], neither of them alone is sensitive or specific enough to identify CHC. However, the combination of these tumor markers may improve the sensitivity of the diagnosis. In the present study, simultaneous elevation of AFP and CA 19-9 was more frequently detected in CHC (17.8%) than in HCC (2.2%) or CC (0%), a pattern that is comparable with the findings of prior studies [20].
Although the sensitivity of this criterion was relatively low for differentiating these entities (17.8%), the high specificity and negative predictive value demonstrated its strong ability to prevent CHC from being misdiagnosed as HCC or CC. In such cases, the diagnosis of CHC could be made with greater confidence. In previous studies, discordance between tumor marker elevation and imaging morphology (e.g., elevated CA 19-9 with imaging findings of HCC, or elevated AFP with imaging findings of CC) has been reported to be suggestive of CHC [8]. However, those results were based on the imaging features derived from CECT and dynamic contrast-enhanced MRI, in which the enhancement features of HCC and CC are clearly distinct, with arterial hyperenhancement followed by washout in the portal venous or equilibrium phase being characteristic of HCC [8,21], and peripheral arterial enhancement with progressive enhancement in the portal venous or equilibrium phase being characteristic of CC [8,22]. However, considerable overlap of CEUS features between HCC and CC has been reported, in accordance with the present study. Thus, the discordance between CEUS features and tumor markers was not analyzed in our study. Histologically, CHC is a combination of intermixed HCC and CC components. Therefore, the well-known imaging features of HCC and CC may provide a framework for approaching the diagnosis of CHC [8]. It has been reported that the ratio of HCC and CC components within the lesion can produce a distinct CHC imaging appearance [8]. Arterial hyperenhancement followed by portal or delayed washout is considered to be the most characteristic CEUS feature of HCC [5,23], and the common CEUS findings of CC include peripheral arterial rim-like enhancement with portal or delayed washout [23]. However, some CC lesions may also demonstrate enhancement patterns similar to those of HCC, especially small lesions and those with a cirrhotic background [24], for which reason CEUS was dropped from the list of diagnostic techniques recommended for cirrhotic nodules in the 2012 guideline of the European Association for the Study of the Liver and the European Organization for Research and Treatment of Cancer [25]. Therefore, it is unreliable to use only the enhancement and washout pattern to differentiate CHC from HCC and CC, since there are considerable overlaps between CHC and both CC and HCC. In recent studies, the time and degree of washout have been proposed and proven important for differentiating between HCC and CC components [14,15,20]. In our study, although arterial hyperenhancement and the presence of washout in the portal and delayed phases showed no differences between HCC and CHC, a marked degree of washout in the delayed phase was much more common in CHC than in HCC (82.2% vs. 40.0%). Li et al. [14] reported that marked washout in the delayed phase was present in 76% of CHCs, but only in 10% of HCCs. Therefore, marked washout in the delayed phase may have the potential to provide diagnostic clues for CHC.

[Figure: A. On grayscale ultrasonography, a round hypoechoic lesion measuring 24×20 mm is detected in the right lobe. B. A CEUS image reveals peripheral rim-like hyperenhancement in the arterial phase (18 seconds). C. Rapid washout is observed in the portal phase (47 seconds). D. In the delayed phase, the lesion demonstrates marked washout (132 seconds).]
In our study, when marked washout in the delayed phase was used as a criterion for differentiating CHC from HCC, the sensitivity and specificity were 82.2% and 60.0%, respectively. However, Li et al. [14] reported corresponding values of 78% and 90%, respectively, with a much higher specificity. This discrepancy may be due to differences in the tumor size, tumor differentiation, hepatic background, ratio of HCC and CC components within CHC, and other factors between the two studies. The value of WT has not yet been explored in the diagnosis of CHC. According to previous studies on the differential diagnosis of HCC and CC, early washout (<60 seconds) was more common in CC than in HCC (87.9% vs. 16.0%) [26], and the majority of CCs displayed washout within 43 seconds [20,26,27]. In our study, the WT was much shorter in CHC than in HCC (33.8±13.1 seconds vs. 58.4±23.5 seconds), and 57.8% (26 of 45) of CHCs displayed a WT <38 seconds, versus only 17.8% (8 of 45) of HCCs. However, in our study, WT was calculated differently from previous studies. Since the time of the emergence of contrast agent within the lesion may vary considerably due to individual differences in cardiac function, we defined WT as the interval between the emergence of contrast agent in the lesion and hypoenhancement, while in other studies, time 0 was set at the injection of the contrast agent. Nevertheless, the basic pattern of more rapid washout in CHC than in HCC remains clear. In our study, by using the combination of marked washout in the delayed phase, a WT <38 seconds, and simultaneous elevation of AFP and CA 19-9 to differentiate CHC from HCC, the sensitivity and specificity could be increased to 93.3% and 55.6%, respectively. However, since the ratio of histologically predominant components within the lesion may vary considerably, a larger sample size of CHCs should be studied in the future. Concerning the CEUS features of CHC and CC, the washout degree in the portal and delayed phases, as well as the WT, showed no significant differences. In the arterial phase, hyperenhancement was more common in CHC than in CC (73.3% vs. 37.8%, P=0.001). Ye et al. [16] also demonstrated that peripheral rim-like arterial enhancement was an independent risk factor for CHC. When arterial hyperenhancement was used as the criterion to differentiate CHC from CC, the sensitivity and specificity in our study were 73.3% and 62.2%, respectively. The sensitivity was higher, and the specificity somewhat lower, than the values reported by Li et al. [14] (55% and 78%, respectively). However, in the study of Li et al. [14], the range of tumor size was large (6 cm), and the patient population was relatively small (30 CHCs, 30 HCCs, and 32 CCs). Furthermore, the prevalence of cirrhosis in patients with CHC and HCC was much higher than that in patients with CC (52% and 60% vs. 22%) in their study [14], while in our study, the prevalence of cirrhosis for each entity was comparable and was much higher than was observed for the corresponding types in the study of Li et al. [14].
This may cause differences in the diagnostic efficacy of the CEUS feature of hyperenhancement in the differential diagnosis, because it has been reported that smaller tumors and higher frequencies of liver cirrhosis are associated with a higher likelihood of detecting arterial hyperenhancement within CC lesions [24]. This hypothesis should be further validated in studies of the CEUS features of CHCs of differing size and hepatic background. Using the combination of simultaneous elevation of AFP and CA 19-9 with arterial hyperenhancement, the sensitivity and specificity were 80.0% and 62.2%, respectively. There are some limitations of our study that should be noted. First, the sample size of CHC was relatively small due to its rarity, and the influence of tumor size and hepatic background on CEUS features was not analyzed. Second, due to the retrospective study design, correlations between the CEUS features of CHC and histopathological findings were not investigated. Furthermore, we separately compared CEUS features between CHC and HCC and between CHC and CC, and a combined differential diagnosis among all three tumor types was not conducted. Third, only CEUS was used to differentiate CHC from HCC and CC in our study; additional imaging modalities should be utilized to improve the diagnostic efficacy, such as CECT, dynamic contrast-enhanced MRI, and other functional imaging modalities, including parametric imaging and positron emission tomography. In conclusion, although the CEUS features of CHC, HCC, and CC may overlap, the combination of tumor markers, marked washout in the delayed phase, and a WT <38 seconds was confirmed to be helpful for differentiating between CHC and HCC, and the combination of tumor markers and arterial hyperenhancement may provide a differential diagnostic clue between CHC and CC.
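To make the combined criteria explicit, the sketch below encodes one plausible reading of the rules evaluated above. The thresholds (simultaneous AFP and CA 19-9 elevation, marked delayed-phase washout, WT <38 seconds, arterial hyperenhancement) come from the text, but the any-criterion-positive Boolean combination is our assumption, since the paper does not spell out the exact logic; the functions are illustrative, not a validated classifier.

```python
def flags_chc_vs_hcc(afp_elevated, ca199_elevated,
                     marked_delayed_washout, wt_seconds):
    """Flag a lesion as suspicious for CHC rather than HCC.
    Any-criterion-positive combination (assumed, not stated in the paper);
    reported performance of the combined rule: sens 93.3%, spec 55.6%."""
    return ((afp_elevated and ca199_elevated)
            or marked_delayed_washout
            or wt_seconds < 38)

def flags_chc_vs_cc(afp_elevated, ca199_elevated, arterial_hyperenhancement):
    """Flag a lesion as suspicious for CHC rather than CC.
    Reported performance of the combined rule: sens 80.0%, spec 62.2%."""
    return (afp_elevated and ca199_elevated) or arterial_hyperenhancement

# Example: marked delayed washout with a WT of 34 s flags CHC over HCC
# even without simultaneous tumor-marker elevation.
print(flags_chc_vs_hcc(False, False, True, 34))  # True
```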
Evaluation of early scoring predictors for expedited care in patients with emphysematous pyelonephritis

Introduction: Emphysematous pyelonephritis (EPN), an acute necrotizing infection of the kidney and surrounding tissues, is associated with considerable mortality. We evaluated how existing critical care scoring systems could predict the need for intensive care unit (ICU) management for these patients. We also analyzed whether CT imaging further enhances these predictive systems.

Patients and Methods: A retrospective analysis of 90 consecutive patients diagnosed clinico-radiologically with EPN from January 2011 to September 2020. Five scoring systems were evaluated for their ability to predict the need for ICU management and mortality risk: National Early Warning Score (NEWS), Modified Early Warning Score (MEWS), 'quick' Sequential Organ Failure Assessment score (qSOFA), Systemic Inflammatory Response Syndrome score (SIRS), and Sequential Organ Failure Assessment score (SOFA). CT images were classified as per Huang & Tseng and evaluated as stand-alone or added to the different predictive models. Receiver operating characteristic (ROC) curves were plotted for each critical care score and CT class using logistic regression, to obtain the area under the curve (AUC) value for comparison of ICU admission predictability. Patients were analyzed up till discharge.

Results: Ninety patients were diagnosed with EPN. Twenty-six patients required ICU management and nine patients died. The best scoring system to predict the need for early ICU management was NEWS (AUC 0.884). CT class had no independent predictive power, nor did it add significantly to improvement in most of the early warning scoring systems, but rather guided us to the need for radiological, endourological, or surgical intervention.

Conclusion: In patients with EPN, the NEWS scoring system best predicts the requirement for ICU care. It aids in the triage of patients with EPN to appropriate early management and helps reduce mortality risk.

Introduction

Emphysematous pyelonephritis is a potentially fatal urologic emergency characterized by acute necrotizing parenchymal and perirenal infection, often caused by gas-forming uropathogens. It is commonly associated with diabetes mellitus and urinary tract obstruction, with or without associated renal and/or immune dysfunction. [1][2][3] There has been a paradigm shift in its management and prognosis, with the balance of treatment tilting in favor of aggressive medical management, with percutaneous or endourological drainage as indicated in case of urinary obstruction. [4][5][6][7] However, there remains a need to better identify patients who would benefit from early ICU management, thereby obviating subsequent potentially unnecessary surgical intervention, improving renal salvage rates, and achieving better overall patient outcomes. [8][9][10][11] Systemic Inflammatory Response Syndrome (SIRS), Modified Early Warning Score (MEWS), National Early Warning Score (NEWS), Sequential Organ Failure Assessment Score (SOFA), and Quick Sequential Organ Failure Assessment Score (qSOFA) are among the clinical scoring systems used to predict patient outcomes in emergency care, [12][13][14][15][16] in addition to their utility in the management of conditions leading to sepsis and/or multi-organ dysfunction syndrome (MODS).
[17][18][19][20][21][22][23] While in emergency care, decision making is based on different systems in use, it remains unclear which one performs best in EPN, and if the systems can be further improved including specific data based on the condition treated. To address this, we assessed the individual predictive accuracy of each of five scoring systems -MEWS, NEWS, SOFA, qSOFA and SIRS, when applied to a large EPN cohort. In addition, we studied the different scoring systems in combination with the CT Class to assess any potential improvement in predicting the need for early and aggressive intensive care management. Study design and participants A retrospective analysis of 90 consecutive patients with a clinico-radiological diagnosis of EPN from January 2011 up to September 2020 in our tertiary hospital, was conducted after obtaining approval from the Kasturba Medical College (KMC) and Kasturba Hospital (KH) Institutional Ethics Committee: IEC No: 583-2019. Variables and scoring systems Five scoring systems were applied using the appropriate clinical variables for calculating MEWS, NEWS, SIRS, qSOFA and SOFA scores (Supplemental Material Table 1). Patients were analyzed up till discharge. The individual CT Class was also analyzed independently and in combination with each scoring system, to assess patient outcomes, primarily, the need for ICU management, and secondarily, mortality. The need for ICU admission was taken in conjunction with the Critical Care Specialist and was primarily based on the need for support of two or more organ systems. Definitions and management protocol EPN is a clinico-radiological diagnosis, the hallmark finding being presence of intra-renal or extra-renal gas on non-contrast CT scan (NCCT). The appropriate stage was designated as per the Huang and Tseng 29 classification ( Figure 1). Statistical analysis The data were entered into the Microsoft Excel spreadsheet 2007 and analyzed with IBM SPSS version 22 software. Descriptive data were presented as mean and standard deviation (SD) for normally distributed continuous variables, and as median and interquartile range (IQR) for skewed variables. Data were presented as frequency (n) and percentages (%) for categorical data. Comparison of proportions was performed using the chi-square test and non-parametric data by Mann-Whitney U test. All results were considered significant at a P-value of < 0.05. Receiver operating characteristic curves were plotted for each score and CT-Class using logistic regression, to obtain the area under curve (AUC) value for comparison of ICU admission predictability. Results Class 2 EPN was the commonest type as detected by computed tomography in 35 (39%). Class 4 EPN was seen in 12 patients. Bilateral EPN on presentation was seen in 11 (12%). One patient had a solitary kidney. Baseline characteristics including 90 patients are depicted in Table 1. CT-Class did not have an independent predictive power, with an AUC of 0.667, (Supplemental Material Tables 4 and 5) but did improve the AUROC values when applied to the early warning scoring systems. (Supplemental Material Table 6) (Figure 1(b)). CT Class 3 EPN was associated with the highest rates of ICU admission. (Supplemental Material Table 4). The data depicting the association of the scoring systems and CT Class with mortality is shown in Supplemental Material Table 3(A) and (B). Discussion The use of predictive systems in outcomes is increasingly in vogue in urology. 
CROES, STONE, and GUYS to predict stone free rate, the RENAL, PADUA, C-index, CSA in kidney cancer and the Partin's nomogram in prostate cancer. [24][25][26][27][28] These nomograms are based on big data to predict outcome and offer guidance in medical management in specific situations. Emphysematous pyelonephritis (EPN) is an acute, severe necrotizing, often polymicrobial gas-forming infection affecting the renal parenchyma, the collecting system, and perirenal tissue. [1][2][3]29,30 High mortality rates of up to 78% half a century ago was due to poor recognition by virtue of its rarity. This often led to early nephrectomy, which was then the treatment of choice. A heightened index of suspicion, coupled with early cross-sectional imaging has permitted an algorithmic comprehensive management of EPN comprising of aggressive resuscitation, appropriate antibiotic therapy and the correction of any reversible precipitating factors, along with percutaneous or endourological decompression, as indicated. [4][5][6][7] Nephrectomy, emergent or elective, is now relegated to a last option. These advances have decreased mortality rates to 18%, with improved renal salvage rates and overall patient outcomes. 6 With the progress made in earlier diagnosis and more effective medical and minimally invasive treatment of EPN, one avenue for improvement in the treatment of EPN remains, and that is the timely institution of ICU management. It is well documented that delays in admitting patients requiring ICU care is associated with higher mortality rates. 31 Objective criteria to assess and predict an EPN patient's need for ICU admission has, however, yet to be established. General guidelines for admission to the ICU are available from the Society of Critical Care Medicine, 32 though these recommendations are highly dependent on clinical expertise and experience. Existing scoring systems such as NEWS, SOFA, qSOFA, and SIRS are not disease-specific and a consensus is yet to be reached on their appropriate clinical use. [12][13][14][15][16] Kapoor et al. 7 found altered mental status, thrombocytopenia, renal failure, and severe hyponatremia, at presentation, to be significantly associated with higher mortality. They did not find any association with the radiological classification. They reported higher renal salvage (22/24, 92%) with minimally invasive treatment. These findings were further corroborated by the study by Aswthaman et al. 4 Falagas et al. 8 reported a meta-analysis of 7 studies (175 patients) to assess the risk factors for mortality in EPN. The overall mean mortality rate was 25%. Conservative treatment alone, bilateral EPN, Wan type 1 EPN and thrombocytopenia all had higher mortality. In addition, altered consciousness, systolic blood pressure of < 90 mmHg, and a serum creatinine > 2.5 mg/dL also contributed to mortality. Recently, more attempts have been made to improve the prediction of clinical outcomes of EPN patients by combining the variables used previously, in formulating EPN-specific-scoring systems. The novel prognostic scoring systems described by Prakash et al., 9 and Jain et al. 10 however focus mainly on predicting the associated mortality rates and the need for nephrectomy, but are of limited sample size, and remain to be validated. Krishnamoorthy S et al. has provided another elaborate risk stratification protocol using eighteen variables including clinical, biochemical, hematological, and radiological findings to better predict prognosis. 
This too focused more on mortality rates and was not externally validated. 11 The present work in this large cohort of patients with EPN attempts to identify the best approach to evaluating and triaging these critically sick patients, to expedite their care using MEWS, NEWS, SOFA, qSOFA and SIRS. In our study, all the predictive clinical scoring systems performed well, each achieving AUC > 0.7. NEWS was best at predicting the need for ICU admission. CT Class neither had an independent predictive power, nor did it add significantly to improvement in most of the early warning scoring systems. We may therefore safely conclude that for EPN we can best use the NEWS to evaluate the level of severity and need to ICU admission. The National Early Warning Score (NEWS2) was developed by the Royal College of Physicians to improve the detection of and response to clinical deterioration in patients with acute illness, especially sepsis. NEWS is based on a simple aggregate scoring system in which a score is allocated to physiological measurements. A NEW score of 5 or more is the key trigger threshold for urgent clinical review and action. 13 The early warning scores have demonstrated varying sensitivities and specificities in the early detection of organ dysfunction, sepsis, and inhospital mortality. [17][18][19][20][21][22] They expedite the care of critically ill patients with EPN who present with varying degrees of sepsis and organ dysfunction by virtue of its simplicity and utility to the entire range of treating medical and nursing staff. While CT scanning can confirm and classify EPN, the present study demonstrated that this does not have strong predictive power for ICU admission. However, CT imaging provides information of the extent and localization of the EPN and helps us to decide on the need for radiological, endourological or surgical intervention. The limitation of this work is the retrospective nature of the evaluation with data collection during a period covering almost one decade. Also, different critical care professionals were involved using different systems and decision-making policies for evaluating and treating critically sick patients. On the other hand, this is one of the largest data sets available to study the use of different scoring systems in this rare but serious disease. Conclusion In our study, a NEWS score ⩾ 5 best predicted the requirement for expedited ICU care, and the mortality rate in EPN, thereby improving patient outcomes. Eight of the nine patients that died had NEWS ⩾ 5. Combination with CT-Class did not significantly further enhance the predictive value of these scoring systems. CT evaluation rather guides us to the need for radiological, endourological or surgical intervention. More robust, prospectively conducted studies will be required to determine the extent to which aggressive early ICU management translates to reduced mortality rates, along with its financial implications. Conflict of interest statement The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding The authors received no financial support for the research, authorship, and/or publication of this article. Ethics approval and consent to participate Approved by Kasturba Medical College (KMC) and Kasturba Hospital (KH) Institutional Ethics Committee: IEC No: 583-2019. The requirement for written Informed Consent was waived by the approving ethics committee as this was a retrospective study. 
Supplemental material Supplemental material for this article is available online.
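The statistical analysis above compares the scoring systems by the area under the ROC curve for predicting ICU admission. Purely as a rough illustration (the study itself used IBM SPSS, and the values below are invented toy data, not patient records), the same comparison could be sketched in Python with scikit-learn as follows.

```python
# Illustrative sketch of comparing early-warning scores by ROC AUC for
# predicting ICU admission. Toy data only -- not the study's patient records.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 90  # cohort size, as in the paper

# Hypothetical ICU outcome (1 = needed ICU care) and admission scores
icu = rng.binomial(1, 0.3, size=n)
scores = {
    "NEWS":  rng.normal(3, 2.0, n) + 3.0 * icu,   # invented association strengths
    "MEWS":  rng.normal(2, 1.5, n) + 1.5 * icu,
    "qSOFA": rng.normal(1, 0.8, n) + 0.8 * icu,
}

for name, x in scores.items():
    X = x.reshape(-1, 1)
    model = LogisticRegression().fit(X, icu)   # univariable logistic model per score
    prob = model.predict_proba(X)[:, 1]        # predicted probability of ICU need
    auc = roc_auc_score(icu, prob)             # area under the ROC curve
    print(f"{name}: AUC = {auc:.3f}")
```

With real cohort data the score giving the largest AUC (NEWS in this study) would be the preferred triage tool.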
2022-02-24T16:12:24.004Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "27c4ee44e9ff1fbdfac0dc0666752f6aeebac87b", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/17562872221078773", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2c78756a9e2a5c9a6965c2be48ee760365110d9b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
38343963
pes2o/s2orc
v3-fos-license
A Novel 3α-p-Nitrobenzoylmultiflora-7:9(11)-diene-29-benzoate and Two New Triterpenoids from the Seeds of Zucchini (Cucurbita pepo L) Three novel multiflorane-type triterpenoids, 3α-p-nitrobenzoylmultiflora-7:9(11)-diene-29-benzoate (1), 3α-acetoxymultiflora-7:9(11)-diene-29-benzoate (2), and 3α-acetoxymultiflora-5(6):7:9(11)-triene-29-benzoate (3), along with two known related compounds 4 and 5, were isolated from the seeds of zucchini (Cucurbita pepo L). Their structures were determined on the basis of 1D and 2D NMR spectroscopy and HREIMS. Triterpenoids possessing a nitro group have not been isolated previously.

Introduction The species Cucurbita pepo is a cultivated plant of the genus Cucurbita that includes varieties of squash, gourd, and pumpkin. Cucurbita pepo L (zucchini, also known as field pumpkin or summer squash) (Cucurbitaceae) is widely cultivated in America, Europe, and Asia. The zucchini is a hybrid of the cucumber and has been a commercially important crop in many countries since the 1950s-1960s. It is a highly nutritional, low-calorie food that requires relatively little effort to prepare. It is full of nutrients such as vitamin A, vitamin C, potassium, folate and fiber, all of which support a healthy metabolism. Zucchini grows well in warm climates. This readily available vegetable can also be an important part of weight loss efforts.

General Procedures Melting points were determined on a Yanagimoto micro-melting point apparatus and are uncorrected. Optical rotations were measured using a JASCO DIP-1000 digital polarimeter. IR spectra were recorded using a Perkin-Elmer 1720X FTIR spectrophotometer. 1H- and 13C-NMR spectra were obtained on a Varian INOVA 500 spectrometer with standard pulse sequences, operating at 500 and 125 MHz, respectively. CDCl3 was used as the solvent and TMS as the internal standard. EIMS were recorded on a Hitachi 4000H double-focusing mass spectrometer (70 eV). Column chromatography was carried out over silica gel (70-230 mesh, Merck, Darmstadt, Germany) and MPLC was carried out with silica gel (230-400 mesh, Merck, Darmstadt, Germany). HPLC was run on a JASCO PU-1586 instrument equipped with a differential refractometer (RI 1531). Fractions obtained from column chromatography were monitored by TLC (silica gel 60 F254, Merck).

Cytotoxicity Assay The cytotoxicity assay was performed as described previously [18]. Briefly, the HL-60 and P388 cell lines (each 1 × 10⁴ cells in 100 μL) were treated with test compounds for 72 h, and MTT solution was added to the wells. The grown cells were labeled with 5 mg/mL MTT in phosphate-buffered saline (PBS), and the absorbance of formazan dissolved with 20% sodium dodecyl sulfate (SDS) in 0.1 N HCl was measured at 550 nm using a microplate reader (Model 450, BioRad, Richmond, CA). Melanin content (%) and cell viability (%) were determined based on the absorbances at 450 nm and 550 nm, respectively, by comparison with those for DMSO (100%); each value represents the mean ± S.D. of three determinations, and the concentration of DMSO in the sample solution was 2 μL/mL. [Residual table notes: assignments were based on 1H-1H COSY, HMQC, HMBC and NOESY spectroscopic data; Table 3 reports melanogenesis inhibitory activities and cytotoxicities in the B16 mouse melanoma cell line of multiflorane-type triterpenes isolated from Cucurbita pepo.]
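The assay readout described above reduces to expressing each well's absorbance as a percentage of the DMSO control and averaging triplicates. The sketch below shows that calculation; the absorbance values are made up for illustration and are not the paper's measurements.

```python
# Percent cell viability (550 nm) and melanin content (450 nm) relative to the
# DMSO control (defined as 100%), reported as mean +/- SD of three determinations.
# Absorbance values below are illustrative only.
from statistics import mean, stdev

dmso_550 = [0.82, 0.80, 0.84]      # control absorbances, 550 nm (viability)
dmso_450 = [0.61, 0.60, 0.63]      # control absorbances, 450 nm (melanin)
sample_550 = [0.55, 0.57, 0.52]    # treated wells, 550 nm
sample_450 = [0.40, 0.43, 0.41]    # treated wells, 450 nm

def percent_of_control(sample, control):
    """Express each replicate as % of the mean control absorbance."""
    ctrl = mean(control)
    values = [100.0 * a / ctrl for a in sample]
    return mean(values), stdev(values)

viab_mean, viab_sd = percent_of_control(sample_550, dmso_550)
mel_mean, mel_sd = percent_of_control(sample_450, dmso_450)
print(f"Cell viability: {viab_mean:.1f} +/- {viab_sd:.1f} %")
print(f"Melanin content: {mel_mean:.1f} +/- {mel_sd:.1f} %")
```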
2014-10-01T00:00:00.000Z
2013-06-26T00:00:00.000
{ "year": 2013, "sha1": "218ae0cde2e3e3cbb5b4ff0cc8bc9441cd5d7b6d", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/18/7/7448/pdf?version=1403114848", "oa_status": "GOLD", "pdf_src": "CiteSeerX", "pdf_hash": "b81ad59825f2ef611abddade5ab95cc80052e8e6", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
189403641
pes2o/s2orc
v3-fos-license
Laparoscopic Intra Peritoneal Onlay Mesh Repair: Our Experience Ventral hernias are among the most common pathologic conditions encountered, with an estimated one in four individuals being born with, developing, or acquiring a ventral hernia in their lifetime. Ventral hernias include both primary abdominal wall hernias and incisional hernias. Management of ventral hernia is a complex entity owing to the heterogeneity of the disease and coexisting comorbidities such as malnutrition, obesity, diabetes and smoking. Laparoscopic intraperitoneal onlay mesh (IPOM) repair is the popular emerging technique for repair of ventral hernias, and its advantages over the open technique have been demonstrated in multiple randomised controlled studies: less postoperative pain, shorter hospital stay, earlier recovery, less seroma formation, lower recurrence on follow-up, better cosmesis, and concurrent management of Swiss-cheese fascial defects. Laparoscopic IPOM can be combined with bariatric procedures, cholecystectomy, appendectomy, diagnostic laparoscopy and gynaecological laparoscopic procedures. In our study, conducted at a tertiary care hospital, 60 patients with ventral hernia underwent laparoscopic IPOM Plus repair.

Introduction Ventral hernia remains a vexing problem for the surgeon and the public alike. It represents an incredibly varied clinical entity with a wide spectrum of disease. Laparotomy is associated with an incisional hernia rate of 3-23% [1]. Ventral hernia repair is often the culmination of a complex decision-making process by the surgeon. Defect size, location, patient comorbidities, the presence of contamination, the acuity of the patient's presentation, the necessity of an ostomy and the history of prior repair with or without prosthetic all weigh into the ultimate repair approach. Ventral hernia is a very common condition seen by the general surgeon in practice. Ventral hernias are quite debilitating to the patient, leading to chronic pain, abdominal asymmetry and, rarely, obstruction. It is advocated that the defect be repaired by the technique of hernioplasty rather than herniorrhaphy. Mesh repair has decreased the long-term rate of recurrence from 63% for primary repair to 32% [2]. While many open approaches have been developed for the correction of this ventral wall defect, the main focus currently is on the minimally invasive approach of laparoscopic intraperitoneal onlay mesh repair. The obvious advantages of this approach include less postoperative pain, a smaller scar and a shorter hospital stay, which in turn translate to earlier overall recovery of the patient. However, the technique does have complications of its own. These include the general complications of laparoscopic surgery, such as those of general anaesthesia and pneumoperitoneum-related complications, and complications specific to the surgery, which include port-site herniation, pain, recurrence, and vascular and visceral injuries.

Aims and Objectives 1) To assess the outcome of ventral hernia patients after Lap IPOM repair. 2) To classify and enumerate the various complications of Lap IPOM repair over predefined time limits.

Materials and Methods We performed a prospective study of 60 patients with ventral hernia who presented at our institute over 6 months.
All the patients underwent the same treatment modality of Lap IPOM repair by the same surgeon using a composite mesh. Preoperatively a thorough history was taken and physical examination of the patients was done by senior surgeon. An Ultrasound abdomen was done in each of the patients and the size, location and contents of the defect of ventral hernia noted. After reviewing the inclusion and exclusion criteria, the patient was then planned for an elective Laparoscopic IPOM repair. Laparoscopic IPOM was done in each patient. The immediate complications of injury, infections, pain was noted in all cases. We then followed up each patient prospectively in the postoperative period at 1month, 3months and 6months to assess the incidence of port site hernia, recurrence and overall patient satisfaction. Procedure After valid written consent patient was induced under general anesthesia in the reverse Trendelenberg position. After draping the patient with aseptic precautions, pneumoperitoneum was created by closed technique at the Palmers point. This site was used as camera port using 12mm optical trocar. After inspecting the abdomen and the site and contents of the defect, 2 lateral 5mm ports were introduced in the flanks opposite to the side of herniation. The contents in the defect were then reduced carefully by a combination of blunt, sharp and electrocautery dissection. Once the defect was free of the contents, appropriate size Polypropylene composite mesh was introduced from the 12mm port site into the abdomen. The mesh was prepared by placing 4-6 sutures at the corners and in the centre using Prolene 1-0 keeping both the ends of the knot long. The centre and corners of the mesh were lifted transfascially using Aberdeen Needle and tied on the outside thereby placing the knot anterior to the fascia. This led to hitching up of the mesh to the anterior abdominal wall. The mesh was then fixed by spiral tacks every 1cm. After confirming the hemostasis, the ports were removed and pneumoperitoneum was reversed. Port sites were sutured with Port Vicryl and skin with Ethilon 2-0. Sterile dressing applied. Results Out of the total 60 patients, 42 were male (70%) and 18 were female (30%) with a M:F ratio being 2. Discussion Lap IPOM repair was initiated as a minimally invasive approach in the technique of performing ventral hernioplasty. It follows all the sound principles of hernia surgery albeit the morbidity involved in the closure of big ventral defects by open technique. We made this case series in an attempt to assess the feasibility and outcomes of performing this surgery in a high volume referral tertiary care centre such as our institute. We then assessed the incidence of various possible complications that could occur in the perioperative and remote postoperative period in order to gain a realistic perspective of this technique before proposing it as a standard of care. Pain as a complication was seen in 20% patients on postoperative day 1 which then decreased to 6.67% on Day 3. The incidence of chronic pain was then constant at 1 month and 3 months but was reported in upto 10% patients at 6 months. The incidence of postoperative pain is reported to be equal in both the Lap IPOM and ope groups. The reason behind this is believed to be due to extensive subcutaneous dissection and adhesiolysis that is required with the minimally invasive approach akin to the open approach albeit with smaller skin incision [3] . 
Nevertheless the length of hospital stay has been reported to be shorter and the time taken to resume daily activity level was lesser for persons undergoing Lap IPOM compared to those undergoing open surgery. [4] Most of the RCTs, Meta analysis and comparative studies show a significantly lower rate of short term postoperative complications with Lap IPOM compared to open surgery [5] . The reduction in complications is mostly due to reduction in the incidence of wound infection. In our study wound infection occurred in 2 patients of which 1 presented at 1 month and the other presented at 3 months. Both of them required mesh removal. In a study by Itani and colleagues the incidence of wound infection thereby mandating mesh removal was seen in 2.8% and 21.9% in laparoscopic and open hernia repair respectively [6] In the meta analysis by Forbes et al the rate of mesh removal secondary to infection was 0.7% in Lap IPOM and 3.5% in open surgery. [7] Visceral injury was seen in only 1 patient intraoperatively. But this was managed by suturing of the serosal tear. In LeBlanc's 2007 review article the incidence of enterotomy in ventral hernia repair was 1.78%. This complication was associated with an increase in mortality from 0.05% to 2.8%. [8] The most important outcome in hernia repair surgery is recurrence. In our series the recurrence was nil at 1 month but noted to be 3.33% at 3 months which remained the same at 6 months as well. The introduction of mesh in hernia repair was a major advance in reducing the rate of recurrence [
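The complication rates discussed in this series are simple proportions out of 60 patients; because the counts are small, interval estimates are informative. The sketch below recomputes a few of the quoted rates with Wilson confidence intervals. The counts are derived from the percentages reported above, and the interval code is a generic illustration, not the authors' own analysis.

```python
# Wilson 95% confidence intervals for complication proportions in a series of
# n = 60 patients. Counts below are derived from the percentages quoted in the text.
from math import sqrt

def wilson_ci(events, n, z=1.96):
    """Wilson score interval for a binomial proportion."""
    p = events / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return p, centre - half, centre + half

outcomes = {
    "Pain on postoperative day 1": 12,   # 20% of 60
    "Chronic pain at 6 months": 6,       # 10% of 60
    "Wound infection": 2,
    "Recurrence at 3-6 months": 2,       # 3.33% of 60
}

for name, k in outcomes.items():
    p, lo, hi = wilson_ci(k, 60)
    print(f"{name}: {100*p:.1f}% (95% CI {100*lo:.1f}-{100*hi:.1f}%)")
```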
2019-06-13T13:15:17.766Z
2018-03-18T00:00:00.000
{ "year": 2018, "sha1": "ace255f5581d44e8f92a057e439b6348010be6d5", "oa_license": null, "oa_url": "https://doi.org/10.18535/jmscr/v6i3.95", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "dc2617a8f9da382f7ddb89014eb0be0326343e17", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
236257364
pes2o/s2orc
v3-fos-license
A Methodology for Reliable Code Plagiarism Detection Using Complete and Language Agnostic Code Clone Classification : Code clone detection plays a vital role in both industry and academia. Last three decades have seen more than 250 clone detection techniques with lack of single framework that can detect and classify all 4 basic types of code clones with high precision. This serious lack of clone classification impacts largely on the universities and online learning platforms that fail to validate the projects or coding assignments submitted online. In this paper, we propose a complete and language agnostic technique to detect and classify all 4 clone types of C, C++, and Java programs. The method first generates the parse tree then extracts the functional tree to eliminate the need for the preprocessing stage employed by previous clone detection techniques. The generated parse tree contains all the necessary information for detecting code clones. We employ TF-IDF cosine similarity for the proper classification of clone types. The proposed technique achieves incredible precision rate of 100% in detecting the first two types of clones and 98% precision in detecting type-3 and type-4 clones for small codes of C, C++, and Java containing an average line count of 5. The proposed technique outperforms the existing tree-based clone detection tools by providing the average precision of 98.07% on the C, C++, and Java programs crawled from Github with an average line count of 15 which signifies that cosine similarity measure on ANTLR functional tree accurately detects all 4 types of small clones and act as proper validation tools for identifying the learning level in the submitted programming assignment. Introduction Code cloning is the process of creating functionally similar codes with syntactic modifications. It can also be defined as semantically similar code fragment pairs with or without syntactical change [1]. Many researchers refer this process with different terms like similar code [2], identical code [3] or duplicate code [4]. Large systems contain 10-15% and 20-50% of duplicate code in the codebase [5]. Based on the milestone, literature like [6,7,8,9], and based on the study of Wang, W. L. (2020) [1], the code clones are of 4 types that can be categorized into Type-1 which is also called as exact clones, Type-2 which is also called as renamed clones and Type-3 which is also called as near-miss clones. Semantically similar codes that are implemented differently are called as Type-4 clones. Language agnostic code clone detection has a great role to play in building reliable code plagiarism detection. In order to provide justification to academic integrity, an attempt to code plagiarism detection has already started in late 1976. Based on the survey conducted by Chivers [10] The code plagiarism detection is based on the 3 different techniques a) attribute-based b) structure-based c) hybrid technique. Attribute counting technique was first conducted by Ottenstein [11] The study was based on metrics of Halstead [12] considering the number of unique operators and operands. In the year 1981 Grier [13] added 16 new attributes to the existing metrics of Ottenstein [11] that include looping statements, conditional statements, and tokens like white space, line. A study by J. L. Donaldson et al [14] is based on counting the programming constructs like looping and conditional statements. 
An empirical approach proposed by Faidhi and Robinson [15] for detection of program similarity is based on 24 metrics. These initial studies were completely based on the text or strings and counting the attributes present in the program. In a comparative study made by Whale [16] argue that more application-specific metrics and structural features of code need to be considered for accurate detection of code plagiarism. Existing plagiarism detection tools like MOSS, JPag calculates similarities in terms of percentage which can present the amount of similarity between two codes but fail to validate the submitted coding assignment when they are implemented differently with type 4 clones. Correlating clone type classifications (type 1, 2, 3, and 4) will give a better understanding of learning of students from submitted programming assignments. Type 1 and 2 are ugly practices that breach academic integrity. Type 3 is bad practice and type 4 is good practice as it increases the level of learning by making students implement functionally similar codes using different syntax. This research paper contributes in following way.  We use the capability of freely available ANTLR parser generator to extract functional tree by providing corresponding grammar files for the input programs. Extraction of functional tree eliminates the need for the preprocessing phase employed by earlier clone detection techniques.  Vector representation of the functional tree using TF-IDF is given as input to cosine similarity which proved to be a more accurate classification of all 4 clone types for the micro programs with line count of 5, 15 and 32.  Existing code plagiarism detection tools that work on similarity matching and report type 4 clones as plagiarism but with respect to academia, it is a good learning practice. We relate clone detection to academic code plagiarism to identify the good, the bad and ugly practices of students. Background and Related Work In this section, we present the examples to understand the various clone types, literature on clone detection and literature on code plagiarism. To justify our understanding of clone types, we present the examples based on [6]. According to [8], there are 9 types of clones. Based on the editing taxonomy there exist 4 basic clone types [6]. Background In the following section we present small programs of our data set to define clone types. Type-1 clone: Syntactical and semantically similar codes with a change in white space and comments [1]. main() // addition program { int first=10, second=20, sum; sum= first+ second; //logic printf("sum of two numbers=%d", sum); } Code-1 /* addition program */ main() { int first=10, second=20, sum; sum= first+ second; //logic printf("sum of two numbers=%d", sum); } Code-2 Code-1 and Code-2 are an example of type 1 clones. These are also called as exact clones or copy/paste clones. This practice of copying the program as it is from the peer needs to be detected to stop the ugly practice of learning in students and also breach software integrity in industry. Code-3 and 4 are examples of type 2 clones. These clones are also called as renamed clones. This practice of renaming the multiple entities in the program like identifier, method name, and the class name is a bad practice of coding by students which breaches academic integrity. Type Type 3: types-2 clones with addition and deletion of lines creates type-3 clones. 
below code-1 and code-5 are type-3 main() // addition program { int a=10,b=20,c; c=a+b; //logic printf("sum of two numbers=%d",c); } Code-1 main() // addition program { int a=10,b=20,c; c=a+b; //logic printf("program find addition") printf("sum of two numbers=%d", c); } Code-5 Type 3 clones are a matter of interest for many researchers in the past where many tools mentioned in [7] struggled to detect type3 clones. In academia, these are just superset of type2 clone which is considered as bad practice by students. Code-6 and Code-7 are type 4 clones. Type 4 clones are matter of interest to both industry and academia. Type 4 clone detection was out of the scope of many great scalable tools mentioned in the introduction. A major issue with existing code plagiarism detection tools is that/, they report these codes as clones but with respect to the academic point of view it improve the learning levels in students. Software clone detection Significant research has happened in finding the software clone types. In this section, we present the summary of clone detection tools/ techniques. Based on the milestone literature works of [6,8], we group all the clone detection approaches into 5 classes like Text-based, Token-based, Tree-based, PDG-based, Metric-based [7] . Text-based approaches: Text based approaches compare two code fragments based on the input text or string. The tools like Duploc [4], simian [18], EqMiner [19], NICAD [20], DuDe [21]. Except for NICAD none of the tools address detecting even small instances of Type 3. Whereas tool proposed by Johnson, Duploc, DuDe, and SDD detect only type 1 and other tools were meant to find first two clone types.. The work of (Kim, 2018) detects type 1 and 2 of C and C++ code. Highly scalable tool VUDDY [22] detects first two clone types of C/C++. The tool CCCD [23] has made an attempt to detect type 3 and 4 clones of C language. The tool vfdtect [24] detects type 3 and type 4 clones of Java code. Token-based Techniques: The technique works by performing lexical analysis to extract the tokens from source code. These extracted tokens are used to form the suffix tree or suffix array for matching. Tools like Dup [25], CCFinder [26], iClones [27], CP-Miner [28]. These tools have detected both type 1 and type 2. The tool Siamese [29] detects first three clone types of java code and the tool CP-Miner detects type 3 clones moderately. The work of [30] finds the first 3 clone types of IJDataset. The tool CCAligner [31] detects first 3 clone types of C and Java language. Higly scalable tool SourcererCC [32] detects first 3 clone types of IJDataset. The language agnostic tool CCfindersw [33] detects only first two clone types. PDG-based Techniques These techniques prepare the program dependency graph to represent the control and data flow of source code [47]. The technique has addressed the detection of type 4 clones. Tools like PDG-DUP [48], Scorpio [49], Duplix [50], Choi [51] concentrate on finding first 3 types of clones. Metrics-based Techniques like CLAN/Covet [52], Antoniol [53], Dagenias [54] that counts a number of different category of tokens and stores them in a matrix. Both matrixes are matched to get the clones. The tool Vincent [55] detects first 3 clone types of Java code. These tools suffer from false positives for detecting type 3 and 4 clones. In table 1, we summarize the number of clone detection tools developed to address various clone types and language they support. 
With having a great number of studies in clone detection, we still find a lack in complete and accurate code clone detection techniques. Maximum of 68 tools work on java code, 30 tools work on C code and 13 tools work on C++ code for clone detection. We find only 4 language agnostic tools which is the big gap in clone detecion reasearch. In order to provide justification to academic integrity, an attempt to code plagiarism detection was started in late 1976. From 2005 onwards detection of code plagiarism detection was based on string matching, fingerprinting, and tree-based. There are many state-of-art tools for conducting code plagiarism, those include JPlag [56], Marble, SIM, Plaggie, MOSS, Sherlock. JPlag works on tokenizing, greedy string tilling, and optimization. It supports C, C++, Java, C# and text files. Marble is structure-based tool, it works by the recursive splitting of the file till the top line reaches, then removes easily modifiable lexical patterns like class name, function name, identifier name, white space, and comments, and finally applies Linux diff to calculate the score of line similarity. Marble supports Java, Perl, PHP, and XSLT languages. MOSS works on the Linux platform based on document fingerprinting and supports more number of languages like C, C++, Java, C#, Python, VB, Javascript FORTRON, MIPS, and Assembly languages. Plaggie is the command line java application to find plagiarism in java codes. Tool SIM works by tokenizing the source code file then apply forward reference table for matching. It supports C, Java, pascal, Modula-2, Lisp, Mirad and text files. [58] that works on C and Java codes. CPDP [59] works on Java to find copy/paste activities. This tool can be used in finding type 1, type 2 software clones. The study by [60] works on binary code to check file similarity, [61] works on any language to find file similarity. BCFinder [60] works on C/CP++. A tool PlaGate [62] uses Latent semantic analysis to improve the performance of current plagiarism detection tools. A more detailed and analytical comparative study was conducted by [55], which includes 30 code similarity analyzers including fuzzywuzzy and jellyfish. in his discussion he concludes by saying, the code similarity tools behave differently on pervasive code modification and boiler-plate code. Often used tool ccfx, and Python string matching algorithm, fuzzywuzzy work better on pervasive code modification. The experiment conducted on SOCO data sets for boilet-plate codes ranks jplag-text plagiarism detector followed by simjava, simian, jplag-java, and Deckard [55]. Proposed Methodology Proposed work is based on the generation of ANTLR functional trees from the source code using corresponding language grammar. The proposed method works in 4 phases. 1. Repository Building and parse tree generation. 2. Functional tree generation. 3. Vector representation. 4. Measuring the functional tree similarity and displaying clone types. Before we start explaining the methodology we present a brief introduction to ANTLR and similarity metrics. Introduction to ANTLR (Another Tool for Language Recognition) Terence Parr is the man behind ANTLR who is working with ANTLR since 1989. ANTLR is LL (*) parser generator that generates the parse trees for the program according to language freely available grammar [63]. Even though ANTLR is written in java, it generates lexer and parser that respectively perform lexical and semantic analysis to build the parse tree from the input files. 
In this research work, we generate the parse tree by using formal language description called grammar, along with lexer and parser. ANTLR generates various files like grammar tokens, lexer tokens, BaseListner, Listener, parse tree visitor and parse tree walker that can be used to process the parse tree according to our needs [64]. ANTLR parser creates lexer and parser based on the language grammar that later parses the input file based on the grammar. For example, we can write a grammar and include as file.g4 to parse the simple arithmetic expression like 100+2*34 as Upon installing the latest ANTLR 4.8-v4 version and java (JDK) classpath set in the system, we have following ANTLR tutorial available at ANTLR site to generate parse tree for the arithmetic expression 2+3*4+(7-2).  Execute the antlr command as "antlr file.g4" at command prompt we get various files such as file.tokens, fileBaseListener.java, fileLexer.java, fileListener.java, fileParser.java, file.interp.  Compile all the generated java files to get the class files using the javac compiler.  Run the java org.antlr.v4.gui.TestRig for the input file to obtain the parse tree as follows. All the 3 steps have to be performed manually at the command prompt or in memory compilations can be done by using the automated APIs of "inmemantlr-tool" which has 14 releases so far and available at Github (Thome). Once we perform step 2 and get class files, using tree Listener class we can implement our own application to process the parse tree by creating the methods like getFirstChild(), getLastChild(), deChild(), getSubtree(), replaceSubtree() to access the basic ANTLR tree class. Introduction to similarity metrics According to [65] there exist several similarity metrics to find the similarity of the documents. Table 2 presents various similarity measures that classify the documents based on the data. Smith-Waterman 7 Damerau-Levenstein 8 Jaro-wrinkler Token-based 9 Cosine similarity 10 Jaccard 11 Dice 12 Word/N-gram 13 Monge-Elkan Hybrid 14 Soft -TFIDF grammar E; start: (E NEWLINE)* ; These algorithms work by string matching or token matching by excluding the consideration about the position of tokens in the document hence they do not produce the proper results if applied on the input codes directly. Based on the motivation of [66] these metrics behave differently on the compiled and decompiled code. We perform code similarity on the source code and corresponding ANTLR generated parse tree using a widely used code similarity metric cosine similarity which return 0 for no similarity and 1 for high similarity. The results of both similarity measures are shown in tables 3. Application of cosine similarity directly on two source codes addc and add1c gives matching similarity as 0.28 whereas we get similarity of 0.93 on the corresponding parse trees of addc and add1c. This significant difference is because of the fact that parse trees provide the syntactic and semantic information about the source code which provides evidence that application of cosine similarity on generated parse trees or subset of parse tree will work as accurate code clone detection technique. Table 3. Similarity measures on source code and parse tree using cosine similarity. Sl.No Source file Destination file Cosine similarity on source code Cosine similarity on parse tree (dot file) Figure 3 presents the architecture of parse tree generation. 
It is the automated process of running the ANTLR tool through the java application that performs all the manual work explained in generating figure 2 from the input expression 2+3*4+(7-2). We have used "inmemantlr-tool-1.6" APIs available at maven central [67] to generate the parse tree in dot file. Since ANTLR provides grammars to parse all the language, the proposed method is language agnostic. We explain the clone detection and classification for C, CPP and Java codes that act as a evidence to language agnostic nature of the work. ANTLR tool generates various token and java files like Grammar.Tokens, Lexer.tokens, Lexer.java, Parser.java, Listner.java, BaseListner.java. All the java files are then compiled to get class files. Upon generation of java class files one has to provide input file (.C/.C++/.Java) files to generate the parse tree. This process can be done by writing the java application to read the grammar files and input file then calling the Listner class generated in the previous stage of ANTLR processing. This application can be made to work on the generated parse tree to extract the nodes of our interest. Repository Building and parse tree generation Parse tree generation: as a case study we have considered small academic programs containing 135 C programs, 99 CPP programs and 33 codes of Java stored in a separate directory. The proposed method makes total of 9180 pair wise matching for C codes, 4950 comparisons for CPP codes and 561 comparisons for Java codes. The advantage of using functional tree extraction is that, it automatically performs pre-processing step adopted by previous clone detection techniques to eliminate program header, and comments. Since the size of generated functional tree is very long, we take small example of printing "hello world" from our data set. The basic principle of ANTLR is to generate the complete parse tree that includes the node information of all the lines present in the input code. Functional tree is the subset of parse tree that includes the generation of nodes only for the line number 4 to 6 for the above C-code, line number 4-7 for above CPP-code and line number 4-8 for above Java-code. These are the statements that represent the main functionality of the code, and hence the tree generated by including all the information in the main function of C and CPP code and the class body of java code is named as functional tree. Figure 4, 5 and 6 respectively shows the ANTLR parse tree for the above case studies of C, CPP and java code. Functional tree for both C and CPP code starts with the node name statementseq that contains node information for line number 5 for C and line number 5 and 6 for CPP. Sub tree generated by extracting only the lines of function definition is termed as functional tree in our paper. The parse tree of java code starts with node name compilationunit followed by left subtree importDeclartion that corresponds to importing the packages. A class definition starts from the right subtree with node name typeDeclartion followed by classDeclaration, classBody, classBodyDeclaration. The left children of this node correspond to modifier public static. The class definition starts from the rightmost child memberDeclaration followed by methodDeclaration. So far all the nodes of the parse tree represent in building the header information for the java class. These nodes just bring structural information of java code. 
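The extraction step described next boils down to locating, inside the full parse tree, the first subtree rooted at a particular rule name ("statementseq" for C/C++, "block" for Java) and keeping only that subtree. The paper's implementation uses the inmemantlr Java API (shown in the following paragraphs); purely as a language-neutral illustration of the idea, a recursive search over a simplified stand-in tree could look like the sketch below. The node structure and rule names here are assumptions for illustration, not the actual ANTLR classes.

```python
# Generic illustration of "functional tree" extraction: depth-first search for
# the first subtree whose rule name matches the functional entry point.
# This is NOT the ANTLR/inmemantlr API -- just a simplified stand-in tree.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    rule: str                         # grammar rule name (e.g. "statementseq")
    text: str = ""                    # token text for leaf nodes
    children: List["Node"] = field(default_factory=list)

def find_subtree(node: Node, target_rule: str) -> Optional[Node]:
    """Return the first subtree rooted at `target_rule`, or None."""
    if node.rule == target_rule:
        return node
    for child in node.children:
        found = find_subtree(child, target_rule)
        if found is not None:
            return found
    return None

def flatten(node: Node) -> str:
    """Serialise a subtree to the rule/token string later fed to TF-IDF."""
    parts = [node.text if node.text else node.rule]
    for child in node.children:
        parts.append(flatten(child))
    return " ".join(parts)

# Toy parse tree: header rules wrap the functional part of a small C program.
tree = Node("translationunit", children=[
    Node("declarationseq", children=[
        Node("functiondefinition", children=[
            Node("statementseq", children=[
                Node("statement", text='printf("hello world");'),
            ]),
        ]),
    ]),
])

functional = find_subtree(tree, "statementseq")   # "block" would be used for Java
print(flatten(functional))
```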
We first allow ANTLR to generate the complete parse tree by calling the base class method of antlr tree on the listener variable dt. TreeListner dt; dt.getParseTree() creates the ANTLR parse tree. We use common grammar file CPP14.g4 to parse C and CPP code to get parse tree. Parse tree for C and CPP code starts with node name translationunit followed by declarationseq, declaration, functiondefinition, functionbody and so on. Functional tree generation This section provides the technical details to extract the functional tree from the parse tree. From the figure 4 and 5 we find that parse tree of C and CPP code starts with the node name translation unit followed by declarationseq, declaration, functiondefintion along with all these nodes till the left descending children of functiondefintion corresponds to C/CPP header. These rules do not contribute to finding the functional similarity of any program. The main computation or functionality of the code starts from line number 5 which corresponds to the the node name "statementseq" in the parse tree. Hence we generate the functional tree by extracting the subtree with node name statementseq for C/CPP parse tree. The code for extracting the functional tree from ANTLR generated parse tree is presented below. parseTree=dt.getParseTree().getSubtrees(n-> n.getRule().equals("statementseq")).iterator().next(); similarly the parse tree of java code starts with node name compilationunit followed by left subtree importDeclartion that corresponds to importing the packages. A class definition starts from the right subtree with node name typeDeclartion followed by classDeclaration, classBody, classBodyDeclaration. The left children of this node corresponds to modifier public static. The class definition starts from the rightmost child memberDeclaration followed by methodDeclaration. So far all the nodes of the parse tree represent in building the header information for the java class. These nodes just bring structural information of java code. The functionality of the java code is written inside the two curly braces of the main method from the line number 7 of Java code. Hence functional tree for Java code can be extracted by passing the argument as node name "block" to equals method on getRule() as follows. Finding the functional tree similarity and displaying clone types The parse Functional tree generated in the previous phase can be stored in dot file using the APIs of "inmemantlrtool" (Thome) that can be used for comparison using any of the natural language processing techniques such as Levenstein distance, Edit distance, Hamming distance, Longest Common Subsequence. By looking at the fact that parse tree generates a lot of information through repetitive rules we consider term frequency inverse document frequency with cosine similarity to find the similarity between two different parse tree representations in a dot file. The sequence of finding the functional tree similarity is shown in the figure 7. The Cosine Similarity function [68] Is widely used to compute the similarity between two given term vectors. Which is ratio of the inner product (v1•v2) to the product of vector length. Similarity between two vectors v1, v2 is given by Following pseudo code, statements find cosine similarity between vectors v1 and v2 [68]. 
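The pseudo code referred to just above did not survive text extraction; as a stand-in sketch (Python rather than the original pseudo code of [68]), the cosine-similarity step applied to TF-IDF vectors of two dot files could be written as follows. The file names in the usage comment are hypothetical.

```python
# Sketch of the similarity step: vectorise two parse-tree dot files with TF-IDF
# and compute their cosine similarity (0 = no similarity, 1 = identical).
# Stand-in for the pseudo code cited from [68]; file names are hypothetical.
from math import sqrt
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def cosine(v1, v2):
    """Plain definition: inner product divided by the product of vector lengths."""
    dot = sum(a * b for a, b in zip(v1, v2))
    norm = sqrt(sum(a * a for a in v1)) * sqrt(sum(b * b for b in v2))
    return dot / norm if norm else 0.0

def tree_similarity(dot_file_a, dot_file_b):
    """TF-IDF cosine similarity between two functional trees stored as dot files."""
    with open(dot_file_a) as fa, open(dot_file_b) as fb:
        docs = [fa.read(), fb.read()]
    tfidf = TfidfVectorizer().fit_transform(docs)     # term-frequency / IDF weights
    return float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])

# Example usage (hypothetical dot files produced from two functional trees):
# sim = tree_similarity("addc.dot", "add1c.dot")
# print(f"similarity = {sim:.2f}")   # e.g. around 0.93 for near-identical trees
print(cosine([1, 2, 0], [1, 2, 1])) # toy vectors exercising the plain definition
```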
Finally, we get similarity between Doc A and Doc B as Vector Representation The Term Frequency Inverse Document frequency (tf.idf) [69] is a hash map-like data structure that finds frequencies of term occurrence in a document with the relative location. It is the SVM based metric that is used for document processing and comparison. Term frequency gives the count of each token in a document and inverse document frequency gives the uniqueness of each token in a document. The advantage of using this approach is it gives the relative position of every token in a document. Equation (1) Where N is the total number of documents? Table 4 shows the results obtained by combining TF and IDF for the below code-A and code-B. Significance of using TF-IDF is term frequency identifies the words having a unique occurrence of word in the documents. Term frequency is calculated by counting the tokens in the respective documents like for int in code-A is calculated as 1/7 which denotes the term int occurs once among 7 readable tokens. Similarly for a, b, c, and z in a code-A is 2/7, 2/7, 2/7, and 0/7 respectively. In the same way the term frequencies for all the tokens in code-B is calculated and presented in 5 th columns of table 4. Inverse document frequency for each tokens of code-A and code-B are presented in column 6 and 7 respectively. Finally tf-idf value 0.4837 signifies uniqueness of token c in code-A and token z in code-B. Classification of clone types As the last step, we classify clone types by manually validating the matching percentage of all four clone types according to the following thresholds. The following judgment criteria are based on the validation of more than 4000 known clone pairs taken from sanfoundry.com. The classification threshold is based on the similarity value as shown in the table 5. Experimental Setup and Dataset Creation The experiment is conducted on the Windows 7 operating system with Intel core 2 duos CPU having 2 GHZ speed and 2GB RAM on three sets of data samples. We plan 3 set of case studies as shown below. i) Dataset-1: To understand the parsing ability of ANTLR, and clone classification accuracy of cosine similarity, we initially validated our approach on 135 of C codes, 99 C++ codes and 33 java codes with average line count of 5. We recorded time taken to parse the various inputs files. ii) Dataset-2: We have collected sample of C, C++ and Java codes from sanfoundry.com which contains 1000 algorithm based codes of C, C++ and Java. We have edited all the codes according to clone type definitions to get total of 4234 true clone pairs of each code samples. iii) Dataset-3: Next we perform systematic GitHub-web scrapping on 73,075 active repositories of C and 86,505 active repositories of C++ to collect 12,600 sample clone pairs of C and 14,480 clone pairs of C++ with average line count of 15. Since BigCloneBench is the standard data set for java, we use the sample dataset similar to that of (Wang, 2020) which contains 9,134 java codes type. The table 6 presents the details of three datasets on various known clone pairs. Results and Analysis In section 3, we have applied TF-IDF cosine similarity directly on the C source files and found the results in table 3. The obtained similarity measure can only be used to find the type 1 clones and do not support detection of other three Recall tries to find how many positives are identified correctly. 
Recall= True Positive / True Positive + False Negative In terms of precession and recall by selecting the few results randomly from C, CPP and Java to understand clone classification accuracy is as follows. Results on Dataset-1 We present few samples of dataset 1 for both C and C++ in table 7 and 8 respectively using CPP14.g4 grammar. Table 9 shows the similarity of two functional tree's of ANTLR generated functional tree for Java files using JavaLexer.g4 and JavaParser.g4 grammar. The experiment was extended to total of 135 C codes with 9180 comparisons, 99 C++ codes with 4950 comparisons and 33 Java codes with 561 comparisons were made to record the following precision and recall is shown in the table 10. The results show excellent precision and recall for type 1 and 2 but yields false positive values for type 3 and 4. The table 10 shows the improvement in the precision and recall of type 3 and type4. One noted good thing about TFID-cosine similarity is that it takes just 40 seconds to compare 9180 C code comparisons and takes 78 minutes to compare equivalent functional trees which make it computationally infeasible for exhaustive comparisons. Results on Dataset-2 This section presents the classification results for the known clone pairs of dataset-2 containing exactly 4234 clone pairs of C, C++ and Java codes as shown in table 5. We have applied exhaustive comparison on the each dot file containing functional tree using TF-IDF cosine similarity. The results of clone classification for entire dataset-2 are presented in table 11 below. The results presented in table 11 proves that the proposed method works very well in detecting the clone types 1 and 2 on all three languages with almost 100% precision. We have also obtained excellent results in detecting type 3 and 4 with precision of 98.26% and 97.14% respectively that outperforms the existing tree based clone detection tools for detecting type 3 and 4 clone types. Results on Dataset-3 This section presents the classification results for the known clone pairs of dataset-3 containing exactly 12,600 sample clone pairs of C and 14,480 clone pairs of C++ with average line count of 15 and 9,134 java codes from BigCloneBench as shown in table 5. We have applied exhaustive comparison on the each dot file containing functional tree using TF-IDF cosine similarity. The results of clone classification for entire dataset-3 are presented in table 12 below. The results presented in table 12 proves that the proposed method works very well in detecting the clone types 1 and 2 on all three languages with almost 98.93% precision. We have also obtained encouraging results in detecting type 3 and 4 with average precision of 97.18%. Comparative Study In this section, we present the recent tree based techniques on the two important parameters such as clone type classification and language supported by tool. The comparison is presented on [72] and Wang, 2020 [1] have classified all 4 clone types but the major issue is they are limited to Java code for clone detection. Apart from this both the works are computationally infeasible. We are the first to apply the clone detection and classification to three languages. The evidences of current study gives the hint that since the grammars for parsing all the languages are freely made available by ANTLR, the current methodology can be extended to include all the programming languages that are in practice by all the universities in the world. 
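To tie the pieces together, the evaluation above rests on two small computations: mapping a similarity value to a clone type using the thresholds of Table 5, and scoring the detector with precision and recall. The sketch below shows both; the threshold values are placeholders (Table 5 itself is not reproduced in the extracted text), so they should be read as hypothetical, as should the toy labelled pairs.

```python
# Clone-type classification from a similarity score, plus per-type precision/recall.
# Threshold values are HYPOTHETICAL placeholders standing in for Table 5.
def classify_clone_type(similarity):
    if similarity >= 1.00:
        return 1          # exact (type-1) clone
    if similarity >= 0.90:
        return 2          # renamed (type-2) clone
    if similarity >= 0.70:
        return 3          # near-miss (type-3) clone
    if similarity >= 0.50:
        return 4          # semantic (type-4) clone
    return 0              # not a clone pair

def precision_recall(predicted, actual, clone_type):
    """Precision and recall for one clone type over labelled pairs."""
    tp = sum(p == clone_type and a == clone_type for p, a in zip(predicted, actual))
    fp = sum(p == clone_type and a != clone_type for p, a in zip(predicted, actual))
    fn = sum(p != clone_type and a == clone_type for p, a in zip(predicted, actual))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy labelled pairs: (similarity score, true clone type)
pairs = [(1.00, 1), (0.95, 2), (0.78, 3), (0.55, 4), (0.30, 0), (0.92, 3)]
pred = [classify_clone_type(s) for s, _ in pairs]
true = [t for _, t in pairs]
for ct in (1, 2, 3, 4):
    p, r = precision_recall(pred, true, ct)
    print(f"type-{ct}: precision={p:.2f} recall={r:.2f}")
```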
Next, we present Table 14 to compare our study with open-source tree-based techniques, recording the precision in detecting the clone pairs of our dataset-3. We considered Deckard [35], iClones [27] and ccfx [75], and re-used the accuracy data presented for FA-AST+GMN [1] to validate our approach. From Table 14 we can conclude that the proposed method is more reliable, complete and language-agnostic in detecting clones, with an excellent average precision of 98.07% across all four clone types, and thereby acts as a proper validation tool for detecting the learning levels of students in submitted code assignments. The string-based document comparison techniques for plagiarism detection in Arabic presented by Mohd. Binai [76] and the plagiarism detection for Kurdish proposed by Karzan Wakil et al. [77] can only detect type-1 and type-2 clones; they fail to detect type-3 and type-4 clones because of the semantic factor involved, and hence cannot be applied to code plagiarism detection. Conclusion This research paper proposes a more reliable code plagiarism detector by implementing complete and language-agnostic clone detection for the C, C++ and Java languages on datasets with average line counts of 5, 15 and 32, respectively. The technique works by extracting a functional tree from the ANTLR-generated parse tree, eliminating the need for the preprocessing stage employed by previous clone detection tools. We employ TF-IDF cosine similarity on the generated functional tree stored in a dot file, which takes less than 3 seconds to match the clone pairs of 1,396 codes and provides evidence that the method scales to large repositories. The results show that clone types 1 and 2 are classified with 100% precision, and types 3 and 4 with precisions of 98.50% and 98.12%, respectively, on 30 small codes of C, C++ and Java. The proposed technique exhibits precisions of 99.9% for type-1, 97.96% for type-2, 96.7% for type-3 and 97.66% for type-4 clone detection on the C, C++ and Java programs crawled from active repositories on GitHub. As ANTLR grammars are freely available, the proposed model can be extended to other programming languages to detect code plagiarism with clone types classified, providing proper validation of submitted coding assignments.
2021-07-26T00:06:09.597Z
2021-06-08T00:00:00.000
{ "year": 2021, "sha1": "df0789edd3d10511d1dd2a92302dc7fde8b21207", "oa_license": null, "oa_url": "http://www.mecs-press.org/ijmecs/ijmecs-v13-n3/IJMECS-V13-N3-4.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "540857db991feb8dffd43b1a8637e425212d7db8", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
81041993
pes2o/s2orc
v3-fos-license
Comparison of intramuscular methylergometrine, rectal misoprostol, and low-dose intravenous oxytocin in active management of the third stage of labor Objective: Active management of the third stage of labor (AMTSL) is a critical intervention for the prevention of postpartum hemorrhage (PPH), which is still the most common cause of maternal morbidity and mortality worldwide. The objective of the study is to compare the effect of intramuscular methylergometrine, rectal misoprostol, and low-dose intravenous oxytocin in AMTSL in terms of amount of blood loss and duration of the third stage of labor, cost-effectiveness, and side effect profile. Materials and Methods: Seventy-five pregnant patients admitted to the maternity ward for vaginal delivery from February 2017 to February 2018 received either intramuscular methylergometrine (0.2 mg) or rectal misoprostol (400 mcg) or low-dose intravenous oxytocin (5 units oxytocin in 100 mL normal saline) for AMTSL. Data were recorded in three groups: Group A (methylergometrine), Group B (misoprostol), and Group C (oxytocin), consisting of 25 cases each. Results: Mean blood loss was found to be least in the methylergometrine group (246.87 ± 65.44 mL) as compared to misoprostol (346.13 ± 58.35 mL) and oxytocin (334.5 ± 69.20 mL) (P = 0.000). Mean duration of the third stage of labor was also least in the methylergometrine group (6.21 ± 1.58 min) (P = 0.0008). Conclusion: Although methylergometrine was found to have a higher incidence of side effects such as nausea, vomiting, headache, and raised blood pressure, it was found to be the most effective drug for minimizing blood loss in the third stage of labor. In remote places where healthcare facilities are limited and drugs cannot be administered by parenteral route, rectal misoprostol remains an alternative. Introduction Postpartum hemorrhage (PPH) is one of the leading causes of maternal morbidity and mortality worldwide [1]. Active management of the third stage of labor (AMTSL) as recommended by the WHO is a critical intervention for the prevention of PPH and is composed of three components: (1) administration of a uterotonic, preferably oxytocin, immediately after birth of the baby; (2) controlled cord traction (CCT) to deliver the placenta; and (3) massage of the uterine fundus after the placenta is delivered. A unified consensus has been reached over the years that AMTSL with use of uterotonic drugs is more effective in reducing the duration of the third stage, incidence of retained placenta, amount of blood loss, and hence puerperal morbidity and mortality [2]. We conducted this study in the context of the Indian scenario, where the majority of the population is anemic. Patients who had hemoglobin <7 g/dL, previous history of PPH, pregnancy-induced hypertension, mal-presentations, coagulation abnormalities, antepartum hemorrhage, intrauterine demise, history of previous cesarean section, medical disorders such as diabetes, heart disease, stroke, peripheral vascular disorders, epilepsy, asthma, liver and kidney disorders, uterine rupture, or scar dehiscence were excluded from the study. The third stage of labor was actively managed in these patients by either intramuscular methylergometrine (0.2 mg) (Group A) or rectal misoprostol (400 mcg) (Group B) or low-dose intravenous oxytocin (5 units in 100 mL normal saline) (Group C). For randomization, a computer-generated random table was used. Data were recorded for a total of 75 cases with 25 cases in each group.
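As an illustration of the allocation step described above, the short sketch below (an assumption about the mechanics, not the trial's actual software) generates a computer-made random table assigning 75 participants equally to Groups A, B, and C.

```python
# Illustrative sketch only: random allocation of 75 participants into three
# equal groups (A = methylergometrine, B = misoprostol, C = oxytocin).
# The seed and list-based "table" are assumptions made for reproducibility.
import random

def make_random_table(n_per_group=25, groups=("A", "B", "C"), seed=2017):
    table = [g for g in groups for _ in range(n_per_group)]
    random.Random(seed).shuffle(table)  # shuffled order of group assignments
    return table

table = make_random_table()
print(table[:10])
print({g: table.count(g) for g in ("A", "B", "C")})  # 25 cases in each group
```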
The placenta was delivered by CCT (modified Brandt-Andrews method) and uterine massage was given in all cases. The blood loss during the third stage of labor was measured in a blood collecting bag (BRASSS-V-DRAPE). Blood clots were weighed separately, considering 1 g equal to 1 mL of blood. Blood-soaked swabs were weighed, the known dry weight subtracted, and the calculated volume was added to the volume of blood in the measuring bag to get the total blood loss. Duration of the third stage of labor for each case was also noted. Maternal hemoglobin and hematocrit were repeated 24 h after the delivery, and the fall in hemoglobin and hematocrit was taken as an objective measure of blood loss. The occurrence of side effects such as nausea, vomiting, headache, shivering, fever, and diarrhea was noted for the next 24 h. Results One patient each in Group A, Group B, and Group C had blood loss >500 mL and was labeled as PPH. They were managed according to the guidelines and excluded from our study. Final data consisted of 24 cases each in Group A (methylergometrine), Group B (misoprostol), and Group C (oxytocin). The demographic profile of the patients is shown in Table 1, and the duration of the three stages of labor, episiotomy, baby weight, and mean blood loss in the third stage of labor are shown in Table 2. Figure 1 shows a flowchart depicting the group distribution of the patients, and Figure 2 shows the distribution of cases according to duration and blood loss in the third stage of labor and side effects. Mean duration of the third stage of labor in Group A (methylergometrine) was 6.21 ± 1.58 min, Group B (misoprostol) was 7.79 ± 1.35 min, and Group C (oxytocin) was 7.46 ± 1.41 min. P value was found to be 0.0008. Mean blood loss during the third stage of labor in Group A (methylergometrine) was 246.87 ± 65.44 mL, Group B (misoprostol) was 346.13 ± 58.35 mL, and Group C (oxytocin) was 334.5 ± 69.20 mL. P value was found to be 0.000. Fall in the hemoglobin and hematocrit level was also found to be least in the methylergometrine group but was not statistically significant [Table 3]. Twenty-nine percent of cases in Group A, 8% of cases in Group B, and 8% of cases in Group C complained of nausea and vomiting. 8% of cases in Group A, 8% of cases in Group B, and 12.5% of cases in Group C complained of headache. 8% of cases in Group A, 12.5% of cases in Group B, and 8% of cases in Group C developed pyrexia. 12.5% of cases in Group B complained of diarrhea in the postoperative period. Cost per dose was found to be Indian National Rupees (INR) 47.5 in the methylergometrine group, INR 10.7 in the misoprostol group, and INR 71.72 in the oxytocin group [Table 4]. Discussion Hemorrhage along with hypertension and infection forms a part of the "deadly triad," contributing to maternal morbidity and mortality worldwide [1]. An estimated 500,000 women die from pregnancy-related causes every year, with up to a quarter of deaths occurring due to hemorrhage, especially in the developing countries. It is axiomatic that PPH occurs unpredictably and no parturient is immune from it. PPH is a significant contributor to maternal morbidity, to long-term disability, as well as to other severe conditions generally associated with more substantial blood loss, including shock and organ dysfunction. Blood loss up to 500 mL following vaginal delivery is generally considered as physiologically normal [3].
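As a worked illustration of the blood-loss estimation described in the methods above (bag volume, plus clots at 1 g taken as 1 mL, plus swab weight minus known dry weight), a minimal sketch with hypothetical numbers:

```python
# Worked example (hypothetical numbers) of the total blood-loss calculation:
# bag volume + clot weight (1 g counted as 1 mL) + (wet swab weight - dry weight).
def total_blood_loss_ml(bag_ml: float, clot_g: float,
                        swab_wet_g: float, swab_dry_g: float) -> float:
    clot_ml = clot_g * 1.0                        # 1 g of clot counted as 1 mL of blood
    swab_ml = max(swab_wet_g - swab_dry_g, 0.0)   # blood absorbed by the swabs
    return bag_ml + clot_ml + swab_ml

# Hypothetical case: 180 mL in the bag, 25 g of clots, swabs 60 g wet / 20 g dry.
print(total_blood_loss_ml(180, 25, 60, 20))  # -> 245.0 mL
```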
The effect of blood loss is more important than the amount of blood loss; therefore, defining PPH as any amount of blood loss accompanied by signs and symptoms of hypovolemia, regardless of the route of delivery, is clinically more significant as opposed to the traditional definition [4]. In 2012, the results of a large WHO-directed, multicenter clinical trial were published and showed that the most important component of AMTSL is the administration of a uterotonic [5]. The most commonly used uterotonic drugs in the management of the third stage of labor include ergot alkaloids, prostaglandins, and oxytocin. McDonald et al. [6] (2004) addressed prophylactic ergometrine-oxytocin versus oxytocin for the third stage of labor in their meta-analysis. Their review indicated that the use of ergometrine-oxytocin as a part of routine AMTSL appears to be associated with a small but statistically significant reduction in the risk of PPH when compared to oxytocin for blood loss of 500 mL or more. No statistically significant difference was observed between the groups for blood loss of 1000 mL or more. Satwe et al. [7] conducted a study to assess the effect of injection oxytocin versus injection methylergometrine in AMTSL. The study concluded that intramuscular oxytocin and intravenous methylergometrine are equally effective in reducing third-stage blood loss. This reduction in blood loss reduced the incidence of postpartum anemia, infection, and hence maternal mortality and morbidity. In our study, methylergometrine was found to be the most effective drug, with mean blood loss of 246.87 ± 65.44 mL as compared to misoprostol 346.13 ± 58.35 mL and oxytocin 334.5 ± 69.20 mL (P = 0.000, which is highly significant). Mean duration of the third stage of labor was also least in the methylergometrine group (6.21 ± 1.58 min) (P = 0.0008). Methylergometrine was found to be associated with a higher incidence of nausea, vomiting (29%), headache (8%), and raised blood pressure. Headache was more commonly noted in the oxytocin group (12.5%). Pyrexia (12.5%) and diarrhea (12.5%) were most commonly reported in the misoprostol group. Gohil and Tripathi [8] conducted a study to compare the efficacy of misoprostol 400 μg rectally, injection oxytocin 10 IU intramuscular, injection methylergometrine 0.2 mg intravenously, and injection ergometrine-oxytocin (0.5 mg ergometrine + 5 IU oxytocin) intramuscular in the management of the third stage of labor. They also concluded that methylergometrine and oxytocin should be maintained in cold storage to preserve their efficacy and shelf life. They require syringes and needles for intramuscular and intravenous administration. Hence, rectal misoprostol is a more cost-effective drug as it can be easily stored at room temperature and does not require parenteral administration. Another advantage of rectal misoprostol is easy administration, which can be done by health care workers and does not require any specialized training. Conclusion As methylergometrine is found to be the most effective drug in reducing blood loss in the third stage of labor, its use needs to be reemphasized in the anemic population. However, further multicentric trials are needed to substantiate our finding. In remote places where health care facilities are limited and drugs cannot be administered by parenteral route, rectal misoprostol remains an alternative and is also more cost-effective. Financial support and sponsorship Nil. Conflicts of interest There are no conflicts of interest.
2019-03-18T14:03:06.562Z
2019-07-01T00:00:00.000
{ "year": 2019, "sha1": "b0aec493af584d4760d5ecd93fa4ca024d765419", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/tcmj.tcmj_89_18", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b0aec493af584d4760d5ecd93fa4ca024d765419", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
202304114
pes2o/s2orc
v3-fos-license
Understanding Agriculture within the Frameworks of Cumulative Cultural Evolution, Gene-Culture Co-Evolution, and Cultural Niche Construction Since its emergence around 12,000 years ago, agriculture has transformed our species, other species, and the planet on which we all live. Here we argue that the emergence and impact of agriculture can be understood within new theoretical frameworks developing within the evolutionary human sciences. First, the improvement and diversification of agricultural knowledge, practices, and technology is a case of cumulative cultural evolution, with successive modifications accumulated over multiple generations to exceed what any single person could create alone. We discuss how the factors that permit, facilitate, and hinder cumulative cultural evolution might apply to agriculture. Second, agriculture is a prime example of gene-culture co-evolution, where culturally transmitted agricultural practices generate novel selection pressures for genetic evolution. While this point has traditionally been made for the human genome, we expand the concept to include genetic changes in domesticated plants and animals, both via traditional breeding and molecular breeding. Third, agriculture is a powerful niche-constructing activity that has extensively transformed the abiotic, biotic, and social environments. We examine how agricultural knowledge and practice shape, and are shaped by, social norms and attitudes. We discuss recent biotechnology and associated molecular breeding techniques and present several case studies, including golden rice and stress resistance. Overall, we propose new insights into the co-evolution of human culture and plant genes and the unprecedented contribution of agricultural activities to the construction of unique agriculture-driven anthropogenic biomes. Introduction Although once united under the single term "natural philosophy," for over a century scholars within the biological sciences striving to understand and manipulate the natural world have seldom interacted with scholars studying culture and society. This is problematic for many reasons, not least the social and cultural consequences of increasingly powerful biotechnology. However, recent developments at the intersection of the natural and social sciences (specifically, theories of cultural evolution, niche construction, and gene-culture co-evolution) have begun to bridge the gap between the study of biology and culture. In this paper we explore how these new interdisciplinary approaches might contribute to the study of agriculture, a topic that straddles the natural-social science divide. The transition from hunting and gathering to agriculture observed in most human societies is a key event that has radically transformed human societies. For much of its evolutionary history our species practised hunting and gathering, as a few isolated societies still do today (Panter-Brick et al. 2001). Beginning around 12,000 years ago, some human populations began domesticating plant and animal species. The adoption of agriculture triggered the establishment of small permanent settlements and, as populations expanded, of cities, kingdoms, and states. It allowed the creation of new political institutions and forms of social organization and stimulated an upsurge in scientific and technological innovation. It also brought many problems, such as the spread of new diseases and increased social inequality.
Agricultural knowledge and technologies have continued to advance at an increasing pace, particularly in the last century. The discovery of the rules of genetics by Mendel (Mendel 1866) and their rediscovery around 1900 (Corcos and Monaghan 1990) resulted in the application of plant breeding technologies from the 1930s onwards (Carlson 2004;Koornneef and Stam 2001;Heslop-Harrison and Scwarzacher 2012). The "green revolution" in several developing countries, which utilized new high-yield crops together with fertilizers and pesticides, was an important landmark in agricultural plant breeding, yet still based on traditional Mendelian breeding methods (Farmer 1986). The era of molecular breeding, including marker-assisted selection (MAS), from 1983 onwards (Ben-Ari and Lavi 2012; Smith and Simpson 1986) was followed by widespread genetic engineering/modification of crop plants (Gasser and Fraley 1989), and more recently by genome editing technologies (Bortesi and Fischer 2015;Sander and Joung 2014). Molecular breeding has transformed agricultural practices worldwide, although it often faces strong public and political opposition. Despite the importance of agriculture to our species' history and recent rapid advances in molecular breeding technologies, there remain disagreements over which theoretical framework offers the best understanding of the origin, spread, and ongoing transformation of agriculture. Several recent debates and exchanges have revealed a tension between, on the one hand, interpretive, humanities-oriented frameworks that focus on culture and agency on the part of agriculturalists and the sociopolitical contexts within which agriculture is practised, and, on the other hand, neo-Darwinian approaches that use tools such as optimal foraging theory derived from behavioural ecology to understand agricultural decisions, assuming that human decision-making has genetically evolved to maximise inclusive genetic fitness (e.g., Cochrane and Gardner 2011;Gremillion et al. 2014). The former approaches are laudable in their attempt to situate agriculture within the rich socio-cultural contexts that they demand, yet often lack rigorous scientific methods and sometimes suffer from the general malaise within the humanities of being politically motivated, agenda-driven, and disconnected from the natural and behavioural sciences (Barkow 2005;D'Andrade 2000;Slingerland and Collard 2011). The latter approaches are often limited in their theoretical assumptions, and, we would argue, do not fully incorporate the role of culture as more than a proximate mechanism (Laland et al. 2011;Mesoudi et al. 2013). Here we follow others (O'Brien and Laland 2012;Rowley-Conwy and Layton 2011;Zeder 2015) in arguing that the study of agriculture can benefit from being situated within a set of new evolutionary approaches to human behaviour (cultural evolution, gene-culture co-evolution, and cultural niche construction) that attempt to incorporate cultural change and individual agency within a rigorous scientific and multidisciplinary evolutionary framework. We highlight several ways in which the study of agriculture can benefit from these frameworks. We also highlight ways in which a consideration of agriculture yields new insights into cultural evolution, gene-culture co-evolution and niche construction. Specifically, we argue that (see also Fig.
1): (1) Changes in agricultural knowledge and practices are a prime example of cumulative cultural evolution (CCE), where beneficial ideas and inventions are selectively preserved and accumulate in number and effectiveness over successive generations. We apply the large body of modelling and experimental insights already obtained for CCE generally to agriculture. This illuminates the recent rapid advance in agricultural knowledge in the last two centuries, and also highlights the role of intentional versus non-intentional modification. (2) Agriculture is a prime example of gene-culture co-evolution (GCC), where culturally transmitted practices affect a species' genetic evolution, and vice versa. However, this is not just (as frequently argued previously) the case of culturally transmitted agricultural practices changing human genes, but also changing non-human genes contained within domesticated and genetically modified organisms. (3) Agriculture is associated with extensive cultural niche construction (CNC), where agricultural practices transform the environment and those environmental changes alter the selection pressures on agricultural CCE. We argue that agriculture can modify (i) the abiotic environment (e.g., water, salinity, soil composition), (ii) the biotic environment (e.g., domesticated species, pests including insects, fungi, and weeds), and (iii) the social environment (e.g., social norms, regulation, markets), and focus in particular on the latter. The following sections address each of these points in the context of selected examples of plant breeding via new molecular tools. We apply these insights to two case studies: golden rice and stress tolerance. We conclude by highlighting outstanding questions that arise from our attempt to place agriculture within these frameworks. Agriculture as Cumulative Cultural Evolution For most of the twentieth century, the study of cultural change remained largely separate from the biological sciences. From the 1970s, scholars began developing a formal theory of cultural evolution, in which cultural change is viewed as an evolutionary process that shares key characteristics with, but differs in important ways from, genetic evolution (Boyd and Richerson 1985;Feldman 1973, 1981;see Mesoudi 2011a, 2017 for reviews). This approach incorporates cultural change and variation into a theoretical framework that is consistent with the evolutionary sciences. Central to this approach is the idea that cultural change constitutes an evolutionary process in its own right: it is a system of inherited variation that changes over time, as Darwin defined evolution in The Origin of Species (Darwin 1859). 'Culture' is defined here as learned information that passes from individual to individual via social learning processes such as imitation, teaching, and spoken or written language. Social learning therefore provides the inheritance system in cultural evolution, paralleling genetic inheritance in genetic evolution. Recognising this parallel, we can borrow and adapt tools, concepts, and methods from the biological sciences to study cultural change (Mesoudi et al. 2006). These include mathematical models (Boyd and Richerson 1985;Cavalli-Sforza and Feldman 1981), phylogenetic analyses (Gray and Watts 2017), lab experiments, archaeological data and field research (Mesoudi 2011a). Importantly, this research does not unthinkingly import genetic models of change and apply them to cultural change without considering the unique aspects of the latter.
For example, we can incorporate multiple pathways of inheritance: not just from parents to offspring like genetic evolution, but also transmission from non-parents and between peers (Cavalli-Sforza and Feldman 1981). Psychological processes such as conformity work to favour common behaviours, while prestige bias spreads behaviours associated with high-status individuals (Boyd and Richerson 1985). There may be Lamarckian-like transformation such that novel cultural variants are not blind with respect to function (Boyd and Richerson 1985) but may be intentionally created by individuals to solve specific problems. This allows agent-based decision-making forces to be incorporated into an evolutionary framework (Mesoudi 2008). One interesting property of human cultural evolution is that it can be cumulative (Tennie et al. 2009). Other species exhibit social learning, and this is sometimes powerful enough to generate between-group behavioural traditions. For example, chimpanzee communities across Africa exhibit group-specific tool use profiles (Whiten 2017). Yet only humans appear able to accumulate and recombine behavioural modifications over time via social learning, generating complex cultural traits that could not have been invented by a single individual alone (Dean et al. 2014;Tennie et al. 2009). Agriculture is a prime example of cumulative cultural evolution (Fig. 1). Other species practice agriculture in a sense, most famously leaf-cutter ants of the genera Acromyrmex and Atta that cultivate a type of fungus (Schultz and Brady 2008). However, the adaptations responsible for this are genetic, not cultural. Human agriculture is the result of repeated behavioural innovations that spread, accumulate, and recombine via social learning through and beyond communities. This allows for great flexibility, often involving the simultaneous use of multiple domesticated species, and more rapid change over time, on the order of thousands, hundreds, or tens of years rather than millions as in the case of ant-fungus genetic evolution (Schultz and Brady 2008). In humans, agricultural knowledge, practices, and technologies are culturally evolving traits that often show a cumulative increase in scope and complexity over time (Fig. 2). Typically, these traits are sequentially linked, with prior inventions necessary for the emergence of subsequent ones. Key innovations include irrigation by controlling water flow via canals and other waterways, the invention of different types of plough, the conversion of gaseous nitrogen to inorganic nitrogen fertilizers to enhance crop yields, the industrial mechanization of a variety of agricultural processes, and the discovery of the principles of genetics that allowed classical plant breeding. Recent CCE has resulted in new agricultural and computerized technologies, e.g., drip irrigation (Camp 1998) and precision agriculture (Mulla 2013), and the application of novel molecular tools for breeding of crops and farm animals, such as the use of in vitro procedures for plant propagation (Loberant and Altman 2010), fertility control and genetic modifications in farm animals (Hasler 2003;Xu et al. 2006), molecular markers for selection (Smith and Simpson 1986, Ben-Ari and Lavi 2012), genetically-modified (GM) plants (Gasser and Fraley 1989;Harfouche et al. 2019) and genome editing of crops (Bortesi and Fischer 2015;Sander and Joung 2014).
Fig. 1 A schematic illustration of three approaches for understanding agriculture and plant breeding. (a) Cumulative cultural evolution (CCE) occurs as beneficial modifications are accumulated over time via repeated innovation and social learning, with an increase in some measure of improvement (e.g. crop yield and quality). (b) Gene-culture coevolution (GCC) typically describes the interaction between human genes and agricultural practices (an example of CCE), to which we add the additional interaction with non-human genes of domesticated animals and plants. (c) Cultural niche construction (NC) describes how agricultural practices may shape the abiotic, biotic and social environment, with those changes feeding back to shape agricultural practices.
As expected for a historically contingent, culturally evolving process, these various innovations occurred in stops and starts, showed different trajectories in different societies, and were sometimes lost, reintroduced, or recombined. Agriculture therefore fits several 'extended criteria' of CCE specified by Mesoudi and Thornton (2018): not just repeated improvement as a result of individual and social learning, but also sequential dependence of innovations, branching lineages, and recombination across lineages. Viewing agriculture as CCE allows us to draw on the large body of formal models and experiments that have explored the factors that allow, facilitate, and constrain CCE and apply these insights to agriculture. CCE is thought to depend on high-fidelity social learning, which is required to faithfully preserve beneficial innovations across generations and over time (Lewis and Laland 2012). This social learning also needs to be selective, either selectively preserving successful practices, or selectively learning from successful individuals (Laland 2004;Mesoudi 2011b). In the context of small-scale agriculture, this may involve the observation of, or teaching by, expert plant and animal breeders. Since the emergence of formal systems of science, one-to-one transmission has been replaced by the transmission of knowledge in publications such as journals, books, and patents, which greatly increase the fidelity of social learning. Equally important to mechanisms of social learning are aspects of demography. In order to support continued CCE, populations must be large enough to sustain the repeated transmission of knowledge (Henrich 2004;Powell et al. 2009), and they should also ideally be partially connected, e.g., via migration, such that different innovations can emerge in different groups and then become recombined, rather than the entire population fixating too soon on a single suboptimal solution (Derex and Boyd 2016).
Fig. 2 Key evolutionary events in agriculture and general biotechnology. Schematic illustration of the cultural evolution of the major agricultural niches and sub-niches and the accompanying technological and biotechnological innovations, depicted (bold line) as a relative skills and research index vs. the timeline (from the accepted start of agriculture, domestication, to the present). The parallel evolution of general key biotechnological events is also depicted (standard line). Several major events in agricultural evolution are indicated with arrows pointing to the approximate time. The resulting major agricultural sub-niches are indicated in a series of encircled bold numbers above the timeline (1 to 7), at the approximately corresponding period: initial plant domestication resulted in small-scale horticultural food production (1); with further domestication, large-scale agricultural food production took place as a result of trial-and-error plant trait selection and agronomic improvements (2); as excess quantities of food became more available, people started to extend the shelf life of fresh food by preservation via drying, salting, smoking and other technologies, some of which were already practised by hunter-gatherers (3), and by fermentation (Nummer 2002) (4); three key events further enhanced food quantity and quality from the thirteenth century (5): (a) long-distance travelling and the discovery of new countries resulted in imports and exports of new plants between countries, which allowed for new gene combinations, global gene exchange and domestication of new species, (b) the introduction of agricultural machinery during the industrial revolution, foremost the steel plough, cotton gin, seed drills, and later tractors, as well as (c) the chemical synthesis of ammonia, which resulted in massive use of nitrogen fertilizers and a large increase in crop production (Erisman et al. 2008); the discovery of Mendel's laws of genetics and their later rediscovery allowed revolutionary, intentional, science-based traditional breeding (Hallauer 2011) (6); this was followed by molecular breeding using genetic engineering, and more recently by genome editing (7).
The recombination of beneficial traits can generate exponential increases in knowledge, as seen in the patent record (Youn et al. 2015) and in the need for applying machine learning (Harfouche et al. 2019). Finally, the type of innovation can affect the dynamics of CCE. Miu et al. (2018) found, in a computer programming tournament, two classes of innovations: small, incremental 'tweaks' that were common but unlikely to lead to major increases in performance, and rarer 'leaps' that made bigger changes to existing knowledge, were more likely to fail, but had a small chance of a major improvement. These rare innovative leaps may play a disproportionate role in CCE (Kolodny et al. 2015) (see Fig. 2). An interesting question is whether innovation is intentional or not. In genetic evolution, there is no foresight. Genetic mutations arise randomly with respect to their adaptive effects; beneficial mutations are no more likely to arise when they are needed than when they are not. In cultural evolution, however, innovation may be intentionally directed in ways that make adaptive variants more likely to occur. Clearly, people are not omniscient (Mesoudi 2008), but this intentionality may speed up CCE compared to random modifications, as suggested by models of 'guided variation' (Boyd and Richerson 1985) and 'iterated learning' (Griffiths et al. 2008). On the other hand, major innovative leaps in CCE often arise by accident, suggesting that randomness can be useful; classic cases include the discovery of penicillin and x-rays (Simonton 1995). Of course, real cases of innovation may involve both chance and intention.
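A minimal toy simulation, written in the spirit of the guided-variation idea discussed above rather than as a reproduction of any published model, can illustrate how success-biased copying plus innovation produces cumulative improvement, and how damping harmful modifications (a crude stand-in for intentional, goal-directed tweaking) speeds it up; all parameters are arbitrary assumptions.

```python
# Toy simulation (illustrative assumptions only): cumulative cultural evolution
# of a trait value under success-biased social learning, comparing purely
# random innovation with partially "guided" innovation that damps bad tweaks.
import random

def simulate(generations=50, pop=100, guided=False, seed=1):
    rng = random.Random(seed)
    traits = [0.0] * pop                    # e.g., an index of agronomic know-how
    for _ in range(generations):
        best = max(traits)                  # learners copy the most successful variant
        next_gen = []
        for _ in range(pop):
            tweak = rng.gauss(0.0, 1.0)     # innovation on top of the copied variant
            if guided and tweak < 0:
                tweak *= 0.2                # guided variation: harmful changes damped
            next_gen.append(best + tweak)
        traits = next_gen
    return sum(traits) / pop                # mean trait value after all generations

print("random innovation:", round(simulate(guided=False), 2))
print("guided innovation:", round(simulate(guided=True), 2))
```

In such a sketch, switching off copying of the best variant (i.e., removing high-fidelity, success-biased social learning) collapses the cumulative gains, mirroring the dependence of CCE on transmission fidelity and selectivity noted above.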
The issue of intentionality in the emergence of agriculture has been debated extensively (Abbo et al. 2014;Fuller et al. 2012;Kluyver et al. 2017), often in oppositional terms. Cultural evolution models, such as those of guided variation, permit the inclusion of both intentional and non-intentional factors, to compare their combined effects on the speed and form of agricultural CCE. Recent GM technology represents, however, the ultimate in intentional modification, with agricultural CCE no longer dependent on random genetic mutation and recombination to create superior breeds. Agriculture as a Driver of Gene-Culture Co-Evolution Gene-culture co-evolution incorporates CCE, but focuses on those cases where cultural inheritance causes changes in gene frequencies, which feeds back on cultural evolution, forming a co-evolutionary dynamic (Feldman and Laland 1996;Laland et al. 2010). Several classic cases of human gene-culture co-evolution involve agriculture, given the growing evidence that agricultural practices have left indelible signatures on the human genome over the last 12,000 years (Laland et al. 2010;Richerson et al. 2010). O'Brien and Laland (2012) discuss two classic cases: first, the spread of lactose tolerance alleles from around 7500 years ago in central European populations as a consequence of the cultural practice of dairy farming (Gerbault et al. 2011;Itan et al. 2009); and second, the spread of sickle-cell alleles in West African populations that confer resistance against malaria, which increased in prevalence following the clearing of forests for yam cultivation, creating pools of standing water where mosquitoes breed (Wiesenfeld 1967). In both cases, there is clear archaeological, anthropological, and genetic evidence that cultural practices came first, followed by genetic responses that continue to affect behavioural variation across contemporary human populations. What is less often recognised in discussions of gene-culture co-evolution is that agriculture also causes genetic change in non-human species. Many definitions of agriculture require there to be human-induced genetic changes in the domesticated plant or animal (Rowley-Conwy and Layton 2011). These non-human genetic changes may be the result of intentional or unintentional artificial selection for traits that increase yields, or the side effects of such selection. The entire package of genetic changes in a domesticated species is sometimes called the "domestication syndrome". There is extensive evidence, particularly since the advent of gene sequencing, for sustained genetic changes in domesticated species of both plants and animals (Zeder 2015). In plants the domestication syndrome may include larger seeds, synchronous germination, or fruit ripening that makes sowing or harvesting easier, and reduction in chemical defences. In animals, the syndrome includes increased docility, changes in body shape and size, and altered reproduction patterns (Larson and Fuller 2014). In some cases, non-human genetic change coincides with human genetic change, as in the case of lactose tolerance genes in humans and corresponding changes in cattle genes (Beja-Pereira et al. 2003). Genetic modification by conventional and molecular intentional breeding represents further genetic change as a result of agricultural practices, and is covered in more detail below. Agriculture as Niche Construction As O'Brien and Laland (2012) have argued, agriculture is also a prime example of cultural niche construction.
Niche construction is the general biological principle that organisms do not just passively adapt to their environments. Often they actively construct their environments, with those modifications in turn affecting their own and other species' evolution (Odling Smee et al. 2003). These modified environments may be inherited via what is termed ecological inheritance. Cases of non-cultural niche construction occur in numerous species; examples include earthworms' burrowing and mixing activities, which alter soil nutrient content, and beaver dam building, which creates standing water. These activities have evolutionary consequences: for example, earthworms have retained their freshwater kidneys rather than adapt to the terrestrial environment because the mixed soil they create allows easier absorption of water. Cultural niche construction occurs when the behaviours that modify environments are at least partly socially learned, and the consequences potentially affect subsequent cultural evolutionary dynamics (as well as, potentially, genetic evolutionary dynamics; this would be a case of GCC) (Kendal et al. 2011;Laland et al. 2000). The 'environment' here can be physical or abiotic (e.g., soil composition or climatic conditions, both of which strongly affect plant development), biotic (composed of other species; in the case of domesticated plants this would include phytopathogenic fungi, bacteria, insects and viruses) and social (composed of other individuals of the same species, e.g., competition between neighbouring plants at the root level). Despite romantic notions of the "noble savage" living passively in an unaltered environment, hunter gatherers frequently engage in cultural niche construction by modifying their environments through cultural practices such as controlled burning of vegetation (Boivin et al. 2016;Rowley-Conwy and Layton 2011;Smith 2011). Large-scale agriculture brought about cultural niche construction orders of magnitude more extensive (O'Brien and Laland 2012; Rowley-Conwy and Layton 2011). Agriculture caused huge changes to physical environments, including the clearing of forests, the irrigation of previously arid environments, the dispersal of domesticated plants and animals, and the introduction of new parasites and pests. Agriculture also brought about huge changes to human social environments, including increased population density and new forms of social organisation (e.g., new forms of hierarchies). Finally, the accumulation of agricultural knowledge and practices shaped environments in which further accumulation of agricultural practices was made more likely (CCE). In fact, large-scale agriculture, which produces the majority of the food consumed worldwide (e.g., rice, corn, wheat, canola, soybean) is generally a monoculture (i.e., a single type of plant species that is cultivated in large land areas as a crop for human consumption), unlike home gardens, natural savannahs, pastures and forests, which contain many species. Agriculture therefore results in modified niches compared with the natural vegetation, with clear effects on ecosystems (Matson et al. 1997). A consideration of how agricultural practices shape, and are shaped by, social environments allows us to consider the mutual dynamics among agriculture and the social norms, regulations, and markets that often determine whether a particular technology or practice spreads or not. A good example of this is the acceptance and rejection of GM foods (see below). 
Case Studies: Biotechnology Most previous discussion of GCC in the context of agriculture concerns deep human history and prehistory (e.g., lactose tolerance and dairy farming; O'Brien and Laland 2012). In our case studies we instead focus on recent biotechnology and molecular breeding to illustrate the points raised above and demonstrate the relevance of these theoretical frameworks to contemporary issues. Moreover, studying recent scientific discoveries and technologies offers richer data for testing theories of cultural change compared to the ancient events of early domestication, which can only be studied indirectly via historical or archaeological methods. Following the Neolithic agricultural revolution and initial crop domestication, and all subsequent agricultural improvements including traditional breeding methods based on Mendelian genetics, a new agricultural phase started in the middle of the twentieth century: the era of molecular breeding, genetic engineering, and in vitro biology (Fig. 2). While some scholars refer to these as 'revolutions' (or at the extreme, a single 'agricultural revolution'), they are clearly all a process of CCE, with each major advance dependent on previous advances. Molecular breeding and genetic engineering could not have been invented without existing knowledge of Mendelian genetics. Yet, there are differences. The Neolithic agricultural period, i.e., plant and animal domestication, as well as other technological improvements in agriculture and biology (e.g., the use of irrigation and fertilizers) are more protracted and evolved sequentially over a period of hundreds or thousands of years (Fig. 2). In contrast, the time span of adopting and applying molecular plant breeding technologies and in vitro biology has been much shorter. Such technologies emerged far more rapidly, and became a working reality only within the last few decades. The molecular structure of DNA was first published in 1953 (Watson and Crick 1953), and the first genetically modified (GM) or transgenic plant (i.e., produced via incorporation of recombinant DNA), tobacco, was first created in the laboratory in 1982 (De Framond et al. 1983;Gasser and Fraley 1989;Tepfer 1984;Zambryski et al. 1983). Farmers began to plant GM crops in 1996, and in 2017, the 21st year of commercialization of biotech crops, 189.8 million hectares (a~112-fold increase) of biotech crops were planted by up to 17 million farmers in 24 countries, which makes GM crops the fastest adopted crop technology in recent times . This is also true for in vitro and molecular genetic procedures in farm animals and humans, e.g., in vitro fertilization (Bavister 2002). The molecular breeding technology described above is clearly a case of CCE, building on what went before (e.g., Mendelian genetics) and far exceeding what any single individual could achieve alone. The exponential accumulation of knowledge is a well-recognized characteristic of CCE (Enquist et al. 2008). There are many potential explanations for this, including the recombination of an increasing number of traits (Enquist et al. 2011;Youn et al. 2015) or the enhancement of innovation and discovery as a result of CCE products such as scientific instruments (Enquist et al. 2008;Mesoudi 2011b). Molecular breeding is also a case of GCC, where the genes of other species are directly and intentionally modified using culturally evolving scientific techniques. These genetic modifications in turn demand new and more powerful scientific techniques and knowledge. 
Finally, molecular breeding involves extensive CNC, in terms of major changes to the abiotic, biotic, and social environments. Case Study 1: Golden Rice Rice, originally domesticated in East Asia around 8-9kya, is a major staple food for billions of people worldwide, supplying the majority of energy and carbohydrate requirements in addition to other nutritional factors (Wing et al. 2018). Historically, rice is thought to have played a role in human GCC by driving the selection of alcohol dehydrogenase alleles in rice-farming populations in which rice was used in fermentation of food and beverages (Peng et al. 2010). In addition to this long history of traditional breeding, rice has more recently been subject to some of the first molecular breeding. Rice is generally consumed in its "polished" refined form by removing its outer layers. As a result, the edible part of rice grains consists of the endosperm that contains starch granules and protein bodies. However, this part lacks several essential nutrients that are more abundant in the outer layers of the grain, such as the carotenoid pro-vitamin A (β-carotene), which is converted in the body to vitamin A. Thus, reliance on polished rice as a primary staple food, which is an example of culturally evolving culinary traditions, results in vitamin A deficiency, a serious public health problem that is the primary cause of blindness and other diseases in new-borns in many developing countries (Srikantia 1975). Conventional breeding of rice to increase vitamin A content is impractical due to the lack of appropriate rice cultivars that produce pro-vitamin A in the grain. Research into the βcarotene biosynthetic pathway resulted in the ability to defeat vitamin A deficiency by genetically transforming commercial rice varieties using two daffodil genes and one bacterial gene, resulting in vitamin A-rich rice (Burkhardt et al. 1997). This genetically engineered, polished, fortified "golden rice" can supply sufficient pro-vitamin A for the body to convert into vitamin A (Potrykus 2001). Subsequent molecular breeding is leading to "green super rice," that has a lower ecological footprint (Wing et al. 2018). The continual modification and accumulation of GM rice breeds, from traditional rice to golden rice to green super rice, represents a case of CCE where we see continual improvement in multiple criteria of yield, nutritional quality, fit to local agricultural practices, and ecological sustainability. The genetic changes in rice brought about with domestication and selection have been succeeded by traditional breeding and recently by direct, intentional genetic modification, representing a case of GCC between human agricultural scientific practices and rice genomes (as well as human genes, in the case of alcohol dehydrogenase). Rice has also been responsible for extensive CNC. This involves not only the modification of abiotic and biotic environments, but also social environments. One key feedback between agricultural practices and social environments has been oppositional. Like many other GM crops, the adoption of golden rice, despite its health benefits, has been delayed considerably due to legislation, socioeconomic issues, and public concerns. Compared to non-GM rice varieties, the adoption and deployment of golden rice was delayed for more than 14 years by the demanding GM-regulation process. The first scientific procedure was published in 1997. 
Under regular processes, golden rice could have reached farmers' fields in Asia by 2002, but in fact was not officially approved for human consumption, except for planting by selected farmers, until 2013-2014 (Potrykus 2010). While regulation is needed to establish public safety, many hurdles existed not because of scientific problems or safety regulation, but rather due to the negative political climate surrounding GM-technology and the activities of anti-GM activists, the lengthy Intellectual Property (IP) rights approval, the lack of financial support from the public domain, and GM-regulation procedures that required several technological solutions (Potrykus 2010). These delays created a situation where no public institution could deliver GM products because of the high expenses of large-scale production, which resulted in the de facto monopoly of a few potent commercial industries that supplied high-priced seeds to farmers. Since then, GR2E Golden Rice, a provitamin-A biofortified rice variety, received its third positive food safety evaluation by the United States Food and Drug Administration (US FDA) in May 2018, following earlier approvals by Food Standards Australia New Zealand (FSANZ) and Health Canada, all based on the principles of the World Health Organization (WHO), the Food and Agriculture Organization (FAO) of the United Nations, and other international agencies (IRRA 2018). This negative feedback in the form of oppositional social norms and increased regulation has prevented the timely adoption of an available solution to vitamin A deficiency, and similar situations exist for other GM crops. Together with other technologies, GM crops have the potential to help ameliorate many of the world's most challenging problems, including hunger, malnutrition, disease, and poverty. However, this potential cannot be realized if the major barriers to adoption, which are largely socio-cultural rather than technical, are not overcome (Altman and Hasegawa 2012a, b;Farre et al. 2010). Social norms, culinary preferences, and legal regulations are themselves culturally evolving systems that co-evolve with scientific knowledge and technological practices. Consequently, the acceptance and spread of agricultural practices and products may vary cross-culturally. For example, while large global commercial companies tend to invest mainly in major world staple crops (e.g., soybean, corn, canola, wheat, and rice), many other local plants remain "orphan crops." This is why the government of India, where eggplants are an important part of the diet, embarked on a mission to produce GM insect-tolerant Bt brinjal (eggplants), which were adopted rapidly and commercialized despite some legislative problems and concerns that were later raised (Kolady and Lesser 2012;Medakker and Vijayaraghavan 2007). An appreciation of the social environment within which agricultural practices are situated, as follows from a CNC approach, has much in common with social science approaches that stress the embeddedness of new plant crops within socio-political contexts, not just performative qualities such as potential yield (Stone and Glover 2017). Indeed, demand has recently been growing for heirloom rice, traditional rice breeds that have lower yield than Green Revolution rice, but which are marketed as socially and environmentally responsible products embedded in local cultural traditions (Stone and Glover 2017).
Case Study 2: Plant Stress Tolerance/Resistance Major advances in molecular breeding have resulted in the genetic modification of crops to improve biotic stress resistance, including resistance to pests like insects, phytopathogenic fungi, viruses, nematodes, weeds, and others (Ceasar and Ignacimuthu 2012;Gurr and Rushton 2005;Scholthof et al. 2011;Suzuki et al. 2014;Vidavsky and Czosnek 1998), and abiotic stress tolerance, including tolerance to drought, salinity, extreme temperatures, heavy metal toxicity, and others (Hirayama and Shinozaki 2010;Vinocur and Altman 2005;Zhu 2016). The two specific examples discussed here are herbicide and insect resistance. Herbicide resistance was developed to combat weeds. With the intensification of agriculture, weeds became a serious economic threat to farming, resulting in increased agricultural production costs and yield loss of cultivated crops. This is especially the case with intensively grown and irrigated plants that enhance weed growth in addition to the desired crop. This problem has been dealt with traditionally either by labour-intensive manual weeding, which is usually performed in less developed countries by women, by tillage, or by heavily spraying fields with large amounts of toxic herbicide chemicals that pollute the environment (Christensen et al. 2009;Griepentrog and Dedousis 2009;Melander et al. 2005). To avoid these costly solutions, weed management was simplified and manual work was reduced by genetically modifying crops to be herbicide resistant. This allows the use of considerably smaller amounts of broad-spectrum herbicides since they kill only the weeds and not the crop (Bonny 2016;Gressel 2009a, b). For example, herbicide-tolerant GM crops were created that express a soil bacterium gene that produces a glyphosate-tolerant or glyphosate-degrading form of an enzyme, resulting in glyphosate-tolerance (Castle et al. 2004) and resistance to commonly used glyphosate herbicides. This cannot be achieved by traditional breeding. Currently, herbicide-resistance is the dominant trait deployed globally in soybean, maize, canola, cotton, sugar beet, alfalfa, and other crops, and is being adopted increasingly rapidly by farmers, comprising about 53% of the 180 million hectares of all GM crops in 2015/16 (ISAAA 2017). Insect resistance provides crops with defences against herbivorous insects. Over the centuries farmers have selected plant varieties that are more resistant to insect pests. As for herbicide resistance, traditional breeding for insect resistance was not very successful, and was followed from the 1940s by widespread spraying of fields with chemical insecticides. This had several drawbacks, including environmental pollution and damage to other non-pest organisms (Newton 1988;Weston et al. 2011). The biotechnological solution involved genetic modification of cultivated crops resulting in insect resistant plants that kill specific pests when digested. Insect tolerant GM cotton, potato, canola, corn, and other crops were developed through the introduction and expression of the soil bacterium Bacillus thuringiensis (Bt) cry genes, resulting in production of the endotoxin cry protein crystals that selectively kill target insect larvae eating the leaves (de Maagd et al. 1999). 
This technology has several limitations, and improved methods have been developed recently, including genome-editing technology and "gene stacking," i.e., the introduction and expression of multiple genes that create several toxic proteins (e.g., Gatehouse 2008;Lombardo et al. 2016). The successive inventions and discoveries that led from traditional breeding and the use of chemical pesticides to genetically modified herbicide and insect tolerant plants constitute another case of CCE. Each step is dependent on earlier innovations, and measures of improvement have increased, from crop yield and quality to reduced environmental harm. With our expanded definition of GCC to include non-human genes, the genetic modification of crops to incorporate bacterial genes to improve tolerance is also a case of GCC, given the culturally-driven changes in non-human genes. Finally, traditional and molecular selection for stress tolerance constitutes an extensive example of CNC. Human efforts to genetically modify plants to improve their tolerance to biotic and abiotic stress have allowed the spread of cultivated plants into land and regions where they could not have survived before. This involved the spread of organisms and their genes, either by straightforward domestication of new plant genes (e.g., the potato from Peru-Bolivia, and the tomato from Chile to Europe (Diamond 1997); see also Fig. 2 on gene transfer that accompanied European expansion to the New World), by traditional breeding, or by gene transfer from any organism to the GM plants as described above. All of these activities create new agricultural niches that feed back to the agricultural process. The spread of agriculture is also associated with the spread of pests. The use of both herbicide and insect tolerant crops reduces the amount of sprayed chemicals and thus can positively impact the environment, countering some of the negative consequences of the agriculturally constructed niches (Pimentel 1995). It may also reduce the toxic effects of insecticides and other pesticides on human health (Levine and Doull 1992;Nicolopoulou-Stamati et al. 2016), including Parkinson's disease (van der Mark et al. 2012). As in the case of golden rice, the impact on and feedback from the social environment is of great interest and importance. As noted, women are the main work force in planting, weeding, and harvesting agricultural plots in many developing countries (Gressel 2009a, b). In reducing the need for time-consuming manual labour, GM herbicide tolerant crops can potentially improve women's socioeconomic status, save many women from long working hours in the field and improve their economic situation and quality of life, as indicated in several cases (Carpenter 2013). Other studies show that biotechnology and the adoption of insect-resistant cotton in India generated more productive employment and greater earning power for women, with a consequent improvement in quality of life (Agarwal 1984). Similarly, a study in South Africa found that planting of Bt cotton benefitted women in the household (Bennett et al. 2003). In Burkina Faso, fewer insecticide applications needed for Bt cotton meant women spent less time fetching water (Zambrano et al. 2013), although cultivation of herbicide-tolerant cotton in Colombia resulted in the hiring of fewer women for weeding, traditionally a female task, with potentially negative economic consequences (Zambrano et al. 2013).
However, there are some indications that, unlike with traditional crops, women in Colombia and the Philippines appear to participate equally with men in the decision-making and supervision of insect tolerant (Bt) cotton cultivation (Yorobe and Smale 2012). Interestingly, these recent developments relating to gender roles may be reversing the historical effects of culturally evolving agricultural practices on a gender-biased division of labor. Alesina et al. (2013) provide evidence that the introduction of the plough several centuries ago allowed men to monopolise food production, resulting in the loss of socioeconomic power for women, who had previously participated in food production. Discussion In summary, we have argued that new and complementary approaches within the evolutionary human sciences, namely cumulative cultural evolution (CCE), gene-culture co-evolution (GCC), and cultural niche construction (CNC) (see Fig. 1), can provide theoretical frameworks for understanding the many impacts that agriculture has had on human societies and on the planet. Unlike prior papers that argue similarly (Heslop-Harrison and Scwarzacher 2012;O'Brien and Laland 2012), we have focused on recent biotechnology rather than the distant past, both to demonstrate that these frameworks are relevant for contemporary issues and events, and to make some novel points not apparent when focusing only on the past. First, we argue that agriculture is an excellent case of CCE. It involves the sequential improvement over time of agricultural knowledge (both scientific and non-formal knowledge systems) and practices (from small-scale habits and routines to large-scale technology) via the repeated cycle of innovation and cultural transmission. Viewing changes in agricultural practices as an evolutionary process and recognizing the resultant co-evolutionary dynamics and feedbacks facilitates connecting this cultural process with the biological/evolutionary/natural sciences, preventing a false and unproductive nature-culture dichotomy. Agriculture informally exhibits the classic exponential increase in knowledge and practices that is typical of CCE, with recent change seemingly orders of magnitude faster than past rates of change, allowing the large body of work exploring the drivers and inhibitors of CCE to further contribute to agricultural research. Second, we argue that the standard notion of GCC, where human cultural practices shape human genes and vice versa, should be expanded to include culturally driven changes in non-human genes. This includes, by definition, domestication, which entails the traditional breeding of domesticated species. More recently this has involved direct genetic modification with the introduction of GM crops. Third, agriculture is a prime example of CNC, involving extensive modification of abiotic, biotic, and social environments, and feedback from these environments to agricultural knowledge and practices. Most interesting from our perspective are feedbacks with the social environment. Adoption of golden rice and other GM crops has generated resistance from activist groups, political parties, and regulators due to fears over food safety, genetic contamination, and an aversion to 'tampering with nature.' These concerns provoke increased regulation and safety testing within the agriculture industry to ensure that GM products are as safe as possible.
While adequate levels of health regulation are of course needed, overly stringent regulation can prevent potentially beneficial innovations from spreading. The ideal outcome would be increased population health and reduced environmental impact as a result of GM crops such as golden rice, green super rice, and herbicide/insect resistant plants, as well as drought and salinity tolerant crops, post-harvest loss of food, use of novel fertility control in farm animals and more. Another positive social feedback is the impact on gender roles, with herbicide tolerant GM crops releasing women from tedious manual labour (weeding) and improving educational and economic outcomes (Fig. 3). To expand the utility of these theoretical frameworks we propose the following novel research questions: How Does Agricultural CCE Operate? As noted, theoretical models and experiments suggest several complementary mechanisms upon which CCE depends, including high-fidelity social learning, selectively biased social learning targeted towards successful traits or individuals, recombination of disparate solutions, innovation that includes large risky leaps, and large (or partially connected) populations. Which of these is responsible for agricultural CCE could be addressed via archaeological and historical records, e.g., by quantifying the frequency and impact of different innovations (cf. Miu et al. 2018 for computer code) or the rate of recombination across different domains (cf. Youn et al. 2015 for patents). We might expect these mechanisms to change over time, or vary crossculturally (Mesoudi et al. 2016). The cases of recent agricultural breeding technologies discussed here afford the opportunity to study the drivers of CCE in real time, with richer datasets than those available to archaeologists and historians. One interesting distinction already studied in the CCE literature is between intentional change by individuals (often called 'guided variation; ' Boyd and Richerson 1985) and unintentional change via the copying of successful traits or individuals (often called 'direct' or 'indirect' bias). This relates to debates in the archaeological literature over the extent to which domestication developments were intentional or unintentional (Abbo et al. 2014;Kluyver et al. 2017). Formal modelling of the kind used in the CCE literature may inform this debate, at the least highlighting how both processes can operate together, or vary in importance across different species, historical periods, and societies, and should not be viewed as mutually exclusive. Molecular breeding seems to be under more precise control than traditional breeding due to the fact that only specific genes are targeted rather than whole genomes of two traditionally bred species, but still risks unforeseen consequences especially in its social effects. Finally, there are interesting questions regarding the 'fitness' criteria of agricultural CCE, i.e., the quantity that is being maximised (Mesoudi and Thornton 2018). Two obvious criteria are crop yield (productivity) and nutritional content, but we have raised several additional criteria that may tradeoff with these. Golden rice, for example, maximises human health by reducing Vitamin A deficiency beyond simple calorific intake. Green super rice and herbicide tolerant GM crops minimise environmental degradation. Heirloom rice explicitly trades off yield and productivity with local cultural preferences (Stone and Glover 2017), albeit only in smallscale traditional farming communities. 
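To illustrate how a choice of 'fitness' criterion shapes what accumulates, here is a deliberately minimal sketch, entirely ours and not from any cited model, with invented trait names, weights and step sizes: candidate variants are generated and retained only when they improve a chosen weighted score, so different weightings accumulate different trait profiles.

import random

def evolve_variants(weights, generations=300, step=0.1, seed=1):
    """Toy hill-climbing model of cumulative improvement under a chosen
    'fitness' criterion; trait names, weights and step sizes are invented."""
    random.seed(seed)
    traits = {"yield": 1.0, "nutrition": 1.0, "environmental_cost": 1.0}

    def score(t):
        return (weights["yield"] * t["yield"]
                + weights["nutrition"] * t["nutrition"]
                - weights["environmental_cost"] * t["environmental_cost"])

    for _ in range(generations):
        candidate = {k: max(0.0, v + random.gauss(0.0, step))
                     for k, v in traits.items()}
        if score(candidate) > score(traits):   # keep only improving variants
            traits = candidate
    return {k: round(v, 2) for k, v in traits.items()}

print(evolve_variants({"yield": 1.0, "nutrition": 0.1, "environmental_cost": 0.1}))
print(evolve_variants({"yield": 0.2, "nutrition": 0.2, "environmental_cost": 1.0}))

Re-weighting the criterion changes which lineage of variants is retained, which is the sense in which yield, nutritional content, environmental impact and local cultural preference can trade off as the target of agricultural CCE.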
In this sense, the cultural fitness criteria that shape CCE are themselves evolving amongst farmers, scientists, and consumers. CNC within Social Environments We have argued that the most interesting niche construction dynamics involve feedback between agricultural practices and the social environment, e.g., social norms of consumers, regulatory bodies, and markets. Social norms also culturally evolve, partly according to the psychological biases of members of society that make some ideas or attitudes more likely to be recalled and transmitted than others, known as 'content biases' (Mesoudi 2011a). These may well affect moral norms concerning biotechnology (Mesoudi and Danielson 2008). For example, GM foods may violate psychological biases that provide us with 'folk intuitions' about the natural world (Atran 1998), including that species have discrete essences that are violated when genes are transferred across species. Similarly, people seem to have general psychological biases to attend to, recall, and transmit disgust-eliciting stimuli (Eriksson and Coultas 2014), and moreover disgust-related taboos are more likely to occur against meat than plant products (Fessler and Navarrete 2003). This fits with evidence that there is more opposition to GM animals than GM plants (Schuppli and Weary 2010). Nevertheless, consumption of GM plants is still debated in many countries, mainly on the basis of health hazard concerns (Altman and Hasegawa 2012a, b;Davison 2010;Echols 1998). Further experimental and observational work integrating the many psychological dimensions of norm transmission can be applied to norms surrounding biotechnology (Mesoudi and Danielson 2008). There is evidence for cross-cultural differences in acceptance or rejection of GM foods. For example, consumers in the US seem much more accepting than EU consumers towards GM foods (Gaskell et al. 1999). Such differences demand explanation in terms of divergent cultural histories. Intriguingly, there is some evidence that agriculture and societal organisation have been co-evolving for millennia. Talhelm et al. (2014) show that in China, historically ricefarming regions are more collectivistic than historically wheat-farming regions. They suggest that the intensive and demanding labour required by rice farming created closer social ties and social interdependence. For example, rice agriculture demands more water and thus greater coordination of irrigation across plots of land; when rice is grown on steep hill slopes, as it often is, the farmers must cooperate and coordinate to ensure adequate irrigation for all plots. Wheat farming, by contrast, requires less irrigation management and therefore less need to coordinate and cooperate across farms. In such cases, we see agriculture shaping social orientations, which may in turn shape the subsequent spread or acceptance of further agricultural practices. Conclusion Agriculture has transformed our species and our planet to such an extent that it is one of the primary reasons why some scholars advocate the renaming of the current epoch to the Anthropocene (Ellis 2015;Ellis et al. 2018;Lewis and Maslin 2015). The rapid rates of socio-cultural and scientific-technological change over the last century have only increased this impact, sometimes positive and sometimes negative. 
Here we have attempted to integrate several recent scientific-technological changes in agricultural knowledge and practices with an understanding of agriculture's impact on environments, including social environments, within novel theoretical frameworks of CCE, GCC and CNC.

Fig. 3 (caption). Major agriculture and culture-associated niche construction and plant gene-culture coevolution. The different interacting components of cumulative cultural evolution (CCE), plant-specific gene-culture coevolution (GCC), and environmental/agriculture-associated cultural niche construction (CNC) are schematically represented. Two major components are implicated: the physical environment, i.e., geography, the terrain, climate, and more (Box 1), and human cultural factors, including ingenuity, technology and scientific discoveries (Box 2). Both may modify, shape, interact and coevolve with specific genes of domesticated plants (and farm animals) (Box 3). Once a certain selected gene combination has been fixed in a domesticated plant (or a farm animal) it can be again modified by traditional breeding techniques or by employment of novel molecular tools (MAS, GM, genome editing) to produce novel gene combinations affecting mainly genes associated with modified plant products and metabolites (Box 4) and genes for improving plant survival/tolerance to environmental stresses (Box 5). The novel plant products or traits can in turn result in the creation of new environmental niches and affect the expression of human genes through consumption of those products, resulting in ongoing coevolution of biomes (i.e., the entire complex body of living organisms including plants, animals, and microorganisms), CCE, GCC, and ENC (Box 6).
Ray-optical negative refraction and pseudoscopic imaging with Dove-prism arrays A sheet consisting of an array of small, aligned Dove prisms can locally (on the scale of the width of the prisms) invert one component of the ray direction. A sandwich of two such Dove-prism sheets that inverts both transverse components of the ray direction is a ray-optical approximation to the interface between two media with refractive indices +n and –n. We demonstrate the simulated imaging properties of such a Dove-prism-sheet sandwich, including a demonstration of pseudoscopic imaging. Introduction Negative refraction is the unusual bending of light that does not normally occur in nature 2 . The concept was first discussed by Veselago [2], who noticed that materials with negative permittivity and permeability possess a negative refractive index. Such materials have been recently built in the form of metamaterials [3]- [5]-resonant electromagnetic structures periodic on a scale below the wavelength, where they act as a homogeneous optical medium. This has revived interest in negative refraction, leading for example to the ray-tracing visualization of objects with negative refractive index [6]. Ray-optical components such as lenses can also be miniaturized and arranged periodically. We consider here simple combinations of such periodic arrangements. To be clear, these are not metamaterials; they affect passing light waves very much like inhomogeneous media. However, they can affect light rays like homogeneous media. In this sense, they can be considered to be ray-optical metamaterials. Negative refraction has already been realized ray-optically in the form of lenslet arrays: pairs of lenslet arrays with a common focal plane bend light rays like the interface between optical materials with refractive indices +n and −n. These have been realized in the form of standard [7] and GRIN lenslet arrays [8], and their three-dimensional (3D) imaging properties, including pseudoscopic imaging, have been examined. We investigate here another way of achieving ray-optical negative refraction, which uses combinations of miniaturized Dove prisms. Our combinations of Dove prisms consist of two periodic Dove-prism arrays, which we call Dove-prism sheets, whereby one sheet is rotated with respect to the other by 90 • . Our Dove-prism-sheet sandwiches work differently from the lenslet arrays described above: whereas the lenslet arrays work by forming an intermediate image, our Dove-prism-sheet sandwiches work by successively inverting the ray vector's x-and y-components. This work is mainly driven by curiosity and the desire to work towards 'experiencing' the optics of negative refraction on a macroscopic scale. However, our approach is of additional interest because it can be generalized to rotation angles between the Dove-prism sheets other than 90 • , resulting in optical sheets that rotate the local ray direction through an arbitrary, but fixed, angle around the sheet normal, which is unprecedented. We will investigate this in future papers. Dove prisms and negative refraction The basic building block of a Dove-prism sheet is a Dove prism. With the coordinate system chosen as in figure 1, a Dove prism inverts the y-direction of any transmitted light ray. It also offsets the rays, whereby the offset is on the scale of the prism diameter. We are considering here the limit of small Dove prisms, so small in fact that we can ignore this offset. 
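The ray-level action just described can be sketched in a few lines of code. This is a minimal illustration, not taken from the paper: the small-prism limit is assumed, ray offsets are neglected, and the function and variable names are ours.

import math

def dove_sheet(direction, flip_axis):
    """Ideal Dove-prism sheet in the small-prism limit: flips one transverse
    component (x or y) of the ray direction vector; ray offsets are neglected."""
    dx, dy, dz = direction
    if flip_axis == "y":
        return (dx, -dy, dz)
    if flip_axis == "x":
        return (-dx, dy, dz)
    raise ValueError("flip_axis must be 'x' or 'y'")

def crossed_sheets(direction):
    """Two parallel sheets, the second rotated by 90 degrees about z."""
    return dove_sheet(dove_sheet(direction, "y"), "x")

# A ray incident at 30 degrees, in an arbitrarily chosen plane of incidence
alpha1 = math.radians(30.0)
azimuth = math.radians(55.0)
d_in = (math.sin(alpha1) * math.cos(azimuth),
        math.sin(alpha1) * math.sin(azimuth),
        math.cos(alpha1))
d_out = crossed_sheets(d_in)

# Both transverse components are reversed while the longitudinal component is
# unchanged, so sin(alpha2) = -sin(alpha1) for any plane of incidence.
print(d_in, d_out)

Reversing the whole transverse part of the direction vector while keeping the longitudinal part is the ray-optical analogue of refraction at an interface between media with refractive indices +n and -n, as described below.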
Clearly, wave-optically this limit breaks down as the prism diameter reaches the wavelength of the light. Acceptable compromises for visual purposes could be prism diameters of between 10 µm and 1 mm.

Dove prisms that are stretched in the x-direction (again with the choice of coordinate system shown in figure 1) and stacked on top of each other form a Dove-prism sheet (figure 2). Note that the prisms need to be separated by a few wavelengths to ensure that total internal reflection at the long side (see figure 1(a)) is not frustrated. The ray optics of such a sheet are simple: in the limit of small Dove prisms the sheet flips the y-direction of individual light rays in a beam passing through it. This implies that for light rays incident in a plane parallel to the (y, z)-plane, the angles of incidence, α1, and refraction, α2, are related through the equation

α2 = −α1.    (1)

Figure 3 (caption). Relationship between object and image distance for crossed Dove-prism sheets. A chess piece, the object, is positioned at a distance z behind the sheets; the crossed Dove-prism sheets image it to a position at a distance z in front of the sheets. The different frames show the image of the chess piece for various object distances; the sheets and the camera are stationary. In the first (z = 43) and second (z = 77) frames the image becomes larger and larger as it moves towards the camera, positioned at a distance of 120 units in front of the sheets. The image then moves through the camera plane and behind it, where it re-appears upside-down and getting smaller. In the first two frames, z = 43 and z = 77, the camera is focused on to the image of the chess piece; its image can be gleaned by inspection of the position of the focus on the chequered floor, which has a square length of 20 units. In the second two frames simple focusing is not possible as the chess piece is behind the camera, which is roughly focused on to the sheets. The frames are from a movie (MPEG-4, 256 KB, available from stacks.iop.org/NJP/10/023028/mmedia) calculated by performing ray tracing through the detailed prism-sheet structure, using the freely-available software POV-Ray [10].

It is particularly interesting to combine a Dove-prism sheet with another, parallel, Dove-prism sheet that is rotated around the z-direction through 90°, and which therefore flips the x-direction of light rays passing through it. Such Dove-prism-sheet sandwiches then flip both transverse ray directions (x and y), and invert the angle of incidence for any plane of incidence. When the two crossed Dove-prism sheets are close together, they lead to no additional ray offset. They therefore act like the interface between two optical media with equal and opposite refractive indices, +n and −n: Snell's law, written for this situation, states that n sin(α1) = −n sin(α2), which (provided that −90° < α1, α2 < +90°) is equivalent to equation (1).

Figure 4 (caption). [...]; zc is the position of the camera. From left to right, the frames show the simulated view as seen with a camera moving closer to the Dove-prism sheets; both the sheets and the chess piece are stationary. Because the distance between camera and image is less than that between camera and sheets, a decrease in both distances by the same absolute amount, that is moving the camera in the direction of image and sheets, decreases the distance to the image by a larger factor than that to the sheets. This means that the angle under which the image of the chess piece is seen grows more than the angle under which the sheets are seen.
The frames are from a POV-Ray [10] movie (MPEG-4, 204 KB, available from stacks.iop.org/NJP/10/023028/mmedia). Pseudoscopic imaging Images produced by single lenses are orthoscopic: if two objects at longitudinal positions z 1 and z 2 are imaged into positions z 1 and z 2 , and if the first object is in front of the second, i.e. if z 1 < z 2 , then the image of the first object will be in front of the image of the second, so z 1 < z 2 . The opposite is true in pseudoscopic imaging [9], where the image of the second object is in front of that of the first, so z 1 > z 2 . The effect of the inversion of the angle of incidence by crossed Dove-prism sheets is to image any object a distance d behind the sheets to the same distance in front of the sheets (figure 2). In other words, if the longitudinal coordinate z is chosen such that the sheets are at z = 0, then an object distance z corresponds to an image distance −z. For the two longitudinal object positions with z 1 < z 2 discussed above this results in image positions z 1,2 = −z 1,2 , and therefore the inverted relationship between the longitudinal image positions z 1 > z 2 . Crossed Dove-prism sheets therefore produce pseudoscopic images. Figures 3 and 4 demonstrate this pseudoscopic imaging with ray-tracing simulations performed using the software POV-Ray [10]. Both figures visualize imaging of a chess piece through crossed Dove-prism sheets, each comprising 200 Dove prisms. In figure 3 the distance of the chess piece behind this Dove-prism-sheet sandwich is varied; in figure 4 the distance of the (simulated) camera from the sheet sandwich is varied. The inversion of the z-coordinate during imaging implies that crossed Dove-prism sheets produce pseudoscopic images. Figure 5 demonstrates various properties of these pseudoscopic images. Specifically, it shows that pseudoscopic images appear to be 'inside out'; the pseudoscopic image of a convex chess piece, for example, is concave. When looking at this image from different directions, the image appears to have rotated, just like the hollow face The pieces are arranged such that one image is at the same distance as one of the chess pieces in front of the sheet, the other image is at the same distance as the other piece in front of the sheet. This can be seen by one chess piece always being below one image, independent of viewing angle, which means they are always undergoing the same parallax, which in turn implies that they are at the same distance from the camera. However, while the left side of the front piece is visible from the left-most viewing point (a) and the right side from the right-most viewing point (c), the opposite is true for the pseudoscopic images. Also, while the piece in front (which, of course, appears bigger) obscures the piece behind it, the image in front (again the bigger image) is obscured by the image behind it. The frames are from a movie (MPEG-4, 848 KB, available from stacks.iop.org/NJP/10/023028/mmedia) calculated by performing ray tracing through the detailed prism-sheet structure, using the freely-available software POV-Ray [10]. mask in the famous hollow-face (or 'Bust of the Tyrant') illusion [11]. In the case of the chess piece shown in figure 5, looking at the pseudoscopic image of one of the chess pieces placed behind the Dove-prism sheets from the left lets us see the right side of the chess piece, not the left side, as is the case with the chess piece placed in the same longitudinal position for comparison. 
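As a worked check using only the numbers quoted in the figure 3 caption: with the sheets at z = 0 and the camera 120 units in front (z = −120), objects at z1 = 43 and z2 = 77 behind the sheets are imaged to z1' = −43 and z2' = −77. Although the first object is in front of the second (43 < 77), its image lies 120 − 43 = 77 units from the camera while the second image lies only 120 − 77 = 43 units away; the images therefore appear in the reverse depth order, which is exactly the pseudoscopic relation z1' > z2' for z1 < z2.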
Figure 4 in [11] shows the same effects in the hollow-face illusion. Figure 5 also demonstrates another striking property of pseudoscopic images. If two objects are placed behind one another, the object in front obscures the object behind. In the pseudoscopic images of two objects placed behind one another, the image behind obscures the image in front. Conclusions and future work Transferring a basic idea from metamaterials-research-miniaturization and repetition of interesting electro-magnetic components-to ray optics, we have investigated the effect of 7 miniaturizing and repeating an optical component with interesting ray-optical properties, the Dove prism. Using ray-tracing simulations, we have demonstrated that the resulting Dove-prism sheets can ray-optically act like the interface between optical media with refractive indices of the same magnitude but opposite sign. We have also demonstrated some of the unusual properties of their pseudoscopic imaging. We are currently generalizing the ideas from this paper, for example varying the rotation angle between the sheets. We are also investigating the possibility of building Dove-prism sheets about the size of an A4 piece of paper. Such sheets would be suitable for demonstration experiments and allow the optics of negative refractive indices to be 'experienced'.
Tuning the stacking behaviour of a 2D covalent organic framework through non-covalent interactions † . Introduction 1][12] However, much of the inherent potential of COFs to rationally design their structures is impeded by their relatively low crystallinity as compared to the related metal-organic frameworks (MOFs). 13The hallmark of COFs, namely their strong covalent bonds combined with their low-temperature synthesis through the utilization of reversible reactions by dynamic covalent chemistry, often leads to a compromise between framework stability and crystallinity. 14,15COFs made by highly reversible reactions such as the nitrosyl dimerization can be obtained in single crystalline form, but they suffer from comparatively low stability due to the intrinsic weakness of these ON-NO bonds. 16Compromising reversibility for stability leads to reduced crystallinity, i.e. disorder and small crystallite sizes. 14,17Twodimensional COFs rely on van der Waals and other non-covalent interactions in addition to covalent interactions to form a threedimensional solid.The weak non-covalent interactions often cause the formation of 2D crystals where large deviations from the ideal stacking geometry are possible in the third dimension, leading to complex or ill-defined polytypes.While local analytical techniques such as solid-state NMR are largely insensitive to the stacking sequence, X-ray powder diffraction (XRPD) of COFs with moderate crystallinity returns average structures with no or little information about the layer arrangement.As this is a common challenge in 2D COFs, several design principles have been proposed to improve, control, or alter their stacking behavior.These include for example the use of non-flat, propeller-shaped building blocks that induce stacking without offset and lead to preferred, ''locked-in'' configurations, 18,19 as well as the use of donor and acceptor (DA) molecules that can stack in an alternating fashion, 20 or the manipulation of the dipole of the linkers, which can lead to energetic minimum structures through dipolar View Article Online View Journal | View Issue alignment. 21,22In all cases, however, high crystallinity is a key prerequisite to derive the 3D structure of COFs by means of XRPD. 7][28] Truly slip-stacked structures based on flat building blocks however have not been identified by XRPD so far, possibly due to stacking disorder and crystallite size effects, which lead to line broadening, precluding the exact determination of the stacking structure.Instead, the higher symmetry structure based on eclipsed layer stacking is usually assumed as an average structure model. 
3,23,29 Inspired by the existing design principles, we have synthesized a COF in which the stacking can be rationally adjusted based on the geometry and non-covalent interactions of the building blocks. We demonstrate that individual layers can self-assemble to form DA-type stacks where imine bond polarization or similar interlayer interactions may be efficient in determining the polytype. To add evidence for our hypothesis, we "turned off" the possibility of DA stacking in a closely related system by introducing a propeller-shaped building block that causes the formation of an averaged eclipsed geometry. These two very similar systems allow us to gauge the influence of symmetry, geometry and polarity of the building blocks on the stacking characteristics of the COF. The stacking of the COFs was analysed experimentally and theoretically and we provide first evidence of well-defined slipped stacking in a COF based on a combination of Rietveld refinement of XRPD data and transmission electron microscopy (TEM). We thus demonstrate that careful analysis of the stacking mode can serve as a design principle to direct and control disorder and crystallization in COFs.

Results

Two imine COFs based on the triphenyl aryl unit were synthesized through the reaction of a triamine with a trialdehyde under solvothermal conditions in mesitylene/dioxane 1:1 and aqueous acetic acid as a catalyst (Fig. 1).30 The difference between these COFs lies in the nitrogen content of the triamine precursor, which is based on a central phenyl ring or a triazine ring in case of the TBI-COF or the TTI-COF, respectively. The successful condensation reaction was confirmed via the disappearance of N-H and C=O vibrations and the appearance of C=N vibrations through IR spectroscopy (Fig. S1, ESI†). The porosity of the structures was determined via argon physisorption (Fig. S2, ESI†), which showed BET surface areas of 1108 m² g⁻¹ for the TBI and 1403 m² g⁻¹ for the TTI-COF (Fig. S3, ESI†).

While both networks are crystalline (Fig. 2), the TTI-COF has narrower line widths and shows a pronounced splitting of the [100], [110], [020] and [120] diffraction peaks as well as a discernable stacking peak (Fig. 2), showing that this COF is highly crystalline. This unusual diffraction pattern of TTI-COFs is distinct from previously reported, highly symmetrical frameworks.

To determine the structure of the two COFs, several structural models were considered to explain the observed powder patterns. We developed three models based on different stacking modes influencing the overall symmetry as well as the molecular conformations, which were compared to the experimental powder patterns using Pawley refinement31 and Rietveld analysis.32 The initial values of the cell parameters were obtained from the force field optimized structures, which were constructed based on geometrical considerations, and the in-plane connectivity was derived from the topology of the molecular building blocks. All models are based on a honeycomb structure with a hcb net33 (Fig. 1).

High symmetry case: eclipsed stacking

For the eclipsed model C3 symmetry was chosen for the in-plane structure. These individual layers were then stacked in a perfectly eclipsed fashion to form a one-layer cell with P3 symmetry and cell parameters a = b ≠ c and α = β = 90°, γ = 120°. This type of model is simplistically assumed for most COFs in the absence of detailed structural information from the X-ray powder patterns.
23,26,28hile these COFs have an apparent high symmetry due to disorder, 23 only some COFs stack without any lateral offset between layers. 18When this high symmetry cell is applied to the observed XRPD pattern of TTI-COF, stark differences between the simulated and the observed pattern are obvious and most prominently reflected by the different numbers of reflections (Fig. 2, R wp : 9.319, see eqn (S1) in ESI †).We therefore explored lower symmetry models for the TTI-COF.For the case of the TBI-COF the eclipsed model yielded a good fit (R wp : 1.365), as no symmetry reduction is apparent. Low symmetry layers: oblong pores As a first lower symmetry model, the in-plane C 3 symmetry was removed and varying the conformation of the imine bonds lead to oblong pores (Fig. S4, ESI †), while the eclipsed layer stacking was retained.This structural modification leads to a Pm symmetry unit cell with a a b a c and a = b = 901, g = 1201.Pawley refinement of the oblong model shows a relatively good fit (R wp : 5.359) to the observed powder pattern of the TTI-COF (Fig. 2).However, when the structural model was constructed and the cell parameters were applied based on the Pawley refinement (Table S1, ESI †), the structure was compressed in the b direction, resulting in aromatic C-C bond lengths as small as 1.36 Å.The oblong model is only able to fit the experimental XRPD pattern with a marked reduction of the cell parameter b with respect to a, more than would be expected by the conformational changes of the imine bonds.This strain disfavors the oblong model as a cause for the experimentally observed reduction in symmetry. Shifted layer model: slipped stacking Another conceivable way to lower the symmetry of the unit cell is to shift the individual pseudohexagonal layers along one direction.This model follows previous calculations on boronate ester COFs predicting that a slipped configuration in flat 2D COFs is energetically much more favorable than eclipsed stacking, which lacks experimental confirmation so far. 26,27 S1, ESI †).The lattice parameters of the slipped model Pawley refinement were implemented and showed no signs of strain such as unrealistically small bond lengths or angles.The one-layer geometry optimized unit cell was then refined using Rietveld analysis.Initially, the slipping direction was fixed with the constraint a = b.To explore other possible slipping directions, the parameter space of different directions and magnitudes of slipping at constant layer-layer distance was used for refinement and plotted against the layer offset (Fig. 
3).The obtained ''landscape'' of stacking maps the hexagonal 3).The obtained models for the TTI-COF and the TBI-COF differ considerably despite the similarity of both COFs.To further confirm the models we performed periodic boundary condition DFT calculations in which the unit cells and atomic positions of the COFs were relaxed (Table S2, ESI †).These showed a minimum for a slipped TTI-COF and a slipped TBI-COF.While the slipping in the TTI-COF is seen in the XRPD by the symmetry reduction, no such indication of slipping can be seen in the XRPD of the TBI-COF.In these DFT calculations the TTI-COF slips along [100], just as observed in the XRPD refinement.The DFT based structure of the TBI-COF is slipped along [120], which is in contrast to the observed XRPD pattern showing P3 symmetry.In order to understand the difference between the DFT based structure and the observed powder pattern we performed DIFFaX simulations to find an explanation for the apparent higher symmetry obtained from the XRPD pattern (Fig. 4). In the simplified model used in our simulations, subsequent layers of the structure had a chosen probability to slip in either one direction or the opposite, while the magnitude and the stacking offset was kept constant.When the probabilities of slipping in either direction become equal (0.5-0.5), the apparent symmetry of the simulated XRPD pattern increases to P3.Thus, with this simple model we are already able to rationalize the observed higher symmetry of the XRPD of TBI-COF, which can be attributed to disorder in the stacking of the TBI-COF.][28] We performed TEM and scanning electron microscope (SEM) experiments to confirm the results from XRPD and to gain further insights into the local structural features of these COFs. TEM images of the TBI-COF (Fig. 5) show crystalline domains with domain sizes in the range of 30 nm up to 80 nm, which exhibit the hexagonal symmetry of the pores.The fast Fourier transform (FFT) and the selected area electron diffraction (SAED) patterns show the expected repeat distance of 2.1 nm that matches the 100 reflection obtained from the structural model based on the XRPD data.The morphology of the TBI-COF as observed in the SEM and TEM resembles individual slabs that are composed of smaller crystallites (Fig. S5, ESI †). TEM of the TTI-COF shows significantly larger crystalline domains than the TBI-COF with crystallite sizes in the range of 50 nm up to 200 nm.The pseudo-hexagonal symmetry of the pores is apparent along the [001] zone axis, while the pore channels are visible when viewing in the direction along the a-b plane (Fig. 6).The FFT and the SAED of the TTI-COF both show lattice spacings close to the values expected from XRPD.The microscopic morphology of the TTI-COF exposes large polycrystalline rods in which some crystallites show bending along the direction of the channels (Fig. S5 and S6, ESI †). The FFT shows the hexagonal pore structure of the COF along [001] (Fig. 6B) as well as zone axes allowing the observation of 00l and h À k0 reflections simultaneously such as [110] (Fig. 
6D).In addition to the sharp reflections from the (1 À 10) and higher order reflections, a prominent streak along hk1 is visible at a distance of 2.90 nm À1 (3.5 Å) which is in excellent agreement with the expected layer-to-layer distance.Close inspection of the SAED reveals a further streak at the distance of 1.45 nm À1 (6.85 Å), which indicates the existence of two individual layers per unit cell along c.The simulated SAED of a two-layer model fits well to the experimentally obtained SAED (Fig. 6D and E; model: Fig. 8, right).In contrast to the simulation, the reflections hk1 and hk2 are smeared out to form streaks.The direction of these streaks indicates in-plane disorder as the cause of this diffuse reflection, since stacking disorder would cause streaks along c.A possible cause of in-plane disorder might be a random variation of the conformation of the imine linkages such as described in the oblong model.Since the SAED indicates two layers per unit cell, we developed possible models with different stacking geometries of imines with two layers per unit cell based on the structures of known molecular imines.From the crystal structures of molecular imine compounds three major geometric motifs are conceivable for the TTI-COF (Fig. 7).Molecular imines have a variety of stacking modes, where sometimes one molecule exhibits different kinds of stacking in one crystal or differently stacked polymorphs exist for a single compound. 34Ordered geometries include the direct slipped geometry where the imine This journal is © The Royal Society of Chemistry and the Chinese Chemical Society 2017 orientation is the same for all molecules that are stacking (Fig. 7A), 35 and the antiparallel geometry with the imine orientation changing with a twofold axis from one layer to the next (Fig. 7B). 36These motifs are present in imines with different substituents and the influence of these might guide the stacking behavior. 348][39] The symmetric substitution in the TTI-COF would point toward the disordered stacking, which however is not compatible with the observed SAED with clearly discernable streaks along (hk1) and (hk2). To further narrow down the possible stacking geometries, we performed periodic boundary DFT calculations on two layer unit cells and compared whether the alignment of the imine bonds (Fig. 7A) or the antiparallel imine bond (Fig. 7B) are energetically more favorable.We relaxed the structures of the antiparallel imine and the parallel imine models and obtained two closely resembling slipped structures that match the obtained lattice parameters from Pawley refinement of the XRPD well (Table S2, ESI †).The difference in total energy of these two structures was calculated and showed that the antiparallel configuration is more stable by approximately 0.32 eV (30.9 kJ mol À1 ) per unit cell.This is not surprising, as an antiparallel stacking from one layer to the next leads to donor-acceptor (DA) interactions between the more electron rich triazine triphenyl amine (TT-NH 2 ) and the electron poorer triazine triphenyl aldehyde (TT-CHO) across the layers, which is a well-known phenomenon for two flat molecules that have electron poor as well as electron rich character. 
40In addition, the antiparallel stacking creates antiparallel aligned dipoles, which stabilize the structure.The comparison of the parallel and the antiparallel stacking in the TBI-COF yielded only a negligible energetic difference of 0.04 eV (3.9 kJ mol À1 ), which could be explained by a competition of the favorable DA stacking and the unfavorable geometric mismatch between the propeller-shaped triphenyl benzene core (TB) and the flat triphenyl triazine core (TT). To investigate the origin of the high crystallinity and the unidirectionally slipped geometry of the TTI-COF, we compared the energy landscape of stacking the layers with different offsets of the antiparallel and parallel imines (Fig. 9), determined by DFT.In both energy landscapes, eclipsed stacking, corresponding to zero offset, is energetically non-favorable in contrast to the slipped geometry.The ''parallel imine'' stacking landscape shows a shallow and widespread minimum with multiple symmetry-related minima with a pseudo-hexagonal structure.Such an energy landscape might be expected to yield a random direction offset in stacking.In contrast, the ''antiparallel imine'' stacking landscape shows reduced symmetry, which can be attributed to the slight out-of-plane torsion of phenyl rings, which is more pronounced in the antiparallel imine case than in the parallel imine case (Fig. S7, ESI †).This torsion reduces the symmetry of one individual layer, but at the same time is able to propagate the preferential slipping direction to the next layer, providing a rationale to the observed reduction in the symmetry of the unit cell.The comparison of both energy landscapes shows that in the case of the antiparallel imine bonds the minimum is steeper and less distributed than in the case of the parallel imine bonds.A steep minimum likely directs the crystallization process during the synthesis of the TTI-COF and therefore may be linked to the observed slipped stacking mode and the high crystallinity of the TTI-COF. As the antiparallel stacked TTI-COF is the most stable configuration according to DFT, this structural model was used for the Rietveld refinement.The crystal structure consisting of two independent layers was refined using Rietveld methods, by refining the lattice parameters, atomic coordinates using rigid bodies for the layers and their shift with respect to each other.The final TTI-COF model is shown in Fig. 8 and the corresponding refinement in Fig. 2. Discussion As outlined above, the particular slip-stacking mode seen in the TTI-COF can be explained by an interplay of several factors that are different in the TBI-COF.Most notably, the TTI-COF is a relatively flat system, which allows the individual layers to slip, in contrast to propeller like out of plane elements, which can cause locking in of a structure. 18,19However, the flat structure alone does not seem to be sufficient for introducing unidirectional slipping, since many COFs are flat, but do not show the same layer offset in only one direction and hence, symmetry reduction, as observed in the TTI-COF. 7,23,26Therefore, another factor influencing the stacking might be the self-complementarity of the TTI-COF, which means that individual layers can form DA stacks just by alternation of the different building blocks along the c-direction (Fig. 
7).This feature is fairly unique, since it requires the use of two linkers with C 3 -symmetry that have the same size, geometry, but different electronic structures.Generally, the parallel stacking of imines can be seen as a valid model for most imine COFs since conditions for self-complementary Fig. 8 The DFT optimized structures of the TBI-COF (left, parallel) and the TTI-COF (right, antiparallel).Carbon atoms are shown in grey, nitrogen in blue and hydrogen in white. Fig. 9 Energy landscape for slipping of the TTI-COF composed of two extended layers that are offset with respect to each other, while keeping the stacking distance constant.The energy landscape was sampled in close proximity to the optimum offset calculated by DFT for the 3D periodic structure, located at the center of each landscape.The layers were shifted with respect to each other at a constant distance between both layers.The obtained energies were normalized with respect to the lowest energy geometry.Values between data points were smoothed to aid the eye.A zero (0 Å, 0 Å) shift represents a perfectly eclipsed geometry. This journal is © The Royal Society of Chemistry and the Chinese Chemical Society 2017 antiparallel stacking as outlined above are rarely met.If the size of the building blocks is different, then the contact during alternation would not be that intimate and the dipole of the imine bonds could not be aligned in a close, antiparallel fashion to enable favorable dipole-dipole interactions. 21,22In addition, the planarity of TTI-COF favors the DA stacking, which is in contrast to TBI-COF.The TBI-COF could be expected to stack with no offset between the layers (eclipsed) since it bears a propeller-shaped building block. 18However, the DFT calculations showed an energy minimum for an offset structure, which is why an averaged structure with an apparent zero layer offset is more likely for this COF.In principle, the TBI-COF could be expected to show an even more pronounced stacking in a DA fashion since a benzene core is more electron rich than the triazine core.However, the out-of-plane twisting of the TB system is likely to make efficient contact to the TT core in an adjacent layer difficult.Therefore, the lower crystallinity and the different observed stacking geometry of the TBI-COF is largely linked to the disturbance of planarity. 
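One way to see why a random layer offset produces an apparently eclipsed average structure while a biased offset produces a slip-stacked one is a toy version of the DIFFaX-type disorder model used above. The sketch below is illustrative only (one-dimensional offsets in arbitrary units, invented parameter names, not the actual DIFFaX input):

import random

def stacking_offsets(n_layers, p_plus, slip=1.0, seed=0):
    """Cumulative in-plane offset of each layer when every new layer slips by
    a fixed amount in the +a direction with probability p_plus and in the -a
    direction otherwise (a 1D caricature of the disorder model described above)."""
    random.seed(seed)
    offsets, x = [0.0], 0.0
    for _ in range(n_layers - 1):
        x += slip if random.random() < p_plus else -slip
        offsets.append(x)
    return offsets

for p in (0.5, 1.0):
    runs = [stacking_offsets(200, p, seed=s)[-1] for s in range(500)]
    mean_offset = sum(runs) / len(runs)
    print(f"p_plus={p}: mean final offset over 500 stacks = {mean_offset:.1f}")

With equal probabilities the mean offset is close to zero, mimicking the apparently eclipsed, higher-symmetry average seen for the TBI-COF, whereas a fully biased probability gives a uniform, unidirectional slip like that refined for the TTI-COF.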
Conclusion In conclusion, we have synthesized two imine-COFs with similar molecular connectivity but distinctly different stacking geometries.While the TBI-COF adopts the archetypical random layer off-set as seen for most 2D COFs, giving rise to an average higher symmetry structure which is isostructural with eclipsed layer stacking, the TTI-COF shows an unusual slip-stacked geometry with uniform direction of the layer offset in each subsequent layer.SAED in conjunction with DFT calculations revealed a two-layer unit cell of the TTI-COF with antiparallel imines as a preferred stacking mode.The observed stacking preference of the TTI-COF directly translates into significantly increased domain sizes and crystallinity as compared to TBI-COF.DFT based energy landscapes for the stacking of the TTI-COF suggest that the alternate imine stacking creates steeper and deeper minima, which can be seen as the rationale for the unidirectional offset-stacking and the resulting improved overall crystallinity.In conclusion, the observed interlayer donor-acceptor type stacking interactions in TTI-COF may be used as a more general design principle based on non-covalent interactions that facilitate crystallization. Fig. 1 Fig. 1 Schematic representation of the synthesis of TTI-COF and TBI-COF from a triamine and a trialdehyde. This model was implemented in a P1 unit cell with the constraints a = b a c and a = b, g = 1201.Pawley refinement with the slipped model showed the best fit (R wp (Pawley): 4.461, Fig. 2, bottom) of the three applied models (Table Fig. 2 Fig. 2 Top: XRPD patterns (l = Cu K a1 ) of the TBI-COF and the TTI-COF (black) with the final Rietveld fits of the COFs (red) and their respective difference curves (blue).R wp (Rietveld) (see eqn (S1) in ESI †) values for the TTI-COF and the TBI-COF are 7.138 and 2.744, respectively.Bottom: XRPD pattern (l = Cu K a1 ) of the TTI-COF with eclipsed, oblong and slipped Pawley refinements with detail view of the reflections showing the reduction in symmetry. Fig. 3 Fig. 3 Contour plot of the relative quality of refinement (R wp ) of the slipping direction in the TTI-COF, by means of changing the a and b angles of the unit cell.To visualize the pseudo-hexagonal symmetry, the plot is shown in Cartesian coordinates, where the x-axis is collinear with the [100] and the y-axis with the [120] direction of the unit cell.The cartoon insets indicate the approximate stacking geometry of the respective positions in the refinement landscape. symmetry of the individual layers and reflects the fact that not all slipping directions fit the powder pattern equally well when comparing constant lateral offsets.The preferred slipping direction in this COF is along [100] and all equivalent directions ([110], [010], [ Fig. 4 Fig. 4 DIFFaX simulated XRPD pattern with varying degrees of disorder in the slipping direction [120] for the TBI-COF. Fig. 5 Fig. 5 TEM image of the TBI-COF (A) with FFT of the entire image (C) and of the area indicated by the black rectangle (B).The SAED (D) shows the lattice spacing of the 100 and equivalent reflections, which are close to the value obtained by XRPD refinement (2.1 nm). Fig. 6 Fig. 
6 TEM image of the TTI-COF (A) with FFT of the area indicated by the black square (B) and of the entire image (C).The contrast-enhanced SAED (D, logarithmic contrast) shows a pattern taken along the [110] zone axis, which corresponds to the simulated SAED (E).Profile plot (F) along c corresponding to the [001] direction of the image (D) clarifies the streak features visible at 2.90 and 1.45 nm À1 corresponding to the 001 and 002 reflections, respectively.The SAED (G) shows the repeat distance of the 100 reflection with lattice spacing close to the value obtained by XRPD refinement (2.1 nm). Fig. 7 Fig. 7 Three of the possible stacking motifs of the TTI-COF, where blue and green represent the amine and aldehyde building blocks, respectively.
Ferroelectric Instability under Screened Coulomb Interactions We explore the effect of charge carrier doping on ferroelectricity using density functional calculations and phenomenological modeling. By considering a prototypical ferroelectric material, BaTiO3, we demonstrate that ferroelectric displacements are sustained up to the critical concentration of 0.11 electron per unit cell volume. This result is consistent with experimental observations and reveals that the ferroelectric phase and conductivity can coexist. Our investigations show that the ferroelectric instability requires only a short-range portion of the Coulomb force with an interaction range of the order of the lattice constant. These results provide a new insight into the origin of ferroelectricity in displacive ferroelectrics and open opportunities for using doped ferroelectrics in novel electronic devices. Ferroelectric materials are characterized by the spontaneous electric polarization that can be switched between two (or more) orientations. 1 This property makes them attractive for technological applications, such as nonvolatile random access memories, ferroelectric field-effect transistors, and ferroelectric tunnel junctions. 2, 3,4 The importance of ferroelectrics also stems from a fundamental interest in the understanding of the electric-dipole ordering, structural phase transitions, and symmetry breaking. 5 The perovskite ABO 3 ferroelectric compounds are especially important group due to the relative simplicity of their atomic structure. The ferroelectric phase transition in these materials is a displacive transition from a high-symmetry paraelectric phase to a polar ferroelectric phase below the critical temperature. This transition is characterized by a decreasing frequency of a transverse optical phonon mode (the soft mode) which drops to zero at the transition point and then becomes imaginary in the ferroelectric phase, corresponding to a collective displacement of ions from their centrosymmetric positions with no restoring force. 6 The ferroelectric instability can be explained by the interplay between long-range Coulomb interactions favoring the ferroelectric phase and short-range forces supporting the undistorted paraelectric structure. 7 Additional hybridizations between O cation 2p and metal anion d orbitals are required to diminish the short-range repulsion and thus to allow for the ferroelectric transition. 8,9 This view is supported by first-principles calculations which indicate that the large destabilizing Coulomb interaction yielding the instability is linked to giant anomalous Born effective charges arising due to the strong sensitivity of O-metal hybridizations to atomic displacements. 10 While doping a ferroelectric material may enhance its range of functionalities, charge carriers produced by doping will screen the Coulomb interactions that favor the off-center displacements and eventually quench ferroelectricity. This is why it is naturally expected that a ferroelectric phase could not exist in conducting materials. Contrary to this expectation, however, ferroelectric displacements have recently been observed in oxygen reduced conducting BaTiO 3-δ . 11,12 It was found that the ferroelectric instability is sustained up to a critical electron concentration n ≈ 1.9×10 21 cm -3 , which corresponds to about 0.1 e per unit cell (u.c.) of BaTiO 3 . The origin of this "metallic ferroelectricity" is directly related to several important and interesting fundamental questions. 
13 How does the screening of the Coulomb interaction affect the ferroelectric displacements? What is the minimum effective range of the Coulomb force to preserve the ferroelectric instability? What happens with the soft mode with charge doping? The answers to these questions would not only provide a better understanding of the nature of ferroelectricity, but also open new possibilities for functional materials.

In this paper, we explore the charge carrier doping effect on ferroelectricity using density functional calculations along with phenomenological modeling based on screened long-range Coulomb interactions and the short-range bonding and repulsion effects. By considering a prototypical ferroelectric material, BaTiO3, we demonstrate that ferroelectric displacements are sustained in electron doped BaTiO3 up to a critical concentration of 0.11 electron per unit cell volume, thus revealing that the ferroelectric phase and conductivity can coexist. Our investigations show that the ferroelectric instability requires only a short-range portion of the Coulomb force with an interaction range of the order of the lattice constant.

Our calculations employ density functional theory (DFT) implemented in the plane-wave pseudopotential code QUANTUM-ESPRESSO.14 The exchange and correlation effects are treated within the local-density approximation (LDA). The electron wave functions are expanded in a plane-wave basis set limited by a cut-off energy of 600 eV. 14×14×14 and 24×24×24 Monkhorst-Pack k-point meshes are used for structural relaxation and density of states (DOS) calculations, respectively. The self-consistent calculations are converged to 10⁻⁵ eV/u.c. The atomic positions are obtained by fully relaxing the lattice and all the ions in the unit cell until the Hellmann-Feynman force on each atom becomes less than 5 meV/Å. The electron doping in BaTiO3 is achieved by adding extra electrons to the system with the same amount of uniform positive charge in the background. For the undoped tetragonal BaTiO3, our calculation gives the lattice constant a = 3.933 Å and c/a = 1.015, polarization P = 28.6 µC/cm², and Ti-O and Ba-O relative displacements of 0.113 Å and 0.091 Å respectively, consistent with previous LDA calculations.

Doping BaTiO3 with electrons pushes the Fermi energy, EF, to the conduction band and screens the electric potential of an ionic charge. Fig. 1 shows the DOS of BaTiO3 for different electron doping concentrations n. A typical scale associated with screening is the screening length, λ, which depends on n. We estimate the screening length using the Thomas-Fermi model, according to which

λ = [ε / (e² D(EF))]^(1/2).

Here D(EF) is the DOS at EF and ε is the dielectric permittivity of undoped BaTiO3 not associated with the spontaneous polarization, which we assume to be ε ≈ 44ε0.15 Undoped BaTiO3 (n = 0) is an insulator so that D(EF) = 0 and hence λ is infinite. As n becomes larger, more conduction band states are populated (Fig. 1), thus increasing D(EF) and reducing the screening length. As seen from the inset in Fig. 1, when n is raised up to 0.2 e/u.c. λ decreases down to about 4 Å.

Next we study the effect of screening due to electron doping on the ferroelectric displacements in BaTiO3. Fig. 2a shows the calculated displacements between M and O (M = Ti, Ba) ions as a function of n.
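As a rough numerical cross-check of the quantities quoted above, the sketch below converts the critical doping of 0.11 e per unit cell into a volume concentration using the LDA lattice parameters given in the text, and evaluates the Thomas-Fermi screening length. The DOS value is not read off Fig. 1; it is an assumed, purely illustrative number chosen to land near the critical screening length discussed below.

import math

# Constants (SI)
e = 1.602176634e-19        # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
eV = 1.602176634e-19       # 1 eV in J

# Values quoted in the text
eps = 44.0 * eps0                       # background permittivity, eps ~ 44 eps0
a, c_over_a = 3.933e-10, 1.015          # LDA lattice parameters of BaTiO3
V_cell = a * a * (c_over_a * a)         # unit-cell volume, m^3

# Convert 0.11 e per unit cell to carriers per cm^3
n_uc = 0.11
n_cm3 = n_uc / (V_cell * 1e6)
print(f"0.11 e/u.c. ~ {n_cm3:.2e} cm^-3")      # ~1.8e21, near the 1.9e21 quoted earlier

# Thomas-Fermi screening length: lambda = sqrt(eps / (e^2 * D(E_F)))
# D(E_F) of ~0.6 states per eV per cell is an assumption for illustration only
D_EF = 0.6 / (eV * V_cell)                      # states per J per m^3
lam = math.sqrt(eps / (e * e * D_EF))
print(f"lambda_TF ~ {lam * 1e10:.1f} Angstrom")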
Surprisingly, we find that ferroelectric displacements hardly change with electron doping up to n as high as 0.05 e/u.c., and then decay very fast and vanish above the critical electron concentration nc = 0.11 e/u.c. The c/a ratio of BaTiO3 under the increasing n, as shown in Fig. 2b, also displays a similar critical behavior as that of the polar displacements. BaTiO3 transforms from the tetragonal phase with c/a = 1.015 to the cubic phase with c/a = 1.0 at nc = 0.11 e/u.c. The critical doping concentration nc found from first principles is consistent with the experimental result. According to the inset in Fig. 1 the critical electron concentration nc = 0.11 e/u.c. corresponds to a screening length λc ≈ 5 Å. Therefore, we conclude that only the short-range Coulomb forces with the interaction range comparable to the lattice constant are responsible for maintaining ferroelectric instability in BaTiO3. Since changes in hybridization with doping can also affect ferroelectric displacements, we calculate the occupation numbers Nd for the Ti-3d orbitals (Fig. 3).

The signature of the ferroelectric phase transition can also be seen from the softening of the phonon mode in the paraelectric phase when approaching the critical point, with the frequency becoming imaginary in the ferroelectric phase. To confirm the phase transition at the critical concentration we have performed phonon calculations within the density functional perturbation theory, as implemented in QUANTUM-ESPRESSO. In these calculations we consider cubic BaTiO3 with the lattice constant fully relaxed. Fig. 4 shows the lowest frequency of the triple degenerate phonon mode at the Γ point as a function of electron concentration n, along with the relative cation-anion displacements. We see that the frequency remains imaginary up to an electron concentration as high as 0.11 e/u.c. and becomes real above this critical concentration. This critical behavior of the ferroelectric instability is echoed by the cation-anion displacements in cubic BaTiO3.

To further understand the critical behavior of ferroelectricity due to screened Coulomb interactions, we have developed a physically realistic model explicitly including the screening effect. We consider a three-dimensional lattice of ions in the cubic perovskite structure. In the Thomas-Fermi approximation each ion is shrouded by an exponentially decaying screening charge density with screening length λ. The analytical form of the Coulomb interaction energy wij between two screened point charges qi and qj at locations ri and rj, respectively, is

wij = [qi qj / |ri − rj|] f(|ri − rj|, λ),    (1)

where f(|ri − rj|, λ) is the distance and screening length dependent coefficient, which reflects the effect of screening and converges to 1 as λ → ∞. The electrostatic energy per unit cell is given by a lattice sum over all interaction terms of the form (1):

W = (1/2) Σ_R Σ′_{i,j} wij(|ri − rj + R|),    (2)

where R runs over the lattice vectors and the primed sum excludes the i = j terms for R = 0.

The total energy of undoped BaTiO3 obtained by adding all the energies described above yields a typical potential with minima at two non-zero polarizations, as seen from the inset in Fig. 5. As the electron screening length λ begins to decrease with increasing doping, these minima drop in energy slowly in the beginning. When λ approaches the critical value of λc, the two wells become shallower quite rapidly. For λ < λc, the wells merge into a single well at P = 0, indicating a transition to the paraelectric phase. The critical value predicted by the model, λc ≈ 5.3 Å, is consistent with that obtained from the Thomas-Fermi estimate based on the DFT calculations. Fig.
5 shows M-O displacements versus the normalized screening length. It is seen that the critical behavior predicted by our model (solid lines) is in agreement with our DFT calculation (open symbols). Thus, our phenomenological model confirms the fact that only a short range portion of the Coulomb interaction is required to sustain ferroelectric displacements. The co-existence of the ferroelectric phase and conductivity is interesting for device applications because such a conducting bistable material has new functionalities. Although in such a material an external electric field induces a flow of electric current which makes switching of the ferroelectric displacements difficult, resistive materials may sustain the coercive voltage. For example, ferroelectric tunnel junctions are switchable despite the current flowing across them. 17 Also, there exist means to switch ferroelectrics with no applied voltage which may be used in such devices. 18 In conclusion, using first-principles calculations and a phenomenological model we have demonstrated that ferroelectric displacements are well preserved in doped BaTiO 3 until the doping concentration exceeds a critical value of n c = 0.11e/u.c. This critical behavior is due to the electron screening of the Coulomb interactions responsible for the ferroelectric instability. The critical screening length is found to be surprisingly small, about 5Å, demonstrating that "short-range" Coulomb interactions are sufficient to lead to collective ferroelectric displacements. This value may be considered as a qualitative estimate for a lower limit for the critical size of BaTiO 3 of a few unit cells for the existence of ferroelectricity. Our results provide a new insight into the origin of ferroelectricity in displacive ferroelectrics and open opportunities for using doped ferroelectrics in novel electronic devices. The authors are thankful to David Vanderbilt and Philippe Ghosez for helpful discussions. 2 Given a screened point charge q j at r j the work required to bring in another screened point charge q i from infinity to r i is Rewriting this integral in terms of the Fourier expressions we obtain Therefore, the interaction energy between screened ions i and j separated by distance which converges to the bare Coulomb potential as λ → ∞ . Evaluation of total electrostatic energy The electrostatic energy per unit-cell required to construct the crystal is given by a lattice sum over all interaction terms of the form (8): ( ) Here R = a (m x , m y , m z ), are the lattice vectors with the m running over all integers. The ′ on the summation over i, j in (9) indicates that for the R = 0 terms, i = j should be excluded to avoid self-interactions and the factor of ½ takes care of double counting. For large λ, evaluating (9) via "brute force" summation in real space by truncating those terms with |R| > R max is untenable. In the spirit of an Ewald sum, we break up w(d) into two terms: a long range term, w L (d), which is amenable to summation over a reasonably small number of Fourier terms, and a short range term, w S (d), which dies off quickly in real space and therefore is amenable to a reasonably small R max , e.g. encompassing only one or two unit-cells. Explicitly, the Fourier transform of w(d) in (8) is given by The short range contribution to w(d) comes from Fourier terms with large k. 
Indeed for large k, Here η is an as-yet-to-be-determined scaling factor which gives us another degree of freedom to optimally localize the short-range term (more details below) and σ is a Gaussian broadening factor roughly corresponding to an effective length of the short-range interaction, which needs to be chosen judiciously to minimize the error between the true expression for w(d) and the approximate w S (d) + w L (d). Fourier transforming (11) The short-range contribution w S (d) is obtained straightforwardly: Using (12), the leading order terms of w s as d tends toward infinity we obtain Now we return to (9) and approximate it in terms of the long and short range Ewald Since we have removed the singularity at d = 0 from w L (d) we can rewrite W L without the ′ by subtracting away the terms for i = j when R = 0 which sum to give rise to the self-interaction term ( ) and G are the reciprocal lattice vectors: G = (2π/a)(n x , n y , n z ), where the n runs over all integers up to a maximum cut-off of N max . By matching the approximate electrostatic energy W′ to the true electrostatic energy W, which can be calculated via brute force for a few representative structures and screening lengths, we find a maximum error less than 0.1meV for N max = 7 and σ = 0.6 Å.
2017-07-31T18:35:19.627Z
2012-08-29T00:00:00.000
{ "year": 2012, "sha1": "2357db448adaaaf16930469c689a3ddcaa051372", "oa_license": "publisher-specific, author manuscript", "oa_url": "https://link.aps.org/accepted/10.1103/PhysRevLett.109.247601", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "ee0c9ebd774c8e66da7175460f91262e5d2be061", "s2fieldsofstudy": [ "Physics", "Materials Science" ], "extfieldsofstudy": [ "Materials Science", "Medicine", "Physics" ] }
80431009
pes2o/s2orc
v3-fos-license
Factors associated with delivering premature and/or low birth weight infants among pregnant HIV-positive women on antiretroviral treatment at Dr George Mukhari Hospital, South Africa Background: Prematurity and low birthweight (LBW) deliveries amongst pregnant women infected with the human immunodeficiency virus (HIV) remain a challenge worldwide. The association between prematurity, LBW and antiretroviral therapy (ART) or prophylactic antiretroviral drug (ARV) exposure in pregnancy is unclear. This study evaluates the risk of delivering a premature and/or LBW infant among HIV-positive pregnant women on ART or prophylactic ARV. Methods: A cross-sectional study was conducted (April to October 2012). HIV-positive women on prophylactic ARV (dual therapy) or lifelong ART (triple therapy or HAART) were enrolled in the study. Women who did not have a documented HIV result during pregnancy, those tested before delivery and those found to be HIV-positive were considered as not exposed to ARV drugs during pregnancy. This group received a standard dose of nevirapine during labour. The control group was made up of HIV-negative women. Results: Of the 496 mothers enrolled in the study, 59% (288/496) were HIV-positive, of whom 72% (206/288) were on ART or prophylactic ARV. The mean age was 27.6 ± 6.5 years (15 to 47 years). The mean gestational age (GA) was 35.9 ± 3.6 weeks (24– 42 weeks). Infants’ birthweights ranged from 550 to 4 900 g (2.5 ± 0.9 kg). HIV-positive mothers not on ART or ARV prophylaxis were likely to deliver an infant at GA < 28 weeks (p < 0.05) or birthweight < 1 000 g (p < 0.05) compared with their counterparts. Conclusion: HIV-positive pregnant women not on ART or ARV prophylaxis were at a risk of delivering babies at GA < 28 weeks or birthweight < 1 000 g. There is a need to encourage early and regular attendance for antenatal care so that HIV-positive pregnant women can be identified and have access to treatment during pregnancy. Introduction An estimated 36.7 million people were living with the human immunodeficiency virus (HIV) worldwide in 2015, of which most (68%) lived in sub-Saharan Africa. 1 In South Africa (SA), the national surveillance of HIV and syphilis infection found an HIV prevalence of 29.7% amongst pregnant women in 2013. 2 The use of antiretroviral therapy (ART) and antiretroviral (ARV) prophylaxis for the reduction of mother-to-child transmission of HIV has been a global health strategy since 2000; however, studies have provided inconsistent findings regarding the association between premature births, low birthweight (LBW) and ARV drug exposure. There is clear evidence of the benefits of ARV drug regimens given to pregnant women for both the mother and infant; [3][4][5][6] however, studies in developed countries have found an increased rate of advanced maternal age, LBW and premature birth in HIVpositive women on ART when compared with HIV-negative women. 7,8 A meta-analysis of 10 studies revealed that the use of protease inhibitor (PI)-based ART drugs during pregnancy significantly increases risk of preterm birth. 9 In contrast, a study conducted in SA found that ART drug exposure reduces very LBW and premature delivery rates. 10 However, Xiao and coauthors, in their study, reported that ART drug usage did not have a significant impact on the weight of the infant. 11 Given the discrepant findings, there is little or no information on the risk of LBW and premature birth in HIV-positive pregnant women on ART or ARV prophylaxis at Dr George Mukhari Hospital. 
Therefore, this study aimed to evaluate the risk of delivering a premature or LBW infant amongst HIV-positive pregnant women on ART and ARV prophylaxis in our setting. Study design An observational cross-sectional study was conducted at Dr George Mukhari Hospital (DGMH) for a period of seven months from April to October 2012. Study settings and population The hospital is a tertiary academic hospital of the Sefako Makgatho Health Science University previously known as the University of Limpopo (Medunsa Campus). It is located in the north of Pretoria, the capital city of South Africa. Pregnant women with complications are referred to the DGMC from surrounding primary healthcare clinics and a secondary hospital in the area. The hospital services a large community of mostly black African individuals. The data for the study were collected from the postnatal and neonatal wards of this hospital. Mothers with non-complicated vaginal and Caesarean section (CS) deliveries are sent to the postnatal ward with their babies if they are term or preterm with a weight of more than 1 800 g. Preterm babies less than 1 800 g and sick babies are admitted to the neonatal ward. The study included all mother and baby pairs from both the postnatal ward and the neonatal ward. Sampling technique and sample size A minimum sample size of 322 was calculated based on the 95% confidence level, 5% sampling error and prevalence of HIV amongst pregnant women of 30%. 2 A consecutive sample of pregnant women who consented to participate in the study was selected every day (Monday to Friday). Exclusion criteria Pregnant women with a chronic medical condition, such as cardiac disease, epilepsy, renal disease, liver failure, psychiatric condition and diabetes mellitus on treatment before pregnancy, were excluded. We also excluded multiple pregnancies because of the high risk of a premature birth. Infants weighing less than 500 g as per WHO definition of viability or less than 24 weeks gestational age (GA) were also excluded. Data-collection methods Data were collected from mothers' files in the postnatal ward, the antenatal card and from a questionnaire answered verbally by the mothers. The following information was obtained: age, HIV status, CD4 cell count and mode of delivery. The infant's weight was measured with a mechanical infant scale model RGZ-20 and the gestational age was calculated using the Ballard score. Prematurity was defined as delivery before 37 completed weeks by vaginal delivery or Caesarean section. A kit for a rapid HIV Elisa test (HIV test first response HIV card test 1-2, Kachigan, Daman, India and Pareekshack HIV Triline card test Bangalore, Karataka, India) was used by nursing staff to test the HIV status of mothers who did test for HIV during pregnancy. The HIV-positive women were then divided into two groups, those who were HIV-positive on ART (triple therapy during pregnancy) or ARV prophylaxis and those who were HIV-positive but not on ART or ARV prophylaxis. The HIV-positive not on ART group was made up of women who tested HIV-positive in the labour ward and received only a single dose of nevirapine (sdNVP) during labour, three-hourly zidovudine (AZT) until delivery and then tenofovir (TDF)/emtracitabine (FTC) postdelivery. Those mothers on dual therapy were on daily AZT during pregnancy and received the sdNVP during labour plus three-hourly AZT during labour and TDF/FTC post-delivery. There were no women on a PI-based HAART (triple therapy). 
They were all on efavirenz (EFV) or NVP-based ART as per the 2010 National South African PMTCT guidelines, which were applied by all the surrounding clinics and at the DGMH during the study period. The CD4 cell count was done by the National Health Laboratory System (NHLS) laboratory using a Beckman Coulter (Fullerton, CA) Epics XL MCL cytometer and Beckman Coulter TQ PREP. Data analysis Data were captured by Microsoft Excel® (Microsoft Corp, Redmond, WA, USA) and exported to SPSS® version 20 software (IBM Corp, Armonk, NY, USA) for statistical analysis. The data obtained were analysed using Student's t-test, analysis of variance (ANOVA) as appropriate and the chi-square or Fisher's exact test. Post hoc analysis was performed by using Bonferroni multiple-comparison tests. Statistical significance was considered at p < 0.05. Ethical considerations Ethics approval was obtained from the School Research Committee and from Sefako Makgatho Health Science Research Ethics Committee. All participants were asked to sign informed consent before participating in the study. Confidentiality and anonymity of the data was ensured by group data analysis without any personal identifiers. The study also obtained full permission from the Head of the Department of Obstetrics and Gynaecology at DGMH. Results A total of 505 of mother and infant pairs were screened during the period of the study. Some 2% (9/505) of mothers and their infants were excluded from the study because of a 22% (2/9) chronic medical condition and 78% (7/9) had twin infants. Of the 496 remaining mothers, 41% (208/496) tested HIV-negative, while 59% (288/496) were HIV-positive. Of the HIV-positive mothers, 72% (206/288) were on ART, while 18% (82/288) were not on ART. The ages of the mothers ranged from 15 to 47 years with a mean of 27.6 ± 6.5 years. The mean GA was 35.9 ± 3.6 weeks (range 24 to 42 weeks). The infants' birthweights ranged from 550 g to 4 900 g, with a mean birth weight of 2 500 g ± 900 g. Table 1 shows the relationship between maternal HIV status and mother's age, mode of delivery, birthweight and GA. The mean age of HIV-positive mothers on ART or ARV prophylaxis was significantly higher than the control group (p < 0.05). HIV-negative mothers significantly delivered by CS compared with the other groups (p > 0.05). The risk of birthweight < 1 000 g was higher (12%) amongst HIV-positive mothers not on ART or ARV prophylaxis compared with HIV-positive mothers on ART or ARV prophylaxis (4%) and (4%) the control group (p < 0.05). However, there was no significant difference in infants weighing ≥ 1 000 g among the three groups (p > 0.05). A significantly greater proportion (7%) of mothers not on ART or ARV prophylaxis delivered babies at GA < 28 weeks, compared with mothers on ART or ARV prophylaxis (1%) and (1%) the control group (p < 0.05). As seen in Table 2, there was no significant relationship between mode of delivery and mother's age and the gestational age of the infant (p > 0.05). Infants delivered by CS had significantly higher mean birthweight than those delivered vaginally (2.68 ± 0.89 vs 2.47 ± 0.91, p < 0.05). Few (n = 86) of the HIVpositive mothers had a CD4 count cell recorded due to the NHLS change of system to Lab Track, which made it impossible to make a meaningful comparison. Discussion From our study, 72% of the HIV-positive pregnant mothers were on ART, which was comparable to the rate of 72% found in Kenya 12 and 82% found in Ghana. 
13 The report by UNAIDS showed that, in 2013, only 68% of HIV-positive pregnant women in sub-Saharan Africa received ART. 14 Evidently, early initiation of ART or ARV prophylaxis during pregnancy has significant clinical benefits. 4,5 However, these studies show that the uptake of ART among HIV-positive pregnant women in sub-Saharan Africa remains a challenge and is associated with a shortage of healthcare workers, the poor attitudes of healthcare workers, transport costs and long waiting times. 15 In the present study, HIV-positive mothers on ART or ARV prophylaxis and HIVnegative mothers had a lower risk of premature birth compared with HIV-positive mothers not on ART or ARV prophylaxis. Overall, nearly half (41%) of the HIV-positive pregnant women in our study delivered LBW babies, which is slightly higher than the rate of LBW babies (34%) observed in India. 16 Interestingly, Xiao and co-workers, in their meta-analysis, found that ARV drug usage did not significantly change the association of maternal HIV exposure to ARV drugs with LBW. 11 In contrast, one study in South Africa found that HIV-positive mothers not on ART or ARV prophylaxis delivered significantly LBW infants compared with the HIV-positive mothers on ART or ARV prophylaxis. 17 It is worth noting that, in our study, HIV-positive women not on ART or ARV prophylaxis significantly delivered babies with birthweight < 1 000 g. The findings in our study could be due to the fact that the HIV-positive mothers not on ART might have had a lower CD4 cell count and higher HIV RNA viral load, which might have contributed to extremely LBW (< 1 000 g) babies. 18 In our study, 35% of the HIV-positive pregnant women delivered preterm babies, which is higher than the 11% reported in Nigeria 19,20 and 25% in India. 16 The reason for the high premature birth rate in our study is unclear, but it could be related to young age, low education levels, no or low pregnancy weight gain and HIV disease stage 2 or more. 21 Several studies have shown an association between prematurity, low birthweight and CD4 cell count. 17,18 In our study, few (n = 86) HIV-positive pregnant women had their CD4 cell count documented, which made it impossible to make a meaningful comparison. With regard to the association between CS and maternal HIV status, our findings show a significantly greater proportion of HIV-negative pregnant women delivered by CS compared with HIV-positive pregnant women on ART or those not on ART. 22 The obstetric reasons for lower CS rate amongst HIV-positive pregnant women in our study were not documented. However, the 2010 South African National Guidelines on HIV Treatment did not recommend an elective Caesarean for the prevention of mother-to-child transmission (PMTCT). 23 We found no significant difference in the mean age of HIV-positive mothers on ART or ARV prophylaxis and those not on treatment, which is in agreement with previous studies. 7 Study limitations The study did not assess other risk factors associated with preterm delivery, such as socio-economic factors, mother's height, pregnancy-induced hypertension and sexually transmitted disease. The CD4 cell count and the viral load data were missing for most of the HIV-positive women. Few of the HIV-positive mothers had their CD4 count documented, as the NHLS changed its previous electronic results program (DISALAB) to a better computerised program, Hospital Statistics. Because of this changeover, most of the results were missing. 
We included some of the women on ARV prophylaxis and their babies as exposed to ARV drugs. Lastly, ART drugs were not documented in the admission maternity files, even for those on ARV prophylaxis. Conclusion The findings of this study show that HIV-positive pregnant women not on ART or ARV prophylaxis were at a high risk of delivering babies at GA < 28 weeks with birthweight < 1 000 g. The first PROMISE study, conducted in 2013, showed a significantly lower early rate of mother-to-child HIV transmission amongst women on lifelong ART/triple therapy (EFV and NVP based) compared with those receiving ARV prophylaxis. 24 Therefore, since 2013, the World Health Organization (WHO) PMTCT guidelines do not recommend ARV prophylaxis for PMTCT, as was the case in the 2010 WHO PMTCT guidelines. There is a need to encourage early and regular attendance at antenatal care sessions so that HIV-positive pregnant women can be identified and be given access to early treatment and support during pregnancy. Disclosure statement -No potential conflict of interest was reported by the authors. Acknowledgement -The authors would like to thank the women and infants who participated in this study. The authors also thank the nurses and doctors and the laboratory technicians at DGMH.
2019-03-17T13:08:08.145Z
2018-06-29T00:00:00.000
{ "year": 2018, "sha1": "94a9d009f70c00dd064f6d865e23ebf206465211", "oa_license": "CCBYNC", "oa_url": "https://sajid.co.za/index.php/sajid/article/download/18/14", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "1713f06b75ceb6e211ee75e05992b91b52866154", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
231910951
pes2o/s2orc
v3-fos-license
Charge Density Analysis of Actinide Compounds from the Quantum Theory of Atoms in Molecules and Crystals The nature of chemical bonding in actinide compounds (molecular complexes and materials) remains elusive in many respects. A thorough analysis of their electron charge distribution can prove decisive in elucidating bonding trends and oxidation states along the series. However, the accurate determination and robust analysis of the charge density of actinide compounds pose several challenges from both experimental and theoretical perspectives. Significant advances have recently been made on the experimental reconstruction and topological analysis of the charge density of actinide materials [Gianopoulos et al. IUCrJ, 2019, 6, 895]. Here, we discuss complementary advances on the theoretical side, which allow for the accurate determination of the charge density of actinide materials from quantum-mechanical simulations in the bulk. In particular, the extension of the Topond software implementing Bader’s quantum theory of atoms in molecules and crystals (QTAIMAC) to f- and g-type basis functions is introduced, which allows for an effective study of lanthanides and actinides in the bulk and in vacuo, on the same grounds. Chemical bonding of the tetraphenyl phosphate uranium hexafluoride cocrystal [PPh4+][UF6–] is investigated, whose experimental charge density is available for comparison. Crystal packing effects on the charge density and chemical bonding are quantified and discussed. The methodology presented here allows reproducing all subtle features of the topology of the Laplacian of the experimental charge density. Such a remarkable qualitative and quantitative agreement represents a strong mutual validation of both approaches—experimental and computational—for charge density analysis of actinide compounds. C hemical bonding in actinide compounds is a complex and fascinating phenomenon, yet to be fully rationalized, with both fundamental and technological implications. Strong relativistic effects, strong electron correlation, and weak crystal fields contribute to the identification of a broad active valence manifold constituted by the 5f, 6p, 6d, and 7s orbital shells, whose degree of participation in the formation of chemical bonds varies as a function of several factors and along the actinide series. 1−4 In particular, the 5f electrons are known to participate in bonding from thorium up to plutonium and then to abruptly become less involved from americium on. 5,6 An intriguing, much investigated, but still elusive, aspect of actinide chemistry is the occurrence and degree of covalency of 5f electrons in the chemical bonding. 1,7−9 Beside such fundamental aspects, a detailed understanding of chemical bonding in actinide compounds is also relevant to technological applications in the nuclear power industry. In energy production from nuclear fission, the effectiveness of the separation process of uranium from lanthanides and other minor actinides depends on their relative bond strength. 
10,11 A variety of techniques can be used to characterize chemical bonding in actinide compounds, both experimentally (photoelectron, Mossbauer, and X-ray absorption spectroscop-ies; 9,12−14 nuclear magnetic resonance; 15 resonant inelastic X-ray scattering; 6 and others) and theoretically (energy decomposition analysis; 16,17 molecular orbital population and bond order analyses; 18−20 Hirshfeld, Voronoi deformation density, natural bond orbital, and electron localization function analyses; 21−23 and others). The performance of different theoretical approaches has been recently reviewed. 24−26 Arguably, the most general, formally rigorous technique allowing for a consistent and quantitative description of multiple aspects of chemical bonding is represented by Bader's quantum theory of atoms in molecules and crystals (QTAIMAC). 27,28 At the core of this methodology is the topology of the electron density, and therefore, it can in principle be adopted both experimentally and theoretically, thus allowing for a mutual validation of the two approaches. Despite a broad consensus on its ability to describe subtle features of the chemical bonding, only very recently could the QTAIMAC be successfully applied to actinide compounds because of the many experimental and theoretical challenges related to an accurate determination of their charge density. Pioneering synchrotron X-ray diffraction measurements on actinide materials with the experimental reconstruction of the electron density date back to the late 1990s. 29,30 Pinkerton and co-workers have recently reported significant advances in the experimental reconstruction of the charge density of actinide compounds from X-ray diffraction by means of improvements in (i) data collection and reduction strategy and (ii) flexibility of the Hansen−Coppens multipolar formalism. 31−33 Their improved protocol allowed for the reconstruction of the charge density (and its topological analysis via the QTAIMAC) of the tetraphenyl phosphate uranium hexafluoride cocrystal [PPh 4 + ]-[UF 6 − ]. 31 The accuracy of such an experimental procedure can be evaluated from a comparison with the outcomes of quantum-mechanical simulations. However, the accurate description of the charge density of actinide compounds is challenging also from a theoretical perspective as one needs to (i) account for relativistic effects, (ii) consider strong electron correlation, (iii) describe the correct localization/delocalization of 5f and 6d orbitals, and (iv) provide enough variational freedom through a rich and angularly flexible basis set. Recently, the QTAIMAC started being applied to the quantum-mechanical study of chemical bonding in molecular actinide complexes. 34 − ] crystal, performed a QTAIMAC study, and compared their theoretical results with those from the experiment on the crystals. While an overall agreement between the molecular calculations and the experiments on the crystal was observed for some features of the chemical bonding, some significant quantitative, and even qualitative, discrepancies remained, which require further analysis. In particular, the different topology of the Laplacian of the density around the uranium atom from theory and experiment prevented a full validation of the experimental procedure. The discrepancies were tentatively attributed to missing crystal field effects on the molecular calculations and to the shape of the effective-core pseudopotentials used in the calculations. 
The former of such effects (i.e., that of the environment on chemical bonding features of actinide complexes) has been the subject of a recent investigation by Wellington and coauthors where, by treating intermolecular interactions with different approaches, it was concluded that it is minor and could not explain the large reported differences between molecular calculations and experiments on the description of U−O bonds in Cs 2 UO 2 Cl 4 , for instance. 42 In this Letter, we report on both formal and software advances that allowed us to set up a robust computational strategy for the accurate investigation of chemical bonding on both actinide complexes and actinide materials through the QTAIMAC. We have applied our newly developed methodology to the study of chemical bonding on both UF 6 molecular fragments (both symmetric and distorted, both neutral and charged) and [PPh 4 + ][UF 6 − ] crystals. This analysis makes it possible to decouple crystal field effects from intramolecular features of chemical bonding. In particular, the increase of the anisotropy of the charge density distribution, due to the crystal field, around the two sets of nonequivalent fluorine atoms (four equatorial and two apical) bound to the uranium center could be quantified. Crucially, our method describes topological features of the Laplacian of the density around the uranium atom in remarkable agreement with experiment, which strongly validates both approaches. In our methodology, both molecular and crystalline orbitals are expressed as linear combinations of atomic orbitals (LCAO), which is a suitable representation when chemical features of bonding are to be analyzed. Quantum-mechanical calculations are performed with a developmental version of the CRYSTAL program, 43,44 where the LCAO approach has recently been extended to g-type basis functions. 45,46 Scalar relativistic effects must be accounted for 39−41,47 and here are described by use of small-core effective pseudopotentials (with 60 electrons in the core for U). 48,49 While the program has recently been extended to the treatment of spin−orbit coupling, 50−53 this relativistic effect is disregarded here. This is because, while making the calculations significantly more demanding, it has been previously shown to induce very minor changes to chemical bonding. 54 The topological analysis of the electron density ρ(r) and of its Laplacian ∇ 2 ρ(r) is performed with a developmental version of the TOPOND program 28,55,56 that was previously parallelized 57 and that we have here generalized to work in terms of f-and g-type basis functions, thus allowing for a QTAIMAC analysis of lanthanides and actinides. Crystals of [PPh 4 + ][UF 6 − ] belong to the tetragonal I4̅ space group; its UF 6 − molecular subunits are distorted with four equivalent equatorial fluorine atoms and two slightly more elongated apical fluorine atoms (see Figure 1). This species, fully embedded in the crystal lattice, is here labeled cry-UF 6 − (these are calculations performed on the actual periodic structure of the crystal, thus including all PPh 4 + molecules). We have also studied the properties of the distorted, asymmetrical, unit as extracted from the crystal and treated instead as an isolated molecular fragment (a-UF 6 − ). Calculations have also been performed on a symmetric model of the UF 6 molecule, both neutral and charged (s-UF 6 and s-UF 6 − ). All structural models have been fully relaxed through geometry optimizations. 
Experimental geometries have also been used for a more direct comparison with experiments. All results presented in the main body of this Letter are obtained with the − ] tetragonal crystal (view down the c crystallographic axis). The UF 6 molecular fragments in the crystal are distorted with four equatorial fluorine atoms, F e , and two slightly more elongated apical fluorine atoms, F a . The Journal of Physical Chemistry Letters pubs.acs.org/JPCL Letter hybrid B3LYP exchange−correlation functional of the density functional theory (DFT) and basis set BSA (fully uncontracted for the U atom) described in the Supporting Information. Our analysis of chemical bonding starts from the inspection of the orbital shell populations and oxidation state of U in the four systems here considered (three molecules and one crystal), as reported in Table 1. The 32 outermost valence electrons of U are explicitly treated in the calculations (atomic electronic configuration: 5s 2 5p 6 5d 10 5f 3 6s 2 6p 6 6d 1 7s 2 ). Atomic charges are computed from a simple Mulliken approach as well as from QTAIMAC. While Mulliken atomic charges are systematically smaller than Bader ones, trends along the series of four systems are quite consistent in the two cases. According to the QTAIMAC, the atomic charges of U and F are of +3.48 and −0.58 in s-UF 6 . Orbital shell populations reveal that the 7s 2 electrons of U are transferred to the 2p orbitals of F, along with one of the three f electrons in 5f 3 . The populations of d-type orbitals appear to be less affected by bonding but show a clear trend from the neutral to charged species. Table 1 also shows how g-type functions (unpopulated on the isolated U atom) are partially involved in the description of the U−F bonds, with a population of 0.02 electrons. In this respect, we stress that by working in terms of spherical and not Cartesian functions, our g-type functions are not contaminated by s-type character. Passing from the neutral species (s-UF 6 ) to the anion (s-UF 6 − ), the positive charge of U decreases to +3.22 and the negative charge of F becomes −0.70. This shows that about 70% of the extra electron is hosted by 2p orbitals of the F atoms and less than 30% by the central U atom. In particular, in the charged species, the 5f 3 orbitals of U get less depopulated while 6d 1 orbitals get significantly more depopulated. The distortion of the charged species induced by the crystal field, with the formation of two more elongated apical U−F a bonds and four shorter U−F e equatorial bonds, produces an overall decrease in the absolute value of the atomic charges of U and F, thus suggesting a lower ionicity and a larger degree of covalency of the bonds. This is already seen in passing from s-UF 6 − to a-UF 6 − and becomes even more pronounced when the effect of intermolecular interactions on the electron distribution of the molecule are explicitly taken into account in the crystalline environment (cry-UF 6 − ). We will get back to this point later when various bond type descriptors from the QTAIMAC will be presented and discussed. The crystalline environment of the uranium hexafluoride species in [PPh 4 + ][UF 6 − ] induces its geometrical frustration from a symmetric octahedron to a distorted one with two symmetry-independent sets of fluorine atoms (two apical F a and four equatorial F e ), which is also reflected in its electronic structure. This structural distortion is larger in the experimental than in the optimized theoretical structure. 
Figure 2 reports the atomic charges of F atoms, as obtained from QTAIMAC by numerical integration of the electron density over the corresponding atomic basins, for the three ionic species here considered (s-UF 6 − , a-UF 6 − , and cry-UF 6 − ). In the symmetric species, the atomic charge of the six equivalent F atoms is −0.703. In the distorted molecular fragment (as extracted from the crystal) we observe a splitting of the atomic charges of the F atoms, with a larger charge in the apical atoms and a lower charge in the equatorial ones. This trend is confirmed when going further from the molecular fragment to the actual crystal, with an enhancement of the splitting. In particular, crystal field effects are such to increase the charge of the two apical F atoms, which is consistent with the experimental evidence of a larger deformation density on the apical atoms. 31 Inspection of Figure 2 thus suggests a higher ionicity (i.e., lower covalency) in the two apical U−F a bonds than in the four equatorial U−F e bonds. This evidence will further be corroborated below by the analysis of various bond descriptors from the QTAIMAC and, moreover, will prove crucial in the assessment of the reliability of different models used in the reconstruction of the experimental density. Let us now analyze the chemical bonding of the UF 6 ] crystal more closely. We start by performing a topological analysis of the electron density ρ(r), which allows us to find and characterize bond critical points along the U−F a and U−F e bonds. Table 2 reports several bond descriptors evaluated at the bond critical points from the QTAIMAC. Computed values for a-UF 6 − and cry-UF 6 − are reported and compared with experimental values obtained from two different models in ref 31 (referred to as models 1b and 1c). The overall agreement between computed and experimental values is remarkable, both clearly confirming the The Journal of Physical Chemistry Letters pubs.acs.org/JPCL Letter mixed ionic/covalent nature of the U−F bonds based on the various descriptors with ∇ 2 ρ > 0, H < 0, 1 < |V|/G < 2, and small and negative H/ρ at the bond critical points. From a quantitative point of view, the agreement is particularly impressive on ρ and |V|/G. Indeed, the computed values for the electron density at the bond critical points fall between the two values obtained experimentally from the two models: ρ 1b exp < ρ calc < ρ 1c exp , with deviations never exceeding 4% and often below 1%. The |V|/G ratio at the critical points is about 1.3 in all cases, with small deviations between theory and experiment. Let us now address a subtle (and critical) aspect of the chemical bonding of the system, that is, the difference in bonding of U−F a and U−F e . Comparison of a-UF 6 − and cry-UF 6 − results in Table 2 shows how the electron density at the bond critical point is significantly affected by the intermolecular interactions. In particular, the difference Δρ between the apical and equatorial bonds increases almost by a factor of 2 in passing from a-UF 6 − to cry-UF 6 − . Inspection of the computed bond descriptors confirms the larger covalent character of the equatorial bonds that are indeed characterized by a shorter bond length, larger value of the density, larger value of |V|/G, and a more negative value of the bond degree H/ρ. 
Comparison with the experiment is much more critical because, on this subtle aspect, the two models 1b and 1c are in qualitative disagreement, with model 1b describing equatorial bonds slightly more covalent than apical ones (matching the theoretical predictions) but model 1c describing apical bonds as more covalent than equatorial ones. On the one hand, model 1c allowed for a more stable refinement; 31 on the other hand, the shorter equatorial bonds in the structure would seem consistent with their higher degree of covalency as described by model 1b and by present quantum-mechanical calculations. We now analyze the topology of the Laplacian of the density ∇ 2 ρ(r), which provides additional information on the spatial distribution of the electrons and in particular on the asphericity of (bonded) atoms. 58 Critical points of the Laplacian correspond to charge concentrations and depletions in the core and valence shells. Valence shell charge concentrations (VSCCs) are particularly relevant to the rationalization of chemical bonding and can be analyzed in terms of critical − ] crystal predicted a qualitatively similar spatial distribution of the VSCCs around the U atom as in d 6 transition metals (see panels in the last column of Figure 3c). However, the topology of the Laplacian derived by the experimental density of the crystal is significantly different, with both quantitative and qualitative discrepancies with respect to those first calculations, as shown in Figure 3. A total of 14 VSCCs were reported around the U atom: (i) 8 critical points arranged at the vertices of a cube with the edges slightly tilted off the U−F axes (red spheres in the figure); (ii) 4 critical points forming a square in the equatorial plane, with vertices slightly tilted off the bisector of the Fe−Û−Fe angle (yellow spheres in the figure); (iii) 2 critical points along the U−F a axes (yellow spheres in the figure). Experimentally, all 14 VSCCs are at a distance of about 0.38 Å from U while in the previous calculations the 8 VSCCs were found at about 0.85 Å, which seems inconsistent with the radial distribution of the valence of U, as discussed below. Getting rid of such large discrepancies in the description of the topology of the Laplacian of the [PPh 4 + ][UF 6 − ] crystal around the U atom is therefore compelling to assess the accuracy of the experimental procedure as well as that of any theoretical approach in the description of the electron density of actinide compounds. The first panels in Figure 3c show the VSCC (3, +3) critical points of the Laplacian as obtained from present quantum-mechanical calculations on both the a-UF 6 − and cry-UF 6 − systems. Inspection of the figure suggests that the agreement with the experimental spatial distribution of the Laplacian is recovered to a large extent. Present calculations are indeed able to confirm the whole set of 14 critical points found in the experiments. The predicted radial distance of the (3, +3) critical points of the Laplacian is of 0.30 Å and coincides with the minimum of the VSCC of the principal quantum number 6 (the VSCC for n = 7 is not visible neither in the isolated U atom Laplacian profile, nor in the UF 6 compound, as the negative Laplacian due to n = 7 orbital components is overcompensated by positive Laplacian contributions due to the innermost shells). As a consequence, only a VSCD (valence shell charge depletion) is visible after the n = 6 VSCC (see Figure 3 b). 
Furthermore, according to present calculations, the 14 critical points can be grouped into two independent sets with slightly different properties: 8 critical points arranged at the vertices of a cube (red spheres in the figure) and 6 critical points arranged at the vertices of an octahedron (yellow spheres in the figure). The only difference with respect to the experiment consists in the red cube and yellow octahedron not being tilted off the U−F bonds, which, however, seems consistent with the symmetry of the system. The spatial distribution of the two sets of VSCCs around the U atom can be rationalized in terms of the hybridization of the valence atomic orbitals. It has recently been shown that a sp 3 d 2 hybridization leads to a octahedral 6-fold coordination and a sp 3 d 3 f hybridization leads to a cubic 8-fold coordination. 61 In conclusion, we have extended the QTAIMAC implementation in the TOPOND package to f-and g-type basis functions. This now makes it possible to analyze the electron density of materials containing lanthanides and actinides, as obtained from LCAO quantum-mechanical calculations. Application of this methodology to the rationalization of chemical bonding in [PPh 4 + ][UF 6 − ] crystals nicely shows the potential of the approach. In particular, some previously reported discrepancies between experimental and theoretical features of the topology of the density are reconsidered and largely removed, which proves significant in the mutual validation of the experimental and theoretical route to the accurate description and analysis of the electron density of actinide compounds and materials.
2021-02-14T06:16:17.248Z
2021-02-12T00:00:00.000
{ "year": 2021, "sha1": "7c27bddc8e1b7a340bedcf8c68c3f5b14b317dfd", "oa_license": "CCBY", "oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acs.jpclett.1c00100", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "4dc6581298aa0bb696e0ca70e88175789e1f6fa2", "s2fieldsofstudy": [ "Chemistry", "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
36796567
pes2o/s2orc
v3-fos-license
Comment on ‘standards on restoratives’ Sir,I am writing this letter after going through dozens of research articles written and published on the issue of distilled water conditioning and artificial saliva conditioning of dental restoratives containing polymers (at least as a binder). Many important mechanical properties are evaluated after the so called conditioning, like wear, fracture toughness, tensile strength, compressive strength, stiffness and physico-chemical properties to disqualify restoratives, which do not conform to prescriptions and expectations.In my opinion, some of the investigators neither understand the importance of distilled water conditioning and artificial saliva conditioning nor are they knowledgeable about the implications and correctness of the property evaluations which follow. Polymers take several days to reach a steady state of equilibrium in water or saliva absorption. Any measurement of the qualifying mechanical properties will only be transient if these restoratives are conditioned for a pre-determined duration of just 24 hours or 72 hours and evaluated. Comment on 'standards on restoratives' Sir, I am writing this letter after going through dozens of research articles written and published on the issue of distilled water conditioning and artificial saliva conditioning of dental restoratives containing polymers (at least as a binder).Many important mechanical properties are evaluated after the so called conditioning, like wear, fracture toughness, tensile strength, compressive strength, stiffness and physico-chemical properties to disqualify restoratives, which do not conform to prescriptions and expectations. In my opinion, some of the investigators neither understand the importance of distilled water conditioning and artificial saliva conditioning nor are they knowledgeable about the implications and correctness of the property evaluations which follow.Polymers take several days to reach a steady state of equilibrium in water or saliva absorption.][4][5] The aspect of water or saliva conditioning should be more clearly understood and transient conditioning followed by testing should not be considered as the numero uno criteria in the design of standard methods for these restoratives, veneers and crowns.The point is that many of these 'transient' investigations are not followed up with research and publications on steady state conditioning and the implications on testing for qualification.Though it cannot be denied that a quick conditioning followed by mechanical testing provides some transient details, they cannot be relied upon for veracity on long term mechanical behaviour.Thus, it becomes important to follow up these investigations with those on long term conditioning. [8] In one investigation the researchers have used distilled water to identify tribochemical reactions in a restorative that was conditioned for less than 72 hours. [5]s ionomers are present in the restoratives, deionized water should have been used in a closed environment and conditioning carried out to saturation before looking for tribochemical reactions.The results of such transient investigations, thus, cannot be of any significance.Some of these reports even claim that typical oral conditions were maintained even though only transient conditions were lEttErs to Editor maintained during the investigation.The publication list presented here forms only the tip of the iceberg. Reports of investigations which clearly spell out the implications of steady state conditioning, i.e. 
post saturation, must gain priority in the design of standard methods for qualification of polymer containing restorative materials and ceramics/glasses which are known to react with water and saliva.Further, according to ISO 7405 guidelines, the restoratives must demonstrate biocompatibility, which can be correctly assessed only upon long term conditioning replicating the oral environment. Copper content of various constituents of betel quid Sir, Oral submucous fibrosis (OSF) is a chronic inflammatory condition leading to trismus and impairment of various oral functions.The malignant transformation rate of this premalignant condition is reported to be as high as 14%. [1]F is prevalent in South East Asia.Various etiological factors have been implemented in the causation of this condition.Currently, the habit of chewing areca nut (Areca catechu) is recognized as the most important etiological agent in the pathogenesis of this condition, focusing at the high level of copper present in areca nut. [2]The chewing habit varies across the country.With the introduction of various flavored areca nut products with or without tobacco, and other ingredients in the market, pure areca nut chewing habit has decreased. The accumulation of copper in an organ may cause intracellular damage, ultimately expressing itself in cancerous lesions. [3]A substantial amount of copper is released into saliva while chewing areca nut.The high exposure to oral tissues during chewing could increase the local absorption and accumulation of copper.Copper reaches the connective tissue by transmucosal transport through the epithelial cells bound to metallothionein protein, by nonenzyme dependent diffusion. [2] has been demonstrated that copper chloride in vitro, significantly increases the production of collagen by fibroblast.Collagen deposition in the OSF tissue may be attributed to increase in the lysyl oxidase activity, which is a metalloenzyme of copper.Lysyl oxidase causes posttransitional modification of collagen fibers rendering them resistant to action of collagenase, the increase in the copper level in tissue is said to cause excessive cross-linking and accumulation of collagen. [4]ny of the recent studies on OSF implicate copper as the causative factor in initiation of this disease.Keeping this in mind, we evaluated the copper content of various ingredients frequently mixed with areca nut to see whether these constituents may contribute to addition of copper content.Various ingredients were dried and powered and sent to Central Plantation Crops Research Institute (Vittal, India) for the estimation of the copper content using atomic absorption spectrometry. The copper content reported is as follows: Red arecanut (18.3 ppm), white areca nut (14.9 ppm), betel leaf (18.5 ppm), gutkha (13.2 ppm), flavored areca nut (12.2 ppm), tobacco leaves (6.3 ppm).The above data shows that the betel leaf contains the highest amount of copper and tobacco contributes little to copper content.To our knowledge, the copper content of these products (except gutkha) has not been reported till date.This shows the need to individually assess the effect of these ingredients on elevating the copper content in oral mucosa.We propose that two or more of these ingredients mixed with areca nut with high copper content can produce an additive effect thereby hastening the time for causing OSF.
2018-04-03T03:51:32.034Z
2009-10-01T00:00:00.000
{ "year": 2009, "sha1": "876025b3ef0f1b70c64493e14decb61c11c9261e", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/0970-9290.59429", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "876025b3ef0f1b70c64493e14decb61c11c9261e", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
7037637
pes2o/s2orc
v3-fos-license
B-cell subpopulations in humans and their differential susceptibility to depletion with anti-CD20 monoclonal antibodies In humans, different B-cell subpopulations can be distinguished in peripheral blood and other tissues on the basis of differential expression of various surface markers. These different subsets correspond to different stages of maturation, activation and differentiation. B-cell depletion therapy based on rituximab, an anti-CD20 mAb, is widely used in the treatment of various malignant and autoimmune diseases. Rituximab induces a very significant depletion of B-cell subpopulations in the peripheral blood usually for a period of 6 to 9 months after one cycle of therapy. Cells detected circulating during depletion are mainly CD20 negative plasmablasts. Data on depletion of CD20-expressing B cells in solid tissues are limited but show that depletion is significant but not complete, with bone marrow and spleen being more easily depleted than lymph nodes. Factors influencing depletion are thought to include not only the total drug dose administered and distribution into various tissues, but also B-cell intrinsic and microenvironment factors influencing recruitment of effector mechanisms and antigen and effector modulation. Available studies show that the degree of depletion varies between individuals, even if treated with the same dose, but that it tends to be consistent in the same individual. This suggests that individual factors are important in determining the final extent of depletion. Introduction to B-cell subpopulations In humans from birth all new B cells originate from common precursors in the bone marrow. In the bone marrow, peripheral blood and secondary lymphoid tissues, diff erent B-cell subpopulations can be distinguished corresponding to diff erent stages of maturation, activation and diff erentiation. B-cell subpopulations are characterised mainly by the diff erential expression of diff erent cell surface markers that include various cluster of diff erentiation (CD) molecules and diff erent surface immuno globulin isotypes (B-cell antigen receptor). B-cell develop ment can be separated into an earlier antigenindependent phase, which takes place in the bone marrow, and a later antigen-dependent phase that takes place mainly in secondary lymphoid tissues. In a simplifi ed way, the diff erent B-cell lineage subsets include pro-B cells, pre-B cells, immature and transitional B cells, mature naïve B cells, memory B cells, plasmablasts and plasma cells (Figure 1). Plasmablasts are recently diff erentiated antibody-producing cells that are usually shortlived but can recirculate and home to tissues such as the mucosa or the bone marrow, where they can diff erentiate into fully mature plasma cells. In addition, centroblasts and centro cytes are B cells participating in germinal centre reactions. B-cell precursor subpopulations are found in the bone marrow. In the peripheral blood, transitional, naïve mature and memory B cells and plasmablasts, and more rarely plasma cells, can be identifi ed. Plasma cells are more frequently seen in the bone marrow and peripheral lymphoid tissues. Centrocytes and centroblasts are found in secondary lymphoid tissues where germinal centre reactions take place, and are not found circulating in peripheral blood. Marginal zone B cells can be found in the marginal zone of the spleen and similar populations are described in particular locations in other secondary lymphoid tissues [1]. 
Marginal zone B cells in human adults are mainly memory B cells. Th ere is still controversy on what drives formation of human marginal zone B cells, to what extent they are similar to mice marginal zone B cells and what is their relationship with circulating IgM + memory B-cell subsets [1,2]. Immunophenotyping of B cells with multiparameter fl ow cytometry has allowed identifi cation of an increasing number of diff erent subpopulations, increas ing our Abstract In humans, diff erent B-cell subpopulations can be distinguished in peripheral blood and other tissues on the basis of diff erential expression of various surface markers. These diff erent subsets correspond to diff erent stages of maturation, activation and diff erentiation. B-cell depletion therapy based on rituximab, an anti-CD20 mAb, is widely used in the treatment of various malignant and autoimmune diseases. Rituximab induces a very signifi cant depletion of B-cell subpopulations in the peripheral blood usually for a period of 6 to 9 months after one cycle of therapy. Cells detected circulating during depletion are mainly CD20 negative plasmablasts. Data on depletion of CD20-expressing B cells in solid tissues are limited but show that depletion is signifi cant but not complete, with bone marrow and spleen being more easily depleted than lymph nodes. Factors infl uencing depletion are thought to include not only the total drug dose administered and distribution into various tissues, but also B-cell intrinsic and microenvironment factors infl uencing recruitment of eff ector mechanisms and antigen and eff ector modulation. Available studies show that the degree of depletion varies between individuals, even if treated with the same dose, but that it tends to be consistent in the same individual. This suggests that individual factors are important in determining the fi nal extent of depletion. know ledge of normal B-cell biology and, in particular, changes associated with diff erent disease states. For example, diff erent memory B-cell subsets have now been described in peripheral blood including sub sets that do not express CD27, a marker previously thought to be present on all memory B cells [3,4]. Memory B-cell subpopulations include pre-switch IgD + IgM + CD27 + memory B cells, IgD -IgM + CD27 + memory B cells (IgMonly memory B cells), post-switch IgA + CD27 + and IgG + CD27 + memory B cells and also IgA + CD27and IgG + CD27memory B cells [5]. Th ese memory subpopulations show diff erent frequen cies of somatic mutation and diff erent replication histories that are thought to refl ect their formation on primary or secondary germinal centres or outside germinal centre reactions [5]. A potential new marker for human memory B-cell subpopulations has been identifi ed recently [6]. A proposal has been made that immunophenotyping of peripheral blood B cells should include the markers CD19, CD20, CD24, CD27, CD38 and IgD to be able to distinguish the major subpopulations [7]. More detailed information including separation into further subsets and subtle diff erences in activation status that may be impor tant when looking at disease states may require use of other markers such as diff erent immunoglobulin iso topes, activation markers or chemo kine receptors [6,[8][9][10][11][12][13][14]. Anti-CD20 monoclonal antibodies -rituximab Anti-CD20 mAbs were developed in the late 1980s and in the 1990s for the treatment of non-Hodgkin's lymphoma of B-cell origin. 
Rituximab (MabTh era®, Rituxan®; Roche, Basel, Switzerland) was licensed for the treatment of follicular lymphoma in 1997/98 and later for diff use large non-Hodgkin's lymphoma and chronic lymphocytic leukaemia. In 2006 rituximab was licensed for the treatment of rheumatoid arthritis (RA). Rituximab is also used off -license for the treatment of other B-cell malignant diseases, in transplantation and for the treat ment of a variety of other autoimmune diseases, pre domi nantly diseases associated with the presence of auto antibodies. Various other therapeutic anti-CD20 mAbs are either avail able on the market (Ofatumumab -Arzerra®; GlaxoSmithKlein, UK -licensed for the treatment of chronic lymphocytic leukaemia), undergoing clinical trials or under development [15]. Th e CD20 antigen is expressed by the majority of cells in the B-lymphocyte lineage, but not by haematopoietic stem cells, the earliest B-cell precursors (pro-B cells) or terminally diff erentiated plasmablasts and plasma cells ( Figure 1). Th e CD20 molecule is a transmembrane protein thought to function as a calcium channel and to be involved in B-cell activation and proliferation. A recent case report of a patient with CD20 defi ciency suggested a role in T-cell-independent antibody responses [16]. Because haematopoietic stem cells are not directly depleted by anti-CD20 antibodies, one course of treatment with rituximab is followed by B-cell repopulation of the peripheral blood starting usually within 6 to 9 months -but it can take several months or even years for total B-cell numbers in the peripheral blood to recover to pretreatment levels. Repopulation occurs mainly with naïve B cells, with increased frequency and numbers of transitional B cells similar to that seen after bone marrow transplantation [14,17]. Th e time at which B-cell repopulation of the peripheral blood starts is probably determined by the extent of earlier depletion, drug clearance and the capacity of the bone marrow to regenerate. Variability in time to repopulation in primate animal models did not seem to be dose dependent [18]. Factors infl uencing B-cell precursor formation in humans are poorly understood, as are factors that determine to what extent a fully functional B-cell repertoire is regenerated and how long it takes. Whether age or other individual characteristics infl uence repopulation is not known [19,20]. Th e fact that plasma cells are also not directly depleted by anti-CD20 antibodies explains why, in the majority of patients, serum total immunoglobulin levels remain within the normal range after treatment with one course of rituximab. Several studies have shown that serum levels of several autoantibodies decrease after treatment with rituximab (although they do not usually become undetectable) and do so proportionally more than total immunoglobulin levels or anti-microbial antibodies [21][22][23]. Th is observation suggests that these autoantibodies are produced by proportionally more shortlived plasma cells and therefore are more dependent on the formation of new plasma cells, which is interrupted by B-cell depletion [23]. Treatment with rituximab is associated with major depletion of normal B cells in vivo. Depletion in the peripheral blood is frequently higher than 99% but depletion in other tissues has been less well studied, with several studies documenting that depletion in solid tissues with rituximab is frequently not complete and can show considerable variation between individuals. 
In vitro, rituximab depletes malignant B cells by antibody-dependent cellular cytotoxicity, complement-mediated cytotoxicity and induction of apoptosis. In vivo, rituximab is thought to act mainly by inducing antibody-dependent cellular cytotoxicity, with activation of complement also contributing [24]. One of the consistent findings in several of the animal and earlier human studies is the variability of depletion seen with anti-CD20 mAbs in different individuals even when treated with the same dose [18,25,26]. Interestingly, depletion in the same individual tends to be consistent in different tissues, suggesting that individual characteristics are important.

Resistance to depletion with anti-CD20 monoclonal antibodies
Because depletion is achieved by binding of the mAbs to the cell surface CD20 molecules, the final extent of depletion will necessarily depend on the relationship between total number of B cells and total dose of rituximab administered, on accessibility of the drug and effector immune cells to the tissues where B cells are located, on intrinsic or extrinsic factors that may influence B-cell survival and on the efficacy of recruited host immune mechanisms responsible for depletion. Former small dose-ranging studies in lymphoma and in animal models have shown that B cells in the peripheral blood are readily killed by anti-CD20 antibodies but that higher doses and higher serum levels are needed for depletion in extravascular sites [18,24,25]. Factors influencing antigen and effector modulation are thought to be important in determining the final extent of depletion achieved (Table 1) [18,27,28]. Antigen modulation refers to antigen endocytosis/modulation after binding to the antibody. Contrary to what was originally thought, this can be seen with the CD20 molecule after binding with certain anti-CD20 antibodies including rituximab [29]. This can lead to less recruitment of Fcγ receptors on effector immune cells and to decreased serum drug levels. Effector modulation refers to genetic and acquired mechanisms that can enhance or diminish effector immune cell function and therefore influence the extent of depletion. For example, a Fcγ receptor IIIa polymorphism that can influence affinity for IgG has been associated with clinical response in lymphoma [28]. Profound complement depletion as seen during treatment of chronic lymphocytic leukaemia with rituximab can be a limiting factor for further depletion [28]. Intrinsic B-cell factors that may influence depletion include high expression of complement regulatory proteins as seen in chronic lymphocytic leukaemia [28]. In cynomolgus monkeys, different sensitivities to rituximab were associated with, but not fully explained by, different levels of expression of CD20 [30]. Binding of rituximab to CD20 leads to translocation of the CD20 molecule to lipid rafts. Alterations in lipid raft composition and treatment with statins have been associated with less good responses to rituximab [28]. To what extent external B-cell survival factors, in particular the cytokine B-cell activating factor (BAFF), influence depletion is not known, although it has been suggested that local high levels of BAFF may contribute to resistance to depletion by rituximab [31].
In animal models, certain subpopulations have been shown to be more resistant to depletion with anti-CD20 antibodies, but this varies with the mouse strain used and whether the studies used human CD20 transgenic mice treated with anti-human CD20 mAbs or nontransgenic mice treated with anti-mouse CD20 mAbs [32,33]. Populations that were found to be more resistant to depletion were peritoneal B1-type B cells, germinal centre B cells and marginal zone B cells [32,33]. Insufficient depletion of peritoneal B1 cells is thought to be due to the lack of effector cells in the peritoneal space [33]. Differential sensitivity of germinal centre and marginal zone B cells to anti-CD20 antibodies has also been described in cynomolgus monkeys, with differences appearing more prominent in the lymph nodes than in the spleen [30]. The relative resistance of some populations is thought to be related to B-cell and microenvironment differences responsible for antigen or effector modulation or related to direct resistance of the B cells involved. In an autoimmune mouse model of lupus, B cells were more resistant to depletion when compared with nonautoimmune mice, and more frequent administration of larger doses increased efficacy of depletion [34]. Less good depletion has also been associated with acquired defects in antibody-dependent cellular cytotoxicity in the same autoimmune mouse model of lupus [35]. To what extent the differential susceptibility of various B-cell subsets demonstrated in some of the animal models reflects what happens in humans in vivo is not known. Different B-cell malignancies deriving from B cells at different stages of differentiation and different tumour locations are also associated with differential responses to treatment with anti-CD20 mAbs, but susceptibility of the correspondent normal human B-cell subpopulations is expected to be substantially different. Whether there are any differences in susceptibility to depletion of autoreactive human B-cell clones when compared with nonautoreactive ones, as suggested by mouse models [34], and whether there are any significant differences in susceptibility to depletion of disease-associated B-cell clones between different autoimmune diseases are also not known. In addition, administration of chimaeric anti-CD20 mAbs such as rituximab can be associated with formation of human anti-chimaeric antibodies that can influence drug action and clearance. Although most large studies show no association between the presence of human anti-chimaeric antibodies and clinical response or depletion, this association has been described, for example, in small studies in systemic lupus erythematosus patients [36,37]. With evidence showing that not all B cells that bind rituximab are depleted, there is an interest in knowing what exactly happens to these cells in vivo during the period of depletion. Are they eventually depleted later on, particularly if they recirculate in peripheral blood? Are they functionally impaired? Are they able to expand in an environment with less competition and raised BAFF levels? Kamburova and colleagues tried to address some of these issues by studying the in vitro effects of incubation with rituximab on proliferation, activation and differentiation of nondepleted human normal peripheral blood B cells [38].
They reported that incubation with rituximab (for 30 minutes at 5 μl/ml) inhibited the proliferation of stimulated CD27- naïve B cells but not of CD27+ memory B cells, and this was associated with a relative increase of B cells with an activated naïve phenotype. B cells stimulated in the presence of rituximab induced stronger T-cell proliferation and the T-cell population showed a more Th2-like phenotype. These results suggest that B cells which are exposed to rituximab but are not depleted may have altered function and that naïve and memory B-cell populations may be differentially affected. Whether any of these phenomena occur in vivo and what their implications would be are unclear. Interestingly, and similar to what happens after bone marrow transplantation, the residual B cells are not able to expand and repopulate the peripheral blood, even in the presence of abundant BAFF.

B-cell depletion in peripheral blood
Administration of rituximab is usually associated with a rapid and profound depletion of circulating B cells in the peripheral blood [18]. Major depletion effector cells are probably macrophages from the reticulo-endothelial system [24]. Studies in autoimmune diseases - in particular, RA and systemic lupus erythematosus - have documented variable degrees and durations of B-cell depletion in peripheral blood in different individuals following treatment with rituximab with standard doses [17,36,37,[39][40][41]]. Incomplete B-cell depletion in the peripheral blood, as defined by B-cell counts >5 cells/μl after treatment with rituximab, has been well documented in cases of patients with autoimmune diseases, more frequently in systemic lupus erythematosus than in RA [17,36,37]. Persistent presence of circulating B cells has also been documented with high-sensitivity flow cytometry and has been associated with no or less good response to treatment [39,40]. Insufficient depletion can be seen on retreatment with documented very rapid clearance of rituximab in association with a marked human anti-chimaeric antibody response [42]. Other mechanisms underlying incomplete depletion in the peripheral blood have not been well studied but are probably a consequence of more rapid clearance of the drug and/or antigen and effector modulation phenomena [17,24,36,37]. The very small numbers of circulating B cells that can be detected during periods of depletion usually show a phenotype of plasmablasts, but cells with a memory or even a naïve phenotype have also been reported [17,40,41,43]. The CD20 antigen cannot usually be detected in these memory B cells, suggesting that it is masked by binding to rituximab because the drug can be detected in the circulation for several months [26]. Mei and colleagues described that, similarly to their controls, the majority of circulating plasmablasts/plasma cells detected during depletion were positive for IgA and a reasonable proportion expressed markers suggesting they had been formed in mucosal tissue and were circulating back to mucosal areas [44]. These results suggest that depletion in mucosal-associated lymphoid tissue may be particularly less pronounced. Repopulation of the peripheral blood after treatment with a standard dose of rituximab usually starts 6 to 9 months after treatment with predominantly transitional and naïve B cells as previously mentioned. Frequently, repopulation with larger numbers of memory B cells and/or plasmablasts has been associated with earlier relapse [17,40,45].
At repopulation, the decrease from baseline in the frequency of pre-switch memory B cells (CD27+IgD+) was larger than the decrease in the switched memory B-cell population (CD27+IgD-) [46]. However, to what extent circulating memory B cells at repopulation are old memory B cells that have not been depleted by rituximab or recently differentiated memory B cells is not known. We therefore do not know whether relative frequencies of the different B-cell subpopulations at repopulation can tell us anything about the subpopulations of cells that may have resisted depletion. In RA, nonresponse has been associated with higher numbers of plasmablasts before treatment and early relapse has been associated with higher numbers of CD27+ memory B cells before treatment [39,45]. Again, to what extent this may indicate less susceptibility and insufficient depletion of memory B-cell subsets in association with no response or with a shorter response is not known.

B-cell depletion in bone marrow and secondary lymphoid tissues
Unfortunately, there are limited data on the degree of depletion of normal B cells in secondary lymphoid organs and other solid tissues in human individuals treated with rituximab, and hardly any data on differential susceptibility to depletion of different subpopulations in different tissues except for the expected resistance of CD20- plasmablasts and plasma cells to depletion [47]. Animal studies in primates showed that increasingly higher doses are needed to deplete bone marrow, spleen and lymph nodes, in this order [18,48,49]. These studies also showed that B-cell depletion in solid tissues was frequently significant, but not complete, and that it varied from site to site and from individual to individual even when the same doses were used. Interestingly, consistency regarding the degree of depletion achieved in different lymph nodes in the same individual was described [18,20,48,49]. As previously mentioned, mouse studies suggested that B cells resident in tissues other than peripheral blood may be partly resistant to depletion by anti-CD20 antibodies either because of local defective effector mechanisms or because the B cells have a particular phenotype that renders them resistant to depletion in association with their specific state of maturation, activation or differentiation. In bone marrow samples of RA patients treated with rituximab, a relatively high number of B-cell precursor subpopulations can be seen [50][51][52]. This has been documented at 1 month or 3 to 4 months after treatment, at a time when peripheral blood repopulation had not yet started [50,51]. Persistence of CD20- plasma cells has been observed as expected [50,51]. In the two studies where phenotyping was more detailed, the cells found were mainly B-cell precursors and recirculating memory B cells [50,52]. Once again, variability between individuals was observed [50,52]. The presence of cells of B-cell lineage that presumably should be expressing CD20 has therefore been well documented, and rituximab is probably still present and binds to the CD20 molecule, preventing its detection in flow cytometry as discussed above [50,51]. Alternatively, antigen endocytosis/modulation could occur. Whether the developing B cells are eventually depleted by anti-CD20 recruited mechanisms or whether their full maturation is prevented by binding of rituximab to CD20 is not known.
In a study of autopsy samples of lymph node and spleen of patients with lymphoma treated with rituximab monotherapy or with rituximab and chemotherapy, a substantial reduction of B-cell populations was documented - with only three out of eight patients showing any reactivity for markers of cells of B-cell lineage in the lymph nodes and only one out of eight in the spleen by immunohistochemistry [53]. Similarly, a study in patients with idiopathic thrombocytopenic purpura showed major and prolonged depletion of B cells in the spleen of 10 patients treated with rituximab [54]. The number of residual B cells correlated with time from rituximab treatment but was <5% of spleen lymphocytes in eight out of nine patients studied up to 10 months after rituximab treatment. Plasma cells were detected at increased frequencies when compared with patients with idiopathic thrombocytopenic purpura not treated with rituximab. In a patient with idiopathic thrombocytopenic purpura, analysis of spleen and bone marrow samples by flow cytometry revealed complete depletion of B cells 3 months after treatment with rituximab [55]. In another patient with idiopathic thrombocytopenic purpura, B cells in the spleen 3 months after rituximab treatment were only present in very low numbers (around 0.1%) [56]. Interestingly, in this latter study persistence of memory B cells against vaccinia virus in the spleen of patients previously treated with rituximab was documented [56]. In kidney transplant patients who had a splenectomy 3 to 12 days after treatment with rituximab, naïve B cells were reduced but not memory B cells or plasma cells [57]. Vaccination studies in patients treated with rituximab can provide indirect data on B-cell subpopulations that may be resistant to depletion with anti-CD20 mAbs. However, published data are difficult to interpret because of the small number of patients, effects of concomitant therapy and the background disease itself on the humoral response to vaccines and, in particular, because studies included patients at various stages of B-cell depletion or repopulation at the time of vaccination. Most studies have looked at responses to influenza vaccines and showed absent or decreased humoral responses to vaccination in patients previously treated with rituximab when compared with normal controls or patients not treated with rituximab [58][59][60][61][62][63][64]. Some studies described a positive relationship between the antibody responses to vaccination and the number of circulating B cells at the time of vaccination [64] or the time from last rituximab treatment [60,62]. Interestingly, when circulating influenza-specific B cells were studied 6 days after vaccination, specific IgM B cells were decreased in patients treated with rituximab 6 months previously when compared with controls, but IgA B cells and IgG B cells were similar [61]. In a study in lymphoma patients, responses to recall antigens in the influenza vaccine were also seen but not to the new antigen [65]. These studies suggest that memory B cells are more resistant to depletion than naïve B cells and can survive treatment with rituximab and be recruited in a secondary immune response.

B-cell depletion in other solid tissues
In patients with RA, several studies have documented significant but variable depletion of B cells in samples of synovial tissue of involved joints and persistence of CD20- plasma cells [66][67][68].
Variability in depletion between individuals was not explained by differences in rituximab serum levels [69]. In a study in patients with Sjogren's syndrome, repeated salivary gland biopsies 3 months after treatment with rituximab showed incomplete depletion of B cells [70]. A previous study had shown complete depletion at 4 months [71]. In a study of renal explanted grafts in two patients treated with one dose (4 months earlier) or two doses (10 months earlier) of rituximab, despite depletion of peripheral blood, tertiary lymphoid structures containing B cells were seen [72].

Conclusion
In summary, although there are several studies looking at the degree and duration of B-cell depletion induced by rituximab in the peripheral blood, there is very little information on the exact degree of depletion in solid tissues - and, in particular, few definite data on whether different subtypes of CD20-expressing B cells are more or less susceptible to depletion by anti-CD20 antibodies. The data available suggest that there is variability between individuals in the extent and duration of depletion induced and that this may have clinical correlations with response and duration of response in autoimmune diseases. Understanding what underlies this variability - and, in particular, whether drug clearance and antigen and effector modulation phenomena are involved - has the potential to lead to more effective B-cell depleting strategies and to increasing our understanding of the role that different B-cell subtypes play in the pathogenesis of the different autoimmune diseases.

Competing interests
The author has received consultancy fees and funding to attend international medical meetings from Roche Pharmaceuticals and consultancy fees and research funding from GlaxoSmithKline.

Declarations
This article has been published as part of Arthritis Research & Therapy Volume 15 Supplement 1, 2013: B cells in autoimmune diseases: Part 2. The supplement was proposed by the journal and content was developed in consultation with the Editors-in-Chief. Articles have been independently prepared by the authors and have undergone the journal's standard peer review process. Publication of the supplement was supported by Medimmune.
Pyneal: Open Source Real-Time fMRI Software

Increasingly, neuroimaging researchers are exploring the use of real-time functional magnetic resonance imaging (rt-fMRI) as a way to access a participant's ongoing brain function throughout a scan. This approach presents novel and exciting experimental applications ranging from monitoring data quality in real time, to delivering neurofeedback from a region of interest, to dynamically controlling experimental flow, or interfacing with remote devices. Yet, for those interested in adopting this method, the existing software options are few and limited in application. This presents a barrier for new users, as well as hinders existing users from refining techniques and methods. Here we introduce a free, open-source rt-fMRI package, the Pyneal toolkit, designed to address this limitation. The Pyneal toolkit is python-based software that offers a flexible and user friendly framework for rt-fMRI, is compatible with all three major scanner manufacturers (GE, Siemens, Phillips), and, critically, allows fully customized analysis pipelines. In this article, we provide a detailed overview of the architecture, describe how to set up and run the Pyneal toolkit during an experimental session, offer tutorials with scan data that demonstrate how data flows through the Pyneal toolkit with example analyses, and highlight the advantages that the Pyneal toolkit offers to the neuroimaging community.

INTRODUCTION
Real-time functional magnetic resonance imaging (rt-fMRI) is an emerging technique that expands the scope of research questions beyond what traditional neuroimaging methods can offer (Sulzer et al., 2013a; Stoeckel et al., 2014; Sitaram et al., 2017; MacInnes and Dickerson, 2018). With traditional fMRI, brain activation is measured concurrently but independently from the experiment. All analyses (e.g., correlating behavior or cognitive state with brain activations) therefore take place after the scan is completed, once the brain images and behavioral data have been saved and transferred to a shared location. (Throughout this article we use the term scan or run to refer to a single, discrete 4D acquisition, and the term experimental session to refer to a collection of scans that are administered to a particular participant in a continuous time window.) In contrast, real-time fMRI is an approach whereby MRI data is accessed and analyzed throughout an ongoing scan, and can be incorporated directly into the experiment. Technological advances over the last decade have made it feasible to reconfigure an MRI environment to allow researchers to access and analyze incoming data at a rate that matches data acquisition. A few key advantages that rt-fMRI provides over traditional fMRI include the ability to: (1) monitor data quality in real time, thereby saving time and money, (2) provide participants with feedback from a region or network of regions in cognitive training paradigms, and (3) use ongoing brain activation as an independent variable to dynamically control the flow of an experimental task. While rt-fMRI has risen in popularity over the past decade (Sulzer et al., 2013a), the majority of imaging centers around the world remain unequipped to support this technique. In the past, this was primarily due to the computational demands exceeding scanner hardware capabilities [e.g., reconstructing and analyzing datasets composed of >100k voxels at a rate that matched data acquisition was not feasible (Voyvodic, 1999)]. Excitingly, modern day scanners available from each of the major MRI manufacturers - GE, Philips, and Siemens - are now outfitted with multicore processors, capable of operating in parallel to reconstruct imaging data and write files to disk while a scan is ongoing.
The availability of fMRI data in real time presents novel opportunities to design experiments that incorporate information about ongoing brain activation. However, finding the right software tool to read images across multiple data formats, support flexible analyses, and integrate the results into an ongoing experimental presentation is a challenge. To date, the existing software options are limited for one or more reasons, including: cost (requiring a commercial license or dependent upon commercially licensed software such as Matlab) or a constrained choice of analysis options [e.g., region of interest (ROI) analysis only]. In this article, we describe the Pyneal toolkit, an open source and freely available software package that was developed to address these limitations and support real-time fMRI. It is written entirely in Python, a programming language that offers flexibility and performance, balanced with readability and widespread support among the neuroimaging community. The Pyneal toolkit was built using a modular architecture to support a variety of different data formats, including those used across all three major MRI scanner manufacturers - GE, Philips, and Siemens. It offers built-in routines for basic data quality measures and single ROI summary statistics, as well as a web-based dashboard for monitoring the progress of ongoing scans. Its primary advantage, however, is that it offers an easy-to-use scaffolding on which users can design fully customized analyses to meet their unique experimental needs (e.g., neurofeedback from multiple ROIs, dynamic experimental control, classification of brain states, brain-computer interaction). This flexibility allows researchers full control over which neural regions to include, which analyses to carry out, and how the results of those analyses may be incorporated into the overall experimental flow. Moreover, computational and technological advances have ushered in new and more sensitive approaches to fMRI analyses. As the field continues to evolve, the ability to customize analyses within the Pyneal toolkit will allow researchers to quickly adapt new analytic methods to real-time experiments. The Pyneal toolkit was designed to offer a powerful and flexible tool to existing rt-fMRI practitioners as well as to lower the burden of entry for new researchers or imaging centers looking to add this capability to their facilities. Here we provide an overview of the software architecture, describe how it is used, offer tutorial data and analyses demonstrating how to use the Pyneal toolkit, and discuss the advantages of the Pyneal toolkit. We conclude by describing both limitations of and future directions for the Pyneal toolkit.

Overview
The Pyneal toolkit was created as a flexible and open-source option for researchers interested in pursuing real-time fMRI methods. The entire codebase is written in Python 3 and integrates commonly used neuroimaging libraries (e.g., Nipy, NiBabel).
For users developing customized real-time analyses, Python has a low burden of entry (compared to languages like Java or C++), while at the same time offers performance that meets or exceeds the needs of basic research applications, in part due to backend numeric computing libraries (e.g., Numpy, Scipy) that are wrapped on top of a fast, C-based architecture. In order to support a wide range of data types and computing environments, the software is divided into two primary components: Pyneal Scanner and Pyneal (see Figure 1). The two components communicate via TCP/IP connections, allowing users the flexibility to run the components on the same or different machines as required by their individual scanning environments. Internally, Pyneal uses ZeroMQ, a performant and reliable messaging framework, for all TCP/IP-based communication among its core processes.

FIGURE 1 | Overview of the Pyneal toolkit. The Pyneal toolkit consists of two modules: Pyneal Scanner and Pyneal. Pyneal Scanner receives the raw data and transforms it into a standardized format for Pyneal to use. Pyneal analyzes the data in real time and makes it available for subsequent use (e.g., by a remote End User for experimental display). Pyneal Scanner and Pyneal can operate on the same computer (e.g., dedicated analysis computer) or separate computers (as required by the specific scanning environment).

During a scan, Pyneal Scanner is responsible for converting data into a standardized format and passing it along to Pyneal (see Figure 2). Pyneal receives incoming data, carries out the specified preprocessing and analysis steps, and stores the results of the analysis on a locally running server. Throughout the scan, any remote End User (e.g., a workstation running the experimental task) can retrieve analysis results from Pyneal at any point. Each of these components is discussed in greater detail below.

Pyneal Scanner
Given the range of potential input data formats, depending on the scanning environment, we aimed to standardize the incoming data in a way that allows subsequent processing steps to be environment agnostic. Thus we divided the overall Pyneal toolkit architecture into two components that operate independently, enabling one component, Pyneal Scanner, to adapt to the idiosyncrasies of the local scanning environment without affecting the downstream processing and analysis stages of the Pyneal component (see Figure 2). Architecturally, Pyneal Scanner uses a multithreaded design with one thread monitoring for the appearance of new image data, and a second thread processing image data as it appears. This design allows Pyneal Scanner to efficiently process incoming scan data with minimal latency (in practice, under typical scanning conditions, the latency between when new image data arrives and is processed is on the order of tens of milliseconds). Throughout a scan, new images that appear from the scanner are placed into a queue. The processing thread pulls individual files from that queue and converts the data to a standardized format. In addition, header information from the first images to arrive is processed to determine key metadata about the current scan, including total volume dimensions, voxel spacing, total number of expected time points, and the affine transformation needed to reorient the data to RAS+ format (axes increase from left to right, posterior to anterior, inferior to superior).
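The watcher/processor split described above is a standard producer-consumer pattern. The sketch below illustrates that pattern only; it is not the actual Pyneal Scanner code, and the directory path, the conversion step, and the transmission step are placeholder stand-ins for the real DICOM/PAR-REC parsing and the socket handoff discussed below.

import os
import time
import queue
import threading

def watch_series_dir(series_dir, file_queue, stop_event, poll_interval=0.05):
    # Producer thread: poll the directory and queue any file not seen before
    seen = set()
    while not stop_event.is_set():
        for fname in sorted(os.listdir(series_dir)):
            if fname not in seen:
                seen.add(fname)
                file_queue.put(os.path.join(series_dir, fname))
        time.sleep(poll_interval)

def process_queue(file_queue, stop_event):
    # Consumer thread: pull queued files and hand off a standardized volume
    while not stop_event.is_set() or not file_queue.empty():
        try:
            path = file_queue.get(timeout=0.5)
        except queue.Empty:
            continue
        volume = convert_to_standard_volume(path)  # placeholder for format-specific parsing
        send_to_pyneal(volume)                     # placeholder for the socket transmission

def convert_to_standard_volume(path):
    # Stand-in: real code would parse the image file and reorient the data to RAS+
    return path

def send_to_pyneal(volume):
    # Stand-in: real code would send a JSON header plus the voxel array (see below)
    print('processed', volume)

if __name__ == '__main__':
    series_dir = '/tmp/new_series'               # placeholder watch directory
    os.makedirs(series_dir, exist_ok=True)
    q, stop = queue.Queue(), threading.Event()
    threading.Thread(target=watch_series_dir, args=(series_dir, q, stop), daemon=True).start()
    threading.Thread(target=process_queue, args=(q, stop), daemon=True).start()
    time.sleep(5)
    stop.set()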
FIGURE 2 | Process flow diagram illustrating the multi-threaded nature of the Pyneal toolkit. Pyneal Scanner has two sub-modules: a scan watcher and scan processor. The scan watcher monitors and adds all new raw images to a queue. The scan processor receives all new raw images from the queue, extracts the image data, transforms it to a standardized format, and sends it to Pyneal for analysis. Pyneal operates as an independent, multi-threaded component and has three sub-modules: scan receiver, scan processor, and results server. The scan receiver receives formatted data from Pyneal Scanner and sends it to the scan processor, which completes the specified analyses and sends them to the results server. The results server listens to incoming requests from End Users (e.g., experimental task).

Pyneal Scanner is initialized through a simple configuration text file specifying the scanner type and paths to where data files are expected to appear throughout a scan. Users can create this file manually, or follow the command line prompts when first launching; in either case, once Pyneal Scanner is configured at the start of a session, it does not need to be modified, unless the scanning environment itself is modified. In that case, users can update Pyneal Scanner without having to add any additional modifications to downstream processes in Pyneal. Regardless of how and where the data arrives from the scanner, as long as Pyneal Scanner continues to output data in the expected format, subsequent stages in the pipeline will proceed unaffected. This is a significant advantage that provides researchers the necessary latitude to customize the installation to their unique environment. Pyneal Scanner has built-in routines for handling common data formats used in GE (e.g., 2D dicom slice files), Siemens (e.g., 3D dicom mosaic files), and Philips scanners (e.g., PAR/REC files), and is easily extensible to incorporate additional formats that may emerge in the future. As soon as a complete volume (i.e., 3D array of voxel values from a single time point) has arrived, it is passed along to Pyneal via a dedicated TCP/IP socket interface. This arrangement allows Pyneal Scanner and Pyneal to run on separate machines or as separate processes on the same machine, depending on the particular requirements of the local scanning environment. For instance, if newly arriving images are only accessible from the scanner console itself, Pyneal Scanner can run on that machine, monitoring the local directory where new images appear, and then transferring processed volumes to Pyneal running on a separate dedicated machine. Alternatively, the scanner network configuration may be such that it is possible to remotely mount the directory where new images appear, allowing Pyneal Scanner and Pyneal to run concurrently on the same machine. Each transmitted volume from Pyneal Scanner to Pyneal occurs in two waves: First, Pyneal Scanner sends a JSON-formatted header that contains relevant metadata about the current volume, including the time point index and volume dimensions. Second, it sends the numeric array representing the volume data itself. Pyneal uses the information from the header to reconstruct the incoming array, store it as a memory- and computation-efficient Numpy array, and index the volume in a way to facilitate subsequent processing and analysis steps.
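To make the two-wave handoff concrete, the following is a minimal, self-contained sketch of sending one volume as a JSON header followed by the raw voxel array over a ZeroMQ PAIR socket. The socket pattern, port number, and header field names are illustrative assumptions for this sketch; the actual message format used by Pyneal Scanner and Pyneal may differ in its details.

import numpy as np
import zmq

context = zmq.Context()

# "Pyneal" side: bind and wait for an incoming volume
receiver = context.socket(zmq.PAIR)
receiver.bind('tcp://127.0.0.1:5555')

# "Pyneal Scanner" side: connect and send one volume in two waves
sender = context.socket(zmq.PAIR)
sender.connect('tcp://127.0.0.1:5555')

vol = np.random.rand(64, 64, 30).astype('float32')            # stand-in 3D volume
header = {'volIdx': 0, 'dtype': str(vol.dtype), 'shape': vol.shape}
sender.send_json(header, zmq.SNDMORE)                          # wave 1: JSON metadata
sender.send(vol.tobytes())                                     # wave 2: voxel array

# Receiving side reconstructs the Numpy array from the header
hdr = receiver.recv_json()
data = np.frombuffer(receiver.recv(), dtype=hdr['dtype']).reshape(hdr['shape'])
print(hdr['volIdx'], data.shape)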
Pyneal
Pyneal is divided up into three distinct submodules that operate efficiently in a multithreaded configuration: submodule 1, the scan receiver, accepts incoming data from Pyneal Scanner; submodule 2, the processing module, oversees the preprocessing and analysis stages on each incoming volume; and submodule 3, the results server, fields requests for data from remote End Users throughout the scan (see Figure 2). As described above, throughout a scan Pyneal's submodule 1 (scan receiver) receives re-formatted data from Pyneal Scanner. Each new data point is represented as a 3D matrix of voxel values corresponding to a single sample (i.e., one TR). The JSON header that Pyneal Scanner provides with every transmission allows Pyneal to reconstruct the 3D volume with the correct dimensions, as well as assign it the proper index location in time. Each new volume is passed to the proper location of a preallocated 4D matrix that incrementally fills in throughout the scan. Submodule 2 (processing module) accepts each 3D volume and submits it through preprocessing and analysis stages. The preprocessing stage estimates motion using a histogram registration algorithm and yields mean displacement in millimeters relative to a fixed reference volume from the start of the run (absolute motion), as well as relative to the previous time point (relative motion) (Jenkinson, 2000). The analysis stage takes the preprocessed volume and runs the specified analyses or computations on the volume. Users have the option of selecting from built-in analysis routines (including calculating a weighted or unweighted mean signal within a supplied ROI mask), or, importantly, can generate and include their own custom analysis script (written in Python) that will be executed on each volume. The ability to design and execute customized analyses in real-time provides researchers the freedom to measure and use ongoing brain activations however they desire. See Using Pyneal below for more details on selecting an analysis or building a custom analysis script. The analysis stage is capable of computing and returning multiple results on each volume (e.g., mean signal from multiple distinct ROIs). The computed results are tagged with the corresponding volume index, and passed along to the third submodule: the results server. Submodule 3 (the results server) listens for and responds to incoming requests for specific results from an End User throughout the scan. An End User is anything that may wish to access real-time results throughout an on-going scan (e.g., experimental presentation software that will present results as neurofeedback to the participant in the scanner). To request results, the End User sends a specific volume index to the results server via a TCP/IP socket interface. The results server receives the request and checks to see if the requested volume has arrived and been analyzed. Responses are sent as a JSON-formatted reply to the End User. If the requested volume has not been processed yet, the reply message from the results server will contain the entry foundResults: False; if the requested volume exists, the results server retrieves the requested results for that volume, and sends a reply message to the End User that contains foundResults: True as well as the full set of results for that volume. The End User can then parse and make use of the results as needed (e.g., update a graphical display showing mean percent signal change in an ROI).
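As a concrete illustration of the request/reply exchange just described, the sketch below shows how an End User script might poll the results server for a given volume index. The framing here is an assumption made for this sketch (the index sent as a short ASCII string over a plain TCP socket, the reply read as a single JSON string), and the host, port, and volume index are placeholders; the exact message format should be checked against the Pyneal documentation.

import json
import socket
import time

def request_result(host, port, vol_idx):
    # One request/reply exchange with the results server (schematic framing)
    with socket.create_connection((host, port)) as sock:
        sock.sendall(str(vol_idx).zfill(4).encode())   # e.g. b'0025'
        reply = sock.recv(4096)
    return json.loads(reply.decode())

# Poll until the desired volume has been analyzed, then use the result
host, port, vol_idx = '127.0.0.1', 5556, 25            # placeholder values
result = {'foundResults': False}
while not result.get('foundResults'):
    result = request_result(host, port, vol_idx)
    time.sleep(0.1)
print(result)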
At the completion of each run, Pyneal creates a unique output directory for the current scan. The scan data is written to this directory as a 4D NIFTI image, along with a JSON file containing all computed results as well as log files.

Using Pyneal
Once installed, users can interact with and customize Pyneal via configuration files and graphical user interfaces (GUIs). At the start of a new scan, the user needs to launch both Pyneal Scanner and Pyneal. Launching Pyneal Scanner is done via the command line. Pyneal Scanner uses a configuration text file to obtain parameters specific to the current computing environment, including the scanner make and the directory path where new incoming data is expected to appear (see example in the Full Pipeline tutorial below, section "Pyneal Toolkit - Full Pipeline Tutorial"). Users can manually create this configuration file ahead of time, or, if no file exists, the user will be prompted to specify the parameters via the command line when launching. Parameters specified via the command line will be written into the configuration file and saved to disk. Pyneal Scanner will automatically read this configuration file at the start of every scan. Thus, Pyneal Scanner needs to be configured only once at the beginning of each experimental session. Launching Pyneal is also done via the command line. Upon launching Pyneal at the start of each scan (run), the user is presented with a setup GUI for configuring Pyneal to the current scan (see Figure 3). The setup GUI includes sections for socket communication parameters (e.g., IP address), selecting an input mask, setting preprocessing parameters, choosing analyses, and specifying an output directory. Some parameters, like the socket communication host address and ports, are unlikely to change from experimental session to session, while other parameters, most notably the input mask and output directory, will be specific to the experimental session and/or each individual scan. The GUI is populated with the last used settings to minimize set-up time; however, the GUI must be launched before each scan. The setup GUI asks users to specify the path to an input mask, which will be used during the analysis stage of a scan. If the user selects one of the built-in analysis options (i.e., calculate an average or median), the mask will define which voxels are included in the calculation. Alternatively, if the user chooses to use a custom analysis, a reference to this mask will be passed into the custom analysis script, which the user is free to use or ignore as needed. In addition, the mask panel also allows users to specify whether or not to use voxel values from the mask as weights in subsequent analyses. All analyses in Pyneal take place in the native functional space of the current scan, and as such, this mask is required to match the dimensions and orientation of the incoming functional data. For cases where the user wishes to use an existing anatomical mask in a different imaging space (e.g., MNI space), the Pyneal toolkit includes a Create Mask tool (utils/createMask.py) for transforming masks to the functional space of the current subject [see Figure 4; note that this functionality requires FSL (Jenkinson et al., 2012) to be installed]. Pyneal includes built-in analysis options for calculating the average and median activation levels across all voxels in the supplied mask. For experiments that wish to present neurofeedback from a single ROI, these options may be appropriate.
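For intuition, the built-in ROI summary amounts to the kind of masked reduction sketched below for a single 3D volume. This is a hedged sketch of the idea, not Pyneal's internal implementation; the file names in the commented usage lines are hypothetical, and the weighted variant simply uses the mask's voxel values as weights, as described above.

import numpy as np

def roi_summary(vol, mask, weighted=False):
    # Mean signal of one 3D volume within a mask (boolean ROI or weight map)
    mask = np.asarray(mask, dtype=float)
    if weighted:
        return float(np.sum(vol * mask) / np.sum(mask))
    return float(vol[mask > 0].mean())

# Hypothetical usage with files already in the scan's functional space:
# import nibabel as nib
# mask = nib.load('L_MotorCortex.nii.gz').get_fdata()
# vol = nib.load('func.nii.gz').get_fdata()[..., 0]   # first time point
# print(roi_summary(vol, mask, weighted=True))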
However, one of Pyneal's primary advantages is the ability to run fully customized analyses. By selecting "custom" in the analysis panel, the user will be prompted to choose a python-based analysis script they have composed. Pyneal requires that a custom script contain certain functions in order to integrate with the rest of the Pyneal pipeline throughout a scan. However, beyond that basic structure, there are few limitations on what users may wish to include. To assist users in designing a custom analysis script, we include a basic template file with the required named functions and input/output variable names that users can expand upon as needed (a minimal skeleton of this structure is sketched at the end of this section). The benefit of this approach is that it liberates users to design analysis approaches that are best suited to their experimental questions, all while fully integrating into the existing Pyneal pipeline.

FIGURE 3 | Pyneal Graphical User Interface (GUI). The Pyneal GUI contains the following sections: (1) Communication: allows Pyneal to communicate with Pyneal Scanner and any End Users. This includes the IP address of the computer running Pyneal as well as the port numbers for Pyneal Scanner and End Users to communicate with Pyneal. (2) Mask: users have the option of loading a mask to use during real-time fMRI runs (weighted or unweighted). (3) Preprocessing: users specify the number of timepoints (volumes) in the run. (4) Analysis: users may choose between one of the default options (calculating the average or median of a mask) or, importantly, can upload a custom analysis script (e.g., correlation between two regions). (5) Output: users specify a location where the output files are saved.

FIGURE 4 | Create Mask GUI. This GUI assists users in making a mask that can be used in analysis during the real-time fMRI runs. Users can choose between making a whole brain mask or a mask from a pre-specified MNI template (e.g., amygdala ROI). Users must load an example functional data file for both mask types. When creating a mask from an MNI template, users must additionally load an anatomical data file, specify the path to the MNI standard brain file, the MNI mask file, and specify the new file name (output prefix). Note, this tool requires FSL.

Lastly, users are able to specify an output directory for the current experimental session. During an experimental session, the output from each scan will be saved to its own unique subdirectory within this output directory. The saved output from each scan includes a log file showing all settings and messages recorded throughout the scan, a JSON file containing all of the computed analysis results, and a 4D NIFTI image containing the functional data as received by Pyneal. Once the user hits "submit," Pyneal will establish communication with Pyneal Scanner, launch the results server, and wait for the scan to start and data to appear.
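As referenced above, a custom analysis script follows a simple structure with an initialization step and a per-volume compute step. The skeleton below is an illustrative sketch only: the class name, constructor arguments, and result key are assumptions made for this example, and the exact required names and signatures should be taken from the customAnalysisTemplate.py file shipped with the toolkit.

import numpy as np

class CustomAnalysis:
    # Illustrative skeleton of a custom analysis; see the template file for the real interface
    def __init__(self, maskImg, weightMask, numTimepts):
        # Runs once when Pyneal is launched: load files, pre-allocate arrays, set parameters
        self.mask = (np.asarray(maskImg) > 0) if maskImg is not None else None
        self.numTimepts = numTimepts

    def compute(self, vol, volIdx):
        # Runs on every incoming 3D volume; must return a dictionary of results
        if self.mask is not None:
            value = float(vol[self.mask].mean())
        else:
            value = float(np.mean(vol))
        return {'customResult': value}

# Hypothetical check outside of a scan:
# analysis = CustomAnalysis(None, False, 200)
# print(analysis.compute(np.random.rand(64, 64, 30), 0))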
Web-Based Dashboard
Once the scan begins, users are presented with a web-based dashboard (see Figure 5) viewable in an internet browser. The dashboard updates in real-time allowing users to view the progress of the scan, and monitor the status via four separate components. A plot in the top-left displays ongoing head motion estimates expressed in millimeters relative to both a fixed reference volume (absolute displacement) and the previous volume (relative displacement). In the top-right, a separate plot shows the processing time for each volume. By monitoring this plot, users can ensure that all analyses are completing at a rate that keeps pace with data acquisition. At the bottom, two log windows allow users to watch incoming messages from Pyneal Scanner (bottom left) and communication between Pyneal's results server and any End User (bottom right).

FIGURE 5 | Pyneal Dashboard. This web-based dashboard allows users to monitor analysis and progress during real-time runs. The current volume is displayed along with basic information about the scan (e.g., mask, analysis, etc.). Two plots indicate: (1) head motion (top left) - both relative (compared to previous volume) and absolute (compared to the start of the scan) - and (2) processing time for each volume (top right). Two log windows display: (1) messages from Pyneal Scanner (bottom left) and (2) communication with End Users (bottom right).

RESULTS
Here we present two complementary tutorials and results using real fMRI data. Section "Pyneal Toolkit - Full Pipeline Tutorial" details how to set up and use the Pyneal toolkit. It demonstrates the full pipeline of data flow throughout the Pyneal Toolkit. Section "Pyneal Analysis Tutorial" describes in more detail how to run two example analyses in Pyneal - one using the default built-in ROI-averaging tool in the toolkit and the second using a custom analysis script. Please see: https://github.com/jeffmacinnes/pyneal-tutorial for full access to the data and scripts for both tutorials. Both tutorials assume the user has downloaded and installed the Pyneal toolkit and the Pyneal Tutorial repositories in their local folder. If so, the following directories should be located in the user's home directory: ~/pyneal and ~/pyneal-tutorial.

Pyneal Toolkit - Full Pipeline Tutorial
The goal of this tutorial is to test the Pyneal toolkit's complete pipeline using conditions similar to what is available at the three major scanner manufacturers. This tutorial uses the Scanner Simulator command line tool that comes with the Pyneal toolkit. This tool mimics the behavior of an actual scanner by writing image data to an output directory at a steady rate (directory and rate specified by the user). The source data (included) are actual scan images from GE, Philips, and Siemens scanners. These data are meant to simulate the format and directory structure typical of each of these platforms. This tutorial allows users to test the complete Pyneal toolkit's pipeline on any of these platforms prior to actual data collection. Regardless of scanner type, each platform follows the same general steps: set up the Scan Simulator, launch Pyneal Scanner, launch Pyneal, and start the simulated scan. Below we provide a complete example using the Siemens scanner setup. Please see https://github.com/jeffmacinnes/pyneal-tutorial for source data and information for all scanner types, including examples using GE and Philips scanners.

Siemens Full Pipeline Tutorial: Inside the Siemens_demo folder, there is a directory named scanner. This directory serves as the mock scanner for this tutorial, and follows a structure similar to what is observed on actual Siemens scanners. There is a single session directory (data) that contains all of the dicom files for two functional series (000013, 000015) and an anatomical series (for more source data detail, see Appendix: Siemens source data within: https://github.com/jeffmacinnes/pyneal-tutorial/blob/master/FullPipelineTutorial.md). We will use the Scanner Simulator tool to simulate a new functional series, using 000013 as our source data. The new series will appear in the session directory alongside the existing series files, and dicom files will contain the series name 000014. To perform this tutorial the following steps are required:
I. Launch Siemens_sim.py with the desired input data
• Open a new terminal window and navigate to the Scanner Simulator tool:
cd ~/pyneal/pyneal_scanner/simulation/scannerSimulators
• Launch Siemens_sim.py, specifying the path to the source directory (~/pyneal-tutorial/Siemens_demo/scanner/data) and the series number (000013). The user can also specify the new series number (-n 000014) and TR (-t 1000) if desired:
python Siemens_sim.py ~/pyneal-tutorial/Siemens_demo/scanner/data 000013 -t 1000 -n 000014
The user should see details about the current scan, and an option to press ENTER to begin the scan.
• Start Pyneal by pressing Submit.
• In the Pyneal Scanner terminal, the user will see messages indicating that Pyneal Scanner has successfully set up a connection to Pyneal and that it is waiting for a new seriesDir (which will be created once the scan starts).
IV. Start demo
• In the first terminal window, where the Scan Simulator tool is running, press ENTER to begin the scan.
• As the scan is progressing, each of the three terminal windows will update with new log messages. In addition, the user can monitor the progress from the dashboard in a web browser at 127.0.0.1:5558.
• As soon as the scan finishes, the user can find the Pyneal output at ~/pyneal-tutorial/Siemens_demo/output/pyneal_001. This directory will have:
• pynealLog.log: log file from the current scan.
• receivedFunc.nii.gz: 4D nifti file of the data, as received by Pyneal.
• results.json: JSON file containing the analysis results from the current scan.

Pyneal Analysis Tutorial
The goal of this tutorial is to guide users through two different analyses using Pyneal. We provide real fMRI data (note - this tool also allows for use of randomly generated data). This tutorial uses the pynealScanner_sim.py command line tool that comes with the Pyneal toolkit. This tool takes real or generated data, breaks it apart, and sends it to Pyneal for analysis. The source data (included) is a nifti file from one run of a hand squeezing task. It alternates between blocks of squeeze and rest (each 20 s, repeated five times). The first analysis demonstrates Pyneal's built-in ROI neurofeedback tool. The second demonstrates use of a custom analysis script: correlating the activation of two ROIs and using it for neurofeedback.

Neurofeedback: Single ROI Averaging Using Built-in Analysis Functions
Example: A researcher wishes to provide participants with neurofeedback from the primary motor cortex (M1) in a hand-squeezing task. The M1 ROI is defined on the basis of an anatomical mask using the Juelich atlas in FSL.
Tutorial: Ordinarily, the first step is to create a unique mask in functional space of the target ROI (M1). For the purposes of this tutorial, we provide the ROI in subject-specific space for users. We used the left M1 ROI from the Juelich atlas freely available in FSL. We thresholded the mask at 10% and binarized it using fslmaths. Then, using flirt, we converted the left M1 mask (in MNI space) to functional space (subject-specific). The resulting mask, L_MotorCortex.nii.gz, is now ready to use in this tutorial. Usage includes:
python pynealScanner_sim.py [-filePath] [-random] [-dims] [-TR] [-sockethost] [-socketport]
Input arguments:
• -f/-filePath: path to the 4D nifti image the user wants to use as the "scan" data. Here we are using "func.nii.gz" provided in ~/pyneal-tutorial/analysisTutorial as our input data.
• -r/-random: flag to generate random data instead of using a pre-existing nifti image.
To run the tutorial, the following steps are required:
I. Launch the pynealScanner_sim.py script
python pynealScanner_sim.py -f ~/pyneal-tutorial/analysisTutorial/func.nii.gz -t 1000 -sh 127.0.0.1 -sp 5555
Here we are setting the TR to 1000 ms, the host socket number to 127.0.0.1 and the port number to 5555. This tool will simulate the behavior of Pyneal Scanner. During a real scan, Pyneal Scanner will send data to Pyneal over a socket connection. Each transmission comes in two phases: (1) a json header with metadata about the volume and (2) the volume itself. Once launched, the user should see details about the simulated scan and a prompt to press ENTER to begin.
IV. Hit Enter to begin the simulated scan
As soon as the scan simulation begins, Pyneal Scanner begins processing and transmitting volumes of the provided data (func.nii.gz) to Pyneal, which calculates the mean activation within the target region on each volume and stores the results on Pyneal's Results Server. As the scan is progressing, the user should see information about each volume appear in both the Scan Simulator and Pyneal terminals, indicating the volumes are being successfully transmitted and processed.
V. Results
• At the completion of the scan, the user can find the following Pyneal output files in ~/pyneal-tutorial/analysisTutorial/output/pyneal_001 (Note: the directory names increase in sequence. If this is the first time saving output to this directory, it will be _001, otherwise it will be a larger number):
• pynealLog.log: complete log file from the scan.
• Since the input data here came from a simple hand squeezing task where we computed the average signal within the Left Motor Cortex, we expect to see a fairly robust signal in the results, following the alternating blocks design of the task.
• To confirm, the user can open the results.json file and plot the results at each timepoint using the user's preferred tools (e.g., Python, Matlab); one way to do this is sketched below.
Note - it is also possible to use this setup to test communication with an End User (e.g., experimental presentation script) if desired. See https://jeffmacinnes.github.io/pyneal-docs/simulations/ for more details.
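Following up on the plotting suggestion above, the snippet below shows one hedged way to inspect the saved output with Python. The path and the 'average' key are assumptions made for this sketch; the actual key names in results.json depend on the analysis that was run, so the file should be inspected first.

import json
import matplotlib.pyplot as plt

# Hypothetical path to the output of the single-ROI tutorial run
with open('output/pyneal_001/results.json') as f:
    results = json.load(f)

# Assume one entry per volume, keyed by volume index, each holding the ROI average
vol_keys = sorted(results, key=int)
signal = [results[k]['average'] for k in vol_keys]

plt.plot(range(len(signal)), signal)
plt.xlabel('Volume index')
plt.ylabel('Mean signal in L_MotorCortex')
plt.title('Built-in ROI average across the run')
plt.show()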
Neurofeedback: Correlation Between Two ROIs Using a Custom Analysis Script
Example: Using a custom analysis script to calculate the correlation between two ROIs and use the correlation as feedback during a task. E.g., a researcher wishes to calculate the correlation between the primary motor cortex and the caudate nucleus and use that correlated signal as neurofeedback in a hand squeezing task. This tutorial uses the Pyneal Scanner simulation script, which is located in: ~/pyneal/utils/simulation/pynealScanner_sim.py. To perform this tutorial the following steps are required:
I. Setup Scan Simulator
Like in the example in "Neurofeedback: Single ROI Averaging Using Built-in Analysis Functions," the first step is to set up the Pyneal Scanner Simulator, which will send our sample dataset to Pyneal for analysis. Open a new terminal and navigate to the Simulation Tools directory:
cd ~/pyneal/utils/simulation
Run pynealScanner_sim.py and pass in the path to our sample dataset.
II. Setup Custom Analysis Script
This tutorial includes a custom analysis script that the user will load into Pyneal. This script can be found at: ~/pyneal-tutorial/analysisTutorial/customAnalysis_ROI_corr.py. Open this file to follow along below. This script is adapted from the customAnalysisTemplate.py that is included in the Pyneal toolkit. There are two relevant sections to this script: initialize and compute.
initialize
The analysis script includes an __init__ method that runs once Pyneal is launched. This section should be used to load any required files and initialize any variables needed once the scan begins. In the __init__ method in the tutorial script, the user will find the following code block:

## Load the mask files for the 2 ROIs we will compute the correlation between
# Note: we will be ignoring the mask that is passed in from the Pyneal GUI
mask1_path = join(self.customAnalysisDir, 'masks/L_Caudate.nii.gz')
mask2_path = join(self.customAnalysisDir, 'masks/L_MotorCortex.nii.gz')
mask1_img = nib.load(mask1_path)
mask2_img = nib.load(mask2_path)
self.masks = {
    'mask1': {
        'mask': mask1_img.get_data() > 0,     # create boolean mask
        'vals': np.zeros(self.numTimepts)     # init array to store mean signal on each timept
    },
    'mask2': {
        'mask': mask2_img.get_data() > 0,
        'vals': np.zeros(self.numTimepts)
    }
}

## Correlation config
self.corr_window = 10    # number of timepts to calculate correlation over

The above block of code does the following:
• Loads each mask file. Note that while the template provides a reference to the mask file loaded via the Pyneal GUI, we are ignoring that mask and instead loading each mask manually.
• Pre-allocates an array for each mask where we will store the mean signal within that mask on each timepoint.
• Sets the correlation window to 10 timepoints, meaning that, with each new volume that arrives, the correlation between the two ROIs will be computed over the previous 10 timepoints.
compute
The compute method will be executed on each incoming volume throughout the scan, and provides the image data (vol) and volume index (volIdx) as inputs. This method should be used to define analysis steps. The compute method in the tutorial script does the following:
• Computes the mean signal within each mask at the current timepoint.
• Once enough volumes have arrived, computes the correlation between the two ROIs over the specified correlation window.
• Returns the result of the correlation as a dictionary.
The results of any custom script need to be returned as a dictionary. Pyneal will integrate these results into the existing pipeline and the results will be available via the Pyneal Results Server (for requests from an End User if desired) in the same manner as with the built-in analysis options. A hedged sketch of a compute method implementing these steps is shown below.
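The following is a minimal sketch of what such a compute method might look like. It is not the verbatim code from customAnalysis_ROI_corr.py; it simply follows the three steps listed above and assumes the self.masks and self.corr_window attributes defined in the __init__ block, with numpy imported as np as in that block. The 'correlation' result key is an assumption made for this sketch.

import numpy as np

def compute(self, vol, volIdx):
    # Belongs to the custom analysis class; vol is the 3D volume, volIdx its time index
    # Step 1: mean signal within each ROI at the current timepoint
    for name in ('mask1', 'mask2'):
        roi = self.masks[name]
        roi['vals'][volIdx] = vol[roi['mask']].mean()

    # Step 2: once enough volumes have arrived, correlate the two ROIs over the window
    result = {'correlation': None}
    if volIdx >= self.corr_window - 1:
        start = volIdx - self.corr_window + 1
        x = self.masks['mask1']['vals'][start:volIdx + 1]
        y = self.masks['mask2']['vals'][start:volIdx + 1]
        result['correlation'] = float(np.corrcoef(x, y)[0, 1])

    # Step 3: results must be returned as a dictionary
    return result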
• The custom analysis script computed a sliding window correlation between the Left Motor Cortex and the Left Caudate throughout the task.
• To visualize these results, the user can open the results.json file and plot the results at each timepoint using their preferred tools (e.g., Python, Matlab); a minimal Python sketch for this step is given at the end of the "Free and Open-Source" subsection below.

Advantages of the Pyneal Toolkit

rt-fMRI Software

A variety of tools currently exist that support real-time fMRI to varying degrees, including AFNI (Cox and Jesmanowicz, 1995), FIRE (Gembris et al., 2000), scanSTAT (Cohen, 2001), STAR (Magland et al., 2011), FieldTrip toolbox extension (Oostenveld et al., 2011), Turbo-BrainVoyager (Goebel, 2012), FRIEND (Sato et al., 2013), BART (Hellrung et al., 2015), OpenNFT (Koush et al., 2017), and Neu3CA-RT (Heunis et al., 2018). At a time when implementing real-time fMRI meant researchers had to develop custom in-house software solutions, these tools presented a valuable alternative, catalyzing new experiments and supporting pioneering early research with real-time fMRI. Nevertheless, the existing software options are limited in one or more ways that fundamentally restrict who can use them and where, and what types of experiments they support. Please see Table 1 for a comparison of the Pyneal toolkit to the other main rt-fMRI software packages currently available. For example, some of these tools require users to purchase licensing agreements for the package itself (e.g., Turbo-BrainVoyager), or are designed to work inside of commercial software packages like Matlab. In addition, a number of these tools are designed to only support a particular usage of real-time fMRI, like neurofeedback, while not supporting other uses of rt-fMRI. And lastly, even in cases where the underlying code is customizable, it often requires proficiency with advanced computer languages like C++. We built the Pyneal toolkit to directly address these limitations.

Free and Open-Source

The Pyneal toolkit offers a number of key features that make it an appealing package for existing real-time fMRI practitioners as well as those new to the field. First, in support of the growing movement toward open science, the Pyneal toolkit is free and open source. It is written entirely in Python (see text footnote 2), and all required dependencies are similarly cost-free and open. We chose to use Python specifically because it is sufficiently powerful to handle the computational demands of fMRI analysis in real-time and the language is comparatively easy for users to read and write, an important consideration when designing a package that encourages customization by researchers. Furthermore, the number of libraries designed to aid scientific computing (e.g., Numpy, Scipy, Scikit-learn), and the large user support community worldwide, have led Python to surge in popularity among the sciences (see Perez et al., 2011), and neuroscience in particular (see Gleeson et al., 2017 and Muller et al., 2015). The Pyneal toolkit follows style and documentation guidelines of scientific Python libraries, and when possible uses the same data formats and image orientation conventions as popular neuroimaging libraries (e.g., NiBabel). Moreover, the source code for the Pyneal toolkit is hosted via a GitHub repository, which ensures users can access the most up-to-date code releases, as well as track modifications and revisions to the codebase across time (Perkel, 2016).
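As a small illustration of how these open outputs can be inspected with standard Python tools (the Results steps in both tutorials above suggest plotting results.json at each timepoint), a minimal sketch follows. The internal layout of results.json and the field name used here ('weightedAverage') are assumptions; adjust them to match whatever the analysis actually produced.

import json
import matplotlib.pyplot as plt

# Load the per-volume results written by Pyneal at the end of the scan.
with open('results.json') as f:
    results = json.load(f)

# Assume one entry per volume index, each holding a dict of computed values;
# the field name below is an assumption, not a documented Pyneal key.
vol_indices = sorted(results, key=int)
values = [results[k]['weightedAverage'] for k in vol_indices]

plt.plot([int(k) for k in vol_indices], values)
plt.xlabel('volume index')
plt.ylabel('analysis result')
plt.show()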
Flexibility in Handling Multiple Data Formats and Local Computing Configurations

A second advantage the Pyneal toolkit offers is flexibility in handling multiple different data formats and directory structures. MRI data can be represented via a number of different file formats, depending in part on the particular scanner manufacturer and/or automated processing pipelines that modify data before it gets written to disk. For instance, the scanner may store images using a universal medical imaging standard like DICOM, a more specific neuroimaging standard like Nifti, or a proprietary format like the PAR/REC file convention currently seen with Philips scanners. Moreover, even within a given file format, there is considerable variation in how data are represented. For instance, a single DICOM image file may represent a 2D slice (GE scanners) or a 3D volume arranged as a 2D mosaic grid (Siemens scanners). Lastly, even when two imaging centers have the same scanners and use the same data formats, there can be differences in how the local computing networks are configured. This affects where data is saved, and how the Pyneal toolkit can access existing pipelines. The Pyneal toolkit was designed to be robust to these differences across scanning environments.

Relatedly, a third advantage is the ability of the Pyneal toolkit to accommodate multiple different environmental variations. Importantly, the Pyneal toolkit splits data handling from real-time analysis tasks into modular components that run via independent processes. Pyneal Scanner is responsible for reading incoming MRI data in whatever form it takes, accessing the raw data, and reformatting it to a standardized form that is compatible with subsequent analysis stages of Pyneal. The re-formatted data is then passed to the preprocessing and analysis stage of Pyneal via TCP/IP-based interprocess communications. The modular nature of this configuration offers important advantages. For one, Pyneal Scanner and Pyneal are able (though not required) to run on separate workstations. This is important as researchers may lack the administrative permissions needed to significantly modify the computing environment of a shared scanning suite. For example, in a situation where the scanner console does not export images to a shared network directory, Pyneal Scanner can run on the scanner console and pass data to a remote workstation running Pyneal, minimizing the risk of interfering with normal scanner operations. In other situations where the scanner does export images to a shared network directory, Pyneal Scanner and Pyneal can run on the same workstation. The modular nature of the Pyneal toolkit's design means that it can be modified to support new data formats in the future without having to drastically alter the core codebase. Importantly, if the Pyneal toolkit does not currently support a desired data format, researchers can modify Pyneal Scanner to accommodate their needs without having to modify the rest of the Pyneal toolkit core utilities. As the entire toolkit is free and open-source, users are welcome and encouraged to do so.

Fully Customizable Analyses

A fourth, and chief, advantage that the Pyneal toolkit offers is flexibility of analyses. The ability to design and implement uniquely tailored analysis routines via custom analysis scripts means that users can adapt the method to their research question rather than having to constrain their research questions based on the methodology.
This flexibility means that the Pyneal toolkit can be used to accommodate a broader and more diverse spectrum of research and experimental goals, offering numerous benefits to the real-time neuroimaging community and general scientific advancement. Importantly, in the Pyneal toolkit, the entire incoming data stream is made available, and by using custom analysis scripts, researchers can extract, manipulate, and interrogate whichever portions of that data are most relevant to their question. In addition, researchers are able to use these results in real-time for whatever purpose they choose, including neurofeedback, experimental control, quality-assurance monitoring, etc. The ability to design and test one's own analyses will expedite the growth and maturation of real-time neuroimaging more broadly. It is worth highlighting that real-time fMRI is still a comparatively new approach, with many open questions regarding imaging parameters, experimental design, effect sizes, subject populations, long-term outcomes, and general best practices (Sulzer et al., 2013a). Determining satisfactory answers to these questions has been slow, in part due to the limitations of existing software and a small community of users. Customizing analyses in the Pyneal toolkit allows researchers to work in a rapid and iterative way to explore new methods, addressing these questions and establishing a framework for future studies. It also means that researchers can keep up with the latest analytic advances in their domain without having to rely on external software developers to release new updates for their real-time tools. In short, the Pyneal toolkit is powerful precisely because it does not presuppose how researchers intend to use it; our conviction is that advances in real-time neuroimaging are best achieved by empowering the community to develop those advances itself.

Limitations

While the Pyneal toolkit offers a convenient and flexible infrastructure for accessing and using fMRI data in real-time, there are a few limitations with the software presently. First, the Pyneal toolkit does not currently include built-in online denoising of the raw fMRI data. Depending on the application, a user may find that simple denoising steps prior to analysis, such as slow-wave drift removal or head motion correction, increase the signal-to-noise ratio and improve the statistical power of the analysis. We plan to include built-in options for basic denoising in forthcoming software releases. In the meantime, the current version of the Pyneal toolkit allows users to implement their own denoising steps as part of a customized processing pipeline via a custom analysis script. Second, the Pyneal toolkit offers built-in support for standard data formats found across the three main scanner manufacturers, but does not currently support multiband acquisitions. As imaging technology advances, multiband acquisitions are becoming increasingly common as a way to increase coverage while maintaining short TRs. As such, we plan to offer built-in multiband support in an upcoming software update. Due to the modular nature of the Pyneal toolkit, multiband support can be integrated as a component of Pyneal Scanner without requiring significant changes to the bulk of the code base. Third, the Pyneal toolkit was built and tested using Python 3 on Linux and macOS environments. While there are no obvious incompatibilities with a Windows environment, we have not had the resources to thoroughly test the Pyneal toolkit across multiple platforms.
We encourage Windows users to run the Pyneal toolkit via a virtual machine configured as a Linux operating system. In future versions of the Pyneal toolkit we hope to offer broader support across platforms, or containerize the application using a tool like Docker (https://www.docker.com/) in order to be platform agnostic. While our team is working to improve the aforementioned limitations, we would also like to extend an invitation to the neuroimaging community to contribute directly to the Pyneal toolkit. The Pyneal toolkit was developed with the open-source ethos of sharing and collaboration. It lives in the GitHub ecosystem, which facilitates collaborative work across multiple teams and/or individuals, and offers an easy way for users to submit new features, discuss code modifications in detail, and log bugs as they are discovered. Working collaboratively in this manner ensures efficiency in expanding the software's capabilities and improving stability. Anyone interested in working on the Pyneal toolkit can find information in the Contributor Guidelines and Contributor Code of Conduct outlined in the documentation at the Pyneal toolkit GitHub repository: https://github.com/jeffmacinnes/pyneal.

CONCLUSION

In this article we describe the Pyneal toolkit, a free and open-source software platform for rt-fMRI. The Pyneal toolkit provides seamless access to incoming MRI data across a variety of formats, a flexible basis to carry out preprocessing and analysis in real-time, a mechanism to communicate results in real-time with remote devices, and interactive tools to monitor the quality and status of an on-going real-time fMRI experimental session. In addition to a number of basic built-in analysis options, the Pyneal toolkit offers users the flexibility to design and implement fully customized processing pipelines, allowing real-time fMRI analyses to be tailored to the experimental question instead of the other way around [for two examples using the Pyneal toolkit with different experimental approaches see MacInnes et al. (2016) and MacDuffie et al. (2018)]. As the rt-fMRI community grows worldwide, new tools are needed that allow researchers to flexibly adapt to suit their unique needs, be that neurofeedback from a single or multiple regions, triggering task flow, or online multivariate classification. The Pyneal toolkit offers researchers a powerful way to address the current open questions in the field, and the flexibility necessary to adapt to answer future questions.
Anne Hardy, Salmonella Infections, Networks of Knowledge and Public Health in Britain 1880–1975 (Oxford: Oxford University Press, 2015), pp. x+249, £60, hardback, ISBN: 978-0-19-870497-3.

of open heart surgery. The rapid adoption of open heart surgery is indicated by the 8792 operations in 290 hospitals in 1961 in the United States. The presence of many heart disease patients in hospitals led in the 1960s to the establishment of coronary care units, which were based on intensive care units. These required electronic monitoring equipment, highly skilled nurses, and cardiac arrest teams. Coronary angiography in the 1960s used catheters to introduce a contrast medium in the coronary arteries that enabled x-rays to show blockages of the arteries. This led to operations to remove the obstruction, at first using coronary artery bypass graft surgery (CABG) that became popular in the 1970s. The versatility of the coronary catheter was demonstrated when a balloon was placed in its tip and expanded at the site of a coronary artery blockage to restore blood flow. Angioplasty was widely adopted in the 1980s and was also used to open obstructed heart valves. The subsequent reocclusion of arteries led to the use of catheters to place metal mesh stents inside the arteries. In the 1980s, drugs became available that dissolved clots that occluded arteries and they became very popular. Computers permitted a better understanding of heart rhythm disorders and transistors enabled the development of devices that provided various types of electrical stimulation to the heart to restore normal rhythms. Implantable pacemakers were developed in the 1970s, and followed by implantable automatic defibrillators in the 1980s. The invasive nature of diagnostic catheterisation and angiography led to use of less invasive techniques beginning in the 1980s. Electrocardiography showed cross-sectional slices of the heart. Other methods included radioisotopes, computerised tomography scans and magnetic resonance imaging. Heart transplants were a method of treating heart disease first used in the 1970s, but they were uncommon because of the high cost and low success rate. The prevention of heart diseases became a concern about mid-century, but Fye states that 'heart specialists devoted little time or energy to prevention' (p. 473) because they were too busy with diagnosis and treatment. Drug treatment became available to treat hypertension. Concern with cholesterol in the blood led to programs to reduce dietary cholesterol and later to the statin drugs. Attention was given to the dangers of cigarette smoking and the importance of physical activity. This book contains detailed and readable descriptions of the development and utilisation of many methods of diagnosis and treatment of heart diseases in the United States. Technical terms are explained and topics are described individually to permit selective reading. The focus is primarily on the introduction of the methods rather than their general adoption and associated problems. The book includes considerable discussion of internal organisational and personnel matters at the Mayo Clinic. Caring for the Heart is an extraordinary achievement that is an essential source of information about heart diseases, which were the primary causes of adult deaths in advanced countries in the twentieth century. It deserves the highest praise.

In Salmonella Infections, Networks of Knowledge and Public Health in Britain 1880–1975, Anne Hardy provides an overview of how salmonella infections were understood and managed as a public health problem in the late nineteenth and twentieth centuries. She discusses how infections spread via eggs, flies, meat, milk and shellfish.
Hardy also pays much attention to the laboratories where scientists investigated the causes of salmonella poisoning and built an international research network based upon interest in the problem. In addition, she examines the various sites in which food poisoning occurred. Salmonella Infections investigates the intersection between animals and humans, and the ways in which food poisoning became understood as laboratory medicine evolved and germ theory became accepted. In her introduction, Hardy makes a case for the historical significance of salmonella. While she convincingly points out that a lot of people have suffered from the problem (in both the past and present), the rationale provided for the importance of the subject to medical historiography is rather weak. Moreover, the methodological approaches adopted in Salmonella Infections seem somewhat under-ambitious in an era when medical historians are increasingly turning towards more exciting interdisciplinary research avenues and engaging with medical communities in interesting ways. The main focus here is on science and scientists, which is disappointingly limiting. Hardy pays scant attention to personal experiences of food poisoning or being patients. But much information is provided on scientific investigations, laboratory workers and expert ideas on salmonella based primarily on scientific books and articles, as well as public health reports. More personal accounts drawn from sources such as diaries might have added more depth and human interest to this study. This emphasis on science also means that themes of major interest are missing. Hardy barely discusses domesticity and gender, despite the centrality of food hygiene to the early-twentieth-century drive for improved infant welfare in an era when the 'gospel of germs' was spreading. Indeed, gender is a remarkably curious omission in a book about food production and consumption. For instance, Hardy persuasively argues that anxieties about flies transmitting disease via food arose once germ theory had gained acceptance. Yet she fails to consider how this affected domestic life, does not acknowledge the new ways in which housewives were encouraged to think about food preparation and ignores the shifting hygienic practices which become encoded in early-twentieth-century cookery. Restaurants are discussed in more depth. Public health is a further important component of this study. Hardy observes that although scientists eagerly sought to persuade the public (and food producers) that hygiene was of utmost importance, they could only do so much when faced with scepticism, disinterest and socio-economic priorities. Although accurate, this is not a particularly new claim given the prominence of this line of thought in preexisting histories of food science. In her introduction, Hardy admits that although her focus is on Britain, Salmonella Infections does not fully explore all of the components of what then constituted the United Kingdom, specifically Ireland and Scotland. In fact, her main focus is on England, not Britain. This is a shame. For instance, closer investigation of Ireland could have facilitated an intriguing comparative case study, given the importance of food to nineteenth-century Irish history. Modern food cultures developed in contrasting ways in England and Ireland. Research into food hygiene pursued by Dublin's prolific Medical Officer of Health, Charles Cameron, is cited only once. 
A quick glance through the pages of publications such as the Dublin Journal of Medical Science, as well as the British publication The Analyst, would have revealed that Cameron had much more to say on matters such as shellfish consumption and the transmission of disease through food consumption. Cameron also left a considerable amount of archival research detailing the ways in which he promoted and investigated food hygiene across Ireland, and internationally, throughout his lengthy career. He was central to the networks of knowledge of food hygiene and analysis discussed by Hardy. Linked to this, certain other potentially important topics are given surprisingly short shrift. For instance, Hardy mentions anger among shellfish traders as science began to castigate their business as unhygienic and blame them for transmitting typhoid. Yet the issue of the relationship between food businesses and the emerging food sciences of the late nineteenth century is much vaster than this and deserves far more credit in Hardy's narrative. Given that Dublin (as Hardy briefly acknowledges) was viewed as a hotbed of typhoid precisely due to its lively shellfish trade, comparison of different regions of the United Kingdom would again have been beneficial, and could have been used to replace the large amount of science-focused detail provided. Overall, Salmonella Infections is a worthy attempt to draw the issue of food poisoning to the attention of medical historians. It will prove relevant to readers with an interest in nineteenth- and twentieth-century science and public health. Hardy's narrow focus will lessen the appeal of her book to researchers engaged in interdisciplinary fields such as food studies. The book is generally well written, although there are typographical errors and the chapter titles are problematic. Surely a better title than 'things with wings' could have been found for a chapter on how flies and birds transmitted disease?

Ian Miller

There is no shortage of books that explore the history of colonial medicine in Africa. Most of them revolve around a particular colonial empire - usually before the Second World War - or more often yet, they focus on a particular disease. In Le médicament qui devait sauver l'Afrique, however, Guillaume Lachenal gives us something new: a tale of imperial scientific hubris turned deadly colonial folly (bêtise) told through the lens of another story: the story of a drug called Lomidine. Once called a 'wonder drug' - believed to prevent sleeping sickness - Lomidine (or pentamidine) was administered across wide regions in sub-Saharan Africa where the disease was endemic. In a mere matter of years, however, the more sinister effects of the drug were rapidly becoming apparent. In the heyday of its use, numerous people died of complications associated with Lomidine injections. Further tests in the 1960s revealed the drug to be not only ineffective, but hazardous as well. In this book, Lachenal asks: knowing what we do today about this drug's 'dangerous uselessness', how can we understand the enthusiasm - obsession even - with which colonial doctors pursued the 'lomidinisation' campaigns of the 1950s? In what ways was this drug a tool of colonial power, as well as a site of both colonial edification and contestation? And finally, in what ways did imperialism shape the history of modern biomedical science in Europe after the Second World War?
Le médicament qui devait sauver l'Afrique argues that in the context of post-war colonial Africa, the imperial compulsion for modernisation -and in the case of sleeping sickness, eradication -
Assessing life cycle impacts from changes in agricultural practices of crop production This paper presents an improved methodological approach for studying life cycle impacts (especially global warming) from changes in crop production practices. The paper seeks to improve the quantitative assessment via better tools and it seeks to break down results in categories that are logically separate and thereby easy to explain to farmers and other relevant stakeholder groups. The methodological framework is illustrated by a concrete study of a phosphate inoculant introduced in US corn production. The framework considers a shift from an initial agricultural practice (reference system) to an alternative practice (alternative system) on an area of cropland A. To ensure system equivalence (same functional output), the alternative system is expanded with displaced or induced crop production elsewhere to level out potential changes in crop output from the area A. Upstream effects are analyzed in terms of changes in agricultural inputs to the area A. The yield effect is quantified by assessing the impacts from changes in crop production elsewhere. The field effect from potential changes in direct emissions from the field is quantified via biogeochemical modeling. Downstream effects are assessed as impacts from potential changes in post-harvest treatment, e.g., changes in drying requirements (if crop moisture changes). An inoculant with the soil fungus Penicillium bilaiae has been shown to increase corn yields in Minnesota by 0.44 Mg ha−1 (~ 4%). For global warming, the upstream effect (inoculant production) was 0.4 kg CO2e per hectare treated. The field effect (estimated via the biogeochemical model DayCent) was − 250 kg CO2e ha−1 (increased soil carbon and reduced N2O emissions) and the yield effect (estimated by simple system expansion) was − 140 kg CO2e ha−1 (corn production displaced elsewhere). There were no downstream effects. The total change per Mg dried corn produced was − 36 kg CO2e corresponding to a 14% decrease in global warming impacts. Combining more advanced methods indicates that results may vary from − 27 to − 40 kg CO2e per Mg corn. The present paper illustrates how environmental impacts from changes in agricultural practices can be logically categorized according to where in the life cycle they occur. The paper also illustrates how changes in emissions directly from the field (the field effect) can be assessed by biogeochemical modeling, thereby improving life cycle inventory modeling and addressing concerns in the literature. It is recommended to use the presented approach in any LCA of changes in agricultural practices. Introduction As the world population continues to expand, along with its demands for feed, food, fuel, and fiber, the necessity in achieving sustainable agricultural production is of urgent concern. To do so, several agricultural practices and concepts have been introduced, e.g. organic farming (Rigby and Cáceres, 2001), sustainable intensification (Pretty 1997), no/ low tilling (Tebrügge and Düring, 1999), and precision agriculture (Bongiovanni and Lowenberg-DeBoer 2004). Potentially, changes in agricultural practices can lead to trade-offs (burden shifting) as well as "upstream effects" (e.g., due to changes in the use of agricultural inputs such as seeds, fertilizers, pesticides, etc.). Life cycle assessment (LCA) is the obvious choice of methodology to study the environmental implications of such effects. 
However, the LCA literature contains surprisingly little guidance on how to systematically and consistently evaluate the change in environmental impacts of crop production following from a change in agricultural practices. Brentrup et al. (2004) presented an extended version of the general LCA approach to assess the environmental impacts of crop production. This allowed for a better characterization of the environmental impacts from different agricultural "standalone systems" but the methodology did not focus on the change in environmental impacts from a shift from one agricultural practice to another. Caffrey and Veal (2013) discussed various challenges and perspectives in agricultural LCA at a generic level but did not give detailed guidance to the LCA practitioner. Meier et al. (2015) reviewed 34 studies comparing organic and conventional farming based on LCA. The authors pointed out several challenges relating to data as well as methods and called for a better differentiation between nitrogen fluxes in different agricultural systems as well as the use of consequential LCA (including system expansion) to account for differences in analyzed farming systems. This need for improved methodology to assess nutrient flows and soil carbon dynamics in agricultural LCAs was also highlighted by Goglio et al. (2015). Jiang et al. (2014) discussed the use of biogeochemical models for informing LCA of energy crops but did not consider changes in management practices and their potential impact on production elsewhere. Numerous case studies compare agricultural practices by use of LCA (e.g., Keyes et al. 2015, Goossens et al. 2017, Houshyar and Grundmann 2017, Tricase et al. 2018). The common approach is to divide environmental impacts related to a fixed area of cropland by the yield of that land, thereby allowing for comparison across practices based on the same functional unit. While this approach is intuitive, it fails to capture potential changes elsewhere driven by a potential change in output (yield) from the cropland studied. This further supports the need for methodological guidance. The purpose of consequential LCA is to estimate the environmental consequences of a specific change (Weidema 2003), e.g., a change in crop demand or a change in cropping systems. This may involve changes in agricultural inputs, changes in soil nutrient flows, and changes in crop yields. If crop yields are changed, while there are no changes in demand, the change in crop supply will in turn affect production elsewhere (to balance out the change in crop supply). This must be considered in consequential LCA. The tools and concepts to assess changes in environmental impacts from changes in agricultural practices are already available but broadly applicable guidelines for their combined and consistent use in LCA have been lacking. The purpose of this paper is to demonstrate how concepts such as system equivalence, biogeochemical modeling, system expansion and/or modeling of indirect land use change (ILUC) can be combined to assess the environmental impacts from changes in agricultural practices and, thereby, relative changes in environmental impacts from the crops grown in the analyzed cropping systems. The main focus will be on global warming impacts. The purpose is also to categorize changes in impacts according to where they occur. The paper describes a generic approach and illustrates options at different levels of sophistication to derive LCA results.
The paper seeks to give detailed guidance on how to use results from biogeochemical models and ILUC models in agricultural LCA but it is beyond the scope of the paper to also give detailed guidance on how to run such supporting models. Finally, the use of the suggested approach is exemplified with a novel case study of a phosphate-solubilizing microbial inoculant introduced in US corn production. Methods The methodological description takes its point of departure in an area of cropland (A) to which a change is introduced. From here, this change will be referred to as the alternative agricultural practice or just the alternative practice. To analyze the consequences of introducing an alternative practice, a reference system is defined. The reference system is the area A with the functional output Q (quantity of crop). The alternative system (with the alternative practice introduced on the area A) must provide the same functional output to allow for direct comparison to the reference system (the principle of system equivalence; Hauschild et al. 2018). When this has been ensured, the impacts from introducing the alternative practice can be quantified by analyzing the differences between the reference system and the alternative system. To illustrate how different aspects of the alternative practice (e.g., change in inputs, change in field emissions, and change in yield) influence the environmental impacts from producing a certain quantity of crop (Q), the change in impacts is divided into four categories (upstream, field, yield, and downstream), which will be discussed in the subsequent sections. The change resulting from a shift in agricultural practice within each category is defined as an "effect." Note that each of the four effects cover all impact categories considered and thereby can have multiple dimensions. Some of the effects may be assessed in different ways with different methodological sophistication. The paper introduces an overview of such published methods to provide the reader with different choices and to allow for sensitivity analyses to test the influence of these choices. Figure 1 illustrates the reference system and the alternative system. The area A (the field) receives agricultural inputs such as fertilizers and pesticides. Agricultural inputs also cover fuel and machinery for field work (sowing, harvesting, etc.). These inputs are associated with upstream life cycle impacts, i.e., emissions and resource use taking place prior to crop cultivation on the field. Fuel combustion during field work is the exception as that takes place during cultivation but is counted as an upstream impact because it is related to the fuel produced off the field (i.e., fuel production and combustion is counted in the same category). Fuel combustion is relevant because different agricultural practices may require different levels of field work and therefore different quantities of fuel. As shown in Fig. 1, there are also direct emissions from the field. These include (but are not limited to) carbon dioxide (CO 2 ) from changes in soil organic carbon (SOC), nitrous oxide (N 2 O) from microbial soil processes as well as nitrate (NO 3 − ) leaching to the aquatic environment. After harvest, the fresh crop may need to go through post-harvest treatment (e.g., drying to meet moisture specifications) before it is ready for sale as an agricultural commodity (referred to as crops to market in Fig. 1). 
In case the alternative agricultural practice results in a yield change, it is necessary to consider the impact on crop production elsewhere (system expansion) as illustrated in Fig. 1 (represented by crop cultivation on the area B). As mentioned above, the environmental consequences of introducing the alternative agricultural practice can be divided into four different effects, which will be discussed in detail in the following sections. One of these effects (the field effect) needs special attention if the alternative agricultural practice is applied to a crop that is grown in rotation with another crop. This special case has been discussed in Sect. S1 of the Electronic Supplementary Material item 1 (ESM 1).

Fig. 1 Illustration of the reference system and the alternative system producing the same functional output (Q) with different environmental impacts. The index sys refers to either the reference system (ref) or the alternative system (alt). Agricultural inputs represent upstream life cycle impacts, field emissions represent impacts stemming directly from the field, system expansion represents impacts "elsewhere," and post-harvest treatment represents downstream life cycle impacts.

Upstream effects

The shift in agricultural practice may involve a change in agricultural inputs (fertilizer, pesticides, etc.) to the area A. For instance, if shifting from conventional tilling to a no-till practice, there is a reduced need for fuel (for tilling). The environmental impacts from changes in agricultural inputs to the area A will be referred to as upstream effects. The upstream effects are simply characterized by summing up the difference in impacts from the agricultural inputs used on the area A in the reference system and the alternative system. This can be expressed as described in Eq. 1:

E_up,j = Σ_{i=1..n} (m_i,alt − m_i,ref) × I_i,j   (1)

where
- E_up,j is the upstream effect for impact category j
- m_i,alt is the quantity of agricultural input i to the area A in the alternative system
- m_i,ref is the quantity of agricultural input i to the area A in the reference system
- I_i,j is the life cycle impact for the impact category j for one unit of the input i
- n is the total number of agricultural inputs

The field effect

Field emissions from the area A (cf. Fig. 1) are likely to change when an alternative agricultural practice is introduced. This can happen for several reasons. If there are changes in the amount or type of fertilizers applied or if the crop yield is affected, the nutrient flows in the field will be impacted. Changes in yield can also impact emissions related to crop residues as well as soil organic carbon (SOC), e.g., due to larger crop roots. The impacts from changes in field emissions from the area A will be referred to as the field effect. Note that this effect covers emissions (incl. nutrient losses to the aquatic environment) associated with soil processes only. Hence, indirect emissions of N2O following from leaching and volatilization of N should also be included (aggregated default values of 1.1% and 1.0%, respectively, suggested by IPCC 2019), but emissions from field work (e.g., life cycle impacts from fuel production and use) are considered part of the upstream impacts (cf. explanation in the beginning of Sect. 2). Note also that the field effect relates only to the area A (i.e., the area where the change in agricultural practice occurs). Field emissions from the area B are considered part of the yield effect (see separate section).
This distinction has been made to allow farmers and other agricultural stakeholders to separate effects taking place "on site" (where the new agricultural practice is introduced) and effects taking place elsewhere ("off site"). The assessment of the field effect requires establishment of consistent life cycle inventories for different agricultural practices. As pointed out by Meier et al. (2015), this can be challenging. It is therefore recommended to apply biogeochemical models such as Century (Paustian et al. 1992), DayCent (Del Grosso et al. 2001), or DNDC (Li et al. 1992). Biogeochemical models (sometimes also referred to as soil-crop models) are designed to characterize nutrient flows in cropping systems as well as the impact of management changes on nutrient cycling and productivity in these systems. Hence, they are useful in the assessment of the field effect. Goglio et al. (2018) indicate that biogeochemical models, in comparison to simpler empirical equations, are particularly helpful in deriving reliable results for N2O emissions from cropping systems, thereby addressing some of the concerns mentioned in the introduction, e.g., those raised by Meier et al. (2015). The substances that should be accounted for as field emissions depend on the considered impact categories. N2O and CO2 from SOC changes will typically be the most important for global warming, whereas leaching and run-off of N and P will be important for nutrient enrichment. For these substances, biogeochemical models are very practical. Meanwhile, biogeochemical models also have limitations in terms of scope and assessment capabilities. Hence, issues such as leaching of heavy metals and active ingredients in pesticides may need to be modeled separately (if relevant for the impact categories considered in a specific LCA study). Once a biogeochemical model has been set up to simulate the soil processes on the area A in the reference system and the alternative system, field emissions from the two systems can be estimated (cf. Fig. 1). This is done by simulating production of the relevant crop over a modeling period long enough to determine representative average emissions, usually a few decades. On this basis, the field effect can be quantified by use of Eq. 2:

E_field,j = Σ_{i=1..m} (e_i,alt − e_i,ref) × P_i,j   (2)

where
- E_field,j is the field effect for impact category j
- e_i,alt is the quantity of field emission i from the area A in the alternative system
- e_i,ref is the quantity of field emission i from the area A in the reference system
- P_i,j is the specific characterization factor for the impact category j for one unit of the field emission e_i
- m is the total number of different field emissions

While biogeochemical models can be used to estimate annual, average field emissions from the area A (which can then be inserted in Eq. 2), one specific output requires special attention, namely CO2 emissions derived from changes in SOC. These CO2 emissions are different from other GHG emissions from the field because they are governed by long-term changes in the soil carbon stock. Hence, they must be treated differently from, for example, annual emissions of N2O stemming from the addition of nitrogen to the field. First, a change in SOC must be converted to a corresponding amount of CO2 by use of stoichiometry, i.e., 1 kg of C corresponds to −44/12 kg CO2. The negative sign indicates that a positive change in SOC (carbon sequestration) corresponds to a negative CO2 emission (binding of carbon from the atmosphere).
Secondly, it must be considered how to assign an appropriate amount of SOC-related CO2 emissions to the output from the area A. This is challenging because SOC levels adjust slowly to changes in practices (moving towards an equilibrium state, which matches inputs and outputs of carbon). Hence, estimates of SOC changes will depend on the time perspective applied, creating the need for a well-considered approach to time accounting. Currently, there is no well-defined procedure for how to account for SOC changes in LCA (Goglio et al. 2015), but the following sections outline two approaches that have both previously been used in the literature. The methods will be presented with an increasing level of sophistication.

SOC modeling: 20-year annualization

One option to account for SOC-related CO2 emissions from the area A is to calculate an annual average based on the first 20 years of the modeling period applied in the biogeochemical modeling. Specifically, e_CO2,alt in Eq. 2 then becomes the change in SOC in the alternative system during the first 20 years multiplied by −44/12 kg CO2 per kg C and divided by 20. The same approach is applied to determine e_CO2,ref in Eq. 2. While the 20-year annualization approach builds on an arbitrary period, there is some precedent for its use. It has been applied in the life cycle GHG accounting method in the European Renewable Energy Directive (EC 2009) and in LCA studies by Knudsen et al. (2010) and Hamelin et al. (2012). Note that a different choice of annualization period would yield substantially different results. A 100-year period could reduce the result by a factor of 5 and a 1-year period could increase the result by a factor of 20. If the 20-year annualization approach is applied, it is important to interpret the SOC results from the biogeochemical modeling carefully. Due to their intended complexity in representing SOC dynamics, these models are able to estimate the inter-annual changes in SOC and crop carbon inputs as influenced by year-to-year climate variability that can sometimes be difficult to detect in measurements. Hence, there can be a need to smooth out the yearly SOC changes over time to derive an appropriate 20-year trend in SOC change. There are several options for doing that. One of them is described by VandenBygaart et al. (2008), where they fit the output from the CENTURY model to a first-order exponential equation.

SOC modeling: time-independent approach

Another option to account for SOC-related CO2 emissions has been described by Petersen et al. (2013). This approach does not rely on an arbitrary time horizon (annualization period) and will therefore be referred to as the time-independent approach (TIA). The time-independent approach is based on the change in radiative forcing related to a single event with impact on SOC. In the present paper, such an event would be the introduction of a new agricultural practice during one growth cycle for a crop grown on the area A. This "one-time intervention" would impact the subsequent development of SOC because a change in SOC in 1 year provides a different starting point for subsequent years. The alternative temporal development in SOC can be compared to the temporal development in the reference system (the "baseline"). By conversion of the differences in SOC into radiative forcing, the global warming potential (GWP) can be determined for any given accounting period.
This approach is easier to justify scientifically than the more arbitrary annualization approach but may also be more challenging to apply. The aim is to derive a value that represents (e_CO2,alt − e_CO2,ref) in Eq. 2. To do this, a biogeochemical model can be set up to characterize a single year of the alternative agricultural practice followed by 99 years of the previous practice. As a reference (baseline), 100 years of the initial practice (i.e., the practice applied in the reference system) is also modeled. This procedure will allow for the tracking of the differences in SOC (year-by-year) between the alternative system and the reference system over the full accounting period (100 years if GWP100 is used as the global warming metric). To derive representative results, this curve (difference in SOC over time) should be smoothed out by use of an exponential fit function. This gives a generalized picture of the difference in SOC between the two systems in each year of the accounting period. Hence, the difference in CO2 emissions can be calculated for each year (stoichiometric conversion). The difference in CO2 emissions in a given year is then multiplied by a characterization factor, which assigns a certain weight to the emission. This is based on CO2's atmospheric decay function and the timing of the emission in the accounting period as described by Petersen et al. (2013) and further elaborated by Schmidt and Brandão (2013, Sect. 3.1). The emission in year one will have a characterization factor of 1, while characterization factors for the end of the accounting period will be close to zero (because a late emission will have little impact within the accounting period). The time-dependent characterization factors are available in Sect. S2 of ESM 1. The difference in CO2 emissions for each year is multiplied by the corresponding time-dependent characterization factor and results for all years are summed up to provide an estimate of (e_CO2,alt − e_CO2,ref), which can then be used in Eq. 2. Note that the described approach is not dependent on an arbitrary annualization period because it relates SOC changes directly to one 'batch' of output from the area A. Thereby, the CO2 field effect can be viewed in isolation for one growth cycle of crop production (as with all the other emissions covered by the present methodological proposal).

Yield effect

If the studied alternative practice changes the crop yield on the area A (cf. Fig. 1), it will impact crop production elsewhere through market-mediated effects. This is because the overall demand for crops is not affected by the introduction of an alternative practice. If the crop yield on the area A increases, the additional output (ΔQ in Fig. 1) will displace crop production elsewhere (Schmidt 2008). In case of a reduction in yield (if shifting to a less intensive practice), farmers elsewhere will be incentivized to fill the supply gap. The environmental impacts from changes in crop production elsewhere will be referred to as the yield effect. To account for the yield effect, the alternative system must be expanded to ensure that it produces the same amount of crop (or an equivalent quantity of other crops with the same functional characteristics, e.g., same feed value in terms of nutritional composition) as in the reference system. If the change in output from the area A in the alternative system (as compared to the reference system) is ΔQ, the system is expanded (as shown in Fig.
1) with an area B, which produces a quantity of the crop c equal to −ΔQ. This ensures that the two systems have the same functional output (system equivalence) because any change in output from the area A is leveled out by a corresponding change (with the opposite sign) in crop production on the area B. Hence, the yield effect is determined by the impacts of a change in the quantity of crop production elsewhere. This has been described in Eq. 3:

E_yield,j = −ΔQ × I_c,j   (3)

where
- E_yield,j is the yield effect for impact category j
- ΔQ is the change in output of crops to market from the area A
- I_c,j is the life cycle impact for the impact category j for one unit of crops to market (c) displaced or induced elsewhere. I_c,j should exclude potential impacts from post-harvest treatment (see below).

As mentioned in the definitions above, I_c,j should not include impacts from potential post-harvest treatment. This is because the overall need for post-harvest treatment in the two systems is unrelated to potential yield changes on the area A. The reason is that the two systems compared (cf. Fig. 1) produce the same quantity of crops (Q). Only if the composition of the fresh crop (cf. Fig. 1) is different in the alternative system and the reference system (e.g., different moisture contents) could there be changes in impacts related to post-harvest treatment. Such changes will be referred to as downstream effects (see Sect. 2.4). The estimation of I_c,j in Eq. 3 requires an assessment of how crop production is affected elsewhere when the output from the area A changes. This can be approached in different ways. In the following, several options are discussed with increasing levels of sophistication but also increasing requirements for the LCA practitioner. A table with a simple overview of the different approaches is available in Sect. 2.3.5.

Simple system expansion

The simplest option to deal with the expansion of the alternative system is to assume that the crop production affected elsewhere is conventional production. For instance, if the alternative practice is improving wheat yields on the area A, the additional output can be assumed to displace conventional wheat production on the area B. LCI data for conventional crop production is often readily available in the literature and in LCA databases, at least for developed countries. In case the reference system (cf. Fig. 1) is characterized by conventional crop production, data from that system can be used to estimate I_c,j in Eq. 3. This approach will be referred to as simple system expansion and I_c,j will, for this particular approach, be referred to as I_c,j,s. Note that I_c,j,s refers to impacts from the specific crop c on the area B. If the applied inventory data for the crop c on the area B includes CO2 emissions (positive or negative) from ongoing changes in SOC, it is suggested to exclude this aspect in the estimation of I_c,j,s. The reason is that gradual SOC changes in a continuous cropping system do not reflect a situation where crop production on the area B is either initiated or ceased as a result of changes on the area A. Hence, the use of inventory data for SOC changes could give misleading results. The exclusion of SOC-related CO2 changes in the estimation of I_c,j,s can be seen as a "corrective simplification." Note that more sophisticated approaches are also discussed in the following sections.
While it may sound complicated to establish I_c,j,s without post-harvest treatment (as discussed above) and without SOC-related CO2 emissions, it can be quite simple. If an LCI is available for the crop to market (produced from the area B), it is only necessary to neglect any inputs from post-harvest treatment and any potential CO2 emissions from changes in SOC. Simple system expansion (although not necessarily dubbed as such) is applied in several LCA studies of grain-based bioethanol, which is co-produced with protein feed (also known as distillers' grains with solubles or DGS). Both Cai et al. (2013) and Wang et al. (2012) assumed that DGS would displace equivalent amounts of conventionally produced crops. Another example is found in a study by Nielsen and Oxenbøll (2007), who assessed the environmental impacts from enzyme production. One of the inputs studied was wheat starch, which is co-produced with wheat protein. To account for additional protein production (driven by the use of wheat starch), system expansion was used to consider displacement of conventional protein production elsewhere.

Marginal system expansion

In a slightly more sophisticated approach, it may be considered whether it is possible to determine a marginal type of crop production, which is affected by a change in output from the area A. It might not be standard, conventional crop production that is affected but instead a less competitive supplier, which is squeezed out of the market if yields are improved on the area A. For some crops and other agricultural products, the literature already describes suggested marginal suppliers. For instance, Weidema (1999) demonstrated how 1 kg of protein by-product from food production could be assumed to displace 3.9 kg soybeans, and Schmidt and Weidema (2008) suggested that palm oil took over from rapeseed oil as the marginal vegetable oil on the world market around the year 2000. Another example of marginal system expansion in agricultural LCA can be found for a comparison of conventional and organic milk production by Flysjö et al. (2011). Here, system equivalence in terms of milk production and co-produced calf meat was ensured by expanding one of the milk production systems to include displaced marginal meat production elsewhere. Schmidt (2015) utilized marginal system expansion in consequential LCA in a comparative assessment of rapeseed and palm oil, suggesting that the marginal suppliers of displaced fodder protein and energy were Brazilian soybean and Canadian barley producers, respectively. In summary, if a relevant marginal crop can be identified, a corresponding LCI can be established and I_c,j in Eq. 3 can be determined based on marginal system expansion. For marginal system expansion, I_c,j will be referred to as I_c,j,m. Note that I_c,j,m refers to impacts from the specific crop c on the area B. Further guidance on the identification of marginal suppliers is available in Weidema et al. (2009). As for simple system expansion, SOC-related CO2 emissions and post-harvest treatment should be excluded (cf. discussion above). As mentioned above, marginal system expansion is an attempt to identify the type of crop production ultimately affected by a change in output from the area A. In that sense, marginal system expansion seeks to bypass the many market-mediated steps between the initial "supply shock" (the change in output from A) and the crop production affected in the end.
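To make the bookkeeping of Eqs. 1-3 concrete, the following is a minimal Python sketch of the upstream, field, and yield effects with the displaced production elsewhere treated via simple (or marginal) system expansion. All numbers are hypothetical placeholders, not values from the corn case study or from any published inventory; in a real study, I_c,j would come from an LCI for the displaced crop with post-harvest treatment and SOC-related CO2 excluded, as described above.

def upstream_effect(inputs_alt, inputs_ref, impact_per_unit):
    # Eq. 1: sum over agricultural inputs of (m_i,alt - m_i,ref) * I_i,j
    return sum((inputs_alt[i] - inputs_ref[i]) * impact_per_unit[i]
               for i in impact_per_unit)

def field_effect(emissions_alt, emissions_ref, char_factor):
    # Eq. 2: sum over field emissions of (e_i,alt - e_i,ref) * P_i,j
    return sum((emissions_alt[i] - emissions_ref[i]) * char_factor[i]
               for i in char_factor)

def yield_effect(delta_q, impact_per_unit_crop):
    # Eq. 3: -dQ * I_c,j (crop production displaced or induced elsewhere)
    return -delta_q * impact_per_unit_crop

# hypothetical example for one hectare of the area A, global warming (kg CO2e)
e_up = upstream_effect({'input_kg': 0.5}, {'input_kg': 0.0}, {'input_kg': 0.8})
e_field = field_effect({'N2O_kg': 1.0, 'soil_CO2_kg': -200.0},
                       {'N2O_kg': 1.1, 'soil_CO2_kg': 0.0},
                       {'N2O_kg': 265.0, 'soil_CO2_kg': 1.0})
e_yield = yield_effect(delta_q=0.4, impact_per_unit_crop=320.0)  # Mg, kg CO2e per Mg
print(e_up + e_field + e_yield, 'kg CO2e per ha (illustrative only)')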
The alternative to this 'short-cut' is actual economic equilibrium modeling, which has been applied in recent years when assessing land use changes caused by changes in crop demand (see, e.g., Hertel et al. 2010, Kløverpris et al. 2010). This topic is addressed in the next sections. ILUC option 1: yield effect fully based on ILUC modeling The concept of indirect land use change (ILUC) covers market-mediated land use changes caused by changes in crop demand or crop supply. Such a change can be driven by the use of crop-based products (affecting crop demand) or by the introduction of yield-changing agricultural practices (affecting crop supply). Various methods and models to estimate ILUC and associated GHG emissions have been developed (De Rosa et al. 2016), but there is still no scientific consensus on how to address the issue (de Bikuña et al. 2018;Woltjer et al. 2017). However, the topic is highly relevant for agricultural practices with impacts on crop yields. Hence, two possible options for including ILUC as part of the yield effect will be discussed here. The best choice of option will need to be determined in relation to the specific LCA study in question and the characteristics of the ILUC model applied. The advanced ILUC options are more complex and demanding than simple or marginal system expansion but also theoretically more correct. With ILUC option 1, the impacts driven elsewhere by a change in yield on the area A are entirely based on ILUC modeling. In other words, I c, j in Eq. 3 is estimated solely by use of an ILUC model. This option is feasible if the applied ILUC model not only covers impacts from land use change but also impacts from changes in crop intensity. Further explanation follows below. ILUC modeling can in itself be viewed as a complex and sophisticated form of system expansion where cropland and other land uses can displace each other as a result of a studied change. In general, markets can react to a change in crop supply from a specific area in three ways (Kløverpris et al. 2008. (1) Crop production can be adjusted by changes in production intensity, i.e., adjustment of agricultural inputs to match a new supply situation (adjusting crop yields to a new economic optimum). (2) Crop production can also be adjusted by bringing new land into production or taking existing cropland out of production. (3) Changes in crop supply can lead to changes in crop use patterns, i.e., certain uses of crops may be either reduced or increased. The interplay between the three above-mentioned effects (change in intensity, change in land use, and change in use patterns) determines the total impact from the studied change. If the applied ILUC model incorporates both the intensity and land use aspect, it can be used to assess the impact of producing one additional unit (or one unit less) of 'crop to market' on the area A (cf. Fig. 1). In other words, the ILUC model can be used to derive an estimate of I c, j (here denoted I c, j, ILUC1 ) in Eq. 3 encompassing the full market response and associated impacts from a change in crop supply from the area A (cf. Fig. 1). Results of ILUC models are typically related to an area of land occupied for production of an item under study (e.g. an area required for bioenergy crops). This land occupation triggers the indirect land use change. In the present paper, however, the triggering land use occupation could be either positive or negative depending on the yield impact from the alternative agricultural practice. 
If the output from the area A increases by ΔQ, it means that the initial production (Q) could be maintained on an area smaller than A. It is this initial land saving that triggers the indirect land use change, which ultimately reduces pressure on land elsewhere. The initial reduction in land occupation can be quantified as the fraction of the area A no longer needed to maintain the production of Q. On this basis, Eq. 3 can be re-written (specific to ILUC option 1) into Eq. 4:

E yield, j, ILUC1 = -ΔQ · I c, j, ILUC1 = -ΔQ · (A · T / (Q + ΔQ)) · I ILUC, j, A    (Eq. 4)

where
- E yield, j, ILUC1 is the yield effect expressed on the basis of ILUC option 1
- ΔQ is the change in output of crops to market from the area A
- I c, j, ILUC1 is the ILUC impact in impact category j per unit of additional output (crops to market) from the area A
- Q is the output of crops to market from the area A in the reference system (cf. Fig. 1)
- A is the area where the alternative practice is introduced (cf. Fig. 1)
- T is the time of land occupation on the area A, i.e., the effective duration of the full crop cycle
- I ILUC, j, A is the ILUC impact in impact category j per unit of land occupation in the region where A is located

It follows from Eq. 4 that I c, j, ILUC1 equals (A · T / (Q + ΔQ)) · I ILUC, j, A. Note that I ILUC, j, A needs to be estimated by use of an ILUC model. Meanwhile, some ILUC models may be able to estimate I c, j, ILUC1 directly, which then simplifies the application of ILUC option 1. Due to the variety of existing ILUC models, it is not feasible to provide formulas for all cases in the present paper.

The advanced ILUC approaches (both options 1 and 2) avoid the complexities relating to SOC changes on the area B in Fig. 1 (cf. discussion in Sect. 2.3.1). This is because the approach considers general market effects in terms of land use and intensification, whereby effects are not confined to a single specific area (B in Fig. 1). To be consistent with the principle of system equivalence (same output from compared systems), option 1 is only feasible with an ILUC model that assumes a fully elastic market response in the long run, where a change in supply or demand is fully compensated through changes in intensification and land occupation (i.e., where there are ultimately no changes in sectorial crop use patterns).

ILUC option 2: yield effect partially based on ILUC modeling

With ILUC option 2, an estimate of impacts from indirect land use change is added to the impacts from crop production on the area B (determined by simple or marginal system expansion). In other words, the land occupation associated with the crop(s) displaced (or induced) is used as a starting point for estimating ILUC impacts. These impacts are then added to the other emissions associated with crop production on the area B. Hence, the ILUC estimate is added to (and thereby becomes part of) I c, j in Eq. 3. This is expressed in Eq. 5. Note that ILUC option 2 is particularly relevant when applying an ILUC model that only considers land use impacts (and not intensification).

E yield, j, ILUC2 = -ΔQ · I c, j, ILUC2 = -ΔQ · I c, j, x - B · T · I ILUC, j, B    (Eq. 5)

where
- E yield, j, ILUC2 is the yield effect expressed on the basis of ILUC option 2
- ΔQ is the change in output of crops to market from the area A
- I c, j, ILUC2 is the ILUC impact in impact category j per unit of additional output (crops to market) from the area A
- I c, j, x is the life cycle impact for impact category j for changes in crop production elsewhere modeled via system expansion, where x denotes either simple (s) or marginal (m)
- B is the area where production of crop c is induced or displaced; B can be determined as ΔQ divided by the yield on the area B (cf. Fig. 1)
- T is the time of occupation on the area B, i.e., the effective duration of the full crop cycle
- I ILUC, j, B is the ILUC impact in impact category j per unit of land occupation in the region where B is located. Land occupation is measured as the area occupied multiplied with the time of occupation, with 'hectare years' as a typical unit, and ILUC impacts may differ from region to region depending on regional cropland quality.

The last term in Eq. 5 constitutes the addition of ILUC to the environmental impacts from crop cultivation on the area B. The term simply expresses land occupation (the area B multiplied by the time T) multiplied with the ILUC impact per unit of land occupation. Any type of ILUC model could be used with this approach (also ILUC models that do not assume full elasticity of supply), because system equivalence is ensured by displaced or induced production on the area B, and ILUC then follows as an "add-on effect." It follows from Eq. 5 that I c, j, ILUC2 equals I c, j, x + (B · T / ΔQ) · I ILUC, j, B.

The way to interpret this approach (ILUC option 2) is that the intensification aspect is covered by the (induced or avoided) agricultural inputs to the area B (assuming no change in SOC on the area B), and the land use aspect is covered by the ILUC modeling, which also includes the SOC component (cf. discussion in Sect. 2.3.1). It is important that the LCI for the crop production on the area B does not include any emissions from direct land transformation (as this would result in double-counting of land use emissions). Table 1 seeks to provide an overview of the four approaches outlined for estimation of the yield effect or, more specifically, determination of I c, j in Eq. 3.

Downstream effects

As previously discussed, the collective inputs to post-harvest treatment of the fresh crop (cf. Fig. 1) will be unchanged when shifting to an alternative agricultural practice, unless the characteristics of the fresh crop (e.g., moisture content) are impacted by the shift in practice. As this will probably be unusual in most agricultural LCAs, it has been decided to handle the topic in ESM 1 (Sect. S3). Any potential impacts in post-harvest treatment following from the introduction of the alternative practice will be referred to as downstream effects.

Total effects

The change in impacts from introducing a new agricultural practice on the area A for impact category j (E total, j) is obtained by summing the upstream effect, the field effect, the yield effect, and the downstream effect (Eq. 6). Once the total change in impacts is known, the change in impact per unit of crops to market for impact category j (ΔI c, j) can be estimated as E total, j divided by the output of crops to market, Q (Eq. 7). If the impact of the crops produced in the reference system is known (I c, j, ref), the relative change in impacts per unit of crops to market following from the studied shift in practice can be quantified as ΔI c, j, rel = ΔI c, j / I c, j, ref (Eq. 8).

3 Case study results: introduction of microbial phosphate inoculant in corn production in Minnesota

The approaches described in Sect. 2 are exemplified by a case study available as Electronic Supplementary Material item 2 (ESM 2). The study has not previously been published but has undergone critical review by three independent experts in accordance with the ISO 14040 standards for LCA (ISO 2006a; ISO 2006b). The study formed the precursor to the general method described in the present paper. The case study considers the introduction of a new practice in corn production in Minnesota and North Dakota, USA. The present paper focuses on Minnesota. The new practice consists of the introduction of a yield-enhancing microbial inoculant, which contains spores of the naturally occurring soil fungus Penicillium bilaiae (P. bilaiae or P.b.). The inoculant is added to corn seeds prior to seeding.
When the corn grows, the fungus colonizes the roots. P. bilaiae solubilizes minerally bound phosphorus by secretion of organic acids, leading to an increase in nutrient uptake for corn plants. Penicillium bilaiae is available in the agricultural inoculant called JumpStart® and as an integrated part of the seed inoculant called Acceleron® B-300 SAT.

To determine the impact of the new agricultural practice, a reference system and an alternative system are defined in accordance with Fig. 1. The area A is defined as 1 ha and calculations are performed on this basis, while final results are expressed per Mg of dried corn kernels, which is the functional unit of the case study. In the reference system, corn is cultivated and the fresh crop is harvested and then dried (post-harvest treatment) to meet the market requirement of approximately 14% moisture. The alternative system receives the same inputs on the area A as the reference system and, in addition, the inoculant is applied to the corn seeds. This leads to a higher output of corn from the area A, as documented by Leggett et al. (2015). The case study considers corn grown after corn (continuous corn rotation).

The data applied for continuous corn is shown in Table 2. The yield data in Table 2 is based on the field trials described by Leggett et al. (2015). Total application of macronutrients (N, P, and K) is also based on Leggett et al. (2015), and the share of each nutrient applied by the specific fertilizers in Table 2 is based on the ratio between fertilizers in the dataset for US corn (consequential model) in the ecoinvent LCI database version 3.0 (ecoinvent 2014); see Sect. 3.2 in ESM 2 for further details. The pesticide entry in Table 2 includes 15 specific pesticides (e.g., atrazine, metolachlor, glyphosate) plus an amount of unspecified pesticides, all applied in US corn production according to ecoinvent (2014), and field work covers fertilizing, tillage, sowing, application of plant protection, and harvesting. Seeds, pesticides, and field work data are also based on ecoinvent (2014). The inoculant dose is based on the report available as ESM 2. Lime has been omitted in the table since lime impacts LCA results from corn production with less than 0.25% in all impact categories, based on the applied corn process from ecoinvent (2014) and the applied LCIA method (see below). Irrigation has been left out since the corn fields were not irrigated during the field trials (Leggett et al. 2015). The measured average yield increase when applying P. bilaiae on corn in Minnesota is 0.44 Mg ha-1. This does not fully appear from Table 2 because yields are only shown with three significant digits. A detailed discussion of the data in Table 2 is found in Sect. 5 in ESM 2.

The case study considers the following six environmental impact categories: global warming (gw), acidification (ac), nutrient enrichment (ne), photochemical ozone formation (po), fossil energy resources (fe), and land occupation (lo). The impact assessment method called CML-IA baseline (version 3.01) was used.
While this method is now superseded, it was still commonly used when the LCA study in ESM 2 was initiated. As the choice of LCIA method is of little relevance for the exemplification of the methodological recommendations in the present paper, and to stay consistent with the case study in ESM 2, it has been decided to stick to the CML method in this case study section. This has a minor impact for the characterization of global warming impacts from N 2 O emissions but it does not impact the overall conclusions of the case study (cf. end of Sect. 2.2.7 in ESM 2). Ideally, a newer and regionalized LCIA method had been applied. The base case is based on 20-year amortization of CO 2 from changes in SOC (field effect) and simple system expansion (yield effect). Equally relevant results of the more advanced methods are also presented and discussed. The approach outlined in the method section will be demonstrated for the impact category global warming (gw) and for corn grown after corn (continuous corn rotation) in Minnesota. Results for remaining considered impact categories will also be presented but not exemplified by calculations. Upstream effects from inoculant production As shown in Table 2, the agricultural inputs to the area A (cf. Fig. 1) are the same in the reference system and the alternative system, except for the use of the P.b. inoculant. The spores from P. bilaiae are produced via "solid state fermentation" and mixed with other ingredients. The exact inventory is proprietary but contained in an internal LCA report, which has been subject to critical review in accordance with the ISO standards for LCA (ISO 2006a;ISO 2006b). Additional detail is available in Sect. 3.4 of ESM 2. The impacts from the inoculant (I inoc, j ) are shown in Table 3. The global warming impacts from inoculant production (I inoc, gw ) of 69 kg CO 2 e kg −1 (see Table 3) is potentially overestimated because a worst-case scenario for disposal of organic waste, mainly from the fermentation process, was assumed (maximum conversion to methane in a landfill). This accounts for almost 30% of I inoc, gw . In addition, some uncertainty relates to heating and electricity use, which together account for roughly one-third of I inoc, gw . As impacts from inoculant production turn out to have low influence on final results, above-mentioned uncertainties and potential overestimation are not considered critical. The dose of the inoculant amounts to 5.7 g ha −1 (cf. Table 2). By use of Eq. 1, the upstream effect in terms of global warming can thereby be estimated as follows. Field effect from inoculant use In the present case study, the biogeochemical model DayCent (Del Grosso et al. 2001) was applied to model field emissions from the different corn production systems. The DayCent model simulates crop growth, nutrient flows, soil carbon, and trace gas emissions in cropping systems. Additional detail (including sources for climate data, etc.) is available in Sect. 2.2.6.3 of ESM 2. The yield and fertilizer data in Table 2 was applied to calibrate the DayCent model and characterize the effect of the inoculant. A more detailed description of this procedure can be found in Sect. 3.1 in ESM 2. All field emissions and nutrient losses from the area A, except SOC-related CO 2 emissions, were estimated as averages over a 40-year modeling period for the corn production systems. A sufficiently long modeling period was chosen to be able to smooth out inter-annual variations and derive generally representative results. 
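As a reading aid (not code from the original study; emission names are hypothetical), the field effect for a given impact category can be thought of as the characterized difference between the modeled field emissions of the alternative and the reference system, which a minimal sketch might compute as:

def field_effect(emissions_alt, emissions_ref, characterization_factors):
    # Sum of characterized differences (alternative minus reference) over the
    # field emission flows modeled with DayCent, e.g. {"N2O": ..., "CO2_soc": ...}.
    flows = set(emissions_alt) | set(emissions_ref)
    return sum(
        characterization_factors[f] * (emissions_alt.get(f, 0.0) - emissions_ref.get(f, 0.0))
        for f in flows
    )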
Results are available in Table 4. Methane emissions have been left out because they were unaffected by the new practice. More detail is available in Sect. 4.1 in ESM 2.

Field effect with SOC-related CO2 emissions based on 20-year annualization

Based on the DayCent results, the change in SOC over a 20-year period corresponded to emissions of -13.7 Mg and -17.3 Mg of CO2 in the reference system and the alternative system, respectively. Applying the 20-year annualization approach, the difference between the two systems corresponds to an annual SOC-related contribution of -179 kg CO2 and, together with the modeled change in N2O emissions, to a field effect for global warming of approximately -255 kg CO2e. Note that nitrous oxide is the only greenhouse gas in the modeled field emissions (cf. Table 4) and therefore the only contributor to the field effect for global warming besides CO2 from SOC changes.

Field effect with SOC-related CO2 emissions based on TIA

This section provides a summary of how the approach described in Sect. 2.2.2 was applied in the inoculant LCA study. By modeling 1 year of inoculant use in DayCent within a 100-year timeframe and a reference with no inoculant use, it was possible to estimate differences in SOC over time between the two systems. On this basis, (e CO2, alt - e CO2, ref) in Eq. 2 was estimated at -129 kg CO2. Inserting this in Eq. 2 gives an estimated field effect for global warming of -204 kg CO2e, i.e., 20% lower in numeric terms than with the 20-year annualization approach. As for SOC-related CO2 emissions specifically, the field effect is 28% lower (-129 kg CO2 vs. -179 kg CO2), also in numeric terms. Interestingly, a 30-year annualization period gives an estimate of SOC-related CO2 emissions quite close to the TIA estimate (8% higher in numeric terms). Thirty-year annualization is often applied in US ILUC studies (see, e.g., US EPA 2010 and Hertel et al. 2010).

Yield effect from inoculant use

Based on Leggett et al. (2015), the use of P. bilaiae on corn in Minnesota gives an average increase in corn output of ΔQ = 0.44 Mg on the area A (defined as 1 ha). On this basis, the yield effect was estimated by simple system expansion (cf. Sect. 2.3.1) and by system expansion with ILUC modeling (cf. Sects. 2.3.3 and 2.3.4).

Yield effect based on simple system expansion

It was assumed that the corn displaced on the area B had the same characteristics as the corn in the reference system (cf. discussion in Sect. 2.3.1). The impacts from the corn in the reference system (I c, j) were estimated based on the agricultural inputs in Table 2 and the field emissions in Table 4. SOC changes in Table 4 were, however, excluded for the reasons discussed in Sect. 2.3 (and Sect. 4.2 in ESM 2). On this basis, the global warming impact from reference corn (I c, gw, s) was estimated at 312 kg CO2e per Mg. Hence, on the basis of simple system expansion and Eq. 3, the yield effect for global warming amounts to -0.44 Mg x 312 kg CO2e/Mg, or approximately -137 kg CO2e.

The ILUC model by Schmidt et al. (2015) was used to assess the indirect land use implications of the increased corn yield obtained with P. bilaiae. According to the model, the occupation of 1 ha of cropland in Minnesota generates a global warming impact of 2050 kg CO2e. With ILUC option 1 and Eq. 4, the yield effect for global warming amounts to approximately -81 kg CO2e. Option 2 assumes initial displacement of adjacent crop production on the area B, which is then accompanied by an ILUC response. In the specific case of P. bilaiae applied on continuous corn in Minnesota, the area B equals 411 m2 and, by use of Eq. 5, the yield effect for global warming amounts to approximately -221 kg CO2e. Option 1 reduces (numerically) the estimated yield effect for global warming by 41% as compared to simple system expansion.
This indicates that the combined land use and intensification response to the yield increase from the inoculant (as modeled with the ILUC model) is less pronounced in terms of GHG emissions than the direct one-to-one displacement of agricultural inputs assumed with simple system expansion. Option 2 increases (numerically) the estimated yield effect for global warming by 61% as compared to simple system expansion. The increase occurs because the ILUC emissions are added on top of the emissions estimated via simple system expansion. Use of option 1 requires an ILUC model, which can model the full market response in terms of land use change and intensification whereas option 2 can be applied with models that only capture the land use aspect (cf. Sects. 2.3.3 and 2.3.4). Besides, the choice of option depends on interpretation of ILUC dynamics. As scientific consensus is still lacking, the present study leaves both options open. The calculations for the two ILUC options have been further explained and discussed in Sect. S4.2.2 of ESM 1. Downstream effects The downstream effects from using P. bilaiae (changes in transport and drying) are negligible since the corn from the reference system has the same characteristics as the corn from the alternative system. Hence, there is no net change in the need for post-harvest treatment and E down, j thereby equals zero for all impact categories (j). Additional detail is available in ESM 2 (e.g., Sect. 4.2). Total effects from inoculant use Based on Eq. 6, the total global warming effect of introducing P. bilaiae on 1 ha of corn in Minnesota (E total,gw ) is − 390 kg CO 2 e for the base case (applying simple system expansion and 20-year annualization of SOC) and respectively − 284 and − 424 kg CO 2 e for the advanced methods with ILUC options 1 and 2 (TIA for SOC). The change in impact per Mg corn produced with P. bilaiae (ΔI c, gw ) can be calculated by use of Eq. 7 (Q = 10.7 Mg) and amounts to − 36 kg CO 2 e Mg −1 corn in the base case and respectively − 27 kg CO 2 e Mg −1 corn (ILUC option 1) and − 40 kg CO 2 e Mg −1 corn (ILUC option 2). Additional detail is available in Sect. S4.2.3 of ESM 1 and a breakdown of results is available in Fig. 2. The changes in field emissions (CO 2 and N 2 O) contribute the most to GHG savings in the base case. These savings are reduced by 20% when applying the time-independent approach for SOC in the advanced methods (N 2 O unchanged). The yield effect is also important in the base case, making up 40% of the total global warming impact. The numeric impact of the yield effect decreases when modeled as 'ILUC only' in accordance with ILUC option 1 in the advanced methods. On the other hand, the impact of the yield effect increases with ILUC option 2 because (avoided) ILUC emissions are added on top of the (avoided) emissions from displaced production. The difference in results between ILUC option 1 and 2 (cf. Fig. 2) shows the need for further research in this field. With a global warming impact of conventional corn of roughly 260 kg CO 2 e Mg −1 (see Table 8 in ESM 2), the relative reduction in global warming impact per Mg of corn (ΔI c, gw, rel ) is 14% in the base case and respectively 10% and 15% with the advanced methods applying ILUC option 1 and 2 (based on Eq. 8). Table 5 shows the effects of introducing the inoculant in all assessed impact categories per Mg of corn produced for the base case. Note that the advanced SOC method (TIA) only affects global warming results. 
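To make the yield-effect options and the aggregation in Eqs. 6-8 easier to trace, the following minimal Python sketch reproduces the reported Minnesota figures. It is not code from the original study; function names are illustrative, the occupation time T is assumed to be 1 year, 411 m2 is converted to 0.0411 ha, and the split of the base-case field effect into SOC and N2O contributions is inferred from the percentages given above.

# Yield-effect options (Sect. 2.3): illustrative implementation, not the authors' code.

def yield_effect_system_expansion(delta_q, impact_per_unit_crop):
    # Eq. 3: simple (or marginal) system expansion.
    # impact_per_unit_crop excludes post-harvest treatment and SOC-related CO2.
    return -delta_q * impact_per_unit_crop

def yield_effect_iluc_option1(delta_q, area_a, t, q_ref, iluc_per_ha_year):
    # Eq. 4: yield effect based entirely on ILUC modeling.
    return -delta_q * (area_a * t / (q_ref + delta_q)) * iluc_per_ha_year

def yield_effect_iluc_option2(delta_q, impact_per_unit_crop, area_b, t, iluc_per_ha_year):
    # Eq. 5: system expansion plus ILUC as an add-on effect.
    return -delta_q * impact_per_unit_crop - area_b * t * iluc_per_ha_year

# Case-study inputs (per ha of treated continuous corn in Minnesota):
dq = 0.44       # yield increase, Mg
i_corn = 312    # kg CO2e per Mg reference corn (excl. SOC-related CO2)
iluc = 2050     # kg CO2e per ha-year of cropland occupation (Schmidt et al. 2015)
q_ref = 10.7    # Mg corn to market in the reference system

yield_simple = yield_effect_system_expansion(dq, i_corn)               # ~ -137 kg CO2e
yield_iluc1 = yield_effect_iluc_option1(dq, 1.0, 1.0, q_ref, iluc)      # ~ -81 kg CO2e
yield_iluc2 = yield_effect_iluc_option2(dq, i_corn, 0.0411, 1.0, iluc)  # ~ -221 kg CO2e

# Base case aggregation (Eqs. 6-8), 20-year annualization of SOC:
upstream = 0.0057 * 69   # inoculant dose (kg/ha) x inoculant impact (kg CO2e/kg), ~ +0.4
field = -179 - 75        # SOC-related CO2 plus N2O change (N2O share inferred), ~ -254
downstream = 0.0         # no change in post-harvest treatment
total = upstream + field + yield_simple + downstream  # Eq. 6: ~ -390 kg CO2e
per_mg = total / q_ref                                # Eq. 7: ~ -36 kg CO2e per Mg corn
relative = per_mg / 260                               # Eq. 8: ~ -0.14, i.e., a 14% reduction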
Results per hectare treated with Penicillium bilaiae can be found in Sect. S4.1 in ESM 1. As discussed above, methodological choices influence the results. In addition, there is uncertainty related to the parameters in the calculations. This has been further discussed in Sect. 5.1 of ESM 2. The largest parameter uncertainty is related to the modeling of agricultural N2O emissions. For the results based on simple system expansion, the relative 95% confidence interval (CI) was estimated at -28%/+25%. Meanwhile, this does not consider potential covariance in some of the parameters, so the actual CI is likely somewhat lower. Section S4.3 of ESM 1 illustrates the outlined approach for corn grown after soybeans (corn-soybean rotation).

Discussion

The approach laid out in the present paper has "the field" as the focal point. The upstream effect, the downstream effect, and the field effect represent changes within the supply chain of the crop grown on the field. The yield effect represents changes in other supply chains; changes caused by market signals driven by yield changes on the field (A). The equations for estimation of the four effects contain multiple indices required to generalize the methodology, but the calculations are trivial in an LCA context. Meanwhile, the breakdown of results into the four effects should be applied systematically in all LCA studies of new agricultural practices. The breakdown makes it easier to explain the environmental impacts caused by a change in agricultural practice and, more importantly, where these changes occur. This allows for a more informed discussion with relevant stakeholders. In addition, different sustainability schemes have different criteria for recognition of environmental benefits. If a farmer shifts from conventional tilling to no till, the total change in impacts would be comprised of all four effects from a product LCA perspective, whereas only the "CO2 field effect" would count in some of the carbon trading schemes with carbon credits for no till. The approach laid out in the present paper makes it easy to break down results as needed. The approach is also useful in demonstrating the consequences of yield changes. In that sense, it can be a useful tool to inform the discussion about conventional vs. organic crop production. From an optimization perspective, the approach can be useful in determining where in the supply chain the biggest improvement opportunities are situated, and this may in turn guide development of better agricultural practices.

Another important aspect of the present paper is the proposal to utilize biogeochemical modeling for estimation of the field effect. As discussed earlier, LCA studies of agricultural practices often fail to assess these field-level impacts comprehensively, in particular for nutrient flows, which can hinder the formulation of clear conclusions and decision support recommendations. The utilization of biogeochemical modeling, as part of agricultural LCA, helps to address this concern. Meanwhile, it also increases the level of expertise required to conduct LCAs of crop production. Whether this added level of comprehensiveness is worthwhile will need to be judged on a case-by-case basis, but the marriage between LCA and biogeochemical modeling can certainly provide improved LCA results. As also discussed, CO2 emissions resulting from changes in SOC can be modeled at different degrees of sophistication, with a notable impact on the results.
Ideally, advanced methods where the temporal changes are estimated over time should be applied to best reflect the environmental impacts from changes in agricultural practices. Meanwhile, the case study in the previous section illustrates that simpler methods can, in certain cases, provide results that are in a similar range as the more sophisticated methodologies. Hence, a pragmatic approach may sometimes be adequate to show tendencies and guide decision making. The case study also illustrates that the field effect can be very important if a new practice can improve nutrient efficiency in the field. If this comes in tandem with improved yield, there is a double benefit. Interestingly, the base case results for P.b. on continuous corn and corn after soy in Minnesota and North Dakota were quite similar despite of the different rotations and locations (see ESM 2). The average change in GHG emissions for the four scenarios were − 37 kg CO 2 e Mg −1 corn, i.e., very close to the − 36 kg CO 2 e Mg −1 corn estimated for continuous corn in Minnesota (base case). This is partly explained by the fact that the relative yield increase obtained with P. bilaiae in Minnesota and North Dakota was more or less the same (Leggett et al. 2015). In the report (ESM 2), average results were used for a crude extrapolation to a general US scenario with a somewhat lower yield increase, thereby estimating a total GHG saving potential of 3.9 million Mg CO 2 e if P.b. were applied on all US corn fields. This illustrates how the approach laid out in the present paper can be used to estimate full potentials of new agricultural practices. Conclusions Changes in environmental impacts from shifts in agricultural practices can be logically categorized according to where in the life cycle they occur. The categorization is helpful when assessing and explaining the environmental implications of introducing a new practice. Upstream effects caused by changes in agricultural inputs can be assessed by standard LCA procedure as can potential downstream effects related to post-harvest treatment. Changes in emissions from the field where the change in practice occurs (the field effect) can be assessed by biogeochemical modeling, thereby improving life cycle inventory modeling and addressing concerns raised in the literature. Finally, changes in impacts from production elsewhere (the yield effect) can be assessed via system expansion, potentially supplemented by ILUC modeling. The outlined approach has been shown to be applicable to the introduction of the phosphate-solubilizing microbe P. bilaiae on corn fields in the USA. It was found that induced environmental impacts from production of the microbial inoculant (the upstream effect) were overshadowed by the environmental impacts avoided in terms of the field effect (reduced emissions of N 2 O, increased sequestration of carbon, and reduced nitrogen losses) and the yield effect (avoided crop production elsewhere). In other words, the impacts from producing the spore-containing inoculant was small compared to the positive effects obtained when P. bilaiae colonizes corn roots and facilitates improved nutrient uptake in the crops. It was also shown that the yield effect can vary substantially depending on modeling choices and interpretation of system dynamics. This is not a weakness of the approach laid out in the present paper but rather a reflection of the ongoing scientific developments in ILUC modeling. 
In addition, the variation in results (specifically for the yield effect) did not alter the overall conclusions in the case study. It is recommended that the outlined approach be applied to other assessments for changes in agricultural practices, such as switching from conventional to organic farming and from conventional tilling to no or low till. It may also be considered to integrate price rebound effects in the approach to account for potential changes in cost of agricultural production when switching from one practice to another. Compliance with ethical standards Conflict of interest Jesper H. Kløverpris and Claus Nordstrøm Scheel are respectively fully and partially employed by Novozymes that produce and market microbial inoculants as part of a larger portfolio of biological solutions. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Novel and known mutations of TGFBI, their genotype-phenotype correlation and structural modeling in 3 Chinese families with lattice corneal dystrophy. PURPOSE To report novel transforming growth factor beta-induced (TGFBI) mutations responsible for lattice corneal dystrophy (LCD), the associated genotype-phenotype correlation, and structural changes in the mutant proteins in three Chinese families. METHODS Three unrelated Chinese families were diagnosed as Type I LCD. Mutations in TGFBI were detected by sequencing all of the 17 exons and splice sites of the gene. Phenotype, including corneal erosions, and opacification in the families were compared. Structural changes of the mutant proteins were modeled. One hundred healthy volunteers were recruited as controls for sequence analysis of TGFBI. RESULTS Two novel mutations, c.(1702G>C and 1706T>A; p.Arg514Pro and Phe515Leu) in TGFBI were identified in Family 1. Two known hotspot mutations, c. 531C>T (p. Arg124Cys) and c.1876A>G (p.His572Arg), were revealed in Family 2 and Family 3, respectively. Sequence analysis in the 100 healthy control subjects, the unaffected members in Family 1, and evolutionary alignment showed that the novel mutations occurred in the conserved amino acids. Structural modeling revealed changes in the 2nd structure of the mutant proteins, but did not detect gross structural changes. Mutations c.(1702G>C and 1706T>A; p.Arg514Pro and Phe515Leu) and the c. 531C>T (p. Arg124Cys) were present in the corneas with sever opacification. CONCLUSIONS The novel mutations c.(1702G>C and 1706T>A; p.Arg514Pro and Phe515Leu), c. 531C>T (p. Arg124Cys), c.1876A>G (p.His572Arg) in TGFBI were responsible for LCD in the 3 families. Mutations c.(1702G>C and 1706T>A) (p.Arg514Pro and Phe515Leu) and the c. 531C>T (p. Arg124Cys) were associated with more severe LCD phenotypes in the families. These results provide more data for molecular diagnosis and prognosis of the disease. Lattice corneal dystrophy (LCD) is an inherited disease characterized by the accumulation of amyloid materials that form refractile lines and white dots in the corneal stroma. Genetically, this disease is classified into five distinct subtypes, type I, II, III, and IIIA, and IV [1][2][3]. Patients usually present with ocular pain and recurrent corneal erosions in the first or second decade of life. Corneal opacification and blindness could eventually occur, requiring keratoplasty to restore sight. Type I LCD is the most common subtype and the disease usually progresses slowly [4]. These clinical findings are the characteristics of LCD, and genetic classification does not really reflect them; therefore, a classification that includes genetics and phenotype appears to be more practical. Genetically, transforming growth factor beta-induced (TGFBI Entrez Gene ID: 7045) is the gene underlying most incidences of LCD. The dominant mode of inheritance has been recognized as the pattern of transmission [5,6], although homozygous mutations, for the severe forms of corneal dystrophy, and compound heterozygous mutations have been reported [7]. To date more than 30 mutations of TGFBI are responsible for corneal dystrophy, with various clinical subtypes being identified [7]. However, no structural modeling of the mutant proteins has been reported. 
In this paper, we report 2 novel mutations of TGFBI identified in one Chinese family, two known heterozygous mutations in the other two families, and genotype-phenotype correlation and structural analysis of the four mutant proteins in the three Chinese families with LCD. METHODS Patients and control subjects: Five members from Family 1, four members from Family 2, one member from Family 3, and one hundred healthy volunteers (Han ethnicity) in clinic of Zhongshan Ophthalmic Center were recruited for this study. This study was approved by the Review Board of Sun Yat-Sen University. The principles outlined in the Declaration of Helsinki, such as participant safety, clinical trial registration, post-study access, usage of data and human tissues, compensating participants with research-related injury, were followed. Pedigrees of the three families were constructed ( Figure 1). Among the three families, Family 1 and Family 2 came from Guangdong province, in the south of China, and Family 3 came from Sichuan province, in Western China. All three families are of Han ethnicity. Owing to the availability of samples, ten members of the families were analyzed (III:2, III:3, III:4, IV:2, IV:3 in Family 1; I:1, II:4, II:6, III:3 in Family 2; and II:2 in Family 3; Figure 1). No consanguinity between the families was found in the family histories. Visual acuity, slit lamp microscopy, and funduscopic examinations were performed by ophthalmologists (X.Z. and J.Y.). Blood samples were collected for DNA isolation. All the individuals included in the study underwent clinical examination before the molecular investigation. Systemic amyloidosis was excluded in all of the affected family members. One hundred healthy volunteers (Han ethnicity) were recruited as controls for sequence analysis of TGFBI. These control subjects were free of corneal opacity and epithelial defect, which was confirmed by slit-lamp microscopy. Mutational analysis: Genomic DNA from the peripheral blood of the available family members was extracted with a QiaAmp Kit (Qiagen). All of the 17 exons and flanking intronic sequences of TGFBI (Entrez Gene ID: 7045) were amplified by polymerase chain reaction (primer sequences and PCR conditions are available on request). The products were purified with a Pre-Sequencing Kit (USB, Cleveland, OH) and sequenced in both directions using a BigDye Terminator v3.1 Cycle Sequencing Kit and an ABI 3100 Genetic Analyzer (Applied Biosystems, Foster City, CA). The results were compared with the sequence retrieved from the UCSC Genome Browser. The HGVS guidelines for describing sequence variations and numbering were used. Haplotypes were constructed using the TGFBI locus markers D5S816, D5S393, and the c.1702G>C and c.1706T>A variants in Family 1. Evolutionary comparisons of the TGFBI ortholog: The amino acid sequences of the TGFBI orthologs of chimpanzees (ENSPTRG00000017264), dogs (ENSCAFG00000001091), rats (ENSRNOG00000012216), mice (ENSMUSG00000035493), and chicks (ENSGALG00000006319) were retrieved from the Ensembl Genome Browser and compared with the human TGFBI amino acid sequences (AAA61163.1). Structural modeling of the wild and mutant proteins: The SWISS MODEL was used to model the structure of the wild type protein and the four mutant proteins. The 3D protein models were viewed using RasMol. The secondary structures were predicted by PredictProtein, NNPREDICT, and PROF to obtain a comprehensive understanding of the effect of the mutations found in the three families. 
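As an illustration of the evolutionary-comparison step described above (this is not code used by the authors, and the sequence windows below are placeholders rather than real TGFBI sequences), conservation at an alignment column of interest can be checked across pre-aligned orthologs as follows:

# Placeholder ortholog windows around residues of interest (letters are hypothetical).
aligned = {
    "human":      "AEPLRFVKG",
    "chimpanzee": "AEPLRFVKG",
    "mouse":      "AEPLRFVKG",
    "rat":        "AEPLRFVKG",
    "chick":      "AEPLRFVKG",
}

def is_conserved(aligned_windows, column):
    # True if every species shows the same residue (and no gap) at the given column.
    residues = {window[column] for window in aligned_windows.values()}
    return len(residues) == 1 and "-" not in residues

# Columns 4 and 5 of this placeholder window stand in for Arg514 and Phe515.
print(is_conserved(aligned, 4), is_conserved(aligned, 5))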
RESULTS Clinical findings: All of the affected family members examined showed anterior refractile corneal stromal deposition characterized by several branching and nonbranching lattice figures resembling pipe stems in both eyes. These patients also had delicate, filamentous, discrete, short, and irregularly shaped stromal deposition, along with corneal haze. The onset of all the affected individuals in Family 1 was in the second decade of life. Ocular pain, recurrent corneal erosions, and corneal opacification were the frequent complaints and clinical signs of LCD. These symptoms progressed more rapidly in relation to the other families. Penetrating keratoplasty was necessary in the patients' late 20s ( Figure 2). The proband (II-4) patient in Family 2 had visual loss in both eyes when she was approximately 23 years old. On examination, subepithelial punctiform corneal erosions were present in both eyes ( Figure 3). In 1985, her father (I-1) had penetrating keratoplasty for the first time on his right eye at age 41. One year later, the corneal implant was rejected. He had penetrating keratoplasty twice in both eyes one year later. Due to recurrence of the primary disease, he had another penetrating keratoplasty on his left eye ten years later ( Figure 4). Unfortunately, his histopathological record could not be retrieved. Her son (III-3) experienced a foreign body sensation in both eyes, however, no abnormalities were found in his corneas. Family 2 was a branch of a large Chinese family of Han ethnicity living in southern China. The family members with LCD encompass five generations. Some members of this family were living overseas. We learned from the proband (II-4) that, like her, some members of the family had analogous symptoms in their teenage years, and some members had undergone penetrating keratoplasty around age 40. However, we cannot find their medical records. We have invited the other members of this family to our study, but have yet to hear from them. The proband (II-2) from Family 3 displayed thin linear branching deposition in the subepithelial and stromal layers in both eyes that first appeared during adulthood. The disease progression in this case was slow. The patient also had recurrence of corneal erosions. No visual impairment was found during examinations when he/she was 38 years old ( Figure 5). Sequence analysis of TGFBI in the 100 healthy control subjects showed conserved c.1702G and c.1706T, suggesting that the c.1702G>C and 1706T>A substitutions were not polymorphisms. Owing to the lack of family members, haplotype analyses using the TGFBI locus markers D5S816 and D5S393 and the c.1702G>C and 1706T>A variants could not conclusively identify parental origin of the two mutations. Alignment of the amino acid sequences of TGFBI displayed a high conservation of p.Arg514Pro and Phe515Leu in the chimpanzee, cow, dog, mouse, rat, chick, and Xenopus orthologs (Figure 9). Structural modeling: Prediction with the SWISS MODEL indicated that all the mutations occurred in the alpha-helix region, which, predictively, would induce local secondary structure changes, although no gross structure modification could be detected. In Family 1, the mutation p.Arg514Pro is predicted to shorten the helix and induce the formation of a downstream turn structure; whereas the p.Phe515Leu mutation could predictably elongate the helix. The combined effects of these two mutations conformed to the result of p.Arg514Pro. 
DISCUSSION In the present study, the affected individuals of Family 1 and Family 2 had symptoms in the first decade of life; therefore, penetrating keratoplasty is indicated in their late 20s. This indicates that mutations c.(1702G>C and 1706T>A) (p.Arg514Pro and Phe515Leu) and c. 531C>T (p. Arg124Cys) have more phenotypic effects. The mutation c. 1876A>G (p.His572Arg) is also identified in Family 3. The proband of Family 3 has thin linear branching deposits in the subepithelial and stromal layers in both eyes, with the deposition detected in adulthood. Visual impairment was not obvious in his/her 20s. The same mutation has been reported in later onset Type I LCD [9] from a Thai family. Consistently, the average age of onset of symptoms of the affected individuals in the Thai family was 28.6±8.1 years (range: 20-50). In addition, the p.His572del is associated with a unilateral, late-onset variant form of LCD in a 63-year-old man with decreased vision in the affected eye [8]. This indicates that c.1876A>G may be particularly associated with the relatively late onset of LCD and a less severe phenotype. Predominantly expressed by the corneal epithelium, TGFBI is believed to be an adhesion protein secreted into the extracellular matrix and bound to Type I, II, and IV collagens responsible for the structure of microfibrils and cell surface. TGFBI is also a major gene responsible for most LCD cases. To date, more than 30 mutations in TGFBI have been reported to cause dominantly inherited corneal disorders [7], although homozygous mutations are also reported in severe forms of corneal dystrophy patients. Moreover, most mutations reported from present or previous studies are single base pairs of substitutions affecting one of the alleles in the locus. Compound heterozygous mutations affecting both alleles are rare. Dighiero and colleagues [10] suggest that both p.Arg124Leu and p.Thr125_Glu126del compound heterozygous mutations are responsible for the phenotype of the family with granular corneal dystrophy. The compound heterozygous mutations p.Arg124Cys coupled with p.Gly470X (a nonsense mutation) have been identified in the proband of one family [11]. However, the proband's daughter who carries the heterozygous mutation p.Gly470X alone was unaffected. This raises the question as to whether the p.Gly470X in the compound heterozygous mutations could be pathogenic. The authors speculated that the p.Gly470X nonsense mutation might be nonpathogenic or have a very low penetrance. The third compound heterozygous mutations, p.Ala546Asp and p.Pro551Gln, have been reported in two non-consanguineous African-American families [12]. In an attempt to distinguish the pathogenic mutations from polymorphism in the compound mutations [12], haplotype analysis of the TGFBI locus and sequenced samples from 125 healthy controls show no common haplotype between the affected and unaffected family members and no mutations in the control subjects. In this present study, 2 novel mutations, c.1702G>C and 1706T>A (p.Arg514Pro and Phe515Leu), are identified in Family 1. The two substituted base pairs are only 4bp away from each other, and affect the consecutive amino acids in the polypeptide. This also raises the question whether the two mutations are located in one allele consecutively, or in both alleles separately (one in each of the alleles). These questions are unable to be answered due to lack of sufficient family samples for the analysis. 
However, the pedigree looks like a dominant inheritance in the family, as the spouses (I:2, II:2) of the mutant affected members (I-1, II-1) are unaffected. Regardless of whether the two mutations occur in one or both alleles, the pathogenic roles of the mutations are likely associated with severity in the phenotype of LCD, as they are absent in the unaffected family members and controls subjects. The 2nd structural changes of the mutant protein by either one or both of the mutations also provide support of their pathogenic roles. This is also consistent with the phenotype severity in the family, as it has an earlier age of onset, relatively faster progression, and requires earlier penetrating keratoplasty compared to Family 3. ACKNOWLEDGMENTS The study is supported by the second phase of the 985 Project of China, National Natural Science Foundation of China (30772389). The authors thank all the family members and the control subjects for their participation in, and support for this project. Professor Xingwu Zhong contributed equally to this research and can be considered as a co-corresponding author. Figure 9. Alignment of partial amino acid (lower) sequences of TGFBI with 5 of its orthologs. The boxes indicate arginine at 514 and phenylalanine at 515 in the human protein sequence. These were conserved across these species.
Socioeconomic inequality in depression and anxiety and its determinants in Iranian older adults Background Older adults with lower socioeconomic status are more vulnerable to stressful life events and at increased risk of common mental health disorders like anxiety and depression. This study investigates the socioeconomic inequality in depressive symptoms and anxiety. Methods The data were from 7462 participants of the Neyshabur longitudinal study of ageing registered during 2016-2018. The outcome variables were anxiety and depressive symptoms. Anxiety was defined by the “Hospital Anxiety and Depression scale Questionnaire”, and depressive symptoms was defined and measured by the “short-term form of the Epidemiological Center Questionnaire.” The socioeconomic status was defined using principal component analysis of home assets. The Concentration Index (C) was used to measure socioeconomic inequality in anxiety and depressive symptoms. Concentration index was decomposed to its determinants to determine the role of the independent variables on inequality. Results The prevalence of depressive symptoms and anxiety was 12.2% (95% CI: 11.4, 12.9) and 7.0% (95% CI: 6.4, 7.5), respectively. Moreover, the C for anxiety was -0.195 (95% CI: -0.254, -0.136) and for depressive symptoms was -0.206 (95% CI: -0.252, -0.159), which indicate a considerable inequality in favor of high socioeconomic group for anxiety and depressive symptoms. Decomposition of the concentration Index showed that education, unemployment and male sex were the most important positive contributors to the observed inequality in anxiety and depressive symptoms, while age and number of grandchildren were main negative contributors of this inequality. Conclusion Low socioeconomic groups were more affected by anxiety and depressive symptoms. Any intervention for alleviation of inequality in anxiety and depression should be focus on education and employment of people, especially in younger elderly. Background Over the last decades, the number of older people has increased significantly from 130 million in 1950 to more than 600 million in 2017 [1]. It has been predicted that from 2015 to 2050, the ratio of people >60 will almost double, from 12% to 22%, globally [2]. In Iran, according to the population censuses, the elderly accounted for 6.69% of the total population and it has been estimated that population aged 65 years or over will increase to 18.25% by 2050 [3]. Depression is defined as persistent sadness and lack of interest and pleasure in doing formerly delightful activities [4]. According to the World Health Open Access *Correspondence: emamian@shmu.ac.ir Organization, 279 million people, or 3.7% of the world's population, suffer from depression [5]. In Iran, the prevalence of depression equals 5.4% in 2019 [5], with a Years of healthy life lost due to disability (YLD) equal to 813,441 years (8.5% of total YLD) [5]. The prevalence is associated with age and it was reported up to 52% in elderly [6]. According to the World Health Organization statistics, 301 million people, or 4.0% of the world's population are experiencing anxiety throughout their lives. In Iran, the overall prevalence of anxiety was 7.8% in 2019, and YLD for anxiety was 608,056 years (6.4% of total YLD) [7]. Inequality in health is defined as the discrepancy in the incidence or prevalence of health problems among individuals in different situations (economic, social, geographical, etc.) [8]. 
Low-income countries usually have more inappropriate health outcomes than wealthy countries. Moreover, in all the countries worldwide, the lower socioeconomic groups suffer, the more disease burden than the higher classes [9,10]. A healthy society program aims to eliminate health inequalities among the genders, with different ethnicities, races, educational status, income levels and geographical locations [11]. Therefore, one of the principal goals of global public health is striving against social and economic inequalities. The World Health Organization recommends monitoring and evaluating socioeconomic inequalities in health behaviors as one of the social determinants of health [10]. There are pieces of evidence that socioeconomic inequality of depression and anxiety directly correlates with age [12,13]. A meta-analysis study among adults found that the lower the socioeconomic status was associated with higher prevalence or incidence of depression (a prorich inequality) [14]. The inequality in health status is avoidable in many cases through adjustable factors such as economic status, education status, employment, and living facilities [15]. A few studies have been done in Iran on the effects of socioeconomic inequality in mental health, which illustrates that this inequality is often in favor of the rich and has a relation with features such as gender, age, and employment status [16][17][18][19]. The current study aims to determine socioeconomic inequality in depressive symptoms and anxiety and characterize the determinants of these inequalities based on a large population-based study in Northeast of Iran; Nayshabur Longitudinal Study on Ageing (NeLSA). Identifying the status of socioeconomic inequality in depression and anxiety and its determinants will help policymakers to implement appropriate interventions and promote mental health in society. Study population The data was extracted from the Neyshabour longitudinal study on aging (NeLSA) [20], which is an ageing component of the Prospective Epidemiological Research Studies in Iran (PERSIAN) [21]. It was conducted in four sites, including Neyshabur (Razavi Khorasan province, Northeast of Iran), Guilan (Northern Iran), Tabriz (Northwest of Iran), and Ardakan (central Iran). The current study included people aged 50 -94 in Neyshabur during 2016-2018. Participants were selected through stratified random sampling from people registered with six health centers. A total of 9220 people met the eligibility criteria including minimum 3-year residency in Neyshabur, Iranian citizens, without dementia, major depression, and disabilities, limiting their ability to participate in the study, of whom a total of 7462 individuals (4831 households) provided the written consent to participate in the study. The participation rate was 81%. Details of NeLSA sampling and implementation have already been reported [20]. Independent and Outcome variables The outcome variables of the current study were depressive symptoms and anxiety. The "Short-Form of the Center for Epidemiological Studies-Depression Scale (CES-D)" [22] and "The Hospital Anxiety and Depression Scale" [23] were used respectively to assess depressive symptoms and anxiety. A score of 11 or higher was considered as the anxiety disorder and the score of 10 or higher was considered positive for depressive symptoms. Mentioned questionnaires are considered effective screening tools due to their good reliability, validity, and sensitivity, based on the results of the former studies [23][24][25]. 
Both 10 and 8 items forms of CES-D were also validated for Farsi language and Iranian elderly [25]. Trained officers conducted face-to-face interviews using a comprehensive questionnaire. It includes information related to sociodemographic (age, sex, marital status, education, income, and job), lifestyle behaviors (smoking, physical activity, diet, sleep), history of chronic disease, and medication use. Trained clinical psychologists completed psychological questionnaires (cognition, quality of life, depression, anxiety, etc.). There was a clinical examination by a physician or trained nurse. All procedures were based on standard protocols followed by a quality control check. Chronic diseases were defined by a physician on clinical assessment and the participant's response to the question 'Has a doctor ever told you that you have any of the following health problems? In this study, a list of different chronic diseases, including gastrointestinal, cardiac, neurologic, musculoskeletal, endocrine, respiratory, and cancers, have been investigated. Participants had been asked to bring all medical records, laboratory results, and medications that they were using on the interview day; they were all checked by a general practitioner to verify the self-reported medical conditions. Diabetes was defined as self-report history of diabetes and/or using diabetes medications and/or FBS>=126 in a blood test. Hypertension was defined as self-report history of hypertension and/or using hypertension medications and/or systolic blood pressure >=140 mmHg and/or diastolic blood pressure >=90 mmHg. Smoking behavior was based on whether respondents identified themselves as a regular smoker or not. Body Mass Index was calculated after measuring weight and height of participants and was categorized as normal (< 25 Kg/m 2 ), overweight (25-29.9 Kg/m 2 ) and obese (≥30 Kg/m 2 ). Marital status was classified into two groups: married/living with a partner and divorced/ separated/single/widow. Socioeconomic status variable Principal Component Analysis was used to construct a variable that shows the socioeconomic status [26][27][28]. Several factors were considered in the PCA model to generate a socioeconomic status variable. It included the Possession of a freezer, washing machine, dishwasher, laptop / desktop, Internet access, LCD / LED TV, vacuum cleaner, master bedroom (built-in bathroom in the bedroom), motorcycle, the car value 5000-12500$, tablet / IPAD. The number of extracurricular and non-professional books read in the past year and the number of foreign and domestic trips in the last ten years entered into the model as a social status variable. Categorical variables were re-coded as binary variables (0 and 1), then all continuous and binary variables were entered into the model. As a result, seven components were obtained with Eigen-value> 1, covering 61.99% of the observed variance. The Sum of the asset variables weighted by the first component was used to calculate the socioeconomic score for each individual [26]. Inequality measurement The Concentration Index (C) was measured to evaluate inequality, which has been widely used to examine income-related inequalities in the health departments internationally. Its decomposition analysis is increasingly being used to study the determinants of health inequality in elderly [29,30]. 
To understand the concentration index, one must first become familiar with the concentration curve, which in the present study displays the share of depressive symptoms or anxiety accounted for by cumulative proportions of individuals in the population ranked from lowest to highest socioeconomic status. The x-axis of the concentration curve shows the cumulative percentage of the population, ranked by socioeconomic status, and the y-axis presents the cumulative percentage of the health outcome (depressive symptoms or anxiety). Therefore, if individuals, regardless of their economic status, have equal health outcomes, the curve will be a 45-degree line (the equality line) [31]. The concentration index is defined as twice the area between the concentration curve and the line of equality. The C value becomes negative when the outcome under study is concentrated among the lower socioeconomic groups; in this scenario the concentration curve lies above the equality line. On the contrary, the value of C becomes positive when the concentration curve lies below the equality line and the outcome under study is concentrated among the higher socioeconomic groups. Hence, the higher the absolute value of the index, the greater the inequality. The C ranges from +1 (the outcome under study is entirely concentrated among the rich) to -1 (the outcome under study is entirely concentrated among the poor), and a value of zero indicates equality [31,32]. The concentration index was calculated using the "conindex" command [33] with the option for bounded outcome variables, in Stata 15 software (StataCorp LLC, College Station, TX). Participants with complete data on all of the above variables were included in this study. The Wagstaff decomposition method was used in the current study [34]. The C has two components: the explained component, which identifies each determinant's contribution to socioeconomic inequality, and the unexplained or residual component (derived from the βs), which specifies the socioeconomic inequality that is not explained by the systematic variation of the determinants across socioeconomic groups. Elasticity is a unit-free measure of association; it shows the importance of the variation in the dependent variable per unit of change in the determinant. To calculate elasticity, the beta coefficient of each independent variable was multiplied by the mean of the same variable, and the result was divided by the mean of the outcome variable. For each determinant variable, the product of the elasticity and the concentration index of that determinant indicates its absolute contribution. Moreover, to obtain the percentage contribution of each determinant, the absolute contribution is divided by the concentration index of the dependent variable [34]. Given the binary outcomes in this study, we used a linear approximation, taking marginal effects from the logit model as the coefficients. Results The data of 7462 participants were used for the current analysis. The mean and standard deviation of the age of the participants was 61.0 ± 8.14 years, and most of them had primary education. Most of the participants (90%) lived with one or more other persons. The prevalence of overweight and obesity was 43.0% and 29.9%, respectively. Smokers accounted for 10.9% of the population. Data from 7462 and 7316 participants were available for depressive symptoms and anxiety scores, respectively. The mean and standard deviation (SD) of test scores for depressive symptoms and anxiety disorders were 3.94 (4.16) and 5.15 (3.28), respectively.
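For readers unfamiliar with the concentration index and the elasticity-based decomposition described in the Methods above, the following Python sketch shows the core calculations (the "convenient covariance" form of C, the Wagstaff normalisation for a binary outcome, and the contribution of a single determinant). It is illustrative only: the study itself used the Stata conindex command, and the variable names here are hypothetical.

```python
# Illustrative sketch of the concentration index and a determinant's contribution.
import numpy as np

def fractional_rank(ses_score: np.ndarray) -> np.ndarray:
    """Fractional rank of each person in the SES distribution (ties ignored for simplicity)."""
    order = ses_score.argsort().argsort()
    return (order + 0.5) / len(ses_score)

def concentration_index(outcome: np.ndarray, ses_score: np.ndarray) -> float:
    """C = 2 * cov(outcome, SES rank) / mean(outcome).
    Negative values mean the outcome is concentrated among the poor."""
    r = fractional_rank(ses_score)
    return 2 * np.cov(outcome, r)[0, 1] / outcome.mean()

def wagstaff_normalised(outcome: np.ndarray, ses_score: np.ndarray) -> float:
    """Normalisation often used for binary (bounded) outcomes: C / (1 - mean)."""
    return concentration_index(outcome, ses_score) / (1 - outcome.mean())

def contribution(beta: float, x: np.ndarray, outcome: np.ndarray,
                 ses_score: np.ndarray) -> float:
    """Absolute contribution of one determinant x:
    elasticity (beta * mean(x) / mean(outcome)) times the C of x itself."""
    elasticity = beta * x.mean() / outcome.mean()
    return elasticity * concentration_index(x, ses_score)
```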
The prevalence of depressive symptoms and anxiety in the total population was 12.2% (95% CI: 11.4-12.9) and 7.0% (95% CI: 6.4-7.6), respectively. More information on how these two disorders were distributed across the other independent variables is presented in Table 1. The prevalence of depressive symptoms and anxiety differed across socioeconomic groups. Within the highest SES group, depressive symptoms and anxiety were 6.5% and 3.4%, while within the lowest SES group they were 16.2% and 9.7%, respectively. The prevalence of depressive symptoms did not differ between age groups (p=0.287), while anxiety was less frequent in higher age groups (p=0.003). The concentration indices for anxiety and depressive symptoms were -0.195 (95% CI: -0.254, -0.136) and -0.206 (95% CI: -0.252, -0.159), respectively. The negative concentration indices imply that the inequality was in favor of the high SES group, and that anxiety and depressive symptoms were concentrated in the low SES group. Figure 1 illustrates the concentration curves of anxiety and depressive symptoms. The concentration curves for both disorders were above the equality line, indicating a pro-rich inequality. Table 2 presents the decomposition of the concentration index for anxiety. Among the studied variables, education, age, sex, occupational status, and number of grandchildren were the common determinants of inequality. The large elasticity of anxiety with respect to age is responsible for its large contribution to the anxiety concentration index. In contrast, there is a great deal of socioeconomic inequality in the number of grandchildren, education, and age, and so these make large contributions to the anxiety concentration index. Education and age make the largest proportional contributions to overall socioeconomic inequality. A similar pattern for the contribution of variables to inequality in depressive symptoms was also seen (Table 3), where education, age and sex had the highest contributions to depressive symptoms inequality. Discussion In this study, the concentration indices of anxiety and depressive symptoms used to determine socioeconomic inequality were -0.195 and -0.206, respectively, indicating a significant inequality in anxiety and depressive symptoms. Similar to other studies [35,36], the inequality in anxiety was concentrated among individuals with low socioeconomic status. The prevalence of this disorder was 9.7% within the lowest and 3.4% within the highest SES quintile. The prevalence of depressive symptoms was 16.2% in the lowest and 6.5% in the highest SES quintile. Although depression has frequently been conceptualised as a 'backward-looking' emotion, and anxiety as 'forward-looking' [37,38], both were associated with socioeconomic status in the current study. The prevalence of depression, which can affect inequality, differs greatly across nations [39][40][41]. Contrary to our results, another study in three European countries [42] found no association between income and depression in Spain, while in Finland and Poland, which had a lower prevalence of depression, a pro-rich inequality in depression was reported. The above comparison between studies indicates that socioeconomic inequality in depression is heterogeneous, and that other factors, including the measurement and definition of depression and SES, the region, the study period, and the study population and its sample size, should be considered [14].
In a study in Finland, Poland and Spain [43], higher income was associated with lower odds of depression in a logistic regression model adjusted for age and sex. This association was not significant in another model adjusted for age, sex and other demographic and behavioral variables and chronic diseases. Therefore, the analysis method and plan is another reason for differences between studies. Similar to our results, other studies in Spain [44], Korea [45], India [46], South Africa [47], and the US [48] have shown a pro-rich inequality in depression of varying extent. Richardson's study [49] focuses on the potential role that the social environment within countries may play in shaping inequalities and differences between countries. We found a socioeconomic inequality in anxiety that was in favor of high socioeconomic groups. This finding is in accordance with other studies around the world [50][51][52][53]. Decomposition of inequality in anxiety and depressive symptoms showed that the main modifiable factors causing this inequality in terms of C were education level, number of grandchildren (negative contribution) and employment status. Because the values of elasticity and concentration index differ for each studied variable, the contribution percentages of the factors affecting inequality are different; age and education played the most significant roles in the inequality of anxiety and depressive symptoms. The current study shows that elderly people with low education experience anxiety and depressive symptoms more. This finding has been reported in many studies, especially in developing countries, which reveal that lower levels of education were significantly associated with mental disorders such as anxiety, depression, and stress [54,55]. Education was the factor that contributed the most to the socioeconomic inequality of anxiety and depressive symptoms in the decomposition of the concentration index. About 70% of the inequality observed in anxiety and depressive symptoms was explained by education level. This result was consistent with the findings of other studies [56,57]. This could be explained by the fact that individuals with higher levels of education are more aware of ways to prevent anxiety, which improves their mental status [58]. A low level of education is an independent risk factor for anxiety; thus, interventions in education at the community level can be considered a way to reduce socioeconomic inequalities in anxiety [59]. Education is one of the social factors affecting health; it can create a strong network of communication and more social links for the elderly, which in turn leads to a better mental health state [59]. Higher education levels also help adults in the prevention of diseases, health promotion, access to health insurance, and adoption of healthy behaviors [60]. It seems that literacy is a development index in older people and the second leading indicator of vulnerability of mental status. Consequently, adopting policies aimed at increasing literacy among individuals with lower socioeconomic status could be one of the most substantial steps in reducing socioeconomic inequality in anxiety. Age was the second largest contributor to inequality in anxiety and depressive symptoms. The negative contribution of age to inequality in depressive symptoms and anxiety means that increasing age was associated with lower inequality, and that people at older ages were affected more equally by depressive symptoms and anxiety.
Therefore, any intervention to reduce the socioeconomic inequality in anxiety and depressive symptoms of the elderly (such as interventions to increase literacy) should be more focused on younger age groups. Other studies have also reported the role of age in socioeconomic inequality in mental health, and some of them have reported that anxiety, depressive symptoms and mental health worsen with age [18,19,40]. The reason for this discrepancy is the difference in the age ranges of this study and other studies. Most other studies have been conducted in adults of all age groups [19,61]. Of course, some studies in old age have also had results similar to ours [46,61]. Not having an occupation was another contributor to inequality in anxiety and depressive symptoms. It seems that having even a part-time job can play a key role in reducing anxiety [62]. Therefore, attention to employment status in the elderly is important to minimize inequality in depressive symptoms and anxiety. Further analysis of the data in this study showed that 21.24% of lonely and more anxious elderly were within the low socioeconomic groups, while 5.12% were in the high socioeconomic groups. Other studies have shown that living alone was strongly associated with depressive symptoms and depression [57,[63][64][65] and anxiety [66] in the elderly. In this regard, Berkman argues that social support creates a sense of intimacy through emotional support, and that family is the most important factor in establishing loving communication or emotional support [67]. Another study in India also indicates the protective role of a good social network against depression in the elderly [68]. The United Nations, in its 17 Sustainable Development Goals, has emphasized the necessity of paying more attention to the impact of economic and social conditions on health inequalities, including education, engagement in policy decisions, employment, and socioeconomic differences. Although the principal causes of inequality in anxiety and mental disorders may vary by region, culture, and gender, efforts should be made to address them [69]. It should also be noted that other country-level factors that were not investigated in this study also have an effect on depression and anxiety [70]. A large sample size, good design, robust analysis of the data, proper implementation of the study, and systematic monitoring to ensure quality assurance in data gathering are the main strengths of this study. However, the exclusion of patients with major depression limits the comparability of our results with some other studies and, to some extent, underestimates the inequality. The limited age range of participants (over 50 years) also limits the generalizability of the results. As another limitation, it should be noted that the decomposition of inequality was based on the factors examined in the questionnaire; evidently, not all relevant factors were examined in the present study. Finally, no causal inference can be drawn from this cross-sectional study. What was described as a reason for inequality in anxiety is only the association between the variables under study, with no causal role established. Therefore, it is recommended that this methodology be used to study inequalities in longitudinal studies with an appropriate design. Conclusion Lower socioeconomic groups were more affected by anxiety and depressive symptoms among older adults of Neyshabur.
Lower education, unemployment, and younger age were the main factors playing a considerable role in the inequality of anxiety and depressive symptoms. These factors should be considered in policymaking and in the development of new interventions to lower the prevalence of anxiety and depressive symptoms in the elderly.
2022-12-05T14:32:04.957Z
2022-12-01T00:00:00.000
{ "year": 2022, "sha1": "20c154187f15512fbf2d752e88c12d0a88659e0a", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "20c154187f15512fbf2d752e88c12d0a88659e0a", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
214262720
pes2o/s2orc
v3-fos-license
Effects of seed morphology and elaiosome chemical composition on attractiveness of five Trillium species to seed‐dispersing ants Abstract Morphological and chemical attributes of diaspores in myrmecochorous plants have been shown to affect seed dispersal by ants, but the relative importance of these attributes in determining seed attractiveness and dispersal success is poorly understood. We explored whether differences in diaspore morphology, elaiosome fatty acids, or elaiosome phytochemical profiles explain the differential attractiveness of five species in the genus Trillium to eastern North American forest ants. Species were ranked from least to most attractive based on empirically‐derived seed dispersal probabilities in our study system, and we compared diaspore traits to test our hypotheses that more attractive species will have larger diaspores, greater concentrations of elaiosome fatty acids, and distinct elaiosome phytochemistry compared to the less attractive species. Diaspore length, width, mass, and elaiosome length were significantly greater in the more attractive species. Using gas chromatography–mass spectrometry, we found significantly higher concentrations of oleic, linoleic, hexadecenoic, stearic, palmitoleic, and total fatty acids in elaiosomes of the more attractive species. Multivariate assessments revealed that elaiosome phytochemical profiles, identified through liquid chromatography–mass spectrometry, were more homogeneous for the more attractive species. Random forest classification models (RFCM) identified several elaiosome phytochemicals that differed significantly among species. Random forest regression models revealed that some of the compounds identified by RFCM, including methylhistidine (α‐amino acid) and d‐glucarate (carbohydrate), were positively related to seed dispersal probabilities, while others, including salicylate (salicylic acid) and citrulline (L‐α‐amino acid), were negatively related. These results supported our hypotheses that the more attractive species of Trillium—which are geographically widespread compared to their less attractive, endemic congeners—are characterized by larger diaspores, greater concentrations of fatty acids, and distinct elaiosome phytochemistry. Further advances in our understanding of seed dispersal effectiveness in myrmecochorous systems will benefit from a portrayal of dispersal unit chemical and physical traits, and their combined responses to selection pressures. KEYWORDS: ants, chemical ecology, insect-plant interaction, myrmecochory, oleic acid … (Kruger, & Linsenmair, 2004; Heil, Fiala, Kaiser, & Linsenmair, 1998; Paiva, Buono, & Lombardi, 2009; Rickson, 1976; Shenoy, Radhika, Satish, & Borges, 2012; Webber, Abaloz, & Woodrow, 2007), and these food resources are typically not as attractive to generalist ants as they are to the plants' specific ant mutualists (Davidson, Foster, Snelling, & Lozada, 1991; Gonzalez-Teuber & Heil, 2009; Heil et al., 2004, 1998). Furthermore, ant preferences for specific chemical compounds in EFN are highly species-specific, suggesting mutualists exert strong stabilizing selection on EFN chemotypes. Although myrmecochory is a facultative mutualism rather than an obligate mutualism as in tropical myrmecophytic systems, it is possible that seed-dispersing ants nevertheless exercise a similar stabilizing selection on elaiosome phytochemical profiles such that a certain narrow chemotype is defined.
This hypothesis has not been tested in myrmecochorous systems; in particular, studies are lacking that characterize the metabolomic profiles of elaiosome phytochemistry and assess whether single compounds or entire phytochemical profiles affect diaspore retrieval and dispersal to the nest by ants. Despite the fact that three main aspects of diaspores (morphological, nutritional, chemical) have been linked to seed dispersal by ants, rarely have multiple diaspore traits been investigated in the same study (but see Leal et al., 2014), and the most important mechanisms governing ant preference remain unclear. We explore whether differences in diaspore morphology, elaiosome fatty acids, or elaiosome phytochemical profiles explain the differential attractiveness of diaspores of plants in the genus Trillium (Order Liliales, Family Melanthiaceae) to ants, primarily of the keystone seed-dispersing genus Aphaenogaster. Although a number of ant species disperse seeds of myrmecochores in eastern North America (Gaddy, 1986;Zelikova, Sanders, & Dunn, 2011), Aphaenogaster ants are the primary seed dispersal vector for most temperate ant-dispersed flora in this region and are responsible for approximately 74% of myrmecochore seed collection in forests where Aphaenogaster have been reported (Ness et al., 2009). Our study system is comprised of five eastern North American Trillium species, two of which are geographically widespread and three of which are narrowly endemic; herein, pairs of widespread and endemic species co-occur in multiple sites across the southern Appalachian region of the United States. In this system, Aphaenogaster ants display a preference for dispersing the diaspores of the geographically widespread trilliums compared to their co-occurring, endemic congeners (Miller & Kwit, 2018). Using field-based observations of seed dispersal probabilities from Miller and Kwit (2018), we ranked each of the five study species from least attractive to most attractive (see Materials and Methods below) and use this scale of attractiveness as a comparative framework for exploring diaspore morphology and elaiosome chemistry throughout the present study. Our first objective is to quantify interspecific differences in diaspore morphology among our study species. We predict that the more attractive species of Trillium will have larger diaspores overall, and higher elaiosome mass and elaiosome-seed mass ratios than less attractive species. Our second objective is to assess interspecific differences in concentrations of elaiosome fatty acids. We predict that concentrations of key fatty acids, including oleic acid, will be higher in the elaiosomes of the more attractive species. Our third objective is to assess interspecific differences in elaiosome phytochemical profiles. We predict that elaiosome phytochemical profiles of the more attractive species will be more tightly clustered in multivariate space than those of less attractive species, reflecting greater stabilizing selection by ant dispersers. | Study species and sites Eastern North America is a biodiversity hotspot for the plants in the genus Trillium-a group of myrmecochorous (ant seed dispersed) perennial understory forest herbs-with at least 29 species occurring in this region (Freeman, 1975;Ohara, 1989). In the southern Appalachian region, many species of Trillium are sympatric. 
Although co-occurring myrmecochore species often temporally stagger fruiting (Gordon, Meadley-Dunphy, Prior, & Frederickson, 2019; Warren, Giladi, & Bradford, 2014), the sympatric species studied here have overlapping flowering and fruiting phenology (Miller & Kwit, 2018). Due to the spatial proximity of plants at our study sites, foraging ants occasionally come across mature diaspores of co-occurring congeners at the same time (C. N. M., personal observation), resulting in potential interspecific competition for dispersal services. We investigated the importance of morphological and chemical diaspore attributes in determining seed attractiveness to foraging ants using the five species of Trillium studied in Miller and Kwit (2018). These included the geographically widespread species Trillium catesbaei Elliott and T. cuneatum Raf., as well as the range-restricted, endemic species T. lancifolium Raf., T. discolor Wray ex Hook, and T. decumbens Harbison. Pairs of these species co-occur in multiple locations throughout the southern Appalachians (see Miller & Kwit, 2018). We located six study sites in spatially distinct forest stands (average size of 130 ha; stands separated from one another by at least 3 km) containing sympatric populations of species pairs, and one additional site containing only T. lancifolium to supplement a lack of available mature fruits for this species at the other sites (n = 7 sites; Table 1). Five sites were located in northwest Georgia, and two sites were located near the borders of Georgia, North Carolina, and South Carolina. All sites were low-lying, mesic, deciduous forests with moderate to thick canopy cover. Sites in northwest Georgia were located in the Limestone Valley soil province. We collected mature fruits just prior to natural dehiscence from the study species at their respective sites during summer 2018 for use in morphological and chemical analyses. Upon return to the laboratory, we stored fruits at −20°C for 1 month prior to diaspore morphological analyses and 4 months prior to chemical analyses. To organize the species by their overall attractiveness to ants, we used seed dispersal probabilities calculated for each species based on in situ observations conducted during a previous study in the same study system (Miller & Kwit, 2018). In that study, the proportion of seeds removed by ants (primarily of the genus Aphaenogaster) in 1 hr from natural seed depots was averaged for 15 individuals of each species at three study sites; the proportion across sites for each species was then averaged (n = 45 hr of observation/species). We interpret these averaged proportions as the probability of seed dispersal, a measure of attractiveness for each species of Trillium and a comparative framework for the present study. | Diaspore morphology To address our first study objective, quantifying interspecific differences in diaspore morphology, we measured diaspore and elaiosome length, width, and mass for 26 diaspores of each species, representing multiple individuals from each study site (n = 130). We took the fresh mass of entire diaspores (g) and then measured the length and width of diaspores (mm) using digital calipers (Mitutoyo Digimatic Caliper, 0.01 mm resolution). We then removed the elaiosome from each seed using a straight razor and repeated the above measurements for the elaiosome. Elaiosome-seed mass ratios were calculated by dividing the mass of the elaiosome by the mass of the entire diaspore.
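As a minimal sketch of the two calculations described above (the averaged seed dispersal probabilities and the elaiosome-seed mass ratios), the following Python code assumes hypothetical long-format data frames; it is not the authors' code.

```python
# Illustrative sketch. `depots` is assumed to have columns "species", "site" and
# "prop_removed" (proportion of seeds removed by ants in 1 hr per observed plant);
# `diaspores` has "species", "elaiosome_mass" and "diaspore_mass". Names are hypothetical.
import pandas as pd

def dispersal_probability(depots: pd.DataFrame) -> pd.Series:
    """Average removal proportion per species within each site, then average across sites."""
    per_site = depots.groupby(["species", "site"])["prop_removed"].mean()
    return per_site.groupby(level="species").mean().sort_values()

def elaiosome_seed_ratio(diaspores: pd.DataFrame) -> pd.Series:
    """Elaiosome mass divided by the mass of the entire diaspore."""
    return diaspores["elaiosome_mass"] / diaspores["diaspore_mass"]
```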
After evaluating each response variable (diaspore length, diaspore width, diaspore mass, elaiosome length, elaiosome width, elaiosome mass, and elaiosome-seed mass ratio) for normality by generating normal probability (Q-Q) plots and histograms to visualize model residuals, we determined that the residuals of all response variables were normally distributed. Each response variable was then compared for the five study species using a linear mixed-effects model in the package lmerTest (Kuznetsova, Brockhoff, & Christensen, 2017) in R (R Core Team, 2017). This test uses Satterthwaite's method for approximating degrees of freedom for t and F tests, which is more conservative than residual degrees of freedom approximations; therefore, the assumption of homogeneity of variances across samples can be relaxed (Keselman, Algina, Kowalchuk, & Wolfinger, 1999). Each model included one morphological trait as the response variable, species of Trillium as the fixed effect, and study site as a random effect to account for the potential effect of environmental heterogeneity or intraspecific population. [TABLE 1: Study species collected at field sites in June 2018, and their relative status as attractive or less attractive to ant dispersers.] [FIGURE 1: Empirically-derived averaged probability of seed dispersal by ants, primarily of the genus Aphaenogaster, for five species of Trillium at seven sites in the southeastern U.S. Bars represent standard error. Blue indicates lower seed dispersal probabilities (i.e., "least attractive" species) while pink indicates highest seed dispersal probabilities (i.e., "most attractive" species).] Tukey post hoc tests were performed to identify the pairwise direction of effects using the package multcomp (Hothorn, Bretz, & Westfall, 2008). To visualize trends, which were impacted by the random effect of study site, we generated model residuals from linear models with the effect of site partialled out. Boxplots display these model residuals, which depict the isolated effect of species on diaspore morphology traits. | Elaiosome fatty acid profiles To address our second study objective, quantifying interspecific differences in concentrations of elaiosome fatty acids among species of Trillium, we assessed triglyceride, diglyceride, and free fatty acid forms using gas chromatography-mass spectrometry (GC-MS). Detailed methodology relating to the GC-MS analysis can be found in Data S1. We produced standard curves by running five concentrations (10, 1, …). Total fatty acid concentrations and concentrations of fatty acids in free, di-, and triglyceride forms were compared for the five study species using linear mixed-effects models in the package lmerTest in R, as above. Concentrations (% fresh weight) of five individual fatty acids and of total fatty acids were evaluated for normality prior to running linear mixed-effects models; in each case, the assumption of normality was violated. Therefore, we applied data transformations to raw concentrations to approximate normality (logarithmic or square-root; see Keene, 1995; Osborne, 2002). As in the Diaspore Morphology methods above, our use of Satterthwaite's method for approximating degrees of freedom allowed us to relax the assumption of homogeneity of variances. Study site was included as a random effect in each model to account for the effects of environmental heterogeneity or interspecific population. Species and fatty acid form (free, di-, or triglyceride) were included as fixed effects.
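A rough Python analogue of the linear mixed-effects models described above (trait ~ species with study site as a random intercept, followed by pairwise comparisons) is sketched below. The original analyses were run in R with lmerTest and multcomp; this statsmodels version is illustrative only, uses hypothetical column names, and does not reproduce the Satterthwaite degrees-of-freedom approximation.

```python
# Illustrative sketch, not the authors' R code. `df` is assumed to have one row per
# diaspore with columns such as "diaspore_mass", "species" and "site".
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def fit_trait_model(df: pd.DataFrame, trait: str):
    """Random-intercept model: trait ~ species + (1 | site)."""
    model = smf.mixedlm(f"{trait} ~ species", data=df, groups=df["site"])
    return model.fit()

def tukey_species(df: pd.DataFrame, trait: str):
    """Pairwise Tukey HSD comparisons among species (ignores the random effect)."""
    return pairwise_tukeyhsd(endog=df[trait], groups=df["species"])
```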
Interactions between species and form were tested for each model; in no case was the interaction term significant, so interactions were dropped from all models. Tukey post hoc tests were performed to identify the pairwise direction of effects between species and between fatty acid forms. To visualize trends, we generated boxplots using model residuals as in the Diaspore Morphology methods above. | Elaiosome phytochemical profiles To address our third study objective, assessing interspecific differences in elaiosome phytochemistry, we characterized and compared profiles of both known and unknown elaiosome metabolites using liquid chromatography-mass spectrometry (LC-MS). Each of the five study species was represented by six sample replicates (N = 30). Samples were taken from 2 to 6 individuals from the representative study sites (Table 1). Detailed methodology relating to the LC-MS analysis can be found in Data S1. LC-MS produced relative concentrations of phytochemical compounds across samples, which were compared using z-transformed peak areas from extracted ion chromatograms. To visualize clustering of elaiosome phytochemical profiles across species, we performed a constrained redundancy analysis (RDA) using the vegan package (Oksanen et al., 2017) in R with species of Trillium as the explanatory variable and peak areas of the partial phytochemical data (known compounds; n = 122) as the response. RDA, a method to summarize the variation in a set of response variables that can be explained by a set of explanatory variables, is a constrained version of principal components analysis wherein canonical axes (i.e., linear combinations of response variables) must also be linear combinations of the explanatory variables (Legendre & Legendre, 1998). We did not explicitly account for variation in the phytochemistry data that may have been explained by study site because RDA partitions the total variance of the data into constrained variances (i.e., variation in the response matrix that is redundant with the variation in the explanatory matrix) and unconstrained variances (i.e., variation in the response matrix that is not redundant with the variation in the explanatory matrix), which can be compared to determine how much of the variation in the response is accounted for by the explanatory variable (Legendre & Legendre, 1998). We used all six sample replicates per species, for a total of 30 samples. The model ran for 1,000 permutations and the significance of the explanatory variables and the first two RDA axes were assessed using ANOVA. We repeated this procedure for the full phytochemical data (known compounds + unknown features; n = 7,552) detected by LC-MS. To better understand the major chemical drivers of the multivariate spread of the partial (i.e., known compounds) and full (i.e., known + unknown compounds) phytochemical data, we constructed four random forest models using the package randomForest (Liaw & Wiener, 2002) in R. Random forest is a supervised machine learning technique that builds a classification tree by repeatedly splitting the data based on whether or not they fall above or below a threshold value of each explanatory variable in the model (Biau, 2012;Bielby, Cardillo, Cooper, & Purvis, 2010). Random forest ranks the relative importance of different predictors in distinguishing among the levels and provides a measure of prediction accuracy, cross-validation, for correctly classifying unknown samples into groups according to the predictor variables. 
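An illustrative sketch of a random forest classification model of this kind is given below, using scikit-learn rather than the R randomForest package used in the study; the peak-area predictors and species labels are hypothetical placeholders.

```python
# Illustrative sketch: rank phytochemical compounds by their importance for
# distinguishing Trillium species, as an analogue of the RFCMs described above.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def rank_compounds(peaks: pd.DataFrame, species: pd.Series, n_top: int = 10):
    """Fit an RFCM on z-scored peak areas and return out-of-bag accuracy plus
    the compounds with the highest importance for separating species."""
    rf = RandomForestClassifier(n_estimators=1000, oob_score=True, random_state=1)
    rf.fit(peaks, species)
    importance = pd.Series(rf.feature_importances_, index=peaks.columns)
    return rf.oob_score_, importance.sort_values(ascending=False).head(n_top)
```

The out-of-bag accuracy plays the role of the cross-validated prediction accuracy described above, since each tree is evaluated on the samples it was not trained on.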
By training the model using a subset of samples, random forest is able to classify the remaining samples with higher accuracy than would be achieved by always guessing the most common category. In our analysis, random forest classification models (RFCMs) were implemented to identify compounds with the highest relative importance in distinguishing among the five species of Trillium, a categorical level. Two RFCMs were used to evaluate the partial and full phytochemical data sets, respectively. Peak areas of the known compounds were used as the set of possible predictors for the partial RFCM (n = 122), whereas peak areas of all compounds (known + unknown) were used as the set of possible predictors for the full RFCM (n = 7,552). | Diaspore morphology Diaspore length, width, and mass differed significantly among the five Trillium species (F 4,15 = 4.98, p = .009; F 4,13 = 7.79, p = .002; F 4,15 = 9.03, p < .001, respectively). Elaiosome length also differed significantly among species (F 4,15 = 5.59, p = .006). Post hoc comparisons revealed that the second most attractive species, T. cuneatum, had significantly greater diaspore length than the least attractive species, T. decumbens; that the two most attractive species, T. catesbaei and T. cuneatum, had significantly greater diaspore width than the least attractive species; and that the two most attractive species had significantly greater diaspore mass than the least attractive species (Table S1; Figure 2). Post hoc comparisons revealed that the most attractive species, T. catesbaei, had significantly greater elaiosome length than all of the other species except for T. lancifolium. | DISCUSSION In this study, we investigated the differences in diaspore morphology, elaiosome fatty acids, and elaiosome phytochemistry among five species of Trillium with different levels of attractiveness to seed-dispersing ants in the southern Appalachian region of North America. Of the morphology metrics considered in our study, diaspore length, diaspore width, diaspore mass, and elaiosome length were significantly different among species, and post hoc tests revealed that values of these traits tended to increase with seed attractiveness. These findings provide support for the established hypothesis that seed size is a key trait determining the probability of seed dispersal, with larger diaspores overall being preferred by seed-dispersing ants (Hughes & Westoby, 1992; Takahashi & Itino, 2015). Contrary to our prediction, elaiosome mass, elaiosome width, and elaiosome-seed mass ratio were not significantly different among species. These results are not consistent with the results of several studies showing that ants prefer larger elaiosome biomass and/or elaiosome-seed mass ratios (Leal et al., 2014; Levine et al., 2019; Mark & Olesen, 1995), although they do support the finding that ants do not exert significant selection on elaiosome size in Helleborus foetidus. Considering that the most attractive species in our study, T. catesbaei, had smaller average elaiosome mass and smaller elaiosome-seed mass ratios than all of its congeners, these aspects of elaiosome morphology do not appear to be the most important factors contributing to the attractiveness of Trillium seeds to ants. Larger diaspores likely enhance attractiveness of seeds to ants, but ants may not always prefer seed species with larger elaiosome mass or elaiosome-seed mass ratios.
The most attractive species also had significantly higher concentrations of oleic acid than the less attractive species, providing support for the hypothesis that this compound acts as a behavior-releasing signal that stimulates ants to pick up and carry items to or from the nest (Brew et al., 1989; Marshall, Beattie, & Bollenbacher, 1979; Qiu et al., 2015; Skidmore & Heithaus, 1988). Oleic acid is the most abundant fatty acid in plant and animal tissue and the biosynthetic precursor of linoleic and linolenic acids (Christie, 2005), essential nutrients that are not synthesized by hymenopterans (Barbehenn, Reese, & Hagens, 1999; Canavoso, Jouni, Karnas, Pennington, & Wells, 2001; Dadd, 1973; Hagen, Dadd, & Reese, 1984). As the main constituent in insect hemolymph, oleic acid in the form of diolein is of particular nutritional importance for ant larvae (Fischer et al., 2008; Municio, Odriozola, & Pérez-Albarsanz, 1975; Thompson, 1973). … (1980) found that elaiosomes of Datura discolor (Solanaceae) were conspicuously absent of oleic acid, whereas palmitic acid, stearic acid, linoleic acid, and linolenic acid were all present. We found that linoleic, hexadecanoic, stearic, and palmitoleic acids were also present in higher concentrations in the more attractive species of Trillium, so these fatty acids likely contribute to the overall attractiveness of seeds to ants. Our prediction that elaiosome phytochemical profiles of the more attractive species of Trillium would be distinct from their less attractive congeners was supported. Although the partial RDA shows that all study species were independently clustered in multivariate space, … by these endemic species (Miller & Kwit, 2018) in seeds that had a higher probability of being dispersed by ants. The potential roles played by the remaining compounds selected by random forest models are unknown, but could be the focus of further investigations. The defense trade-off hypothesis posited by Cipollini and Levey (1997), and further explored by Schaefer, Schmidt, and Winkler (2003), may explain the presence of phytochemicals that appear to reduce the attractiveness of trillium seeds. Plants must balance attracting dispersers and repelling the mortality agents that exposed fruits/seeds come into contact with prior to and following dispersal. Whereas some compounds present in elaiosomes might deter ant dispersers, this cost may be offset by better defenses against granivores or microbial pathogens. In myrmecochores, this could be of particular importance, given the likelihood that rodents will prey on seeds that are not removed by ants within a few hours of dehiscence (Heithaus, 1981). The importance of microbial pathogens has not been investigated thoroughly in myrmecochore systems, so deterrent compounds in elaiosomes might also play a role in defending the seed against fungal infections. The defense trade-off hypothesis would imply that mortality selection agents (i.e., granivores, …). [FIGURE 4: Constrained RDA evaluating general elaiosome phytochemical profiles (n = 6 samples per species; n = 30), with relative abundance of 122 known phytochemical compounds as the response and species as the fixed effect. Ellipses represent the standard deviation of the points around the centroid for each species. Blue = least attractive species, pink = most attractive species. The overall model is significant (p < .001), meaning species is a significant predictor of phytochemical composition of elaiosomes.]
…and several other morphologically similar species including A. picea (Umphrey, 1996), are difficult to reliably distinguish from one another and may be characterized by polyphyly (DeMarco & Cognato, 2016). Our study system encompasses the ecotone between A. picea and A. rudis (Warren et al., 2016). Although both species are effective seed dispersers, they may be marked with differences in seed dispersal effectiveness that are not accounted for in this paper (see Warren, Bahn, & Bradford, 2011); as such, we acknowledge that some of the variability in seed dispersal probabilities across species of Trillium may be due to differences in the seed disperser assemblages at each study site. However, the fact that pairs of the study species co-occur within multiple sites, and thus were exposed to identical disperser assemblages during the in situ observations of seed dispersal, likely minimizes these differences. Based on our results, the trillium seeds that were most attractive to ants are characterized by larger diaspores overall (but not necessarily by larger elaiosome biomass or elaiosome-seed mass ratios), and by elaiosomes with high concentrations of the α-amino acid methylhistidine and the carbohydrate d-glucarate, low concentrations of salicylate, xanthine, histidine, citrulline, and pantothenate, and high concentrations of free oleic, linoleic, hexadecenoic, stearic, palmitoleic, and total fatty acids. Many of these traits are likely correlated, and thus there is some redundancy in our multi-dimensional description of an attractive myrmecochore seed. Our results regarding diaspore and elaiosome morphology and elaiosome fatty acids complement the work of many previous studies, but we provide novel insights into the potential roles played by previously unknown components of the broader elaiosome phytochemical profile. Previous work in other Trillium species corroborates our conclusion that seed-dispersing ants respond to a complex of characters, including morphological and chemical seed attributes (Gunther & Lanza, 1989; Lanza et al., 1992). Further advances in our understanding of seed dispersal effectiveness in myrmecochorous as well as other animal-mediated seed dispersal systems will require a portrayal of dispersal unit chemical and physical traits, and their combined responses to selection pressures. ACKNOWLEDGMENTS We thank E. Schilling, J. Clark, and J. Fordyce for their input and suggestions throughout the project. We thank J. Fordyce for help with statistical analyses. Thanks to the late T. Patrick, and the Georgia DNR for assistance in locating study sites. We thank K. Lawhorn for field assistance. This project was … DATA AVAILABILITY STATEMENT Data are available from the Dryad Digital Repository: https://doi.org/10.5061/dryad.hhmgqnkcz
2020-03-05T11:10:11.681Z
2020-02-27T00:00:00.000
{ "year": 2020, "sha1": "bc096e088fae8a7908f0d07aa38cabcb9e2ba6c8", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ece3.6101", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ae1c434821e192f3db02d23a0823a19d3d0350b5", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
240343253
pes2o/s2orc
v3-fos-license
Presence, Absence, and Alterity: Fire Space and Goffman’s Selves in Postdigital Education The literature on space in higher education has arguably been dominated by the concept of ‘learning spaces’. In this paper, I will argue that this construct, while appearing student-focused and creative, is ideologically circumscribed by an underlying social constructivism. Following Bayne et al. (2014), I draw on science and technology studies to consider social topologies, in particular regional space, network space, and their proposed fluid space, and the work of Law and colleagues on the category of fire space, derived from Bachelard’s (The Psychoanalysis of Fire, 1964) disquisition on the nature of fire. I work with this construct in an analysis of postdigital education, in particular looking at synchronous interaction via video conferencing software such as Zoom. Linking this analysis to the work of Goffman and his concept of the lecturer selves (Goffman in Forms of Talk, 1981), I argue that the concept of fire space may allow for a more nuanced and accurate account of the flickering, contingent nature of (co) presence, absence, and alterity, allowing for a more immanent account of digital interaction in ‘distance’ or ‘online’ education. Introduction The literature on space in relation to postdigital higher education, and indeed higher education more broadly, has arguably been dominated by the concept of 'learning spaces'. In this paper, I will argue that this construct, while appearing studentfocused and creative, is ideologically circumscribed by an underlying social constructivism, a fundamentally performative framing, and an ethos of 'learnification' (Biesta 2012), despite the welcome influence of sociomaterial perspectives in recent years (e.g. Fenwick et al. 2011) and related recognition of nonhuman actors. In this critique, I will pay particular attention to the effects of using 'learning' as an adjective, and I will propose that there is a danger the generative insights of posthumanism and sociomateriality may be lost if subsumed under a pragmatic, 'what works' discourse, or if they are merely added to existing notions of 'learning space'. Instead I argue that more fundamental shifts in theoretical focus should ensue. Following Bayne et al. (2014), I draw on Mol and Law (1994) to consider social topologies, in particular regional space, network space, and their proposed fluid space. I then review Law and Mol's (2001) discussion of the additional category of 'fire space', which they derive from Bachelard's (1964) disquisition on the nature of fire. In a challenging and seemingly tangential move, Law and Mol draw three points from Bachelard. The first point is that, for them, the trope of death and rebirth can be seen '…as a metaphor for treating the continuity of shape as an effect of discontinuity…, and that in a topology of fire, constancy is produced in abrupt and discontinuous movements' (Law and Mol 2001: 615). Their second point is that Bachelard's analysis can be seen as '…a call for attending to discontinuous transformation as a flickering relation between presence and absence'; they urge us to see fire as '…a metaphor for thinking about the dependence of that which cannot be made present on that which is absent'. They go on to propose that '… in fire space a shape achieves constancy in a relation between presence and absence: the constancy of object presence depends on simultaneous absence or alterity'. 
Their third point refers to Bachelard's discussion of the 'star pattern' of reverie, which, they argue '…evokes a specific version of the relation between presence and absence: a link between a single present centre and multiple absent Others' (loc cit). The notion of the fire object is also proposed in Law and Singleton (2005). In this paper, I will attempt to work with this construct in an analysis of postdigital education, in particular looking at synchronous interactions via video conferencing software such as Zoom. In my discussion, I also link this analysis to the work of Goffman and his concept of the lecturer selves (Goffman 1981). I will argue that the concept of fire space may allow for a more nuanced and accurate account of the flickering, contingent nature of (co) presence, absence and alterity, allowing for a more immanent account of digital interaction in 'distance' or 'online' education, in what Law and Mol characterise as '…patterns of conjoined alterity' (loc cit). This will be linked to the notion of selves in terms of not only verbal and textual performance as Goffman sets out, but also in terms of presence, absence, and alterity of the embodied self, image, representation and voice, in an attempt to extend Goffman's analysis to address the nature of self and presence in postdigital settings. I will conclude by arguing that theorisations of space which turn the focus away from 'learning' in postdigital education may offer the field a more granular view of the flickering, ephemeral, and at times uncanny nature of these encounters. The Problem with 'Learning Spaces' The term 'learning spaces' has become commonplace in higher education over recent years, with reference to the ways in which aspects of the physical campus might influence student engagement. In an early review, Temple and Filippakou (2007) discuss the issue, including guidance surrounding the use of space in universities. JISC (2006) provided the following definition: A learning space should be able to motivate learners and promote learning as an activity, support collaboration as well as formal practice, provide a personalised and inclusive environment, and be flexible in the face of changing needs. The part technology plays in achieving these aims is the focus of this guide (JISC 2006: 2). The term 'learning space' is used here and is prevalent in the sector. In Gourlay and Oliver (2018), we discuss this document in detail, pointing out that one of the features of the term 'learning spaces' is that it is not generally used to refer to classrooms, but more to public areas outside the classroom, such as a 'learning cafe' or another space designated for student collaborative working. Learning spaces are explicitly defined as a replacement for the previously dominant 'teaching spaces', which are described as retrograde. In the 2018 critique, we made the point that the way the concept is portrayed by JISC serves to de-emphasise teaching, instead promoting small group interaction outside of formal instruction as the desired mode of engagement for students, with a strong emphasis on observable interaction. We also pointed out the way in which 'learning spaces' are enrolled in an effort to make the university appealing, with the students positioned as customers whose attention must be competed for via a range of material commodities and a 'proactive service-delivery culture'.
Digital technologies are ubiquitous in this imaginary, but how they are used in terms of scholarship (rather than information retrieval) is not explored. We also analysed a more recent guide to 'learning space' (UCISA 2016) that refers to the concept of 'built pedagogy' (Monaghan 2002), defined as '…architectural embodiments of educational philosophies', based on the assumption that '… the way in which a space is designed shapes the learning that takes place in that space' (UCISA 2016: 9). In both of these guides, a strongly deterministic relationship is assumed between the form of the space and the resultant 'learning', for which observable interaction seems to stand as a proxy. In terms of digital technology in education, this stands in contrast with what Law and Singleton (2005) called the 'incorporeal fallacy', the idea that materiality and bodies somehow disappear in the realm of the digital, in a fantasy of untrammelled freedom from the constraints they represent. This assumption surrounding the nature of space has, however, been challenged in the literature, as the complex, co-constitutive, and sociomaterial nature of education practices has become more recognised (e.g. Gourlay and Oliver 2018; Acton 2017), alongside the growing influence of the mobilities paradigm (Hannam et al. 2006; Urry 2007) on research into higher education (e.g. Enriquez 2011, 2013; Edwards et al. 2011; Hamilton and Friesen 2013; Bayne et al. 2014), plus work more broadly on the nature of digital objects (e.g. Adams and Thompson 2016). These bodies of work provide growing insights into the sheer complexity, contingency, and shifting nature of educational practices and engagement in terms of spatiality, embodiment, and movement. These are crucial questions for scholars of higher education when considering both the campus, and also the nature of online engagement. In foundational work focused on what was at the time called 'computer conferencing', the related question of 'presence' has been explored, in particular how presence can be understood when individuals are not co-present in a physical, face-to-face setting. Garrison and colleagues explored the various dimensions of presence (Garrison et al. 2000, 2001), with a later paper pointing out that 'interaction is not enough' to create a sense of presence (Garrison and Cleveland-Innes 2005). However, given the centrality of the intersection of space and presence to remote digital education, arguably, it has not been adequately theorised in the literature. An exception is Bayne et al. (2014), who draw on theoretical work in science and technology studies focused on the concept of topology, in an insightful discussion of the nature of 'distance' learning and various conceptions of the campus held by remote students. I seek to build on this work by drawing on the same set of theoretical resources, in order to relate this to questions around the complex and multifaceted nature of space and presence in digital education. Social Topologies Mol and Law (1994), in a consideration of the medical condition anaemia, discuss 'social topology', which they define as follows: Unlike anatomy, topology doesn't localise objects in terms of a given set of co-ordinates. Instead, it articulates different rules for localising in a variety of coordinate systems. This it doesn't limit to the three standard axes, X, Y and Z, but invents alternative systems of axes.
In each of these, another set of mathematical operations is permitted which generates its own 'points' and 'lines'. These do not necessarily map on to those generated in an alternative axial system. Even the activity of 'mapping' itself differs between one space and another. Topology, in short, extends the possibilities of mathematics far beyond its original Euclidean restrictions by articulating other spaces. (Mol and Law 1994: 643) They take this concept from mathematics and adapt it to social theory, based on the contention that 'the social' does not exist as a single spatial type. Rather, it performs several kinds of space in which different 'operations' take place. First, there are regions in which objects are clustered together and boundaries are drawn around each cluster. Second, there are networks in which distance is a function of the relations between the elements and difference a matter of relational variety (Mol and Law 1994: 643). However, they suggest that in addition to these two kinds of space, '[s]ometimes, we suggest, neither boundaries nor relations mark the difference between one place and another. Instead, sometimes boundaries come and go, allow leakage or disappear altogether, while relations transform themselves without fracture. Sometimes, then, social space behaves like a fluid.' (Mol and Law 1994: 643) Discussing the case of the prevalence of anaemia in the Netherlands in comparison with African nations, they point out the complications involved in the measurement of the condition. From an actor-network point of view, '…the space in which regions can be drawn and differentiated exists. But it doesn't exist in the order of things. Rather, it is an effect or a product which depends on another quite different kind of space, the space of networks. This isn't regional in character, but is generated within a network topology.' (Mol and Law 1994: 649) Discussing the nature of network topology, they point out that proximity is not metric, and that '… "here" and "there" are not objects or attributes that lie inside or outside a set of boundaries… places with a similar set of elements and similar relationships between them are close to one another, and those with different elements or relations are far apart'. (Mol and Law 1994: 649) They use the example of two haemoglobin meters which may be geographically distant, but are close in a network topology, an inter-topological effect when one topology meets another, in a 'fold' of regional surfaces (Cooper 1992). However, they point out that this type of fold can only take place 'if the network holds', giving the example of diagnostic medical labs in the Netherlands in comparison with those in Africa. The measurement is not in fact immutable. 'The folded surface of the region starts to flatten out, and the space-time tunnel of the network dissolves.' (Mol and Law 1994: 652) They provide a detailed discussion of the ways in which anaemia is diagnosed in the Netherlands and also in clinics in southern Africa, focusing on the contrast between laboratory testing and clinical diagnosis using observation and questioning of the patient. They characterise these as two different networks, but crucially, they point out that they are not clearly separate. Instead, elements of both may be present in the different regions. As they put it, '[w]hat we are looking at is something different. We're looking at variation without boundaries and transformation without discontinuity. We're looking at flows. The space with which we're dealing is fluid.'
(Mol and Law 1994: 658) They define fluid space as a topology which is neither regional nor network-based, there are other types of topology. … there are others too, and one of them is fluid. For there are social objects which exist in, draw upon and recursively form fluid spaces that are defined by liquid continuity. Sometimes fluid spaces perform sharp boundaries. But sometimes they do not -though one object gives way to another. So there are mixtures and gradients. And inside these mixtures everything informs everything else -the world doesn't collapse if some things suddenly fail to appear. (Mol and Law 1994: 659) As they explain: For in a network, things that go together depend on one another. If you take one away, the consequences are likely to be disastrous. But in a fluid, it isn't like that as there is no 'obligatory point of passage'; no place past which everything else has to file; no panopticon; no centre of translation; which means that every individual element may be superfluous. (Mol and Law 1994: 661) They see these three types of topology as co-existing with each other in 'intricate relations'. In a related work which develops these ideas further, Law and Mol (2001) remind us of relationships between social studies of science and epistemology, in particular the role of early laboratory studies such as Latour and Woolgar (1979), Knorr-Cetina (1981), and Lynch (1985). As they point out, these studies of scientific practice were seen as a challenge to the notion of scientific universality, demonstrating that, '[q]uite quickly the argument was made: scientific findings and theories are made in specific locations. They are always made somewhere. In a locality. They are regional, not universal.' (Law and Mol 2001: 610) In order for these to be regarded as 'facts', a configuration of facts and context must be held stable. The focus in the research on the work of holding configurations in shape with 'immutable mobiles' (Latour 1986) led to the actor-network theory. They give an example of a ship: On the one hand it generates an immutable mobile, a vessel that made it safely across the seven seas, an object holding itself together in a particular web of relations. But it also, at the same time, implies a form of spatiality. The argument then, is that a network-object also implies a stable shape within a network space. The two go together. Spatiality is an aspect of network stability. A large network (with its wind, its stars, its merchants, and its princes) implies a network space which renders possible the immutable mobility of an object -such as a Portuguese ship travelling from Lisbon to Calicut. (Law and Mol 2001: 611-612) In the case of the immutable mobile, there are two forms of spatiality: Euclidean and network. In Cartesian, regional, or Euclidean space place is defined by a set of relative three-dimensional coordinates. So long as a ship is tied up in the harbour in Lisbon, it does not move. And as soon as it sets out to sea, it displaces itself. But the space implied in actor-network theory is different. Is there no change in the working relations between the hull, the spars, the sails, the sailors, and all the rest? If this is the case then the ship is immutable in the sense intended by Latour. It does not move in relation to a network space. (Law and Mol 2001: 612) In network space, the vessel holds its shape, and holds its position in that space. It does not displace itself; instead, it is an immutable immobile. Relations are sustained in a stable manner. 
The mobility of the ships only exists in Euclidean space: …the immutable mobile achieves its character by virtue of participation in two spaces: it participates in both network and Euclidean space…it is the interference between the spatial systems that afford the vessel its special properties. We are in the presence of two topological systems, two ways of performing space. And the two are being linked together. (loc cit) Developing the notion of fluid space, they refer to de Laet and Mol's (2000) study of bush pumps in Zimbabwe to give an example of something that does not move within a network but is other to the network and its spatialities, outside a network. It changes shape. Of this pump and everything that allows it to work, nothing in particular necessarily holds in place. Bits break off the device and are replaced with bits which do not seem to fit. And other components -we are talking here both of parts of the 'machine itself', and of the social relations embedded in it -are added to it, components which were not in the original design itself. (Law and Mol 2001: 613) In Euclidean and also network space, the bush pump is an object which changes shape. The pump shows configurational variance; it is a mutable mobile. It is a fluid object in a topological system of fluid spatiality. For them, the defining feature of a fluid space is incremental change in shape, in which elements may be lost or added, with a continuity of function.

Fire Space and Fire Objects

The philosopher Bachelard wrote about the nature of fire: The fascinated individual hears the call of the funeral pyre. For him destruction is more than a change, it is a renewal. (Bachelard 1964: 13) He writes about the reveries of a person who stares at a fire: …the reverie is entirely different from the dream by the very fact that it is always more or less centred upon one object. The dream proceeds on its way in linear fashion, forgetting its original path as it hastens along. The reverie works in a star pattern. It returns to the centre to shoot out new beams. (Bachelard 1964: 14) Law and Mol (2001) take three points from this. The first is the trope of death and rebirth '…as a metaphor for treating the continuity of shape as an effect of discontinuity … The difference is that, whereas in fluidity constancy depends on gradual change, in a topology of fire constancy is produced in abrupt and discontinuous movements.' (Law and Mol 2001: 615) They characterise fire space as consisting of '…a flickering relation between absence and presence', in which fire is used as a metaphor for the existence of elements which rely on the absence, or alterity, of others. A further attribute they identify relates to Bachelard's notion of the 'star pattern' of reverie. They link this to a relation between absence and presence in which there exists '…a single present centre and multiple absent Others … a relatively stable set of star-like enactments between a single present and multiple absences' (Law and Mol 2001). Law and Singleton (2005), in a paper which explicitly builds on Law and Mol (2001), go on to develop this construct further in an analysis of treatment for alcoholic liver disease, which they propose as a fire object. In the course of their healthcare-related study, they found it difficult to keep the condition 'in focus', and suggested on that basis that 'social science methods are ill-adapted for the study of complex and messy objects' (Law and Singleton 2005: 1).
They refer to three types of object established in actor-network theory; region, network, and fluid in their discussion, and also the fire object, discussed above, which they define here as one which 'treats objects as patterns of discontinuity between absence and presence' (loc cit). They identify two strategies for 'knowing mess'; epistemological and ontological. The former rests on the idea that messy objects are difficult to know because people have varied perspectives on them, are 'interpretively complex', and mean different things to different groups of people. This assumes there is a real object to be retrieved behind these interpretations. They point out that this is common in the study of objects in science and technology studies, such as in Star and Griesemer's (1989) study of boundary objects. In contrast, Law and Singleton adopt an ontological perspective on objects, in which messy objects are 'enacted into being' (loc cit). Like Law and Mol, they discuss the concept of immutable mobiles, and also introduce de Laet and Mol's (2000) concept of the mutable mobile, discussed above in terms of the bush water pump in rural Zimbabwe, which is itself constantly changing its form due to repairs and other aspects of its operation which change. They suggest it should be seen as a fluid object, which changes gradually and gently. However, when they attempt to apply the concept of fluid object to understand the ontological nature of alcoholic liver disease, they find it resists that analysis. With reference to Mol's (2002) book The Body Multiple, they highlight how the body is regarded as a single entity, but it is multiple 'because it is enacted in multiple practices' (Law and Singleton 2005: 11). They refer to the view that for an object to be present, it depends on a series of absences, elements which cannot be seen or known -'an object is a pattern of presences and absences' (loc cit). They provide the example of the construction of an aircraft wing, whose form is composed partly of elements which are not physically present, such as the enemy air force, the body of the pilot, and the air pressure when flying at a particular altitude. For them: Such objects are transformative, but the transformations are not the gentle flows discussed above in fluid objects. They are more like some of the differences mentioned by Mol. This is because they take the form of jumps and discontinuities. In this way of thinking then, constant objects are energetic, entities or processes which that juxtapose, distinguish, make and transform absences and presences. They are made in disjunction … we will talk of such entities as fire objects. (Law and Singleton 2005: 13) This rests on the notion of fires as transformative, and dependent on difference, such as between fuel which is absent, and flame which is present. 'Fire objects, then, depend upon otherness, and that otherness is generative.' (loc cit) They apply this notion to an extended analysis of alcoholic liver disease, taking in multiple facets of the phenomenon, and its multiple ontology across different treatment sites. For them '[i]t is an object in the form of a dancing and dangerous pattern of discontinuous displacements between locations that are other to (but linked with) each other' (loc cit). What distinguishes it from a fluid object is its inherent discontinuity which '…lives in and through juxtaposition of uncontrollable and generative otherness' (loc cit). 
Postdigital Education as a Fire Object

The foregoing analysis is clearly about a very different context than postdigital education. However, I would propose that a fire object analysis might allow us further theoretical purchase on the ontology of teaching, engaging and communicating via screens and digital technologies, using video platforms such as Zoom or Teams. The operation of video calls or broadcasts inherently depends on difference and otherness, the absence of the interlocutor and the presence of oneself and the physical device, and associated sociomaterial assemblage. In simple terms, it would be nonsensical, or at least unusual, for this phenomenon to take place in a situation of embodied co-presence, as it is defined by absence. In addition, it is characterised by the lack of fluidity in these terms; the listener or speaker is either 'on the call' or not, although the form of presence may vary if video or audio only is used. This form of engagement is characteristic of alterity, a form of simultaneous absence and presence, in which one is both 'there' and 'not there'. Like fire, it has a flickering ontology, with sudden flares of visual presence, loss of video or sound, and could perhaps also be likened to smouldering embers when participants are 'there', but with the mute button on and the video off. It is, like the disease described above, inherently discontinuous, partial, flickering, and multiple in its nature. The 'star pattern' also seems apposite as a means by which to conceptualise postdigital education, describing the experience of being present in a space, in a relationship of conjoined alterity with multiple others via a video platform such as Zoom.

Fire Objects, Goffman's Selves and Forms of Talk

Working with this reading, a connection might be made with early theorisations of the nature of the lecture, drawing on the work of Goffman, in particular his (1981) Forms of Talk. This is relevant in that it allows the previous fire object analysis to connect with a framing of the nature of teaching speech and selves which can also be understood in terms of absence, presence and alterity. In his essay 'The Lecture', Goffman sets out a view of the lecturing self as essentially multiple: At the apparent center will be the textual self, that is, the sense of the person that seems to stand behind the textual statements made and which incidentally gives these statements authority. Typically, this is a self of relatively long standing, one the speaker was involved in long before the current occasion of talk. This is the self that others will cite as the author of various publications, recognise as the holder of various positions, and so forth… And he (sic) is seen as the 'principal', namely, someone who believes personally in what is being said and takes the position that is implied in the remarks. (Goffman 1981: 173) This textual self comes into being in the preparation of the lecture materials, while the animator self emerges when the lecture is delivered, a being who Goffman refers to as a 'talking machine' (Goffman 1981: 171). In Goffman's framing, there are three forms of talk: memorisation, aloud reading, and fresh talk. The latter he characterises as an illusion of spontaneity, as part of a performative display (see Friesen 2017 for an extensive discussion of Goffman and a historical review of technology in education).
The question for this paper is how these categories might intersect with a fire object analysis of postdigital education (for the purposes of this paper I am focusing on synchronous teaching online; see Gourlay (2021) for a Goffmanian analysis of the 'flipped classroom'). I would propose that this reading could be extended to take in the lecturer selves and forms of talk in Goffman's terms, as they could also be characterised in terms of absence, presence, and alterity. The textual self exists prior to the live event of the lecture or class. In this regard, it is characterised by the absence of the students, and of some elements of the technology such as Zoom, which is not generally used at this stage. Instead, the lecturer may be working with PowerPoint or other textual/multimodal semiotic online resources. Their voice and forms of talk are also absent at this stage. The textual artefact that is created also has the features of a fire object, as once complete, it is no longer fluid, but can either be absent or present in a flickering, fire-like manner, made present by sharing the screen during the live class online. The animator self is also fire-like; the lecturer emerges into presence on the screens of the students in the form of a live video image or audio facility. The lecturer's subjectivity, it might be argued, is fundamentally made other in a condition of alterity, in a circumstance in which the students have never met that person face-to-face. The forms of talk may also be analysed in these terms; memorisation of blocks of text is defined by the absence of the text itself at the time of the animator self's performance, while aloud reading is defined by the presence of the text and also the listeners, a form of gathering. Fresh talk is arguably an element of the lecture defined by alterity, in that it entails a form of layered performance which enrols the lecturer and students into subject positions which are essentially double; they are both entwined in a formal educational encounter governed by relevant generic conventions, while also participating in a more personal exchange or display, with the lecturer as an apparently spontaneous speaker, an element of side commentary which might be compared to marginalia in a Medieval text. In an online context, this might also include the type of procedural talk required in order to facilitate the class -such as the oft-repeated 'Can you hear me?', 'Let me just share my slides now', 'Hopefully you can see that screen now', and the ubiquitous 'You're on mute'. The status of the chat room is another area which might be included as a further category of 'talk', plus the presence of one's own live image on screen while talking, although these elements are beyond the scope of this paper.

Conclusions: It's Just Not the Same

In this paper, I have argued that the construct of 'learning space', when applied to the physical campus, is inadequate as a theorisation of the sociomaterial complexities of student engagement, and is reproductive of a very particular discourse of performative and observable interaction, underpinned by implicit notions of Biesta's 'learnification', and also of the student as customer. I went on to explore concepts of space developed in science and technology studies, in particular the construct of fire space, arguing that this captures the flickering nature of absence, presence, and alterity in postdigital education, looking at the example of the synchronous lecture.
I went on to suggest that this analysis may be rendered more granular with the inclusion of Goffman's lecturing selves and associated forms of talk. Clearly, Goffman's framework was developed in a time prior to postdigital teaching. However, I would propose that it not only pertains to contemporary online teaching, it might also allow us to unpick the elements of talk that are required; the model could in fact be refined and extended further to reflect the changes brought about by digital mediation. I also contend, as set out above, that this set of selves and forms of talk are characteristic of fire space as discussed, and reinforce the analysis of postdigital teaching as a fire object. The framing of online teaching as taking place in fire space may have some implications in terms of how we see these events, touching on teaching practices and expectations of lecturers and students. One outcome might be a recognition of the flickering, unstable, and fundamentally uncanny nature of Zoom and other video platforms, allowing us to extricate ourselves from what might be termed discourses of replication, in which (arguably futile) attempts are made to recreate the conditions of the face-to-face embodied, ephemeral encounter. Arguably, this is doomed to failure, as is hinted at by the often-expressed view that 'It's just not the same'. It is clearly 'not the same' in terms of fostering a sense of connection, immediacy, and the ability 'to read the room', in a setting where the nuances of embodied performance, facial expression, gesture, and gaze cannot be fully deployed, in addition to the more subtle yet vital effects of physical co-presence and ephemerality, the latter being compromised or at least complicated, as the digital lecture is recorded. Instead, the speaker must treat the screen as a form of portal for digital performance (Gourlay 2020). In these respects and possibly others, this analysis allows us to theorise the many ways in which 'it's just not the same', while the event may appear superficially similar in certain simulacrum-like respects. In terms of implications for teaching and student engagement, this reinforces the importance of enhancing participants' sense of relationality, connectedness, and inclusion in other ways and in other forums, particularly in a context where all engagement is online, such as in the current Covid-19 crisis. This is a matter of particular urgency in a context where moves towards greater use of online formats in higher education are being discussed as somehow 'inevitable', post-pandemic. I would argue, in conclusion, that it is all the more incumbent on the field therefore
The Race against Protease Activation Defines the Role of ESCRTs in HIV Budding

HIV virions assemble on the plasma membrane and bud out of infected cells using interactions with endosomal sorting complexes required for transport (ESCRTs). HIV protease activation is essential for maturation and infectivity of progeny virions; however, the precise timing of protease activation and its relationship to budding has not been well defined. We show that compromised interactions with ESCRTs result in delayed budding of virions from host cells. Specifically, we show that Gag mutants with compromised interactions with ALIX and Tsg101, two early ESCRT factors, have an average budding delay of ~75 minutes and ~10 hours, respectively. Virions with inactive proteases incorporated the full Gag-Pol and had ~60 minutes delay in budding. We demonstrate that during budding delay, activated proteases release critical HIV enzymes back to the host cytosol, leading to production of non-infectious progeny virions. To explain the molecular mechanism of the observed budding delay, we modulated the Pol size artificially and show that virion release delays are size-dependent and also show size-dependency in requirements for Tsg101 and ALIX. We highlight the sensitivity of HIV to budding "on-time" and suggest that budding delay is a potent mechanism for inhibition of infectious retroviral release.

Author Summary

ESCRTs are implicated in cellular processes which require fission of budding membranes. Likely the most studied of these processes is the HIV-ESCRT interaction. The canonical view is that interference with ESCRT recruitment results in a late budding arrest of virions at the plasma membrane, and this mechanistic view of ESCRTs has shaped our understanding of their function in almost all cell biology. In this manuscript, we present a full kinetic analysis of HIV virion release under all known mutations in Gag that affect HIV-ESCRT interactions. Our data show that contrary to the canonical view, a defect in ESCRT recruitment does not inhibit virion budding; however, it creates a delay. We further show that during budding delay, activated proteases release critical HIV enzymes back to the host cytosol, leading to budding of non-infectious progeny virions. We suggest that budding delay is a potent mechanism for inhibition of infectious retroviral release and can be the […]

Introduction

[…] generated a fully functional Gag.Pol vector that incorporates both Gag and Gag-Pol and is sufficient for budding mature VLPs with similar efficiency to the HIV-1 full-length virus. We show that with an active protease, the Gag.Pol VLP budding is delayed when introducing ΔPTAP and ΔYP mutations. Indeed, Gag.Pol VLPs with the ΔPTAP mutation are released with ~10 hours delay and are void of HIV RT and PR. Gag.Pol VLPs with the ΔYP mutation are released with ~75 minutes delay, which results in significant reduction of RT and PR incorporation within released VLPs. Budding of Gag.Pol VLPs with an inactive protease and either ΔPTAP or ΔYP mutations is dramatically slowed down, with similar sensitivity to the involvement of Tsg101 and ALIX. Using Gag proteins with multiple GFP fusions as cargo, we further show that budding is sensitive to the size of cargo proteins, and this effect is reproduced when a PR-inactive truncated Pol protein is used as cargo.
Finally, modeling these data using Monte Carlo simulations shows that protease activation after complete assembly of HIV virions on the plasma membrane can quantifiably explain the loss of Pol-specific proteins to the host cell cytosol before VLP release.

Results

Humanized HIV Gag protein expressed in cells supports production of VLPs with similar size distributions as HIV virions [38]. We initiated our study by performing a side-by-side comparison between budding of HIV Gag VLPs versus VLPs produced from the HIV-1 ΔR8.2 vector (HIV R8.2) and its parental full-length HIV-1 R9 vector (HIV R9). HIV R8.2 after budding incorporates all components of the virion except ENV proteins and the genomic RNA.

In contrast to HIV R9 and HIV R8.2 budding, Gag VLP release is not sensitive to p6 alterations

In parallel experiments, budding of HIV R9 and HIV R8.2 were compared to Gag VLP release 24 hours after transfection. Using Gag domain mutagenesis, we observed that while HIV release and maturation are affected by p6 late domain mutations, VLP production by Gag remains almost unaffected (Fig 1A). Shown are the following p6 mutations in Gag, HIV R9 and HIV R8.2: ΔPTAP incorporates ⁷LIRL¹⁰ instead of ⁷PTAP¹⁰ [6], ΔYP includes ³⁶SR³⁷ instead of ³⁶YP³⁷ [31], and ΔPTAP.ΔYP has both PTAP and YP sequences altered (⁷LIRL¹⁰ plus ³⁶SR³⁷). 24 hours post-transfection, cells and VLPs were collected as described in Materials and Methods and analyzed by immunoblotting using p24, ALIX and Tsg101 specific antibodies. We found that incorporation of early ESCRTs in released Gag as well as HIV R9 and HIV R8.2 VLPs was sensitive to late domain mutagenesis; Tsg101 was fully sensitive to the ΔPTAP mutation, and ALIX was only slightly affected by the ΔYP mutation (Fig 1A). ALIX migrates as two separate bands, with the upper band likely related to a post-translationally modified form; the nature of this ALIX modification is not yet known. The ALIX background level corresponds to exosome release (Fig 1A and 1B). As commonly reported, HIV virion release (shown here for HIV R9 and HIV R8.2) was detectably reduced under the ΔPTAP mutation, in addition to a clear defect in Gag processing. Also, a slight change in release and maturation profiles was observed in ΔYP mutant HIV R9 and HIV R8.2. In contrast to HIV R9 and HIV R8.2, production of Gag VLPs was only slightly affected by mutations within p6 (Fig 1). Indeed, expression of Gag with alteration in late domains, either as humanized, non-humanized co-expressed with Rev, or within R9 with an abrogated ribosomal slippage, leads to the same results (Fig 1B, 1C and 1D). Aside from interacting with the p6 domain, ALIX also binds to the NC domain, and mutations affecting NC have recently been implicated in HIV virion release [39]. We found that under NCΔC6S (replacement of each NC cysteine by serine) plus ΔYP and ΔPTAP mutations, ALIX retention in released Gag VLPs is further abrogated, and a reduction in Tsg101 retention was observed in Gag NCΔC6S VLPs (Fig 1B); however, none of the amino acid substitutions and/or truncations had a marked effect on Gag VLP release (see also S1A, S1B and S1C Fig). The observations related to Gag versus HIV R8.2 VLP release were confirmed by pulse/chase ³⁵S-labeling experiments. The Tsg101/ALIX engagement-independent release of Gag VLPs was tested on different cell types with no major changes in the VLP release except for the apparent cell-type-specific NC effect (S3 Fig).
Plasma membrane binding requirement was tested by the G2A mutation [40], which abrogated VLP budding (S1A Fig).

CHMP4 and VPS4 are retained in released Gag VLPs in absence of functional p6 late domains and VPS4 is essential for release of Gag VLPs

Given that, compared to HIV R9 and HIV R8.2, Gag VLP production 24 hours post-transfection shows differential dependence on late domains, we set out to test the requirements of higher ESCRT factors in release of Gag VLPs. To investigate ESCRT recruitment (Tsg101, ALIX, CHMP4b, and VPS4A) to budding Gag VLPs, we used HA-tagged forms under tight control of their expression levels along with Gag NC and/or p6 mutants (Fig 2A). We found that Gag VLPs were released with similar yield even when recruitment of Tsg101 and ALIX was compromised due to p6 and/or NC mutations; however, surprisingly, these VLPs retained both CHMP4b and VPS4A independently of the p6 and NC alterations that inhibit early ESCRT recruitment. Fluorescently tagged ESCRT-III components have been previously localized within budding wild-type Gag VLPs, however not in the presence of p6 mutations [41]. Even if our observation is based on a mild over-expression, it clearly shows that ESCRT-III and VPS4 have the potential to be recruited independently of ESCRT-I/ALIX. We further tested the requirement for VPS4 engagement in production of VLPs with compromised interactions with early ESCRTs. To this end, we expressed a dominant negative VPS4 (ΔE228Q) during production of HIV Gag VLPs. As shown in Fig 2B, the expression of VPS4ΔE228Q had a substantial negative effect on all Gag VLP production, which confirms a requirement for VPS4 in eventual Gag VLP production.

Gag VLP release is delayed when p6 domains are altered

While our data show that Gag VLPs with compromised interactions with Tsg101 and ALIX were released with similar efficiencies 24 hours post-transfection, we further investigated the effect of these interactions on the kinetics of Gag VLP production. Shown in Fig 3 is the VLP production comparing WT, ΔPTAP, ΔYP, Δp6 and the control ΔG2A. U2OS cells were used for both immunoblotting and microscopy (left and right panels, respectively). Our analysis shows that the kinetics of VLP release is delayed by ~20 minutes (ΔYP) to ~1 hour (ΔPTAP and Δp6), which is consistent with the similar VLP release observed 24 hours post-transfection (see also Supporting Information section). To confirm that the Gag variants detected in VLPs using immunoprobing indeed originate from VLPs produced by cells, we visualized the released VLPs by total internal reflection microscopy (TIRF) on individual cells. Using TIRF and Gag p6 variants fused to mCherry, we followed the assembly and release of VLPs in live cells. We confirmed the similar VLP assembly on the cellular plasma membrane between all Gag variants, and 12 hours post-transfection, we artificially detached the cells to visualize released VLPs as described in Materials and Methods. VLPs were indeed observed immobilized on the cell-free surface accordingly, as shown in Fig 3 (right panels).

Humanized Gag-Pol vector preserving the ribosomal slippage produces VLPs that mature similarly to HIV VLPs

Having established that the HIV Gag VLPs with abrogated interactions with Tsg101 and ALIX are delayed in their release, we set out to investigate the discrepancy in budding of HIV versus Gag VLPs. Upon transfection in cells, the HIV R9 or HIV R8.2 express Gag along with Gag-Pol and all other HIV co-factors aside from ENV for HIV R8.2.
We chose to generate a system that only expresses Gag and Gag-Pol proteins for more accurate comparison with Gag, to investigate whether the observed differences between Gag and HIV VLPs can be sufficiently explained by the packaging of Gag-Pol. We constructed the Gag plus Gag-Pol open reading frames in a single encoding cassette using humanized Gag and preserving the HIV ribosomal slippage (S4A Fig); the Gag plus Gag-Pol VLPs produced are referred to as "Gag.Pol". We further generated variants of Gag.Pol by mutating p6 as described for Gag. Gag.Pol with ΔPTAP and ΔYP mutations resulted in formation of VLPs with defects in terms of VLP yield and maturation (Fig 4). Interestingly, PRΔD25N VLPs showed a dramatic release defect in all p6 mutants (Fig 4A). Over-expression of ALIX substantially rescued the maturation defect due to the ΔPTAP mutation (Fig 4A), as commonly reported. Immunoprobing for Gag and Pol domains indicates that ΔPTAP VLPs are devoid of any detectable RT, while an average of 70% RT loss is observed in ΔYP VLPs (Fig 4A and 4B and S5 Fig). The RT loss is reversed in ΔPTAP VLPs by over-expression of ALIX, as shown in Fig 4A. Interestingly, we observed that while the ΔPTAP mutation induces identical RT loss in both Gag.Pol and HIV VLPs, the RT loss induced by the ΔYP mutation in Gag.Pol VLPs is not occurring in HIV R9 and HIV R8.2 (Fig 4B). These data suggest the potential engagement of an HIV effector(s) missing in the minimal Gag.Pol system that is likely capable of supporting ALIX function in the context of the ΔYP mutation. There is a reduction in the amount of incorporated RT within Gag.Pol p6 mutants when compared to incorporated RT in WT Gag.Pol. We hypothesized that delayed VLP release, in addition to activation of PR before closure of the VLP neck, would result in Pol auto-processing and subsequent diffusion of Pol products back to the host cytosol. Indeed, PR was also lost equivalently to RT in Gag.Pol p6 mutants, and follows the same profile in HIV R9 and HIV R8.2 ΔPTAP variants (Fig 4B). Supporting the notion of a race between VLP neck closure and PR activation, we also found that WT Gag.Pol showed a ~25% RT loss when compared to ΔPR Gag.Pol (S5C Fig). Based on the yields of Gag.Pol VLP production (comparing both PRwt and PRΔD25N to Gag VLPs), we suspected a longer delay in release of Gag.Pol VLPs with altered p6 compared to Gag VLPs.

Gag.Pol VLP release is substantially delayed when p6 is altered

VLP release kinetics of Gag.Pol variants were analyzed as shown in Fig 5. As expected, Gag.Pol VLPs budded out at a slower rate compared to Gag VLPs, likely due to the Pol cargo size. To this end, all delays related to p6 mutations were extended in time. Unlike Gag VLPs, which were released with a constant delay measured with respect to the cytosolic Gag concentration, the delay in Gag.Pol VLPs did not follow the same curve as the cytosolic fraction. These kinetics indicate the occurrence of parallel processes during Gag.Pol VLP production. Interestingly, in the context of PRwt (Fig 5A, top panels), we observed that the appearance of mature p24 versus the p55 precursor and related products (p48 and p41) was not necessarily synchronized.

Fig 3. Gag p6 alteration delays Gag VLP release.
Kinetics of VLP release by Gag in U2OS cells with either p6 wild type or inactivated as indicated; western blot kinetics are shown where 200 ng of each Gag construct was used for transfection; both cells and VLPs were collected at 1-hour intervals and immunoprobed using p24 antibody (left panels). Single-cell imaging 12 hours post-transfection of mCherry-fused Gag constructs as indicated was performed using TIRF microscopy; images were captured before and after cell detachment to visualize released VLPs (right panels). All experiments were performed 3 times with similar results.

Indeed, the ΔYP mutation shows a delay in release of mature VLPs; however, their production does not continue to the same extent as for WT; instead, it saturates earlier while budding follows with VLPs enriched in Gag precursors. The ΔPTAP mutation releases VLPs with mainly Gag precursors, especially Gag p48, and with a substantial delay. To test the effect of packaging full-length Pol, we performed kinetics on PRΔD25N (Fig 5A, bottom panels) with p6 mutations. VLP production kinetics in Gag.Pol PRΔD25N with p6 mutants were all significantly affected, strongly suggesting the importance of early ESCRT engagement (both Tsg101 and ALIX) when large cargo is loaded. Importantly, in any case, no full abrogation of VLP release was observed under any p6 mutation.

Effect of Gag-cargo on VLP release

Our data show that p6 mutations create a delay in production of HIV Gag.Pol VLPs, which in turn results in premature activation of PR and diffusion of Pol components from budding VLPs. Also, the delay in VLP release was longer than the one measured for HIV Gag VLPs. To further dissect the mechanistic basis of the observed delay, we hypothesized that the delay length is associated with cargo size, defined as domains added after the HIV Gag protein, which are naturally present as Pol within HIV. Our observations in budding kinetics of Gag.Pol VLPs demonstrated that when protease activation is inhibited and VLPs incorporate the full-length Gag-Pol protein, the kinetics of VLP release is further delayed and becomes strongly dependent on early ESCRTs. These observations suggest a dependence of VLP release on cargo size. To evaluate the influence of cargo size on VLP production, we artificially fused GFPs in frame and in tandem to the Gag C-terminus (in these experiments, every expressed Gag is in tandem with GFPs). We found that indeed VLP release by Gag-GFPx variants is proportionally reduced depending on cargo length (x = 1, 2 or 3 GFPs). The p6 late domain mutation directly dictates the efficiency based on the severity of p6 alterations (Fig 6). These observations were confirmed by pulse/chase ³⁵S-labeling experiments (S2B Fig). We further confirmed that intact Gag p6 is required for efficient VLP production with large cargo through rescue of p6 mutant Gag-3x.GFP VLP release by co-transfection of Gag with wt p6 (Fig 7A). There is a predominant impact for PTAP and, to a lower extent, for YP. We further modulated the cargo size using Pol truncations in the context of PRΔD25N to maintain the integrity of the Pol cargo. Experiments were performed both under physiological frame-shifted expression of Gag.Pol and with Pol proteins expressed in frame with Gag, which resulted in a 10-fold increase of Pol incorporation in released VLPs. Under both conditions, we observed the same effect of p6 late domain mutations on VLP release (Fig 7B).
In both cases, VLP production is negatively affected depending on the length of cargo and the nature of the p6 alteration. In the context of truncated Gag.Pol with wild-type protease, the VLP production profile is more complex, as deletions in Pol also influence the timing of PR activation, as shown for Pol truncations.

Simulations of Gag.Pol VLP release

Kinetics of Gag.Pol VLP release were analyzed using a Gillespie stochastic algorithm [43] as detailed in Materials and Methods. This analysis incorporated a) the VLP release rates, b) protease activation kinetics, and c) diffusion of protease byproducts out of the open VLPs on the plasma membrane. The simulated data were fitted to the experimental data extracted from Gag.Pol VLP kinetics as shown in Fig 8A. Simulations allowed separation of the three underlying processes. As shown in Fig 8A, the delay in release of VLPs behaves along a Poissonian curve with average delay times for WT, ΔYP and ΔPTAP alterations of 5 min, 75 min and 620 min, respectively. The delay in release of Gag.Pol PRΔD25N is substantially longer. During the simulations, the rates of protease activation and diffusion of protease byproducts were held constant while various p6 alterations were analyzed with varying VLP release rates; these rates are shown in Fig 8B. Altogether, the simulations support our hypothesis that a delay in release of VLPs, all other events constant, results in substantial loss of Pol-associated enzymes from the VLPs.

Discussion

Three major points emerge from our results: i) Late domain mutations of HIV Gag result in a transient delay of virion release from the plasma membrane. ii) HIV protease is activated following full assembly of virions on the plasma membrane, and delays in virion release result in loss of Pol-associated enzymes to the cell cytosol and budding of non-infectious virions. iii) The size of cargo attached to the C-terminus of Gag modulates the speed and requirements for early ESCRT factors during HIV budding. While small cargo sizes rely mostly on Tsg101, larger cargo sizes are similarly dependent on both Tsg101 and ALIX for efficient VLP budding. We show that alterations of the Gag p6 late domains do not inhibit the release of HIV VLPs but rather result in delayed release. We characterized this effect for both VLPs that package HIV Gag only and for VLPs packaging both Gag and Gag-Pol (Gag.Pol). For the Gag.Pol VLPs, the delay ranges from ~70 minutes for the ΔYP mutants that lose proper interaction with ALIX to more than 10 hours for the ΔPTAP mutants which completely lose Tsg101 recruitment. Since the assembly of VLPs takes approximately 45 minutes, a ~10-fold delay in release of the budding VLP will result in a substantial accumulation of ΔPTAP VLPs at the cell surface when analyzed 12 to 24 hours post-transfection.

Fig 5. The densitometry values plotted correspond to the band density on the immunoblotting. Gag and Gag-Pol were immunoprobed using p24 antibody. For accuracy, when Gag was partially processed, the quantified Gag p55 precursor refers to the sum of the p55, p48 and p41 bands. The color scheme: Gag p24 indicates full processing of Gag and is shown in red; p41/48/55 is shown in dark blue to indicate the cytosolic fraction and light blue to indicate the VLP fraction. (B) HIV VLP release is more sensitive to PTAP than to YP inactivation. 250 ng of each construct were used for transfection, and samples were collected at 4-hour intervals starting 8 hours post-transfection. All experiments were performed 2 times with similar results.
The ΔYP mutation has a much shorter delay of ~70 minutes and therefore would result in a lesser fold increase in budding VLPs at the cell surface. Importantly, these accumulation levels of VLPs are consistent with the observed phenotypes of HIV late domain mutagenesis [5,27,15]. Interestingly, we observed that a pool of budding Gag […]. Our results rationally explain the infectivity assays previously reported on progeny virions lacking engagement of ESCRTs. Specifically, infectivity experiments using HIV R8.2 pseudotyped with VSV-G have shown that VLPs produced by HIV R8.2 ΔYP have a decreased infectivity of approximately 50% compared to wild-type HIV R8.2, while HIV R8.2 ΔPTAP VLPs are noninfectious [44]. While these results could also indicate an alternate effect on particle release, a mismatch between released VLPs and their infectivity has been previously reported [45]. Analysis of the Gag.Pol VLP release kinetics suggests that activation of the protease occurs immediately after completion of VLP assembly, followed by diffusion of Pol-associated enzymes out of VLPs in p6 mutants. The rates of PR activation and Pol product diffusion would result in the loss of all Pol enzymes ~60 minutes post-assembly as the VLPs remain open. Also, our analysis indicates that the VLP release times are distributed along a Poissonian curve with an average of 5 minutes for WT, 70 minutes for ΔYP and 10 hours for ΔPTAP. This distribution of budding times correlates with the percentage of Pol products lost in released ΔYP VLPs compared to WT VLPs. The ΔPTAP mutation, which has a ~10-hour delay, does not show Pol product incorporation. HIV Gag protein alone is capable of budding from the plasma membrane. We found that Gag still efficiently buds out under severe p6 mutations but with a delay at the cell surface for periods of ~20 minutes to ~1 hour. There is some minimal endocytosis of VLPs assembled under mutated Gag compared to Gag.Pol and HIV R8.2 VLPs, as shown in S6 Fig. The observed reduction of VLP release due to endocytosis is in agreement with a balance between fast budding and endocytosis of delayed VLPs. Prior to our observations, it was shown that HIV Gag with mutated or even deleted p6 releases VLPs from cells [46][47][48][49]. These observations were interpreted as related to an ESCRT-independent release of Gag VLPs. In the context of HIV, the mismatch between the levels of VLP release and infectivity was also investigated as an indication of an ESCRT-independent budding process and/or budding through intracellular vesicles and exocytosis [48]. Here, our data indicate that HIV virions defective in ESCRT recruitment mainly bud out from the plasma membrane but with proportional delays according to the severity of the p6 late domain alterations. Aside from the mutations within the p6 domain, we have conducted extensive mutagenesis within the NC domain of Gag. We found that in the context of Gag expression, VLP budding is independent of NC engagement with ALIX and/or indirectly Tsg101. Interestingly, using a slight over-expression of CHMP4 and VPS4, we observed the incorporation of these proteins within released VLPs even in the context of severe p6 and NC mutations, and expression of VPS4DN markedly reduced the efficiency of VLP release.
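As a rough, illustrative check of the argument above (and not part of the original analysis), the fraction of virions expected to retain Pol products can be estimated by treating the budding delay as exponentially distributed, consistent with the Poissonian behaviour described, and by assuming, as a crude simplification, that Pol products are effectively gone if the neck is still open after roughly 60 minutes. The mean delays below are the fitted values quoted above; the hard 60-minute cut-off is an assumption that ignores the gradual nature of the loss.

```python
import math

T_LOSS = 60.0  # min; approximate time for Pol products to escape an open neck (assumed hard cut-off)
MEAN_DELAY = {"WT": 5.0, "dYP": 75.0, "dPTAP": 620.0}  # mean budding delays from the fitted simulations

for mutant, tau in MEAN_DELAY.items():
    # Probability that the neck closes before T_LOSS for an exponentially distributed delay with mean tau
    frac_retaining_pol = 1.0 - math.exp(-T_LOSS / tau)
    print(f"{mutant}: ~{100 * frac_retaining_pol:.0f}% of VLPs expected to close before Pol products are lost")
```

Under these assumptions essentially all WT virions close in time, roughly half of the ΔYP virions do, and only a small minority of the ΔPTAP virions do, qualitatively in line with the graded loss of RT and PR described above; the stochastic model in Materials and Methods is what was actually fitted to the data.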
Engagement of Tsg101 and ALIX during the HIV budding is generally assumed to allow the recruitment of downstream ESCRT-III proteins which polymerize at the neck of the budding VLPs before release [15,[50][51][52][53][54]. Based on our finding, we hypothesize direct recruitment of ESCRT-III and VPS4 to the neck of budding VLPs defective in early ESCRT engagement. To this end, we believe that, if this hypothesis is correct, the neck diameter formed in budding Gag VLPs is small enough to allow effective Tsg101/ALIX-independent CHMP recruitment and VLP release. In vitro, direct recruitment of CHMPs onto negatively curved membranes has been recently observed [55]. We cannot however rule out the possibility that ESCRT-III and VPS4 would be recruited in a Tsg101/ALIX-independent mechanism, possibly through engagement with AMOT and Nedd4 ubiquitin ligases [55][56][57][58][59][60]. In case of Gag.Pol and HIV VLP production, the Tsg101/ALIX-independent effective CHMP recruitment is substantially delayed due to the large cargo (Pol). We hypothesize that incorporation of Gag-Pol results in wider neck diameters. This hypothesis can rationally explain the different VLP release delays with altered p6 accordingly. The timing of recruitment of ALIX into HIV and EIAV has been investigated using Gag VLPs [61,62], based on our results we suggest that the recruitment may also be sensitive to cargo and therefore the recruitment should be further investigated in HIV virions incorporating both Gag and Gag-Pol. Finally, it is also possible that Tsg101/ALIX-independent CHMP recruitment to the neck of budding VLPs is naturally occurring, however, when Tsg101 and/or ALIX are involved during the CHMP recruitment, the process is faster and functions at maximum velocity to promote fast VLP release which promotes infectivity. In line with the above, we found that ESCRT engagement during VLP budding grows more critical by addition of cargo to the Gag C-terminus. We have measured the kinetics of release for Gag.Pol VLPs with inactivated protease. The Pol protein has a large protein mass (150 kDa) and, in absence of processing, the full length Pol incorporates within the VLP. We found that under these conditions, Gag.Pol VLP production is similarly sensitive to Tsg101 as well as ALIX interactions as shown with ΔPTAP and ΔYP p6 mutants. These results are surprising since typically Tsg101 is the primary interaction during HIV VLP budding, however they agree with the increased importance of ALIX when the budding neck diameter is large like during cytokinesis [17,19,63]. Also, while Gag VLPs can still release efficiently even in the absence of functional late domains, addition of artificial cargos (GFPs in tandem) at the Gag C-terminus inhibits the VLP release in a cargo length-dependent manner. While these Gag-GFP experiments demonstrate the concept of cargo dependent ESCRT requirements, it does not directly reflect on effect of Pol in HIV-1 budding since the Gag-Pol comprises only 5-10% of Gags in the forming HIV virion. The rescue experiments with co-expression of Gag and GagΔp6-3xGFP are the closest comparison to the role of Pol in HIV budding. These experiments demonstrate efficient release of GagΔp6-3xGFP VLPs only when co-expressed with Gag which has a functional late domain. All these observations together support the mechanistic role of ESCRTs in accelerating the closure of budding VLPs with large necks, and that cargo size is the primary regulatory factor that dictates the early ESCRT requirement level. 
Compared to the minimal Gag.Pol system, when HIV R8.2 PRΔD25N VLP release is tested (Fig 5), full-length Pol (large cargo) release is more sensitive to PTAP integrity than to YP. We hypothesize that there is an HIV factor that is absent in the Gag.Pol system and promotes efficient VLP release in the absence of functional PTAP/YP sites. This factor is likely acting to mimic some of the Tsg101/ALIX function in accelerating ESCRT-III recruitment and/or promoting Pol packaging before PR activation. The activation of HIV protease immediately post-assembly on the plasma membrane is supported by some experimental evidence suggesting that increased packaging of Gag-Pol results in premature activation of PR [64]. Rapid maturation of HIV VLPs within 1 minute post-release has also been reported [65], although our results predict at least a 30-minute delay between release and full maturation. Also, processing was shown to be essential for HIV VLP release [60]; however, the rate of HIV assembly is not affected by PR inactivation [66]. The observed kinetics of Gag precursor release from budding virions, analyzed using computer simulations, support activation of PR immediately post-virion assembly. Early biochemical characterization of PR cleavage sites showed that the Gag and Gag-Pol SP1/NC and SP2/p6 sites are the first to get cleaved by PR [67]. Therefore, if the VLP neck closes before PR activation (as for WT p6 VLPs), soluble PR-containing fragments are trapped within the VLP and continue processing, which results in virion maturation. In the case of delayed neck closure, soluble PR-containing fragments diffuse to the host cytosol and the progeny virions produced lose Pol products based on the severity of the p6 alteration. In agreement with our model (Fig 9), ΔPTAP and to a lesser extent ΔYP VLPs are enriched mainly in Gag p48 and p41 forms, clearly suggesting a loss of PR activity in these released VLPs. The first report identifying the importance of the PTAP sequence within Gag p6 used RT activity within the released HIV virions as a measure of viral fitness [6]. In these pioneering experiments, HIV ΔPTAP virions lost RT activity; however, inactivation of PR restored RT activity within released HIV ΔPTAP virions. Our data explain this observation as shown in Fig 4 and demonstrate that this phenotype is due to delayed release of ΔPTAP PRΔ VLPs with intact Pol domains. Altogether, our observations suggest that the engagement of early ESCRTs during HIV budding is obligatory for speeding up the closure of budding virions and release of fully formed particles before HIV protease activation occurs, which is fundamental for safeguarding the infectivity of progeny HIV virions.

Fig 9. Products of Gag.Pol processing by PR during VLP production. Among cleavage sites in Gag and Gag-Pol, the SP1/NC and SP2/p6 sites are the most rapidly cleaved by PR [67]. If the neck closes under normal conditions (WT p6), soluble PR-containing fragments are trapped in VLPs and continue processing the remaining cleavage sites on Gag and Gag-Pol, which leads to release of mature virions. In the case of delayed neck closure, soluble PR-containing fragments diffuse to the host cytosol and progeny virions are devoid of Pol products (ΔPTAP and to a lesser extent ΔYP VLPs). PM, plasma membrane.

Other viruses and cellular processes whose cargo is not as time-sensitive may forgo some interactions with ESCRTs, therefore possibly explaining the diverse requirements of ESCRTs in these processes [68,69].
Our observations show that 'budding delay' is a potent mechanism for inhibition of infectious retroviral release and suggest that this mechanism can be used for developing antiviral treatments that would not block ESCRT-dependent cellular processes but slow them to the point of infectious retroviral release inhibition. We also speculate that such a mechanism may be exploited by host cells to inhibit the spread of infection.

Materials and Methods

All cell lines used were grown in complete DMEM medium under standard conditions, except for TIRF experiments where cells were incubated in CO2-independent medium (LifeTechnologies).

VLP release analysis

All cell lines used were transfected using Lipofectamine 2000 (LifeTechnologies), except for 293T cells, which were transfected using the standard CaPO4 precipitation technique. Both cells and media were collected for analysis. Cells were lysed in RIPA buffer (140 mM NaCl, 8 mM Na2HPO4, 2 mM NaH2PO4, 1% NP-40, 0.5% sodium deoxycholate, 0.05% SDS), and after removal of residual cell debris by centrifugation, VLPs were pelleted from cell supernatants by centrifugation for 2 hours through a 10% (w/v) sucrose cushion at 15,000 x g. Final VLP samples were re-suspended in PBS. VLP release yields/ratios were calculated as VLP-associated Gag forms per cell-associated Gag forms based on either CA or MA probing, after densitometry analysis of the immunoblotting data using the Image Studio Lite software (LI-COR). HIV Gag kinetics were fit using a Boltzmann equation to calculate the delay times for various mutants as described in Supporting Information.

TIR-FM assessments

Live images were acquired using an iMIC digital microscope made by TILL Photonics, controlled by TILL's Live Acquisition imaging software (see also Supporting Information). U2OS cells were transfected with Gag-mCherry variants and observed by TIRF imaging. At 12 hours post-transfection, cells were gently detached using TryplE (LifeTechnologies). Detachment was achieved by removing the medium and washing once with PBS; a thin layer of TryplE was added to cover the cells and allow them to detach. Images of cells before detachment and afterwards, with released VLPs left on the glass support, are shown in Fig 3 (right panels).

Monte Carlo simulations

Simulations were set up following the Gillespie algorithm [43]. Processing, diffusion of Pol and budding were simulated for a single VLP and repeated 500 times to generate a population. The expected p24 and p55 proteins were calculated based on the simulated VLP release. Three essential reactions were considered within each VLP; for the processing step,

d[Gag.Pol]/dt = -k_p [Gag.Pol] × [Gag.Pol]

The concentration shown in brackets is the number of molecules within one VLP; at time t = 0, therefore, [Gag.Pol](t = 0) = 120 (molecules/VLP) and [Pol](t = 0) = 0. In these equations, k_p is the processing rate, k_d is the diffusion rate of Pol from the formed VLP with an open neck, k_r is the rate of VLP release before processing, and k_r* is the rate of release after processing. The concentrations of p24 and p55 were calculated based on the following rules:

if ([Gag.Pol] + [Pol] < 2) then [p24] = 0 and [p55] = [Gag] + [Gag.Pol]
if ([Gag.Pol] + [Pol] > 2) then [p24] = [Gag] + [Gag.Pol] and [p55] = 0

Simulated curves of p24 and p55 (for this analysis, we did not distinguish between p41, p48 and p55, summing all products and representing them as p55) are used in Fig 8A to fit the experimental data. Dynasore [72,73] was used to assess VLP internalization during VLP release.
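A minimal sketch of the per-VLP stochastic simulation described above is given below in Python. The rate constants, the Gag copy number per VLP, and the treatment of processing as a first-order event are illustrative assumptions (the fitted rates are reported in Fig 8B, and the processing rate law above is quadratic in [Gag.Pol]); only the initial Gag-Pol copy number of 120 per VLP, the competing processing/diffusion/release events, and the p24/p55 read-out rule are taken from the description above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative rate constants (per minute); the values actually used in the paper
# were obtained by fitting (Fig 8B) and are not reproduced in the text above.
K_P = 0.05            # processing of one Gag-Pol into free Pol products (first-order simplification)
K_D = 0.10            # escape of one free Pol product through the open neck
K_R = 1 / 75.0        # VLP release before any processing (a ΔYP-like mean delay of ~75 min)
K_R_STAR = 1 / 75.0   # VLP release after processing has begun (kept equal here)

N_GAG = 2400          # assumed Gag copies per VLP (not stated in the text above)
N_GAGPOL = 120        # Gag-Pol copies per VLP at t = 0, as stated in the Methods

def simulate_one_vlp():
    """Gillespie simulation of one VLP until its neck closes (release)."""
    gagpol, pol, t = N_GAGPOL, 0, 0.0
    while True:
        a_proc = K_P * gagpol                         # processing propensity
        a_diff = K_D * pol                            # loss of free Pol to the cytosol
        a_rel = K_R_STAR if gagpol < N_GAGPOL else K_R
        a_tot = a_proc + a_diff + a_rel
        t += rng.exponential(1.0 / a_tot)             # waiting time to the next event
        r = rng.uniform(0.0, a_tot)
        if r < a_proc:
            gagpol, pol = gagpol - 1, pol + 1         # one Gag-Pol processed
        elif r < a_proc + a_diff:
            pol -= 1                                  # one free Pol product diffuses out
        else:
            break                                     # neck closes: VLP released
    # Read-out rule from the Methods: fewer than two protease-containing molecules
    # left inside means Gag stays unprocessed (p55); otherwise it matures to p24.
    if gagpol + pol < 2:
        return t, 0, N_GAG + gagpol                   # (release time, p24, p55)
    return t, N_GAG + gagpol, 0

times, p24, p55 = map(np.array, zip(*(simulate_one_vlp() for _ in range(500))))
print(f"mean release time: {times.mean():.0f} min; fraction of mature (p24) VLPs: {(p24 > 0).mean():.2f}")
```

Running the 500 repetitions builds the simulated population from which population-level p24 and p55 curves, analogous to those fitted in Fig 8A, could be accumulated.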
Cells were treated with 80 μM Dynasore 4 hours post-transfection as previously described [73], and samples were collected 20 hours post-treatment. The vectors were expressed in 293T cells; all panels correspond to p24 immunoprobing.
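The 'VLP release analysis' section above states that Gag release kinetics were fit with a Boltzmann equation to extract delay times, with details deferred to the Supporting Information. The sketch below assumes the standard four-parameter Boltzmann sigmoid and uses invented densitometry values purely to show how a release midpoint, and hence a delay between constructs, could be read off; it is not the fitting procedure actually used.

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(t, a1, a2, t0, dt):
    """Boltzmann sigmoid: plateau a1 at early times, a2 at late times, midpoint t0, width dt."""
    return a2 + (a1 - a2) / (1.0 + np.exp((t - t0) / dt))

# Hypothetical densitometry time course: hours post-transfection vs. normalised VLP band intensity.
hours = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)
vlp_signal = np.array([0.02, 0.03, 0.05, 0.15, 0.40, 0.70, 0.88, 0.95, 0.98, 1.00])

(a1, a2, t0, dt), _ = curve_fit(boltzmann, hours, vlp_signal, p0=[0.0, 1.0, 5.0, 1.0])
print(f"fitted release midpoint t0 = {t0:.2f} h")
# A delay between two constructs (e.g. WT versus a p6 mutant) could then be read off
# as the difference between their fitted t0 values.
```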
SEROCONVERSION OF HEPATITIS B VACCINE IN INFANTS RELATED TO THE MOTHER'S SEROSTATUS IN A COMMUNITY OF SÃO JOSÉ DOS CAMPOS, STATE OF SÃO PAULO, BRAZIL

PURPOSE: To detect seroconversion of hepatitis B vaccine and antibody waning 3 years after vaccination in children immunized according to the World Health Organization schedule and its relationship to the mother's serostatus during pregnancy. METHODS: A serological study was carried out in São José dos Campos. Blood samples from pregnant women were taken for hepatitis B marker serology. To evaluate seroconversion in infants born to these women, serology was performed 1 month after they were vaccinated with recombinant vaccine. Another group of children was evaluated 3 years after being immunized. RESULTS: Among 224 pregnant women, 0.9% were positive for hepatitis B surface antigen, 8.0% for antibodies to the surface antigen, and 4.5% for antibodies to the virus core. Seroconversion among 174 infants was as follows: absent in 18 children (10.35%), low level in 15 (8.62%), intermediate level in 26 (14.94%), and a high level (good response) in 115 (66.09%). Antibody positivity after 3 years was as follows: absent in 8 children (7.92%), low level in 51 (50.5%), intermediate level in 20 (19.8%), and high level in 22 (21.78%). Considering the age that the vaccine was administered, a significant proportion of non-seroconverters was found among children who had received the complete 3-dose schedule before 9 months (P = 0.023). Another factor that significantly contributed to the lack of seroconversion was the presence of any serological marker for hepatitis B during pregnancy (P = 0.044). CONCLUSIONS: Data gathered in this work show that the immunization schedule for hepatitis B in low or moderate prevalence areas should be revised in order to optimize seroconversion.

INTRODUCTION

Hepatitis B is one of the world's most serious infectious diseases. It is estimated that over 350 million people worldwide are chronic hepatitis B virus (HBV) carriers.¹,² Studies show that of all infected persons, 25% have acute hepatitis with jaundice, and 6% to 10% have chronic hepatitis.³ Of them, 40% die each year from HBV-related liver diseases in the USA.³,⁴ Between 35% and 40% of all HBV infections diagnosed worldwide every year result from vertically transmitted cases. The risk of infecting their children is increased among women found seropositive for both hepatitis B surface antigen (HBsAg) and precore antigen (HBeAg), an indicator of high HBV titers.⁵ In an attempt to reduce the spread of this virus, in 1991, the WHO recommended the introduction of HBV vaccination into the Programme of Immunization in all countries.⁶ The prevalence of hepatitis B is variable around the world.⁷⁻¹⁰ In Brazil, the prevalence is moderate (2% to 7% in most part of the country), with the peak of infections occurring around 25 years of age.¹¹ But in the Amazon, some authors have found a prevalence of 24.6% to 61.79%.⁹,¹⁰ According to the Brazilian Health Authority, hepatitis B vaccine was officially used for the first time in 1989 during a National Vaccination Day in an endemic area of eastern Amazon forest.
Since 1992, the Brazilian Health authority has implemented the vaccine program, giving priority to endemic areas, such as the States of Acre and the Amazon, and to high-risk groups, such as persons with occupational risk throughout the country. The States of Santa Catarina, Espírito Santo, and Paraná implemented the immunization program against hepatitis B in 1993, and the Federal District implemented it in 1995. The vaccine was recommended for all states of Brazil in 1996, targeting all children less than 1 year of age. But, in 1997 the amount of vaccine was insufficient to immunize according to the 1996 strategy. Only in 1998 did the stock of vaccines reach an adequate level to guarantee national coverage, and from then on immunization was regularly achieved, according to health authorities. Several studies have been conducted in endemic areas for HBV. In these areas, the dynamics of anti-hepatitis B antibodies may be influenced not only by vaccination but also by natural exposure to HBV (natural booster).¹² But, in an area in which the prevalence of HBV decreases, or with a lower prevalence, such as is found in Southern Brazil, the dynamics of antibodies against the surface antigen (anti-HBs) and the persistence of the protection afforded by the hepatitis B vaccine may differ from what is observed in other studies. The aim of this study was to detect the immediate seroconversion following HBV vaccination of children 1 month after the last dose for a community attended by the routine immunization programme of a Basic Health Unit in Brazil, to determine the seroprevalence of HBV markers and the prevalence of HBV carriers among pregnant women, and to determine the anti-HBs levels in the first cohort of children immunized by the official programme, in an attempt to contribute to a better understanding of the effects of immunization in a population with moderate prevalence of hepatitis B infection.

Population and Study Design

This study was carried out in São José dos Campos, a city of São Paulo State, Brazil. The city has an area of 1,102 km² and is located just north of the Tropic of Capricorn (23º 13' 53" latitude south, 45º 51' 21" longitude west), 84 km from São Paulo City, the State capital. Its population is about 534,000 inhabitants, 95.1% in the urban area. The annual growth rate is around 1.89%. The city of São José dos Campos is strategically placed along the major road and rail links between the two largest Brazilian cities, namely, São Paulo and Rio de Janeiro, and is part of what is known as the Latin America economic development pole.¹³ The economic activities in São José dos Campos include industrial facilities for aircraft, motorcar, pharmaceutical, telecommunications, health consumables, electro-electronics and photography. The average annual per capita income is BRL $15,000.00 (circa US $7,000.00). The lowest 8.75% of the population have an annual income of BRL $2,400.00 (circa US $1,000.00), and 18.26% have an income of BRL $4,800.00 (circa US $2,000.00) per year.¹⁴ This study was performed at the Basic Health Unit (BHU) of the borough of Campo dos Alemães; it is one of the 42 BHUs in town, and serves approximately 21,800 people for assistance with clinical, pediatric and gynecologic primary care and prenatal care.
To check seroconversion and antibody concentration against the surface antigen (anti-HBs), a transverse serological study was performed with children immunized since 1998, the year in which the official programme was introduced. Two groups were established: the first cohort comprised children immunized after 6 months of age, and the second cohort comprised children immunized according to the current vaccination schedule. The recent seroconversion study was conducted in children born to women who attended the prenatal program of the BHU of Campo dos Alemães. From June 2000, pregnant women selected from a list of women with a positive b-HCG test were contacted and invited to participate in this study. A database was built with their complete names, record numbers, addresses, and predicted day of delivery. All participants were informed of the purpose of the study, and written informed consent was subsequently obtained. Blood samples from pregnant women were collected during the period from August 2000 to June 2001, approximately 6 weeks before delivery, for determination of HBV markers. The newborn babies were examined monthly by the pediatrician conducting this study (TMR) as part of the first-year pediatric routine. From each child's record, the weight, height, and adverse events related to vaccination were recorded in our immunization database. These infants were vaccinated according to the schedule proposed by the Brazilian Health Authority. One month after the last dose of hepatitis B vaccine, blood samples were collected from the children and tested for anti-HBs. Parents of the children assisted at Campo dos Alemães who were immunized against hepatitis B from October 1998 until August 2001 were contacted to enter this study, constituting a group in which we analyzed the antibody concentration more than 1 year after vaccination. Vaccine The vaccines used routinely in our country are bought, handled, and administered by central and local health authorities. During this study, the children received any of these yeast-derived hepatitis B vaccines: Euvax B recombinant®, LG Chemical Ltda. Pharmaceutical Div., Seoul, Korea (241 doses); Engerix-B®, SmithKline Beecham, Belgium (495 doses); and Hepavax-Gene®, Korea Green, South Korea (89 doses). Each dose of vaccine was given in the musculus vastus lateralis of the right thigh, in a dose of 0.5 mL (10 m. The vaccine was given during the first month of life and then 1 month and 6 months after the first dose. 2.3. Blood collection and serology Blood samples were collected by venipuncture with a butterfly and syringe designed for children, and through a vacuum collecting system designed for pregnant women. Samples were left at room temperature for 2 hours and then centrifuged at 3000 rpm for 15 minutes. Serum was separated from the clot, then frozen and preserved at -20 ºC until being transported to the university laboratory. Sera were transported to the city of São Paulo inside a thermal box with recycled ice (Gelo-X®, Adiquima Ind Com Adit Ltda., SP, Brazil) to keep the temperature between 2 and 8 ºC, and then stocked in a freezer at -20 ºC at the Laboratory of Immunosuppressive Investigation (LIM 01), Department of Pathology, Hospital das Clínicas, São Paulo University Medical School, until laboratory analysis.
The variables collected for the study were: (i) gender, (ii) weight and height at birth and at the last dose of vaccine, for calculation of body mass index (weight/height²), (iii) the Capurro value (the most widely used method for determining gestational age, in weeks, from clinical and neurological data), (iv) age of the child at the first and third dose of vaccine, (v) vaccine brand(s), (vi) interval between the last dose and blood collection, and (vii) determination of the HBV markers of the mother (anti-HBs, anti-HBc, and HBsAg). Data management and analysis Data were collected and organized using spreadsheet software (MS-Excel®, MS-Office 97®, Microsoft, USA). Statistical tests were performed using EPI-INFO version 6.0 (STATCALC and EPITABLE modules, CDC, USA & WHO, Switzerland) and MINITAB version 13.1 (Minitab Inc., USA). Categorical variables were analyzed by the χ² test corrected by the Mantel-Haenszel technique where applicable, adopting 5% as the level of significance. Anti-HBs concentration was analyzed by one-way ANOVA to test differences among groups classified by the time from the last vaccine dose. RESULTS From June 2000 to June 2001, 507 pregnant women were contacted and invited to participate in this study. Blood samples were taken from 224 pregnant women. Regarding the questionnaire answered by those 224 pregnant women, the following information was found: previous history of diagnosed hepatitis B infection (0.4%), hepatitis B infection in family members (3.1%), history of alcoholism (0.4%), history of previous blood transfusions (4.0%), and history of use of non-intravenous illicit drugs (4.9%). None of the patients admitted a history of previous use of intravenous illicit drugs. Two patients (0.9%) had been vaccinated against hepatitis B because they had jobs that require this immunization. In the final account, 174 children were sampled and serology was performed. All of them received the complete course of hepatitis B vaccination. Details of the time of administration of the first vaccine dose were available for these 174 children as follows: 74 (42.5%) received the vaccine within 48 hours after birth and 100 (57.5%) had their first dose after this time. This occurred because the vaccine is administered soon after birth in only 1 hospital. The other children received their first immunization at the Basic Health Unit. Vaccinees were observed for fever, injection site reactions, and systemic complaints. No side effects of the vaccine were observed apart from minor, limited local reactions at the site of administration. Cross tabulation of the children's serostatus for anti-HBs and their mothers' positivity for any HBV serological marker showed a significant association (P = 0.044), indicating that seroconversion was lower in children from positive mothers (Table 1). No significant difference was found between the antibody response to the HBV vaccine and weight and height (body mass index) at the time of the third dose. To check the persistence of anti-HBs concentration in the group of children immunized after 1998 by the national programme, 101 children 1 through 4 years old were sampled from among 1,520 children who had received the HBV vaccine at the Campo dos Alemães BHU. Among those 101 children, 49 (48.5%) were girls and 52 (51.5%) were boys. Of them, 32 children were from the first group vaccinated in 1998 (first cohort), sampled after an average of 32 months from the last vaccine dose. Data on anti-HBs antibody concentration were obtained in 69 children sampled 11 to 30 months after the last dose of HBV vaccine (second cohort). Table 2 shows the results of the 3 groups studied for seropositivity for anti-HBs. To analyze antibody waning from the last dose of vaccination and the seroconversion related to age, data from the 3 groups were added together to form a single group.
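For readers who wish to reproduce the kind of categorical analysis described in the data management paragraph above, the following is a minimal Python sketch of a 2x2 chi-square test such as the mother-serostatus versus child-seroconversion comparison (Table 1). The cell counts are hypothetical placeholders, since the table itself is not reproduced here, and the (N-1)/N adjustment used below as the Mantel-Haenszel-corrected statistic is an assumption about the formula applied by EPI-INFO, not something stated in the paper.

# Sketch of the 2x2 chi-square analysis; counts are invented placeholders.
from scipy.stats import chi2 as chi2_dist
from scipy.stats import chi2_contingency

table = [[10,  20],    # mother positive for any HBV marker: [not seroconverted, seroconverted]
         [ 8, 136]]    # mother negative for all markers:    [not seroconverted, seroconverted]

# Pearson chi-square without Yates correction as the starting point.
chi2_stat, p_pearson, dof, expected = chi2_contingency(table, correction=False)

# Assumed EPI-INFO formula for the Mantel-Haenszel-corrected statistic on a 2x2 table.
n = sum(sum(row) for row in table)
chi2_mh = chi2_stat * (n - 1) / n
p_mh = chi2_dist.sf(chi2_mh, df=1)

print(f"Pearson chi2 = {chi2_stat:.3f} (p = {p_pearson:.4f})")
print(f"Mantel-Haenszel chi2 = {chi2_mh:.3f} (p = {p_mh:.4f})")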
If we consider the age, in months, at which the children received the first dose of the hepatitis B vaccine, we observe that the likelihood of seroconversion increases if it is given after the second month of life (Table 3). Moreover, if we categorize the children by the age at which they received the third dose of HBV vaccine, we observe that seroconversion also increases if this last dose is given after 6.5 months of age (Table 4). We elected to stratify into 3 levels, namely: up to 6.5 months, 6.5 through 9 months, and after 9 months. This strategy stems from the age at which the first dose of hepatitis B vaccine was given: if the child received the first dose soon after birth, he/she received the last dose around 6 months of age; but if the first dose was administered after the second month of life, the last dose was given after 9 months of age. It should be noted that the interval between the doses is specified, and the doses tend to be administered at the proposed interval or later, never sooner. When all the children were stratified into 2 groups by the age at which they received the third HBV vaccine dose, using 7 months of age as the cut-off point, a significant difference in seroconversion in favor of the older children (P = 0.022) was observed. In this case, children immunized according to the officially proposed calendar did not benefit as much as those for whom immunization was delayed to a later age (Table 5). We classified the time elapsed after the last dose of HBV vaccine into 5 categories, namely: 1 through 1.5, 1.6 through 3, 3.1 through 12, 12.1 through 24, and 24.1 through 36 months. The analysis of variance showed a significant difference among these groups with respect to anti-HBs concentration (P < 0.001) (Figure 1). DISCUSSION The main objective of HBV vaccination is to prevent the chronic carrier state and its associated morbidity and mortality, especially in areas where the prevalence is higher, posing risks to infants.15 The vast majority of reports have been about the experience with the HBV vaccine in these types of areas. There are few studies that adequately address the effect of HBV vaccination in areas where the prevalence is moderate to low, and therefore the circulation of the virus is correspondingly low, which may cause the long-run outcomes of immunization to differ from outcomes in countries or areas with a high HBV prevalence. The optimal age for immunizing a population must take into account some characteristics of the target population, such as the best seroconversion time and the age-dependent force of infection, as well as antibody waning after vaccination and its potential effect on weakening immunity, and consequently the necessity of re-vaccination schemes. The HBV prevalence in Southern Brazil is low to moderate. As shown by our data, 1% of pregnant women are infected and carrying HBV, posing a risk for vertical or perinatal transmission. The prevention of this type of transmission seems to have been improved by the serological screening carried out since 2001 as part of the official prenatal programme. Women positive for HBV and their newborns are treated with hepatitis B immunoglobulin and vaccination as soon as the baby is delivered.
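In the same spirit, the one-way ANOVA used above to compare anti-HBs concentration across the five time-elapsed categories (P < 0.001, Figure 1) can be sketched as follows; the titre values are invented placeholders rather than study data.

# One-way ANOVA across the "time since last dose" bands; values are hypothetical.
from scipy.stats import f_oneway

groups = {
    "1-1.5 months":   [820.0, 640.5, 910.2, 755.0],
    "1.6-3 months":   [610.0, 480.3, 700.9, 520.1],
    "3.1-12 months":  [300.2, 210.7, 415.0, 260.4],
    "12.1-24 months": [120.5,  95.3, 180.2,  88.0],
    "24.1-36 months": [ 60.1,  35.4,  90.8,  42.7],
}

f_stat, p_value = f_oneway(*groups.values())   # ANOVA across the five bands
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")  # the paper reports P < 0.001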
Considering this epidemiological scenario, it is pertinent to question the practice of vaccinating all infants soon after birth without investigating seroconversion at this age, or the efficacy of the resulting immunity as protection against hepatitis B infection. Seroconversion studies have shown a good response among newborns and infants. In a recent study done in São Paulo State, Brazil, full-term newborns had a high rate of seroconversion following vaccination with recombinant hepatitis B vaccine.16 In that study, 98% (95% CI = 91.6-99.9) seroconverted, with 1.8% having low (<10 mIU/mL), 26.3% having intermediate (10-100 mIU/mL), and 71.9% having good (>100 mIU/mL) anti-HBs titers, with a mean anti-HBs titer of 537.5 mIU/mL. Our experience with the immune response in full-term babies is different, in that we found a greater percentage with low antibody levels and a higher percentage of good responses. In our study, we found that 1 month after the last HBV vaccine dose was given, seroconversion was low (<10 mIU/mL) in 10.35%, intermediate (10-100 mIU/mL) in 8.62%, and high (good) (>100 mIU/mL) in 81.03% of the neonates, with a mean concentration of 740.87 mIU/mL (± 524.79 mIU/mL). The previous study cited above was conducted in a University Hospital, where subjects and materials are under better control than in a community setting, with its population heterogeneities in terms of immunological response and application of vaccination procedures. This could possibly explain the differences in the proportion of low responders to the hepatitis B vaccine. It must be noted that the vaccine products come from different manufacturers and countries. Although this fact may have introduced heterogeneity in the immunological response among vaccinees, our results are still valid and relevant for evaluating the actual herd immunity obtained from the immunization programme. In other words, health authorities must be aware that varying the vaccine products applied in immunization will influence the population's protection against a given disease. Consequently, it is important to keep a surveillance program for evaluating the real outcome of a vaccination strategy. It is important to emphasize that the seroconversion obtained in this routine immunization service was around 90% among children who had received the HBV vaccine before 9 months of age. This phenomenon could result in the future accumulation of 10% per year of individuals susceptible to HBV infection, exposing this population to a greater risk, because they believe they are immunized. Our data indicate that seroconversion is maximized after 9 months of age. Moreover, considering that there is a significant decay in the antibody concentration, it is possible that individuals immunized early in life will be susceptible and at increased risk in adolescence or adulthood, when the incidence of hepatitis B is greater. In conclusion, it seems inadequate to maintain the immunization programme against hepatitis B soon after birth as it is presently scheduled. It is our opinion that the HBV vaccine should be given after 9 months of age in low and moderate HBV-endemic areas.
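As a small illustration of the response bands quoted above (<10 mIU/mL, 10-100 mIU/mL, >100 mIU/mL), the hypothetical helper below classifies titres into those three explicitly defined bands; the cut-off separating the abstract's "absent" and "low" categories is not stated in the excerpt, so it is not implemented.

# Hypothetical helper mirroring the anti-HBs bands cited in the Discussion.
def classify_anti_hbs(titre_miu_per_ml: float) -> str:
    if titre_miu_per_ml < 10:
        return "absent"          # below the protective/seroconversion threshold
    if titre_miu_per_ml <= 100:
        return "intermediate"
    return "high"                # "good" response

for titre in (4.0, 55.0, 740.87):   # 740.87 is the mean concentration reported above
    print(titre, "->", classify_anti_hbs(titre))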
Figure 1 - Box-plot of anti-HBs concentration related to time elapsed after the last dose of HBV vaccine; dots represent the average antibody concentration.
Table 2 - Seropositivity to anti-HBs for the 3 groups of children studied.
Table 4 - Age at the third HBV vaccine dose versus seroconversion, by stratification into 3 age groups.
2017-06-01T09:27:54.971Z
2006-10-01T00:00:00.000
{ "year": 2006, "sha1": "e7b344cbfe50cb32774b38f37102513b51b28fac", "oa_license": "CCBYNC", "oa_url": "https://www.scielo.br/j/clin/a/g4swyqfMQgxG7WzcwsCY9Mf/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "e7b344cbfe50cb32774b38f37102513b51b28fac", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
250511683
pes2o/s2orc
v3-fos-license
The Impact of Makassar - Parepare Railroad Development towards the Community of Soppeng Riaja District, Barru Regency --The development of the transportation is very compulsory for national development in all regions. The government announced the construction of the Makassar - Parepare railroad which was one of the National Strategic Projects (PSN). The purpose of this research was to determine the impacts and strategies that could be utilized in overcoming the impacts which was arising from the construction of the Makassar - Parepare railroad to the socities of Soppeng Riaja District, Barru Regency. This study designed qualitative methods with descriptive data analysis techniques. Informants in the study were; 1) PPK of South Sulawesi Railway Development, 2) the affected societies, 3) Chairperson of BPD at Ajakkang Village, and 4) Head of Polewali Environment. Data collection techniques were interviews, observation, and documentation. The results showed the construction of the railroad had impacts such as increased community welfare, provide economic benefits for the community, improving rural facilities, reducing social interaction between residents, not contributing significantly, environmental damage in the area of the railroad tracks, social conflict, and reduced rice production. While the strategy that has been carried out by the government is massive socialization and consensus agreement, community and government synergy, as well as river dredging and road improvement. INTRODUCTION Indonesia is the fourth most populous country in the world after China, India and the United States. This refers to data released by the United Nations in 2019 which stated that Indonesia had a population of 270 million (United Nations, 2019). With such a large population it is necessary to prepare facilities and infrastructure that can support the sustainability of people's lives, and accommodate their social and economic activities. Means of transportation become one aspect that sustains these needs. This can be forming and triggering growth in an area (Zulfikar, 2017). These activities can increase if an area is equipped with adequate and integrated transportation facilities, whether by land, sea or air. The government's effort to improve transportation facilities is by issuing Presidential Regulation No. 58 of 2017 concerning the Acceleration of the Implementation of National Strategic Projects (PSN). In general, this program has a long period of time, PSN can include the construction of airports, ports, dams, toll roads, and so on (Wahyu, 2018). South Sulawesi became one of the provinces running PSN with the Makassar-Parepare Railroad Development. PSN represents nawacita in developing Indonesia as a whole, the government's commitment to build infrastructure evenly to support increased economic growth and be able to provide employment. South Sulawesi once had a railroad during the Dutch colonial era, the Makassar -Takalar railroad along 47 km in the 19th century. However, it only lasted for seven years due to causing losses (Nasrul, dkk, 2018). 85 years later, the central government is trying to revive railways in the province by building a railroad in South Sulawesi. The goal of developing the railway line is to create connectivity between regions that have large-scale natural potential, efficiently in terms of energy, cost, and time (Fitriah, dkk, 2018), so it is also hoped that the public will use public transportation modes rather than private vehicles. 
Transportation infrastructure development can have an impact on the community, including the construction of the Makassar -Parepare railroad. One of the impacts is the conversion of agricultural land into railroad and train stations. This is supported by BPS Barru Regency data, the area of paddy fields in Soppeng Riaja District from 1,608 ha in 2017 was reduced to 1,534 ha in 2018 (BPS Barru Regency, 2019). The researchers' observations were also supported by (Marlianawati, et al, 2019) research which said that the development process had an impact on the community such as the loss of agricultural land and dwellings, so that they were forced to experience evictions to serve as airport construction sites. Based on the literature review, it is increasingly emphasized that the development of transportation mode infrastructure can lead to the conversion of agricultural land and residential land. Moving on from thought and based on the background of the problem, the formulation of the problem in this research is the construction impact of the Makassar -Parepare railroad on people of Soppeng Riaja Sub-district, Barru Regency and strategies to overcome the impacts. The purpose of this study is to determine the impact and strategies that can be carried out in overcoming the impacts arising from the construction of Makassar -Parepare railroad to the community of Soppeng Riaja District, Barru Regency. II. METHODS Qualitative methods are used in research, where qualitative is a method that examines phenomena that occur and are experienced by research subjects such as behavior, actions, etc. thoroughly then interpreted in a description in the form of words, phrases, sentences, and languages (Moleong, 2017). The method is used to describe an in-depth picture of the impacts caused and strategies in overcoming the impacts caused by the construction of Makassar -Parepare railroad to the people of Soppeng Riaja District, Barru Regency. Data collection techniques in this study were interviews, observation, and documentation. The techniques are mutually sustainable to obtain complete, in-depth data and in accordance with the focus of the study. Informants to be interviewed in this study are: (1) key informant, namely South Sulawesi Railway Development PPK; (2) the main informant is the affected community; and (3) Supporting informants in this case are the Chairperson of the Ajakkang Village BPD, and the Head of Polewali Environment. The data analysis technique used is a descriptive analysis technique that aims to find out and analyze data on the impacts caused and strategies in overcoming the impacts caused by the construction of Makassar -Parepare railroad to the people of Soppeng Riaja District, Barru Regency. The two areas are the locations of the construction of Makassar -Parepare railroad along 3 km, while the train station for Soppeng Riaja District will be built in Ajakkang Village. Ajakkang village was chosen as the location for the construction of the station for Soppeng Riaja District because of its strategic location and close to Mangkoso which is the capital of the sub-district so that it is more accessible to the community (Arinova / Head of the Technical Division of the South Sulawesi Railway Development PPK Technical Division, personal communication. June 16, 2020). In addition, the flat contour of the land can facilitate the construction and development of the station later. 
The laying of the first stone that marks the construction of Makassar -Parepare railroad began in 2014 in Siawung Village, Barru District, Barru Regency. Whereas construction in Soppeng Riaja District only began in 2016, starting with the determination of the stake for the location of the railroad track, then socialization is conducted for the community whose houses and paddy fields are affected by the development, measurement and pricing of affected land and buildings (Syaifuddin / Chairman of Ajakkang Village BPD, personal communication. 18 June 2020). Community meetings with stakeholders such as the South Sulawesi Railway Development PPK, the Barru Regency National Land Agency, the Public Works Office, the District and Village Parties took place three times and all agreed on the development. The determination of land and house prices is calculated in detail both in terms of size and material by an independent party namely the State Asset Management Institute (LMAN) (Muhammad Nasir/Head of Polewali Environment, personal communication. May 30, 2020). The government as a stakeholder and policy in development can have an impact that can be empirically reviewed whether or not there is a change in attitude that arises from the community after the development is felt and seen from the condition of the community (Muhammad, Pambudi, & Subarkah, 2015). The impacts that occur are as follows: 1. The Welfare of The Community Increases The majority of the people who own houses and rice fields affected by the construction of the railway line receive a large amount of compensation, even calling it profit compensation. With the benefits they get, they Advances in Social Science, Education and Humanities Research, volume 574 use it for moving house needs, such as buying land, building materials and renting a handyman. Besides that, buying paddies is for the people who have the affected paddy fields, while the rest is used for school fees and children's tuition, vehicles, and investment in the form of savings or land. There were 30 houses that had to be relocated in Ajakkang Village due to the construction of the railroad track, most received compensation in the hundreds of millions of rupiah, some even received up to one billion rupiah. When the researchers directly visited the relocated residents' houses, they were indeed better than before. Affected people's lives have become more prosperous. Providing Economic Benefits for The Community The construction of Makassar -Parepare railroad is a national strategic project that is handled directly by the central government, although that does not mean that local communities are not involved. Some communities worked in this development such as being a local contractor who collaborated with contractors in Java, built a foundation to support a railroad track, and some were tasked with cleaning the village shaft road that was passed by a dump truck. When construction activities are underway, coffee shops and shops selling food become crowded due to workers who come during work breaks, thus reviving the economy of the surrounding community. Improvement of Rural Area Facilities The construction of the railway line in Soppeng Riaja District indirectly provides improved facilities in rural areas. The establishment of the station in Ajakkang Village gives pride to the local community who believe that the region is undergoing modernization in the field of transportation. 
Another facility that is undergoing repairs is the Public Cemetery in Kiru-Kiru Sub-District, which was originally only an untreated burial area which has now become more organized because it has been given a guardrail and parking area which makes pilgrims comfortable and safe. Reduced Social Interaction Between Citizens The high surface of the railroad track that is right in the residential area makes Ajakkang Village look like it has a dividing wall between the sides of Kampung Baru and Ajakkang, besides that many houses have to be relocated due to the construction of the railroad tracks so that the settlements are not as dense as before construction. These two factors are the cause of reduced interaction between citizens, usually residents often gather or discuss with each other and many children who play especially in the afternoon have now begun to rarely occur such activities (Kartini Sade/Affected communities, personal communication. June 15, 2020). At night, young people rarely gather even though people are happy because they provide a sense of security. Even though social interaction between residents is reduced, mutual cooperation is maintained well as if there are residents who will hold weddings, move houses, and so on, there will be crowds and hand in hand to help launch the event. Not Provide Benefits Yet The construction of Makassar -Parepare railroad has been completed in Soppeng Riaja District, but until now the train has not been able to operate because the construction has not been fully completed in other regions, while to operate the railroad there needs to be continuity between the areas traversed by the railroad from Makassar to Parepare. The station in Ajakkang Village is still under construction, so it has not contributed significantly to the community. What actually happened was that residents complained about the dark village shaft because there was no lighting around the railroad tracks and tunnels below. Residents complain because during the construction took place a lot of dust and disrupt people's daily activities and endanger health. Besides that, there has also been massive tree cutting for development, making the climate in the affected area hotter, especially in the dry season. Waterways such as rivers experienced siltation during construction which caused intense flooding in Ajakkang Village, the worst flooding occurred in 2018 (Andi Elvy/Affected community, personal communication. June 13, 2020). Floods cause losses such as rice fields that failed to harvest, many houses were Advances in Social Science, Education and Humanities Research, volume 574 damaged due to high water up to chest height of an adult. Social Conflict in Soppeng Riaja Subdistrict Railroad development activities in Soppeng Riaja District can trigger conflict. As happened between the grave heirs in the Kiru-Kiru burial area who refused construction with the government. This is due to the lack of appropriate places to build a new public cemetery. Another conflict arose due to the way the truck driver was carrying hoarding recklessly so that the material soil had spilled over onto the village shaft road and dust made the community complain and there was a beating between the village youth and the truck driver. Ajakkang villagers also protested the government because during the construction they suffered a lot of losses, such as floods which got worse every year, and damage to the village axis road that disrupted their daily activities. 
Reduced Rice Production Development in Indonesia is not infrequently causing land use change, especially agricultural land. Agricultural land has a relatively low value compared to nonagricultural land, so that agricultural land is very vulnerable to land conversion (transfer function) for infrastructure development (Lestari, 2019). Transfer of function on agricultural land resulted in reduced rice production in Soppeng Riaja District, including in Kiru-Kiru Village and Ajakkang Village, especially rainfed rice fields that can only be harvested twice a year. Rice production was reduced from 12,844.98 tons in 2017 to 12,558.79 in 2018 (BPS Barru Regency, 2019). Impact Resolution Strategy Railroad Development Makassar -Parepare Various strategies have been carried out by the government as an agent of change in society, including: 1. Massive Socialization and Deliberation to Reach Consensus The government formed an implementing team for the relocation of the graves in Kiru-Kiru consisting of various stakeholders, whose task is to carry out outreach to the heirs to relocate the burial area and identify the graves. Deliberations were also held in order to obtain an agreement that would not harm the tomb's heirs, even though they had suggested making a flyover over the tomb. Synergy Between Society And Government The implementation of development in all aspects of people life of the nation and state can run optimally if there is good synergy between the government and the community (Dimpudus, et al, 2019), including the construction of Makassar -Parepare railway which involves the central government to the RT-RW, TNI-Police and the community. Various conflicts were successfully resolved due to good coordination. For example, the resolution of conflicts between the community and the truck drivers carrying rail material which are handled directly by the police, distributing aid to flood victims, and providing compensation money to people affected by dust during construction of Rp. 25,000/day for each house. River Dredging and Road Repair Development that causes damage has been handled well by the government. The shallow river is the cause of the flood, dredging has been carried out using heavy equipment so that the river can again accommodate the maximum water, and the damaged village axis roads have been concreted so that community activities run well. IV. CONCLUSION Based on the above discussion, it can be concluded that the construction of Makassar -Parepare railroad has various impacts on the people of Soppeng Riaja District such as increased community welfare, bringing employment and economic benefits to the local community, increasing rural facilities, reduced social interaction between residents, not contributed significantly yet, environmental damage in the area of the railroad tracks, social conflict in Soppeng Riaja sub-district, and reduced rice production. While the strategies that have been carried out by the government as agents of change in society are massive socialization and consensus agreement, community and government synergy, as well as river dredging and road improvement.
2022-07-14T18:22:55.852Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "44d5645fc6c693b97b7eba318ef3712f2e583e6b", "oa_license": "CCBYNC", "oa_url": "https://www.atlantis-press.com/article/125964486.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "7fb1fa64a0d6e4ba66cde0c744cf02cbac55ca59", "s2fieldsofstudy": [ "Engineering", "Geography" ], "extfieldsofstudy": [] }
221763162
pes2o/s2orc
v3-fos-license
Dialogue of Cultures: Challenges of Globalization The article is dedicated to the consideration of Islamic civilization in the context of the dialogue of contemporary world's cultures caused by globalization processes. Islamic civilization is presented as the interaction of different cultures, historically formed on universal underlying values and moral norms. The author deduces stereotypes of consideration of Islam and threats "coming from it" existing in modern-day humanities and social sciences. The researcher refers to them theoretical attempts to describe the realities of the Muslim world in terms of the Christian tradition, as well as related to the particular ideological attitude and methodology of so-called Eurocentrism. Conclusions are made about the role and significance of Islamic civilization for the preservation of sustainable development of countries within the global world. I. INTRODUCTION Religion is a cultural system, and culture is receiving more and more attention in world politics. We can state that one of the manifestations of the global crisis is the aggravation of religious and ethnic conflicts. We need response to question: how to correlate globalization, which is oriented to the values of unity with the domination of national and state forms of management and the dominance of differentiation on cultural and religious practices, identification by a some number of features, as well as pluralism of opinion in society? Does globalization imply orientation to conventional cultural and civilizational values and equate it with westernization? If we consider the adaptation of new technologies and ideas to regional conditions as a descriptive process of globalization, then localization and individualization may become dominant trends. Globalization under such circumstances no longer forms an abstract single cultural and educational space, and its emergence is associated with far more than one source. It is about the contribution of national cultures to the global culture. Moreover, globalization is accompanied by the socalled regionalization, i.e., the increasing importance of subcultures (e.g., Latin American, Southeast Asian, etc.). However, pointing globalization to the values of unity and the communality can lead to division and a new type of inequality, i.e., there is a gap between the producing and consuming countries. It is also essential to bear in mind that, within countries themselves, there are fractures among the social groups that participate in globalization and those that passively consume its "fruits." Thus, in the context of globalization, culture, and education, a researcher must assume that globalization cannot become an integrated whole through convergence based on a common set of cultural values. The following question must be answered: what holds these different parts of the whole together and what gives them coherence. That is, how to preserve the value of pluralism as the world seeks a single social space. II. ISSUES OF UNDERSTANDING THE "CROSSROAD OF CULTURES" Cultural values are naturally changing, but changes in culture are not reflections of social change, although there is an interaction between them. Some social scientists argue that changes in the dominant socialization procedure, including contemporary values and norms, are self-sufficient to cause dynamic changes in society. 
At the same time, it should be taken into account that globalization has destroyed the boundaries of the so-called first, second, and, consequently, third worlds of the development era. At present, we know that civilizations and local cultures are increasingly crucial for peoples and countries in the search for new national and cultural identities. It is impossible to work seriously in non-European cultures and civilizations without knowing them from the inside. Without knowledge of the fundamental bases of culture, as well as without understanding the peculiarities of spiritual life, we will not be able to understand what was known in the West as the "third world," even if we are familiar with the structure of the world market and know the trends of economic development. Nowadays, the entire Islamic civilization is considered in Europe as a potential source of conflicts in the modern world. Is the dialogue between East and West possible under such conditions? The answer to this question is to find out the reasons for considering the Islamic civilization as a threat to the modern world, a source of contradictions between countries. Let us try to outline the contours and context of the problem in terms of understanding Islamic civilization. Some of the significant figures claimed that those who know history know the future. Indeed, our past does determine the vector of development in many ways. Contribution of Islamic civilization in world history has been significant. Sure, it continues to have a noticeable impact on various areas of life in different countries. Today there are more than 1.5 billion Muslims worldwide, and 57 countries with a population of about 1.5 billion are members of the Organization of Islamic Cooperation (OIC). There are numerous Muslim diasporas in Europe. Thus, their number increased from 29.6 million in 1990 to 44.1 million in 2010. The share of the population professing Islam has increased from 4.1% to 6%, an increase of almost 50%. According to rough projections, the proportion of the Islamic population rises from 4.1% to 6% by 2030. 8% of the people of Western Europe will be Muslims in 2030, and every fourth by 2100. As for the Muslim diaspora in the USA, there are no official statistics, because according to the laws of the country, religious affiliation is not recorded in the national census, which occurs every ten years. That is why the number of adherents of Islam in the USA is estimated approximately. According to the report of the Statistical Association of American Religions on changes in the religious composition of the U.S. population, for ten years (2000 -2010), there has been an increase in the number of Americans professing Islam. Thus, from 2000 to 2010, the number of people who identify themselves as Muslims increased from 1 million to 2.6 million. According to the Pew Research Center (PRC) forecast, the number of U.S. citizens professing Islam will increase from 3.3 million to 8.1 million by 2050, making about 2.1% of the total population. Russia has a Muslim population of 21,513,046 (15% of the population). In Turkey, 68,963,953 people are Muslims or 99% of the population. Traditionally, the notion of civilization is associated with a country or continent, such as Europe, China, India, etc. However, Islamic civilization is not causally linked to any geographical location but covers the entire modern continental and subcontinental world. 
Thus, the peoples and countries that identify themselves as an integral part of the Islamic civilization are consolidated by the specificity of what is called Islam and Islamic civilization. It unites peoples of different ethnic groups, cultures, languages, and traditions from Syria to Malaysia, from Tatarstan to South Africa. It is thus challenging to consider a single historical community of fates of these peoples. Islam is a unity in diversity. We should discuss a special kind of solidarity, based not only on faith in Allah but also on a shared Worldview. However, such civilizational solidarity is not monolithic and non-conflictual. It does not exist in a pure form but is the result of cross-cultural interaction in the broad sense. The Islamic civilization can instead be considered as an epiphenomenon, i.e., a result of intercultural communication of different civilizations. Its peculiarity is close interrelation with religion, which is a way of life, a system of values, social, political, and economic institutions. We may call Islam one of the most viable world religions, dynamically adapting to the peculiarities of different peoples' traditions and cultures. The increase in the number of Muslims in the modern world is more connected with the attraction of new adherents, "enchanted" by the attractiveness and simplicity of Islamic religious practices. At the same time, in the broad sense, Islam has brought a large number of problems to the modern world, both at the level of interstate and interreligious relations and at the level of international political and economic life. Since 1967, media and researchers used frequently the term "Islamic factor". After the 1979 Iranian Revolution, the threat to the world order is associated with Islam. The Western countries started to correlate Islam with a sense of anxiety, and with extremism and terrorism in the 1980s. After S. Huntington's 1993 article "The Clash of Civilizations", the entire Islamic civilization began to be considered as a potential source of conflicts in the modern world. At present, it is rather obvious that in assessing Islam, stereotypes play a significant role. They arise as some researchers attempt to describe the realities of the Muslim world using terms of the Christian tradition, as well as those related to the particular ideological and cognitive attitude and methodology of Eurocentrism. It is accepted that modernity is the totality of existence, but as history shows, it is not so. The study of the state of Muslim culture shows that not only is there not enough knowledge about it, but its image is often much distorted. Speaking about the problem of stereotypes, we should note that up to the present day, false culturalphilosophical and political-ideological stereotypes prevail in various research and the consciousness of the public. Suffice to point out the widespread use of Islamic fundamentalism in the media, the content of which is interpreted quite broadly and arbitrarily understood rather as religious extremism. It is, therefore, necessary to distinguish between fundamentalism and extremism. In general, stereotypes are the result of either insufficient knowledge or an inadequate methodology or are formed per the ideological and socio-cultural attitudes of the cognizing subject. In Islam, as a rule, orthodoxy, theology, church ideology, etc. are sought by analogy with Christianity. However, these phenomena simply do not exist in Muslim culture. 
It is also improper to think about Islam and Islamic culture abstractly without taking into account that Islam and its culture in different historical epochs and different countries have their manifestation. The politicization of what is commonly referred to as the Islamic factor began in the early 1970s after some Arab countries, primarily Saudi Arabia, imposed an oil embargo on the states supporting Israel in its war with Egypt in 1973, which led to the first energy crisis. The embargo period was short, but in the historical memory of Western countries, Islam and the Arab countries were considered responsible for the crisis. After the 1979 Islamic Revolution in Iran, the stereotype of the Islamic threat became the dominant topic in the media. After 1991, the military implementation of the liberal democracy model in the Middle East led to irreversible and severe consequences in Afghanistan, Iraq, Libya, Syria, and Yemen. Mass migration from the Middle East and Africa has led, among other things, to a significant strengthening of the European sense of fear and threat from Islam. Islam, as well as Christianity, belongs to the world religions. Accordingly, it carries attitudes on the universal vision of the world. Obviously, since the publication of S. Huntington's article The Clash of Civilizations, the idea of a clash of Western and Muslim civilizations claiming universalism has been asserted. We consider this position, to put it mildly, incorrect, given the fact that Islamic and Western European Christian civilizations have been neighboring within a single Mediterranean civilization throughout many centuries of history. Thus, it seems that the main difference between Islam and the West is related to the difference in value systems and norms. Each of them has its understanding of the world. Moreover, if the Islamic worldview bases on certain religious principles, the European one is notable for its secularity. Only on this level, we may speak about the dialogue of two Universalist Worldviews. The recognition of cultural and religious pluralism can be the dialogue's basis. Thus, the matter may concern the search and establishment of cross-cultural interaction, including moral grounds. At the same time, it is vital to remember that the politicization of Universalist worldviews is the basis for considering and asserting the idea of clash or conflict of Islamic and Western European civilizations. The proposed approach to the search for a dialogue of Islam in the East-West context requires knowledge and understanding of the shared history underlying the formation and development of these civilizations in the Mediterranean area. Furthermore, the matter is that Islam and Christianity belong to the Abrahamic religious tradition, and that the ancient culture is an integral part of these civilizations. It is crucial to stress that for the Arab East, Aristotle has always been the First Teacher. The main differences in the historical and cultural development, especially after the Renaissance, can probably be found in the ratio of secular and religious movements. Classical Islamic culture preserved the balance between religion and secularism. That balance defined the primary worldview and value principles. III. CLASSICAL ARAB MUSLIM AND CONTEMPORARY ISLAMIC CULTURES How can we compare classic Arab Muslim culture, which was open to interactions, and the modern Islamic one, which, if not opposed, is not welcoming recent inter-civilizational dialogue? 
The values of Islamic culture and those of any other civilizations are determined mainly by fundamental values that constitute the basis of value consciousness in their integrity. Those of Islamic culture were primarily determined by the peculiarities of the formation and development of the Arab Caliphate. The traits of classical Islamic culture, as of its paradigms in general, are determined mainly by the fact that it was formed as an integral part of a unique Mediterranean culture and civilization, and by the fact that it preserved and multiplied the cultural, scientific and philosophical traditions of antiquity, as well as developed the humanistic nature of Mediterranean culture but in other historical conditions. It is not surprising that in the Arab East, antique heritage was the source and integral part of the Islamic world's culture. The development of Islamic civilization is closely connected with the emergence and strengthening of Islam and the Caliphate, whose vast space was a center of interaction and mutual enrichment of various cultural and religious traditions. The Islamic Golden Age came in the 9 th -12 th centuries, when Islam started to define the level of global cultureboth spiritual and material. One of the crucial traits of classical Islamic culture is that its main structural elements are not so much scientific (as was the case in Western European thought) but the value and ideological processes, determining the nature of knowledge, interpretation, and scope of adequate understanding of the epistemological map of the world. These processes have a conventional paradigm based on a specific set of assessments and perceptions of the limits of human existence in the world, the nature, and connection with the cosmos, reflected in the Islamic world. It was in the problem field of knowledge (based on the ideal of knowledge in Islam) that intellectuals of the Islamic Middle Ages solved each problem separately -be it cultural and political issues, ethics and aesthetics, philosophy and law. All major philosophical and sociopolitical trends of the Muslim Middle Ages, without limiting themselves to one specific subject of knowledge, acted as political, or philosophical, or legal, or ethical theories, etc. related to political problems. The peculiarities of the ideal of knowledge in Islamic culture was defined by Sharia, according to which faith and mind should not oppose each other but rather mutually enrich in the field of knowledge. Thus, we can say that medieval Islamic culture is knowledgecentered. For example, the work of the famous medieval thinker al-Ghazali (1058-1111) The Revival of Religious Sciences (Ihya' Ulum al-Din) can be simultaneously considered philosophical, legal, religious, linguistic, and cultural, i.e., interdisciplinary in the modern sense [2]. It is not without reason that the famous philosopher Averroes (1126-1198) spoke about al-Ghazali: "… he was an Ash'arite with the Ash'arites, a Sufi with the Sufis and a philosopher with the philosophers" [3]. Many representatives of Kalam wrote not only on religion but also on philosophy and natural science. The matter here is not in the weak differentiation of sciences, but in the proper spiritual attitude of Islamic culture, based on the famous statement attributed to the prophet Muhammad: "Seek knowledge even in China." 
In medieval Arab Muslim civilization, as American orientalist Franz Rosenthal stresses in his Knowledge Triumphant [4], "knowledge" has acquired such significance that there is no equal in other civilizations. The "knowledge" is both secular and religious. The nature of the value orientation of the educated part of the medieval Islamic society can be judged from Adab literature. Adibs embody the image of cultural and educated people. Adab, a set of norms of education and politeness, assumed the knowledge of both secular and religious sciences, in particular philosophy, astronomy, mathematics, and a specific model of behavior. Crucial for understanding the paradigm of Islamic culture are such features of Islam as the absence of the institution of the church and, consequently, the lack of church ideology; the recognition of the law-making role only for God and, hence, the absence of orthodoxy and heresy in the Christian sense; religious and legal pluralism within a single Islamic worldview. In describing the paradigm of Islamic culture and civilization, it seems necessary to identify at least two dominant components: Islam and Hellenism. Throughout its history, Islamic culture has shown and demonstrated both its "Western face," as it contains elements of Judaism, Christianity and Hellenism, and the "Eastern face," as it departs from the essence of these components. By taking the latter into account, it is possible to understand the humanistic component linked to the attempt to make human beings more humane and to contribute to the discovery of their greatness. We are discussing three aspects of humanism in medieval Islamic culture:  religious humanism, proclaiming man the supreme of divine creations;  Adab humanism, the archetype of which is the 9th-century Adab, corresponding to the ideal of Humanitas, which is characteristic of 16thcentury Europe. That is, the idea of the development of physical, moral and mental capacities of an individual for the common good;  Philosophical humanism, more conceptualized. Abū Hayyān al-Tawhīdī briefly expressed its essence: "Man has become a problem for man" [5]. While paying tribute to and recognizing the existence of universal traits and principles of humanism, we should also mention that every culture and civilization in its prime is developing its humanism model. It is also true that, even within the framework of Islamic culture, humanism lies in various forms. In the East, this phenomenon first became known during the reign of Khosrow I Anushirvan and was introduced by Barzuyeh, Paul the Persian, and others. The humanism that came to life under the influence of Hellenistic Gnosticism, hermeticism and Neoplatonism followed. That humanist quest focused on the theme of the "perfect man" and was represented by the names of Ibn Arabi, Abd al-Karīm al-Jīlī (1365-1417), Al-Hallaj (857-922) and Yahya ibn Habash Suhrawardī (1154-1191). At least, but not last, humanism, which emphasizes the greatness of the human mind (as in the hadith, attributing to the prophet Muhammad the words "Whoever knows God knows himself" and "The first thing created by God is the mind"), is found in the work of Muhammad ibn Zakariya al-Razi (850-925), who rejected Revelation and affirmed the autonomy of the human mind in the spirit of European Enlightenment. The ambivalence of Islamic culture, based on the principles of Sharia and the historical practice of the Caliphate, suggests its consideration in terms of both earthly and heavenly, as well as esoteric and exoteric. 
Considering the significant role of Sharia in the world and the predominance of worldly attitudes in human Advances in Social Science, Education and Humanities Research, volume 468 behavior and thought, we should note that Islamic culture has preserved and maintains a stable connection between the ideas of space and ethics. This fact allowed us at one time to consider "foreign science" (philosophy, oriented at the ancient tradition) as an integral part of our own culture, and allows us to leave the doors open for modern European science and culture. Considering esoteric versus exoteric relations in the issue of reason-faith, it is necessary to note the nature of their complementarity. The analysis of the theological and philosophical levels of solving the problem of the correlation of reason and the institutions of faith shows the following. Despite the divergence in positions of different thinkers, they agree that in their totality they followed the esoteric tradition associated with the priority of reason. In so doing, they paved the way for Sufi esoteric knowledge and its intellectual attempt to harmonize Sharia and Tariqa as the justification of its approach to the problem. Sufism did not regard the relation between reason and faith as "the very essence of the problem" but included it in the overall system of the connections among the Spiritual Stations: the Law, the Path, and the Inner Truth (Sharia, Ṭariqa, Haqiqa). At the same time, the Sharia, Ṭariqa, Haqiqa system organized a "logical form" of action of a cognizing subject in search of his Absolute, thus contributing to the appearance of many variations, one of which is the teaching of al-Ghazali. Realizing that Sufism is a historical and holistic phenomenon, we believe it is crucial to study it, taking into account the archetypes of the Sufi culture. IV. CONCLUSION The philosophical analysis of Muslim culture requires identifying a stable paradigm and changes in the course of historical development. It is essential to consider this when analyzing concepts on the so-called reform or modernization of Islam. As a rule, the attempts made so far by the West to shape Islamic development have failed because the traditional foundations that represent the spirit of Islamic culture have been accepted as something that can historically be overcome. Meanwhile, social, historical, and political realities inherently show that understanding of traditional and modern essence is tightly related to the foundations of political and legal culture of Islam and the dominant ideological and cultural movements within the framework of developing Islam. The analysis of classical government theories in Islamic political thought, represented by such authors as al-Mawardi (d. 1058), al-Juwayni (d. 1085), al-Ghazali, clearly shows that the principles of Sharia did not interfere with the historical realities of the Caliphate and relied more on historical precedents [6]. The constant component of these concepts is the doctrine that the state is only a conductor of the Sharia principles. However, the question is who has the real political powerhow are power and authority understood, what the consolidating component is, and moral and spiritual basis of a Muslim civil society. The idea of the unity of religion and state is based on not only a sense of religious solidarity but also on the need to understand that Islam is expected to establish equality and justice in social, political, and economic relations. 
The recognition of the fact that Islam is a way of life and a specific type of modern world outlook makes it possible to understand the idea of an Islamic state by its very essence.
2020-09-10T10:16:27.086Z
2020-09-01T00:00:00.000
{ "year": 2020, "sha1": "630a89efdc3e6de24f940340f7f9337ff7fce87c", "oa_license": "CCBYNC", "oa_url": "https://www.atlantis-press.com/article/125944242.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "87c0ff509c3d4831726cf756dfa7c94c0f1202ea", "s2fieldsofstudy": [ "Sociology", "Political Science" ], "extfieldsofstudy": [ "Political Science" ] }
13479104
pes2o/s2orc
v3-fos-license
Expression of c-myc is not critical for cell proliferation in established human leukemia lines Background A study was undertaken to resolve preliminary conflicting results on the proliferation of leukemia cells observed with different c-myc antisense oligonucleotides. Results RNase H-active, chimeric methylphosphonodiester / phosphodiester antisense oligodeoxynucleotides targeting bases 1147–1166 of c-myc mRNA downregulated c-Myc protein and induced apoptosis and cell cycle arrest respectively in cultures of MOLT-4 and KYO1 human leukemia cells. In contrast, an RNase H-inactive, morpholino antisense oligonucleotide analogue 28-mer, simultaneously targeting the exon 2 splice acceptor site and initiation codon, reduced c-Myc protein to barely detectable levels but did not affect cell proliferation in these or other leukemia lines. The RNase H-active oligodeoxynucleotide 20-mers contained the phosphodiester-linked motif CGTTG, which as an apoptosis-inducing CpG oligodeoxynucleotide 5-mer of sequence type CGNNN (N = A, G, C, or T) had potent activity against MOLT-4 cells. The 5-mer mimicked the antiproliferative effects of the 20-mer in the absence of any antisense activity against c-myc mRNA, while the latter still reduced expression of c-myc in a subline of MOLT-4 cells that had been selected for resistance to CGTTA, but in this case the oligodeoxynucleotide failed to induce apoptosis or cell cycle arrest. Conclusions We conclude that the biological activity of the chimeric c-myc antisense 20-mers resulted from a non-antisense mechanism related to the CGTTG motif contained within the sequence, and not through downregulation of c-myc. Although the oncogene may have been implicated in the etiology of the original leukemias, expression of c-myc is apparently no longer required to sustain continuous cell proliferation in these culture lines. Background The v-myc gene was originally discovered as the cell-transforming oncogene of the avian myelocytomatosis virus MC29, and was subsequently shown to be a virally sequestered copy of a member of a normal cellular gene family [1]. The c-myc gene was subsequently found to be overexpressed in a variety of human malignancies, most notably in Burkitt lymphoma, through chromosomal translocations into the transcriptionally active immunoglobulin loci [2,3], and in other cancers through gene amplification [2,4,5]. The gene product has been shown to act as a transcription factor and has been implicated in cell cycle control, while overexpression can contribute to malignant transformation or result in induction of programmed cell death [for detailed reviews see [6][7][8][9]]. Results of experiments with an antisense oligodeoxynucleotide sequence targeting the initiation codon region of c-myc mRNA (bases 559-573, HSMYC1, GenBank Accession Number V00568) [8,10,11] have been taken as providing strong support for the generally accepted view that expression of the gene is required for passage of cells from G1 into S phase of the cell cycle [7]. In consideration of the foregoing, it was somewhat of a surprise to us to find that we could reduce expression of c-myc to low levels for several hours, following intracytoplasmic delivery of the same antisense sequence through reversible plasma membrane permeabilization with streptolysin O, but that such treatment had no effect on the proliferation of cells of the human T lymphocytic leukemia line, MOLT-4 [12].
The oligodeoxynucleotide 15-mer used in these experiments had a chimeric structure composed of 3 nuclease-resistant methylphosphonate internucleoside linkages at each end of the molecule to protect against exonucleolytic degradation, and a central section with 8 normal phosphodiester groups to elicit cleavage of c-myc mRNA by endogenous cellular RNase H. In view of the transient nature of the suppression of c-myc gene expression, due to the intracellular decay of intact antisense effector [12], attempts were made to enhance the biological stability of the antisense oligodeoxynucleotide, by reducing endonuclease susceptibility through further methylphosphonate substitution. However, these experiments served to demonstrate that the initiation codon region of c-myc mRNA, which we refer to as the A site, is involved in secondary structure and is not a good target for antisense attack [13]. The methylphosphonate modification is helix destabilising [14,15] and just one further substitution abolished RNase H-mediated antisense activity at the initiation codon site [13], while reduced phosphodiester structures targeting apparently accessible regions in other mRNAs were still highly active [16]. On the other hand, the downstream region bounded by positions 1150-1159 of c-myc mRNA was obviously readily accessible since it was efficiently cleaved by RNase H in the presence of the initiation codon antisense oligodeoxynucleotide [17], through only partial complementarity to the latter over the sequence CGTTGAGG*GG within the oligomer (G* signifies a GU base pair in the hybrid between the A site antisense oligodeoxynucleotide and positions 1150-1159 of c-myc mRNA). Chimeric methylphosphonate / phosphodiester oligodeoxynucleotides fully complementary to positions 1147-1166 of c-myc mRNA, which we refer to as the D site, and incorporating the sequence CGTTG within the central phosphodiester section were shown to be highly effective at ablating the message by an RNase H dependent mechanism, as evidenced by the appearance of c-myc mRNA fragments on Northern blots, and at reducing c-Myc protein to low levels over 24 h in living leukemia cells [18]. These effects were accompanied by marked inhibition of cell proliferation that was initially ascribed to the sustained downregulation of c-myc achieved through targeting the new D site. However, this interpretation was put in doubt when prolonged suppression of c-myc gene expression, achieved with a morpholino antisense oligonucleotide analogue 28-mer, simultaneously targeting the exon 2 splice acceptor site and initiation codon [19], failed to affect cell proliferation. During the course of the above work we had also undertaken an investigation into the observation that a random-sequence control 15-mer oligodeoxynucleotide rapidly induced apoptosis following intracytoplasmic delivery into MOLT-4 and Jurkat E6 human T lymphocytic leukemia cells. The results of this study demonstrated that the activity was due to the phosphodiester-linked 5-mer motif CGGTA present within the oligodeoxynucleotide, and that isolated, end-protected CpG 5-mers of sequence type CGNNN (N = A, G, C, or T) induced apoptosis with varying degrees of potency depending upon the nature of the 3' terminal sequence [20]. The more active 5-mers were also found to induce cell cycle arrest without triggering apoptosis in a number of other leukemia cell lines. 
The 5-mer sequence CGTTG, present within both the A site, initiation codon c-myc antisense oligodeoxynucleotide and the D site antisense effector, was one of the most potent inducers of apoptosis / cell cycle arrest. Why the A site antisense oligodeoxynucleotide failed to induce apoptosis in MOLT-4 cells is readily explained by the fact that the internucleoside methylphosphonate substitution of the chimeric structure extended into the CpG 5-mer motif [12], which was shown to abolish apoptosis inducing activity of CpG oligodeoxynucleotides [20]. On the other hand the D site antisense oligodeoxynucleotides contained CGTTG in fully phosphodiester linked form [18] raising the possibility that the observed antiproliferative effects were related to the non-antisense biological activity of this motif rather than to the antisense-induced downregulation of c-myc expression. The situation is further complicated by the observation that the sequences CGGTA and CGTTA, present within 15-mer oligodeoxynucleotides and as an isolated 5-mer respectively, induced quite substantial reductions in the levels of c-Myc protein expression in MOLT-4 and myeloid leukemia KYO1 cells by a non-antisense mechanism, with lesser effects on the corresponding intracellular concentrations of c-myc mRNA [20]. The present work was undertaken to establish the relevance of c-myc ex-pression to cell proliferation in human leukemia cell lines as opposed to CpG 5-mer effects. Our results demonstrate that c-Myc protein may be severely reduced in cells without affecting proliferation, suggesting that if dysregulation of c-myc was involved in the etiology of the source leukemias, its function has since been superseded by other factors which maintain the cells in a permanently proliferative state. Results A range of chimeric methylphosphonodiester / phosphodiester antisense oligodeoxynucleotides (Table 1) targeting bases 1147-1166 of human c-myc mRNA (HSMYC1, GenBank Accession Number V00568), derived from exon 2 and corresponding to codons 197-203 (part) of c-Myc protein, were evaluated for their effects on c-myc gene expression following intracytoplasmic delivery into cells of the human chronic myeloid leukemia line, KYO1. Molecules with stepwise reductions in the number of central phosphodiester linkages from 9 to 4 exhibited high activity in inducing the ablation of c-myc mRNA. Since c-Myc protein has a very short half-life [13], its intracellular concentration reflected the reductions in the level of the mRNA ( Figure 1A). Control oligodeoxynucleotides with an inverted antisense or sense sequence had no significant effect on c-myc gene expression. Ablation of c-myc mRNA by the antisense oligodeoxynucleotides resulted from an RNase H-mediated mechanism directed by the central phosphodiester section of the molecules. RNase H-generated fragments of c-myc mRNA were visible on Northern blots of total RNA isolated from treated KYO1 cells at 4 h, and essentially the same results were observed whether intracytoplasmic delivery was achieved by reversible plasma membrane permeabilization with streptolysin O or by electroporation ( Figure 1B). Figure 2A presents the dose response for ablation of c-myc mRNA in cells loaded by streptolysin O permeabilization from the indicated external concentrations of a chimeric antisense oligodeoxynucleotide 20-mer containing just 5 central phosphodiester linkages. 
GAPDH mRNA was included as a control for specificity in this experiment since it had previously been shown to be susceptible to RNase H-mediated cleavage at high concentrations of c-myc D site antisense oligodeoxynucleotides [18], through partial complementarity over 9 consecutive bases from 967-975 (HSGAPDR, GenBank Accession Number X01677) and 11 overall between 961-980. It can be seen that cells became arrested in the cell cycle ( Figure 2B) at concentrations of oligodeoxynucleotide which had significant effects on c-myc mRNA and negligible effects on GAPDH mRNA ( Figure 2A). In contrast to the foregoing, an RNase H-inactive morpholino c-myc antisense oligonucleotide 28-mer, simultaneously targeting both the initiation codon and exon 2 splice acceptor sites of c-myc (Table 1), inhibited both splicing and normal translation initiation, induced missplicing to a cryptic site 44 nucleotides downstream of the normal splice acceptor site thereby cutting out the AUG initiation codon, and after around 4 h of inhibition, began to induce translation initiation from a downstream, in frame AUG [19]. However, despite the almost complete ablation of full length c-Myc protein over 24 h and negligible level of truncated product at 4 h ( Figure 3A), KYO1 cells continued to proliferate at the same rate as cells permeabilized in the absence of oligonucleotide or in the presence of sense and nonsense controls ( Figure 3B). Similar results were observed with HL60 human acute promyelocytic leukemia, U937 human histiocytic leukemia and MOLT-4 human T lymphocytic leukemia cells (data not shown). The CpG motif, CGTTG, present within the central phosphodiester section of the chimeric methylphosphonodiester / phosphodiester, c-myc antisense 20-mer, 5'F-MD757AS (Table 1), also inhibited the proliferation of KYO1 cells when delivered as an isolated 5-mer which was end-protected against exonuclease degradation by fluorescein at the 5' end and 3-hydroxypropyl phosphate at the 3' end ( Figure 4A). In contrast to the large reduction in c-myc mRNA and appearance of RNase H-generated fragments at 30 min in cells treated with the full length antisense 20-mer, CGTTG induced only a modest drop in mRNA and no fragments were detectable on Northern blots ( Figure 4B). However, CGTTG treatment did downregulate c-Myc protein, although this occurred more slowly and to a somewhat lesser extent than in cells loaded with the full length antisense 20-mer ( Figure 4C). Modest reductions in c-myc mRNA and more profound decreases in c-Myc protein were previously observed in KYO1 and MOLT-4 cells treated with other CpG oligodeoxynucleotides [20]. Therefore, it was unlikely that downregulation of c-Myc protein by CGTTG occurred through an antisense mechanism. Indeed, unlike the cmyc antisense 20-mer, CGTTG did not promote cleavage of c-myc mRNA by E. coli RNase H, as determined by incubation in vitro with total RNA isolated from KYO1 cells, followed by RT-PCR using c-myc specific primers across the antisense target site ( Figure 5). Both the cmyc antisense 20-mer, 5'F-MD757AS and CGTTG rapidly induced apoptosis in MOLT-4 cells and downregulation of c-Myc protein was maintained over 4 h by both treatments (Figure 6A and 6B). However, a MOLT-4 cell subline generated for resistance to CpG 5-mers by repetitive treatments of surviving cells with CGTTA was also resistant to the apoptogenic action of CGTTG and the c-myc antisense 20-mer ( Figure 6D). 
CGTTG also failed to induce a significant effect on c-Myc protein expression in this MOLT-4 subline (Figure 6C). Nevertheless, the true RNase H-mediated antisense activity of 5'F-MD757AS against c-myc mRNA was maintained and c-Myc protein levels were severely reduced in CpG oligodeoxynucleotide-resistant cells loaded with the 20-mer (Figure 6C), but the cells did not apoptose (Figure 6D). Consequently, it is clear that once the propensity for CpG oligodeoxynucleotide-induced apoptosis is removed, the previously observed correlation between downregulation of c-Myc protein and cell death no longer holds. Discussion In applying antisense oligonucleotides to dissect gene function it is desirable to use two types of molecule embodying different mechanisms of action against the target mRNA. It is unlikely that non-specific or unanticipated sequence-specific biological activity will be the same when the chemical structure of the oligonucleotides and the strategy for blocking translation of the message are widely different. Therefore, the assignment of gene function can be made with greater confidence when phenotypic effects of different types of antisense oligonucleotide concur. We have targeted expression of the c-myc gene in human leukemia cell lines using two different antisense strategies. On the one hand, cleavage of c-myc mRNA within the translated region by endogenous RNase H was induced through application of chimeric methylphosphonodiester / phosphodiester oligodeoxynucleotides. On the other, c-Myc protein was downregulated through steric blockade of both pre-mRNA splicing and translation initiation using a morpholino oligonucleotide analogue. In both cases the data presented and reported previously [19,21] support the conclusion that the anticipated mechanisms were entrained within the cells and that true antisense effects on c-myc gene expression were achieved. However, in our case the phenotypic effects in terms of cell proliferation were contradictory. The induction of apoptosis in MOLT-4 cells and cell cycle arrest in KYO1 cultures by RNase H-active chimeric oligodeoxynucleotides targeting bases 1147-1166 of c-myc mRNA were seemingly predictable results. The lack of effect of the morpholino antisense oligonucleotide on leukemia cell proliferation was somewhat surprising, especially considering that ablation of c-Myc protein at 4 h was almost total (Figure 3). The accumulation of a protein of approximately 47 kDa carrying the c-Myc epitope recognised by the antibody mix, clearly visible at 24 h, was the result of initiation of translation at an in-frame AUG exactly 300 bases downstream of the position of the normal AUG initiation codon in c-myc mRNA. Such c-Myc short proteins which lack most of the N-terminal transactivation domain but retain the C-terminal protein dimerization and DNA binding domains have been observed in certain cells under normal growth conditions [22]. They are unable to activate transcription and inhibit transactivation by full-length c-Myc protein. However, in the present experiments both the full-length protein and the truncated product were barely detectable at 4 h after the initiation of antisense treatment (Figure 3), but despite their absence the cells continued to proliferate at the same rate as untreated controls. 
In view of the foregoing it seemed likely that the biological activity of the RNase H-active antisense oligodeoxynucleotides was due to the action of the CGTTG motif present within the phosphodiester section of the chimeric molecules, rather than to downregulation of c-Myc protein. Apoptosis induced by CpG oligodeoxynucleotide 5-mers in MOLT-4 cells was of rapid onset and was characterised by marked redistribution of phosphatidylserine to the outer surface of the plasma membrane and DNA laddering within 160 min, while the mitochondrial transmembrane potential collapsed over roughly the same timescale [20]. The process was associated with proteolytic processing of pro-caspase 8 and Bid, followed by processing of procaspase 3. It is noteworthy that the oligodeoxynucleotide CGTTG was a potent inducer of apoptosis in MOLT-4 cells and that the relative activity of different 5-mers correlated with their ability to arrest KYO1 cells in the cell cycle [20]. In the present work we have shown that CGTTG mimicked the antiproliferative effects of the RNase H-active 20-mers in the absence of any antisense activity against c-myc mRNA (Figures 4 and 6). We have also demonstrated that once MOLT-4 cells are rendered resistant to CpG oligodeoxynucleotide-induced apoptosis they no longer undergo programmed cell death or even cell cycle arrest in response to downregulation of c-Myc by a chimeric antisense 20-mer (Figure 6C and 6D). Despite the large body of evidence linking c-myc expression to cell cycle control [6,7,9], it would appear that its functions have been superseded by other dominant pathways in the established leukemia cell lines. This raises the question as to whether independence from c-myc was selected for during establishment of immortal cell cultures, or occurred prior to this during malignant progression in vivo. It has generally been assumed that expression of the oncogene would be critical for maintaining the malignant phenotype of leukemic leukocytes since it was shown to be upregulated in these cells [3,4]. This relies on the logic that if a protein is there and expressed in abundance, then it must be doing something, which is not necessarily true. It may well be that overexpression of c-myc serves to select for cells that are insensitive to its activities, such as induction of apoptosis, and that the accompanying genetic changes contribute to malignant progression without the further requirement for elevated c-Myc protein [8,9,[25][26][27]]. Our future work will address this question using primary leukemia cells derived from patients before and during therapy. Conclusions We have shown that use of antisense oligonucleotides in assessing gene function may lead to spurious assignments, if not properly controlled by confirmation through independent strategies. Conflicting results of such an approach prompted this investigation, which has led to the conclusion that expression of c-myc is not critical for cell proliferation in the established leukemia lines studied. Rather, the CpG 5-mer motif present in the antisense oligodeoxynucleotide sequence, and not downregulation of c-myc, was responsible for the observed biological activity, leading to apoptosis and cell cycle arrest by a non-antisense mechanism. This is an important finding which needs to be taken into account in defining the true functions of c-myc in dividing cells. 
It raises the question as to whether expression of the oncogene is required to sustain uncontrolled proliferation in primary malignant cells, and if not, then why is the gene amplified / overexpressed in leukemia and lymphoma. Finally, apart from the importance of avoiding CpG 5-mer motifs in antisense oligodeoxynucleotides, the unusual biological activity of these small oligodeoxynucleotides presents the opportunity to define novel targets for antileukemic drug development. Oligodeoxynucleotide synthesis and cell treatment Chimeric methylphosphonodiester / phosphodiester oligodeoxynucleotides were synthesized as described previously [23] using methylphosphonamidite and phosphoramidite synthons (Glen Research, Sterling, VA; UK supplier Cambio Ltd, Cambridge). 5'-Amino-Modifier C6-TFA (Glen Research) was incorporated in the final cycle of the synthesis and the oligodeoxynucleotides were labelled with fluorescein, postsynthesis, using Fluos reagent (Roche Diagnostics Ltd, Lewes, East Sussex, UK). Oligodeoxynucleotides terminating at their 3' ends with phosphodiester internucleoside linkages were synthesized on DMT-C3-Succinyl-CPG support (Peninsula Laboratories Europe Ltd, St Helens, Merseyside, UK) to provide protection against 3'-exonuclease by a 3-hydroxypropyl phosphate group on the 3'-OH. Fluorescein-tagged morpholino oligonucleotide analogues were generously donated by AntiVirals, Inc. (Corvallis, OR). Oligonucleotides were introduced into the cytoplasm of cells by 10 min reversible permeabilization of the plasma membrane with streptolysin O (Sigma-Aldrich Company Ltd, Poole, Dorset, UK) in the presence of external concentrations of 0.002-20 µM oligonucleotide in serum-free RPMI 1640, as previously described [12]. A full experimental protocol for this technique is available at the web site of The Antisense Research Group at The University of Liverpool [http://www.liv.ac.uk/~giles]. Alternatively, cells were electroporated in the presence of oligonucleotide under the reported conditions [24]. Results presented represent cultures in which >85% of the cells had taken up the oligodeoxynucleotide and subsequently resealed to exclude propidium iodide as determined by dual parameter flow cytometry. Northern and Western blotting Effects on cellular content of mRNAs and protein were determined by densitometry of Northern and Western blots as previously described [13,18]. In vitro RNase H assays Total RNA isolated from KYO1 cells was diluted to a concentration of 1 µg/18 µl reaction volume in complete First Strand Buffer mix (GibcoBRL) containing 0.4 nmol oligodeoxynucleotide and 0.5 U E. coli RNase H (Boehringer Mannheim UK Ltd, Lewes, East Sussex, UK). Reaction mixtures were incubated at 37°C for 30 min. The reaction was terminated by heating at 70°C for 10 min, and first strand cDNA synthesis was carried out using 200 U Superscript II (GibcoBRL) and 60 pmol random 9-mer oligodeoxynucleotide primers according to the manufacturer's instructions. Samples (1 µl) of the reverse transcription reactions were subjected to PCR amplification using an oligodeoxynucleotide corresponding to bases 368-397 of c-myc mRNA (GenBank Accession Number V00568) as the upstream primer (CGGGCACTTTGCACTGGAACTTACAACACC), and the complement of bases 1221-1243 plus a 5' 23 base T7 promoter sequence as the downstream primer (GCGTAATACGACTCACTATAGGGACTCCGTCGAGGAGAGCAGAGAA), to yield an 899 bp product from uncut message. Products were separated by 1.5% agarose / ethidium bromide (1 µg/ml) gel electrophoresis.
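As a quick arithmetic check on the product size quoted above, the short Python sketch below (an illustration added here, not part of the original protocol) reproduces the 899 bp figure from the stated primer coordinates and the 23-base T7 promoter tail.

```python
# Sanity check of the RT-PCR amplicon size described in the assay:
# the amplified region spans c-myc mRNA bases 368-1243 (inclusive), and the
# downstream primer adds a 23-base T7 promoter tail that is not complementary
# to the message but is present in the final PCR product.
upstream_start = 368    # first base covered by the upstream primer (bases 368-397)
downstream_end = 1243   # last base covered by the downstream primer (bases 1221-1243)
t7_tail = 23            # 5' T7 promoter bases carried by the downstream primer

amplicon_bp = (downstream_end - upstream_start + 1) + t7_tail
print(amplicon_bp)      # 899, matching the 899 bp product expected from uncut message
```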
2017-08-03T01:33:43.878Z
2001-11-16T00:00:00.000
{ "year": 2001, "sha1": "44a6d9e80719bf4d3e407037c2259a46e8bdc445", "oa_license": null, "oa_url": "https://bmcmolbiol.biomedcentral.com/track/pdf/10.1186/1471-2199-2-13", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "734a844f4ddc517bd0e19dafff42e508953e92a7", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
221861879
pes2o/s2orc
v3-fos-license
A new proximal femoral nail antirotation design: Is it effective in preventing varus collapse and cut-out? Objectives This study aims to compare the mechanical features of the existing proximal femoral nail antirotation (PFNA) system and the new PFNA system that we designed using three-dimensional (3D) finite element analysis. Materials and methods This experimental study was conducted between 2019 and 2020. We constructed two femur models with Arbeitsgemeinschaft für Osteosynthesefragen (AO) type A1 fractures using 3D computed tomography scans. The new and standard PFNA designs were inserted into the femur models and subsequently transferred to the program. We investigated the distribution of stress on the tip of the lag screw, the calcar region, the lag screw-nail junction, and the additional screw inserted through the greater trochanter (only present in the new PFNA design) using 3D finite element analysis. Results When the von Mises stress distributions in our models were examined, the maximum stress at the lag screw-nail junction was 18 MPa in the new design PFNA, while it was 20 MPa in the classic PFNA model. The maximum stress at the junction of the nail and the additional screw entering through the greater trochanter was found to be 42.5 MPa. The maximum stress on the calcar region was found to be 10 MPa in the new design PFNA, while it was 13 MPa in the classic PFNA, a 30% increase. The stress on the tip of the lag screw was found to be 49 MPa in the classic PFNA design, while in the new design PFNA it was 28 MPa, a decrease of more than 40%. Conclusion As per our findings, the new PFNA design leads to reduced stress on the lag screw-nail junction, the calcar region, and the tip of the lag screw. Intertrochanteric femoral fractures (ITFFs) are among the most common fractures in the elderly osteoporotic population. [1] The prevalence of ITFFs has increased with the recent increase in life expectancy. [2] Of these fractures, 35-40% are unstable. [3,4] Hip fractures are associated with increased mortality, particularly in elderly patients. They are treated surgically except in rare cases where surgery carries a very high mortality risk. [5] Mortality and morbidity rates are significantly higher in elderly patients that require ITFF revision surgery. Therefore, precise reduction, a properly executed fixation, and early mobilization are vital. [6] Intertrochanteric femoral fractures can be treated with an intramedullary or extramedullary treatment approach. Available studies report favorable results for intramedullary nailing in the treatment of intertrochanteric fractures. [7] The complications that can be seen in patients that have undergone intramedullary nailing have motivated successive modifications of implant design; the
new generation of Gamma nail appeared to be stronger and to reduce the risk of lag screw cutting out. [11] Despite many studies, there is currently no consensus on the ideal implant design that will provide optimum stability. [12] In this study, we aimed to compare the mechanical features of the existing proximal femoral nail antirotation (PFNA) system and the new PFNA system that we designed using three-dimensional (3D) finite element analysis. MATERIALS AND METHODS This experimental study was conducted at Yozgat Bozok University Faculty of Medicine, between 2019 and 2020. We developed a femur model using data obtained from 3D computed tomography scans. We created an Arbeitsgemeinschaft für Osteosynthesefragen (AO) type A1 fracture extending from the greater to the lesser trochanter on two femur models. [13] We used a PFNA implant (implant length, 200 mm; implant diameter, 11 mm; lag screw length, 85 mm). In the new PFNA design, we included an additional screw that was inserted through the greater trochanter making a 45-degree angle with the nail and connected to the nail with a thread system. The additional screw had a diameter of 5 mm and a blunt end, and passed through the nail and lag screw and rested on the inner surface of the calcar (Figure 1a). The lag screw was designed with an oblique 6-mm slot for the passage of the additional screw so as to allow easy application without causing mechanical weakness (Figure 1b). The new and standard PFNA models were transferred to the ANSYS Workbench program (Ansys Inc., Canonsburg, Pennsylvania, USA). The two models were compared under mechanical loading using the 3D finite element method. We evaluated maximum stress levels at the tip of the lag screws, at the lag screw-nail junction, in the additional screw that was inserted through the greater trochanter, and in the calcar region. A linear elastic and isotropic material model was assumed for the bone and the metal alloys. Material properties of E=16.8 GPa, ν=0.3 for cortical bone and E=110 GPa, ν=0.33 for the titanium parts, such as the PFNA and the lag screw, were used in the simulations. [14] The bone-to-bone contact interface, i.e. the fracture interface, was modeled as a completely broken, frictional sliding contact with a friction coefficient of 0.2. [14] The contact interface between the PFNA and the lag screw was assumed to be bonded without sliding in order to better simulate the real condition. Likewise, the lag screw was connected to the femoral head as bonded without sliding. The other bone-to-titanium and titanium-to-titanium contact interfaces were taken as frictional or bonded according to the real conditions. The friction coefficient for bone-to-titanium interfaces was taken as 0.46. [15] Each model was fixed at the distal end of the femur model. Forces were applied as constraints and static solutions were obtained. 
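For readers reproducing the model in another solver, the isotropic constants quoted above fully determine the remaining elastic moduli. The following minimal Python sketch (illustrative only; the derived quantities are not reported in the published workflow) computes the shear and bulk moduli implied by the stated E and ν values.

```python
# Derived isotropic elastic constants for the materials used in the model.
# For a linear elastic, isotropic material: G = E / (2*(1 + nu)), K = E / (3*(1 - 2*nu)).
materials = {
    "cortical bone": {"E_GPa": 16.8, "nu": 0.30},
    "titanium":      {"E_GPa": 110.0, "nu": 0.33},
}

for name, props in materials.items():
    E, nu = props["E_GPa"], props["nu"]
    shear_modulus = E / (2.0 * (1.0 + nu))       # G, in GPa
    bulk_modulus = E / (3.0 * (1.0 - 2.0 * nu))  # K, in GPa
    print(f"{name}: G = {shear_modulus:.1f} GPa, K = {bulk_modulus:.1f} GPa")
```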
The femur models were subjected to forces from three directions that are commonly used in the literature. [14] The configurations of the simulated forces were as follows: 2460 newtons from the acetabular fossa to the femoral head (23 degrees in the frontal plane, 68 degrees in the sagittal plane), 1700 newtons from the abductor muscles to the greater trochanter (24 degrees in the frontal plane, 15 degrees in the sagittal plane), and 771 newtons from the iliopsoas muscle to the lesser trochanter (41 degrees in the frontal plane, 26 degrees in the sagittal plane) (Figure 2). [14] RESULTS When the von Mises stress distributions of the two designs we modeled were examined at three different points (lag screw tip, lag screw-nail junction and calcar region), the maximum stress at the lag screw-nail junction was 18 MPa in the new design PFNA and 20 MPa in the standard PFNA model. The maximum stress at the junction of the nail and the additional screw entering through the greater trochanter, present only in the new design, was found to be 42.5 MPa. The maximum stress on the calcar region was found to be 10 MPa in the new design PFNA, while it was 13 MPa in the standard PFNA, a 30% increase (Figure 3a, b). The effectiveness of the additional screw is evident from this stress value (Figure 3c). The stress on the tip of the lag screw was found to be 49 MPa in the standard PFNA design, while in the new design PFNA it was 28 MPa, a decrease of more than 40% (Figures 4 and 5). DISCUSSION Intramedullary nails and plate-screw systems are commonly used in the treatment of ITFFs. [16] Intramedullary fixation is favorable due to the short operation time, minimal surgical bleeding, better stability, and the possibility of early postoperative loading. [17] We have demonstrated that modifying the nail used for proximal femoral antirotation - an intramedullary fixation method - leads to increased fracture stabilization and implant resistance against varus forces. Therefore, we believe complications such as implant failure, varus collapse, and cut-out will decrease. The risk of cut-out and varus collapse in intertrochanteric femur fractures has been reported in recent studies to be possibly affected by many factors such as fracture type, fracture reduction, placement of the lag screw, osteoporosis, cervical angle difference, fracture instability, and varus reduction. [17][18][19][20] Precise reduction reduces the risk of implant failure in intertrochanteric fractures. [21] In their finite element analysis, Furui et al. [22] have shown that varying degrees of varus and rotational deforming forces lead to significantly increased stress on the calcar region. In our study, the stress at the junction of the nail and the additional screw inserted through the greater trochanter was 42.5 MPa. We believe that this stress arises because the additional screw inserted through the greater trochanter sustains the femoral head against varus malalignment. As a result, the stress on the calcar region decreased from 13 MPa to 10 MPa compared with the classic nail design. This decrease will bring along significant decreases in varus collapse, implant failure, and cut-out rates. We believe that the decreased complication rates will translate into decreased revision surgery rates. This reduced stress on the calcar region by the new PFNA design is even more significant in cases of varus fixation where complete anatomical reduction cannot be achieved. 
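To make the relative changes quoted in the Results easy to trace back to the reported stress values, here is a minimal arithmetic sketch (illustrative only, using the maximum von Mises stresses stated above).

```python
# Relative stress changes implied by the reported maximum von Mises stresses (MPa).
calcar_new, calcar_standard = 10.0, 13.0   # calcar region: new vs. standard PFNA
tip_new, tip_standard = 28.0, 49.0         # lag screw tip: new vs. standard PFNA

calcar_increase_pct = (calcar_standard - calcar_new) / calcar_new * 100
tip_decrease_pct = (tip_standard - tip_new) / tip_standard * 100

print(f"calcar stress, standard vs. new design: +{calcar_increase_pct:.0f}%")   # +30%
print(f"lag screw tip stress, new vs. standard: -{tip_decrease_pct:.0f}%")      # -43%, i.e. more than 40%
```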
Continuous microfracture of the bone in contact with the lag screw is one of the important causes of the lag screw changing position in the femoral head. In their study, Liang et al. [14] state that the varus tendency of the femoral head causes microfractures in the bone in contact with the tip of the lag screw. They also indicate that this varus tendency and the increased microfractures lead the lag screw to pierce the femoral head and subsequently cause cut-out. The high stress on the tip of the lag screw increases the risk of microfractures and cut-out. [14] In our study, we found that the maximum stress on the tip of the lag screw was 28 MPa in the new PFNA design compared to 49 MPa in the classical design. This shows that the new PFNA design reduces the stress on the tip of the lag screw by more than 40%. Also, the additional screw will rest against the cortex from the inside without penetrating the calcar and will thus prevent varus malalignment. Herein, it can be said that our new PFNA design is effective both in preventing varus tendency and in reducing the stress on the tip of the lag screw and the subsequent microfractures, thus significantly decreasing the possibility of varus collapse and cut-out. The new PFNA is not significantly different in production and cost when compared to the classical PFNA. Although additional screws can be adapted to many existing PFNA designs, the use of this additional screw may be optional depending on the surgeon's preference. The new PFNA design is mechanically superior and practical to apply while not increasing treatment costs significantly. It increases stability and reduces complications. The limitation of our study is that although the new PFNA design had favorable outcomes in the computer simulation, we could not obtain clinical results. Therefore, this design must first be subjected to biomechanical tests. The subsequent outcomes can shed light on the clinical feasibility of this model. In our study, we also found that the stress in the calcar region had decreased. Further clinical studies are needed to determine whether this reduction will cause non-union of the fracture. Another limitation of the study is that we did not investigate whether the passage of the additional screw, which is inserted through the greater trochanter, through the lag screw causes any significant mechanical weakness. Further studies are needed to investigate the biomechanical properties of the lag screw and the additional screw. In conclusion, our new PFNA design is superior to the classical PFNA in its mechanical properties. However, there is a need for extensive clinical and biomechanical studies on this new design. Declaration of conflicting interests The authors declared no conflicts of interest with respect to the authorship and/or publication of this article.
2020-07-02T10:11:18.014Z
2020-06-24T00:00:00.000
{ "year": 2020, "sha1": "280e342bd886091e72203f033ff37cbc38e20dc1", "oa_license": "CCBYNC", "oa_url": "https://www.jointdrs.org/full-text-pdf/1154", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3719e2b61090b9be3ef509140cc8138b1215986b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
90211566
pes2o/s2orc
v3-fos-license
Evolution of the D. melanogaster chromatin landscape and its associated proteins In the nucleus of eukaryotic cells, genomic DNA associates with numerous protein complexes and RNAs, forming the chromatin landscape. Through a genome-wide study of chromatin-associated proteins in Drosophila cells, five major chromatin types were identified as a refinement of the traditional binary division into hetero- and euchromatin. These five types are defined by distinct but overlapping combinations of proteins and differ in biological and biochemical properties, including transcriptional activity, replication timing and histone modifications. In this work, we assess the evolutionary relationships of chromatin-associated proteins and present an integrated view of the evolution and conservation of the fruit fly D. melanogaster chromatin landscape. We combine homology prediction across a wide range of species with gene age inference methods to determine the origin of each chromatin-associated protein. This provides insight into the emergence of the different chromatin types. Our results indicate that the two euchromatic types, YELLOW and RED, were one single activating type that split early in eukaryotic history. Next, we provide evidence that GREEN-associated proteins are involved in a centromere drive and expanded in a lineage-specific way in D. melanogaster. Our results on BLUE chromatin support the hypothesis that the emergence of Polycomb Group proteins is linked to eukaryotic multicellularity. In light of these results, we discuss how the regulatory complexification of chromatin links to the origins of eukaryotic multicellularity. Introduction The chromatin landscape consists of DNA, histones and other associated proteins and RNAs, and plays a fundamental role in development, cellular memory, and integration of external signals. As a unique feature of the eukaryotic cell, it is closely tied to the evolution of eukaryotes, both regarding their origin and the major transition(s) to multicellularity (Newman 2005;Aravind et al. 2014;Gombar et al. 2014;Penny et al. 2014;Miyamoto et al. 2015;Sebé-Pedrós et al. 2017). At a basic level, chromatin is responsible for maintenance, organization, and correct use of the genome. Histone proteins package and condense DNA in the nucleus, and act as a docking platform for hundreds of structural and regulatory proteins. A variety of reversible post-translational modifications of histones, known as epigenetic marks, promote the recruitment of specific proteins. This creates a local context for nuclear processes such as transcriptional activity, replication, as well as DNA-repair. These and other epigenetic mechanisms involved in chromatin modification have been extensively characterized in variety of eukaryotic species, which led to the observation that the chromatin landscape is effectively subdivided into a small set of distinct chromatin states (Filion et al. 2010;Ernst et al. 2011;Roudier et al. 2011). A largely open question, however, is how these chromatin states have (co-)evolved. In this work, we assess the evolutionary relationships of chromatin-associated proteins (CAPs) and present an integrated view of the evolution and conservation of the fruit fly D. melanogaster chromatin landscape. Classically, chromatin is divided into two states, namely heterochromatin and euchromatin, the former a compacted DNA state in which transcription is mostly repressed and the latter an open, transcriptionally active configuration. 
This classification has been refined into multiple types of chromatin. In particular, a breakthrough result was presented by Filion et al., who established five major chromatin types in D. melanogaster, named with the colors YELLOW, RED, GREEN, BLUE, and BLACK. To do so, they used genome-wide binding profiles of CAPs obtained via DamID (Vogel et al. 2007;Filion et al. 2010;van Bemmel et al. 2013). This approach is complementary to more commonly used genome-wide histone mark profiling techniques, such as ChIP-seq. Nevertheless, both are consistent with each other and serve as independent validation. Indeed, the five types can be mapped to an alternative classification into nine chromatin states that is derived from histone modifications (Kharchenko et al. 2011). The five chromatin types have different biological and biochemical properties. YELLOW and RED are two types of euchromatin. YELLOW mainly marks ubiquitously expressed housekeeping genes. In contrast, the genes harbored in RED show more restricted expression patterns and are linked to specific tissues and developmental processes. Both euchromatin types are replicated in early S phase, and of the two, RED tends to be replicated first (Filion et al. 2010). GREEN, BLUE, and BLACK are three types of heterochromatin. GREEN is considered a type of constitutive heterochromatin. It is identified by HP1-related proteins and is especially prevalent in pericentric regions as well as on chromosome 4. BLUE is facultative heterochromatin and concerns mostly genes specifically repressed during development. It is notably composed of the Polycomb Group (PcG) proteins, which were originally discovered in D. melanogaster to repress Hox genes, and were later found to have a general role in development (Lewis 1978;Duncan 1982;Boyer et al. 2006;Lee et al. 2006;Nègre et al. 2006). Finally, BLACK is a major repressive type, covering 65% of silent genes, whose underlying repressive molecular mechanisms remain poorly characterized (Filion et al. 2010). From an evolutionary point of view, although prokaryotes have specialized proteins associated with their DNA, they do not share homology with eukaryotic CAPs (Luijsterburg et al. 2008). In general, evolution of chromatin and diversification of epigenetic mechanisms are suggested to be tightly linked with eukaryotic evolution, from its origin to the transition to multicellularity (Newman 2005;Aravind et al. 2014;Gombar et al. 2014;Penny et al. 2014;Miyamoto et al. 2015;Sebé-Pedrós et al. 2017). Indeed, the Last Eukaryotic Common Ancestor (LECA) is considered to possess the key components of eukaryotic epigenetics, including most histone modification enzymes and some histone mark readers (Aravind et al. 2014). In addition, a current hypothesis on the transition to multicellularity is that complexification of the regulatory genome, via the emergence of repressive chromatin contexts and distal regulatory elements, made it possible to generate the cell-type-specific transcriptional programs required for multicellularity (Larroux et al. 2006;Mendoza et al. 2013;Sebé-Pedrós et al. 2016;Arenas-Mena 2017;Hinman & Cary 2017). Recently, a system-level view of the evolution of the chromatin modification machinery was provided by a comparative study across four model organisms (human, yeast, fruit fly, and worm), which demonstrated the high conservation of a core of chromatin proteins, accompanied by diverse species-specific innovations. Here, we investigate the evolutionary relationships of the CAPs studied by (Filion et al. 
2010;van Bemmel et al. 2013), using homology prediction, gene age inference methods, functional annotations, and protein domain annotations. Taken together, the work provides an integrated view of the conservation of a chromatin landscape across eukaryotes. Our phylogenomic analysis leads us to propose that the chromatin types YELLOW and RED derive from a single ancestral euchromatin-like type. With respect to GREEN chromatin, we provide evidence that some of its associated proteins are undergoing an evolutionary Red Queen process called centromere drive, while others expanded in a lineage specific manner in D. melanogaster. Finally, our results support the association between the emergence of BLUE chromatin with its Polycomb proteins, and animal and plant multicellularity. Data set Our data set contains all CAPs whose chromatin types have been assigned by (Filion et al. 2010;van Bemmel et al. 2013). As a convention throughout the work, a CAP is assigned the color of the chromatin type(s) it binds over more than 10% (fraction of 0.1). The set contains 107 D. melanogaster proteins, which include 65 well-characterized CAPs selected to cover a wide range of known chromatin complexes plus 42 previously unknown proteins putatively linked with chromatin. All have also been selected on expressibility in Kc167 cell-lines (derived from D. melanogaster embryonic hemocytes). This set was used to search for homologs in 53 species, covering 15 prokaryotes, 15 non-metazoan eukaryotes, and 23 metazoa (Supplementary Table 1, Supplementary Figure 1). The selection of species was guided by the quality of their PhylomeDB entry. In a first round, we extracted all D. melanogaster homology predictions for the 107 CAPs in the other species of interest. We used the common assumption that protein function tends to be conserved in homologs across species, between orthologs and less systematically between paralogs (Koonin & Galperin 2003). We retained only homology hits (i.e orthology and/or paralogy) that had sufficient sequence similarity with the corresponding D. melanogaster protein. In all cases, a sequence similarity criterion of 25% and a maximum gap proportion of 60% (i.e. minimum 40% overlap) were applied after Needleman-Wunsch global pairwise alignment with the D. melanogaster protein. The maximum gap proportion avoids hits that share very conserved domains in otherwise unconserved sequences. The similarity threshold for homology was chosen to be consistent with knowledge for wellstudied proteins, including Polycomb, HP1, SU(VAR)3-9, Sir2, RNA pol, TBP, CTCF, PCNA, SU(HW), BEAF-32 (Klenk et al. 1992;Lanzendörfer et al. 1993;Marsh et al. 1994;Rowlands et al. 1994;Krauss et al. 2006;Lomberk et al. 2006;Whitcomb et al. 2007;Greiss & Gartner 2009;Chia et al. 2010;Schoborg & Labrador 2010;Heger et al. 2013). The homology prediction of MetaPhOrs is based on searching over half a million precomputed gene trees. These trees usually focus on subsets of species, for instance, a tree can be restricted to vertebrates only. This may generate false negatives in our first round of homology search, since some species are less likely to appear in trees with D. melanogaster. Therefore, a second round of homology search was conducted to cover also the less-studied species as follows. For each protein of a particular organism lacking a hit in the first round, the predicted homologs of the two closest species to that particular organism were used to seed a second search for an ortholog in this organism. 
For instance, during the first round a homolog of the D. melanogaster protein HP6 (HP6_Dme) was found in D. simulans as HP6_Dsi, but not in the ant A. cephalotes. In the second round, the homology search in A. cephalotes was seeded with HP6_Dsi. Then finding an ortholog in A. cephalotes points to a candidate homolog of D. melanogaster HP6_Dme. We encountered 190 cases of a successful second round of homology search. Despite the two rounds of homology search, strictly speaking we cannot prove the absence of homologs observed in certain species, as we cannot rule out that it is related to biological and/or technical challenges, such as rapid sequence divergence, limited sequencing depth and/or genome coverage, or the sensitivity of the homology search. Different amino acid substitution matrices were used to account for different evolutionary distances: Blossum45 to compare with prokaryotes, Blossum62 with eukaryotes, and Blossum80 with metazoa. Finally, we note that instead of D. melanogaster Su(var)3-9, the well-characterized human homolog SUV39H2 was used as a seed for homolog search, since this gene and the eukaryotic translation initiation factors eiF2 are fused in D. melanogaster (Krauss et al. 2006) and attract false positive hits. Gene age inference The binary vectors of homolog absence/presence of the 107 CAPs for each species were clustered using partitioning around medoids (PAM) (Kaufman & Rousseeuw 1990), with simple matching distance (SMD) as dissimilarity measure, and followed by silhouette optimization. The resulting clustering and age groups are robust, as confirmed by re-runs of PAM and by using the Jaccard distance measure. Similar to (Arcas et al. 2014), we verify our clustering by independently applying the Dollo parsimony method, which associates gene age to the most recent common ancestor (MRCA). We relate each gene to the age of the most distant hit, defining 5 age groups: Pre-Eukaryotes, Eukaryotes, Opisthokonta, Metazoa, and Arthropods. For instance, since the most distant homolog of Deformed Wings (DWG) is in the spreading earthmoss P. patens, we assign it to Eukaryotes. We confirm that the trends in Figure 3 and Figure ProteinHistorian regroups databases of D. melanogaster proteomes with protein age assigned by different methods. We calculated enrichment using five different sets of protein family prediction of the Princeton Protein Orthology Database (Heinicke et al. 2007) (DROME_PPODv4 clustered with OrthoMCL, Multiparanoid, Lens, Jaccard and Panther7) and two different methods (Wagner and Dollo parsimony) to account for the expected differences according to the different phylogenies and data sets (Supplementary Table 3). Reader/Writer/Eraser of histone marks analysis From the literature known D. melanogaster histone modifiers and histone marks readers were extracted in addition to the ones present in the initial set (Bannister et al. 2001;Cao et al. 2002;Schotta et al. 2002;Byrd & Shearn 2003;Smith et al. 2004;Stabell et al. 2006;Steward et al. 2006;Wysocka et al. 2006;Eissenberg et al. 2007;Larschan et al. 2007;Rudolph et al. 2007;Seum et al. 2007;Srinivasan et al. 2008;Smith et al. 2008;Moore et al. 2010;Rechtsteiner et al. 2010;Wagner & Carpenter 2012). Homologs of these proteins among our species set were searched applying the same method as described in the above section 'Homology Prediction'. Coding sequences extraction for dN/dS calculation and positive selection tests For all 107 D. 
melanogaster CAPs, MetaPhOrs was used to retrieve orthologs within ten other Drosophila species and their corresponding coding sequences (CDS). To avoid different isoforms and different within-species paralogs, only the protein with the highest alignment score to its corresponding D. melanogaster protein was retained for each species. Next, with these Drosophila species we inferred phylogenetic tree topologies, we estimated dN/dS, and we performed positive selection tests. We elaborate on each of these steps below. Sequence alignment and tree topology inference for dN/dS calculation and positive selection tests To prepare the homology sets for dN/dS calculation and positive selection tests with PAML (Yang 2007), CDSs of each set were multiple-aligned and a tree topology inferred. First, CDSs were translated and multiple aligned with Clustal Omega 2.1 (Chenna et al. 2003). Translation, alignment, cleaning and translation reversion were done with the TranslatorX local version (Abascal et al. 2010) (available at http://translatorx.co.uk/), with the following parameters for Gblocks cleaning: '-b1=6 -b2=6 -b3=12 -b4=6 -b5=H' (Castresana 2000). In short, the Gblocks parameters b1 to b4 tune which amino acid (sub)sequences are considered conserved and/or non-conserved. They were chosen to relax cleaning on variable regions and retain diversity. The parameter -b5=H permits cleaning of sites with gaps in more than half of the sequences, following the recommendation from the PAML documentation to remove such sites. We refer to the Gblocks documentation for details. To account for possible differences between gene trees and the species tree, positive selection tests were run on maximum likelihood trees computed from CDS alignments with phyml (Guindon et al. 2010) and also on Drosophila species trees extracted from TimeTree (Kumar et al. 2017) (http://www.timetree.org/). Phyml was run with default parameters to return the topology maximizing the likelihood function. dN/dS estimation From multiple CDS alignments and inferred tree topology (see previous section), PAML fits codon substitution models and estimates both branch lengths and dN/dS by maximum likelihood. For each of these alignments, a single dN/dS was estimated using Model 0 of codeml included in PAML (Yang 2007). We verified that dN/dS values are similar with the two tree topology inference methods (Supplementary Figure 3). Positive selection tests In order to detect positive selection among amino acid sites and along branches of the Drosophila tree, tests were carried out on gene and species trees with codeml from PAML using branch-site codon substitution models (Yang 2007). Since PAML fits models by maximum likelihood, it allows constraints to be placed on the dN/dS parameter and models to be compared via their likelihood. Following the approach of "Test 2" (see PAML documentation), we predicted positive selection by comparing Model A to the Null Model. In these models, different constraints can be put on a candidate branch, the so-called foreground branch, and all other branches in the tree, i.e. background branches. Model A allows dN/dS to vary among sites and lineages on the specified foreground branch, thus allowing for positive selection. The Null Model fixes dN/dS to 1 on both foreground and background branches, thus allowing only for neutral selection. This process was automated for all branches in the trees. Finally, for every (Model A, Model Null) pair, likelihood ratio tests (LRT) with Bonferroni correction for multiple testing were applied. 
The Null model was rejected where the adjusted p-value was < 0.01. Finally, Bayes empirical Bayes (BEB) calculates the posterior probabilities for sites to be under positive selection when the LRT is significant. Protein domain annotation To search for over-represented domains among the proteins in each of the inferred age clusters, domain annotations for the 107 D. melanogaster CAPs were extracted from the InterPro database v63 (Finn et al. 2017). DNA-binding domains and their location in D1 proteins from 10 Drosophila species were inferred from protein sequence by searching Pfam or Prosite domains using InterProScan v5 (Jones et al. 2014). Gene Ontology Annotation PANTHER is a multifaceted database, classifying proteins via their evolutionary history and function. Functional annotations are provided both by downloading them directly from the GO Consortium and by inferring them from the phylogeny. We conducted a functional classification analysis per cluster of CAPs (see 'Gene age inference' section) with PANTHER. We combined clusters I and II (Figure 1) into a single pre-eukaryotic cluster. From this analysis, we extracted two types of GO terms. We used GO slim terms, which are high-level GO terms that serve as an overview of ontology content. Moreover, we used specific fine-grained terms by taking the deepest children of a corresponding GO slim category. The Drosophila chromatin landscape is biased towards eukaryotic age Taking the dataset from (Filion et al. 2010;van Bemmel et al. 2013), we made several major observations on the inferred clusters. We find two dominant clusters, one referring to eukaryotes in general (III) and one specific to metazoans (V), and a third large cluster indicating lineage-specific diversification (VI). Next, we observe a regular lack of CAPs across evolutionary times, in particular in fungal and parasitic species (for instance S. pombe and S. japonicus, respectively Spo and Sja in Figure 1). For fungal species the lack of CAPs may be due to lineage-specific divergence, such that we do not detect any homologs, though we cannot rule out lineage-specific loss. With respect to parasitic species, loss of CAPs is more likely. In order to understand what biological functions are found in each of the clusters, we used PANTHER GO Slim annotations from the domains 'Biological Process' and 'Molecular Function', as well as corresponding specific terms that are at a lower level in the GO hierarchy (Mi et al. 2017). The oldest age groups (I, II, III) contain a more diverse set of functional annotation terms than the youngest groups (V, VI) (Figure 2A and B). Analysing the occurrence of different annotations and terms, we find that the pre-eukaryotic clusters (I, II) contain CAPs with roles in basic nuclear processes: translation, transcription, replication, and splicing. The eukaryotic cluster (III) is the richest in annotation terms, containing proteins involved in transcription regulation, mitosis, cellular transport, post-translational modifications, and cell-cycle regulation. The three youngest clusters (IV, V, VI) are dominated by transcription factors and co-factors, some of which are annotated with chromatin remodeling activity. These annotations suggest that most chromatin-related processes are ancient and were present in the last common ancestor of eukaryotes. We strengthened this hypothesis by independent age enrichment tests against the D. melanogaster proteome, with age assigned to each protein by means of Dollo and Wagner parsimony (Csurös 2010). 
Indeed, we find that CAPs are significantly enriched in genes that date back to the origin of eukaryotes (Supplementary Table 2). Moreover, our analysis suggests that evolution towards more complex eukaryotic organisms was accompanied by the acquisition of new regulatory interactions. This is consistent with the paradigm that the evolution of increasingly complex transcriptional regulation is one of the key features in (animal) multicellularity, enabling the establishment of precise spatio-temporal patterns of gene expression and regulation (Larroux et al. 2006;Mendoza et al. 2013;Sebé-Pedrós et al. 2016Arenas-Mena 2017;Hinman & Cary 2017). In summary, chromatin-associated proteins appear to have been established early in eukaryotic evolution, after which they continuously diversified and specialized. In the next sections, we assess the conservation of the D. melanogaster chromatin landscape in eukaryotes and we highlight three major dynamics in chromatin evolution. YELLOW and RED emerged from an ancient single euchromatin type Of the five chromatin types, YELLOW and RED are the two euchromatic types, associated to transcriptionally active regions in the genome. The key biological differences between them are gene expression patterns, broad in YELLOW and specific in RED, and replication timing, which is early in YELLOW and very early in RED (Filion et al. 2010). We hypothesized that YELLOW and RED are derived from one ancestral active chromatin type (Figure 1). To shed light on the idea, we examined the phylogenetic profile of chromatin-associated proteins. The distribution of CAPs across clusters I-VI supports the idea of a single ancestral euchromatic type in two ways. First of all, proteins binding either YELLOW or RED are most abundant amongst pre-eukarotic and eukaryotic ones ( Figure 3A, cluster I-III). This suggests a rather conserved (i.e. ancient) composition of both euchromatin types. Second, CAPs in the older clusters I-III more often associate with both YELLOW and RED, while younger CAPS appear to be more specialized ( Figure 3B, "preeuk" and "euk"). To strengthen the above observations, we explored complementary lines of evidence. First, we investigated the origin of the histone marks specific to YELLOW and RED (H3K4me3) and specific to YELLOW (H3K36me3). The starting point was evidence that the last eukaryotic common ancestor (LECA) had a lysine (K) at the amino acid positions indicated by H3K4 and H3K36 (Aravind et al. 2014). To understand if these lysines were indeed part of an ancient "epigenetic code", we summarized the rich literature of histone modifiers in a phylogenetic profile across the 53 species, similar to the profile that we made for CAPs (Figure 4, see Methods for used literature). We focused on three classes of proteins: writers that do the histone modification (i.e. methylation, acetylation, etc.), readers that interpret the mark, and erasers that remove the mark. We identified the first writer for both H3 lysine marks in one basal eukaryote (Phaeodactylum tricornutum) and three Viridiplantae (Physcomitrella patens, Oryza sativa and Arabidopsis thaliana). And we found one H3K4me3 reader and one H3K36me3 eraser in four basal eukaryotes (Guillardia theta, Emiliania huxleyi, Bigelowiella natans, and Phaeodactylum tricornutum). Moreover, genome-wide histone modification studies in yeasts, plants, as well as Capsaspora owczarzaki, which is a close unicellular relative of metazoa, reveal abundant use of both H3K4me3 and H3K36me3 (Bernstein et al. 2002;Suzuki et al. 
2016;Roudier et al. 2011;Sebé-Pedrós et al. 2016). Finally, basal unicellular eukaryotes such as Tetrahymena, Euglena, Stylonychia, and Trichomonas make use of H3K4me3, but not H3K36me3 (Garcia et al. 2007;Postberg et al. 2010), suggesting H3K4me3 to be older than H3K36me3. In summary, H3K4 and H3K36 methylation appear indeed ancient, functional epigenetic marks, which supports our hypothesis of an early euchromatin split. Second, a substantial decrease in proteins that associate to both YELLOW and RED takes place from eukaryotes to metazoans ( Figure 3B). The decrease coincides with the major evolutionary transition to (animal) multicellular life. One hypothesis on the origin of this transition is that a unicellular ancestor with a complex life cycle transitioned from temporally regulated differentiation to a spatiotemporal one (Sebé-Pedrós et al. 2017). The complex life cycle of such a unicellular organism is based on two main features controlled by environmental stimuli, namely cell-cycle control and directional cell type transitions. In support of this hypothesis, we find that proteins involved in replication and cell-cycle control are in the eukaryotic cluster III (CAF-1, PCAF, ASF1, RAD21, and TRIP1) and that they are amongst the oldest RED-associated proteins. At first sight, four proteins in the arthropod cluster (MNT, PROD, SUUR and SSP) invalidate this "rule". However, these may be considered exceptions, as they are linked to a specialized process of proliferation control through endoreplication, a replication without cell division in D. melanogaster salivary glands. All in all, it suggests the first RED proteins to be involved in cell-cycle control. Summarizing, we have shown several lines of evidence for the hypothesis that YELLOW and RED were once a single euchromatin type. If we take the hypothesis to hold, it allows for three different scenarios: RED could derive from an ancestral type functionally closest to current YELLOW, YELLOW could derive from an ancestral type functionally closest to current RED, or both types could derive from a distinct ancestral type. As RED is more complex and more specialized (Filion et al. 2010), we favour the scenario that it derived from an ancestral general euchromatin type, that was similar to Drosophila's current YELLOW. On the basis of Figure 3B, we suggest that the split was initiated before the acquisition of multicellularity. Indeed, the overlap between RED and YELLOW has its most substantial decrease between the eukaryotic cluster (III) and the multicellular cluster (IV). GREEN emerged in metazoa and expanded in a lineage-specific way in Drosophila GREEN chromatin is best characterized as constitutive, classic heterochromatin, and encompasses regions with high content in repetitive DNA and transposable elements (Sun et al. 1997;Filion et al. 2010). It is marked by HP1, a protein family that is involved in chromatin packaging and that binds di-and trimethylated histone H3 (H3K9me2/3) (Bannister et al. 2001). Classic proteins linked with HP1 heterochromatin are conserved (Saksouk et al. 2015) and indeed we find HP1, HP1c, and SU(VAR)3-9 across metazoa (cluster IV). Yet, eleven GREEN proteins, from a total of 25 in the whole dataset, are assigned to the arthropod cluster (the youngest gene cluster VI). Thus, as opposed to YELLOW and RED, the fraction of proteins bound in GREEN increases through evolutionary times ( Figure 5A). 
At first sight, this observation is paradoxical, since GREEN proteins are involved in genome integrity, in particular centromere maintenance, and one would expect to find them conserved across metazoa. We propose this fragmentation to be linked to gene age. In the network, Region 1 contains 3 proteins, RAD21, MRG15, and CC35, that bind both GREEN and YELLOW chromatin. They belong to the oldest group of GREEN proteins. RAD21 and MRG15 are found across eukaryotes (cluster III), while CC35 is predicted to be of metazoan origin (cluster IV). Region 2 consists of proteins of all age clusters, from eukaryotes to arthropods, marking the 3 heterochromatin types (GREEN, BLUE, and BLACK). The region is organized around SUUR, a key player in chromatin silencing on polytene chromosomes (Makunin et al. 2002). Finally, Region 3 contains mostly young GREEN proteins from the arthropod age group, organized around two metazoan proteins, HP1 and SU(VAR)3-9. Matching the three regions to the protein age clusters, we find that Regions 2 and 3 are most strongly involved in the specific expansion of GREEN in Drosophila. Moreover, their peripheral location in the chromatin network compared to Region 1 is consistent with this explanation (Zhang et al. 2015). D1 chromosomal protein evolves under the centromere drive model We asked if the poor conservation of many GREEN proteins may be due to the fact that they are fast evolving, which would lead to rapid divergence of homologs. We estimated dN/dS, the ratio of non-synonymous to synonymous nucleotide substitutions, among different Drosophila species for all CAPs (Figure 5B). Under neutral evolution, non-synonymous and synonymous substitutions occur with the same probabilities and dN/dS ~ 1. Under positive selection, amino acids change rapidly and dN/dS > 1. Under purifying selection, on the other hand, amino acid variation is reduced, resulting in dN/dS < 1. The ratio averaged over all sites and all lineages is, however, almost never > 1, since positive selection is unlikely to affect all sites over long periods of time. Our analysis revealed that GREEN CAPs from the arthropod cluster (GREEN Arthropod Cluster, GAC) show significantly more elevated dN/dS than other CAPs (8 GAC proteins among a total of 16 CAPs with elevated dN/dS, p-value = 7.48 × 10⁻⁵) (Figure 5B). Next, we asked if those 8 GAC candidates (green labeled proteins in Figure 5B) evolve under the centromere drive model. Under this model, chromosomes with more satellite DNA sequences gain an advantage if heterochromatin proteins involved in recruitment of microtubules do not correct the bias by changing binding specificity. If a centromere drive is left unchecked, it breaks meiotic parity and has a deleterious effect on fitness both at the organism level and at the species level. Chromatin proteins repressing the drive must therefore combine a role in binding satellite DNA with a role in recruiting other heterochromatic or centromere proteins. Of the 8 GAC candidates, HP6 and LHR have been proposed to be evolving under this model (Brideau et al. 2006; Ross et al. 2013). We carried out a positive selection test under a branch-site model and found recurrent positive selection for D1. D1 presents the features of heterochromatin proteins evolving through centromere drive: it is capable of binding satellite DNA and is involved in heterochromatin propagation (Levinger & Varshavsky 1982). To the best of our knowledge, it has not been previously reported as a centromere drive protein. We also propose CC29 as a potential candidate.
Although we were not able to detect positive selection using the branch-site model, CC29 has DNA-binding domains, shows elevated dN/dS, and is part of a centromeric complex with HMR and LHR (Thomae et al. 2013). For a better characterization of the positive selection affecting D1, and to corroborate the hypothesis that it is involved in centromere drive, we investigated more closely at which amino acids positive selection took place. We found that the positively selected sites (Figure 6A) lie within or close to AT-HOOK domains. AT-HOOK domains enable D1 to bind DNA: the domain is organized around a so-called GRP core, which is able to insert itself into the minor groove of DNA (Aravind & Landsman 1998). Many positively charged amino acids around this core are then involved in DNA-protein interactions. Drosophila species have nine to eleven copies of the AT-HOOK motif in D1 (Figure 6B). Moreover, their locations in the sequence vary between species (Figure 6B), highlighting domain-level differences in D1 proteins amongst Drosophila, possibly related to DNA-binding specificity. As an example of a positively selected amino acid in an AT-HOOK motif, leucine 83 is replaced by an alanine directly before the GRP core (Figure 6C). We verified that the positively selected sites are equivalent between the two tree topology inference methods, i.e., species tree and gene tree (Supplementary Figure 4, Supplementary Table 5). In summary, D1 shows strong signs of evolving under positive selection in Drosophila, and we propose that it tunes the specificity of its DNA-binding motifs to counterbalance fast-evolving satellite DNA. After establishing that four recent GREEN proteins are involved in the centromere drive model, we studied the evolution of the GREEN proteins that lacked signs of positive selection. Notably, in the Drosophila genus, the HP1 family has been shown to present little evidence of positive selection. Nevertheless, this protein family is large, with about 25 members, of which only four are conserved across a large number of drosophilids, while the others are evolutionarily restricted to particular Drosophila species (Levine et al. 2012). This diversification of the HP1 family is thought to be a lineage-specific expansion driven by karyotype evolution, where events of chromosome rearrangement (fusion/fission) correlate with losses and gains of HP1 proteins (Levine et al. 2012). We explored whether other GREEN-associated proteins showed signs of lineage-specific expansions in Drosophila. By studying protein domains, we found evidence that a subset of young GREEN proteins are part of the family of BESS domain proteins that is expanding in the Drosophila lineage. BESS domains direct protein-protein interactions, including with themselves. Among all known proteins (not just the ones in our data set) with an inferred BESS domain (InterPro database), more than 80% are restricted to insects and more than 50% are restricted to Diptera. A comparison among drosophilids has shown that the BESS domain family expanded through duplications in a lineage-specific way approximately 40 million years ago (Shukla et al. 2014). In our dataset, five of 107 proteins have a BESS domain (SU(VAR)3-7, LHR, BEAF-32, CC20, and CC25). They are all found in the arthropod cluster (VI), and, with the exception of CC20, they are GREEN-associated. Therefore, we propose that these GREEN CAPs evolve rapidly through lineage-specific expansion.
We further suggest that BESS domains are involved in directing protein-protein interactions in GREEN chromatin in Drosophila. BLUE is related to the origin of multicellularity Central in BLUE chromatin are the Polycomb group (PcG) proteins, which are recruited to Polycomb Response Elements (PREs) to silence specific target genes during development, such as Hox genes. PcG proteins form two multiprotein complexes, PRC1 and PRC2. Their catalytic signatures are well characterized: PRC2 trimethylates histone H3K27 into H3K27me3; this modified histone is bound by PRC1, which in turn ubiquitylates histone H2A. Extensive study of the evolution and conservation of PRC1 and PRC2 has suggested that expansion and diversification of PcG proteins contributed to the complexity of multicellular organisms (Trojer & Reinberg 2006; Whitcomb et al. 2007; Köhler & Villar 2008; Gombar et al. 2014). In this study, the PcG proteins are represented by the main components of PRC2, namely E(Z) and PCL, and of PRC1, with SCE and PC, in addition to three PRE-binders, namely PHO, LOLAL, and PHOL. The PRE-binders are found in RED chromatin, though, as they trigger the transition from active, developmentally controlled chromatin to the PcG-repressed state. Of the PcG proteins, the oldest ones, which lay down key heterochromatin histone marks, are found in the multicellular cluster (IV). They are the writers E(Z) and SCE, which, respectively, trimethylate H3K27 and ubiquitinate H2AK118. Another key BLUE protein, PC, which reads the H3K27me3 mark, is metazoan (cluster V). This supports the hypothesis that PRC1, which contains PC, is younger than PRC2. Summarizing, both complexes are conserved across metazoans, suggesting that the repression mediated by the PcG proteins, as described above, was established at the origins of animal multicellularity (Whitcomb et al. 2007). Several BLUE proteins are found in clusters II and III, and thus are older than the PcG proteins. We mention the three most prominent ones: EFF, IAL, and LAM. All three are conserved in all eukaryotes, with functions unrelated to Polycomb-controlled repression. EFF is involved in protein ubiquitination and degradation, and is suggested to have a general role in chromatin organization (Cipressa & Cenci 2013). IAL is mainly involved in mitosis (Adams et al. 2001), and LAM recruits chromatin to the nuclear envelope (Gruenbaum et al. 1988). We argue that these are not BLUE-specialized proteins but rather general heterochromatic proteins recruited by GREEN, BLUE, and BLACK chromatin to form a repressed state. Discussion We have presented an integrated view of the evolution and conservation of a chromatin-associated proteome across eukaryotes. The creation and analysis of a phylogenetic profile of protein presence/absence resulted in three major findings. First, we presented evidence that YELLOW and RED chromatin originate from a single euchromatic type. Second, GREEN-associated proteins were found to be relatively specific to arthropods (or even restricted to dipterans). We connected two processes to this observation, namely a Red Queen type of evolution due to centromere drive, and lineage-specific expansion of proteins with BESS domains. Finally, our analysis of BLUE chromatin confirmed existing hypotheses on the importance of Polycomb repressive proteins for the evolutionary success of multicellular life forms. BLACK chromatin has not been addressed in this work; it is hard to interpret because it is mechanistically poorly understood and overlaps strongly with BLUE chromatin.
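To make the profile-based reasoning above concrete, the following is a minimal sketch of how a presence/absence phylogenetic profile can be assembled and how each protein can be assigned an age from the most distantly related clade in which a homolog is detected. It is written in Python with pandas and SciPy; the species, clade ordering, example values, and clustering settings are hypothetical placeholders and do not reproduce the homology searches or the clusters I-VI used in this study.

```python
# Minimal, hypothetical sketch of a presence/absence phylogenetic profile.
# Species names, clade assignments, and the example data are placeholders;
# the study's actual orthology calls and clustering settings may differ.
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Rows = chromatin-associated proteins, columns = species,
# values = 1 if a homolog passes the similarity cut-off, else 0.
profile = pd.DataFrame(
    {"S.cerevisiae": [0, 1, 0], "A.thaliana": [1, 1, 0],
     "C.elegans":    [1, 1, 0], "D.melanogaster": [1, 1, 1]},
    index=["HP1", "RAD21", "SUUR"],
)

# Clades ordered from most distantly related to Drosophila-specific;
# a protein's "age" is the most distant clade in which a homolog is found.
clade_of_species = {"S.cerevisiae": "eukaryotes", "A.thaliana": "eukaryotes",
                    "C.elegans": "metazoa", "D.melanogaster": "arthropods"}
clade_rank = {"eukaryotes": 0, "metazoa": 1, "arthropods": 2}

def protein_age(row):
    present = [clade_of_species[sp] for sp, hit in row.items() if hit]
    return min(present, key=clade_rank.get) if present else "none"

ages = profile.apply(protein_age, axis=1)

# Group proteins with similar profiles (analogous to age clusters I-VI).
dist = pdist(profile.values, metric="jaccard")
clusters = fcluster(linkage(dist, method="average"), t=0.5, criterion="distance")
print(pd.DataFrame({"age": ages, "cluster": clusters}, index=profile.index))
```

A profile of this kind is what underlies statements such as "older CAPs more often associate with both YELLOW and RED": once each protein carries an age label, the chromatin-type annotations can simply be cross-tabulated against age.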
To place these results in context, we mention some critical points of our study. The evolutionary view of an epigenetic landscape that we have provided here is, of course, restricted in the sense that it is defined explicitly from a D. melanogaster angle. Notably, the Drosophila genome is unusual, as it appears to lack DNA methylation and is known for a distinctive mechanism of telomere maintenance by specialized non-LTR retrotransposons (Pardue & DeBaryshe 1999). Also, the homologs of D. melanogaster CAPs in other species do not necessarily share the same interactions and global assembly to form similar chromatin types. Indeed, in distant species that are separated by more evolutionary time, they are more likely to be functionally different. To counter such false positives, we used a strict similarity cut-off for all protein-protein comparisons. The cut-off indeed helped us to reject questionable functional homology predictions. For instance, it did not accept the A. thaliana HP1 homolog, LHP1, which appears to function both in a "classical" HP1 fashion and as a PcG protein (Zhang et al. 2007). Nevertheless, we cannot exclude that, even if sequences and domains are very similar, the exact role in chromatin organization may be different. Histone modifications, gene regulation, and the origins of multicellularity The evolution of (animal) multicellularity is one of the major transitions in evolution. Within the area of (epi)genomics, it has been hypothesized that complexification of chromatin states, and in particular the emergence of distinct heterochromatin states, lies at the origin of multicellular life (Sebé-Pedrós et al. 2016; Hinman & Cary 2017). For instance, general heterochromatic proteins are already present in unicellular eukaryotes such as S. cerevisiae and T. thermophila, while more specific ones are found in mammals, which indeed have more complex repressive chromatin states (Garcia et al. 2007). Similar observations are made in studies focused on the large repertoire of histone modifiers in mammals and in work on PcG proteins. In summary, these studies propose that an elaboration of chromatin states is based on (unique) combinations of histone modifications. Our phylogenomic profile supports the above idea of regulatory complexification. Indeed, we find that older proteins are more general than recent ones, in the sense that the older proteins tend to be found in multiple types of chromatin. Moreover, both the multicellular and metazoan clusters (IV and V) highlight the complexification of histone modifications throughout eukaryotic evolution. In the eukaryotic cluster (III), the proteins linked with histone modification are acetylation/deacetylation proteins (RPD3, DMAP1, SIN3A, PCAF), the H3K36me3 reader MRG15, and the H3K4me3 writer CC10. New repressive histone marks appeared in the multicellular and metazoan clusters, respectively H3K9me3 (SU(VAR)3-9) and H3K27me3 (E(Z)). We confirmed these results through an additional analysis of the conservation of Drosophila histone modifiers (Figure 4). It is interesting to note that in well-studied unicellular organisms (T. thermophila, S. cerevisiae, C. owczarzaki), the repressive methylated histones H3K9 and H3K27 are often absent or present only at a very low level, while they are abundant in the multicellular fungus N. crassa (Garcia et al. 2007; Roudier et al. 2011; Ernst et al. 2011; Jamieson et al. 2013; Sebé-Pedrós et al. 2016).
Thus we find diversification of histone marks and the accompanying proteins, which, as mentioned above, allow for more fine-grained regulatory control over the genome. Connected to the modulation of accessibility through histone modifications, our work also supports the idea that new regulatory elements are linked with the transition to multicellularity. We find that the multicellular and metazoan clusters (IV and V) contain the first insulator-binding proteins (DWG, CTCF, CC27) and enhancer-binding proteins (JRA). Indeed, enhancers and insulators are mechanistically linked: enhancers being distal regulatory regions, they rely on looping, with the help of insulators, to influence the expression of their targets (Krivega & Dean 2012; Phillips-Cremins & Corces 2013). Taken together, we affirm the importance of regulatory complexification in the success of multicellular life. Like other studies, our work suggests this regulatory complexification to be linked with the need to control chromatin states and their propagation in an increasingly complex landscape of active and repressive genomic regions. Outlook We have enhanced our understanding of the evolution of the chromatin landscape through the epigenomic proteome in Drosophila. This is a good starting point, and additional studies that focus on other species are needed to deepen and broaden that knowledge. Tackling other model organisms, such as the worm C. elegans and the plant A. thaliana, is a straightforward extension. One future breakthrough we hope for is that such studies could provide insight into new BLACK-associated proteins and perhaps lead to a better molecular and evolutionary characterization of this chromatin type. Moreover, some classes of proteins are better studied in species other than Drosophila. For instance, in our dataset, five proteins are responsible for histone acetylation/deacetylation, but their substrate specificity and links with previously inferred chromatin states are not well investigated in fly species. In contrast, the specificities of histone acetylases (HATs) and deacetylases (HDACs) are well characterized in human (Seto & Yoshida 2014), and thus H. sapiens could be a better subject for questions in this area. Furthermore, non-coding RNAs are tightly associated with both active and inactive chromatin in eukaryotes, including in S. pombe (Martienssen et al. 2005), in various mammals (Saksouk et al. 2015), and in D. melanogaster (Fagegaltier et al. 2009). Thus we advocate including ncRNA functionality in analyses of the different chromatin states across species. Clearly, our current study is but an introduction that shows the potential for new insights into the evolution of the chromatin landscape. Figure 1. Phylogenetic profile of chromatin-associated proteins. To the left, six protein age clusters are indicated with Roman numerals (I-VI). In the matrix, dark blue rectangles represent the presence of a homolog, grey rectangles its absence. On top, 13 species groups are defined to aid the reader; three-letter codes refer to species names as given in Supplementary Table 1. The five columns "Fraction bound in chromatin types" display the fraction of each chromatin type (GREEN, BLUE, BLACK, RED, YELLOW) bound by each CAP. To the right, the column "Proteins" contains protein names, with unknown proteins in a red font.
2019-04-02T13:13:52.274Z
2018-03-26T00:00:00.000
{ "year": 2018, "sha1": "3d955488ebda36fd92f9707d7f9974e9eb7cbe62", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.1093/gbe/evz019", "oa_status": "GREEN", "pdf_src": "BioRxiv", "pdf_hash": "ffaeeba9a811af2948d8803abda3b30e1127d094", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology" ] }
255499171
pes2o/s2orc
v3-fos-license
Wastewater and seroprevalence for pandemic preparedness: variant analysis, vaccination effect, and hospitalization forecasting for SARS-CoV-2 in Jefferson County, Kentucky

Despite wide-scale assessments, it remains unclear how large-scale SARS-CoV-2 vaccination affected the wastewater concentration of the virus or the overall disease burden as measured by hospitalization rates. We used weekly SARS-CoV-2 wastewater concentration with a stratified random sampling of seroprevalence, and linked vaccination and hospitalization data, from April 2021–August 2021 in Jefferson County, Kentucky (USA). Our susceptible (S), vaccinated (V), variant-specific infected I1 and I2, recovered (R), and seropositive (T) model SVI2RT tracked prevalence longitudinally. This was related to wastewater concentration. The 64% county vaccination rate translated into about a 61% decrease in SARS-CoV-2 incidence. The estimated effect of SARS-CoV-2 Delta variant emergence was a 24-fold increase of infection counts, which corresponded to an over 9-fold increase in wastewater concentration. Hospitalization burden and wastewater concentration had the strongest correlation (r = 0.95) at a 1-week lag. Our study underscores the importance of continued environmental surveillance post-vaccine and provides a proof-of-concept for environmental epidemiology monitoring of infectious disease for future pandemic preparedness. Introduction There is an increasing realization that the current methods of disease monitoring based on individual testing may be insufficient to effectively combat the new, possibly much more infectious, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) variants. This leaves public health researchers and policy makers in search of more reliable methods of measuring SARS-CoV-2 prevalence in communities, especially methods not involving the (expensive) process of collecting individual-level data. Wastewater concentration, when properly calibrated, can serve as a surrogate for community prevalence of the virus.1,2 Wastewater epidemiology promises an exciting opportunity to estimate community disease prevalence even for asymptomatic, vaccine-preventable disease.2,3 However, the handful of recent studies considering a relationship between SARS-CoV-2 wastewater concentration and the COVID-19 vaccine have relied almost exclusively on statistical models calibrated with publicly available COVID-19 clinical case data.4-8 These data run the risk of biased underrepresentation of asymptomatic individuals who may not seek testing, or of individuals testing in settings where reporting is low or not required.9 In this study we consider this question in the context of randomized seroprevalence surveillance, combining mechanistic and statistical frameworks to obtain a more robust and realistic answer. We used repeated cross-sectional, community-wide, stratified randomized sampling to measure SARS-CoV-2 nucleocapsid (N)-specific IgG antibody-based seroprevalence in Jefferson County, Kentucky (USA), from April through August 2021 to determine post-vaccine community prevalence at a sub-population scale. We then related this to a statistical linear model and the available sub-population weekly wastewater surveillance data, which yielded an explicit estimate of the impact of vaccination and seroimmunity on SARS-CoV-2 wastewater concentration, while controlling for prevalence in different epidemic phases.
The latter may be easily translated into other important public health indicators such as patterns of hospitalization. 2. Methods 2.1 Seroprevalence Community-wide stratified randomized seroprevalence sampling was conducted in four waves from April to August 2021 in Jefferson County, Kentucky (USA), which is also the consolidated government for the city of Louisville. Seroprevalence sampling was conducted both before and during vaccination, but this analysis only considers the period after COVID-19 vaccines were made widely available to the public (N=3436). An address-based sampling frame was used to build four geographic zones. Invitations (~30,000 per wave) were mailed to sampled households and one random adult was selected to join the study. Participants completed an online consent form and survey and scheduled an in-person appointment for testing at a mobile site. In some cases, due to the timing of sampling waves, respondents may have had only the first of a two-dose vaccine series. Owing to the elevated level of vaccinated respondents in our study (~90%), seroprevalence was measured by the response to IgG N antibodies.10 It was assumed that, over the study period, vaccination-induced antibodies do not decay below detection. The nucleocapsid (N) IgG test sensitivity was 65% and the specificity was 85%. The seroprevalence sampling by geographical zones is described in more detail in the Supplemental Material, section S1. Wastewater SARS-CoV-2 concentration Wastewater samples were collected twice per week from five wastewater treatment plants (N=520; Supplemental Material, section S2) from April to August 2021. From an influent 24-hour composite sampler, a 125-mL subsample was collected and analyzed for SARS-CoV-2. In a few cases, due to an equipment malfunction, a grab sample was collected. The geographic area and population serviced by a wastewater treatment plant comprises a sewershed, the zone we consider in our model analysis, across a range of population sizes, income levels, and racial and ethnic diversity.2 Analysis used polyethylene glycol (PEG) precipitation with quantification in triplicate by reverse transcription polymerase chain reaction (RT-qPCR).11

The non-informative Cauchy distribution was assigned to regression coefficients, and the non-informative gamma prior was assigned to the dispersion parameter of the error term.

adjusting for the Delta variant emergence (Figure 1). The peak and the overall temporal dynamics are different under the two scenarios across each location. To better quantify these differences, we calculated the location-specific vaccination effects as the ratios of the areas under the two scenario curves (with-vaccination area over without-vaccination area). The value obtained for the aggregated data was 0·429 (CI = (0·405, 1)), with the remaining sewershed-specific effects being even stronger (Figure 1; panels B–D) at 0·532 (CI = (0·515, 1)), 0·367 (CI = (0·366, 0·785)), and 0·555 (CI = (0·555, 1)), respectively.
Based on converting these ratios to excess incidence, we conclude that without vaccination we would expect incidence to increase by about 133% above the observed level in Jefferson County (panel A) and by about 88%, 172%, and 80% in the respective sewershed areas (Figure 1; panels B–D, see also S3). To obtain estimates of the vaccination effects on the wastewater concentrations, we developed a hybrid inferential model combining the wastewater regression equation (see Sec 3.1) with the SVI2RT-estimated prevalence, under two different vaccination scenarios (factual 64% rate and counterfactual 0% rate) (Figure 2). Note that the usage of SVI2RT (which accounts for the effect of the different virulence of the two SARS-CoV-2 strains) automatically adjusted our analysis for the Delta variant emergence. As the estimated prevalence from the SVI2RT model and the normalized wastewater concentration are highly correlated (see Sec 3.1), the hybrid model is seen to fit the data well. As before, to quantify the location-specific vaccination effects, we calculated the location-specific ratios of the areas under the two curves, in an analogous way as when quantifying the vaccination effect on disease incidence. The ratios of the areas under the two curves, under the factual (vaccinated) and counterfactual (unvaccinated) scenarios, were computed. The Jefferson County (Figure 2; panel A) ratio was equal to 0·358 (CI = (0·333, 0·381)), and the remaining sewershed area ratios (Figure 2; panels B–D) were equal to, respectively, 0·457 (CI = (0·388, 0·537)), 0·276 (CI = (0·260, 0·296)), and 0·426 (CI = (0·407, 0·446)). The excess wastewater virus concentration without vaccination is estimated as 179%, 119%, 262%, and 135%, respectively (Supplemental Material, section S3). Effects of virus mutation on disease incidence and wastewater concentration The time periods during which the Alpha and Delta variants were dominant in each sewershed are shown in

both Alpha- and Delta-variants present), where the incidence (dark curve) was seen to rise rapidly (Figure 3). As in the previous section, to quantify the difference between the two curves, which we interpret as measuring the effect of introducing the Delta mutation, we calculated the ratio of the areas under the two curves in each panel, obtaining values of 7·32 (CI = (7·05, 20·13)), 4·40 (CI = (4·33, 7·64)), 8·58 (CI = (1, 8·60)), and 6·15 (CI = (1, 6·16)) for the aggregate, MSD1, MSD2, and MSD3-5 regions, respectively (corresponding to panels A–D). The estimated decrease in total incidence without the mutation is 86%, 77%, 88%, and 84%, respectively.
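As a worked illustration of the ratio-of-areas summary used throughout these comparisons (e.g., 0·429 for the countywide vaccination effect, or 7·32 for the Delta effect on incidence), the sketch below computes the areas under a factual and a counterfactual curve with the trapezoidal rule and converts the ratio into the reported percentage changes. It is written in Python; the two curves are made-up toy data and the variable names are ours, not taken from the study's code.

```python
# Illustrative sketch of the area-ratio summaries reported above.
# The two curves are toy data; in the paper they are model-estimated
# incidence (or wastewater concentration) under factual and
# counterfactual scenarios.
import numpy as np

weeks = np.arange(0, 20)                                        # weekly time grid
factual = 100 * np.exp(-0.5 * (weeks - 10) ** 2 / 9)            # e.g. with vaccination
counterfactual = 230 * np.exp(-0.5 * (weeks - 11) ** 2 / 12)    # e.g. without vaccination

area_factual = np.trapz(factual, weeks)
area_counterfactual = np.trapz(counterfactual, weeks)

# Ratio as reported in the text: with-intervention area over without-intervention area.
ratio = area_factual / area_counterfactual

# Excess burden without the intervention, relative to the observed (factual) level.
excess_pct = (1.0 / ratio - 1.0) * 100

print(f"ratio = {ratio:.3f}, excess without intervention ≈ {excess_pct:.0f}%")
```

With a ratio r below 1 (the vaccination comparisons), the excess without the intervention is 100 × (1/r − 1); with a ratio above 1 (the Delta comparisons), the decrease without the mutation is 100 × (1 − 1/r). These formulas reproduce, for example, 133% from 0·429 and 86% from 7·32, matching the values quoted in the text.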
To identify the effect of the Delta variant emergence on the observed wastewater concentration, we again applied the hybrid model discussed in the previous section. In the current analysis, the regression model was applied to predict the longitudinal wastewater concentrations from both factual (both variants present) and counterfactual (no Delta variant) prevalence data. The results are depicted in the panels of Figure 4, both for the aggregated and the sewershed-specific analysis. As with the analysis of the vaccination effects, here we also considered the ratios of the areas under the corresponding curves as measures of the Delta variant effects in specific locations. Based on the location-specific ratio values of 12·569 (CI = (11·487, 13·914)), 6·235 (CI = (5·290, 7·891)), 14·932 (CI = (13·351, 16·898)), and 8·413 (CI = (7·654, 9·351)), corresponding to the aggregated and sewershed-specific curves, the excess wastewater virus due to the Delta mutation is estimated as 92%, 84%, 93%, and 88%, respectively. Further analysis is provided in Supplemental Material Table S3.3.

We next regressed the hospitalization counts on the wastewater concentrations (Figure 5). As hospitalization is likely to occur sometime after symptom onset, we used the 1-week lagged-regression model, where the length of the lag was based on the overall model fit criteria. The fitted intercept and slope coefficients were 1·284 × 10⁻⁴ (std = 2·279 × 10⁻⁵) and 0·176 (std = 0·0119) for the vaccinated and unvaccinated scenarios, respectively, with an R-square of 0·928. The maximal number of observed daily average hospitalizations under the vaccination scenario was 110·4 per weekly average (actual daily value 122·0) at the end of August. However, without vaccination, the maximum predicted number of weekly average hospitalizations increased to 150·3. The ratio between the areas under the prediction curves with and without vaccination was 0·368 (CI = (0·413, 0·458)), indicating a 170% increase in the number of hospitalizations if no vaccine were present. In a comparable way, we obtained the hospitalization estimate without the Delta variant mutation. The ratio of the areas under the two graphs (with and without the Delta variant mutation) is 2·632 (CI = (2·382, 5·573)), indicating a 62% decrease in the hospitalization rate.

helped to control Alpha, while an increase in a third booster was found to lead to a decline in Delta.
When vaccination levels increase to higher coverage, overall reported incidence may

Our study used five sub-community scales based on the existing wastewater infrastructure, allowing observation of regional trends but also the aggregation of data for a countywide picture.

The seroprevalence, wastewater concentration, and hospitalization data used in the study can be accessed from the website https://github.com/cbskust/DSA_Seroprevalence. The computer code that implemented our model-based analysis will be made available immediately after publication.

S3. Population vaccination model (SVI2RT) The equation shown in (1) describes the time-evolution of the proportions of individuals who are susceptible (S), vaccinated (V), infected with the Alpha variant (I1), infected with the Delta variant (I2), removed (R), and seropositive (T). We assume the total initial population of susceptibles is large, with a small initial fraction of infected. The model equations are

having positive results, the corresponding log-likelihood function is
where p(Θ) is the prior distribution on Θ, to be determined from our previous work.3 Hence, we seek the values of Θ that maximize our posterior log-likelihood function (3). Note the entire system (1)

Relationship among the observed wastewater concentration, the hospitalization rate, and the estimated prevalence. The dark brown line represents the estimated prevalence, and the shaded area is the 95% credible interval of the MCMC simulation. The green line is the weekly average of the daily hospitalization rate of Jefferson County, and the blue dots represent the weekly averages of wastewater concentrations. The Pearson correlation coefficient of estimated prevalence and wastewater concentration is 0·916 (95% CI = (0·764, 0·976)), and that of hospitalization rate and wastewater concentration is 0·720 (95% CI = (0·224, 0·953)).

To obtain the linear regressions, the procedure was as follows: let Ĩ_t be the model-estimated percentage prevalence corresponding to the same week and sewershed area. W_t was defined in Eq. (5). The linear and NB regression models are given by:

In the Bayesian linear regression models, non-informative priors were assigned. Specifically, the non-informative Cauchy distribution was assigned to the regression coefficients, and the non-informative gamma prior was assigned to the dispersion parameter of the error term.
The summary of the posterior estimates of all regression parameters is presented in Table S3.4, and fitting and prediction using the regression model are shown in Figure 2.

In this model, we changed the time lag d from 1 to 4, so that the maximum period from the first evidence of community spread of COVID-19 in wastewater to reaching a hospitalization burden is about a month. Of note, hospitalization data are available daily while wastewater data are weekly. Additionally, we performed a simulation study using this regression model to check how much the hospitalization rate changes according to the vaccination rate. We changed the vaccination rate so that the vaccination percentage of the community was 0% and predicted the serial estimates Pred_t in Eq. (4). We then predicted the wastewater concentration using a linear regression model and used these predictions as the predictor in the regression model (5).
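To illustrate the chain of predictions just described (model-estimated prevalence under a counterfactual 0% vaccination rate → wastewater concentration via the linear regression → lagged hospitalization burden), here is a simplified Python sketch. The curve and all coefficient values are placeholders, and plain arithmetic stands in for the Bayesian linear and negative-binomial regressions actually fitted in the study, so this is only an outline of the workflow, not a reproduction of it.

```python
# Simplified sketch of the hybrid prediction chain used for the
# counterfactual (0% vaccination) scenario. All numbers are placeholders;
# the study fit Bayesian linear and negative-binomial regressions,
# which are not reproduced here.
import numpy as np

weeks = np.arange(20)

# Step 1: weekly prevalence (%) estimated by the compartmental model,
# here a made-up curve standing in for the SVI2RT output under 0% vaccination.
prevalence_cf = 2.5 * np.exp(-0.5 * (weeks - 12) ** 2 / 10)

# Step 2: predict wastewater concentration from prevalence with a linear model
# (slope and intercept are hypothetical stand-ins for the fitted posterior means).
beta0, beta1 = 0.1, 3.0
wastewater_cf = beta0 + beta1 * prevalence_cf

# Step 3: predict weekly average hospitalizations from wastewater concentration
# at a 1-week lag (again with placeholder coefficients).
gamma0, gamma1, lag = 1.0, 15.0, 1
hosp_cf = gamma0 + gamma1 * wastewater_cf[:-lag]   # aligned to weeks[lag:]

print(hosp_cf.round(1))
```

Comparing the area under such a counterfactual hospitalization curve with the area under the factual one then yields the kind of ratio reported in the Results.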
2023-01-08T02:02:28.141Z
2023-01-07T00:00:00.000
{ "year": 2023, "sha1": "d3dd1dbdceaef6b472fb92a43aba12501fa85b4a", "oa_license": "CCBY", "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9844017", "oa_status": "GREEN", "pdf_src": "MedRxiv", "pdf_hash": "d3dd1dbdceaef6b472fb92a43aba12501fa85b4a", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
244041262
pes2o/s2orc
v3-fos-license
Dysregulated Immunometabolism Is Associated with the Generation of Myeloid-Derived Suppressor Cells in Staphylococcus aureus Chronic Infection

Myeloid-derived suppressor cells (MDSCs) are a compendium of immature myeloid cells that exhibit potent T-cell suppressive capacity and expand during pathological conditions such as cancer and chronic infections. Although well-characterized in cancer, the physiology of MDSCs in the infection setting remains enigmatic. Here, we integrated single-cell RNA sequencing (scRNA-seq) and functional metabolic profiling to gain deeper insights into the factors governing the generation and maintenance of MDSCs in chronic Staphylococcus aureus infection. We found that MDSCs originate not only in the bone marrow but also at extramedullary sites in S. aureus-infected mice. scRNA-seq showed that infection-driven MDSCs encompass a spectrum of myeloid precursors in different stages of differentiation, ranging from promyelocytes to mature neutrophils. Furthermore, the scRNA-seq analysis has also uncovered valuable phenotypic markers to distinguish mature myeloid cells from immature MDSCs. Metabolic profiling indicates that MDSCs exhibit high glycolytic activity and high glucose consumption rates, which are required for undergoing terminal maturation. However, rapid glucose consumption by MDSCs, added to infection-induced perturbations in the glucose supplies of infected mice, hinders the terminal maturation of MDSCs and promotes their accumulation in an immature stage. In a proof-of-concept in vivo experiment, we demonstrate the beneficial effect of increasing glucose availability in promoting MDSC terminal differentiation in infected mice. Our results provide valuable information on how metabolic alterations induced by infection influence the reprogramming and differentiation of MDSCs. Introduction MDSCs are an aberrant population of immature myeloid cells that fail to undergo terminal differentiation and accumulate during pathological conditions such as cancer, chronic infection, and autoimmunity [1,2]. In contrast to normal, mature myeloid cells, which play a pivotal role in host defense against pathogens and in the initiation of T-cell immunity, MDSCs exert immune regulatory functions and are potent suppressors of T-cell responses [3]. In humans and mice, MDSCs have been typically divided into 2 different subsets, monocytic and granulocytic, based on cell surface markers. In humans, granulocytic MDSCs are identified by the expression of CD15+CD11b+CD33+HLA-DR− and monocytic MDSCs by the expression of CD14+CD11b+CD33+HLA-DR− [4]. In mice, monocytic MDSCs express CD11b+Ly6C+Ly6Glow, while granulocytic MDSCs express CD11b+Ly6ClowLy6G+ [4]. However, it has become clear that this classification is rather simplistic and does not recapitulate the high degree of phenotypic and functional heterogeneity of MDSCs [5,6]. Although MDSCs have been extensively studied and characterized in the cancer setting, where they seem to play an important role in supporting tumor progression [7], mounting evidence indicates that MDSCs also play an important regulatory role in the immune response to pathogens [8]. MDSCs have been reported to play an important role in chronic infections caused by S.
aureus [9][10][11][12][13][14], which is a major human pathogen that causes a wide variety of infections ranging from mild, self-limited infections to chronic and difficult-to-treat diseases including osteomyelitis, prosthetic joint infections, and biofilm-related infections [15]. We have previously reported the expansion of MDSCs in murine models of S. aureus chronic abscesses and bone infection, where they induce progressive T-cell dysfunction and promote pathogen persistence [12]. Along the same lines, Heim and colleagues [9,11,14] demonstrated that MDSCs infiltrate the site of infection in a mouse model of S. aureus orthopedic implant infection, where they promote an anti-inflammatory environment that favors biofilm persistence. Accumulation of granulocytic MDSCs at the site of prosthetic joint infections has also been observed in humans [13]. The authors proposed that the accumulation of MDSCs could account for the chronicity of these infections [13]. Altogether, these observations indicate that MDSCs are an important element of the host response to S. aureus chronic infections, and therefore targeting MDSCs may represent a promising therapeutic intervention to overcome immunosuppression and facilitate pathogen clearance by the immune system. In cancer, several preclinical and clinical studies have shown the benefit of including MDSC-targeting approaches, such as depletion of MDSCs or blockade of MDSC migration, in combination therapies to reduce tumor progression [16]. In the infection setting, however, considering that MDSCs also encompass populations of mature myeloid cells that are critical for the control of many pathogens, these strategies may have a profound negative effect on the course of infection. The development of such strategies will require a better understanding of how MDSCs are generated during chronic infection, which factors are involved in the process, and the mechanisms that prevent their maturation. In the current study, we used single-cell RNA sequencing (scRNA-seq) and metabolic profiling to investigate the origin, heterogeneity, molecular mechanisms, and pathways underlying the development and maintenance of MDSCs in a murine model of S. aureus chronic infection. Materials and Methods Bacterial Strains S. aureus strains 6850 and SH1000 were grown to the mid-log phase in brain heart infusion medium (BHI, Roth) at 37°C with shaking (120 rpm), collected by centrifugation, washed with sterile PBS, and diluted to the required concentration. The number of viable bacteria was determined by tenfold serial dilution and colony counting on blood agar plates. Mice and Infection Model Pathogen-free 9- to 10-week-old C57BL/6 female mice were purchased from Envigo (The Netherlands) and maintained according to institutional guidelines in individually ventilated cages with food and water provided ad libitum. Mice were intravenously inoculated either with 10⁶ CFU of S. aureus strain 6850 or with 4 × 10⁷ CFU of S. aureus strain SH1000 in 100 μL of PBS via a lateral tail vein, and sacrificed by CO₂ asphyxiation at the indicated times. Bacteria were counted in the tibia and spleen by preparing homogenates in PBS and plating tenfold serial dilutions on blood agar. In some experiments, infected mice were fed with water supplemented with 10% glucose for 10 days after bacterial inoculation.
This time period was selected to minimize potential secondary metabolic alterations such as increased glucose intolerance and insulin resistance associated with long-term consumption of glucose-sweetened water. Blood glucose was measured using a Contour XT glucometer (Bayer). Cell suspensions were prepared from the spleen of infected mice by gently teasing the spleen tissue through a 100-µm pore size nylon cell strainer and PBS+10% FCS. Splenocytes were spun down and erythrocytes were lysed after incubation for 5 min at RT in ammonium-chloride-potassium lysing buffer and then washed 3 times in PBS+10% FCS. The bone marrow was flushed out of both tibia and femur from one hind limb nonaffected by the infection using a 21-gauge needle attached to a 5-mL syringe filled with PBS, followed by centrifugation and erythrocyte removal with ammonium-chloride-potassium. Flow Cytometry Analysis Cell suspensions were incubated with anti-mouse CD16/32 (eBioscience) for 5 min at RT to block Fc receptors and stained for 20 min at 4°C with antibodies against surface antigens. Cells were washed with PBS+10% FCS followed by fixation for 15 min with fixation buffer (BioLegend) and analyzed on a LSRII cytometer (Becton Dickinson). For intracellular staining, cells were stained first against surface antigens as described above, fixed for 15 min at RT with fixation buffer, washed twice with permeabilization buffer (BioLegend), and stained for intracellular markers. After washing with permeabilization buffer, cells were analyzed on a LSRII cytometer. Data were analyzed using FlowJo v9.3 software. Cell viability was determined by flow cytometry using propidium iodide solution following the manufacturer's recommendations (BioLegend). Carboxyfluorescein Succinimidyl Ester Staining and Proliferation Assay CD4 + T cells were isolated from the spleen of uninfected mice using the mouse CD4 + T Cell Isolation kit (Miltenyi Biotec), and Ly6C + Ly6G + MDSCs were isolated from the spleen of S. aureusinfected mice at day 21 of infection using the mouse Myeloid-Derived Suppressor Cell Isolation Kit (Miltenyi Biotec) according to the manufacturer's instructions. Isolated CD4 + T cells were then labeled with carboxyfluorescein succinimidyl ester (BioLegend) following the manufacturer's recommendations and cultured at 5 × 10 5 cells per well in complete RPMI-1640 medium (Gibco) supplemented with antibiotic-antimycotic (1:1,000) (VWR International), 4 mML-glutamine (Sigma-Aldrich), and 10% FCS and 2 μg/ mL of Armenian hamster anti-mouse CD3ε plus 2 μg/mL of Syrian hamster anti-mouse CD28 antibodies (BD Pharmingen) at 37°C and 5% CO 2 for 72 h in the presence or absence of 5 × 10 5 per well of MDSCs isolated from the spleen of S. aureus-infected mice at a 1:1 ratio. Unstimulated CD4 + T cells incubated in medium without anti-CD3ε and anti-CD28 antibodies were used as control. Proliferation was determined by flow cytometry analysis and dilution of CSFE as indication of cell division. Cytokine Determination IL-2 and IFN-γ levels were determined in the culture supernatant of CD4 + T cells unstimulated or stimulated for 72 h with anti-CD3/anti-CD28 in the presence or absence of MDSCs isolated from the spleen of S. aureus-infected mice at day 21 of infection using mouse IL-2 and mouse IFN-γ ELISA sets according to the manufacturer's recommendations (BD Biosciences). Histology Spleens were removed from uninfected or S. aureus-infected mice at day 21 of infection, fixed in 10% formalin, and embedded in paraffin. 
Tissue section samples (2 μm thick) were stained with hematoxylin/eosin (Roth) and examined under a light microscope. Single-Cell RNA Sequencing Spleens isolated from 5 uninfected and 5 S. aureus-infected mice (day 21 of infection) were transformed into a single-cell suspension and pooled. The CD11b + populations in the infected and uninfected samples were sorted using a FACSAria(TM) SORP and approximately 4,000 cells loaded onto the 10x Genomics Chromium Controller following the single-cell 3′ v3 protocol (10x Genomics). Libraries were prepared from single-cell suspensions according to the 10x Genomics 3′ v3 protocol and sequenced using an Illumina NovaSeq 6000 sequencer (Illumina) with a sequencing depth of 200 million reads per sample. In vivo 5-Ethynyl-2′-Deoxyuridine-Based Cell Proliferation Assay 5-Ethynyl-2′-deoxyuridine (EdU) (Thermo Fisher Scientific) was administered intraperitoneally (0.5 mg/mice) to uninfected or S. aureus-infected mice (day 21 of infection) 24 h before sacrifice. Spleens were removed, converted into a single-cell suspension, and stained for surface markers. EdU staining was performed with the Click-iT EdU AlexaFluor647 Flow Cytometry Assay Kit following the manufacturer's instructions (Thermo Fisher Scientific). Proliferating cells were determined by flow cytometry analysis. In vitro Culture of MDSCs Spleen cells isolated from S. aureus-infected mice (day 21 of infection) were cultured in vitro at a density of 5 × 10 6 cells/mL in complete RPMI-1640 medium at 37°C, 5% CO 2 . Cells were harvested at the indicated times of in vitro culture, stained with antibodies against the surface marker Ly6G and with antibodies against the intracellular marker CCL6, and analyzed by flow cytometry. In some experiments, spleen cells were cultured in complete RPMI-1640 medium containing different concentrations of glucose (0, 0.5, 1, and 2 mg/mL). To inhibit glycolysis, spleen cells were incubated in complete RPMI-1640 medium containing glucose (2 mg/mL) in the presence of 10 mM of the glycolysis inhibitor 2-deoxy-D-glucose (2-DG). Cytospin Cytospin were prepared using aliquots of in vitro-cultured MDSCs. The material was centrifuged at 500 rpm for 5 min in a Shandon cytocentrifuge (Cytospin 2, Shandon, UK). Slides were stained using May-Grünwald-Giemsa (Polysciences) and photographed with a light microscope. Phagocytosis and Killing Assay Ly6C + Ly6G + cells were isolated from S. aureus-infected mice (day 21 of infection) and cultured for 96 h in complete RPMI medium. Cells were collected, washed, seeded in multi-well plates at 5 × 10 5 cells per well, and incubated with S. aureus at an MOI of 10:1 in the presence of 10% mouse serum. The plates were centrifuged at 700 g for 5 min and incubated at 37°C for 1 h to allow phagocytosis. Noningested extracellular bacteria were then killed by addition of 100 μg/mL gentamicin (Gibco) and 5 μg/mL lysostaphin (Sigma-Aldrich), and cells were washed and further incubated for 3 h at 37°C. Cells were then harvested, pelleted by centrifugation, and lysed with 0.1% Triton X-100 (Sigma), and CFU were enumerated by plating on blood agar. Seahorse Extracellular Flux Analysis Ly6C + Ly6G + cells were isolated from S. aureus-infected mice (day 21 of infection) using the Myeloid-Derived Suppressor Cell Isolation Kit (Miltenyi Biotec) according to the manufacturer's instructions and used prior to (ex vivo) or after in vitro culture for 96 h (in vitro). 
Oxygen consumption rate (OCR) and extracellular acidification rate (ECAR) of Ly6C + Ly6G + cells were assessed using an Agilent Seahorse XF96 Analyzer (Agilent Technologies). One day prior to the assay, the Seahorse XF Utility Plate (Agilent Technologies) was hydrated by adding 200 μL of sterile Milli-Q H 2 O to each well and incubated overnight in a non-CO 2 37°C incubator together with the XF sensor cartridge (Agilent Technologies). Before seeding the cells, the wells of a Seahorse 96-well XF cell culture plate (Agilent Technologies) were incubated with poly-L-lysin (Sigma-Aldrich) for 1 h at 37°C, extensively washed with Milli-Q H 2 O after removing the poly-L-lysin, and left to dry for 30 min at 37°C. Cells were added to poly-L-lysin-coated plates at a concentration of 5 × 10 5 cells per well in 180 μL Seahorse RPMI medium supplemented with 10 mM glucose and 2 mM glutamine (pH 7.4) and centrifuged at 1,000 g for 5 min. The wells filled up with only assay medium were used as background control. Water was removed from the wells in the utility plate and 200 μL prewarmed (37°C) Seahorse XF calibrant solution (Agilent Technologies) was added to each well. The cell culture plate and utility plate with the sensor cartridge were equilibrated after incubation in a non-CO 2 37°C incubator for 1 h. The different inhibitors of the Seahorse XF Cell Glycolytic Rate Assay Kit (Agilent Technologies) were added to the corresponding ports of the sensor cartridge prior to starting the assay. Thus, 20 μL of 5 μM rotenone and 5 μM antimycin A solution was added to port A and 22 μL of 500 μM 2-DG solution to port B. The utility plate and the sensor cartridge were then placed into the XF96 analyzer and calibrated. After calibration, the utility plate was replaced by the cell culture plate, and cell respiration parameters were determined by stepwise injection of the different inhibitors. During each measurement cycle, the OCR and ECAR were determined 3 times including 3 min of mixing and 3 min of measurement. Stable Isotope Labeling, Metabolite Extraction, GC-MS Measurement, and Data Processing Ly6C + Ly6G + cells isolated from the spleen of S. aureus-infected mice (day 21 of infection) prior to (ex vivo) and after in vitro culture for 96 h (in vitro) were seeded in 6-well plates at 8 × 10 6 cells per well in RPMI medium containing either 11 mM [U-13 C 6 ]-glucose (Cambridge Isotope Laboratories), 2 mM [U-13 C 5 ]-glutamine (Cambridge Isotope Laboratories), or 100 μM [U-13 C 16 ]-palmitate (Cambridge Isotope Laboratories), 10% FCS, 1% penicillin (10,000 IU/mL), and streptomycin (20 mg/mL). 12 C metabolites were added to each tracer to ensure equivalent nutrient state. Before the treatment, [U-13 C 16 ]-palmitate was noncovalently conjugated to fatty-acid-free BSA (Sigma-Aldrich) as previously described [17]. The cells were incubated at 37°C, 5% CO 2 for 4 h. Cells were then harvested and intracellular metabolites were extracted as previously described [19]. Briefly, cell suspensions were collected and centrifuged at 250 g for 5 min. The cell pellet was washed with 1 mL 0.9% NaCl, followed by centrifugation at 250 g for 5 min. The cells were immediately put on ice to quench the metabolism, and 250 μL ice-cold HPLC-grade methanol (Sigma-Aldrich), 250 μL Milli-Q H 2 O with 1 μg/mL D6 glutaric acid (CDN isotopes) as internal standard, and 250 μL HPLC-grade chloroform (Sigma-Aldrich) were added. The cells were agitated at 4°C for 20 min at 1,400 rpm, followed by centrifugation at 17,000 g at 4°C for 5 min. 
After phase separation, 300 μL of the polar phase was transferred to a glass vial with a micro-insert and dried at 4°C under vacuum. Derivatization for gas chromatography was performed using a Gerstel MPS. Dried polar metabolites were dissolved in 15 μL of 20 mg/mL methoxyamine hydrochloride (Sigma-Aldrich) in pyridine (Roth) at 40°C while shaking for 90 min. An equal volume of N-tert-butyldimethylsilyl-N-methyltrifluoroacetamide (Restek) was then injected, and the samples were further incubated for 60 min at 55°C under shaking. GC-MS measurement was performed on an Agilent 7890B GC coupled to an Agilent 5977B with extractor EI source (Agilent Technologies). Metabolites of interest were measured in selected ion monitoring mode. The Metabolite Detector software was used for the data analysis with the following settings: peak threshold, 5; minimum peak height, 5; bins per scan, 10; deconvolution width, 5 scans; no baseline adjustment; required peaks, 2; and no minimum required peak intensity. Retention index was calibrated based on the retention time. An in-house mass spectral library was used for compound identification. Mass isotopomer distributions were calculated by MetaboliteDetector's MID wizard. Fractional contribution of glutamine-, glucose-, and palmitate-derived carbon to total metabolite carbon was calculated by dividing the sum of the abundance of all isotopologs (except M0) by the total number of carbons in the respective metabolite. Statistical Analysis Single-Cell RNA-Seq Data Analysis Sequencing data were demultiplexed using Cell Ranger software (version 2.0.2) (10x Genomics), and FASTQ files were generated. Reads were aligned to the UCSC mm10 reference genome (GRCm37) using Cell Ranger followed by quantification of gene expression and generation of a gene-barcode matrix. Individual datasets were aggregated using the Cellranger aggr command and further analyzed using the R package Seurat (version 3.1.4) (https://cran.r-project.org/package=Seurat). The data were subjected to library-size normalization and log transformation, and the 3,000 most variable genes (based on variance-stabilizing transformation) in the dataset were used for downstream analysis. Principal component analysis (PCA) was used to reduce the dimensionality of the original matrix, and 10 principal components were used to calculate the Uniform Manifold Approximation and Projection (UMAP) and clusters. Differential expression was tested using the FindMarkers function (default parameters) in Seurat, and genes with p values <0.01 were considered differentially expressed. The raw expression matrix was subset by cluster annotation (classical monocytes, immature myeloid cells, and neutrophils) and normalized by SCTransform. The 3,000 most variable genes (as above) were chosen for downstream analysis. Data were scaled, and a PCA was calculated, of which the first 30 components were used for UMAP and clustering. Mutual nearest neighbor batch correction was performed on the low-dimensional representation (PCA) as recommended in the batchelor vignette by Aaron Lun (https://bioconductor.org/packages/release/bioc/vignettes/batchelor/inst/doc/correction.html). Pseudotemporal ordering of single cells was performed using monocle3 using the normalized data (preprocess_cds params: norm_method = "none", num_dim = 15) including mutual nearest neighbor batch correction (alignment_group = sample).
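As a rough illustration (not the authors' actual script), the clustering steps just described could be sketched in Seurat v3 roughly as follows; the file path, object name, and clustering resolution are assumptions, and only the parameters explicitly stated in the text (3,000 variable genes, 10 principal components, p < 0.01) are taken from the source.

# Minimal sketch of the clustering workflow described above (Seurat v3 syntax).
# Path and object names are hypothetical; parameter values follow the text.
library(Seurat)

counts <- Read10X(data.dir = "aggr/outs/filtered_feature_bc_matrix")  # hypothetical 'cellranger aggr' output
cd11b  <- CreateSeuratObject(counts, project = "CD11b_spleen")

cd11b <- NormalizeData(cd11b)                                   # library-size normalization + log transform
cd11b <- FindVariableFeatures(cd11b, selection.method = "vst",  # variance-stabilizing transformation
                              nfeatures = 3000)
cd11b <- ScaleData(cd11b)
cd11b <- RunPCA(cd11b, npcs = 10)
cd11b <- RunUMAP(cd11b, dims = 1:10)
cd11b <- FindNeighbors(cd11b, dims = 1:10)
cd11b <- FindClusters(cd11b)                                    # default resolution; an assumption

# Differential expression with default parameters, keeping genes with p < 0.01 as in the text.
markers <- FindMarkers(cd11b, ident.1 = 0, ident.2 = NULL)
markers <- markers[markers$p_val < 0.01, ]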
Cell cycle assignment was performed using the Seurat function CellCycleScoring using 20 bins and the genes previously reported by Kowalczyk et al. [20]. Over-representation of gene ontology (GO) categories was calculated using the R package clusterProfiler (https://bioconductor.org/packages/release/bioc/ html/clusterProfiler.html). Visualizations were produced with the R package ggplot2 (https://cran.r-project.org/package=ggplot2). Other Data Analysis Comparisons between groups were made using a parametric ANOVA test with the Tukey posttest or a 2-way ANOVA test. p values <0.05 were considered significant. Heatmap of metabolite concentration was generated with R package "pheatmap" with a p value cutoff of 0.05, ANOVA, and z-score normalization. Results are presented as mean values ± SD of a minimum of 3 replicates, and all experiments were repeated at least 3 times. MDSCs Originate from Both Bone Marrow and Extramedullary Sites in S. aureus Chronic Infection We used a previously described experimental model of S. aureus chronic infection [21] to investigate the origin and physiology of infection-driven MDSCs. C57BL/6 were infected intravenously with 10 6 CFU of S. aureus strain 6850, and bacterial loads were determined in the tibia and spleen at progressing times after bacterial inoculation. Consistent with previous observations [21], S. aureus was detectable in the tibia of infected mice for up to 30 days, but it was under detection levels in the spleen from day 20 onward (Fig. 1a). Infected mice developed pronounced splenomegaly with the progression of infection ( Fig. 1b), which was largely due to a disproportionate accumulation of CD11b + cells (Fig. 1c) expressing the markers Ly6C and Ly6G (Fig. 1d, e; online suppl. Fig. 1; for all online suppl. material, see www.karger.com/ doi/10.1159/000519306), typical of murine MDSCs [4]. Since the ability to suppress T-cell responses is the hallmark of MDSCs [4], we then assessed the capacity of the Ly6C + Ly6G + cells accumulating in the spleen of infected mice to inhibit T-cell proliferation. For this purpose, mouse CD4 + T cells isolated from the spleen of uninfected C57BL/6 mice and labeled with CSFE were stimulated with anti-CD3 and anti-CD28 antibodies and incubated in the presence or absence of Ly6C + Ly6G + cells isolated from the spleen of S. aureus-infected mice at day 21 of infection. On day 3 of culture, proliferation of CD4 + T cells was determined by flow cytometry. As shown in Figure 1f, Ly6C + Ly6G + potently suppressed proliferation of CD4 + T cells and therefore fulfilled the functional criteria for MDSCs. Furthermore, secretion of cytokines such as IL-2 and IFN-γ by anti-CD3/anti-CD28-stimulated CD4 + T cells was significantly decreased in the presence of MD-SCs (online suppl. Fig. 2). We also demonstrated that the expansion of MDSCs during S. aureus chronic infection is not bacterial strainspecific since similar splenomegaly and accumulation of MDSCs with inhibitory effects on T-cell responses were observed in mice intravenously infected with S. aureus strain SH1000, a strain that causes chronic renal infection in mice [22] (online suppl. Fig. 3). To investigate the origin of the MDSCs arising during S. aureus infection, we first focused on the bone marrow since this is the primary site where myeloid cells are produced. Flow cytometry analysis of the bone marrow isolated from S. 
aureus-infected mice at progressing times of infection showed a significant increase in the percentage of CD11b + cells predominantly expressing Ly6C and Ly6G at day 7 that gradually decreased at later times (Fig. 1g, h; online suppl. Fig. 4). Because the kinetics of CD11b + Ly6C + Ly6G + cells in the bone marrow (Fig. 1g, h; online suppl. Fig. 4) did not match the progressive increase of these cells observed in the spleen during the course of infection (Fig. 1d, e; online suppl. Fig. 1), we speculated that in addition to the bone marrow, MDSCs may also originate from other sites. In this regard, it has been reported that MDSCs can originate at extramedullary sites such as the spleen and liver during chronic inflammatory conditions as a consequence of extramedullary hematopoiesis [5]. To investigate if extramedullary hematopoiesis is occurring at peripheral sites during S. aureus infection, we determined the percentage of Lin − IL-7Rα − c-Kit + Sca-1 − lineage-committed progenitors (LK) and of Lin − IL-7Rα − Sca-1 + c-Kit + myeloid progenitors (LSK) in the spleen of infected mice at progressing times after bacterial inoculation. A time-dependent increase in the frequency of LK and of LSK was observed in the spleen of S. aureus-infected mice (Fig. 1i). Histological examination of the spleen tissue taken from S. aureus-infected mice at day 21 of infection showed the red pulp markedly expanded by numerous hematopoietic cells including erythroid and myeloid precursors as well as megakaryocytes, further confirming the occurrence of extramedullary hematopoiesis (Fig. 1j). Together, these results indicate that both the bone marrow and extramedullary hematopoiesis at peripheral sites may contribute to the expansion of MDSCs observed during S. aureus chronic infection. High-Resolution Mapping of Infection-Driven MDSCs Determined by scRNA-Seq To capture the phenotypic variation among MDSCs present in the spleen of S. aureus-infected mice at a high resolution, we performed scRNA-seq on sorted CD11b + cells at day 21 of infection. CD11b + cells isolated from the spleen of uninfected control mice were included to determine the changes in cell composition specifically induced by infection. The scRNA-seq data acquired using the droplet-based 10x Genomics technology from both conditions were combined (1,897 cells from control and 1,497 cells from infected), and a 2-dimensional representation of the single-cell transcriptomes was obtained using a UMAP (Fig. 2a). A total of 9 different cell clusters were identified according to the expression of known marker genes, including NK cells (Nkg7, Klrb1c, Klre1, Klrk1, Klra7-9, and Klrd1), B cells (Cd79a, Cd79b, Cd19, and Cd74), dendritic cells (H2-Ab1, H2-Eb1, H2-Aa, Cd209a, and H2-DMb1), classical (Ly6c2, Ccl9, Ccr2, and Cd68) and nonclassical (Fabp4, Cx3cr1, and Csf1r) monocytes, plasma B cells (Jchain and Sdc1), T cells (Cd3d), neutrophils (S100a8, Ccl6, Il1b, Ly6g, and Wfdc21), and a cluster of immature myeloid cells expressing markers along the granulocytic differentiation axis (Ly6c2, Ly6g, Chil3, Camp, Ltf, S100a8, and Wfdc21) that we classified as MDSCs (Fig. 2b, c; online suppl. Table 1). The proportions of cell types between the 2 conditions are shown in Figure 2d. Notably, the majority of CD11b + cells from infected mice could be classified as immature myeloid cells, while no such population was present in the control sample.
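Purely as an illustration, the marker genes listed above can be inspected across the clusters with a dot plot; the Seurat object name is hypothetical (carried over from the sketch above), and the gene groupings simply restate the lists given in the text (the Klra7-9 range is omitted for brevity).

# Illustrative check of the marker genes listed above across the Seurat clusters.
library(Seurat)

marker_sets <- list(
  NK            = c("Nkg7", "Klrb1c", "Klre1", "Klrk1", "Klrd1"),
  B             = c("Cd79a", "Cd79b", "Cd19", "Cd74"),
  DC            = c("H2-Ab1", "H2-Eb1", "H2-Aa", "Cd209a", "H2-DMb1"),
  Mono_class    = c("Ly6c2", "Ccl9", "Ccr2", "Cd68"),
  Mono_nonclass = c("Fabp4", "Cx3cr1", "Csf1r"),
  PlasmaB       = c("Jchain", "Sdc1"),
  T_cell        = c("Cd3d"),
  Neutrophil    = c("S100a8", "Ccl6", "Il1b", "Ly6g", "Wfdc21"),
  MDSC          = c("Ly6c2", "Ly6g", "Chil3", "Camp", "Ltf", "S100a8", "Wfdc21")
)

# Dot plot of all markers per cluster; clusters can then be renamed with RenameIdents().
DotPlot(cd11b, features = unique(unlist(marker_sets))) + RotatedAxis()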
To investigate the full extent of heterogeneity of MDSCs, we extracted all transcriptomes annotated as classical monocytes, immature myeloid cells, and neutrophils from the combined dataset and reanalyzed this subset. Conceptually, the classical monocytes and neutrophils, present in both conditions, represent the typical myeloid cell populations under homeostatic conditions (compare Fig. 3a, b). In infected mice, however, a continuous spectrum of cells could be observed between these 2 populations representing the different stages of granulocyte differentiation as shown in Figure 3b. Neutrophils contain 4 types of granules including primary (azurophilic), secondary (specific), tertiary (gelatinase), and ficolin-1-rich granules. These granules are produced stepwise during the different stages of maturation that start with promyelocytes followed by myelocytes, metamyelocytes, band cells, and end with terminally differentiated segmented neutrophils [23]. This process has been described as a targeting-by-timing model to explain the differences in protein contents among neutrophil granule subsets [23][24][25][26]. Accordingly, we used the expression levels of the genes encoding the different granule proteins to classify the spectrum of cell populations identified by scRNA-seq within the MDSCs into specific neutrophil differentiation categories. Azurophilic granule proteins including myeloperoxidase (encoded by Mpo), elastase (encoded by Elane), cathepsin G (encoded by Ctsg), and proteinase 3 (encoded by Prtn3) are produced only at the promyelocyte stage (Fig. 3c, d; online suppl. Table 2). Myelocytes were identified by the high expression of genes encoding secondary granule proteins such as lactoferrin (Ltf), cathelicidin (Camp), and neutrophil gelatinase-associated lipocalin (Lcn2) as well as by the expression of the gene encoding ficolin-1 (Fcnb), which originates during the transition from myelocytes to metamyelocytes (Fig. 3c, e; online suppl. Table 2). Metamyelocytes could be identified based on the high expression of the genes encoding the above-mentioned secondary granule proteins (Ltf, Camp, Lcn2, and Fcnb) and by the increased expression of the gene encoding Ly6G (Ly6g) (Fig. 3c-f; online suppl. Table 2). The expression of genes encoding tertiary granule proteins such as metalloproteinase 8 (Mmp8) and metalloproteinase 9 (Mmp9) identified band neutrophils (Fig. 3c; online suppl. Table 2). Last, terminally differentiated segmented neutrophils were identified based on the high expression of genes encoding markers such as colony-stimulating factor 3 receptor (Csf3r), IL-1β (Il1b), and CCL6 (Ccl6) (Fig. 3c, g; online suppl. Table 2). The expression pattern of Cebpe, which encodes the transcription factor CCAAT/enhancer binding protein-ε (C/EBP-ε) that is predominantly expressed during the myelocyte and metamyelocyte differentiation stages [27], confirmed the classification performed based on the expression of genes encoding granule proteins (Fig. 3h; online suppl. Table 2). Mature neutrophils are mitotically inactive with cell cycle arrest occurring during the myelocyte to metamyelocyte transition [25]. To substantiate this in our experimental setting, we performed cell cycle analysis in the scRNA-seq data. The results indicated that both promyelocytes and myelocytes were actively proliferating as they were in phases S (DNA synthesis) and G2/M (cell division) of the cell cycle (Fig. 3i).
On the other hand, metamyelocytes, band neutrophils, and segmented neutrophils were all in the postmitotic G1 phase (growth phase, Fig. 3i). Consistent with these results, gene ontology analysis identified the cell cycle to be an overrepresented functional category in promyelocytes and myelocytes, whereas functional categories associated with cell migration and host defense were overrepresented in the more mature populations such as band and segmented neutrophils (Fig. 4a). In addition, we performed functional analysis to determine the presence of actively proliferating cells in the spleen of S. aureus-infected mice in vivo using EdU, a thymidine analog that is incorporated into proliferating cells during DNA synthesis. A significantly higher number of proliferating cells were detected in the spleen of S. aureus-infected mice (day 21 of infection) in comparison to uninfected control mice (Fig. 4b, upper panels). More than 60% of the actively proliferating cells (EdU + ) were Ly6C + /Ly6G − (Fig. 4b, lower panel) and most probably represented promyelocyte and myelocyte cell populations. To further validate the hierarchy between the different myeloid cell populations within the MDSC cluster identified by the scRNA-seq data, we performed trajectory analysis based on pseudotime, where cells represent distinct stages in a continuous developmental process. This enables the association of specific cell types with the initial, intermediate, and terminal states of the trajectory [28]. The pseudotime analysis shown in Figure 4c recapitulated the trajectory of cell differentiation from pro-myelocytes (initial) to terminally differentiated segmented neutrophils (final) including several intermediate developmental states comprising myelocytes, metamyelocytes, and band neutrophils. Distinguishing between immature granulocytic precursors and mature segmented neutrophils has been very difficult, and no phenotypic marker has been identified so far that enables precise separation of these populations. The results of the scRNA-seq analysis performed in this study have revealed that the expression of both Ly6G and CCL6 markers may be suitable to separate mature neutrophils (Ly6G + CCL6 + ) from immature MDSC precursors (Ly6G + CCL6 − ). This was corroborated by flow cytometry analysis showing that whereas approximately 90% of neutrophils in the spleen (Fig. 4d, f) and blood (Fig. 4e, f) of uninfected mice were mature neutrophils (Ly6G + CCL6 + ), <10% of Ly6G + cells expressed CCL6 in the spleen (Fig. 4d, f) and blood (Fig. 4e, f) of S. aureusinfected mice. In the cancer setting, it has been reported that MDSCs are not irreversibly arrested in an immature stage and could terminally differentiate after being removed from the tumor environment and cultured under in vitro conditions [29][30][31]. To investigate if this was also the case for infection-driven MDSCs, we determined the capacity of MDSCs isolated from the spleen of S. aureus-infected mice at day 21 of infection to undergo terminal maturation upon in vitro culture conditions. As the surface expression of Ly6G and the intracellular expression of CCL6 were revealed by the scRNA-seq analysis as markers of mature neutrophils, we monitored MDSC maturation by measuring the level of expression of these markers at increasing times of in vitro culture. Although cell viability slowly decreased with time, over 70% of Ly6G + cells were still viable after 96 h of in vitro culture (online suppl. Fig. 7a, b). 
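The maturation readout used in the analysis that follows, the fraction of Ly6G+ cells that also stain for CCL6, reduces to simple arithmetic on gated event counts; the numbers in this sketch are invented for illustration only.

# Toy example: percent mature (Ly6G+CCL6+) among Ly6G+ events at each time point.
# Event counts are invented for illustration and do not reproduce the study's data.
gated <- data.frame(
  time_h    = c(0, 24, 48, 96),
  ly6g_pos  = c(12000, 11500, 10800, 9500),  # Ly6G+ events per sample
  ly6g_ccl6 = c(1200, 4000, 7600, 8100)      # Ly6G+CCL6+ events per sample
)
gated$percent_mature <- 100 * gated$ly6g_ccl6 / gated$ly6g_pos
print(gated)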
Flow cytometry analysis of Ly6G + cells at different times of in vitro culture showed a time-dependent gradual increase in the percentage of Ly6G + CCL6 + cells (Fig. 4g, h). After 96 h of in vitro culture, >80% of Ly6G + cells expressed CCL6, whereas only 10% of the Ly6G + cells expressed CCL6 prior to in vitro culture (0 h) (Fig. 4g, h). Morphological changes were also observed in in vitro-cultured MDSCs, which showed a transition from predominant immature myeloid cells including cells exhibiting round nuclei typical of promyelocytes, kidney-shaped nuclei typical of myelocytes and metamyelocytes, and band-like-shaped nuclei typical of band neutrophils prior to in vitro culture (Fig. 4i, upper panel) to cells with segmented nuclei morphology typical of mature neutrophils after 96 h of in vitro culture (Fig. 4i, lower panel). Furthermore, in vitro-cultured Ly6C + Ly6G + cells were capable of phagocytizing and killing internalized S. aureus (online suppl. Fig. 9). Together, these results indicate that S. aureus infection-driven MDSCs retained their capacity to terminally differentiate into mature myeloid cells and can undergo maturation after being removed from the spleen environment. Infection-Driven Immature MDSCs Rely on Aerobic Glycolysis to Complete Their Maturation Process As metabolism has been shown to influence immune cell differentiation and function [32], a better understanding of the metabolic pathways used by MDSCs to support both their energetic and biosynthetic demands could provide important information about their difficulties in undergoing terminal differentiation. Therefore, we analyzed ECAR as a surrogate for glycolytic rate and OCR as an indicator of mitochondrial oxidative phosphorylation in immature Ly6C + Ly6G + cells directly isolated from the spleen of S. aureus-infected mice at day 21 of infection (ex vivo) and in Ly6C + Ly6G + cells after maturation in in vitro culture for 96 h (in vitro) using a Seahorse XF biochemical analyzer [33]. Ex vivo immature Ly6C + Ly6G + cells showed increased basal ECAR (Fig. 5a) and OCR (Fig. 5b) as compared to in vitro-cultured mature Ly6C + Ly6G + cells, indicating that immature Ly6C + Ly6G + cells had increased energetic demands compared with mature Ly6C + Ly6G + cells. Injection of a mixture of complex I inhibitor rotenone and complex III inhibitor antimycin A blocked mitochondrial respiration in both ex vivo immature and in vitro-cultured mature Ly6C + Ly6G + cells, as evidenced by reduced OCR values (Fig. 5b). The decrease in OCR in ex vivo immature Ly6C + Ly6G + cells was accompanied by an increase in ECAR (Fig. 5a), indicating that glycolysis is induced after inhibition of the electron transport chain to compensate for ATP production and to meet their energy demand. Subsequent injection of 2-DG, a glucose analog that inhibits hexokinase, the first enzyme in the glycolysis pathway, resulted in substantial reduction of ECAR below the basal level in both ex vivo immature and in vitro-cultured mature Ly6C + Ly6G + cells (Fig. 5a), thus confirming that acidification of the medium after inhibition of mitochondrial respiration was driven by glycolysis. These results imply that infection-driven immature MDSCs used both aerobic glycolysis and oxidative phosphorylation to support their bioenergetic demands.
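For orientation, basal and post-injection rates of this kind might be summarized as in the following sketch; the export format and column names are assumptions rather than the actual instrument output, and only the injection scheme (rotenone/antimycin A followed by 2-DG) is taken from the methods described above.

# Sketch: summarizing basal and post-injection rates from a hypothetical Seahorse export.
# Assumed columns: group ("ex_vivo"/"in_vitro"), phase ("basal"/"rot_aa"/"2dg"), cycle, OCR, ECAR.
rates <- read.csv("seahorse_rates.csv")

phase_means <- aggregate(cbind(OCR, ECAR) ~ group + phase, data = rates, FUN = mean)

# For one group: compensatory glycolysis = ECAR gained after blocking the electron
# transport chain; glycolysis-linked ECAR = ECAR abolished by 2-DG.
ex_vivo <- subset(phase_means, group == "ex_vivo")
basal   <- ex_vivo[ex_vivo$phase == "basal", ]
rot_aa  <- ex_vivo[ex_vivo$phase == "rot_aa", ]
dg      <- ex_vivo[ex_vivo$phase == "2dg", ]

compensatory_ecar <- rot_aa$ECAR - basal$ECAR  # glycolytic compensation after Rot/AA
glycolytic_ecar   <- rot_aa$ECAR - dg$ECAR     # ECAR lost after 2-DG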
In accordance with their high glycolytic activity, ex vivo immature Ly6C + Ly6G + cells consumed greater amounts of glucose than in vitro-cultured mature Ly6C + Ly6G + cells as determined by flow cytometry using the fluorescent D-glucose analog 2-(N-[7-nitrobenz-2-oxa-1,3-diazol-4-yl]amino)-2-deoxy-D-glucose (Fig. 5c). As ex vivo immature Ly6C + Ly6G + MDSCs also utilized oxidative phosphorylation to fulfill their bioenergetic requirements, we also determined which carbon sources were used by these cells to support oxidative metabolism. For this purpose, ex vivo immature Ly6C + Ly6G + cells were incubated with [U-13 C 6 ]-glucose, [U-13 C 5 ]-glutamine, or [U-13 C 16 ]-palmitate, and the labeling pattern of selected metabolites was determined by GC-MS measurement. As expected, glucose-derived carbon was incorporated not only into pyruvate but also into lactate, further corroborating the use of aerobic glycolysis by S. aureus infection-driven immature MDSCs (Fig. 5d). The flux of glutamine-derived carbon into TCA cycle intermediates showed that approximately 50% of carbon in the TCA cycle is derived from glutamine (Fig. 5d). This may indicate that due to the excessive conversion of glucose into lactate, MDSCs used anaplerosis of glutamine to replenish TCA cycle intermediates. The results depicted in Figure 5d also show that glucose and palmitate fueled the TCA cycle with acetyl-CoA at comparable levels. Infection-Driven Immature MDSCs Are Reliant on Glucose Availability for Terminal Differentiation We next investigated the reason why MDSCs failed to complete their maturation program and accumulated in the spleen of infected mice in an immature stage of differentiation. The metabolic analysis performed above indicated that MDSCs exhibited high glycolytic activity and a high rate of glucose consumption that may support the elevated biosynthetic requirements associated with the maturation process. However, glucose may become rapidly depleted in the spleen microenvironment due to its rapid consumption by the increased proportion of MDSCs accumulating in this organ. Furthermore, glucose supply during infection may be insufficient as a consequence of reduced food intake by infected mice. This was particularly evident in our study since S. aureus-infected mice exhibited progressive weight loss during the course of infection (Fig. 6a) and reduced blood glucose concentrations (Fig. 6b). Based on these observations, we postulated that limited glucose availability during infection may pose a bottleneck for MDSCs to undergo complete maturation. To substantiate this assumption, we investigated if the level of glucose availability influenced the maturation status of MDSCs. For this purpose, we determined the capacity of immature MDSCs isolated from the spleen of S. aureus-infected mice to undergo terminal maturation under in vitro culture conditions in the presence or absence of glucose or after inhibition of glycolysis by measuring the level of expression of surface Ly6G and intracellular CCL6. The results show that approximately 90% of MDSCs underwent terminal maturation at 96 h of culture in the presence of glucose, but only 40% underwent terminal maturation in cultures where glucose was removed from the culture medium (Fig. 6c, d). Terminal differentiation of MDSCs was completely suppressed when the inhibitor of glycolysis 2-DG was added to the cultures (Fig. 6c, d).
Cell viability was significantly lower in the absence of glucose or after inhibition of glycolysis with 2-DG than in cells cultured in the presence of glucose (online suppl. Fig. 10a). The impact of glucose availability on MDSC maturation was further confirmed by a trend toward a reduction in the terminal maturation of MDSCs observed upon exposure to decreased glucose concentrations (online suppl. Fig. 10b, c). Furthermore, only MDSCs cultured in vitro for 96 h in the presence of 2 mg/mL of glucose exhibited significantly lower capacity to inhibit T-cell proliferation than immature MDSCs prior to culture (online suppl. Fig. 10d). Based on these observations, we next investigated the effect of increasing glucose availability in vivo by supplementing S. aureus-infected mice with 10% glucose in drinking water for 10 days on the maturation of splenic MDSCs. Glucose-treated mice exhibited significantly less body weight loss (Fig. 7a) and higher levels of glucose in blood (Fig. 7b) than untreated mice. Importantly, glucose-treated mice exhibited a significantly higher number of mature neutrophils (Ly6G + CCL6 + ) in the spleen than the untreated group (Fig. 7c, d). However, glucose supplementation did not affect the bacterial loads in infected organs (Fig. 7e). Discussion In this study, we integrated scRNA-seq analysis and functional metabolic profiling to gain a deeper understanding of the generation and physiology of MDSCs in the context of S. aureus chronic infection. The results of the scRNA-seq analysis emphasize the vast heterogeneity and functional diversity of infection-driven MDSCs, which comprise a continuous spectrum of cell populations representing transitions between different states of granulocyte differentiation, ranging from promyelocytes to mature segmented neutrophils. In mice, the phenotypic distinction between mature neutrophils and immature progenitors has been difficult, and no phenotypic marker has been identified so far that enables these populations to be precisely separated. The scRNA-seq analysis performed in our study has identified surface expression of Ly6G and intracellular expression of CCL6 as phenotypic markers that allow mature neutrophils (Ly6G + CCL6 + ) to be distinguished from immature granulocyte precursors (Ly6G + CCL6 − ) by flow cytometry. However, CCL6 has the drawback of being intracellular, and its detection requires cell fixation and therefore does not allow recovery of live cells for subsequent functional studies. To date, relatively little information is available about the function and expression of CCL6. In mice, CCL6 is expressed in cells of granulocyte and macrophage lineages and is highly induced upon stimulation with GM-CSF [34]. Other studies have reported a role for this chemokine in inflammation and tissue remodeling [18]. The human homolog of CCL6 has not yet been identified, an issue that deserves further attention in future studies. We found that infection-driven MDSCs originate from both the bone marrow, most probably as a consequence of emergency granulopoiesis, and in situ within the spleen from extramedullary hematopoiesis. Emergency granulopoiesis induced by infection and the concomitant release of immature myeloid cells in the circulation seem to be a mechanism triggered to restore the neutrophil pool that is rapidly depleted from peripheral blood due to extravasation from the bloodstream into the sites of infection [35].
We also investigated the reason why infection-driven MDSCs fail to undergo terminal maturation and accumulate in an early stage of differentiation. We found that, similar to other pathological conditions [29][30][31], MDSCs in chronically infected mice are not irreversibly arrested in an immature stage and still retain their capacity to terminally differentiate into mature myeloid cells under in vitro culture conditions. Since metabolism plays an important role in immune cell differentiation and function [32], we explored a possible connection between the metabolic demands of MDSCs and their difficulties for undergoing terminal differentiation. The metabolic flux and isotope tracing analysis performed in our study indicate that infection-driven MDSCs use both aerobic glycolysis and oxidative phosphorylation for ATP production. The benefits of aerobic glycolysis for MDSCs may be both bioenergetics and biosynthesis. During glycolysis, 1 molecule of glucose is converted into 2 molecules of pyruvate with the concomitant generation of 2 molecules of ATP. Generally, pyruvate enters the mitochondria where it is converted into acetyl-CoA, which enters the TCA cycle. In certain circumstances, as those observed in our study in the infection-driven MDSCs, a proportion of pyruvate can be also converted into lactate in the cytosol by lactate dehydrogenase with concomitant regeneration of NAD + from NADH that keeps fueling the glycolytic pathway. Therefore, although the ATP generated per glucose molecule during aerobic glycolysis is rather low, a very high glycolytic flux like that detected in MDSCs from infected mice can produce high levels of ATP. Furthermore, in addition to ATP generation, glycolysis may provide biosynthetic intermediates to sup-port the synthesis of important molecular building blocks required by MDSCs for undergoing differentiation and maturation. Based on the results of the metabolic analysis, we speculated that MDSCs may rely on high glycolytic activity to complete their maturation process and that glucose limitation in the spleen microenvironment, possibly due to its rapid consumption by MDSCs and/or to a decline in glucose blood concentrations observed in infected mice, could prevent their complete maturation. This assumption proved to be true since MDSCs isolated from infected mice were capable of undergoing terminal differentiation under in vitro conditions when glucose was added to the culture medium, but differentiation was hampered in the absence of glucose or when glycolysis was inhibited. Furthermore, we could show that supplying S. aureusinfected mice with glucose in the drinking water resulted in improved blood glucose levels, ameliorated weight loss, and accelerated differentiation of immature myeloid cells in the spleen. However, the bacterial loads in the organs of infected mice were not affected by glucose supplementation. One possible explanation for this phenomenon could be that a proportion of MDSCs were still present in the glucose-treated mice that could interfere with effective T-cell responses. Furthermore, as glucose is the principal energy source of S. aureus, increasing glucose levels in treated mice could enhance S. aureus pathogenesis as reported by previous studies [36][37][38][39]. In summary, the results of our study have uncovered a link between metabolic alterations induced by infection and the accumulation of MDSCs.
2021-11-13T06:18:06.625Z
2021-11-11T00:00:00.000
{ "year": 2021, "sha1": "3ee87315b10391d50d218560f957e287e5b5ffe1", "oa_license": "CCBYNC", "oa_url": "https://www.karger.com/Article/Pdf/519306", "oa_status": "GOLD", "pdf_src": "Karger", "pdf_hash": "71203e47e1305e30f8dc48fe9f00f6dca8b98ddd", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
219431721
pes2o/s2orc
v3-fos-license
Integration of Monitoring Systems in the Public Administration of the Region Under the Conditions of Technological Transformation This paper presents the newly developed model for integrating monitoring systems in public administration at the regional level, including the possibilities for its technological transformation. The model is formed by authors on the example of the integration of monitoring systems for performance evaluation of the financial activities of local authorities at the territory of Tver region. Authors provide the assessment of the effectiveness of the inter-budget transfers provided to the municipal district from the regional budget, as well as the need for technological transformation of public administration. The article clarifies the concept of " integration of monitoring systems in the public administration of the region " , distinguishes the terms of " monitoring " and " monitoring system " . The technological transformation is highlighted as an element of digital transformation. The results of the paper are of interest to experts in the field of economics and management, the digital economy, the budget process and financial law. INTRODUCTION The performance of public administration largely depends on the information that has become the basis for making management decisions and can be obtained as a result of various monitoring, reflecting the efficiency of the implementation of methods and tools previously used by state and (or) municipal structures. The problem of modernizing public administration, increasing its effectiveness and quality, as well as the process of improving the system and structure of authorities, are not only legal problems, but also problems of a socio-political and socio-economic order. Moreover, these problems are closely interconnected and interdependent [1]. Today, various types of monitoring is implemented at the regional level, however, most often they are not automated, they are supervised by various authorities of the constituent entities of the Russian Federation. Many indicators in these monitoring are duplicated, and their value varies. This problem can be solved within the framework of the technological transformation of public administration at the regional level. The authors defines technological transformation as the first stage of digital transformation, which, at the same time, can be an independent element. The key difference between technological and digital transformations lies in the level of automation of monitoring processes and the degree of human participation in its implementation. The purpose of the paper is to develop a model for the integration of monitoring systems in public administration at the regional level, taking into account the possibility of technological transformation of processes. STUDY METHODOLOGY The study is based on the set of theories, which includes the theory of administrative management by H. Fayol, the theory of unbalanced growth by A. Hirschman, and feedback management theory. The principles adaptive for public administration implemented by regional and municipal authorities are taken into account. The main element of the research methodology was the structural level method, which allows us to consider monitoring processes as an element of the region's public administration system. As part of a system analysis, the mechanism for monitoring of municipal financial management systems was updated from the position of public administration. 
It was implemented to identify the effectiveness of the inter-budget transfers provided to the municipal district from the regional budget, and justify the management decisions based on the given results. The modeling method was used by the authors for the visualization of the proposed integration of the interbudget transfers' effectiveness monitoring into the public administration system at the regional level. As pilot territory for the experiment and analysis, the Tver region has been chosen. This region characterized over the past 3 years by an annual increase in the volume of inter-budget transfers provided to municipalities in the region. The model is formed on the example of the integration of monitoring systems in the field of assessing the activities of local authorities by regional authorities. The open data of executive authorities of the selected constituent entities of the Russian Federation was analyzed. Among them Primorsky Krai, Belgorod region, Ulyanovsk region, Khanty-Mansiysk Autonomous Okrug -Ugra. The rationale for the use of open data is the latest revision of the Federal Law 8-FZ "On Ensuring Access to the Information on the Activities of State Bodies and Local Government Bodies" from February 9, 2009. Integration of monitoring systems in the public administration of the region: concept and technological transformation Integration in translation from Latin is understood as restoration, replenishment. V.S. Evteev considers integration as a side of development associated with the unification into a whole of previously heterogeneous parts and elements [2]. The interdisciplinary term "monitoring" is interpreted in GOST R ISO 9000-2015 National Standard of the Russian Federation. "Quality Management Systems. Basic Provisions and Terms" determining the status of a system, process, product, service or activity, reflected at various stages or periods, in the context of public administration, has the specifics of determining the conceptual space [3]. Charles S. Wesson in his works notes that, as a result of monitoring for certain indicators, it is possible to establish (identify) trends, thanks to which it is possible to predict the values of indicators and pre-determine the state of the system, instead of trying to shift it to a more stable state, using a lagging characteristic [4 ]. More than 25 years ago, Joseph S. Wholey and Harry P. Hatry noted the importance of applying methodologies for assessing the effectiveness of various aspects of public administration in the framework of monitoring [5]. Irena Segalovičienė, examining public administration, notes that the formation of performance monitoring indicators is the most significant step in planning. Development of the performance monitoring indicators is an important political and methodological direction [6]. According to the authors, the concept of "monitoring system" is broader in relation to the concept of "monitoring", since in addition to the process and objects, it includes the subjects of implementation. Public administration at the regional level is the activity of the authorities of a constituent entity of the Russian Federation, as well as structures within the authorities. Thus, the integration of monitoring systems in the public administration of a region refers to the combination of processes for determining the status of management entities, the exclusion of duplication of functions of regional authorities (management entities), and the formation of unified final results. 
Under the conditions of technological transformation, this integration implies automatic aggregation of indicators and the use of an information system for inter-level and interagency interaction. A further effective transition from technological to digital transformation should be accompanied by an increase in the coefficient of reduction of labor costs for monitoring. Monitoring of local government performance evaluation by regional authorities Currently, in the Russian Federation, most municipal government bodies use inter-budget transfers provided to the municipal district from the regional budget to resolve issues of local importance. At the same time, the question of the effectiveness of the use of intergovernmental transfers by local governments remains open; it can be considered from the perspective of analyzing the influence of the transferred funds on the growth of the municipality's own revenue base. A high level of performance in this field can only be achieved within the framework of financial management implemented at the municipal level and the justification of appropriate management decisions by local authorities. In part, the performance analysis can be carried out on the basis of monitoring data on the quality of financial management in municipalities, taking into consideration the regulatory framework of the constituent entities of the Russian Federation. Monitoring of financial management quality allows the actions taken by authorities to ensure efficient use of budgetary resources to be analyzed and evaluated on a regular basis [7]. An analysis of the 2018 data from the Tver region (data for 2019 are not publicly available) leads to the conclusion that, in the period under review and compared to 2017, local governments in 9 municipal districts of the Tver region significantly improved the quality of financial management; at the same time, according to the rating, none of the municipalities exceeded the threshold of 90% of the maximum possible points, and in 16 municipalities there was a negative trend. In terms of the ongoing research, the most interesting indicators of the group are "ensuring the balance and medium-term sustainability of local budgets", "ensuring effective financial planning and spending of funds of the municipality", and "ensuring the effective management of municipal institutions of the Tver region". Based on the characteristics of the problems identified by the Ministry of Finance of the Tver region, the following reasons caused the failure to achieve the maximum possible assessment of the quality of municipal finance management in 2018: local governments are not actively involved in collecting local taxes; in 40 out of 43 municipalities the mechanism of self-taxation of citizens is not in use (while objectively this possibility is excluded only in 2 municipalities that have the status of closed administrative-territorial entities); a reduction of the revenue potential of the territories was identified in 7 municipalities (in 2017, it was identified in 11 municipalities); inefficient use of municipal property was identified in 25 municipalities (in 2017, it was identified in 20 municipalities); and poor quality of planning of income from the sale of assets was identified in 16 municipalities (the number of such municipalities decreased compared to 2017). The task of increasing revenues from income-generating activities of the municipal institutions of the Tver region remains important.
Thus, the existing monitoring of financial management makes it possible to identify the main problems of forming the revenue base of the municipal district, but it does not reflect the efficiency of using inter-budget transfers provided from the regional budget. However, as a rule, such monitoring does not imply the use of calculation formulas directly reflecting the studied local government performance. The monitoring process involves only some indicators that can be included in these formulas, or indirectly reflect the achievement of any effects. This confirms the problems of implementing adequate financial management. From the position of public administration at the regional level, the existing monitoring systems reflect only the effectiveness of financial resources usage, including the inter-budget transfers, and not the achievement of local government performance goals. Evaluation of the effectiveness of inter-budget transfers provided to municipal districts from the regional budget can be an independent tool for making managerial decisions by public structures of a constituent entity of the Russian Federation or a municipality, or it can be an element of integrated regional monitoring of financial management and evaluation of the local government performance. The introduction of assessment of the effectiveness of inter-budget transfers to municipal districts in regional monitoring of financial management and (or) an evaluation of the local government performance implies its inclusion as one of the indicators of assessment. At the same time, it is advisable to introduce a single approach, which will eliminate duplication of processes. As criteria for the evaluation, autonomy indicators can be used, assessing the inter-budget transfers from the point of view of financial equalization and the adequacy of budgetary resources for resolving issues of local importance, and also taking into account the coefficient reflecting the financing of transferred state tasks. The proposed integration of monitoring, including an assessment of the effectiveness of inter-budget transfers provided to municipalities, in the public administration system at the regional level, using the example of municipal districts of the Tver region, is presented in Figure 1 (IBT: inter-budget transfer provided to the municipal district from the regional budget). This integration can be considered as an adaptive model that can be implemented in any region. In this case, the model is understood in a broad interpretation, as a substitute object for the original, designed to study some of its properties [8]. The integration under study in the Tver region requires the introduction in the region of an information system of inter-level and interagency cooperation, as well as software tools that provide aggregation of indicators of municipalities and assessment within the framework of calculation formulas. A similar situation can be noted in other regions of the Russian Federation. The best practices of technological transformation of processes in public administration at the regional level have been implemented in the Khanty-Mansi Autonomous Okrug -Ugra, Primorsky Krai, the Ulyanovsk region, and the Belgorod region. For the majority of these regions of the Russian Federation, information systems integrated at the regional and municipal levels are used for monitoring purposes, which ensures the required inter-level interaction [9].
So, for example, the inter-level interaction provided within the framework of the information system in the Khanty-Mansiysk Autonomous Okrug -Ugra (KMAO-Ugra) allows: -to ensure bilateral information interaction between the public administration structures of the KMAO-Ugra, the administrations of municipalities of the region, the central project office of the KMAO-Ugra; -to ensure bilateral information interaction between municipal project offices, structural divisions of municipal administrations, functional project offices of branch executive bodies of the KMAO-Ugra; -to provide information from the project committee of the KMAO-Ugra to the municipal project offices; -to provide information from the central project office of the KMAO-Ugra to the functional design offices of sectoral executive bodies of the KMAO-Ugra; -to introduce the monitoring of the implementation of national projects in the region and municipalities, as well as other monitoring in the field of economic management in the regional space [10]. CONCLUSIONS The executed analysis results allows to develop the following conclusions: -integration of monitoring systems in the public administration of a region refers to the combination of processes for the determination of the status of administrative entities, the exclusion of duplication of functions of regional authorities (administrative entities), the development of unified final results; -the concept of "monitoring system" is broader in relation to the concept of "monitoring", since in addition to the process and objects, it includes the subjects of implementation; -under the conditions of technological transformation, this integration implies automatic aggregation of indicators, the use of an information system of inter-level and interagency interactions; -assessing the effectiveness of inter-budget transfers provided to municipal districts from the regional budget can be an independent tool for making managerial decisions by public structures of a constituent entity of the Russian Federation or a municipality, or it can be an element of integrated regional monitoring of financial management and evaluation of local governments' performance. The proposed integration can be considered as an adaptive model, and should be recommended for implementation in all regions. The results of the study can be embedded in the activities of public authorities at the state and municipal levels. The value of the study can be considered from the point of view of experts in the field of economics and management, the digital economy, the budget process and financial law.
2020-05-21T00:10:23.232Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "e0666aa2501ef5c3488eb82258967a45da8d8766", "oa_license": "CCBYNC", "oa_url": "https://www.atlantis-press.com/article/125939815.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "a520595de9a5f4bef33bde8571ad24d115e04d60", "s2fieldsofstudy": [ "Economics", "Political Science", "Business" ], "extfieldsofstudy": [ "Business" ] }
5129954
pes2o/s2orc
v3-fos-license
Microfluidic Technology in Vascular Research Vascular cell biology is an area of research with great biomedical relevance. Vascular dysfunction is involved in major diseases such as atherosclerosis, diabetes, and cancer. However, when studying vascular cell biology in the laboratory, it is difficult to mimic the dynamic, three-dimensional microenvironment that is found in vivo. Microfluidic technology offers unique possibilities to overcome this difficulty. In this review, an overview of the recent applications of microfluidic technology in the field of vascular biological research will be given. Examples of how microfluidics can be used to generate shear stresses, growth factor gradients, cocultures, and migration assays will be provided. The use of microfluidic devices in studying three-dimensional models of vascular tissue will be discussed. It is concluded that microfluidic technology offers great possibilities to systematically study vascular cell biology with setups that more closely mimic the in vivo situation than those that are generated with conventional methods. Introduction Vascular science is an active area of research. Scientists worldwide are trying to unravel the mechanisms that determine vascular function and dysfunction. Important objects of study in this field of research include the maintenance of vascular tone [1], regulation of inflammation [2], sprouting of new blood vessels [3], regulation of cell survival [4], and the differentiation of stem cells into vascular tissue [5]. Vascular science is a field with a strong translational focus, combining results from fundamental molecular and cell biology with in vitro models of blood vessels and in vivo tests to develop insight in vascular physiology and treatment of disease. Vascular dysfunction is an important factor in major diseases like atherosclerosis [6], cancer [7], and diabetes [8]. The basis for understanding the functioning of blood vessels lies in understanding its building blocks, the vascular cells. Therefore, a lot of research is focused on how endothelial cells or smooth muscle cells react to relevant biological, chemical, or physical cues in vitro. Usually, this work is carried out by using conventional methods, culturing cells of animal or human origin in wells-plates, subjecting them to the aforementioned stimuli, and analyzing the outcome by biological or biochemical techniques. However, in vivo dynamic conditions are present: vascular endothelial cells are constantly subjected to shear stress caused by the flowing blood [9], while smooth muscle cells are stretched because of distension of the blood vessel during the cardiac cycle [10]. Moreover, vascular cells are embedded in a three-dimensional environment consisting of an elastic extracellular matrix [11], other cells [12], and flowing blood, with its platelets [13], red blood cells [14], and leukocytes [15] (Figure 1). Both the three-dimensional environment and the dynamic mechanical changes with each cardiac cycle are very important factors in vascular cell functioning. It is advantageous to design laboratory setups that allow researchers to include these factors and control the relevant parameters. The main challenge when building such setups is that they should still be easy to assemble, handle, and combine with conventional analysis techniques. In the recent years, the field of microfluidic technology has gained much scientific interest among biologists, biochemists, and biophysicists ( Figure 2). 
We feel that microfluidic technology holds great promise to overcome the challenge of performing in vitro experiments with more physiologically realistic setups that are still simple enough to be used in everyday laboratory practice. Moreover, microfluidic technology allows for increasing scale and parallelization of current research, leading to more comprehensive insights into cell and tissue physiology. The advantages of microfluidic technology for cell culture in general have been reviewed elsewhere [16,17]. In this review, we will focus specifically on the application of microfluidic devices in vascular cell biology research. Fabrication. Microfluidic technology deals with the design, fabrication, and application of devices for manipulation of fluids on the micrometer scale. Typically, the sizes of features in these devices range from several micrometers to a few hundred micrometers. The amounts of fluid that are manipulated inside these devices are typically in the picoliter to nanoliter range. Microfluidic devices can be fabricated using metal, glass, or polymer materials. Most devices that are used in combination with cell biological research are made of glass or the silicone rubber polydimethylsiloxane (PDMS), because these materials are cheap, biocompatible, and transparent. Because all microfluidic studies that are discussed in this review use microfluidic devices of PDMS (sometimes combined with glass components), the process of producing these devices will be described shortly (see also Figure 3). PDMS devices are produced by soft lithography replica molding [18], which means that the devices are elastic replicas of a stiff, reusable mold. The process starts by producing the stiff mold with the desired structures. The mold is usually made of silicon with micrometer-size structures produced either by plasma-etching of the silicon plate or by building on top of the plate with the epoxy-based, photo-crosslinkable polymer SU-8. A mixture of PDMS oligomers is poured on top of this mold, allowed to solidify by crosslinking, and then peeled off from the mold. In order to create sealed channels, the surface of the PDMS replica is activated with oxygen plasma and bound to a PDMS or glass surface. Holes can be punctured to reach the closed channel structure and tubing can be connected to manipulate fluid inside the channels. The silicon master-molds need to be produced in a clean room, but replica molding can be performed under standard laboratory conditions. Once the master-mold has been created, producing new microfluidic devices by this method takes only a few hours. Because the materials are cheap, microfluidic devices can be discarded after every experiment. Cells and Microfluidic Technology. Generally speaking, PDMS microfluidic devices offer a number of distinct advantages over conventional techniques for cell culturing, manipulation, and analysis. The main feature of microfluidic devices that makes them suitable for use in cell biology is that they are smaller than conventional setups (for an impression of the size of a microfluidic channel, see Figure 4(a)). Because of this small size, only limited amounts of cells, media, and reagents are needed. This leads to a number of significant benefits.
First of all, if experiments are to be conducted with rare primary cell material or expensive drugs, it is quite advantageous to use only small quantities of these valuable materials. Secondly, if cultures are to be maintained under conditions of constant fluid flow, small sizes are a considerable advantage. In conventional bioreactors, cell culture medium is usually collected and re-used after it has passed through the cell culture chamber. The medium is then completely replaced every few days. In microfluidic devices, a constant flux of fresh medium can be used, because the volumes involved are orders of magnitude smaller. The third benefit is that the small, planar and transparent microfluidic setups are easily combined with bright field and fluorescence microscopy or spectroscopy, because they fit easily on stages of conventional microscopes. This facilitates monitoring cell behavior for long periods and with high magnification during the experiment. It is important to realize that cells that are cultured inside microfluidic devices need to be subjected to a constant flux of fresh medium. When the small volumes in the cellcontaining devices would be left under static conditions, nutrients would be depleted quickly, whereas waste products would increase to undesirable concentrations. The fact that constant refreshment of medium is needed may seem cumbersome at first glance. However, under physiological conditions, all cell types need a flux of nutrients and waste products. The flow conditions in microfluidic devices mimic this process more closely than in vitro culturing in wells plates [19]. Because of the small dimensions of microfluidic channels, fluid flow is fully laminar, meaning that the flow patterns are completely predictable and turbulent mixing does not occur. In some applications, such as microreactors, which require mixing of different reagents, this laminar flow pattern is an obstacle that has to be overcome. However, the laminar nature of the fluid flow can also be used to perform unique experiments that are difficult or nearly impossible to perform with conventional methods. Most of these experiments rely on parallel fluid flows: if two streams of fluid enter the chip in a parallel fashion, the two streams will remain separated and mixing of them will only occur by diffusion. Thus, the degree of mixing can be tuned by changing the flow rate: the higher the flow rate, the shorter the residence time inside the device, the less the streams mix. Therefore, as long as flow rates are sufficiently high, cells on one side of the device can be treated with one substance, whereas cells on the other side are treated with another substance. As a matter of fact, even two sides of a single cell can be treated in this way [20]. Another important main feature of microfluidic technology is that it is suitable for high-throughput, comprehensive studies of cell biology. This means that the effect of multiple factors and parameters on cell functioning can be screened in one assay. Increasing throughput is an active area of research in the field of microfluidics. Efforts are made to merge microfluidic technology with microarray and microtitre plate technology [21][22][23]. Also, researchers are dedicated to integrate multiple steps, such as cell culturing, lysis, and analysis in one device. 
Numerous examples of this parallel and serial microfluidic biochemical analysis, also known as lab-on-a-chip, have already been reported and are starting to be implemented in cell-containing microfluidic devices [16].

Microfluidic Technology and Vascular Cells

3.1. The Endothelial Mechanoresponse. Vascular endothelial cells are highly responsive to shear stress that is caused by the flow of fluid over their surface. This shear stress is the result of the presence of a fluid velocity gradient in the cross section of a tube. The velocity of the fluid next to the walls is zero, whereas the velocity is maximal in the center of the channel. The steeper this gradient, the higher the shear forces that act on the vessel wall. The biological response to this mechanical stimulus, the endothelial mechanoresponse, has been found to be a key process in preventing vascular disease [24]. The mechanoresponse is usually studied in vitro by subjecting endothelial cells to shear stress in parallel plate flow chambers. Microfluidic devices can be considered as miniaturized versions of these setups. Because shear stress is proportional to flow rate and inversely proportional to channel dimensions, only low flow rates are needed in microfluidic channels to mimic the high shear stresses found in the human body. Song et al. [25] took advantage of this fact by designing a microfluidic device that can subject endothelial cells to physiological levels of shear stress in multiple parallel channels. They showed that a flow rate of less than 200 μL per hour is already enough to make the sheared endothelial cells elongate and orient in the direction of the flow, which is a prominent feature of the endothelial mechanoresponse that is also found in vivo. This reorientation is also reflected in the actin cytoskeleton of the cells. In our laboratory, we subjected cells to a shear stress of 1 Pa for 12 hours and then stained the actin filaments with phalloidin-FITC. Most filaments were aligned and oriented in the flow direction (Figure 4(b)). Recently, Tkachenko et al. [26] also reported the design of a microfluidic device that allows for real-time tracking of endothelial cells that are subjected to shear stress. They could generate shear stresses ranging from 0.01 to 0.9 Pa in parallel channels, using flow rates in the range of several milliliters per hour. In contrast, flow rates are in the order of hundreds of milliliters per hour for the conventional, larger, parallel plate flow chambers. Because of the small volumes of reagents that are needed, and the potential parallelized design of microfluidic devices, they are an ideal platform for screening of compounds that may have an impact on the mechanoresponse. We have recently developed such an assay, in which the morphological rearrangements of endothelial cells are used to quantify the mechanoresponse. Using this assay, the impact of inhibitory drugs on the mechanoresponse can be detected (paper submitted).

(Figure 5 caption: parallel laminar streams can be used to generate and maintain steady gradients in a channel. (b) Here, three parallel inlet streams were used, containing 0 μg/mL, 5 μg/mL, and 10 μg/mL dextran-rhodamine, respectively; when the fluorescence is quantified over the width of the channel, an almost linear gradient is observed. (c) By using parallel flows, the middle part of the channel was treated with trypsin, so that the endothelial cells there are selectively removed, creating an artificial wound whose closure can be followed over time to quantify cell migration rates.)
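To make these flow arguments concrete, the short calculation below estimates the Reynolds number and the wall shear stress for a representative rectangular microchannel. It is only an order-of-magnitude sketch: the channel dimensions, flow rates, and water-like fluid properties are assumed values chosen for illustration, not parameters of any specific device discussed in this review, and the shear formula is the common parallel-plate approximation τ = 6μQ/(wh²), valid when the channel is much wider than it is high.

```python
# Order-of-magnitude estimates for flow in a shallow rectangular microchannel.
# All geometry, flow-rate, and fluid values are illustrative assumptions.

rho = 1000.0     # density of a water-like medium (kg/m^3)
mu = 1.0e-3      # dynamic viscosity (Pa*s)
width = 300e-6   # channel width (m)
height = 30e-6   # channel height (m); parallel-plate formula assumes width >> height

def channel_flow_estimates(flow_rate_ul_per_h):
    """Return (Reynolds number, wall shear stress in Pa) for a given flow rate."""
    q = flow_rate_ul_per_h * 1e-9 / 3600.0          # uL/h -> m^3/s
    velocity = q / (width * height)                 # mean velocity (m/s)
    d_h = 2 * width * height / (width + height)     # hydraulic diameter (m)
    reynolds = rho * velocity * d_h / mu
    shear = 6.0 * mu * q / (width * height ** 2)    # parallel-plate approximation
    return reynolds, shear

for q_ul_h in (50, 100, 200, 400):
    re, tau = channel_flow_estimates(q_ul_h)
    print(f"{q_ul_h:4d} uL/h -> Re = {re:.4f}, wall shear stress = {tau:.2f} Pa")

# The Reynolds number stays orders of magnitude below the ~2000 threshold for
# turbulence, so the flow is firmly laminar, while a few hundred microliters
# per hour already produces shear stresses around 1 Pa, i.e. in the
# physiological range, which is why such low flow rates suffice.
```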
Another well-known effect of applying shear stress to endothelial cells is the release of the vasodilator nitric oxide [27]. Microfluidic assays have already been reported that can detect the production of nitric oxide in response to chemical stimuli amperometrically [28] or by fluorescence [29]. This provides researchers with an interesting tool to study nitric oxide release in response to both mechanical and chemical stimuli. When increasing the flow rate and the resultant shear stress, microfluidic devices can also be used to study the adhesion strength of endothelial cells to their underlying substrate. Young et al. [30] performed such an experiment with endothelial cells of different origins and two types of matrix proteins. When the cells were subjected to a shear stress that is about ten times higher than typical physiological values, a certain percentage of cells detached from the surface. In this manner, it was possible to give a semiquantitative indication of the strength of adhesion of different cells on different substrates. These types of experiments used to be performed with large, parallel plate shear devices that consumed large amounts of media, cells, and reagents [31]. Downscaling of these setups to micrometer dimensions is a clear advantage.

Migration Assays. As discussed earlier, multiple parallel fluid flows can be introduced in one microfluidic channel. Transport of components from one flow to the other only occurs by diffusion (Figure 5(a)). If flow rates are low, there is sufficient time for the parallel streams to exchange components. If one of the streams contains a drug or active compound, stable gradients can be generated by taking advantage of this diffusion. An example of such a gradient that was produced in our laboratory is shown in Figure 5(b). There are a number of studies that show how this phenomenon can be used when experimenting with vascular cells. Most of these studies focus on migration of vascular cells in response to gradients of physical or biochemical cues. Studying and understanding cell migration is important, because it is a process involved in embryogenesis, wound healing, and tumorigenesis. Barkefors et al. [32] studied migration of endothelial cells in gradients of vascular endothelial growth factor (VEGF). They designed a device with three inlets, generating three parallel fluid streams in the main channel. When VEGF was added to the middle stream, an increasing gradient from the sides of the channel towards the middle was generated. The steepness of this gradient could be tuned by adjusting the flow rates: the slower the flow rate, the longer the residence time in the channel, the more time there is for VEGF to diffuse, and the shallower the gradient becomes. When endothelial cells were cultured in this stable gradient of VEGF, they preferentially migrated towards the middle of the channel. Because the researchers had control over the shape of the gradient, they could show that steep gradients induce faster migration. Moreover, it was found that endothelial cells migrated fastest in gradients from 0 to 50 ng/mL, whereas they were not able to sense gradients from 50 to 100 ng/mL due to saturation of the available receptors.
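The flow-rate dependence of gradient steepness described for the Barkefors device follows from a simple diffusion scaling: the sideways spread of a solute grows roughly with the square root of its residence time in the channel. The sketch below illustrates this with assumed numbers; the diffusion coefficient is only a typical order of magnitude for a protein of VEGF's size, and the channel length and velocities are hypothetical.

```python
# Scaling estimate of how far a growth factor diffuses sideways before the
# flow carries it past an observation point 1 cm downstream. All values are
# illustrative assumptions, not measurements from the cited studies.
import math

D = 1.0e-10             # protein diffusion coefficient (m^2/s), order of magnitude
channel_length = 0.01   # distance from the inlets to the observation point (m)

def lateral_spread(mean_velocity_m_s):
    """Approximate diffusive spreading width (m) after the residence time L / v."""
    residence_time = channel_length / mean_velocity_m_s
    return math.sqrt(2.0 * D * residence_time)

for v_um_s in (100, 500, 2000, 10000):
    w = lateral_spread(v_um_s * 1e-6)
    print(f"v = {v_um_s:6d} um/s -> lateral spread of roughly {w * 1e6:6.0f} um")

# Slower flow means a longer residence time and therefore more sideways
# diffusion (a shallow gradient); faster flow keeps the parallel streams
# sharply separated, exactly the tuning behavior described above.
```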
Biochemical cues, such as the growth factors used in this study, are not the only relevant stimuli for vascular cell migration. In an elegant study, Zaari et al. [33] showed that smooth muscle cells tend to migrate towards mechanically stiffer underlying substrates. To reach this conclusion, the authors designed a microfluidic device that could generate a gradient of crosslinker, mixed with a solution of acrylamide. A layer of gel with a gradient of stiffness was produced by crosslinking the mixture with UV light into a polyacrylamide network. When these gels were taken out of the microfluidic devices and smooth muscle cells were seeded on them, all cells tended to migrate towards the side of the gel with higher stiffness. Apart from migration assays that rely on gradients, the parallel laminar fluid streams can also be used to bring the most conventional migration assay to a microfluidic scale. This assay is the scratch assay or wound healing assay. It works by growing cells in a monolayer, artificially creating a scratch and then following how this scratch is closed by directed migration of the surrounding cells. In a microfluidic device, the artificial scratch can be generated by adding the serine protease trypsin to one of the parallel fluid streams. When one side of the channel has been cleared of cells by trypsinization, the migration of the remaining cells can be followed over time to quantify directed migration. So far, this assay has only been published with data on fibroblasts [34], but work in our group has shown that it is also possible with endothelial cells (Figure 5(c)). The advantage of carrying out this assay in a microfluidic device is that it can be more easily combined with stimuli such as shear stress or growth factor gradients. Cell Interactions. The principle of parallel streams is not just useful in studies of cell migration. It can also be used to pattern cells inside a microfluidic device. This is important when interactions between cells are the object of study. Micrometer-scale patterning of cells can be achieved by stamping adhesive proteins on a substrate [35] or by temporarily confining cells in a microfluidic device until they adhere, after which the device is removed from the surface [36]. By using parallel streams, cells can be patterned without the need of removing the microfluidic device afterwards. When adding one cell type to one stream and another type to the parallel stream, cells can be cocultured in direct contact with each other inside a microfluidic device [37]. For vascular research, this method could be used to pattern endothelial cells and smooth muscle cells in one device. The planar nature of microfluidic devices would provide great opportunities for studying interactions between these cell types. In literature, there are numerous reports of microfluidic setups that are used for vascular cell interaction studies. For example, Song et al. [38] studied the interaction between endothelial cells and circulating tumor cells, a process that is important for cancer metastasis. They developed a device in which a layer of endothelial cells can be stimulated with chemokines from the bottom, while being treated simultaneously with a suspension of breast cancer cells from the top. When the endothelium was stimulated with CXCL12, a chemokine implicated in metastasis, they found that more cancer cells adhered to the layer of endothelial cells than under basal conditions. 
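A readout like the one just described, how many circulating cells end up adhering to the endothelium, is straightforward to quantify from fluorescence images. The sketch below is one deliberately simplified way of doing so, assuming bright, well-separated cells on a dark background; the intensity threshold, the object-size cutoff, and the synthetic image are placeholder assumptions rather than parameters from the cited work.

```python
# Minimal sketch for counting adherent cells (or particles) in a fluorescence
# image of an endothelial monolayer after a flow experiment. It assumes the
# adherent cells are bright objects on a dark background, so a single
# intensity threshold isolates them; threshold, minimum object size, and the
# synthetic image below are placeholders for real data.
import numpy as np
from scipy import ndimage

def count_adherent_cells(image, threshold, min_pixels=20):
    """Count connected bright objects larger than min_pixels."""
    mask = image > threshold
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return int(np.sum(np.asarray(sizes) >= min_pixels))

# Synthetic stand-in image: a dim background with a handful of bright blobs.
rng = np.random.default_rng(3)
image = rng.normal(100, 5, size=(256, 256))
for cy, cx in rng.integers(20, 236, size=(12, 2)):
    image[cy - 3:cy + 3, cx - 3:cx + 3] += 120.0   # 6x6-pixel "cells"

print("adherent objects counted:", count_adherent_cells(image, threshold=160))
```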
Another study on metastasis used a microfluidic chip with small, gel-coated gaps, overlaid with a monolayer of endothelial cells to mimic the basement membrane and the endothelium, respectively [39]. Using this microfluidic in vitro model of a blood vessel, tumor cell migration could be quantified and studied in great detail with time-lapse microscopy. It is not just interactions between tumor cells and endothelium that are an interesting object of research in vascular science. Studying the interactions between other circulating cells and endothelial cells is also important. For example, the binding of leukocytes to endothelial cells is an essential step in inflammation [40], while the endothelium-mediated activation of blood platelets is important in clotting and thrombosis [41]. Multiple reports have been published by groups that studied the adhesion of leukocytes [42] or platelets [43][44][45] to endothelial cells or endothelial cellderived adhesion factors in microfluidic devices. These reports show that microfluidic cell interaction studies require less sample and reagents than similar, conventional studies. Moreover, a number of these studies already show increased throughput by using parallel channels in one device [42,44,45]. Three-Dimensional Culturing. An important factor in vascular cell physiology is its three-dimensional microenvironment. Cells are embedded in an environment that comprises other cells, extracellular matrix proteins, bodily fluids, and blood. Three-dimensional cell culturing in the laboratory can be performed by incorporating cells in a hydrogel matrix (e.g., the commercially available, collagenbased Matrigel), or by growing cells on top of this matrix, allowing them to migrate into the gel [46]. Still, the complex real-life, three-dimensional microenvironment is usually reduced to a two-dimensional system when experiments are carried out on cells in vitro. This is more convenient, because with such a system cells can easily be supplied with fresh growth medium, growth factors, and other soluble compounds. Moreover, a two-dimensional setup is more compatible with microscopy and imaging. However, when using microfluidic devices, replenishment of medium, generation of gradients, and microscopic imaging is relatively easy to realize in a three-dimensional culturing environment. A good example is the recent publication by Vickerman et al. [47] They describe a microfluidic device with two parallel channels, connected by a gel chamber. The gel chamber is filled with a collagen-based hydrogel and endothelial cells are grown in one of the channels. By generating a gradient of soluble growth factors, the endothelial cells grow into the gel and even form open capillaries that span the entire gel chamber from channel to channel. In this particular article, the gel is pipetted into the gel chamber by microinjection before assembling the device. However, using the laminar flow properties discussed earlier in this review, hydrogels can also be formed in situ and even be patterned and confined to certain regions of the microfluidic device [48]. The great potential of these three-dimensional culturing techniques was recently underlined by Barkefors et al. [49], who cultured ex vivo kidney tissue and followed the formation of blood vessels in response to a VEGF gradient. 
Because of the small scale of microfluidic devices, it is possible to advance this proofof-concept study towards high-throughput assays in order to screen for compounds that affect blood vessel formation in such realistic models. A good example of this highthroughput trend is the recent study by Hsiao et al. [50], who studied three-dimensional, spheroid cocultures of prostate cancer cells and endothelial cells in a microfluidic device with 28 side chambers that could all harbor a tumorous spheroid. Compound Screening Assays. In biomedical engineering, a lot of research is dedicated to developing particle systems that carry drugs, proteins, DNA for gene therapy or siRNA for gene silencing to their proper site of action. In this field of research, it is important to have a way to quickly screen for adhesion to endothelial cells-the first barrier that particles encounter when injected intravenously. Screening under static conditions in well plates ignores the mechanical forces caused by the flowing blood, which counter particle adhesion. A recent study by our group, using fluorescent siRNA-containing polymer particles, shows that microfluidic technology allows for quick screening of particle adhesion to endothelial cells under dynamic conditions [51]. More realistic microfluidic models of microvasculature have been developed by Prabhakarpandian et al. [52] for the same purposes. These devices contain channels that are designed after real capillary networks. They show that capillary geometry has a strong influence on local mechanical conditions and particle adhesion. The fabrication of a more complex microfluidic device that tries to mimic the tight blood-brain barrier, while still being easy-to-use in high-throughput assays, was recently reported by Genes et al. [53]. It is to be expected that high-throughput screening in these more realistic vascular models of the in vivo situation will become the norm in drug development and material science in the future. Stem Cells and Tissue Engineering. Regenerative medicine is the field in which researchers try to engineer tissues in the laboratory to replace damaged or missing tissue in the human body. It is a multi disciplinary field, which combines materials science with cell biology and biomedicine. A major challenge in this field is the production of vascularized tissue for implantation. In order to achieve this, stem cells must be stimulated to differentiate into vascular cells, and these vascular cells need to arrange themselves into a vascular network. Microfluidic technology can be of use in both processes. For differentiation of human stem cells to vascular tissue, many factors can be of influence. Because human stem cells and the inducing factors are relatively difficult to obtain, it is advantageous to perform tests in a microfluidic setting instead of in a macroscopic assay. Figallo et al. [54] developed a 12-well micro-bioreactor in which human embryonic stem cells were directed towards a vascular phenotype by varying growth factors, perfusion, and cell seeding density. Using such microfluidic devices instead of conventional techniques saves reagents and allows for more flexibility in terms of culture parameters. The other aspect of vascular tissue engineering in which microfluidic technology can be of help is in preparing vascular networks that can be incorporated into tissue constructs. This can be accomplished by two approaches. First of all, a "synthetic capillary" can be engineered by using microfluidic technology. 
This works by designing a microfluidic channel of which the walls contain tiny gaps of only a few micrometer in diameter and tens of micrometers in length. Behind these gaps, compartments are located in which tissue can be grown. When medium is pumped through the channel, the gaps act as a simple endothelium-like barrier, limiting mass transport to the tissue compartments. An example of such a microfluidic design was reported by Lee et al. [55], who used this principle to design synthetic analogs of liver sinusoids. Using this approach, mass transport over the endothelium-like barrier can be tweaked to mimic the values found in human vessels [56,57]. A second approach to use microfluidic technology for generation of vascular networks is based on the notion that the microfluidic device itself can be considered as a three-dimensional "scaffold" in which cells can be grown. When all sides of a channel are completely covered with endothelial cells, microvascular networks are generated that mimic in vivo networks ( Figure 6). It has been shown that this approach will in principle work: PDMS devices with microvascular network morphologies can be completely covered with endothelial cells to generate capillary-like structures [58]. However, PDMS is a nonbiodegradable polymer. Biodegradability is paramount if eventually the material is to be replaced with functional tissue. Another study has shown that the same type of system can also be built with the biodegradable polymer poly (glycerol sebacate) [59]. Still, these microfluidic devices consist of only one flat layer of vascular structures. A major challenge will be to build biodegradable, truly three-dimensional microvascular networks that can be combined with other materials and cells in regenerative medicine. Novel rapid techniques for three-dimensional device fabrication, such as stereolithography with biodegradable polymers [60], hold great promise to overcome this challenge. Conclusion The examples given in this review clearly illustrate the fact that the use of microfluidic technology facilitates current vascular research and, more importantly, opens up novel areas of research that are not possible with more conventional setups and techniques. It is important to realize not only that microfluidic technology paves the way for more realistic in vitro models in vascular cell biology but also that the technology is still in its infancy in terms of throughput. Almost all studies described in this review are proof-ofprinciple experiments that require a lot of personal effort and intervention by the researcher. However, automation, standardization, and increasing scale will all be natural stages in the maturation of microfluidic technology. These improvements will boost the systematic nature of vascular cell biological research in the future.
Characterizing Skin Cancer in Transplant Recipients by Fitzpatrick Skin Phototype Introduction Nearly half of organ transplants occur annually in patients with Fitzpatrick skin phototypes (Fitz type) III–VI. Organ transplant recipients (OTRs) are at risk for sequelae of chronic immunosuppression, of which skin cancer is common. As literature regarding skin cancer risk is largely conducted in OTRs with Fitz types I and II, we aimed to further characterize the incidence and risk factors for skin cancer in OTRs with higher Fitz types. Methods We conducted a retrospective review of OTRs with Fitz types III–VI evaluated by dermatology between 1 January 2012 and 1 June 2022. The primary outcome of this study was development of skin cancer post-transplant. Secondary outcomes included risk factors for skin cancer development. Data were analyzed using two-sample t-tests and Pearson’s chi-squared. Results Of 530 OTRs, 193 had Fitz type III or higher. Ten patients (5.18%) developed 87 skin cancers and one recurrence at a mean of 5.17 years posttransplant. Patients with skin cancer self-identified as Black (70%, p-value ≤ 0.001), male (70%, p-value ≤ 0.001), and kidney transplant recipients (70%, p-value ≤ 0.001), with a mean age of 58.20 years at transplant (p-value ≤ 0.001). Subjects with skin cancer were more likely to be former smokers (60%) and prescribed tacrolimus (p-value ≤ 0.001 each). Development of cutaneous squamous cell carcinoma (66, 75.86%) was most common, followed by basal cell carcinoma (17, 19.54%), and malignant melanoma (3, 3.45%). Skin cancer most often occurred on the face or scalp (60%, p-value = 0.027), though also developed in sun-protected sites (30%, p-value = 0.002). Verruca vulgaris was present in 10% of patients (p-value = 0.028). Conclusions Risk factors for skin cancer post-transplant differ in OTRs with higher Fitz types. Our results suggest that among OTRs who self-identified as Black, kidney recipients are at increased risk for skin cancer in non-sun-exposed regions. These cancers may be associated with human papillomavirus (HPV). Education is key for preventing morbidity and mortality secondary to skin cancer. increased risk for skin cancer in non-sun-exposed regions. These cancers may be associated with human papillomavirus (HPV). Education is key for preventing morbidity and mortality secondary to skin cancer. Key Summary Points Solid organ transplant recipients with Fitzpatrick skin phototypes III-VI are at risk for skin cancer post-transplant; therefore, patients should be educated on self-skin exams and referred to dermatology for new or changing lesions. Some risks for skin cancer are the same among organ transplant recipients regardless of race or ethnicity. However, thoracic transplant (heart or lung) increases risk for skin cancer in organ transplant recipients with Fitzpatrick skin phototypes I and II, while kidney transplant increases risk for skin cancer in organ transplant recipients with Fitzpatrick skin phototypes III-VI. Like organ transplant recipients with Fitzpatrick skin phototypes I and II, recipients with higher Fitzpatrick skin phototypes are most frequently diagnosed with cutaneous squamous cell carcinoma of the head and neck. However, organ transplant recipients are also at risk for skin cancers in sun-protected sites such as the groin and genitals. INTRODUCTION In 2021, more than 41,000 transplants were performed in the USA, representing an annual record for the ninth consecutive year [1]. 
As transplants continue to increase and patients survive longer, the sequelae of chronic immunosuppression will become more prevalent [2]. Organ transplant recipients (OTRs) are at increased risk for malignancies, the most common of these being skin cancer [3]. OTRs are at a 65-250-fold increased risk for cutaneous squamous cell carcinoma (cSCC) [2], a tenfold increased risk for basal cell carcinoma (BCC), and a threefold increased risk for malignant melanoma (MM), along with other rare cutaneous neoplasms [4]. However, the literature supporting these data is largely from kidney transplant recipients with Fitzpatrick skin phototypes (Fitz type) I and II [5]. While patients with Fitz types III-VI are thought to have lower risks of developing skin cancer as compared with those with lower Fitz types, they are at increased risk of skin cancer compared with their immunocompetent peers [6]. We sought to describe the incidence of skin cancer in OTRs with higher Fitz types at our institution and identify associated risk factors. Improving understanding of skin cancer incidence in this group may contribute to improved screening tools and post-transplant guidelines that assist in the recognition of relevant racial healthcare disparities.

METHODS
We conducted a retrospective review of OTRs seen by dermatology at our institution between 1 January 2012 and 6 January 2022, which represents the time our institution's electronic health record was implemented to the time the study began. During the study period, our institution performed a mean of 375 transplants per year. Of these, 7.77% were seen by dermatology. Patients seen by dermatology either reported dermatologic complaints or had lesions of concern. We stratified OTRs on the basis of self-identified race or ethnicity. Patients who self-identified as white, representing Fitz types I and II, were excluded. Data pertaining to patient demographics, medical history, transplant course, and dermatologic history were collected. Patients were then stratified by occurrence of skin cancer post-transplant. Data analysis was conducted using SPSS Statistics 28 (IBM, Armonk, NY) for the primary outcome of skin cancer development. Cohort demographics were evaluated using descriptive statistics. Data were analyzed using chi-squared or paired t-tests. IRB approval was provided by the Medical University of South Carolina IRB I for Pro00117311, on 10 January 2022. This article is based on previously conducted studies and does not contain any new studies with human participants or animals performed by any of the authors.

RESULTS
Of the 530 OTRs identified, 193 had Fitz type III or higher. Patients were most often male (52.85%) and self-identified as Black (91.70%), with a mean age at transplant of 47.04 years (±15.88 years). Only one patient had a pretransplant history of skin cancer, and one patient had a family history of skin cancer (Table 1). Ten patients (5.18%) developed 87 skin cancers and one recurrence post-transplant. Patients who developed skin cancer were more often male (70%, p-value ≤ 0.001), kidney transplant recipients (70%, p-value ≤ 0.001), and self-identified as Black (70%, p-value ≤ 0.001), with a mean age of 58.20 years at transplant. This is compared with a mean age of 46.43 years at transplant in OTRs who did not develop skin cancer post-transplant (p-value ≤ 0.001). Liver transplant recipients were less likely to develop posttransplant skin cancer (20%, p-value = 0.010) (Table 1).
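For readers who want to reproduce this style of analysis outside SPSS, the sketch below shows how comparisons like those above (transplanted organ versus skin cancer status, and age at transplant between groups) are typically computed. The counts and ages in the example are made-up placeholder numbers, not the cohort data reported in this study.

```python
# Illustration of the kinds of tests described in the methods (chi-squared for
# categorical comparisons, t-tests for continuous variables). All numbers are
# hypothetical placeholders; the actual study analysis was performed in SPSS.
import numpy as np
from scipy.stats import chi2_contingency, ttest_ind

# Hypothetical 2x2 table: rows = kidney vs. other organ transplant,
# columns = developed skin cancer vs. did not.
table = np.array([[7, 110],
                  [3, 73]])
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")

# Hypothetical ages at transplant for the two groups (Welch two-sample t-test).
rng = np.random.default_rng(1)
age_with_cancer = rng.normal(58, 10, size=10)
age_without_cancer = rng.normal(47, 15, size=180)
t_stat, p_age = ttest_ind(age_with_cancer, age_without_cancer, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_age:.4f}")
```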
Pretransplant history of skin cancer was a predictor for post-transplant skin cancer development (10%, p-value = 0.002). OTRs who developed skin cancer posttransplant were more frequently former smokers (60%, p-value ≤ 0.001). They were also more likely to be prescribed cyclosporine (30%, p-value = 0.007). There was a trend toward significance for use of tacrolimus (70%, p-value = 0.070) and azathioprine (10%, p-value = 0.071). Common comorbidities, such as hypertension, type 2 diabetes mellitus, hyperlipidemia, and coronary artery disease, were not associated with skin cancer development (Table 2). Patients most often developed cSCC (66, 75.86%), followed by BCC (17, 19.54%), MM (3, 3.45%), and one spindle cell adenocarcinoma. Skin cancer development occurred at a mean of 5.17 years post-transplant. The type of first skin cancer was BCC (60%, p-value = 0.014) or cSCC (40%, p-value = 0.045), most often occurring on the face or scalp (60%, p-value = 0.027). Skin cancers developed in sun-protected sites, including the buttocks and inguinal folds, in 30% of patients (p-value = 0.002). There was a tendency for skin cancer to recur in the same location (10%, p-value ≤ 0.001). A diagnosis of verruca vulgaris (VV) was present in 10% of patients with skin cancer (p-value = 0.028) (Table 3).

DISCUSSION
The risk of skin cancer is considerably elevated in OTRs, with reports indicating at least a 100-fold increased risk when compared with the general population, without consideration of Fitz type [7,8]. Of the approximately 41,000 total OTRs in 2021, almost half were patients with Fitz types III-VI [1]. However, OTRs with higher Fitz types are underrepresented in studies of skin cancer post-organ transplant. Risk factors for skin cancer development in OTRs include male gender, age ≥ 50 years, pretransplant history of skin cancer, and Fitz type I or II [7]. Among OTRs with higher Fitz types in our cohort, male gender, older age at transplant, and pretransplant history of skin cancer remained risk factors for post-transplant skin cancer. However, patients who self-identified as Black, as compared with American Indian/Alaska Native and Hispanic or Latino patients, were more likely to develop skin cancer in our cohort. Smoking status was also a risk factor. Type of organ transplant may confer variable levels of risk for developing skin cancer in OTRs [9], with the greatest reported risk associated with thoracic organ transplants [3]. In our cohort, however, risk was highest among kidney recipients. Similar to the reported literature, we found risk for post-transplant skin cancer to be lowest among liver recipients. Notably, kidney transplants comprised the overwhelming majority of our cohort, which is representative of trends in organ transplantation nationwide [1]. Skin cancer risk is thought to stem from long-term administration of posttransplant immunosuppressive therapies that dampen immune system surveillance, impair the repair of UV-induced DNA damage, and increase the potential for reactivation of certain oncogenic viruses [10]. Specific immunosuppressive drugs may positively or negatively influence the risk of skin cancer development [7]. Similar trends were noted in our cohort. Sun avoidance is recommended for patients taking azathioprine owing to drug-metabolite-induced UVA photosensitivity and impaired nucleotide excision repair [7,11].
Usage of calcineurin inhibitors (CNIs), such as tacrolimus and cyclosporine, may result in upregulation of the potentially oncogenic activating transcription factor 3 (ATF3), increased UVA photosensitivity, and altered nucleotide excision repair [7,12]. (Table 2 footnote: medication use refers to use of these medications at any point from the time of transplant to the time chart review was performed; patients may have been prescribed more than one of these medications.) It has been reported that in OTRs who self-identify as Black with Fitz types V or VI, skin cancer diagnoses are not uncommonly located in sun-protected sites [13]. In our cohort, nearly one-third of skin cancers occurred in sun-protected sites, including the groin and buttocks. As skin cancer is less prevalent in Fitz types V or VI, and not uncommonly occurs in sun-protected areas, Black OTRs are more likely to have skin cancer diagnosed at advanced stages, thus increasing their risk of morbidity and mortality [6]. Additionally, HPV DNA is three times more likely to be present in cSCCs arising in immunocompromised versus immunocompetent patients. The mechanism is proposed to be a complex interplay between HPV infection and impaired DNA repair or apoptosis of UV-damaged cells, or may simply underscore the susceptibility of immunocompromised patients to develop HPV infection and cutaneous malignancy [14,15]. Nonetheless, in Black OTRs, skin cancer diagnoses are frequently HPV positive and/or associated with a history of condyloma acuminata or VV [6,16]. Within our cohort, VV was present in a number of patients who developed posttransplant skin cancer, suggesting that screening for and treating HPV infection pretransplant may be an important preventative measure. Limitations of our study include its retrospective nature, monocentric design, and small sample size of patients developing skin cancer posttransplant. Owing to the retrospective nature of this study, history of sunburn and sun exposure was not available or collected; however, as this may contribute to the formation of secondary cancers, future studies correlating sunburn and sun exposure history with posttransplant skin cancer are warranted. No patients self-identifying as Asian or Pacific Islander who developed posttransplant skin cancer are included in our cohort. Additionally, we were unable to include OTRs evaluated by dermatology outside of our institution, or those without dermatologic symptoms or lesions that prompted referral to dermatology.

CONCLUSIONS
While skin cancer development post-transplant may be lower in OTRs with higher Fitz types, the risk for skin cancer nonetheless exists. Similar to prior studies, our study demonstrates that skin cancer diagnosis in OTRs with higher Fitz types differs from diagnosis in their counterparts with Fitz types I or II. Skin cancer observed in OTRs with higher Fitz types may be more aggressive owing to a later stage at diagnosis, which portends a greater risk for recurrence and metastasis [7]. OTRs who self-identify as Black may be at particularly high risk as compared with other patients with higher Fitz types. The results of our study can inform improvements in skin cancer education and screening in OTRs with higher Fitz types.
Transfer Learning for COVID-19 cases and deaths forecast using LSTM network

In this paper, Transfer Learning is used in LSTM networks to forecast new COVID cases and deaths. Models trained on data from early COVID-infected countries like Italy and the United States are used to forecast the spread in other countries. Single and multistep forecasting is performed from these models. The results from these models are tested with data from Germany, France, Brazil, India, and Nepal to check the validity of the method. The obtained forecasts are promising and can be helpful for policymakers coping with the threats of COVID-19.

Introduction
COVID-19 was first detected in Wuhan City in December 2019. Since then it has caused more than seven hundred thousand deaths and twenty million infections [1]. With such large-scale fatalities, it has become one of the greatest crises of this generation. Apart from the loss of human lives, the pandemic has caused serious damage to the world economy. Because of lockdowns and similar distancing strategies, it has also adversely affected the psychological and social spheres [2]. In the absence of any proven medicine or vaccine at present, an intervention strategy may be more useful to control the spread [3]. An effective modeling method to forecast the spread of the virus among the population can be extremely useful for preparing and formulating health and economic policies for any government or administration. From planning emergency hospitals, managing ventilators and medical resources, to regulating lockdowns and scheduling economic activities, effective forecasts are strategically very important for policymakers [4]. When new cases rise at a rate of thousands per day, even the most developed nations' healthcare systems can be overwhelmed by the large number of patients. A timely forecast can prepare the responsible authorities to manage efficiently even during such overwhelming scenarios. With the rise of cases and the availability of more data, various studies [5][6][7] have presented mathematical models for the spread. However, most of the models have a limited scope of forecasts for a particular country or region only. Previous studies [8,9] have also used LSTM models to forecast, but they use older data of the same country, which is quite limited. As such a model may not have seen and learned from the various patterns like sharp spikes and flattening effects, the learning process may be incomplete and imprecise. These dynamic patterns are more prevalent in early infected countries like Italy, which have now passed the sharp spiking and gradual flattening of new cases. The use of Transfer Learning may help the network to model all these highly nonlinear temporal patterns which the test country may have never seen in its history. For countries like France and Italy, the earlier occurrence and spread provide a large dataset for training, which is not available for countries that saw the spread later, like India and Nepal, which imposed strict lockdowns early but are now easing the regulations. This discrepancy in the pandemic across various regions has provided mature data for the earlier infected countries. A cluster analysis study [10] has shown similarities existing in the dynamics of the spread of the disease between various countries like Italy, France, and Germany, which have similar intervention modalities.
Though there may be some level of similarity in intervention techniques and timing, the countries can still have massive differences in the scale of spread and in mortality rate. These differences are very prominent across countries in the European Union. A generalized forecasting model incorporating a wider region is thus a difficult challenge owing to all the differences existing across nations. The paper applies Transfer Learning to LSTM models which learn separately from the data of Italy and the United States. It then applies these trained models, i.e. the architecture along with the weights, to forecast new COVID cases and deaths. The results are tested for France, Germany, Brazil, India, and Nepal. For both the models, separate networks have been trained to forecast for one and five days.

Dataset
The data has been obtained from an open dataset of Our World in Data [11], which maintains and updates global data daily, acquired from the European Centre for Disease Prevention and Control (ECDC). The data was available from the 31st of December to the 10th of June. However, the data from February 28th onwards has been used for training purposes, during which cases were more active. The number of new cases per 100 thousand of the population per day is one of the forecasting variables:

$$\text{new cases per 100 thousand} = \frac{\text{new cases per day}}{\text{Total Population}} \times 100{,}000 \quad (1)$$

New death cases per million of the population, calculated as in Eq. (2), have also been forecasted from the available data:

$$\text{new deaths per million} = \frac{\text{new death cases per day}}{\text{Total Population}} \times 1{,}000{,}000 \quad (2)$$

Data from 7 countries: Italy, France, Germany, the United States, Brazil, India, and Nepal, have been used for this study. These test countries cover 4 continents, representing high diversity in intervention techniques, government policies, healthcare systems, population density, and levels of spread and death because of COVID-19.

Moving average
The data of new cases and new deaths contain many sharp spikes, so the noisy data are smoothed using a moving average. The moving average can provide more stable data points suitable for modeling. The moving average is defined mathematically as

$$MA_t = \frac{1}{n}\sum_{i=0}^{n-1} x_{t-i} \quad (3)$$

where $x_t$ is the observed value on day $t$ and $n$ is the number of days in the averaging window. The study uses a moving average of 3 days for both new cases and new deaths. The reason for using a low-valued moving average is to smooth out very sharp spikes without changing the actual pattern of the data.
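As a concrete sketch of this preprocessing, the snippet below applies Eqs. (1)-(3) to the public Our World in Data CSV with pandas. The file name and column names follow the current public OWID dataset and may differ from the snapshot used in the paper, so treat them as assumptions rather than the authors' pipeline.

```python
# Per-capita scaling (Eqs. (1) and (2)) and 3-day moving average (Eq. (3))
# applied to one country's daily series. File and column names follow the
# public Our World in Data CSV and are assumptions, not the authors' code.
import pandas as pd

owid = pd.read_csv("owid-covid-data.csv", parse_dates=["date"])

def prepare_country(df, country, window=3):
    d = df[df["location"] == country].sort_values("date").copy()
    d["new_cases_per_100k"] = d["new_cases"] / d["population"] * 100_000          # Eq. (1)
    d["new_deaths_per_million"] = d["new_deaths"] / d["population"] * 1_000_000   # Eq. (2)
    # 3-day moving average to smooth sharp reporting spikes (Eq. (3)).
    for col in ("new_cases_per_100k", "new_deaths_per_million"):
        d[col] = d[col].rolling(window=window, min_periods=1).mean()
    return d[["date", "new_cases_per_100k", "new_deaths_per_million"]]

italy = prepare_country(owid, "Italy")
print(italy.tail())
```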
LSTM network
The Recurrent Neural Network (RNN) is one of the most popular and effective deep learning techniques for time series forecasting because of its ability to memorize sequential information [12]. However, the problems of exploding and vanishing gradients are common in simple RNNs [13]. LSTM solves this problem by introducing a new memory state into the RNN [14]. The ability of LSTM networks to capture patterns in data like trends, seasonality, autocorrelations, and noise makes them a good candidate for time series forecasting. In a deep neural network, the initial layers capture more basic features or patterns and the deeper layers extract high-level features. As a model trained in one country is used to forecast for the next country, the network should learn basic temporal patterns and avoid memorizing high-level features of a particular dataset. For this reason, a single-layer LSTM network has been used to avoid any deep and complex architecture. For single-day and 5-day predictions, 1 and 5 units are assigned to this single-layered network, respectively. The network does not return any sequences, so the output from the LSTM layer is the final output. The use of any dense layer has been avoided.

The success of LSTM networks lies in their combination of update, forget, and output gates. The cell state carries the memory of past data, and the combination of gates helps the network to determine what information from the past is to be retained and what from the present is to be updated. Each cell in the sequence is fed with an activation state $a_{t-1}$, a cell state $C_{t-1}$, and the respective input $x_t$.

The update gate value $u$ is based on the previous activation state $a_{t-1}$ and the present input $x_t$. Its range is 0 to 1. The weight $W_u$ and bias $b_u$ are learned during training. It provides the necessary weightage to the new candidate cell state of Eq. (7) when determining the new cell state:

$$u = \sigma\!\left(W_u [a_{t-1}, x_t] + b_u\right) \quad (4)$$

The forget gate value $f$ is also based on the previous activation state $a_{t-1}$ and the present input $x_t$; the weight $W_f$ and bias $b_f$ are updated during training. It is used in the determination of the new cell state in Eq. (8):

$$f = \sigma\!\left(W_f [a_{t-1}, x_t] + b_f\right) \quad (5)$$

The output gate value $o$ depends on the previous activation state $a_{t-1}$ and the present input $x_t$; the weight $W_o$ and bias $b_o$ are updated during the training process. It determines the new activation state by assigning the necessary weightage to the new cell state:

$$o = \sigma\!\left(W_o [a_{t-1}, x_t] + b_o\right) \quad (6)$$

The new candidate cell state $\tilde{C}_t$ is determined from the previous activation $a_{t-1}$ and the present input $x_t$ and will be used to update the new cell state; its influence on the new cell state is determined by the update gate value $u$. The network learns the necessary weights $W_c$ and bias $b_c$ during training:

$$\tilde{C}_t = \tanh\!\left(W_c [a_{t-1}, x_t] + b_c\right) \quad (7)$$

The cell state $C_t$ is passed to the next cell with the weightage provided by the update gate $u$ and the forget gate $f$ from Eqs. (4) and (5):

$$C_t = u \cdot \tilde{C}_t + f \cdot C_{t-1} \quad (8)$$

The new activation $a_t$ is determined by the output gate $o$ and the present cell state $C_t$:

$$a_t = o \cdot \tanh(C_t) \quad (9)$$

The values of the activation state $a_t$ and the cell state $C_t$ get passed to the next cell for the repeated operations explained in Eqs. (4) to (9). $\sigma$ is the sigmoid function, defined as

$$\sigma(z) = \frac{1}{1 + e^{-z}} \quad (10)$$

Its output is in the range of 0 to 1. For all the gates, i.e. the update, forget, and output gates, the value ranges from 0 to 1: 0 provides no weightage and 1 represents complete weightage. The output of the hyperbolic tangent (tanh) function ranges from −1 to 1. The network weights are learned separately from the data of Italy and the United States. These trained weights, along with the respective network, are then used to forecast for the various test countries. Thus, the concept of Transfer Learning enables us to acquire the necessary knowledge from the matured dataset and apply it to forecast for a different country.

Network architecture
N units of the LSTM layer are used, where N is the number of days to forecast. As a many-to-one architecture is used, the N-dimensional output of the layer at the final time step forms the output of the network. So, for 1-day and 5-day forecasting, 1 and 5 units of LSTM have been assigned in the first layer, respectively. The number of cells inside each unit varies according to the window size of the model. Window size refers to the number of past data points that are input into the model; this is equal to the number of cells in each LSTM unit. For single-day prediction, a window size of 8 is chosen, i.e. 8 days of past data are used as input and the 9th-day value is taken as the forecast target. During training, the network learns the respective weights by trying to minimize the error between the actual value on the 9th day and the value forecasted by the network.
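As a concrete illustration, the snippet below reconstructs this architecture in Keras from the description above: a single LSTM layer with as many units as forecast days, a univariate input window, no dense layer, and no returned sequences. It is a sketch based on the text rather than the authors' code, but the resulting parameter counts (12 for the single-step model and 140 for the 5-day model) match the figures quoted in the paper.

```python
# Reconstruction of the single-layer, many-to-one LSTM described above.
# One input feature per time step, `horizon` LSTM units, and no dense layer;
# the LSTM output at the last time step is used directly as the forecast.
import tensorflow as tf

def build_model(window_size, horizon):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(window_size, 1)),   # univariate window
        tf.keras.layers.LSTM(units=horizon),             # return_sequences=False
    ])

single_step = build_model(window_size=8, horizon=1)
multi_step = build_model(window_size=20, horizon=5)
single_step.summary()   # reports 12 trainable parameters
multi_step.summary()    # reports 140 trainable parameters
```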
For 5-day multistep prediction, a window size of 20 is used, i.e. 20 days of past data are input to the network and the consecutive 5 future days are used as the output. The time sequence is maintained both for the input and the output. A larger window size is used for multistep prediction, as a longer data sequence is necessary to forecast the long-range pattern. Fig. 1 depicts the architecture for single-day prediction; as the window size is eight, the unit has eight cells. Each cell receives sequential input, but the output is taken from the final cell, which is a typical many-to-one architecture. In multistep prediction, 5 such sequences as in Fig. 1 are used, with 20 cells in each unit. For a particular unit, the weights of the trainable parameters across all the cells remain the same. In this architecture, the single-step model has 12 trainable parameters, whereas the 5-day prediction model has 140 trainable parameters. For training purposes, a batch size of 10 is used. As the training data is low in number, a lower batch size supplies more training batches from the available data.

Network parameters
The LSTM network uses the Stochastic Gradient Descent (SGD) optimizer for all of its models, with a momentum of 0.9. Initially, a variable learning rate ranging from $1\times10^{-4}$ to $1\times10^{-8}$ is set through a callback function to train over 100 epochs. Then the learning rate corresponding to the minimum loss is chosen for the actual training of 500 epochs. The same procedure has been used for all the models in this paper. The details have been tabulated in Table 1.

Performance parameters
Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) are the two metrics used to compare the results from the proposed models. Mean Absolute Error provides the average absolute error between the forecasted value and the real value. Because of the squared error, Root Mean Square Error (RMSE) gives better insight into larger errors across the forecasts. The normal distribution represents the values of a given variable as a probability density and visualizes the symmetry of the data around the mean. For multistep prediction, both the MAE and RMSE obtained are fitted to a normal distribution to see the variation in error across all forecasts. The normal distribution is given by

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$

where $\mu$ refers to the mean and $\sigma$ refers to the standard deviation.

Results and discussion
The paper implements Transfer Learning for the LSTM network to learn from one region and forecast for a completely different region. For training purposes, the Keras API with the TensorFlow backend [15] is used. Various Python libraries have been used for the other evaluations. The networks were trained on the GPU available in the Google Colab environment [16]. Both the models have been trained and tested on smoothed data, samples of which are provided in Figs. 2 and 3. A low-value moving average has helped to smooth out sharp spikes without causing any significant distortions to the curves. Two models, based on Italy and the United States, have been used. Data from December 31st to June 10th were available during modeling. However, the initial part has been truncated, when the reported cases and deaths were limited to single digits or even 0 on most days. For both the models, training data ranging from February 28th to June 10th has been taken, which encapsulates an active period of the spread and deaths in these countries. The timeframe for new death cases has been kept consistent with new cases for training as well as testing.
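Putting the pieces above together, the sketch below outlines the training and transfer procedure as described: an SGD optimizer with momentum 0.9, a learning-rate sweep from $1\times10^{-4}$ to $1\times10^{-8}$ over 100 epochs to pick the rate with the lowest loss, 500 epochs of final training with a batch size of 10, and reuse of the trained weights to forecast another country. The windowing helper, the exact shape of the sweep, and the placeholder series are assumptions made for illustration; they are not the authors' code or data.

```python
# Sketch of training on one country's smoothed series and transferring the
# trained weights to forecast another country's. The series here are random
# placeholders; in practice they would be the per-capita, 3-day-averaged
# columns produced by the preprocessing sketch earlier.
import numpy as np
import tensorflow as tf

def build_model(window_size, horizon):
    return tf.keras.Sequential([tf.keras.layers.Input(shape=(window_size, 1)),
                                tf.keras.layers.LSTM(units=horizon)])

def make_windows(series, window, horizon):
    """Sliding windows: X has shape (n, window, 1); y has shape (n, horizon)."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])
        y.append(series[i + window:i + window + horizon])
    return np.array(X)[..., None], np.array(y)

def lr_sweep(epoch, lr=None):
    """Learning rate falling from 1e-4 to 1e-8 over 100 epochs (assumed schedule)."""
    return 1e-4 * 10.0 ** (-4.0 * epoch / 100.0)

rng = np.random.default_rng(0)
italy_series = np.abs(rng.normal(5, 2, size=120)).astype("float32")    # placeholder
france_series = np.abs(rng.normal(4, 2, size=120)).astype("float32")   # placeholder

window, horizon = 8, 1
X_train, y_train = make_windows(italy_series, window, horizon)

# 1) Sweep the learning rate for 100 epochs and keep the rate with minimum loss.
model = build_model(window, horizon)
model.compile(optimizer=tf.keras.optimizers.SGD(momentum=0.9), loss="mae")
sweep = model.fit(X_train, y_train, epochs=100, batch_size=10, verbose=0,
                  callbacks=[tf.keras.callbacks.LearningRateScheduler(lr_sweep)])
best_lr = lr_sweep(int(np.argmin(sweep.history["loss"])))

# 2) Train a fresh model for 500 epochs at the chosen learning rate.
model = build_model(window, horizon)
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=best_lr, momentum=0.9),
              loss="mae")
model.fit(X_train, y_train, epochs=500, batch_size=10, verbose=0)

# 3) Transfer: apply the Italy-trained network, weights included, to another country.
X_test, y_test = make_windows(france_series, window, horizon)
pred = model.predict(X_test, verbose=0)
mae = np.mean(np.abs(pred - y_test))
rmse = np.sqrt(np.mean((pred - y_test) ** 2))
print(f"Transfer test: MAE = {mae:.2f}, RMSE = {rmse:.2f}")
```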
It is observed in all datasets, the time series increase or decrease of new cases is consistent with an increase or decrease of new death cases so, the same time frame has been used for death cases as well. In the United States even as late as February 26th, 0 new cases were recorded but after that, there has been a sharp rise in cases, and during the recorded time frame cases have risen from as low as 1 new case on February 28th to over 30,000 new cases being reported every day during March and April. As for new death cases, the first death in the US was recorded on March 1st and then rose to even 2000 plus new deaths per day within the second week of April. Even by the first week of June, the new death cases were still high approaching 1000 in some cases. For Italy, new cases reporting surged from the last week of February, until then even as late as February 21st, no new case was reported. During the last week of February, the new death cases were still limited to single-digit however during March and For testing the models, same timeframe has been used for France, Germany, and Brazil as it represents an active growth time frame for all these countries. However, for India and Nepal COVID spread has got more active lately. This can be explained by strict lockdowns imposed earlier in both countries. For India and Nepal, data from May 2nd to August 11th is used. For all the test countries 102 days have been taken as test days to maintain consistency while comparing results. Forecast results Detailed error calculation for all new cases forecasts has been tabulated in Table 2 and new death cases forecasts have been tabulated in Table 3. The model's self-forecast i.e. Italy model's forecast for Italy and the US model's forecast for the US has not been tabulated for comparison as these are training data for the models. Single-step forecast For new cases, the Italy model performs slightly better for France as compared to the US models. The differences in the result are prominent in the first peak occurring around the end of March, where the US model predicts higher values than the Italy model. France imposed lockdown from March 17th [17] when both new cases and death cases were having a very steep rise. Within a week both the curves for new cases and new deaths reached its' peak and started descending gradually. As new cases started leveling out, France eased its lockdown from the first week of May with strict controls in hotspots of the spread. Both US and Italy models predict well for all different cases like steep rise, peak, and the gradual descent of new cases of curves (see Fig. 4). Even though both the models perform comparatively well in the initial gradual rise and later gradual descend of death cases, the US model fails to forecast the steep peaks during April as seen in Fig. 5. Both the models' forecasts for Brazil have small errors initially as there is a gradual rise in new cases. However, during mid-May as there is a very steep rise in cases both the models fail to capture the steep peak which is visible in graphs of Fig. 6. Even though the US model predicts better in these peaks the error is still very high compared to prediction in other test countries. Unlike the forecast for new cases, both models can forecast new death cases for Brazil comparatively better (see Fig. 7). Numerically, the US model performs better with the MAE of 0.23. Unlike the case of France, both the models can forecast well in the peak region of Brazil. 
Germany's infection curves are limited compared to other European nation mainly because of its' quick actions and strict regulations. Beginning at the start of March itself strict regulations were being imposed for gathering [17] and offices. The Italy model performs better for Germany with an MAE of 0.32 compared to 0.97 for the US model. US model generally predicts higher values and the differences are clearer in May in Fig. 8 when new cases have started decreasing gradually. New death cases start to decrease after mid-April in Germany, Both the models predict higher values during this instance (see Fig. 9). Germany comparatively has a lower mortality rate compared to other similarly infected EU countries like Italy and France. Numerically, the US model predicts better in terms of MAE (0.23) compared to 0.24 of the Italy model, though the Italy model's RMSE of 0.30 is less than that of the US model (0.32). For Germany, though the Italy model predicts better for new cases, the US model predicts better for death cases. A possible explanation can be provided from training data. If we visualize the training data for Italy and United States in Figs. 2 and 3, it can be observed that for the United States' new cases, gradual descent is still not distinctly visible as in Italy and Germany. In contrast, gradual descent is clear for new death cases in the United States data. Because of this United States' training data for new death cases can capture a wider range of patterns than the data for new cases which still have no signs of the descent part. Moreover, the death cases data for the United States and Germany has a clear waveform with seasonal ups and downs which is not so clear in Italy's data. This similarity in the US and Germany could also be a reason for the ability of the US model to perform better than the Italy model for death cases of Germany. Nepal was under lockdown from March 24th and the easing started only around June 10th and finally ending it officially July 21st. Even then national and international flights have yet not started, inter-district mobility is highly controlled and strict restrictions apply to public gatherings. Vehicle movement has been controlled heavily using odd-even number plate rule, where only odd-numbered or even-numbered plates are allowed in a single day. Moreover, areas reporting higher cases have been put under immediate seal. Because of all these strict measures, the highest new cases reporting is limited to a few hundred. The new death cases are also limited to single-digit figures. However, even for such low cases, both models perform pretty well with MAE of 0.22 and 0.25 for Italy and US model respectively. Fig. 10 depicts the differences between these two models, Italy model predicts lower values for most of the instances whereas the US model predicts higher values. Though India maintained a longer lockdown in various phases easing different sectors one by one, the virus spread has gradually increased reaching threatening numbers. Sharp peaks are still not visible and it is still gradually increasing. For new cases, both the models have a modest performance with MAE of 0.12 and 0.17 for Italy and US models respectively. As seen in other test countries the US model predicts higher values compared to the Italy model (see Fig. 11). US model has an MAE of 0.138 for India which is the best figure among all the test countries for new death cases. Italy model also has a pretty decent MAE of 0.2 however it can be seen in Fig. 
12 that the Italy model initially forecasts higher values, with the two models forecasting nearly equal values later as case counts rise. For new cases, testing each model on the other's country produces interesting results. The US model predicts comparatively better for Italy, with an MAE of 0.37, capturing Italy's steep rise and peak very closely; the Italy model, on the other hand, fails to capture the sharp peaks in the US data and has an MAE of 0.56. The difference can be visualized more vividly in Fig. 13. For new death cases, the US model has a higher error for Italy than the Italy model has for the US. The errors are most pronounced in Italy's steep peaks at the end of March (see Fig. 14); the Italy model, however, predicts closely in the US peak region in mid-April and in the consecutive smaller peaks thereafter. The Italy model predicts well for countries like France and Germany, whereas it predicts lower values for countries like Brazil and the United States and thus has comparatively large RMSE and MAE there. The strategies implemented in European countries were quite similar to each other [18], so the Italy model predicts better for European countries, whereas Brazil's strategy was very different from both Italy's and the US's, so those forecasts have higher errors. However, for countries like Nepal and India, where the intervention strategy, health care system, and other control policies differ considerably, the Italy model performs just as well as it did for the European countries. This indicates that the forecast is also a strong function of a country's own history, i.e. its past data, regardless of the exact policies implemented or other diversifying factors. For new cases, the lowest RMSE and MAE of the Italy model are 0.12 and 0.11 respectively, both for India, and the maximum values are 1.31 and 1.65 respectively, for Brazil. The lowest RMSE and MAE of the US model are also for India, at 0.17 and 0.15 respectively. One possible reason is that India has a gradual rise in cases and has not yet seen the steep peaks where the models' performance is compromised. As the earlier forecast figures show, both models' weakness lies in the peaks, while they perform extremely well on gradual inclines or declines. The new-case graphs for France, Italy, Germany, and Brazil contain clear spikes representing the strongest rises in cases, and the models predict all these peaks well except for Brazil; thus, the models are unable to forecast Brazil's very steep rise in cases. The graphs for Brazil show that the US model is better at capturing sharp spikes in new cases than the Italy model, whereas the Italy model captures sharp spikes in death cases better than the US model. The differences in MAE and RMSE between the two models across countries can be visualized in Figs. 15 and 16. For error analysis, the Italy and US data have not been used to evaluate the Italy and US models respectively, as these are the training data for the respective models.

Multi-step forecast

The same data have been used for the multi-step forecast as for the single-step forecasts. The total of 102 days yields 78 test sets for the five-day forecast. The mean RMSE and MAE across these sets are calculated along with their standard deviations.
The results are then fitted to a normal distribution to analyze the variability of the error across these test sets. For a single test set, the model requires 25 days of data: 20 days for the input window and 5 days for the forecast. As with the single-day forecast for France, the results of the two models differ very little; although the error parameters are small, they are nearly twice those of the single-step forecast. The forecast results for France's new death cases are unacceptably large, with very high MAE and RMSE and high deviation for both models (see Figs. 17 and 18); the distributions extend to values as high as 7, which has not been recorded for any other test country. With MAEs of 1.12 and 1.14 for the US and Italy models respectively, the forecast errors for Brazil's new cases are nearly equal for the two models. Although the US model performed comparatively better in single-step prediction, its multi-step results for Brazil are among the highest errors across the test countries. The results for new death cases are quite different, being better than for the other test countries (see Figs. 19 and 20). Consistent with the single-step prediction for Germany, the Italy model performs modestly for multi-step new-case prediction, with an MAE of 0.56, while the US model has an MAE of 0.82. The standard deviations of the MAE for the US and Italy models are 0.50 and 0.34 respectively, so the results are stable across the test sets. The Italy model's performance for new death cases is even better, with an MAE of 0.48 and a standard deviation of 0.30; these results are quite stable across the test sets (see Figs. 21 and 22). For India, although the Italy model performs slightly better, both models have a similar distribution, with nearly equal error and deviation (see Fig. 23). Compared to the other countries, MAEs of 0.46 and 0.43 are modest numbers, but compared to India's single-step prediction the figures are nearly four times higher. For new death cases, both models have the least-deviated results, with a standard deviation of 0.01 for both MAE and RMSE (see Fig. 24). The Italy model for Nepal has the best performance across all the test countries, with an MAE of 0.42. Nepal is still in its early phase, with a very slow, gradual rise in cases. It was seen in the single-step prediction that both models predict well in regions of gradual rise or fall and are somewhat compromised near the peaks; as Nepal has no sharp spikes or peaks, this may explain the Italy model's fine performance there (see Fig. 25). Except for India and Germany, both models predict new cases on a similar scale. The models' inability to predict Brazil's new cases well may be explained by its different restriction policies: the strategy followed in Brazil was very different from the rest of the world [19], and the country has a very steep rise in both new cases and death cases, which may be the reason for the higher prediction error. Moreover, Brazil shows a higher variation in error, which can be visualized in Figs. 19 and 20. As expected, the 5-day models do not perform as well as the single-day models, since it is more challenging to capture and forecast longer-range patterns than shorter ones. Even so, the results are satisfactory for countries like India and Germany for new cases. The 5-day new-case prediction errors can be visualized in Fig. 28. The Italy model's performance for Nepal is far better than any of its other multi-step predictions.
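A rough sketch of how the 78 five-day test sets can be built from 102 days of data, and of the normal fit used to summarize the spread of per-set errors, is given below; the window and horizon lengths follow the values stated above, while the error values themselves are placeholders rather than the paper's results.

```python
import numpy as np
from scipy import stats

WINDOW, HORIZON = 20, 5   # 20-day input window + 5-day forecast = 25 days per test set

def make_test_sets(series, window=WINDOW, horizon=HORIZON):
    """Slide a (window + horizon)-day frame over the series.
    With 102 days this yields 102 - 25 + 1 = 78 (input, target) pairs."""
    inputs, targets = [], []
    for start in range(len(series) - window - horizon + 1):
        inputs.append(series[start:start + window])
        targets.append(series[start + window:start + window + horizon])
    return np.array(inputs), np.array(targets)

# Placeholder per-set MAE values for one country/model pair (not the paper's numbers)
per_set_mae = np.random.default_rng(0).normal(loc=0.5, scale=0.3, size=78).clip(min=0)

# Fit a normal distribution to the per-set errors to summarize their variability
mu, sigma = stats.norm.fit(per_set_mae)
print(f"mean MAE = {mu:.2f}, std = {sigma:.2f}")
```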
Samples of a multistep forecast for new cases have been presented in Fig. 29. Though any conclusion cannot be drawn from samples of Fig. 29, what can be observed is the US model has predicted higher values compared to both the actual value and values forecasted by the Italy model. Fig. 30 depicts the multistep forecast for the latest available values for Nepal. The cases in Nepal are gradually rising and still, the spread is in its' initial phase. For Nepal also US model predicts higher values compared to Italy's model. As for available data for new death cases of Nepal, it is still limited to single-digit figures and there are many days with no new death cases recorded. So, forecasts for new death cases have not been presented as the numbers are too close to zero and will be ambiguous to compare with the results of other test countries. Error for multistep death cases forecast can be visualized in Fig. 31. In contrast to all previous forecasts, Brazil also has achieved decent results. For new death cases, the best result is obtained for the Italy model applied to Germany with MAE of 0.48 and RMSE of 0.55. Both US and Italy models have poor performance for France with the highest MAE of 1.71 and 1.91 respectively for Italy and the US model. For all the test countries Italy model's performance is numerically better than that of the US model. Samples of multistep forecast are presented in Fig. 32. As expected, multistep models have higher errors than singlestep models for both new cases and deaths forecast as capturing longer sequences is challenging. Even then good results have been obtained for countries like Germany, Nepal, and India for new cases and Brazil and Germany for death cases. The results can further be improved by considering other variables like age ratio, population density, and localizing the forecasts to a particular region of hotspots like New York, Delhi, Paris, etc. With such limited training data and wide diversity across the test countries, the proposed models have provided a decent forecast. Conclusions In this paper, Transfer Learning is applied in the LSTM network to learn trends of new cases and new deaths because of COVID-19 from data of Italy and the United States and forecast for other countries. Germany, France, Brazil, India, and Nepal have been tested for single-step and multistep predictions from the prepared models. These forecasts have verified that even for different modes of intervention, policies, and health care systems, the proposed models can predict well. The results indicate new cases and deaths can be forecasted pretty well with the proposed models. Complex patterns of steep rise, spikes, and flattening effects of new cases and deaths can be learned from mature datasets i.e. data from countries infected earlier. This approach is more useful for countries that are in its early phase of virus spread, to forecast based on other country's models. Such forecasts can be a great help to governments and policymakers. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
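As a rough illustration of the workflow summarized in the conclusions — training an LSTM on a mature series such as Italy's and reusing the trained network to forecast for another country — a minimal sketch is given below. The layer sizes, scaling, and training settings are illustrative assumptions and do not reproduce the architecture used in the paper.

```python
import numpy as np
import tensorflow as tf

WINDOW = 20  # input window length, matching the multi-step setup above

def to_windows(series, window=WINDOW):
    """Turn a 1-D daily series into (samples, window, 1) inputs and next-day targets."""
    x = [series[i:i + window] for i in range(len(series) - window)]
    y = series[window:]
    return np.array(x)[..., None], np.array(y)

# Hypothetical scaled daily new-case series for the source country (e.g. Italy)
source_series = np.random.default_rng(1).random(150)
x_train, y_train = to_windows(source_series)

# Small LSTM regressor trained on the source-country data only
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(WINDOW, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mae")
model.fit(x_train, y_train, epochs=50, verbose=0)

# Transfer step: apply the trained model, unchanged, to another country's latest window
target_window = np.random.default_rng(2).random(WINDOW)[None, :, None]
next_day_forecast = model.predict(target_window, verbose=0)
```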
2021-01-07T09:11:44.900Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "f596efdd01b2e11e66d5bcaa1cdbd59fa2a4cdaf", "oa_license": null, "oa_url": "https://doi.org/10.1016/j.isatra.2020.12.057", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "16a9c22faee1ea8ab4e50aaf4cb136d7aab8aa24", "s2fieldsofstudy": [ "Computer Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119070165
pes2o/s2orc
v3-fos-license
Non-homogeneous magnetic field induced magnetic edge states and their transport in a quantum wire The spectrum of magnetic edge states and their transport properties in the presence of a perpendicular non-homogeneous magnetic field in a quantum wire formed by a parabolic confining potential are obtained. Systems are studied where the magnetic field exhibits a discontinuous jump in the transverse direction and changes its sign, strength, and both sign and strength at the magnetic interface. The energy spectra and wave functions of these systems, the corresponding group velocities along the interface and the particle average positions normal to the interface are calculated. The resistance of the quantum wire in the presence of such a magnetic interface is obtained both in the ballistic and the diffusive regimes as a function of the Fermi energy and of the homogeneous background magnetic field. The results are compared with those for the case of a homogeneous field. I. INTRODUCTION Investigations of reduced dimensionality semiconductor systems is frequently connected with the use of a magnetic field, which, in addition to the lateral confinement, quantizes the carrier motion also in the plane normal to the magnetic field. Particularly, a two-dimensional electron gas (2DEG) exposed to a homogeneous magnetic field has proven to be an extremely rich subject for investigations in theory and experiment 1 . In the last several years a more complex situation of reduced dimensionality semiconductor systems exposed to a non-homogeneous magnetic field has attracted considerable interest 2 . Different experimental groups have succeeded in realizing such systems [3][4][5] . High mobility 2DEGs are formed in standard GaAs/AlGaAs heterojunctions and the spatial modulation of the magnetic field is achieved by depositing patterned gates of superconducting or ferromagnetic materials on top of the heterostructure. An alternative approach to produce non-homogeneous magnetic fields is by varying the topography of an electron gas 6 . These new technologies opened up a new dimension for investigations of reduced dimensionality semiconductor systems. Characterizing and understanding transport properties of these systems are crucial both for fundamental physics and for device applications. Theoretically the transport properties of reduced dimensionality semiconductor systems subjected to a spatial dependent magnetic field have been addressed in several recent works. The possibilities of the creation of periodic superstructures by a non-homogeneous magnetic field were investigated in Refs. 7-9. The magnetic field dependence of the conductance of a ballistic quantum wire a finite section of which is subjected to a magnetic field 10 and of a 2DEG through an orifice 11 was investigated. The single-particle energy spectrum of a 2DEG subjected to a non-homogeneous magnetic field was calculated for different step-like 12 , linearly 13,14 , and parabolically (in the transverse direction of a one dimensional channel) 15 varying with position, and for other functional magnetic field profiles 16,17 . It has been shown that the spectrum consists of states that propagate normal to the field gradient and have remarkable time-reversal asymmetry 13 in a linearly varying magnetic field while the spatial distribution of electron and current densities has a rich structure related to the energy quantization 14 . 
Transport properties of a 2DEG in a magnetic superlattice have been investigated in weakly 9 and strongly 18 modulated magnetic fields normal to the electron sheet. The combined effect 9 of the spatially periodic electrostatic and magnetic fields of arbitrary shape has been studied 19,20 . Analysis of the weak localization and calculation of the Hall and magneto-resistivities of the 2DEG in a non-homogeneous magnetic field have been presented [21][22][23][24] . Recently, different magnetic structures of nanometer scale have been realized experimentally: a magnetic antidot by depositing a superconducting disk on top of a 2DEG 25,26 , a large amplitude magnetic barriers [27][28][29][30][31][32][33] and structures with a magnetic field alternating in sign 34 have been produced by a single or by an array of ferromagnetic lines fabricated on the surface of the heterostructure in hybrid semiconductor/ferromagnet devices. This realization of different magnetic regions in an electron gas with sharp boundaries was a challenge for theoretical studies of the one-particle electronic states (or the magnetic edge states) moving along the magnetic interfaces in quantum waveguides 35 , quantum dots 36,37 , and in infinite 2DEGs 38,39 exposed to a non-homogeneous magnetic field. The aim of the present paper is to investigate the magnetic edge states and their transport properties (in the ballistic and diffusive regimes) in a one-dimensional (1D) channel formed by a parabolic confining potential and exposed to a normal non-homogeneous magnetic field. Structures are studied where the magnetic field changes its sign, strength, and both sign and strength at the magnetic interface. Such a system was recently realized experimentally 40 by depositing a ferromagnetic stripe on top of the electron gas and by applying a background magnetic field normal to the electron gas. Varying the background field results in all the above situations. We calculate rigorously the energy spectrum and the wave functions of these systems by matching the general solutions of the Schrödinger equation at the magnetic interface. The corresponding group velocities along the interface and the particle average position normal to the interface are obtained. Using the results for the spectrum, we calculate the conductance and the conductivity in the ballistic and diffusive regimes. The paper is organized as follows. In Sec. II we present the method we use to obtain the spectrum. In Sec. III we carry out actual calculations of the energy spectrum and the wave functions, the group velocity along the interface and the particle average position normal to the interface for the three different cases when the magnetic field changes its sign, strength, and both sign and strength at the magnetic interface. We analyze the dependence on the confining potential strength and on the magnetic field strength in one side of the interface while the magnetic field in the other side is kept fixed. In Sec. IV we calculate the conductance and the conductivity in the ballistic and diffusive regimes both as a function of the Fermi energy and of the background magnetic field. The results are compared with those in case of a homogeneous magnetic field. The results are summarized in Sec. V. II. 
APPROACH

We investigate the magnetic edge states in a one-dimensional electron channel along the y-direction, formed by the parabolic confining potential V(x) and exposed to a normal non-homogeneous magnetic field B_z(x) = B_1 and B_z(x) = -B_2 on the left and right hand side, respectively, of the magnetic interface located at x = 0 (see Fig. 1). This system is placed in a homogeneous background magnetic field B_z(x) = B_b. Varying the background magnetic field from -B_b to B_b (B_b > B_1, B_2) allows for situations where the effective non-homogeneous magnetic field changes its sign, its strength, or both its sign and strength at the magnetic interface. In any finite region along the x-direction where the magnetic field is uniform, the system is described by the single-particle Hamiltonian, where m* is the particle effective mass and V(x) = m*ω_0²x²/2 is the confining potential with strength ω_0. Because of the translational invariance of the system in the y-direction, we choose for the vector potential the Landau gauge A = (0, Bx, 0). In this gauge the Schrödinger equation can be separated with an ansatz in which ψ is an eigenstate of the one-dimensional problem. Here we introduce the following notations: ν is the particle transverse energy in units of the oscillator frequency ω* = (ω_B² + ω_0²)^{1/2}, ω_B is the cyclotron frequency, and ε and k are the energy and the momentum of the particle. The coordinate of the center of orbital rotation is X(k) = k l* ω_B/ω* in units of the length scale l* = (ℏ/m*ω*)^{1/2} related to ω*. In the longitudinal direction the electron acquires a new, field-dependent mass m_B = m*ω*²/ω_0², which is larger than the effective mass m*. Equation (3) is to be solved under the boundary conditions ψ(x - X(k)) → 0 when x → ±∞. The solutions are the parabolic cylinder functions 41, with 1F1(a; b; x) the Kummer function. For any value of ν there are two independent solutions. In the non-homogeneous magnetic field case, ν and X are different on the left and right hand side of the magnetic interface, and we construct the wave function piecewise; the indices 1, 2 refer to the values of the quantities for which ω* = (ω_B² + ω_0²)^{1/2} is taken with B = B_1 and B = B_2, respectively. Matching this wave function and its derivative at x = 0 leads to the dispersion equation (6). By solving this equation we obtain the energy spectrum ε_n(k) and the wave functions ψ_{ν1,ν2}(x, X_1, X_2) of the magnetic edge states, which are the solutions of the one-dimensional problem with the effective potential V_eff(x, k). The shape of V_eff(x, k) depends strongly on the sign of the wave number k and on the magnetic field profile. In the symmetric case (B_1 = -B_2) the dispersion equation (6) breaks into two pieces, i.e. the zeroes of the parabolic cylinder function and of its derivative give the single-particle spectrum of the magnetic edge states. Notice that the first equation gives the spectrum of the usual edge states in a uniform magnetic field for an infinite hard-wall confining potential [42][43][44][45]. First we find the spectrum from Eqs. (7) using the asymptotics of the parabolic cylinder functions and their derivatives in the limits k → ±∞. In the limit k → +∞ (x ≫ 1, x ≫ 2ν^{1/4}) we use the asymptotic forms of the parabolic cylinder functions 46 and their derivatives and find the energy, Eq. (9). The corresponding electron orbits have their center of orbital motion located far from the magnetic interface, i.e.
X(k) ≫ 1, but they are on the same side of the magnetic interface as the moving particle. The energy differs only exponentially from the energy of the hybrid states of the uniform magnetic field case. The exponentially small interaction between the hybrid states located at a finite distance on either side of the magnetic interface shifts the energy levels up and down by exponentially small amounts with respect to the bare spectrum of the hybrid states of the uniform field case. The wave function of each level of the magnetic edge states is represented by a curve with two peaks; the peaks are situated far from the magnetic interface, in the positive and the negative magnetic field regions, and are connected by an exponentially attenuating "tail" of the wave function near the interface. In the two-dimensional case, when ω_0 → 0, m_B → ∞ and the particle velocity due to the confining potential becomes zero, the exponential corrections to the energy result in exponentially small velocities of opposite sign for the states shifted up and down in energy. Notice that the exponential corrections in Eq. (9) cannot be obtained from quasiclassical considerations. In the opposite limit k → −∞ (x²/2ν ≪ 1) we use the oscillatory asymptotic forms of the parabolic cylinder functions 46 and their derivatives and find the corresponding energy. This spectrum characterizes the particle motion in snake orbits, i.e. in trajectories whose centers of orbital motion are located far from the magnetic interface, X(k) ≫ 1, but on the opposite side of the magnetic interface from the moving particle. To first order in k the confining potential has no influence on the energy along the y-direction and the particle mass is the free electron mass m*. This is because the effective potential minimum V_eff(x) = ℏ²k²/2m* at x = 0 does not depend on the confining potential strength ω_0 for negative k (see Fig. 2, where we summarize the three different shapes of the effective 1D potential for the symmetric system). The exact spectrum, shown in Fig. 3, is described by a discrete quantum number n = 0, 1, 2, ... and a continuous momentum k. For a given n the energy spectrum exhibits a pronounced asymmetry with respect to positive and negative values of k. The corresponding group velocities v_n along the interface and the particle average semi-thickness Δx_n normal to the interface are shown in Fig. 4 (in this symmetric system, B_1 = −B_2, the particle average position ⟨x⟩_n is zero, and we therefore calculated the quantity Δx_n = ⟨(x − ⟨x⟩_n)²⟩^{1/2}). It is seen from Figs. 2 and 4 that for large negative values of k the particles are confined to a narrow region around the magnetic interface. The corresponding wave functions are single-peak curves localized near the magnetic interface. These states correspond to the snake orbits, which wiggle around the magnetic interface, moving alternately in the positive and negative magnetic field regions. Since the coordinate of the orbit center, X(k), increases with k, the radius of the orbit must also increase to keep the particle moving on the opposite side of the magnetic interface. This requires the energy to increase with k, and therefore all these snake states acquire a large velocity along the interface, which increases approximately linearly in k, while the width Δx_n of the snake orbits decreases with k and reaches its minimum value.
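The dispersion relation obtained from matching the wave function and its derivative at x = 0 can be solved numerically. The sketch below, in units where ℏ = m* = 1, assumes that the decaying branch on each side is the parabolic cylinder function D_ν of the appropriate argument and uses the textbook parabolic-channel relations between ν, the energy, and the orbit centre; the sign conventions and parameter values are illustrative and may differ from the paper's exact expressions.

```python
import numpy as np
from scipy.special import pbdv
from scipy.optimize import brentq

# Units with hbar = m* = 1; all parameter values are illustrative.
omega_0 = 0.5                    # confining potential strength
omega_B1, omega_B2 = 1.0, 1.0    # cyclotron frequencies on the left/right of the interface

def side_params(omega_B):
    """omega*, l* and longitudinal mass m_B for one side of the interface."""
    omega_star = np.hypot(omega_B, omega_0)     # omega* = sqrt(omega_B^2 + omega_0^2)
    l_star = 1.0 / np.sqrt(omega_star)          # l* = sqrt(hbar / (m* omega*))
    m_B = omega_star**2 / omega_0**2            # m_B = m* omega*^2 / omega_0^2
    return omega_star, l_star, m_B

def mismatch(energy, k):
    """Matching condition at x = 0: Wronskian of the two decaying branches."""
    w1, l1, mB1 = side_params(omega_B1)
    w2, l2, mB2 = side_params(omega_B2)
    X1 = k * omega_B1 / w1**2                   # orbit centres (schematic sign convention)
    X2 = k * omega_B2 / w2**2
    nu1 = (energy - k**2 / (2.0 * mB1)) / w1 - 0.5
    nu2 = (energy - k**2 / (2.0 * mB2)) / w2 - 0.5
    z1 = -np.sqrt(2.0) * (0.0 - X1) / l1        # branch decaying for x -> -infinity
    z2 = np.sqrt(2.0) * (0.0 - X2) / l2         # branch decaying for x -> +infinity
    f1, df1 = pbdv(nu1, z1)                     # D_nu(z) and dD_nu/dz
    f2, df2 = pbdv(nu2, z2)
    dpsi1 = -np.sqrt(2.0) / l1 * df1            # chain rule: derivative with respect to x
    dpsi2 = np.sqrt(2.0) / l2 * df2
    return f1 * dpsi2 - f2 * dpsi1

def band_energies(k, e_max=8.0, n_scan=400):
    """Scan in energy and bracket the roots of the dispersion relation for a given k."""
    grid = np.linspace(0.05, e_max, n_scan)
    vals = np.array([mismatch(e, k) for e in grid])
    roots = []
    for i in range(n_scan - 1):
        if np.sign(vals[i]) != np.sign(vals[i + 1]):
            roots.append(brentq(mismatch, grid[i], grid[i + 1], args=(k,)))
    return roots

# Example: energies of the lowest magnetic edge states at k = 1.0
# print(band_energies(1.0))
```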
For positive values of k a triangular like barrier is developed at x = 0 and the effective potential becomes the double well (see Fig. 2). The height of the barrier is V ef f (x) = ℏ 2 k 2 /2m * at x = 0 while the well minima are ℏ 2 k 2 /2m B at x = ±X(k). For small positive values of k the particle motion is still snake-like around the magnetic interface. The momentum and the velocity of these states have opposite sign. Starting from some value k n > 0 the ground state electron wave functions is split into the two peaks by the barrier. For large positive values of k the spectrum characterizes the exponentially weak coupled two hybrid states located at x ≈ ±X(k) in the positive and negative magnetic field regions and therefore rotate in opposite direction. The velocity of these hybrid states is mainly due to the confining potential and directed opposite to the velocity of the snake states. The absolute value of the velocity is determined by the height of the minima of the effective potential wells, i.e. by the mass m B which is larger than the free electron mass m * , therefore the snake states are faster than the hybrid states. For large positive values of k both the group velocities v n and the particle average semi-thickness ∆x n increase approximately linearly in k (see Fig. 4). The velocity (the average semi-thickness) of the symmetric and anti-symmetric states tend respectively to its asymptotic value v n = ℏk/m B (∆x n = X) from above (below) and below (above). B. Asymmetric system: The effective potential for this asymmetric system where the magnetic field changes both its strength and sign at the magnetic interface is shown in Fig. 5. In this case the effective potential V ef f (x, k) exhibits a pronounced asymmetry both as a function of k and x. For negative values of k, the effective potential is a triangular-like asymmetric well with a minimum of V ef f (x) = ℏ 2 k 2 /2m * at x = 0. For positive values of k the effective potential is a double well with different minima ℏ 2 k 2 /2m B 1 and ℏ 2 k 2 /2m B 2 at the positions x = +X 1 (k) and x = −X 2 (k), respectively. The triangular like barrier between the wells has again the height V ef f (x) = ℏ 2 k 2 /2m * at x = 0. Thus the confining potential together with the nonhomogeneous magnetic field induces three effective masses (m * for negative and m B 1 , m B 2 for positive values of k) in the system. For negative values of k, the spectrum corresponds to snake orbits with free-like motion and with mass m * along the y-direction (see Fig. 6). These states are effectively localized in the vicinity of the magnetic interface in the region where the magnetic field is smaller and the magnetic length is larger. The group velocity is approximately linear (see Fig. 7) and the particle average position x n is approximately independent of the wave number (see Fig. 8). The n = 1 level is the closest to the magnetic interface and most remote from the n > 1 states. For positive k the spectrum characterizes the hybrid states with two different masses m B 1 and m B 2 . Each energy band n has n anti-crossings with these hybrid states. For some positive value of k the group velocity v n and the particle average position x n start to oscillate as a function of the wave number and the particle tunnels periodically from the left to the right side of the quantum wire and vice versa (see Fig. 8). At k → +∞ all states tend to be localized in the region where the magnetic field is large and the well of the effective potential is lower. 
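Once the bands ε_n(k) have been tabulated on a k-grid, the group velocities v_n = (1/ℏ) dε_n/dk plotted in Figs. 4 and 7 follow from a simple finite-difference estimate; a minimal sketch with placeholder band arrays is shown below.

```python
import numpy as np

hbar = 1.0

# Placeholder band energies eps[n, i] on a wave-number grid k_grid[i]; in practice these
# would come from the dispersion-equation solver sketched above.
k_grid = np.linspace(-6.0, 6.0, 241)
eps = np.vstack([0.5 + n + 0.1 * k_grid**2 for n in range(5)])

# Group velocity v_n(k) = (1/hbar) d(eps_n)/dk, estimated band by band by finite differences
v = np.gradient(eps, k_grid, axis=1) / hbar
```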
Contrary to the symmetric system, the ground state wave function consists now of a curve with one peak. At the anti-crossing points the wave function changes sign and its peak position shifts rapidly by changing its sign and value (see Fig. 9). This corresponds to a tunneling of the particle from the one well of the effective potential to the other. This picture is true even if there is only a small difference between the magnetic field on both sides of the interface, which brakes the symmetry of the system and the particle is forced to choose one of the wells. C. Asymmetric system: In such systems in which at the magnetic interface the magnetic field changes only its strength, the effective potential consists of only one well (see Fig. 10). For negative values of k the well is located at x = −X 2 (k) with minimum value ℏ 2 k 2 /2m B 2 . It is higher and broader than the well for positive values of k with minimum ℏ 2 k 2 /2m B 1 at the position x = +X 1 (k) (B 1 > B 2 ). There are no snake states in this system and the spectrum for both large negative and positive values of k characterizes hybrid states with masses m B 2 and m B 1 (Fig. 11). These states rotate in the same direction on both sides of the magnetic interface and their group velocities have opposite sign. Both the velocity and the particle average position of any level n are approximately linear in k for large negative and positive k and tend respectively to their asymptotic values v n = ℏk/m B 2 , x n = −X 2 (k) and v n = −ℏk/m B 1 , x n = X 1 (k), which are independent of n (see Fig. 12). For intermediate values of k the velocity and the particle average position exhibit smooth oscillations as a function of k. For these values of k the corresponding states are confined in the effective potential which consists of two partial parabolas with different strengths. With varying k the influence of each parabola changes strongly in contributing to these states. D. Dependence on the confining potential strength In Figs. 13 (a-d) the dependence of the energy on the confining potential frequency ω 0 is depicted when B 1 = −3B 2 and B 1 = 3B 2 both for kl * = ±2. It is seen that there is a strong asymmetry with respect to the sign of the magnetic field and of the wave number. For the systems where the magnetic field changes its sign and strength at the magnetic interface, the states with kl * = −2 correspond to snake orbits. In this case the dependence on ω 0 is very weak because the effective potential consists of only one well with minimum value V (0) = 2 k 2 /2m * , as we mentioned above, which does not depend on ω 0 . Varying ω 0 changes only the sharpness of the banks of the potential well, which results in a much weaker dependence of the energy bands on k. In the case of kl * = +2 (this corresponds to the region of two hybrid states with different masses m B 1 and m B 2 ) the energy strongly depends on ω 0 . For ω 0 = 0 we have a two-dimensional system 38,39 and some states are very close in energy due to the special choice of the ratio of the two magnetic fields B 1 /B 2 which is an integer. For small values of ω 0 , the energy increases strongly with ω 0 . Several anticrossings appear between the energy bands with different n for this choice of parameters. With further increase of ω 0 the increase of energy becomes weaker and the simple oscillatory states with B 1,2 ≈ 0 correspond to the limit ω 0 /ω B ≫ 1. 
In the systems where the magnetic field changes only its strength at the magnetic interface, both kl* = +2 and kl* = −2 correspond to hybrid states, on the left and right side of the interface respectively, and the energy dependence on ω_0 is qualitatively similar for these states. The quantitative difference results from the different values of the masses m_B1 and m_B2 (e.g. m_B1/m_B2 = 45/13 for ω_B1 = 2ω_0 = 3ω_B2). It is easy to see that for ω_0 = 0 the energy takes values near 1/6, 1/2, 5/6, ... if kl* = −2 and 1/2, 3/2, 5/2, ... if kl* = +2, as it should for a two-dimensional system 38,39.

E. Dependence on the magnetic field B_2

In Figs. 14 (a,b) we plot the energy dependence on the magnetic field B_2 for fixed B_1, for the confining potential frequency ω_0/ω_B1 = 1/2 and for kl* = ±2.25. Notice that there is a strong asymmetry with respect to the sign of k. When kl* = −2.25 the energy dependence on B_2 is weaker for negative B_2 than for positive B_2, because in the first range the spectrum characterizes the snake states and the dependence on B_2 is mainly due to the dependence of ω*_2 on B_2, while in the second range the spectrum characterizes the hybrid states and the energy dependence arises from the dependence of both ω*_2 and m_B2 on B_2. It is easy to see that for B_2/B_1 = ±1, ±1/3 the energy values are consistent with those in Figs. 3, 6, and 11. For kl* = +2.25 the first energy level is almost independent of B_2. For positive values of B_2 the spectrum describes only the hybrid states, and for the chosen large value kl* = +2.25 the energy approximately equals its asymptotic value, ℏω*_1/2 + ℏ²k²/2m_B1, which is independent of B_2. For negative values of B_2 the energy is again approximately that of the hybrid states with the same energy, because kl* = +2.25 is larger than the value k_1l* at which the ground state exhibits an anti-crossing. Analogous behavior is found for the second energy level, but now, starting from some negative value of B_2, this level, after anti-crossing with the level n = 3, tends towards the first level because of the degeneracy of the symmetric and anti-symmetric terms in the B_1 = −B_2 symmetric system (see Fig. 3). For B_2 < −B_1 the degeneracy is lifted and the energy increases with |B_2|.

IV. TRANSPORT

We calculate the zero temperature two-terminal magneto-conductance for a perfect conductor using the Büttiker formula 47, G = (2e²/h)N(E), where N(E) is the number of magnetic edge states with energy E and positive velocity. From Fig. 15 it is seen that the conductance in the ballistic regime, for the different magnetic field profiles, exhibits stepwise variations as a function of the Fermi energy. For a given energy and confining potential strength, the conductance in the non-homogeneous magnetic field is nearly twice that of the homogeneous field case. The conductance decreases when going from the profile B_1 = −3B_2 to the profiles B_1 = −B_2 and B_1 = +3B_2. For the symmetric profile a narrow plateau is followed by broad ones; this asymmetry is not visible for the other profiles. The conductance is the same in the positive and negative directions along the magnetic interface, despite the strong asymmetry in the magnitude of the velocities of the states moving in opposite directions. The conductivity in the diffusive regime is calculated in the relaxation time approximation, Eq. (14), where L is the length of the quantum wire, τ is the momentum relaxation time, and f_T is the Fermi-Dirac distribution function at temperature T.
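Given such band and velocity arrays, the ballistic conductance at a chosen Fermi energy reduces to counting the forward-propagating modes, G(E_F) = (2e²/h) N(E_F); a small illustrative counting step, reusing the placeholder arrays from the previous sketch, is shown below.

```python
import numpy as np

def ballistic_conductance_modes(eps, v, e_fermi):
    """Count Fermi-level crossings with positive group velocity; multiply the result
    by 2e^2/h to obtain the two-terminal conductance G(E_F) = (2e^2/h) N(E_F)."""
    n_modes = 0
    for band_e, band_v in zip(eps, v):
        # sign changes of (eps_n(k) - E_F) mark Fermi-level crossings of band n
        crossings = np.where(np.diff(np.sign(band_e - e_fermi)) != 0)[0]
        n_modes += sum(1 for i in crossings if band_v[i] > 0)
    return n_modes

# Example with the placeholder arrays eps and v defined in the previous sketch
# print(ballistic_conductance_modes(eps, v, e_fermi=2.0))
```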
We calculate the conductivity in the zero temperature limit, where the derivative of the Fermi function becomes a δ-function. Replacing all quantities in Eq. (14) by their values at the Fermi energy, we obtain the zero-temperature expression for the conductivity. In the diffusive regime we have calculated separately the conductivity due to states with negative and with positive velocities, as a function of the Fermi energy, for the two magnetic field profiles B_1 = −3B_2 and B_1 = +3B_2 (see Fig. 16). In both cases the conductivity due to states with negative velocities (dashed curves) is larger than that due to states with positive velocities (dotted curves). When the magnetic field changes its sign, the states with negative velocities are the snake states, which are always faster than the states with positive velocities, which are related to the hybrid states. In the case B_1 = +3B_2 all the states are hybrid states; however, the contribution to the conductivity of the states with negative velocities is larger, because these states are located in the region with the smaller magnetic field and therefore have the smaller mass m_B and the larger velocity v_n. For both the v_n > 0 and the v_n < 0 parts, the conductivity has an oscillating structure as a function of the Fermi energy, which is due to the divergence of the density of states at the bottom of each ε_n(k) band. However, the contributions due to states with v_n > 0 exhibit an additional structure related to the oscillations of the group velocity as a function of k. This structure is more pronounced in the case B_1 = −3B_2, where the conductivity has additional distinct minima that reflect the tunneling effect discussed above. Notice that the conductivity of the system with the magnetic field profile B_1 = −3B_2 is roughly 1.5 times larger than that for the profile B_1 = +3B_2. In Fig. 17 the magnetic depopulation diagram is plotted as a function of the background magnetic field B_b and the wave number k for the initial magnetic field profile B_1 = 2B_0 = −3B_2 (B_0 is the resonance field for which ω_0 = ω_B0) and for the Fermi energy E_F = 5ℏω*_1. In the shaded region the effective magnetic field changes its sign at the magnetic interface. At the background magnetic field B_b = 0 there are 18 current-carrying states, represented by the solid dots in the figure; the left 9 symbols correspond to snake states, while the right 9 correspond to hybrid states with both the m_B1 and m_B2 masses. The maximum number of current-carrying states, 20, is reached in the small region of background magnetic field around B_sym = −(2/3)B_0, where the effective magnetic field on the two sides of the magnetic interface has equal strength, (4/3)B_0, and opposite sign. The dependence on the background magnetic field is symmetric with respect to this point. Outside the shaded region the current-carrying states are only the hybrid states. At the edges of the shaded region the effective magnetic field becomes zero on either the left or the right side of the magnetic interface. When the absolute value of the background magnetic field increases starting from the value B_sym, the number of current-carrying states decreases monotonically. In Figs. 18 and 19 we plot the magneto-resistance as a function of the background magnetic field in the ballistic and diffusive regimes, respectively, for the situation corresponding to Fig. 17. The resistance in the ballistic regime exhibits stepwise variations as a function of B_b and has a minimum at B_b = B_sym, which is shifted with respect to the minimum of the resistance in the homogeneous magnetic field (thin dashed curve).
In the latter case the dependence on B_b is stronger because the number of current-carrying states is smaller. In the diffusive regime the resistance exhibits small peaks as a function of B_b that are associated with the magnetic depopulation effect and that sit on top of a positive magneto-resistance background, which increases with B_b when B_b has the same sign as the initial magnetic field of the region where the field is larger. For small values of B_b the resistance in the homogeneous field is smaller, and the slope of the resistance variation with B_b in the homogeneous field case is larger than for the non-homogeneous fields. Notice that the minima of the conductivity due to the hybrid states with v_n > 0 that are associated with the tunneling effect (see Fig. 16) are not visible in the background-magnetic-field dependence of the resistance in Fig. 19. This is possibly due to the particular choice of the initial parameters (B_1 = 2B_0 = −3B_2) and of the Fermi energy: the possible peak values of the resistance lie outside the small range of variation of B_b (from 0 to (2/3)B_0) in which the effective magnetic field changes its sign. Moreover, the effective magnetic field on one side of the magnetic interface is given by −B_2 + B_b, i.e. increasing the background magnetic field diminishes the effective magnetic field, namely in the region where the initial magnetic field is smaller.

V. SUMMARY

We developed a theory for the non-homogeneous magnetic field induced magnetic edge states and their transport in a quantum wire formed by a parabolic confining potential. We studied systems in which the magnetic field perpendicular to the wire axis exhibits a discontinuous jump in the transverse direction and changes its sign, its strength, or both its sign and strength at the magnetic interface. The energy spectrum and the wave functions of the magnetic edge states were calculated by matching the general solutions of the Schrödinger equation at the magnetic interface. The corresponding group velocities along the interface and the particle average positions normal to the interface were obtained. The spectrum consists of alternating symmetric and anti-symmetric terms and is described by a discrete quantum number n = 0, 1, 2, ... and the momentum k along the wire. For given n the energy spectrum exhibits a pronounced asymmetry with respect to positive and negative values of k and describes snake orbits and hybrid states. Contrary to two-dimensional systems 38,39, the confining potential together with the non-homogeneous magnetic field induces three effective masses, which account for most of the system's properties. When the magnetic field changes its sign and strength, all states with negative momenta (the snake orbits) are effectively localized in the vicinity of the magnetic interface, in the region where the magnetic field is small. The group velocity is approximately linear in the momentum and the particle average position is approximately independent of it. For positive momenta the spectrum exhibits anti-crossings, the group velocity and the particle average position oscillate as functions of the momentum, and the particle tunnels periodically from the left to the right side of the quantum wire and vice versa. At k → +∞ all states tend to be localized in the region where the magnetic field is large. The conductance in the ballistic regime exhibits stepwise variations as a function of the Fermi energy and of the background magnetic field.
For a given energy and confining potential strength, the conductance in the non-homogeneous magnetic field is nearly twice that in the case of a homogeneous field. The conductance has a maximum as a function of B b at the value for which the effective magnetic field on the left and on the right hand side of the magnetic interface has the same strength and opposite sign. In the diffusive regime, we calculated separately the conductivity for negative and positive velocities as a function of the Fermi energy and the background magnetic field B b . The conductivity due to states with negative velocities is large. The conductivity oscillates as a function of the Fermi energy. The contributions due to states with positive velocities exhibit an additional structure related to the oscillations of the group velocity as a function of k. In the systems where the magnetic field changes its sign this structure is more pronounced with additional distinct minima related to the tunneling of the particle between different magnetic regions. The resistance exhibits small peaks as a function of background magnetic field that are associated with magnetic depopulation effects. ACKNOWLEDGEMENT This work was partially supported by the Flemish Science Foundation (FWO-Vl), the Inter-university Micro-Electronic Center (IMEC, Leuven), the "Onderzoeksraad van de Universiteit Antwerpen", and the IUAP-IV (Belgium). S. M. B. was supported by a DWTCfellowship to promote the S & T collaboration with Central and Eastern Europe and a CRDF grant No 375100. We acknowledge fruitful discussions with J. Reijniers. FIGURES FIG. 1. Schematic diagram of the system under study. The quantum wire is formed along the y-axis by a parabolic confining potential V (x) = m * ω 2 0 x 2 /2 in the x-direction. In the z-direction a non-homogeneous magnetic field B z = B 1 and B z = −B 2 is applied respectively on the left and the right hand side of the magnetic interface at x = 0. A homogeneous background magnetic field, B z = B b , can be additionally applied. The channel width is W , the length scale l 2 0 = /(m * ω 0 ) is related to the confining potential strength ω 0 , m * is the electron effective mass, E F and k F are the Fermi energy and momentum, respectively. Fig. 6. The symbols correspond to the classical trajectories for k = 1.5 in Fig. 5. FIG. 9. The particle wave functions corresponding to the 2 lowest energy bands in Fig. 6 for several values of the wave number k for n = 1 (a) and n = 2 (b). FIG. 10. The effective potential V ef f (x, k) for the asymmetric magnetic field profile B 1 = +3B 2 . V ef f (x, k) is always a single well with different heights and widths for k < 0 and k > 0. The classical trajectories for the hybrid states at k = 0, ±1.5 are shown schematically. FIG. 11. The energy spectrum for the 7 lowest bands corresponding to the situation of Fig. 10. FIG. 12. The group velocity along the magnetic interface (the thick curves, the right and top axis) and the particle average position (the thin curves, the left and bottom axis) corresponding respectively to the 7 and 5 lowest energy bands in Fig. 11.
2019-04-14T01:56:45.446Z
2000-11-27T00:00:00.000
{ "year": 2000, "sha1": "4ad11d0e9c6f43bd30f9dd2ed53cbdc1cda76c4f", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/0011445", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "b420316fb829bebedaa3473ed62c59d96f608c0e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
228820972
pes2o/s2orc
v3-fos-license
Dry Port: A Review on Concept, Classification, Functionalities and Technological Processes

Abstract: The purpose of this article is to offer a literature review on the development and classification of inland terminals, later defined as "dry ports". The aim of the paper is to analyze the extant literature on dry ports focusing on their concept, classification, function and technological processes. The review offers an updated structured approach to what is currently defined as a dry port. To this end, a structured keyword search in major electronic databases has been conducted to find related material. As there are many different names indicating dry ports in European, South East Asian and North American countries, the following keywords were used: "dry port", "inland terminal", "freight village" and "interporto/i". The search was conducted in respect of the article title and text, abstract and keywords. The results show that there is no unanimous consensus concerning the cataloguing of terrestrial nodal facilities serving port gateways. "Dry ports" have emerged as fundamental elements of the integration between the sea "system" and the land network. The increased interest in the genesis and development of dry ports has been accompanied by an abundant contribution of the scientific community, originating a thriving literature which, however, does not find a common denominator.

Introduction

Historically, ports have always been a privileged gateway for commercial sea-trade. However, only once containers had become firmly established did ports find themselves increasingly facing the issue of their relationship with their respective hinterlands. Over time, concepts such as "integrated logistics" and "supply chain", together with a growing understanding of environmental problems connected to the distribution chain, have strengthened this new approach.
Introducing a neologism, the concept of "dry port" has paved its way, rising as a key element for the integration of sea "system" and dry land networks.It is not only a mere physical extension of the storage capacity of quays, but most importantly a tool by which it amplifies the dimension of the port, namely its "catchment area".The increasing amount of interest in the appearance and development of dry ports, most commonly referred to as "inland terminals", has been nurtured, starting from the 1980's and up to present time, by a growing contribution by the scientific community, leading to abundant literature on this topic. This article aims at offering a literature review on the development and classification of inland terminals, later defined as "dry ports". The main contribution of this paper may be summarized as follows: by applying a systematic approach, this study offers an overview of research carried out on inland terminals, focusing on the development, classification and technological processes.These are key elements, which indicate current research trends on dry ports.In the authors' opinion it is crucial to verify whether, nowadays, it is possible to find a shared definition of dry port and which are its main classifications.Furthermore, in order to complete the analysis, it is essential to verify the technological elements of terminal activities carried out aiming at better quality of cargo handling. It needs to be considered that, in recent years and even more so nowadays, activities and technical characteristics are changing very fast, for instance those related to customs. The sequel of this paper is organized as follows: Section 2 presents the background of this topic; Section 3 describes the research approach to the systematic literature review and data collection used for the analysis; in Section 4, the results are shown and discussed, while final conclusions are drawn in the last Section. Background of the Concept of Dry Port An inland terminal serving a port was initially called a "dry port" [1].The first studies concerning inland terminals date back to the early 1980s.In his work, Munford [2] addresses the issue of increasing congestion in port gates.The expression "dry port" is initially used in order to describe a facility primarily directed at solving this problem, by re-distributing flows of goods arriving by sea.The United Nations Conference on Trade and Development-UNCTAD- [3] suggests the following definition: "An inland terminal to which shipping companies issue their own bill of lading for import cargoes assuming full responsibility of cost and conditions and from which shipping companies issue their own bill of lading for export cargoes". 
However, there seems to be no exact or at least univocal definition for an inland terminal [4][5][6][7][8][9][10][11][12][13][14][15][16].The latter is, in fact, part of a far wider category, comprising logistics facilities of various kinds and sizes, which are not necessarily a constituent or part of a port cluster: dry port, inland terminal, inland port, inland hub, inland logistics center and freight village.Historically, the first freight villages were established in France in the early 1960s [17].A freight village has been defined as an area organized for carrying out all activities related to transport, logistics and distribution of goods, both at a domestic and international level, which are performed by various operators [18][19][20][21][22][23].Specifically, a freight village belongs to the category called "interporto" in Italy. A common denominator for the above-mentioned facilities is that they provide a more or less ample and specialized variety of logistics services.In this regard, an "interporto" is maybe the organically most complex facility among them, just as is presumed by the more exhaustive definition of it provided by Italian law n.240/90 [24].In the following years, scientific literature used various methodologies in order to define inland structures.During the nineties of last century, for example, Beresford and Dubey [25] used the expression "dry port" for defining tax warehouses.These authors described the aspects concerning incorporation, and even the services a "dry port" should provide, particularly customs services, but they did not further specify kinds of connections and relationships with ports. Slack [26] contributes by stressing the relevance of intermodal transport for the development of inland structures, which are entrusted with an inland transshipment task.He points out satellite terminals as the solution for port congestion and lists four logistics functions they may not ignore: modal transfer between two transport modalities; consolidating goods for transport preparation; stocking goods waiting for shipment; delivery to the recipient. Jaržemskis and Vasiliauskas [27] describe a dry port as "a port situated in the hinterland servicing an industrial/commercial region connected with one or several ports by rail and/or road transport and offering specialized services between the dry port and the transmarine destinations.Normally the dry port is container and multimodal oriented and has all logistics facilities, which is needed for shipping and forwarding agents in a port". The main reason for the above terminological differences is the way the facilities look in different geographic areas.The concept also varies as a result of scale, complexity and field of specialization [28] and because of the position and role played within a transport network. 
Inland facility classification thus comes to depend on several parameters [7,[29][30][31].The most relevant among them is the one referring to the more or less developed co-modal prerogatives (mono modal road terminal, terminal for combined road-rail transport, terminal for combined road-inland waterway transport, terminal for both mentioned kinds of combined transport); second by importance is the parameter of logistics functions, i.e., the variety of more or less specialized services to goods which add up to transport (customs procedures, warehousing and manufacturing operations, up to retail or wholesale activities).Roso [32] introduces further parameters for differentiation: closeness to the port hub (close, at medium distance, distant) and nature of the ownership (ports owned by railway operators, peripheral public administrations or public-private companies). Nonetheless, the most common terms used for describing such facilities are: "inland terminal", "dry port" and "inland port".They are frequently used in order to generically define inland terminals where various handling and value implementation activities are offered [33][34][35]. The expression "dry port" is the one most commonly used, among those listed above, for a facility behind a port, frequently called "inland customs warehouse" [36].The European Commission [37] identifies as a "dry port" an inland terminal directly connected to the port by a railway transport service.Harrison et al. [38] take into consideration the role played by dry ports in "serving" the region they are located in, by their intermodal terminals which are part of them, since they are a consolidation point for goods and a transshipment facility among the different available modalities. Roso [32] and Roso and Lumsden [39] offer a definition of "dry port" which stresses its connection with the port: "an inland intermodal terminal directly connected to seaport(s) with high capacity transport mean(s), where customers can leave/pick up their standardized units as if directly to a seaport".The same concept was followed also by Rodrigue and Notteboom [40], Qiu et al. [13], Crainic et al. [41], Nguyen and Notteboom [42] and Talley and Ng [43]. Dry ports can facilitate more feasible and efficient combinations of sea cargo flows and inland flows [44] "especially with rail and truck combinations and through providing value-added services offerings at nodes" [45]. In the last decade many researchers identified success factors for dry ports related to specific cases; Black et al. [46] summarized factors influencing the success of dry ports, for instance, railway connections [39,44], cooperation among actors of the transport system [11] and development of value-added services [45]. According to Roso et al. [44], dry ports contribute not only to improving access of a port to its hinterland, thanks to the operational link between the port and inland site [29,31,47] based on partnership rather than competition [48], but even to the improvement of a more extended area, which is sometimes, as Rodrigue and Notteboom [49] observe, geographically discontinuous.In this respect, they mention that this "system" is not necessarily represented by a dyad port-dry port: it can be polycentric, i.e., made up of several dry ports, but with direct port-to-port connections. 
The concept of "dry port" according to the United Nations Economic and Social Commission for Asia and the Pacific [50] may be summarized as "a dry port provides services for the handling and temporary storage of containers, general and/or bulk cargoes entering or leaving the dry port by any mode of transport such as road, railways, inland waterways or airports.A dry port of international importance shall refer to a secure inland location for handling, temporary storage, inspection and customs clearance of freight moving in international trade". The same expression may also be used to witness that a certain inland terminal has reached a specific level in terms of services provided, as in the cases of particular customs procedures, or of the presence of third-party logistics services providers (3PL) and of other qualifying services [31].Thus, this expression is not suitable for facilities which, on the contrary, do not show characteristics sufficiently interesting from this point of view. Rodrigue et al. [7] prefer "inland port" to "dry port".The former is indeed considered suitable for indicating inland facilities of various kinds and dimensions, with a wide choice of logistics services, incorporated in the most various forms and situated close to important production areas.Such facilities can be found in the United States, where they cover areas normally larger than similar European facilities, with larger dimensions and storage capacity [31]. In Europe, the above expression refers to inland terminals connected to ports by a river; they are most common in Germany, the Netherlands and Belgium.Therefore, following this theory one encounters several obstacles to the use of the expression "inland port" combined with that of "inland terminal", because in Europe many terminals located in the inland do not have access to a river and/or are not close to a productive area.Finally, they do not even have a throughput comparable to the American case [35]. Notwithstanding the above problems, all scholars mentioned up to now seem to agree on the expression "inland terminal", considering it well defined at European level as inland facilities. A terminal positioned in a port hinterland, however, needs to fulfil following criteria in order to be considered an inland terminal [51]: Have a direct connection to the port/s, by road as well as by rail and/or river; • Have a "corridor" with strong transport capacity available, or be positioned on it; • Be equipped with suitable structures and machineries, compatible with the reference port/s; • Play a collection and distribution role at local and regional level. Inland terminals thus play a significant role in the transport chain, through the important function of "connecting" a port to its hinterland.This connection is undoubtedly advantageous for all operators involved in the transport chain. Notteboom [52] and Van Klink and Van den Berg [53] stress the need for ports to develop their own hinterland and to enhance increase of goods transported in containers, thus contributing to development.The above-mentioned authors confirm the importance of "competition" among ports for the sake of their better positioning on the market/s and on the main intermodal corridors.For this kind of competition, a good relationship to the hinterland is, of course, essential. 
According to Van Klink and Van den Berg [53] and Mc Calla [54], the need to extend port activities and reduce transport costs to the hinterland, using intermodal means of transport, also involves the chance of expanding one's area of influence beyond the traditional reference market. Van Klink and Van den Berg [53] define a port hinterland as the most internal area the port serves at lower cost compared with other ports in the same region.As mentioned, according to Notteboom and Rodrigue [55] the influence area of a port hinterland may even not be contiguous to the port itself.There are, in fact, situations where a certain territory, though closer to the port, is not part of its market, because it has better connections with more distant ports.This leads to the formation of more or less extended "islands", separate from the main area of the gravitational hinterland of the port.The same authors argue that dry ports and associated corridors are part of an evolutional phenomenon defined as "port regionalization", and expand the concept of "regionalization paradigm" considering the evolving role of intermediate hubs [56]. Van den Berg and De Langen [57] think that the connectivity of a port to its hinterland should be a necessary focus for Port Authorities' strategies, as well as for terminal operators and shipping companies.In a subsequent study [30] they encourage a comparison between the concept of "inland terminal" and door-to-door and port-to-port systems; they define the advantages from the point of view of shippers, logistics operators and others involved, primarily as to the issue of repositioning empty containers, which has considerable impact on port operability.The relevance of connections to the hinterland is, therefore, a critical factor for the economic success of a port platform and for the competitiveness of the entire transport chain [58].Development of a dry port/port system absolutely requires cooperation by shipping companies and by all stakeholders of the distribution process, as partners in an intermodal service.The number of stakeholders increases while a sea port expands to its hinterland area [59]. Monios [60] stresses the importance of the quality level of intermodal terminal management; he defines some models and compares them following the example of European, North American and Asian terminals.The so-defined models differ according to the kind of relationships between terminal operators and outside stakeholders (port and railway operators), as well as to the kind of relationships between terminal operators and logistics services providers at the terminals.Monios also argues that cooperation among all stakeholders in the distribution process is a necessary condition for transport network management in the connection system between intermodal terminals and ports.Beresford et al. [28] affirm the need to implement the transport chain between a port and its hinterland, and this is confirmed by the fact that 60% of total transport costs are linked to those generated by container distribution from and to ports. 
According to Nguyen and Notteboom [61], the expression "dry port" evolved "from an intermodal terminal used in bills of lading issued by shipping lines (United Nations Conference on Trade and Development, 1982) to a broader use as defined by Roso et al., 2009".Noting that there are many and different ways to define an infrastructure as a "dry port", the authors agree in considering mainly four components: "concept", "classification", "function" and "technological processes".Among them, they believe that the last two are prioritized characteristics, because "concept" and "classification" may be related to local regulation, as, for instance, in the case of "interporti", the concept and classification of which, at least at the time when they were conceived, were not related to the concept of close relation with a port.Dry port is a concept and a classification adaptable also to later infrastructures, which change their positions in the original network, or even place themselves into a new network.On the other hand, as stated above, functions and technological processes change very quickly over time, and transformation of terrestrial shipping chains will increase the speed of this process. In this context, the authors adopt following definitions: • a "concept" is based on a main idea or model on a theme.For example, the different taxonomy of dry ports; • "classification" is the action or process of classifying something by a specific characteristic such as distance from the seaport; • for "technological processes", the definition given by Rožić et al. [35] is as follows: "Technological processes represent the activities at the terminal that are conducted with the aim of better quality of cargo handling, and which require appropriate technological elements and real-time work". Materials and Methods According to Fink [62] "a literature review is a systematic, explicit, and reproducible design for identifying, evaluating, and interpreting the existing body of recorded work produced by researchers, scholars and practitioners".A researcher collects data through the analysis of existing documents [63,64].Considering that the objective of this paper is to analyze the dry port as to its concept development, classification and technological processes, in order to identify gaps, issues and opportunities for further study and research, a literature review seems to be a valid approach.According to Easterby-Smith et al. [65] this is a preliminary step in structuring a research field and enables to identify its conceptual content [66]. A systematic approach to review existing literature was chosen in order to analyze concept, classification, function and technological processes of dry ports. 
The process of analysis of this research was organized in the following steps [64,67]:
• Defining the materials of analysis: research papers (research articles, business articles and review articles) as per our aim;
• Classification context: a classification context was selected and defined, in order to classify the materials for the literature review;
• Materials evaluation: the materials were analyzed and sorted according to the classification context. This was meant to enable identification of relevant issues and interpretation of results;
• Collecting publications and delimiting the field: this literature review was limited to peer-reviewed English and Italian articles; the internet search was carried out on 18 May 2018 (it should be noted that the search also detected papers published with a future date: in this case, the papers, classified as "in press" or "corrected draft", were not considered), starting, since the topic is quite recent, from the first published article (1997) on this issue.
Each of these steps has been checked and revised and, to avoid bias, all records have been examined separately by the three authors. Whenever they disagreed in the phase of evaluating an article (Phase 3), they discussed the matter until agreement was reached. Studies meeting the following criterion for data extraction were included: the paper needed to investigate, even if only in one paragraph, the dry port concept and/or classification and/or technological processes. The search strategy resulted in 52 records (after excluding duplicates). The search was conducted as a structured keyword search. Major electronic databases were used to identify related materials; among them, those provided by library services such as Scopus (hereinafter S) and Web of Science (hereinafter WoS), and the major publishers Elsevier (Amsterdam, The Netherlands), hereinafter SD, and Emerald (Bingley, UK), hereinafter E. According to several Scholars [68,69] there are many different names indicating dry ports in European, South East Asian and North American countries. Baydar et al. [21] state that they are called "Freight Villages" in Great Britain and the USA; "Plateforme Logistique/Plateforme Multimodale" in France; "Interporto" in Italy; "Güterverkehrszentrum" in Germany; "Transport Center" in Denmark; and "Logistics Center" in Singapore and China. This is the reason why it was decided to search using different keywords. The keywords "dry port", "inland terminal", "freight village" and "interporto/i" were used in respect of the article title and text, abstract and keywords (Phase 1) of the above-mentioned databases-Table 1. Quotation marks were used in order to specify terms which should appear next to each other; so as to be sure not to miss any article, for the Italian name "interporto", the search was conducted using both the singular ("interporto") and the plural word ("interporti"). The search brought 837 papers into evidence. In order to refine the number of results, duplicate papers were eliminated, i.e., the ones provided by more than one of the library services and publishers (Phase 2)-Table 2.
The articles found (698) were scanned based on titles and/or abstracts and/or their full text was read thoroughly (Phase 3). As stated above, this study focused only on research, business or review articles; therefore, papers published, for example, in conference proceedings, book chapters or editorials were not included. Figure 1 presents the process by which the articles were selected: after the start (oval), it shows all project development stages (a parallelogram symbolizes an input/output while a rectangular shape symbolizes a process), the decision node (diamond) and the end of the process (oval again). All these phases are connected by arrows indicating the study project flows. Articles were considered eligible when their content was appropriate to the object of this paper. At the end, 52 relevant contributions were extracted and deemed eligible for data extrapolation (Table S1 in the Supplementary Materials Section). Most of the articles were excluded due to their focus on other aspects. Highly technical works on topics such as economics, econometrics, finance, computer science and mathematics were also excluded from the review because they were considered beyond the scope of this paper. Articles published without the author's name were excluded as well. This seems to be justified when considering the outlined aim, which concentrates basically on dry port concept, classification, function and technological processes.
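For concreteness, the selection workflow described in this section (keyword search, elimination of duplicates in Phase 2, eligibility screening in Phase 3) can be summarized in pseudo-executable form. The sketch below is only an illustration of the logic: the record structure and helper functions are hypothetical, the keyword filter merely stands in for the manual reading of titles, abstracts and full texts, and only the counts quoted in the text (837 retrieved, 698 after de-duplication, 52 eligible) come from the review itself.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    title: str
    abstract: str
    doi: str
    source: str          # e.g. "Scopus", "WoS", "ScienceDirect", "Emerald"
    article_type: str    # e.g. "research", "business", "review", "proceedings"

KEYWORDS = ("dry port", "inland terminal", "freight village",
            "interporto", "interporti")

def deduplicate(records):
    """Phase 2: drop records returned by more than one database (keyed by DOI)."""
    seen, unique = set(), []
    for rec in records:
        if rec.doi not in seen:
            seen.add(rec.doi)
            unique.append(rec)
    return unique

def is_eligible(rec: Record) -> bool:
    """Phase 3 (stand-in): keep only research/business/review articles whose
    text touches on the dry port concept, classification or technological processes."""
    if rec.article_type not in {"research", "business", "review"}:
        return False
    text = (rec.title + " " + rec.abstract).lower()
    return any(keyword in text for keyword in KEYWORDS)

def run_pipeline(raw_records):
    unique = deduplicate(raw_records)                  # 837 -> 698 in the review
    eligible = [r for r in unique if is_eligible(r)]   # -> 52 after full-text reading
    return unique, eligible
```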
Results and Discussion The selected papers were classified by author or, if more than one, by authors, research aim, type of article (research article or review article), subject area (social science, and/or environmental science, and/or engineering, and/or business and management) and field of research (concept, and/or classification, and/or technological processes). The first selected article dates back to 1997 and was written by Notteboom [52]; the most recent ones are dated 2018 [23,46]. Over the years, the interest of Scholars on this subject has increased: this is demonstrated by the augmented number of articles per year. Most of the 52 selected papers may be classified as research or business articles (46); only a few may be considered as review articles (6). The results of this classification are summed up in Figure 2. The selected papers are specifically focused on countries located almost all over the world (just to name a few, Australia, China, Vietnam, Europe, North America, Iran, Malaysia, Brazil, India and Turkey). In total, 45 of the selected papers are included in the subject area of Social Science; 17 in Environmental Science; 13 in Engineering and 12 in Business and Management (a single paper may be included in more than one subject area) (Figure 3). Considering that different languages lead to different terms indicating "dry port", "inland terminal", "freight village" and "interporto/i", and that the concept underpinning these words has been better defined over the years, most of the selected papers (43) may be included in the field of search "concept". In total, 28 deal with "classification and function" while only a small number of papers (3) analyze the "technological processes". Furthermore, as to the "field of research", a single paper may be included in more than one field. Figure 4 sums up the above-mentioned results (more details are presented in Table S1 in the Supplementary Materials Section). In addition, in order to better outline the key aspects of the topic and to support or challenge the findings, relevant and central references known by the authors or found in the bibliography of the selected articles were identified (33), bringing the total number of articles to 85. These references are not included in Table S1 (Supplementary Materials), but they are listed in the text of the paper. Classification Woxenius et al.
[70] suggest a classification of inland terminals based on their different role within the transport network. They divide them into three main categories: (1) terminals with a direct connection to the port, which lack warehousing capacity and operability in moving and handling goods, and because of this, the mentioned functions have been decentralized to locations close either to the consignee or to the port; (2) terminals positioned on more important corridors, meant to speed up unloading and reloading operations of means of transport, and also comprising smaller facilities for the purpose of sending small batches of goods to set destinations; (3) hub and spoke terminals, intended as central junctions where important flows of goods pass, characterized by strong capacity and high value-added services. In the same paper, "remote terminals" are defined as facilities on average not very relevant for behind-port activities, with small warehousing capacity. With reference to this aspect, it seems useful to mention the classification proposed by Roso and Lumsden [39], which will be further discussed; based on the distance separating dry ports from the port, these authors divide dry ports into three categories: close, midrange and distant. As to their remoteness from a port (close, midrange and distant), Roso [71] furthermore identified and categorized dry ports around Göteborg (Sweden). According to this definition [32,44], a facility acquires its own connotation also depending on how ports and shipping companies control the architecture of railway service operations. This stresses its relevance from the point of view of reducing environmental pollution as well, since each train can substitute for, on average, 35-40 lorries, also allowing for more rational ship unloading without putting port facilities under stress. Notteboom and Rodrigue [49,[73][74][75] define ownership structures and basic functions of an inland port and argue that the expression "inland structure" depends, among others, on company structure, geographic location and functions available on the platform. Following their classification, inland ports can be divided into satellite terminals, trans-modal centers and transshipment centers. According to the same Authors, the incidence of land transport costs, from port to hinterland and vice-versa, may add up to 80% of the entire transport costs. This is the reason why many shipping companies consider this part of logistics a strategic area in order to reduce unit costs. According to Notteboom et al. [15], a logistics center may be classified, following functional criteria, into three main categories: (i) a logistics node, the primary functions of which are cargo warehousing and storage; (ii) a logistics center characterized by a prominent transit function; (iii) a logistics center focused on value-added services. The development of dry ports and their impact on the transport chain have been studied by many authors. In the contributions by Cullinane and Wilmsmeier [76] and Wilmsmeier et al. [36] they are associated with the theory of the product life cycle. Bask et al. [45] suggest a development model for the port-dry port dyad structured in three phases ((1) Pre-phase, (2) Start-up phase, (3) Growth phase), subsequently describing each one's characteristics. The article by Bentaleb et al. [77] deepens the previous theories by Cullinane and Wilmsmeier [76], Wilmsmeier et al. [36] and Bask et al.
[45]; the progression of the life of a dry port, within a dry port-port system, has been considered similarly to the life cycle of any product or service.According to this study, there are five life cycle phases of a dry port: (1) Development, (2) Introduction, (3) Growth, (4) Maturity, (5) Decline. In China, dry ports have been classified as [28,43]: seaport-based (located on China's coast, with the major function of providing pre-customs clearance for cargo imported into China); city-based (they provide logistics services for logistics hubs that have emerged at expanding nearby cities in central China) and border-based (generally located in the landlocked border areas of western China; they are far away from seaports and provide transshipment, multimodal transportation and custom clearance services). Function Generally speaking, the function carried out is determined by the location of these facilities considering the most important economic-financial catchment areas, by the structure of shareholding and even by the technical characteristics of the terminal.A significant, already mentioned contribution is offered by Roso [32,44]. The contribution by Wilmsmeier et al. [36] concerns the analysis of a model for dry ports considering the spatial evolution of a transport facility.It studies the direction of dry port development and cooperation tactics of port and dry port, looking into the origin of integration processes which led to their development: "Inside-Out" when it was generated by the land side, i.e., by the dry port itself; "Outside-In" when the input for development came from the sea side, i.e., from the port.In the "Inside-Out" case, integration, in the sense of ownership, may be determined by local authorities, by the transport company (road and railway operators and river navigation companies) or by logistics providers."Outside-In" integration is, on the other hand, usually determined by port authorities, by port terminal operators or by shipping companies.This classification was also used by Monios and Wilmsmeier [9] and Nguyen and Notteboom [42]. Dry port functions typically include distribution, consolidation, storage, customs services and equipment maintenance [10]. Rodrigue et al. [7] identify three different functions of inland terminals: (i) satellite terminal (to serve the port terminal by accommodating additional traffic and added value functions); (ii) load center (serves regions with large volumes of containerized loads); and (iii) trans-modal center (freight flow from one port can be bundled with other rail or barge flows).Van den Berg and De Langen [57], Rodrigue and Notteboom [40], Monios and Wang [47], Van den Berg and De Langen [30] also refer to the above-mentioned functions. An inland terminal is an intermodal terminal which, on top of carrying out the basic functions typical for intermodal transport (transshipment of containers into the various transport options and their temporary custody), also offers a range of logistics services connected to maritime requirements. The main functions are considered to be decongesting port docks and consolidating containers needing to be transported to the port by rail or river. In this sense, one of the primary roles of inland terminals is serving its area of influence by all means of transport, none excluded (rail, road and river), so as to further sort freight coming in from the port. 
In the Italian case, with reference to the system of Ligurian ports and the freight villages of the Northern Italian "interporti" model, the latter with dry port functions, the area of technological processes is undoubtedly an interesting aspect, which should be followed in its future development. At an operational level, some projects and pilots have been launched: their scope is to systematize the way of acquiring and using codification data of lorries and transport units at the access gates on the roadside, or of International Consignment Notes (CMR) and/or Delivery Orders on intelligent electronic portals, appositely set up at the rail terminal of Vado (Port of Savona). This is in order to ensure that entrance to and exit from an inland terminal function according to the same protocols used in a port. An example is the Vamp Up project (the "Vado Multimodal Platform Intermodal Connections Optimization and Upgrading CEF Project" pursues the objective of developing Vado as a multi-modal logistics platform, thanks to the effective integration of the Vado multimodal system with the European transport network TEN-T, in order to facilitate the shipment of goods to intermodal and logistics centers behind the quays), aiming at offering an integrated model shared by the quay and its intermodal reference centers, and applied to all goods transfer modalities. Technological Processes On top of classification and development aspects, another central issue is the one concerning technological processes in port areas as well as in inland terminal areas. They are the activities qualifying services to goods; therefore, they require appropriate technological elements and a real-time work methodology [78]. Services are often the key drivers of growth and profitability [45]. At port terminals the criteria for identifying and reaching these processes are clearly defined; at inland structures, however, they are still not studied sufficiently in depth and are not completely defined [35]. A database analysis of studies and scientific papers showed the scarcity of study space dedicated to technological processes for optimization at inland terminals. Jarašūnienė's paper [79], for example, examines a particular aspect concerning optimization of the recognition of vehicles entering the inland terminal: through the development of a dynamic model, the system is able to accelerate the process of controlling entering containers and sending them to the deposit. The study by Abacoumkin and Ballis [80] looks at chances for increasing the productivity of road-rail terminals on the basis of set design parameters and the selection of alternative equipment. To this end, a system was developed as part of an integrated modelling tool, able to compare alternative options of such platforms; a formula for calculating the costs of the respective solutions is annexed to it. The mathematical model by Carrese and Tatarelli [81] is based on an algorithm used for optimizing the handling costs of containers arriving at the inland terminal by rail. Lastly, Gronalt et al. [82] developed a simulation tool for optimizing other processes characteristic of an inland terminal. The following Table 3 summarizes the main elements that emerged from the previous subsections for identifying the classification, function and technological processes of a dry port.
Conclusions Most of the 52 selected papers may be classified as research or business articles (46); only a few may be considered as review articles (6). The selected papers are specifically focused on countries located almost all over the world. In total, 45 of them are included in the subject area of Social Science; 17 in Environmental Science; 13 in Engineering and 12 in Business and Management. Considering that different languages lead to different terms to indicate "dry port", "inland terminal", "freight village" and "interporto/i", and that the concept underpinning these words has been better defined over the years, most of the selected papers (43) may be included in the field of search "concept". In total, 28 deal with "classification & function" while only a small number of papers (3) have analyzed the "technological processes". In addition, in order to better outline the key aspects of the topic and to support or challenge the findings, relevant and central references known by the Authors or found in the bibliography of the selected articles were identified (33), bringing the total number of articles to 85. The majority of studies examined in this research seem to agree on the fact that within the scientific community there is no unanimous consensus concerning the cataloguing of terrestrial nodal facilities serving port gateways. In fact, these nodes take up different shapes, functions and roles within the different contexts they are located in, and they acquire specific characteristics depending on the markets they are a reference for. On the other hand, beyond any cataloguing, one cannot deny that, in the development of a more modern, complex and articulate supply chain, "dry ports" have emerged as fundamental elements of integration between the sea "system" and the land network; they have established themselves not only as a mere physical extension of a port quay, but also and above all as a tool for enlarging its gravitational market area. The increased interest in the genesis and development of dry ports, often generically indicated as inland terminals, has been accompanied, since the eighties of the last century up to the present day, by an abundant contribution of the scientific community, originating a thriving literature, which, however, does not find a common denominator. The authors propose to focus further study on the concepts of "co-modality" and "dry ports" and of the "interporto" or freight village as a new kind of hub connected with the seaport.
Figure 1. Flow chart: selection of included studies.
Figure 2. Main results of the literature review by type of article.
Figure 3. Main results of the literature review by subject area.
Figure 4. Main results of the literature review by field of search.
Table 1. Search for the keywords in the library service S and in the three major publishers SD, E, and WoS: Phase 1. (Numbers refer to the number of articles extracted.)
Table 2. First step to refine the number of results: elimination of duplicate papers. Phase 2. (Numbers refer to the number of articles extracted.)
Table 3. Summary of the main elements for defining classification, function and technological processes.
2020-11-12T09:10:13.423Z
2020-11-08T00:00:00.000
{ "year": 2020, "sha1": "deca15184ac9389b354e7071005582e84e21aca0", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2305-6290/4/4/29/pdf?version=1604839122", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "f2ba89f8a25fb06d2cb2267eb77fb3119210fbf3", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Geography" ] }
7334629
pes2o/s2orc
v3-fos-license
Different tumor necrosis factor α antagonists have different effects on host susceptibility to disseminated and oropharyngeal candidiasis in mice Tumor necrosis factor α is important for the host defense against intracellular pathogens. We tested the effect of mouse analogs of human TNF-α antagonists, the rat anti-mouse TNF-α monoclonal antibody (XT22) and the soluble mouse 75 kDa TNF-α receptor fused to the Fc portion of mouse IgG1 (p75-Fc), on the susceptibility of mice to hematogenously disseminated candidiasis (HDC) and oropharyngeal candidiasis (OPC). Both XT22 and p75-Fc significantly reduced mice survival, increased kidney fungal burden, and reduced leukocyte recruitment during HDC. However, only XT22 significantly increased the oral fungal burden and reduced leukocyte recruitment during OPC. This result suggests that XT22 and p75-Fc affect host susceptibility to different types of Candida albicans infections by different inhibitory mechanisms. Tumor necrosis factor α (TNF-α) plays a major role in various immune responses. 1 From various studies, TNF-α is known to play a key role in the recruitment of neutrophils to the site of infection 2 and also to modulate the phagocytic activity of both neutrophils and macrophages. 3 Also, studies with TNF-α −/− mice show that TNF-α is required for the normal host defense against hematogenously disseminated candidiasis (HDC) and oropharyngeal candidiasis (OPC). 4,5 Thus, it is expected that the host will become significantly susceptible to various infections when TNF-α is neutralized by TNF-α antagonists. Recent studies suggest that the incidence of invasive fungal infections has significantly increased among patients who are receiving TNF-α antagonists. Although the risk of candidal infections in these patients is lower than the risk of other infections by other dimorphic fungi, there are several reports of candidiasis in patients on TNF-α antagonist therapy. Candida esophagitis has occurred in the patients who received infliximab for the treatment of Crohn disease, 6 and transplant patients who received infliximab for graft-vs.-host-disease had a significant incidence of HDC and also other non-Candida invasive fungal infections. 7,8 Disseminated candidiasis has also occurred in a patient with rheumatoid arthritis who was treated with etanercept. 9 As part of the normal microbiota on mucosal surfaces, Candida albicans may be likely to cause serious infection in patients who receive TNF-α antagonists. However, the precise effects of TNF-α antagonism on susceptibility to invasive Candida infections have not been delineated. Here, we tested the effect of murine analogs of two TNF-α antagonists, infliximab and etanercept, which have been clinically used to treat various autoimmune diseases. We focused on the effect of TNF-α antagonists in the mouse models of HDC and OPC. The therapeutic TNF-α antagonists chosen for this study are known to have different mechanisms of action. Infliximab is a monoclonal antibody that neutralizes TNF-α activity by binding with high affinity to membrane-bound TNF-α. 10 Etanercept is a dimeric fusion protein consisting of the extracellular ligand-binding portion of p75 TNF-α receptor (TNFRp75) fused to the Fc portion of human IgG1. 11 Thus the different mechanisms of action of these two therapeutic agents may result in different outcomes in the treatment of autoimmune diseases as well as in increasing the host susceptibility to candidiasis. 
To mimic the effect of infliximab in a murine model, the rat anti-murine TNF-α monoclonal antibody MP6-XT22 (XT22) was used as previously described. 12 To mimic the effect of etanercept, the soluble mouse 75 kDa TNF-α receptor fused to the Fc portion of mouse IgG1 (p75-Fc) was used. 12 Since XT22 and p75-Fc also demonstrate different TNF-α binding patterns and pharmacokinetics 9 similar to infliximab and etanercept action in humans, 13 the effectiveness of XT22 and p75-Fc in the mouse models of HDC and OPC would also vary and result in different susceptibility to candidal infection. Male BALB/c mice weighing 18-20 g (National Cancer Institute) were used in all experiments. XT22 was obtained from DNAX Research Institute (now part of Schering-Plough Biopharma) and p75-Fc was provided by Amgen, Inc. 14 To test the effects of TNF-α antagonists in HDC, the mice were divided into four groups: an XT22 group treated with XT22, a p75-Fc group treated with p75-Fc, a control group for XT22 receiving rat IgG (National Cell Culture Center), and another control group for p75-Fc receiving phosphate-buffered saline (PBS), which was used as the diluent for the p75-Fc protein. The XT22 group was divided into five subgroups that were injected intraperitoneally with 40, 20, 10, 5, and 2.5 mg/kg XT22 diluted in PBS. The p75-Fc group was also divided into five subgroups that were injected subcutaneously with 5, 2.5, 1.25, 0.625, and 0.375 mg/kg p75-Fc diluted in PBS. Eight mice per subgroup were treated on days −5 and −1 relative to infection as described previously. 12 To initiate HDC, the mice were inoculated with 10^5 blastospores of C. albicans SC5314 in 0.5 mL saline via tail vein injection and monitored for survival for 14 d as previously described. 15 Figure 1A demonstrates that neutralization of TNF-α with both XT22 (top) and p75-Fc (bottom) significantly increased the susceptibility of mice to HDC (P < 0.05 when compared with anti-murine IgG and PBS controls). This result suggests that both TNF-α antagonists significantly increased the susceptibility of mice to HDC and that the concentrations of XT22 and p75-Fc chosen for this study had a similar immune suppressive effect in this mouse model. To determine the effects of the two TNF-α antagonists on the kidney fungal burden, an additional seven mice per subgroup were infected and treated with 10 mg/kg XT22 and 2.5 mg/kg p75-Fc, both of which demonstrated similar effects on susceptibility to HDC as shown in Figure 1A. The mice were sacrificed at 1 d after infection and their kidneys were quantitatively cultured as previously described.
15 Portions of the kidneys were fixed in formalin, followed by 70% ethanol, and stained with periodic acid-Schiff (PAS) for histopathological analysis. As shown in Figure 1B (top), the kidney fungal burden of mice treated with either XT22 or p75-Fc was significantly increased when compared with the respective control mice. The kidney fungal burden in the control mice that did not receive the TNF-α antagonists was between log 4.5 and log 4.7, whereas the kidney fungal burden in mice treated with either TNF-α antagonist was 2 logs higher than in the controls (P < 0.05). To measure neutrophil recruitment into the kidneys, the myeloperoxidase (MPO) level in the kidney homogenates was determined by a commercial ELISA (Hycult BT) following the manufacturer's protocol. MPO is most abundantly expressed in neutrophils and the level of MPO is used as an indirect measurement of neutrophil recruitment to the infected site, as described in previous studies. [16][17][18] To measure the impact of the TNF-α antagonists on neutrophil influx relative to the number of invading microorganisms, we expressed the results in terms of ng MPO per CFU as previously described. 19 As shown in Figure 1B (top right), both TNF-α antagonists caused a significant reduction in the kidney MPO level relative to the fungal burden as compared with the untreated control mice (P < 0.05). This result suggests that treatment with XT22 or p75-Fc reduced neutrophil recruitment into the infected kidney. The histopathology of the kidneys of mice in the different treatment groups is shown in Figure 1C. It also demonstrates that there were significantly more lesions containing fungal hyphae (shown as red filaments) in the mice treated with either XT22 or p75-Fc. In addition, the number of neutrophils (shown as blue dots) recruited to the infection site with both TNF-α antagonists was notably reduced when compared with the control groups. The finding that neutrophil recruitment to the infection site was significantly diminished in mice receiving either XT22 or p75-Fc indicates that both antagonists effectively inhibited functional TNF-α. These results also suggest that TNF-α governs renal neutrophil recruitment in response to fungal infection during HDC. The relatively few neutrophils that are recruited when TNF-α is neutralized are incapable of preventing fungal proliferation in the kidneys, and thus the severity of HDC is increased. To confirm our finding, direct measurement of the decreased neutrophil influx in mice treated with TNF-α antagonists should follow in future studies using more accurate immunological assays such as flow cytometry. We also tested the effect of TNF-α antagonists in the mouse model of OPC. Our previous study showed that the molecular mechanisms by which host cells interact with fungal cells are different in in vitro models of HDC vs. OPC. 20 Thus, we hypothesized that the two different TNF-α antagonists may have different effects on the host defense against OPC in mice. To test the effects of TNF-α antagonists on susceptibility to OPC, seven mice per group were given 40 mg/kg XT22, 10 mg/kg p75-Fc, rat IgG, or PBS on days −5 and −1 relative to infection, as those concentrations were found to have similar immune suppressive effects during HDC. The mice were also given 50 mg/kg cortisone acetate on days −1, 1, and 3 relative to infection to establish adequate oral infection with countable CFU in the mouse model (Park, unpublished data). All procedures to initiate OPC were followed as previously described.
21 After 5 d of infection, 7 mice per group were sacrificed, and their oral tissues were quantitatively cultured as previously described. 21 Interestingly, only XT22 caused a significant increase in oral fungal burden and also showed significant reduction in oral leukocyte recruitment relative to the oral fungal burden. As shown on Figure 1D (top), XT22 treatment resulted in 2.5 log greater oral fungal burden compared with the control group after 5 d of infection (P = 0.004). In contrast, p75-Fc treatment only increased oral fungal burden by 0.7 log compared with the control group and this difference was not statistically significant (P = 0.3176). It was also notable that there was a 1.7-log greater fungal burden in the XT22 treated group compared with the p75-Fc group (P = 0.002). There was also a significant reduction in oral MPO levels relative to the number of CFUs in mice treated with XT22 (P = 0.004) as shown on Figure 1D (bottom), suggesting that XT22 treatment significantly reduced neutrophil recruitment to the site of infection. The oral histopathology results are shown in Figure 1C (right). They did not clearly correspond to the oral fungal burden but did demonstrate that there were a significant number of lesions containing fungal hyphae (shown as red filaments) in mice treated with either XT-22 or p75-Fc. In addition, there was greater damage to the oral tissues in mice treated with XT22 as compared with the control groups. These results suggest that only XT22, but not p75-Fc, has significant negative impact on the host immune defense against C. albicans infection in the oropharyngeal cavity. Collectively, these results indicate that XT22 and p75-Fc have different effects on the immune response in the mouse model of HDC vs. OPC. In summary, this study recapitulates the importance of TNF-α in protecting the host from opportunistic fungal infection. Mice treated with XT22 and p75-Fc, two TNF-α antagonists with different inhibitory mechanisms, become more susceptible to disseminated C. albicans infection. A recent study reported a 95% correlation between organ fungal burden and neutrophil influx in mice infected with C. albicans. 22 Our data suggest that the increased fungal burden in mice treated with both TNF-α antagonists was due to decreased neutrophil influx into the kidney. Interestingly, only treatment with XT22, but not p75-Fc increased host susceptibility to OPC. Plessner et al. have previously described the mechanistic difference between TNF-α antagonists and their roles in the mouse model of tuberculosis. 19 They found that systemic TNF-α neutralization was equivalent between the anti-TNF-α antibody and the receptor fusion molecule in the tuberculosis murine model, which is similar to what we have discovered in murine model of HDC with XT22 and p75-Fc. However, the receptor fusion molecule penetrated poorly into the tuberculous granulomas compared with the anti-TNF-α antibody. 23 Thus, it is possible that XT22 may increase susceptibility to OPC more than p75-Fc because XT22 may achieve higher levels in the oral mucosa and/or cause greater inhibition of TNF-α activity in this anatomic site. This regional difference in mechanism of action may explain the variation in the TNF-α agonists' effectiveness in the treatment of various autoimmune diseases and in inducing susceptibility to pathogenic agents at different sites of infection. 
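The quantities reported above for both infection models (log-transformed fungal burden from quantitative cultures, MPO normalized to CFU as a proxy for neutrophil influx, and pairwise group comparisons) can be computed along the following lines. All numerical values below are invented for illustration, and the rank-based test shown is an assumption, since this excerpt does not state which statistic was used for the comparisons.

```python
import math
import numpy as np
from scipy.stats import mannwhitneyu

def log10_cfu_per_gram(colonies, dilution_factor, plated_volume_ml, tissue_mass_g):
    """Convert a plate count from a quantitative culture into log10 CFU per gram."""
    cfu_per_gram = colonies * dilution_factor / (plated_volume_ml * tissue_mass_g)
    return math.log10(cfu_per_gram)

def mpo_per_cfu(mpo_ng_per_ml, homogenate_volume_ml, total_cfu):
    """Express neutrophil influx (MPO measured by ELISA) relative to the fungal burden."""
    return mpo_ng_per_ml * homogenate_volume_ml / total_cfu

# Hypothetical log10(CFU/g) values for 7 mice per group (not the study's data)
xt22    = np.array([6.1, 5.8, 6.4, 6.0, 5.9, 6.3, 6.2])
p75_fc  = np.array([4.4, 4.7, 4.2, 4.6, 4.3, 4.8, 4.5])
control = np.array([3.6, 3.9, 3.5, 3.8, 3.7, 4.0, 3.4])

print(round(log10_cfu_per_gram(colonies=120, dilution_factor=1000,
                               plated_volume_ml=0.1, tissue_mass_g=0.25), 2))
print(mpo_per_cfu(mpo_ng_per_ml=45.0, homogenate_volume_ml=1.0, total_cfu=5.0e4))

for name, (a, b) in [("XT22 vs control", (xt22, control)),
                     ("p75-Fc vs control", (p75_fc, control)),
                     ("XT22 vs p75-Fc", (xt22, p75_fc))]:
    stat, p = mannwhitneyu(a, b, alternative="two-sided")
    print(f"{name}: mean log difference = {a.mean() - b.mean():.1f}, P = {p:.3f}")
```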
Future studies to address the molecular mechanisms by which the XT22 and p75-Fc TNF-α antagonists increase host susceptibility to Candida infection should follow to clarify this finding. Disclosure of Potential Conflicts of Interest No potential conflicts of interest were disclosed.
2018-04-03T03:38:59.563Z
2014-07-01T00:00:00.000
{ "year": 2014, "sha1": "953a66d0d51deac3a92bdd91d9b4cedb77b3e179", "oa_license": null, "oa_url": "https://www.tandfonline.com/doi/pdf/10.4161/viru.29699?needAccess=true", "oa_status": "GOLD", "pdf_src": "TaylorAndFrancis", "pdf_hash": "beb52b74785d13e50346661b5027bec4e4b07797", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
270044209
pes2o/s2orc
v3-fos-license
osp(1|2)-trivial deformation of osp(2|2)-modules structure on the spaces of symbols Sd2 of differential operators acting on the space of weighted densities Fd2 Let osp(2|2) be the orthosymplectic Lie superalgebra and osp(1|2) a Lie subalgebra of osp(2|2). In our paper, we describe the cup-product H1∨H1, where H1:=H1(osp(2|2),osp(1|2);Dλ,μ2) is the first differential osp(1|2)-relative cohomology of osp(2|2) with coefficients in Dλ,μ2 and Dλ,μ2:=Homdiff(Fλ2,Fμ2) is the space of linear differential operators acting on weighted densities. This result allows us to classify the osp(1|2)-trivial deformations of the osp(2|2)-module structure on the spaces of symbols Sd2. More precisely, we compute the necessary and sufficient integrability conditions of a given infinitesimal deformation of this action. Furthermore, we prove that any formal osp(1|2)-trivial deformations of osp(2|2)-modules of symbols is equivalent to its infinitisemal part. This work is the simplest generalization of a result by Laraiedh [17]. Introduction A smooth vector field on ℝ is represented by the Lie algebra (ℝ).Considering the (ℝ)-action on ∞ (ℝ)'s 1-parameter deformation: where , ∈ ∞ (ℝ) and ′ ∶= .The (ℝ)-module structure on ∞ (ℝ), which is given by for a fixed , is denoted by  .Geometrically, is the space of polynomial weighted densities of weight ∈ ℝ.Along with the natural (ℝ)-action represented by , (), D , ∶= Hom dif f ( ,  ) represents the (ℝ)-module of linear differential operators.The order of differential operators naturally filters each module D , , and the space of symbols is represented by the graded module  , ∶= grD , .The quotient-module D , ∕D −1 , is isomorphic to  −− ; the principal "primary" symbol map pr which given by: provides the isomorphism (see, e.g., [16]).As a result, the space  , can be expressed as  since it is an (ℝ)-module that depends exclusively on the difference = − , and we have as (ℝ)-modules.In the sense of Richardson-Neijenhuis [18], the space D , is a deformation of this space  , and it is not an isometric as a (ℝ)-module to the symbol space to which it corresponds. Deformations of several categories of structures have played an increasingly important role in mathematics and a gain in physics during the last two decades.The objective of each of these deformation problems is to identify when all associated deformation obstructions disappear, and numerous attractive methods have been developed to determine when this occurs.The deformation theory of Lie algebras has received considerable attention. The symmetry Lie algebra of the quantum system is typically an extension of the classical symmetry algebra (see [8]).As a result, central extensions are necessary in physics. Richardson Neijenhuis [18,20] first considered some general questions about the theory in 1967.Deformations of Lie superalgebras considered by Binegar [9].Fialowski [13] extended the concept of deformations in 1988 by introducing deformations based on a complete algebra of a unique maximal ideal.Additionally and in general the concept of formal versal (or miniversal) deformation was introduced, and it was demonstrated that a versal deformation existed under certain cohomology constraints.Using this framework, Fialowski and Fuchs constructed a versal deformation [14]. Nijenhuis-Richardson asserts that the module deformation theory is inextricably linked to the determines the cohomology space. 
For this statement to be more specific, being presented a Lie (super)-algebra ℭ, ℭ-module and a subalgebra of ℭ, then the cohomology space -relative H 1 (ℭ, ; End( )) measure infinitesimal trivial deformations in which the action is restricted to (trivial deformations), where as the impediments to extending every infinitesimal -trivial deformation to a formal deformation are connected to H 2 (ℭ, ; End( )). This main result has been implemented by numerous authors: In 1999, Ovsienko and Roger [19] classified the deformation of the Lie algebra of vector fields in side the Lie algebra of operators pseudodifferential on 1 .In 2002, Agrebaoui, Ammar, Lecomte and Ovsienko [1] classified the deformations multiparameter of the module of symbols of operators differential.In 2003, Agrebaoui, Nizar, Mabrouk and Ovsienko [2] classified the deformations of modules of differential forms.In 2008, Ben Ammar and Boujelbene [10] classified the deformations of  (ℝ)-modules of symbols trivial on (2).They are proved that the conditions of integrability of the infinitesimal deformation trivial on (2) are necessary and sufficient.Moreover, every formal deformations of  (ℝ)-modules of symbols trivial on (2) is equivalent to a polynomial one of degree ≤ 2. In 2009, Basdouri and al. [6] classified the deformations of (1)-modules of symbols trivial on (1|2).They are proved that the conditions of integrability of the infinitesimal deformation trivial on ℝ 1|1 are necessary and sufficient.Moreover, every formal deformations of (1)-modules of symbols is equivalent to a polynomial one of degree < 5. On the other hand, Ammar and Kammoun [4] classified the deformations of (1)-modules of symbols. In 2010, Basdouri, Mabrouk, Bachir and Salem [7] classified the deformation of (1)-modules of symbols.They are proved that the conditions of integrability of the infinitesimal deformation of the second order are necessary and sufficient.In 2012, Basdouri and Ben Ammar [5] classified the deformation of (2)-modules and (1|2)-modules of symbols.In 2018, Ben Fraj, Abdaoui and Raouafi [11] classified the deformations of (2)-modules of symbols trivial on (1|2).They are proved that the conditions of integrability of the infinitesimal deformation of the second order are necessary and sufficient.Lately, in 2019, Laraiedh [17] classified the (1|1)-trivial deformations of (2|1)-modules of weighted densities on the superspace ℝ 1|2 .In this paper, first, we describe the cup-product H 1 ∨ H 1 .Second, we classify the (1|2)-trivial deformations of the (2|2)module structure on the superspace which is super analogous to the space  .We show that any formal deformation is equivalent to its infinitesimal part. The Lie superalgebra (1|2) is easily determined to be a subalgebra of (2|2): The space of -densities is defined by: A differential operator on ℝ 1|2 is an operator with the following form on ∞ (ℝ 1|2 ): Of course any differential operator defines a linear mapping 2 ⟼ () 2 from 2 to 2 for any , ∈ ℝ, thus the space of differential operators becomes a family of (2)-modules 2 , for the natural action , can be expressed in this manner where (, 1 , 2 ) are arbitrary functions. By definition, H (ℭ, ; ) is the quotient space where (ℭ, ; ) is the kernel of and that it is called the space of -relative -cocycles and (ℭ, ; ) is the elements in the range of −1 and that it is called the space of -relative -coboundaries.For = 1 and for all ∈ 1 (ℭ, ; ), ), for any , ℎ ∈ ℭ. 
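Because the displayed formulas in this passage were lost in extraction, it may help to restate the standard definitions they refer to. The block below is a reconstruction in common notation (writing 𝔤 for the Lie superalgebra, 𝔥 for the subalgebra and V for the module); sign conventions in the graded case vary between references, so it should be read as a sketch rather than as the paper's own formulas.

```latex
% Standard (Chevalley-Eilenberg) conventions, stated as a reconstruction;
% Koszul signs in the super case depend on the reference.
\[
  \mathrm{H}^{k}(\mathfrak{g},\mathfrak{h};V)
  \;=\; Z^{k}(\mathfrak{g},\mathfrak{h};V)\,/\,B^{k}(\mathfrak{g},\mathfrak{h};V),
  \qquad
  Z^{k}=\ker\delta_{k},\quad B^{k}=\operatorname{im}\delta_{k-1},
\]
where the cochains are required to vanish on \(\mathfrak{h}\) and to be
\(\mathfrak{h}\)-invariant.  For a 1-cochain \(c:\mathfrak{g}\to V\) the cocycle
condition reads (up to Koszul signs in the graded case)
\[
  \delta c\,(g,h)\;=\;g\cdot c(h)\;-\;(-1)^{|g||h|}\,h\cdot c(g)\;-\;c([g,h])\;=\;0,
  \qquad g,h\in\mathfrak{g},
\]
and \(c\) is a coboundary if \(c(g)=g\cdot v\) for some fixed vector \(v\in V\)
(taken \(\mathfrak{h}\)-invariant in the relative setting).
```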
𝜷 We study the (1|2)-trivial deformations of the (2|2)-module structure on the space of symbols: The infinitesimal deformations are classified by: ) . The space is spanned by: ) . The space is spanned by: The space is spanned by: In our study, any infinitesimal (1|2)-trivial deformations of the (2|2)-module structure on 2 is of the form: where where the map: and the higher order terms 2 , The homomorphism condition (4.3) can be written as follows: [(), where the differential of the chain is represented by and for each linear map , ∶ ℭ ⟶ End( ), where ℭ is a Lie superalgebra and is a vector superspace, ∨ represents the standard cup-product as described by: From (4.4) for any , we can derive the following equation: The first obstruction to the integration of an infinitesimal deformation is presented by the first relation: Hence, 1 ∨ 1 must therefore be a coboundary. It is evident that the bilinear map 1 ∨ 2 is an -relative 2-cocycle for any two 1-cocycles 1 and 2 ∈ 1 (ℭ, ; End( )).Furthermore, 1 ∨ 2 is a -relative 2-coboundary " 1 ∨ 2 = " if one of the cocycles, 1 or 2 , is a -relative coboundary.As a result that the operation (4.4) generates a bilinear map: All the obstructions lie in H 2 (ℭ, ; End( )) and they are in the image of H 1 (ℭ, ; End( )) under the cup-product.Therefore, the cup-product H 1 ∨ H 1 is described in the following section. The cup-product 𝐇 𝟏 ∨ 𝐇 𝟏 We have to distinguish two cases: Case 1: 2 ∉ ℕ.In this case, we have (1) = Proof.In this case, the space H 1 ∨ H 1 is generated by the cup-product: , has a solution if and only if = 0. First of all, we have A.A. Almoneef, M. Abdaoui and A. Ghallabi For = (, , ), we denote by = 1 2 .Then, using equation (5.6), we can write ( Now, using the terms in ℎ in (5.6) for (, ) = ( 2 , 2 ) then for (, ) = ( 2 , 2 ), we get Similarly, the terms in 1 ℎ for (, ) = ( 1 , 1 ) then for (, ) = ( 1 , 1 ), we get Then, we have the following system: Proof.In this case, the space H 1 ∨ H 1 is generated by the two cup-products: Using simple computations, we confirm that of the natural action of (2|2) on the space End( 2 ) and we investigate the necessary and sufficient conditions to extend it to a formal one: and the terms 2 , In addition, any formal deformation is equivalent to its infinitesimal part. Proof.For the second-order terms, the condition (4.3) yields the following equation: = 0 for all ≥ 0. Thus, the condition (6.9) is necessary. We will explicitly find a deformation of whenever conditions (6.9) are satisfied to demonstrate that conditions (6.9) are sufficient.It is possible to select zero for solution (2) of (6.10).It is evident that a deformation (which is of order 1 in t) is obtained by selecting the highest-order terms () with ≥ 3, which are also identically zero. The solutions of the Maurer-Cartan equations (4.5) are defined up to a 1-cocycle and it has been proved in works [1,14] that different choices of solutions correspond to equivalent deformations.As a result we can always reduce , for = 2 to zero using equivalence.The highest-order terms with ≥ 3, then satisfy the equation ( ) and can be reduced to the identically zero map, via recurrence.□ Case 2: 2 = ∈ ℕ.In this case, we have (1) = ∑ Moreover, any formal deformation is equivalent to its infinitesimal part. These factors provide a classification of the formal deformations, similar to the first case. 
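Similarly, the deformation equations referred to above (the homomorphism condition and the resulting obstruction relations, Equations (4.3)-(4.5) in the text) take the following standard form in the Nijenhuis-Richardson framework. The notation is reconstructed, not copied from the paper; in the 𝔥-relative (here osp(1|2)-trivial) setting every term of the deformation is required to vanish on the subalgebra, and the normalization of the cup-product may differ by a factor of 1/2 between conventions.

```latex
% Standard form of the deformation and obstruction equations (reconstructed).
\[
  \rho_t \;=\; \rho \;+\; \sum_{m\geq 1} t^{m}\,\rho^{(m)},
  \qquad \rho^{(m)}:\mathfrak{g}\longrightarrow \mathrm{End}(V),
\]
and the requirement that \(\rho_t\) define a Lie superalgebra action, expanded
order by order in \(t\), gives
\[
  \delta\rho^{(1)} = 0,
  \qquad
  \delta\rho^{(m)} \;+\; \tfrac12 \sum_{\substack{i+j=m\\ i,j\geq 1}} \rho^{(i)}\vee\rho^{(j)} \;=\; 0
  \quad (m\geq 2).
\]
In particular, the first obstruction is
\(\;\delta\rho^{(2)} = -\tfrac12\,\rho^{(1)}\vee\rho^{(1)},\)
so the cup-product \(\rho^{(1)}\vee\rho^{(1)}\) must be a coboundary; all higher
obstructions live in \(\mathrm{H}^{2}\big(\mathfrak{g},\mathfrak{h};\mathrm{End}(V)\big)\).
```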
Now, for the last system, substituting the first equation into the second, adding the third and fourth equations, adding the fifth and sixth equations, substituting the seventh equation into the eighth, and adding the ninth and last equations yields = 0. Therefore, we obtain the claim. □ Case 2: 2 = ∈ ℕ. In this case, we have
2024-05-26T15:31:18.107Z
2024-05-01T00:00:00.000
{ "year": 2024, "sha1": "db0208b901dcc8c0f673a5d5c2636ca93e0f1494", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "64fe91c29bf5b8f59fc6c3aa8b04368dbb673694", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [] }
118538375
pes2o/s2orc
v3-fos-license
Critical comments on the paper "Crossing $\omega=-1$ by a single scalar field on a Dvali-Gabadadze-Porrati brane" by H Zhang and Z-H Zhu [Phys.Rev.D75,023510(2007)] It is demonstrated that the claim in the paper "Crossing $\omega=-1$ by a single scalar field on a Dvali-Gabadadze-Porrati brane" by H Zhang and Z-H Zhu [Phys.Rev.D75,023510(2007)], about a proof that there do not exist scaling solutions in a universe with dust in a Dvali-Gabadadze-Porrati (DGP) braneworld scenario, is incorrect. Since the discovery that our universe can be currently undergoing a stage of accelerated expansion [1], many phenomenological models based either on Einstein General Relativity (EGR), or using alternatives like the higher-dimensional brane world theories [2], have been invoked (for a recent review on the subject see reference [3]). The latter ones, being phenomenological in nature, are inspired by string theory. One of the brane models that have received most attention in recent years is the so-called Dvali-Gabadadze-Porrati (DGP) brane world [4] (for cosmology of DGP braneworlds see reference [5]). This model describes a brane with 4D world-volume that is embedded into a flat 5D bulk, and it allows for infrared/large-scale modifications of gravitational laws. A distinctive ingredient of the model is the induced Einstein-Hilbert action on the brane, which is responsible for the recovery of 4D Einstein gravity at moderate scales, even if the mechanism of this recovery is rather non-trivial [6]. The acceleration of the expansion at late times is explained here as a consequence of the leakage of gravity into the bulk at large (cosmological) scales, so it is just a 5D geometrical effect, unrelated to any kind of mysterious "dark energy". The study of the dynamics of DGP models is a very attractive subject of research. This is due, in part, to the very simple geometrical explanation of the "dark energy problem" and, in part, to the fact that it is one of the very few possible consistent infrared modifications of gravity that might ever be found. In particular, there are studies of the dynamics of a self-interacting scalar field trapped on a DGP brane that invoke dynamical systems tools, which have proved useful for retrieving significant information about the evolution of a huge class of cosmological models. In this regard, the exponential potential represents a common functional form for self-interaction potentials that can be found in higher-order [7] or higher-dimensional theories [8]. These can also arise due to non-perturbative effects [9]. A dynamical study of DGP models with a self-interacting scalar field trapped on the DGP brane has been undertaken, for instance, in reference [10] for an exponential potential, to show that crossing of the phantom barrier ω = −1 is indeed possible in DGP cosmology with a single scalar field (see also [11] in this regard). However, the authors of that paper do not study in detail the phase space of the model and, correspondingly, they are not able to find its critical points. Their claim that scaling solutions do not exist in a universe with dust on a DGP brane (only the Minkowski cosmological phase is considered) seems to be in contradiction with known results.
In fact, in the 4D limit (the formal limit when, in the DGP model, the crossover length r_c = k_5^2 µ^2 → ∞) 2 the results of reference [12] have to be recovered, or at least approached, since the investigation in [10] is just a generalization of the one reported in [12] to include higher-dimensional behaviour dictated by the DGP dynamics. Even if one expects that the late-time structure of the phase space is modified by the contribution of the DGP brane, these modifications should be associated with the stability of the corresponding critical points rather than with their mere existence. In the present comment we identify the source of the incorrect result of reference [10], and we perform an exhaustive analysis of the phase space for the DGP model with a self-interacting scalar field trapped on the brane, using the same variables as [10]. It is shown, in particular, that there is actually an isolated critical point associated with the matter-scaling solution, even if it is always a saddle point in phase space. The starting point is the Friedmann-DGP equation on the brane (equation (5) of reference [10]): where H = ȧ/a is the Hubble parameter, a is the scale factor, k is the spatial curvature of the three-dimensional (maximally symmetric) Friedmann-Robertson-Walker (FRW) space -taken here to be vanishing: k = 0 -, and θ = ±1 denotes the two branches of the DGP model (the two possible ways to embed the DGP brane into the Minkowski bulk). In what follows we shall consider the Minkowski cosmological phase, i.e., the case θ = −1, exclusively. The total energy density on the brane ρ includes dust matter and the scalar field "fluid": The effective "density" ρ 0 relates the strength of 5-dimensional gravity to that of 4-dimensional gravity, where, as usual, r_c is the crossover radius. It is evident that 4-dimensional behaviour is associated with the formal limit ρ 0 → 0. In the model of interest, where a dot denotes a derivative with respect to the cosmic time t and V (φ) is the self-interaction potential, taken here to be in the form of a single exponential: Here λ is a constant parameter and V 0 denotes the initial value of the potential. In order to write the equations of the present model -the Friedmann equation (1) plus the continuity equations -in the form of an (autonomous) dynamical system, the following dynamical variables are chosen (see equations (18)-(21) in [10]): These variables are subject to the Friedmann constraint (equation (27) in [10]) coming from the Friedmann-DGP equation (1): (8) or, alternatively, In what follows we shall consider only the "+" sign in (9), since we are focused here on expanding universes only, while the opposite sign "−" corresponds to contracting universes. Thanks to the constraint (9), the dimension of the phase space can be reduced from 4 to 3. The consequence is that only 3 of the 4 ordinary differential equations (22)-(25) in [10] are independent. Here we choose the following as the independent equations: where α ≡ 1 − 1 + 2 and the prime denotes a derivative with respect to the time variable s ≡ ln a.
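The displayed definitions (18)-(21) and the constraint did not survive extraction. For orientation, the following is a plausible LaTeX sketch of the conventional expansion-normalized variables used in this type of analysis, consistent with the relations quoted later in the text (x² + y² = 1 ⇒ H² = (φ̇²/2 + V)/3µ²); the exact form of the brane variable b in [10] is not reproduced here.

\[
x \equiv \frac{\dot\phi}{\sqrt{6}\,\mu H},\qquad
y \equiv \frac{\sqrt{V}}{\sqrt{3}\,\mu H},\qquad
l \equiv \frac{\sqrt{\rho_m}}{\sqrt{3}\,\mu H},
\]
together with a fourth variable $b$ that encodes the brane contribution $\rho_0$ and vanishes in the four-dimensional limit $\rho_0 \to 0$; in that limit the Friedmann constraint reduces to $x^2 + y^2 + l^2 = 1$, and the autonomous system follows by differentiating these variables with respect to $s \equiv \ln a$, e.g. $x' = (1/H)\,\dot{x}$.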
As already said, due to the constraint (8), the variable l can be written as a function of the remaining variables x, y, and b (see equation (9)), so that the differential equations (10)-(12) can be written in the alternative form: where, now The first step is to identify the phase space for our model, which is given by the non-compact 3-dimensional region: TABLE I: Critical points of the autonomous system of differential equations (10)-(12). Note that a straightforward analysis of the ordinary differential equations in the above autonomous system of equations (10)-(12) shows that we cannot have x = y = b = l = 0 simultaneously, since this would imply that the constraint (9) is not obeyed. Therefore, the critical point found in reference [10], where x = y = b = l = 0 at the same time, does not really belong in the phase space Ψ of the model under study. For the same reason, the other point found in [10], x = y = l = 0, b = const, does not belong in Ψ either. In fact, from the constraint (9) it follows that, if x = y = l = 0, then b = −1/ √ 2, which is not in Ψ since we are considering expanding FRW universes only. Going a step forward, we realize that the only critical points of the autonomous system (14)-(16) are associated with the four-dimensional limit r_c → ∞ ⇒ ρ 0 → 0 ⇒ b = 0. This case coincides with the one studied in reference [12]. The constraint (9) translates now into the following relationship: We are left with the two-dimensional system of equations: The critical points of (10)-(12) are summarized in Table I. These coincide with the ones found in reference [12], as they should. The point P 1 = (x, y, b) = (0, 0, 0) corresponds to the matter-dominated solution. (The critical points (x, y, b) = (0, 0, −1/ √ 2) and (0, 0, −0.91 ± i 0.68) do not belong in the phase space Ψ, so these are not critical points of the present model.) Figure caption: the value of the parameter in the exponent of the potential V = V0 e^{λφ/µ} has been chosen to be λ = 2; the right-hand picture in the lower part of the figure shows the projection onto the plane (x, y, 0). The kinetic energy dominated solution P ± 2 is the past attractor. The scalar field-dominated solution seems to be the future (late-time) attractor; this is just a "mirage" due to the fact that the brane effects are ignored in the projection. The remaining critical points are the scalar field dominated solution (point P 4 ) and the matter-scaling solution P 3 . In the first case, since l 2 = b 2 = 0, then x 2 + y 2 = 1 ⇒ H 2 = (φ̇ 2 /2 + V )/3µ 2 . This solution exists whenever λ 2 < 6 and is accelerating if λ 2 < 2. The matter-scaling solution (critical point P 3 ) exists for λ 2 > 3. In this case x = y, l 2 = 1 − 2x 2 . Unlike the standard result in [12], both solutions (the scalar field-dominated and matter-scaling critical points) represent saddle points in the phase space of the DGP model. Looking at the projections of the phase space onto the plane (x, y, 0) -the plane associated with four-dimensional behaviour, shown in the lower parts of Fig. 1 and Fig. 2 -so that higher-dimensional effects are ignored, one might think that these solutions, depending on the values of the free parameters, can be late-time attractor points in phase space (the standard result in [12]). However, since in the DGP model higher-dimensional effects modify the late-time dynamics, the actual situation is very different.
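Table I itself did not survive extraction. For reference, the following LaTeX summary lists the standard four-dimensional-limit critical points of the exponential-potential system; the values are quoted as a plausible reconstruction consistent with the conditions stated in the text (λ² < 6, λ² < 2, λ² > 3, x = y, l² = 1 − 2x²), not as a verbatim copy of the lost table, and signs depend on the convention adopted for λ.

\[
\begin{array}{llll}
P_1: & (x,y,b)=(0,0,0), & \text{matter-dominated}, & \\
P_2^{\pm}: & (x,y,b)=(\pm 1,0,0), & \text{kinetic-energy-dominated}, & \text{past attractor},\\
P_3: & x=y=\sqrt{3/2}\,/\lambda,\;\; l^2=1-3/\lambda^2, & \text{matter-scaling}, & \text{exists for }\lambda^2>3,\\
P_4: & x=\lambda/\sqrt{6},\;\; y=\sqrt{1-\lambda^2/6}, & \text{scalar-field-dominated}, & \text{exists for }\lambda^2<6,\ \text{accelerating for }\lambda^2<2.
\end{array}
\]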
At late times the trajectories in phase space leave the plane (x, y, 0) and asymptotically approach increasingly large values of the parameter b -associated with the DGP brane effects -and of the scalar field kinetic energy φ̇ 2 /2, so that there is no isolated critical point in the phase space that could be associated with a unique future (late-time) attractor. The phase space pictures in Figs. 1 and 2 show this very interesting fact. It is apparent that the phase space trajectories leave the (x, y, 0)-plane at different places (and times), and that they approach different regions of the 3D phase space. It is also clear that the critical points associated with the matter-scaling solution and with the scalar field-dominated solution can only be saddle points in the phase space of the DGP cosmological model. We can conclude this comment by identifying the source of the incorrect result reported in reference [10], regarding the existence of matter-scaling critical points in a dust universe in the Minkowski cosmological phase of the DGP model with a scalar field trapped on the brane: the inaccurate identification of the phase space corresponding to the model. The existence of matter-scaling solutions is expected from the beginning since, in the 4-dimensional limit, when standard Friedmann behaviour is recovered, we are left with the case studied in reference [12], where these solutions were identified as critical points in phase space. That is true even if the stability of these points is expected to be modified by the infrared (DGP) brane effects. The kinetic energy-dominated solution is always the past attractor (as in [12]) since, at early times, the brane effects can be safely ignored, so that the standard cosmological dynamics is not modified. A graphic illustration of the features discussed above is given in Figs. 1 and 2, where the phase space trajectories uncover the main features of the dynamical system of interest in [10].
2008-12-09T16:32:51.000Z
2008-12-09T00:00:00.000
{ "year": 2008, "sha1": "8441086ae551702df32809fdbc2e24154f0cdd52", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0812.1739", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "8441086ae551702df32809fdbc2e24154f0cdd52", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Physics" ] }
159052225
pes2o/s2orc
v3-fos-license
The Luso-Spanish Composite Global Empire, 1598–1640 Breaking with a tradition of studying the Spanish and the Portuguese empires separately, this chapter adopts a global perspective to analyse imperial networks and the way family and personal relations and informal institutions articulated them. The asymmetries in information and agency at a local level also elucidate the terms of negotiations between the metropolis and the colonies, the difficulties in controlling the different imperial nodes and the problems of regulation and arbitration from Madrid and Lisbon. Two Empires and One World Recent historiography has underlined, with good reason, the numerous interconnections between the Spanish and Portuguese empires (Subrahmanyam 2007;Bethencourt 2013;Borges 2014;Herzog 2015). Certainly, in institutional terms both empires maintained the rule of their respective institutional systems and implicit codes of practice. Portuguese subjects of the Habsburgs could not enter directly into Castile's commerce with America; Castilians, like subjects of the peninsula's other kingdoms, were classified as foreigners in Portuguese territories ruled by the same dynasty. The Council of Portugal, in Madrid, was charged with providing advice on the management of Portuguese overseas possessions, and the Council of the Indies oversaw affairs relative to the Spanish colonies. The Casa de la Contratación of Seville and the Casa da Índia of Lisbon, two very similar institutions, operated, however, with total independence of each other. The same can be said of other institutions with colonial reach and impact, such as the Inquisition. Though one can recognize the sharing of an administrative culture similar in some respects as well as the exchange of information in some fields, the two organizations acted as separate bodies in Spain and Portugal (Bethencourt 2009). Likewise, the respective branches of the Catholic Church comprised separate organizations, in particular in regard to the exercise of patronage rights and the appointment of bishops, and so on. And the same can be said about the transfers of money and military resources between the two kingdoms, at least from a formal and juridical perspective. The problem of the transfer of funds between kingdoms in order to defend each other's interests had not abated. Nevertheless, on occasions this formal separation could be overlooked. At the very beginning of the union of the two Crowns, in 1583, an expedition to the Azores had been financed with Castilian and Neapolitan money, while Portugal was involved in the Invincible Armada (Thompson 1976). Similar situations, in which joint contributions served the interests of one kingdom, would occur over the coming years, as in the recovery of Bahia in 1625. But combined operations would be a cause of controversy. In fact, the Portuguese, who had never previously been involved in this sort of venture, claimed that they contravened the terms of the Cortes of Tomar (1581). Moreover, in time the Castilians would make the same claim (Feros 2000, pp. 159-161). This having been said, the binding or entwining of the Portuguese and Spanish empires would gather pace after 1580 (Herzog 2015). 
Thus, when in 1641 the governor of Buenos Aires received the royal order to expel the Portuguese for fear of 'contagion' of the rebellion in their homeland, a member of the city elite responded with a strong argument: this decree, he averred, broke up 'marriages', preventing husbands from 'living a marital life with our wives' (quoted by Trujillo 2009, 350). This entwining of Iberians was in many respects the result of two global phenomena, which are normally studied as separate processes but which were in fact very much linked: the development of new avenues of Portuguese trade with Asia from the 1580s onwards and the silver mining boom in America after the 1560s. 1 The available estimates show that the number of Portuguese ships sailing from Lisbon to Asia was revitalized from 1570, following a slight dip in numbers in previous years. This trend would reach its peak between 1601 and 1610 (despite attacks from the United Provinces) and was maintained at a very high level until 1620. Very meaningfully, Portuguese voyages therefore followed the same cycle as the arrival of American silver in Seville (Duncan 1986). empires had more or less clearly defined frontiers. But the cross-border character of their connections was a fact. This does not mean that tensions were absent. On the contrary, contact itself was creating conflicts. But these were two empires in an increasingly interconnected world. As we shall see, this was not a closed system fully controlled from above; nor was it a massive trade grid capable of producing a strong convergence of prices at the global level (O'Rourke and Williamson 2002). Nevertheless, the ensuing connections would be decisive for the history of the Spanish and Portuguese empires and for the working of their political economies. The Problem of Information The issues outlined above are essential to identifying the great problems of the Luso-Spanish imperial complex, as well as the principles that governed its political economy and brought about a profound transformation in it from 1600 onwards. One essential, recurring issue regarded information, the slowness of its circulation, the problems in obtaining it, and the consequent difficulties for government and business. Governors were perfectly conscious of these issues and dreamt of obtaining what they called the 'complete notice', ('la entera noticia') (Brendecke 2012). But they also presented a grave problem for the individuals and, above all, for the merchants who flowed through the arteries of this empire. The slow circulation of news was caused, logically, by the distances involved. The voyage between the metropolis and the closest colonies in the Atlantic was, in the case of the Spanish empire, some 13 weeks on the way out and 18 on the return, while in the case of the English empire, it would be between 5 to 8 weeks and 3 to 8 weeks, respectively, according to the route chosen (Elliott 2006, pp. 177 and 181). The problem was even greater in the case of the Portuguese empire, where the voyage from Lisbon to Goa might last as long as 24 weeks. Yet this was not even the major problem or principal difference with the British colonies to the North. Rather, the great difficulties sprang from the rhythm and rigidity in the circulation of reports upon which the organization of the Carrera de Indias depended, which in turn was caused by the climate and rhythm of the seasons, winds and tides, and so on. 
While the English empire depended upon voyages that were much more staggered and frequent over the course of the year until at least the end of the seventeenth century, the Spanish depended upon fleets that could travel to the Caribbean only twice a year and return only at the end of the summer following the timetable already set out (Chap. 2). A similar situation affected Portugal on account of the monsoons (Parry 1990, chapters 6 and 7). For this reason, the communication systems (essential for political and business organizations) that crossed over and between these areas faced considerable difficulties. If ships departed from Lisbon in March, even with fair winds, they would not arrive in Goa until September, and they could only embark on their return journey towards January or February, arriving back in Portugal, if everything went well, in August or September of the year after their departure (Kellenbenz 2000, p. 602). That is to say that the information, instructions, and orders could be produced and sent from the headquarters of the Estado da Índia only once a year, and the arrival of responses in Lisbon might in theory take as long as 18 or 19 months. Yet this was simply the first step, as within the colonies themselves, several weeks might be needed to communicate between the most distant parts of the empire and the Caribbean or Goa; for things to work well here, a perfect synchronization of voyages and climates and no little good fortune were necessary. It is not, therefore, surprising that letters written by emigrants were often filled with a terrible sense of isolation and disconnection. References abound to entire years in which communications with the Iberian Peninsula were interrupted, with insecurity flowing from the loss of correspondence or the unexplained disappearance of persons dispatched, while consternation was sometimes expressed about the plight of messengers sent to find people who were perhaps themselves in transit and therefore unlikely to be located. On 20 April 1592, one Pedro de la Huerta wrote to his nephew in response to a letter penned in September 1588, in which he complained of not having received any word from his uncle in almost four years. And in 1562 Diego Martín de Trujillo confessed that it had been more than 11 years since he had news of his family. These were two cases among hundreds that have survived (Otte 1988). And the same problem occurred in the royal administration that, despite a dense network of bureaucrats, informants, messengers, and mediators, still faced problems obtaining reliable and timely information (Brendecke 2012). In this situation, the creation of oligopolistic circuits of privileged information gave their members certain advantages. One well-known example is that of the Dutch merchants who wrote their famous gazettes to Amsterdam recounting the details of American shipments to Spain even before the galleons arrived in Seville (Morineau 1985). The circuits created by consuls, nominally merchants, should be understood in the same light, and these were extended across many parts of Europe and very often operated according to an affinity based upon common origins, as was the case of the Italian or Jewish communities. For its part the Society of Jesus came to arrange an entire system for the exchange of information, the so-called 'Jesuit letters' (Broggio 2002). 
In reality many of these arrangements were alternatives to the existence of a postal or courier network, although something resembling one began to emerge in America at the beginning of the seventeenth century (Montáñez 1950). Throughout the sixteenth century, the letters of emigrants repeatedly refer to entrusting correspondence to the merchants and sailors who sailed the seas, a practice that entailed a high risk of loss and news blackout (Otte 1988). All of this can be taken as evidence of the high costs of information and the need to create networks of confidence based upon previous relations or mutual benefit that guaranteed the circulation and veracity of reports. The consequences of this situation impacted governmental systems. Without doubt, government both depended upon and collected a multitude of reports, visits, and accounts, which presented a fuller picture of events than might otherwise have been obtained. The efforts of viceroys in both Asia and America provided a good demonstration of these phenomena (Merluzzi 2003, chapter 2). As we have seen, in both Crowns a number of centralizing projects were proposed, encompassing political organization as well as the news and information systems that facilitated the exercise of power. But the news that circulated through official channels was not always accurate, and many times the reports that reached Madrid (or Lisbon) provided 'not the truth, but rather indications of loyalty and disloyalty' (Brendecke 2012, p. 492). For these reasons, the result was a highly asymmetric information system that gave its mediators enormous power through its transmission and in particular favoured those who collected and advanced reports and news in situ, the price for which could only be paid through the cession of political capital, which made it even more difficult for Madrid or Lisbon to implement the king's orders. The political economy of the empire would be highly affected by all of this. Social Networks and Informal Institutions in a Cross-Border Perspective The projection of elites across this global empire explains why, in part, imperial spaces across four continents very quickly became traversed by informal relationships that would prove decisive for their history. And this development was evident not only on the vast oceans, upon which historians have focused their attention, but also in Europe itself. This was, in many ways, a phenomenon unique to the period, a consequence of the fact that its politics had a corporate character. The formation of these networks was often, but not always, an indirect consequence of possibilities for the social consolidation and extension of elites who had been present in the creation of the empire and in the process of globalization (Chap. 4). As we have said, these were social groups governed by two essential tensions-between solidarity and conflict and, at the same time, between the individual and the collective (see Chap. 1) and as such could do nothing but project these same rules of play upon the empire itself. If there was a certain tendency towards rupture among the branches of each continent, there was also an intention to maintain contacts. In some cases this desire was expressed through networks, where contact was essential for the interests of individuals, as would be the case for the Jewish merchants and financiers (Studnicki-Gizbert 2007).
These traders formed genuine dispersed coalitions, transferring capital and merchandise among their members, and their only chance of survival lay in avoiding rupture and atomization. At times personal interest surfaced, as would happen with the priest in Castro del Río who corresponded with his brother and retained the letters in the hope of proving his right to the family inheritance (Hidalgo 2006). At other times, as perhaps occurred with the American branch of the Borja, these connections served as a way of maintaining and conserving this immaterial capital that belonged to the house; this was also manifested in the use of the shield of that lineage on coats of arms and ornaments (Redondo and Yun 2008). These networks were based upon familial links and kinship, upon forms of recognizing prestige and reputation in the ways described by A. Greif (2006) for the case of Medieval Genoa. Family connections and ties with more distant relatives were a way to create confidence and circulate information. In America, nearly all correspondents of the Corzo and the Mañara, the leading Sevillian businessmen, belonged to their family or ended up being related to it (Vila 1991, passim). And often this meant persons who, while related, resided on different sides of the Atlantic. Moreover, dowries, often transferred across the Atlantic, were a form of moving family capital from the bride's side to the groom's and from one side of the ocean to the other (Almorza Hidalgo 2011). Similarly, the practice of standing as a godparent, very common in America, allowed a person-usually a family member or associate-to bind himself to another family in the act of baptism itself, thus creating links of great value for business designed to reinforce personal trust (Nutini and Bell 1980). Family relations were often strengthened through education and training with or near to relatives, with the intention being that the parties, having reached maturity, would eventually work, trade, or operate together (see some examples in Vila 1991). Permanent ties of culture, identity, and belonging could likewise be decisive. This was, of course, the case for the Jewish merchants. And it was not uncommon to find affinities of language binding Italians or members of the same 'nation'. Even in the world of lawyers, support networks were sometimes formed that could operate as genuine lobbies active on the Iberian Peninsula and in America or Asia. And this was very possibly the case for many Portuguese merchant adventurers who, like Bartolomeu Landeiro, operated as intermediaries between the Portuguese and the Chinese, sometimes even serving the latter in a military capacity (de Sousa 2010;Boxer 1959). No less important were the simple friendships that sprouted up in any number of situations. The lawyers who formed relationships of this sort in their school days or during undergraduate studies at Castile's universities provide an example of such a group, while merchants generally established similar friendship networks, sometimes basing them on nothing more than mutual necessity (García Hernán 2007, passim;Vila 1991). 2 Clientele and patronage relationships developed along similar lines, in particular between prominent persons such as viceroys and governors da Índia, who often belonged to the high aristocracy. In fact clientele practices-at times barely formalized in codes of external behaviour-could become decisive in a world much more inhospitable than that of the metropolis. 
The result was often the formation of overlapping and flexible identities, as was logical in individuals who had two ways of conceiving themselves: as Jews and Portuguese, Castilians and Portuguese, or as belonging to a noble house, that is, to a lineage, while being subjects of a distant king in Madrid. If the view from Europe necessarily entails the possibility of eurocentrism, then it is clear that the study of the connections within the colonies may provide a different perspective. The businessmen of the consulates of Lima and Mexico were often connected with groups based in Castile or Flanders, and they served at the other end of the network, as important as its European part and, indeed, crucial to its very operation, as is clear from a careful reading of the work of Studnicki-Gizbert (2007). In light of recent studies, something similar might be said of the creole elite that, little by little, formed in America and that, more than representing the periphery of a web, performed at times as the centre of a constellation of relationships that were projected out into Asia (or, at times, from Asia) and towards Europe. A similar degree of ex-centrality can be found in the networks of converso merchants who operated in the trade between the Atlantic, Africa, and Asia, at times working in a way that their most important connections did not pass through Lisbon and even, on occasion, attempting to impose their conditions on Madrid, where they negotiated asientos and rented monopolies (Boyajian 1983 andStudnicki-Gizbert 2007). Precisely because of the distances involved and the consequent problems of communication, the relationships formed in this way between individuals and groups could be fragile and easily broken. The death of members, oversight, or a lack of communication could fracture them and render them transitory, something that frequently happened. 3 But these same dangers made these contacts even more necessary and led to their substitution and the replacing of defunct pacts and coalitions with great speed and efficiency. And these dangers afforded comparative advantages to the most solid networks, those based on family and those created by the Jews and which drew upon community identity, religion, social practices, cultural beliefs, and economic interests. It is difficult to apply a single significance or historical effect to relations of this sort. Certainly, they were key to the conquest and functioning of empires. The lion's share of the available knowledge in military and economic affairs circulated through and along their branches, as did the political expertise that was crucial to making empire work. To give one example, military knowledge acquired in Europe by army captains and soldiers allowed for the wars of 'pacification' and conquest in America (Centenero 2009). And little remains to be said of the ability of Portuguese sailors to use the great advances of the incipient European military and naval revolution to serve their cause in Asia. Their knowledge of latest mining techniques allowed German emigrants to America to exploit the Peruvian and Mexican deposits that would irrigate the world with white metals (Sánchez 1989). It was in part thanks to the projection into America of German families such as the Welser and the Fugger, or many others of Genoese or Portuguese origin, that the Habsburg wars and their political system could be sustained. 
And, of course, the emergence of a certain type of law and a specific juridical culture-certainly, in this regard Portugal and Castile followed a very similar course, both originating in essentially the same university world-that created shared codes of legal and political behaviour that were subsequently extended across enormous territories (Rivero 2011). In other words, relationship networks were largely responsible for allowing armadas to sail and armies to fight; Audiencias, mines, cities, and viceroyalties were in large part also the result of networks of personal connections that, if they had their propulsive niche in these institutions, were fed in reality by informal relations appropriate to these networks of weak ties and links (see some examples in Centenero 2009). But our interest, in any case, is the interaction with formal institutions and the means by which they interfered in both realities. 'Perverting' 'Perverted' Institutions? As we have seen, informal networks were crucial for initial contacts in the different areas of both empires. Their development to a certain degree would create, however, a tension with the royal administrative and judicial apparatus, whose development was crucial in a gradually more competitive world where violence among the different polities would be the norm. The following pages try to explain this very important change and how such a tension would be decisive in a context of increasing globalization. Justice, Enforcement, and Distance The Crown would try to control the exercise of coercion, understood primarily in terms of justice and military organization, through the creation of formal institutions (described above). The military defence was attempted primarily through the system of fleets and colonial squadrons, such as the Atlantic fleet (Armada del Mar Océano), the squadrons such as the Armada de Barlovento established by the Spaniards in America, or the maritime system that the Portuguese set up in Goa. In any case, this resulted in a strategic military presence that was intended to control the trade, regulate economic and social relations, and, in the final instance, wield the power of the king (Goodman 1997 and Phillips 1986). In this regard, the institutions mentioned above, such as the House of Trade of Seville (Casa de la Contratación de Sevilla) and the Casa da Índia, exercised, in theory at least, a high degree of power of enforcement to the extent that the 'monopoly' was organized through them. The same was true of the Portuguese Conselho da Fazenda (Miranda 2010). Attempts were made to exercise control over justice through the Audiencias and the Relaçoes, respectively (Tomás y Valiente 1982, Schwartz 1973, Hespanha 2001). Both cases constituted an attempt to apply and uphold the law in the colonies through a group of bureaucrats trained in peninsula law or, in the Spanish case, in the Indian Laws too (Chap. 2). As mentioned above, both empires proceeded to compile laws in the hope that their application in a clear and uniform way would serve to reduce risks and transaction costs. In the cities and American municipalities, as in the Portuguese câmaras and feitorias, an attempt was thus made to apply the king's justice. It is not, however, surprising that the assessment of this situation from the perspective of the new institutional economics has been very negative (Coatsworth 2008).
As occurred in the Iberian Peninsula, the efficiency of these institutions in regard to the creation of a centralized, transparent and predictable system for the exercise of coercion was highly relative. Both empires saw a plurality of agents who applied overlapping and, very often, clashing forms of coercion (Hespanha 2001, pp. 181-2). This is very evident in Portugal, where a scholar has spoken of an 'estatuto colonial múltiplo' (a multiple colonial status) to underline the administrative and jurisdictional pluralism of the system (Hespanha and Santos 1998, pp. 353-61). This was in part a consequence of a diverse range of situations that even led to the creation of seigniorial estates in the colonies (Neto 1997, pp. 154-55). But, in the Spanish system, the plurality of agents active in the exercise of coercion and power, often clashing among themselves, was also present. By the same standard, the exercise of coercion was closely linked to the practices of social agents. Despite the efforts of the Crown to preserve the superiority of its authority, it was impossible to prevent the encomenderos and the owners of mines, mitas, and repartimientos, as well as the bandeirantes, the owners of slaves and plantations and even the owners of capitanias donatarias, from exercising their capacity for coercion on a day-to-day basis. Even, indeed, a number of religious institutions such as the Jesuits retained notable use of coercion, to judge by the many examples and cases found in the seventeenth century. This fact would even emerge in debates in which institutions such as the Audiencia of Lima became involved about the ownership of 'personal services', a euphemism for practices that entailed the enslavement of the Amerindians (Díaz 2010, pp. 108-120 and passim). Above all in the frontier areas, violence between the Crown's servants, Jesuits (and other ecclesiastical orders), and parts of the Indian population remained common (Ariel andSvriz 2016 andMonteiro 1994). Cases such as that of the bandeirantes in Brazil demonstrate both the vigour and prevalence of these customs of violence-as well as the weakness of royal authority-which were basic to the working of the overall system and, in particular, to obtaining a slave workforce for the emerging plantation economy (Monteiro 1994, pp. 138 and ff). Even a very hierarchical institution, such as the Inquisition, though it was "mixed' in nature' (a tribunal of the Crown but also an ecclesiastical tribunal), did not become completely integrated into the state's machinery (Bethencourt 2009, pp. 316 and ff). Not only could the Holy Office act quite independently in the implementation of justice, it could even interfere with the king's regular justice. Though, in theory, the Inquisition did not prosecute economic crimes, it could exacerbate a sense of risk and uncertainty among economic agents, which could affect economic activities and trade in particular. Merchant sectors in the Iberian Peninsula had clearly achieved a privileged position (Chap. 5). Yet by the same standards, their counterparts in the empire were not that much different; in particular, this was true of the very powerful Consulates of Lima (1613) and Mexico (1592), behind which stood extremely influential groups of businessmen involved in both Europe and Asia. 
A number of individuals, such as Juan de Solórzano, who helped to found consulates as 'justice courts for merchant affairs', saw how they evolved into 'professional corporations of merchants, thus becoming a lobby within the viceroyalty' (García Hernán 2007, p. 129). The information provided by J. L. Gasch (2015a) presents the modus operandi of the merchants of the Consulate of Mexico when faced with corruption in commerce with the Philippines. If the consulates themselves could positively reduce risks and transaction costs for their members, they could also fragment the map of conflict resolution and create barriers to the entrance of outsiders. The forms of ownership established in the colonies were the opposite of what the new institutional economics considers the paradigm for efficiency. Although the Crown imposed and exercised its law in specific circumstances, a large part of property in both empires existed as a form of rights ceded by the king (Romano 2004). Moreover, property held in the privileged form of entailment was increasingly extended over time and led to the creation of lay and ecclesiastical elites in America. In this way a system was consecrated that not only placed obstacles before the circulation of land ownership but also advanced forms of management that were not always conducive to the implementation of productive improvements (Coatsworth 2008). The political economy of the colonies was consequently characterized by the clash of forms of enforcement and by the importance of privilege and the opportunity to exercise violence as a substantial part of the productive relationships. All of this appears, on paper, a major problem for productive development and the efficient assignment of productive resources (North et al. 2009). The imperial reality implied not only the adherence to local legal and normative codes but also systems that made distant institutions work and created forms of confidence and coercion in overseas spaces. As occurred in the Iberian Peninsula, the Crown's scope for intervention as a third party in resolving conflicts was highly limited, although its role appeared to be guaranteed by the importance of the Audiencias and the authority of the viceroys and local authorities. In regard to lawsuits launched from the Iberian Peninsula, there were many problems. Enforcing compliance with contracts was sometimes complicated by the difficulty of locating persons or the time spent in doing so. In 1543 a Seville banker calculated that to prosecute a Fugger lawsuit in New Spain would require a year and a half (Kellenbenz 2000, p. 601). The execution of contracts by merchants involved in transatlantic trade was highly complex and required mechanisms of a mixed character. For example, Seville's merchants continually sent orders for unpaid debts to their operatives in America that appealed to the king's justice in the final instance (Cachero 2010). But, in order for them to arrive at that point, they themselves had to search for the debtors before these orders could be implemented. This revealed a form of justice that was accompanied by high transaction costs and that, by itself, had a limited capacity to reduce risks. And, if risks were fundamentally determined by other, more important factors (the chances of shipwreck, attack, appropriation by the king, the delay of the fleets, etc.), then forms of justice partly explain the high rates of maritime insurance (Bernal 1993). 
If, therefore, justice and enforcing compliance with contracts on the Iberian Peninsula were slow and there existed circuits outside of 'official' justice, to use the phrase of A. M. Hespanha, then the ramifications of this situation were even more emphatically felt in the immense world of global and local relations beyond it. For another thing, the very nature of a law that conceded a wide margin of manoeuvre to judges and entailed different jurisdictions meant that the problem was to know who was the 'best equipped to interpret and enforce the law' (Rivero 2011). In regard to the Estado da Índia, Hespanha has spoken of an extremely complex system in which seven areas of political jurisdiction were operative (Hespanha and Santos 1998). The entwining of the two empires and their projection over extremely distinct societies increased the diverse range of moral and social codes, thus making it more difficult for official justice to penetrate the intricate social fabric and compelling it to highly complex cultural translations between the different social agents. This was the case in America, where the slow configuration of the 'caste society' (a term that is more a social representation than anything else) took place; it was also the case in the Portuguese dominions in Asia, where the frontier and the numerous cultures with whom they dealt were complex and porous. Dominant Coalitions, Patronage, Rent-Seeking, Corruption, Fraud, and Contraband These trans-frontier networks and webs cannot be understood as separate from the formation of elites and dominant coalitions of a local character. Indeed, the precise opposite was the case. Logically these webs acquired different characteristics according to the specific contexts in which they were born and their relations with the Crown. 4 Notable differences also existed in the weight they could bring to the negotiating table and their influence. They were especially powerful and influential in areas such as New Spain or the viceroyalty of Peru. Here minorities of powerful creoles concentrated, often being linked to the exploitation of mines, the great estates or haciendas, trade, and the bank that, enjoying strong connections with the royal bureaucracy, exercised an enormous decision-making capacity (Bakewell 1995; Kicza 1999). Its vast economic potential was complemented by its very considerable social capital and its influential and charismatic identity as a creole minority whose character was deliberately projected to distinguish it from both the indigenous population and the Spanish (Gasch 2014). In part exploiting norms of consumption that included the use of Asian products as a means of cultural differentiation, these creoles proceeded to invent their own tradition by underlining their hybrid but unique identity. Their powers were extended through the system for the transfer of funds between regions through the situados, which conferred a degree of pre-eminence upon them. Theirs were the areas which most frequently transferred resources to the poorer regions with the aim of not only oiling the bureaucratic machinery but also-and above all-priming defence forces (Grafe and Irigoin 2012). A series of features were repeated in practically all local elites, although in different combinations. As perhaps had to be the case, one of these was the use of matrimony and family and kinship relations as a means of constructing power.
Here in fact lies one of the reasons why the family has always been credited with enormous importance in the history of social relations in Latin America. 5 It could serve as a means of connection with the international networks mentioned above. In other instances relationships were based on occasional transfers of influence or of political and economic resources. The custom was that these networks of local relationships could call upon members who were strategically placed in politics, the bureaucracy, the magistrates, or business. This was a means of controlling diverse spaces that offered their different members a form of capital (economic, social, or cultural) that was easily interchangeable. And, despite there being no comparative study of this phenomena, the impression is that, if things were broadly similar on the Iberian Peninsula, the greatest weakness of the conception of a society of orders was that it conferred upon these elites a notable fluidity in the relations between their different component parts, above all through matrimony. It should be added that Portuguese colonial society in Asia passed through two phases in this regard. The first phase 'was characterised by the mobility of individuals' (Russell-Wood 1992, pp. 112-3). Figures such as Bartolomeu Landeiro or Fernando Mendes Pinto appear to have been continually on the move, being highly skilled and versatile operatives, blessed with a talent of obtaining information and selling it to the highest bidder while operating among local agents-a lifestyle and career, in short, indicative of their 'endemic individualism' (Russell-Wood 1992, p. 113;de Sousa 2010). But in a second phase, the presence of Jewish networks, marked by a strong familial character, must have created a broader panorama in this area and proven more similar to what had occurred in the Portuguese colonies in Brazil, which in turn can be said to have been very similar in this regard to the Spanish case (Studnicki-Gizbert 2007, chapter 3). This sequence does not mean, however, that both models were not present in the whole period under analysis here. This type of network based on family and kinship would be vital, as it would prove to be crucial to one of the essential developments in the history of the empire-the increasing perversion of its formal institutions. By this I mean a process whereby these social networks and informal institutions would be able to take over many different institutions, such as the Audiencias, the municipal and ecclesiastical councils, the consulates, and many other institutions, and then impose upon them-and therefore upon the rest of the society-their own interests, ways of creating trust, and forms of enforcement. It is important to note that, as in the metropolis, this was also possible due to the 'unmodern' character (in the Weberian sense) of these administrative and political bodies. In a context in which the boundaries between public and private domains were highly permeable, this brought about a very high degree of concentration of political and economic capital, thus forging one of the key characteristics of the political economy of the colonies. Elite Mexican families, for example, were able to accumulate and unify their influence in the mines, the agrarian sector, local politics, the royal bureaucratic machinery, the Church, and so on and at the same time they were establishing international connections with Seville, Lima, the Caribbean, or the Philippines. 
These networks of interest were highly successful in penetrating the institutional system created by the monarchy. In these circumstances, which were far from unique, their chances of applying practices of rent-seeking were very high, to say the least. They had access to privileged information, were able to wield 'public' power to their own ends, and marshalled a highly solid economic base that allowed them to make transfers to members through the exercise of political and even juridical authority; and, finally, if all else failed, they could even shift the legal framework if they acted together as genuine coalitions. Equally, their chances of indulging in fraud and corrupt practices were extensive, given that their international connections offered them enormous scope for contraband and commercial fraud. Patronage, clientelism, and corruption, in fact, were tied to the very model of the Iberian state, the character of the composite monarchy as a group of powers and jurisdictions linked together only in the person of the king (Yun 1994b). The outcome was the reproduction in an amplified and perfected form in the Indies of practices of nepotism, patronage, and so on. One of the best studies of the theme has identified the viceroys who arrived in the New World 'with a large retinue of family members and creatures' as an epicentre of corruption. This was tied not only to the centre of the political system, but also extended out into the localities and "periphery of the administration" (Pietschmann 1989, pp. 163-182). Existing studies have also shown that strong solidarities existed between royal officials and the owners of sugar refineries, which led to similar situations (Schwartz 1973). And yet the problem of corruption did not reside only in the negative impact of the misuse of public funds. Nepotism, the promotion of clients-then, as today, but more so in those societies in which clientelism was rooted in the essence of the social fabric and was part of the moral economy of the elites-depended upon the use of human resources in the pursuit of private interest rather than that of the state. The dilemma between personal loyalty-so important in the codex of values of the period-and efficiency in state service was often resolved by placing the former ahead of the latter; obviously, this had important effects upon politics. In fact it was part of the political culture and practices of the epoch. At times corruption of this sort did not even necessarily require pre-existing family relationships, as it simply took root in the forms of payment and the cost of offices. Rather than focus on a very well-known field of study, such as the great bureaucrats who came to the colonies from the Iberian Peninsula, the example can be given of the repartimientos, which sometimes adopted a form of industrial verlagssystem. Patch's research has shown that the mayors (alcaldes mayores), being badly paid and in need of making good the money spent in buying their offices, fraudulently favoured the backer (aviador), a sort of verlaguer who had advanced money or primary material to the official and who thus obtained access to the forced labour of the Indians at a very low price (1994). Many variants of this sort of corruption can be found. But the great problem-above all, as we shall see, in constructing a mercantilist empire or, simply, one based upon the control of the markets-was that one of the most important practices was contraband.
This can be explained through the enormous power acquired by international networks of businessmen. But another factor was that these men, making use of their connections with local elites and functionaries, were able to pervert the working of the institutions. Moreover they found apt methods for doing so in the very institutions themselves. The renting of a state monopoly, the asientos of black slaves, and so on were means used by the very office holders themselves to facilitate fraudulent commerce; at other times these methods were exploited to exercise an unfair advantage over competitors, who might end up being removed or evicted from the sector in question (Studnicki-Gizbert 2007). Some time ago Zacarías Moutoukias provided information crucial to understanding another type of corrupt operation in the Río de la Plata and underlined the practical confusion that existed over what was legal and 'illegal' commerce. Indeed the multifunctional character of elites, in which functionaries mixed with merchants and their international connections, raised smuggling to such levels that the monarchy had to 'legalize' it in return for money or the concession of privileges to its practitioners (Moutoukias 1988, chapter IV). The illegitimate thus became legal. A similar situation ensued in Cartagena and Veracruz, where the collaboration of royal officials in the contraband networks has been very well described by J. M. Córdoba (2015). The same happened in Mexico, where members of the Consulate were awarded privileges in return for donations to the Crown of millions of pesos; these prerogatives granted them the right to discharge fiscal functions over their own shipments towards Asia (Gasch 2015a and 2015b). The method would also be applied in Seville and would be one of the keys to the so-called crisis of the Carrera de Indias, as we shall see. This admission-it might even be said, 'legalization'-of these practices in return for money testifies to the systemic character of corruption in the form of smuggling and contraband. In these ways the networks of the empire had a series of characteristics: a pronounced family component, a marked prolongation of other forms of obtaining confidence, and a multilateral character spanning frontiers. They could also be made up of individuals belonging to many different 'nationalities', densely intermingled. In fact it is not difficult to see how in cases such as that of the Jewish networks of smugglers not only were the Spanish or Portuguese authorities involved in the subterfuge but so also were the English, Dutch, and French ones. In short, these were complex and multifunctional relationships. These agents had a notable capacity to prevent interference from the formal institutions which were, by their very nature, given to a sort of self-perversion; perhaps indeed, using the terminology built by Max Weber to describe modern societies, they can be considered perverse. However, the right way to express it beyond metaphors is to say that they were different, a product of their own moment in history. In a global context, this would prove fatal to the state that sought to control this empire. All this does not mean that the Spanish and Portuguese administrative system was not subject to monitoring and control from the Crown. The visitas and other types of control were very common, and there are examples of major efforts against smuggling, fraud, and corruption (Bertrand 1999).
But one needs to understand that contraband, fraud, rent-seeking, and corruption in general, often linked to nepotism and patronage as components of the prevalent political culture, were normal practices and a way to take advantage of previous investments in offices or to compensate poorly paid functionaries. In other words, corruption-particularly when it was 'legalized'-allowed the state to externalize a part of its very high protection costs.

Corrosive Globalization

The great enemy of the Luso-Spanish imperial complex was its own creation: the process of globalization, understood not only as the discovery and contact with new worlds but also as the increasing intertwining of distant societies.

Globalization and Regional Economies

The result of the colonization and 'globalization' of the New World was to encourage the emergence of new internal circuits in it, all of them linked at some point with the transcontinental routes. These were regional economies that, even if they obeyed their own rules, were thoroughly interwoven into processes of globalization. The development of the cities and mining settlements, both manifestations of a world that was ever more global, activated the need to provision them with products from nearby areas. On the Pacific seaboard, a coastal complex emerged that served to feed Peru, whose growth was carried forward by its globally important role in the extraction and circulation of silver (Mörner 1990, p. 143). The increase in internal American demand, a consequence of its connections to the global economy, helped the development of the plantation economy. The great estates or haciendas of Mexico, Argentina, or Brazil supplied products such as maize, livestock, or sugar cane, without which it would not have been possible to feed the areas connected to Atlantic trade or to send them raw materials. Phenomena such as the efforts to colonize new lands or, more simply, the hunt for slaves carried the frontier slowly forward into the interior of Brazil; certainly the formation of great livestock breeding farms there cannot be understood without reference to these international connections (Lockhart and Schwartz 1983). In this way the new plantation economy, shaped by production destined for internal American consumption and which would come to compete with-and complement-the mining economy, was also a consequence of its expansion and the global connections of America. The great fairs of Veracruz, Jalapa, or Portobello, which were crucial for the long-range commerce of the Carrera de Indias, also became the stage for a growing regional commerce in the Caribbean that operated according to its own rules (Macleod 1990, pp. 180-8). The connections between the west coasts of Brazil and Argentina encouraged not only these regional trade systems but also the global circuits between Peru and the Río de la Plata (Assadourian 1982; Garavaglia 1983), which in turn inserted themselves into other circuits of commerce. These regional circuits in the American continent have been considered as key to a debate on the possible existence of a seventeenth-century crisis (Israel 1974; TePaske and Klein 1981; Kamen and Israel 1982). But similar situations can be found in other zones of the planet. The South Atlantic, between the coasts of Guinea and the Ivory Coast, on the one side, and the coasts between the Caribbean and the Río de la Plata, on the other, constituted a subsystem within global commerce (Russell-Wood 1992; Boyajian 1993).
Similar circuits had developed, sometimes being built upon regional economies already linked before the arrival of the Europeans, between the Cape of Good Hope and Japan, where there existed regional economies with a certain degree of autonomy and in which Portuguese penetration was necessarily limited (Subrahmanyam 1990). The China Sea was an area with its own logic, where the silver from Japan was secured in exchange for all types of products to feed regional commerce (Flynn and Giraldez 2002). An idea of the importance of the regional circuits (not separated from the very long distance trade) can be gained from calculations that postulate that even at the moment of peak export of American silver to the Pacific (1600-1640), it did not constitute more than 10-25% of the Japanese silver exported to China in compensation for its products (Barrett 1990, p. 246). Regional circuits were, therefore, more important in volume than long-distance and intercontinental trade. Without doubt the penetration of American silver and the connection through the Philippines with America did much to open up these circuits, but they also had their own endogenous logic in many areas of Asia (Subrahmanyam 1990). The Indian Ocean and the Arabian Sea, from India to Madagascar, were replete with regional coastal circuits, many of them dating back to before the arrival of the Portuguese, whose appearance in fact gave them a new dynamic. But, despite the connections with Portugal, the Asiatic economies had their own independence and a multitude of internal circuits. Today it is common to underline the fact that the portion of merchandise traded by the Portuguese-and, indeed, by the Europeans in general-was of reduced importance in relation to the total volume of traffic in these areas. Globalization also implied the appearance of new routes. This was the case of the trade between Peru and the Río de la Plata (mentioned above), which served to divert the white metal away from the Carrera de Indias-and, therefore, away from the coffers of the King of Spain. It was also a zone whose base was smuggling, to which the Crown turned a blind eye, as we have seen. Moreover the Portuguese never came to control all of the commerce from Asia, as a significant part of it moved through the Red Sea and into the Ottoman Empire, thus creating another path of diffusion for these products. Carried through Greece and Venice, these goods often reached the north of Europe, usually passing through English hands (Fusaro 2015). The same effect was achieved by the inhospitable route that connected Persia with the north of Europe or travelled over Siberia towards the Baltic. The new commercial route between Acapulco and Manila strengthened these centrifugal commercial tendencies in the heart of the Luso-Spanish empire and, in this way, increased the difficulties in controlling and maintaining the monopoly of Seville. The impetus of both the regional economies, closely tied as they were to the expansion of international commerce, and the alternative routes of global trade can be interpreted as a symptom of the vitality of the world economy: it mobilized regional resources and generated new routes of economic development. It is also known that the fall in silver receipts in Seville and the crisis of Portuguese commerce with Asia would occur later than has been said. They certainly cannot be dated to before 1620-1630.
But, on the other hand, it is also clear that these global mechanisms made it increasingly difficult to control the circulation of silver or maintain the extraction quota of the empire over a buoyant global economy that, even if it was in crisis in the short or long term, remained multipolar and with a very notable centrifugal component. As we shall see, this character would prove to be the Achilles heel of Madrid and of the empire. Would it also affect the economic agents who operated in it?

Problems of Regulation and Internal Conflicts

Historians have recently proposed that these empires might be described as polycentric (Cardim et al. 2012). The expression adds little to what we already know but has an undoubted graphic value as long as we do not forget that some centres had less power than others and that Madrid was in many senses 'the' centre of the empire. It is important to underline, moreover, that all empires in history have been polycentric in many ways. Furthermore, one of the problems of the expression is that so far it has been applied mainly, or only, on the political and jurisdictional levels and that it suggests a certain sense of exceptionalism of the Iberian world. This multinuclear character is, in reality, the fruit of the plurality of points of negotiation and, therefore, of decision-making centres that affected not only themselves but also the system as a whole. Nevertheless, from the perspective of political and institutional history, it is often forgotten that this nodal character became more evident with the development of regional economies and globalization. In effect the development of the regional economies implied the transfer to each of these centres of an opportunity to convert economic capital into political capital. The case of the development, almost fraudulent, of the Río de la Plata is paradigmatic, although perhaps in this respect the Philippines, Macao, or other areas might also be mentioned. In such centres, as economic resources increased, a number of agents emerged-that is, new elites and institutions that very shortly would negotiate with the Crown and, over time, would become decisive for the defence of the American empire as a whole. By 1630 there were many poles of regional development: in the Caribbean, Mexico, Peru, the Río de la Plata, Chile and, gradually, the coast of Brazil, the factories of Africa and the islands of the Atlantic, Lisbon, Madrid, Seville, Naples, Milan or the Low Countries, the coasts of the Indian Ocean, Goa, Macao, the China Sea, the Philippines, and so on. Furthermore, these were just the most important points, and under each of them were concentrated other, lower nodes of negotiation within the hierarchy. In each case we find a group in which a variety of agents negotiated with the king and among themselves. The case of New Spain provides a valuable example of the complex web of institutional relations at play. Here a range of actors-the Consulate of Mexico, Mexico City itself, the freight shippers to the Philippines, the viceroy, and the Audiencia-interacted among themselves, sometimes each pursuing its own agenda by resorting to institutional and economic privileges and yet, at the same time, also presenting claims for redress to the monarch in Madrid. Similar forms of rivalry and collaboration occurred under other types of institutional relations, for instance, in the European part of this composite monarchy (Chap. 5).
This situation ensued to the extent that the development of these new areas-and the deepening of the old ones-was based on the establishment of relations of do ut des with the Crown, in which the concession of de facto jurisdictional or economic privileges played an essential role (these concessions and prerogatives came, nearly always, to assume a perpetual character); the system crystallized relations that were essential to its functioning. Still more important, privileges of this sort were seldom the fruit of a preconceived plan of imperial organization. On the contrary, these were relations born of mutual necessity in specific conjunctures between the king and local agents. And they responded to economic and political agendas marked by local concerns. The result was that the empire was configured as a group of different interests which were sometimes contradictory. Many examples could be given. One study of the viceroyalties has arrived, with good reason, at the conclusion that 'jurisdictional conflict was the order of the day and formed part of the very nature of things' (Rivero 2011, p. 200). This author mentions another aspect of this situation, the collision of jurisdictions that formed part of the development of the European part of the composite monarchy of the Luso-Spanish empire. But the statement is also applicable to overseas territories, where institutions and an entire political philosophy had been exported and practices for the negotiation of privileges had been developed. For these reasons the Luso-Spanish complex existed during the first decades of the seventeenth century with a continual tension between placing more resources into the development of Pacific commerce-in which a large part of the Mexican and Lima elites were involved-and, alternatively, trying to limit and control it, as the businessmen of Seville, and many who operated from Lisbon, would have wanted, since they saw in it a threat to their commerce with the Indian Ocean (Gasch 2015a). Even without resolving this question, the same phenomenon led to continual prohibitions being promulgated against the trade between Peru and New Spain which, even if they appear not to have been entirely successful, did at least end the chances of uniting forces to proceed to the exploration of and expansion into the South Pacific, a campaign that was left to English and French navigators (Céspedes 2009, p. 160). A similar conflict emerged over the development of trade through Buenos Aires, which infuriated the all-powerful Consulate of Seville but was extremely remunerative for Peru's and the Río de la Plata's traders, as well as for the slave traders, many of them Portuguese, who connected this area with the Gulf of Guinea (Boyajian 1983; Studnicki-Gizbert 2007). The development of this route had strengthened the New Christians and Portuguese Jews who now, with the help of high-ranking governmental figures in Madrid, were coming into conflict with the Genoese over the control of the asientos (Ruiz Martín 1990b). Very similar conflicts were common in Asia. The disputes between Castilian and Portuguese institutions and corporations over China Sea commerce, or between enclaves such as Macao and Manila, the disputes over the Moluccas, and quarrels over simple questions of naval protocol (and in this period, protocol was a means of setting the hierarchy of privileges that formed a part of political capital) are other good examples of this same tension (Valladares 2001, pp. 9, 20, 24, and 36).
But, moreover, these conflicts could-and indeed did-have a markedly local character. Cases such as the dispute between the viceroy of New Spain and the bishop Palafox in Mexico have been considered a clash of egos. But such episodes demonstrate a type of conflict between ecclesiastical power and civil authority that was present across the Iberian world (Álvarez de Toledo 2004). A good example can be found in the tensions between the Portuguese viceroy Linhares and the short-lived Portuguese East India Company, which decisively contributed to the failure of the latter (Disney 1978, chapter 9). Frontier zones, such as the region of Entre Rios in Argentina, were the stage for conflicts between the Jesuits and the governor of Buenos Aires for long periods of time (Ariel and Svriz 2016). Numerous other examples could be given (Herzog 2015). It is important to note that tensions of this type-and forms of lobbying-are present in any polity, from nation state to regional government. What makes this a special case are the scale and nature of the problems of distance, asymmetric information, and monitoring difficulties. But all of these disputes, and many others of a more local character, expressed what was in reality a problem of regulation and arbitration that, if common to all empires, was inherent to the very model of development of the Spanish and Portuguese systems. As would happen centuries later to the British Empire, part of the problem originated in the difficulties in the very centre of the empire (Darwin 2012). If it was by this point difficult to coordinate missions in the peninsula and in Europe (Chap. 4), it was even more challenging to do so in the distant spaces of an empire marked by the difficult circulation of information and the high degree of autonomy wielded by local agents. In 1625 a well-informed expert on American society, Gaytán Torres, concluded that the good government of the Indies required not only the purification of a thoroughly corrupt administration but also the reform of the various Councils in Madrid, whose rivalries and disputes were drawn out and debilitating (Amadori 2009). In the case of the Council of the Indies, clashes with the exchequer were especially frequent (Schäfer 2003, pp. 115-9 and 165-8). Joint actions involving the Council of Portugal and the Council of the Indies were not easy to coordinate. Moreover both councils (like all others) had more than one governmental function. As has been said, they were also jurisdictional bodies and had to make sure that the laws, customs, and rights (meaning, therefore, privileges) of each one of the components of the Iberian imperial complex were respected and upheld. And these were not always compatible. When Portuguese Jewish and Genoese bankers were in conflict, should the Crown back its traditional allies, who had mobilized the Republic of Genoa to serve the Habsburg cause of controlling Europe, or its direct subjects from Portugal? In the majority of cases, when its own interest was at play-as in the debates about when or how to advance the exploration of the South Pacific, which might well introduce even more centrifugal forces-the position of the Crown was to favour itself. But this, rather than correct the problem of regulation, only made things worse, and in fact it created a problem of arbitration from the centre.

World War, Money, and Men

These features of the Luso-Spanish composite empire were also present at its European extreme.
Moreover, the problems of regulation in the Habsburg domains were not an anomaly in themselves. As we have underlined, they were the logical outcome of social and political development in tune with the epoch's political systems. Problems would arise, however, when increasing pressure was put upon the different nodes of the empire in America, Asia, Africa, and Europe. Such pressure was not only the outcome of Dutch and English actions in Europe but also a side effect of the development of other areas of the planet in which globalization had also progressed since the sixteenth century. The climax of the increasing tension on a world scale can be seen in an event that has always been considered either in a strict European dimension or in its different parts separately: the Thirty Years War (1618-1648), or, to be more precise, the Eighty Years War, which started with the Dutch rebellion and ended with the Peace of Westphalia and the Treaty of the Pyrenees (1568-1648/1659).

Mars and Mercury on a World Scale

The years 1598-1621 were decisive. Fighting on against Holland until 1609 created an exhausting situation for the royal finances. The reforming instincts of Philip III and his favourite, the duke of Lerma, are recognized today (Feros 2000). This having been said, patronage, clientelism, and political corruption-which cannot be separated from their economic context and which were a constitutive part of the political regime-appear, beginning with Lerma and his own family, to have reached unprecedented levels (Feros 2000, chapter 8). The patronage system, the spine of the political economy, developed to the extent that it became impossible to recruit the best-qualified servants to government through the client network (Williams 2006, p. 355). The dilemma between, on the one hand, clientele loyalty achieved through side payments that kept lineages united and softened their internal conflicts and, on the other, efficiency in political management was being decided too often in favour of the former. The situation was even more disturbing for many as not only had the millones tax been increased, but the minting of vellón coinage had also been allowed, a measure that was considered by many thinkers, such as Juan de Mariana, to be an act of tyranny-a rupture of the implicit agreement between the monarchy and its subjects since 1469 (de Mariana 1987). Even if they could not provide figures to substantiate their claims, contemporaries were very much aware of what was happening. This was especially true of the arbitristas, who flooded the Cortes and the desks of governors with proposals for reform and remedies (J. Vilar 1974; Gutiérrez Nieto 1982). If the solutions proffered were very diverse in nature, their diagnosis of the problems remains extremely valuable with regard to the analysis of the political economy. According to the arbitristas, when compared to its European competitors, the Spanish economy had clearly run out of steam. The empire had fallen into a type of bad government that was being taken advantage of by other 'nations', among whose number figured not only the Dutch, French, or English but also-and above all-the Genoese, the Moriscos, and the gypsies. 6 The Portuguese equivalent to the arbitrista literature, the so-called literature of remedies (la literatura de remedios), also dealt with similar topics and placed emphasis on the problems of the empire and of the Estado da Índia.
The overall arguments were very similar, but they were exactly the same in their consideration of foreigners and, above all, in taking the union of the Crowns as one of the main problems of the empire and of the country (Curto 2009 and Borges 2014). It is precisely this negative impression of foreign influence that led to the efflorescence of a sense of Spain and Portugal as a dual political identity that would be the basis of subsequent reform projects infused with a clear mercantilist character. In the case of Spain, among the reasons for this new feeling were the difficulties in overcoming the United Provinces of the Netherlands, with which a truce had been signed in 1609. By 1634 a painting was hung in the palace of the Buen Retiro in Madrid, which had been built for the greater glory of Philip IV (and, indirectly, of the Count Duke of Olivares). Its subject was an emblematic event: the recovery of Bahia in 1625. Laden with overt political symbolism (Brown and Elliott 1980, pp. 194-202), the work also appears today to reflect a number of elements that are crucial to understanding the political economy of the empire in these years: the likelihood of the Dutch attacking Portuguese areas of the empire, the fragility of Dutch efforts, and the considerable ability of the monarchy to react, despite its many problems. Without knowing it, the painting therefore confirmed the thinking of a group of Amsterdam merchants who, as we shall see, were increasingly convinced that the best way to penetrate the Luso-Spanish empire was by commercial infiltration rather than full frontal assault (Boxer 1973). Brimming with religious iconography and the representation of an imperial composite monarchy, the work of Juan Bautista Maíno perhaps demonstrates that the war with Holland was not a religious conflict. But neither was it a question of maintaining the unity of Habsburg patrimony; nor was it an economic war, despite its marked economic component, or one fought for reputation. Rather, it was all of these things at the same time. But, above all, this canvas allows us to understand that this was a global conflict fought by a geographically dispersed composite monarchy and dynastic world empire. This unique combination of themes underlines the dilemmas of Spain when faced with its precise opposite, a small confederation of provinces: the Dutch Republic was a small and badly connected political system, an antistate (Elliott 1990); however, it drew strength from its spatial concentration and the high degree of interaction between its elites and government thanks to a system that permitted discussion and informal relations, in which conflict was often resolved with joint actions in defence of the economic bases of the country. It is not that corruption was nonexistent in the Netherlands, still less that family networks failed to penetrate its institutions in ways similar to what was occurring in Spain (Adams 2005). But its political skeleton was very different. Any comparison between the Luso-Spanish empire and those that followed it, particularly the English and the Dutch, underlines this fact. First, in contrast to the Iberian imperio, these were political formations in which the colonies were appendages of proto-national states that comprised territorial units and pursued a basically mercantilist agenda.
In effect the great problem of the Spanish monarchy from 1598 to 1648, and more specifically from 1618 to 1648, lay in waging war on a number of fronts across Europe from a mosaic of polities that were governed by a very wide range of political agendas, with very strong geopolitical constraints and constitutions, whose interests did not always fall in line with those of the king. To this must be added that the dynastic nature of this ensemble created the duty of serving the general strategy of the House of Habsburg, manifested-to give only one example-in the participation in the Battle of the White Mountain (1620) in defence of the Austrian branch of the family and then in sending help to repulse the attacks of Gustavus Adolphus of Sweden in Central Europe (from 1630). This role also created strategic problems that led to actions which were not only absurd but also enormously exhausting. This was the case in the attempted conquest of Mantua, undertaken to prevent the Nevers, a family tied to the French branch of the Gonzaga and therefore to Louis XIII of France, from seizing power in the duchy. Second, such difficulties resulted from being a composite monarchy and empire. In an international war fought on various fronts, the obstacles that the political theory inherent in composite states raised against the transfer of funds from one state to another would pose a crucial problem. But this was also the case when it came to colonial areas and the need to use Portuguese and Castilian resources in a coordinated way. This difficulty became even worse to the extent that it coincided with the first globalization, which made the Iberian complex extremely alluring to maritime and commercial states such as Holland and England. In Europe, the problem was even more serious because of the pincers around France created by the semicircle of the Pyrenees, the routes of the Spanish road, and Flanders, which compelled the neighbouring country to fight continuously against Habsburg interests. Third, in many areas of the world, this was an empire of arteries and widespread neuralgic points. Its political and economic functioning depended upon a series of routes that tied together nuclei condemned to assist each other but whose local elites, bound by their separate and independent bargains with the Crown, found that their interests seldom coincided. For this reason, the geopolitical situation was implacable: the so-called Spanish road and the Mediterranean route from the east coast of the Iberian Peninsula were among the most important but also the most delicate routes, with an especially vulnerable point, the Valtellina pass in the Alps. The maritime connections between the Cantabrian coasts and the Low Countries were also vital. Along both routes men, military resources, and silver flowed, without which everything in the War of Flanders might have been lost. The connections between Lisbon and Brazil and their extension to the Río de la Plata were vital, as were the routes to Africa (where Guinea was also crucial) and the Indian-Pacific Ocean complex (in which Mombasa, Ormuz, Goa, Malacca, Manila, and Macao were stress points). Commerce depended on these sea lanes, as did the lion's share of Philip IV's income as King of Portugal and, most important, the loyalty of the Portuguese to Madrid. The other artery united Seville with the Caribbean, whence the fundamental routes to Mexico, Panama, and the coast of Tierra Firme ran; and in turn these points connected with the American Pacific and the links with the Philippines.
Any enemy action against this artery threatened to interrupt the flow of silver to Castile, which in turn would endanger the financial nerve system of the empire. To give an idea of the distances involved, this network covered some 100,000 kilometres across oceans and along coastlines-two and a half times the length of the equator. Some of the fundamental characteristics of the Habsburg imperial system were, therefore, its dispersion and patrimonial nature, the obstacles to the mobilization of resources imposed by its character as a composite monarchy, and the strategic importance of some of the arteries and nodal points of the empire. Many empires have faced similar situations: their growth makes it more difficult to control frontiers faced with insupportable and continuous multilateral pressures, the variety of constitutions compels them to multilateral negotiations with local elites, military costs are always high, and so on. Indeed, some of these features have even been used to explain the failure of empires in general (Kennedy 1988). But the combination of all these characteristics in a moment of increasing globalization and competition for colonial markets would be decisive. In these circumstances, the strategy used by the Dutch and English was to focus on delicate points to assault or infiltrate, thus taking advantage of the problems in co-ordination and negotiation between local agents and Madrid, as well as the attendant logistical constraints. A quick glance at events between 1580 and 1635 is highly significant. From the end of the previous decade, the English corsairs (Drake, Cavendish, and Hawkins) had attacked a number of points on the coast of Peru, California, Cadiz, and the Caribbean. The point of attack was always a neuralgic node for the transport of silver and the expanding plantation economy. At the same time, the Dutch had increased their presence in the Caribbean and the coasts of Brazil, being especially drawn by the smuggling of sugar and slaves. The powerful network of Portuguese New Christians would prove a great help to them at a moment of boom in the plantation economy. But the Dutch intention was also to gain access to Peruvian silver, which they obtained by selling their goods in the New World, in particular in the Caribbean and along the northern coast of Brazil. The truce of 1609-1621 also allowed the Dutch to sell their industrial products directly to Brazil, while 'dozens of Dutch vessels sailed from Portugal to Holland' and carried merchandise in the other direction (Israel 1990, p. 117). Access to Peruvian silver through Brazil was getting ever closer. In the 1590s, at the very height of the war, the Dutch also began their own expansion towards the Orient. According to Headrick (2010, p. 87), they drew upon a number of advantages that allowed them to build fortresses in the 'Spice Islands', where they ran into the Spanish, who were based in the Philippines (Israel 1990, chapter 3), and the Portuguese. By 1617 the Dutch had established around 20 fortified ports and were 'the strongest European power in Asia', having supplanted the Portuguese, even if they then also had to face the English (Israel 1990, pp. 104-106), whose penetration into Lisbon's empire was more subtle, if no less inexorable. These were also the years in which the English defeated the Portuguese at Surat (1612) and obtained trading privileges from the Great Mogul in return for providing protection to Muslim pilgrims (1618).
For these reasons, it is possible to understand why, when the truce lapsed in 1621, the pro-war Spanish faction had in mind not only religious and dynastic ideals. Many people, and not only nobles such as the Count of Benavente, thought that Spain needed a 'good war' to avoid the country becoming effeminate and to satisfy the proto-national feeling surging within Castilian society (Elliott 1990, pp. 86-99). The Iberian merchant communities also saw Holland and, to a lesser extent, England as a threat that had to be faced down. But the Dutch strategy persisted. Despite the problems that it would cause Dutch trade, the war also provided the pretext for a new offensive, an important early move being the seizure of Bahia and infiltration into Portuguese commerce between Africa and the Brazilian plantations (Brown and Elliott 1980, p. 195), a crucial route for the Luso-Spanish trade networks. The capture of Recife (7 March 1630) and the control of the large Portuguese region of Pernambuco (1630-1654) had the same meaning (Boxer 1973). A parallel to the events at Bahia was the help given by the English to the Shah of Persia to conquer Ormuz (1622), which allowed them to forge the land route to Aleppo, thus connecting with the Mediterranean, as we have said. At the same time, the Dutch established their headquarters at Batavia (Jakarta), whence they organized their entire military and commercial system, even nurturing the idea of 'becoming a territorial power' by attracting huge numbers of Dutch colonists to it; this plan was never implemented (De Vries and Van der Woude 1997, p. 386). After 1630 English and Dutch incursions into the trade of the Luso-Spanish empire only grew. These years saw the Dutch position at Pernambuco gain vigour under the government of Johan Maurits of Nassau (Boxer 1973). In Asia they gained a foothold in trade between China and Japan, taking Formosa (Taiwan) in 1641 and expelling the Portuguese from Ceylon and Cochin (Kochi) in the 1650s. It is to be noted, however, that the Iberian complex was not stressed only by European agents. Some of these attacks or resistances, and those in Asia in particular, were the outcome of alliances of the Dutch and the English with polities in the region. But, more important, the multipolar process of globalization (Chap. 1) was also behind the situation in the seventeenth century. By then Russian expansion to the East was reaching a crucial point. The Rurik and then the Romanov dynasties (1613) had created an empire with many similarities to that of the Iberians, where the tsar negotiated power with the boyars in exchange for access to economic resources and labour (mainly based on serfdom) as well as with state servitors, who were rewarded with conquered and confiscated land (Darwin 2008, p. 73). By the 1640s the new Romanov dynasty dominated land from the Asian Pacific to the coasts of the Northern Sea and the Baltic. This reinforced the fur trade from Siberia and also the trade from the Eastern Mediterranean and the Black Sea with the north of Europe, thus connecting to the different branches of the Silk Road (Fusaro 2015). As we have seen, the pressure exerted on the Indian Ocean by the other big agent of globalization in Eurasia, the Ottoman Empire, had been increasing since the sixteenth century and became even stronger after the Battle of Lepanto (1571). The English-Ottoman alliance in Ormuz was part of this process. The rise of the Mughal Empire during the sixteenth century and its expansion in India would be another important trend.
After the Second Battle of Panipat (1556) and with the conquest of the Deccan from the 1590s, the Mughals created a new and more centralized political formation, which would be more reluctant to establish alliances with the Portuguese, in part for religious reasons, and keener to befriend the Dutch (Costa 2014, pp. 176-181; Flores 2015). Equally important were the development of Japan and the rise of the Tokugawa shogunate during the first decades of the seventeenth century. This more centralized political system was especially ill-disposed towards the religious interference of the Iberian missionaries and sought to weaken the trade they established with the daimios (Findlay and O'Rourke 2007, p. 172). It is to be noted that these trends are associated with the rejection of religious interference from the West, as well as with the spread in Asia of European warfare technology. Again, globalization elicited negative responses to the Iberian power, which would find crucial points in its nodal scheme under increasing stress. The period until the early 1630s cannot, however, be considered one of implacable defeat for the empires of Philip IV, which retained a considerable capacity for reaction. But the strategy of hitting their Achilles heel (i.e. their nodal arteries) was already designed. What was happening? Some time ago J. Elliott drew attention to the way that the English adapted and adopted the legacy of the Spanish empire (2006). In the same way, the Dutch and English were imitating the methods used by the Portuguese in Asia since the fifteenth century: the foundation of factories and emporia trade-of long tradition in this area even before the arrival of the Portuguese (Chaudhuri 1985, pp. 105 and 107)-the concentration of military force and commercial actions in strategic nodes, and the reduction of protection costs by balancing violence and negotiation on the local level. It is not the case that the Dutch did not try to conquer inland regions. The projects of Coen in Batavia (Israel 1989), where a vertical integration between spice production and trade led to the occupation of large territories, are good examples. The conquests of Bahia (1624) and then of Pernambuco (1630), with the intention of displacing the Portuguese from their continental dominium, are also proof of this. But both campaigns showed that these were very risky and costly endeavours. The recovery of Bahia by the Spanish-Portuguese navy (1625) showed the fragility of this type of action, and the fall of Pernambuco in 1654 was in part the consequence of the fact that in Amsterdam, 'the great merchant houses preferred an empire of trade and the expectations of quick profits to the uncertain and more distant returns from colonization' (Boxer 1957, p. 258). 7 This preference was, however, as old as the hills and provides one of the keys to understanding what was happening. Although the Dutch could have been making a virtue of necessity, from the early years of the seventeenth century, many Dutch businessmen were very clear that major campaigns of conquest against the Spanish should be avoided because they 'involved large, open-ended financial commitments' (De Vries and Van der Woude 1997, p. 386).

7 The opinion of Boxer remains all the more interesting to the extent that he placed special emphasis upon the human factor as a cause of the re-conquest of the island by the Portuguese. But his account also makes clear that the resistance of the Portuguese and Spanish moradores (residents) and the difficulties in obtaining logistical help were crucial to remove the powerful Dutch interlopers (Boxer 1957).
Behind this preference lay the idea that conquering extensive regions, and so having to meet the high inherent protection costs, would have compelled them to maintain an expensive bureaucratic apparatus and therefore to open negotiations with colonial elites on the basis of the Spanish model and to transfer political capital to local powers, thus giving rise to massive levels of contraband and to political problems. At least in this period-as perhaps today-it was easier to infiltrate an empire than to build one. But it is important to note that the Dutch and English were also taking a step forward in the history of the colonization of the world and in the relations between formal and informal institutions. The key here lay in the commercial companies. As public and private enterprises (Chap. 6), they created internal mechanisms for the regulation of conflicts between merchant interests and political concerns. K. Chaudhuri expressed this perfectly when he wrote that the VOC 'symbolised one of the most powerful and prestigious combinations of trade and political objectives that the commercial world of Asia had witnessed' (1985, p. 83). Of course quarrels and fraud permeated all aspects of the companies (Chaudhuri 1985). But their capacity to resolve tensions stood in sharp contrast to the difficulties of reconciling the interests of merchants and monarch in the Iberian world. Their mixed character-simultaneously public and private-and their ability to finance their own military apparatus allowed them to create long-term strategies to resolve the classic economic dilemma of 'guns versus butter'. To be specific, in contrast to the Luso-Spanish empire, the Dutch were able to adapt and tailor their decisions in two senses: first, to delay the payment of dividends to their shareholders until their military power had allowed them to acquire and consolidate a solid position in the market (De Vries and Van der Woude 1997; Chaudhuri 1965) and second, and this was a crucial difference from the Luso-Spanish complex, to guarantee that the lion's share of available capital was spent on costs that fed back into their own activities, something which emphatically did not happen with taxes paid by Iberian traders, an important part of which would end up in Flanders defending dynastic interests or, as Quevedo said, 'buried in Genoa', if not in the pockets of the bureaucrats, the aristocrats, and what I have called the dominant coalition in general-let me add. 8 In addition the companies, acting as cartels, were able to regulate their competition and to lower the prices of the products that they bought (Chaudhuri 1965), something equally unthinkable in a system as unregulated as that of the Iberian world, whose independent companies-based on the model of the Italian medieval compagnia-tended to have a very short lifespan and to compete among themselves. What was the impact of Dutch and English competition? Was it able to provoke an immediate decline in Luso-Spanish trade? The prevailing interpretation has been that from 1600 all of this was having a devastating impact upon Iberian commerce.
This view is quite logical, as when things are examined from the Dutch or English perspective, the impression is of an inexorable rise of both powers (Israel 1990; Brenner 1993), while the perspective from Spain has always underlined the enormous problems of the Catholic monarchy and the deep depression that accompanied them. But this idea deserves to be nuanced in degree and chronology. The official figures of this trade are very revealing. The number of sugar refineries in Brazil increased during the first decades of the seventeenth century at the same rate as the figures for the official exportation of this product (Mauro 1960). Something similar can be said of the dimensions of the Portuguese merchant fleet in this region, which grew without interruption between 1583 and 1629 (Costa 2002, p. 173). All indicators invite us to conclude that the trade in these products, as in others such as indigo and cochineal for which only the merest hints exist in the archives (Hough and Grier 2015, pp. 288-9), increased noticeably-or at least stabilized-during this period. A similar impression is conveyed by an analysis of maritime trade by the total tonnage of ships employed in the Carrera de Indias. Here a certain degree of Spanish resistance can be found, with figures remaining relatively high until the 1620s, with a notable fall occurring in this decade and a precipitous decline only in the 1630s and 1640s (Chaunu 1977, p. 255). This impression is corroborated by the evolution of the silver shipments arriving at Seville for private owners, a very important part of which were made up by consignments from emigrants and silver used to pay for merchandise sold in America (Bernal 1993). Figures for this sum, which provide a good reflection of the official commerce of Seville with America, reached their maximum proportions between 1616 and 1630 for the entire period 1500-1650. 9 If we add to this sum the value of the goods that came in these shipments, most of which was the fruit of colonial commerce (Bernal 1993), the impression is even more positive. And this is even more the case if the noticeable expansion of the commerce carried by the Manila galleon is taken into account, with the massive arrival of products from Asia in New Spain and a considerable reverse flow of silver (TePaske 1983; Gasch 2012). None of this can be considered strange if the upward cycle of silver production in the first decades of the century is added into the equation (Bakewell 1991); nor should it be forgotten that these years witnessed a marked expansion of the urban economy as well as the plantation economy and the linking together of economic networks in Latin America (Carmagnani 2011). The same impression is given if we analyse the figures presented by Duncan some time ago for Portuguese trade in Asia (1986). Arrivals in Lisbon, as in Asia, without doubt the most reliable variable of the rhythm of commerce, remained high until 1620, and the more dramatic fall did not take place until 1630. Such a conclusion is reinforced by Boyajian's analysis, according to which private trade, particularly in the routes to Asia but also those to the Cape of Good Hope, 'did not collapse with the advent of European competition in Asia in 1600' and was in fact at its zenith during the first two decades of the seventeenth century (1993, p. 241).
9 Using Hamilton's figures as a basis, I have calculated that these proportions constituted 85.5% between 1616 and 1620, 81% between 1621 and 1625, and 81% between 1626 and 1630; while in the overall period 1580 to 1650, only on one occasion, 1646 to 1650, did it reach the figure of 85% (and figures for this period are highly uncertain). In general the figure had moved between 60 and 75% (see Hamilton 1975).

But what lies behind these figures? It is worthwhile underlining that the infiltration of foreign businessmen and products into this commerce was highly important in the centres around which all else turned, Seville and Lisbon. Contrary to the image of the Carrera de Indias as a monopoly system for the control of trade, it is to be noted that Seville was the main loading point for the shipping of non-Iberian products. The Crown's cession to the city's Consulate of the right to register products carried to America, in return for sworn declarations of their worth and then, later, for fixed fiscal evaluations of the value of the contents of the shipments (see Chap. 5 above), was having its effects. This process continued during the seventeenth century. In return for a gift to the Crown (donativo) of 200,000 pesos (see below), in 1629 the system was changed to that of avalúo, in which tax assessment was made according to the weight of the crates and not the value of their content, a good deal of it being composed of high-value, low-weight commodities. 10 All of this acted in combination with the very system of financing the Carrera de Indias. Seville trade was predominantly based upon the 'loan at risk' (préstamo a riesgo), in which the lender, who assumed the risk, advanced money to a borrower who would repay the loan in America once he had sold his merchandise. Given that this restitution was made by an exchange of money between Spain and America and that considerable risk was involved in it, the interest rates of these loans were very high (normally between 80 and 100%). This meant that only the great merchants could undertake operations involving very large shipments and that, frequently, the loan consisted in the handing over of merchandise whose value, plus interest payments, would have to be returned to Seville after its sale in the colonies. For this reason this system ended up favouring operations among the great international merchants (inserted in networks with their epicentres in the Low Countries, England, Italy, etc.), who took advantage of Castilian front companies with a presence in the Consulate to place their products in America, using the main artery of this trade. This was an authentic example of 'legalized contraband', which allows us to understand why, despite the figures on commerce remaining relatively high until the 1620s, their positive effects on the Spanish economy in general and its industry were less than might be expected. The system was aggravated-and this was the accusation of the Seville shipping merchants-by the presence in their city of the so-called peruleros, Peruvian traders who operated in the zone and who, after the foundation of the Consulate of Lima (1613), were able to defend their interests more effectively before the Crown. Their arrangement consisted of obtaining loans from private individuals in America and, having transferred the monies to Seville, purchasing large quantities of products that would then be shipped back across the Atlantic (Oliva 2004, p. 36).
The advantages acquired over the previous decades by the industries of the north of Europe, the lower prices of goods from these areas, and the strength of the foreign wholesale traders made them especially competitive in the markets of Seville and America. It is interesting to underline some notable parallels with changes in Portuguese commerce. In contrast to the Castilian model, this was not a royal estanco ceded to national merchants, in which most royal income came from taxes paid on commercial activity and from the taxes on America and mining activities. In Portugal it was the Crown itself that retained its rights to trade. But, in any case, the system of licences-which, as we have seen, proliferated from the end of the sixteenth century-implied that the Portuguese king's ships would be loaded by non-Portuguese merchants and that, by extension, the majority of the profits would not remain in the country and would have only a small positive impact on the sectors that might have created added value (Godinho 1982). It is therefore evident that colonial trade from Seville and Lisbon remained very dynamic but also that part of it was already in the hands of transnational merchant networks. Important as the infiltration of foreign merchants in the ports of both Seville and Lisbon was, evidence also suggests that direct smuggling between other countries and the colonies under Portuguese-Spanish control was also increasing and would gain momentum from the mid-1620s. To justify this assertion for the Asia trade, one can turn to a comparison between the ships arriving at Lisbon and in Holland from Asia (Graph 7.1). But the fact is also obvious in the increasing losses of Portuguese ships-in good measure due to Dutch and English attacks-during this period, as well as in the way 'the private trade conducted by royal officials, soldiers and private merchants' undercut the Crown's exclusive control of this trade (Schwartz 2007, pp. 27-28). Something similar can be said of American trade, if we take into account the marked increase in freight prices in traffic to Brazil, a variable that was also partly influenced by the security of commerce. 11 The importance of smuggling, and the redirection of the trade at the expense of traffic in the Carrera de Indias between Seville and the Caribbean, is also clear in research on the Río de la Plata. 12 There is also the suspicion of a growing Dutch presence in the area, allowing more effective contraband from the 1620s onwards, something which is corroborated by qualitative data. In conclusion, the most probable explanation of what was happening is that between 1580/1590 and 1630, America witnessed very marked growth, thanks to the coincidence of the last cycle of mining and an explosive burst forward of the export-orientated plantation economy. 13 Rates of urbanization, if rather crude and not directly dependent upon the development of plantations, endorse this view too. Similar deductions can be made from quite a few qualitative indicators. Thus, the strong growth of the American economy was able to sustain expanding trade within the New World and also an increasing official and contraband trade towards other continents at the same time until 1630.

11 According to the figures of Leonor Freire Costa (2002, pp. 76-7), it is clear that costs grew between 1580 and 1600, before stabilizing and even falling during the truce of 1609-1621. After this point they shot upwards.
12 Moutoukias' study presents its most consistent figures for the period 1650-1700, but his data also underlines the dynamism of the early decades of the century in the formation of the Peru-Buenos Aires smuggling axis (Moutoukias 1988, Chapter 2).

13 The growing number of slaves carried to the New World supports this analysis (Engerman and Genovese 1975; Curtin 1969).

Something similar must have been happening in Asia, where Europeans as a whole were diverting towards the west a growing proportion of the local and interregional trade that, as Subrahmanyam says, was very voluminous and therefore allowed them to redirect greater quantities towards the old continent without suffering adverse effects. From 1625 to 1630, the advance of Holland and England would be even clearer, and by 1648, when peace was finally signed with Holland, the system had reached a dramatic and ironic perfection. The policy of striking against weak points and infiltrating the trade of Lisbon, Seville, and other peripheral areas of the empire was yielding results. In both cities, the so-called monopolio now benefited many other countries. Moreover, smuggling was practised from strategic points in the Caribbean, Africa, and Asia. This situation was even more dramatically ironic if we take into account that the Catholic monarchy was seeing its spending rise for both the protection of these empires and the preservation of their markets (see below). Mercantilism was not only impossible but also benefited foreigners in proportion to the efforts to apply it. The outsiders had now found their 'Indies', as Thomas Mun would say in a work published posthumously in 1664 (Chap. 6). They had first imitated and then adapted the Portuguese techniques, at the same time as they learnt from the errors of the Spanish colonial system. Mun himself, as a director of the English East India Company, had been one of the protagonists in copying and perfecting existing practices and well knew the high cost that alternative experiments might entail. The Dutch and English were able to continue to trade in many American zones without having to take the trouble of conquest, thus saving themselves costs associated with social control, coercion, and administration. Around 1620 Seville was brimming with Flemish merchants and merchandise; representatives of other northern nations and Italians were equally prominent. The rise of the American market was now beginning to have a very positive effect on the industries of different areas of Europe, all of them more competitive than Castile. The English had gained access to the Mediterranean, and their 'new draperies'-light, fashionable, and adapted to the climates of the South-were starting to be worn by a good part of the urban population both in the Iberian Peninsula and in the colonies (Brenner 1993; Coleman 1977). The Dutch, thanks to their multidirectional growth and expanding global markets (Israel 1989), were establishing a centre of information and exchange whose tentacles reached into every part of the world economy and through which data and intelligence were gathered, costs lowered, and predictions made as never before. Economic development and political independence-achieved through military efforts-meant that the Dutch hub was much more efficient than Antwerp had ever been (De Vries and Van der Woude 1997, p. 667). The French were a growing presence in Spanish and American trade and were laying down the basis for a system that would reach its full maturity and potential in the second half of the century.
All of this was achieved at a time in which the presence of Italian industrial products in the international markets was declining, as these were now hindered by diminishing competitiveness in terms of their cost and capacity to adapt to market demands, as well as the inability of Italian political systems to exert influence successfully in a world increasingly shaped by mercantilist considerations and policies.

Global Wars and the Relevance of the Imperial Periphery

The traditional view of what was to occur is quite clear. The decline in American revenues, traditionally associated with the mining crisis, reduced the Crown's income precisely when the domestic economy was weakened and the Thirty Years War, understood as a European conflict with some colonial extensions, reached its climax. Under such circumstances, Castile would remain the only territory disposed to endure the conflict's enormous burden. 14 However, if a wider global perspective is adopted to analyse this period, the final impression may be very different. Any such attempt may even cast a great deal of light on processes that reshaped the empire and were essential in the long term.

Cash for the King on a Global Scale

It might be said that we are dealing here with an economy of war in many senses. This is evident in the fact that military spending, and the payment of debts contracted to meet it, by themselves constituted more than 60% of Castile's ordinary expenditures. 15 The stress provoked by war is also evident in the subscription of both asientos and juros and paved the way for the bankruptcies of 1607, 1627, and 1647 (Gelabert 1997 and Ruiz Martín 1990b). 16 The problems accumulated after 1580 with a cessation of growth in the interior zones of the Meseta and the end of expansion in Andalusia, the most highly taxed areas of the Crown, had a deleterious effect (Castillo 1965). Due to the limitations of what we have called the conflictive pact, the fiscal system was reaching the limits of its efficiency: clientelism, corruption in local and central management, disorder in the collection of taxes, and many other deficiencies were now becoming a problem (Domínguez 1983). In this context, the war economy would reach its fullest extension. In the first decades of the century, the sale of baldíos and common lands had continued, as had those of jurisdictions and royal rents, reinforcing tax pressure upon the remaining royal domain. And if the alcabalas were hardly increased, these now had to be paid in some areas that contained a decreasing number of persons. 17 The outbreak of war in 1621 made things even worse. The decade would be marked by the negative effects of the minting of vellón coins, which brought about a rise of more than 40% in the premium of silver money in just eight years. 18 International payments (above all those effected in Flanders) could only be made in gold or silver: the result was a drastic reduction in the capacity of the monarchy to attend to the international dimensions of the war (Parker 1972, pp. 247-65). These effects were even more prejudicial since between 1600 and 1640, the guilder rose in value by more than 30% in relation to the silver real (Graph 7.2). And from the 1630s, new payments of the millones taxes were introduced, their value quadrupling in many areas (Artola 1982). But, above all, this situation led the government to the arbitrios, or extraordinary measures, which on many occasions were not negotiated in the Cortes.

16 The evolution of the asientos is indicative in this regard.
They increased between 1602 and 1605, before falling, doubtless because of the bankruptcy caused in 1607. From around (approximately) 1616, a new growth cycle began, reaching a peak for the reign in 1623-1625, with the bankruptcy of 1627 on the horizon, after which a new dip can be seen. The war of Mantua, and specifically the entrance of France into the conflict, brought about a dramatic increase, which lasted until 1642 (Gelabert 1997, Graph 1.1, p. 323). 17 In Segovia the alcabalas hardly rose between the 1570s and 1620s. But this quantity had to be paid by a population that had fallen by almost 25% and which now also had to pay the millones and other burdens (García Sanz 1987). 18 It has been calculated that this led to an appreciation of up to 120% by 1648 as a result of the repeated minting of new vellón coin, with another royal bankruptcy in 1647 (Hamilton 1975, pp. 105-12). The complex constructed by Charles V was thus trembling (see our description in Chap. 4). The fiscal pact between the Cortes of Castile and the king was now weakened; the system for the consolidation of debt was taking on water like a sinking ship, and would continue to do so to the extent that the dynamism of the economy was falling. This was all the more evident since the other fundamental pillar, the royal American revenues, was also now cracking. The royal silver shipments, which had begun to fall in 1600, reached a level in 1616-1620 that did not exceed that of 1566-1570, when the Seville boom in metals began (Hamilton 1975, p. 48). Thus, the lubricant that served to obtain asientos and then to consolidate them as juros on Castilian incomes fell short. This would have very prejudicial effects, not only for the phenomenon in itself but also because it struck a blow to a fiscal system that was by now very highly indebted. 19 Even the other pillar, the credit provided by the Genoese, now began to weaken as did the difficult and fragile alliance with them. The bankruptcy of 1627 made it clear to Olivares and, indeed, to everyone else 19 If in 1598 the juros absorbed approximately 45% of Crown incomes (4.6 million ducats from 10.22), in 1621 it had already reached 60% (5.6 million from 10.52). This is all the more revealing in that these years had seen a fall in the interest rate on debt from 7.1% to 5%, and many juros had been pledged by the Crown as security on these lower levels. that now was the time to replace the Genoese or at least to counterbalance their enormous power (Boyajian 1983;Ruiz Martín 1990b). A policy of approximation to Portuguese Jewish bankers and conversos was therefore introduced. If the final result was not the credit monopoly falling into the latter's hands, then it did, over the century, produce a greater number and range of asentistas and agents, to whom the monarchy was much more closely tied for reasons of international political strategy (Sanz 1988). But what was the real role of the colonies in this situation? Can we reduce the problem to a crisis of the Crown's colonial revenues, meaning a reduction of Asia's and America's capacity for sending money to the peninsula? Nothing could be further from the truth, although this interpretation has predominated when the problem has been viewed from a strictly peninsular perspective. Certainly, the Portuguese system was facing difficulties. Here smuggling and the growing Dutch presence in Asia were bringing about a financial crisis in overseas incomes. 
But the other big problem was in the licence system, that is, in the fact that a big proportion of the colonial benefits were going to private hands, especially those of transnational and global agents. The kings of Portugal were seeing a fall in their income for this reason from 1588; by 1627 they hardly garnered a quarter of what they had at the beginning of the period, a sum that practically disappeared between 1632 and 1641 (Hespanha 1993b, pp. 197-205). Furthermore, as in Castile, a phase of sharp inflation in the 1630s eroded the buying power of the tax intake (Hespanha 1989, p. 112). Once again, the problem was the stress experienced by a global system that had functioned relatively well during the sixteenth century but that generated insuperable tensions not only at its centre, as is normally said, but, more important, on its periphery. But the case of Castile is more meaningful. As we can see (Graph 7.3 and Table 7.1), the decade 1601-1610 saw a fall in the shipments of silver reaching the king from America that would continue until the 1640s, when a definitive collapse occurred. In these conditions it was extremely difficult to meet the needs created by the renewal of war in 1621 (Lynch 1969, pp. 71-6). But, more than a crisis in the colonial tax system, what in reality was taking place in the long run was a downward trend in the proportion of the silver brought to Castile, with more remaining on the empire's periphery. Thus, if 64% of income in Lima were sent to Spain in 1591-1600, this proportion fell to 45% in 1601-1610 and 'to about one third for the next These facts have even greater relevance given that in New Spain, in difference to Peru, mine production and, therefore, the incomes derived from it appear not to have diminished. Table 7.1 which represents the total income in America and the proportion sent to Castile is very meaningful. The total income fluctuated until 1680-1691 and only decreased in a very substantial way in the last decade of the century. But the proportion sent to Castile dramatically decreased during the century and more in particular after the big effort of 1631-1640, with the entry of France in the Thirty Years War. There are a number of reasons for this, but one above all must be underlined: growing proportions of metal remained in the colonies to meet their own needs and above all the defensive costs that were rising in response to the Dutch and English attacks. Thus, one of the reasons for the growing military problems in Europe was the global character of the war and empire. In broader terms, the difficulty lay in the growing needs of a global empire that had to establish and defend a multitude of local interests and centrifugal forces spread across a vast geographical area and whose local elites demanded that the empire attend to their protection costs by spending a growing proportion of their taxes. It is important to note too that, if the silver collected in America remained at very high levels until 1681-1690, all available figures on mining production indicate an unbroken fall that began from 1620 to 1625 and continued until 1660-1670 (Barrett 1990, Figure 7.1, p. 238). In other words, a sizeable part of the shortfall was now coming not from the mines but from the pockets of the American population. It is not odd, therefore, that bigger parts of the budget were contributing to maintain the defensive system and the American colonial administration. It should be noted that this is not only the case of America. 
In a revisionist study published years ago by Storrs (2006), this author made clear that the effort of the Habsburg empire's non-Castilian territories of Europe also increased during the last 30 years of the century. But this can be said also for the previous decades, in spite of the Union of Arms' failure. Thus royal incomes in Naples rose from some three million to almost six million ducats between 1600 and 1640, a 100% growth in nominal terms. This in fact meant a considerably larger contribution in real terms, since the fall of prices in the kingdom after 1620 also brought about a reassessment of the taxes levied, thus increasing the overall contribution of this state in real terms. In this kingdom, the consolidated debt grew by 65% between 1605 and 1638 (Calabria 1991, p. 91). 20 Something similar occurred in the territories of the Crown of Aragon, although on a somewhat lower scale (Bernabé 1993). Catalonia had already increased its extraordinary tax payments in 1599 up to 1,100,000 lliures a year (Elliott 1963). But, very importantly, in all of these kingdoms, new taxes were created on productive activities and commerce. These payments entailed contributions to the war or support for military units in situ. And, as in Castile, the municipalities had to impose sisas (sales taxes on items weighed or measured) to meet the costs of war (Bernabé 1993). 21 It is very difficult to work out the monetary contribution of the Low Countries. These monies remained in the treasury of the army, which in any case appears to have acted as an extractive mechanism, and when taxes were raised by local authorities, they proved barely sufficient to cover the cost of their own defence. But G. Parker has calculated that from 1600 to 1640, the 'obedient provinces' contributed some 4,000,000 florins a year (almost three million ducats) (Parker 1972, 144). 22 In Milan the system of contributions by compensation (compensazione) and levelling off (egualanze) for the maintenance of troops was equally indicative of how war and the overall military system of this composite monarchy generated costs and how its 'peripheral states' were obliged to increase their incomes, as well as their contribution to warfare. 23 But Not Only Cash: The Real Burden of the War The contribution of the periphery of the monarchy to the imperial efforts on its many battle fronts is even clearer when the focus of attention moves from money to men. In effect, the economy of war was characterized by the use of forms of resource extraction that, in addition to taxes, resulted in the direct mobilization of troops and military units on the local level and battlefield. This is important in this case, since, as has been said in Chap. 4, the fiscal system is too often confused with the system for the mobilization of resources sensu lato. Yet this consideration is crucial to understanding the impact of war on the empire and not only-or so much-because of its weight but also because of its footprint on the political economy. As expected, this type of contribution was very much present in Castile. Continuing the tradition set up in the precedent century, many cities-for example, Seville, Mérida, Segovia, Salamanca, and the coastal ones-came to contribute in this way with armies which were sometimes better paid than those of the king himself (Thompson 1976, pp. 134-5). 
There are some reasons to believe-though no overall figures can be provided-that efforts of this sort were extended in the 1630s, above all because of the war with France, which threatened the very Spanish territories (Mackay 1999, pp. 80-96). Thompson puts the value of the salaries and military equipment given by Seville at some 170,000 ducats during these years. In Murcia, the most exhaustively studied region, the city paid for armed men at its own cost and maintained its walls. It was also involved in skirmishes on the coast of Africa and even contributed to the maintenance costs of Orán (Ruiz Ibáñez 1995, pp. 227-9 and chapter VI.3). By the same standard, the city of Cartagena was directly involved in privateering offensive and defensive activities organized from its harbour (Ruiz Ibáñez and Montojo 1998). Also like in the preceding century, this was not only a business of towns. Nobles were actively involved in raising men as a form of service paid for from their own pockets (or by advancing money). Troops of this kind were, at heart, a form of 'tribute in kind' rather than in liquid money, although at times this sort of contribution might be commuted for payment in metallic (Mackay 1999). 24 Thompson has argued that the fact that the treasury receipts of Philip IV did not exceed those of his grandfather 'is an indication not that the costs of government had remained unchanged, but that the central administration was now taking a smaller share in the management of the state' (1976, p. 141). In other words, the treasury figures do not provide us with the full picture, as has often been assumed. Moreover, all of these political actors contributed to the mobilization of troops, as was normal in a corporate composite monarchy. But, as with the flows in cash, the contribution of other kingdoms was not negligible at all. The case of Milan, just mentioned, is perhaps one of the most expressive (Maffi 2009), as long as the compensations were associated with military mobilizations in situ, which opened the door to possible excesses related to the transfer of nonfiscal resources to the central power from the areas affected by the presence of troops. Moreover the levies of soldiers, their billeting, the forced purchase of provisions, and other actions, constituted a compulsory contribution that cannot be calculated but which should not be hidden behind the figures derived from treasury and exchequer accounts (Buono 2008). To the body of 60,000 men under arms, all paid for by the Crown, can be added the urban militias supported by the cities. This mechanism for the mobilization of resources, which often did not leave records in the central treasuries, was also employed in Sicily and Naples (Rizzo 1995, andMuto 2007). Though sometimes mixed with flows of cash, it also proved very common in the battlefield of the Low Countries (Parker 1972). We also know that local levies were raised in the Crown of Aragon, sometimes even as a result of forced agreements conceded by the Cortes (Bernabé 1993). The billeting of troops was especially costly in Catalonia during the war against France, to the point that this 'contribution' was another reason for the disaffection of the Catalans towards the House of Habsburg (Elliott 1963, chapter XIV). Though there were differences and natural diversity, this type of mobilizations was very much present in America, from New Spain to the viceroyalty of Peru or Río de la Plata (Ruiz Ibáñez 2009). 
The formats employed ranged from the establishment of pacts between captains, Jesuits, and friendly Indians, who acted as sort of mercenaries in exchange for a range of concessions or advantages, to the formation of militias (Giudicelli 2009;Ariel and Svriz 2016). From the end of the sixteenth century, the urban militias were being rolled out in other territories and, indeed, across the Iberian world and its principal cities, most of them coastal, to the point that this was almost a global phenomenon (Ruiz Ibáñez 2009). 25 The importance of all of this is difficult to exaggerate. As has been said (Chap. 4 above), it represented the military functioning of a composite monarchy. And it affects not only scholars' quantitative assessment of the war effort but also our understanding of the monarchy's overall debtsthat is, not just those of the King of Castile. This was also because a sizeable part of these operations continued to be paid for not only by the incomes of these kingdoms and corporations but also by increasing their debts. This is also important because it was a crucial phenomenon that, regardless of its quantitative impact, underlay the configuration of pacts that definitively affected the institutions and the political economy of the different territories. In less than a century, the Habsburgs had managed to put together the two greatest European empires. In both cases the expansion had produced great advantages. But in both cases, this expansion had also been possible thanks to the projection upon them of informal networks that, despite their going beyond the family and not being only or purely of a familial character, had their transcendental weight in this institution. It could not be any other way, as the networks of relatives and those that could be built upon them played an essential role in this society. In fact they served as the basis of the expansion itself, which was also the result of the needs for expansion of social elites for whom the themes set out in the previous pages-family and extended family relationships, clientele systems, patronage between groups, friendship understood as a political expression and an economic action, a sense of local and religious identity-provided the internal structural forces that shaped the strategies for the reproduction of the social order. This would be fundamental to all developments. In a brilliant essay written in 2007, Stuart Schwartz set out the dynamic of Portuguese commerce as the product of the clash between public institutions and merchants' private interests. The idea is excellent, and the above pages have attempted to reformulate it. The argument does not serve simply for the Portuguese empire. Nor should it be limited to commerce and merchants, since networks of trade were on many occasions only a part of informal and multifunctional webs that were much more extensive and widespread. And the outcome was logical: to the extent in which the institutions of the monarchy were imbued in this same social and ideological structure, the history of both empires can be understood as a process of continual perversion of the efforts towards centralization by the Crown. All of this stood at the base of a political economy in which the practices of rent-seeking, the capture of privileges, corruption, the continual use of privileged information of a very asymmetrical character, and so on would be essential components. 
It would be these networks, the very agents that had contributed towards the establishment of the empires, which would weaken the power of the Crown, which without doubt must be seen as the most important and the most powerful of all agents but, in the final analysis, simply an agent in these pacts. And this was so much more the case to the extent that the very process of globalization would end by turning upon those who had created it. The deficiencies in the system of the Carrera de Indias and commercial licences as well as the need to attend to the necessities of possessions in America, Africa, and Asia were leading to a fall in the Crown's revenues on the peninsula. This drop in revenues affected the sophisticated edifice constructed by Charles V. Yet this situation did not imply a long-term reduction of global commercial traffic. Under these circumstances, intra-colonial commerce grew, and even the official traffic centred on Seville and Lisbon resisted better and longer than has often been thought. At the same time, through these cities or through direct or indirect contraband with the colonies, commercial networks centred outside of the peninsula, in such centres as Amsterdam, Mexico, Manila, the coastal areas of Brazil, and others, increased their participation in this traffic. This change, moreover, also resulted from the fact that competitor states such as England or the United Provinces were conscious that it was easier and cheaper to infiltrate Iberian markets than to affront the risks and the costs of protection and administration that the conquest of such vast territories would have entailed. A tentacular empire is attacked at the nodes of its tentacles. Such a subtle process of invasion took place during the conflicts of the first half of the seventeenth century. Though the Thirty Years War was lived in Europe with great intensity, its extra-European dimension would bring out key elements in the working of the empire and the construction of the state. Far from being a narrowly European conflict, the war was a global phenomenon that compelled this composite global empire to mobilize in its peripheral areas. The consequent mobilizations, moreover, were far greater than is usually recognized when attention focuses only on flows of cash. Precisely because of its dimensions and characteristics, this conflict brought to light the limits of the military revolution and its effects upon the system for the mobilization of resources, which unsurprisingly were more appropriate for a composite monarchy than for a modern state (this latter assumption being the starting point for most visions). And it led to mobilizations in peripheral areas of this composite global empire that would be far greater than is usually recognized when the European dimension is the sole focus of attention. For all these reasons, war would shape and mark the equilibrium of the dominant coalitions, the political economy, and the path taken by the Iberian world, which was very different from the direction embarked upon by countries such as England. But, above all, it would lead to a process of dispersion of resources and even decentralization that would be decisive for the future of the empire. Local elites, particularly in America, would find in this situation the bases for more and more power of negotiation with the centre and for increasing autonomy. The next chapter will explore this aspect. 
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/ by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made. The images or other third party material in this chapter are included in the chapter's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
“Embodied and Operational Carbon of Typical Heating, Ventilation and Air Conditioning (HVAC) Systems in Office Buildings in Washington State: A study of buildings registered under LEED v3 2009” Heating, Ventilation and Air Conditioning (HVAC) systems contribute significantly to operational energy and CO2e emissions during the service life of office buildings. Over the last decade, stringent energy codes have enabled the introduction of new HVAC technologies to reduce operational CO2e emissions. However, life cycle carbon emissions of buildings are divided into operational carbon (OC) and embodied carbon (EC). Operational carbon comprises the CO2e emissions generated from the burning of fossil fuels used to heat, cool and power the building space during its service life, while EC encompasses the CO2e emissions equivalent to producing, procuring, installing, maintaining, repairing and disposing of the materials and components that make up the building. Over the last decade, broad efforts have improved the understanding of the role that HVAC system selection plays in overall OC; nevertheless, the EC of HVAC has remained largely unexamined. This paper aims to identify typical HVAC systems used in office building design in Washington State and explore the effects of current practice on total energy use, operational and embodied CO2e. The study sample is composed of twenty office buildings in Washington State registered under the LEED v3 2009 version, of which 15 have obtained LEED certification in the last two years. The projects are registered under the New Construction (NC), Core and Shell (CS), Existing Buildings and Operation and Maintenance (EB:OM) and Commercial Interiors (CI) products and comply with the requirements established in the ASHRAE 90.1-2007 energy standard. The results show that typical HVAC system selection is often a combination of different technologies for ventilation, heating and cooling, and that in general smaller buildings tend to incorporate high efficiency packaged units while medium and large size buildings typically rely on High Performance Variable Air Volume (HPVAV) systems or hydronic systems such as chilled beams and water source heat pumps (WSHP). The results also indicate that the data available through the LEED v3 2009 documentation system on embodied carbon of the mechanical systems is limited and that simplified methods to assess embodied carbon of HVAC are needed in order to integrate EC into whole life assessment of mechanical systems. Background In the face of Climate Change, policy efforts around the world for all new buildings to operate at net zero CO2e by 2030 have increased in recent years (Laski & Burrows, 2017). Recent ambitions to improve industry practice further contribute to the trend toward net-zero impact, and even net-positive buildings (Lützkendorf, Foliente, Balouktsi, & Wiberg, 2015). Net Zero Carbon buildings (NZC) are defined as 'a highly energy efficient building that produces on-site, or procures, enough carbon-free renewable energy to meet building operations energy consumption annually' (Architecture 2030, 2016). In this context, CO2e emissions have been widely regarded as a key metric to understand a building's negative impact on the environment and its capacity to incorporate renewable energy sources (Laski & Burrows, 2017). A metric that uses CO2e emissions instead of site energy intensity (SEI) includes other strategies to mitigate or defer global warming, such as CO2e sequestration (Wang et al., 2017).
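The OC/EC split described above can be summarised as a simple whole-life accounting identity over a service life of N years. This is a generic bookkeeping sketch rather than a formula from the study, and the emission factors and service life are illustrative symbols only:

\[
C_{\text{whole-life}} \;=\; \underbrace{\sum_{m} q_m\, e_m}_{\text{embodied carbon (EC)}} \;+\; \underbrace{\sum_{y=1}^{N} E_y\, f_y}_{\text{operational carbon (OC)}}
\]

where \(q_m\) is the quantity of material or equipment item \(m\), \(e_m\) its life cycle emission factor (kgCO2e per unit, covering production through disposal and replacement), \(E_y\) the energy consumed in year \(y\), and \(f_y\) the emission factor of the energy supplied in that year.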
HVAC Systems in Office Buildings, Operational and Embodied CO2e In large commercial buildings, HVAC systems represent the largest primary energy end-use (Huang et al., 2015). In developed countries, heating, ventilation, and air-conditioning (HVAC) (Cao, Dai, & Liu, 2016) account for almost half of the total energy use in commercial buildings (Yu, Yan, Sun, Hong, & Zhu, 2016) and approximately 10-20% of total energy consumption, which demonstrates the great energy reduction potential. In the United States buildings rely on electricity to meet a significant portion of its energy demands, especially for lighting and HVAC. In 2007, the emissions attributable to electricity consumption in commercial buildings for lighting, heating, cooling, and operating appliances in the US commercial sector was 79%. This made the sector accountable for 38% of CO2 emissions from fossil fuel combustion. Electricity generators consumed 36% of US energy generated from fossil fuels, and emitted 42% of total CO2 from fossil fuel combustion in 2007(Al-Sallal, 2016). However, life cycle carbon emissions of buildings are not only operational. Life cycle CO2 emissions of buildings are often divided into operational carbon (OC) and embodied carbon (EC). Operational carbon are the CO2e emissions generated from the burning of fossil fuels used to heat, cool and power the building space during its service life, while EC encompasses the CO2e emissions equivalent to producing, procuring, installing, mantaining, repairing and disposing of the materials and components that make up the building (Cabeza, Rincón, Vilariño, Pérez, & Castell, 2014). EC assessment plays a critical role in supporting decisions of building retrofit and for considerations of the large environmental impact of post disaster building destruction (Fardhosseini, 2015). Over the last decade, broad efforts have improved the understanding of the role that HVAC system selection play in overall OC, nevertheless, EC of HVAC has remained unexamined. Few studies have looked at the EC of HVAC systems, with only some exceptions quantifyng EC for different components ( LEED Rating System and Building Regulation in Washington State In the United States most state energy codes are based on model codes ANSI/ASHRAE/IES 90.1 (Standard 90.1) or the International Code Council (ICC) International Energy Conservation Code (IECC). The requirements of these codes vary by state and the control requirements can be difficult to implement, yet the assumption is that these codes are implemented and working correctly ( In Washington State, several local and state-level policies encourage green building development and energy efficiency. According to Building Energy Codes Program from the U.S. Department of Energy (DOE), the first statewide Washington State Energy Code (WSEC) was adopted in 1986 applicable to all buildings and was based on ANSI/ASHRAE/IES Standard 90A-1980 (U.S. DOE, 2018). The first amendment to the commercial energy standards came in 1991, and from that date progressive modifications for HVAC systems have included increased equipment efficiencies, more restrictive controls, and minimum motor efficiencies (SBCC, 2018). The 2012 WSEC went into effect on July 1, 2013 (WSU Energy Program, 2018). The latest version, the 2015 WSEC is one of the most stringent energy codes in the country and is more efficient than ASHRAE 90.1-2013. Washington is one of the only four states in the country that has adopted a standard with this level of requirements. 
Washington is one of the states with the largest number of certified projects in the United States. One of the enablers of a wide adoption of the LEED rating system, was the enactment of Chapter 99, Laws of 2011, that required that "All major facility projects of public agencies receiving any funding in a state capital budget, or projects financed through a financing contract must be designed, constructed, and certified to at least the LEED silver standard". According to the 2017 USGBC annual ranking of LEED Buildings per state, Washington came in 11th place in 2017, with 12,469,420 total square feet of LEEDcertified space from 74 certified projects, equating to 1.93 square feet of LEED space per capita (USGBC, 2018a). Method This study aims to respond the following research questions: What are the typical HVAC systems and equipment used in LEED registered buildings in Washington State and what is their contribution to the overall CO2e emissions in the building. In order to respond to these questions, a two-stage research plan is proposed. In the first stage, a systematic review of the project data is developed, the second stage analyses each HVAC system against the different performance indicators commonly used in LEED certification. Data Gathering Process The data for this project was obtained via USGBC LEED online system, the official platform for design and construction team members to upload the data for projects undergoing LEED certification process. The data available for each project are credit templates and supporting documentation to demonstrate compliance with each credit. The credit templates offer standardized data for all projects, however the organization of the template varies depending on the LEED product: New Construction (NC), Core and Shell (CS), Existing Buildings, Operation and Maintenance (EB:OM) or Commercial Interiors (CI). The type of supporting documentation in clearly indicated for each project under each credit, however the data is submitted by each project in unstructured content types. The project data was gathered during 12 months from June 2017 to June 2018 directly from the LEED Online website. The data from the website was summarized and recorded into a template for each project, the data recorded in these templates are: System Description Narratives, and Equipment List. During the second stage, specific parameters are organized into a structured database, the parameters included in this database are of five different types and are described in Table 1. Information about the sample The twenty buildings analyzed are office buildings registered under the LEED 2009 version 3.0 for either NC, CS, EB:OM, and CI, and 15 have obtained some level of certification over the past two years. All buildings included in the sample are located in the State of Washington, and more specifically in the cities of: Seattle (n=15); Bellevue (n=2); Kirkland(n=1); Olympia (n=1) and Redmond (n=1). Buildings registered under LEED EB:OM (n=4) demonstrate energy performance using historical energy consumption data, while buildings registered under LEED NC, CS, and CI are modeled to estimate energy consumption via building energy simulation programs (i.e. eQuest, EnergysPro, HAP, Trace and IES). Building energy simulation (BES) has been used extensively in the industry in order to estimate energy consumption patterns and to compare of proposed design projects relative to standard designs in early stages of design. 
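A minimal sketch of this size classification is given below; the thresholds follow the size ranges quoted in the Results headings, and the assumption that the areas are gross floor areas in square feet is illustrative:

def size_category(gross_floor_area_sf: float) -> str:
    """Classify an office building by gross floor area in square feet,
    using the three ranges assumed from the Results section headings."""
    if 10_000 <= gross_floor_area_sf < 80_000:
        return "Small"
    if 80_000 <= gross_floor_area_sf < 300_000:
        return "Medium"
    if 300_000 <= gross_floor_area_sf <= 800_000:
        return "Large"
    return "Outside study range"

# Example: a 150,000 sq ft project falls into the Medium group.
print(size_category(150_000))  # -> Medium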
BES does not provide predictive accuracy of the future energy use of the buildings or HVAC systems and its limitations have been extensively documented in the literature. BES analysis is conducted by first using the software to model the proposed building geometry and the different building parameters such as: climate data, envelope materials, schedules and mechanical, electrical and plumbing systems. The proposed building is then compared to a baseline model designed following the parameters in ASHRAE 90.1 2007. Appendix G guidelines. All projects comply with the 2012 Seattle Energy Code, which is 8 to 12 percent more efficient than ASHRAE 90.1-2010 for all office building sizes (Kennedy, 2014). Due to the large variation of the building parameters across all buildings in the sample, the office buildings were classified according to their size in three categories: Small, Medium, Large as shown in Table 2. Per USGBC requirements, data accessed via LEED Online, describing attributes of individual buildings should not be revealed publicly, all data from the platform must be reported in aggregate, therefore all data used in this study is only presented in aggregate for three building size categories. In order to obtain data for the EC of the HVAC equipment, this study uses the equipment descriptions submitted in compliance with Credit 4: Enhanced Refrigerant Management under the Energy and Atmosphere Category for LEED-NC and LEED-CS and Credit 5 for LEED EB:OM. Only 16 buildings in the sample complied with the enhanced refrigerant management credit. The equipment weights were calculated using industry technical sheets for each type of equipment. Preliminary estimates of embodied carbon is calculated using global warming potential data from existing databases. Results A subsubsection. The paragraph text follows on from the subsubsection heading but should not be in italic The results of this study are described in two parts. The first part explains the results of the qualitative systematic review of HVAC systems description in LEED online supplementary information. The second part of the results describe the results of the quantitative stage of the research where each EUI and CO2 ranges is described for each building size category and type of HVAC system. Small Buildings (10,000-80,000) For most small buildings, the most common type of HVAC system packaged rooftop units (RTUs). In most cases, these RTUs are packaged rooftop heat pumps serving each individual zone in the building. Typical zone numbers in small office buildings range from 10-15 and are typically served by 2.5-15 ton individual RTUs. These RTUs include economizers, power exhaust, and short cycling protection. Another type of system used in small buildings is Variable Refrigerant Flow systems VRF including heat recovery ventilators. Medium Buildings (80,000-300,000) In both medium and large building size categories High Performance Variable Air Volume Systems (HPVAV) are widely used. HPVAV are characterized by the use of optimized system control strategies, fan-pressure optimization and supply-air-temperature reset (Murphy, 2011). HPVAV also called High Performance Air Systems (HPAS) typically include heat recovery and efficient fans and capacity control (Smith, 2013). In various buildings in the sample, the centralized system consists of a cooler supporting office by office air handling units (AHU). Each AHU provides conditioned air to all occupied spaces using parallel fan powered terminal units (PFP). 
Ventilation in primary office space of medium buildings is also provided by roof top units (RTUs). Typical HVAC Systems in Large Buildings (300,000-800,000) Ventilation in primary office space of large buildings, is typically achieved by roof top units (RTU) systems. These RTU serve office zones through fan powered and VAV boxes located above the ceiling. Heating in this each zone is served by a series fan powered boxes with electric reheat. Large buildings usually include a central plant that serves the entire facility including different types of use in zones. In medium and large buildings the first retail floor is usually served by water source heat pumps (WSHPs). For most efficient buildings, the WSHP are served from high temperature chilled water return to reclaim heat that is typically rejected by cooling towers. In the most efficient buildings, these WSHP. Performance Results per type of HVAC System and Building Size Category (EUI and total CO2e) For office buildings in the PNW, in general, HVAC accounts for approximately 45 to 55% of end use consumption within the building. Due to the geographical location of these buildings heating energy is less than in typical office buildings, while ventilation, cooling, pumps and miscellaneous equipment represent larger energy use. In general, the building's site energy use intensity ranges from 35 to 70 (kBtu/sf-year) for smaller buildings, 20 to 50 (kBtu/sf-year) for medium buildings and from 30 to 60 (kBtu/sf-year) for larger buildings, as shown in Fig. 1. This is in line with the U.S National Median Reference values for the Energy Portafolio Manager (as EUI) for an office building comparable to these building types is 52.9 The building's CO2e use intensity ranges from 0.80 to 6.08 (kCO2e/sqm-yr) for smaller buildings, 0 to 9.15 (kCO2e/sqm-yr) for medium buildings and from 3.4 to 8 (kCO2e/sqm-yr) for larger buildings, as shown in Fig. 2. The embodied carbon intensities for each type of building size category vary between 6 and 12 kCO2e/m 2 , however this only considers main refrigerant intensive equipment types and does not consider other types of equipment (i.e. air handling units, cooling towers) nor does consider other types of materials such as ductwork, refrigerants or insulation and their replacement rates. Conclusions In this study, the HVAC systems of twenty buildings registered under LEED v3 where analyzed. The twenty buildings are office buildings located in Washington State and registered under the LEED 2009 version 3.0 for either NC, CS, EB:OM, and CI. Fifteen buildings have obtained some level of certification over the past two years. Buildings registered under LEED EB:OM (n=4) demonstrate energy performance using historical energy consumption data, while buildings registered under LEED NC, CS, and CI are modelled to estimate energy consumption via building energy simulation programs comparing the proposed to a baseline model designed following the parameters in ASHRAE 90.1 2007. Appendix G guidelines. The office buildings were classified according to their size in three categories: Small, medium, and large. The results show that typical HVAC system selection is often a combination of different technologies for ventilation, heating and cooling, and that in general: smaller buildings tend to incorporate high efficiency packaged units while medium and large size buildings typically rely on High Performance Variable Air Volume (HPVAV) systems. 
Medium and large size buildings tend to incorporate more novel systems such as chilled beams and water source heat pumps (WSHP). Large buildings implement central plants and typically incorporate Dedicated Outdoor Air Systems (DOAS), which contribute significantly to reducing energy consumption for ventilation. The buildings' operational CO2e intensity ranges from 0.80 (smaller buildings) to 8 (larger buildings) kCO2e/m2-yr, in contrast to the initial embodied carbon intensities for each building size category, which vary between 6 and 12 kCO2e/m2. However, the embodied carbon calculations only consider the main refrigerant-intensive equipment types and do not account for other important types of HVAC equipment, nor for other materials such as ductwork, refrigerants or insulation and their replacement rates. Further work is required to assess the different varieties of HVAC equipment, their material types and renovation rates across the building life cycle.
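To make the EC/OC comparison above concrete, the sketch below follows the preliminary approach described in the Method (equipment weight multiplied by a global warming potential factor, normalised by floor area) and expresses the initial embodied carbon as an equivalent number of years of operational emissions. All weights, emission factors and intensities in the sketch are placeholder values, not data from this study:

# Hedged sketch: preliminary embodied carbon (EC) of refrigerant-intensive HVAC
# equipment and its comparison with an annual operational carbon (OC) intensity.
equipment = [
    # (description, unit weight in kg, count, cradle-to-gate factor in kgCO2e/kg)
    ("packaged rooftop heat pump", 450.0, 12, 3.5),   # placeholder values
    ("water source heat pump",     180.0, 20, 3.5),   # placeholder values
]
floor_area_m2 = 10_000.0                               # placeholder building size

ec_total = sum(weight * count * factor for _, weight, count, factor in equipment)
ec_intensity = ec_total / floor_area_m2                # kgCO2e/m2, initial EC

oc_intensity = 4.0                                     # kgCO2e/m2-yr, placeholder OC

# Years of operation whose emissions equal the initial embodied carbon.
print(f"EC intensity: {ec_intensity:.1f} kgCO2e/m2")
print(f"EC equals {ec_intensity / oc_intensity:.1f} years of operational carbon")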
Superradiant Phenomena for Spinor Fields in Rotating Black Hole Geometry We derive (i) the orthonormal and completeness relations for normal modes and (ii) the non-existence of the zero mode for spinor fields in rotating black hole geometry. From these results, we show that superradiant phenomena for spinor fields should be type 2: positive momentum on the horizon (pH > 0) and negative frequency at infinity (ω < 0). Introduction Matter interactions with black holes (BHs) are essential for their existence and observation. As the black hole geometry, rotating BH geometry is important for superradiant phenomena (incident intensity < scattered intensity). In particular, Anti-de Sitter (AdS) space-time plays the role of a reflecting mirror, and successive superradiant phenomena lead to instability of BHs. Spinors are fundamental as matter fields because multiple spinors can represent Bosons and Fermions, which can be realized by the Bargmann-Wigner formulation. The organization of this short report is as follows. In Sect. 2, spinor fields in Kerr-AdS space-time are studied. In Sect. 3, normal modes are studied to derive the orthonormal and completeness relations. In Sect. 4, the non-existence of the zero mode (p H = ω − Ω H m = 0) is shown, with ω, m the frequency and azimuthal quantum number of the spinor fields, and Ω H the angular velocity of the BH. In Sect. 5, we show that the type 2 superradiant modes (ω < 0, p H > 0) are consistent. A summary of the obtained new results is given in the final section. This report is based on our paper [1], which includes the detailed logic and calculation. Spinor fields in Kerr-AdS space-time The line element of Kerr-AdS space-time in Boyer-Lindquist coordinates is written in terms of ℓ = √(−3/Λ), the cosmological parameter, and a = J/M, the rotation parameter of the BH. The event horizon is defined as the outer zero of ∆ r , i.e., 0 = ∆ r ⇒ r = r + . The Dirac equation for spinor fields in curved spacetime is defined through the local Minkowski spacetime, where the Dirac matrices are treated as spacetime independent (Greek suffixes (µ, ν, ...) for curved spacetime and Latin suffixes (i, j, ...) for local Cartesian spacetime); here ω i j µ and b µ i denote the spin connection and vierbein, Γ i j and Γ i jk the antisymmetric products of Dirac gamma matrices, and μ the mass of the spinor field. We write the spinor fields in separated-variable form in the polar coordinates and define chirality eigenstates, in which the four-component spinors are expressed in terms of two-component ones. The Dirac equation then reduces to four ordinary differential equations, where κ is the separation parameter.
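For orientation, a standard textbook form of the Kerr-AdS line element in Boyer-Lindquist-type coordinates, and a commonly used form of the curved-spacetime Dirac equation, are sketched below; sign and normalisation conventions, as well as the symbol used here for the spinor mass (\(\tilde{\mu}\)), may differ from those of the original paper [1]:

\[
ds^2 = -\frac{\Delta_r}{\rho^2}\Big(dt - \frac{a\sin^2\theta}{\Xi}\,d\varphi\Big)^2
 + \frac{\rho^2}{\Delta_r}\,dr^2 + \frac{\rho^2}{\Delta_\theta}\,d\theta^2
 + \frac{\Delta_\theta \sin^2\theta}{\rho^2}\Big(a\,dt - \frac{r^2+a^2}{\Xi}\,d\varphi\Big)^2,
\]
\[
\rho^2 = r^2 + a^2\cos^2\theta,\quad
\Delta_r = (r^2+a^2)\Big(1+\frac{r^2}{\ell^2}\Big) - 2Mr,\quad
\Delta_\theta = 1 - \frac{a^2}{\ell^2}\cos^2\theta,\quad
\Xi = 1 - \frac{a^2}{\ell^2},
\]

with the outer root of \(\Delta_r = 0\) giving the horizon radius \(r_+\). A commonly used form of the Dirac equation in curved spacetime, with vierbein \(b^{\mu}{}_{i}\) and spin connection \(\omega_{ij\mu}\), is

\[
\Big[\gamma^i\, b^{\mu}{}_{i}\Big(\partial_\mu + \tfrac{1}{4}\,\omega_{ij\mu}\,\Gamma^{ij}\Big) + \tilde{\mu}\Big]\Psi = 0,
\qquad \Gamma^{ij} = \tfrac{1}{2}\big[\gamma^i,\gamma^j\big].
\]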
Normal modes: orthonormal and completeness relations The local conservation law of the bi-field current k µ spinor for Ψ and Ψ ′ holds by using the Dirac equations. The inner product, defined as the integral of k µ spinor , is shown to be constant in time provided the surface term vanishes. A boundary condition on R 1 , R 2 is required for the surface term to vanish, where γ is a phase factor. The orthogonality relation for the separation parameter κ is obtained by using the spinor equations and the boundary condition. The orthonormal relations for the frequency ω and the azimuthal quantum number m are obtained straightforwardly. Summarizing these results, orthonormal relations with respect to ω, m and κ are obtained for the eigenfunctions, where α = (ω, m, κ) and α ′ = (ω ′ , m ′ , κ ′ ) denote sets of quantum numbers. Any spinor field can be expanded in the eigenfunctions u α and v α with the corresponding coefficient functions; inserting the coefficient functions back into the spinor field expression, the completeness relations are obtained. Summary As the summary of this report, we classify the superradiant modes as follows (p H ≡ ω − Ω H m denotes the momentum of matter fields on the horizon). For spinor fields, type 1 (ω > 0 and p H < 0) is not possible but type 2 (ω < 0 and p H > 0) is possible by our analysis. (Note that neither type is possible according to other analyses [3,5,6].) For scalar fields, which are described in our paper [1] though not in this report, type 1 (ω > 0 and p H < 0) is not possible but type 2 (ω < 0 and p H > 0) is possible by our analysis. (Note that the results are the opposite in other analyses.) The result of our analysis is obtained from the completeness relations in Sect. 3 and the spectrum condition in Sect. 4. For spinor and scalar fields, one of the two types of superradiance should be possible because of the completeness relation. Between the two types, type 2 can occur because of the spectrum condition. We have obtained the same type 2 superradiance both for spinors and scalars, which is supported by the Bargmann-Wigner formulation in our previous work [7].
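The classification in the summary can be stated compactly as follows (notation as in the text above):

\[
p_H \equiv \omega - \Omega_H m;
\qquad
\text{type 1: } \omega > 0,\ p_H < 0;
\qquad
\text{type 2: } \omega < 0,\ p_H > 0,
\]

with type 2 the only superradiant possibility for both spinor and scalar fields according to the analysis summarised above.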
Personal genomes, quantitative dynamic omics and personalized medicine The rapid technological developments following the Human Genome Project have made possible the availability of personalized genomes. As the focus now shifts from characterizing genomes to making personalized disease associations, in combination with the availability of other omics technologies, the next big push will be not only to obtain a personalized genome, but to quantitatively follow other omics. This will include transcriptomes, proteomes, metabolomes, antibodyomes, and new emerging technologies, enabling the profiling of thousands of molecular components in individuals. Furthermore, omics profiling performed longitudinally can probe the temporal patterns associated with both molecular changes and associated physiological health and disease states. Such data necessitates the development of computational methodology to not only handle and descriptively assess such data, but also construct quantitative biological models. Here we describe the availability of personal genomes and developing omics technologies that can be brought together for personalized implementations and how these novel integrated approaches may effectively provide a precise personalized medicine that focuses on not only characterization and treatment but ultimately the prevention of disease. INTRODUCTION With the advent of high-throughput technologies genomic science has experienced great leaps, rapidly expanding its domain beyond the characterization of short genomic reads in the early days of sequencing to the possibility of obtaining personalized genomes, once considered the holy grail of genomic methodology and technology development. The value of personalized genomic analysis, and evaluation of variant associations to disease, is becoming more apparent, even spurring directly to consumer implementations. Further developments in the last few years now lead to a more ambitious goal: the longitudinal monitoring of multiple omics components in individuals and the characterization of the molecular changes associated with disease onset in individuals, at an unprecedented level. In this review we describe technological and methodological developments in personal genomics, and the new promise of multiple omics profiling, including transcriptomes, proteomes, metabolomes, autoantibodyomes and so forth, (sample omics analysis workflows shown in Figures 1-4). We then discuss a framework on how such data may be integrated with a view towards the application of a personalized precise and preventive medicine, and describe an implementation of this approach. The technological developments and methodology allow for inroads into the future of quantitative personal medicine, which we can now plan carefully by taking into account not only the scientific developments that need to be implemented, but also the social implications coupled to ethical and legal considerations. GENOMIC SEQUENCING In 2001 the completion of the Human Genome Project (HGP) was announced effectively with the publication of the first complete human genome sequence. The HGP came at a hefty $2.7 billion cost using the best technology of the time, making it seemingly prohibitive to expect personal genome sequences to be achieved shortly thereafter. 
Yet the immense technological advancement, spurred by motivation by the National Institute of Health (NIH) and the National Human Genome Research Institute (NHGRI) to bring down genomic costs, led to an unprecedented growth in technology and methodology, enabling the drop in sequencing costs (http://www. genome.gov/sequencingcosts) to continue at a rate beyond the most optimistic projections of 2001 ( < $4000 currently). While initially the human genome was a combination of multiple individual genomic data [1][2][3], the developments by 2008 had allowed the determination of genomic individual makeup [4][5][6][7]. It is now possible to personalize Whole Genome Sequencing (WGS), and the dwindling sequencing costs promise the possibility of affordability for all in the near future [8]. These developments encouraged efforts to characterize disease on a genomic level, towards the application of an all-encompassing genomic medicine, at the molecular level. The initial goals were the characterization of populations for large studies, now shifting to the individual. An alternative to sequencing the whole genome has been whole exome sequencing (WES) [23]. This technology aims to study the exonic regions of the genome (~2%-3%), which are associated to several Mendelian disorders. It offers a lower cost option (e.g., Illumina, Agilent, and Niblegen platforms, see Clark et al. for a comparison of the latter two [24]) and has received immense attention, including the Exome Sequencing Project (ESP) (see the Exome Variant Server at http://evs. gs.washington.edu/EVS/), supported by the National Heart, Lung and Blood Institute (NHLBI). Quantitating genomic variation Concurrently with the technological developments, our understanding of the human genome has grown immensely since the publication of the reference genome in 2003. The aim was to determine the precise role of each base in the genome and identify genomic variants ( Figure 1). Several collaborative large-scale efforts pursued such investigations. The International HapMap Consortium [25,26] tried to identify common population variants and led to the development of public databases, such as dbSNP [27] (http://www.ncbi.nlm.nih.gov/SNP/), which catalogues Single Nucleotide Polymorphisms (SNPs) (defined as occurring in >1% of the population to differentiate from Single Nucleotide Variants (SNVs)). This has revealed great genomic variation both in global populations [28,29] and populations of admixed ancestry [30][31][32][33]. Typically the technologies involve the assignment of reads to the reference genome to determine the structure of the underlying sequence, including variation ( Figure 1). Beyond nucleotide variation, other genomic differences have been investigated, including small insertions and deletions (indels), copy number variations (CNVs) indicating varying numbers of segments and longer chromosomal segments that contribute to Structural Variation (SVs) -SVs are defined for segments of chromosomes larger than 1000 bp ( Figure 1A). Such efforts have been based on microarray methodology [34][35][36][37] and even higher-resolution in structural variants may be achieved with other methods [38][39][40][41]. Structural variants have been publically made available in the database of Genomic Structural Variation (dbVAR; http:// www.ncbi.nlm.nih.gov/dbvar/). 
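To give a concrete flavour of the read-mapping and variant-calling workflow referred to above (and detailed in the Figure 1 caption below), a minimal command sequence might look like the following sketch. It is illustrative only: the file names are hypothetical, the flags are abbreviated, and the exact options depend on the versions of the tools used.

import subprocess

def run(cmd: str) -> None:
    """Run one pipeline step and fail loudly if it errors."""
    subprocess.run(cmd, shell=True, check=True)

# Hypothetical inputs: a reference FASTA and paired-end reads for one sample.
ref, r1, r2, sample = "ref.fa", "sample_R1.fastq.gz", "sample_R2.fastq.gz", "sample"

run(f"bwa index {ref}")                                        # index the reference
run(f"bwa mem {ref} {r1} {r2} > {sample}.sam")                 # map reads with BWA
run(f"samtools sort -o {sample}.sorted.bam {sample}.sam")      # sort alignments
run(f"samtools index {sample}.sorted.bam")                     # index the BAM
# Call small variants (SNVs/indels); a GATK-style caller is assumed here, and
# additional preparation (reference dictionary, read groups) is omitted.
run(f"gatk HaplotypeCaller -R {ref} -I {sample}.sorted.bam -O {sample}.vcf.gz")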
Furthermore, functional elements have been extensively catalogued by the Encyclopedia of DNA Elements consortium (ENCODE; http://genome.gov/encode; ~10 production projects), with funding from the NHGRI. ENCODE data, including regulatory elements and RNA and protein level elements, have now been released and the project has received widespread attention [42][43][44][45]. The ENCODE project aims at a biochemical genomic characterization, with a thorough mapping of transcribed regions, transcription factor binding sites, open chromatin signatures, chromatin modification and DNA methylation. Such extensive data still need to be annotated [46] and interpreted in terms of biological significance, mechanisms and connections to phenotype, and will likely prove invaluable in our interpretation of personalized genomic differences.

Figure 1. Genomic variants. (A) Variation in the human genome. The personal genomic code can differ from the published reference genome. Basic examples of variation are shown on a single or few base variants (e.g., point mutations, insertions and deletions), or a larger scale for structural variants (>1000 bp, e.g., large insertions, deletions, inversions, tandem repeats, translocations). (B) Sample variant analysis workflow. In a genomic variant analysis, for example, after sample preparation and sequencing the raw files can be passed through quality control (e.g., using FastQC (http://www.bioinformatics.bbsrc.ac.uk/projects/fastqc/) and removing PCR artifacts using tools such as Picard (http://picard.sourceforge.net)). Reads are mapped to the genome and variants are assessed, e.g., mapping with several algorithms, including ELAND II (Illumina), SOAP [221], MAQ and Burrows-Wheeler Aligner (BWA) [222] and Novoalign by Novocraft Technologies (http://www.novocraft.com). Read re-alignment can be performed, e.g., using the Genome Analysis Toolkit (GATK) [223], or HugeSeq [211], to call variants, including implementations with Sequence Alignment Map format Tools (SAMtools) [224], and annotation using Annovar [225], SIFT [226] and PolyPhen [227] for determining variant effects on proteomic translation [228]. Furthermore, using a variety of methods the structural variants can be determined. For example, the paired-end mapping method considers how paired-end reads map to the reference to assign deletions and insertions, from reads whose mapped span is longer or shorter than the average span, and inversions, from the positions and relative orientations of the ends of reads [39,40]. The read depth method makes it possible to identify proportional genomic copy number variation. In the approach of Abyzov et al. [229], the read depth, considered as an image, is analyzed using image processing techniques, viz. mean-shift theory [230]. Programs such as Pindel [231] and BreakSeq [232] consider split-read analysis to determine breakpoints of insertions and deletions. DELLY [233] by Rausch et al. takes into account paired-end and split-read methods for determining structural variants. Many packages for analysis are available through the Bioconductor [234] project as implemented in the freely available R statistical analysis platform (http://www.R-project.org).

Though initially limited by the number of complete genomic sequences, such data are now continuously updated and expanded by information from other projects such as the 1000 Genomes Project [47] as discussed
below, which has allowed us to have a better view of the great variability in each individual genome (~3-4Â10 6 SNPs, > 200000 SVs of varying sizes,~1500 SVs> 2 kbp), with much of the variation considered rare (1%-5%). Genome-Wide Association Studies (GWAS) try to associate the common variants to disease, by combining the now readily available extensive variant information and allelic variability, with linkage disequilibrium (a description of the correlation patterns between proximal variants). The NHGRI provides a publically available catalogue of published GWAS (http://www.genome.gov/ gwastudies) [48]. The early expectations of finding common traits and genomic features unique to diseases have proven more complicated, as the genomic variability turns out to be higher than expected and additionally the genetic variants need further validation. Personalized risk evaluation One of the goals of personalized genome interpretation is the evaluation of disease risk factors based on an individual's variant and allelic distribution composition. Such information may be compared to similar individuals with known disease associations to assess whether an individual shows increased or decreased risk compared to the control group. A combination of know SNPs and personalized variants has been found to be effective [72][73][74][75] and has been used in clinical studies; more recently, a seminal study by Ashley et al. [76] evaluated disease risk for a patient with family history of vascular disease. Personalized evaluation of potential drug responses can be based on the effects of variants [77,78], including drug selection, sensitivity and dosage estimation, e.g., cardiovascular drugs [79], schizophrenia related medications [80]. For example, PharmGKB (http://www.pharmgkb. org) provides a curated database of possible genomics information [81,82], exploring the impact of genomic variation on drug responses as these relate to expressed genes and associated pathways and disorders. The future applications are to include a precise drug dosage for an individual, avoiding trial and error methods and providing more effective treatment. The evaluation of personalized risk based on genomes is now appearing in direct-to-consumer services. Companies like 23andMe, deCODEme, (and previously Navigenics), offer to assess individual genotypes and offer disease based interpretation services based on Mendelian disorder evaluation and including pharmacogenomics responses. These are mostly based on SNPs evaluation and the tests though limited in scope do offer interpretation attractive to multiple consumers. Personal Genomes Project Presently thousands of genomes have been completely sequenced. One of the first large scale projects has been the 1000 Genomes Project [47], that has made its data publically available, and has encouraged the development of streamlined bioinformatics tools to analyze the variation in the individual genomes ( Figure 1). This project aims to combine data from 2500 individuals from multiple populations, at a 4Â coverage. Another grand scale effort driven by George Church's group at Harvard University is the Personal Genome Project (PGP) [83][84][85]. The project has been recruiting individuals who can share their medical and other information together with genomic information online (http://www.personalgenomes.org). 
The volunteers share full DNA sequences, RNA and protein profile information in addition to extensive phenotype information including medical records and environmental considerations, with all the data made publicly available, and plans to expand to 100,000 individuals [86]. One of the rather unique features of the PGP is its model of participant consent, which differs from that of traditional studies. The ownership of the data is to be open and publicly available without restrictions, not only for the initial purpose of the study but also for follow-up or additional investigations. The scope is participatory, with the volunteers for the project interacting directly with the researchers. To address informed consent, participants pass a basic genetic literacy exam and must understand the project's scope. Additionally, they provide a complete medical history, immunization and medication history, which becomes part of the publicly available subject information. Access to an individual's data in the project can be either private to the participant and researchers only or completely public, depending on the participant's choice. The availability of extensive patient and omic information will be invaluable to researchers in developing robust analysis models for characterizing genomes and disease, and the PGP project, with its publicly open structure model, will be at the forefront of such efforts. BEYOND THE GENOME: OTHER OMICS Transcriptomics Though the genetic code in DNA is almost identical across cells (besides cellular variation), different cells have different gene expression, corresponding to the kind of cell, developmental stage and physiological state. The collection of the transcripts in a cell (e.g., mRNA, non-coding RNA and small RNAs), the transcriptome, is essential to our understanding of cell function and response to disease. Considerations must include start and end sites of genes, coding, alternative splicing and post-transcriptional modifications. Initial inroads were made using high-density oligo microarrays and in-house custom made microarrays [87], with high-density arrays having resolutions up to 100 bp [88][89][90][91]. While relatively inexpensive, these methods suffered from relying on prior knowledge of the genome, and faced technical issues such as background and saturation effects [92]. Hybridization interactions between probe sets in short oligo microarrays can lead to spurious correlations [92,93]. The development of RNA sequencing (RNA-Seq) brought higher coverage, better precision and quantitation, and higher resolution and sensitivity, bringing RNA-Seq technology and transcriptomics on par with genomic sequencing [94][95][96][97][98]. RNA-Seq considers reads that correspond to millions of transcriptomic fragments that are mapped to the reference genome, to provide information on transcripts that may not be in the existing genomic annotation, allowing the search for novel transcripts, and even identification of SNPs and other variants, while showing remarkable reproducibility (Figure 2). Transcriptome profiling has included looking at cancers [99][100][101], including breast cancer [102], gastrointestinal tumors [103] and prostate cancer [104].

Figure 2. RNA-Seq analysis. In RNA-Seq analysis, short reads can be assembled and then mapped to the reference genome (with tools such as Illumina's ELAND, MAQ and BWA [222], Bowtie [235][236][237], SOAP [221], and others). A recent protocol by Trapnell et al. [238] describes in detail the use of dedicated RNA-Seq programs from the Tuxedo suite, such as TopHat [239], Cufflinks [240,241] and an R implementation called CummeRBund as a Bioconductor package (an alternative is to run these directly or using GenePattern [242,243], which also includes possible reconstruction by Scripture [244]). Other programs such as DESeq, another package in Bioconductor, can also help test for differential expression [245]. The numerous analysis options are now publicly discussed online in a forum (http://SEQanswers.com/) that covers many other examples and all aspects of the mapping process [246].
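Purely as an illustration of the kind of read-count quantitation that such mapped RNA-Seq data enable, the sketch below performs a generic transcripts-per-million normalization; it is not a step of any specific pipeline named above, and the gene names, counts and lengths are invented.

```python
# Illustrative transcripts-per-million (TPM) normalization of raw RNA-Seq counts.
def tpm(counts, lengths_bp):
    rpk = {g: counts[g] / (lengths_bp[g] / 1000.0) for g in counts}  # reads per kilobase
    per_million = sum(rpk.values()) / 1e6                            # library scaling factor
    return {g: value / per_million for g, value in rpk.items()}

counts = {"GENE_A": 1200, "GENE_B": 300, "GENE_C": 4500}     # mapped reads per gene (made up)
lengths = {"GENE_A": 2000, "GENE_B": 1000, "GENE_C": 6000}   # transcript lengths in bp (made up)
for gene, value in tpm(counts, lengths).items():
    print(f"{gene}\t{value:,.0f} TPM")
```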
Mass spectrometry, proteomics and metabolomics Gene expression was expected to correlate with protein levels in a cell, and it was thought that methods such as RNA-Seq would be enough to ascertain the proteomic expression corresponding to gene expression. Proteins are expected to be closer to phenotype, as they participate in every aspect of cellular biology, but their expression levels are difficult to quantitate, partly because of translational control in cells, possible degradation and sampling issues [105][106][107]. The development of electrospray ionization brought mass spectrometry (MS) to the field of proteomics and the possible identification of thousands of molecules based on mass [108][109][110][111][112]. This has enabled not only the cataloguing of proteins, but also the querying of post-translational modifications [113,114]. As the techniques matured, liquid chromatography tandem mass spectrometry (LC-MS/MS) has become standard, and novel instruments (e.g., the Velos family [115] by Thermo Scientific; quadrupole time-of-flight mass spectrometers (QTOFs) by Agilent) allow unprecedented precision, enabling the development of methods to identify thousands of proteins (~4,000-6,000 over 2 days) and quantitate protein levels [73,116] (Figure 3). One set of methods uses stable isotopic labeling by amino acids in cell culture (SILAC) to label cells in light and heavy isotopes of amino acids, providing double spectral peaks in MS for identification and quantitation [117][118][119][120]; this method is now supplemented by 'spike-in'/'super' SILAC, which has been used to measure biopsy tumor proteomes [121]. Another possibility is to use isobaric tags for relative and absolute quantitation (iTRAQ) [122,123] or tandem mass tag (TMT) labeling [73,124,125], and other methods, including spiking in peptides for absolute quantitation. Finally, it is possible to employ label-free methods for quantitation, which do not rely on tags, including signal-integration methods and MS spectral counting [126][127][128][129][130][131]. In comparison to whole transcriptome profiling, the numbers of proteins identified in proteome profiling tend to be lower, particularly since low peptide levels cannot be amplified (cf. polymerase chain reaction amplification in sequencing methods). Additionally, the current bottom-up (shotgun) proteomics methodology uses digestion with endopeptidases such as trypsin to obtain peptides of small enough mass to be identified by MS/MS, resulting in many fragments that cannot be identified in MS, which may possibly be alleviated by top-down approaches that do not employ a digestion step [132][133][134][135][136]. However, proteomics provides insights that are missing from transcriptomic analysis, especially given the low correlations between protein and transcriptome differential gene expressions [73,[137][138][139][140][141][142].
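One of the simplest label-free schemes mentioned above, spectral counting, can be sketched with a normalized spectral abundance factor (NSAF) calculation. This is a generic illustration rather than the procedure of any cited study; the protein identifiers, counts and lengths are fabricated, and NSAF is only one of several scoring schemes in use.

```python
# Toy label-free quantitation by spectral counting (normalized spectral abundance factor).
def nsaf(spectral_counts, lengths_aa):
    saf = {p: spectral_counts[p] / lengths_aa[p] for p in spectral_counts}  # counts per residue
    total = sum(saf.values())
    return {p: value / total for p, value in saf.items()}                   # normalize across the run

counts = {"PROT_ALB": 120, "PROT_KIN": 8, "PROT_RPL": 45}      # MS/MS spectra per protein (made up)
lengths = {"PROT_ALB": 609, "PROT_KIN": 350, "PROT_RPL": 217}  # protein lengths in residues (made up)
for protein, value in sorted(nsaf(counts, lengths).items(), key=lambda kv: -kv[1]):
    print(f"{protein}\tNSAF = {value:.3f}")
```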
Multiple proteomes have been quantitatively profiled, including characterization of ovarian cancer [143], an integrated approach that combines transcriptome and proteome information in a human cancer cell line by Nagaraj et al. [144], integrative gastric cancer characterization and effects of post-translational modifications [145], and searches for biomarkers in other cancers [146,147].

Figure 3. In quantitative proteomics using mass spectrometry, typical approaches employ trypsin digestion coupled with tagging methods; non label-free methods include use of isotopic labeling (SILAC) or isobaric tagging (iTRAQ, TMT). One typical bottom-up setup uses a combination of high affinity liquid chromatography coupled with two rounds of mass spectrometry (LC-MS/MS) to fractionate peptides for identification and obtain their mass spectra. Raw files may be analyzed using vendor software or converted to open formats (such as .mzXML, .mzData or the current standard .mzML [247][248][249], e.g., using MSConvert [250]). The mass spectra can be mapped to known proteins using a protein library, or less frequently assembled de novo, using an array of programs (e.g., X!Tandem [251], SEQUEST [252], Mascot [253], the Open Mass Spectrometry Search Algorithm (OMSSA) [254], Proteome Discoverer by Thermo Scientific, or MassHunter Workstation by Agilent). Quality control includes estimation of false discovery rates (FDR), often using a reverse database search [105,255,256]. Quantitation can be carried out to estimate relative levels of proteins in different samples (employing standardization and normalization of average sample ratios to a unit mean). Finally, annotation is made using databases such as UniProt or NCBI. Some of the analysis can be performed using suites and programs such as PEAKS [257], the Trans-Proteomic Pipeline (TPP) [258][259][260][261], multiple tools from ProteoWizard [250], OpenMS [262][263][264], or the complete vendor solutions Proteome Discoverer and MassHunter Workstation mentioned above. Multiple other programs for mass spectrometry are available (e.g., see http://www.msutils.org).

In addition to developments in proteomics, MS has encouraged the study of small molecules. The behavior of small molecules in cells, though difficult to track, provides insight into many common disorders. The set of all cellular small molecules is collectively called the metabolome. Metabolic processes are vital in biological pathways, and a systems analysis of molecular cell complexity might lead to biomarker discovery, and possibly disease risk assessment, diagnosis and treatment [148]. Similar to proteomics, metabolomics can employ mass spectrometry to identify compounds [149] (Figure 4), and cataloguing is under way, with thousands of metabolites identified by structure, mass and occasionally associated biological processes [150][151][152][153][154][155][156][157][158][159][160][161]. The identification of compounds can be based on MS/MS application and use of known compound spectra, or via use of standards against which mass spectra are compared. The profiling of metabolic components on an individualized basis can provide insights into pharmacogenomics and personalized medications, in addition to potential biomarkers, for example cholesterol levels and coronary artery disease [162,163]. The metabolomics of cancer has been extensively studied [164][165][166], Type 2 Diabetes has been investigated [167], and in vivo interactions with proteins are being evaluated [168].

Figure 4. In metabolomics analysis, chromatography columns are used for purification and preparation of samples coupled to mass spectrometry (gas chromatography (GC) or liquid chromatography (LC)-MS); standards for specific compounds may also be used in parallel for positive identification. Raw files may be analyzed using vendor software or converted to open formats (such as .mzXML, .mzData or the current standard .mzML [247][248][249], e.g., using MSConvert). The spectral data may be aligned for retention time and mass intensity calibration, e.g., using XCMS [265][266][267], SIEVE by Thermo Scientific, Matlab toolboxes by MathWorks, MassHunter Profiler by Agilent, or MzMine [268,269]. After quality control and statistical analysis, masses of interest can be annotated using databases, e.g., Metlin [155,156], KEGG [151], MetaCyc [153,270,271] and Reactome [157][158][159][160][161].
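The annotation step at the end of the metabolomics workflow, matching observed masses against reference databases such as Metlin or KEGG as noted in the Figure 4 caption, can be illustrated with a toy lookup. The three reference masses below are approximate monoisotopic values, and the observed masses and tolerance are invented; real searches use far larger databases and adduct-aware matching.

```python
# Toy metabolite annotation by monoisotopic mass within a ppm tolerance (illustration only).
REFERENCE_MASSES = {       # approximate monoisotopic masses in Da
    "glucose": 180.0634,
    "creatinine": 113.0589,
    "cholesterol": 386.3549,
}

def annotate(observed_mass, tolerance_ppm=10.0):
    hits = []
    for name, reference in REFERENCE_MASSES.items():
        ppm_error = abs(observed_mass - reference) / reference * 1e6
        if ppm_error <= tolerance_ppm:
            hits.append((name, round(ppm_error, 1)))
    return hits or [("unannotated", None)]

for mass in (180.0631, 386.3560, 250.1000):
    print(f"m = {mass:.4f} Da ->", annotate(mass))
```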
Other omics Genomes, transcriptomes and metabolomes have received widespread attention and currently offer the most quantitative data, provided by robust and comprehensive omics technologies, both in terms of experimental and computational methodology. However, multiple other omics are available, and their number is increasing, with a few notable technologies mentioned below. Autoantibodyomes: In addition to profiling proteins directly, the reactivity of proteins to autoantibodies may be profiled on a large scale. Spotted protein arrays [169][170][171][172][173] have been implemented to study, for example, effects in cancer [174], immune response [175] and recently diabetes [176]. Another approach is the Nucleic Acid Programmable Protein Array (NAPPA), constructed by spotting plasmid DNA to effectively express and encode the proteins on the array, and used for immunoprofiling [177,178]. Furthermore, functional peptide arrays have also been constructed [179,180]. Complementary technologies such as bead-based immunoassays are also being actively developed, such as the Luminex xMAP assay [181]. Microbiomes: Omics profiling could also include mapping of the personal microbiome, the complete set of microbes in an individual (e.g., found mainly on the skin or in the gut, conjunctiva, saliva and mucosa), possibly using a combined omics approach to look at genetic makeup and metabolic components [182][183][184][185][186][187]. The human microbiota (http://www.human-microbiome.org) have been associated with obesity [188] and diabetes [189,190] and have also been suspected to play an active role in the development of immunity [191]. The dynamic monitoring of microbiome-related changes can help identify the specific microbiota involved in disease responses, elucidate microbiome-host interactions, and clarify how the individual variability in components impacts developmental and metabolic processes. Methylomes: In addition to genomics, epigenomic information, such as probing the methylome, i.e., identifying all genomic sites of cytosine methylation [192,193], might provide information about differentiation and regulation of gene expression. Methylation analysis and data interpretation can be challenging [194,195], but methods are improving as more data become available. Methylome analysis has now been carried out in blood components [196], stem cells [197] and ovarian cancer [61], and it might prove invaluable in assessing epigenomic effects on individual development and health.
PERSONALIZED MEDICINE The developments of the many different omics technologies outlined above have given us tremendous insight into the human genome and associations to diseases, especially with the rise of the personal genome. The NHGRI, recognizing the importance of these developments and the directions necessary to enhance health care, outlined in 2011 a vision for the future of personalized medicine [198] encompassing five domains of development: understanding the structure of genomes, understanding their biology, improving our understanding of the biology of disease, advancing medicine and improving the effectiveness of healthcare. The aims had been set for a shift towards personalized medicine within two decades, but the availability of the technology and constantly decreasing costs have made pilot investigations of personalized medicine a current possibility [73]. Genetic variation has proven adequate for understanding group differences in disorders, but a truly personalized implementation needs to consider an individual. Clinicians are already considering molecular markers in their evaluation of patients, particularly in cancer [199][200][201][202][203]. The typical clinical diagnosis involves the observation of symptoms, traditionally confirmed utilizing a small set of molecular markers. In diseases that share a common set of symptoms, some rare, such diagnosis is often complicated and prolonged, especially for heterogeneous disorders that need additional information to enable classification and subsequent specific treatments. Genetic and environmental factors create additional variability in disease severity, progression and treatment responses. Thus, traditional assays together with the aforementioned current omics technologies, which allow monitoring of thousands of molecular components, will facilitate and accelerate differential diagnostics and sub-classification through utilizing a more complete set of disease markers. A personalized approach will result in better targeting of diseases, introduce higher precision through measurement of larger sets of molecular components, and would ideally be implemented at an early age to assess disease risk and have a preventative rather than retrospective treatment focus. A personal approach is by its nature an n = 1 study, which helps eliminate variation between individuals that are treated as a group, but still requires some verification and establishment of a baseline for comparison. As such, the profiling of healthy physiological states in a longitudinal approach may provide such a basis, if multiple time points with similar physiological state makeup are sampled. Multiple omics can supply multiple supporting datasets at each time point, with each complementary technology providing additional supporting information for a baseline establishment. This introduces the concept of complete omics monitoring of individuals over time, making personalized medicine a more dynamic proposition. The dynamic changes of molecular components may be associated with the individual's changing physiological states, and mapped onto pathways to identify the onset and progression of disease, including possible preventive measures.
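A minimal sketch of how such a personal healthy baseline could be used follows. This is a generic z-score check of a new measurement against an individual's own healthy time points, assumed here purely for illustration; the analyte, values and cutoff are invented and no specific clinical rule is implied.

```python
# Toy check of a new measurement against a personal healthy baseline (invented values).
from statistics import mean, stdev

def flag_deviation(baseline_values, new_value, z_cutoff=3.0):
    mu, sd = mean(baseline_values), stdev(baseline_values)
    z = (new_value - mu) / sd if sd > 0 else 0.0
    return z, abs(z) >= z_cutoff

healthy_glucose = [88, 92, 90, 85, 94, 89, 91]   # mg/dL at healthy time points
for new_reading in (93, 126):
    z, flagged = flag_deviation(healthy_glucose, new_reading)
    print(f"glucose {new_reading} mg/dL: z = {z:+.1f}, flagged = {flagged}")
```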
In our suggested implementation, termed integrative Personal Omics Profiling (iPOP), which we followed in the study discussed below [73], we integrate the omics components discussed above in a longitudinal approach with three essential steps (Figure 5): I) Risk estimation: As discussed above, the personal and common genomic variants determined in an individual genome can be associated with disease [76], with pharmacogenomic evaluation to determine possible drug response. An early-age whole genome sequencing, possibly at birth, can provide a list of disorders with possibly increased risk and lead to taking preventive measures. This may be done in combination with a complete medical and family history, as for example implemented in the PGP project, and in conjunction with classical clinical risk factor profiling. II) Dynamic profiling of multiple omics: Starting with a healthy or 'steady state' baseline, by monitoring changes in the molecular components over multiple time points, drastic or gradual changes in physiological states might be assessed and the dynamic onset of disease profiled, and possibly prevented. Such profiling may be done on blood components, which are easily obtainable currently in the clinic. The individual blood components are excellent reflectors of the generalized physiological state of an individual, as the blood circulates and receives inputs from multiple tissues throughout the body. The components may be processed to track multiple omics, such as the transcriptome, proteome, metabolome and autoantibodyome, which as mentioned offer complementary information, especially given the modest correlation observed between transcriptomic and proteomic components [137][138][139][140][141][142]. A recent study of profiles of tumors changing over time also employed an integrative approach on genomic and transcriptomic components [204]. Implementing this monitoring on healthy individuals will allow the monitoring of disease onset and physiological changes across various healthy, disease and recovery states, following thousands of molecular component levels and responses at the corresponding physiological states. III) Data integration and biological impact assessment: The multiple omics data can be analyzed individually to characterize their temporal response profile. This may be done using standard statistical time-series analysis, extensively used in all quantitative disciplines, such as physics, economics and finance, as discussed by Bar-Joseph et al. [205]. The dynamic signature of the signals for each molecular component can be studied for autocorrelation, periodicity or spiky behavior,
corresponding to causal changes or abnormal physiological state conditions resulting from the onset of disease, infections, or environmental effects. The different classes of temporal response can be checked for biological pathway and gene ontology enrichment [151,[157][158][159][160][161][206][207][208][209][210], and for corresponding disease associations in comparison to a database of other longitudinal profiles (coupled to complete electronic records of omic and medical histories). Such a database is a necessary and powerful resource towards the realization of personalized medicine based on omics data profiling.

Figure 5. iPOP for personalized medicine. The framework described in the text employs multi-omics analyses (see above and Figures 1-4) that may be implemented for individuals. In step I) Risk estimation for disease is carried out using whole genome sequencing to perform variant analysis coupled to medical history, environmental considerations and pharmacogenomics evaluations. In step II) Dynamic profiling of multiple omics using an array of technologies follows multiple omics longitudinally in a subject as they progress through their different physiological states, including healthy, disease, and recovery states. Thus thousands of molecular components are collected over time for step III) Data integration and biological impact assessment, using temporal patterns to obtain matched omics information, correlate and classify responses, compare against pathway databases and visualize components; e.g., current pathway tools include DAVID [206,272], KEGG [151], Reactome [157][158][159][160][161] and Ingenuity Pathway Analysis (IPA); networks can be visualized using Cytoscape [207], various R packages through Bioconductor [234], Matlab by MathWorks and several others. Future iPOP implementations may be gathered into a curated database of iPOP-disease associations that may help in categorizing an omics dynamic response to a catalogued physiological state and disease onset, with potential diagnostic capabilities.

Example implementation of personalized medicine: iPOP To show the feasibility and practical applicability of iPOP we profiled a healthy individual, 54, over a period of initially 14 (now 33) months [73]. This initial time series covered healthy states and two viral states, including a human rhinovirus (HRV) infection at the initiation of the study and a respiratory syncytial virus (RSV) infection 289 days later. The iPOP used blood samples to extract omic components from peripheral blood mononuclear cells (PBMCs) and serum, which were analyzed to obtain a complete DNA, RNA, protein, metabolite and autoantibody profile. Initially a complete medical exam was performed with standard clinical tests before time-point profiling began. In a first step, WGS was carried out with two platforms (Complete Genomics and Illumina, at 150- and 120-fold coverage respectively) and WES with three platforms (Nimblegen, Illumina and Agilent), and helped identify a large number of variants (>3×10^6 SNPs; >2×10^5 indels; >2,000 SVs). Using multiple platforms allowed us to determine high-confidence and novel variants (using HugeSeq [211]). Evaluation of genetic disease risks based on variants was carried out, both by looking for known disease associations using dbSNP and the Online Mendelian Inheritance in Man (OMIM, http://omim.org/) database and by using the RiskOGram algorithm [76], which integrates information from multiple alleles to assess risk against a similarly matched data cohort. This revealed significantly increased risk for various disorders, including open angle glaucoma, dyslipidemia, coronary artery disease, basal cell carcinoma, type 2 diabetes (T2D), age related macular degeneration and psoriasis. This encouraged the subject to follow up on these disorders, and also to start monitoring glucose and glycated hemoglobin (HbA1c) levels, which surprisingly increased beyond normal levels following the RSV infection, and the subject was diagnosed by his physician with T2D 369 days into the study.
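Returning briefly to step III, the kind of simple temporal-pattern screening referred to there can be sketched as follows: a lag-1 autocorrelation separates slowly drifting components from noise, and a median-based spike score flags components that jump at a single time point. The two series and thresholds below are fabricated, and this is only a toy illustration, not the spectral-analysis framework actually used in the study.

```python
# Toy temporal-pattern screening of longitudinal omics series (invented data).
import numpy as np

def lag1_autocorr(x):
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = (x ** 2).sum()
    return float((x[:-1] * x[1:]).sum() / denom) if denom else 0.0

def spike_score(x):
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med)) or 1.0
    return float(np.max(np.abs(x - med)) / mad)

drifting = [1.0, 1.2, 1.5, 1.9, 2.4, 3.0, 3.7]   # gradual trend (e.g., a slowly rising analyte)
spiky    = [1.0, 1.1, 0.9, 6.5, 1.0, 1.2, 0.9]   # transient response (e.g., around an infection)
for name, series in (("drifting", drifting), ("spiky", spiky)):
    print(name, "lag-1 autocorr:", round(lag1_autocorr(series), 2),
          "spike score:", round(spike_score(series), 1))
```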
Related to T2D, pharmacogenomic considerations revealed a possibly favorable (glucose lowering) response to the diabetic drugs rosiglitazone and metformin, should treatment become necessary. Furthermore, the autoantibodyome profiling of the subject (Invitrogen ProtoArrays profiling of 9,483 protein reactivities to Immunoglobulin G (IgG)) revealed increased reactivity for multiple proteins, including DOK6 (related to insulin receptors), and GOSR1, BTK and ASPA, previously reported to show high reactivity by Winer et al. in insulin resistant patients [176]. The subject initiated and still maintains a strict dietary and exercise regimen supplemented with low doses of acetylsalicylic acid, which helped control the glucose and HbA1c levels; after a considerable time period (~months) these have now returned to normal levels. In addition, a range of omics was profiled over time, for up to 20 different timepoints over the span of the study, including a high coverage transcriptome (RNA-Seq of PBMCs, 2.67 billion reads mapped to 19,714 isoforms corresponding to 12,659 genes), proteome (MS of PBMCs, identifying a total of 6,280 proteins; 3,731 consistently across most timepoints) and metabolome (MS of serum, profiling 6,862 and 4,228 metabolites during the periods of HRV and RSV infections respectively, with 20% identified based on mass and retention times alone). The dynamic transcriptome, proteome and metabolome profiles were analyzed in a novel integrated framework based on spectral analysis of the time series. This allowed the identification of temporal patterns in the combined data, corresponding to biological processes that varied with physiological state changes, including the onset of T2D seen in multiple omics components, and common signatures of the HRV and RSV infections. While several gene associations to pathways were known, multiple genes showed similar patterns that had not been reported before and merit further investigation. OTHER CONSIDERATIONS AND FUTURE DIRECTIONS The iPOP study discussed above revealed the complexities and characteristics of personal genomes, transcriptomes, proteomes and metabolomes, and showed the feasibility of personalized longitudinal profiling that can provide actionable health information. Multiple omics data integration still presents a formidable challenge and merits further development. Each omics technology produces different kinds of data, in multiple formats (e.g., data files range from simple text and extensible markup, e.g., .xml, to vendor closed-source formats). Additionally, each omics set requires its own quality control analysis, further confounded by the different error and noise levels associated with the different technologies. As each of the data sets also presents different signal and noise distributions, this makes uniform normalization approaches across omics challenging, especially when considering multimodal dynamic data. Furthermore, the amount of information per omics set can vary, e.g., ~5,000 proteins, ~20,000 transcript isoforms, ~6,000-10,000 metabolites, ~9,000 autoantibody-protein reactivities and so forth. Hence, gene-centric approaches, which integrate data corresponding to, associated with, or interacting with the same genes, will not always work, as the different components may not match. The integration of information per component is made more difficult by the multiple existing gene and protein annotations, often resulting in a many-to-many map in the gene-protein integration, and by the correspondingly lacking metabolite-protein/gene annotations and associations.
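As a toy illustration of the per-layer normalization and time-point alignment that such integration requires (and of the missing measurements discussed next), the sketch below z-scores each invented omics series on its own scale and joins them on shared time points, leaving gaps rather than imputing. It assumes the pandas library is available; the component names and values are made up.

```python
# Toy alignment of heterogeneous omics time series on shared time points (invented values).
import pandas as pd

transcript = pd.Series({0: 11.2, 30: 11.5, 60: 14.8, 90: 12.0}, name="GENE_X_mRNA")
protein    = pd.Series({0: 0.8, 60: 2.1, 90: 1.0},              name="GENE_X_protein")  # day 30 missing
metabolite = pd.Series({0: 105, 30: 110, 60: 240, 90: 150},     name="metabolite_Y")

def zscore(series):
    return (series - series.mean()) / series.std()

table = pd.concat([zscore(transcript), zscore(protein), zscore(metabolite)], axis=1)
print(table)   # rows are days; NaN marks the missing proteome time point
```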
Finally, if considering dynamic datasets, this also results in multiple instances where time points might be missing data for some of the molecular components (especially evident in mass spectrometry and shotgun proteomics, where proteins are identified through different peptides). These complications of omics data integration necessitate that each individual omics data set is analyzed independently up to normalization, and then integrated with the other information. New integrative methodology has to account for such different normalizations and missing data, and also allow integration that is not gene-based but rather incorporates time-series analyses, as was for example carried out in the iPOP study [73]. Classification of changes by temporal response, and possibly interaction data, leads to an interpretation of components based on shared similar dynamics and avoids some of the issues of insufficient annotations and missing information. Such an interpretation lends itself to a clinical setting where dynamic changes are associated with varying personalized physiological states, and may be adopted by the medical community. To facilitate the wide adoption of the methods into personalized medicine, the integrated data analysis will require optimization of current computational tools to rapidly and efficiently handle, as well as visualize, the multiple omics data. As a first step, the amount of computation time for different analyses must be reduced from days (in the case of mapping sequence data and quantitative proteomics in the current omics analyses presented above) to hours or less to have immediate relevance to active medical examinations. Secondly, better visualizations of omics data, though difficult, are also necessary, as multidimensional information is difficult to collate, present, and interpret (many efforts are addressing this, e.g., Circos plots that allow multiple sequence information to be displayed together are now widely adopted [212]). Incorporating such information with clinical data and phenotypes presents a new challenge, requiring browsers that combine temporal information with multi-dimensional omics sets. We believe network analysis [213][214][215][216][217] presents an excellent visualization and integration possibility, allowing the combination of multiple levels of dynamically changing networks that will include cellular information, component and corresponding disease temporal progressions, as well as medical assay data in a modularized approach. The computational analyses and visualization of omics data integration also reveal the known need to manage large amounts of data [218,219], both in terms of processing power as well as storage capacity and maintaining easy accessibility, especially for the practicing clinician, with the recent advent of cloud computing providing one possible solution. Finally, the combination of omics data with medical records presents another challenge, with privacy and ethical issues that must be considered. Such improvements and standardization of approaches will help make the analysis available in a clinical setting and to an increasingly large set of patients, while encouraging the early adoption of the integrated approaches by the scientific community towards personalized medicine applications. As technology improves we expect to see advancements in each omics implementation discussed above.
In terms of sequencing, continual improvements in depth and read length will allow unambiguous, precise sequence mapping and additionally the querying of lower gene expression, coupled to higher accuracy in variant calling. With sequencing times becoming faster (e.g., whole genome sequencing in ~5-30 hours depending on platform, at deep ~100× coverage) and hardware more compact, eventually such technology will be available in the clinic, enabling the incorporation of all genomic, transcriptomic, microbiomic and autoantibodyomic profiling as part of regular medical examinations. Correspondingly, mass spectrometry improvements (including table-top hardware now available) will bring higher mass accuracy and sensitivity, allowing increases in the number of proteins identified and better quantitation, which can already be implemented in a clinical setting. The MS improvements, in combination with better metabolite cataloguing, will also improve the identification of small molecules. The protocol and methodology advancements will allow a smaller volume of patient sample to be used for iPOP (decreasing from ~80 mL to drops of blood), making it feasible to probe the omics on a more regular basis for each patient, even providing home kits to send in self-collected samples (akin to what is already implemented to some degree by companies, e.g., 23andMe, that collect saliva samples for genotyping). The technological and methodological advancements will allow for effective iPOP implementations with multiple patients, but it will still take some time to evaluate what constitutes actionable information and which components will be most informative. Once these relevant components are identified, monitoring technologies can be further developed to help possible clinical implementations. This will certainly be helped by multiple iPOP studies providing the necessary aggregated information. However, clinical and psychological concerns need to be addressed, with the possible impact on patient health being of paramount importance, in a medical process in which the patient is actively participating [220]. Such active participation requires the training of the public and health professionals towards an understanding of genomic information and how this omics knowledge impacts their health and their families. Genetic counseling is a necessity, and the number of trained genetic counselors is steadily increasing. Informed consent will be necessary, but this requires an understanding of basic genomic terms that are not apparent to non-experts. To facilitate this, school curriculum adjustments will probably be needed to enable early education of the public. The emergence of quantitative Personal Omics, including genomes, transcriptomes, proteomes, metabolomes and other omics, allows us to now combine them to yield personalized, actionable health care information. Such research is at the forefront of medical science, and may help the characterization of disorders and the implementation of precise personal medicine aimed towards prevention rather than treatment. Careful forward planning, coupled to the continuing interest and participation of the public, government agencies and researchers, assures that the development of personalized omics will proceed beyond possible hurdles into a novel approach for 21st century health care.
2017-08-03T02:56:43.586Z
2013-01-15T00:00:00.000
{ "year": 2013, "sha1": "1271a98d1511b4d5e7105977d0805fab7664b0ad", "oa_license": null, "oa_url": "https://doi.org/10.1007/s40484-013-0005-3", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b67376db380379b48690b43ab52ae56c4ec3cfc3", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Computer Science", "Biology", "Medicine" ] }
38895460
pes2o/s2orc
v3-fos-license
Integrable deformations of T-dual $\sigma$ models We present a method to deform (generically non-abelian) T duals of two-dimensional $\sigma$ models, which preserves classical integrability. The deformed models are identified by a linear operator $\omega$ on the dualised subalgebra, which satisfies the 2-cocycle condition. We prove that the so-called homogeneous Yang-Baxter deformations are equivalent, via a field redefinition, to our deformed models when $\omega$ is invertible. We explain the details for deformations of T duals of Principal Chiral Models, and present the corresponding generalisation to the case of supercoset models. INTRODUCTION Integrable models in two dimensions have played a pivotal role in the understanding of (quantum) field theory, have numerous applications in condensed matter theory, and have recently attracted attention also in the context of the AdS/CFT correspondence [1], which relates certain string theories on (d + 1)-dimensional anti de Sitter (AdS) backgrounds to conformal field theories in d dimensions. The most studied example which exhibits integrable structures is that of the superstring on AdS$_5\times$S$^5$ [2] and its dual N = 4 super Yang-Mills theory in four dimensions [3], see [4] for a review. On the string side the two-dimensional worldsheet theory is classically integrable, i.e. there is a Lax pair whose flatness condition is equivalent to the equations of motion of the σ model. The Lax pair depends on an auxiliary spectral parameter z, and its expansion around a fixed z_0 yields an infinite set of conserved charges, see [5] for a review. Integrability has provided the most stringent tests of AdS/CFT, culminating with the possibility of computing the spectrum of the quantum theory in the large N limit exactly [6][7][8][9]. Given this tremendous success it is natural to ask whether other theories which are not maximally (super)symmetric are still integrable. Integrability could then also be a guiding principle to discover new models which are interesting in their own right. The β deformation [10][11][12] or certain gravity duals of non-commutative gauge theories [13,14] are examples which are integrable but reduce to the maximally symmetric case only when a deformation parameter is sent to zero. These instances actually fall into a larger class that goes under the name of Yang-Baxter (YB) models [15][16][17][18], sometimes also called η deformations after the deformation parameter. A YB model is identified by an R matrix which solves the classical Yang-Baxter equation (CYBE), thus providing a rich set of solutions. Here we will not consider the case of "modified" CYBE. Each R generates a background that reduces to the undeformed model (e.g. AdS$_5\times$S$^5$) in the η → 0 limit. In this letter we explore another possibility; we deform the original σ model by adding a topological term (a closed B-field) and then apply non-abelian T duality (NATD) [19] with respect to a subgroup G̃ of the isometry group G. The special case when G̃ is abelian gives so-called TsT transformations [10][11][12]. We refer to the resulting actions as deformed T dual (DTD) models, since sending the deformation parameter ζ → 0 they reduce to NATD. DTD models are in one-to-one correspondence with 2-cocycles ω of the Lie algebra of G̃. The cocycle condition (3) guarantees that integrability is preserved, and plays the same role as the CYBE for YB models. The analogy goes even further.
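For orientation, the two algebraic conditions being compared here can be written in one common textbook convention (assumed for illustration; the letter's own equations and numbering are not reproduced):

```latex
% 2-cocycle condition for an antisymmetric bilinear form \omega on the
% dualised subalgebra \tilde{\mathfrak{g}} (one standard convention):
\omega([x,y],z) + \omega([y,z],x) + \omega([z,x],y) = 0 ,
\qquad x,y,z \in \tilde{\mathfrak{g}} .
% Homogeneous classical Yang-Baxter equation for a linear operator R
% acting on the relevant (sub)algebra:
[Rx,Ry] - R\big([Rx,y] + [x,Ry]\big) = 0 .
```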
When ω is invertible its inverse R = ω⁻¹ solves the CYBE, and each solution of the CYBE corresponds to an invertible 2-cocycle [20]. We use this identification to show that the action of YB can be recast in the form of DTD models, where the two deformation parameters are simply related by η = ζ⁻¹. As explained later, this translates into our language a recent conjecture by Hoare and Tseytlin [21]. We prove it by providing the explicit field redefinition that relates YB to DTD. The field redefinition is local, albeit in general nonlinear, and it allows us to interpolate between a certain σ model (ζ → ∞) and its NATD (ζ → 0). In the case when ω is degenerate, DTD is equivalent to a combination of YB deformation and NATD. We first construct the DTD of the Principal Chiral Model (PCM), since it provides a simpler setup where all the essential features already appear. Later we generalise it to the case of supercosets, which is more relevant to the study of deformations of superstrings. The supercoset case will be described in more detail elsewhere [22]. (1) Here J = df f⁻¹ is a right-invariant Maurer-Cartan form for f ∈ G, depending on fields that remain spectators under NATD. At the same time Ã ∈ g̃ and ν ∈ g̃* identify each of the two T-dual frames. If T_i are generators of g̃, a basis for the dual algebra g̃* is given by the corresponding dual generators. The original PCM is recovered upon integrating out ν, since F̃₊₋ = 0 implies that Ã is pure gauge, i.e. Ã = ḡ⁻¹dḡ for a ḡ ∈ G̃, and we get the desired action with g = ḡf. The NATD with respect to G̃, on the other hand, is obtained by integrating out Ã. This property is needed to have local G̃ invariance also for ζ ≠ 0, which ensures that # d.o.f. = dim(G) [38]. Equations of motion for Ã give Tr(δÃ∓ E±) = 0. This implies P̃^T E± = 0, where P̃ projects onto g̃ and P̃^T onto g̃*. We solve these equations by defining a suitable linear operator O, with O^(-T) the inverse of its transpose. Note that O⁻¹O = P̃, as the LHS is defined only on g̃. Evaluating S′ on the solution we get the DTD action. A second interpretation of DTD comes from integrating out ν rather than Ã from (2), which gives again Ã = ḡ⁻¹dḡ. The resulting action is a topological deformation of the PCM, since the cocycle condition implies that B = ζω(ḡ⁻¹dḡ, ḡ⁻¹dḡ) is closed. At the classical level adding this term has no effect, and in fact this picture of a deformation which is trivial in the dual frame is reminiscent of YB models: in some cases they correspond to TsT transformations [21,[23][24][25], which are just field redefinitions in a T-dual frame. Since DTD is a NATD of a topological deformation of the PCM, it is classically integrable, where NATD can be applied thanks to the closure of B. A third interpretation of DTD comes from the possibility of applying NATD to a centrally extended subalgebra. This idea first appeared in [21] and was the original motivation for considering the deformation (2). One can indeed replace Ã in (1) with Ã′ ∈ g̃_c.e. = g̃ ⊕ c, with c central; similarly ν′ ∈ g̃*_c.e.. We decompose Ã′ = Ã + Ã_c, ν′ = ν + ν_c with obvious notation, and extend the definition of the trace by Tr(c²) = 1, Tr(c g̃) = 0. Introducing a map ω whose components are ω_ab = −f_ab, we just notice that it is antisymmetric and satisfies the cocycle condition, a consequence of the Jacobi identity in g̃_c.e. projected on c. For some ω's DTD reduces to just NATD, i.e. the deformation parameter can be removed by a field redefinition. This happens when ω is a coboundary, i.e. ω(x, y) = f([x, y]) for some function f.
Therefore, nontrivial deformations are in one-to-one correspondence with 2-cocycles modulo coboundaries, i.e. with elements of the second cohomology group H²(g̃). The same holds also for non-trivial central extensions. In particular, there are none for semisimple g̃. Trivial deformations are equivalently described as adding an exact B-field to the PCM. INTEGRABILITY Above we argued that DTD models must be integrable, however it is instructive to show this explicitly to see how the cocycle condition enters and to write a Lax connection. We will show that the equations of motion formally resemble those of the PCM, for which a Lax pair is known. Suppose we consider a PCM with group element g = ḡf, with ḡ ∈ G̃, f ∈ G. We prefer to rewrite its on-shell equations in terms of the left and right currents A = ḡ⁻¹dḡ and J = df f⁻¹. To start, the flatness condition for A = g⁻¹dg is equivalent to F_J = 0, F_Ã = 0. Moreover, the equations of motion for the PCM, i.e. conservation of A, become C = 0 and Tr(δf f⁻¹ C) = 0, essentially as in the previous example of the PCM. However, in that case it is only thanks to the equations of motion for ḡ (i.e. Tr(ḡ⁻¹δḡ C) = 0) that one can claim C = 0. In analogy to the PCM, it is then clear that our task is to show that P̃^T C = 0 also for DTD. We generalise the argument of [26] for NATD of the PCM, and consider the equations E± = M⊥±, for some M⊥± for which P̃^T M⊥± = 0. They imply P̃^T E± = 0, i.e. they are equivalent to the solutions for A as in (5). They obviously imply also a further equation. The first line on its right-hand side is rewritten as [ν, F̃₊₋], and hence vanishes thanks to flatness of Ã. The second line vanishes upon projecting with P̃^T [39]. Finally, the last line vanishes thanks to the cocycle condition: using (3) it is rewritten as −ζω(F̃₊₋), which is again zero. Since also P̃^T C = 0 holds, we conclude that the whole set of on-shell equations for the DTD is formally equivalent to those of a PCM, provided the proper A is used. We can furthermore write the Lax pair, with z a spectral parameter. In fact, the flatness condition ∂₊L₋ − ∂₋L₊ + [L₊, L₋] = 0 is equivalent to the on-shell equations just derived. RELATION TO YANG-BAXTER We now prove that YB deformations for the PCM on the group G are equivalent to DTD. This was checked for many particular examples in [21]. YB models are identified by an R matrix solving the CYBE on the Lie algebra; R is invertible on a certain subalgebra and its inverse is a 2-cocycle [20]. As anticipated, we identify R = ω⁻¹, where ω is the operator defining the DTD model. Then R : g̃* → g̃. The two deformation parameters will be related by η = ζ⁻¹. We first split the group element parameterising the YB model as g = g̃f, where g̃ ∈ G̃ and f ∈ G. We identify f with the homonym appearing on the DTD side. Our proof of equivalence of the two actions will then consist in giving the field redefinition relating g̃ and ν. Since R is invertible, we can always take g̃ = exp(RX) for some X ∈ g̃*. One can check that taking X = ην + (η²/2) P̃^T[Rν, ν] + O(η³), the two actions are equivalent up to terms which are at least cubic in η. The generalisation to all orders can be obtained by requiring that the df df terms in the two actions match. This leads to the condition (1 − ηR_g̃)⁻¹ = 1 − O⁻¹, whose solution can be given explicitly. It follows that dν = (P̃^T − O) g̃⁻¹dg̃ or, equivalently, an expression in terms of A± = (1 ± ηR_g)⁻¹(g⁻¹∂±g) defined on the YB side.
Using these relations it is not hard to check that the two actions are the same up to the topological term ζω(g̃⁻¹dg̃, g̃⁻¹dg̃), which has no effect in the classical theory, as remarked earlier. We have proven the equivalence of DTD and YB when ω is non-degenerate. In the case of degenerate ω it is always possible to choose it in such a way that it is non-degenerate on a subalgebra ĝ ⊂ g̃ [27] and acts trivially on its complement ǧ in g̃. We interpret this as NATD on ǧ of the YB model corresponding to restricting ω to ĝ. DTD OF SUPERCOSETS The construction of DTD for supercosets follows the steps explained in the simpler case of the PCM. Here we only present the main results, whose derivation will be collected in [22]. We still denote by G the group of superisometries, e.g. PSU(2,2|4) for superstrings on AdS$_5\times$S$^5$, see [28] for a review. Its Lie superalgebra g admits a Z_4 decomposition, and we denote by P^(j) the projectors onto the four subspaces. They typically appear in the combination d = P^(1) + 2P^(2) − P^(3) or its transpose d^T. The absence of P^(0) in d is necessary for the local g^(0) invariance of the action, i.e. local Lorentz transformations. The action for DTD of supercosets is [40], where d_f ≡ Ad_f d Ad_f⁻¹. We keep the same definitions for J and ν, which however now take values in superalgebras. The model is integrable since we can write down a Lax pair, which is more conveniently expressed in terms of suitable currents. The flatness condition ∂₊L₋ − ∂₋L₊ + [L₊, L₋] = 0 is then equivalent to the on-shell equations of the DTD model. DTD of supercosets possesses kappa symmetry, and therefore corresponds to solutions of the generalised supergravity equations of [29,30]. The kappa symmetry transformations are written in terms of Ã^(2)_±, where κ^(j), j = 1, 3 are two local parameters of grading j. The action (15) is invariant under these transformations upon using the Virasoro constraints. If we were not fixing conformal gauge, the variation of the action would be compensated by the variation of the worldsheet metric. From these kappa symmetry transformations it is possible to extract the background fields of DTD [22]. The equivalence to YB for invertible ω's holds also in the case of DTD of supercosets. Remarkably, the field redefinition is still given by (13) as for the PCM. We have further verified that the kappa symmetry transformations of YB models [17] take the above form under this field redefinition, when we fix the G gauge to get δf f⁻¹ = d_f^T(δν). CONCLUSIONS We provided a unified picture of (non-abelian) T duality and homogeneous YB deformations as DTD of σ models. As pointed out in [21], an advantage of this formulation is that it can be realised at the path integral level, giving a better handle on the quantum theory. In fact, it also explains why the condition for one-loop Weyl invariance, i.e. unimodularity of g̃, is the same for both the YB model and NATD [25,31,32]. Despite the close relation, it is still worthwhile to view DTD as a distinct class of deformations. In fact, the field redefinition that relates it to YB is singular in the two undeformed limits; YB becomes degenerate when taking the undeformed (i.e. ζ → 0) limit of DTD, and vice versa. Therefore, the interpretation as a deformation applies to just one of the two models in the T-dual pair. It would be interesting to understand if there is any connection to the λ-model of [26,33,34], which is also a deformation of NATD and is related to the inhomogeneous YB deformation [15][16][17].
Although our motivation was integrability, such deformations can be applied also to non-integrable models, which provides an interesting and potentially useful way to generate new supergravity solutions.
2017-08-23T15:02:11.000Z
2016-09-30T00:00:00.000
{ "year": 2016, "sha1": "57306cd18cbb5b64f8275f0c202dd4df41187886", "oa_license": "CCBYNCND", "oa_url": "http://spiral.imperial.ac.uk/bitstream/10044/1/43717/4/PhysRevLett.117.251602.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "a804782f20176776c46349edaa029033ffd01854", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
233921173
pes2o/s2orc
v3-fos-license
Carbon Nanotubes Reinforced Natural Rubber Composites Several advanced methods have been introduced to disperse CNTs in the NR matrix. Various aspects highlighted in this chapter include the mixing processes, such as melt mixing and latex mixing methods, as well as the formation of functional groups on the surfaces of CNT using silane coupling agents (i.e., ex-situ and in-situ functionalization). Moreover, hybrid CNT fillers are beneficial for achieving better electrical conductivity of NR/CNT composites. These efforts are aimed at reducing the percolation threshold concentration in the NR composites for application as conducting composites based on an electrically insulating rubber matrix. Sensor applications are developed based on conducting NR composites. NR composites show a change of resistivity during elongation, termed piezoresistivity. The most commonly used rubber matrices, such as NR, ENR and IR, are mixed with a combination of CNT and CB fillers as a hybrid filler. The presence of linkages in the ENR composites results in the least loss of conductivity under external strain. It is found that the conductivity becomes stable after 3,000 cycles. This is found to be similar for the NR-CNT/CB composite, while only a few cycles are needed for IR-CNT/CB owing to the higher filler agglomeration and poor filler-rubber interactions. This is attributed to the polar chemical interactions between ENR and the functional groups on the surfaces of CNT/CB. Introduction Natural rubber (NR) is widely used in various industries owing to its excellent elasticity and mechanical properties. NR has typically been used in many industrial applications including tires, sports articles, sealing materials, medical gloves, rubber boots and dairy rubber items [1]. Moreover, the applications of NR can be extended by the addition of fillers, such as silica, clay, carbon black, and carbon nanotubes; that is, the properties of NR can be tuned by the filler, e.g., NR can be converted from an insulating material into a semi-conductive one. CNT has attracted wide interest as a conductive filler in NR composites, owing to the sp²-hybridized carbon atoms throughout its molecular structure. Its carbon-carbon bond angles can be mechanically distorted reversibly, and the delocalized electrons of the carbon atoms on CNT surfaces can act as free charge carriers. Thus, the special molecular structure of CNT provides it with high mechanical properties, excellent thermal conductivity, and outstanding electrical conductivity [2]. Furthermore, nanocomposites of NR and CNT provide a highly elastic material whose electrical response is sensitive to deformation, because the CNT networks in the NR matrix break easily under stretching and rebuild quickly upon release [2]. Therefore, such composites are well suited for application as smart sensors to monitor an applied external stimulus. That is, stretchable strain sensors based on NR/CNT composites are of interest for emerging applications such as human motion detection [3,4]. Several works have studied human motion detection, for example by adsorbing graphene woven fabrics onto polydimethylsiloxane (PDMS) and medical tape composites. The wearable strain sensor could detect human movements well, including hand clenching, pulse, expression change, blinking, phonation, and breathing [4]. Additionally, it has been observed that stretchable CNTs/carbon black (CB)/isoprene rubber (IR) composites could be used to detect human motions and emotional expressions [5].
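Before turning to how hybrid fillers change the percolation and sensing behavior, the basic piezoresistive quantities such strain sensors report can be illustrated with a toy calculation. The gauge factor definition GF = (ΔR/R0)/ε is the standard one, but all resistance and strain values below are invented and do not correspond to any measured composite.

```python
# Toy piezoresistive response of a conductive rubber strain sensor (invented numbers).
def gauge_factor(r0_ohm, r_ohm, strain):
    dr_rel = (r_ohm - r0_ohm) / r0_ohm   # relative resistance change dR/R0
    return dr_rel, dr_rel / strain       # (dR/R0, gauge factor)

R0 = 1.2e4  # unstrained resistance in ohm
for strain, resistance in [(0.10, 1.5e4), (0.30, 2.6e4), (0.50, 4.9e4)]:
    dr_rel, gf = gauge_factor(R0, resistance, strain)
    print(f"strain {strain:.0%}: dR/R0 = {dr_rel:.2f}, GF = {gf:.1f}")
```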
It was reported that the percolation threshold concentration of the composites increased significantly, while the optimal conductivity also increased, on adding conductive CB to CNT composites [5]. Furthermore, using CB also improves the sensitivity of the electrical resistivity to stress and strain, due to its spherical shape, which eases disconnection of the conductive particles under strain, while the long cylindrical CNT particles can maintain sliding contact. This potentially increases the piezoresistive responsiveness, combining the excellent conductivity of CNT with the strain sensitivity of the electrical pathways when using CNT-CB blended filler [6,7]. Furthermore, using NR, incorporation of CNT and CB hybrid filler can keep a very stable sensor performance, showing good mechanical properties, when the composites are dynamically elongated several times [6,8]. Also, NR composites are easy to process, cost-effective, and well known as hydrophobic biopolymers [9], so that humidity does not affect an NR sensor. This review article focuses on the preparation and electrical properties of NR/CNT composites; the methods to improve CNT dispersion are also discussed, as well as an overview of applying NR/CNT composites in motion sensor applications. Properties of CNTs Usually, CNT has extremely high tensile strength compared to other carbon materials. This excellent strength makes CNTs suitable for developing composite materials with higher reinforcing efficiency. It was also found that the incorporation of 0.5 phr MWCNT in an NR composite gave the best properties, increasing tensile strength by 61% and modulus by 75% [10]. Moreover, CNT exhibits excellent electronic properties, as detailed in Table 1 [11,12]. Modified natural rubber-carbon nanotube composites and their properties Natural rubber (NR) is a well-known biopolymer that consists of isoprene units linked together in the cis-1,4 configuration. NR has attracted tremendous scientific and industrial interest due to its unique molecular structure with superior properties such as high elasticity, flexibility and some level of biodegradability. However, NR has intrinsically poor aging, weathering and oil resistance as well as poor electrical conductivity, which limits its use in some applications. Nevertheless, the application of NR can be extended by the modification of NR molecules in various forms, such as epoxidized natural rubber (ENR) and maleated natural rubber (MNR). Various properties of NR products (i.e., modulus, viscosity and strength) can be improved by incorporating special types of fillers to form NR composites. Therefore, the incorporation of different types of fillers into NR as an elastomeric matrix, including carbon black, silica, clay, calcium carbonate, carbon fibers or carbon nanotubes, has been widely investigated. CNT filled NR composites have been prepared with various types of natural rubbers, especially unmodified natural rubber (NR), epoxidized natural rubber (ENR) and maleated natural rubber (MNR) [13]. Consequently, after the modification of the rubber, the electrical conductivity of the composites was found to be enhanced when compared to the unmodified NR-CNT (Figure 1). The percolation limit for CNTs in ENR-CNT and MNR-CNT composites is approximately 1 phr, while a value of 4 phr was found for the unmodified NR-CNT composite. The lower percolation value of the ENR-CNT and MNR-CNT composites than that of the unmodified NR-CNT composite proves the enhanced degree of CNT dispersion in the rubber matrix.
This is attributed to the occurrence of chemical interactions between the functional groups present in ENR or MNR molecules and the polar groups on CNT surfaces, as shown in Figure 2. This confirms that the polar nature of the rubber molecules (i.e., ENR and MNR) causes a significantly greater degree of CNT dispersion, and consequently the composites reach the percolation limit at a smaller CNT concentration. However, for all the composites studied, the maximum electrical conductivities are the same. This means that at CNT concentrations below the percolation threshold the electrical conductivity is dominated by the rubber matrix, whereas the CNT network plays the dominating role above the percolation limit. This correlates well with Earp et al., who indicated a comparable conductivity of the NR composites with and without CNT loading, since the CNT was covered by the insulating NR molecules. As the CNT concentration increases, percolating CNT networks are formed and CNT pathways disperse throughout the NR matrix. These CNT connections allow efficient electron transfer, which increases the electrical conductivity [14]. This is in agreement with Ma et al., who found that percolation theory can be applied to explain the electrically conducting behavior of composites consisting of conducting fillers and insulating matrices. It was found that the composite undergoes an insulator-to-conductor transition as the conducting filler content is gradually increased. The critical filler concentration is referred to as the percolation threshold, where the measured electrical conductivity of the composite sharply increases by several orders of magnitude, reflecting the formation of continuous electron paths or conducting networks [11]. Moreover, the critical CNT loading in the matrix affects the overall properties of CNT filled ENR nanocomposites [2]. Varying the CNT loading from 1 to 7 phr showed a critical loading at 3 phr, with significantly improved electrical conductivity. Dispersion technique of carbon nanotubes and their network formations on the properties of natural rubber-carbon nanotube composites Recently, CNT has become a promising filler for NR based composites due to its several unique properties. The perfect molecular structure of CNT, with its sp²-hybridized carbon structure, gives rise to extremely high mechanical properties, excellent thermal conductivity and outstanding electrical conductivity [11]. In addition, low density, high specific surface area and extremely high aspect ratio make CNT an interesting carbon filler, like graphene and other carbon fibers. In recent years, many researchers have attempted to incorporate CNT into rubber matrices (i.e., natural rubber [15][16][17] and synthetic rubber [18,19]) to utilize the intrinsic properties of CNT for enhancing the properties of rubber composites, particularly the electrical conductivity [13].

Figure 2. Possible chemical reactions between (A) ENR and CNT, and (B) MNR and CNT.

However, the property enhancement is not so easy and vigorous investigations are still ongoing. The major drawback of using CNTs as the reinforcing filler in NR is their agglomeration, since CNT has a very high aspect ratio and strong Van der Waals attractions between the particles. The small number of polar functional groups on the CNT surface is also a reason for their self-association behavior inside the NR matrix.
Altogether, this produces strong filler-filler interactions, which cause very poor dispersion of the CNTs. Weak physical and chemical interactions between CNT and the NR matrix generally lead to poor mechanical properties and electrical conductivity owing to the incompatibility between them [20]. Achieving a homogeneous dispersion of CNTs inside the rubber matrix is therefore an important challenge, addressed by optimizing the conditions used to prepare rubber-CNT composites. To obtain highly conductive CNT-based rubber composites, suitable preparation methods have also been widely investigated. Melt blending and latex-state mixing, using a two-roll mill and an internal mixer, are the most effective methods in terms of processability and the resulting nanocomposite properties [21]. The shearing force and mixing temperature during rubber processing reduce the NR viscosity, so the CNT can be dispersed and distributed in the NR matrix relatively easily. However, melt mixing generates considerable heat and is not an environmentally friendly operation, because the low bulk density of CNT makes it difficult to handle and disperse. Latex-based composites have therefore been introduced, and they show significantly improved properties relative to composites prepared by melt mixing. The lowest percolation threshold concentration, of approximately 0.5 phr of CNTs, was observed in the latex-CNT composites [22]. Electrical conductivity is one property that can be used to characterize the quality of filler dispersion in CNT composites. If a continuous network of electrically conductive filler is formed, the material undergoes a sudden transition from insulator to conductor, and as a result the electrical conductivity rises by several orders of magnitude. Figure 3 shows the effect of filler loading on the electrical conductivity of CNT-filled composites based on NR from air-dried sheet (ADS) and from latex. The latex-based composites exhibited a percolation threshold at a CNT concentration below 1 phr. This is due to the orientation of the nanotubes along specific paths around the rubber particles, which results in the formation of a segregated nanotube network [23,24], as confirmed by the TEM image (Figure 4). Figure 3. Electrical conductivity of composites as a function of CNT content [22].
Functionalization of carbon nanotubes and the properties of natural rubber-carbon nanotube composites
The major drawback of CNT as a reinforcing filler in NR is its agglomeration tendency, since the CNT fibers have a very high aspect ratio and strong van der Waals attraction between each other. This is compounded by the lack of polar functional groups on the CNT surfaces, which also leads to self-association behavior in the NR matrix. Generally, the filler-filler interactions are too strong compared with the filler-matrix interactions, causing very poor dispersion of the filler. The poor physical and chemical bonding between CNT and NR, i.e. their incompatible nature, generally leads to poor stability of the composites in terms of their mechanical properties and electrical conductivity [20]. Attaining a homogeneous dispersion of CNTs in the rubber matrix therefore remains a challenge, addressed by seeking the optimal conditions for the preparation of rubber-CNT composites. To improve the dispersion of CNTs in the NR matrix, a silane coupling agent has been applied, with the expectation that the filler-rubber interactions would be enhanced by reducing the van der Waals attractions between CNT particles.
Ex-situ functionalization of CNTs with silane has been introduced to improve CNT dispersion in rubber-CNT composites. However, this method is time-consuming and more expensive, and might not be appropriate for practical applications. Recent studies have therefore investigated in-situ functionalization of CNTs with silane. In rubber-silica composites [25,26], silane is added directly during the mixing of rubber and silica, and silanization of the silica particles can take place during mixing if the mixing conditions are suitable. This alternative process provides a short processing cycle compared with ex-situ silanization. Likewise, the functional groups on raw CNTs are readily available and sufficient to react with silane molecules during mixing [22], similar to the silica-filled composites. The mixing procedure is therefore crucial to improving the reinforcing efficiency of CNTs in rubber-CNT composites [27,28]. CNT-filled NR composites were prepared by melt mixing and latex mixing methods, with in-situ functionalization of the CNTs using a silane coupling agent, namely bis(triethoxysilylpropyl) tetrasulfide (TESPT), to improve the interactions between the CNT surfaces and the rubber molecules. Figure 5 shows the effect of CNT loading on the electrical conductivity of rubber-CNT composites with and without TESPT prepared by melt mixing and latex mixing. The lowest percolation threshold was observed in the composites prepared by latex mixing with in-situ functionalization. This is due to chemical interactions between the CNT surfaces, the silane and the NR molecules (Figure 6), which improved the CNT dispersion and reduced the electrical percolation threshold. As a result, percolation thresholds were observed at approximately 2 and 1 phr of CNTs in the NR-CNT-TESPT and L-CNT-TESPT composites, respectively. The same trend was obtained for NR and ENR vulcanizates reinforced with CNT, CCB and CNT/CCB hybrid filler: the physically bound rubber fraction decreased with the addition of TESPT, while the chemically bound amount increased significantly. It was also found that NR and ENR vulcanizates with CNT and CCB hybrid filler became superior conductive materials with a low dielectric constant after the addition of TESPT [29]. In addition, composites of CNT and ENR were prepared with in-situ functionalization of the CNT using two alternative silane coupling agents, bis(triethoxysilylpropyl) tetrasulfide (TESPT) and 3-aminopropyltriethoxysilane (APTES). The reactions of the ENR molecules with the functional groups present on the CNT surfaces, and with the silane molecules, are shown schematically in Figures 7 and 8, where (1) and (2) denote the molecular structures of APTES and TESPT [9]. Composites of ENR-CNT and ENR-CNT-TESPT were successfully prepared with a very low electrical percolation threshold at 1 phr CNT content, as shown in Figure 9. Furthermore, the highest electrical conductivity was achieved in the ENR-CNT-TESPT composite, owing to its higher cross-link density and near-optimal CNT dispersion. A morphological study of the ENR-CNT and ENR-CNT-TESPT composites confirmed the fine dispersion of CNTs in the ENR matrix, with only loosely agglomerated CNTs. Consequently, the ENR-CNT and ENR-CNT-TESPT composites exhibited improved tensile properties, higher cross-link density and higher electrical conductivity than the pristine ENR baseline [30].
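Before moving on to hybrid fillers, it is worth making explicit the scaling relation that underlies the percolation thresholds and the critical exponents (t values) discussed throughout this chapter. The form given below is the standard percolation-theory expression found in textbooks; the symbols are generic and are not quoted from the studies cited here:
\sigma_{dc} = \sigma_0 \,(f - f_c)^{t} \quad \text{for } f > f_c, \qquad \log\sigma_{dc} = \log\sigma_0 + t\,\log(f - f_c)
where f is the filler loading, f_c is the percolation threshold, \sigma_0 is a scaling constant and t is the critical exponent reflecting the dimensionality of the conducting network. Plotting \log\sigma_{dc} against \log(f - f_c) and taking the slope is the usual way in which t values, such as those reported later for the CNT and CNT-AgNP filled vulcanizates, are extracted.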
Hybrid carbon nanotubes filled natural rubber composites
Several attempts have been made to disperse CNTs in the NR matrix while avoiding their re-agglomeration. To overcome this limitation, secondary fillers have been added to the composites to generate new conductive hybrid filler pathways [32,33]. Improved conductivity was achieved by adding carbon black (CB) to CNT polymer composites [34][35][36][37]. The electrical conductivity of the composites was found to increase slightly with CB concentration when the CNT content lay below its percolation threshold. However, no significant increase in the electrical conductivity occurred above the percolation threshold concentration. This might be due to agglomeration of CB connected to the CNT surfaces, which impedes the conductivity of the hybrid ternary composites [35]. Thus, the CB can bridge CNT encapsulates and contribute new electron pathways only when both fillers are distributed highly homogeneously. In this regard, the extremely high viscosity of NR is essential for enhancing the conductivity, because it enables good dispersion of the fillers during mixing. No prior studies had been reported on NR vulcanizates assessing the electrical conductivity with the dual fillers CB and CNT. A hybrid epoxy-based nanocomposite was developed by reinforcing with CNT and CB, and it was found that the gaps between the carbon nanotubes were connected by the CB nanoparticles, causing the formation of conducting networks [32,34], as shown in Figure 10. The same behavior was observed in hybrids of expanded graphite (EG)-CNT filled cyanate ester (CE) [38], graphene nanoplatelet (GNP)-CNT/epoxy composites, titania nanoparticle (TiO2)-CNT/epoxy composites [39] and hybrids of Ag nanoparticles (Ag-NPs) with CNT [40].
Hybrid composites of carbon nanotubes and conductive carbon black reinforced natural rubber
Filled NR vulcanizates were prepared by incorporating carbon-based fillers, namely carbon nanotubes (CNT), conductive carbon black (CCB) and CNT/CCB hybrid filler [41]. Reinforcement with CNT and CCB was carried out carefully using a two-roll mill. The main aim was to generate an optimal state of filler dispersion in the NR matrix, in which CCB particles/aggregates bridge the CNT encapsulates. This improves the optimum electrical conductivity of the NR composites by enabling electron tunneling, and it is an appropriate way of arranging the fillers in the NR matrix. It was expected that the achievable conductivity would be synergistically better than that of rubber composites with solely CNT or CCB. The variation of conductivity (at f = 1 Hz) with filler volume fraction according to the Voet model is shown in Figure 11 (electrical conductivity of the NR vulcanizates filled with CCB, CNT and CNT/CCB hybrid filler at various filler loadings [41]). The increase in conductivity appears in a different number of steps for the NR vulcanizates filled with CCB (4 steps), CNT (3 steps) and CNT/CCB hybrid filler (2 steps). As already stated, no percolation threshold was observed for the CCB filler in the NR vulcanizate, even when the CCB loading was increased up to 15 phr. In Figure 11, the NR vulcanizate filled with CNT/CCB hybrid filler showed only a two-step increase in conductivity. It is also clear that the filled NR vulcanizate had linear conductivity between 1 and 10 phr of CCB in the CNT/CCB hybrid filler and saturated at 15 phr of CCB owing to strong agglomeration. This means that the
NR vulcanizate filled with 5 phr CNT is an Ohmic conductor over the range of 1 to 10 phr of added CCB. This confirms the synergistic effect of the CNT and CCB fillers in the NR vulcanizates, which improved and extended the conductivity of the NR composites by enhancing electron tunneling and reducing the gaps between CNT encapsulates.
Hybrid carbon nanotubes and silver nanoparticles in natural rubber composites
A conductive NR composite with CNT decorated with silver nanoparticles (AgNP) (Figure 12) was prepared via the latex mixing method to obtain a homogeneous dispersion of the filler [42]. Decorating the CNT surfaces with AgNP significantly enhanced the electrical conductivity and lowered the percolation threshold concentration of the NR composites compared with composites containing plain CNT filler. The percolation threshold concentrations of the CNT and CNT-AgNP filled NR composites (Figure 13) were found to be 3.64 and 2.92 phr, respectively. Combining AgNP with CNT as a hybrid filler thus decreased the percolation concentration and significantly increased the optimal conductivity of the NR composites. This is because the CNT-AgNP network formed in the NR matrix favors the flow of electrons compared with NR filled solely with CNT, so better electron movement by tunneling throughout the NR matrix was achieved. The degree of filler network formation in the rubber matrix can be estimated from the t values. In Figure 13(b) and (c), the t values of the CNT and CNT-AgNP filled NR vulcanizates are 2.34 and 1.86, respectively (Figure 13 shows the electrical conductivity of the CNT and CNT-AgNP filled NR vulcanizates at various loadings (a), together with plots of log σdc against log(f - fc) for the CNT-filled (b) and CNT-AgNP-filled (c) vulcanizates [42]). This indicates that the CNT-AgNP filled NR vulcanizates contain fully three-dimensional filler networks in the NR matrix, whereas the CNT-filled NR vulcanizates showed stronger CNT agglomeration, as indicated by the higher t value. It also confirms the end-to-end bridging of CNTs by AgNP in the NR matrix, which significantly improves the electrical conductivity and lowers the percolation threshold of the composite.
Hybrid carbon nanotubes and ionic liquid in natural rubber composites
To enhance the electrical conductivity of rubber composites, several methodologies have been exploited to improve the CNT dispersion in the rubber matrix. One prominent approach is the use of CNT mixed with an ionic liquid (IL) [43]. Typically, IL molecules contain both hydrophilic and hydrophobic parts arising from their inorganic and organic ions. The hydrophobic part is able to interact with CNT surfaces through cation-π interactions [44]. Also, some ionic liquids contain -C=C- groups in the alkyl chain, which can interact with diene rubbers via sulfur bridges in a sulfur vulcanization system [45]. The IL therefore forms a bridge between the CNT surfaces and the rubber matrix [45]. Imidazolium ionic liquids have been widely used in various types of polymer matrix [46][47][48][49]. It was found that the imidazolium groups play an important role in improving the ionic conductivity of acrylonitrile butadiene rubber (NBR) [48]. Furthermore, NBR/SiO2 in combination with an imidazolium ionic liquid exhibited good elastomeric properties, high tensile strength and high electrical conductivity [49]. In addition, the conductivity of CNT-filled NR composites was improved by the addition of the ionic liquid 1-butyl-3-methylimidazolium bis(trifluoromethylsulphonyl)imide (BMI) [50].
Figure 14 clearly shows that the addition of IL to NR slightly increased the electrical conductivity, but that the loading level of the IL (BMI) did not significantly affect the conductivity of the NR vulcanizate. This might be attributed to encapsulation of the IL (BMI) by the insulating NR, as the imidazolium IL could be more compatible with the hydrophobic rubber matrix [46]. This reduces the electrical conductivity of the NR/IL vulcanizate, which shows no noticeable percolation threshold. On the other hand, the NR/CNT and NR/CNT-IL composites showed percolation threshold concentrations of 3.64 and 2.92 phr, respectively. The NR/CNT-IL composite therefore exhibited comparatively higher electrical conductivity and a lower percolation threshold than the NR/CNT composite. This might be due to the plasticizing effect of the IL (BMI), which contributed to good dispersion of the CNT. The CNT forms three-dimensional networks in the NR matrix, assisted by the physical interactions between CNT particles. The plasticizing effect and physical interactions therefore facilitated CNT network formation and reduced the agglomeration of the CNT.
Piezoresistive carbon-based composites for sensor applications
Conductive composites based on an electrically insulating rubber matrix have attracted both scientific and industrial interest for several years [11]. The two main parts of such composites are (i) the insulating rubber matrix and (ii) the conducting filler. The filler needs to form conductive pathways in the matrix for carrying electrons, thereby making the composite a semiconductor or a conductor [19]. Such filler pathways are perturbed by breakage and rearrangement inside the matrix during deformation [33]. This change in resistivity during elongation is known as piezoresistivity, and it can be used in motion detector applications [51]. Hence, the sensitivity of a composite sensor is affected by the type of rubber matrix and by the type of filler, such as carbon black (CB), carbon fibre, graphene, graphite and carbon nanotubes (CNT) [34,35,52,53]. CNT-filled composites can serve in sensor applications owing to their excellent electrical conductivity, which responds to various external stimuli such as temperature, organic solvents, vapour, strain and damage [6]. Incorporation of a CNT and CB hybrid filler in NR gives very stable sensor performance along with good mechanical properties when the composites are dynamically elongated many times [6,8]. Three alternative rubber matrices, NR, epoxidized NR (ENR) and isoprene rubber (IR), have therefore been tested to clarify the effect of the rubber matrix in a strain sensor containing CNT and CB as a hybrid filler. An appropriate CNT:CB ratio was fixed at 1:1.5 to form the filler networks throughout the matrix. Melt blending, using an internal mixer and a two-roll mill, was selected as the mixing method to prepare the composites while optimizing the state of filler dispersion in the rubber matrix.
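Before describing the measurements, the short sketch below illustrates how the piezoresistive response introduced above is typically quantified, namely as a relative resistance change and an approximate gauge factor; the helper function name, the strain range and the resistance values are invented for illustration and are not data from the cited studies.

import numpy as np

def piezoresistive_response(strain, resistance):
    # Relative resistance change (R - R0)/R0 and an average gauge factor,
    # estimated as the least-squares slope of (R - R0)/R0 versus strain.
    strain = np.asarray(strain, dtype=float)
    resistance = np.asarray(resistance, dtype=float)
    r0 = resistance[0]  # initial, unstrained resistance
    rel_change = (resistance - r0) / r0
    gauge_factor = np.polyfit(strain, rel_change, 1)[0]
    return rel_change, gauge_factor

# Purely illustrative numbers: strain up to 50%, resistance in kilo-ohms
strain = np.linspace(0.0, 0.5, 6)
resistance = np.array([10.0, 11.5, 13.4, 15.8, 18.9, 22.7])
rel_change, gf = piezoresistive_response(strain, resistance)
print("relative resistance change:", np.round(rel_change, 3))
print("approximate gauge factor:", round(gf, 2))

In an actual test the resistance trace would come from the instrumented tensile setup described next, and the response is usually reported as ΔR/R0 against strain or cycle count rather than as a single averaged slope.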
Furthermore, the piezoresistivity (the strain sensitivity of the electrical resistance) was investigated in terms of the relative change in resistance, ΔR/R0, where ΔR is the change in resistance with strain and R0 is the initial resistance of the composite [6,54]. The measurement was performed with an instrumental setup for measuring electrical conductivity and resistivity during mechanical tensile strain, as shown in Figure 15 [6]. To assess the effects of long-term deformation on the composites, dynamic cyclic tensile testing at 50% strain for 50, 100, 500, 1000, 3000, 5000 and 10000 cycles was performed at an extension speed of 200 mm/min, and the resistance of the composites was recorded after each run. Figure 16 shows the electrical conductivity as a function of cycle count for the NR, ENR and IR composites with CNT/CB hybrid filler. The conductivity of these composites decreased with cycle count. The linkages in the ENR composites gave the least loss of conductivity, and the conductivity became stable after 3000 cycles (falling from 15.4 μS/cm to 0.044 μS/cm at 3000 cycles). Similar behaviour was seen for the NR-CNT/CB composites, whereas only a few cycles were needed for IR-CNT/CB owing to its higher filler agglomeration and poor filler-rubber interactions. This is attributed to the polar chemical interactions between ENR and the functional groups on the surfaces of the CNT/CB. Furthermore, the non-rubber components in the NR and ENR matrices improved the filler dispersion, as seen in the TEM images of Figure 16: the dispersion of CNT/CB particles/clusters was homogeneous in the ENR matrix, whereas, as expected, poor CNT/CB dispersion with strong filler-filler agglomeration was observed in the IR matrix. Moreover, an NR-CNT/CB composite (CNT/CB 0.5/9 phr) was developed as a sensor [6]; it was embedded in gloves to assess its efficiency and to give a visual idea of how the sensors function, as shown in Figure 17.
Conclusion
Carbon nanotubes (CNT) have been widely used as a reinforcing and conductive filler in NR. However, the dispersion of CNT in the NR matrix is limited and is always an important factor in enhancing the properties of NR composites. To obtain a high-quality conductive NR material, the formation of strong CNT networks in the insulating NR matrix is needed. The CNT networks act as electrically conducting pathways to provide electrical conductivity, but CNT typically has a high aspect ratio and strong van der Waals forces that give rise to a strong agglomeration tendency. It is very difficult to form conductive paths within the insulating rubber matrix, and this path formation between the conducting particles is a challenge for achieving proper electron tunneling. This chapter reports several advanced methods for dispersing CNTs in the NR matrix. Aspects highlighted include the mixing processes, such as melt mixing and latex mixing, and the formation of functional groups on the CNT surfaces using silane coupling agents (i.e., ex-situ and in-situ functionalization), as well as the use of hybrid CNT fillers, all of which are beneficial for achieving better electrical conductivity. These efforts are aimed at reducing the percolation threshold concentration in the NR composites.
As discussed in this review, the latex mixing technique produces a segregated nanotube network, which enhances the electrical conductivity of the composites. In addition, improving the interaction between the CNT and the NR matrix by using a silane coupling agent enhances the uniformity of CNT dispersion, which reduces the percolation threshold concentration compared with NR/CNT composites without a silane coupling agent. Moreover, the addition of secondary fillers to the composites generates new conductive hybrid filler pathways, and comparatively better conductivity is achieved by adding CB, AgNP or IL to the CNT polymer composites. Conducting composites based on an electrically insulating rubber matrix have also been developed for sensor applications; the change in resistivity during elongation, termed piezoresistivity, can be exploited in such sensors. Commonly used rubber matrices such as NR, ENR and IR have been mixed with a combination of CNT and CB as a hybrid filler. The presence of linkages in the ENR composites results in the least loss of conductivity under external strain, and the conductivity becomes stable after 3000 cycles. This is similar to the NR-CNT/CB composite, while only a few cycles are needed for IR-CNT/CB owing to its higher filler agglomeration and poor filler-rubber interactions. This is attributed to the polar chemical interactions between ENR and the functional groups on the surfaces of the CNT/CB. Furthermore, the non-rubber components in the NR and ENR matrices improve the filler dispersion. Finally, it can be concluded that composites of ENR with CNT/CB hybrid filler are beneficial for sensor applications, particularly for health monitoring, motion detectors and related products, because of their cost-effectiveness and ease of processing.
2021-05-08T00:03:03.509Z
2021-02-24T00:00:00.000
{ "year": 2021, "sha1": "2d01e631e5f197ee80ec1680bf02d057ed36ba04", "oa_license": "CCBY", "oa_url": "https://www.intechopen.com/citation-pdf-url/75396", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "0b264738f6e6c7786030ec2f86f2dd480a28183d", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Materials Science" ] }
222256294
pes2o/s2orc
v3-fos-license
An exercise intervention for people with serious mental illness: Findings from a qualitative data analysis using participatory theme elicitation Abstract Background People with severe mental illness (SMI) often have poorer physical health than the general population. A coproduced physical activity intervention to improve physical activity for people with SMI in Northern Ireland was evaluated by co‐researchers (researchers with lived experience of SMI) and academic researchers using a new approach to participatory data analysis called participatory theme elicitation (PTE). Objective Co‐researchers and academic researchers analysed the data from the pilot study using PTE. This paper aimed to compare these analyses to validate the findings of the study and explore the validity of the PTE method in the context of the evaluation of a physical activity intervention for individuals with SMI. Results There was alignment and congruence of some themes across groups. Important differences in the analyses across groups included the use of language, with the co‐researchers employing less academic and clinical language, and structure of themes generated, with the academic researchers including subthemes under some umbrella themes. Conclusions The comparison of analyses supports the validity of the PTE approach, which is a meaningful way of involving people with lived experience in research. PTE addresses the power imbalances that are often present in the analysis process and was found to be acceptable by co‐researchers and academic researchers alike. | INTRODUC TI ON Though physical activity has been found to have positive physical and mental health benefits for people with severe mental illness (SMI), 1 exercise interventions are seldom offered as a treatment option in mental health care. 2 Given that the estimated mortality gap for people with SMI is between 11 years 3 and 30 years 4 with a 20% reduction in life expectancy, 5,6 it is of public health importance to identify how to increase uptake and implementation of physical activity interventions among this population group. People with SMI are likely to face more barriers to physical activity than the general population. These include lack of motivation, which could be due to the mental health condition itself 7 or be a side-effect of medication they take (eg weight gain) which may make it more difficult for people with SMI to be physically active. [8][9][10] Stress, depression, disinterest in exercise, feeling unsafe or fear of injury were found to be barriers to engagement 11 and anxiety, including social anxiety 12 and anxiety around one's perceived exercise ability, 11 may also prevent some people with SMI from participating in physical activity. People with mental health problems are also more likely to develop physical health conditions, such as cardiovascular disease, obesity and diabetes, than the general population, and poor physical health and tiredness resulting from these comorbidities also serve as a barrier to participating in physical activity. 11,13 Other health risk behaviours (eg cigarette consumption, hazardous alcohol use) are also more common in people with SMI and can negatively impact on a person's ability to participate in physical activity. 14,15 As well as physical activity barriers, suggestions for effective physical activity interventions for people with SMI have been proposed, with a key recommendation being that the format of exercise is structured, supervised and delivered ideally by trained fitness professionals. 
16,17 A study of in-patient nursing staff views stated the most prescribed exercise is group-based, 3 times per week for 20 minutes. 18 Many studies focus on the level of intensity required to benefit people with SMI, stating that physical activity should be of a moderate-to-vigorous intensity to have a positive effect on mental and physical health symptoms. [19][20][21] In a recent scoping review, it was reported that activity can range from 30 minutes to 3 hours in order for it to reduce mental illness symptoms. 22 Despite this growing body of evidence exploring the impact of physical activity and mental health problems, there is a lack of research using coproduction to explore the facilitators and barriers to physical activity for people with SMI. This is notable because coproductive methods have the potential to enhance the quality and relevance of research. 23 This paper relates to the evaluation of a three-month physical activity programme for people with SMI in Northern Ireland that took place in 2019. A team of lived experienced researchers with an SMI were employed to work on the study. Through adopting a qualitative, participatory approach, the study aimed to increase knowledge on what works to engage people with SMI in sustained physical activity; explore current barriers and facilitators to physical activity; and provide practical solutions to inform delivery of services in Northern Ireland. Given the emphasis on the study's coproductive approach, it was important that people with lived experience of mental health problems participate in the analysis. The initial term that was used in the recruitment process to describe the researchers with lived experience of SMI was 'peer researchers'; however, this was later changed, as the peer researchers questioned the definition during the capacity building process of the PTE approach, and concluded that the term 'co-researchers' was more in line with the spirit of coproduction and equality; thus, this is the preferred term in this study. Both the co-researchers and the academic researchers participated in the analysis process, with the former able to draw on their lived experience when interpreting the data. 24,25 In practice, coproduction is more common at the initial planning and design stages of a study, whereas the analysis and report write-up stages tend to be dominated by academic researchers, 26,27 perhaps due to issues around time and cost. Tensions may also arise when experts-by-experience and/or professionals work together, 28 particularly during these later stages which are sometimes conceptualized as more formal and distinct to the earlier research stages. 29 Indeed, academic researchers may consider data analysis to be one of their key skills and therefore be reticent to share power with co-researchers at this stage. 30,31 To ensure that the project was fully coproduced, the data analysis approach adopted for this study was participatory theme elicitation (PTE), which has been effective in other research projects involving co-researchers. 32,33 Whilst many other participatory data analysis methods focus on coding, 34-36 this may result in an imbalanced analysis process that is disproportionately influenced by academic researchers. 37 PTE, which is a five-step process (that consists of data selection, capacity building, open sorting, data grouping, data analysis and interpretation), builds on common participatory methods centred around coding but uses network analysis techniques to facilitate generation of themes. 
Applying this to health and social sciences research, quotes from interviews or focus groups are included on the cards for co-researchers to sort. Sifting through large volumes of raw data for analysis can be complex and time-consuming, so it is often a barrier to participation; however, sorting through cards with a smaller number of key phrases or quotes is more manageable. 38 Following this, network analysis methods are used to determine sorting patterns across all researchers to inform the selection of the final themes. This innovative approach serves to address power imbalances and democratize the process, as the independence of the network analysis results reflecting everyone's independent sorting process serves to minimize the influence of the academic researchers in the analysis process. The key aim of the paper was to compare analyses from both co-researchers and academic researchers to validate the findings of the study and explore the validity of the PTE method in the context of the evaluation of a physical activity intervention for individuals with severe and enduring mental health problems. | ME THODS Ethical approval for the exercise intervention was granted by | Recruitment The four co-researchers (two males and two females between 30 and 59 years old) who participated in the two-day analysis workshop were recruited previously to work on the pilot study and had contributed to the programme design and data collection. All participants had obtained secondary-level education and two held university degrees. There were four academic researchers (two males and two females between 34 and 54). One was the key researcher on the project; another was a mental health researcher; another was a key partner on the project; and the remaining researcher acted in an advisory capacity on the project. All participants held university degrees and post-graduate qualifications, ranging from master's degrees to PhDs. PTE consists of five steps (presented below), which is discussed in greater depth by Best and colleagues (2017) and involves (a) data selection, (b) capacity building, (c) open sorting, (d) data grouping and (e) data analysis and interpretation. | Step 1: Data selection Two individuals from a partner organisation, who were not previously involved in data collection, independently reviewed the transcripts of six focus groups and selected standalone representative quotes. More than one person is needed to select quotes, and for practical purposes (relating to time and resources), two individuals participated in data selection. It is usually recommended that one of the quote selectors has lived experience. In this instance, one individual had academic research experience and was part of the Management Steering Group Committee of the study; the other had research expertise stemming from their lived experience and was head of the study's Advisory Group. The two individuals were chosen as they were not involved again in the process until Step 5 (Data Analysis and Interpretation) which is important to ensure that no single member of the research team is involved at every stage, thus limiting their ability to influence the process. After selecting quotes individually, the two then met to agree on a final list of 89 anonymized quotes (ID01-ID89). 
While higher than the number of quotes used in previous PTE studies, 32,33 it was felt that these quotes accurately maintained the essence of the focus group conversations and provided a natural opportunity to explore the acceptability of PTE with more quotes. | Step 2: capacity building Given that the co-researchers had already worked on the project, they were well-versed in the details of the physical activity intervention and evaluation. In other cases, co-researchers should receive an overview of the intervention and research project at this stage. Researchers were provided with a slide which contained instructions for the sorting task they would undertake that would later help them to develop themes (see Appendix A). | Step 3: open sorting In separate sessions, both groups of researchers were presented with information packs which included the 89 quotes, each individually cut out; and a consent form and blank sheets of paper to create labels. An evaluation form (see Appendix B) was also included for participants to complete at the end of the task. Researchers would typically also be provided with a project information sheet; however, in this case, the researchers were already aware of the background to the study as well as its aims and objectives due to their prior involvement in the study. They were, however, not aware specifically of the questions that preceded each of the responses selected as quotations as one of the main pieces of guidance for quote selection was that they could be easily understood as standalone statements. They were given instructions on a PowerPoint slide (see Appendix A) that remained visible throughout, which instructed them to sort the quotes into piles based on similarity, using whatever criteria they found relevant. 40 There had to be at least two piles and no 'miscellaneous' pile, and each researcher labelled and stapled their piles of quotes. The sorting process was undertaken independently so that researchers were not able to influence or be influenced by each other. Two facilitators were present to answer questions but did not offer opinions or interpretation. | Step 4: data grouping The piles of data of both researcher groups were inputted into an Excel spreadsheet by PB and CM (who were not involved in the original selection of quotes). The spreadsheet contained three | Step 5: data analysis and interpretation On day 2 of the workshop, both groups of researchers were presented with their respective network diagram and the sheet of quotes, which was used as a basis for discussion to generate themes in each group. Two facilitators (one for the co-researchers and another for the academic researchers) recorded themes and initial codes on flipchart paper. The two groups then came together to compare and agree the final list of themes, which are summarized in Appendix C. | Network analysis In the co-researchers' and academic researchers' network diagrams ( Figures 1 and 2), each coloured circle represents a node. Pairs of nodes are connected if at least one person placed the same two quotes in the same pile. The thicker the line, the greater the number of researchers that have grouped those quotes into the same pile. The different colours in the diagrams represent the different groupings found by the Louvain algorithm. 
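The network construction and grouping step described above can be reproduced with standard tools. The sketch below builds the quote co-occurrence network exactly as described (quotes are nodes; an edge's weight counts how many researchers placed the two quotes in the same pile) and then applies Louvain community detection. The example piles, the quote IDs shown and the use of NetworkX's louvain_communities function (available from NetworkX 2.8 onwards) are illustrative assumptions rather than the project's actual tooling.

import itertools
import networkx as nx
from networkx.algorithms.community import louvain_communities

# Each researcher's sort is a list of piles; each pile is a set of quote IDs.
# These piles are invented purely for illustration.
sorts = [
    [{"ID01", "ID04", "ID14"}, {"ID22", "ID67", "ID71"}],    # researcher 1
    [{"ID01", "ID14", "ID22"}, {"ID04"}, {"ID67", "ID71"}],  # researcher 2
    [{"ID01", "ID04"}, {"ID14", "ID22"}, {"ID67", "ID71"}],  # researcher 3
]

G = nx.Graph()
for piles in sorts:
    for pile in piles:
        for a, b in itertools.combinations(sorted(pile), 2):
            # Edge weight = number of researchers who co-sorted the two quotes
            if G.has_edge(a, b):
                G[a][b]["weight"] += 1
            else:
                G.add_edge(a, b, weight=1)

# Louvain community detection on the weighted co-occurrence network
groups = louvain_communities(G, weight="weight", seed=1)
for i, group in enumerate(groups, start=1):
    print(f"Group {i}: {sorted(group)}")

Quotes that fall into the same Louvain community correspond to the coloured groupings in the network diagrams, which the two groups of researchers then interpreted and labelled as themes in step 5.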
Both groups were informed that information regarding the strength of the relationships between quotes and groups could be gleaned visually from the network diagrams by looking at the proximity of nodes to each other and the thickness of the lines connecting them. | Theme generation The co-researchers' PTE analysis identified three unique groupings (Figure 1), whilst the academic researchers' analysis identified four (Figure 2). The quotes grouped by colour from the two network diagrams are listed in Appendix D. Figure 1 was presented to the co-researchers; Figure 2 was presented to the academic researchers. [Figure 1: Network diagram and sorting results for co-researchers; the quotes were categorized into three main groups, shown in three colours: group 1 pink, group 2 green and group 3 blue.] [Figure 2: Network diagram and sorting results for academic researchers; the quotes were categorized into four main groups, shown in four colours: group 1 red, group 2 pink, group 3 blue and group 4 green.] Both groups met separately on day 2 of the workshop to review the groupings identified in their diagram and ascertain themes. The facilitator of each session emphasized that they were not bound to the groupings in the diagram; the network diagram was simply there to stimulate discussion and perhaps add greater depth and detail to it. After this session, the co-researchers and academic researchers met, and the two facilitators presented the themes identified by the group they had just facilitated. This encouraged a wider discussion in which both groups reflected on similarities and differences across groups, and the final theme list was developed. For this paper, the list of themes from the two-day PTE session is of interest, though the final themes in the project report differ slightly, as the co-researchers and academic researchers met later to reflect on the draft 'Results' section of the report to ensure that it accurately conveyed the themes they had discussed. 39 | Comparison across co-researchers and academic researchers The analyses of the co-researchers and the academic researchers are presented in Tables 1 and 2, respectively (Table 1: themes generated by the co-researchers using the PTE approach). In the co-researchers' network diagram, there were three coloured groupings and thematic relationships were developed within all three (step 4); in the academic researchers' network diagram, there were four coloured groupings and thematic relationships were developed within all four. A comparison of the two groups' analyses is explored in this section. Both groups included a theme on the importance of the social aspect of the programme, with similar labels to convey this: 'social/group dynamic' and 'a social environment.' The convergence of these themes across both groups is evident from the quotes they selected to develop them, such as: 'But there's very much a community spirit already established with them, but it's carried into our groups. It makes our role with them so much easier.'
[ID4] Barriers to engaging in physical activity were also explored by both groups; however, both groups adopted a slightly different focus which There was also overlap between the co-researchers' barriers theme and the academic researchers' subtheme of 'barriers for participants more generally' which explored, among other things, other people's perceptions and feeling too old for certain types of exercises. The final barriers subtheme identified by the academic researchers related to the practical, process-related obstacles in the project. The co-researchers similarly explored this in a separate theme which they labelled 'Practical issues' which discussed not only barriers such as the challenge of getting GP approval but also more generally practical aspects of the programme, such as the ideal size of a group and the considerations that the trainers needed to take into account when delivering a programme to people with severe and enduring mental health problems. Whilst both groups had a theme on the trainers of the programme, the co-researchers focused on the trainers' approach, whereas the academic researchers explored the unique qualities and skills that trainers needed for this programme to be effective. Similarly, though the themes of 'personal responsibility' (co-researchers) and 'agency' Another key difference relates to the language used across groups, with the co-researchers' themes employing less academic and clinical language in comparison with the academic researchers. One theme identified by the academic researchers was 'agency', which is an academic psychological concept that refers to the degree with which an individual feels they have control over actions and consequences. 42 The corresponding theme identified by the co-researchers used the non-academic term of 'personal responsibility.' This difference in language might be an important consideration in how to engage with people with SMI to promote change. Another difference across the groups related to the structure of the theme list; whilst the co-researchers listed seven standalone themes, the academic researchers included subthemes under two umbrella themes. This is likely to be due to the academic researchers' experience of undertaking thematic analysis and their familiarity with various formats of categorising qualitative data. The thematic alignment and congruence of some themes across groups support the validity of the PTE approach. There are also important differences across groups, reflecting each group's different, equally valuable perspective, which suggests that PTE is a valuable methodology that could improve sensitivity to additional findings. At the end of the two-day workshop, the two groups of researchers convened to discuss and compare the themes each group had identified using PTE and ascertained a final list of themes. While the focus of this paper was to compare the themes generated across groups, the final list of themes from the workshop is included in Appendix C, which serves to demonstrate how PTE plays an integral role in the overall analysis approach. | D ISCUSS I ON This paper sought to strengthen the validity of the findings of an evaluation of a physical activity intervention for people with SMI in Northern Ireland; and explore the validity of the participatory theme elicitation by comparing analyses between lay and academic researchers. 
| Summary of key findings Though people with SMI are disproportionately excluded from physical activity, our qualitative findings highlight that they can enjoy physical and mental health benefits, including improved sleep and increased energy, from a physical activity intervention that is of a lower intensity than those often recommended in literature. 19,43 The data analysis identified facilitators to engaging people with SMI in physical activity, including focusing on the social com- | A co-researcher's reflection on PTE One of the co-researchers reflected on using PTE in the data analysis, highlighting their initial uncertainty around the methodology which soon paved the way to an understanding of its benefits and its innovative approach. The co-researcher also explored how their own lived experience of SMI impacted the project: 'As the study period of 12 weeks progressed, I was wondering, how is any data that we produce going to be collated and presented, unless we were going to take biometric measurements from everyone at the end? When I discovered we would be using the quotes from discussions with the participants I was still confused. The PTE was a revelation in that randomly generated qualitative data could be processed and presented as a quantitative display. More generally I felt that with having lived experience of mental ill health (which led me to gain knowledge working with AWARE and the Recovery Colleges as part of my own recovery) I was confident and empathetic in working with our study volunteers and participation in the study further enhanced my understanding of mental ill health in the community.' | The role of the facilitator The facilitator was important in encouraging discussion among the co-researchers and developing the final list of themes. In this study, the key facilitator was a lived experience researcher (CW) with extensive experience in facilitating workshops with people with mental health problems, which helped to reduce any power imbalance and promote joint decision making. He had also been involved in the project from its inception and selected with another researcher the 89 quotes (step 1). This familiarity with the project and the PTE methodology was invaluable in building a rapport with the co-researchers and relaying the methodology to them in an accessible and engaging way. | Acceptability of PTE Feedback on the methodology from the academic and co-researchers indicates that PTE is a valuable method of engaging people meaningfully in research. From a discussion at the end of the workshop between the academic researchers (excluding Paul Best who developed PTE) and the co-researchers, it was evident that both found it to be an acceptable participatory data analysis method. The academic researchers reflected that despite initial concerns, they found it to be a useful way of engaging people, with a key strength being the independence of the network analysis results which play a key role in levelling the power relations in participatory data analysis. Despite support for the acceptability of PTE, this approach should not be considered to be the 'only' way to explore data but can instead be viewed as a useful tool in a sequence of analysis. The co-researchers found PTE to be acceptable and 'enjoyable' and were able to pick up the process relatively quickly. One lay researcher stated that they were surprised by 'how much information was received by random snippets of people's thoughts,' emphasising the value in the approach and the insights it garnered. 
Regarding the number of quotes in the study, one lay researcher felt that this was 'just right', while another commented that 'you could easily handle more than 89.' The selected 89 quotes were described as 'easy to understand' and categorize. | Strengths and limitations Incorporating co-researchers in the analysis strengthened the project findings by providing an insider perspective. The same four co-researchers were present for both days of the workshop which allowed for consistency during the process. The two people who selected the 89 quotes from the transcripts were not involved in the initial data collection, thus minimising the potential for selection bias. Though the number of quotes (89) was found to be appropriate, it is still the case that a relatively small number of quotes must be selected to ensure that the process is manageable in the given time period. This selection process may therefore be an important limitation of the approach and create another potential source of bias. Despite the intentions of PTE to minimize researcher influence and democratize the analysis process, its requirement that co-researchers undergo training means that it is subject to the 'professionalisation paradox' 51 criticism, which refers to the fact that co-researchers will necessarily undergo some degree of professional socialisation, thus limiting the unique value of 'layness' on the research. It could be argued, however, that developing research methods, knowledge and skills does not necessarily diminish the contribution of lived experience. It is possible that people can be both experts-by-experience and experts-by-research method training. In addition, the training component of PTE is purposely short compared to other qualitative data analysis methods and, as the co-researchers in this study attested, relatively straightforward to pick up. | Conclusion The qualitative analysis highlighted that people with SMI can enjoy mental health and physical health benefits from engaging in physical activity, even at low levels of intensity. Key facilitators and barriers were identified, many of which mirrored findings in the literature; however, some unique insights were garnered, including the challenge in gaining GP approval for patients with SMI to engage in physical activity programmes. This paper compared themes generated by co-researchers and academic researchers in an evaluation of a physical activity programme, and found thematic alignment and congruence, which supports the validity of the PTE approach. PTE was also found to be a beneficial way of involving people with lived experience in research without having them go through a large amount of training. The approach also allowed for differences between the analyses conducted by the co-researchers and academic researchers to emerge, which may not occur in standard data analysis, and was able to balance out some of the bias towards the type of information that might be seen as important from each group. The methodology created some distance in the analysis process between the academic team which helps to minimize scientific researcher input/influence. Involving people with lived experience of mental health problems as co-researchers in the analysis provided a unique lived experience perspective that strengthened and enhanced the findings. The co-researchers found PTE to be acceptable even with a larger number of quotes than previous studies using the PTE approach. 
Future work using the PTE approach would benefit from trialling a larger number of quotes to determine at what point acceptability ceases. ACK N OWLED G EM ENTS The authors would like to acknowledge Ruth Neill and Victoria Zamperoni for their contribution to this paper. DATA AVA I L A B I L I T Y S TAT E M E N T The data that support the findings of this study are available in the supplementary material of this article. Participatory Theme Elicitation (PTE) Training Evaluation Form We would appreciate if you could take a few minutes to share your opinions with us so we can improve on this training. A PPE N D I X C Final list of themes agreed on by co-researchers and academic researchers at the end of the two-day workshop: But there's very much a community spirit already established with them, but it's carried into our groups. It makes our role with them so much easier. 14 It was already a pre-made social group, they did meet up once a week already… 2 Community inside and outside of the gym, to incorporate that mental health element into it 17 When we were out in the park, you know the way you see now these machines that are out in the park anyway, so I was able to get them using things like that. 86 And see that leisure centre, the people there, they are just fantastic. You are welcome in straight away and when you do the exercise you get coffee, biscuits and tea. The progression over the 12 weeks was phenomenal. 11 The background work and the background checks, which [name removed] and [name removed] had done, had all been put in place. So really that was the hard, mundane bit. The easy bit for us was delivering, because all that was done. 19 You're here and that's it, that's all you have to do, just come and we'll scale everything right back. 75 It just means good for your bones and your brain…it stimulates your mind and you get a laugh from it. 52 See I was able to keep up more with chair exercise than what I was with walking, so I was 61 It makes ya sleep better as well. I've been sleeping a lot better. 20 It's actually caring about the individual that you're training. 3 I think it's more the environment with mental health, than I think as it's what time of day are you taking them in at, is it busy, are you indoors or are you outdoors, who else is around, music wise is it quiet…..? 21 Educating, if that's the right word, but making them aware that the word exercise doesn't necessarily have to be associated in a gym environment, it could be something else. 12 There has to be a balance, very much in terms of physical and social aspect of it and the group size is pretty important 51 I haven't really tried, tested my weight but I have lost inches 66 I think it got a bit easier every week. The more we did, I mean, I was feeling a lot fitter in the last few weeks of it. 70 Although < name removed > wasn't keen on the resistance bands, he bought two dumb bells, he uses the dumb bells. 53 Them Fitbits are popular too, I would love one, so I would! 48 Whereas in the house you'd be saying do this or I'll just lie down or I'll just sit here but you knew you were coming here so you picked yourself up to come 15 That need that extra support, need that bit more encouragement, that needed more accountability and they did need more guidance on how to perform exercises. 88 There's a friend of mine's going to Slimming World and she lost eight pounds in the first week and it must have been all fluid because the second week, she gained two and she had stopped her diet. 
67 She was waiting on a letter to come from the doctor. They did say she was alright but the letter didn't come through in time before the course starting so she missed out. 22 GP referrals are so far behind over here, compared to the rest of the UK 71 There was < name removed > wanted to do it but she asked to late. She asked too late, she asked two or three weeks in and er she was told she couldn't do it because it would take too long to get clearance for it. 27 People think because you've got a weight problem, you can't exercise but that's a stereotype. 36 Too much, eh…, too much effort and exertion, I'd be exhausted, I just couldn't cope. 37 Exercise would probably have put me off too. 65 If you are going to go to the gym you feel selfconscious about your weight so it would deter ya from actually goin'. 63 I think, for it to have a proper long…you would need to be doing it over more than ten or twelve weeks. 21 Educating, if that's the right word, but making them aware that the word exercise doesn't necessarily have to be associated in a gym environment, it could be something else. 38 Walking groups is something I would be very keen on. 89 I think that as long as we keep this size. Becomes too big and it scares people. 12 There has to be a balance, very much in terms of physical and social aspect of it and the group size is pretty important 80 When you are in a group, you have more motivation because everybody around you is doing it. 4 But there's very much a community spirit already established with them, but it's carried into our groups. It makes our role with them so much easier. 2 Community inside and outside of the gym, to incorporate that mental health element into it 14 It was already a pre-made social group, they did meet up once a week already… Group 3: Blue 19 You're here and that's it, that's all you have to do, just come and we'll scale everything right back. 59 And the Recovery College, that's another door and for them to bringing it on to their prospectus 16 In my own example for me when I was getting into this, because of my own anxiety and depression stuff, I used the think, 'Jeepers do I look like a personal trainer?'. I had to question myself saying, what does a personal trainer look like? 49 Maybe we could meet and go for a wee walk or something ourselves? We could meet at the park or something even? That would be good, walk round the park 11 The background work and the background checks, which [name removed] and [name removed] had done, had all been put in place. So really that was the hard, mundane bit. The easy bit for us was delivering, because all that was done. 13 They all had completely different interests and backgrounds and that was challenging… 15 That need that extra support, need that bit more encouragement, that needed more accountability and they did need more guidance on how to perform exercises. 20 It's actually caring about the individual that you're training. We are actively involved in coaching clients that have a mental condition of some shape or form, whether it be an official diagnosis or not. 64 All the exercises we did with him, he, brought sheets with him and he photocopied them into a booklet for us. 42 I was very, very grateful for the pre-assessment because unknown to me I had high blood pressure and had no idea 70 Although < name removed > wasn't keen on the resistance bands, he bought two dumb bells, he uses the dumb bells. 
56 They're not turning around going like putting you down in front of anybody
52 See I was able to keep up more with chair exercise than what I was with walking, so I was
69 He got us to walk up the hill, you know, so we got a bit of, er, more benefit out of walking up a hill and he got us to kind of walk a bit further each week
18 Patience, empathy, that was one of the big things we kind of picked up on
54 We're just disappointed, whenever the weeks we couldn't come, you know?
86 And see that leisure centre, the people there, they are just fantastic. You are welcome in straight away and when you do the exercise you get coffee, biscuits and tea.
3 I think it's more the environment with mental health, than I think as it's what time of day are you taking them in at, is it busy, are you indoors or are you outdoors, who else is around, music wise is it quiet…..?
17 When we were out in the park, you know the way you see now these machines that are out in the park anyway, so I was able to get them using things like that.
23 Momentum has started now, so it needs to keep going.
Mechanical Stress Effects on 550 °C Hot Corrosion Propagation Rates in Precipitation Hardened Ni-Base Superalloys: CMSX-4, CM247LC DS and IN6203DS Combinations of temperature, stress and hot corrosion may cause environmentally-assisted cracking in precipitation-hardened Ni-base superalloys, which is little understood. This research aims to increase current understanding by investigating the effects of mechanical stress on the hot corrosion propagation rate during corrosion-fatigue testing of CMSX-4, CM247LC DS and IN6203DS. The parameters used during the tests included a high R-ratio, high frequency, and a temperature of 550 °C. The results showed CMSX-4 experienced a predictable increase in the hot corrosion rate, CM247LC DS also experienced increased rates, but no obvious trend was apparent; whilst IN6203DS showed no evidence of an increased rate. These different behaviours appear to be a result of an interaction between the mechanical stress and microstructural features, which include gamma-prime volume fractions in both the matrix and eutectic regions, along with the distribution of the eutectic structure. The different behaviours in the hot corrosion propagation rate subsequently affected the respective corrosion fatigue results, with both CMSX-4 and CM247LC DS experiencing fracture but with significantly more scatter involved in the CM247LC DS results. All IN6203DS corrosion-fatigue specimens completed the respective tests without fracture and showed no evidence of cracking. It, therefore, appears that precipitation hardened Ni-base superalloys, which are susceptible to environmentally-assisted cracking, also experience increased hot corrosion propagation rates. Introduction Critical rotor blades of an industrial gas turbine (IGT) experience high temperatures and stresses during routine operation. The rotor blades must therefore be manufactured from materials, such as precipitation-hardened Ni-base superalloys, that have favourable high temperature mechanical and oxidation properties. Improvements in these properties have been attained through development of the superalloys [1], allowing the IGT to operate at higher temperatures [2] and efficiencies resulting in reduced CO 2 emissions [3]. The investment casting technique is used in the manufacture of the rotor blades [3]. This is followed by a suitable heat treatment which ensures a microstructure consisting of minimal gamma-prime eutectic [1] and a homogeneous distribution of gamma-prime precipitates within a gamma matrix [1,[3][4][5]. The gamma has a disordered fcc unit cell [4,5] containing elements such as Cr, Co, Re and Mo [6] whilst the gamma-prime is a Ni 3 Al based phase [1] [3] that has an ordered L1 2 fcc unit cell [3][4][5]. Other elements associated with the gamma prime are Ta, Ti and Nb which may substitute with the Al to strengthen this phase [1]. In addition, the unit cell of the gamma-prime generally has a smaller lattice parameter to that of the gamma [7], which creates coherency strains [5]. Between the temperatures of 400 to 650 ˚C, the magnitude of these strains increases [6] due to the different thermal expansion rates of the gamma-prime and gamma [7]. Thus, the strengthening mechanisms of the precipitation-hardened Ni-base superalloys include both solid solution and coherency strengthening. The amount of alloy strengthening tends to increase with the gamma-prime precipitate volume fraction [5] and development of the chemistries [1,4] has resulted in gamma-prime precipitate volume fractions of up to 70% [3,6,7]. 
This has been achieved by increasing the proportion of elements within the superalloys that are associated with the gamma-prime [6]. The additions of Al and Cr to the chemistries also provide the superalloys with oxidation resistance. That is, a slow-growing protective scale of either alumina or chromia may form, initially alongside transient oxides of all alloying additions. Once the protective scale has formed a continuous layer though, the transient oxides will stop growing since the protective scale acts as a barrier. This prevents O 2 reacting any further with the elements responsible for the transient oxides [8]. If damaged by cracks or spallation during service [8] though, the protective scales may self-repair [9] providing enough of the respective element (Al or Cr) remains in the superalloy. This self-repair period is known as steady-state oxidation, and during this stage, the rates of oxidation attack are low [9]. Eventually, though, the respective element in the superalloy may become so depleted that the protective scale will no longer be able to self-repair. The superalloy will then enter a breakaway stage and increased rates of oxidation attack will be experienced [9]. In hot corrosion, the self-repair period of the protective scale is known as incubation and may provide limited protection against attack [8,9]. That is, when molten deposits such as Na 2 SO 4 accumulate on the surfaces, the protective scale may be damaged [2] by dissolution [8] and thus shorten the incubation stage as the protective scale is forced to self-repair. The Na 2 SO 4 deposits are in a molten state at a temperature of around 900 ˚C allowing so-called Type I hot corrosion to occur which, having passed through incubation, enters a propagation stage, and causes accelerated internal damage and sulphidation [2,9,10]. At temperatures around 700 °C, the Na 2 SO 4 deposits are in a solid-state but, providing SO 3 is present in the gas phase, may interact with a transient NiO on the surface of the superalloy and produce a molten Na 2 SO 4 :NiSO 4 system [2]. This is known as Type II hot corrosion and, during the propagation stage, causes accelerated attack of the superalloy which is characterised by pitting [2,9,10]. Hot corrosion may also occur with the deposits remaining in a solid-state [11][12][13]. An example of this was provided by Kistler et al. [14] After performing hot corrosion exposures on a Ni-base superalloy and repeated on 99.98% pure Ni. These exposures were conducted over various durations (up to 20 h) and used Na 2 SO 4 deposits (with a surface loading of 2.5 mg cm −2 ) in a gaseous environment of SO 2 in O 2 at a temperature of 550 °C. At this temperature, the deposits are not expected to melt when applied to the pure Ni since the Na 2 SO 4 :NiSO 4 system has the lowest melting point of 671 °C [14]. Similarly, for the superalloy, a Na 2 SO 4 :CoSO 4 system may form which has the lowest melting point of 565 °C [14] suggesting that this sulphate system is also in the solid-state at 550 °C. For both materials though, accelerated attack did occur. This was attributed to Ni diffusing through NiO and reacting with the deposits to form a metastable nanocrystalline mixed oxide of Na 2 Ni 2 SO 5 , the structure of which allowed rapid Ni 2+ fluxing. Similar hot corrosion exposures to those performed by Kistler et al. 
[14] have also been performed on CMSX-4, CM247LC DS and IN6203DS using a 4/1 molar ratio of Na 2 SO 4 :K 2 SO 4 deposits, with a surface loading of 0.5 mg cm −2 applied every 100 h, in a gaseous environment of 300 ppm SO 2 in air at 550 °C [15]. Those exposures indicated that a continuous layer of protective scale had failed to establish itself at the relatively low temperature. Despite this, each material still exhibited an incubation and propagation stage. Under these circumstances, the incubation stage may be associated with both the time for the mixed oxide to form and the short-circuit diffusion paths associated with the interfaces of surface-connected refractory metal carbides. Surface roughness analysis indicated the incubation stage took approximately 400, 500 and 200 h for the CMSX-4, CM247LC DS and IN6203DS materials, respectively. A temperature of 550 °C may therefore allow solid-state diffusion hot corrosion to occur and cause high stresses, which are associated with the increased coherency strains, in the precipitation hardened Ni-base superalloys. These conditions may interact and cause environmentally-assisted cracking (EAC) issues, such as corrosion-fatigue (CF) or stress corrosion cracking (SCC) [16][17][18], when the mechanical stress associated with the CF or SCC tests is applied. This was investigated by performing a series of stress corrosion exposures on CMSX-4, CM247LC DS and IN6203DS over the temperature range of 450 to 550 °C using the same deposits (4/1 molar ratio of Na 2 SO 4 :K 2 SO 4 deposits), surface loading (0.5 mg cm −2 applied every 100 h) and gaseous environment (300 ppm SO 2 in air) as stated above. The results indicated a correlation exists between the severity of SCC experienced and the gamma-prime precipitate volume fraction [19]. This led to a new crack initiation/propagation mechanism being proposed that was based on a summation of stresses (which included: applied mechanical stress, surface stress raisers, coherency stresses and stresses associated with interstitial S atoms [20] distorting the lattice structure of the gamma-prime phase), accelerating the hot corrosion attack in a direction that was normal to the applied mechanical stress. The aim of this research was to provide evidence in support of the proposed new crack initiation/propagation mechanism. That is, does the application of mechanical stress affect the 550 ˚C hot corrosion propagation rate on CMSX-4, CM247LC DS and IN6203DS or not? This was investigated by comparing the respective hot corrosion propagation rates in specimens without the application of mechanical stress [15] against those that were subjected to CF testing using high frequency and high R-ratio parameters. Table 1 shows the measured chemistries of the precipitation-hardened CMSX-4, CM247LC DS and IN6203DS Ni-base superalloys that were supplied in the form of bars (5/8″ diameters by 9″ long) that had been cast in the < 001 > orientation. The values shown were obtained from the respective material certificates that generally quoted X-ray florescence results (with the exception of C-which was quoted from LECO analysis results, and Ni-which was stated on the certificates as 'Balance' and so has subsequently been arithmetically calculated). Materials and Test Specimens The heat treatments of the different materials were performed in accordance with the respective company standards of the sponsor (Siemens Energy Industrial Turbomachinery Limited) of this research. 
The details of the heat treatments are not included in this paper to protect the sponsors' heat treatment methodology. The materials were single-point turned into plain fatigue specimens ( Fig. 1) for the CF tests and 6 mm diameter cylindrical specimens (which represented the gauge diameter of the fatigue specimens) that were 10 mm long for hot corrosion exposures without the application of mechanical stress. Hot Corrosion Exposures Isopropyl alcohol was used to clean the surfaces of the fatigue and cylindrical specimens in an ultrasonic tank. After this, the specimens were thermally exposed to a hot corrosion environment using the deposit recoat practise [9] every 100 h. The deposits used were a 4/1 molar ratio of Na 2 SO 4 /K 2 SO 4 that were applied with a surface loading of 0.5 mg cm −2 on the outer surface of the cylindrical specimens, and the surface between the threads of the fatigue specimens. Thermal exposures were carried out at a temperature of 550 °C in a horizontal tube furnace containing an atmosphere of 300 ppm SO 2 in air that had a flow rate of 50 SCCM. The gas was vented through bubblers using a solution of NaOH as a scrubber. One cylindrical specimen for each material was exposed for durations of 0, 100, 200, 300, 400, 500, 600, 700 and 800 h. Subsequent surface roughness evaluations [15] indicated each material had entered the propagation stage of hot corrosion after approximately: 400 h for CMSX-4, 500 h for CM247LC DS and 200 h for IN6203DS. All fatigue specimens were exposed for a total of 500 h before any CF testing commenced. This ensured that each material was plausibly in the propagation stage of hot corrosion (verified by the surface roughness evaluations of the cylindrical specimens) and hence would accelerate the CF tests (since the materials had passed through the hot corrosion incubation stage before the start of the CF tests). CF Tests The CF tests were conducted in load control on a calibrated axial servo-hydraulic load frame that was fitted with a gas chamber. The pre-corroded fatigue specimens were loaded within a pull-bar assembly (within the gas chamber) of the load frame with a gas sheath around the specimen. This gas sheath was heated by an induction coil and thus radiated heat to the fatigue specimen. It also ensured a pre-heated corrosive gas flowed over the surfaces of the fatigue specimen. A type K thermocouple was attached to the fatigue specimen to confirm the required test temperature of 550 °C was achieved. The corrosive gas used was 300 ppm SO 2 in air that was pre-heated to the test temperature of 550 °C and flowed at a rate of 25 SCCM. The gas was subsequently scrubbed using a solution of NaOH through bubblers. The fatigue parameters used included a high R-ratio (R = 12/13 which gave high mean stress and thus required relatively small stress amplitudes to prevent the maximum stress exceeding the ultimate tensile strength) and a high frequency. The value of the frequency though is not given in this paper to protect the cycles to failure data for the sponsor of the research (Siemens Energy Industrial Turbomachinery Limited). However, the duration (h) is given for each CF test. If the fatigue specimen did not fail within 100 h of CF testing, the test was halted and once cooled, the specimen was removed from the load frame. 
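As a brief aside before the re-coating step is described, the mass split implied by the 4/1 Na 2 SO 4 /K 2 SO 4 molar ratio and a given surface loading can be worked through with a short calculation. The sketch below is illustrative only: the molar masses are standard textbook values, and the deposit area is assumed to be the lateral surface of the 6 mm diameter by 10 mm long cylindrical specimens, which is not stated explicitly in the test description.

```python
import math

# Standard molar masses (g/mol); textbook values, not taken from this paper.
M_NA2SO4 = 142.04
M_K2SO4 = 174.26

def deposit_masses(loading_mg_cm2, area_cm2, molar_ratio=4.0):
    """Split a total deposit loading into Na2SO4 and K2SO4 masses (mg)
    for a Na2SO4:K2SO4 molar ratio of `molar_ratio`:1."""
    total_mg = loading_mg_cm2 * area_cm2
    # mass of Na2SO4 per unit mass of K2SO4 at the given molar ratio
    mass_ratio = (molar_ratio * M_NA2SO4) / M_K2SO4
    k2so4_mg = total_mg / (1.0 + mass_ratio)
    na2so4_mg = total_mg - k2so4_mg
    return na2so4_mg, k2so4_mg

# Assumed lateral surface of a 6 mm diameter x 10 mm long cylinder (~1.88 cm^2).
area = math.pi * 0.6 * 1.0

# Initial loading and the smaller re-coat loading applied between CF test blocks (mg/cm^2).
for loading in (0.5, 0.114):
    na, k = deposit_masses(loading, area)
    print(f"loading {loading} mg/cm^2 -> Na2SO4 {na:.3f} mg, K2SO4 {k:.3f} mg")
```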
The fatigue specimen would then have a further application of the Na 2 SO 4 /K 2 SO 4 deposits (0.114 mg cm −2 ) sprayed evenly on the surface between the threads, before being re-loaded into the load frame for further CF testing. If a fatigue specimen had not failed within three periods of 100 h of testing, the specimen was considered to have achieved a runout and no further CF testing was performed on that specimen.

Microscopy Techniques

A preliminary visual examination of the CF fracture surfaces was carried out using a Keyence VHX6000 optical microscope. This was followed by a more detailed visual examination using a Jeol JSM-6460 scanning electron microscope (SEM). On completion of the visual examinations, longitudinal and cross-sections (remote from the CF fracture surfaces) of the gauge length were taken, polished to a one-micron finish, and examined on the same SEM in the unetched and electrolytically etched state. The etchant used was a solution of 40 ml of glycerol, 20 ml of hydrofluoric acid and 340 ml of water. The longitudinal sections that contained cracks remote from the fracture were used to characterise the hot corrosion products within the cracks. This was achieved using Inca software to perform energy dispersive X-ray (EDX) mapping. Owing to overlapping Mo and S X-ray energy peaks, this technique is unable to differentiate the two elements. However, due to the S content within the sprayed Na 2 SO 4 /K 2 SO 4 deposits and the relatively small Mo content in the chemistries of the materials, any Mo/S indication was assumed to be S. Metallographic cross-sections were prepared for the determination of the hot corrosion propagation rates since a true depth of attack could be measured (as opposed to the longitudinal sections, where a geometric effect may obscure the true depth). Eight back-scattered images (separated by an angle of approximately 45°) from an etched cross-section of each specimen were obtained. These images were subsequently used to estimate the maximum depth of hot corrosion attack that each fatigue specimen had experienced, which allowed the hot corrosion propagation rate to be calculated as detailed within the Data analysis section. Secondary electron images were taken from three random areas of polished and etched cross-sections of two fatigue specimens from each material. These images were used to perform image analysis, using Olympus Stream motion software, to evaluate the gamma-prime volume fraction associated with the eutectic. The size and volume fraction of the gamma-prime precipitates had previously been measured [19] and are repeated in Table 2 for reference purposes. All SEM work was conducted using an accelerating voltage of 20 kV.

Data Analysis

Cylindrical specimens were used to evaluate the evolution of surface roughness during exposure to the hot corrosion conditions [15]. The data obtained were gathered during a metrology exercise on the polished cross-sections and enabled equations to be derived which provided an estimation of the maximum penetration rate of hot corrosion during the propagation stage of attack. The resulting fits (Eqs. 1-3) are linear in exposure time, i.e. of the form y = a + bt, where y is the depth of attack (µm), t is the exposure time to hot corrosion (h), and b is the maximum propagation-rate component quoted for each material in the Results. Equation 1 relates to CMSX-4 material and is applicable between 400 and 800 h (the hot corrosion propagation period under the exposure conditions tested). Similarly, Eq. 2 relates to CM247LC DS material over the period 500 to 800 h, and Eq. 3 relates to IN6203DS material, which is valid between 200 and 800 h.
Equations 1-3 were therefore used to predict the maximum depth of hot corrosion attack the fatigue specimens had experienced during the 500 h of hot corrosion exposure. This represented the estimated maximum depth of attack at the start of the CF tests. The eight back-scattered SEM images per fatigue specimen were used to measure the maximum observable depth of hot corrosion attack each fatigue specimen had experienced at the end of the respective CF test. This allowed calculations to be made (based on the equation of a straight line, the predicted depth of hot corrosion attack at the start of the CF test, and the duration of the CF test) which estimated the rate of hot corrosion during each CF test. These were subsequently compared with the respective rate component in Eqs. 1-3 to determine if the application of mechanical stress had increased the hot corrosion propagation rate of attack.

Results and Discussion

Of the three materials subjected to the CF testing, only CMSX-4 and CM247LC DS experienced fractures. The respective fracture surfaces of these two materials indicated similar features in that multiple cracking had occurred which had propagated normal to the direction of mechanical stressing on the < 100 > planes. The exposed crack surfaces revealed beach marks (Fig. 2) suggesting the cracking followed a start-stop-start process, but no evidence of fatigue striations could be found during the SEM examination.

Fig. 2 Optical microscopy images showing beach marks observed on the exposed CF crack faces after fracture occurred in a CMSX-4 (CF test completed 73 h with mean stress of 820 MPa) and b CM247LC DS (CF test completed 99 h with mean stress of 675 MPa)

EDX mapping, on the unetched longitudinal sections, of the hot corrosion products within the CMSX-4 and CM247LC DS cracks that were remote from the final fracture, revealed an O [21] and S [20] embrittled phase (Fig. 3 shows an example from CMSX-4). Despite these similarities, a comparison of scatter plots showing the effect of mean stress against the duration of testing (Fig. 4) indicated that all three materials displayed different CF behaviours. All the CMSX-4 and CM247LC DS specimens had experienced cracking, which ultimately caused fractures in all but one of these superalloy specimens, with significantly more scatter in the CM247LC DS results than in the CMSX-4 results. By contrast, the IN6203DS specimens successfully completed the 300 h of CF testing without showing any evidence of cracking. (For each material, the range of mechanical stress levels that were applied during the respective CF tests included those which represented elastic stressing and others which represented plastic stressing. The identification of which CF test was performed with elastic or plastic stress though is not given in this paper to protect the sponsors' design data.) To understand the different CF behaviours of the three materials, the polished and etched cross-sections of the specimens were used to investigate the effect of mechanical stress on the hot corrosion propagation rate. The back-scattered images obtained indicated the scale present consisted of an external and internal component (Fig. 5), which were defined by the absence or presence of microstructural features such as the gamma-prime precipitates and/or carbides.
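Before continuing with the metallographic observations, the rate estimation described in the Data analysis section can be illustrated with a minimal sketch. The numerical inputs below are hypothetical placeholders: the predicted depth at the start of the CF test would in practice come from the relevant fitted equation (Eqs. 1-3), and the measured end-of-test depth from the thickest intact internal scale in the back-scattered images.

```python
def cf_propagation_rate(depth_start_um, depth_end_um, cf_duration_h):
    """Straight-line estimate of the hot corrosion propagation rate (um/h)
    during a corrosion-fatigue test, as described in the Data analysis section."""
    return (depth_end_um - depth_start_um) / cf_duration_h

# Hypothetical example values (not measurements reported in this work):
depth_at_start = 6.0   # um, predicted from the relevant equation at 500 h pre-exposure
depth_at_end = 13.8    # um, thickest intact internal scale measured after the CF test
duration = 73.0        # h, duration of the CF test

rate = cf_propagation_rate(depth_at_start, depth_at_end, duration)
unstressed_rate = 0.0105  # um/h, rate component quoted for CMSX-4 (Eq. 1)
print(f"CF rate = {rate:.3f} um/h vs unstressed rate {unstressed_rate} um/h")
print("increased" if rate > unstressed_rate else "not increased")
```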
Returning to the back-scattered images, these also indicated the scale to be in various states of damage on each material, which was most likely caused by the high frequency of the CF tests. The damage observed included breakage/cracking of the internal scale in some areas, suggesting spallation may have occurred, whilst other areas showed the internal scale to have minimal damage and to be largely intact (as indicated in Fig. 5). The maximum observed depth of hot corrosion each fatigue specimen had experienced was therefore based on measuring the thickest intact internal scale from the respective back-scattered images. This allowed the mechanical stress influenced hot corrosion propagation rates, experienced during the CF testing, to be calculated as described in the Data analysis section. Figure 6 shows a scatter plot for each material illustrating the effect that mechanical stress has on the calculated hot corrosion propagation rates. Also included in these plots is the respective hot corrosion propagation rate from the cylindrical specimens (obtained from the rate component in Eqs. 1-3), which were exposed without any mechanical stress being applied. No significant evidence of an increased hot corrosion propagation rate was found due to the mechanical stress applied during the CF testing of the IN6203DS material (rates of 0.012 and 0.016 µm h −1 were calculated, which were plausibly equal to the rate component of 0.0142 µm h −1 in Eq. 3). All fatigue specimens manufactured from CMSX-4 and CM247LC DS materials though did experience an increase in the hot corrosion propagation rate during the CF testing when compared with the respective rate components in Eqs. 1 and 2. In the case of CMSX-4, a predictable increasing trend in the calculated rate (rising from 0.028 to 0.107 µm h −1 ) was associated with the mechanical stress, which compared with the rate component of 0.0105 µm h −1 in Eq. 1. For CM247LC DS, the rates associated with the CF tests ranged from 0.024 to 0.311 µm h −1 , which were all greater than the rate of 0.000935 µm h −1 quoted in Eq. 2. However, no obvious trend was apparent for the mechanical stress-influenced CF hot corrosion propagation rates for CM247LC DS, and this appeared to be the cause of the greater degree of scatter in the CF results of this material. After grouping the CF calculated hot corrosion propagation rates of all three materials into one group, a Spearman correlation test [22] was performed (using Minitab version 19 software) against the duration of the CF tests (Fig. 7a). This form of correlation test assesses monotonic relationships between two variables. An output is then provided which is used to determine if a non-linear relationship plausibly exists between the two variables.

Fig. 7 Spearman correlation analysis, performed with a 95% confidence interval, between a the hot corrosion propagation rate and the duration of the CF tests, and b the hot corrosion propagation rate and the mean stress the CF tests were performed at

The correlation coefficient obtained (− 0.854 with a 95% confidence interval of − 0.966 to − 0.469) indicated a strong relationship, suggesting the hot corrosion propagation rate is an influential factor in the CF life of the three materials tested. However, it does not explain the cause of the increased rates. Another Spearman correlation test was performed (Fig. 7b), but this time against the CF mean mechanical stress and the CF hot corrosion propagation rates.
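These rank-correlation analyses were run in Minitab; for readers wishing to reproduce the same style of analysis with open tools, a minimal Python/SciPy sketch with hypothetical placeholder arrays is shown below. The bootstrap interval is only one possible way of obtaining a confidence interval and may not match Minitab's method.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical placeholder data: CF test duration (h) and calculated rate (um/h).
duration_h = np.array([25.0, 48.0, 73.0, 99.0, 150.0, 210.0, 280.0, 300.0])
rate_um_h = np.array([0.30, 0.21, 0.11, 0.09, 0.05, 0.03, 0.02, 0.015])

rho, p_value = spearmanr(duration_h, rate_um_h)

# Simple bootstrap 95% interval for the Spearman coefficient.
boot = []
n = len(duration_h)
for _ in range(2000):
    idx = rng.integers(0, n, n)  # resample indices with replacement
    boot.append(spearmanr(duration_h[idx], rate_um_h[idx])[0])
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"Spearman rho = {rho:.3f} (p = {p_value:.4f}), 95% CI approx [{lo:.3f}, {hi:.3f}]")
```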
The correlation coefficient obtained (− 0.198 with a 95% confidence interval of − 0.697 to + 0.429) indicated little evidence for a relationship between these two variables. Although this may suggest that mechanical stress may not be an influential factor in the CF tests, it does not preclude it from interacting with another factor such as the microstructure. Table 2 indicates the gamma-prime precipitate volume fraction is correlated with the ranked severity of cracking in SCC tests. The gamma-prime, therefore, appears to be a susceptible phase to EAC and this suggests the gamma-prime associated with the precipitates and/or eutectic features may be interacting with the mechanical stress. The exact mechanism involved in the interaction is presently unknown. However, one possibility may concern the size of the gamma-prime interstitial sites and the relative size of interstitial species such as S. In the unstressed state, it may be that the interstitial sites are too small for S to gain access. In a mechanically stressed state though, the interstitial sites may have been distorted to such an extent that S can access these sites. This would then further distort the gamma-prime lattice and increase the summation of stresses associated with the gamma-prime. The gamma lattice would also experience distortion under the application of mechanical stress. However, since the gamma tends to have a larger lattice parameter than that of the gamma-prime, the effect of the interstitial S further distorting this lattice structure may not be as great as that for the gamma-prime. This would therefore ensure that the gamma-prime is the susceptible phase. Significant research is required though, before any evidence (for or against) this possibility is obtained. The interaction between the three factors (mechanical stress, gamma-prime precipitate volume fraction and gamma-prime eutectic volume fraction) may be more influential than the individual factors alone. Evidence of this possible interaction can be seen in Fig. 5b, which shows a localised region, associated with the gammaprime of a surface connected eutectic region, that has experienced greater depths of hot corrosion attack than the adjacent areas. To determine the gamma-prime eutectic volume fraction of the CMSX-4, CM247LC DS and IN6203DS materials, image analysis was performed (using Olympus Stream motion software) on three random areas of the etched cross-sections of two CF tested specimens per material. Table 3 shows the respective volume fraction for each area. For IN6203DS, no evidence of gamma-prime eutectic could be found in either specimen. In the case of CMSX-4, one specimen had gamma-prime eutectic volume fractions of 1.2, 0.9 and 1.5% (averaging out at a mean value of 1.2%) whilst the other specimen had values of 3.0, 2.1 and 3.3% (mean average of 2.8%). For CM247LC DS, values of 8.0, 6.3 and 6.1% (mean average of 6.8%) were recorded in one specimen and 7.7, 5.9 and 6.8% (mean average of 6.8%) were recorded in the other specimen. The overall mean average of the gamma-prime volume fraction within the eutectic features of the two specimens per material that contained the eutectic features was 2.0% for CMSX-4 and 6.8% for CM247LC DS. This indicates that the gamma-prime eutectic volume fraction was more than three times greater in CM247LC DS than that in CMSX-4. The distribution of the gamma-prime eutectic also differed between the two materials. 
In the case of CMSX-4 the gamma-prime eutectic appeared as isolated 'clumps' within the microstructure whilst that in CM247LC DS tended to be strung out around the dendritic features (Fig. 8). Providing the gamma-prime eutectic was surface connected, the relative distributions would allow the hot corrosion a potentially easier path to track and thus result in greater localised propagation rates for CM247LC DS than that for CMSX-4 during the CF tests. Hence, the distribution of the gamma-prime eutectic maybe a fourth factor which is involved in the interaction which causes the increased hot corrosion propagation rates during the CF testing (along with mechanical stress, gamma-prime precipitate volume fraction and gamma-prime eutectic volume fraction). Of the four factors which are proposed in the influential interaction that causes an increase in the hot corrosion propagation rates, and ultimately CF failures, the dominant factor(s) generally appear to be the ones associated with the microstructure. In the case of IN6203DS, the relatively low gamma-prime content (volume fractions of 27 and 0% for the precipitate and eutectic respectively) meant this material did not appear to experience any increase in the hot corrosion propagation rates with concurrent fatigue testing and was therefore immune to the influence of mechanical stress (under these conditions). CMSX-4 had gamma-prime volume fractions of 60 and 2.0% for the precipitate and eutectic (which appeared as isolated 'clumps') respectively. It seems that the eutectic volume fraction and distribution were insufficient to exert any major influence on the interaction allowing the magnitude of the mechanical stress to become the dominant factor in the predictable increase in the hot corrosion propagation rates as shown in Fig. 6. CM247LC DS had gammaprime volume fractions of 52 and 6.8% respectively for the precipitate and eutectic (which tended to be a continuous distribution around the dendritic structure). In this case, it appears that the magnitude of stressing became less dominant and the eutectic volume fraction and distribution more influential within the interaction. This introduced a greater degree of randomness within the maximum hot corrosion propagation rates (Fig. 6), which was most likely due to how many and how deep the surface-connected gamma-prime eutectic features were within the gauge length of the fatigue specimens. This randomness was therefore the most likely cause of the scatter observed within the CF results of CM247LC DS material (Fig. 4). The results of this research have provided evidence that the application of mechanical stress may increase the hot corrosion propagation rates in precipitation-hardened Ni-base superalloys. However, it does appear to be dependent on an interaction with microstructural features which include the gamma-prime precipitate and eutectic volume fractions and the distribution of the eutectic features. Once increased rates of the hot corrosion propagation rate have been experienced, the respective material may be considered more likely to experience EAC issues such as CF (using high R-ratio and high-frequency parameters) or SCC. Conclusions Upon completion of this research, the following conclusions were made: • Precipitation hardened Ni-base superalloys that appear susceptible to EAC are those with relatively high gamma-prime (precipitate and/or eutectic) volume fractions. 
• IN6203DS (with averaged gamma-prime volume fractions of 27 and 0% for the precipitate and eutectic respectively) does not appear to be susceptible to EAC. • CMSX-4 (with averaged gamma-prime volume fractions of 60 and 2.0% for the precipitate and eutectic respectively) and CM247LC DS (with averaged gammaprime volume fractions of 52 and 6.8% for the precipitate and eutectic respectively) are both susceptible to EAC. • EAC susceptible precipitation hardened Ni-base superalloys appear to experience an increased hot corrosion propagation rate due to an interaction between mechanical stress, the gamma-prime precipitate and eutectic volume fractions, and the distribution of the gamma-prime eutectic. • One possible explanation for the interaction is that the mechanical stress may be distorting the lattice structure of the gamma-prime sufficiently for S to access the interstitial sites within these phases. This would then create a further distortion of the gamma-prime lattice and therefore allow the S to access more interstitial sites. Further to this, if relatively large eutectic regions of the gamma-prime are surface connected, the increased hot corrosion rates would ensure localised deep penetration of the hot corrosion attack in a relatively short time frame. The lattice structure of the gamma matrix though would also be distorted by the mechanical stress. However, since the lattice parameter of the gamma-prime tends to be smaller than that of the gamma matrix, any further distortion caused by S accessing the interstitial sites would be of greater magnitude in the gammaprime ensuring these are the susceptible phases. This is an area that should be considered for further research by computer modelling. It may also be possible that a critical gamma-prime precipitate size for a given volume fraction may be derived below which increased rates of hot corrosion attack due to mechanical stress do not occur. • This research has shown that a proposed crack initiation/propagation mechanism based on a summation of stresses accelerating the 550 ˚C hot corrosion attack is plausible in susceptible precipitation hardened Ni-base superalloys.
Does mean platelet volume and neutrophil to lymphocyte ratio increase in primary hyperparathyroidism arising from a single adenoma Aim: Primary hyperparathyroidism (PHP) is commonly caused by adenomas. Studies have shown mild inflammation in PHP and elevated levels of some inflammatory markers to support this. In addition, excess parathyroid hormone (PTH) and calcium (Ca) cause atherosclerosis by disrupting endothelial function. Mean platelet volume (MPV) describes the size and indirect activity of platelets and its value is expected to increase with inflammation and associated atherosclerosis. Neutrophil to lymphocyte ratio (NLR) is another parameter associated with inflammatory response. This study was performed to investigate the MPV and NLR levels in PHP developing from a single parathyroid adenoma. Method: Patient records from 2016-2021 were retrospectively scanned from the computer system and 40 patients with PHP developing from a single parathyroid adenoma were selected based on exclusion criteria. The values of PTH, Ca, 25-Hydroxyvitamin D, phosphorus, MPV and number of blood cells were recorded. NLR was calculated. The results were compared with the results of 36 healthy controls. Results: MPV (8.7±0.6 fl and 7.6±0.6 fl, respectively; p=0.001) and NLR (2.6±1.7 and 1.7±0.8, respectively; p=0.000) were higher in the PHP group compared to the control group. Ca and PTH correlated positively with MPV (p=0.003 and p=0.000, respectively) and NLR (p=0.011 and p=0.023, respectively). Conclusion: MPV and NLR were found to be higher in patients with PHP developing from a single adenoma than in healthy individuals. Introduction Parathyroid adenomas are the most common cause of primary hyperparathyroidism (PHP) and are increasingly encountered in clinical practice with increased ultrasound experience and widespread measurement of calcium (Ca), 25-hydroxyvitamin D (25OHD), and parathyroid hormone (PTH). Early diagnosis also allows patients to be caught mostly in the asymptomatic phase. Therefore, clinical studies have focused on investigating whether high PTH and Ca levels cause other pathologies in this group of patients without typical bone and Experimental Biomedical Research Original article renal involvement [1][2][3][4][5][6][7]. In these studies, the PTH-1 receptor has also been observed outside of bone and kidney which are typical sites of PTH action [3]. Apart from these two regions, the PTH-1 receptor was most frequently detected in the heart and vascular network, which is why the cardiac effects of PTH have begun to be studied [3][4][5][6][7]. These studies showed that PTH exerts chronotropic and inotropic effects on the heart and is a causative factor for left ventricular hypertrophy and hypertension [4,5]. Studies have shown mild inflammation in PHP and increased levels of some inflammatory markers (interleukin-6, highsensitivity C-reactive protein) supporting this [6,7]. Moreover, excess PTH and Ca cause atherosclerosis by disrupting endothelial function [8,9]. Platelets are involved in the development of inflammation and play a special role in hemostasis and thrombosis [10]. Platelets express and secrete CD40 ligand, which stimulates inflammation in the endothelium, and platelet cytoplasmic granules contain numerous inflammatory products, including leukotrienes, prostaglandins, plateletactivating factor, beta-thromboglobulin, and interleukin-1 [11][12][13]. MPV describes the size and indirectly the activity of platelets [14]. 
Therefore, its value is expected to increase in inflammation and associated atherosclerosis, in which platelet activity increases [15][16]. Studies have also supported this idea, and MPV has been found to be increased in chronic inflammatory diseases such as ankylosing spondylitis, rheumatoid arthritis, and inflammatory bowel disease, as well as in acute inflammatory processes such as unstable angina and myocardial infarction [17][18][19][20]. MPV has also been associated with infections such as coronavirus 2019 (Covid-19), obesity, diabetes mellitus, frailty, and coronary artery disease [21][22][23][24]. The neutrophil to lymphocyte ratio (NLR) is obtained by dividing the neutrophil count by lymphocyte count in the complete blood count and is one of the markers of the inflammatory response [25]. The positive association between high-sensitivity C-reactive protein and systemic inflammation also supports this finding [25]. NLR has been found to help predict prognosis in some diseases, indicating poor prognosis in cardiovascular disease, solid tumors, and infections [26][27][28]. This study was conducted to investigate the level of MPV and NLR in PHP developing from a single parathyroid adenoma. Materials and methods After obtaining ethics committee approval (date: 10/06/2021; decision number: 8/21), the data of subjects who visited the endocrinology and/or general surgery departments of Antalya Training and Research Hospital between January 2016 and April 2021 were reviewed. The exclusion criteria were as follows: PHP due to hyperplasia of the parathyroid glands, PHP patients with multiple adenomas, cases with surgical pathologies of parathyroid cancer, younger than 18 and older than 70 years, with cardiovascular or cerebrovascular diseases, taking medications that affect platelet function (e.g., acetylsalicylates, heparin, antiepileptic drugs, etc.), and with infections or inflammatory diseases. Patient records were retrospectively scanned from the computer system, and 40 PHP patients with solitary parathyroid adenoma who met the exclusion criteria were identified and included in the study. The study population was formed by selecting 36 controls of similar age and gender with normal serum PTH and Ca levels. In all patients, PHP diagnosis had been based on at least two separate measurements of Ca, phosphorus (P), albumin, and PTH, and at least one measurement of 25OHD and 24-hour urine Ca and creatinine. The diagnosis of adenoma was confirmed by the presence of adenoma on Technetium (99mTc) sestamibi scintigraphy in addition to the ultrasound image, and by PTH washout from the lesion on ultrasound if the scintigraphy was negative. Ca, P, albumin, and other biochemical tests results were obtained by the traditional spectrophotometric method using commercial kits from Beckman Coulter with a Beckman Coulter AU5800 autoanalyzer (Beckman Coulter Inc., CA, USA), and the result of whole blood parameters (hemogram) were obtained with a Beckman Coulter LH780 hematology autoanalyzer. PTH, 25OHD and other necessary hormone tests were performed using the chemiluminescence method on a Beckman Coulter DxI800 instrument (Beckman Coulter Inc.). The reference ranges in our hospital were 8.8-10.6 mg/dl for Ca, 2.5-4.5 mg/dl for P, 12-88 ng/l for PTH, and 3.6-12 fl for MPV between the study dates. Serum levels of Ca, P, albumin, PTH, 25OHD and hemogram of total population were recorded. 
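As a small illustration of the derived variables used in the analysis, the sketch below (with hypothetical placeholder values, not patient data) applies the albumin-corrected calcium formula stated in the next paragraph and the NLR definition given in the Introduction.

```python
def corrected_calcium(ca_mg_dl: float, albumin_g_dl: float) -> float:
    """Albumin-corrected calcium (mg/dl): measured Ca + 0.8 * (4 - albumin)."""
    if albumin_g_dl >= 4.0:
        return ca_mg_dl  # correction only applied when albumin is low
    return ca_mg_dl + 0.8 * (4.0 - albumin_g_dl)

def nlr(neutrophils: float, lymphocytes: float) -> float:
    """Neutrophil to lymphocyte ratio from an absolute differential count."""
    return neutrophils / lymphocytes

# Hypothetical example values
print(corrected_calcium(10.9, 3.2))   # -> 11.54 mg/dl
print(round(nlr(5.2, 2.0), 2))        # -> 2.6
```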
If a patient's albumin was low, the albumin-corrected Ca was used in the analysis [corrected Ca (mg/dl) = measured Ca (mg/dl) + 0.8 × (4 − patient albumin in g/dl)]. A 25OHD level below 20 ng/ml was considered insufficient. We statistically compared the MPV and NLR values of patients and controls and examined the correlation between these parameters and PTH, 25OHD and Ca.

Statistical analysis

All results were given as numbers and percentages for categorical parameters and as means and standard deviations for continuous variables. Analyses were performed with the SPSS 20.0 program. Whether the distribution of the data was normal or not was determined with the Shapiro-Wilk test. Means of the two groups were compared with Student's t-test for variables meeting the parametric analysis conditions, and the Mann-Whitney U test was used for nonparametric variables. The Spearman correlation test was used to identify possible relationships among the parameters. A p value below 0.05 was considered significant.

Results

40 PHP patients with a mean age of 50.6±7.3 years and 36 controls with a mean age of 49.8±8.1 years were studied. 29 (72.5%) of the PHP patients were female, while 26 (72.2%) of the control subjects were female. Mean PTH level was significantly higher in patients than in control subjects (214.9±112.6 ng/l and 49.0±6.3 ng/l, respectively; p=0.001). The mean 25OHD level in the PHP group indicated insufficiency, while the 25OHD level in the control group was adequate (18.1±7.6 μg/l and 26.6±4.0 μg/l, respectively; p=0.024). The mean Ca level was 11.6±0.9 mg/dl in patients and 9.2±1.0 mg/dl in controls (p=0.012). The P level was lower in patients than in controls, as expected (2.4±0.02 mg/dl and 3.5±0.4 mg/dl, respectively; p=0.002). Red blood cell count and hemoglobin content were similar in the two groups. While platelet count did not differ between patients and controls, MPV was significantly higher in patients compared to controls (8.7±0.6 fl and 7.6±0.6 fl, respectively; p=0.001), supporting our hypothesis. The white blood cell count did not differ between the groups, but the neutrophil count was significantly increased in the PHP patients (p=0.000). In addition, NLR was significantly higher in the PHP group than in the control group (2.6±1.7 and 1.7±0.8, respectively; p=0.000). The comparison of study parameters between patients and controls is shown in Table 1. We found significant positive correlations between PTH and MPV (r=0.476, p=0.000) and between Ca and MPV (r=0.292, p=0.003), and a significant negative correlation between MPV and 25OHD (r=-0.367, p=0.024). In addition, NLR showed significant positive correlations with serum Ca (r=0.214, p=0.011) and PTH (r=0.347, p=0.023) and a negative correlation with 25OHD that was not statistically significant (r=-0.072, p=0.131). The correlation results are shown in Table 2.

Discussion

In our study, we found that MPV, which was expected to increase with platelet activation, and NLR, which is positively correlated with inflammatory parameters, were significantly higher in PHP patients than in healthy controls. MPV was found to be increased in chronic inflammatory diseases and many cancer types, including thyroid papillary carcinoma, as well as in cardiovascular diseases such as coronary artery disease and myocardial infarction (MI) [16][17][18][29]. Butterworth RJ et al. showed that MPV is also increased in ischemic stroke and that MPV on admission is significantly higher in patients who died or became dependent at 3 months after stroke [30].
In another study, an increase in MPV was observed after MI, suggesting that it may be a predictor of death or other ischemic events after MI [18]. The study, which compared thyroid cancer patients who underwent surgery with healthy controls and operated thyroid patients with benign goiter pathology, concluded that the MPV increase in thyroid papillary cancer was significant compared to other groups (29). Another study by Kuzu F et al., on thyroid nodules found that MPV and NLR were high in malignant nodules (31). Also, in studies on MPV in thyroid patients, it was observed that MPV increased in autoimmune thyroid diseases irrespective of TSH level (32). An increase in MPV was also observed in Graves' orbitopathy, which is also an autoimmune inflammatory process (33). In patients with PHP, inflammation, endothelial dysfunction, and the atherosclerosis cascade are activated by pathways whose mechanisms are not clearly understood. The consequence of this process is a poor prognosis and an increased risk of death from cardiovascular disease [3-9]. To elucidate this etiopathogenesis, platelet functions have been brought to the forefront and it has been suggested that changes in coagulation parameters, susceptibility to thrombosis and increased platelet activation may occur in PHP [34,35]. It is known that the number of studies investigating platelet function and activation in PHP is quite limited. In these studies, some factors of the coagulation cascade, coagulation and adhesion molecules were measured. One study found that the levels of the factor VII and D-dimer were higher in PHP patients than in control subjects [34], while in another study, P-selectin levels and aggregation parameters did not differ between PHP and control groups [35]. In studies that investigated MPV in PHP, the results were consistent with ours [36][37][38]. Yılmaz et al, on the other hand, found that MPV values decreased significantly in the 6th month after adenoma surgery [37]. Baradaran et al., studied MPV levels in secondary hyperparathyroidism in dialysis patients and found that there was a direct correlation between MPV and PTH in this group of patients, and observed that platelet count decreased with increasing PTH [38]. Some studies hypothesized that increased Ca levels, rather than PTH, affected platelets in PHP, leading to an increase in platelet Ca levels by altering platelet shape and activation [39,40]. It has also been suggested that increased inflammation and oxidative stress may cause platelet activation in PHP [41,42]. Similar to our results, Cure et al. [42] and Arpaci D et al., [36] found a negative correlation between MPV and 25OHD levels. In agreement with previous reports [43,44], females in this study were more likely to have a PHP than males. Another finding of our study is that NLR is higher in subjects with PHP than in healthy subjects. NLR, which can be derived from leukocyte count, increases in systemic inflammation [45]. Some data suggest that NLR may be related to cardiovascular disease prognosis [26,45]. In a few studies investigating the influence of PHP on NLR, a positive correlation between PTH and NLR was documented [46,47]. In this study, Zeren S et al., found that NLR increased with increasing parathyroid adenoma size [47]. In another study that focused attention on NLR in patients who developed primary or secondary PTH elevation, it was highlighted that due to the positive correlation between PTH and Ca and NLR, elevated PTH would indicate a proinflammatory state [48]. 
Our study has some limitations. The study population was small, the study was retrospective, and we did not measure other atherosclerotic or inflammatory markers and platelet activation parameters. Conclusion The significance of this study is that it demonstrates increased platelet activation and inflammatory propensity in PHP and paves the way for new studies to assess inflammatory markers and adhesion and aggregation molecules in relation to platelet activity. In our study, we found that MPV, which is an indicator of platelet activation, and NLR, which correlates with inflammatory parameters, increase in PHP due to a single parathyroid adenoma, but new studies on the clinical significance of these findings are needed. There is no conflict of interest.
Research on the Linkage Mechanism of Multi-Time Scale Electricity Market in Northern Hebei With the rapid marketization of the electric sector, transferring to market-oriented system from managed-based system is of utmost importance. As spot market constructing recognized as a necessity for electric reformation, Northern Hebei is treating the linkage mechanism seriously. Focusing on the spatial and temporal linkage mechanism for energy trading, this paper aims to discuss reasonable options for Northern Hebei when it’s introducing the spot market. The relationship between energy market and ancillary market, particularly in the spot market, is also discussed to better shape a comprehensive framework for the entire electric market. Some discussions on the specific electric commodity categories are demonstrated as well to help Northern Hebei look into the future and take strategies accordingly as the market develop and the government giving more guidance on the market construction. Introduction Several Opinions of the CPC Central Committee and the State Council on Further Deepening the Reform of the Electric Power , is worthy of remark as a monument for the reformation of the electric power system. Published on March 15 th 2015, Policy No.9, together with six supporting documents, requires eight provinces to serve as the pioneers and develop spot market for electricity trading, which aims to establish a more modernized trading framework for energy supply. The goal is to build a system that contains mid-term and long-term markets as well as the spot market. While mid-term and long-term markets are for yearly, quarterly, monthly and weekly trading that deals with energy and ancillary services such as interruptible load and voltage regulation, the spot market aims for coping with energy and ancillary services such as reserve and frequency regulation for dayahead, daily, and real-time trading. More explorations on other trading commodities (e.g., capacity market, electric power futures, and derivatives) could be made when the market is mature enough. Centralized and decentralized markets are the two options for the market mode. Centralized marketplaces are usually carried with bilateral contracts that are delivered physically. Both supply-side and demand-side should have their daily curve for electricity production and consumption one day ahead of the real-time dispatch, and the energy differences between the planned results and the load should be balanced at the spot market. Meanwhile, Contract for difference is implemented for centralized marketplaces. While the price signal from the spot market depicts the demand-supply relationship of the market, mid-term and long-term bilateral contracts serve as hedging methods to manage the risk. Each province should build their market based on their financial situations while considering features like regional electric resources, load features, grid structure, etc. 2 For electric market system, trading should be done in either regional or provincial electric power markets. Unlike provincial markets merely dealing within the province, regional markets deal with a scale larger than a province and possibly contains several provinces that have bilateral transmission lines. Beijing trading center and Guangzhou trading center are the most important agencies for the national plan implementations and the regional agreements for large scale cross-provincial trading. 
Mid-term and long-term contracts, as well as spot market trading, should interact with each other to optimize the resources for both regional and provincial markets. It is to mention that one area should not have more than one spot market. Under these circumstances, Northern Hebei should know their strengths and weaknesses before they start building a spot market. Considering the connection with its neighboring provinces, Hebei should pay much attention on how to integrate the spot market with the mid-term and long-term markets, as well as how to integrate the regional with inter-province market together while considering other features such as its high renewable penetration patterns and local policies. Power Structure Hebei Province has two power grid companies that provide the customers with electric services. Here in this article, we are only going to focus on the northern part that is taken over by Northern Hebei Power Grid company. Northern Hebei Power Grid is located at the east end of Northern China Power Grid. Neighboring to Beijing, Northern Hebei Power Grid is playing a crucial role as the electric defender of the capital in Northern China area. Northern Hebei Power Grid serve for 43 counties (including districts and cities) in Langfang, Chengde, Tangshan and Qinhuangdao. The inter-province network is of 'three horizontal and three vertical' framework. There are also important transmission lines between Northern Hebei and Beijing, Tianjin, Southern Hebei and West Inner Mongolia. Northern Hebei has rich wind and solar energy. At late 2018, Northern Hebei has 1.36% of Hydro capacity, 41.96% of coal capacity, 40.15% of wind capacity, 16.42% of solar capacity and 0.10% of storage capacity. The total capacity for renewable energies has exceeded 17,283 MW. In 2018, wind provided 27.35% of total energy over 2057 hours, and solar provided 6.07% of total energy over 1310 hours. The discarded renewable energies are about 1.4 billion kwh. Zhangjiakou, rich in both wind and solar energy, has especially difficulty on market consumption of the renewable energies. Trading characteristics within Northern Hebei Currently, Northern Hebei only has mid-and-long-term trading and real-time dispatch. Local trading center is in charge of trading within the region, including direct trading whose suppliers and users both registered locally, Alibaba cloud computing project, and Green Electricity that only takes place in Zhangjiakou. Local trading center organizes Alibaba cloud computing project independently, and cooperates with Beijing Electricity Trading Center on other categories within Beijing-Tianjin-Tangshan area and with other provinces. At the moment, settlements for all categories are carried monthly, with a user-oriented penalty method. Details on organization of different categories will be discussed later. Electric Trading within Beijing-Tianjin-Tangshan Area Northern Hebei is part of the Beijing-Tianjin-Tangshan economic zone area, and is playing a crucial rule in the electricity serving in this area. Up to now, Northern Hebei has to do all its trading within the Beijing-Tianjin-Tangshan Area framework, which is affiliated to Northern Area Framework. For Northern Hebei, trading takes place under direct trading (between supply side and demand side), Alibaba Cloud Computing project, and Green Electricity Trading (which is for renewable consumption). Direct trading is mainly based on negotiation, and supplemented by central bidding. 
Trading categories include yearly negotiation, monthly negotiation and monthly auction. The corresponding administrative department is responsible for publishing the tradable energy amount for the next year before November 1st. The relevant trading centers at all levels are in charge of decomposing the yearly tradable electricity amount into monthly amounts while following the principle of dynamic equilibrium. For yearly negotiation, an agreed monthly electricity amount between two entities should be confirmed between November 10th and November 23rd. For monthly negotiation, the negotiation takes place between the 10th and the 20th of the month preceding dispatch, and an agreed amount for the following month should be confirmed then. The monthly auction is the last stage of the mid- and long-term electricity market: the supply side uploads ascending 'load-price' curves while the demand side provides descending ones. The aggregated curves from the two sides cross at a specific point, the price at this equilibrium becomes the market clearing price, and the winners of the bid then become clear. This process should be completed within the two days before the 27th of the month, with notice of the opening date given at least three days ahead of the auction. The settlement calculation follows the chronological sequence: monthly auction, monthly negotiation, the agreed amount decomposed from the yearly negotiation, and the amount outside the market. Within the scope of Direct Trading, the Alibaba project operates synchronously and takes the form of unilateral listing. Green Electricity Trading contains yearly and monthly trading through listing and bilateral negotiation. Certain users and renewable plants trade so that electricity generated from renewable sources is consumed beyond the government-guaranteed hours. Green Electricity Trading currently operates outside the Direct Trading framework, but will become part of that framework as it matures. Green Electricity Trading only covers wind power in Zhangjiakou at the moment, and it is organized by the Northern Hebei Power Grid Company. The decomposition of the yearly tradable amount into monthly tradable amounts is decided during November and December. Ancillary service trading within Beijing-Tianjin-Tangshan Area The Northern Area has operated a spot market for peak-load regulation since 2019. This market includes a day-ahead market and an intraday market. It is a two-level market comprising the Northern Area level and the provincial level; Beijing, Tianjin, and Hebei act as a single entity at the provincial level. For the first four months of market operation, the bid electricity volume ranged from 0.34 GWh to 0.50 GWh, with conventional plants taking up 46.06% to 51.59% and renewable plants up to 53.91%. Cross-provincial trading follows the rules set by the Beijing Electricity Trading Center. Several proposals for cross-provincial trading are now under review, focusing on contract repurchase, bundling of renewable and conventional energy, bundled alternatives, generation rights transfer for clean energy, substitution with clean energy and pumped storage, and trading of renewable quotas. As electricity market construction advances rapidly, it should be noted that there is now a spot market for cross-provincial renewable energy trading, used where the demand province does not yet have a spot market and the supply province, facing renewable curtailment, still has spare capacity after the originally contracted trading has been implemented. 
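To make the monthly-auction clearing described above concrete, the following sketch (our own illustration with hypothetical bids, not actual Northern Hebei data) clears a uniform-price auction by matching the aggregated ascending supply curve against the aggregated descending demand curve; as a simplification, the marginal accepted supply offer sets the clearing price.

```python
# Minimal uniform-price auction clearing sketch (illustrative assumptions only).
def clear_auction(supply_bids, demand_bids):
    """supply_bids: list of (price, quantity) sell offers
       demand_bids: list of (price, quantity) buy offers
       Returns (clearing_price, cleared_quantity)."""
    supply = sorted(supply_bids, key=lambda b: b[0])                 # cheapest offers first
    demand = sorted(demand_bids, key=lambda b: b[0], reverse=True)   # highest willingness to pay first
    cleared, price = 0.0, None
    si = di = 0
    s_left, d_left = supply[0][1], demand[0][1]
    # Match offers while the next buyer is still willing to pay the next seller's price.
    while si < len(supply) and di < len(demand) and demand[di][0] >= supply[si][0]:
        traded = min(s_left, d_left)
        cleared += traded
        price = supply[si][0]          # simplified: marginal accepted supply offer sets the price
        s_left -= traded
        d_left -= traded
        if s_left == 0:
            si += 1
            s_left = supply[si][1] if si < len(supply) else 0.0
        if d_left == 0:
            di += 1
            d_left = demand[di][1] if di < len(demand) else 0.0
    return price, cleared

# Hypothetical offers in (yuan/MWh, GWh):
supply = [(250, 0.10), (300, 0.15), (380, 0.20)]
demand = [(420, 0.12), (360, 0.10), (320, 0.15)]
print(clear_auction(supply, demand))   # -> (300, 0.25) for these assumed bids
```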
The linkage mechanism within Beijing-Tianjin-Tangshan Area As the call for spot market construction is urgent, a spot market in the Beijing-Tianjin-Tangshan area is unavoidable. However, as this area covers more than one province, the final form of the spot market is uncertain. Whether the region will have a centralized market or a decentralized market is a key question for Northern Hebei's participation in a spot market. Northern Hebei, with both renewable and conventional power plants, could build a spot market of its own; Beijing, however, as the capital city with stricter air quality requirements in certain periods and fewer renewable resources, might not be able to build a spot market whose price reflects the supply and demand relationship. Meanwhile, Tianjin, one of its neighboring provinces, with various types of energy including both conventional and renewable, also has the ability to build an individual spot market. Beijing-Tianjin-Tangshan as one centralized market. If Beijing-Tianjin-Tangshan adopts a centralized market and works as a whole, Northern Hebei could still participate in the market spatially as it does now, with challenges mainly focused on time-scale linkage. The focus will then be the interaction between the trading center and the dispatching center. Requirements on computing systems dealing with real-time monitoring, settlement, and penalties would become vital. In this situation, Northern Hebei will have the least flexibility for organizing the market, and would act mainly as a participant focused on logistical and technical issues. Beijing-Tianjin-Tangshan as one decentralized market. If Beijing-Tianjin-Tangshan works as a decentralized market, Beijing, Tianjin and Northern Hebei will each have to participate wisely in the market. As physical delivery is required in a decentralized market, precise day-ahead predictions are needed to avoid penalties. The challenge then becomes the decomposition of mid- and long-term contracts and the prediction of generation. As in Nord Pool, most trading might be done within the spot market, with fine adjustments made by a balancing mechanism or an ancillary service market. In this scenario, Northern Hebei will have more flexibility and autonomy, and even more so if the decentralized market contains only a spot market. Northern Hebei as one centralized market. If Northern Hebei constructs its own centralized market, Beijing and Tianjin will possibly build the same kind of centralized market. With the Beijing-Tianjin-Tangshan union breaking apart, Northern Hebei will have the highest flexibility and can build an independent spot market. The challenge would be the adjustment of working procedures, as cooperation within the Beijing-Tianjin-Tangshan area would then differ little from cooperation with other provinces. In this scenario, Northern Hebei will have to overcome all the challenges mentioned for the other two scenarios, but with less urgency. The spatial and temporal linkage for Northern Hebei under different scenarios The following graph depicts the regional relationship between Northern Hebei (Jibei) and other provinces in the Northern Area. The corresponding linkage mechanisms are listed according to the scenarios. 
Conclusion With high uncertainty about the future market framework, Northern Hebei should prepare and build a linkage mechanism for the transition path from an administratively managed system to a market-oriented system. The key is to understand the difficulties and consequences of moving to a new working mode, and on that basis to build reasonable strategies to overcome the corresponding difficulties. For Northern Hebei, its method of cooperation with Beijing and Tianjin should be central when devising a linkage mechanism and choosing strategies.
2020-03-19T10:31:00.121Z
2020-03-17T00:00:00.000
{ "year": 2020, "sha1": "7e1295418109172a2acd814bf746eebae300f0e3", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/740/1/012185", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "6ff85581b111323e4aecc8edf863b0f410492089", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Business" ] }
232405026
pes2o/s2orc
v3-fos-license
Preclinical Assessment of a New Hybrid Compound C11 Efficacy on Neurogenesis and Cognitive Functions after Pilocarpine Induced Status Epilepticus in Mice Status epilepticus (SE) is a frequent medical emergency that can lead to a variety of neurological disorders, including cognitive impairment and abnormal neurogenesis. The aim of the presented study was the in vitro evaluation of the potential neuroprotective properties of a new pyrrolidine-2,5-dione derivative, compound C11, as well as the in vivo assessment of the impact of C11 and levetiracetam (LEV) on neurogenesis and cognitive functions after pilocarpine (PILO)-induced SE in mice. The in vitro results indicated a protective effect of C11 (500, 1000, and 2500 ng/mL) on astrocytes under trophic stress conditions in the MTT (3-[4,5-dimethylthiazole-2-yl]-2,5-diphenyltetrazolium bromide) test. The results obtained from the in vivo studies, where mice 72 h after PILO SE were treated with C11 (20 mg/kg) and LEV (10 mg/kg), indicated markedly beneficial effects of C11 on the improvement of neurogenesis compared to the PILO control and PILO LEV mice. Moreover, this beneficial effect was reflected in the Morris Water Maze test evaluating the cognitive functions in mice. The protective effect of C11 on astrocytes confirmed in vitro, as well as its beneficial impact on neurogenesis and cognitive functions demonstrated in vivo, strongly indicate the need for further advanced molecular research on this compound to determine the exact neuroprotective mechanism of action of C11. Introduction Status epilepticus (SE) is defined as a serious medical condition characterized by continuous or rapidly recurring seizures without a recovery period between them. These states can have long-term consequences, including neuronal death, neuronal injury, and the alteration of neuronal networks [1]. The early phase after SE can lead to neurodegeneration, neuroinflammation, and abnormal neurogenesis in the hippocampus, although the extent of these changes depends on the severity and duration of seizures [2]. In many cases, permanent traumatic lesions caused by SE can lead to temporal lobe epilepsy, very often refractory to treatment with a single antiepileptic drug (AED) [3]. This type of epilepsy usually requires the chronic use of at least two AEDs to stop or minimize seizures, which unfortunately carries the risk of side effects such as drowsiness or chronic fatigue, dizziness, or cognitive impairment. Problems with spatial orientation and learning and memory functions are associated with both the degenerative process of nerve cells and disorders of the process of neurogenesis [4]. Hippocampal neurogenesis is very sensitive to various physiological and pathological stimuli, including seizures, which have been proven to alter both the extent and the pattern of neurogenesis [5]. Moreover, it is thought that not only seizures but also antiepileptic treatment has a significant impact on neurogenesis. Many publications based on in vivo and in vitro studies present positive, as well as negative, influences of AEDs on the process of neurogenesis. Chen et al. [6], using topiramate (TPM) and lamotrigine (LTG), indicated the promotion of aberrant neuron regeneration by TPM, but not LTG, in the hippocampus after SE. Similar to LTG, levetiracetam (LEV) suppressed the development of spontaneous electroencephalography (EEG) seizures and aberrant neurogenesis following kainic acid (KA)-induced SE [7]. 
Valproic acid (VPA), a well-known anticonvulsant and mood stabilizer, reduced cell proliferation in the subgranular zone (SGZ) of the dentate gyrus (DG) and impaired the ability of treated rats to successfully perform a hippocampus-dependent spatial memory test [8]. Additionally, it was shown that VPA induces abnormal visual avoidance and schooling behaviors in Xenopus laevis tadpoles [9]. The results from our investigations also confirm that long-term injection of VPA slightly decreased the total amount of newly born cells, while the combination of VPA and arachidonyl-2'-chloroethylamide (ACEA), a highly selective cannabinoid CB1 receptor agonist, significantly increased the level of newborn neurons in the dentate SGZ in mice [10]. Juliandi and coworkers [11] showed that comparable postnatal cognitive functional impairment after prenatal VPA exposure in mice is caused by the untimely enhancement of embryonic neurogenesis, which leads to depletion of the neural precursor cells (NPCs) pool and, consequently, a decreased level of adult neurogenesis in the hippocampus. Similar results were obtained by Sakai et al. [12], indicating that prenatal VPA exposure in mice impairs neuronal migration in the adult dentate gyrus through the decreased expression of CXC motif chemokine receptor 4 (Cxcr4) in NPCs and, consequently, increases seizure susceptibility, whereas voluntary running overcomes these adverse effects. Sondosi and coworkers [13] showed that ethosuximide (ETS) can induce neuronal differentiation of rat forebrain stem cells into GABAergic neurons, which may explain one of the mechanisms of the antiepileptic effects of ETS. In addition to intensively designed and synthesized chemical compounds with potential antiepileptic properties, a lot of attention is also paid to substances of natural origin, which are increasingly used in research with AEDs in various models of experimental epilepsy [10,[14][15][16][17][18]. The results obtained by Kaminski and Obniska et al. [19][20][21] in a group of chemically diversified pyrrolidine-2,5-diones clearly showed anticonvulsant properties for several selected compounds in a few animal models of epilepsy, including the pilocarpine (PILO) model of epilepsy. Looking for new potent multifunctional anticonvulsants with a broad spectrum of efficacy in preclinical studies, we decided to combine, on a common chemical template, structural fragments of three AEDs active in three different animal models of epilepsy-namely, lacosamide (LCS), active in the maximal electroshock (MES) and six-hertz (6 Hz) seizure tests, ethosuximide (ETS), effective in the pentylenetetrazol (PTZ) seizure test, and LEV, which acts potently in the 6-Hz seizure model. In consequence, on the basis of this assumption, we designed and synthesized a new compound, C11, with a hybrid structure that revealed a broad spectrum of activity in all aforementioned experimental seizure models [22]. Taking into consideration the anticonvulsant and neuroprotective activity of LEV in a mouse PILO model of epilepsy [23][24][25] and in the 6-Hz model [26], and the anticonvulsant activity of C11 in acute animal models of epilepsy [22], we decided to investigate the impact of long-term treatment with C11 on neuroprotection, hippocampal neurogenesis, and cognitive functions in mice [27]. The results we obtained indicated that hybrid compound C11 used chronically has no negative impact on learning and memory functions in mice. 
Similarly, the long-term administration of C11 did not cause neuronal degeneration or disturbances in neurogenesis in treated mice [27]. Considering the above-mentioned promising data and continuing research on compound C11, we decided to assess the potential neuroprotective properties of C11 in vitro and to evaluate, in vivo, the impact of long-term treatment with C11 and with LEV as a reference AED on neural stem cell proliferation, migration, and differentiation, as well as on cognitive functions, in the PILO model of status epilepticus (SE) in mice. Protective Abilities of C11-In Vitro Studies In the first step, the impact of C11 on neuron and astrocyte viability was assessed using the colorimetric MTT assay (cell metabolic activity examination) after 48 h of cell exposure to the tested compound at concentrations from 100 ng/mL to 2500 ng/mL dissolved in a culture medium suitable for the given cells. As presented in Figure 1 (left part of the graphs), C11 over the whole range of analyzed concentrations (from 100 to 2500 ng/mL) did not affect neuron viability. On the contrary, C11, in a dose-dependent manner, increased the metabolic activity of astrocytes. The viability of astrocytes after treatment with C11 increased from 117.1% of the control (100 ng/mL) to 139.1% of the control (2500 ng/mL). In the next step, C11's influence on nerve cell viability under glutamate excitotoxicity conditions was tested. As presented in Figure 1A, 3-mM glutamate significantly decreased both neuron and astrocyte viability, by 19.1% and 35.1%, respectively. C11, over the whole range of analyzed concentrations, neither increased nor weakened the negative effect of glutamate, which suggests a lack of protective properties against excitotoxicity evoked by glutamate. Figure 1. 
Impact of C11 on the viability of neurons and astrocytes under standard or degenerative conditions: (A) glutamate excitotoxicity and (B) trophic stress. Cells were exposed for 48 h to the investigated compound in concentrations ranging from 100 to 2500 ng/mL prepared in culture medium alone (control) or with 3-mM glutamate or serum-deprived cell culture medium. Cell viability (metabolic activity) was examined photometrically by the MTT assay. Results are presented as the mean ± SEM of 6-12 measurements. Statistically significant differences compared to the control (black bar) at *** p < 0.001. Statistically significant differences compared to the serum deprivation medium (white bar with pattern) at ^^^ p < 0.001. One-way ANOVA with Tukey's post-hoc test. In order to examine the influence of C11 on nerve cell viability under trophic stress conditions, trophic factors were removed from the standard cell culture medium (supplement B27; neurons) or their amount was significantly reduced (fetal bovine serum (FBS); astrocytes). As presented in Figure 1B, the viability of the investigated nerve cells was lowered in response to trophic stress by 14.3% (neurons) and 35.1% (astrocytes). C11 administered to neurons in a culture medium without B27 was not able to reduce the negative impact of serum deprivation on cell viability. On the contrary, C11 at concentrations of 500, 1000, and 2500 ng/mL revealed a significant trophic effect in astrocytes. C11, at the mentioned concentrations, effectively protected astrocyte viability from the inhibition caused by a 10-fold reduction of the serum amount in the culture medium. Evaluation of the Effects of Long-Term Administration of C11 and LEV on the Newborn Neurons in the Dentate SGZ and GCL of Mouse Hippocampus after PILO Induced SE The obtained results showed significant differences in the amount of newborn neurons between the PILO control and healthy control mice (961 ± 90.6 vs. 
1700 ± 147, respectively, p < 0.0001; n = 5; Figure 2B). Interestingly, an increase in the number of newborn neurons for C11 was observed in comparison to the PILO control mice (1292 ± 91.04 vs. 961 ± 90.6, respectively, n = 5), although the difference was not statistically significant (Figure 2B). Evaluation of the Effects of Long-Term Administration of C11 and LEV on the Newborn Astrocytes in the Dentate SGZ and GCL of Mouse Hippocampus after PILO Induced SE Differences in the level of newborn astrocytes were observed between the PILO control and healthy control mice. The average number of astrocytes for PILO control mice was 141.2 ± 13.3 (p < 0.0001; n = 5; Figure 2C) and, for the healthy control group, 270.8 ± 23.46 (p < 0.0001; n = 5). Additionally, a significant increase of newborn astrocytes was observed for C11 PILO mice (254 ± 17.9; p < 0.001; n = 5) when compared to the control PILO group (Figure 2C), whereas the level of astrocytes for LEV mice was similar to the PILO control group (132.2 ± 6.8, n = 5). Evaluation of the Effects of Long-Term Administration of C11 and LEV on Mouse Spatial Learning and Memory after PILO Induced SE The results obtained from the Morris Water Maze test indicated a marked improvement in the process of spatial learning and memory in all treated mice in comparison to the control PILO group. All measured parameters in C11 mice: the mean escape latency (5.419 ± 1.123; p < 0.05; n = 7; Figure 3A), the mean distance (118.1 ± 15.24; p < 0.001; n = 7; Figure 3B), and mean percent of time spent in the W-Channel (50.34 ± 2.645; p < 0.001; n = 7; Figure 3C) averaged over the four quadrants were significantly more favorable compared to the PILO control group (17.21 ± 3.589, 360.2 ± 72.18, and 27.12 ± 4.02, respectively, n = 7; Figure 3A-C) and quite similar to the healthy control group. Hence, statistically significant differences in the distance and W-Channel were also observed between PILO and healthy control mice. Surprisingly, LEV had no stimulating effect on the cognitive function of animals, and the obtained values of all three measured parameters did not differ significantly from the PILO control mice (Figure 3). The Magnetic Resonance Spectroscopy (MRS) results indicated no significant changes in the total amount of tested neurometabolites in both C11- and LEV-treated mice (Figure 4A-E), except for the GLN level in LEV mice, where a significant increase of this metabolite was observed when compared to the PILO control group (1.441 ± 0.16 and 0.969 ± 0.05; p < 0.05; n = 5; Figure 4D). Interestingly, a statistically significant increase of the NAA/Cr level for the PILO control group was observed when compared to the healthy control mice (0.836 ± 0.060 and 0.448 ± 0.03, respectively; p < 0.001; n = 5; Figure 4A). It should be mentioned that the level of NAA/Cr for all treated PILO groups (C11 and LEV) was higher than in the healthy control mice, but the difference was not statistically significant (Figure 4A). Discussion In the first part of the in vitro studies, the neuroprotective properties of our compound C11 were examined in human neurons and rat astrocytes under trophic stress and excitotoxicity conditions using the MTT test. This assay is a well-known and widely used assay for the evaluation of cell viability or proliferation; nevertheless, its basis is the measurement of mitochondrial dehydrogenase activity as an indicator of cell viability or proliferation. 
For that reason, to allow proper interpretation of the obtained results, the cell responses to the investigated compounds were checked under the light microscope before the MTT assay was performed. The obtained results did not confirm our assumptions about C11's ability to protect neurons; however, the results regarding the impact of C11 on nerve cell viability under trophic stress conditions in astroglia cell culture indicated that C11 at concentrations of 500, 1000, and 2500 ng/mL significantly increased astrocyte viability. What is more, C11 also effectively increased the number of astrocytes under standard conditions (complete medium with a standard amount of trophic agents). The obtained data may suggest stimulating properties of C11 on astrocyte viability, as well as a nutritional effect on astrocytes under trophic stress conditions, whose importance for neurodegeneration has been demonstrated previously [28][29][30]. In light of the data showing that astrocytes release several trophic factors [31], we suppose that the beneficial impact of C11 on astrocyte viability in the serum deprivation medium was associated with the enhancement of both the metabolic activity and the astrocyte number; of course, this hypothesis requires further verification. Considering the fact that neurotrophin production by astrocytes in response to brain tissue injury is a well-described mechanism of neuroprotection [32,33], examination of the impact of C11 on these processes seems reasonable. Although reactive astrocytes have mainly been regarded as detrimental for repair, there have recently been reports that they are able to promote neurorestorative processes [34,35]. Thus, C11 is worth studying for neuroprotective efficacy against a variety of neurological disorders in which neurodegenerative or neuroinflammatory processes are involved. Data from the in vivo studies assessing potential changes in the process of neurogenesis after C11 treatment in a model of PILO SE in mice showed stimulating properties of C11 on stem cell proliferation when compared to the PILO control mice, in which the level of newborn BrDU-positive cells was significantly lower. However, it should be mentioned that the relatively long time between PILO-SE and the quantification of neurogenesis may also have an impact on the increase in newborn cells. Typically, neurogenesis has been shown to increase within several days after PILO-SE in animals, but after several weeks, when spontaneous recurrent seizures occur, a significant decrease is observed [36]. Evaluation of the potential changes in neurogenesis several weeks after PILO-SE enables a thorough analysis of the newly formed cells in the epileptic brain [37]. Our recent study using long-term administration of C11 in healthy mice did not indicate any disturbances in hippocampal neurogenesis, which was also confirmed in the present study [27]. Interestingly, long-term treatment of PILO mice with C11 significantly increased the total amount of newborn cells, including astrocytes, to a level similar to that of the healthy control group. For newborn neurons, we also observed an increase in cells close to the level of neurons in healthy animals; however, the difference from PILO control mice was not statistically significant. It should be noted that the results from our previous studies on healthy mice indicated a neutral impact of chronic administration of C11 on astrocytes when compared to the control group [27]. 
On the other hand, LEV, despite its unique anticonvulsant mechanism of action in SE [38] and neuroprotective properties [39,40], turned out to be ineffective in improving the neurogenesis (including both neurons and astrocytes) of PILO animals. Similarly, a lack of LEV efficacy in a PILO-induced model of epilepsy was shown by Zagaja et al. [18]. Moreover, in our previous studies, LEV (10 mg/kg) also decreased the level of newborn neurons in healthy mice [27]. In contrast, Yan et al. [41] indicated that long-term treatment with LEV at high doses (300 and 600 mg/kg) enhanced cell proliferation and neuronal differentiation in the hippocampal DG of mice. Interestingly, Ithoh and coworkers [42] showed that two days of treatment with LEV (360 mg/kg) after PILO-induced SE in mice suppressed neuroinflammation and spontaneous recurrent seizures. In turn, a recent study by Vyas et al. [43] indicated the partial protective activity of seven-day prophylactic treatment with LEV at a dose of 200 mg/kg, but not of sodium valproate (VPA; 300 mg/kg) or carbamazepine (CBZ; 100 mg/kg), in mouse lipopolysaccharide (LPS)-primed + PILO-induced SE. One important thing to note regarding the above-mentioned studies of the anticonvulsant and neuroprotective effect of LEV in animal PILO-induced SE is the dose. In our study, LEV was administered chronically for 10 days at a dose of 10 mg/kg starting 72 h after SE induction. The LEV dose was selected based on our previous studies [13,27], but we also took into account the relatively low dose of C11 (20 mg/kg). It was reasonable to choose similar doses for both of the aforementioned anticonvulsants. Therefore, it should be assumed that the lack of LEV neuroprotective activity may result from the relatively low dose (10 mg/kg) used in the studies. A reduction of astrocytes in the mouse dentate gyrus after PILO-induced SE was also shown by Borges and coworkers [44], although selective astrocyte death in the dentate hilus after PILO SE was dependent on the species and method used to induce SE. Significant degeneration was observed up to three days after PILO-SE, followed by a gradual increase in the number of GFAP-positive cells. In our study, the BrDU proliferation marker was injected 7 days after PILO-SE, and still the level of GFAP cells remained significantly lower compared to healthy control mice. The opposite results were presented by Zhang et al. [45], studying the anticonvulsant effect of baldrinal in mouse PILO-SE. Seventy-two hours after SE, a large increase of GFAP-positive cells was noted, which can be explained by the fact that, in animal models of epilepsy, astrocytes are rapidly activated with hypertrophy of cell bodies, thus increasing the expression of GFAP [46]. Astrocytes are known to be the most important neural cell type for the maintenance of brain homeostasis and to cooperate with neurons on many levels. What is of great importance is the time after which the qualitative, as well as quantitative, analyses of newly formed NeuN or GFAP cells after SE induction are performed. The in vivo assessment of neurogenesis using MRS showed no statistically significant differences in the level of several selected neurometabolites important for neurogenesis in C11 mice when compared to the PILO control group. However, all PILO groups showed an increased level of NAA/Cr, although only for PILO control mice was the difference significant. 
MRS enables the identification and quantification of the levels of brain metabolites, such as NAA and glutathione, and of neurotransmitters, such as GLU, GLN, and GABA, which may be relevant to epileptogenesis [47]. Many data from animal models of epilepsy (especially post-SE) show a significant decrease of NAA [47][48][49]. An increased level of NAA was already reported in our previous study [27] in lacosamide (LCM)- and LEV-treated mice. Based on the previous research methodology, MRS was performed three weeks after the last anticonvulsant injections and five weeks after PILO-SE. It is surprising that all PILO groups (control, C11, and LEV) maintained elevated NAA levels 5 weeks after SE induction. One rational explanation for the discrepancy between the quantitatively reduced neurogenesis and the MRS results seems to be the different time points of detection. Disturbed cognitive functions after PILO-SE in mice were observed in the Morris Water Maze (MWM) test, which is one of the most common tests for examining spatial learning and memory in mice. Bearing in mind that PILO-induced SE in mice is responsible for learning and memory dysfunctions [50][51][52][53][54], we focused on three of the most important parameters: the time and distance to reach the platform and the average time spent in the W-Channel. The obtained results showed a significant beneficial effect of the long-term administration of C11 in PILO mice in comparison to the PILO control and LEV groups. As we showed in our previous studies, chronic administration of C11 in healthy mice did not impair learning and memory in the test animals [27]. Moreover, in the current research, we showed that C11 significantly shortened the time within which the PILO animals reached the platform. The time to reach the platform was also very similar to that of the animals from the healthy control group, although the difference was not statistically significant. Additionally, C11 mice, similarly to the healthy control group, turned out to spend a significantly greater percentage of time in the W-Channel, which clearly indicated a lack of spatial orientation disturbances in the tested animals. Proper neurogenesis and undisturbed cognitive functions after PILO SE in mice indicate a neuroprotective effect of C11, although its mechanism of action remains not fully defined. Summing up, C11 was proven to significantly stimulate the proliferation of newborn cells, as well as their migration and differentiation into neurons and astrocytes, and to protect cognitive functions in the mouse PILO-induced SE model. To explain all the beneficial properties of C11, its anticonvulsant and neuroprotective mechanism of action should be identified, which, according to our data, seems to be multidirectional. In vitro radioligand-binding studies revealed the binding affinity of C11 towards L-type Ca2+ channels and Na+ channels (Site 2), which might contribute to its antiepileptogenic effects [22]. Further in vitro investigations using patch-clamp experiments in rat prefrontal cortex pyramidal neurons determined the influence of C11 on fast voltage-gated sodium channels. The in vivo research done so far using the mouse PTZ kindling model of epilepsy indicated that several distinct GABA-mediated mechanisms might be responsible for the protective effect of C11, including changes in GABA release, modulation of GABA transaminase activity, and changes in the expression of GABA transporters and/or GABAA receptor subunits [55]. 
Therefore, both the in vitro and in vivo data obtained so far certainly allow for the presumption that the anticonvulsive and neuroprotective effectiveness of C11 is caused by the involvement of various mechanisms of action. Looking for a new potent anticonvulsant drug candidate that would simultaneously protect neurons and cognitive functions in refractory epilepsy with a tendency towards SE is a priority and a challenge for researchers. Bearing in mind our in vitro results confirming a protective effect of C11 on astrocytes under excitotoxicity or trophic deprivation-mediated neuronal death, as well as its beneficial effect on neurogenesis and cognitive functions, more advanced molecular research is certainly worthwhile to determine the exact neuroprotective mechanism of action. Moreover, further preclinical studies on the neuroprotective properties of C11 could open up new frontiers of research for this substance as a potential drug candidate in other neurodegenerative diseases. Reagents All reagents and kits were purchased from Sigma-Aldrich (St Louis, MO, USA), unless otherwise indicated. Neuroblastoma Cell Line Human neuroblastoma SH-SY5Y cells were obtained from ECACC (European Collection of Cell Cultures, Salisbury, UK) and cultured according to its recommendations. Differentiation of SH-SY5Y towards Neuronal Cells SH-SY5Y cells' differentiation to the neuronal cells was performed according to the previously described method, but the experiments were conducted on neuronal cells cultured for 12 days [56]. Astroglia Cell Culture Astroglia cell culture was prepared from cortices of 3-dayold newborn Wistar rats. The tissue was dissociated with a 0.25% trypsin-EDTA solution. Obtained cell suspension at a density of 1 × 10 6 cells/mL was resuspended in Dulbecco's Modified Eagle Medium/Nutrient Mixture F-12 (DMEM/F12) medium supplemented with 10% fetal bovine serum (FBS), penicillin (100 U/mL), and streptomycin (100 mg/mL). The cell suspension was maintained in a humidified atmosphere of 95% air and 5% CO 2 at 37 • C (standard conditions). The culture medium was changed daily until the culture reached the confluence (10 days); then, culture vessels with growing cells were shaken overnight in an orbital shaker at 210 rpm in order to remove the fewer adherent cells (neurons, microglia, and oligodendroglia). Following this shaking procedure, the culture became enriched with flat cells displaying typical astrocyte morphology. Immunostaining with a primary antibody for glial fibrillary acidic protein (GFAP) (polyclonal, DAKO, Glostrup, Denmark) revealed that astrocytes accounted for around 95% of the cells in the culture. Cell Viability Assessment-MTT Assay Both neurons and astrocytes were exposed to serial dilutions of C11 (100, 500, 1000, and 2500 ng/mL) used alone or in combination with 3-mM glutamate. Solutions of the investigated compound were prepared in the culture medium suitable for given cells (as described above). In the case of experiments performed in conditions of trophic stress, solutions of C11 were prepared in the medium deprived of B-27 supplement (neurons) or the medium with reduced to 2% of FBS (astrocytes). Metabolic activity of nerve cells in the response to C11 was examined after 48 h of treatment using the MTT assay. In order to properly interpret the obtained results before the addition of MTT, all plates were checked under the light microscope, and afterward, the MTT solution (5 mg/mL in phosphate-buffered saline, PBS) was added for 3 h. 
Resultant crystals were solubilized overnight in SDS buffer pH 7.4 (10% SDS in 0.01 N HCl) and the product quantified spectrophotometrically by measuring the absorbance at a 570-nm wavelength using a microplate reader (BioTek ELx800, Highland Park, Winooski, VT, USA). The results were presented as a percentage of the viability of cells treated with the investigated compounds versus cells grown in the control medium (indicated as 100%). Animals and Experimental Conditions All experiments were performed on 6-week-old male C57BL (20-22 g) mice kept in colony cages with free access to food and tap water ad libitum, under standardized housing conditions (natural light-dark cycle, temperature 21 ± 1 °C). After 7 days of adaptation to laboratory conditions, the animals were randomly assigned to four experimental groups consisting of eight mice. For the MRS and quantitative analysis of neurogenesis, five of the eight mice were analyzed. Experimental procedures related to the care of animals and protocols used in the study were approved by the Local Ethics Committee at the University of Life Science in Lublin (No 35/2016). Status Epilepticus (SE) in Mice Mice were administered an intraperitoneal (i.p.) injection of methylscopolamine (1 mg/kg) dissolved in water 15-30 min prior to the injection of PILO to reduce the peripheral cholinergic effects of PILO. Experimental animals were then injected i.p. with a single dose of PILO (300 mg/kg). Control mice were age-matched with treated mice and administered a comparable volume of vehicle after the initial methylscopolamine treatment. Mice were carefully observed after PILO injection to catch the first symptoms of convulsions. Seizure behavior occurred approximately 15 min after the PILO injection. The category and the number of generalized convulsive seizures in each 1/2-h period were tallied. A modified version of the seizure scale described by Racine and coworkers [57], with categories 1-5, was used to identify the seizure severity. Categories one and two (i.e., facial automatisms, tail stiffening, and wet dog shakes) were considered as a group to avoid subjectivity in assessing the seizures. Categories 3, 4, and 5 were considered to be generalized convulsive seizures. After 2 h of observation, animals were injected with diazepam (1 mg/kg) to stop SE. Animals with category 4 and 5 seizures that survived SE became candidates for the next step of the experiment. Animals with no seizures were euthanized by carbon dioxide inhalation. Drugs The following drugs were used in this project: LEV (Keppra; UCB Pharma, Brussels, Belgium), 5-bromo-2'-deoxyuridine (BrDU) (Sigma Aldrich, St. Louis, MO, USA), diazepam (Relanium, GSK, London, UK), and pilocarpine and scopolamine (Sigma Aldrich, St. Louis, MO, USA). The C11 compound was synthesized in the Department of Medicinal Chemistry, Jagiellonian University Medical College (Krakow, Poland) according to the procedure described previously [22]. All substances were suspended in a 1% solution of Tween 80 (Sigma, St. Louis, MO, USA) in water for injection (Baxter, Poland). All drugs were injected intraperitoneally (i.p.) with 1-mL syringes as a single injection in a volume of 0.005 mL/g. 
Drugs Administration Animals were divided into 4 groups (8 mice per group): 1. PILO C11, 2. PILO LEV, 3. PILO Control group (PILO + water for injections + Tween 80), and 4. Healthy Control group (water for injections + Tween 80). Animals were administered C11, LEV, or water for injections + Tween 80 starting 72 h after SE induction, once a day for the subsequent 10 days. Fresh drug solutions were prepared ex tempore each day of the experiment. LEV was administered intraperitoneally (i.p.) at a dose of 10 mg/kg based upon information about its efficacy in experimental models of epilepsy found in the literature [10,18]. C11 was injected at a dose of 20 mg/kg, according to the quantitative pharmacological parameter effective dose (ED50) from the 6-Hz test [22]. Additionally, BrDU (a marker of cell proliferation) was given as one more single injection for the last 5 days of the treatment. Animals were subjected to transcardial perfusion 3 weeks after the last BrDU injection. The experimental design used in the study is shown in Figure 5. Behavioral Study-Spatial Learning and Memory (MWM Test) Animals underwent a behavioral test 24 h after the last anticonvulsant injection according to the methods described earlier [27]. There was one daily session consisting of four 60-s trials (each trial from a different quadrant of the pool) for 5 consecutive days. Twenty-four hours after the 5 days of training, the final test (probe test) was performed. Three parameters were measured: escape latency, distance, and time spent in the W-Channel. The obtained results were analyzed based on the average values of the tested parameters from all quadrants for each animal in the group. Magnetic Resonance Spectroscopy (MRS) Three weeks after the last anticonvulsant injection, 5 animals from each experimental group were subjected to MRS to obtain more information about any neurodegenerative changes in the mouse brain. Proton Magnetic Resonance Spectroscopy (1H MRS) acquisition, as well as spectra processing, was described in detail in our previous publication [27]. 
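Returning to the dosing scheme described in the Drugs and Drugs Administration sections above, a quick sanity check on the suspension concentrations is straightforward; the short sketch below is our own illustration (the paper does not report these concentrations) and simply converts a target dose and a fixed injection volume into the required mg/mL.

```python
# Back-of-envelope dosing check (illustrative, not from the study): concentration
# of the drug suspension needed to deliver a given dose at 0.005 mL/g injection volume.
def required_concentration(dose_mg_per_kg, volume_ml_per_g=0.005):
    volume_ml_per_kg = volume_ml_per_g * 1000   # 0.005 mL/g is equivalent to 5 mL/kg
    return dose_mg_per_kg / volume_ml_per_kg    # mg/mL

# Doses taken from the text above; for a 20-22 g mouse the injected volume is ~0.1 mL.
for name, dose in [("C11", 20), ("LEV", 10), ("PILO", 300), ("diazepam", 1)]:
    print(f"{name}: {required_concentration(dose):.1f} mg/mL")
```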
Brain Slice Preparation For determining the influence of C11 and LEV on proliferation, migration, and differentiation, 3 weeks after the last anticonvulsant and BrDU injections, mice were anesthetized with isoflurane with a premedication of analgesic drugs, perfused with ice-cold saline followed by freshly prepared, ice-cold 4% paraformaldehyde, and then processed according to the methods described earlier [10,14,27]. Immunohistochemical Staining-Neurogenesis Fifty-micrometer sections were stored at 4 °C in cryoprotectant until needed. Free-floating sections were immunostained according to the methods described previously [10,14,27]. Confocal Microscopy and Cell Counting Confocal imaging was performed using a Nikon A1R confocal system microscope (Tokyo, Japan). Quantitative analysis of newborn BrDU cells colocalizing with NeuN and GFAP included the GCL and SGZ of the mouse DG, using the methods described in our previous studies [10,14,18,27]. Statistical Analysis For the in vitro study, data were presented as the mean value and standard error of the mean (SEM) according to the previous study [56]. Results from the in vivo study were analyzed using one-way analysis of variance (ANOVA), followed by Dunnett's test for multiple comparisons, performed using commercially available GraphPad Prism version 4.0 for Windows (GraphPad Software, San Diego, CA, USA).
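For readers who want to reproduce this style of group comparison outside GraphPad Prism, the following minimal sketch (our own, run on synthetic numbers rather than the study's raw data) performs a one-way ANOVA followed by Dunnett's test against the PILO control group; note that scipy.stats.dunnett requires SciPy 1.11 or newer.

```python
# Illustrative ANOVA + Dunnett's post-hoc comparison against a control group.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical cell counts (n = 5 per group), loosely inspired by the ranges reported above.
pilo_control = rng.normal(960, 90, size=5)
pilo_c11     = rng.normal(1290, 90, size=5)
pilo_lev     = rng.normal(1000, 90, size=5)
healthy      = rng.normal(1700, 150, size=5)

f_stat, p_anova = stats.f_oneway(pilo_control, pilo_c11, pilo_lev, healthy)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Dunnett's test compares each treatment group with the single control group.
res = stats.dunnett(pilo_c11, pilo_lev, healthy, control=pilo_control)
for name, p in zip(["C11", "LEV", "healthy control"], res.pvalue):
    print(f"{name} vs PILO control: p = {p:.4f}")
```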
2021-03-30T05:11:25.670Z
2021-03-01T00:00:00.000
{ "year": 2021, "sha1": "4db487602d086f79a4f708e570384499a8161bf2", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/22/6/3240/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4db487602d086f79a4f708e570384499a8161bf2", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
169369047
pes2o/s2orc
v3-fos-license
Health and Safety Analysis of Light Rail Transit Projects in Palembang Several sports in the XVIII Asian Games will be held in Palembang city in 2018. To support these activities and facilities, supporting infrastructure such as Light Rail Transit (LRT) is needed. The city government of Palembang targets the LRT to start operating in June 2018, with a total length of 23.4 km and a 12-meter LRT track width for two lines. PT. Waskita Karya (Persero) Tbk is the implementer of the LRT construction project. In an effort to prevent or reduce accidents, the construction requires an occupational safety and health (OSH, K3) program. The purpose of this study is to calculate the frequency and impact of OHS risks on the Light Rail Transit (LRT) project in Palembang. This research uses field observation and interview methods, and conducts an assessment based on the risk assessment matrix from the AS/NZS 4360:2004 risk management standard. The matrix risk assessment methods are derived from Australian Standards/New Zealand Standards (AS/NZS) 4360:2004 and ISO 31000:2009. From the 60 variables tested, 78% of the factors fell into group L (low), 18% into the Medium (M) group, and 4% into group H (high). Introduction In the year 2018, the XVIII Asian Games will be held, partly in the city of Palembang. To support its implementation, supporting infrastructure such as Light Rail Transit (LRT) is required. The city government of Palembang targets the LRT to start operating in June 2018 with a total length of 23.4 km. PT. Waskita Karya (Persero) Tbk is the implementer of the LRT construction project. To support the construction, safety and health (K3) programs are required in an effort to prevent or reduce the occurrence of work accidents. Occupational accidents often occur due to the lack of fulfillment of requirements in the implementation of occupational safety and health. In this case, the government, as the organizer of the State, has an obligation to provide protection to the workforce. This is realized by the government with the issuance of regulations such as: RI Law No. 1 of 1970 concerning work safety, Law No. 3 of 1992 on Social Security of Workers, and Regulation of the Minister of Manpower No. Per.05/Men/1996 on the OHS management system. But in reality, project implementers often ignore the requirements and regulations on OSH. This is due to a lack of awareness of how much risk is borne by the workforce and the company. Besides, the regulation on OSH is not supported by strict legal action and severe sanctions, so many project implementers neglect the safety and health of their workforce. The possibility of accidents occurring in a construction project is one of the causes of disruption or cessation of project work activities. Therefore, construction work is required to implement an occupational safety and health management (OSH) system at work sites, where safety and health issues are also part of project planning and control [1]. The hypothesis proposed is that the rate of work accidents in Indonesia is still relatively high. Data on accidents in Indonesia are still limited. The assessment method uses a risk assessment matrix sourced from the AS/NZS 4360:2004 Risk Management Standard and AS/NZS ISO 31000:2009. 
Previous research found the highest risks as follows: in soil work, lifting material with a service crane, with the variable of workers and facilities being struck by material, at a risk index of 5.88; in foundation work, installation of the reinforced steel frame, with the variable of an employee falling, at 5.35; in upper-structure work, lifting material with a tower crane, with the variables of material falling from height and workers falling, at 6.63; in ceiling installation, the risk of workers falling from height, at 5.02; in wall and ceramic work, the risk of electric shock, at 5.24; and in plumbing work, i.e., plumbing installation, the risk of workers falling from height, at 5.27 [2][3][4][5][6][7][8]. Definition of Light Rail Transit (LRT) Light rail or Light Rail Transit is a passenger rail system operating in urban areas whose construction is light and which can be operated along with other vehicle traffic or on special tracks dedicated to light rail. Light rail is widely used in various European countries and has undergone modernization, such as automation, so that it can be operated without a driver, can run on a dedicated track, and uses a low floor (about 30 cm), called low-floor LRT, to facilitate passenger boarding and alighting. In general, the main structure of a Light Rail Transit (LRT) project consists of sub-structure work, which includes foundation work, shop floor work, pile cap work and pier work, and upper-structure work, which includes pier head work and u-shape (girder) work. Risk management According to the AS/NZS 4360 standard on risk management, the risk management process consists of several steps, which can be seen in Figure 1. 2.4. Identify Safety and health risk Safety and health risks are identified based on the frequency and impact of each risk factor. According to [10], the approach used to measure the likelihood of occurrence of a risk is frequency and impact. The questionnaire answers are processed to produce data in the form of mean scores, i.e., the average frequency and impact. The risk identification stage is followed by the risk analysis stage to get the value of the risk index; the larger the frequency and impact values, the higher the risk index. Safety and health risk analysis Risk identification yields the mean frequency score and the mean impact score. At this stage, the identified risk factors are analyzed to obtain risk index values. The risk index is derived from the multiplication of the mean frequency score and the mean impact score. Research Location The research location is the LRT development project in South Sumatra province, specifically Zone 4, with a 4 km long trace. Research Variables The research variables have been established based on previous research, where each variable is a risk of accident that occurs in each type of work. The variables include electrical installation work, equipment mobilization, mobilization of reinforcing steel by crane lift and manually, mounting rings on columns, wire installation on rings and columns, installation of the reinforced steel frame, concrete work, unloading work, formwork, casting, girder erection, scaffolding disassembly, lifting materials with a crane truck, cleaning dust and dirt with a compressor in floor plate work, the use of equipment (stamper, vibrators, etc.), welding work, and work on the river. These variables are used to analyze the risk index and risk level in the development of LRT Zone 4 Palembang. 
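To make the risk-index procedure described in the risk analysis section above concrete, here is a minimal sketch (our own, with hypothetical questionnaire scores and assumed level cut-offs; the study's own AS/NZS 4360 matrix should be used in practice) that computes a risk index as the product of the mean frequency and mean impact scores and assigns a low/medium/high level.

```python
# Illustrative risk-index calculation and simplified AS/NZS 4360-style grouping.
def risk_index(frequency_scores, impact_scores):
    """Mean frequency score times mean impact score for one risk factor."""
    mean_f = sum(frequency_scores) / len(frequency_scores)
    mean_i = sum(impact_scores) / len(impact_scores)
    return mean_f * mean_i

def risk_level(index, low_max=4.0, medium_max=8.0):
    """Cut-offs are assumptions for illustration only."""
    if index <= low_max:
        return "L (low)"
    if index <= medium_max:
        return "M (medium)"
    return "H (high)"

# Hypothetical questionnaire answers (1-5 Likert scale) for one work item:
freq = [2, 2, 3, 2, 3]
imp = [3, 3, 2, 3, 3]
idx = risk_index(freq, imp)
print(f"risk index = {idx:.2f}, level = {risk_level(idx)}")   # -> 6.72, M (medium)
```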
The method used to obtain the risk index is a field survey conducted by distributing questionnaires, and the level of risk is analyzed with the AS/NZS 4360:2004 matrix. ANALYSIS AND DISCUSSION The results of the calculation of the risk index and risk level for each risk can be seen in Table 1. The data obtained and the results of the field observations are further processed based on the method described above. The K3 risk identification was performed on the basis of a questionnaire distributed to LRT construction workers in Zone 4, covering 60 risk variables according to the type of work performed. After the mean frequency and impact scores were obtained, the risk index was calculated. The risk index is obtained by multiplying the average frequency and the average impact of each factor. The highest risk index value, 10.04, was found in welding work, with the consequence of workers inhaling welding smoke. The lowest risk index value, 3.66, was found in electrical installation work, with the consequence of electric shock. The risk index results are then grouped using the AS/NZS 4360:2004 matrix. 5. CONCLUSION Based on the results of the analysis and discussion of the data obtained from interviews with respondents as described in chapter IV, the following conclusions can be drawn: a. From the means of frequency and impact, the highest risk index value is 10.04, for welding work, with the consequence factor of inhaling welding smoke, whereas the lowest risk index value is 3.66, for electrical installation work, with the consequence factor of electric shock. b. From the risk grouping based on the AS/NZS 4360:2004 matrix, 78% of the factors are in the L (low) group, 18% in the Medium (M) group and 4% in the H (high) group.
2019-05-30T23:47:38.847Z
2018-07-01T00:00:00.000
{ "year": 2019, "sha1": "0f0237a211278188a625a3f4b74906e6fd362010", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1198/8/082017", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "5d275da1fcca4529553c08dc60d99fe0de02e075", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "extfieldsofstudy": [ "Business", "Physics" ] }
257109172
pes2o/s2orc
v3-fos-license
Quantum control of excitons for reversible heat transfer Lasers, photovoltaics, and thermoelectrically-pumped light emitting diodes are thermodynamic machines which use excitons (electron-hole pairs) as the working medium. The heat transfers in such devices are highly irreversible, leading to low efficiencies. Here we predict that reversible heat transfers between a quantum-dot exciton and its phonon environment can be induced by laser pulses. We calculate the heat transfer when a quantum-dot exciton is driven by a chirped laser pulse. The reversibility of this heat transfer is quantified by the efficiency of a heat engine in which it forms the hot stroke, which we predict to reach 95% of the Carnot limit. This performance is achieved by using the time-dependent laser-dressing of the exciton to control the heat current and exciton temperature. We conclude that reversible heat transfers can be achieved in excitonic thermal machines, allowing substantial improvements in their efficiency. Quantum heat engines hold potential to achieve efficiencies set by the Carnot limit, however loss of energy between quasiparticles and their environment prevents experimental realisation. Here, the authors propose a model to control this heat flow using chirped laser pulses. E lectron-hole pairs or excitons are essential in many different devices, forming a working medium that allows for the conversion between heat, light, and work. Important examples are photovoltaics 1 and photosynthetic reaction centres [2][3][4] , which are thermal machines in which electron-hole pairs are created from thermal radiation at a high temperature, release heat to their surroundings at a low temperature, and thereby generate work. Thermoelectrically pumped light emitting diodes 5 and laser cooling [6][7][8] involve similar processes operating in reverse, with the work done on the electron-hole pairs allowing them to absorb heat from their surroundings and transfer it to the electromagnetic field. The key requirements for thermal machines such as these are high thermodynamic efficiency, η, and high power, but these requirements conflict and must be balanced against one another. For a heat engine the ultimate limit is given by the Carnot efficiency η c = 1 − T c /T h , corresponding to a reversible process, but as this implies zero power a more pragmatic goal is the endoreversible efficiency at maximum power 9 , or Chambadal-Novikov efficiency, η mp ¼ 1 À ffiffiffiffiffiffiffiffiffiffiffiffi T c =T h p < η c . The possibility of exploiting quantum effects to enhance the performance of thermal machines is explored in recent work on quantum heat engines, covering systems including ion traps 10 , electron-tunnelling devices [11][12][13][14][15] and micromechanical resonators [16][17][18] . For exciton-photon thermal machines, such as reaction centres, it has been predicted that quantum coherence can lead to enhanced performance [2][3][4] . However, even with such improvements their efficiency would remain well below thermodynamic limits 19 . Fundamentally this reflects the absence of methods for controlling the heat flows between excitons and their surroundings. Indeed, to reach the Carnot efficiency these heat flows should occur reversibly, i.e., over a negligible temperature difference. This requires not just control of the magnitude of the heat flows, but also of the exciton temperature. 
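As a quick numerical check of the two limits quoted above, the snippet below evaluates the Carnot and Chambadal-Novikov efficiencies for the reservoir temperatures used later in the paper (20 K and 2.7 K); it only restates the formulas and is not part of the authors' calculation.

```python
import math

def carnot(t_c, t_h):
    # Carnot limit for a heat engine operating between hot (t_h) and cold (t_c) reservoirs
    return 1.0 - t_c / t_h

def chambadal_novikov(t_c, t_h):
    # Endoreversible efficiency at maximum power
    return 1.0 - math.sqrt(t_c / t_h)

t_h, t_c = 20.0, 2.7  # K; the phonon-bath and cold-reservoir temperatures considered in the paper
print(f"eta_c  = {carnot(t_c, t_h):.3f}")             # ~0.87, as quoted in the text
print(f"eta_mp = {chambadal_novikov(t_c, t_h):.3f}")  # ~0.63
```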
In this article we show that controlled heat transfers between excitons and their surroundings can be achieved by driving the excitons with laser pulses. We consider quantum-dot excitons, for which quantum control [20][21][22] has been implemented using Rabi oscillations [23][24][25] and adiabatic rapid passage [26][27][28] . These experiments have been modelled by treating the dot as a two-level system coupled to a phonon bath, within a Born-Markov theory that accounts for the laser-dressing of the exciton in the Floquet picture 24,29 . We combine such a theory with the phase-marker approach 30 to evaluate the heat flow between excitons and phonons, when the former are driven by linearly chirped Gaussian pulses. We show that heat can be transferred from the phonon bath to the exciton, and assess the performance of a heat engine in which this forms the hot stroke. Typical pulses give efficiencies comparable to the Chambadal-Novikov result. However, for some pulses we obtain efficiencies up to 95% of the Carnot efficiency, showing that reversible heat transfers can be achieved. Our work shows that the amplitude and frequency profile of a driving laser pulse can be tuned to give complete control of exciton heat flows and exciton temperatures on picosecond timescales. This opens up the possibility of reaching thermodynamic efficiency limits in exciton-photon thermal machines. Results Model. We consider an InGaAs/GaAs quantum-dot, driven by an ultrafast laser pulse with a time-dependent amplitude and frequency. As illustrated in Fig. 1a, we model the dot as a twolevel system, consisting of the ground state, |0〉, and a single one-exciton state, |X〉. We consider a low temperature, T = 20 K, and near-resonant excitation, so that other electronic states may be neglected. Furthermore, we suppose that the driving pulses are short compared with the radiative lifetime, which is generally in the nanosecond range 31 , and so neglect spontaneous emission. In this low-temperature strong-driving regime the dominant source of dissipation and dephasing is the coupling to acoustic phonons 24,[32][33][34][35] . Including such phonons we have for the Hamiltonian, in the rotating-wave approximation 32 , Here and in the following we set ħ = 1, and use pseudospin operators, The terms involving summations in Eq. (1) correspond to the energy of the phonon bath,Ĥ b , and the exciton-phonon coupling,Ĥ c . The phonon bath is characterised by its spectral density, JðωÞ ¼ P k g 2 k δðω À ω k Þ, with the super-Ohmic form We take the value of A = 11.2 fs K −1 measured by Ramsay et al. 36 , and use a similar value, ħω c = 2 meV, for the cut-off frequency. (The cut-off depends on the geometry of the dot 32 . Ramsay et al. report a value of 1.44 meV for dots with height 3-4 nm and base diameter 25-30 nm). The remaining terms in Eq. (1) form the system Hamiltonian,Ĥ s , and describe the exciton driven by the laser pulse. This form is obtained by expressing the electric field of the laser in terms of its time-dependent amplitude and frequency, EðtÞ ¼ jEðtÞj cos R ωðtÞdt. This leads to a time-dependent Rabi frequency Ω(t) = d|E(t)|, where d is the transition dipole moment, and a time-dependent exciton-laser detuning, Δ (t) = ω x − ω(t). Note thatĤ is referred to a time-dependent basis, obtained from the fixed basis (Schrödinger picture) by the unitary transformationÛðtÞ ¼ e iŝ z R ωðtÞdt . 
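To make the laser-dressing concrete, one can diagonalise the rotating-frame two-level Hamiltonian numerically. The sketch below assumes the conventional pseudospin form H_s(t) = Δ(t)ŝ_z + Ω(t)ŝ_x (ħ = 1), whose eigenvalues ±Λ(t)/2 with Λ = √(Ω² + Δ²) match the dressed-state energies quoted later in the Methods; the sign conventions and the sample detunings are an illustration, not the authors' code.

```python
import numpy as np

# Pseudospin operators (hbar = 1, s_i = sigma_i / 2)
sx = 0.5 * np.array([[0.0, 1.0], [1.0, 0.0]])
sz = 0.5 * np.array([[1.0, 0.0], [0.0, -1.0]])

def h_s(delta, omega):
    """Rotating-frame system Hamiltonian for detuning Delta(t) and Rabi frequency Omega(t).
    The overall signs are a chosen convention; the eigenvalues +/- Lambda/2 do not depend on them."""
    return delta * sz + omega * sx

def dressed_splitting(delta, omega):
    # Lambda(t) = sqrt(Omega^2 + Delta^2); the gap at the avoided crossing (Delta = 0) equals Omega
    return np.hypot(delta, omega)

omega0 = 1.0  # drive strength, arbitrary units
for delta in (-3.0, -1.0, 0.0, 1.0, 3.0):
    evals = np.linalg.eigvalsh(h_s(delta, omega0))
    print(f"Delta = {delta:+.1f}: eigenvalues = {evals}, Lambda = {dressed_splitting(delta, omega0):.3f}")
```

Sweeping Δ through zero at fixed Ω reproduces the avoided crossing sketched in Fig. 1b.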
As in previous work on adiabatic rapid passage 26,27,37,38 we consider driving by linearly chirped Gaussian pulses, for which the Rabi splitting Ω(t) is a Gaussian of duration τ, ΩðtÞ ¼ Ω 0 e Àt 2 =2τ 2 , and the frequency ω(t) sweeps linearly in time, ω(t) = (ω x − δ) + αt. Here α is the temporal chirp, and δ is the detuning of the pulse centre frequency below the exciton. To connect with experiments we suppose that the pulse is generated by applying a spectral chirp a to a bandwidth-limited Gaussian of pulse area Θ 0 and duration τ 0 , so that 29,38-40 ð3Þ Controlling heat flows. To explain how exciton-phonon heat flows can be controlled we recall the mechanism of adiabatic rapid passage using chirped pulses 21 , as illustrated in Fig. 1b. This figure shows a typical example of the evolution of the dressedstate energies as the driving frequency sweeps through the resonance. These energies are given by the eigenvalues ofĤ s , and are Figure 1b shows the situation for a positively chirped pulse which crosses through the exciton resonance. In that case the lower energy state at early times in the rotating frame is the zero exciton state, whereas that at late times is the one-exciton state. The driving field splits the levels and generates an avoided crossing at Δ = 0, so that the adiabatic evolution takes the dot, initially in its ground state, into the oneexciton state. The dressed states are coherent superpositions of the zero and one-exciton states, and are coupled together by the deformationpotential interaction with acoustic phonons 24,29,32,34,35,39,41,42 . Thus, as illustrated in Fig. 1b, a transition from the lower to the upper dressed state can occur with the absorption of a phonon of energy ħΛ, and vice versa with the emission of a phonon 39 . Such processes appear in a master equation for the exciton density matrix, which has been derived using standard techniques 29 , with the rates γ e = π[n B (Λ) + 1]J(Λ)A 2 /2 for emission and γ a = πn B (Λ) J(Λ)A 2 /2 for absorption. The factor A = Ω/Λ comes from the mixing of the zero and one-exciton states into the dressed states, and the phonon occupation function n B and spectral density J are evaluated at the transition frequency Λ. Note that both A and Λ, and hence the rates, are time-dependent. Thus, the form of the driving pulse gives time-dependent control of the phonon emission and absorption rates. Such control, dubbed dynamic vibronic coupling 43 , has been exploited in exciton and biexciton state preparation making active use of phonons [44][45][46][47][48] . To evaluate the heat flows in these processes we have derived and solved the equation-of-motion for the characteristic function of the heat distribution 30 , following the approach used in Eastham et al. 29 . This goes beyond previous work on heat distributions 30,49 to allow for the time-dependence of the driving pulse; more generally, it allows for time-dependent system Hamiltonians, as is required to model quantum-control experiments. Phonon cooling with chirped pulses. Figure 2 shows the predicted heat transferred from the phonons to the exciton for driving by a single chirped Gaussian pulse, with the dot starting in its ground state. The figure shows how the heat depends on the spectral chirp and pulse area, for τ 0 = 2 ps, corresponding to a typical experimental value, and three values of the detuning. Considering first the resonant case, δ = 0, shown in Fig. 
2a, we see that positively chirped pulses lead to heat transfer from the phonons to the exciton, i.e., a cooling of the phonon environment and a heating of the exciton. In contrast, negatively chirped and unchirped pulses lead to heating of the phonons. This can be explained in a similar way to the dependence of the exciton occupation on the sign of chirp 39 : for positive chirp the ground state of the dot is continuously connected to the lower-energy dressed state, so that only phonon absorption is possible, whereas for negative chirp it is connected to the upper-energy dressed state, and phonon emission dominates. The implications for heat transfer follow because, in both cases, the initial density matrix is thermal in the dressed-state basis. For positive chirp this thermal state has zero temperature, since only the lower level is populated, so it absorbs heat from the phonon bath at T ph = 20 K. However, for negative chirp the initial density matrix has a negative temperatureit is inverted in the dressed-state basisand as such the state emits heat into any positive-temperature environment 50,51 . Figure 2 also shows results for pulses that are detuned from the exciton transition, such that the frequency at the peak of the pulse lies either above the exciton (negative detuning, Fig. 2b) or below it (positive detuning, Fig. 2c). For these parameters the sign of the heat flow becomes independent of the sign of the chirp. With positive detuning the heat flow is from the phonon bath to the exciton, giving a cooling of the phonon environment, whereas for negative detuning heat flows in the opposite direction. This is because the parameters are such that the field is not significant when the frequency sweeps through the exciton, and there is no avoided crossing. Instead the sign of the detuning determines which dressed-state has the greatest overlap with the initial (ground) state, and hence has the largest occupation in the initial density matrix. This then leads to the observed directions of heat flow. Phonon absorption by laser-dressed excitons has previously been predicted by Gauger and Wabnig 49 . However, these authors investigated continuous-wave excitation, and did not address the capabilities of pulsed excitation in time-dependent thermodynamic processes as evaluated here. Figure 2 indicates that chirping offers a significant enhancement of heat absorption. For example, for the positively detuned case shown in Fig. 2c the maximum heat for a = 0 is Q/ħ = 0.63 ps −1 , at Θ 0 = 5.3π, but maximum over the full region shown is one-and-a-half times bigger, Q/ħ = 0.95 ps −1 . This is achieved at the boundary of the plot, a = 40 ps 2 , Θ 0 = 9π. The transfer of heat from phonons to excitons which occurs over parts of Fig. 2 could be used to implement a chiller, following the thermodynamic cycle depicted in Fig. 3a. The first stroke of this cycle, shown by the solid line, is the heat absorption process discussed above. This stroke begins with the dot in its ground state, and ends in a high entropy state with temperature close to that of the phonon reservoir. This heat-absorption stroke is assumed to be short, τ ( τ sp , so that spontaneous emission can be neglected. However, the dot would then be left undriven for a time sufficient for spontaneous emission to return it to its ground state. This second process closes the cycle, which can then be repeated. 
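The phonon absorption and emission rates quoted in the preceding discussion, γ_a = πn_B(Λ)J(Λ)(Ω/Λ)²/2 and γ_e = π[n_B(Λ)+1]J(Λ)(Ω/Λ)²/2, can be evaluated along a chirped Gaussian drive once Ω(t) and the detuning Δ(t) = δ − αt are specified. The sketch below does exactly that; the spectral-density prefactor and cut-off, and all pulse parameters, are illustrative placeholders rather than the paper's fitted values, and the point is simply that the detailed-balance ratio γ_a/γ_e = exp(−ħΛ/k_BT) is built in.

```python
import numpy as np

KB_OVER_HBAR = 0.1309  # k_B / hbar in rad ps^-1 K^-1

def n_bose(lam, temperature):
    """Bose occupation at angular frequency lam (rad/ps) and temperature (K)."""
    return 1.0 / np.expm1(lam / (KB_OVER_HBAR * temperature))

def spectral_density(lam, a=0.03, omega_c=3.0):
    """Illustrative super-Ohmic J(w) ~ a*w^3*exp(-w^2/w_c^2); not the paper's measured parameters."""
    return a * lam**3 * np.exp(-(lam / omega_c) ** 2)

def phonon_rates(omega, delta, temperature=20.0):
    """gamma_a, gamma_e for the laser-dressed exciton, following the rate expressions in the text."""
    lam = np.hypot(omega, delta)
    mix = (omega / lam) ** 2          # A^2 = (Omega/Lambda)^2
    j, nb = spectral_density(lam), n_bose(lam, temperature)
    gamma_a = 0.5 * np.pi * nb * j * mix
    gamma_e = 0.5 * np.pi * (nb + 1.0) * j * mix
    return gamma_a, gamma_e

def pulse(t, omega0=2.0, tau=5.0, delta0=0.0, alpha=0.2):
    """Linearly chirped Gaussian drive: Omega(t) = Omega0*exp(-t^2/2tau^2), Delta(t) = delta0 - alpha*t."""
    return omega0 * np.exp(-t**2 / (2 * tau**2)), delta0 - alpha * t

for t in (-10.0, -5.0, 0.0, 5.0, 10.0):
    om, de = pulse(t)
    ga, ge = phonon_rates(om, de)
    print(f"t = {t:+5.1f} ps: Lambda = {np.hypot(om, de):.2f} rad/ps, "
          f"gamma_a = {ga:.4f}, gamma_e = {ge:.4f}, gamma_a/gamma_e = {ga / ge:.3f}")
```

Because both Ω and Λ vary along the pulse, the rates are time dependent, which is the handle used above to steer the heat flow.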
The overall effect of the cycle is to extract heat from the phonon reservoir and deposit it, along with the work done by the driving laser, in the electromagnetic environment. The focus of the present work is on the exciton-phonon heat transfer, and a detailed analysis and optimisation of the performance of the full cooling cycle has not been undertaken. However, it is interesting to estimate the cooling power. For our calculations to be valid we require τ sp ) τ, so the time for the cycle envisaged in Fig. 3a is approximately τ sp . Thus the cooling power is Q/τ sp (and is maximised by maximising the heat absorbed by the driving stroke, Q). The specific heat-absorption stroke depicted corresponds to a pulse with a = 10 ps 2 , Θ 0 = 9π, and τ 0 = 0.5 ps; we refer to this pulse as the Carnot pulse, and discuss its properties further below. It gives a heat absorption of Q/ħ = 1.3 ps −1 , which is 72% of the maximum heat that could be absorbed by the two-level system, k B T ph ln 2. Taking τ sp = 1 ns leads to an estimated cooling power of 140 fW. We note that this is much lower than the estimate of 3 pW given by Gauger and Wabnig for their steady-state approach 49 , but that should be expected because they take a much smaller τ sp = 10 ps. The energy of the Carnot pulse would be 8 pJ for a dot with a transition dipole moment 52 d = 7 × 10 −29 C m at the centre of a Gaussian beam of waist 1 μm. This is much greater than the t b a Fig. 1 Model and heat transfer mechanism. a Illustration of the system, consisting of a quantum-dot exciton transition driven by a laser field, and interacting with a heat bath of phonons. b The mechanism of heat transfer, in which phonons from the heat bath are absorbed or emitted in transitions between the laser-dressed exciton states. The graph illustrates the evolution of the dressed-state energies in a typical adiabatic rapid passage process exciton or photon energy, and therefore also the heat absorption. The work done by the driving, which is the energy absorbed from the laser pulse, is W = ħω x p x − Q ≈ ħω x p x , where p x is the probability the dot is left in the excited state. For the Carnot pulse we find p x = 0.63, so the cooling efficiency would be Q/W ≈ 0.1% with ħω x = 1.5 eV. This is very low because the energy transferred to the electromagnetic field by the spontaneous emission is wasted 53 . Heat engines and thermodynamic efficiency. We now consider the thermodynamics of the exciton-phonon heat transfer process in the context of a heat engine. This will allow us to evaluate the thermodynamic performance achievable, in a machine using such a process, in comparison to the fundamental Carnot limit. To do this we consider the thermodynamic cycle illustrated in Fig. 3b, in which heat is absorbed from the phonon reservoir at a temperature T ph . For a heat engine the absorbed heat must be transferred to a reservoir at a lower temperature T c < T ph . We suppose that this is done by a reversible process, so that the dot returns to its original state along the parts of the Carnot cycle shown by the dotted lines. Since the cycle is closed by a reversible process any departure from the Carnot efficiency can be attributed to irreversibility in the exciton-phonon heat transfer. In principle the cold stroke could be implemented using resonant electron-hole tunnelling into leads that are colder than the dot; a similar process (resonant electron tunnelling) has recently been used to implement an electronic quantum-dot heat engine 11 . 
Our theory allows us to calculate both the heat absorbed from the hot phonon reservoir, Q, and the entropy of the dot after the hot stroke, S. Since the initial state for the hot stroke is presumed to be the dot ground-state, with zero entropy, the cold stroke must increase the entropy of the cold reservoir by S. The heat supplied to the cold reservoir is thus T c S, implying the work done by the cycle will be Q − T c S, and the efficiency η = 1 − T c S/Q. In the following we will take the cold reservoir temperature T c = 2.7 K. Figure 4 shows the dependence of the efficiency on the pulse area and spectral chirp, for two different unchirped pulse durations, and two different detunings. The efficiency is shown as a fraction of the Carnot efficiency at these temperatures, η c = 1 − T c /T ph = 0.87. Figure 4a gives the results for zero detuning, as is usual in an adiabatic rapid passage experiment, and τ 0 = 2 ps. In this case we find a peak efficiency of 0.61η c , at a pulse area Θ 0 = 6.3π and spectral chirp a = 8.0 ps 2 . Although some way below the Carnot limit this is nonetheless 80% of the Chambadal-Novikov efficiency at these temperatures, η mp = 0.63. Figure 4b shows the effect of introducing a positive detuning. As can be seen, this leads to considerably higher efficiencies. We note that as the chirp increases from zero to positive values the efficiency first rapidly increases, before approaching a limit. A similar behaviour is seen in the heat transfer (Fig. 2c). We believe this saturation can be attributed to the way the temporal chirp, α, and pulse duration, τ, depend on the spectral chirp, as given by Eqs. (2) and (3). In particular, for large a the temporal chirp α decreases with a, while τ increases, such that the product ατ asymptotes to 1=τ 0 . Figure 4c, d show the corresponding results for a smaller value of τ 0 , i.e., a higher bandwidth driving pulse. This leads to higher efficiencies which, for the positively-detuned case shown in Fig. 4d, reach 0.95η c . This maximum is achieved at the upper boundary of the plot Θ 0 = 9π, in the region of positive chirp a ≳ 5 ps 2 . Thus we conclude that such pulses lead to reversible exciton-phonon heat transfers. The reversibility of the heat transfer process can also be quantified by the entropy generation. Figure 5 shows the entropy of the dot for two choices of pulse parameters. One of these, which we refer to as the Carnot pulse, corresponds to a point in Fig. 4d in the maximum efficiency region, Θ 0 = 9π, a = 10 ps 2 . The other, which we choose for comparison with the chirped case, is the point of maximum efficiency in Fig. 4b along the line of zero chirp (Θ 0 = 6.0π, η = 0.84η c , Q/ħ = 0.62 ps −1 ). We also plot, as the dashed line, the corresponding entropy decrease of the phonon reservoir, Q/T ph , so that the gap between the two curves is the overall entropy generation. As one would expect from the difference in efficiencies, the entropy generation in the Carnot pulse is lower than that in the unchirped comparator. Effective temperature and reversibility. To understand why some pulses induce nearly reversible heat transfers, and others do not, we consider the temperature of the dot. In general a driven system such as the dot will not be in a thermal state and, as such, will not have a well-defined temperature. Indeed in our case the exciton density matrix is not thermal in the energy eigenbasis. 
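Before turning to the effective temperature, the efficiency bookkeeping used above, η = 1 − T_cS/Q, is straightforward to evaluate once a stroke's absorbed heat and final entropy are known. The snippet below uses the Carnot-pulse heat Q/ħ = 1.3 ps⁻¹ quoted earlier together with an assumed entropy value (the paper's computed S is not reproduced here), so the printed efficiency is illustrative only.

```python
HBAR = 1.054_571_817e-34  # J s
KB = 1.380_649e-23        # J / K

def engine_efficiency(q_hot, entropy, t_cold):
    """eta = 1 - T_c*S/Q for a cycle absorbing q_hot (J) from the hot bath and
    closing with a reversible cold stroke that dumps T_c*S into the cold bath."""
    return 1.0 - t_cold * entropy / q_hot

t_ph, t_c = 20.0, 2.7
eta_carnot = 1.0 - t_c / t_ph

q_hot = 1.3e12 * HBAR          # Q/hbar = 1.3 ps^-1 converted to joules (Carnot pulse, from the text)
entropy = 0.9 * KB * 0.6931    # assumed dot entropy after the hot stroke, as a fraction of k_B ln 2

eta = engine_efficiency(q_hot, entropy, t_c)
print(f"eta = {eta:.3f}  ({eta / eta_carnot:.2f} of the Carnot limit {eta_carnot:.3f})")
```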
However, the dressed-state populations do reach thermal equilibrium with the phonons in the steady-state, because the transition rates in the dissipator obey detailed balance γ a =γ e ¼ e À hΛ=k B T ph . This relation holds more generally, suggesting that in the context of the phonon dissipation we should take the dressed-state populations, p + and p − , to define the dot This definition of temperature is consistent with the form of the dissipator, and will allow us to interpret the entropy generation in the dissipative coupling. Figure 6a shows the temperature of the dot, as a function of time, for the chirped Carnot pulse and the unchirped comparator pulse. We also show, in Fig. 6b, the corresponding heat currents from the phonon bath to the exciton. For the unchirped pulse the temperature of the dot, which varies during the pulse, is significantly different from that of the bath while the heat is flowing. Thus there is entropy generated throughout the process. For the chirped Carnot pulse, however, there is an interval of time during which heat flows and the temperature is constant. This is clearly an isothermal process, but it is also one in which the dot and bath temperatures are very close. It thus produces very little entropy, and is nearly reversible. This isothermal part of the heat absorption process can also be seen on the temperature-entropy plots in Fig. 3, where the solid lines are results obtained for the chirped Carnot pulse. It may be noted that the duration of the chirped Carnot pulse, τ = 20 ps, is significantly greater than that of the unchirped comparator, τ = 2 ps. However, we have calculated the maximum efficiency for an unchirped pulse of these two durations, and find in both cases the same value (0.82η c ). Thus the increased duration associated with the chirping does not account for the change in efficiency. The reversible isothermal part of the heat absorption process is made possible by the time-dependence of the dressed-state energies, which are shown for both pulses in Fig. 6c. For the chirped Carnot pulse the energy splitting is reducing during the heat transfer. This would, for an adiabatic process, reduce the temperature in line with Eq. (4). Here it compensates for the increase in temperature that would be expected as heat flows from the phonons to the exciton. The result is an isothermal heat transfer, which can occur at the bath temperature and hence be reversible. An alternative view is in terms of the scattering rates: reducing the splitting increases the ratio between phonon absorption and emission, moving the detailed-balance equilibrium for the dressed-state populations, and driving a heat flow over a negligible temperature difference. Discussion In this article we have shown that a theory of open quantum systems 29 can be extended to allow the calculation of quantum thermodynamic quantities. Unlike previous work 30 our theory applies to time-dependent Hamiltonians and, therefore, Fig. 3 Heat engines and chillers. a The cycle for a quantum-dot chiller. The solid curve in the temperature-entropy plot shows these quantities for the quantum-dot as it is driven by a laser pulse and absorbs heat from the phonon bath. The temperature shown is defined by Eq. (4). The wavy lines depict the subsequent radiative decay, which returns the quantum dot to its ground state. The upper (lower) square box in the engine diagram represents the phonon (electromagnetic) environment, and the circle the quantum dot. b The cycle for a quantum-dot heat engine. 
This comprises the same heat-absorption stroke as the chiller, but the cycle is then closed by a partial Carnot cycle, which implements a reversible heat transfer to a bath at a temperature T c < T ph quantum-control experiments. Using this approach we have studied the thermodynamics of a quantum-dot exciton driven by a chirped laser pulse, and evaluated the exciton-phonon heat flow, entropy generation, and effective exciton temperature during the pulse. We have predicted that certain pulses, which are readily accessible experimentally, induce heat transfers from the phonons to the excitons, and that, in some cases, this heat transfer approaches the ideal reversible limit. In the context of a heat engine such a process gives an efficiency close to the Carnot limit. More generally, our results show that shaped laser pulses can be used to implement controlled thermodynamic processes for a single exciton transition interacting with the heat bath of phonons. The laser pulse amplitude allows for modulation of the heat flow, a feature which is essential for the implementation of thermodynamic cycles, yet is lacking in physical implementations of quantum thermodynamic machines. The pulse profile also allows simultaneous, yet independent, control over the effective temperature of the dressed-exciton system. Together, these effects allow for the implementation of any thermodynamic process in the single-qubit single-reservoir system. For example, adiabatic heating or cooling could be implemented using weak chirped pulses, for which the small pulse amplitude implies a small heat flow. These processes may be useful for high-efficiency photovoltaics, by allowing the hot excitons created by light to be cooled before they release heat. Another application of our work would be for optical cooling at low temperatures, where the freezing out of the optic phonons makes anti-Stokes cooling impossible. However, the heat absorbed in our simulations is approaching the maximum achievable for a two-level emitter, of order k B T ph per cycle, and the cooling power is limited by the use of a single transition and the need for the exciton to subsequently decay, rather than by the exciton-phonon coupling. As such it would be necessary to scale to an ensemble of emitters to reach a useful cooling power, and also to reduce the radiative lifetime. This would be challenging in quantum dots, but could be explored in other optically addressable solid-state systems, such as colour centres 54 . Photon counting of exciton luminescence under pulsed excitation 25,27 , or nanoscale current measurements 23,26 , provides direct access to the probability distribution of the exciton occupation, and hence thermodynamic quantities such as entropy. Our theory could be tested by comparison against such experiments. Some additional thermodynamic information could be obtained optically: spectrally-resolved luminescence, for example, could give the dressed-state occupations, and hence the effective temperature. A direct measurement of the heat based on thermal effects would not be possible due to their small size. One approach could be to determine the work done by the driving pulse from its absorption, and use the first law of thermodynamics to calculate the heat. Another would be to obtain the heat from theory, fitted and validated using its predictions for quantities such as luminescence. 
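The effective dot temperature introduced above (Eq. (4)) is defined through the dressed-state populations; the explicit expression is not reproduced in this text, but the detailed-balance argument implies T = ħΛ/[k_B ln(p_−/p_+)], with p_± the upper and lower dressed-state populations. A minimal sketch, with made-up populations:

```python
import math

HBAR = 1.054_571_817e-34  # J s
KB = 1.380_649e-23        # J / K

def effective_temperature(p_lower, p_upper, splitting_rad_per_ps):
    """Effective dot temperature inferred from the dressed-state populations, assuming a
    thermal (detailed-balance) ratio p_upper/p_lower = exp(-hbar*Lambda/(kB*T)).
    Returns T in kelvin; a negative value signals population inversion."""
    lam = splitting_rad_per_ps * 1e12  # rad/s
    return HBAR * lam / (KB * math.log(p_lower / p_upper))

# Illustrative numbers, not taken from the paper's simulations
print(effective_temperature(p_lower=0.7, p_upper=0.3, splitting_rad_per_ps=2.0))  # ~ +18 K
print(effective_temperature(p_lower=0.3, p_upper=0.7, splitting_rad_per_ps=2.0))  # negative: inverted state
```

The second case corresponds to the negative-temperature (inverted) situation discussed for negatively chirped pulses, which emits heat into any positive-temperature bath.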
Overall, however, the quantumdot exciton transition seems to be a promising system in which to study thermodynamic processes at the quantum scalegiven the possibility, predicted here, of using laser pulses to implement and control thermodynamic processes. Methods Generalised Lindblad equation. The HamiltonianĤ s may be diagonalized by introducing rotated spin operators b r ¼ Rb s, where R is a rotation by an angle tan À1 ΩðtÞ=ΔðtÞ about the y-axis. ThusĤ s ¼ ΛðtÞr z , implying the dressed-state energies ±Λ(t)/2. This rotation leads to terms in the exciton-phonon coupling,Ĥ c , in which phonon emission or absorption is accompanied by transitions between the dressed states, since we havê A master equation with a dissipator corresponding to such processes has been obtained 29 by transforming to the interaction picture with respect toĤ s þĤ b , and applying the Born-Markov approximation to obtain a time-local equation for the reduced density matrix of the dot. Undoing the transformation to the interaction picture, and discarding rapidly oscillating terms in the result (secularisation) gives a generalised Lindblad form, with transition operatorsr þ andr À , and phonon absorption and emission rates γ a and γ e . The coupling tor z implies that there can be pure dephasing terms in the dressed-state basis, however, the corresponding rate is proportional to the spectral-density at zero frequency, which vanishes in this case. Evolution of the heat distribution. To compute thermodynamic quantities, in particular the heat transferred between the phonons and the exciton, we use the generating-function or counting-field approach 55 . This approach has been previously used 30 to obtain heat and work within a Lindblad master equation, for the case of a time-independentĤ s . It has also been used to calculate the phonon counting statistics for an exciton with continuous-wave driving 49 . We consider the characteristic function of the heat distribution, which is a two-time correlation function of the bath energy, Hereρðu; tÞ is an annotated density matrix, whose time evolution iŝ ρðu; tÞ ¼Û u=2 ðt; t 0 Þρðu; t 0 ÞÛ y Àu=2 ðt; t 0 Þ; ð6Þ with the modified time-evolution operatorÛ u ðt; t 0 Þ ¼ e iuĤ bÛ ðt; t 0 Þe ÀiuĤ b , and ρðu; t 0 Þ ¼ρðt 0 Þ. Replacing the standard time-evolution operator with the modified form in the derivation of the Lindblad master equation 29 gives the equation-of-motion for the annotated reduced density-matrix of the dot,ρ s ðu; tÞ, ∂ ∂tρ s ðu; tÞ ¼ À iĤ s ðtÞ;ρ s ðu; tÞ Â Ã À γ erþrÀ ;ρ s ðu; tÞ È É þ À 2e þiuΛðtÞr Àρs ðu; tÞr þ À γ arÀrþ ;ρ s ðu; tÞ È É þ À 2e ÀiuΛðtÞr þρs ðu; tÞr À : This is a generalisation of a time-dependent Lindblad form 29 to include phase markers in the dissipator, which account in the expected way for the heat transferred in the transitions. (We drop terms corresponding to the Lamb shift, as they would have a negligible effect on our results.) It extends the result of Silaev et al. 30 to allow for time-dependence ofĤ s , which means that the phase markers (as well as the jump operators and rates) become time-dependent due to the variation of Λ(t). Equation (7) is derived using the Born-Markov approximation, so that its validity requires both weak system-bath coupling, and that the bath memory time is short compared with certain timescales of the system evolution. For a time-independentĤ s these timescales are the inverses of the decay rates, while the bath memory time is 1/ω c , so that the approach is self-consistent when γ a;e ( ω c . 
A time-dependentĤ s introduces additional relevant timescales, H s = _ H s , so that the validity of the approach additionally requires _ H s =H s ( ω c . Technically this requirement reflects an approximation in the derivation of the dissipator, in which the full time-evolution operator, which is a time-ordered exponential, is replaced by a simpler time-local form. In addition it is necessary, in order to obtain a Lindblad form, to make the secular approximation in the dissipator, which is valid where the dynamics induced by the dissipator are slow compared with the inverse level spacing ofĤ s . All these conditions are well satisfied in our simulations for the parameters considered here. We have solved Eq. (7) numerically, and taken Fourier transforms of G with respect to the counting field u, to obtain the heat distributions. The results in the main text refer to the mean heat transfer, which can be computed more straightforwardly. Since the moments of the heat distribution are hQ n i ¼ 1 i n ∂ n Gðu;tÞ ∂u n u¼0 we find from Eq. where ρ ↑/↓ (t) are the occupations of the dressed-states, i.e., the diagonal elements of the reduced density matrixρ s ðtÞ ¼ρ s ðu ¼ 0; tÞ in the dressed-state basis. We calculate the mean heat transferred under the driving pulse by solving Eq. (7) with u = 0 to obtain the occupations, and then use Eq. (8) to compute the heat. The entropy of the dot, shown in Fig. 5 and used to compute the efficiency shown in Fig. 4, is calculated from S ¼ Àk B Trρ s ðtÞ lnρ s ðtÞ. Data availability The data generated or analysed in this work are included in this published article. Code availability The code which generated the data used in this work is available from https://doi.org/ 10.5281/zenodo.326482.
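The Methods compute the mean heat by solving the rate equations for the dressed-state occupations and then applying Eq. (8), whose explicit form is not reproduced in this text. The sketch below is one plausible reading of that procedure: it integrates secular rate equations with jump rates 2γ_a and 2γ_e (the factor of two follows the Lindblad convention of Eq. (7) and is an assumption here) and counts ±ħΛ of bath heat per phonon absorbed or emitted, with ħ = 1 as in the paper. All inputs are toy values.

```python
import numpy as np

def mean_heat(times, lam, gamma_a, gamma_e, p_up0=0.0):
    """Euler-integrate the dressed-state occupation and accumulate the mean heat
    absorbed from the phonon bath (hbar = 1, heat in units of rad/ps).

    times, lam, gamma_a, gamma_e : 1D arrays of time (ps), Lambda(t) and rates (ps^-1).
    Assumes transitions occur at 2*gamma_a (absorption) and 2*gamma_e (emission)."""
    p_up, q = p_up0, 0.0
    for i in range(len(times) - 1):
        dt = times[i + 1] - times[i]
        up_flux = 2.0 * gamma_a[i] * (1.0 - p_up)   # phonon absorption: lower -> upper dressed state
        down_flux = 2.0 * gamma_e[i] * p_up         # phonon emission:  upper -> lower dressed state
        q += lam[i] * (up_flux - down_flux) * dt
        p_up += (up_flux - down_flux) * dt
    return q, p_up

# Toy inputs: constant splitting and rates obeying detailed balance at T = 20 K
t = np.linspace(0.0, 40.0, 4001)                 # ps
lam = np.full_like(t, 2.0)                       # rad/ps
ratio = np.exp(-2.0 / (0.1309 * 20.0))           # gamma_a/gamma_e = exp(-hbar*Lambda/(kB*T))
gamma_e = np.full_like(t, 0.05)
gamma_a = ratio * gamma_e

q, p_up = mean_heat(t, lam, gamma_a, gamma_e)
print(f"<Q>/hbar = {q:.3f} rad/ps, final upper dressed-state population = {p_up:.3f}")
```

For these constant inputs the population relaxes to the detailed-balance value and the accumulated heat equals Λ times the net population transferred, as expected.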
2023-02-24T14:16:25.051Z
2019-10-03T00:00:00.000
{ "year": 2019, "sha1": "00acc8d4aa798b9613187ed403f378a3284842ee", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s42005-019-0215-8.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "00acc8d4aa798b9613187ed403f378a3284842ee", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [] }
32027155
pes2o/s2orc
v3-fos-license
Chemical Constituents from Andrographis echioides and Their Anti-Inflammatory Activity Phytochemical investigation of the whole plants of Andrographis echioides afforded two new 2′-oxygenated flavonoids (1) and (2), two new phenyl glycosides (3) and (4), along with 37 known structures. The structures of new compounds were elucidated by spectral analysis and chemical transformation studies. Among the isolated compounds, (1–2) and (6–19) were subjected into the examination for their iNOS inhibitory bioactivity. The structure-activity relationships of the flavonoids for their inhibition of NO production were also discussed. Introduction Andrographis (Acanthaceae) is a genus of about 40 species, various members of which have a reputation in indigenous medicine. In traditional Indian medicine, several Andrographis species have been used in the treatment of dyspepsia, influenza, malaria and respiratory infections, and as astringent and antidote for poisonous stings of some insects [1,2]. More than 20 species of Andrographis have been reported to occur in India. The phytochemistry of this genus has been investigated quite well in view of its importance in Indian traditional medicine and reported to contain several flavonoids [3,4] and labdane diterpenoids [5][6][7][8][9][10]. A. echioides, an annual herb occurring in South India, is listed in the Indian Materia Medica used as a remedy for fevers. However, information on the chemical composition and bioactivity of this species is very rare. There is only report of flavonoids as major components from the extracts of A. echioides in the previous literature [11][12][13][14]. As part of our program to study the bioactive constituents from Andrographis species [15,16], we have investigated the whole plant of A. echioides and four new compounds (1)(2)(3)(4) were characterized. Herein, we wish to report on the structure elucidations of compounds 1-5 and the effects of flavonoids on NO inhibition in LPS-activated mouse peritoneal macrophages. Anti-Inflammatory Activity Inflammation is related to morbidity and mortality of many diseases and is recognized as part of the complex biological response of vascular tissues to harmful stimuli. It is the host response to infection or injury, which involves the recruitment of leukocytes and the release of inflammatory mediators, including nitric oxide (NO). NO is the metabolic by-product of the conversion of L-arginine to L-citrulline by a class of enzymes termed NO synthases (NOS). Numerous cytokines can induce the transcription of inducible NO synthase (iNOS) in leukocytes, fibroblasts, and other cell types, accounting for enhanced levels of NO. In the experimental model of acute inflammation, inhibition of iNOS can have a dose-dependent protective effect, suggesting that NO promotes edema and vascular permeability. NO also has a detrimental effect in chronic models of arthritis, whereas protection is seen with iNOS inhibitors. The iNOS inhibiting potentials of 1-2 and 6-19 were evaluated by examining their effects on LPS-induced iNOS-dependent NO production in RAW 264.7 cells determined by MTT assays. Cells cultured with 1-2 and 6-19 at different concentrations except 18 (at 42 μM) used in the presence of 100 ng/mL LPS for 24 h did not change cell viability thus the NO inhibiting effects may not due to the cytotoxicity (Table 3). In the examined concentration ranges (5.25-74 μM), NO production decreased in the presence of 1-2 and 6-19 in a dose-dependent manner (Table 3). 
Flavonoids are widely distributed in the higher plants capable of modulating the activity of enzymes and affect the behavior of many cell systems, including NO inhibitory activity. The structure-activity relationships of 3',4'-oxygenated flavones were discussed by Matsuda [53] and Kim et al. [54]. In 1999, Kim et al. [54] examined the naturally occurred flavonoids for NO production inhibitory activity in LPS-activated RAW 264.7 cells and the following structural requirements were afforded: (a) the strongly active flavonoids possessed the C2-C3 double bond and 5,7-dihydroxyl groups; (b) the 8-methoxyl group and 4'-or 3',4'-vicinal substitutions favorably affected inhibitory activity; (c) the 2',4'-(meta)-hydroxyl substitutions abolished the inhibitory activity; (d) the 3-hydroxyl moiety reduced the activity; (e) flavonoid glycosides were not active regardless of the types of aglycones. Andrographis species are noted for profuse production of 2'-oxygenated flavones and in the present study, the bioactive data of the examined flavonoids using RAW 264.7 cells were in agreement with the previous report by Kim et al., and the additional structural requirements of flavonoids for NO production inhibitory activity were suggested as follows: (1) the glycosidic moiety reduced the activity, like 9 and 14; (2) the 2'-hydroxyl group did not cause significant effects on NO inhibitory activity; (3) methylation of 5-hydroxyl group enhanced the activity, like 13 and 14 ( Table 4). The structure-activity relationships of flavonoids for NO production inhibitory activity resulted from our study clarified the insufficiency in the previous report. General The UV spectra were obtained with Hitachi UV-3210 spectrophotometer. The IR spectra were measured with a Shimadzu FTIR Prestige-21 spectrometer. Optical rotations were recorded with a Jasco DIP-370 digital polarimeter in a 0.5 dm cell. The ESIMS and HRESIMS were taken on a Bruker Daltonics APEX II 30e spectrometer. The FABMS and HRFABMS were taken on a Jeol JMS-700 spectrometer. The ESIMS (negative ESI) data were measured using a Thermo TSQ Quantum Ultra LC/MS/MS spectrometer. The 1 H and 13 C NMR spectrums were measured by Bruker Avance 300, 400 and AV-500 NMR spectrometers with TMS as the internal reference, and chemical shifts are expressed in δ (ppm). The CD spectrum was recorded in a Jasco J-720 spectrometer. Sephadex LH-20, silica gel (70-230 and 230-400 mesh; Merck, Darmstadt, Germany) and reversed-phase silica gel (RP-18; particle size 20-40 μm; Silicycle) were used for column chromatography, and silica gel 60 F 254 (Merck, Darmstadt, Germany) and RP-18 F 254S (Merck, Darmstadt, Germany) were used for TLC. HPLC was performed on a Shimadzu LC-10AT VP (Tokyo, Japan) system equipped with a Shimadzu SPD-M20A diode array detector at 250 nm, a Purospher STAR RP-8e column (5 μm, 250 × 4.6 mm) and Cosmosil 5C 18 Plant Materials The whole plant of A. echioides Nees was collected from Tirupati, Andhra Pradesh, India in May 1998. The plant was authenticated by Professor C. S. Kuoh, Department of Life Science, National Cheng Kung University, Taiwan. The voucher specimens (DG-199) have been deposited in the herbarium of the Department of Botany, Sri Venkateswara University, Tirupati, India; and Department of Chemistry, National Cheng Kung University, Tainan, Taiwan, respectively. Determination of Aldose Configuration Compounds 1-5 (each 0.5 mg) were hydrolyzed with 0.5M HCl (0.4 mL) in a screw-capped vial at 60 °C for 1 h. 
The reaction mixture was neutralized with Amberlite IRA400 and filtered. The filtrates were dried in vacuo, then dissolved in 0.1 mL of pyridine containing L-cysteine methyl ester (0.5 mg), and reacted at 60 °C for 1 h. To those mixtures were added a solution of O-tolylisothiocyanate in pyridine (5 mg/1 mL) at room temperature for 1 h. Those reaction mixtures were directly analyzed by HPLC (Cosmosil 5C 18 ARII (250 × 4.6 mm i.d. Nacalai Tesque Inc., Tokyo, Japan); 20% CH 3 CN in 50 mM acetate; flow rate 0.8 mL/min; detection, 250 nm). D-glucose (t R 40.5 min) was identified as the sugar moieties of 1-5 based on comparisons with authentic samples of D-glucose (t R 40.5 min). Cell Viability Cells (2 × 10 5 ) were cultured in 96-well plate containing DMEM supplemented with 10% FBS for 1 day to become nearly confluent. Then cells were cultured with samples in the presence of 100 ng/mL LPS for 24 h. After that, the cells were washed twice with DPBS and incubated with 100 μL of 0.5 mg/mL MTT for 2 h at 37 °C testing for cell viability. The medium was then discarded and 100 μL dimethyl sulfoxide (DMSO) was added. After 30-min incubation, absorbance at 570 nm was read using a microplate reader (Molecular Devices, Orleans Drive, Sunnyvale, CA, USA). Measurement of Nitric Oxide/Nitrite NO production was indirectly assessed by measuring the nitrite levels in the cultured media and serum determined by a colorimetric method based on the Griess reaction [55]. The cells were incubated with a test sample in the presence of LPS (100 ng/mL) at 37 °C for 24 h. Then, cells were dispensed into 96-well plates, and 100 μL of each supernatant was mixed with the same volume of Griess reagent (1% sulfanilamide, 0.1% naphthyl ethylenediamine dihydrochloride, and 5% phosphoric acid) and incubated at room temperature for 10 min, the absorbance was measured at 540 nm with a Micro-Reader (Molecular Devices, Orleans Drive, Sunnyvale, CA, USA). By using sodium nitrite to generate a standard curve, the concentration of nitrite was measured form absorbance at 540 nm. Statistical Analysis Experimental results were presented as the mean ± standard deviation (SD) of three parallel measurements. IC 50 values were estimated using a non-linear regression algorithm (SigmaPlot 8.0; SPSS Inc. Chicago, IL, USA). Statistical significance is expressed as * p < 0.05, ** p < 0.01, and *** p < 0.001. Conclusions In the previous literature, there are four Andrographis species containing diterpenoids such as andrographolide, including A. paniculata, A. affinis, A. lineata, and A. wightiana. In our investigation, the major constituents of the titled plant were flavonoids rather than the crystalline bitter principle analogous to diterpenoids. In the evaluation of NO inhibition activity, compounds 10 and 14 were the most effective and the IC 50 values were 37.6 ± 1.2 μM and 39.1 ± 1.3 μM, respectively. These results suggested that the Andrographis species are valuable sources for the discovery of natural anti-inflammatory lead drugs.
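The NO quantification and IC50 estimation described in the Methods (Griess reaction read at 540 nm against a sodium nitrite standard curve, IC50 from non-linear regression) map onto a short analysis script. The sketch below uses invented absorbance values, a linear standard curve, and a four-parameter logistic fit via SciPy as a stand-in for the SigmaPlot regression; none of the numbers are the paper's data.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

# 1) Nitrite standard curve: absorbance at 540 nm vs NaNO2 concentration (uM). Made-up values.
std_conc = np.array([0.0, 12.5, 25.0, 50.0, 100.0])
std_abs = np.array([0.05, 0.14, 0.23, 0.41, 0.78])
fit = linregress(std_conc, std_abs)

def nitrite_from_abs(a540):
    return (a540 - fit.intercept) / fit.slope

# 2) Percent inhibition of LPS-induced NO production at each test concentration (made-up absorbances)
conc = np.array([5.25, 10.5, 21.0, 42.0, 74.0])   # uM, spanning the tested range
a540 = np.array([0.70, 0.60, 0.47, 0.33, 0.24])
lps_only, blank = 0.75, 0.06                      # control wells
no_sample = nitrite_from_abs(a540) - nitrite_from_abs(blank)
no_control = nitrite_from_abs(lps_only) - nitrite_from_abs(blank)
inhibition = 100.0 * (1.0 - no_sample / no_control)

# 3) IC50 from a four-parameter logistic (Hill) fit, analogous to the non-linear regression step
def hill(c, bottom, top, ic50, slope):
    return bottom + (top - bottom) / (1.0 + (ic50 / c) ** slope)

params, _ = curve_fit(hill, conc, inhibition, p0=[0.0, 100.0, 30.0, 1.0], maxfev=10000)
print(f"estimated IC50 = {params[2]:.1f} uM")
```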
2014-10-01T00:00:00.000Z
2012-12-27T00:00:00.000
{ "year": 2012, "sha1": "bc3db80c49739d4c4892f1cfea43045a84f3790f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/14/1/496/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "953ea101b326e5230d2f9265125d28da63d214d2", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
104359829
pes2o/s2orc
v3-fos-license
Optimizing Micrometer-Sized Sn Powder Composite Electrodes for Sodium-Ion Batteries A nanometer-sized Sn (nano-Sn) powder composite electrode with polyacrylate binder delivers a discharge capacity of 600mAhg−1 with a good capacity retention for 100 cycles in non-aqueous Na cells, however, a micrometer-sized Sn (micro-Sn) composite electrode exhibits an insufficient cycle performance under the same condition. Although surface analysis of cycled electrodes reveals no apparent difference in solid electrolyte interphase layer formed on the nanoand micro-Sn electrodes, we found that in the case of nano-Sn electrodes the moderately porous composite layers and thin binder coating on Sn particles are responsible for a favorable cycle performance. On the other hand, the dense and less-porous micro-Sn electrode having a relatively thicker coating of binder on micro-Sn particles deteriorates the reversibility of sodium alloying reaction. Therefore, we optimize the electrode preparation process to introduce the suitable porosity and properly thin binder coating in the micro-Sn composite electrodes. The optimization enables the micro-Sn electrode to demonstrate high reversible sodiation capacity of 676–470mAhg−1 with much improved capacity retention over 100 cycles. © The Electrochemical Society of Japan, All rights reserved. Introduction The research activity in development of higher capacity materials for Na-ion battery is rapidly growing in the 2010s, and the energy density of Na-ion battery has been incrementally improved by newly developing sodium insertion materials, binders, electrolytes, electrolyte additives etc. over the past decade based upon the finding and knowledge of Li-ion chemistry since the 1980s. 1 Regarding negative electrodes of Na-ion battery, our group studied the mechanism of sodium storage into hard carbon and succeeded in demonstrating high capacity of 350-420 mAh g ¹1 . 1-3 Moreover, p-block elements are known to show higher reversible capacities exceeding that of hard carbon; Sn and P electrochemically transform into Na 3.75 Sn and Na 3 P delivering 847 and 2,600 mAh g ¹1 , respectively. 4,5 Though large theoretical capacities can be expected in these materials, they suffer from large volume change during successive cycles of sodiation/desodiation, which causes serious damages of the composite electrode, such as particle fractures and massive electrolyte decomposition, leading to electric isolation of the active material and rapid capacity decay. 6,7 As is known generally, nanosized Sn particles is crucial to avoid the capacity decay, and electrochemical sodiation of SnO and Sn-Co is employed to mitigate the volume change of active materials compared with pure metallic Sn. 8,9 Another idea is to improve the mechanical durability of the composite Sn electrode by using watersoluble functional binders 10 such as sodium carboxymethylcellulose, sodium poly-C-glutamate, and sodium polyacrylate (PANa) which are effective for Sn-Na system as well as Si-Li and P-Na systems. [11][12][13][14] These binders efficiently suppress the electric isolation of active materials in the composite electrode and act as pre-formed solid electrolyte interphase (SEI). As a result, the decomposition of the electrolyte is suppressed, leading to improvement of electrochemical performances. 
As reported by Nam et al., sodium alloying property of electrodeposited Sn electrodes depends on grain sizes of the Sn film, that is, micrometer-sized Sn deposited electrode showed an abrupt capacity degradation after 5 cycles due to the detachment of Sn grains from the Cu substrate, and nanometer-sized Sn grains are advantageous for long-term cycle. 15 They described that improved adhesion between an electrodeposited Sn film and the current collector is of importance to achieve a good electrode performance. We reported that a nanometer-sized Sn (hereafter denoted as nano-Sn) powder composite electrode delivered approximately 700 mAh g ¹1 reversible capacity over 100 cycles by adding graphite and a functional binder, PANa, with the voltage range between 0.00 and 0.65 V vs. Na to avoid oxidative SEI dissolution occurring at 0.68 V vs. Na. 13 However, the application of nano-Sn particles to rechargeable batteries is not a realistic solution, because nanosized powder is expensive, much lower tap-density, and dust toxicity, resulting in the difficulty of a practical battery application. Therefore, larger size particles, such as micrometersized Sn (micro-Sn) electrodes, are greatly preferable to nano-Sn electrodes. In this article, we elucidate the difference in sodium alloying properties of the nano-and micro-Sn electrodes by using PANa binder to understand the mechanism of capacity decay. On the basis of the understanding, we succeed in preparing the moderately porous micro-Sn composite layer by optimizing electrode preparation conditions and demonstrate the good battery performance in non-aqueous Na cells comparable to that of the nano-Sn electrodes. Preparation of electrodes and test cells Two types of reagent-grade Sn powder (Sigma-Aldrich Inc.) were used as the active material without any pretreatment: nano-Sn powder of less than 150 nm in diameter and micro-Sn powder less than 10 µm particles (average particle size: 1-2 µm), with surface areas of 5.6 and 1.1 m 2 g ¹1 , respectively. Four kinds of conductive carbon materials were used to meet each specific purpose: 3-, 15-, and 30-µm flaky graphites (SNO-3, SNO-15, and SNO-30, respectively, SEC Carbon, Ltd.) and acetylene black (AB). PANa (molecular weight is 2,000,000-6,000,000, Kishida Chemical Co., Ltd.) was used as a binder in this study. The Sn composite electrodes basically consist of a mixture of Sn powder:carbon:PANa = 80:10:10 in weight, except for the study of the dependence on binder content as described below for the electrode optimization. For the electrode preparation, Sn powder, carbon, and PANa were thoroughly mixed with a dispersant of 10 vol% methanol aqueous solution to prepare a uniform slurry. 13 Each prepared slurry was pasted onto an Al foil uniformly with a doctor blade and dried at 40, 80 or 150°C under atmospheric pressure overnight followed by drying in a vacuum oven for 1 day at the same temperature as the atmospheric pressure drying. The electrode dried at 40°C under atmospheric pressure was dried at 80°C in a vacuum oven. The composite electrodes were punched into discs of 10 mm in diameter, and the loading mass of Sn was around 1.6 mg cm ¹2 . R2032-type coin cells were assembled with the Sn composite disc and sodium metal as working and counter electrodes, respectively, which are separated with a glass fiber filter (GB-100R, ADVANTEC) and a microporous polyolefin membrane (Toray Co., Ltd.). 
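As a quick consistency check on the electrode make-up described above (Sn:carbon:PANa = 80:10:10 by weight, 10 mm discs, about 1.6 mg cm⁻² of Sn), the snippet below computes the implied per-disc component masses; it simply restates the stated composition and loading.

```python
import math

disc_diameter_cm = 1.0       # 10 mm punched disc
sn_loading_mg_cm2 = 1.6      # areal loading of Sn quoted in the text
composition = {"Sn": 0.80, "carbon": 0.10, "PANa": 0.10}

area_cm2 = math.pi * (disc_diameter_cm / 2) ** 2
sn_mass_mg = sn_loading_mg_cm2 * area_cm2
composite_mass_mg = sn_mass_mg / composition["Sn"]

for component, fraction in composition.items():
    print(f"{component}: {composite_mass_mg * fraction:.2f} mg per disc")
print(f"total composite per disc: {composite_mass_mg:.2f} mg over {area_cm2:.2f} cm^2")
```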
The electrolyte solution used is 1.0 mol dm ¹3 NaPF 6 in ethylene carbonate (EC) and diethyl carbonate (DEC) (49:49 vol%, battery grade, Kishida Chemical Co., Ltd.) and 2 vol% fluoroethylene carbonate (FEC) as an electrolyte additive. The cells were assembled in an Ar-filled glove box with the dew point below ¹95°C. Charge-discharge cycle test The assembled coin cells were tested at 25°C with a battery tester (TOSCAT-3100, Toyo System Co., Ltd.). As the 1st cycle, the Sn electrodes were first reduced (charged) until 0.03 V vs. Na at a current rate of 25 mA (g of Sn) ¹1 , and after reaching 0.03 V, the voltage was maintained until the total reduction time including the constant current sodiation reached 40 h. Then the electrodes were oxidized (discharged) to 0.65 V vs. Na at 25 mA g ¹1 of constant current mode. From the 2nd cycle, charge/discharge tests of constant current mode were repeated in a range between 0.00 and 0.65 V vs. Na at 50 mA g ¹1 . Characterization of Sn composite electrodes The cycled coin cells were disassembled in the glove box to take out the tested electrodes, then the tested electrodes were softly and carefully rinsed with EC:DEC mixed solvent (1:1 in volume) and then with DEC solvent to remove the residual electrolyte. They were transferred from the glove box to analysis chambers of hard X-ray photoelectron spectroscopy (HAXPES) and time-of-flight secondary ion mass spectrometry (TOF-SIMS) with transfer vessels to ensure reliable surface analyses without air exposure. HAXPES measurement was conducted to analyze the electrode surface of around 10 nm depth at the synchrotron facility (SPring-8, Japan) at BL46XU equipped with a hemispherical electron energy analyser (VG-SCIENTA R4000). An excitation energy of 7939 eV was applied and a total energy resolution was 235 meV at room temperature. A carbon 1s peak of conjugated sp 2 hybridized carbon of 284.6 eV from the carbon material in the composite electrodes was used to correct the binding energies of photoelectron spectra. 5 TOF-SIMS (PHI TRIFT V nanoTOF, ULVAC-PHI Inc.) was employed for the analysis of the outermost electrode of several nm from the surface. A 100 µm © 100 µm area of electrodes was bombarded with a pulsed beam of 30 kV Au 3 + clusters until the ion dose reached 9.8 © 10 ¹11 ions cm ¹2 to acquire negative ion mode spectra. The pressure of the analysis chamber was maintained below 1 © 10 ¹8 Pa. Electrode morphologies were observed by using a field emission scanning electron microscope (FE-SEM), and crosssectional images were collected with a focused ion beam SEM (FIB-SEM, JIB-4500FE, JEOL, Ltd.). Surface and interfacial cutting analysis system (SAICAS, DN-GS type, Daipla Wintes Co., Ltd., Japan) is utilized to study peeling strength and mechanical strength of composite layers. A 1 mm width boron nitride blade was moved in the horizontal and the vertical directions with a velocity of 2.0 and 0.1 µm s ¹1 , respectively while maintaining a load force of 5 N. Figure 1 shows charge and discharge (corresponding to sodiation and desodiation, respectively) curves of nano-and micro-Sn electrodes. The micro-Sn electrode has similar voltage steps to those of the nano-Sn electrode and provides high reversible capacities approaching 700 mAh g ¹1 for several cycles. 4,13 The size of Sn particle was observed with SEM as shown in the insets of Figs. 1(a) and 1(b), and the particle sizes are confirmed to be nanometer-and micrometer-scale, respectively. 
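The cycling protocol of Section 2.2 can also be written down as a simple schedule, which is convenient when programming a tester or post-processing the data. The sketch below only encodes the steps as data; the `Step` record and `steps_for_cycle` helper are hypothetical names, not the TOSCAT-3100 configuration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Step:
    mode: str                               # "CC" or "CV"
    current_mA_per_g: Optional[float]       # specific current, mA per g of Sn
    cutoff_voltage_V: float                 # V vs. Na
    time_limit_h: Optional[float] = None

def steps_for_cycle(n: int):
    """Charge/discharge steps for cycle n, following the protocol described in the text."""
    if n == 1:
        return [
            Step("CC", 25.0, 0.03),                     # sodiation to 0.03 V vs. Na
            Step("CV", None, 0.03, time_limit_h=40.0),  # hold until the total reduction time reaches 40 h
            Step("CC", 25.0, 0.65),                     # desodiation to 0.65 V
        ]
    return [
        Step("CC", 50.0, 0.00),                         # sodiation
        Step("CC", 50.0, 0.65),                         # desodiation
    ]

for cycle in (1, 2):
    print(f"cycle {cycle}: {steps_for_cycle(cycle)}")
```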
However, apart from the particle size, three remarkable differences are found between them. The first is the capacity retention. As shown in Fig. 1(c), the nano-Sn electrode delivers a discharge capacity of 600 mAh g⁻¹ without capacity decay for 100 cycles, whereas the micro-Sn electrode exhibits a rapid capacity decay after 15 cycles, even though a discharge capacity as high as 700 mAh g⁻¹ is observed before the degradation. The second is the Coulombic efficiency of the 1st cycle: 72 and 86% for the nano- and micro-Sn electrodes, respectively, although no significant difference in efficiency is observed after the 2nd cycle. The third is the charge profile. Figure 2(a) compares the 1st-cycle charge curves of the nano- and micro-Sn electrodes, in which four voltage plateaus located at 0.38, 0.18, 0.07, and 0.03 V appear for the nano-Sn electrode; these are referred to as Plateaus 1, 2, 3, and 4, respectively, as seen in the figure. On the other hand, Plateau 1 does not appear for the micro-Sn electrode, although Plateaus 2, 3, and 4 are observed clearly without apparent polarization. Accordingly, the third difference is found in the 1st-cycle charge curves: an apparent Plateau 1 appears only for the nano-Sn electrode. Figure 2(b) compares the discharge curves at the 2nd cycle, since the efficiency becomes similar for the nano- and micro-Sn electrodes from the second cycle onward. In Fig. 2(b), three plateaus at 0.18, 0.28, and 0.56 V are found, which are referred to as Plateaus 4B, 3B, and 2B, respectively. It was reported that Plateaus 4B, 3B, and 2B are the reverse reactions of Plateaus 4, 3, and 2, respectively. 15,16 The trend of the voltage variation during desodiation is almost the same for the two electrodes, indicating no significant difference in the phase evolution during desodiation. Additionally, we found a larger initial charge capacity of 830 mAh g⁻¹ for the micro-Sn electrode than for the nano-Sn one, 615 mAh g⁻¹, and the initial discharge capacity was also larger for the micro-Sn electrode. The difference in the initial charge capacity originates from the lower current flow during the constant-voltage step (see Supporting Information, Fig. S1). The smaller discharge capacity of the nano-Sn electrode is probably due to the larger portion of tin oxides, SnO and SnO2, present as native oxide in the as-received nano-Sn powder and the prepared electrode compared with the micro-Sn ones (see also Fig. S1). Comparison of micro- and nano-Sn electrodes The electrochemical reaction of Plateau 1 is not fully understood in previous reports. 15-17 We previously reported that the oligomer/polymer on the nano-Sn electrode surface, which was believed to be part of the SEI layer, was anodically oxidized and dissolved at 0.68 V. Indeed, when the desodiation voltage was extended beyond 0.68 V, Plateau 1 appeared in the following charge curves. 13 Therefore, we supposed that Plateau 1 corresponds to the reaction forming the SEI, and that the SEI layer on the Sn-Na alloy should not suffer from oxidative dissolution in this study, since the upper cutoff voltage is limited to 0.65 V. As shown in Fig. 2(a), the plateaus corresponding to the Sn alloying reactions are observed at lower voltage for both electrodes. One possible reason for the disappearance of Plateau 1 for the micro-Sn is the smaller surface area of the micro-Sn particles.
Indeed, the much larger micro-Sn particles have a smaller surface area than the nano-Sn particles, which results in the formation of a thicker PANa binder coating on the surface of the Sn particles, leading to higher electrode resistance and larger overvoltage. 18 As a result, the electrode resistance of the micro-Sn electrode becomes higher than that of the nano-Sn electrode. To confirm whether the thickness of the binder coating on micro-Sn particles affects the 1st charge voltage profile, composite electrodes were prepared with different ratios of micro-Sn powder:3-µm graphite:PANa = 80:10:x (w/w), where x = 5, 10, and 20, by simply changing the amount of PANa binder. In the 1st charge curves, an apparent Plateau 1 was not observed for any of these electrodes (Fig. S2(a)). However, a small shoulder becomes visible just below the potential of Plateau 1 in the initial charge curve for the electrode containing the least amount of PANa binder, x = 5. The voltage drop until the appearance of the 1st voltage plateau became larger as the binder content increased, meaning that the resistance increased as the binder content increased. This reasonably agrees with our previous data for Si-polyacrylate electrodes. 18 Therefore, we concluded that the thicker PANa binder coating on the micro-Sn particles raises the electrode resistance and suppresses the appearance of Plateau 1. Although the sodiation curve of the electrode of micro-Sn:graphite:PANa = 80:10:5 somewhat resembles that of the nano-Sn electrode due to the reduced polarization, the cycle performance was not improved compared with those of the electrodes containing larger amounts of PANa (see Fig. S2(b)), which is attributed to the lower adhesion strength caused by the reduced binder content. 18 Besides, since the difference in electrode performance of the micro-Sn composites with different binder contents, x = 10 and 20, was not remarkable, the electrode degradation mechanism was further investigated for micro-Sn electrodes containing 10% binder dried at 80°C hereafter. Figures 3(a) and 3(b) show SEM images of the electrode surfaces of the pristine nano- and micro-Sn electrodes, respectively. Both electrodes similarly possess smooth and uniform surfaces. Cross-sectional SEM images of the pristine nano- and micro-Sn electrodes are shown in Figs. 3(c) and 3(d), respectively. The electron microscopic images confirm the existence of bright and round-shaped particles of Sn and dark flakes of graphite. The composite layer of the nano-Sn electrode is porous and thicker, while that of the micro-Sn electrode is dense and thinner and includes a small number of voids. The difference in thickness of the composite layers, 9.6 and 7.1 µm for the nano- and micro-Sn electrodes, respectively, in Table 1, results from the difference in porosity. A cross-sectional SEM image of a tested micro-Sn electrode after 20 cycles, whose capacity had degraded to 400 mAh g⁻¹, is shown in Fig. 3(e). The composite layer severely cracked and was detached from the current collector of Al foil. It is known that alloy-type electrodes undergo large volume changes during charge and discharge, leading to cracking of electrodes and pulverization of active material particles, which cause severe capacity degradation. 6 Specifically, sodiation of Sn to form Na3.75Sn results in a large volume change of about 4.2 times. 7 The internal stress of the micro-Sn composite layer is considerable compared with that of the nano-Sn electrode due to the difference in the porosity of the composite layer, as confirmed by Figs. 3(c) and 3(d). 
That is, the pores properly distributed in the entire composite layer should absorb the volume change and mitigate the internal stress of the electrode, leading to suppression of the detachment. Based on the above discussion, further analyses of the electrodes are carried out to understand the effect of binder and porosity on the difference between the electrochemical properties of the nano-and micro-Sn electrodes. The surfaces of the micro-and nano-Sn electrode were examined by using HAXPES and TOF-SIMS. HAXPES enables to study the surface chemistry of electrode by detecting photoelectron generated from approximately 10-nm depth by irradiating hard X-rays. 5 Figure 4 shows C 1s and Sn 3d 5/2 HAXPES spectra of the nano-and micro-Sn electrodes of pristine and after the initial cycle. From the C 1s spectra of Fig. 4(a), peak intensities of -CH 2 > CH-COONa at 285.9 eV and -CH 2 > CH-COONa at 289.6 eV of the micro-Sn electrode are higher than those of the nano-Sn electrode. These peaks are attributable to PANa binder, suggesting that the micro-Sn electrode surface is covered with relatively thicker PANa binder. Figure 4(b) shows the Sn 3d 5/2 spectra of the pristine electrodes. Sn metal and tin oxide peaks are observed in Fig. 4(b), while Fig. 4(d) shows no signals in both electrodes after the cycle, suggesting that the electrode surfaces are thoroughly covered with deposited products of the electrolyte decomposition. In the C 1s spectra of Fig. 4(c), electrolyte decomposition products such as alkoxide, carbonate, and so on are confirmed on the both electrodes, which agrees with our previous data. 13 The HAXPES results show that there is no remarkable difference of chemical species between both electrode surfaces, suggesting micro-Sn electrode surface is covered with SEI consisting of the same chemicals. However, the thickness in surface layer of PANa coating and electrolyte decomposition products is different. As for the thickness of electrolyte decomposition products, those peak intensities of the micro-Sn electrode are lower than those of the nano-Sn electrode. It is suggested that the thicker PANa binder coating on micro-Sn particles suppresses the irreversible electrolyte decompositions. This is also consistent with the relatively higher Coulombic efficiency of the micro-Sn electrode at the 1st cycle, as mentioned in Fig. 1(c). TOF-SIMS is a mass spectrometry that detects the mass signals from the flight time of fragments emitted as secondary ions by irradiating high-energy Au 3 + cluster pulse beam of 30 keV on the sample surface. Since the penetration depth of the beam is about 1 nm, information of the outermost sample can be collected from the detected mass signals. 19 Figure 5 shows TOF-SIMS spectra of negative ion mode of the micro-Sn electrode. Figure 5(a) indicates peaks of oligomer/polymer from m/z 189 with m/z 106 interval in the micro-Sn electrode after 1 cycle. This is consistent with the result of the nano-Sn electrode reported previously, so it is considered that the outermost passivation layer on micro-and nano-Sn electrodes after 1 cycle are almost the same. 13 We confirmed that this oligomer/polymer was not formed by merely soaking in the electrolyte solution but was formed by the electrochemical reaction. Figure 5(b) also proves the existence of the same oligomer/ polymer in the degraded micro-Sn electrode after 20 cycles, showing that the degraded micro-Sn electrode is still passivated with the surface layer containing the oligomer/polymer. 
Since the upper cutoff voltage was set at 0.65 V, the SEI dissolution did not occur and the active material was maintained to be covered with SEI. 13 From Figs. 4 and 5, any notable difference in SEI was hardly found on the nano-and micro-Sn electrodes in spite of the thicker PANa coverage leading to the polarization and disappearance of Plateau 1 for the micro-Sn electrode. Furthermore, as is described in Fig. 3(e), the original round shape of micro-Sn particles drastically changed after 20 cycles, evidencing the fracture and aggregation of micro-Sn during cycles. 20 However, the resembled SEI layer remained even on the degraded electrode from the surface analyses, suggesting uniform coverage with the SEI containing the oligomer/ polymer components. Namely, we do not find any significant difference in SEI between the nano-and micro-Sn electrodes, and we further analyzed the morphological and mechanical property of the composite layer. SAICAS is employed for elucidating the adhesion strength of both the inner composite layer and the interface between the composite layer and Al foil with a small cutting edge. 21 The peeling strength P (N m ¹1 ) is given by P = F H /w where F H is the horizontal force to peel the electrode composite layer and w (m) is a blade width of the cutting edge. 21 During SAICAS test, the pristine nanoand micro-Sn electrodes were immersed in DEC solvent to imitate the actual condition of the electrode in battery. Table 1 shows the results obtained from SAICAS measurements. In case of the nano-Sn electrode, the values of peeling and mechanical strengths are 0.11 and 0.11 kN m ¹1 , while those of the micro-Sn electrode are 0.27 and 0.28 kN m ¹1 , respectively, proving that the micro-Sn electrode shows approximately threefold mechanical strength compared with the nano-Sn electrode. Despite the higher mechanical strengths, the micro-Sn composite layer was separated off from Al foil by cycling as shown in Fig. 3(e). The dense composite layer of micro-Sn should accumulate pronounced internal stress caused by the volume change, leading to the detachment of the composite layer. In contrast, the nano-Sn electrode with porous composite layer can deliver the high capacity for 100 cycles because the porous structure absorbs the internal stress of the composite layer. Therefore, it is considered that the cycle performance will be improved if the porous composite layer is also formed in the micro-Sn electrode. 3.2 Dependence of micro-Sn electrode performance on electrode preparation condition From the above results, we prepared a composite electrode by mixing the nano-and micro-Sn powders in order to improve the rapid capacity degradation of the Sn electrode by introducing the proper pores. The prepared micro/nano-Sn electrode consists of micro-Sn powder:nano-Sn powder:3-µm graphite:PANa = 7:1:1:1 to prove the idea mentioned above. Figure 6(a) shows a crosssectional SEM image of the pristine micro/nano-Sn electrode, which is more porous than that of the micro-Sn electrode shown in Fig. 3(d). Obviously, Fig. 6(b) shows a superior cycle performance for the micro/nano-Sn electrode delivering more than 500 mAh g ¹1 for 100 cycles, and the capacity retention becomes comparable to that of the nano-Sn electrode. 
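As a quick numerical illustration of the SAICAS relation quoted earlier in this section, P = F_H/w, the sketch below converts a horizontal cutting force into a peeling strength for the 1 mm blade described in the experimental part. The horizontal-force values used are hypothetical examples chosen so that the output matches the order of magnitude of the strengths listed in Table 1; they are not measured data from this study.

```python
# Minimal sketch of the SAICAS peeling-strength relation P = F_H / w.
# The 1 mm blade width matches the experimental description above; the
# horizontal forces are hypothetical values used only for illustration.

BLADE_WIDTH_M = 1.0e-3  # 1 mm boron nitride blade

def peeling_strength_kn_per_m(horizontal_force_n: float,
                              blade_width_m: float = BLADE_WIDTH_M) -> float:
    """Return the peeling strength P in kN/m for a horizontal force F_H in N."""
    return horizontal_force_n / blade_width_m / 1000.0

if __name__ == "__main__":
    for label, f_h in (("nano-Sn electrode", 0.11), ("micro-Sn electrode", 0.27)):
        print(f"{label}: F_H = {f_h:.2f} N -> P = {peeling_strength_kn_per_m(f_h):.2f} kN/m")
```

With a 1 mm blade, hypothetical forces of 0.11 N and 0.27 N correspond to 0.11 and 0.27 kN m⁻¹, which is why those example values were chosen.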
We prove that adding a small portion of nano-Sn powder into the micro-Sn composite is highly effective in improving the electrode reversibility, because the proper porosity was introduced in the composite by controlling the binder coating thickness on the Sn particles, utilizing the large surface area of the small particles. Taking the above results including Fig. 6 into consideration, we tried to introduce porosity into the micro-Sn composite layer and a thinner binder coating on the micro-Sn particles by adjusting the drying temperature after slurry coating and by adding conductive carbon powders having different particle sizes. 22 According to previous reports, the binder distribution and electrode porosity are affected by varying drying temperatures, 23 and a suitable choice of conductive carbon is effective for controlling porosity and inducing the formation of a conductive network in the composite layer. 24 First, the effect of the drying temperature after the slurry consisting of micro-Sn powder:3-µm graphite:PANa = 8:1:1 was pasted onto a current collector was examined, with 80°C used as the standard. Figures 7(a)-(c) show cross-sectional SEM images of pristine electrodes dried at each temperature. Clearly, the composite layer becomes more porous as the drying temperature increases. The estimated porosities calculated from the thickness of the composite layer in the SEM images were approximately 10% for the electrodes dried at 40 and 80°C, and 20% for that dried at 150°C. One possible reason why the high drying temperature causes high porosity may be related to the evolution of water vapor bubbles in the slurry pasted on the Al foil during drying at temperatures beyond the boiling point of water. Another reason is that the higher temperature accelerates the drying speed and induces a drastic increase of the slurry viscosity, bringing about a self-formed porous structure as observed in slurries with partially neutralized polyacrylate binder. 22 In Fig. 7(d), the electrode dried at 150°C exhibits better cycle performance than those dried at 40 and 80°C, whereas the capacity decreases after 30 cycles. Since the porosity of the nano-Sn electrode is estimated to be about 40% from the SEM image, a further increase in the porosity would be effective in improving the cycle performance of micro-Sn electrodes. Next, the effect of the size and shape of the carbon particles added as conductive materials on the porosity of the composite layer and the cycle performance was examined by using electrodes consisting of micro-Sn powder:carbon:PANa = 8:1:1 dried at 150°C. In the 1st charge curves, Plateau 1 appears for the electrodes containing graphite and AB, and appears more clearly for the electrode with only AB. As the surface area of AB is larger than that of graphite, the AB powder absorbs the binder in the composite; therefore, the binder film on the surface of the Sn particles should be thinner and the electrode resistance decreases, leading to the appearance of Plateau 1. In Fig. 8(d), rapid capacity degradation after 10 cycles was observed for the electrode containing only AB, which is possibly due to the decrease in mechanical strength 13 caused by the binder absorption by AB. On the other hand, as shown in Figs. 8(a) and 8(b), the electrodes containing 3-µm graphite show good capacity retention even after 30 cycles. These results suggest that the flake graphite works effectively to maintain the mechanical strength of the composite layer and the electrical conduction path throughout the composite layer. From Figs. 
8(d) and 8(e), the electrode containing both AB and flake graphite exhibits the most stable cycle performance among them. We further examined the effect of size of graphite on the electrode performance. The composite electrodes of micro-Sn powder:graphite:AB:PANa = 80:5:5:10 dried at 150°C were prepared by using three different graphites of which particle diameters are 3-, 15-, or 30-µm in average. The electrode containing 15-µm graphite showed better cycle performance among them, and the discharge capacity of 450 mAh g ¹1 is maintained even after 100 cycles (see Fig. S3). We think that the different graphite provides different electron-conduction pathway, porosity, mechanical strength in the composite electrode, leading to the different capacity retentions. Consequently, the optimal size of graphite as conductive additive is found to be 15 micrometers. Optimized micro-Sn electrodes All the results shown above demonstrate the optimal micro-Sn electrode consisting of the mixture of micro-Sn:15-µm graphite:AB:PANa = 80:5:5:10 and dried at 150°C. Figure 9(a) shows a charge and discharge curves of the optimized micro-Sn electrode. Compared to the micro-Sn electrode performance in Fig. 1(b), highly reversible and relatively stable charge and discharge were achieved for the optimized micro-Sn electrode. Figure 9(b) shows a cross-sectional SEM image of the pristine micro-Sn electrode prepared under the optimal condition. It exhibits a porous composite layer like the nano-Sn electrode shown in Fig. 3(c). The moderately porous composite layer can be obtained simply by modulating the surface area of electrode materials and drying temperature. Figure 9(c) compares the capacity retention of the micro-Sn electrodes before and after the optimization. The optimized electrode delivers the high capacity of 676-470 mAh g ¹1 and satisfactory capacity retention over 100 cycles. Figure 9(d) shows that Coulombic efficiency of the optimized micro-Sn electrode at the 1st cycle is lower than that of the micro-Sn electrode due to the irreversible capacity of larger portion of AB, though the same efficiency was obtained for the two electrodes from the 2nd cycle. We successfully improved the electrode performance of the micrometer-sized Sn based upon understanding and analyzing the mechanism of capacity decay by carefully comparing the electrode behavior of nano-and micro-Sn powders. We believe that the battery performance will be further enhanced by comprehensive approach on electrolyte solvent, electrolyte salt, electrolyte additive, and binder. Conclusion We attempted to find out the reasons why the cycle performance of the nano-Sn electrode is better than that of the micro-Sn electrode. As the result, we proved that the cycle performance of Sn-Na reaction is affected to the porosity of composite layer. The porosity of the composite layer is varied by using different carbon additives and Sn powders and different drying temperature of the slurry. The properly porous composite layer is formed by drying at 150°C when PANa and graphite are used as binder and conductive additive, respectively. In addition, we found the optimal particle size of graphite for the Sn electrode, and AB addition as conductive additive influences the thickness of the binder coating and mechanical durability of the composite. Optimal thickness of binder coating is important for balancing the passivation and mechanical strength of the composite electrode. 
By optimization of the electrode preparation condition, we successfully enhanced the micro-Sn electrode performance for the application of Na-ion batteries. These findings and optimization methodology in this study will be applicable to improve composite electrode performance of Li, Na, and K alloying electrodes delivering higher capacity for nextgeneration batteries. Supporting Information The Supporting Information is available on the website at DOI: https://doi.org/10.5796/electrochemistry.18-00069.
2019-04-10T13:12:54.883Z
2019-01-05T00:00:00.000
{ "year": 2019, "sha1": "39773159c138cc14613b790c1afcc48200645f7e", "oa_license": null, "oa_url": "https://www.jstage.jst.go.jp/article/electrochemistry/87/1/87_18-00069/_pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "151824752edaf568f600e382152cdf85d619738d", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Materials Science" ] }
239365345
pes2o/s2orc
v3-fos-license
Appropriating Emotional Distress, Disturbance, and Grief in the Novel Heart of Darkness and the Film Apocalypse Now-A Brief Analysis Joseph Conrad’s Heart of Darkness and Francis Ford Coppola’s appropriation Apocalypse Now are works that purposefully deal with the notion of introspective journeysthat lead to spiritual change. Such change, which can loosely be described as a conscious awakening to the wide-ranging horrors of mankind, occurs within intricately woven mosaics of physical and psychological suffering. In this respect, using colonialism as their nucleuses, both works present multi-layered observations on the struggle to maintain human morality when morality no longer bears guaranteed validity. As protagonists Charles Marlow (Heart of Darkness) and Willard (Apocalypse Now) embark on expeditions that physically and mentally lead to places they never intended to reach, the public issue of colonialism becomes decisively interwoven with the private issue of self-discovery. The two are presented as indissoluble. Introduction Joseph Conrad's Heart of Darkness and Francis Ford Coppola's appropriation Apocalypse Now are works that purposefully deal with the notion of introspective journeysthat lead to spiritual change. Such change, which can loosely be described as a conscious awakening to the wide-ranging horrors of mankind, occurs within intricately woven mosaics of physical and psychological suffering. In this respect, using colonialism as their nucleuses, both works present multi-layered observations on the struggle to maintain human morality when morality no longer bears guaranteed validity. As protagonists Charles Marlow (Heart of Darkness) and Willard (Apocalypse Now) embark on expeditions that physically and mentally lead to places they never intended to reach, the public issue of colonialism becomes decisively interwoven with the private issue of self-discovery. The two are presented as indissoluble. Appropriating Emotional Distress, Disturbance, and Grief in the Novel Heart of Darkness and the Film Apocalypse Now Out of the folly, madness, and horror of collective enterprise -war and colonisation -emerges an existentialist realisation that the world men make for themselves principally stems from the character of individual behaviour: 'We live, as we dream-alone' (Conrad,11). But insofar as Marlow and Willard are to certain extents men of reputation and sincerity, they are nonetheless fallible human beings. Despite their superior status on their respective boats, Marlow and Willard possess a normal capacity for good and evil and the liability to make the same errors as those they denounce in others. Ultimately, that is the intention of their creators. Conrad and Coppola present human beings as they are not as they ought to be. They dictate that each situation demands an individual choice and not a blind adherence to a code: Willard's decision not to call in the air strike following his murder of Kurtz is a suitable example. While Marlow is presented as a fictional English seaman, his character is a narrative channel repeatedly utilised by Conrad for the purposes of manifesting his contemplative forays into self. Marlow appears in several of Conrad's works, beginning with his 1898 autobiographical short story Youth (2016). In the Author's Notes to Heart of Darkness: SecondEdition, while discussing Youth Conrad remarks of Marlow that 'He haunts my hours ofsolitude, when, in silence, we lay our heads together in great comfort and harmony ' (1998). 
Marlow serves as Conrad's muse, his mouthpiece, and on occasion a veil behind which to hide; i.e., Marlow is a personification of the author's inner voice-while also providing a freedom of presence in his works. Conrad's Congo is as such both the historically located site of an imperialist atrocity and a personally-laden psychic phantasmagoria. In this context the story is less about the relationship between Marlow and Kurtz and more about the overarching dichotomy that exists between protagonist and author. While Marlow's Englishness marks him off from his Polish-born creator, Heart of Darkness is littered with autobiographical links. Eightand-a-half years before it was completed, Conrad served as the captain of a Congo steamer, the Roi des Belges (Najder,159). Allan Simmons' biography also connects Conrad and Marlowthrough near identical experiences aboard the ill-fated Palestine, a ship which sank off the coast of Sumatra in March of 1883 (Simmons,81). It is the mental connection between Marlow and Conrad however that prompts the most intrigue. Marlow is, in the main, an introspective character. He is described in Heart of Darkness as a 'Traveller in the country of the mind' (1989).On the surface, he is always observingand judging, but there is an underlying sense that his perceptions serve as juxtaposition for his own inner conflicts-in essence, uneasy apprehensions of colonialism. Gene M. Moor's assertion that we, the reader, 'are in Marlow's mind throughout' the narrative is reified by Albert J. Guerard's claim that the story is about Marlow's 'night journey into the unconscious, and confrontation of an entity within the self' (In Tredell,87). For insofar as 'going into the jungle seems to Marlow like travelling into one's own mind', it is ultimately a reflective journey that Conrad takes vicariously through his protagonist-a journey that develops an ambivalent attitude towards colonialism by its end. The notion of introspection is similarly true of ' Apocalypse Now'. Eleanor Coppola, in her Notes on the Making of the film details the strains and challenges experienced by her husband, and director, Francis Ford Coppola (1995). She comments that Coppola's arduous journey towards the film's completion, which had by 1979 spanned some nine years, slowly started to mirror the journey up the Nung river made by its protagonist, Willard. In the same respect that Willard is gripped by a 'fear of failure, fear of death, (and a) fear of going insane', Coppola's journey is by his own admission comparable; Eleanor notes that there was a point during the 18 months of filming when reality and fiction became entwined: I was watching from the point of view of the observer, not realising that I was on the journey too. Now I can't go back to the way it was. Neither can Francis. Neither can Willard (1995). Successfully, novel and film incorporate various dualisms to address the subject of human nature. The themes of method and madness appear consistently throughout. Moral dilemmas such as the requirements of practical necessity in contrast to pointless, random acts of brutality demand more than a mere acceptance of what is comprehensible in the cognisant world, for 'changes take place on the inside' (in Pallua, 47). Colonel Kurtz, Coppola's ostensibly brutal martinet, is a suitable case-in-point. He is the embodiment of the best and worst of man: a kind of pseudo-ditheistic demigod. 
His desire to bring the light of white civilisation to an impoverished people is inseparable from his inordinate pride and will-to-power. Much has been written about Kurtz. Orson Welles, who appropriated Conrad's novella for radio in 1938, drew explicit parallels between him and Hitler: 'I'm above morality...I'm the first absolute dictator' (Moore, 214). Yet, slowly in text and on film, both Marlow and Willard have become somewhat analogous to him. Marlow (like Kurtz) is put forward 'as an exceptional and gifted human creature' and an 'emissary of light' (Larabee, 60). The extent to which he comes to relate emotionally to Kurtz is suitably surmised when he says, 'I had, even like the niggers, to invoke him -himself -his own exalted and incredible degradation' (Conrad, 2011). As such, Kurtz is both inside and outside colonial power, and colonial jurisdiction. He does, by the end of Conrad's novella, come to embody the state of exception. As Apocalypse Now moves mournfully towards its conclusion, the cinematography and editing figuratively show the transformation of Captain Willard. Vittorio Storaro's use of pictorial lighting creates thematically symbolic shots that reveal the psychological and spiritual bond between Willard and the target of his mission, Kurtz. Both characters are filmed, backed by the haunting non-diegetic synthesised score written by Francis and Carmine Coppola, with their faces half in and out of the shadows. The lighting dictates that the moral conflict between good and evil of each character be seen as one entity, and that if Kurtz be considered devoid of method then so too must Willard: 'What do you call it when the assassins attack the assassins?' (Millius) Jake Horsley, in his work Blood Poets: A Cinema of Savagery 1958-1999, amongst many criticisms of the film's conclusion, takes umbrage at the score. He states that the 'synthesised whines and groans and heartbeats sound more like a soundtrack for a horror film.' (18) However, in view of the terrible journey Willard has made and the dreadfulness he discovers at Kurtz' compound, the avant-garde, Stockhausen-esque soundtrack is more than apt and can be deemed a nuanced approach to genre melding: breaking with custom and convention to explore a specific issue. Notably, there is a definite sense of horror throughout the scene. In a setting dominated by corpses and death, a catalogue of unchecked violence and reciprocal vengeance, Kurtz's tribesmen ritualistically dance around a fire to the mounting rhythms of their drums prior to his death. In conjunction with this repetition, the unanimous and overwhelming malice of the tribe is called forth and then discharged against the sacrificial carabao, which acts as a precursor for what is to follow; Willard as the savage tribe and Kurtz as the sacrificial carabao. By this point, Kurtz stands as the embodiment of US imperialism and as an unintentional homage to Frankenstein's monster. Harold Bloom, in his assessment of Coppola's representation of Kurtz, surmises that he alone is 'a precise definition of horror.' (4) As Apocalypse Now concludes, the film's magnificent opening, which showcases one of the most elaborate uses of lap dissolve in cinematic history, can be understood in greater context. 
By design, Coppola's intention is to present the notion of a terrible re-occurrence throughout the film -one that deals with the notion of Kurtz and the Vietnam war as a composite horror. In this respect, the end is the beginning and viceversa; i.e., a cyclical horror, which is suitably symbolised by dual and suspended metaphors. For example, the upturned, austere, and heavily perspired face of Willard in the film's opening appears as a symbolic reference to 'hot war'. His face also doubles, however, as a symbol of psychological strain born of a world very much turned on its head. Appropriating Emotional Distress, Disturbance, and Grief in the Novel Heart of Darkness and the Film Apocalypse Now -A Brief Analysis In conjunction, Coppola presents a superb dualist allegory. The rotary blades of a U.S. army helicopter are transmuted into a low-angle shot of the blades of a ceiling fan situated in Willard's Saigon hotel room. This imposition infers that the fire of the outside world and the inferno of Willard's personal distresses are fundamentally linked. This notion is reified by Willard's non-diegetic introductory voiceover, which hints that a direct route from sweat-ridden cotton sheets to dew-soaked palm trees is plausible: 'Every time I think I am going to wake back up in the jungle.' (Coppola, 1979) In this sense, Willard is in a constant state of flux: When I was home after my first tour, it was worse. I'd wake up and there'd be nothing. I hardly said a word to my wife until I said yes to a divorce. When I was here, I wanted to be there. When I was there . . . all I could think of was getting back in the jungle. The beginnings of both works may appear on face value to have little to no coterminous characteristics, but the darkness present in Coppola's montage is also metaphorically with Marlow from the outset. In contrast to the light of ships on the London fairway, Marlow muses that 'the monstrous town was still marked ominously on the sky, a brooding gloom in sunshine.' Marlow, having returned affected by the psychological heat of the Congo, is presented as a man in possession of a heightened susceptibility to the darkness of the world, even on the placidity of the Thames. He speaks in collages of terms, referring to 'a running blaze' and the 'darkness' of yesterday; he also draws parallels between historic Roman brutality and events encountered in the Congo: '-death skulking in the air, in the water, in the bush.' Much like Willard, while at home, there is still much of Marlow that resides in the jungle. The miserable nature of Marlow's introduction, however, invokes a curiosity in him and his story. This is similarly applicable in Apocalypse Now; a connection between protagonist and viewer is established when Willard's suffering, emotional instability, and weakness is emblazoned for all to see in the arena of his hotel room. Novella and film create a desire in their respective audiences to understand the basis of each man's torment. The sombre aura that surrounds both men can be defined as the introduction to the darkness of civilised hearts. There is a psychological intimacy at play across both works that underpins the separation of reason from civilized morality, and the fragmentation of the self so typical of the technocrat. These factors ultimately cause Marlow and Willard to favour the nightmare of Kurtz over their home lives. 
While the darkness and the barbarity of man is the nucleus for both stories, questions on the sincerity of man often run parallel or underpin core themes, with emphasis placed on the notion of the lie. Marlow suitably suggests as much when he remarks that 'there is a taint of death, a flavour of morality in lies, -which is exactly what I hate and detest in the world -what I want to forget. It makes me miserable and sick, like biting something rotten would do.' (Wake,44) These words are paralleled in Apocalypse Now when Kurtz and Willard lament the insincerity of war. Willard notes that 'it was the way we had over here of living with ourselves. We'd cut them in half with a machine gun and give them a band-aid. It was a lie. And the more I saw of them, the more I hated lies (Millius). As the two characters chip away at the façade of deceit, their distress becomes increasingly heightened. Unlike Kurtz, in both novella and film, who has come to accept the horrors of war and in doing so displays human bodies as trophies -heads on sticks, cadavers hanging from trees and etcetera -Willard and Marlow are conflicted. In contrast, they leave their dead behind, offering them to the waters that they move on from. Neither man wants to 'make a face of horror', and yet the further they progress the deeper into the darkness they descend. Paradoxically, in doing so the torment of the past and the darkness that awaits are pieced-together in their subconscious minds to reveal an emergent kind of horror in facial form: 'the mind of man is capable of anythingbecause everything is in it, all the past as well as all the future.' (2002) Appropriating Emotional Distress, Disturbance, and Grief in the Novel Heart of Darkness and the Film Apocalypse Now -A Brief Analysis It seems fitting, in this respect, that Coppola asked that Apocalypse Now be viewed not so much as an anti-war statement but as an 'anti-lie' one (Suid,333). To return to the film's opening montage, this is symbolised aptly when an establishing shot of the Vietnamese jungle, presented in full 70mm panorama, is quickly obscured by yellowish-orange napalm smoke. The tone is being established with immediate effect. The mis-en-scene insinuates calculated distortion intended to hide what the viewer sees before them. As Jinim Park asserts, 'The Vietnam War produced a postmodern space where images precede realities and where causes are distorted by effects.' (Park,117) Thus, calculated distortion may be inferred as a reference to the general perception of America's people to the Vietnam War: what were the motives for America's presence, exactly? This distortion is in-keeping with themes present in Heart of Darkness. For example, when Marlow embarks on the journey that will transport Kurtz from the Congo back to Europe, the natives who had worshipped him gather on the shore and open fire. As a result, Marlow remarks that he can 'see nothing for the smoke', as if the horrors of what Kurtz had nurtured and was leaving behind were being hidden. In that moment, the realisation is that Kurtz has not civilized the natives; they have savagised him; i.e., thus in novella and film, Kurtz takes on his respective surroundings, faces up to the magnitude of the lie he is a part of, and goes insane in the process. The mental status of Willard and Marlow by the close of their respective works is somewhat uncertain, but both are deeply afflicted. 
This is best exemplified by Marlow, when he refers to Kurtz' soul in the process of trying to comprehend the disarray of his own conscious thoughts: 'Believe it or not, his intelligence was perfectly clear… but his soul had gone mad. Being alone in the wilderness, it had looked within itself, and, by heavens! I tell you, it had gone mad. I had-for my sins, I suppose-to go through the ordeal of looking into it myself.' A statement that can equally be applied to the Willard's perceptions of Coppola's Kurtz. Conclusion The success of novella and film as portmanteaus of emotional distress lies in their underlying messages. Whether implied or interpreted, these messages are a purposeful and poetic meditation on human existence via a confrontation with the mystery of unknown earth. In the end, it is the land that Marlow and Willard inhabit that is real, and humanity that is the nightmare. The aspects of distress, disturbance, and unhappiness that Marlow, Willard, and of course Kurtz experience are, like the tools of oppression they utilise, manmade. All three naively underestimate the power of the jungle and as they penetrate further into the heart of darkness, their capacity for 'self-control' and 'inborn strength' is tested. In conclusion, to use the words of Marlow to transcend the link between novella and film: 'I confounded the beating of the drum with the beating of my heart'. Here Marlow shows that he has reached the nucleus of his own darkness: 'the farthest point of navigation.' It is no longer with the wilderness that he grapples but the landscape of his conscience. For ultimately, one cannot serve colonialism without being corrupted by it.
2019-09-13T22:20:42.229Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "44c081162b4f9b86008258b5aad5fbb53f4c2da6", "oa_license": null, "oa_url": "https://doi.org/10.21694/2378-9026.19001", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "47e5f7a90f1535230eee8c3c1b8e3a8bc2d1feec", "s2fieldsofstudy": [ "History", "Psychology" ], "extfieldsofstudy": [] }
260733839
pes2o/s2orc
v3-fos-license
Prognostic Importance of Combined Use of MELD Scores and SII in Hepatic Visceral Crisis in Patients with Solid Tumours Objective: To determine the sensitivity of combining the model for end-stage liver disease (MELD) scoring with new inflammatory indexes in determining the priority for liver transplantation and demonstrating its potential usability in solid tumour visceral crisis. Study Design: Descriptive study. Place and Duration of the Study: Dr. Abdurrahman Yurtaslan Oncology Training and Research Hospital, Ankara, Turkiye, from June 2017 to June 2022. Methodology: Patients hospitalised in the medical oncology clinic for hepatic dysfunction were included. The MELD scores of these patients were calculated, and the predictive contribution of the systemic immune-inflammatory index (SII) to prognosis and mortality was evaluated. Results: A total of 295 patients (158 (53.6%) men and 137 (46.4%) women) were included. When compared for primary tumour types, colorectal cancers were the most common with 55 (18.6%) cases, followed by breast cancers at 52 (17.6%), pancreatic carcinoma at 50 (16.9%), and stomach cancers at 40 (13.6%) cases. In the survival analyses of all three MELD scores (MELD-Original, MELD-Na, and MELD 3.0) between the <20 and ≥20 groups, the median Overall Survival (OS) for MELD-Original was 1.44 vs. 0.88 months (p<0.001), for MELD-Na it was 1.64 vs. 0.85 months (p<0.001), and for MELD 3.0 it was 2.16 vs. 1.28 months (p=0.039). In the ROC analysis, the SII parameter cut-off was ≥626.28 for the estimation of mortality, SII sensitivity was 78.7%, and specificity was 100% (p=0.013). Conclusion: Combined use of MELD and SII scores in patients with solid tumours with hepatic visceral crises will be practical, cost-effective, and easy to access, eliminate gender-based disparities, and contribute to clinical follow-ups with objective data. INTRODUCTION Prognostic models determine disease severity, survival probability, treatment trends, and patients' treatment orientation. The model for end-stage liver disease (MELD) score is a prospective chronic liver disease severity scoring system calculated using serum bilirubin, creatinine, and international normalised ratio (INR). It is a prognostic assessment score to predict 90-day survival after transjugular intrahepatic portosystemic shunt (TIPS) and to determine transplant priority in patients awaiting liver transplantation. 1 Cancer-induced inflammation elicits an immune response due to tumour-derived and host-derived mediators and initiates some inflammatory processes. 11,12 These inflammation markers are practical in many solid tumours. However, there are also publications on SII being a robust prognostic marker for patients with hepatocellular and colorectal carcinoma. 13,14 Evaluating MELD scores in combination with SII will enhance the sensitivity and specificity of MELD scoring. The aim of this study was to demonstrate the clinical practicability of inexpensive and easily accessible new biomarkers that can be used in conjunction with prognostic MELD scoring, commonly employed for determining organ transplantation priority, with a higher sensitivity and specificity rate. These scoring systems can be utilised by clinicians not only for prioritising organ transplantation but also for effectively managing visceral crisis during organ failure, displaying high sensitivity and specificity. 
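As an illustration of how the MELD-Original score mentioned above is computed from serum bilirubin, creatinine, and INR, a minimal sketch of the widely cited UNOS-style formula is given below. The coefficients, the lower bound of 1.0 on the laboratory values, and the creatinine cap at 4.0 mg/dL follow common convention; the exact variant and rounding rules applied in this study are not specified here, so the sketch is illustrative only.

```python
import math

# Illustrative sketch of the original (UNOS-style) MELD formula:
# MELD = 9.57*ln(creatinine) + 3.78*ln(bilirubin) + 11.2*ln(INR) + 6.43,
# with laboratory values below 1.0 set to 1.0 and creatinine capped at 4.0 mg/dL.
# The exact variant and rounding used in this study may differ.

def meld_original(bilirubin_mg_dl: float, creatinine_mg_dl: float, inr: float) -> int:
    bili = max(bilirubin_mg_dl, 1.0)
    creat = min(max(creatinine_mg_dl, 1.0), 4.0)
    inr = max(inr, 1.0)
    score = 9.57 * math.log(creat) + 3.78 * math.log(bili) + 11.2 * math.log(inr) + 6.43
    return round(score)

if __name__ == "__main__":
    # Hypothetical laboratory values, not patient data from this study.
    print(meld_original(bilirubin_mg_dl=4.5, creatinine_mg_dl=1.8, inr=2.1))  # about 26
```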
The objective of this study was to determine the sensitivity of combining MELD scoring with new inflammatory indexes in determining the priority for liver transplantation, demonstrating its potential usability in solid tumour visceral crisis. METHODOLOGY The study was designed as a descriptive comparative study. All patients who were admitted and treated for liver dysfunction in the Medical Oncology Service of Ankara Dr. Abdurrahman Yurtaslan Oncology Training and Research Hospital, Ankara, between June 2017 and June 2022 were evaluated. Data were retrospectively collected by scanning the hospital database. In addition to MELD scoring, traditional liver function tests such as transaminase levels, serum direct and indirect bilirubin, and INR parameters were used to assess liver dysfunction. Grading was performed using the National Cancer Institute Common Terminology Criteria for Adverse Events (NCI CTCAE) Version 5.0. Patients with Grade 3 or higher dysfunction requiring hospitalisation were included in the evaluation. MELD scores ≥20 were considered high. 15 Patients were categorised based on their descriptive characteristics and primary cancer diagnoses. A total of 295 patients aged 18 and above who met the inclusion criteria were included in the study. Patients with liver cirrhosis without a cancer diagnosis and those with pre-existing liver dysfunction were not included in the study. The Statistical Package for the Social Sciences program was used for analyses [SPSS for Windows, Version 24.0 (IBM Corp., Armonk, NY, USA)]. Continuous variables were reported using median (interquartile range, IQR) and mean (standard deviation, SD). The authors analysed mortality rates via the Cox regression model. Parameters that were significant in the univariate analysis were entered into a multivariate model. A forest plot of the multivariate Cox regression model was created using Excel. Survival curves were obtained using Kaplan-Meier analysis and compared with the log-rank test. Finally, ROC analysis was performed to determine the cut-off for SII. A p-value of <0.05 was considered significant in all statistical tests. RESULTS The SII parameter was significant in predicting the development of mortality (p=0.013). The area under the ROC curve (AUC) of SII for detecting the development of mortality was 0.862 (95% CI, 0.795-0.928). At a cut-off value of ≥626.28, the sensitivity of SII for mortality prediction was 78.7% and the specificity was 100%. A nearly two-fold difference was present in the Kaplan-Meier survival analyses of all three MELD scores (MELD-Original, MELD-Na, and MELD 3.0) between the <20 and ≥20 groups. The median OS for MELD-Original was 1.44 vs. 0.88 months (p<0.001), for MELD-Na it was 1.64 vs. 0.85 months (p<0.001), and for MELD 3.0 it was 2.16 vs. 1.28 months (p=0.039) (Figure 2). 
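The ROC-based cut-off reported above can, in principle, be derived with a few lines of code. The sketch below uses the definition of SII most commonly applied in the literature (platelet count × neutrophil count / lymphocyte count) and selects a cut-off by Youden's index; because the patient-level data of this study are not reproduced here, it runs on synthetic values and is meant only to show the shape of the analysis.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Illustrative sketch: compute SII and choose a ROC cut-off via Youden's index.
# SII = platelets * neutrophils / lymphocytes (counts in 10^9/L) is the
# commonly used definition; all data below are synthetic, not study data.

rng = np.random.default_rng(0)
n = 295
platelets = rng.normal(250, 60, n).clip(50, 600)
neutrophils = rng.normal(5.5, 2.0, n).clip(0.5, 20)
lymphocytes = rng.normal(1.5, 0.6, n).clip(0.2, 5)
sii = platelets * neutrophils / lymphocytes

# Synthetic mortality labels, loosely correlated with SII for illustration only.
died = (rng.random(n) < 1.0 / (1.0 + np.exp(-(sii - 600.0) / 300.0))).astype(int)

fpr, tpr, thresholds = roc_curve(died, sii)
best = np.argmax(tpr - fpr)  # Youden's J statistic
print(f"AUC = {roc_auc_score(died, sii):.3f}, cut-off = {thresholds[best]:.1f}")
```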
DISCUSSION Prognostic models are important modalities used in clinical practice to gain insight into patient survival, determine treatment approaches, and assess prognosis. MELD scores are a prognostic scoring system used in patients with advanced liver cirrhosis. Obtaining the most accurate prognostic scoring is crucial for patient monitoring and for facilitating clinical practice for healthcare professionals. Consequently, MELD scores are updated with new versions to enhance their sensitivity and specificity. However, there is a lack of sufficient studies regarding their applicability in patients with solid tumours. Therefore, there is a need for studies aimed at improving the sensitivity and specificity of MELD scores by combining them with new markers that can be used in conjunction. The authors aimed to demonstrate the utility of these scoring systems for determining the prognosis of solid tumour hepatic visceral crises. The combined use of SII and MELD scores was found to be reliable, cost-effective, and accessible for determining survival in patients with solid tumours. Ross et al. stated that new studies are needed to confirm the relationship between the MELD score and mortality. 16 In light of all these data, there is a need for a scoring system that can make a meaningful evaluation in terms of survival, determine the prognosis, and predict survival in the hepatic failure visceral crisis that develops in patients with solid tumours. In this study, the evaluability of all three MELD scoring systems currently in use in oncological patients and the contribution of SII to determining prognosis and predicting mortality were studied. It was found that these scoring systems are usable in primary liver tumours and metastases. In the present analysis, the vast majority of the 295 patients with hepatic visceral crisis had primary cancer originating from a non-liver tumour, and only 7 (2.4%) patients were diagnosed with hepatocellular carcinoma. Patients with a high MELD score receive less local treatment due to more severe hepatic dysfunction and require liver failure treatment rather than cancer treatment. 15 Since primary liver cancers usually develop from a background of existing liver damage, there are some reports that these cancers have higher MELD scores and higher complication rates. 17 On the other hand, the contribution of these scores has not been clearly confirmed, as metastatic tumours of the liver have a lower incidence and lower grade of liver dysfunction, according to Frommer et al. 17 Additionally, Teh et al. claimed that MELD evaluated outside the setting of cirrhosis could not accurately predict outcomes. 18 However, liver dysfunction in cancer patients may develop due to several immunologic factors other than liver metastases. 19 Cancer-induced inflammation elicits an immune response due to tumour-derived and host-derived mediators and is also known to initiate several inflammatory processes. Therefore, to elucidate the visceral crisis of hepatic dysfunction, evaluating the liver alone is insufficient in detecting the disease. Clarifying the hepatic visceral crisis requires a holistic patient evaluation with a systematic and multidisciplinary approach. 
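The comparison between the MELD <20 and MELD ≥20 groups reported in the Results section above is the kind of analysis typically carried out with Kaplan-Meier estimates and a log-rank test. A minimal sketch using the lifelines library is shown below; the follow-up times are synthetic and only loosely echo the reported medians, since the original patient data are not available here.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Illustrative sketch of a Kaplan-Meier / log-rank comparison between
# MELD < 20 and MELD >= 20 groups. Follow-up times (months) are synthetic.

rng = np.random.default_rng(1)
t_low = rng.exponential(scale=2.4, size=150)    # hypothetical MELD < 20 group
t_high = rng.exponential(scale=1.2, size=145)   # hypothetical MELD >= 20 group
e_low = np.ones_like(t_low)                     # all deaths observed (no censoring)
e_high = np.ones_like(t_high)

kmf = KaplanMeierFitter()
kmf.fit(t_low, event_observed=e_low, label="MELD < 20")
print("Median OS, MELD < 20:", round(kmf.median_survival_time_, 2))
kmf.fit(t_high, event_observed=e_high, label="MELD >= 20")
print("Median OS, MELD >= 20:", round(kmf.median_survival_time_, 2))

result = logrank_test(t_low, t_high, event_observed_A=e_low, event_observed_B=e_high)
print("Log-rank p-value:", result.p_value)
```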
In the study by Frommer et al., a patient with a MELD score >7.24 had an approximately three-fold increased risk of death within 30 days of metastatic liver resection. This provides an additional important mortality marker that is also valuable in preoperative planning and risk stratification. Likewise, the present study demonstrates that in all three MELD scoring systems, the group with MELD ≥20 had an approximately 2-fold increased risk of death. Chen et al. found that the SII value significantly contributed to overall and progression-free survival in patients with colorectal cancer. The limitations of Chen et al.'s study were that it was a single-centre retrospective study, included only patients with colorectal cancer, and did not include patients who had not undergone radical surgery. 14 Consequently, the question of what a more extensive and comprehensive evaluation including all solid tumour patients would show remained open. In the present study, the prognostic value of the SII parameter was calculated to address this question, including all solid cancer patients with hepatic visceral crises. In the ROC analysis with the SII value, mortality estimation was highly effective with a cut-off value of ≥626.28. The findings of this study have to be seen in light of some limitations. The fact that the majority of the patients were 50 years or older brings with it additional comorbid diseases and an increased burden of medical treatment. It should also be kept in mind that hepatic visceral crisis may occur due to the use of multiple therapies. In addition, the study was retrospective, and homogeneity could not be achieved between the patient groups for all these reasons. However, there are not enough relevant studies on these patients. Multicentre prospective studies with larger numbers of patients are needed in this area in the future. CONCLUSION Calculating MELD scores and SII values in patients with solid tumours who have developed hepatic visceral crisis offers a practical, low-cost, easy-to-access, and objective evaluation that will contribute to clinical follow-ups. In addition, it will eliminate situations such as gender-based disparities that affect mortality estimation. ETHICAL APPROVAL: Ethical approval was obtained from the Health Sciences University, Dr. Abdurrahman Yurtaslan Ankara Oncology Training and Research Hospital's Ethical Committee on 08.24.2022 with decision number 2022-08/2019. PATIENTS' CONSENT: Patients' consent was waived as this study was conducted retrospectively. Table I: Demographic and clinical features of the patients. *: Model for End-Stage Liver Disease.
2023-08-10T06:17:47.574Z
2023-08-01T00:00:00.000
{ "year": 2023, "sha1": "f6bf249005fc4024d93e72ae0a7a1f65b12fab81", "oa_license": null, "oa_url": "https://www.jcpsp.pk/oas/mpdf/generate_pdf.php?string=cS9vOXhya0FkNUNlZ290WE9NK2s4QT09", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "18b94030992702c010ab8797eeeba90a3587d908", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
266361206
pes2o/s2orc
v3-fos-license
The Effects of Introducing a Harm Threshold for Medical Treatment Decisions for Children in the Courts of England & Wales: An (Inter)National Case Law Analysis The case of Charlie Gard sparked an ongoing public and academic debate whether in court decisions about medical treatment for children in England & Wales the best interests test should be replaced by a harm threshold. However, the literature has scantly considered (1) what the impact of such a replacement would be on future litigation and (2) how a harm threshold should be introduced: for triage or as standard for decision-making. This article directly addresses these gaps, by first analysing reported cases in England & Wales about medical treatment in the context of a S31 order, thus using a harm threshold for triage and second comparing court decisions about medical treatment for children in England & Wales based on the best interest test with Dutch and German case law using a harm threshold. The investigation found that whilst no substantial increase of parental discretion can be expected an introduction of a harm threshold for triage would change litigation. In particular, cases in which harm is limited, currently only heard when there are concerns about parental decision-making, may be denied a court hearing as might cases in which the child has lost their capacity to suffer. Applying a harm threshold for triage in decisions about withholding or withdrawing life-sustaining treatment might lead to a continuation of medical treatment that could be considered futile. Supplementary Information The online version contains supplementary material available at 10.1007/s10728-023-00472-w. Introduction The court cases of Alfie Evans and Charlie Gard [1,27] that decided the withdrawal of their life-sustaining treatment drew world-wide attention.Interventions from the US President [40], the Pope [13], and an 'army' of supporters storming the hospital [12] turned usually private disagreements between parents and clinicians into global spectacles. Whilst conflicts about medical treatment for seriously ill children have been litigated in the courts of England & Wales for many years, following these high profile cases parents [66] and academics [18,36,65] have argued for a new approach; in particular, for the replacement of the best interests test by a harm threshold.This argument has gained traction following the appeal by the legal team representing Charlie Gard's parents, who proposed that when alternative medical treatment is available parental decision making should prevail unless their decision causes significant harm to the child [38]. In academia several arguments have been put forward in favour of an introduction of a harm threshold.Some relate to criticism of the best interests test, i.e. that it is ill-defined or unreasonably demanding [22,31,63], claims I do not discuss here.Others strive for different outcomes, such as increased parental discretion [63,66] and a reduction of cases decided in court [43].However, what is lacking from the discussion of outcomes is evidence about what the effects of introducing a harm threshold in the courts of England & Wales would be. 
Section 31 (2) of the Children Act 1989 (hereafter referred to as the Children Act) contains a harm threshold that must be crossed before a care or supervision (S31) order can be made. The aim of including the harm threshold in the Children Act was explicitly to safeguard parents from unwarranted State intervention. According to the now Lady Hale, the harm threshold was 'designed to restrict compulsory intervention to cases which genuinely warrant it' [37]. Whether the harm threshold in the Children Act has achieved its goal is difficult to say. In England, the proportion of children subject to S31 orders continues to rise despite its enactment. Their number has more than doubled between 2007 and 2017 [34], which might suggest that the harm threshold is not sufficiently protective. However, many factors may contribute to the rise of S31 orders, of which the extent of protection offered by the harm threshold is only one. Amongst those in favour of introducing a harm threshold there is some disagreement about how a harm threshold should be introduced. In England & Wales court decisions about children are a two-step process: the first step is a triage decision that answers the question whether the case can be heard in court, and the second is the actual court decision [19]. Some argue for the replacement of the best interests test by a harm threshold for triage [25] whilst others argue for a replacement of the standard to be applied in decision-making [21]. The distinction is important because triage tests by their nature are 'rough and quick' [25] whereas a substantive determination whether the threshold is crossed ideally involves a thorough and holistic assessment. In view of recent clinical, legal, and academic developments, this paper aims to provide much-needed insight about the expected effects of introducing a harm threshold in the courts of England & Wales either for triage or as a standard for decision-making. To do so, I analyse and compare case law regarding medical treatment decisions for children in England & Wales with case law in the Netherlands and Germany, two jurisdictions that use a harm threshold. Based on the investigation, I conclude that replacing the best interests test with a harm threshold is unlikely to increase parental discretion or reduce the number of court cases but will introduce new challenges. 
The article starts with a short description of the methodology, followed by an analysis of the legal context in the three jurisdictions in which the harm threshold operates. This analysis generated two main findings. First, the lack of a legal equivalent of the inherent jurisdiction in the Netherlands and Germany determines that only cases in which parents refuse medical treatment can be litigated. Second, only courts in England & Wales scrutinise parental care against an objective standard in addition to evaluating the significance of harm to the child. The latter finding is confirmed in the subsequent analysis of medical treatment decisions in the context of S31 procedures. The analysis and comparison of national and international case law finds that triage decisions using a harm threshold might prevent litigation of cases in which the harm is limited, but because of their rarity the effect thereof will be small; these cases currently only reach the courts when there are concerns about parental decision-making. When the harm threshold is used as a standard for decision-making the outcome of cases will remain largely unchanged. An application of the findings to decisions about withholding and withdrawing life-sustaining treatment in England & Wales concludes that when the harm threshold is used as a standard for decision-making the outcome of cases will likely not change. However, introducing the harm threshold for triage could prevent cases reaching the court when harm is limited because the child has lost their ability to suffer. The harm threshold introduced for triage may increase the number of children continuing on life-sustaining treatment considered futile by their clinicians. Methods The investigation analyses both national case law about medical treatment decisions for children in the context of S31 orders, thus using a harm threshold for triage, and functionally compares [35] national and international case law, thus best interests decisions with harm threshold decisions. The comparator jurisdictions, the Netherlands and Germany, are suitable because both have used a harm threshold for decisions about children for more than a century and their societies are broadly similar to England & Wales with regard to views on medical ethics and the diversity of the population. The structural difference (the Netherlands and Germany operate under Roman law whilst England & Wales operate a common law system) is of minor importance in this context. Decisions regarding children in the three jurisdictions rely on statutory law, and in England & Wales the case law ruling medical treatment decisions for children in court is now well settled [28]. For England & Wales, cases were identified by a search of the legal databases Lexis Library (www.lexisnexis.com) and BAILII (www.bailii.org) with the search terms 'medical treatment', 'child' and 'minor'. For the Netherlands, www.rechtspraak.nl and for Germany www.rechtsportal.de and FamRz (www.gieseking-digital. 
were searched with the same search terms in Dutch and German respectively. Searches were last performed in July 2022. Judgments were eligible for inclusion when (1) the judgment decided a dispute regarding medical treatment between parents and medical professionals, (2) the case was heard between 1st January 1990 and 1st July 2022 and (3) the parents (not the child) were the family decision makers. Notably, in the Netherlands and Germany clinicians or hospitals cannot directly apply to the courts but must alert their local child protection organisation, which then takes on responsibility for litigation. The extent of clinicians' involvement in litigation can thus be slightly more ambiguous than in England & Wales.

Legal Context

The most important difference in legal context between England & Wales on the one hand and Germany and the Netherlands on the other is the lack of a legal equivalent of the inherent jurisdiction in the latter jurisdictions. In England & Wales the powers of the court are derived from two sources: statutory power from parliament and the inherent jurisdiction originating in the duties of the Crown to protect its citizens [30]; the power of the courts in Germany and the Netherlands is derived from statutory law only.

With the statutory powers as outlined in the Children Act the courts can issue a so-called Section 8 order, which usually takes the form of a specific issue order. With a specific issue order the court can either prohibit or give consent for specific medical treatment. The inherent jurisdiction also allows the courts to make declarations, i.e. that a treatment proposal is lawful and/or in the best interests of a child [30]. Similar to England & Wales, courts in the Netherlands and Germany can prohibit or give substituted consent for medical treatment. However, there is no statutory law in place in either jurisdiction that allows courts to make declarations about the lawfulness of a particular treatment proposal for children. Neither clinicians nor parents can approach the courts to arbitrate a conflict about a proposal to withdraw or withhold life-sustaining treatment unless the parental decision can be framed as exposing the child to significant harm.

The lack of a legal equivalent of the inherent jurisdiction in the Netherlands and Germany is important because it determines the type of case that can be arbitrated in court.

The courts in England & Wales decide about withholding and withdrawing medical treatment and parental refusals of medical treatment in almost equal numbers [48], whereas reported Dutch and German cases are about parental refusal of medical treatment or, rarely, about prohibition of intended treatment. In summary, the three jurisdictions are alike in that they can rely on their courts for resolution of conflicts about parental refusal of medical treatment. Where parents and clinicians disagree regarding withholding or withdrawing treatment, neither the Dutch nor German courts can decide unless the parental decision crosses the harm threshold.

Comparing Harm Thresholds

Section 31(2) of the Children Act 1989 contains the harm threshold for triage: 'A court may only make a care order or supervision order if it is satisfied
(a) That the child concerned is suffering, or is likely to suffer, significant harm; and
(b) That the harm, or likelihood of harm, is attributable to
(i) The care given to the child, or likely to be given to him if an order were not made, not being what it would be reasonable to expect a parent to give to him; or
(ii) The child being beyond parental control.'

The Act explains in Section 31 (9) that: 'health means physical or mental health; and development means physical, intellectual, emotional, social or behavioural development'.

Section 31 (10) indicates but does not define what level of harm crosses the threshold: 'Where the question of whether harm suffered by a child is significant turns on the child's health or development, his health or development shall be compared with that which could reasonably be expected of a similar child'.

Three factors in the above definition of the harm threshold are salient to decisions about medical treatment for children, namely 'significant harm', 'reasonable parent' and 'similar child', and these are below compared with the approach in Dutch and German law.

Significant Harm

Neither Dutch nor German statutory law offers a description of 'significant harm'. The courts in the three jurisdictions however accept that children may be disadvantaged by parental decisions. For England & Wales, Mr Justice Hedley (as he then was) stated '[…] that society must be willing to tolerate very diverse standards of parenting, including the eccentric, the barely adequate and the inconsistent. It follows too that children will inevitably have both very different experiences of parenting and very unequal consequences flowing from it' [58]. This is echoed by the German Federal Constitutional Court: 'It is not the task of the State to ensure an optimal development of the capabilities of the child against the will of the parents. The constitution has left the power of decision-making with regard to their child to the parents. It is accepted that children may be disadvantaged due to the decisions of their parents' [17].

The three jurisdictions thus agree that the significance of harm is to be determined in court based on the specifics of each case. However, which factors can be taken into account in that determination differs across the jurisdictions. Similar to the courts in England & Wales [60], German law allows for a consideration of wider harm than merely medical considerations, namely 'the physical, mental or spiritual well-being of the child' [67]. In contrast, the Dutch harm threshold, likely due to its placement in the Health Care Act, only allows considerations about the health of the child to be taken into account [20]. The factors that courts can take into account in assessing whether the harm threshold is crossed are important because they determine the outcome in individual cases.

Parental Decision-Making

The three jurisdictions agree that the actual or future harm must be due to parental decision-making, but again do so differently. The German law speaks of parents that are either 'not willing or not able to avoid the harm' [67]. Similarly, Dutch law focusses on the harm to be avoided and merely states that a parental refusal of medical treatment that is necessary to avoid significant harm to the child can be overruled by the court [20]. In contrast, the Children Act speaks of parental care that must not fall below the standard of that of a 'reasonable parent'. This has been interpreted in court as an objective standard of parental care [41]. The focus of English judges is thus not exclusively on the significance of harm but also evaluates parental decision-making.
Similar Child

Neither Dutch nor German law explicitly considers the child in question, whereas the Children Act directs the judge to compare the child to a 'similar child' in order to determine whether the harm is significant. Judges have interpreted this clause to mean a child with similar attributes such as sex, age and ethnic origin. For example, Munby J commented: 'the court must always be sensitive to the cultural, social and religious circumstances of the particular child and family' [2]. Not all attributes of children can be compared, but in medical treatment decisions we can expect the child to be compared to a child with a similar health condition.

In summary, the harm thresholds in the three jurisdictions are similar with regard to the opacity of the term 'significant harm' but differ in which considerations judges can take into account in the determination whether it is crossed. In addition, whilst Dutch and German law focusses on the significance of harm, the law in England & Wales also contains an objective standard for parental care.

Case Law Analysis

The case law analysis is divided into two parts. First I will investigate the effects of the current harm threshold for triage by analysing the cases about medical treatment in the context of S31 orders reported in England & Wales. This is followed by a comparison of case law in England & Wales with Dutch and German case law in order to distil characteristics of cases that might be denied a court hearing in England & Wales after introducing the harm threshold for triage. The effects of the introduction of a harm threshold as standard for decision-making, thus relating to the outcome of individual cases, are also investigated by comparing national case law with international case law. In both analyses it is assumed that the current harm threshold in the Children Act would be introduced in England & Wales.

Compared Cases

The search identified eight judgments in which courts decided about medical treatment within the context of a S31 order. Details of the cases are summarised in Table 1 in the supplementary data. The search further identified 83 cases in England & Wales using the best interests test. To allow comparison, this analysis includes only reported cases in which a parental refusal of proposed medical treatment is litigated. All included cases are summarised in Table 2 in the supplementary data.

As Table 2 shows, 25 cases were heard in England & Wales, 13 in the Netherlands and 10 in Germany. No conclusions can be drawn about the frequency with which these cases are heard in the courts in the respective jurisdictions on the basis of these numbers; especially in Germany, district court cases are seldom reported. Unfortunately, the three jurisdictions do not report the actual number of court decisions about medical treatment for children.

Harm Threshold for Triage in England & Wales

As mentioned above, the question to be answered in triage is whether the case can be decided in court. A case that does not cross the harm threshold leaves the decision to those with parental responsibility. Importantly, jurisprudence about the application of the harm threshold in the Children Act has developed in the context of child protection. Below I discuss relevant case law determining the application of the harm threshold and the identified S31 orders about medical treatment in more detail. Two factors were found to be important.
Procedure

In S31 orders it is the task of the applicant, in all identified cases the Local Authority, to prove, on the balance of probability [59], that the harm threshold is crossed. To do so the Local Authority submits a 'threshold document' to the court in which they set out their evidence. For an applicant it is advantageous to present as much evidence as possible in order to maximise the chance that the harm threshold is considered crossed. Due to the objective standard against which parental care is measured, there is a focus on parental behaviour and characteristics that can be presented as parental failings. In the included cases Local Authorities have presented relatively trivial evidence, i.e. missed medical appointments [52,57] and previous occasions when parents did not follow medical advice [3,56,57], as well as more serious concerns such as 'inappropriate' behaviour towards healthcare professionals [56], evidence about parental abilities and/or mental health [14,42,46,53,57] and parental personal history and relationships [14,53,57]. This scrutiny of parental failings is lacking in court decisions about medical treatment using a best interests test; whilst parental reasons are scrutinised, parental failings are not. Due to the emphasis on parental failings, the content of the threshold document can be experienced as both intrusive and adversarial, which may negatively impact future relationships between the family and the clinical team when the applicant is an NHS Trust.

The Role of Medical Evidence

In order to decide whether the harm threshold is crossed, judges must first establish the facts. In decisions about medical treatment the medical evidence about the health condition of the child and the benefits and harms of proposed treatment is crucial in the establishment of those facts. Whilst the traditional deference of the court to medical experts may have abated to some extent [15,62], it is still undeniably true that doctors are in a much better position to provide medical evidence than parents. More so as the decision that the harm threshold is crossed is accepted to be a value judgement [61]. Indeed, in all identified cases the medical evidence was accepted and thus the harm threshold considered crossed. That includes decisions in which the medical evidence is an opinion rather than fact-based. In Re R [57], for example, the parents preferred to tube-feed their child by blending regular food (blended diet) rather than using commercially prepared feeds. Despite a lack of evidence that a blended diet is inferior, the judge decided the child should be fed with commercially prepared feeds given the preference of the treating doctor. The same importance is attached to medical opinion when establishing future harm. To establish future harm the applicant has to show that future harm is a 'real possibility' [61]. In M-W (a child) [46] an appeal court directed a case to be re-listed in the High Court, to enable a medical opinion regarding the child's future psychological and emotional development on the basis of maternal characteristics to be taken into account when no harm was demonstrable at the time of the ruling. The approach was confirmed by the Supreme Court a few years later when, in the absence of demonstrable harm, a care and adoption order was approved against parental wishes based on parental characteristics only [55].
In summary, in applying the harm threshold for medical treatment decisions in S31 orders, factors come into play that are not present in decisions using the best interests test. In the context of S31 orders the threshold decision also takes into account parental characteristics and behaviour interpretable as parental failings, resulting in a more adversarial procedure. In addition, medical evidence plays a crucial role in the determination whether the harm threshold is crossed, also when it is based on opinion rather than scientific facts, putting parents at a considerable disadvantage.

Harm Threshold for Triage in Medical Treatment Decisions

For decisions about children the courts in England & Wales currently use a best interests test for triage. In this section I investigate (1) whether introducing the harm threshold will deny cases access to court and (2) if so, the characteristics of those cases.

Goal of Medical Treatment: Saving Life

That the harm threshold will be crossed, and the case thus heard in court, seems all but certain when the child's life is at stake. That is important because medical treatment intended to prevent loss of life is the topic of 70% of court decisions across the jurisdictions (32/46) and 85% (21/25) in England & Wales. However, in establishing whether the harm threshold is crossed, judges must also take into account the likelihood that medical treatment prevents death. Refusing treatment that has little chance of avoiding the death of the child should not cross the harm threshold. Two such cases have been decided in England & Wales, which I will discuss in more detail below.

In the first case the proposed treatment was experimental and had an estimated chance of curing the child's leukaemia of 10% [29]. The case was brought to court because the parents disagreed; the mother favoured treatment given its life-saving potential, the father declined because of its burdens. In view of the low likelihood of cure and the reasonable arguments of both parents, similar cases might in future not cross the harm threshold and thus be denied a court decision. However, when parents cannot agree and the issue is a serious one, there is an expectation that clinicians apply to the court for a decision [5]. Should the introduction of the harm threshold for triage deny cases with limited harm and reasonable but disagreeing parents a court hearing, those parents might be forced to litigate against each other instead.

In the second case the NHS Trust requested the court's consent for surgery to enable continuation of haemodialysis for a child with kidney failure. The clinicians agreed that the choice between active treatment and palliative care for this particular child was evenly balanced [44]. A parental decision to refuse consent would thus seem reasonable. However, the parents based their refusal exclusively on their religious views, favouring prayer therapy instead. Likely, the clinicians' unease about parental decision-making led to the court application. For now it remains an open question whether a parental preference for unproven treatment would fall below the standard of the 'reasonable parent' when proven treatment has limited chance of success. Should such cases in future be denied access to the court, an application contesting parental capacity might be made in some cases.

Goal of Medical Treatment: Avoiding Harm

In 16 cases across the jurisdictions, and 4 in England & Wales, consent was sought from the court to allow or prohibit non-life-saving treatment.
In all three jurisdictions, cases in which treatment sought to prevent the loss of vision or hearing have been litigated, including in a S31 order [14]. Similarly, cases about proposed medical treatment for children with severe mental health issues have been heard in Germany, the Netherlands and in a S31 order [62]. Such cases will thus continue to be heard in England & Wales should the harm threshold for triage be introduced.

However, harm to the child is not always imminent. In 6 cases litigated in the Netherlands and Germany a variety of medical treatment was proposed aiming at preventing harm (mostly) experienced in adulthood, such as short stature and cardio-vascular disease. Should similar cases arise in England & Wales, the harm threshold in itself should not deny such cases a court hearing.

In one of the remaining two cases about non-life-saving treatment decided in England & Wales, the parents did not accept the diagnosis of incurable cancer and therefore refused palliative treatment [7]. Given that the diagnosis was well established, the parental reasons for their refusal to consent might fall below the standard of reasonable parents. Moreover, the child was in considerable pain. Taken together, this case would have crossed the harm threshold, and similar cases can be expected to be heard in court.

The second case is more controversial. The parents refused to consent to a brainstem test, to determine whether the child was legally dead, as they feared that the test might further injure their child [11]. The child was thought by clinicians to have lost the capacity to suffer and thus would not himself suffer from continued intensive care treatment. Under those circumstances it seems possible and perhaps even likely that the harm to the child posed by continued medical treatment would not be significant enough to cross the harm threshold, and similar cases might not be heard in court.

In summary, after an introduction of the harm threshold for triage most cases about life-saving treatment will continue to be heard in the courts of England & Wales. Likely exceptions are cases in which the chance of successful treatment is low. So far such cases are rare and only lead to court applications when there are concerns about parental decision-making. Most cases litigating non-life-saving treatment would also still be heard, given the extent of potential harm in cases litigated so far. However, cases in which the child is no longer capable of suffering may be denied a court hearing.

Harm Threshold as Standard for Decision-Making

In this section I compare the outcome of cases when either the harm threshold or the best interests test is used as standard for decision-making.

Goal of Medical Treatment: Saving Life

In 30/32 cases the courts provided substituted consent for proposed medical treatment intended to be life-saving. In more than 30 years of litigation across the three jurisdictions only twice did courts approve parental refusal of life-saving treatment.

In Re T [54] the appeal court in England & Wales decided it was not in the child's best interests to have a life-saving liver transplant. In the specific circumstances the transplant would involve the mother being forced to stay in England to care for a child with substantial medical needs due to a procedure she opposed given the burdens of treatment, whilst the father lived and worked abroad. Re T does however seem to be an outlier, as equally invasive treatment also requiring considerable familial input, e.g.
a bone marrow transplant, has since been allowed against the wishes of the parents [49]. Similarly, courts in Germany and the Netherlands have authorised intensive treatment, i.e. chemotherapy, demonstrating that a harm threshold does not prohibit intensive medical treatment against parental wishes when potentially life-saving.

In the second case a 4-year-old German child in 'Wachkoma' (a persistent vegetative state), but suffering from severe, painful muscle spasms following a hypoxic incident, resided in a rehabilitation centre. Given that the child was not expected to regain awareness and the painful symptoms could not be relieved except by deep sedation, the parents decided to stop her clinically-assisted feeding and allow her to die at home under the supervision of a palliative care specialist. Her doctors disagreed. The lower court declared the harm threshold crossed [4]. However, the appeal court found that, given the prognosis and the inability to treat her pain other than by sedation, the foreseeable death of this particular child did not cross the harm threshold [50].

The above demonstrates that, regardless of the standard used, for parents to obtain the court's approval of their refusal of life-saving medical treatment is a very high bar indeed. In the two cases in which the death of the child was allowed, the exceptional circumstances determined the court's ultimate decision. The second case also demonstrates that health care professionals are uncomfortable when parents decide, ahead of them, that their child should be allowed to die. Such cases have been discussed in the literature [16,64] and in England & Wales have been the subject of a S31 order [57].

Goal of Medical Treatment: Avoiding Harm

In most cases, the courts also allow medical treatment that is not intended to be life-saving. In 12/14 cases (85%) in which parents refused proposed medical treatment not intended to be life-saving, the courts provided substituted consent. Both cases in which the court declined to do so were decided using a harm threshold.

In Germany, deaf parents refused the implantation of a cochlear device in their youngest child [6]. In a reasoning very similar to that in Re T [54], the German court balanced the impact on family life, the child spending many hours outside the family home in order to acquire spoken language after implantation of the device, against the harm of not being able to hear, and concluded that the harm threshold was not crossed. The case demonstrates that a court decision using a harm threshold can be just as holistic as one using a best interests test. However, whether a holistic approach is taken does depend on how the harm threshold is defined. Should the German case described above have been heard in the Netherlands, the court might well have provided substituted consent for cochlear implantation, because the Dutch harm threshold only allows consideration of the health of the child. Moreover, that factors can be taken into account does not mean that they will be taken into account. For example, in circumstances in which parents were thought not to be able to afford the implantation of a cochlear device, a court in England allowed a care and adoption order [14] against their wishes. The judgment takes into account medical evidence but not the potential harm caused by the child losing contact with her birth family.
In the second case, a Dutch court was asked to allow a renal transplant for a child suffering from chronic renal failure [23]. Whilst the child's health was stable on dialysis, the main aim of the transplant was to prevent serious cardio-vascular problems in adulthood caused by metabolic dysregulation inherent to renal failure. Because the harm would be experienced in adulthood and treatment could be deferred to when the child could decide, the court considered the harm threshold not crossed. Given that part of the harm does occur in childhood, it is likely that the best interests test would have allowed the renal transplant to go ahead in childhood. The argument that a decision can be deferred until children can decide for themselves has also been used in Germany to prohibit cross-sex hormone treatment for gender dysphoria. However, deferring treatment until the child can decide may be less convincing to the courts in England & Wales. In 2013 a court approved booster immunisation against mumps, measles and rubella (MMR) for a healthy 15-year-old against her wishes [24]; a decision that could have been deferred until she reached adulthood with minimal or no harm.

In summary, in the vast majority of cases in which parents and clinicians disagree about proposed treatment the outcome is similar regardless of the standard used in court. In most cases the courts provide substituted consent. The similar outcomes are likely due to the severity of preventable harm in the cases heard in the courts across the three jurisdictions.

Taken together, the results indicate that introducing the harm threshold either for triage or as standard for decision-making will not substantially limit the number of future court applications. Nevertheless, the investigation found three characteristics of cases for which litigation might change. When introduced for triage, two types of case could be denied a court hearing: (1) cases in which harm is limited and there are concerns about parental decision-making and (2) cases in which parents refuse treatment for a child not capable of suffering. Only under these specific circumstances will the introduction of the harm threshold likely increase parental discretion. An introduction of the harm threshold as standard might change the outcome of cases when treatment can be deferred to adulthood, increasing the autonomy of the child rather than that of their parents.

Parental discretion might be further increased should a harm threshold be introduced that does not contain an objective standard of parental care. An introduction of a harm threshold for medical treatment decisions that directs the courts to focus on the significance of harm only might change the outcome of cases in which the harm is limited. For example, in the Netherlands a mother based the refusal of a renal transplant on her belief that her child would be miraculously cured [23]. In England & Wales it is not uncommon for parents to place their faith in a 'miracle cure' [48] or have other faith-based reasons [45], which currently are unlikely to take precedence over medical evidence. A harm threshold that only takes into account the harm and not parental reasoning may go some way towards broadening parental discretion when harm is limited.
Withholding and Withdrawing Treatment

So far, this investigation evaluated decisions in which parents refuse medical treatment for their child. In the next section I attempt to evaluate what the effects of the introduction of the harm threshold might be for cases about withholding and withdrawing life-sustaining treatment.

Harm Threshold for Triage

The investigation has demonstrated that cases that may be denied a court decision are those in which the harm ascribed to parental decision-making is limited and in which the child does not suffer. A common argument in decisions about withholding and withdrawing medical treatment is that the child is suffering or would suffer treatment-related harm. However, in some recent cases it is accepted that the child does not have the capacity for suffering [1,9,11,39]. In the absence of suffering, the harm done to the child by continuing treatment may be insufficient to cross the harm threshold, and such cases might thus be denied a court hearing.

Harm Threshold as Standard for Decision-Making

A substantive decision using a harm threshold as standard would involve a more holistic approach than when used for triage. Arguments could include complex concepts such as dignity, and could balance the benefits and harms of continuing life against death. The question whether a child can suffer when she is in a persistent vegetative state has recently been discussed in the case of Pippa Knight [33]. The legal team representing the mother argued that medical treatment did not harm the child because she did not have the capacity to suffer. The judge found that the daily treatment-related interventions were nevertheless burdens on the child that should be taken into account [32]. The outcome of such considerations depends on the balance between the level of awareness and the severity of the treatment burden. In the case of Tafida Raqeeb, a child needing mechanical ventilation but who was otherwise stable, MacDonald J found that the child's minimal awareness was sufficient to continue medical treatment given the low burdens thereof [8].

It has been suggested that loss of dignity should be considered a harm when continuing medical treatment is futile [65]. Indeed, 'dying with dignity' is sometimes referred to in judgements about withdrawal of treatment [26]. However, so far judges have declined to define the burden of (futile) medical treatment in terms of dignity because of its subjectivity [10,33]. If the concept of dignity is not taken into account, suffering and thus harm will be defined by the balance between the burden of treatment and the benefits of continued life, as it is now. The introduction of the harm threshold as standard for decision-making should not substantially change these decisions.

Conclusion

This article investigated the effects of an introduction of the harm threshold, either for triage or as standard, for medical treatment decisions for children in the courts of England & Wales. By analysing and comparing the legal context and national, Dutch and German case law, I found that an introduction of a harm threshold similar to the one in the Children Act will not broaden parental discretion or reduce the amount of litigation but will introduce new challenges. Two factors are important drivers of this conclusion: the extent of harm in cases that currently reach the courts and the crucial role of medical evidence in establishing the relevant facts of the case.
When introduced as standard for decision-making, the outcome of cases can be expected to be by and large the same. However, when introduced for triage, the harm threshold likely excludes cases from court in which (1) harm is limited because the treatment is unlikely to succeed or (2) the child has lost the capacity to suffer. In the first case the effect will be minimal, as such cases are very rare and only litigated when there are additional concerns about parental decision-making. However, when the child is no longer capable of suffering, the harm threshold used for triage could prevent cases reaching the courts. Applied to cases in which withdrawing life-sustaining treatment is the topic of litigation, this may lead to more severely compromised children continuing life-sustaining treatment when that treatment could be considered futile.

The investigation leaves open the question of which legislative steps would increase parental discretion for medical treatment decisions in England & Wales. It may be useful to first attempt to reach consensus about the specific circumstances in which broader parental discretion is desired before specific legislative steps are considered. Currently, it is argued that parents should be the decision-makers when alternative treatment is available [65] or should be allowed a period of time to arrange alternative treatment when the court allows withdrawal of life-sustaining treatment against their wishes [51], but consensus on either proposal has yet to be reached [47]. Notably, the introduction of the current harm threshold does not guarantee broader parental discretion in the first circumstance, as the decision will depend on the factors taken into account. In the case of Charlie Gard, for example, a court decision may have turned on the question whether the side effects of the treatment proposed by the parents should have been added to the already existing burden of treatment due to the mechanical ventilation or seen in isolation. When added to the existing burden, the harm threshold might have been crossed.

In conclusion, an introduction of the harm threshold, either for triage or as standard for decision-making, is unlikely to substantially increase parental discretion other than under very specific circumstances and thus will not reduce the number of cases litigated in the courts of England & Wales.
2023-12-20T06:17:38.242Z
2023-12-18T00:00:00.000
{ "year": 2023, "sha1": "72ac44bb8466416fac1f19506ea5bb4206187437", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "ac4070743301a28caff40c49cd09cc64baaf6b5a", "s2fieldsofstudy": [ "Law" ], "extfieldsofstudy": [ "Medicine" ] }
15907454
pes2o/s2orc
v3-fos-license
On locality of Generalized Reed-Muller codes over the broadcast erasure channel

One-to-many communications are expected to be among the killer applications for the currently discussed 5G standard. The usage of coding mechanisms impacts broadcasting standard quality, as coding is involved at several levels of the stack, and more specifically at the application layer, where rateless, LDPC, Reed-Solomon codes and network coding schemes have been extensively studied, optimized and standardized in the past. Beyond reusing, extending or adapting existing application layer packet coding mechanisms based on previous schemes and designed for the foregoing LTE or other broadcasting standards, our purpose is to investigate the use of Generalized Reed-Muller codes and the value of their locality property in their progressive decoding for broadcast/multicast communication schemes with real-time video delivery. Our results are meant to bring insight into the use of locally decodable codes in broadcasting.

Amira Alloum is with Nokia Bell Laboratories, France; Sian-Jheng Lin is with the School of Information Science and Technology, University of Science and Technology of China (USTC), China and the Electrical Engineering Department, KAUST; Tareq Y. Al-Naffouri is with the Electrical Engineering Department, King Abdullah University of Science and Technology (KAUST).

I. INTRODUCTION: MOTIVATION AND RELATED WORK

In various broadcasting standards, reliable communication is achieved thanks to the use of coding mechanisms at different levels of the network layers, either for overcoming the channel impairments or for recovering lost symbols and packets. At the physical layer, incremental redundancy HARQ mechanisms have been thoroughly studied using LDPC codes [1], and also standardized in UMTS and LTE using Turbo codes [2]. At upper layers, historical Reed-Solomon (RS) codes were first used in one-to-many communication standards such as DVB-SH thanks to their MDS property. After the advent of LT and Raptor codes [3], rateless codes have been widely adopted as a reference for upper layer forward error correction (FEC) and specified for many broadcast standards like DVB-SH and LTE-eMBMS. Their success is technically motivated by their universality, as they are optimal over all erasure channels, discarding any constraint of feedback. Another line of work proposed the use of network coding for packet loss concealment in broadcast applications, where systematic binary network coding has been proved to be the best tradeoff in terms of complexity, throughput and decoding delay compared to classical straightforward network coding in finite fields [4]. The interest of combining multilayer coding has been analyzed in [5] and is revealed to be more beneficial for the multicast scenario than for the point-to-point communication use case. Besides, some broadcasting standards like MBMS specify a unicast repair mechanism after a limited broadcast delivery phase. During this phase, coding can be used on top of the TCP protocol for unicast communication, where network codes or rateless codes can be used on top of the physical layer in a separate or a cross-layer approach [6]. In the context of recovering the packets or symbols lost during bad channel realizations, the common channel model considered to evaluate the recovery performance is the symbol (bit or block) erasure channel.
On another side, most proofs demonstrating capacity-achieving properties have been derived over the erasure channel first, before their extension to other binary memoryless channels (BMC). While the common belief was that random constructions are the unique road to capacity and that deterministic structures might be unable to honor the bet, this belief has been abolished with the discovery of polar codes, which are based on Reed-Muller code constructions. Consequently, the invention of polar codes rekindled the flame for algebraic codes within the coding theory community, especially for Reed-Muller codes, which were introduced 50 years ago by Reed and Muller [7] [8] and then generalized by Kasami and Lin in [9]. This class of codes is special for having kept, in the meanwhile, the common interest of the theoretical computer science and information theory communities concurrently. More recently, Reed-Muller codes have been proved to be capacity achieving over the binary and block erasure channel [10] [11]. Reed-Muller codes are appreciated for being good practical extended cyclic codes, meeting BCH codes in some instances of their generalization; they exhibit good geometrical and nesting properties and are a good basis for constructing other codes, as they are part of the LTE standard for the encoding of channel quality control information [2] and have been used for power control issues in OFDM. Besides, Reed-Muller codes exhibit an additional property called locality, which has been leveraged recently for code-based unconditionally secure protocols considered in the theoretical computer science and cryptography communities [12]. The locality feature consists in the ability to retrieve a particular symbol of a coded message by looking only at ℓ < k positions of its encoding, where ℓ is known as the locality parameter and k denotes the dimension of the code. Besides, locally decodable codes have been mentioned in [13] as a potential candidate for improving the power budget of HARQ schemes. The recent results cited above led us to raise some questions regarding the practical use of Reed-Muller codes in upper layer coding mechanisms, and more specifically to question the value of locality in this context.

Our focus is on broadcasting for streaming services, where low-complexity decoding algorithms are required in order to enable reception for energy-constrained devices, together with short decoding delays in order to trigger the content reception as soon as possible without sacrificing throughput. In previous studies the decoding delay reduction is reached thanks to the use of systematic encoding constructions with progressive decoding. The complexity is decreased by the use of binary fields for network coding [4]. In our scheme we consider the decoding and delay performance of Generalized Reed-Muller (GRM) codes over the block erasure channel under a local symbol-wise decoding algorithm. This is motivated by the fact that GRM codes can be systematically encoded and, thanks to the locality property, enable a progressive decoding earlier than word-wise Reed-Solomon algebraic decoding. The main purpose of our analysis is to investigate the costs and the benefits of considering the locality in packet recovery using Generalized Reed-Muller codes.
The paper is organized as follows: in Section II we introduce the model and notations related to Generalized Reed-Muller codes, the maximum likelihood (ML) decoder is described in Section III, the exhaustive local decoder (LD) is derived and described in Section IV, and the complexities of both ML and LD schemes are evaluated in Section V. We show simulation results and discuss the decoding performances in Section VI and conclude in Section VII.

A. Transmission and Channel Model

We consider a broadcasting communication standard where the sender is a base station or a satellite transmitting k data packets encoded into k systematic packets and (n − k) parity packets using an error correcting code. The receivers are devices either within a satellite network or a wireless cellular network, where each packet is considered by the upper layers either completely received or completely lost based on a CRC (Cyclic Redundancy Check) mechanism. Accordingly, the described practical scenario is modelled as a transmission occurring through a virtual block erasure channel characterised by an erasure probability ε over coded symbols in the considered finite field F_q. Consequently, each erased packet is related to an erased coded symbol of F_q within the model.

B. Generalized Reed-Muller Codes

Generalized Reed-Muller codes are constructed by the complete evaluation of low-degree multivariate polynomials over a finite field. The code is specified by parameters (r, m, q) where:
• q is the alphabet size and is a prime power;
• m denotes the number of variables;
• r bounds the total degree of the evaluated polynomials;
• F_q = {γ_i}_{i=0}^{q−1} denotes a finite field with q elements.
Let us consider the variable vector X = (X_1, . . . , X_m), where F(X) ∈ F_q[X] denotes an m-variable polynomial of degree at most r ≤ q − 2. The number of coefficients of F(X) is the dimension of the code, k = C(m + r, r) (a binomial coefficient), the minimum distance is d = (q − r)·q^(m−1), and n = q^m is the code length [14]. The (r, m) Reed-Muller code over F_q is defined as the set of evaluation vectors (F(P_1), F(P_2), . . . , F(P_n)), where each point P_i ∈ F_q^m belongs to the m-dimensional affine space over F_q. Let us consider a non-zero direction vector V ∈ F_q^m and a point P ∈ F_q^m. The q elements F_{P,V}(γ_i) = F(P + γ_i · V), for γ_i ∈ F_q, form a (q, r + 1) Reed-Solomon (RS) codeword of dimension r + 1 and code length q, where F_{P,V} is a univariate polynomial of degree at most r. Local decoding is therefore possible over a block erasure channel: when r + 1 received symbols are the evaluations of points aligned on such a line, the RS decoding algorithm can be applied to the received symbols and perfectly reconstruct the (up to q − (r + 1)) erased symbols of the line via polynomial interpolation.
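As an illustration of the evaluation-based construction above, the following is a minimal Python sketch of GRM encoding by multivariate polynomial evaluation, together with the restriction of a codeword to a line, which yields the RS sub-codeword exploited by local decoding. For simplicity it assumes a prime q, so that F_q is the integers modulo q; the (r = 6, m = 2, q = 8) code used later in the paper lives in an extension field and would require proper GF(2^3) arithmetic instead. All names and the toy parameters are illustrative, not taken from the authors' implementation.

import itertools
import random

def monomials(m, r):
    # All exponent tuples (e_1, ..., e_m) with total degree <= r; there are C(m+r, r) of them.
    return [e for e in itertools.product(range(r + 1), repeat=m) if sum(e) <= r]

def grm_encode(coeffs, m, r, q):
    # Evaluate F(X) = sum_j coeffs[j] * X^exps[j] at every point of F_q^m (q assumed prime).
    exps = monomials(m, r)
    assert len(coeffs) == len(exps)
    codeword = []
    for point in itertools.product(range(q), repeat=m):   # n = q^m evaluation points
        val = 0
        for c, e in zip(coeffs, exps):
            term = c
            for x, p in zip(point, e):
                term = (term * pow(x, p, q)) % q
            val = (val + term) % q
        codeword.append(val)
    return codeword

def index(pt, q):
    # Position of an evaluation point in the lexicographic enumeration used above.
    i = 0
    for x in pt:
        i = i * q + x
    return i

# Toy example: (r, m, q) = (2, 2, 5) gives k = 6 coefficients and n = 25 symbols.
m, r, q = 2, 2, 5
msg = [random.randrange(q) for _ in range(len(monomials(m, r)))]
cw = grm_encode(msg, m, r, q)

# Restricting the codeword to the line P + gamma * V gives q evaluations of a
# univariate polynomial of degree at most r, i.e. a (q, r + 1) RS codeword.
P, V = (1, 3), (2, 1)
line = [cw[index(tuple((P[t] + g * V[t]) % q for t in range(m)), q)] for g in range(q)]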
III. MAXIMUM LIKELIHOOD DECODING FOR GENERALIZED REED-MULLER CODES

The maximum likelihood decoder over the erasure channel fails to recover a given erasure pattern if this pattern contains the support of at least one non-zero codeword. Its decoding consists in a Gaussian elimination algorithm over the subsequent linear system. The generator matrix G of the GRM code is of size k-by-n, where the i-th row can be determined with the RM encoder. Let I_i = [0 . . . 0 1 0 . . . 0] denote a k-element zero vector with a 1 at the i-th position. By applying the RM encoding approach to I_i, the resulting codeword vector, denoted R_i, is the i-th row of G. If the code is systematic, the generator matrix is of the form G = [I_k | P], and its corresponding parity-check matrix is written as H = [−P^T | I_{n−k}].

Given a sent codeword C from the GRM codebook and a received codeword Y at the output of a block erasure channel, we consider the syndrome constraint H·Y^T = 0, from which we derive the linear system

H_0 · Ȳ_0^T = D^T, (4)

where Ȳ_0 denotes the erased symbols, H_0 consists of the columns of H corresponding to Ȳ_0, and D^T collects the contribution of the received symbols to the syndrome equations. Our objective is to solve Ȳ_0 in (4). To solve (4), we apply Gaussian elimination on [H_0 | D^T] to obtain a matrix Q in reduced row echelon form. For each row of Q, if the row is of the form [I_i | d_i], then the value of the i-th symbol is d_i. Gaussian elimination utilizes all parity equations of the RM code, so it achieves the optimal performance for erasure channels. However, the algorithm is very slow and thus intractable for long codes in finite fields of high order.

IV. LOCAL DECODING FOR GENERALIZED REED-MULLER CODES

In the present section we derive an exhaustive local decoding algorithm for GRM codes. Let us consider the set Φ of received symbols in F_q, evaluating the set of points Φ′ in F_q^m. Our algorithm consists in exhaustively and sequentially searching the space for all lines including at least r + 1 received symbols evaluating r + 1 aligned points of Φ′, in order to apply the Reed-Solomon interpolation algorithm. To this end, it is required to enumerate all sets of r + 1 received symbols defining a support for a (q, r + 1) Reed-Solomon sub-code, among all the RS subcodes nested in a Reed-Muller codeword. In order to perform the exhaustive enumeration of the lines in the space, we build a partition of the space similar to the one used for the construction of projective Reed-Muller codes [14]. Our approach paves the way to the following proposition.

Proposition 1: The support of a Generalized Reed-Muller codeword of code length n = q^m includes the supports of q^(m−1) × (q^m − 1)/(q − 1) distinct (q, r + 1) Reed-Solomon subcodes.

Proof: In order to enumerate the lines P + γ_i · V, one can consider q^m − 1 instances of V and q^m instances of P, so that the affine space would include at most (q^m − 1) × q^m lines. However, this is not a tight count, as some lines are counted multiple times. Let us consider the partition V_0 ∪ · · · ∪ V_{m−1} = F_q^m \ {0} of the non-zero direction vectors, where V_i denotes the set of m-element vectors whose first i elements are zeros and whose element v_i equals 1 (the first non-zero coordinate being normalized to 1). The number of instances per subset is |V_j| = q^(m−j−1). Accordingly, the number of possible vectors V is q^(m−1) + q^(m−2) + · · · + 1 = (q^m − 1)/(q − 1). Let us now consider the choice of the base point P ∈ F_q^m. For a direction V ∈ V_i, it suffices to let P range over the q^(m−1) points whose i-th coordinate is fixed to zero, so that every line with direction V is generated exactly once. Consequently, we have enumerated q^(m−1) × (q^m − 1)/(q − 1) different lines in the affine space F_q^m, and as many Reed-Solomon subcodes nested in the support of an RM codeword. Thus, one symbol can be decoded via field additions only, and field multiplications are unnecessary.

Based on the previous discussion, the sequential exhaustive local decoding algorithm is described in Algorithm 1: for each candidate line, count the number of received symbols R on (P + γ_i · V), for γ_i ∈ F_q; if R ≥ r + 1, apply (q, r + 1) RS decoding to interpolate the lost symbols.
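The line enumeration and per-line erasure recovery of Algorithm 1 can be sketched in Python as follows. The sketch again assumes a prime q (so that inverses can be computed with Fermat's little theorem); the received word is represented as a dictionary mapping non-erased positions to their symbols, and a single exhaustive pass over the q^(m−1) · (q^m − 1)/(q − 1) lines is performed, whereas Algorithm 1 may iterate until no further symbol can be recovered. Names and structure are illustrative, not the authors' reference implementation.

import itertools

def point_index(pt, q):
    i = 0
    for x in pt:
        i = i * q + x
    return i

def directions(m, q):
    # Normalized directions: first non-zero coordinate equal to 1, as in the
    # partition V_0 u ... u V_{m-1}; their number is (q^m - 1)/(q - 1).
    for i in range(m):
        for tail in itertools.product(range(q), repeat=m - 1 - i):
            yield i, (0,) * i + (1,) + tail

def base_points(i, m, q):
    # Transversal for directions in V_i: fix the i-th coordinate to 0, so each
    # of the q^(m-1) parallel lines with that direction is generated exactly once.
    for rest in itertools.product(range(q), repeat=m - 1):
        yield rest[:i] + (0,) + rest[i:]

def interpolate(known, q):
    # Lagrange interpolation of the degree <= r line polynomial at every gamma in F_q.
    out = []
    for g in range(q):
        acc = 0
        for gj, yj in known:
            num, den = 1, 1
            for gk, _ in known:
                if gk != gj:
                    num = num * (g - gk) % q
                    den = den * (gj - gk) % q
            acc = (acc + yj * num * pow(den, q - 2, q)) % q
        out.append(acc)
    return out

def local_decode_pass(received, m, r, q):
    # received: dict {codeword index: symbol} of non-erased positions, updated in place.
    for i, V in directions(m, q):
        for P in base_points(i, m, q):
            idxs = [point_index(tuple((P[t] + g * V[t]) % q for t in range(m)), q)
                    for g in range(q)]
            known = [(g, received[j]) for g, j in enumerate(idxs) if j in received]
            if r + 1 <= len(known) < q:
                values = interpolate(known[:r + 1], q)
                for g, j in enumerate(idxs):
                    received.setdefault(j, values[g])

For the (r = 6, m = 2, q = 8) code considered in Section VI, such a pass visits q^(m−1) × (q^m − 1)/(q − 1) = 72 lines, in line with Proposition 1.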
V. COMPLEXITY EVALUATION

The maximum likelihood decoder corresponds to GE applied to the matrix [H_0 | D^T], which requires O(n^3) operations. However, regarding the proposed LD decoder, the theoretical complexity is not straightforward, as the number of loops (lines 1-19 of Algorithm 1) has not been analytically quantified so far. Nevertheless, we introduce an alternative related decoding strategy, termed progressive local decoding (PLD), that triggers the decoding on the fly with the reception of a fraction of the codeword, on a symbol-by-symbol basis. Once a symbol is received, PLD checks all lines across this symbol. Once a line includes r + 1 either received or recovered symbols, RS decoding is applied. For a codeword of the (r, m, q) RM code, there are (q^m − 1)/(q − 1) lines across a symbol. With the use of fast Fourier transform (FFT) techniques, RS erasure decoding requires O(q lg(q)) operations [15]. Hence, the global per-symbol recovery operation requires O(q^m lg(q)) operations in each round. As the decoder will receive at most n symbols, the complexity is quasi-quadratic and no more than O(n·q^m·lg(q)) = O(n^2 lg(q)). Consequently, PLD achieves a complexity lower than that of GE by around one order of magnitude.

VI. SIMULATION RESULTS AND DISCUSSION

In the present section we show simulation results assessing the local decoding (LD) performance of systematic GRM codes over the block erasure channel. We have selected metrics emphasizing the potential of the local decoder for progressive packet recovery in streaming applications [4]. For instance, the metric represented in Figure 1 is the probability of successful decoding per information symbol, varying with the percentage of received symbols. In this scheme we consider LD and GE decoding of the (r = 6, m = 2, q = 8) RM code compared with interpolation decoding of the (n = 9, k = 4, q = 8) RS code in F_8. We chose the same field orders so as to align the erasure dimension over the channel. The results in Figure 1 show that GE and local decoding execute partial decoding before the reception of k symbols and before a full-rank linear system is built for the GE. The locality being r + 1 = 7 symbols, the local decoder triggers recovery upon receiving about 10 percent of the transmitted symbols, which is lower than the code dimension, i.e., 43.75 percent of the transmission as k = 28. It is also exhibited that LD under-performs the RS decoder when reception is beyond the code dimension. However, GE performs as well as the RS decoder and better than LD decoding after building a full-rank linear system, an erasure point that we refer to as the full-rank threshold ε*.

Figure 2 shows the computational cost in terms of time per codeword, varying with the percentage of erased symbols. In line with the theoretical analysis, we find that GE decoding requires more operations than LD and that the complexity gap increases with the field order. Accordingly, we conclude that LD is an alternative to the GE decoder for systematic locally decodable codes before the full-rank threshold ε*. LD reduces the information recovery delay at lower complexity than the ML decoder and with equal performance. However, beyond ε* the system should switch to GE to get the best performance in a combined LD-GE architecture. A further comparison with systematic random network coding is worth investigating, whereas GRM codes have the inherent benefit of being systematic and locally decodable at mitigated complexity.

VII. CONCLUSIONS AND FURTHER WORK

In this paper, we evaluated the value of the locality property of GRM codes for packet recovery in broadcast applications. We revealed that locality and systematic encoding of GRM codes are valuable assets for progressive decoding in streaming use cases. However, among the drawbacks of GRM codes we should mention their decreasing rate as the code length gets longer.
Therefore investigating other locally decodable constructions at higher rates may be an interesting future research avenue for broadcasting applications.
2016-09-11T15:38:07.000Z
2016-06-01T00:00:00.000
{ "year": 2016, "sha1": "abe7ff8a86ed56c0e599b700d4d728d42b3c0c1f", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1609.03173", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "caa1c888f91b44559b3bc52708c4c0fb2b695b7a", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
8635194
pes2o/s2orc
v3-fos-license
Decreasing predictability of visual motion enhances feed-forward processing in visual cortex when stimuli are behaviorally relevant

Recent views of information processing in the (human) brain emphasize the hierarchical structure of the central nervous system, which is assumed to form the basis of a functional hierarchy. Hierarchical predictive processing refers to the notion that higher levels try to predict activity in lower areas, while lower levels transmit a prediction error up the hierarchy whenever the predictions fail. The present study aims at testing hypothetical modulatory effects of unpredictable visual motion on forward connectivities within the visual cortex. Functional magnetic resonance imaging was acquired from 35 healthy volunteers while they viewed a moving ball under three different levels of predictability. In two different runs subjects were asked to attend to direction changes in the ball's motion, where a button-press was required in one of these runs only. Dynamic causal modeling was applied to a network comprising V1, V5 and posterior parietal cortex in the right hemisphere. The winning model of a Bayesian model selection indicated an enhanced strength in the forward connection from V1 to V5 with decreasing predictability for the run requiring a motor response. These results support the notion of hierarchical predictive processing in the sense of an augmented bottom-up transmission of prediction error with increasing uncertainty about motion direction. This finding may be of importance for promoting our understanding of trait characteristics in psychiatric disorders, as an increased forward propagation of prediction error is assumed to underlie schizophrenia and may be observable at early stages of the disease.

Introduction

Current views about the general principle of the functioning of the brain emphasize the importance of predictions that are generated by the central nervous system. According to these views the hierarchical organization of the (human) brain plays a fundamental role in implementing this predictive mode of operation (Rao and Ballard 1999; Friston and Kiebel 2009; Friston 2010; Hohwy 2013; Clark 2013). Sensory information enters the system at low hierarchical levels while predictions of these sensory inputs are represented in higher levels. The architecture of the visual system of primates, for example, accommodates such a hierarchical structure in that ascending (or feed-forward) pathways predominantly originate in superficial layers of lower regions and terminate in layer IV of the hierarchically higher areas. Conversely, descending (or feedback) projections from higher to lower regions generally originate in deep pyramidal cells of layer V of the higher region while ending in layers I and VI of the lower area (Mumford 1992; Felleman and Van Essen 1991). Although recent tracer studies were able to show that the hierarchy proposed by Felleman and Van Essen is correct in most aspects, some restrictions have to be made. First, quantitative methods slightly rearranged the level of some visual areas within the hierarchical order, for example the position of the frontal eye fields (FEF). Second, the definition of feed-forward and feedback pathways originating mainly in the supra- and infragranular layers, respectively, is less strict. But it is nevertheless an appropriate indicator of the direction of connections (Barone et al. 2000; Vezoli et al. 2004; Markov et al. 2014).
Hierarchical predictive coding has been proposed as an explanation for extra-classical receptive-field effects in the visual cortex (Rao and Ballard 1999). This scheme assumes that redundancy in encoding sensory input is reduced by modeling its statistical regularities. Instead of propagating all inputs from one level to the next, only residuals or errors containing the deviation of the input from the prediction are passed up the hierarchy. Predictions provided by higher regions are used to explain the input in lower regions via their backward projections. The same concept underlies the 'Bayesian brain hypothesis' (Knill and Pouget 2004), which emphasizes the probabilistic nature of such iterative processes of sensing and predicting. Friston (2010) has proposed a unified theory, the free-energy principle, which states that self-organizing systems need to minimize their free-energy to survive (Friston and Stephan 2007). In the current context it is important to know that, under certain simplifying assumptions, minimizing free-energy is equivalent to minimizing prediction error, with both leading to 'Bayes optimal' results. In this way it is assumed that generative models, which try to infer the underlying causes in the (outside) world from sensory inputs, are implemented, tested and updated in the brain by hierarchical predictive processing (Clark 2013).

In a previous study we investigated the effective connectivity of the cerebellum with visual areas during an attention-to-motion task (Kellermann et al. 2012). The pattern of modulatory inputs of attention to the uniform and therefore highly predictable motion fitted well with both the presumed role of the cerebellum as a state estimator (also) in perception (Paulin 2005; O'Reilly et al. 2008) and the notion of hierarchical predictive processing. The posterior parietal cortex (PPC) sent its outputs via crus I of the cerebellum to the lower region V5, where the latter connection, namely from crus I to V5, was enhanced during attention to the predictable stimuli. Conversely, we found a suppression of the feed-forward connection from V5 to PPC at the same time, i.e., during attention to predictable motion.

The present study aimed at testing specific hypotheses derived from hierarchical predictive processing during unpredictable visual motion by means of dynamic causal modeling (DCM) for functional magnetic resonance imaging (fMRI). Compared to our previous investigation, in which top-down (or goal-directed) attention was manipulated, the present study presumes attentional effects as a result of stimulus-driven feed-forward effects, with randomly behaving stimuli capturing more attention than predictable ones. In contrast to predictable stimuli, unpredictable visual motion would be associated with an enhanced strength of feed-forward connections, e.g., from V1 to V5 or from V5 to PPC.
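To make the modelled quantities concrete, the following is a minimal numerical sketch of the bilinear neural state equation that underlies DCM for fMRI, dx/dt = (A + Σ_j u_j B_j)x + Cu, for a three-node network (V1, V5, PPC). The connectivity values, the placement of the modulatory effect on the V1-to-V5 connection and the block timings are illustrative placeholders for the hypothesis just described; they are not parameters estimated from the data of this study, and the haemodynamic forward model used in DCM is omitted.

import numpy as np

regions = ["V1", "V5", "PPC"]

A = np.array([[-0.5,  0.0,  0.0],    # fixed (endogenous) connections
              [ 0.4, -0.5,  0.2],    # V1 -> V5 forward, PPC -> V5 backward
              [ 0.0,  0.3, -0.5]])   # V5 -> PPC forward

B_unpred = np.array([[0.0, 0.0, 0.0],  # modulation by stimulus unpredictability:
                     [0.3, 0.0, 0.0],  # hypothesised gain on the V1 -> V5 connection
                     [0.0, 0.0, 0.0]])

C = np.array([[1.0, 0.0],   # visual motion drives V1; unpredictability is purely modulatory
              [0.0, 0.0],
              [0.0, 0.0]])

def simulate(T=100.0, dt=0.01):
    x = np.zeros(3)
    states = []
    for step in range(int(T / dt)):
        t = step * dt
        u_motion = 1.0 if (t % 30.0) < 20.0 else 0.0   # blocked motion input with baselines
        u_unpred = 1.0 if 30.0 <= t < 60.0 else 0.0    # one 'unpredictable' block
        u = np.array([u_motion, u_unpred])
        A_eff = A + u_unpred * B_unpred                # bilinear modulation of connectivity
        x = x + dt * (A_eff @ x + C @ u)               # Euler integration of the state equation
        states.append(x.copy())
    return np.array(states)

activity = simulate()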
This effect might reflect reduced top-down "explanations" of sensory inputs in lower regions (e.g., V1) by representations in higher areas (e.g., V5). The main hypothesis pursued in this study states that a Bayesian model selection procedure among a large space of dynamic causal models would yield the highest probability for a model (or a family of models) in which stimulus unpredictability positively modulates forward connectivity and/or negatively modulates backward connectivity within the visual hierarchy. The nodes whose hierarchical connections we chose to examine were primary visual cortex V1, motion-sensitive visual cortex V5 and posterior parietal cortex (PPC). Subjects The complete sample comprised 37 healthy, right-handed subjects, two of whom were excluded due to excessive head motion (translation of more than 3 mm). The remaining 35 participants (21 males, 14 females) had no history of neurological or psychiatric illness and were aged between 18 and 41 years (mean 27.2 years, SD 4.7 years). All subjects gave written informed consent prior to participation in the study. The study adhered to the standards provided by the Declaration of Helsinki regarding ethical principles for medical research involving human subjects, and the local Institutional Review Board approved the protocol. Stimuli and task Visual stimuli were presented by means of an MR-compatible goggles system (Resonance Technology Company Inc., Los Angeles, USA). The visible screen covered approximately 25° × 19° of the visual field of the subjects with a resolution of 800 × 600 pixels. Controlling and timing of stimuli was achieved using the Presentation software (Neurobehavioral Systems Inc., Berkeley, USA). The visual stimuli consisted of a white frame (~24° × 16°) on a black background containing a white filled circle (~2° in diameter). During the baseline condition the white circle (or "ball") was presented stationary within the white frame, where the starting point at the beginning of each run was the center of the screen. In each of the 30 experimental blocks the ball moved with a constant speed of ~6° per second for 20, 20.5 or 21.5 s without leaving the frame. Between two subsequent experimental blocks a baseline with a mean length of 10 s was inserted in which the ball stopped moving and stayed at the last position of the preceding block. The following experimental block started with a jitter of 0, 0.5 or 1.5 s and the ball began moving again starting from its last position. The 30 experimental blocks of one run were divided into three different conditions, where the sequential order was pseudo-randomized and the durations as well as the jitters were counterbalanced. During the PREDICTABLE condition the ball changed its direction of motion if and only if it touched the border of the frame, where the angle of incidence corresponded to the emergent angle. Thus, the trajectory of the ball was predictable because of its resemblance to a ball bouncing from a cushion of a pool table. The RANDOM blocks were less predictable than the aforementioned condition since the emergent angle (when the ball rebounded from the cushion) varied randomly and thus did not correspond to the incident angle, with the constraint that the ball stayed within the frame. Finally, the ARBITRARY condition was the least predictable because changes in direction of motion occurred not only on contacts of the ball with the cushion but also at random intervals in the middle of the frame (see Fig. 1). 
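To make the three motion regimes concrete, the following Python sketch simulates a single block of ball motion. It is only an illustration: the frame size and speed follow the values given above, but the per-step probability of a mid-frame direction change in the ARBITRARY condition (p_change) and the reflection logic are assumptions rather than the parameters of the original Presentation scripts.

```python
import numpy as np

def simulate_block(condition, duration=20.0, dt=0.01, speed=6.0,
                   frame=(24.0, 16.0), p_change=0.01, seed=0):
    """Simulate one block of ball motion (positions in degrees of visual angle).

    condition: 'PREDICTABLE' (mirror-like rebounds at the border),
               'RANDOM' (random emergent angle at the border), or
               'ARBITRARY' (additional direction changes away from the border).
    """
    rng = np.random.default_rng(seed)
    half_w, half_h = frame[0] / 2.0, frame[1] / 2.0
    pos = np.zeros(2)                                   # start in the center
    angle = rng.uniform(0.0, 2.0 * np.pi)               # initial direction
    positions = []
    for _ in range(int(duration / dt)):
        step = speed * dt * np.array([np.cos(angle), np.sin(angle)])
        nxt = pos + step
        if abs(nxt[0]) > half_w or abs(nxt[1]) > half_h:     # ball reaches the border
            if condition == 'PREDICTABLE':
                if abs(nxt[0]) > half_w:
                    angle = np.pi - angle                     # mirror-like rebound
                if abs(nxt[1]) > half_h:
                    angle = -angle
            else:                                             # RANDOM / ARBITRARY
                while True:                                   # draw emergent angles until
                    angle = rng.uniform(0.0, 2.0 * np.pi)     # the ball stays in the frame
                    trial = pos + speed * dt * np.array([np.cos(angle), np.sin(angle)])
                    if abs(trial[0]) <= half_w and abs(trial[1]) <= half_h:
                        break
        else:
            if condition == 'ARBITRARY' and rng.random() < p_change:
                angle = rng.uniform(0.0, 2.0 * np.pi)         # mid-frame direction change
            pos = nxt
        positions.append(pos.copy())
    return np.array(positions)

trajectory = simulate_block('ARBITRARY', seed=42)
```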
Hence, the predictability of the motion decreased from the PREDICTABLE over the RANDOM to the ARBITRARY condition. It should be noted, however, that the ARBITRARY condition differed from the other two conditions also in terms of the number of motion direction changes, which occurred about 1.6 times more often than in either of the other two conditions. On average, one session contained 184.8 (SD 3.6) changes in the PREDICTABLE condition, 183.7 (SD 9.2) changes in the RANDOM condition and 288.9 (SD 12.2) changes in the ARBITRARY condition. This confounding effect and its impact on the interpretability of the results will be considered in the discussion of the data. In each of the two runs per participant the subject was instructed to keep track of the ball and to attend to its changes in direction of motion. In other words, subjects were requested to pursue the moving ball overtly with their eyes and to look out for motion direction changes. The two runs differed from each other only in the response-mode, where the participant had to indicate each (perceived) change in motion direction by a button-press with the right index finger in the "active" run, whereas the subject just had to keep track of the ball and attend to motion direction changes (without any motor response) in the "passive" run. To familiarize the subjects with the stimuli, the passive run preceded the active one for most of the participants (21 of the 35; 13 males, 8 females). To exclude, however, the possibility that any (main or interaction) effects of the response-mode (active vs. passive) might be due to their mere sequential order, the remaining 14 subjects (8 males, 6 females) were measured with the reversed order. Data acquisition Functional magnetic resonance imaging (fMRI) was performed using a Siemens Trio 3T MRI scanner. In each of the two runs per subject 515 functional images were acquired using a T2*-weighted echo-planar imaging (EPI) sequence covering the whole brain with 33 axial slices having a thickness of 3.4 mm (gap between slices 0.51 mm). Each slice had a resolution of 64 × 64 pixels and a field of view of 200 × 200 mm², resulting in a voxel size of 3.125 × 3.125 × 3.4 mm³. The echo-time (TE) was 30 ms, the flip-angle amounted to 75° and the repetition time (TR) was 1800 ms, which resulted in an acquisition time of 15 min and 45 s per functional run. The first three images of each run were discarded due to T1 stabilization effects. After the two functional runs an anatomical image was acquired with a T1-weighted magnetization prepared rapid gradient echo (MPRAGE) sequence yielding a resolution of 1 × 1 × 1 mm³ (TR: 1900 ms, TE: 2.52 ms, flip-angle: 9°). Data preprocessing and general linear model analyses Preprocessing and analyses of fMRI data were performed in SPM8 (Wellcome Trust Centre for Neuroimaging, London) implemented in Matlab 8 (The MathWorks). The remaining 512 functional images of each run were realigned using the two-pass procedure implemented in SPM. Anatomical scans were aligned to the resulting mean EPI of each run and normalization parameters were obtained using the unified segmentation approach (Ashburner and Friston 2005). The functional time-series were transformed into the standard space defined by the Montreal Neurological Institute (MNI) by applying the normalization parameters to the time-series. Normalized images were resampled at a resolution of 2 × 2 × 2 mm³ and spatially smoothed with an isotropic Gaussian kernel of 8 mm full width at half-maximum. 
The two runs per subject were modeled by convolving the boxcar functions of the three conditions per run with the canonical hemodynamic response function. The above-mentioned baseline during which the ball was presented stationary served as implicit (i.e., not explicitly modeled) low-level baseline (see "Stimuli and task"). The resulting six (2 runs by 3 conditions) predictors were used as regressors in a general linear model (GLM), where the realignment parameters and intercepts of each run served as covariates of no interest. Low-frequency drifts were removed by a high-pass filter with a cut-off period of 128 s, and temporal autocorrelations were accounted for by removing the estimated first-order autoregressive effects of the time-series. The resulting six volumes of interest with the parameter estimates per participant were subjected to a 3 × 2 mixed-effects ANOVA at the group level with predictability (PREDICTABLE, RANDOM and ARBITRARY) and response-mode (ACTIVE and PASSIVE) as fixed-effects factors. Variance components were specified to account for heteroscedasticity (between conditions and subjects, where the latter was implemented as random-effects factor) and dependencies among within-subject observations. The threshold for rejecting the null-hypothesis was set to p < 0.001, family-wise error corrected at the voxel level for multiple comparisons per contrast, with an additional extent threshold of 100 contiguous voxels. Dynamic causal modeling Dynamic causal modeling (DCM) was performed using DCM10 as implemented in SPM8. In short, with DCM one models observed data from coupled brain regions in terms of their endogenous connectivity structure, driving inputs of experimental conditions and modulatory inputs of these conditions on the connectivities between nodes. The observed fMRI data is modeled by an explicit forward model specifying how the measured signal was caused at the neuronal level (Friston et al. 2003). Most importantly, the same data is then explained by a set of different competing models, all of which are based on the same forward model but differ with respect to their connectivity structure. The interaction of the exogenous inputs (direct or modulatory) to the system and the neuronal states is modeled by means of a bilinear differential equation of the form dx/dt = (A + Σ_j u_j B^(j)) x + C u, where the sum runs over the m experimental inputs u_j. The variable x represents the neuronal states in the n nodes (or regions). The n × n matrix A contains the time-invariant coupling parameters for the connections between nodes (if the respective connection is present) as well as the self-connections. The three-dimensional n × n × m matrix B entails the parameters of the modulatory inputs of the m experimental inputs (denoted by u) on the connections between nodes as well as on the self-connections. Finally, matrix C is of size n × m and comprises the direct input parameters of the m experimental conditions on the n nodes. Different competing models can be specified by inclusion (1) or omission (0) of one or more of the parameters in the matrices A, B and C, resulting in an exhaustive model space comprising 2^((n × n) × (m + 1) + (m × n)) models in the case when all combinations of connections and inputs shall be modeled. Usually only a substantially smaller subset of "plausible" models is considered in the model space to keep inversion of all models in the space computationally feasible. 
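To see why the exhaustive space is out of reach, the following minimal sketch evaluates the expression above for the network modeled here (n = 3 nodes and m = 3 experimental inputs); the node and input labels are those of the study, while the code itself is only an illustration of the counting argument.

```python
n_nodes = 3    # V1, V5, PPC
m_inputs = 3   # MOTION, UNPREDICTABLE, ARBITRARY

# binary on/off choices: A (n*n), B (n*n per input), C (n per input)
n_binary_parameters = (n_nodes * n_nodes) * (m_inputs + 1) + (m_inputs * n_nodes)
exhaustive_space = 2 ** n_binary_parameters

print(n_binary_parameters)   # 45
print(exhaustive_space)      # 35184372088832, i.e. roughly 3.5e13 candidate models
```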
Inference made during Bayesian model selection (BMS), however, refers only to the tested models within the space and does not extend to any model of the exhaustive space that is not included. The competing models can then be compared to each other based on their log-evidence approximated with their variational free-energy, from which a posterior probability for each model can be derived reflecting the relative evidence of that model given the data. Time-series were extracted for analyses of effective connectivity from primary visual cortex (V1), motion-sensitive extra-striate cortex (V5) and posterior parietal cortex (PPC). Coordinates of the regions were based on the group analysis of a contrast comparing all moving stimuli against the low-level baseline (not reported). Individual coordinates were then found by jumping to the nearest local maximum in the respective first-level contrast. The first eigenvariate of all suprathreshold voxels (p < 0.01 uncorrected) within a sphere of 5 mm radius was used to represent the time-series of the respective region. High-pass filtering was applied to these data as specified above, and the variance explained by the realignment parameters and the session intercepts was removed. The direct inputs used in dynamic causal modeling (DCM) were slightly modified compared to the GLM in the sense that the first predictor included all moving visual stimuli (i.e., PREDICTABLE, RANDOM and ARBITRARY, henceforth MOTION), the second one contained the two non-predictable conditions (RANDOM and ARBITRARY, henceforth UNPREDICTABLE) and the last was identical to the ARBITRARY regressor. Thus, the projection space was identical to the GLM analysis, where this modeling more directly reflects the additional effects of decreasing predictability. Bayesian model selection (BMS) was performed among a set of models to test the hypothesis that increasing unpredictability of visual motion positively modulates feed-forward connections. This main BMS was preceded by a pre-selection of models, which is described in detail with respect to its rationale, procedure and results in the paragraphs below. For the main BMS the endogenous connectivity structure between the three nodes consisted of reciprocal connections between V1 and V5 on the one hand and between V5 and PPC on the other, i.e., two feed-forward (V1 → V5 and V5 → PPC) and two backward (V5 → V1 and PPC → V5) connections. One family of models within the model space had a driving input of MOTION on V1 and another direct input of ARBITRARY stimuli on PPC (Fig. 2a). One other family had an additional direct input of MOTION on V5 (Rodman et al. 1989; Girard et al. 1992; Sincich et al. 2004) (see Fig. 2b). Based on previous model selection procedures concerning the effects of direct inputs on the three nodes (see below) we also included a family of models with the additional direct inputs of UNPREDICTABLE on V5 and of ARBITRARY on V5 and PPC (see Fig. 2c). Each family comprised 256 models reflecting the 2^(2 × 4) possible modulatory effects of the two conditions UNPREDICTABLE and ARBITRARY on the four connections described above. Common to each single model was the modulatory effect of MOTION on the V1 → V5 connection. Thus far, the model space consisted of 768 models per subject and session. However, we also tested a change in the synaptic gain of V5 due to either UNPREDICTABLE or ARBITRARY stimuli, which tripled the number of models in the model space to 2304. This is the model space which is referred to in the results section. 
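The composition of this restricted model space can be enumerated directly; the sketch below reproduces the counts stated above, with the tripling read as three mutually exclusive options for a modulatory effect on the V5 self-connection (none, UNPREDICTABLE or ARBITRARY). This grouping is our interpretation of the description, not a statement of how the original DCM batch scripts were organized.

```python
from itertools import product

families = ["Fig. 2a", "Fig. 2b", "Fig. 2c"]              # three driving-input layouts
connections = ["V1->V5", "V5->PPC", "V5->V1", "PPC->V5"]    # modulated inter-regional connections
conditions = ["UNPREDICTABLE", "ARBITRARY"]                 # modulatory conditions
gain_options = [None, "UNPREDICTABLE", "ARBITRARY"]         # modulation of the V5 self-connection

models = [
    (family, pattern, gain)
    for family in families
    # every on/off pattern of (condition, connection) modulations: 2**(2*4) = 256 per family
    for pattern in product([0, 1], repeat=len(conditions) * len(connections))
    for gain in gain_options
]

print(len(models))  # 3 * 256 * 3 = 2304 models per subject and session
```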
In what follows we describe a two-step pre-selection procedure that was performed prior to the main BMS described above. Because the main BMS depended on the results of this pre-selection, results of this procedure are already included here, whereas the results of the main BMS can be found in the results section. The first pre-selection of dynamic causal models (DCMs) served to identify direct inputs of the three conditions (MOTION, UNPREDICTABLE and ARBITRARY) on one or more of the three nodes. The rationale for this procedure was the omission of other regions that may exert, in particular, top-down effects on the modeled system, which may be associated with enhanced salience or saccadic eye movements during the ARBITRARY condition. If such effects of non-included regions (e.g., the frontal eye fields or superior colliculi) exist, one way to model these in a reduced system would be as direct inputs to one or more of the included nodes. The sequential testing of subspaces of models was necessary to keep the computational burden for the main research question feasible. Although sequential testing cannot equivalently replace a test of all combinations of parameters, we pursued this suboptimal strategy to test several different direct inputs while keeping the computational load manageable at the same time. Sequential testing of several subspaces is rather unproblematic for fixed-effects (FFX) Bayesian model selection (BMS) as long as all models in Occam's window are considered in each selection. Random-effects (RFX) BMS, however, may yield inconclusive results when used sequentially (Penny et al. 2010). It should be emphasized that sequential model selection still bears the risk that there are models with a combination of parameters not tested during one of the BMS which are superior to the winning models of the restricted spaces tested in this study. In other words, the main BMS only tests for those models that are included in that selection and it does not make any inference on models outside that space. Therefore, the pre-selection can only be regarded as some sparse evidence for the direct inputs. In a first consideration we concentrated on the combinations of how UNPREDICTABLE and/or ARBITRARY might perturb the system at V5 and/or at PPC, reducing the number of possibilities to 2^(2 × 2) = 16. Because the visual input to the system did not change with respect to any other property than motion (even the low-level baseline included a static view of the visual stimuli), we also tested if MOTION exerted a direct influence on V1. Theoretically, the input to V1 might have been a constant across the entire time-series, where the effect of MOTION, for example, is realized as an exclusively modulatory input (e.g., on the connection from V1 to V5). Therefore, the complete model space of this first pre-selection procedure included 2^((2 × 2) + 1) = 32 models reflecting all possible combinations of direct inputs of MOTION on V1 and/or UNPREDICTABLE and/or ARBITRARY on V5 and/or PPC (see Table 1A where these 5 direct inputs are indicated with an X). The endogenous connectivity structure was the same as for all models, namely reciprocal connections between V1 and V5 and reciprocal connections between V5 and PPC. In addition, the models shared the modulatory input of MOTION on the V1 → V5 connection. 
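The fixed-effects BMS and Occam's window logic invoked above can be sketched in a few lines. This is a generic illustration of how per-subject log-evidences (free-energy approximations) are pooled and converted to posterior model probabilities, not code from the study, and the Occam's window threshold chosen here is an arbitrary example value.

```python
import numpy as np

def ffx_bms(log_evidence, occam_window=3.0):
    """Fixed-effects Bayesian model selection.

    log_evidence: array of shape (n_subjects, n_models) with the free-energy
    approximation to each model's log-evidence per subject. Returns posterior
    model probabilities (uniform prior over models) and the indices of models
    whose group log-evidence lies within `occam_window` of the best model.
    """
    group_log_ev = np.asarray(log_evidence).sum(axis=0)   # FFX: sum over subjects
    rel = group_log_ev - group_log_ev.max()               # for numerical stability
    posterior = np.exp(rel) / np.exp(rel).sum()           # softmax over models
    in_window = np.flatnonzero(group_log_ev.max() - group_log_ev < occam_window)
    return posterior, in_window

# toy input: 35 subjects, 32 models of the first pre-selection step
rng = np.random.default_rng(1)
F = rng.normal(size=(35, 32))
posterior, window = ffx_bms(F)
```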
With respect to the modulatory inputs of UNPREDICTABLE and/or ARBITRARY on any of the endogenous connectivities, we included all parameters, assuming that their inclusion rendered the need for additional parameters reflecting direct inputs rather improbable. BMS among these 32 models using fixed effects for inference indicated strong evidence in favor of the model with four simultaneous driving inputs, namely MOTION → V1, UNPREDICTABLE → V5, ARBITRARY → V5 and ARBITRARY → PPC, with a posterior probability exceeding 99.99 %. In a second step during pre-selection, we asked for the plausibility of a direct input of MOTION on V5 and/or PPC, keeping other direct inputs, endogenous connectivity and modulatory inputs from the winning model in the first step. Therefore, we tested the winning model of the BMS above against the three other models that allowed MOTION to drive either V5 or PPC or both V5 and PPC (Table 1B). The winning model of this BMS (again using fixed-effects) clearly outperformed the competing three models with a posterior probability exceeding 99.99 % and indicated that MOTION had a driving input in V1 and V5, UNPREDICTABLE had a direct input in V5 and ARBITRARY had a direct input in V5 and PPC (see Table 1C; Fig. 2c). The result of this pre-selection was the reason for inclusion of a whole model family in the main BMS with this rather complicated input structure, which is depicted in Fig. 2c. (Table 1 caption: A denotes the direct inputs that were switched on and off with an 'X' for a first pre-selection. This resulted in an input structure denoted with a '1' in B. Then another selection was performed using this input structure while switching those inputs denoted with an 'X' in B. C shows the winning input structure of this pre-selection.) Results Descriptive results of the behavioral data During the response-mode session subjects pressed the button on average 183.4 times (SD 7.5) in the PREDICTABLE condition. In the RANDOM condition subjects gave on average 173.7 responses (SD 12.0), whereas the average number of button presses in the ARBITRARY condition yielded 235.2 events (SD 24.5). Due to the high frequency of motion direction changes, particularly in the ARBITRARY condition (but also occasionally in the RANDOM condition when the ball was located near one of the edges), a distinct accuracy assignment was not possible. Two-way ANOVA predictability × response-mode Activation of the dorsal visual stream of all moving visual stimuli against baseline (MOTION contrast) covered the whole dorsal visual stream as well as the supposed human homologue to the frontal eye fields (FEF) and a large part of the cerebellum (results not shown). The main effect of predictability is confined to the one-tailed t contrasts RANDOM > PREDICTABLE and ARBITRARY > RANDOM and their conjunction (see Fig. 3), which was performed as a test against the conjunction null-hypothesis (Nichols et al. 2005). The former of the two comparisons yielded a slightly right-lateralized network as indicated by a negative lateralization index of -0.39. This index was assessed by subtracting the number of suprathreshold voxels in the right hemisphere from those in the left hemisphere and dividing this difference by the total number of suprathreshold voxels. This network comprised bilateral extra-striate cortices (V5 and middle occipital gyrus), bilateral frontal eye fields (FEF), right inferior frontal gyrus and left inferior precentral gyrus. 
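As a side note, the sign convention of the lateralization index described above (left minus right suprathreshold voxel counts over their total, so that right-dominant activation yields negative values) can be written down in one line; the voxel counts used in the example are invented purely to reproduce the reported value of -0.39.

```python
def lateralization_index(left_voxels: int, right_voxels: int) -> float:
    """(L - R) / (L + R): negative values indicate right-hemispheric dominance."""
    return (left_voxels - right_voxels) / (left_voxels + right_voxels)

# illustrative counts only, chosen to give the reported index of -0.39
print(round(lateralization_index(left_voxels=305, right_voxels=695), 2))  # -0.39
```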
The large cluster in the right hemisphere comprising V5 extended dorsally to superior temporal gyrus (STG) and the supramarginal gyrus, thus also covering the temporo-parietal junction (TPJ). The homologue areas in the left hemisphere of the last cluster corresponded to isolated activations in V5 and supramarginal gyrus. In addition, lobule VIIa of the left cerebellar hemisphere was more active during the RANDOM as compared to the PREDICTABLE condition (see Table 2). The comparison of ARBITRARY to RANDOM stimuli exhibited both similarities and differences with the aforementioned contrast. In general, the activation pattern was a bit more symmetrical (lateralization index -0.26), the commonly activated areas were spatially larger and other regions were recruited in addition, particularly dorsomedial prefrontal cortex (dmPFC) and subcortical nuclei in the thalamus and brain stem (a complete list of activated clusters is summarized in Table 3). The above-mentioned large right-hemispheric activation containing V5, STG and the supramarginal gyrus survived the statistical threshold again, where the cluster extended to more inferior brain areas comprising the fusiform gyrus (FFG) and even bestriding a local maximum within crus I of the right cerebellar lobule VIIa. The corresponding activation in the left hemisphere was fragmented into smaller clusters, but, apart from V5, also included crus I, FFG, STG and the supramarginal gyrus. A huge activation cluster comprising more than 7000 voxels was likely the result of a merging of several smaller clusters, as indicated by several local maxima (see Table 3). This large cluster in the (right) prefrontal cortex stretched inferiorly from the anterior insula over the inferior frontal and precentral gyri to the superior frontal gyrus, bilateral supplementary motor area (SMA) including dmPFC and the dorsal anterior cingulate cortex (dACC), sometimes referred to as mid-cingulate cortex (see Fig. 3b). The homologue areas in the left prefrontal cortex were constrained to the lateral parts but also comprised the anterior insula and inferior frontal and precentral gyri. Two more clusters were found more anterior at the cortical level, namely in bilateral middle frontal gyrus. At the subcortical level there was one suprathreshold cluster covering most of the bilateral thalamus as well as part of the tectum, particularly the superior colliculi. The test against the conjunction null-hypothesis of the two contrasts RANDOM > PREDICTABLE and ARBITRARY > RANDOM (see Fig. 3c) yielded the highest absolute lateralization index of -0.61, indicating a lateralization to the right hemisphere. With respect to the predictability × response-mode interaction, it must be noted that 82.5 % of the suprathreshold voxels of this interaction are a subset of the main effect predictability (see Fig. 4). In other words, nearly all regions exhibiting an interaction effect also differed profoundly with respect to their responses to the predictability of visual motion, where less predictability was associated with more activity. More precisely, this interaction occurred in bilateral dmPFC (close to pre-SMA), bilateral thalamus and bilateral inferior frontal gyrus as well as insula. Moreover, two clusters were located symmetrically at the border between FFG and crus I of the cerebellum, where local maxima were found in both of these structures. 
The pattern of this interaction was similar across the reported suprathreshold areas, with similar activation levels during the predictable and random conditions and slightly more response in the arbitrary block under the no-response session. In the session with overt motor response, however, a quite strong increase in activation was observed with decreasing predictability (see Fig. 5, lower right panel, showing the dmPFC as example). Because the interactions reported above were thresholded quite conservatively, we specifically looked for uncorrected effects of response-mode (either as interaction or as main effect) in the three regions of interest for the DCM analyses by extracting the test statistic for the interaction at the local maximum of the motion contrast (V1: x = 8, y = -90, z = 2; V5: x = 48, y = -66, z = 2; PPC: x = 20, y = -58, z = 62). Significant interaction effects were found for V1 (F(2,204) = 3.07, p = 0.049) and V5 (F(2,204) = 8.87, p = 0.003). For the PPC the interaction was not significant (F(2,204) = 0.78, p = 0.461), whereas the main effect of response-mode was (F(1,204) = 6.45, p = 0.012). The main effect of response-mode reached significance neither for V1 (F(1,204) = 1.96, p = 0.164) nor for V5 (F(1,204) = 3.44, p = 0.065). Finally, it should be noted that when rearranging the data so that the interaction reflects the chronological sequence of the runs rather than the response-mode, the respective predictability × sequence interaction yielded no suprathreshold voxels even when lowering the p threshold to 0.05 corrected with no extent threshold. Bayesian model selection among dynamic causal models Since the present study investigates a comparatively simple, perceptual task, there is no need to expect, for example, different cognitive strategies between subjects. Hence, for the Bayesian model selection (BMS) procedure we assume that the subjects do not differ with respect to the model structure that caused the data, so that we based the inference method on fixed-effects. Due to the interaction effect on the selected regions (at least to a moderate extent), the two sessions varying the response-mode were not treated as being replications of each other, for which reason the BMS was performed separately for each mode. For the ACTIVE session the BMS resulted in a single model being clearly superior to all other ones, as indicated by its posterior probability which was close to 1. The structure of this model was characterized by driving inputs of MOTION on V1 and V5 and a perturbation of PPC by the ARBITRARY condition. As was common to all models, the V1 → V5 connection was modulated by MOTION, although the negative sign for this parameter was not expected. Moreover, both the UNPREDICTABLE stimuli and the ARBITRARY ones exerted an enhancing modulatory effect on the connection from V1 to V5 (see Fig. 6a). Posterior probabilities of the PASSIVE session did not support a single model. Instead, two models were found to be in Occam's window with posteriors of 80.79 % and 19.15 %, respectively. The model with the higher probability had neither a modulatory input for UNPREDICTABLE nor one for ARBITRARY (the modulatory effect of MOTION on the V1 → V5 connection was common to all models). The other, less likely model showed a slightly suppressing modulatory effect of the UNPREDICTABLE conditions on the backward connection from V5 to V1. Both models had all driving inputs in common, with MOTION entering in V1 and V5, UNPREDICTABLE driving V5 only, and ARBITRARY perturbing V5 and PPC (see Fig. 6b). The mean coupling parameters along with their standard errors as calculated by Bayesian parameter averaging are listed in Tables 4 and 5 for the ACTIVE and PASSIVE response-mode, respectively. A closer inspection of the parameters revealed that the corrected confidence intervals for most parameters did not contain zero. The correction was performed according to the 13 parameters that were tested for each response-mode and was based on the respective quantiles of the sampling from the DCM posteriors as implemented in Bayesian model averaging in SPM. There were three parameters that did not differ significantly from zero in the above-mentioned sense, all of which belonged to the model for the no-response mode: the average coupling parameter from V1 to V5, the direct input of the motion condition on V1 and the modulatory input of the unpredictable condition on the V5 → V1 connection. Discussion The present study was designed to test hypotheses about hierarchical predictive processing in the visual system according to pertinent theoretical assertions. These specific predictions within a small and circumscribed network are discussed in the following section, whereas the results obtained at the level of the whole brain are reconsidered afterwards. (Fig. 4 caption: Maximum intensity plots of the predictability × response-mode interaction and its conjunction with the main effect predictability. Illustration of the similarity of the maximum intensity plots (MIP) of the predictability × response-mode interaction (a) and the conjunction of the same interaction with the main effect of predictability (b). Both MIPs were thresholded at p < 0.001 corrected at the voxel level and an extent threshold of 100 contiguous voxels.) Hierarchical predictive processing in the visual cortex According to the model of hierarchical predictive processing in the brain, the information flow from lower hierarchical regions to higher ones should be pronounced with decreasing predictability, because of a larger prediction error for unpredictable stimuli that is passed up the hierarchy (e.g., Clark 2013; Friston 2005). The present study tested this hypothesis using DCM for fMRI in a quite large sample of healthy volunteers performing a predictability-of-visual-motion task. Bayesian model selection indicated quite strong support in favor of the predictive processing hypothesis in that the winning model of the condition requiring motor responses exhibited enhancing modulatory inputs of unpredictable and arbitrary stimulus types on the forward connection from V1 to V5. This effect may reflect increased bottom-up information processing from V1 to V5 during unpredictable visual motion, which is probably due to an enhanced transmission of prediction error. It should be emphasized that the modulatory input of unpredictable stimuli included both random and arbitrary motion. Hence, the modulatory effects of unpredictable (random and arbitrary) as well as arbitrary motion constitute an increase in this forward connection according to the three levels of increasing unpredictability. The BMS for the passive condition, however, did not corroborate this pattern, although the resulting connectivity structure did not contradict the idea of predictive processing. Instead of an enhancement of the forward connection, we observed a slightly suppressing input of unpredictable stimuli on the backward connection from V5 to V1. 
Assuming that backward connections originate in "representation units" in deep cortical layers of the hierarchically higher region and terminate mainly in "error units" in superficial layers of lower regions (Mumford 1991), the observed backward suppression might reflect the inability of representation units in V5 to explain away the prediction error that is generated by error units in V1. (Fig. 5 caption: Parameter estimate plots of the three regions of interest (V1, V5 and PPC) and the dmPFC. Bars indicate the parameters for each condition (implicitly compared to low-level baseline) and error bars indicate the standard error. The three conditions (predictable, random and arbitrary) are shown separately for the run without motor response (dark gray) and for the run with overt motor response (light gray). PPC posterior parietal cortex; dmPFC dorsomedial prefrontal cortex.) (Table note: numbers indicate the mean of the respective parameter and numbers in brackets refer to their standard deviation.) Nonetheless, it must be noted that the (Bayesian averaged) coupling parameter in question was close to zero, because the more parsimonious model without that modulatory input had a far higher posterior (approx. 80 %) as compared to the second model within Occam's window which comprised this input (approx. 19 %). Now the question arises as to why the two response-modes of the same task yielded such different results. One reason for this might be that at least some of the subjects readily digress from the actual task when behavioral performance seems less important for task completion. This implies a diminished compliance of the subjects (be it intentional or not) to stay on track when their effort is not directly observable. In the free-energy formulation of attention this is equivalent to a reduction of precision at the sensory level, which would result in less propagation of prediction error, because of increased uncertainty and hence reduced sensitivity to sensory signals (Feldman and Friston 2010). This is exactly what we observed in the DCM analyses of the no-response run. Another view on this effect, which can be regarded as complementary to the above-mentioned argument, assumes additional networks to be involved for the same task when an overt motor response is required. The fact that the predictability × response-mode interaction yielded a network whose nodes also manifest a main effect of predictability corroborates this notion. Although the subjects in the present study only reacted in response to (as opposed to acted on) the stimuli, the differential processing of the same stimuli in the brain may be related to a resonating effect of (any kind of) motor output that presumably underlies active inference (Friston 2010; Limanowski and Blankenburg 2013). This interpretation implies that, regardless of the ability to manipulate the external world, any motor response has a non-negligible impact on the processing of external stimuli. Most likely, a more comprehensive picture of the observed differences requires an extensive modeling of other important nodes on the one hand, e.g., the dorsomedial prefrontal cortex (Regenbogen et al. 2013), the thalamus (Saalmann and Kastner 2011), or the cerebellum (Kellermann et al. 2012). On the other hand, refinements of experimental manipulation of response-modes need to be devised to differentiate potential effects that any diverse motor outputs might have (e.g., Warbrick et al. 2013). 
In conclusion, the BMS results of the no-response run should be considered with caution because the interim winning models do not seem to be quite plausible. This finding either suggests that our model space did not include a useful model for this session or that fixedeffects BMS might be untenable for this task because lacking behavioral relevance leads to the above mentioned decline in the subjects' compliance. The concepts of behavioral relevance (as differentially induced by the response-modes) and predictability may share a key effect that both exert on the nervous system, namely attention. The idea that attention should be rather regarded as an effect rather than a cause has been elaborated by Anderson (2011). Accordingly, (goal-directed) attention can be regarded as a consequence of behavioral relevance which is implemented in a top-down fashion, whereas unpredictability gives rise to (stimulus-driven) attention due to the salience of the stimulus in a bottom-up manner. In this sense stimulus-driven attention seems to be confounded in the present study, because more attentional resources are presumably allocated to processing unpredictable stimuli. Although the decision between cause or effect of attention cannot yet be made, the notion of attention as being rather an effect seems reasonable when unpredicted or salient stimuli are presented. Therefore, we argue that a decline in predictability inevitably goes along with more salience and stimulus-driven attention. An amalgamated representation of priority was proposed by Fecteau and Munoz (2006) to combine bottom-up effects induced by salience and top-down effects that determine the relevance of stimuli. The authors conclude that the combined representation of an object's distinctiveness and its relevance to observers in so called priority maps is likely instantiated in the oculomotor system (Fecteau and Munoz 2006), underscoring the need for an extension of the relevant network. Nevertheless, Kok et al. (2012) recently demonstrated that goal-directed attention can be manipulated orthogonally to predictability. Beyond that, the study has shown that directed spatial attention can reverse the attenuating effects of predictability on sensory processing (Kok et al. 2012). The present study, however, was designed to investigate the effects of predictability of perceptual properties with goal-directed attention held constant (although the response-mode may have implicitly changed goal-directed attention via behavioral relevance or priority). In its free-energy formulation attention is considered to be the process of optimizing the synaptic gain to represent sensory precision (Feldman and Friston 2010). Although this phrasing rather emphasizes a top-down control of attention the net effect with respect to hierarchical predictive processing is the same in relation to processing unpredicted or salient stimuli. Whereas goaldirected attention increases the synaptic gain of representation units to inputs from error units, salience directly increases the input from lower to higher regions, both leading to an amplification of prediction errors. This distinction between these two complementary processes may be the reason for the fact that-contrary to Kok et al. (2012)-we did not find evidence for a modulation of the self-connection of V5 for unpredictable stimuli. 
Moreover, it is important to note that a modulation of this self-connection is ambiguous with respect to hierarchical processing because an increase of the synaptic gain of V5 in our models can be associated with enhanced responsiveness to both forward inputs from V1 as well as backward projections from PPC. Apart from explaining perceptual and cognitive phenomena on a neuronal level, one of the central claims of the theory of hierarchical predictive processing is its ability to provide neuronal mechanisms able to describe phenomena observed in pathological and particular psychiatric circumstances. For instance, an aberrant prediction error has been associated with schizophrenia (Adams et al. 2013). According to this view, a reduction in the precision of prior beliefs (or top-down predictions), relative to sensory evidence (or bottom-up prediction error) may lead to abnormalities observed in schizophrenia, e.g., psychotic symptoms, cognitive deficits or negative symptomatology. Another psychiatric disease which has been tried to understand in terms of hierarchical processing is autism spectrum disorder (ASD). Two former theories-namely weak central coherence (WCC; Happé and Frith 2006) and enhanced perceptual functioning (EPF; Mottron et al. 2006)-separately emphasized reduced global processing (in case of WCC) or enhanced local processing (in case of EPF) observed in ASD. A predictive coding perspective may unify these accounts in the sense that an overemphasis of the prediction error or overly high precision expectation in sensory input may explain both of these observed effects (Van de Cruys et al. 2014;Lawson et al. 2014;Palmer et al. 2015a, b). We envisage an application of the task presented in this study to patients with schizophrenia and ASD. Although both disease patterns are associated with an enhanced forward passing of prediction errors, there are differential hypotheses according to the predictive coding perspective. Because in schizophrenia prior beliefs are assumed to be reduced, one would expect enhanced forward coupling of different sensory levels (V1 and V5) for all conditions with a diminished differentiation according to predictability. Contrariwise, ASD is rather associated with an excessively high precision expectation of sensory input which hypothesizes an augmented differential response in the coupling from lower to higher visual regions as a function of predictability of visual motion. Whole brain GLM analyses The results for the main effect predictability exhibited a large distributed network that bore at least some resemblance to the goal-directed and stimulus-driven attention network, which is associated with dorsal and ventral fronto-parietal areas, respectively (Corbetta and Shulman 2002). While our data lend only partial support for the goal-directed attention stream with dorsal engagement in the parietal lobe (e.g. in PPC) and in the FEF, the activation pattern of the main effect of predictability provides quite strong evidence in favor of the rather right-lateralized involvement of the ventral fronto-parietal network assumed to underlie stimulus-driven attention. The right inferior frontal gyrus has repeatedly been linked to novelty detection (e.g., Dobbins and Wagner 2005;Gur et al. 2007) and might play an important role-together with the (TPJ)-as a circuit-breaker during reorienting to spatially unexpected targets (Corbetta and Shulman 2002). 
The conjunction of the two contrasts indicates that both regions, right inferior frontal gyrus and right TPJ, are more or less parametrically linked to (un-)predictability in the present study. The contrast ARBITRARY > RANDOM revealed additional cortical activation in the dmPFC as well as subcortical clusters in the thalamus and brainstem. Although the limited spatial resolution of fMRI scans prohibits a definite assignment of activations to distinct subcortical nuclei, the peak activity in the brainstem may be attributed to the superior colliculi, whereas the thalamic engagement may originate from the pulvinar (Petersen et al. 1987), the reticular nucleus (Sturm et al. 1999; Kellermann et al. 2011), and/or the intralaminar nuclei (Yeo et al. 2013). The superior colliculus is part of the oculomotor network and has been, like the pulvinar, associated with salience (Robinson and Petersen 1992; Fecteau and Munoz 2006). However, the role of the superior colliculus is more nuanced because of its relation to inhibition of return. Inputs of bottom-up salience and top-down relevance seem to converge in the superior colliculus (albeit during different stages of processing), for which reason Fecteau and Munoz (2006) proposed the term priority map to merge the two. A recent study suggests that the superior colliculi are indeed influenced by top-down signals from lateral prefrontal cortex (Everling and Johnston 2013). Even though we anticipated differences in activations between the two response-modes in motor-related areas (not reported), we did not expect to find noteworthy effects of the response-mode on different levels of the predictability factor. Yet such differences between response-modes have been reported in a recent study, where subjects also performed a session in which they counted the number of targets in addition to a passive and a response condition (Warbrick et al. 2013). In general, stronger activation of the dmPFC (extending to SMA) is associated with tasks requiring overt motor as opposed to non-motor responses (Langner and Eickhoff 2013). This structure has been proposed to serve as a brake to maintain a preparatory motor-set which is inhibited at the same time so as to avoid premature responses. Gradually releasing this brake would trigger the prepared response when a certain threshold is exceeded (Eichele et al. 2008; Danielmeier et al. 2011; Langner and Eickhoff 2013). Based on this we assume that the arbitrary condition activates a preparatory motor-set which is inhibited by the dmPFC. Because of the (relative) anticipatory certainty of upcoming targets (i.e., movement changes) in the predictable but also in the random condition, the response-set is not pre-activated to the same extent as compared to the arbitrary blocks, where targets can occur at any time. The temporal control of motor responses might be arranged more efficiently in predictable blocks without simultaneous motor preparation and inhibition. Crus I of the cerebellum may provide timing information of perceptual events (O'Reilly et al. 2008; Kellermann et al. 2012), which might be enhanced during the session requiring motor responses. The involvement of the inferior frontal gyrus (close to the inferior frontal junction) in the interaction is in line with its presumed role in setting up stimulus-response mappings (Hartstra et al. 2011; Langner and Eickhoff 2013). The thalamus is a key structure in the ascending reticular activating system (Yeo et al. 2013) as a relay from the reticular formation to the cortex so as to generate and maintain an adequate arousal level (Hasselmo and Sarter 2011). It is conceivable that unpredictable and, therefore, salient stimuli in the arbitrary blocks generate high arousal, which is further facilitated by response requirements. In summary, the present study is not designed to separate different stages of processing by a mere cognitive subtraction strategy. Nevertheless, we hope to have shown that the task yields robust activations in almost all well-known areas assumed to support (visual) attention, including the dorsal and ventral parietal network as well as subcortical structures like the thalamus and superior colliculi. Importantly, an overt motor response seems to have an amplifying and/or modifying effect on processing in other regions, even if these are indirectly or not at all related to motor output. Therefore, this task seems to be well suited to characterize the functional integration of circumscribed attentional networks with dynamic causal modeling. Unfortunately, this characterization is beyond the scope of the present study, because we aimed at testing specific hypotheses regarding hierarchical predictive processing considered above. Limitations Besides the aforementioned constraints regarding, for example, overt motor output, there are other limitations of the present study that merit consideration for future work. The most severe constraint of the stimuli at hand is the number of changes in motion direction, which differs substantially between the arbitrary condition [with a mean (M) of 28.9 changes per block and standard deviation (STD) of 3.9] and the other two experimental manipulations [predictable (M = 18.5, STD = 1.0) and random (M = 18.3, STD = 2.6)]. There are three possibilities to overcome this limitation, although each one has other drawbacks which we judged more severe in relation to the compromise we made: two possibilities comprise either a reduction of the duration of the stimuli or a deceleration of motion during the arbitrary motion condition to adapt the number of direction changes. A third option would be downsizing the frame in the other two conditions such that predictable and random changes in motion direction occur more often. These differences in motion direction changes are accompanied by differences in motor reaction regarding button presses (when a reaction was required) as well as saccadic eye movements. The latter will likely be associated with activations of the superior colliculi and frontal eye fields, where particularly the latter have an influence on the visual cortex (Heinen et al. 2014). As described in the methods ("Dynamic causal modeling") we assumed that putative top-down effects from other regions (e.g., those mentioned above) may be captured as direct inputs of unpredictable or arbitrary stimuli on either V5 or PPC. The strong evidence which we found during the pre-selection for a direct input of the arbitrary condition on PPC may reflect an effect that is mediated by structures like the frontal eye fields. However, we did not find evidence for a direct effect of the arbitrary condition on V1 or V5, indicating that the above-mentioned effects are mediated by the PPC, at least in the present study. Nevertheless, this interpretation remains speculative as long as the respective candidate regions, such as the frontal eye fields or superior colliculi, are not included in the models under consideration. 
Moreover, the confounding effects of eye movements and number of motion direction changes in the arbitrary condition remain a limiting concern of the present study, which should be addressed in future studies by changes in the stimuli as mentioned above. Although we already broached the issue of the limited number of nodes included in the DCM analysis, it must be pointed out that any change in the system may result in systemic effects on the whole network, i.e., coupling parameters depend on the structure of the whole model. We surmise that such an effect may have affected the endogenous connectivity from V1 to V5, which turned out to be negative for the winning models. If our assumption is correct, the direct input of motion to V5 (via the lateral geniculate body) is possibly overestimated because we did not constrain this input with any prior weights. Such a weighting, however, may yield physiologically more plausible results, since the population of cells projecting from the lateral geniculate nucleus to V5 is only about 10 % of the size of the population in V1 that innervates V5 (Sincich et al. 2004). Whereas questions regarding the absence or presence of connections between nodes or the impact of direct or modulatory inputs can be addressed by extending the model space for a Bayesian model selection accordingly, the question of including a node or not cannot be addressed by model selection (at least for fMRI data). The reason is that a comparison of different models requires the same data to be subjected to each model, and inclusion or exclusion of a region is equivalent to adding or removing the data of that node, respectively. Conclusions and outlook The present study was designed to test specific hypotheses about enhanced feed-forward connectivity in the visual cortex in response to unpredictable visual motion. These predictions rest upon the notion of hierarchical predictive processing, which forms the basis of the Bayesian brain hypothesis (e.g., Clark 2013; Friston 2010). Importantly, the patterns of effective connectivity strongly supported these predictions when the stimuli were behaviorally relevant. Hence, the quite simple visual task presented in this study seems to be well suited to further investigate hierarchical predictive processing in and beyond the visual cortex, so as to include other regions related to motor planning and execution as well. Moreover, the present task may be indicative of trait abnormalities in patients suffering from psychiatric disorders or even in their relatives. A recent review suggested that psychotic symptoms may be the result of an imbalance (in the precision) of feed-forward and backward connections between hierarchical levels of processing, presumably underlying known effects like an attenuated mismatch negativity, impaired smooth pursuit eye movements or a weaker force-matching illusion (Adams et al. 2013). In conclusion, the present study lends empirical support to hierarchical predictive processing in accord with the predictability of visual motion, for which reason the present task seems to be well suited to shed light on putatively disturbed effective connectivity in psychiatric disorders.
2017-08-02T20:04:41.762Z
2016-06-22T00:00:00.000
{ "year": 2016, "sha1": "88aee5fa6f8305fcb0927451938533726810d153", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00429-016-1251-8.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "88aee5fa6f8305fcb0927451938533726810d153", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Psychology" ] }
14960770
pes2o/s2orc
v3-fos-license
Evaluating Patient Usability of an Image-Based Mobile Health Platform for Postoperative Wound Monitoring Background: Surgical patients are increasingly using mobile health (mHealth) platforms to monitor recovery and communicate with their providers in the postdischarge period. Despite widespread enthusiasm for mHealth, few studies evaluate the usability or user experience of these platforms. Objective: Our objectives were to (1) develop a novel image-based smartphone app for postdischarge surgical wound monitoring, and (2) rigorously user test it with a representative population of vascular and general surgery patients. Methods: A total of 9 vascular and general surgery inpatients undertook usability testing of an internally developed smartphone app that allows patients to take digital images of their wound and answer a survey about their recovery. We followed the International Organization for Standardization (ISO) 9241-11 guidelines, focusing on effectiveness, efficiency, and user satisfaction. An accompanying training module was developed by applying tenets of adult learning. Sessions were audio-recorded, and the smartphone screen was mirrored onto a study computer. Digital image quality was evaluated by a physician panel to determine usefulness for clinical decision making. Results: The mean length of time spent was 4.7 (2.1-12.8) minutes on the training session and 5.0 (1.4-16.6) minutes on app completion. 55.5% (5/9) of patients were able to complete the app independently with the most difficulty experienced in taking digital images of surgical wounds. Novice patients who were older, obese, or had groin wounds had the most difficulty. 81.8% of images were sufficient for diagnostic purposes. User satisfaction was high, with an average usability score of 83.3 out of 100. Conclusion: Surgical patients can learn to use a smartphone app for postoperative wound monitoring with high user satisfaction. We identified design features and training approaches that can facilitate ease of use. This protocol illustrates an important, often overlooked, aspect of mHealth development to improve surgical care. Introduction Telemedicine has begun to supplement, and in some cases supplant, postoperative care received in the clinic in many surgical practices.Existing platforms include Web and mobile phone-based portals for virtual follow-up after elective general surgery and telephone follow-up after laparoscopic cholecystectomy and open inguinal hernia repair [1][2][3].These platforms have been met with wide acceptance and enthusiasm by patients and their surgeons in the low-risk, elective surgery cohorts studied [4].However, patients have not been rigorously included in the design of these apps despite an extensive literature on user-centered design in the scientific literature from the disciplines of medical informatics and human-computer interaction [5][6][7][8].Indeed, recognizing the importance of involving users in the development of new devices and protocols, the Food and Drug Administration (FDA) has mandated consideration of the user experience in their Quality System Regulation [9]. 
As ownership of tablets and mobile phones becomes more common [10], patients and their caregivers are increasingly willing to use technology to access care [11]. Alongside this trend, policy mandates have made improving transitions of care following hospital discharge and reducing hospital readmissions a national priority [12][13][14][15]. These trends together create an enormous opportunity for telemedicine to improve transitions of care for surgical patients. However, with increasing enthusiasm for telemedicine, new platforms must be rigorously vetted by patients, the end users, to ensure their full acceptability and accessibility. This can be achieved through the use of established user-centered design guidelines, which comprise a diverse set of concepts and methods grounded in human factors engineering and ergonomics that facilitate the usability of technology for the target user. Although clinical outcomes from the studies of existing telemedicine platforms in surgical practice are encouraging, they are limited by substantial bias: more than 80% of published telemedicine interventions include only those patients who can access or are familiar with the necessary technology (eg, tablet or mobile phone), resulting in the exclusion of between 12% and 56% of otherwise eligible participants [16]. Additionally, much of the prior research on telemedicine protocols for surgical patients has focused on routine procedures that already have a low base rate of postoperative and postdischarge complications [1,2,17]. As a result, major knowledge gaps remain regarding whether telemedicine can be used to monitor a higher-risk population that is less familiar with mobile technology and what is required of novice technology users to successfully complete such protocols. In addition, many existing telemedicine platforms designed for the postdischarge period are primarily text- or audio-based but transmit no visual information [2,3,18]. A crucial component of postoperative and postdischarge recovery is appropriate healing of the surgical wound. The addition of a visual component (video and images) allows more complete evaluation of wound healing, which is vital for monitoring postoperative recovery for 3 primary reasons: wound infection is the most common nosocomial infection in surgical patients, it is a leading cause of hospital readmission [19] as infections increasingly develop after hospital discharge [20], and patients are unable to identify wound complications, with a high rate of false negatives [21,22]. Telemedicine protocols that rely on mobile devices, collectively termed mobile health (mHealth), are uniquely positioned to easily provide visual information, essential to the diagnosis of a wound infection. 
Those telemedicine protocols that do have a visual component are frequently asynchronous and episodic and have not been designed for ongoing monitoring of postoperative recovery.Most commonly, these protocols involve either digital images or videoconferencing intended to replace an in-person office visit [1,17,[23][24][25].However, while these are useful in their ability to decrease travel time and cost, they are not sufficient in diagnosing an early wound complication for reasons stated above, namely that, a surgical site infection (SSI) often develops before many follow-up visits are scheduled.Other protocols intended for wound monitoring, such as the mobile Post-Operative Wound Evaluator (mPOWER), are intended to allow patients to submit images, but do not guarantee that a provider will review them unless notified to do so [26,27].Unless patients alert their provider regarding a concerning finding, something patients are not reliably able to do, such protocols may inadequately detect the early signs of a wound complication. We address these gaps by creating an image-based smartphone app aimed at increasing communication between patients and their caregivers after they leave the hospital as part of a forthcoming effort to detect wound complications at an early stage and to reduce hospital readmissions.We then evaluate its usability in a largely technology-naive population of patients undergoing general and vascular surgery.In constructing this project, we consulted 2 international standards: International Organization for Standardization (ISO) standard 9241-12 was used to optimize the design of our application and then ISO 9241-11 was used to guide usability testing of the app.ISO 9241-11, a widely used guideline for current usability testing methods, which focuses on effectiveness (ie, task completion), efficiency (ie, time within task), and user satisfaction of new technology, was used to assess the patient-centeredness and usability of this app to monitor postoperative wounds [28].To our knowledge, we are the first to invoke ISO 9241-11 to assess an image-based app in a clinical patient population.Our findings have the potential to provide vast amounts of clinically vital information that has been otherwise unavailable to health care providers.We also address the utility of existing usability standards for image-capturing mHealth platforms. Subjects Eligible participants included inpatients 18 years of age or older on the vascular or general surgery service of a large, academic tertiary care hospital.Subjects were recruited during one of two usability sessions in November and December 2015.Participants were eligible if they had a surgical incision longer than 3 cm and were close to their baseline functional status.Subjects with major cognitive or neurologic deficits prohibiting their independent use of the app were included only if they had a capable caregiver who consented to complete the app on their behalf.All subjects who met inclusion criteria were approached to participate.Participants were asked regarding their prior experience with smartphones, whether they owned their own smartphone, and whether they had used a smartphone to take a digital image. We aimed for a sample size of at least 5 participants, a number based on evidence from the usability literature indicating that 5 participants make a sufficient sample size to detect 80-85% of an interface's usability problems [29].We continued to enroll purposively past our sample size goal to utilize the remaining time. 
The University of Wisconsin Health Sciences Institutional Review Board approved the study protocol. The App WoundCheck is an iOS app that enables patients to capture digital images of surgical wounds and send them to their providers from home, along with brief updates on postoperative recovery. This app was developed internally through the University of Wisconsin Department of Surgery with the assistance of software programmers in our Information Technology division. In designing the app, we consulted ISO 9241-12, an international standard for screen layout and the visual display of complex information, and established guidelines on user interface design to ensure that the user interface was easily navigated by our target population of older adults and novice users [30,31]. Table 1 summarizes the app's features and the method of development vis-à-vis the salient dimensions of the ISO standard for user-centered design, including clarity of the content, discriminability of information, conciseness, consistency of presentation, detectability, legibility, and comprehensibility. The app is accompanied by a training program to be delivered prior to discharge that draws on evidence-based tenets of adult learning and memory retention (Table 2), in keeping with similar transitional care programs targeting older adults [32][33][34]. Among these tenets is the need for adult learners to feel actively engaged in the learning process, to frequently receive positive reinforcement, and to set the pace of learning. We allowed ample time for questions and for participants to interject comments. We also allowed participants to use the smartphone and the app directly after a short demonstration, engaging visual, auditory, and kinetic forms of learning. Adult learners also require repeated exposure to new material and to have it presented in a variety of formats. Each participant received a training booklet that reinforced the steps of the app for reference if questions arose after discharge. Table 2 pairs each sample training design feature with the evidence-based dimension of adult learning it addresses: letting the participant set the pace of training (adults require more time to learn new skills [35]); repetition, supplementary flash cards, and letting the participant develop their own narrative around the device (the need for repetition and multiple formats of materials [36]); emphasizing the purpose of the training and the "why" of tasks (adults are challenged by complex, unusual material [37,38]); frequent positive feedback and opportunities to reflect and ask questions throughout (motivation declines when success is not experienced [38]); a primary training session plus refresher training prior to discharge (repeated exposure facilitates learning [39,40]); a reminder alarm set for a time of the participant's choosing as a cue to use the app (cue-based recall [41]); and providing a device to the participant to use throughout training (task performance, not just observation, with teach-back [41]). The program is ultimately designed for use during the period between hospital discharge and the routine postoperative clinic visit. The app was designed to be linear, with one pathway through the app, to maintain simplicity and intuitiveness. There are 2 phases to the app: an image-taking phase where participants take digital images of their wound and have the ability to review or retake their images, and a brief survey with yes or no questions regarding their recovery. Screenshots of the app are provided in Figure 1, and survey questions are provided in Textbox 1.
To vet the content of the app and training and meet the burden of the ISO design standard, we conducted 2 focus groups to review the app with Community Advisors on Research Design and Strategies (CARDS).These are standing focus groups of community members from diverse racial, ethnic, socioeconomic, and educational backgrounds who are recruited from food pantries, senior meals, parenting programs, and other similar programs.They are trained to give constructive feedback to researchers, health educators, and outreach professionals.The CARDS members, the majority of whom are novice smartphone users, evaluated prototype screens of the app and all app language in the first focus group.The image capture training protocol was evaluated in the second focus group. Health Insurance Portability and Accountability Act Compliance The app and transmission of patient data were developed to fully comply with the Health Insurance Portability and Accountability Act.A passcode is used to secure and encrypt the device.Each device is profiled, allowing us to remotely wipe the device, prevent the installation of additional apps, and limit other device features.No information is stored on the mobile phone itself; the app can only be used to submit information, not retrieve it.The app transmits data to the University of Wisconsin Department of Surgery research server using the Hypertext Transfer Protocol Secure (HTTPS; Figure 2).A unique nonmedical record number identifier is used for each participant.No identifying information is transmitted, and participants were instructed not to send pictures that included identifying marks or their face.If the participant is idle for more than 10 minutes during data collection, the app times out and the data is deleted.Only research personnel with responsibility to review images have access to the submitted images.The system automatically logs off users after 30 minutes of inactivity.Audit controls monitor access. User Tasks Following preliminary design, we formally tested the usability of the app with postoperative vascular and general surgery patients at a major academic medical center.The app was loaded onto a 5 th generation iPod Touch running iOS8.We assessed patients' baseline familiarity with smartphones prior to testing.A researcher introduced the device to participants with an overview of its general functions and how to operate it, if needed.User tasks included waking up the device, launching the app, image capture, review and retake or acceptance of captured images, question response, and submission.Following the first round of usability testing, an interim assessment of the app was performed and adjustments were made based upon the findings of the first round.The updated version of the app was then used for the second round of testing. Measures and Analysis We consulted ISO 9241-11 in designing the format for formal usability testing of the app [28].Effectiveness (ie, the ability to successfully complete each task independently and whether assistance was required) and efficiency (ie, the time needed to complete each task) were measured by direct observation and by mirroring of the device onto a research computer using the software AirServer (App Dynamic).The mirrored screen on the laptop was recorded using Morae (TechSmith) screen recording software.Training sessions were audio recorded for later review. 
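As a simple illustration of how the ISO 9241-11 effectiveness and efficiency measures described above can be tabulated from the observation logs, the sketch below summarizes task completion, assistance, and time-on-task. The record layout ("completed_independently", "needed_assistance", "time_min") is a hypothetical stand-in for the study's actual coding sheet, not its real data dictionary.

```python
# Minimal sketch of summarizing ISO 9241-11 effectiveness and efficiency measures
# from per-participant observation records (hypothetical field names).
from statistics import mean

sessions = [
    # one dict per participant, filled in from the mirrored-screen recordings
    {"participant": 1, "completed_independently": True,  "needed_assistance": False, "time_min": 1.4},
    {"participant": 2, "completed_independently": False, "needed_assistance": True,  "time_min": 16.6},
    # ...
]

def usability_summary(records):
    n = len(records)
    times = [r["time_min"] for r in records]
    return {
        # effectiveness: share of participants finishing the whole app without help
        "pct_independent": 100 * sum(r["completed_independently"] for r in records) / n,
        # share needing any prompting or assistance from the research team
        "pct_assisted": 100 * sum(r["needed_assistance"] for r in records) / n,
        # efficiency: time within task, reported as mean and range
        "mean_time_min": round(mean(times), 1),
        "time_range_min": (min(times), max(times)),
    }

print(usability_summary(sessions))
```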
Following usability testing of the app, participants were asked to rate their performance and to provide feedback on the app itself. Participants also completed a system usability scale (SUS) to evaluate their satisfaction with the app (questions presented in Textbox 2) [42,43]. Images generated during the testing sessions were independently reviewed by 3 physicians to assess whether they could be used for diagnostic and treatment purposes. If a reviewer deemed an image as not usable, they were asked to provide a reason. Textbox 2. System usability scale questions. Responses followed a 5-point Likert scale from "strongly agree" to "strongly disagree." • I think that I would like to use this app frequently Participant Characteristics Of the 14 patients who were approached to participate, 5 declined due to time constraints or disinterest. Nine participants completed usability testing, 3 of whom had caregiver assistance or proxy participation. Five participants owned their own smartphone, and 7 had used a smartphone to take a digital image at least once prior to this study, leaving 2 who had no prior experience with smartphones. Demographics and basic clinical information are presented in Table 3. Effectiveness and Efficiency Effectiveness and efficiency data are presented in Table 4. The mean length of time spent with each participant for the full app training session, excluding study introduction and survey completion, was 9.7 minutes (range: 3.9-23.0 minutes). The mean length of time participants needed to complete the app independently was 5.0 minutes (range: 1.4-16.6 minutes). For all of these measures, the participants in the second round (ie, users of the updated version of the app) had better efficiency than the participants in the first round (ie, users of the app in its original form). Forty-four percent of participants needed prompting or assistance from a member of the research team to complete the app; 55.6% were able to complete the app in its entirety without assistance. Of the documented instances when the researcher's assistance was given, 64% were related to taking images of wounds, most often related to participant positioning and navigating the device's camera functionality. The most difficult task in the initial round of testing was to take a digital image of the wound. Participants were confused about the flow through the image-taking portion of the app, and they also faced difficulty with button placement. Specifically, the placement of the image capture button directly next to the cancel button led to image capture attempts that resulted in cancellation. In addition, the cancel button looped back to restart the app rather than sending participants forward even if they had already captured an image. As a result of these difficulties, the image-taking portion of the app was redesigned to make it more intuitive, and the camera buttons were placed in more convenient locations on the screen to facilitate image capture (Figure 3). Following these adjustments, participants in the second round of testing had less difficulty with this section. Novice smartphone users also experienced confusion with changing the direction of the camera to face toward or away from them and required frequent reminders and assistance.
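For reference, SUS scores on the 0-100 scale cited in this study are conventionally computed with the standard 10-item scoring scheme; the sketch below shows that convention. It assumes the usual instrument layout (odd items positively worded, responses coded 1-5) and is not code or data taken from this study.

```python
# Minimal sketch of the conventional 10-item SUS scoring (0-100 scale); assumes answers
# are coded 1 ("strongly disagree") to 5 ("strongly agree") and odd-numbered items are
# positively worded, as in the standard instrument.
def sus_score(responses):
    """responses: list of 10 integers in 1..5, in questionnaire order."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected 10 Likert responses coded 1-5")
    total = 0
    for i, r in enumerate(responses, start=1):
        # positively worded (odd) items contribute (r - 1); negatively worded (even) items (5 - r)
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # rescale the 0-40 raw sum to the 0-100 SUS scale

print(sus_score([5, 1, 5, 2, 4, 1, 5, 1, 5, 2]))  # a hypothetical respondent, ~92.5
```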
Participants with groin wounds, and particularly obese participants with groin wounds, had considerable difficulty taking images of their wound independently due to inadequate exposure of the wound. At least one other person was required to fully expose the wound, and even then, it was difficult to achieve the optimal angle for image capture. Participants who had active caregivers present were better able to perform this task without requiring the researcher's assistance. On assessment of image quality, 9 of 11 (81.8%) images were deemed sufficient for diagnostic purposes by a majority of rating physicians (Table 4). Five of 11 images had at least one physician rate them insufficient, primarily because the entirety of the wound was not visible in the image (scope). One of these was a patient who was too close to surgery to fully uncover and visualize her wounds. Another patient had the very top of his abdominal incision covered by his gown but otherwise had an adequate image. A man with an amputation stump generated an image that had insufficient lighting for one rater to comfortably say whether there was erythema or ecchymosis, which was a function both of how wound healing appears in darker skin and of the available light. The 2 wounds that the majority of raters found inadequate for clinical use were 2 of the 3 groin wounds; this was consistent with the participants' difficulty in taking the picture during usability testing, for the reasons stated above. The survey task within the app was easy for all participants to use. On the initial round of testing, the screen for reviewing survey responses was a single scrollable screen, so some responses were not visible unless the participant scrolled to the bottom of the screen. This was confusing for some participants, as this was the only scrollable screen within the app, requiring mastery of a new functionality. The response review screen was revised in the second round of testing to be split into 2 screens to eliminate the need to scroll. After this adjustment, participants had no difficulty with this section. Figure 3. Original and modified image-taking screen. On the left is the original camera screen with both the image-capture and cancel buttons at the bottom of the screen. On the right is the modified screen based on user feedback. The image-capture button takes up the whole bottom of the screen, but does not extend as far up into the screen, and the cancel button has been moved away from it to decrease button confusion.
Usability The responses to the System Usability Scale (range: 0-100) are presented in Table 4. The overall usability score for the app was 83.3, which is considered good for usability testing [44]. Most participants found the app easy to use, though the questions that did not elicit a unanimous positive response ("I think I would need the support of a technical person to be able to use this app," "I would imagine that most people would learn to use this app very quickly," and "I needed to learn a lot of things before I could get going with this app") indicate a degree of tentativeness regarding participants' ability to independently complete the app. One participant said she would "probably have to write the steps down" to be able to complete it independently, though said she "didn't find it that complex once (she) got into it" and that she "would do it because we need to do it." Another said she could imagine "a lot of people who would have all kinds of problems" learning to use the app. These challenges were also observed during usability testing, particularly with novice users who, in addition to learning to use the app, needed more time to become comfortable using the device itself. Four participants struggled with simply tapping the screen and alternating between tapping icons on the screen and pressing the home button; two came close to deleting the app by pressing the icon for too long rather than tapping it. As stated previously, novice users also struggled with using the camera, particularly with switching the direction of the camera to face them. The most commonly cited concerns regarding the protocol were confidentiality of patient information and whether anyone in the care team would actually review the submitted images and survey responses. One participant was concerned "whether information (would be) followed through," saying "you might have taken lots of pictures, but if no one looks at it, it's all for nothing." Other concerns raised were device battery life and difficulty being able to fully visualize the wound to take a digital image. Three participants stated they had no concerns. All 9 participants said they would be able to complete the app daily after discharge if they were given full instructions. One particularly enthusiastic participant said, "I wish I had it today." All nine said they would benefit from a protocol using this app following hospital discharge. One participant said, "I think it's really pretty neat...if you have a concern, you'll get an answer like that." Eight participants said they would recommend the app to a friend or family member if they had surgery, and one participant was neutral, saying "...that's their decision." Principal Findings The current standard of care for the majority of surgical patients following hospital discharge involves little formal communication between patients and their care team until their routine clinic follow-up 2-3 weeks after discharge. This is a crucial time period during which many complications and setbacks to recovery occur, and it is thus ripe for mHealth innovation [45]. Other mHealth protocols have been developed to improve patient monitoring or replace routine postoperative clinic visits [1,3,27]. However, these protocols are limited by their episodic follow-up, the lack of guaranteed provider review, or the lack of any transmitted visual information.
To address these gaps, we have developed a smartphone app that allows patients to be in daily communication with their provider with both subjective symptom data and visual information in the form of digital images.We have demonstrated that most patients and their caregivers are able to learn to use our app, can use it to transmit meaningful clinical information, and have a high level of satisfaction and enthusiasm regarding the protocol.Additionally, studying patients during the immediate postoperative period allows for the most conservative estimate of usability given that patients are still in recovery and may not be at their functional baseline.Given that our participants were mostly older adults, seen during the vulnerable postoperative period, some with very limited prior smartphone experience, the wide success we observed is encouraging for the ability of the general population to use the app without difficulty once given protocol-based training and clinical support at the outset. Insights from the field of systems engineering provide a helpful framework for the development of mHealth protocols, as well as their attendant training programs.Work focusing on universal access and assistive technology for persons with disabilities is especially relevant for creating mHealth protocols accessible to a diverse patient population, particularly patients recovering from surgery, who are elderly or have limited prior experience with the technology, as in this study.Vanderheiden [5], a systems engineer with a focus in user experience optimization, outlines 3 approaches to assist in those efforts that are as follows: changing the individual, providing adjunct tools, and changing the environment. For the purposes of our protocol, changing the individual involved tailored training, which we made modular so that portions could be added or skipped depending on the participant's needs.As expected, the participants who struggled the most with the app were novice smartphone users and older participants.Most of this difficulty was in learning to navigate the smartphone itself and not necessarily related to the app.This was reflected in the responses to the system usability scale, where 11-20% of participants expressed needing to learn a lot before they could get going with the app or felt that they would need assistance of a technical person to complete it.Previous studies of mHealth apps have found similar results, with lack of familiarity with mobile devices and the need for assistance identified by participants as barriers to independent use [46][47][48].As a result of this added difficulty, novice users of smartphones required dedicated training to become facile using the device before moving on to training specific to the app; those participants who were familiar with the device were able to skip this portion of training.This flexibility in training was envisioned prior to usability testing, but by doing formal usability testing, we were better able to identify components of the protocol that needed dedicated training and for which patients they were needed. Importantly, efficiency of training should not come at the expense of effectiveness.Protocol training will need to be performed at the pace of the learner, taking care to keep them engaged.Two participants expressed training fatigue, with one saying, "I'm glad you're getting out of here; that was time consuming" after 27 minutes of training, despite her not having fully mastered the task.Another said, "you mean we're not done?" 
after 25 minutes of training. Bearing this in mind, future training efforts may need to be spread over multiple sessions, both to reinforce tasks and to avoid fatigue and boredom with a single session. The second approach for improving accessibility is to provide adjunct tools to overcome particular barriers to use. For participants who struggled with tapping the screen, a stylus may be easier and more intuitive than using their finger. One participant opted to do this on her own based on her prior experience using a stylus with her tablet device. Another barrier we encountered in our protocol was the difficulty experienced by patients with wounds in certain locations that were difficult to take an image of, particularly groin and abdominal wounds as well as amputation stumps. Potential tools to aid these patients might include training them to use selfie-sticks or mirrors to improve their ability to independently take images of wounds in these locations. However, assistive devices or tools have the potential to add an additional layer of complexity for patients who are already uncomfortable with the device or the app, and this must be weighed against the potential benefit of their use. Because groin wounds are at increased risk of developing surgical site infection [49], these are the very patients who stand to gain the most from postdischarge wound surveillance, and every effort should be made to maximize their ability to participate, which may also include identifying a competent caregiver willing to assist. Finally, user accessibility may be improved by changing the environment to be accessible to all users without the need for specialized devices or tailoring to the individual, an approach termed "universal design." Following the first round of testing, we made several subtle but significant improvements to the design of the app itself to improve its usability for a wide range of users. The reconfiguration of buttons on the camera screen made capturing images easier for participants with limited fine motor ability or who had difficulty with discrete touch. We eliminated screens that required scrolling up and down to preclude novice users or those with cognitive limitations from having to learn an additional skill. In making these changes, the app becomes more accessible to all users, including those who did not have difficulty completing it prior to these modifications, by making it as simple and straightforward as possible. mHealth platforms in the future should strive for universal accessibility in their design to maximize participation and benefit.
One aspect of universal design we did not achieve was making the app compatible with an Android device.For those participants more familiar with Android technology than iOS, learning to use the app first required learning to use the device, a barrier not experienced by those participants who had used an iOS device in the past.This is particularly important given key demographic differences in smartphone ownerships, specifically that minorities, those of lower income, and those with lower educational attainment are more likely to own an Android device [50].Future iterations of this app should be made Android-compatible to increase its usability for a wider range of patients.However, despite our best efforts to incorporate these insights from systems engineering and develop a universal design for the app and for our training protocol, it is likely that some patients will still need the assistance of a caregiver to complete the app.Through usability testing, we identified several possible reasons why some might be unable to complete the app independently.Those patients who are novice smartphone users and are unable to learn to complete the app independently will by definition need assistance.Patients who have wounds in locations they cannot reach or cannot visualize sufficiently on their own will need a caregiver.Additionally, patients who have limited independence at baseline will need assistance, as with one of our participants who was a hemiparetic bilateral lower extremity amputee.In these cases, a competent caregiver or family member will need to be identified so that these patients may still benefit from mHealth protocols.These patients may already have a caregiver or involved family member due to their baseline functional status and reliance on others for aspects of their care. 
Interestingly, participants consistently rated themselves as having successfully completed the app, even when their performance did not warrant such an assessment. When asked whether taking a digital image of their wound was easy to complete, only 2 participants were neutral, while all others agreed or strongly agreed. All 9 participants agreed or strongly agreed with the statement "I am confident I completed this task" in reference to taking a digital image of their wound, even the participants whose images were not sufficient for clinical decision making. Sonderegger et al [51] found a similar trend in their study of mobile phone usability in older adults. They posited several possible explanations for this finding. One was that this may have been a result of low expectations participants had for themselves, such that they overstated even small successes. Another was that participants may have felt that with practice they would eventually be successful, valuing their potential success over their actual success. This is an important finding, indicating that participants using new technology need to be carefully educated about what is expected of them and what constitutes meaningful success. Despite these barriers, there was substantial enthusiasm from most participants about the protocol. One participant told the research team he wished he could take the device home upon discharge and use it to stay in contact with the care team. All participants thought they would benefit from this protocol and would be willing to complete the app daily if they were instructed to do so. This is consistent with previous studies of mHealth [11,26,52,53], which collectively indicate that patients and their caregivers are willing to participate in a variety of remote monitoring protocols, see such protocols as being potentially beneficial to them, and are satisfied when they participate. In addition, the fact that many participants could ultimately complete the app independently or with a caregiver's assistance is encouraging. The overall usability score of 83.3 is above average for usability testing, indicating a level of comfort among first-time users of the app [54]. Following a short training session, most patients will be able to participate in a protocol using this app, though as stated above, certain populations will likely need more focused training. This is the first study, to our knowledge, to formally investigate the usability of a medical device with digital image-taking capability using the ISO 9241-11 standards [28]. Our findings indicate that patients are capable of completing such an app and that there is broad enthusiasm for its use. However, increased attention will need to be paid to novice users and older adults, who may need more extensive training before they will be able to complete mHealth protocols independently. Additionally, to avoid widening existing disparities in access and health outcomes, health systems must ensure such protocols, if proven beneficial, are available to all patients and not only to those who already have access to the necessary technology. As health systems increasingly focus on improving transitions of care and maximizing outpatient management of complex patients, the ability to remotely monitor recovery from conditions that have a physical manifestation will become increasingly valuable, including in fields beyond vascular and general surgery; this app and those similar to it have the potential to revolutionize the way care is delivered in the postdischarge period.
The results of this study should be interpreted in the context of several limitations. Our study may be limited by its sample size. Considerable debate exists within the literature regarding the ideal sample size for usability testing. Historically, a sample of only 5 participants was thought to be of sufficient size, but more recent data suggest that a larger sample is required to make accurate assessments [29,[55][56][57]. However, the more recent estimates for ideal sample size were based on usability testing of more complex websites with multiple possible pathways. Given the simplicity and linearity of the app in this study and the diversity of the participants studied, we feel confident that all major areas for improvement within the app were identified and addressed in the redesign of the app. In addition, our results may be limited by the fact that data were collected only at one medical center; our findings may be specific to our patient population and need additional testing in other patient populations with different sociodemographic or cultural characteristics. Moreover, while the training was performed by a researcher for the purposes of this study, it is likely that this would need to be performed by a nurse in the clinical setting. Further work will need to be done to examine the implementation and feasibility of this protocol outside of a controlled research setting. Conclusion As postoperative lengths of stay decrease, health systems will need to become creative in their methods of monitoring patients in the outpatient setting. Many telemedicine protocols have emerged to address this goal, but ours is the first to add an asynchronous visual component through the use of digital images, whose power to efficiently convey vast amounts of information is unparalleled in today's standard of care. Additionally, by directly engaging with our patient population and making them active participants in their care, we participate in a growing movement toward patient-centered care and shared decision-making. We have demonstrated that the majority of patients can be taught to complete our app independently and that patients are enthusiastic about partnering with their providers in novel ways to optimize their recovery. Though the majority of participants had little difficulty completing the app, formal usability testing allowed us to identify components needing further improvement, providing invaluable information we could not have otherwise obtained. This argues strongly for the use of formal usability testing in the development of future novel protocols for patient-centered care. Figure 1. Screenshots of the final app. A. Modified camera screen. B. Image review screen where participants can choose whether to keep the image they have taken or try again. C. Review screen of all added images; up to 4 images may be added. D. A series of yes or no questions follow. E. Participants can review their survey responses and have the option to change them prior to submission. F. Submission confirmation screen. Figure 2. WoundCheck app data flow overview. Table 1. User interface design dimensions from International Organization for Standardization (ISO) standard 9241-12 and corresponding WoundCheck design features. Table 2. Tenets of adult learning and memory and corresponding training design features. Table 3. Demographic and baseline characteristics. Table 4. Effectiveness, efficiency, and satisfaction results of usability testing.
2018-04-03T01:18:11.764Z
2016-09-28T00:00:00.000
{ "year": 2016, "sha1": "4c4bcb1fc27103ffb01277b2e6e74e413ec7289f", "oa_license": "CCBY", "oa_url": "https://doi.org/10.2196/mhealth.6023", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "0fd294ebd49498e69fc4cf9571e67b85d6e0e505", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
238835500
pes2o/s2orc
v3-fos-license
UAV, a Farm Map, and Machine Learning Technology Convergence Classification Method of a Corn Cultivation Area : South Korea’s agriculture is characterized by a mixture of various cultivated crops. In such an agricultural environment, convergence technology for ICT (information, communications, Introduction Corn, along with rice and wheat, is one of the world's three major grains, and it is a food crop that has high productivity per unit area and is widely used as snacks, forage, starch, and cooking oil [1,2]. The consumption of corn continues to increase worldwide [3]. Production was 1135 million tons in 2019, accounting for the largest proportion of food crops [4]. However, the Korean domestic grain self-sufficiency rate was 21.7% in 2018, and corn, among major food crops, had the lowest rate at 0.8%. In addition, the import volume of corn is about 10,166,000 tons, with an import value of 2126 million U.S. dollars, which is a situation dependent on imports. The South Korean government is preparing a plan to increase the production of crops with high import dependence, such as corn, by cultivating other crops in paddy fields and improving varieties [5]. Global abnormal weather events such as those in 2020 highlight the instability of international grain prices and the importance of food security. In particular, in South Korea, which is highly reliant on imports, it is very important to understand the current status of the cultivation area and production in order to stabilize consumer prices, make decisions on grain import policies, and efficiently inventory selfsufficiency. The crop cultivation area survey conducted in the past was carried out by selecting a sample and visiting the site in person or by conducting a cultivation intention survey by interview [6]. It is very difficult to obtain objective data because the survey results depend mainly on observations and can be subjective depending on the skill level of the investigator [7]. As an alternative to this, the research team proposed a method to check the cultivated area without directly visiting the site, and it is being applied and utilized in practice [6]. The proposed technology utilizes remote sensing technology that can acquire objective information for a large area and is being applied in various ways [8][9][10]. In particular, in the remote sensing (RS) field, unmanned aerial vehicles (UAVs) have advantages in precision, economy, and periodicity compared to satellite images, so they are attracting attention as a suitable platform for agricultural monitoring in South Korea, where various crops are grown in small areas and precipitation is concentrated in summer [11]. Land use/land cover (LULC) and crop classification are some of the most active research areas in the RS field [12][13][14][15][16][17][18]. The research results obtained through this study are useful as basic data for the calculation of cultivated areas and the monitoring of crop conditions in the agricultural field [19]. On the other hand, from 2010, research on land cover classification and crop classification that applied artificial intelligence technology increased rapidly with the development of software and hardware [20][21][22][23]. In particular, in the early stage, land cover classification was performed using satellite data such as Landsat images and various machine learning classification algorithms (ANN: artificial neural network, SVM: support vector machine, RF: random forest), and their functions were compared [24][25][26]. 
During this period, studies were conducted to evaluate and compare machine learning classification models for target regions of each country, mainly using satellite images [27][28][29][30]. Since then, research on land use and classification methods has been continuously developed, and various application cases of object-based classification methods with GIS (Geographic Information System) have been reported [7,[31][32][33][34]. As for the cases of using UAV images, as various UAVs have been developed and distributed, the number of applications, not only in surveying but also in agriculture, has increased. In the agricultural field, research on detecting weeds in paddy fields or classifying cultivated crops using UAV images and object-based classification techniques has begun to be actively conducted [35][36][37]. Most of these studies use SVM, RF, neural networks, Bayesian networks, and maximum likelihood classification methods to recognize plants or objects of interest [38][39][40][41][42][43][44][45][46]. In 2020, AI techniques using deep learning methods were proposed, but research focusing on leaves or individual plants has been the main focus [47]. Based on a review of various proposed studies, deep learning techniques are not applied in studies across a wide area, where mainly ANN, SVM, and RF algorithms are used, and it is suggested that their accuracy is higher than that obtained using other machine learning classification techniques [48,49]. Previous domestic studies using UAV and satellite images include studies on autumn cabbage and radish [8], onion and garlic [35], winter crops [36], and potatoes in high-altitude cabbage cultivation regions [49]. Overseas, research is mainly conducted on each country's major, high-value crops and on areas with pressing social problems [50][51][52]. In South Korea, there is a lack of research on corn classification, and research on corn is mainly conducted by estimating the cultivated area and production of overseas regions based on satellite images [27,28]. In addition, most of the studies using UAVs are on a field scale of less than 20 to 200 ha, and there are very few studies on a 2000 ha-wide area [53]. In the future, as the scope of the application of UAVs is diversified, they are expected to be used in various ways in the agricultural field, and the scope of application is expected to expand. The expansion of the scope of application in agriculture requires the development of related technologies and research on how to apply them. To this end, the Korean government has developed a Farm Map and is trying to use it in various fields in agriculture [54]. The Farm Map provides important spatial information about agriculture that is updated every two years by applying satellite and aerial imagery. Although technologies that make use of the Farm Map are still limited, if UAV and machine learning methods are combined with it, the time and effort required for image processing can be minimized [6]. Therefore, the purpose of this study is to (1) propose a UAV-based multi-spectral image acquisition and processing method for an area of 2000 ha or more, (2) apply a Farm Map and machine learning classification algorithm to identify the fields where corn is grown, and (3) estimate the corn cultivation area in the relevant area using the acquired cultivation land information. Study Area As shown in Figure 1, this study was conducted in Gammul-myeon (36°50′15″ N, 127°52′29″ E), located in the northeastern part of Goesan-gun, Chungcheongbuk-do, South Korea. The research area covers 4280 ha and consists of 7 legal districts. The western part of the region is a plain that forms agricultural land, and the southeast consists of mountainous areas where Juwolsan and Bakdalsan are located. To the northwest, the Dalcheon River flows, forming fertile agricultural land. Goesan-gun is an area where waxy corn was developed, and edible corn such as Mibaekchal and Miheukchal are mainly grown. In addition, this area was evaluated for local adaptability by selecting it as an open-field cultivation test bed for Golden Matchal, a newly developed variety from the RDA (Rural Development Administration, South Korea). The corn cultivation area of Gammul-myeon in Goesan-gun is 141.4 ha, based on the registration of agricultural business in 2017, and it is the region with the highest ratio of cultivated area [54]. Corn Growth Schedule and Weather Conditions in Goesan-Gun Corn has a slightly different growing period depending on the growing region and method. Corn cultivation in Goesan-gun is mainly carried out by the seedling method. Corn is sown using seedling plugs in March and is planted in April after a seedling period of about 25-30 days.
As shown in Figure 2, planted corn is harvested through vegetative growth stages (VE-V9), flowering stages (VT), and reproductive growth stages (R1-R6). The cultivated corn has been shown to grow with the growth characteristics suggested in the previous research results [55,56]. Corn's vegetative stage (VE~V9) is a period in which the number of leaves and plant height growth are remarkable, and it lasts for about 60 to 70 days after planting. Corn, which has sufficiently grown leaves and plant height, begins flowering (VT) from early to mid-June. Corn that has flowered is harvested after going through reproductive stages (R1~R6) for about 30 days. The reproductive stages are the period in which leaf and plant height growth are no longer achieved and the ears mature. The optimum growing temperature for corn is 20~30 °C, and as shown in Figure 3, the average temperature in Goesan-gun in early April is 5~15 °C [57]. As a result, corn planted in early April often suffers from cold damage. In addition, corn is one of the crops that causes a large decrease in farmers' income due to a decrease in harvest when rainfall is concentrated during the harvest season. The year 2020 was also greatly affected by cold weather in the early stages of cultivation and rain that continued during the harvest period (Figure 3). The UAV image acquisition was carried out for four days over a total of two corn growth stages: 8-9 May 2020, which corresponds to stage V3 in Figure 2, and 18-19 June, which corresponds to stage VT. The field survey was conducted for a total of four days from 18 to 21 June in accordance with the UAV imaging period.
Farm Map (Electronic Map of Farmland) The Farm Map was promoted with the goal of increasing the efficiency of policies related to agriculture by generating accurate farmland data consistent with the field, and at the same time linking agricultural and rural-related administrative information. Therefore, the Farm Map is an electronic map produced to construct high-quality farmland information that matches the actual farmland, addressing the discrepancy between the boundaries of the South Korean cadastral map and the actual farmland [54]. The Ministry of Agriculture, Food, and Rural Affairs, South Korea, established an electronic map of agricultural land across the country that was acquired from aerial imagery from 2014 to 2016 and has been continuously updated since 2017. The current version of the Farm Map is updated every two years by dividing South Korea into eastern and western regions. Therefore, in the case of general rural areas, the Farm Map accurately reflects the current state of farmland boundaries. The initial purpose of developing the Farm Map was to provide accurate data to overcome the limitations of front-line work on payment status and field verification, as the problem of the difference between the area under agricultural subsidy inspection and the actual area had been raised in the inspection system for crops. The Farm Map boundary is highly accurate because the farmland boundary was demarcated and divided by experts using high-resolution satellite and aerial images. Currently, it is used to block the illegal receipt of agricultural subsidies. However, there are some problems in the lot number mapping between the Farm Map and the cadastral map, so the utilization rate in the field is somewhat low. In addition, there is still inefficiency in the operation of geospatial data such as the Farm Map due to the lack of a connection base for agriculture-related geospatial data and a lack of awareness of their use. A Farm Map is produced through the following steps: First, satellite (Kompsat, etc.) and aerial imagery data are collected. Second, image processing of the collected data is undertaken. Third, the boundaries for seven agricultural land items are demarcated. Fourth, the processes of classification and segmentation of the image are undertaken. Fifth, a field survey for comparison with field conditions is performed. Sixth, on-site inspection is carried out to ensure that the site conditions are properly reflected. Seventh, metadata are created so that they can be used in various ways in the agricultural field. The Farm Map is an agricultural support map optimized for agriculture by linking aerial and satellite images, which are not easy for ordinary people to obtain and process, with field conditions. One drawback that has been pointed out is that the two-year update cycle of the source data, such as aerial images, makes it difficult to immediately reflect rapidly changing field conditions, for example on agricultural land near town centers. On the other hand, in rural areas such as the one in this study, the electronic map of farmland can be used as-is because the farmland does not change much. The land use classification of the Farm Map is divided into seven fields: paddy fields, fields, orchards, facilities, ginseng, fallow land, and bare land. In this study, paddy fields, fields, orchards, ginseng, and fallow land, i.e., not facilities and bare land, were investigated from the land use classifications using the 2019 Farm Map, the most recent data for the Chungcheongbuk-do region (Figure 4).
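To illustrate how this parcel selection can be scripted, the sketch below filters Farm Map polygons to the five land-use classes used here. The file name, attribute field ("LANDUSE"), class codes, and coordinate reference system are hypothetical placeholders rather than the actual Farm Map schema.

```python
# Minimal sketch of selecting Farm Map parcels used as classification objects
# (paddy fields, fields, orchards, ginseng, fallow land; facilities and bare land excluded).
import geopandas as gpd

farm_map = gpd.read_file("farmmap_2019_gammul.shp")           # 2019 Farm Map polygons (hypothetical file)
keep_classes = {"paddy", "field", "orchard", "ginseng", "fallow"}

parcels = farm_map[farm_map["LANDUSE"].isin(keep_classes)].copy()
parcels["area_ha"] = parcels.geometry.area / 10_000            # assumes a metric CRS (e.g., EPSG:5186)

print(len(parcels), "parcels retained;", round(parcels["area_ha"].sum(), 1), "ha total")
```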
Crop Classification Method and Algorithm The classification method used in this study is an object-based classification method. For wide-area classification using low-resolution satellite images, pixel-based classification has been used [13,15]. However, in high-resolution images of 5 m or less, there is a limitation in classification accuracy due to the salt-and-pepper phenomenon [22]. For this reason, object-based classification is widely used for UAV or high-resolution satellite images with a spatial resolution of 5 m or less [37,53,58]. Machine learning is a branch of AI that finds rules by inputting data and obtaining expected answers from these data [59,60]. For this, it is important to learn the input data. In the field of spatial information, machine learning is applied for classifying or recognizing objects using images and point clouds. When machine learning is provided with a lot of data related to a task, it can find a statistical structure in these data and create rules to automate that task. If data on crops are accumulated in the field of spatial information and such training data can be built automatically, the work efficiency of machine learning will be improved. In this study, SVM and RF methods were reviewed to evaluate the efficiency of object classification by applying machine learning to the spatial information field. SVM is an algorithm to find the optimal linear decision boundary that separates the classification items using the concepts of support vectors and margins [61,62]. Here, the support vectors are the data closest to the decision boundary, and the margin is the distance between the support vectors and the decision boundary. SVM finds the optimal decision boundary that maximizes this margin. However, when the input data are difficult to separate with a linear decision boundary, the kernel method can be used in SVM. The kernel trick is a method to find the optimal decision boundary in a projected multidimensional space by mapping the input data into that multidimensional space [29]. Here, various nonlinear kernel functions can be used.
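As a point of reference, standard textbook forms of the two kernels discussed next (not notation taken from this paper) are:

```latex
k_{\mathrm{poly}}(\mathbf{x}, \mathbf{x}') = (\mathbf{x}^{\top}\mathbf{x}' + c)^{d}, \qquad
k_{\mathrm{RBF}}(\mathbf{x}, \mathbf{x}') = \exp\!\left(-\gamma \,\lVert \mathbf{x} - \mathbf{x}' \rVert^{2}\right)
```

Under the RBF form, a larger gamma corresponds to a narrower kernel and therefore a more flexible decision boundary, which is consistent with the description of the hyperparameters that follows.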
Among them, a polynomial kernel and a radial basis function (RBF) kernel are representative [30]. In this study, the RBF kernel was selected as the kernel function. The next step was to select the hyperparameters. The hyperparameters are the regularization parameter (C) and gamma, the inverse of the kernel width [43]. C limits the importance given to each training sample: the larger the C value, the more non-linear the decision boundary separating the classification items becomes. Gamma defines the range of influence of each training sample: the larger the gamma and the narrower the kernel width, the more complex the model. RF is an ensemble learning approach developed by Breiman [63] that improves performance by aggregating the results of several decision trees, its basic components, into one result [64,65]. The final class is decided by weighted voting over the classes predicted by the generated decision trees. Since RF iteratively constructs independent decision trees with maximum randomness in the selection of samples and variables for each model, it can reduce the prediction error by lowering the variance while maintaining the low bias of the individual decision trees [47,66]. In addition, RF is stable even for high-dimensional data with many explanatory variables because it accounts for the interactions and nonlinearity between explanatory variables. The hyperparameters to be considered in the RF algorithm are mtry and n.tree. Here, mtry is the number of randomly sampled variables in each partition, and n.tree is the number of decision trees. If mtry is reduced, the calculation speed increases, but the correlation between any two trees and the strength of the individual trees in the forest decrease, which has a complicated effect on the classification accuracy [66]. It is known that n.tree has less influence on the classification results than mtry [52]. Verification of the Accuracy of the Classification Results The accuracy of the SVM and RF classification algorithms was evaluated by creating an error matrix against the field survey data collected at the time of image acquisition. The error matrix is evaluated with five indicators for LULC classification results [25]: the overall accuracy (OA), user's accuracy (UA), producer's accuracy (PA), the Kappa coefficient, and the F-measure. The evaluation of these indicators was performed by preparing an error matrix, as shown in Table 1, in which Type A and Type B indicate the classification items. In the formulas for these indicators, a is the number of rows and columns in the error matrix, n is the total number of observations in the error matrix, a_ij is the major diagonal element for class i, a_i+ is the total number of observations in row i (right margin), and a_+j is the total number of observations in column j (bottom margin). Research Progression and Method This study was conducted using the following steps and methods. (1) Image acquisition equipment: The UAV used to acquire wide-area images was a fixed-wing eBee Plus (Sensefly, Cheseaux-sur-Lausanne, Switzerland) (Figure 5). The UAV-mounted sensors were an RGB (16 MP) camera and a multi-spectral sensor (Sequoia+, Parrot, Paris, France) consisting of four spectral bands (12 MP): Green, Red, Red-Edge (RE), and Near Infrared (NIR).
The Sequoia+ sensor collects the amount of light in real time using the light sensor located on its top, and this light information is recorded with every single image and used for correction. Figure 5. Configuration of the fixed-wing UAV (eBee Plus, Sensefly) and onboard devices used to collect imagery over a wide area. (2) UAV image acquisition plan: For the UAV imaging plan, an area of 2520 ha was planned using eMotion 3 (Sensefly, Cheseaux-sur-Lausanne, Switzerland) software, as shown in Figure 6, excluding mountainous and non-agricultural land. In consideration of the battery capacity, the flight time was set not to exceed 40 min at a time, and a total of 18 routes were set to acquire the entire area of the farmland. In Figure 6, the paths in the orange box were flown on Day 1 of image acquisition and those in the blue box on Day 2. The numbers in the boxes are assigned arbitrarily, and the number in parentheses is the flight time. Acquiring the images for the whole area took two days per campaign, and campaigns were carried out once in May and once in June, taking a total of 4 days. The image acquisition altitude was 110 m, giving a spatial resolution of 10 cm based on the multi-spectral band. The collected single images were combined into one reflectance image per band using the Pix4D Mapper (Sensefly, Cheseaux-sur-Lausanne, Switzerland) program. The production time was about 26 h per campaign in the hardware environment of Table 2. Radiometric correction of each image was performed during image processing using the real-time light measurements collected by the optical sensor shown in Figure 5 and the calibration panel. The explanatory variables were constructed from the band reflectance and the NDVI (Normalized Difference Vegetation Index) of the two periods, giving 10 variables in total. NDVI was calculated using Equation (1): NDVI = (NIR − RED)/(NIR + RED). Here, NIR is the reflectance of the UAV image near-infrared band, and RED is the reflectance of the Red band. (5) The dependent variable was whether or not corn was cultivated. The constructed variables were divided into training data and test data: 30% of the total data were randomly selected as training data, and the remaining 70% were used as test data. (6) Crop classification: In this study, an object-based classification method based on the Farm Map was applied. In object-based classification, image segmentation precedes object classification. The segmentation operation creates a region based on the similarity of texture, distance, and spacing of neighboring pixels. To apply this method to a given site, the optimal segmentation must be found through trial and error, which requires a lot of time and effort. Therefore, in this study, the image segmentation task was omitted by setting each Farm Map boundary as one object and using it for classification, in order to handle the high-capacity UAV data efficiently. (7) Accuracy evaluation: The UAV multispectral image consists of four bands, and when time series images are analyzed, 4*n high-dimensional data are generated. In this study, the accuracies of the SVM and RF algorithms, which remain interpretable for high-dimensional data, were compared. For crop classification, each Farm Map boundary was set as one object, and the SVM and RF algorithms were used to classify corn and other crops.
The classified results were evaluated for accuracy using an error matrix. (8) Analysis tools: ArcGIS Pro software version 2.5 was used for explanation and input of dependent variables as well as visualization of image classification results. All procedures of data preprocessing and classification were performed using RStudio, an open-source program. The SVM and RF models were built using the R packages "Caret" and "randomForest". Figure 8 shows the schematic of the whole process of the study. The composition of the research process system was largely divided into data acquisition, image pre-processing, image classification, and cultivation area estimation steps to calculate the total area of corn cultivation in Gammul-myeon, South Korea. (5) The dependent variable was based on whether or not corn was cultivated. The constructed variables were divided into training data and test data. For training data, 30% of the total data were randomly selected. The remaining 70% were used as test data. (6) Crop classification: In this study, an object-based classification method based on the Farm Map was applied. In object-based classification, image segmentation precedes object classification. The segmentation operation creates a region based on the similarity of texture, distance, and spacing of neighboring pixels. In order to apply this method to each site, the optimal division operation is completed through trial and error, and a lot of time and effort are required. Therefore, in this study, the image segmentation task was omitted by setting the boundary of the Farm Map as one object and using it for classification in order to efficiently utilize the high-capacity UAV data. (7) Accuracy evaluation: The UAV multispectral image consists of four bands, and when time series images are analyzed, 4*n high-dimensional data are generated. In this study, the mutual accuracy was compared by applying the SVM and RF algorithms with high interpretability in high-dimensional data. For crop classification, the Farm Map boundary was set as one object, and the SVM and RF algorithms were used to classify corn and other crops. The classified results were evaluated for accuracy using an error matrix. (8) Analysis tools: ArcGIS Pro software version 2.5 was used for explanation and input of dependent variables as well as visualization of image classification results. All procedures of data preprocessing and classification were performed using RStudio, an open-source program. The SVM and RF models were built using the R packages "Caret" and "randomForest". Figure 8 shows the schematic of the whole process of the study. The composition of the research process system was largely divided into data acquisition, image pre-processing, image classification, and cultivation area estimation steps to calculate the total area of corn cultivation in Gammul-myeon, South Korea. UAV Orthoimage and Field Survey Image acquisition using UAV was performed twice, in May ( Figure 9a) and in June (Figure 9b), targeting agricultural land, except for mountainous areas in Gammul-myeon, Goesan-gun, Chungcheongbuk-do, South Korea, as shown in Figure 9. The area of the obtained image was 2721 ha in May and 2680 ha in June, which corresponds to about 63% of the total area of Gammul-myeon. The numbers of images were 48,564 and 47,858 in May and June, respectively. As a result of matching based on the image acquisition period, the acquired image capacities were 14.8 and 14.7 GB, respectively. 
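A minimal R sketch of the variable-construction and data-split steps ((2)-(6) above) is given below. It assumes the per-period reflectance mosaics are available as multiband GeoTIFFs and that the Farm Map carries a lot identifier and a field-survey label; all file, column, and band names are hypothetical, since the paper does not document the extraction code itself.

```r
# Sketch of the feature table: per-lot mean reflectance and NDVI for the May
# and June mosaics (10 explanatory variables), plus the 30%/70% split.
library(terra)
library(sf)

lots <- st_read("farm_map_2019.shp")
lots <- st_transform(lots, "EPSG:32652")            # WGS 84 / UTM zone 52N

extract_features <- function(stack_path, suffix) {
  r <- rast(stack_path)                             # Green, Red, RedEdge, NIR
  names(r) <- c("green", "red", "rededge", "nir")
  r$ndvi <- (r$nir - r$red) / (r$nir + r$red)       # Equation (1)
  v <- extract(r, vect(lots), fun = mean, na.rm = TRUE)
  names(v)[-1] <- paste(names(v)[-1], suffix, sep = "_")
  v[, -1]                                           # drop the polygon ID column
}

features <- cbind(
  lot_id = lots$lot_id,                             # assumed lot identifier
  extract_features("reflectance_may.tif",  "may"),
  extract_features("reflectance_june.tif", "june")
)
features$corn <- factor(lots$corn_surveyed)         # assumed field-survey label

set.seed(1)
train_idx <- sample(nrow(features), size = round(0.3 * nrow(features)))
train_df  <- features[train_idx, ]                  # 30% for training/tuning
test_df   <- features[-train_idx, ]                 # 70% held out
```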
UAV Orthoimage and Field Survey Image acquisition using the UAV was performed twice, in May (Figure 9a) and in June (Figure 9b), targeting agricultural land, except for mountainous areas, in Gammul-myeon, Goesan-gun, Chungcheongbuk-do, South Korea, as shown in Figure 9. The area of the obtained imagery was 2721 ha in May and 2680 ha in June, which corresponds to about 63% of the total area of Gammul-myeon. The numbers of images were 48,564 and 47,858 in May and June, respectively. As a result of image matching for each acquisition period, the acquired image volumes were 14.8 and 14.7 GB, respectively. Table 3 summarizes the data acquisition area, number of shots, and orthoimage capacity of the UAV images for each acquisition date. South Korea uses cadastral maps to record land locations, lot numbers, ground points, and boundaries. However, the cadastral map does not accurately represent the relationship between agricultural boundaries and land ownership in agricultural areas, and farmers in many agricultural areas have demanded solutions to this problem. Accordingly, the South Korean government has produced and distributed maps of crop-growing areas that are independent of land ownership, separate from the cadastral map of cultivated land [54]. This map is called the Farm Map and is currently used for agricultural statistics and policy operation [6]. The Farm Map was used here because it accurately delineates the boundaries of farmland using aerial and satellite images. Its major constituent items are a PNU code, lot center coordinates, a read code, the readjustment properties of arable land, and the lot area. First, it was necessary to integrate the coordinate systems of the image data acquired by the UAV and of the Farm Map; therefore, in this study, WGS 1984 UTM Zone 52 was used to unify the UAV coordinate system with the Farm Map coordinate system. As shown in Figure 10, the Farm Map showed high accuracy because the study area is mostly composed of agricultural areas. The total number of lots in Gammul-myeon is 5827, based on the Farm Map. The number of lots included in the UAV image is 5500, which corresponds to 94.4% of the total agricultural land area. The field survey was conducted on the 5500 lots included in the UAV image. The number of corn cultivation plots surveyed in the field was 582, and the number of plots of other crops was 4918.
The cultivated area was calculated using ArcGIS Pro based on the farmland division of the Farm Map. As a result of the calculation, the total agricultural land area was 855.99 ha, the cultivated area of corn was 106.94 ha, and the cultivated area of other crops was 749.05 ha. The corn cultivation area accounted for 12.5% of the total agricultural land area (Table 4). Data Preprocessing and Hyperparameter Tuning For data preprocessing, the reflectance and vegetation index (NDVI) corresponding to each lot were extracted using the multispectral images of the two periods and the Farm Map. The pre-processed data formed a table of 5501 rows and 12 columns, consisting of a lot number for parcel identification, the 10 explanatory variables, and the dependent variable. Before applying the classification algorithms, the training data were set: of the total of 5500 lots, 1650 lots (30%) were randomly selected as training data, and these were used to tune the optimal hyperparameters. The most representative method for setting tuning parameters is a grid search, which builds a model for every combination of parameter values and selects the optimal combination. The optimal combination of each parameter was determined through 5-fold cross-validation of the training data. For the SVM algorithm, the optimal hyperparameters C and gamma were selected using a grid search. As a result of the analysis, as shown in Figure 11a, C was set to 1 and gamma to 0.25, and the accuracy of the training model was 0.955. The RF hyperparameters considered were mtry and n.tree. n.tree was set to 500, the default value of the R package "randomForest", and mtry was tuned by a grid search. With n.tree = 500 and mtry = 5, the accuracy of the training model was about 0.958 (Figure 11b).
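The 5-fold grid search described above can be sketched with caret, the package named in the paper. Note that caret's RBF-kernel SVM uses the kernlab backend, whose sigma parameter plays the same role as the gamma reported here (both multiply the squared distance in the kernel); the grids, the object name train_df, and the label column corn are assumptions carried over from the earlier sketch.

```r
# 5-fold cross-validated grid search for both classifiers (a sketch; grids and
# object names are assumptions). Requires the kernlab and randomForest backends.
library(caret)

ctrl <- trainControl(method = "cv", number = 5)

# RBF-kernel SVM: sigma here corresponds to the gamma discussed in the text.
svm_fit <- train(corn ~ ., data = subset(train_df, select = -lot_id),
                 method    = "svmRadial",
                 trControl = ctrl,
                 tuneGrid  = expand.grid(sigma = c(0.0625, 0.125, 0.25, 0.5, 1),
                                         C     = c(0.25, 0.5, 1, 2, 4)))

# Random forest: 500 trees (the randomForest default), mtry tuned on a grid.
rf_fit <- train(corn ~ ., data = subset(train_df, select = -lot_id),
                method    = "rf", ntree = 500,
                trControl = ctrl,
                tuneGrid  = expand.grid(mtry = 2:10))

svm_fit$bestTune   # e.g. sigma = 0.25, C = 1 would match the reported values
rf_fit$bestTune    # e.g. mtry = 5
```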
Crop Classification Results and Accuracy Verification The classification of the cultivated crops in the study area was performed using the Farm Map-based SVM and RF algorithms. In general, the classification items are the cultivated crops grown during the observation period. However, since this study aimed at estimating the cultivated area of corn, the classification items were set as corn and other crops. Figure 12 shows the classification results of the corn cultivation areas obtained by each algorithm. The error matrix was used to evaluate the accuracy of the results classified using the Farm Map-based SVM and RF algorithms. Accuracy evaluation was conducted by comparing the image classification results with the field survey data, with corn and other crops as the verification items, and was performed on the 5500 parcels covered by the UAV images. As a result of classification using the Farm Map-based SVM algorithm, 437 corn lots and 4820 other-crop lots, i.e., a total of 5257 lots, matched the field survey exactly. The overall accuracy obtained by the SVM algorithm was 95.88%. The classification accuracy for corn, the target of this study, was 81.68% for PA and 75.09% for UA; the Kappa coefficient was 0.7777 and the F-measure was 0.78 (Table 5). The classification result obtained by applying the Farm Map-based RF algorithm was 537 corn lots and 4899 other-crop lots, a total of 5436 lots matching exactly. The overall accuracy of the RF algorithm was 98.84%, higher than that of the SVM. For the classification accuracy of corn, PA was 96.58%, UA was 92.27%, the Kappa coefficient was 0.94, and the F-measure was 0.94, showing a higher classification performance than that of the SVM algorithm (Table 6).
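The error-matrix indicators reported in Tables 5 and 6, and the corn-area estimate discussed next, can be reproduced from the predictions with a few lines of R. The object names (rf_fit, features, lots) and the label value "corn" follow the earlier sketches and are assumptions, not the authors' code.

```r
# Error-matrix indicators and the corn-area estimate (a sketch).
pred <- predict(rf_fit, newdata = features)              # or svm_fit
cm   <- table(Predicted = pred, Actual = features$corn)  # error matrix

n  <- sum(cm)
oa <- sum(diag(cm)) / n                       # overall accuracy (OA)
pa <- diag(cm) / colSums(cm)                  # producer's accuracy per class (PA)
ua <- diag(cm) / rowSums(cm)                  # user's accuracy per class (UA)
pe <- sum(rowSums(cm) * colSums(cm)) / n^2    # expected chance agreement
kappa     <- (oa - pe) / (1 - pe)             # Kappa coefficient
f_measure <- 2 * pa * ua / (pa + ua)          # per-class F-measure

# Cultivated-area estimate: sum the areas of the lots classified as corn
# ("corn" as the label value is an assumption).
lot_area_ha  <- as.numeric(sf::st_area(lots)) / 10000
corn_area_ha <- sum(lot_area_ha[pred == "corn"])
```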
To calculate the total corn cultivation area of Gammul-myeon, the classification results of the two algorithms, SVM and RF, were used. The cultivated area calculated with the Farm Map-based SVM algorithm was 96.54 ha, corresponding to 90.27% of the actual corn cultivation area, and the cultivated area calculated with the Farm Map-based RF algorithm was 98.77 ha, corresponding to 92.36% of the actual corn cultivation area (Table 7). This shows that, when estimating the cultivation area over a wide region, some error is introduced depending on the applied algorithm. Of the two algorithms, RF was more accurate than SVM, whereas, in terms of processing speed, SVM was slightly faster than RF. Considering current computer processing power, the difference in processing time is not significant, so it will be necessary to increase efficiency and to maximize learning from wide-area images in order to secure accuracy with machine learning. Crop Classification Methodology The overall accuracies of the classification results obtained with the Farm Map-based SVM and RF algorithms did not differ significantly, but the classification accuracy of corn did differ. The reason is the imbalance between the number of corn lots and the number of other lots: there were roughly eight times as many non-corn lots as corn lots, so even if the classification accuracy of corn was lower than that of the other class, the overall accuracy was hardly affected. Therefore, it is appropriate to judge the accuracy of this study by the classification accuracy of corn. The corn classification results of the two algorithms differed by about 17% in UA and about 15% in PA, confirming that the RF algorithm was more suitable for corn classification. Regarding computational speed, SVM took about 10 min and RF took about 60 min. This confirms that, when classifying a specific target crop in the agricultural field, the classification accuracy for the target crop is a more meaningful measure than the overall classification accuracy. In addition, although the time required for classification differs between the algorithms, it is short compared with the time needed for a field investigation. Therefore, given the continuing development of computer processing speed, acquiring training data that can be applied to AI algorithms, and learning from them, are considered important for further increasing accuracy. Comparison between the Existing Image Segmentation Method and the Method Using Farm Map Data To verify its efficiency, the Farm Map-based method was compared with the image segmentation approach used in conventional object-based classification. In general, image segmentation of wide-area satellite images uses the region-growing technique, which expands a region starting from a single pixel. The region-growing technique creates a region based on the similarity of texture, distance, and spacing of the pixels around a seed pixel, and a new object is started when the threshold value of a set parameter is exceeded. Parameters for object segmentation include the segmentation scale, spatial information (shape), spectral information (color), compactness, and smoothness. In general, the parameter thresholds are selected by trial and error, changing the parameter values until an optimal result is found. The optimal parameter settings were determined to be a scale of 700, shape 0.1, color 0.9, compactness 0.5, and smoothness 0.5.
The segmentation result is presented in Figure 13 for comparison with the boundary data of the Farm Map. Image segmentation techniques are generally very complex and time-consuming [42]: the image segmentation of the UAV image of the study area took about 18 h per run, and finding the optimal thresholds also takes a long time because the parameter values must be changed by trial and error in many cases. On the other hand, the Farm Map has the advantage of reducing the processing time, because EPIS (Korea Agency of Education, Promotion and Information Service in Food, Agriculture, Forestry and Fishery) provides various types of information based on combined satellite and aerial images [54].
This study was able to minimize the time required for image segmentation by using the open-source data provided by EPIS. If the Farm Map is not used, the time and effort of the image segmentation process must be invested. Moreover, in the general method, other land cover such as forests, built-up areas, roads, and water is included in addition to agricultural land, so agricultural and non-agricultural land must first be classified separately and the latter removed. When using the Farm Map, however, land cover other than agricultural land is excluded from the classification items in advance, thereby minimizing the possibility of misclassification and increasing the classification accuracy and data processing efficiency. Conclusions In this study, the vegetative (V3) and flowering (VT) stages of corn were selected for Gammul-myeon, Goesan-gun, Chungcheongbuk-do, South Korea, and the cultivation area of corn was estimated using UAV imaging and Farm Map-based machine learning techniques. The conclusions obtained in this study can be summarized as follows: 1. Existing UAV studies covered narrow areas at the lot scale (2 ha) or field scale (20 ha), whereas in this study a wide area of about 2700 ha was covered. Acquiring the UAV imagery took about 4 days and 36 flights, and producing the reflectance images took about 52 h. 2. Classification of the wide-area images using the Farm Map-based SVM algorithm yielded a PA of 81.68%, a UA of 75.09%, a Kappa coefficient of 0.77, and an F-measure of 0.78. The Farm Map-based RF algorithm showed a PA of 96.58%, a UA of 92.27%, a Kappa coefficient of 0.94, and an F-measure of 0.94, with higher accuracy than SVM. 3. The SVM algorithm estimated the corn cultivation area at 96.54 ha, an accuracy of 90.27%. The RF algorithm estimated the corn cultivation area at 98.77 ha and, at 92.36%, showed higher accuracy than SVM. 4. In the task of classifying crops and delineating their cultivated area, object-based classification using the Farm Map proved very effective in terms of work efficiency, accuracy, and processing time compared with the existing object segmentation method. In addition, the Farm Map-based method increased the efficiency of data processing by minimizing the possibility of misclassifying farmland and cultivated crops. As a result, identifying the cultivated area of crops by combining UAV images, the Farm Map, and machine learning was found to enable rapid and reliable analysis. The Farm Map is an agricultural land information map produced specifically for agriculture by the South Korean government. Although changes in some agricultural areas may not be reflected immediately, most agricultural areas show little difference within the two-year update cycle. If the boundary and area information of farmland, digitized directly through visual and field inspection of aerial and satellite images, is utilized effectively in related fields, it will be useful for saving time and for management in the agricultural field.
In particular, since agricultural work requires considerable effort and time in response to environmental changes, using public data and machine learning algorithms will make it possible to create a more efficient working environment, classify crops, and derive cultivation areas. In the future, accuracy should be improved by acquiring and learning from large amounts of data so that AI can be applied further.
2021-09-27T20:36:36.479Z
2021-08-04T00:00:00.000
{ "year": 2021, "sha1": "e01080e9d8e6a6034069bee5074c2a7a45500565", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4395/11/8/1554/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "9b8b68cadb785dcab2bfe2260ebfe0bbda012ad1", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Computer Science" ] }
13245561
pes2o/s2orc
v3-fos-license
Model to Predict Endothelial Cell Loss after Iris-Fixated Phakic Intraocular Lens Implantation PURPOSE. To describe a model predicting endothelial cell (EC) loss after iris-fixated phakic intraocular lens (pIOL) implantation, taking the distance from the edge of the pIOL to the endothelium into account. METHODS. This prospective observational study monitored long-term EC changes in 306 eyes after pIOL implantation. EC density (ECD) was determined before surgery, 6 months after surgery, and then annually up to 8 years after surgery. Mean follow-up was 31.7 ± 25.7 months. All eyes underwent anterior segment optical coherence tomography to determine the minimum distance from the edge of the pIOL to the endothelium. Linear mixed-model analysis was performed to present a model that describes EC loss as a linear decrease plus an additional decrease depending on the postoperative edge distance of the patient. RESULTS. Mean minimum edge distance was 1.43 ± 0.23 mm (range, 0.70-2.21 mm). For this mean edge distance, the model predicted a yearly EC loss of 1.0%, whereas an edge distance of 1.20 mm resulted in a yearly EC loss of 1.7%, and an edge distance of 1.66 mm led to a yearly EC loss of only 0.2%. Furthermore, the model predicted that for patients with preoperative ECDs of 3000, 2500, or 2000 cells/mm2 and edge distances of 1.43 mm, a critical ECD of 1500 cells/mm2 (at which point pIOL explantation and cataract extraction can still safely be performed) will be reached at 56, 37, and 18 years after implantation. CONCLUSIONS. The presented model predicts EC loss after iris-fixated pIOL implantation in relation to the measured edge distance, patient age, and preoperative ECD, which can assist ophthalmologists in patient selection and follow-up. Since 1991, iris-fixated phakic intraocular lenses (pIOLs) have been successfully implanted in healthy eyes to correct myopia, hyperopia, and astigmatism [2,3]. However, long-term endothelial cell (EC) loss remains a point of discussion. To investigate the effect of iris-fixated pIOLs on the corneal endothelium, several clinical trials studied EC loss after pIOL implantation, with variable results. Some investigators reported no statistically significant EC loss, whereas others found highly significant EC losses continuing up to 5 years after pIOL implantation (9.0% at 5 years postoperatively) [4,5]. Furthermore, Saxena et al. [6] reported a significant 12.6% EC loss 7 years after pIOL implantation and a significant negative correlation between anterior chamber depth (ACD) and EC loss after 3 years. New noncontact imaging techniques have been extremely valuable in guaranteeing a safe distance from the pIOL to critical ocular tissues [7]. Anterior segment optical coherence tomography (AS-OCT) has proven to be a good imaging tool to visualize the pIOL in the anterior chamber and to analyze its distance from the corneal endothelium and the crystalline lens [8]. This has led to the development of new criteria to warrant the long-term safety of pIOLs. One of these criteria is a minimum distance from the edge of the pIOL to the corneal endothelium, which is the smallest distance (1.5 mm) from the pIOL to the endothelium because of its convex-concave shape [9]. A new software update (Visante AS-OCT; Carl Zeiss Meditec Inc., Dublin, CA) has the ability to perform pIOL simulation and, consequently, can measure the edge distance before surgery. Recently, the importance of this edge distance has been demonstrated by Doors et al. [10].
They found that EC loss after pIOL implantation was associated with the distance from the edge of the pIOL to the corneal endothelium. An edge distance of 1.37 mm resulted in a yearly EC loss of 0.98%, whereas an edge distance of 1.15 mm predicted a yearly loss of 1.8%. Furthermore, the edge distance might not be constant with advancing age. Age-related changes of the crystalline lens cause a decrease in ACD of approximately 20 µm per year [11,12]. Guell et al. [13] reported a stable distance between the pIOL and the crystalline lens during accommodation, which might suggest that the iris and the crystalline lens act as a unit and move forward. Consequently, if the iris and the crystalline lens move forward with an iris-fixated pIOL, the distance from the edges of the pIOL to the endothelium might decrease with increasing patient age. In this study, we have extended our previous observations to an increased number of patients and assessed whether the endothelial cell density (ECD) changes conform to a mathematical model, taking the relationship between EC loss and edge distance into account. The main purpose was to design a model that can help physicians during the patient selection and follow-up process. It might be used to predict when a patient will reach a critical endothelial cell density level, such as an ECD of 1500 cells/mm2 [14,15], at which point, in our opinion, pIOL explantation and cataract extraction can still safely be performed. To help ophthalmologists in patient selection, the model might be used to predict how long the pIOL can remain safely in the eye using the preoperative edge distance, measured with the pIOL simulation program (Visante AS-OCT; Carl Zeiss Meditec Inc.), and the preoperative ECD count. METHODS This prospective observational study included 306 consecutive eyes of 162 patients who underwent pIOL implantation with the Artisan or Artiflex lens (Ophtec B.V., Groningen, The Netherlands) between 1998 and 2008 at the Academic Center for Refractive Surgery, University Eye Clinic Maastricht, for the correction of moderate to high myopia and astigmatism. Forty-eight men and 114 women were included in this study; mean patient age was 41.8 ± 10.6 years (range, 18-63 years) at the time of pIOL implantation. The study was conducted in accordance with the Declaration of Helsinki, and informed consent was obtained from all patients. Investigational review board approval was obtained from the Academic Hospital Maastricht. The Artisan pIOL (Ophtec B.V.) is a rigid single-piece lens composed of polymethyl methacrylate (PMMA). It has a convex-concave shape with either a 6-mm (for intraocular lens powers up to −15.5 diopters [D]) or 5-mm (for intraocular lens powers from −16.0 to −24.0 D) optic. In contrast, the foldable Artiflex pIOL (Ophtec B.V.) is a three-piece lens that consists of a flexible optical part made of ultraviolet-absorbing silicone and two rigid haptics made of PMMA. Because of its foldable 6-mm silicone optic, this lens can be inserted through a smaller incision. The Artiflex pIOL is available in dioptric powers of −2.0 to −14.5 D. The surgical procedures were performed by the same surgeon (RN). The surgical technique of pIOL implantation and the postoperative eyedrops regimen have been described elsewhere [10,16].
The incision size for implantation of the rigid Artisan pIOL was 6.2 or 5.2 mm, depending on the size of the optic. For the foldable Artiflex pIOL, a 3.2-mm incision was used. The criteria for performing pIOL implantation in our institution are a stable refractive error during the previous 2 years; a central ACD of 2.8 mm or more (measured from the endothelium to the crystalline lens); a pupil (in mesopic light conditions) < 6 mm; an endothelial cell density ≥ 2000 cells/mm2; no corneal, pupil, or iris abnormalities; and no history of glaucoma or chronic or recurrent uveitis. Before surgery, central ECD measurements were performed using a noncontact specular microscope (Noncon Robo SP-8000; Konan Medical Inc., Hyogo, Japan) and were repeated at 3 and 6 months and at 1, 2, 3, 4, 5, 6, 7, and 8 years after surgery. Follow-up ranged from 3 months to 8 years, with a mean follow-up of 31.7 ± 25.7 months per eye. Three consecutive endothelial images of the central cornea were obtained and analyzed using the dot method, in which the centers of 50 or more contiguous cells are marked. The average of these three measurements was used for the analysis. EC loss was defined as the decrease in cell density between the preoperative and postoperative examination, expressed as a percentage of the preoperative cell density. Paired t-tests were used to compare preoperative EC counts with postoperative EC counts for each follow-up visit. To correct for the multiple tests, we used a Bonferroni correction, which meant that P < 0.009 was considered significant for the paired t-tests (Table 1). During all follow-up examinations, patients were examined to detect complications, such as glaucoma or corneal edema. From 2006 to 2008, AS-OCT was performed once in all included patients to analyze the position of the iris-fixated pIOL using an OCT system (Visante; Carl Zeiss Meditec Inc.). All AS-OCT images were made on the horizontal meridian, in an unaccommodated state, and in the same light conditions (50 lux). Cross-sectional images were taken using the enhanced anterior segment single scan. One examiner analyzed the images and measured the distances from the edges of the pIOL to the corneal endothelium using the refractive tools provided by the manufacturer (Fig. 1). Of the two edge distances (nasal and temporal side), the smallest distance was used for statistical analysis. Linear mixed-model analysis was applied to our data, with ECD as the dependent variable and time as a covariate, assuming a random intercept per eye. This linear model is useful because it uses all available ECD data for each patient to fit the best linear model. To look for possible differences in EC loss for pIOLs with different distances between the edge of the pIOL and the corneal endothelium, we also included an interaction term "time" × "edge pIOL-corneal endothelium distance". Our approach was to fit a linear mixed model of the form y_i(t,d) = α + α_i + β × t + γ × (t × d) + ε_i, where y_i(t,d) is the ECD count of eye i after a follow-up of t months with edge distance d; α represents the intercept; α_i represents the random intercept per eye; β is the effect of time after a follow-up of t months; γ is the interaction effect of time and edge distance with edge distance d and a follow-up of t months; and ε_i is the residual error.
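The random-intercept model above can be fitted, for example, with the lme4 package in R; the paper does not state which software was used for the mixed-model analysis, and the data-frame and variable names below are assumptions for illustration.

```r
# Sketch of the random-intercept model (software and names are assumptions;
# the paper only specifies the model structure).
library(lme4)

# ecd_long: one row per ECD measurement, with columns
#   ecd       - measured ECD (cells/mm2)
#   months    - follow-up time t in months
#   edge_dist - minimum pIOL edge-to-endothelium distance d (mm)
#   eye_id    - eye identifier (random intercept alpha_i)
fit <- lmer(ecd ~ months + months:edge_dist + (1 | eye_id), data = ecd_long)

summary(fit)   # fixed effects correspond to alpha, beta, and gamma
```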
RESULTS Of the 306 included eyes, 186 eyes received an Artisan Myopia pIOL with a mean lens power of −12.91 ± 3.79 D (range, −5.00 to −23.50 D), 15 eyes were implanted with an Artisan Toric pIOL with a mean lens power of −7.77 ± 4.04 D (range, −2.00 to −15.00 D), 99 eyes received an Artiflex Myopia pIOL with a mean lens power of −10.06 ± 2.25 D (range, −4.00 to −14.50 D), and 6 eyes received an Artiflex Toric pIOL with a mean lens power of −8.33 ± 2.32 D (range, −5.00 to −10.50 D). The mean preoperative ECD count was 2693 ± 347 cells/mm2 (range, 1588-3753 cells/mm2), and the mean minimum edge distance was 1.43 ± 0.23 mm (range, 0.70-2.21 mm). Artisan pIOLs showed a mean edge distance of 1.48 ± 0.23 mm and a mean preoperative ECD count of 2624 ± 355 cells/mm2. The mean edge distance of the Artiflex pIOLs was 1.32 ± 0.20 mm, and the mean preoperative ECD count was 2775 ± 310 cells/mm2. Eleven patients included in this study wanted pIOL implantation despite an ECD < 2000 cells/mm2 and our warnings about possible complications; in most of these patients (n = 8), ECD was between 1950 and 2000 cells/mm2. They underwent implantation because of complete contact lens intolerance and the inability to continue their occupations with spectacle correction. Mean EC losses at 6 months and 1 year after pIOL implantation were 0.40% ± 9.85% and 1.23% ± 9.52%, respectively. To illustrate the ECD data used to compute the model, the changes in ECD at each follow-up visit compared with before surgery are shown in Table 1. All corneas of the included eyes stayed clear during follow-up. Estimated parameters of the linear mixed model are listed in Table 2. When these estimates are inserted into the described model, the predicted ECD count can be computed for any follow-up time and edge distance. When applying this model to a patient with a preoperative ECD count of 2693 cells/mm2 (i.e., the mean ECD count of the investigated population) and a minimum edge distance of 1.43 mm (i.e., the mean edge distance of the investigated population), the predicted ECD at 5 years after surgery would be y(60, 1.43) = 2693 − 12.95 × 60 + 7.51 × (60 × 1.43) = 2560 cells/mm2. We can also calculate that after 44 years the patient will still have enough endothelial cells (1525 cells/mm2) [14,15] to undergo pIOL explantation and cataract extraction (Figs. 2, 3). The predictions for minimum edge distances of 1.66 mm and 1.20 mm (i.e., mean edge distance ± 1 SD) in the same patient are also shown in Figure 2, with the corresponding years of reaching ECD counts of 1500 cells/mm2 in Table 3. For the mean edge distance of 1.43 mm, the model predicted a yearly EC loss of 1.0%, whereas an edge distance of 1.20 mm resulted in a yearly EC loss of 1.7%, and an edge distance of 1.66 mm led to a yearly EC loss of only 0.2%. For patients with preoperative ECDs of 3000, 2500, or 2000 cells/mm2 and edge distances of 1.43 mm, the model predicted that a critical ECD of 1500 cells/mm2 will be reached at 56, 37, and 18 years after implantation, respectively.
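A small R sketch, using only the coefficients quoted in the text (a slope of −12.95 cells/mm2 per month and an interaction of +7.51 per month per millimetre of edge distance), reproduces the worked example and the years-to-threshold figures above; it is an illustration of the published numbers, not the authors' software.

```r
# Illustration of the fitted model using the coefficients quoted in the text.
predict_ecd <- function(months, edge_dist, ecd_pre) {
  ecd_pre - 12.95 * months + 7.51 * months * edge_dist
}

predict_ecd(60, 1.43, 2693)    # about 2560 cells/mm2 at 5 years, as reported

# Months until a critical ECD (default 1500 cells/mm2) is reached.
months_to_critical <- function(edge_dist, ecd_pre, critical = 1500) {
  (ecd_pre - critical) / (12.95 - 7.51 * edge_dist)
}
months_to_critical(1.43, 2693) / 12   # ~45 years; ECD is ~1525 at 44 years, as in the text
months_to_critical(1.43, 3000) / 12   # ~56.5 years (reported as 56)
months_to_critical(1.43, 2500) / 12   # ~37.7 years (reported as 37)
months_to_critical(1.43, 2000) / 12   # ~18.8 years (reported as 18)
# Small differences arise because the published coefficients are rounded.
```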
DISCUSSION In this study, we analyzed the data of 306 eyes after iris-fixated pIOL implantation and computed a linear mixed model to predict long-term EC loss in relation to the distance from the edge of the pIOL to the corneal endothelium. To our knowledge, this is the first attempt to describe such a model for patients after pIOL implantation. In the past, models predicting EC loss after cataract surgery and penetrating keratoplasty (PK) have been presented using exponential decay models [17]. Patel et al. [19] and Armitage et al. [17] both described a biexponential model of EC loss after PK. Single exponential decay models have been shown to underestimate early EC loss and to overestimate late EC loss after PK. Other possibilities include an exponential followed by a linear decrease, or a two-phased linear decrease with a rapid linear decrease in the early postoperative period [20]. We used a single linear model because it was the best fit to our data. Our results showed no large EC loss shortly after pIOL implantation, in contrast to cataract surgery and penetrating keratoplasty, where EC loss follows an exponential decay model. An explanation for this stable ECD in the first months after pIOL implantation might be the redistribution of endothelial cells from the periphery to the center after the discontinuation of contact lens wear; peripheral corneal ECD seems to be significantly higher than central corneal ECD and functions as a physiologic reserve of endothelial cells [21]. Similar to our results, most studies investigating EC loss after pIOL implantation did not report a rapid loss in the first 6 months after surgery [1,3,4,22,23]. In these studies, short-term EC losses varied from 0.09% at 6 months to 3.3% at 1 year after surgery. Our reported mean EC losses of 4.3% at 5 years and 5.4% at 7 years after pIOL implantation are in accordance with the recent literature. Three years after pIOL implantation, Stulting et al. [3] reported an EC loss of 4.8%, and Benedetti et al. [5] found an EC loss of 9.0% at 5 years after surgery. The aim of our study was to describe a model to assist ophthalmologists in their decision to implant iris-fixated pIOLs into healthy eyes. The presented model uses the preoperative ECD and the minimum edge distance to estimate ECD counts during follow-up. To build the described model, the edge distance was measured postoperatively, because AS-OCT has only been available in our institution since 2006. A new software update of the OCT system (Visante; Carl Zeiss Meditec Inc.) now makes it possible to assess the edge distance in the preoperative setting using a pIOL simulation program. Recently, we tested this pIOL simulation tool and found small mean differences between preoperative simulation and actual postoperative measurements [24]. Therefore, this preoperative simulation is a useful tool for determining the edge distance in the preoperative patient, which can then be used in the presented model. However, as we mentioned before, it is known that ACD decreases with age, which might result in a decrease in the minimum edge distance over time. In our opinion, all patients should be monitored using AS-OCT during long-term follow-up to investigate the effect of age on the minimum edge distance. For example, when presuming a yearly ACD decrease of 20 µm, which, in a worst-case scenario, will result in a decreasing edge distance of 0.02 mm per year, the presented patient with a preoperative ECD count of 2693 cells/mm2 and a minimum edge distance of 1.43 mm will reach an ECD count of 1500 cells/mm2 42 years after pIOL implantation (Fig. 4). This is 2 years sooner than predicted without taking this decrease of ACD into account. As more data become available during long-term follow-up of pIOLs using AS-OCT, we hope this will lead to a good mathematical description of postoperative EC loss with an accurate estimation of the decrease in edge distance over time. Several other studies have reported EC loss after pIOL implantation [5,6,23,25-28]. However, the main problem of previously reported studies was the unavailability of measured edge distances, which can be expected because the Visante OCT (Carl Zeiss Meditec Inc.) has only been on the market since 2006. Computing an estimated mean edge distance for all available data was difficult because the edge distance is related to the ACD but also to the power and design of the pIOL. Furthermore, some studies did not report the mean ACD or did not describe the measurement device, which makes it difficult to assess whether the ACD was measured from the crystalline lens to the endothelium or to the epithelium. Our suggestion would be to perform AS-OCT before surgery and to use the pIOL simulation program to estimate the edge distance, but also to continue evaluating this edge distance during long-term follow-up.
One of the limitations of a prediction model in general is the uncertainty of extrapolating the data outside the ranges of the estimated values. The minimum edge distances of the eyes included in our study ranged between 0.70 and 2.21 mm. Therefore, the model should not be used in patients with minimum edge distances lower than 0.70 mm, since this could lead to inaccurate ECD estimates. Our maximum follow-up period was 8 years, with a limited number of eyes; therefore, extrapolation beyond our 8 years of follow-up could become more unreliable. We will continue to monitor our ECD data in the coming years and hope to provide more accurate values beyond 8 years of follow-up with a larger number of eyes in the future. Furthermore, to increase the validity of the presented model, it should be applied to a second, independent population for validation. In our model we included both eyes of a large number of patients, which can cause bias in the statistical analysis. However, when the analysis was repeated using only right or only left eyes, the results were not very different from the presented model; therefore, we believe that the inclusion of both eyes did not lead to severe bias. Our decrease in ECD is described as an absolute decrease in endothelial cells, which is actually a worst-case scenario. When using a relative decrease, which is what is usually reported in articles about EC loss, the decrease in ECD would not be as fast; an example of a relative decrease is visualized for the presented patient in Figure 4. During the first 15 years of follow-up, the relative and absolute EC losses are almost identical. After this period, the difference between the models becomes evident. The patient will reach our critical ECD count of 1500 cells/mm2 70 years after implantation, which is 14 years later when compared with the absolute decrease in ECD. When follow-up periods of 15 years and longer become available in the future, the choice between a relative and an absolute decrease will be more reliable. In conclusion, a linear mixed-model analysis was used to describe a linear model that predicts ECD counts after iris-fixated pIOL implantation in relation to the minimum edge distance measured using AS-OCT. Longer follow-up with AS-OCT will be needed to evaluate the effect of age-related changes of the natural lens on the distance from the edge of the pIOL to the endothelium.
FIGURE 2. Linear mixed model applied to a patient eligible for pIOL implantation with a preoperative endothelial cell count of 2693 cells/mm2 and edge distances of 1.20 mm (half-dotted line), 1.43 mm (straight line), and 1.66 mm (dotted line). FIGURE 3. Linear mixed model applied to a patient eligible for pIOL implantation with a preoperative endothelial cell count of 2693 cells/mm2 and an edge distance of 1.43 mm (straight line), with the confidence intervals of the model (dotted lines). FIGURE 4. Linear mixed model with decreasing edge distance because of aging (straight line) and relative endothelial cell loss (dotted line) applied to a patient eligible for pIOL implantation, with a preoperative endothelial cell count of 2693 cells/mm2 and an edge distance of 1.43 mm. TABLE 2. Parameter Estimates of the Linear Mixed-Model Analysis. TABLE 3. Linear Mixed Model Applied to a Patient with a Preoperative ECD Count of 2693 Cells/mm2 and Different Edge Distances, with Corresponding Years of Reaching an ECD Level of 1500 Cells/mm2.
2017-06-25T00:59:57.714Z
2010-02-01T00:00:00.000
{ "year": 2010, "sha1": "a51a3f653b2c66f555a926c545c06a6393693b0e", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1167/iovs.09-3981", "oa_status": "CLOSED", "pdf_src": "ScienceParsePlus", "pdf_hash": "a51a3f653b2c66f555a926c545c06a6393693b0e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
218862742
pes2o/s2orc
v3-fos-license
A Note on the Maximum Principle-based Approach for ISS Analysis of Higher Dimensional Parabolic PDEs with Variable Coefficients This paper presents a maximum principle-based approach to the establishment of input-to-state stability (ISS) for a class of nonlinear parabolic partial differential equations (PDEs) over higher dimensional domains with variable coefficients and different types of nonlinear boundary conditions. The technical development of the ISS analysis of the considered systems is detailed, and an example of establishing ISS estimates for a nonlinear parabolic equation with, respectively, a nonlinear Robin boundary condition and a nonlinear Dirichlet boundary condition is provided to illustrate the application of the developed method. Introduction Over the last decade, the ISS theory for infinite dimensional systems governed by partial differential equations (PDEs) has drawn much attention in the literature on PDE control. A comprehensive survey on this topic is presented in [18]. It is worth noting that the extension of the notion of ISS for finite dimensional systems, originally introduced by Sontag in the late 1980s, to infinite dimensional systems with distributed in-domain disturbances is rather straightforward, while the investigation of ISS properties with respect to (w.r.t.) boundary disturbances is much more challenging. In recent years, different methods have been developed for ISS analysis of PDE systems with boundary disturbances, including, e.g.: (i) the semigroup and admissibility methods for ISS of certain linear or nonlinear parabolic PDEs [3,4,5,6,7,20]; (ii) the approach of spectral decomposition and finite-difference schemes for ISS of PDEs governed by Sturm-Liouville operators [8,9,10,11,12]; (iii) the Riesz-spectral approach for ISS of Riesz-spectral systems [15,16]; (iv) the monotonicity-based method for ISS of certain nonlinear PDEs with Dirichlet boundary disturbances [17]; (v) the method of De Giorgi iteration for ISS of certain nonlinear PDEs with Dirichlet boundary disturbances [23,25]; (vi) the application of variations of Sobolev embedding inequalities for ISS of certain nonlinear PDEs with Robin or Neumann boundary disturbances [24,25,26]; (vii) the maximum principle-based approach for ISS of certain nonlinear PDEs with different types of boundary conditions [27,28]. Although rapid progress on ISS theory has been made, ISS analysis of nonlinear PDE systems defined over higher dimensional domains with variable coefficients and different types of nonlinear boundary conditions remains a challenging issue. For example, the methods in (i) can be applied to certain linear or nonlinear PDEs, while it may be difficult to apply them to non-diagonal systems such as the one given by, e.g., (2) and (3). It has been demonstrated in [27] and [28] that the method in (vii) is applicable for ISS analysis of certain nonlinear parabolic PDEs with different types of boundary disturbances, or over higher dimensional domains. Therefore, this paper focuses on applying this approach to ISS analysis for a class of nonlinear parabolic PDEs defined over higher dimensional domains with variable coefficients under different types of nonlinear boundary conditions simultaneously. The proposed method for obtaining the ISS estimates of the solutions is based on the Lyapunov method and on maximum estimates for nonlinear parabolic PDEs with nonlinear boundary conditions.
Specifically, we set up in the first step several maximum estimates of the solutions to the considered nonlinear parabolic PDEs with a nonlinear Robin or Dirichlet boundary condition by means of the weak maximum principle. In the second step, applying the technique of splitting as in [2,23,27,28], we consider a nonlinear equation with the initial data free and establish the maximum estimate of the solution (denoted by v) according to the result obtained in the first step. By denoting the solution of the target system by u, then in the third step, we establish the L 2 -estimate of u − v by the Lyapunov method. Finally, the ISS estimate of the target system in L 2 -norm, i.e., the estimate of u in L 2 -norm, is guaranteed by the maximum estimate of v and the L 2 -estimate of u − v. It's worthy noting that combining with other approaches or techniques, the Lyapunov method was also applied for the ISS analysis of PDE systems in [4] by constructing non-coercive Lyapunov functions based on ISS characterizations devised in [19], and in [20,21] and the literature mentioned in (iv)-(vii) by constructing coercive Lyapunov functions. In the rest of the paper, Section 2 presents the problem statement, the basic assumptions, and the main result. By the weak maximum principle, some maximum estimates for nonlinear parabolic PDEs with nonlinear Robin and Dirichlet boundary conditions are proved respectively in Section 3. ISS analysis of nonlinear parabolic PDEs with different boundary conditions are detailed in Section 4. In order to illustrate the application of the approach presented in this paper, an example of ISS estimates for a parabolic equation with respectively a nonlinear Robin boundary condition and a nonlinear Dirichlet boundary condition is provided in Section 5, followed by some concluding remarks given in Section 6. Notations: In this paper, R + denotes the set of positive real numbers and R ≥0 : Let B R be a ball in R n (n ≥ 1) with the centre at 0 and a radius R > 0, i.e., B R = {x ∈ R n ||x| < R}. We denote by ∂B R and B R the boundary and the closure of B R , respectively. Denote by |B R | the n-dimensional Lebesgue measure of B R , i.e., We use · to denote the norm · L 2 (BR) in L 2 (B R ). Problem Setting and Main Result Given the following functions: we consider the following nonlinear parabolic equation with variable coefficients: or or , which is the unit outer normal vector at the point x ∈ ∂B R . In general, f and d represents the distributed in-domain disturbance and boundary disturbance, respectively. (3), (4) and (5) represent the nonlinear Robin, Neumann and Dirichlet boundary condition, respectively. Throughout this paper, without special statements, we always denote by x, t respectively the first and second variable (if any) of the functions a, b i (i = 1, 2, . . . , n), c, h, f, d, φ. Moreover, we always assume that a, b i (i = 1, 2, . . . , n), c, h, ψ, f, d, φ are given by (1) and satisfy for some a, a, b, b, c ∈ R + : where C Trace is the best constant of the trace embedding inequality given by the Trace Theorem in the appendix, and for all (x, t) ∈ B R × R + and all u, v, w ∈ R with u < v. Furthermore, we impose the following compatibility condition: and the states in U 0 , if there exist functions β ∈ KL and γ 0 , γ 1 ∈ K such that the solution of (2) satisfies for any T > 0: Moreover, System (2) is said to be exponential input-to-state stable (EISS) in L 2 -norm w.r.t. 
the boundary disturbance d ∈ D 0 , the in-domain disturbance f ∈ Y and the states in U 0 , if β( φ , T ) can be chosen as M 0 e −λT φ with certain constants M 0 , λ > 0 in (9). The main result of this paper is stated in the following theorem. Theorem 1 System (2) with (3) (or (4), or (5)) is EISS w.r.t. the boundary disturbance d ∈ D 0 , the in-domain disturbance f ∈ Y and the states in U 0 having the estimate given in (12) (or (15), or (16)). [14], for a heat conduction problem the nonlinear boundary conditions can be seen as a nonlinear radiation law prescribed on the boundary of the material body. (ii) By [13,Theorem 6.1 and 7.4,Chapter V], system (2) admits a unique solution u ∈ C 2,1 (Q T ) for any T > 0. Moreover, every system appearing in this paper admits a unique solution belonging to C 2,1 (Q T ). (4) and (5) are equivalent to the linear boundary coditions: ∂u ∂ν = ψ −1 (d) and u = ψ −1 (d), respectively. Thus we can conduct ISS estimates for the considered systems as in [28] by the sppliting technique combined with the penalty method (see [28,Remark 5]). (ii) The requirement on the smoothness of these functions in (1) and the compatibility condition (8) are only for establishing the existence and regularity of a classical solution of the considered PDEs, and can be weakened for the ISS analysis if weak solutions are considered (see also [28,Remark 3]. (iii) Indeed, we can weaken the condition (6c) to be "c ≥ 0 in B R × R + ". For example, we consider (2) with the Robin boundary condition (3). Noting that there always exists ρ ∈ C 2 (B R ; R + ) such that Remark 2 (i) As ψ is invertible, the nonlinear boundary conditions where c 0 is a positive constant depending on ρ. Using u = wρ, we can transform the u-system (2) into w-system with the coefficient of w, denoted by c, satisfying Moreover, the w-system has the structural conditions as (6), (7a) and (7b). Then we can prove the ISS of the w-system, which results in the ISS of u-system. Due to the spacial limitation, we omit the details. (iv) It should be mentioned that proceeding as in this paper and with more specific computations, one may establish ISS estimates for (2) over any bounded domain Ω ∈ R n with a smooth enough boundary. It seems that the result given in Lemma 2 is trivial. Nevertheless, for the completeness, we provide a proof by following a similar way given in [22, page 237]. We have then which is a contradiction and hence, the claim is valid for f ≤ 0. For the case of f ≥ 0, one can proceed in the same way to complete the proof. Maximum estimate for parabolic PDEs with a nonlinear Robin boundary condition Proposition 3 Let u ∈ C 2,1 (Q T ) be the solution of the following parabolic equation: . By (7a), (6a), (6b) and (6c), it follows that Noting that cq ≥ sup QT |f | + 2p an + Ra + Rb , we get By Lemma 2, if v has a negative minimum, then v attains the negative minimum on the parabolic boundary ∂ p Q T . On the other hand, noting that v( is the negative minimum. Thus, ∂v ∂ν (x0,t0) ≤ 0. Then, at the point (x 0 , t 0 ), we get by (7b) which is a contradiction. Therefore, there must be v ≥ 0 in Q T , which follows that |u| ≤ M ≤ pR 2 + q in Q T . Maximum estimate for parabolic PDEs with a nonlinear Dirichlet boundary condition Proposition 4 Let u ∈ C 2,1 (Q T ) be the solution of the following parabolic equation: Proceeding as in the proof of Proposition 3, we have Then it suffices to show that if v(x 0 , t 0 ) is the negative minimum at some point (x 0 , t 0 ) ∈ ∂B R × (0, T ), we will obtain a contradiction. 
Indeed, noting that Then we have 0 > v(x 0 , t 0 ) = M ± u(x 0 , t 0 ) ≥ 0, which is actually a contradiction. EISS Estimates for Parabolic PDEs with Different Types of Nonlinear Boundary Conditions Proof of Theorem 1 We proceed on the proof in the following 3 steps. (i) We establish an EISS estimate of the solution to (2) with the nonlinear Robin boundary condition (3). Let v ∈ C 2,1 (Q T ) be the unique solution of the following parabolic equation: According to Proposition 3, we have where R 0 = R 2 + 1 cR (an + Ra + Rb). Let w = u − v. It is obvious that w satisfies: Multiplying (11) with w and integrating by parts, we have Applying the formula of integration by parts, the Trace Theorem (see the appendix) and by (6b), we have By (7b) and (7a), we always have Thus, we obtain by (6a), (6c) and (6d) Finally, we have (ii) We establish an EISS estimate of the solution to (2) with the nonlinear Neumann boundary condition (4). Let v ∈ C 2,1 (Q T ) be the unique solution of the following parabolic equation: According to Proposition 3, we have where R 0 = R 2 + 1 cR (an + Ra + Rb). By Gronwall's inequality, we have w(·, T ) 2 ≤ φ 2 e −λT + max Finally, by u(·, T ) ≤ w(·, T ) + v(·, T ) , (13) and (14), for any T > 0, it follows that (iii) For the EISS estimate of the solution to (2) with the nonlinear Dirichlet boundary condition (5), it suffices to estimate the solutions of the following parabolic equations: Indeed, by Proposition 4, we have Proceeding as in (i), we get Finally, for any T > 0, it follows that An Illustrative Example We consider the following super-linear parabolic equation: coupled with the nonlinear Robin boundary condition: or the nonlinear Dirichlet boundary condition: The initial value condition is given by: u(·, 0) = φ(·) in B R . Concluding Remarks This paper presented an application of the maximum principle-based approach proposed in [27,28] to the establishment of ISS properties w.r.t. in-domain and boundary disturbances for certain nonlinear parabolic PDEs over higher dimensional domains with different types of nonlinear boundary conditions. The proposed scheme for achieving the ISS estimates of the solution is based on the the Lyapunov method and the maximum estimates for parabolic PDEs with nonlinear boundary conditions. An ISS analysis for a parabolic PDE with a super-linear term and nonlinear boundary conditions has been carried out, which demonstrated the effectiveness of the developed approach.
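As a reading aid, the estimates established above can be summarized schematically. The ISS estimate (9) invoked in the definitions of ISS and EISS in Section 2 takes, in the standard form such estimates are usually written (the choice of sup-norms on d and f is an assumption made here, consistent with the maximum estimates of Section 3), the shape
\[
\|u(\cdot,T)\|_{L^2(B_R)} \;\le\; \beta\bigl(\|\phi\|_{L^2(B_R)},T\bigr)
\;+\; \gamma_0\Bigl(\sup_{\partial B_R\times(0,T)}|d|\Bigr)
\;+\; \gamma_1\Bigl(\sup_{B_R\times(0,T)}|f|\Bigr), \qquad \forall\,T>0,
\]
with \beta\in\mathcal{KL} and \gamma_0,\gamma_1\in\mathcal{K}, the EISS case corresponding to \beta(\|\phi\|,T)=M_0e^{-\lambda T}\|\phi\|. The three-step proof of Theorem 1 then amounts to the following chain, in which C_1 and C_2 are placeholder constants rather than the explicit values computed in the proof:
\[
u=v+w,\qquad
\sup_{Q_T}|v|\;\le\; C_1\Bigl(\sup_{\partial B_R\times(0,T)}|d|+\sup_{Q_T}|f|\Bigr)
\quad\text{(maximum estimates, Propositions 3 and 4),}
\]
\[
\|w(\cdot,T)\|^2\;\le\; \|\phi\|^2e^{-\lambda T}+C_2\sup_{Q_T}|v|^2
\quad\text{(Lyapunov functional and Gronwall's inequality),}
\]
\[
\|u(\cdot,T)\|\;\le\;\|w(\cdot,T)\|+\|v(\cdot,T)\|
\;\le\; e^{-\lambda T/2}\|\phi\|
+\bigl(\sqrt{C_2}+|B_R|^{1/2}\bigr)C_1\Bigl(\sup_{\partial B_R\times(0,T)}|d|+\sup_{Q_T}|f|\Bigr),
\]
which is an estimate of the EISS type displayed above.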
2020-05-25T01:00:53.875Z
2020-05-22T00:00:00.000
{ "year": 2020, "sha1": "a9a898442b2bdb6eb7eb94920a530d8c2f5ca65c", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "a9a898442b2bdb6eb7eb94920a530d8c2f5ca65c", "s2fieldsofstudy": [ "Mathematics", "Engineering" ], "extfieldsofstudy": [ "Mathematics" ] }
15280739
pes2o/s2orc
v3-fos-license
On the K-stability of complete intersections in polarized manifolds We consider the problem of existence of constant scalar curvature Kaehler metrics on complete intersections of sections of vector bundles. In particular we give general formulas relating the Futaki invariant of such a manifold to the weight of sections defining it and to the Futaki invariant of the ambient manifold. As applications we give a new Mukai-Umemura-Tian like example of Fano 5-fold admitting no Kaehler-Einstein metric and a strong evidence of K-stability of complete intersections on Grassmannians. Introduction The problem of determining which manifolds admit a Kähler constant scalar curvature metric (Kcsc), and in which Kähler classes, is by now a central one in differential geometry and it has been approached with a variety of geometric and analytical methods. A classical result due to Matsushima and Lichnerovicz [17,15] shows that a such a manifolds has a reductive identity component of the automorphisms group, a condition unsensitive of the Kähler class where we look for the Kcsc metric. In the eighties Futaki [10], later generalized by Calabi [3], introduced an invariant, since then called the Futaki invariant, sensitive of the Kähler class. The deep nature of this invariant has stimulated a great amount of research. While it can be used directly to show that a manifold M does not have a Kcsc metric in a Kähler class, a more refined analysis, mainly due to Ding-Tian [4], Tian [23], Paul-Tian [21] and Donaldson [6], has led to relate this invariant on a manifold M to the existence of Kcsc metric on any manifold degenerating in a suitable sense to M . This idea has been formalized in a precise conjecture due to Donaldson [6] relating the existence of such metrics to the K-stability of the polarized manifold. We will summarize this in Section 2. The key point relevant for our paper is that the knowledge of the Futaki invariant gives informations on the existence of Kcsc metrics on the manifolds on which the calculations are carried on and also on any Kähler manifold degenerating on it. The problem of calculating explicitely the Futaki invariant of a polarized manifold has then got further importance. Its original analytical definition is extremely hard to use, since requires an explicit knowledge of the Ricci potential and of the Kähler metric, data which are almost always missing. On the other hand it led to the discovery of the so called localization formulae [11,22] which have been a very useful tool in this problem. Yet, they require an explicit knowledge of the space of holomorphic vector fields and of the Kähler metric which is again very hard to have. Finally Donaldson [6] gave a pure cohomological interpretation of the Futaki invariant, extending it to singular varieties and schemes, which is the one we use in this paper and that will be recalled in Section 2. Let us just recall at this point that the Futaki invariant is defined for a polarized scheme (M, L) endowed with a C × -action ρ : C × → Aut(M ) that linearizes on L (hence a holomorphic vector field η ρ ). We will then denote thorought this paper such a structure by (M, L, ρ) and by F (M, L, ρ) the Futaki invariant of η ρ in the class c 1 (L) of this triple. We can now describe our result. We assume that we are given a polarized variety (M, L) endowed with a C × -action that linearizes on L. If X ⊂ M is an invariant complete intersection of sections of holomorphic vector bundles E 1 , . . . 
, E s on M , we will show that is possible to express F (X, L |X , ρ) in terms of the weights of sections defining X and holomorphic invariants of the bundles E j 's and L. In this paper we make explicit the formula in two relevant cases: the first, when L is the anti-canonical bundle K −1 M of M and all E j 's are isomorphic to a fixed vector bundle E such that det E is a (rational) multiple of L as linearized vector bundle; the second, when each E j is isomorphic to some power L rj of the polarizing line bundle. We do not state the formula for the general case, but it can be recovered through some calculations from lemmata 5.2 and 5.3. Let us consider the first case. Let E be a C × -linearized holomorphic vector bundle on a smooth Fano manifold M such that (det E) q = K −p M for some integers p, q. For each j ∈ {1, . . . , s} let σ j ∈ H 0 (M, E) be a non-zero holomorphic semiinvariant section, in other words there exists α j ∈ Z such that ρ(t) · σ j = t αj σ j . Thus the zero locus X j = σ −1 j (0) is ρ -invariant and L = det E restrict to a linearized ample line bundle on X j . Consider the intersection X = s j=1 X j and assume that dim(X) = n − sk, being k = rank(E). Moreover, by adjunction, X is a possibly singular Fano variety if q − ps > 0. Our first result is the following Theorem 1.1. Under the above conventions and assumptions we have where d 0 (X, L| X ) and a 0 (X, L| X ) are respectively the degree of (X, L| X ) and its equivariant analogue (see definition 2.1) and can be computed by means of holomorphic invariants of E and the quantity s j=1 α j . The above theorem gives a significant simplification of the Donaldson version of the Futaki invariant (definition 2.1) in that the above formula involves only a 0 and d 0 and not a 1 and d 1 which are in general much harder to compute. It is also important to notice that s j=1 α j is nothing but the Mumford weight of the plane P = span{σ j } ∈ Gr(s, H 0 (M, E)). With an additional hypothesis on the linearization of the given C × -action on E, theorem above gives the following where are characteristic numbers of E (independent of the C × -linearization). The interest in the above Corollary is twofold. On the one hand it relates two very natural, and a priori unrelated, invariants of the manifold X in a completely general setting. On the other hand it generalizes a special case, proved by completely different ad hoc arguments by Tian [23], used to produce the first (and up to now the only) examples of smooth Fano manifolds with discrete automorphism group without Kähler-Einstein metrics. Another application of our study is that if (M, L) is a complex Grassmannian anticanonically polarized and P is a generic subspace of H 0 (M, E), in a sense explained in Section 6, then X P degenerates onto a X P0 whose Futaki invariant is positive, hence hinting at the K-stability of this type of manifolds. In particular this gives strong evidence to K-stability of these manifolds if their moduli space is discrete. Of course the above Corollary rises the question whether T has a specific sign. We do not believe in general this to be the case, but we describe some classes of examples for which we can conclude, thanks to a theorem of of Beltrametti, Schneider and Sommese [1], that T is indeed positive (see also Remark 3.4). Our second type of results comes from looking at classes different form the canonical one. 
We will restrict ourselves to the case when the bundles where to choose the sections are all line bundles and are all (possibly varying) powers of a fixed line bunlde L. Thus if L is sufficiently positive we can embed M in a projective space P N and X is the intersection of M with a number of hypersurfaces. We are then interpreting our results in terms of Kcsc metrics in c 1 (L). This situation has been previously studied by Lu [16] in the case when the ambient manifold is projective space. Again our result has a computational interest in that it makes very easy to calculate the Futaki invariant for a great variety of manifolds, but also a conceptual one that we underline in the following Corollary 1.3. Let (M, L) be a n-dimensional polarized manifold endowed with a C × -action ρ : C × → Aut(M ) and a linearization on L. For each j ∈ {1, . . . , s} consider a section σ j ∈ H 0 (M, L r ) such that ρ(t) · σ j = t αj σ j for some α j ∈ Z. Let X = s j=1 σ −1 (0). Suppose dim(X) = n − s, then In particular, if M has a Kcsc in c 1 (L) and X is K-semistable, then (X, L |X ) is Chow stable. The relevance of this last statement is that the conclusion is not about asymptotic Chow stability, which is known to be related by a result of Donaldson [5] to the existence of Kcsc metrics. For example, even in the very special case of hypersurfaces of projective spaces, this gives strong further evidence of their K-semistability (cfr. Tian [24]). Having dropped the assumption on the smoothness of X we can use our formulae for singular varieties which arise as central fiber of test configurations. We give in Section 6 an explicit example of this situation with a central fiber of our type with non positive Futaki invariant, hence producing non Kcsc manifolds (the degenerating ones). Another explicit application of our formulae comes when looking at the quintic Del Pezzo threefold, X 5 , for which it was not known whether it admits a Kcsc metric. In fact our analysis shows that it is K-stable, when confining to those test configurations whose central fibers are still manifolds of the type considered in our paper. While we believe a complete algebraic proof of its K-stability is then at hand, showing that every test configuration is indeed of this type, we remark that we can adapt a very recent observation of Donaldson [7] about the Mukai-Umemura threefold, to prove that this manifold (which is rigid in moduli) indeed has a Kähler-Einstein metric. Unfortunately the other Fano threefolds with Pic = Z for which the existence of a canonical metric is unknown, when smooth do not have continuous automorphisms. If we take singular ones defined by sections of the appropriate bundles with non positive Futaki invariant, we still cannot find test configurations with smooth general fibers. We leave this important problem for further research. Part of this work has been carried out in Fall 2007 during the visit of the second author at the Princeton University, whose hospitality is gratefully acknowledged. It is a great pleasure to thank G. Tian for many enlightening discussions. Thanks also to Y. Rubinstein and J. Stoppa for many important conversations. Preliminaries At this point we recall some definitions (mainly form [6]) for future reference. Definition 2.1. Let (V, L) be a n-dimensional polarized variety or scheme. 
Given a one parameter subgroup ρ : C × → Aut(V ) with a linearization on L and denoted by w(V, L) the weight of the C × -action induced on we have the following asymptotic expansions as k ≫ 0: The (normalized) Futaki invariant of the action is Remark 2.2. Is not difficult to see that the Futaki invariant is unchanged if we replace L with some tensor power L r , moreover it is independent of the linearization chosen on L. Unlike the general case, when V is smooth and is the canonical bundle there is a natural linearization of the C ×action ρ on L induced by the (holomorphic) tangent map In this case we will call L the anti-canonical linearized bundle. We observe that the Futaki invariant of a polarized manifold (V, L) assume a simple form when L is the anti-canonical linearized line bundle. Indeed, by the equivariant Riemann-Roch theorem we get d 0 (V, The relevance of the Futaki invariant is related to the definition of Kstability. To introduce it we need the following Definition 2.3. A test configuration of a polarized manifold (X, L) consists of a polarized scheme (X , L) endowed with a C × -action that linearizes on L and a flat C × -equivariant map π : When (X, L) has a C t × action ρ : C × → Aut(M ), a test configuration where X = X × C and C × acts on X diagonally trought ρ is called product configuration. Definition 2.4. The pair (X, L) is K-stable if for each test configuration for (X, L) the Futaki invariant of the induced action on (π −1 (0), L| π −1 (0) ) is greater than or equal to zero, with equality if and only if we have a product configuration. Finally we remark that the apparently different definition of K-stability given in [6] is due to the different choice of the sign in the dfinition of the Futaki invariant. M , be a n-dimensional anti-canonically polarized Fano manifold endowed with a C × -action ρ : C × → Aut(M ) and a linearization on L. Let E be a rank k linearized vector bundle on M such that where Remark 3.2. Clearly the linearization of E is fixed from the one of L thanks to the hypothesis (det E) q ≃ L p as linearized bundles. The latter is crucial to get the compact formula (5). Indeed α j and a 0 (X, L| X ) depend on the linearization of E and L respectively, but on the other hand F (X, L| X , ρ) is independent of the linearization of L. Since F (X, L| X , ρ) is indipendent of the linearization on L, we are free to change it to make easier the calculations. In particular we choose on L ≃ K −1 M the natural linearization coming from the lifting of the C × -action on the holomorphic tangent bundle T M . This gives c G 1 (L) = c G 1 (M ), wehere c G 1 denote the equivariant first chern class (in the Cartan model of the equivariant cohomology of M ). To preserve the hypothesis we have to vary accordingly the linearization of E to have q c G Thus, by definition 2.1 we get When E has the right linearization, the Futaki invariant of X is a multiple of the weight s j=1 α j of P = span{σ 1 , . . . , σ s }. Indeed we have the following where Proof. Substituting the expressions of a 0 (X, L| X ) and d 0 (X, L| X ) on (5) we get and formula (6) follows immediatly by hypothesis. To show the positivity of the constant C is enough to observe that L X is ample and, by definition of d 0 (X, L| X ), the constant 1/C is a positive multiple of the degree of (X, L| X ). Remark 3.4. Establishing the positivity of the constant T is a problem quite delicate. At least when E is ample, one would apply the theory of Fulton and Lazarsfeld [9] to conclude that T > 0. 
This is true when q − ps ≤ 0 (i.e., by adjunction formula, when X is not Fano), but unfotunately this is not true in general because the polynomial in the Chern classes defining T is not numerically positive. Nevertheless, if E is very ample (i.e. the tautological line bundle O P(E) (1) on P(E) is very ample), then by a theorem of of Beltrametti, Schneider and Sommese [1] we get the bound T ≥ k n−sk+1 (p(n + 1) − kq), that already gives a good number of examples, some of which are described in the last section. 4 The case E j ≃ L r j Now we turn to consider the second case mentioned in the introduction. In particular we allow L = K −1 M , but we consider sections σ j ∈ H 0 (M, L rj ) in some tensor power of the polarizing bundle L. We have the following Theorem 4.1. Let (M, L) be a n-dimensional polarized manifold endowed with a C × -action ρ : C × → Aut(M ) and a linearization on L. For each j ∈ {1, . . . , s} consider a section σ j ∈ H 0 (M, L rj ) such that ρ(t) · σ j = t αj σ j for some α j ∈ Z. Let X = s j=1 σ −1 (0). Suppose dim(X) = n − s, then we have Proof. Since c s (L r1 ⊕ · · · ⊕ L rs ) = s j=1 r j c 1 (L) s , c 1 (L r1 ⊕ · · · ⊕ L rs ) = s j=1 r j c 1 (L), by Lemma 5.2 we get and analogously by 5.3 Corollary 4.2. Let X ⊂ C n be a (n − s)-dimensional subvariety defined by homogeneous polynomials F 1 , . . . , F s of degree r 1 , . . . r s respectively. Let ρ : C × → SL(n + 1) be a one parameter subgroup such that ρ(t) · F j = t αj F j , j = 1, . . . , s for some α 1 , . . . , α s ∈ Z. Then we have Proof. Since H 0 (P n , O P n (m)) ≃ C[z 0 , . . . , z n ] m then h 0 (P n , O P n (m)) = n + m m = 1 n! m n + n(n + 1) 2n! m n−1 + O(m n−2 ), thus 2d1 nd0 = n + 1. Moreover, taking on O P n the unique linearization induced by SL(n + 1) we get w(P n , O P n (m)) = 0, and in particular a 0 = 0. The formula (4.1) becomes simpler if all the r j 's are equal. Moreover in this case F (X, L| X , ρ) has a nice expression in term of the so-called "Chow weight" of (X, L| X ), whose definition, essentially due to Mumford [20], is the following If G ⊂ Aut(V ) is a reductive subgroup, we say that X is Chow stable (resp. semi-stable) w.r.t. G if µ(X) < 0 (resp. ≤) for all one-parameter subgroups of G. and the statement is proved. Lemma 5.2. Let (M, L) be a n-dimensional polarized manifold and let E 1 , . . . , E s be a collection of holomorphic vector bundles on M . Set k j = rank(E j ), B = E 1 ⊕ · · · ⊕ E s and b = rank(B) = s j=1 k j . For each j ∈ {1, . . . , s} consider a non-zero section σ j ∈ H 0 (M, E j ) and set σ = (σ 1 , . . . , σ s ) ∈ H 0 (M, B) and Let O X be the structure sheaf of X. By assumption σ is a regular section, so the Koszul complex induced by σ is exact. Tensoring by L m preserves the exacteness, thus and by the Hirzebruch-Riemann-Roch theorem we get Proof. It is very similar to the previous on the dimension of H 0 (X, L| m X ). Since sections σ j are only semi-invariant, they do not give rise to equivariant sequences of bundles, but to overcame the problem we can initially change the linearization of each E j and go back to original one at the end of computations. Denoted by C β the trivial line bundle on M with linearization t · u = t β u, for each j ∈ {1, . . . , s} let In this way, each σ j ∈ H 0 (M, E j ) is an invariant section. Now consider the rank b = s j=1 k j , C × -linearized vector bundle F = s j=1 F j , and let σ ∈ H 0 (M, F ) be the holomorphic section defined by σ = (σ 1 , . . . , σ s ). Clearly σ is invariant and we have X = σ −1 (0). 
Let O X be the structure sheaf of X. By assumption σ is a regular section, so the Koszul induced by σ is exact and equivariant. Tensoring by L m preserves the exacteness and equivariance, thus and by the equivariant Riemann-Roch theorem we get where the last equality holds by lemma 5.1. Since the right part of the equivariant Riemann-Roch theorem is a power series convergent in some neighborhood of zero of the lie algebra of the acting group, to get the trace of the generator of the action on the virtual space q (−1) q H q (X, L| m X ), is sufficient to take the "linear term" of the integrand. Explicitly, as m → +∞ we have H q (X, L| m X ) = 0 for q > 0 by ampleness of L, and we get the expansion and substituting in (5) we are done. Applications and examples In this section we show some consequences of the Theorems 3.1 and 4.1. In particular we use those theorems to calculate the Futaki invariant of central fibers of test configurations arising from degenerations of linear sections of vector bundles. More precisely consider a n-dimensional polarized manifold (M, L) endowed with a one-parameter subgroup of automorphisms ρ : C × → Aut(M ) that linearizes on L. Let P = span(η 1 , . . . , η s ) ⊂ H 0 (M, E) be an s-dimensional linear system of a rank k linearized holomorphic vector bundle E on M . Thus The ρ-action on P gives naturally a test configuration for the variety (X P , L| XP ) as follows. Let P t = ρ(t) · P and let X be the closure of The projection on the second factor induces a flat morphism π : X → C. Let X P0 = s j=1 σ −1 j (0), where P 0 = span(σ 1 , . . . , σ s ) = lim t→0 ρ(t) · P with σ j 's semi-invariant. By the uniqueness [12, Proposition 9.8] we have π −1 (0) = X P0 . By local calculations it is easy to see that X P0 is C × -invariant and is singular at points e 2 ∧ e 3 ∧ e 5 ∧ e 6 and e 1 ∧ e 2 ∧ e 4 ∧ e 5 . On the other hand, for ε = 0 the variaty X Pε is non-singular but not invariant. Now let σ 1 = e 16 + e 25 + e 34 , σ 2 = e 15 + e 24 , σ 3 = e 26 + e 35 . We have P 0 = span{σ 1 , σ 2 , σ 3 }, moreover ρ(t) · P ε tends to P 0 as t → 0. Thus, following the construction shown at the start of this section, there is a test configuration of (X Pε , L| XP ε ) with central fibre (X P0 , L| XP 0 ). Since by the Corollary 4.4 we get where we used F (M, L, ρ) = 0 and a 0 (M, L) = 0. Hence by [23] or [6] we proved the following Proposition 6.1. For each ε = 0 the manifold X Pε is not K-stable, hence is not Kähler-Einstein. The quintic Del Pezzo threefold Consider the Grassmannian M = G(2, 5) of planes in C 5 polarized with L = 3 Q, where Q is the universal quotient bundle. As well known the Kodaira map induced by L is the Plüker embedding M ֒→ P 9 . Thus for each σ 1 , σ 2 , σ 3 ∈ H 0 (M, L) linearly independent, the subvariety X = 3 j=1 σ −1 j (0) is a section of G(2, 5) with a 3-codimensional subspace in P 9 . The general X arising in this is the quintic Del Pezzo threefold [13], in particular it is Fano. Proof. Consider the isomorphism H 0 (M, L) ≃ 3 C 5 given by for all E ∈ M . Thus we can identify σ j with u j ∈ 3 C 5 . • It is not hard to adapt Donaldson proof of the existence of Kähler-Einstein metric on the Mukai-Umemura manifold X 22 to this case, hence proving that X is indeed Kähler-Einstein and so K-stable. As showed in [19], the manifolds X 22 and X 5 share all the properties involved in his argument. In particular we observe that X 5 has a P SL(2)-invariant anti-canonical section with at worst cusp-like singularities. 
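Before turning to general complete intersections in Grassmannians, it may be worth recording that the expansion of h^0(P^n, O_{P^n}(m)) quoted in the proof of Corollary 4.2, together with the resulting identity 2d_1/(nd_0) = n + 1, is easy to check symbolically. A minimal sketch in Python with sympy (the range of dimensions tested is arbitrary):

import sympy as sp

m = sp.symbols('m')
for n in range(1, 7):
    # h^0(P^n, O(m)) = binomial(m+n, n) = (m+1)(m+2)...(m+n) / n!
    h0 = sp.Integer(1)
    for i in range(1, n + 1):
        h0 *= (m + i)
    h0 = sp.expand(h0 / sp.factorial(n))
    d0 = h0.coeff(m, n)      # leading coefficient: 1/n!
    d1 = h0.coeff(m, n - 1)  # next coefficient: n(n+1)/(2 n!)
    assert d0 == sp.Rational(1, sp.factorial(n))
    assert d1 == sp.Rational(n * (n + 1), 2 * sp.factorial(n))
    assert sp.simplify(2 * d1 / (n * d0)) == n + 1
print("expansion coefficients and 2 d1 / (n d0) = n + 1 verified for n = 1, ..., 6")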
General complete intersections in Grassmannians Following a construction given by Tian [22], we generalize Proposition 6.2 to general intersections of some exterior power of the universal quotient bundle on the Grasmannian. As will be clear from the proof the generality condition depends on the one-parameter subgroup ρ. where P 0 = lim t→0 ρ(t) · P . Proof. Take on E and L the unique linearizations induced by SL(N ). Consider the induced representation of ρ on H 0 (M, E) and fix a basis of semi-invariant sections σ 1 , . . . , σ h 0 (E) . Thus for each j ∈ {1, . . . , h 0 (E)} there is a unique α j ∈ Z such that t · σ j = t αj σ j . We can suppose without loss Let η 1 , . . . , η d be a basis of P . Since P is general we can suppose where c ii = 0 for all i ∈ {1, . . . , d}. Thus the limit of P under the action of ρ is the plane P 0 = span(σ 1 , . . . , σ d ). In the chosen linearization ρ acts on H 0 (M, E) as a subgroup of SL(h 0 (E)), thus h 0 (E) j=1 α j = 0. Hence, by (7) and non-triviality of ρ we have Since P is general, X P is smooth. Moreover, by the adjunction formula and the hypothesis on E we get where ι : X ֒→ M is the inclusion. This prove the Fano condition. By the localization theorem for equivariant cohomology is not hard to see that Hence, by the Corollary 3.3 we get where C > 0 and Actually ℓ Q is not very ample, however in this case we can apply [1, Proposition 1] to get the first inequality above.
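For reference, the asymptotic expansions underlying Definition 2.1 and the quantities d_0, d_1, a_0, a_1 used throughout can be written out as follows; the displayed form of the invariant is the usual Donaldson-type one, and the authors' normalized Futaki invariant may differ from it in sign, as the remark at the end of Section 2 indicates:
\[
\dim H^0(V,L^k)=d_0k^n+d_1k^{n-1}+O(k^{n-2}),\qquad
w(V,L^k)=a_0k^{n+1}+a_1k^{n}+O(k^{n-1}),
\]
\[
\frac{w(V,L^k)}{k\,\dim H^0(V,L^k)}=F_0+F_1k^{-1}+O(k^{-2}),
\qquad
F_1=\frac{a_1d_0-a_0d_1}{d_0^{2}}.
\]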
2008-10-08T16:18:08.000Z
2008-10-08T00:00:00.000
{ "year": 2008, "sha1": "0f19be4290c9899322a6dfbdfc949453c3e2df34", "oa_license": "elsevier-specific: oa user license", "oa_url": "https://doi.org/10.1016/j.aim.2010.12.018", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "0f19be4290c9899322a6dfbdfc949453c3e2df34", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
15180812
pes2o/s2orc
v3-fos-license
Validation of the GenoType® MTBDRplus assay for detection of MDR-TB in a public health laboratory in Thailand Background Over the past several years, new diagnostic techniques have been developed to allow for the rapid detection of multidrug resistant tuberculosis. The GenoType® MTBDRplus test is a deoxyribonucleic acid (DNA) strip assay which uses polymerase chain reaction (PCR) and hybridization to detect genetic mutations in the genes that confer isoniazid (INH) and rifampn (RIF) resistance. This assay has demonstrated good performance and a rapid time to results, making this a promising tool to accelerate MDR-TB diagnosis and improve MDR-TB control. Validation of rapid tests for MDR-TB detection in different settings is needed to ensure acceptable performance, particularly in Asia, which has the largest number of MDR-TB cases in the world but only one previous report, in Vietnam, about the performance of the GenoType® MDRplus assay. Thailand is ranked 18th of 22 "high-burden" TB countries in the world, and there is evidence to suggest that rates of MDR-TB are increasing in Thailand. We compared the performance of the GenoType® MTBDRplus assay to Mycobacterial Growth Indicator Tube for Antimycobacterial Susceptibility Testing (MGIT AST) for detection INH resistance, RIF resistance, and MDR-TB in stored acid-fast bacilli (AFB)-positive sputum specimens and isolates at a Public TB laboratory in Bangkok, Thailand. Methods 50 stored isolates and 164 stored AFB-positive sputum specimens were tested using both the MGIT AST and the GenoType® MTBDRplus assay. Results The GenoType® MTBDRplus assay had a sensitivity of 95.3%, 100%, and 94.4% for INH resistance, RIF resistance, and MDR-TB, respectively. The difference in sensitivity between sputum specimens (93%) and isolates (100%) for INH resistance was not statistically significant (p = 0.08). Specificity was 100% for all resistance patterns and for both specimens and isolates. The laboratory processing time was a median of 25 days for MGIT AST and 5 days for the GenoType® MTBDRplus (p < 0.01). Conclusion The GenoType® MTBDRplus assay has been validated as a rapid and reliable first-line diagnostic test on AFB-positive sputum or MTB isolates for INH resistance, RIF resistance, and MDR-TB in Bangkok, Thailand. Further studies are needed to evaluate its impact on treatment outcome and the feasibility and cost associated with widespread implementation. Background Drug-resistant tuberculosis (TB) has emerged as an important global public health threat. The World Health Organization (WHO) estimates that 489,000 cases of multi-drug resistant TB (MDR-TB), defined as infection with a Mycobacterium tuberculosis (MTB) strain resistant to at least isoniazid (INH) and rifampin (RIF), occur annually, predominantly in Eastern Europe and Asia [1]. Treatment involves prolonged use of "second-line" anti-TB drugs that are less effective, less tolerated, more toxic, and more expensive than "first-line" anti-TB medications [2]. Under optimum program conditions, cure rates for drug-susceptible TB exceed 90%; for MDR-TB, cure rates infrequently exceed 70% [3]. In most high-burden TB countries, MDR-TB is only diagnosed after prolonged * Correspondence: lpp8@cdc.gov 3 U.S. Centers for Disease Control and Prevention, Atlanta, USA Full list of author information is available at the end of the article treatment with first-line TB drugs and clinical recognition that treatment has failed. 
Treatment of drug-resistant TB with standard first-line drugs, instead of a regimen designed according to the resistance pattern, has several potential adverse consequences: patients remain on inadequate treatment longer, increasing the risk of treatment failure or death; regimens inadequate to kill MTB amplify resistance to drugs to which their isolates were previously susceptible; and patients remain infectious, promoting transmission to close contacts [4]. Because of this problem, the WHO is recommending that countries immediately expand their capacity for culturebased drug-susceptibility testing (DST) and consider new, molecular-based assays for diagnosing drug resistance [5,6]. The internationally accepted gold standard for MDR-TB diagnosis is demonstration of MTB growth in cultures inoculated with INH and RIF. Even using modern broth-based culture systems, obtaining DST results from sputum specimens still takes several weeks [7]. New assays have been developed to detect resistance faster using genotype, rather than phenotype. The GenoType ® MTBDRplus test is a deoxyribonucleic acid (DNA) strip assay which uses polymerase chain reaction (PCR) and hybridization to detect mutations in the inhA, katG, and rpoB genes that confer INH and RIF resistance [8]. A recent meta-analysis found that the GenoType ® MTB-DRplus assay and one other similar commercial test have a pooled sensitivity of 98% for detecting RIF resistance and 89% for detecting INH resistance [9]. Specificity averages 99% for RIF and INH [9]. Testing can be performed on isolates or AFB-positive sputum specimens and can return results in 8 hours, making this a promising tool to accelerate MDR-TB diagnosis and improve MDR-TB control. Although the GenoType ® MTBDRplus assay has been studied in several laboratories, there is wide variation in circulating MTB strains across the globe [10,11], and false negative results occur due to unique genetic mutations [9,[12][13][14][15][16][17]. Validation in different settings is needed to ensure acceptable performance, particularly in Asia, which has the largest number of MDR-TB cases in the world but only one previous report, in Vietnam, about the performance of the GenoType ® MDRplus assay [18]. Thailand is 18 th on the list of 22 "high-burden" TB countries in the world, with over 90,000 cases occurring annually [19]. A national survey in 2002 found that the rate of MDR-TB was 1% in previously untreated TB patients and 20% in previously treated patients; a second survey in 2006 found that the rate of MDR-TB had increased to 1.7% in previously untreated patients and 34.5% in previously treated patients [20]. We have previously validated the feasibility and performance of broth-based culture and DST at the Bangkok city public TB laboratory in Thailand [7]. In this study, we compared the performance of the GenoType ® MTBDRplus to a broth-based DST assay for detecting INH resistance, RIF resistance, and MDR-TB in AFB-positive sputum specimens and isolates in the same laboratory. Setting Between July and September 2008, we evaluated the performance characteristics of the GenoType ® MTBDRplus assay at the Bangkok Metropolitan Administration (BMA) Health Laboratory Division, the primary clinical laboratory for the city's TB control program. Before study implementation, we tested 50 MTB isolates of known resistance patterns from WHO's External Quality Assurance program to evaluate laboratory proficiency in using this assay. 
After this, the performance characteristics of the GenoType ® MTBDRplus assay were assessed in two populations: (a) 50 stored MTB isolates with known resistance patterns to INH and/or RIF from pulmonary TB patients in Bangkok; and (b) 164 stored acid fast bacilli (AFB)-positive sputum specimens that were submitted to the BMA laboratory by 163 TB patients for routine culture and DST. All MGIT AST results were blinded to the microbiologists performing the GenoType ® MTB-DRplus assay. Routine testing of sputum specimens Sputum specimens were processed using the U.S. Centers for Disease Control and Prevention (US CDC) recommended method of N-acetyl-L-cysteine 4% NaOH-2.9% citrate (final concentration of NaOH 1%). Following incubation at room temperature for 15 minutes, specimens were concentrated, decanted, re-suspended, and then examined for the presence of AFB using Ziehl-Neelsen (ZN) method. The remaining suspension was used to inoculate two Lowenstein-Jensen (LJ) tubes and one Mycobacterial Growth Indicator Tube (MGIT, Becton-Dickinson). Residual processed sputum was frozen at -70°C and used for GenoType ® MTBDRplus testing. MGIT cultures were incubated in the MGIT BACTEC 960 for six weeks. All cultures flagged as positive were removed, examined for AFB using ZN staining, and subcultured to LJ and confirmed as MTB using classical biochemical tests including niacin accumulation, nitrate reduction, and inhibition to para-nitrobenzoic (PNB) acid [20]. All isolates identified as MTB underwent MGIT AST for INH, RIF, streptomycin, and ethambutol, using previously described methods [7]. All positive MGIT vials were stored at room temperature for the duration of the assessment to allow discrepant testing. DNA isolation from clinical specimens and MTB isolates AFB positive sputum specimens graded as scanty, 1+, 2+ and 3+ were prepared for the GenoType ® MTBDRplus assay by first concentrating 500 μL of the residual processed specimen in a microcentrifuge (10,000 × g, 15 minutes, room temperature). The supernatants were decanted and the pellet was re-suspended in 100 μL of distilled water, then inactivated by incubating the bacteria in a heating block for 20 minutes at 95°C. Cells were sonicated in an ultrasonic bath for 15 minutes, and concentrated for an additional 5 minutes. The supernatants were transferred to a new tube to be stored for PCR. Isolates were prepared for the GenoType ® MTBDRplus assay by first sub-culturing stored clinical strains to broth culture then to LJ. We then used approximately 1 loop full of colonies taken from the LJ to prepare a bacterial suspension in 300 μL of distilled water. This suspension was inactivated, sonicated and concentrated using the same procedures applied to sputum specimens. DNA amplification Amplification was performed by combining 35 μL of primer nucleotide mix (PNM) with 5 μL of 10× PCR buffer (containing 15 mM MgCl 2 ), 2 μL MgCl 2 (25 mM MgCl 2 ), 3 μL molecular grade H 2 O, 0.2 μL (1 unit) Hot-Star Taq polymerase (QIAGEN, Hilden, Germany), and 5 μL of the bacterial suspension for a total final volume of 50.2 μL. The amplification profile for direct patient material as described by the manufacturer was used for all bacterial suspensions. First, the template DNA was denatured for 15 minutes at 95°C, followed by 10 cycles consisting of 30 s at 95°C and 2 minutes at 58°C, with an additional 30 cycles consisting of 25 s at 95°C, 40 s at 53°C and 40 s at 70°C. The final cycle consisted of an 8 minute run at 70°C. 
Hybridization Hybridization was performed manually using a shaking water bath/Twincubator ® preheated to 45°C. Twenty microliters of denaturation solution were mixed thoroughly in a plastic 12-well tray with 20 μL of amplified sample and incubated at room temperature for 5 minutes. One milliliter of hybridization buffer was added to each well and mixed. We then placed 1 pre-labeled test strip into each well, and incubated the test strips and solutions for 30 minutes at 45°C. All solutions were completely aspirated following incubation. One milliliter of stringent wash solution was then added to each strip and incubated for 15 minutes at 45°C. Once all solutions were completely aspirated, we applied 1 mL of rinse solution to each strip for 1 minute. The rinse solution was then completely removed and 1 mL of diluted conjugate was added to each strip, and incubated for 30 minutes. After incubation, all solutions were removed, and the test strips were rinsed twice by using a rinse solution for 1 minute, followed by distilled water for 1 minute. All solutions were completely aspirated between rinses. We then added 1 mL of diluted substrate to each strip and incubated the test strips protected from light for up to 20 minutes. All solutions were removed, and the reaction was stopped by rinsing twice with distilled water. The test strips were allowed to dry, and then taped to the GenoType ® MTB-DRplus assay worksheet for interpretation. Repeat testing and discrepant analysis Sputum specimens and isolates resulting in inconsistent development of bands on the MTBDRplus strip, and/or no MTB control band underwent repeat PCR and hybridization from the extracted DNA. Isolates with discrepancies between the susceptibility results of the GenoType ® MTBDRplus assay and MGIT AST were sent to US CDC for sequencing. Statistical analyses The time from positive culture in the laboratory to the time DST results were read was calculated using the Gen-oType ® MTBDRplus assay and the MGIT AST method for each specimen. All statistical tests were performed using Stata 10.0 (College Station, TX). Statistical significance was established at an alpha level of 0.05. Ethical review This project underwent formal ethical review at the U.S. CDC and Bangkok Metropolitan Administration. It was approved as a public health program evaluation, not requiring individual informed consent. Results Phenotypic testing identified INH resistance in 14/50 (28%) isolates and in 29/164 (18%) sputum specimens, and RIF resistance in 6/50 (12%) isolates and in 19/164 (12%) sputum specimens. The GenoType ® MTBDRplus assay had a sensitivity of 95.3% (41/43), 100% (25/25), and 94.4% (17/18) for INH-resistance, RIF-resistance, and MDR-TB, respectively (Tables a1a and b1b). Of the 41 specimens that were identified as INH-resistant using the GenoType ® MTBDRplus assay, 32 had a mutation in the katG gene, 6 in the inhA gene, and 3 in both genes. For INH resistance, the sensitivity was lower for sputum specimens (93%) than for isolates (100%), but the difference was not statistically significant (p = 0.08). Specificity was 100% for all resistance patterns and for both specimens and isolates. We were able to obtain interpretable sequence data for one of two sputum specimens with INH resistance detected by conventional testing but not molecular testing; this isolate was identified as wild type. 
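The headline accuracy figures above reduce to simple proportions of phenotypically resistant specimens detected by the assay. A minimal sketch of the computation in Python, with Wilson score intervals added purely for illustration (the study does not state a confidence-interval method):

from math import sqrt

# Resistant specimens detected by the GenoType MTBDRplus assay out of those
# confirmed resistant by MGIT AST (isolates and sputum specimens combined).
cases = {"INH resistance": (41, 43), "RIF resistance": (25, 25), "MDR-TB": (17, 18)}

for name, (detected, total) in cases.items():
    p, n, z = detected / total, total, 1.96  # sensitivity and 95% z-value
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = (z / (1 + z * z / n)) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    print(f"{name}: sensitivity {p:.1%} (Wilson 95% CI {centre - half:.1%} to {centre + half:.1%})")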
For the 211 specimens with dates recorded for time from the culture result to the DST result, the laboratory processing time for the GenoType ® MTBDRplus assay was significantly shorter than MGIT AST: 5 days compared with 25 days (p < 0.01) ( Table 2). The most marked advantage in laboratory processing time was evident when specimens were processed directly from AFB-positive sputum smears (3 days vs. 24.5 days; p < 0.01). Discussion In Bangkok, the GenoType ® MTBDRplus detected INH resistance, RIF resistance, and MDR-TB with a high sensitivity and 100% specificity in both isolates and AFBpositive sputum specimens, with substantial reductions in turn-around time compared with MGIT AST. This study provides assurance that TB patients in Thailand with MDR-TB detected with the GenoType ® MTB-DRplus assay should immediately commence treatment with second-line anti-TB drugs. The 100% specificity corresponds to a positive likelihood ratio of near infinity, suggesting that, regardless of the pre-test odds of MDR-TB, the post-test odds from a positive test are sufficiently high to warrant MDR-TB treatment. Ideally, the Geno-Type ® MTBDRplus assay should be performed on AFBpositive sputum specimens, rather than isolates, to take advantage of the more than three week acceleration in turn-around-time. Use of the assay on AFB-positive sputum and rapid commencement of MDR-TB treatment for patients with genotypic resistance has the potential to improve patient outcomes, reduce TB transmission, and reduce amplification of resistance to first-line drugs (e.g., ethambutol, pyrazinamide, and streptomycin) to which the isolate may not yet be resistant. One weakness of the GenoType ® MTBDRplus is the lack of 100% sensitivity for MDR-TB, attributable to the test's failure to detect all mutations that confer INH resistance. Two AFB-positive sputum specimens were confirmed as INH resistant by MGIT AST and sensitive by the Geno-Type ® MTBDRplus. We were able to obtain interpretable sequence data of the promoter region for inhA and katG for one specimen which confirmed the result of the Gen-oType ® MTBDRplus. INH resistance in this strain may be due to mutations in regions other than codon 315 of katG or nucleic acid positions -15, -16 and -8 in the inhA promoter region; or mutations in genes not represented on the test strip, such as ahpC-oxyR, and ndh [21,22]. However, a recent evaluation of 160 INH resistant isolates in Thailand found that 92.5% (148) of the gene mutations were found in codon 315 of katG and the inhA promoter and coding regions (127 and 22, respectively) [23]. These findings support the utility of the GenoType ® MTBDRplus assay for detecting the majority of genetic mutations conferring INH resistance in Thailand. As we were not able to obtain quality sequencing results for our second specimen, the reason for the discordance remains unclear. Advances in understanding the molecular epidemiology of INH resistance may contribute to improved test performance. Our validation study was conducted using stored sputum specimens and isolates from TB patients in Thailand. Therefore, our turn around time using the GenoType ® MTBDRplus for identification of RIF and INH resistance in AFB positive sputum specimens is based on the time from confirmed AFB positive smear results to interpreta-tion of the GenoType ® MTBDRplus. The turn around time for isolates is based on the time required to subculture strains of MTB, obtain growth, perform and interpret results of the GenoType ® MTBDRplus assay. 
On average, results were available using GenoType ® MTBDRplus on AFB-positive sputum specimens in 3 days and in 16 days for isolates; compared with 25 days using MGIT AST. Our laboratory uses the manual method for the Geno-Type ® MTBDRplus assay and performs DNA extraction and PCR on the first day and hybridization on the second day. Interpretation of the test strips required an additional 1-2 days. Laboratories with adequate staff may reduce their turn around time by performing DNA extraction, PCR, and hybridization in one day, and by using the automated system (GT-Blot 48; Hain Life-Science GmbH, Germany). Turn around time from specimen collection to susceptibility result will be determined during the implementation phase of our study. This study evaluated a large number of sputum specimens at a routine public health facility in a high-burden TB setting. Most evaluations of the GenoType ® MTB-DRplus assay have involved a relatively small number of specimens (usually less than 50) and were conducted in primarily academic facilities located in low TB burden, high income countries [9,[12][13][14][15][16][17]. The only other evaluation that involved a large number of samples from a public health facility in a high-burden TB country was done in South Africa [16,17]. Test performance in 536 sputum specimens was similar to our findings, although the test showed the additional ability to detect MTB and drugresistant MTB in smear-negative, culture-positive specimens. The major limitation of our study is that it involved a relatively small number of sputum specimens compared with the number routinely processed in the laboratory, which may have been an insufficient sample to detect differences in test performance between isolates and AFBpositive sputum. In conclusion, the GenoType ® MTBDRplus assay has been validated as a rapid and reliable first-line diagnostic test on AFB-positive sputum or MTB isolates for INH resistance, RIF resistance, and MDR-TB in Bangkok, Thailand. Whether it should be implemented into routine public health programmatic use, however, is not yet clear. Further studies are needed to evaluate its impact on treatment outcome and the feasibility and cost associated with widespread implementation.
2014-10-01T00:00:00.000Z
2010-05-20T00:00:00.000
{ "year": 2010, "sha1": "e7c7bbf0c3ddcf468237671a81d5765b4c88a9b9", "oa_license": "CCBY", "oa_url": "https://bmcinfectdis.biomedcentral.com/track/pdf/10.1186/1471-2334-10-123", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ac2e7e0f8c05530337e99b40df77654143dff71f", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology" ] }
218619158
pes2o/s2orc
v3-fos-license
Acute kidney injury after nephron sparing surgery and microwave ablation: focus on incidence, survival impact and prediction Abstract Purpose To compare acute kidney injury (AKI) incidence between nephron sparing surgery (NSS) and microwave ablation (MWA) for T1a RCC patients, reveal the effect of AKI on survival prognosis, construct AKI nomogram and use Law of Total Probability for survival probability (SP) prediction. Materials and methods Patients were studied retrospectively after NSS (n = 1267) or MWA (n = 210) from January 1, 2011 to June 30, 2017. Using one to one Propensity Score Matching (PSM), 158 pairs of patients were identified for the cohort study. AKI incidence, risk factors and impact on survival outcomes were analyzed using Chi-square test, logistic and cox regression analysis. AKI risk and SP were predicted by nomogram and Law of Total Probability. The performance of the nomogram was assessed with respect to its discrimination, calibration, and clinical usefulness. Results AKI occurred more commonly in NSS (27.85%) cohort, when compared to MWA (17.72%) cohort (p = 0.032), but treatment modality was not independently predictive of AKI occurrence (odds ratio [OR]: 0.598; 95% confidence interval [CI]: 0.282–1.265; p = 0.178). The 5-yr overall survival (OS) was lower in AKI patients (73.5%) compared with non-AKI patients (94.8%; p < 0.001). AKI was an independent risk factor for all-cause mortality in RCC patients (hazard ratio [HR]: 2.820; 95% confidence interval [CI]: 1.110–7.165; p = 0.029). Predictors for both NSS- and MWA-related AKI included tumor diameter, baseline eGFR and CCI score. RENAL score and tumor blood supply can predict AKI after NSS and MWA, respectively. The AKI normograms demonstrated good discrimination, with AUCs >0.86, excellent calibration and net benefits at the decision curve analysis with probabilities ≥5%. SP predicted by Law of Total Probability was comparable to actual OS. Conclusion AKI was an early indicator for poor overall survival in RCC patients. It can be predicted by several oncological parameters. Nomogram and Law of Total Probability can accurately predict AKI risk and SP. Introduction Acute kidney injury (AKI) is a major complication following surgical resection or local radical ablation for renal cell carcinoma (RCC) [1]. Numerous studies demonstrated that AKI was independently associated with increased risk of all-cause patients' mortality after surgery, sepsis or nephrotoxic drug administration [2][3][4]. However, for RCC treatment, the effects of AKI on prognosis of RCC patients were only studied in nephrectomy. AKI following PN or RN is associated with increased mortality, new-onset CKD, worsening of preexisting CKD, and prolonged hospitalization [5]. The role of AKI in TA for treating RCC has not been revealed.Given the favorable oncologic efficacy across T1a RCC management strategies, a better renal function preservation and lower non-cancer causes of death are often of paramount concerns [6][7][8][9]. The top four causes of mortality for T1a RCC patients are cardiovascular disease, pulmonary disease, renal events and other malignancies [10]. AKI identified as a vital predictor for all causes of death in other fields was only studied on incidence, impact on in-hospital mortality and promoting effect on chronic kidney disease in RCC [5]. 
No study directly investigated the relationship between AKI and long-term survival prognosis in RCC patients.The least invasive methods, such as nephron sparing surgery (NSS) and thermal ablation (TA), are recommended by the American Urological Association (AUA), the European Association of Urology (EAU) for T1a RCC treatment [11,12]. NSS was reported to have 30.3%-55.7% AKI incidence [13] and there was no data reporting on AKI after MWA. Thus, the differences in AKI incidence, impact of AKI on survival prognosis and AKI risk factors for the two techniques, are unknown. Based on these, we performed a cohort study to compare AKI incidence between NSS and MWA and to explore AKI impact on survival prognosis in T1a RCC patients. We further constructed nomogram that assess AKI risk and used Law of Total Probability to predict survival probability (SP) based on AKI risk. This study comprehensively evaluates the role of AKI in RCC patients. Patient selection A total of 1554 consecutive (>18 years of age) RCC patients who underwent elective NSS (n ¼ 1330) and MWA (n ¼ 224) from the 1st of January 2011 to the 30th of June 2017, were reviewed in electronic medical records. The choice of NSS or MWA was based on a result of multidisciplinary discussion after a review of clinical, imaging and functional studies. Indications for MWA of 210 renal nodules in 210 patients were the following: advanced age or poor surgical candidates for significant comorbidities in 37 patients, poor liver function test in 24 patients, single kidney after nephrectomy in 9 patients, poor renal function in 73 patients, association of other cancers in 13 patients, and patient preference in 54 patients. NSS was offered to patients who were relatively healthy and young enough to endure the procedure of surgery. The inclusion criteria for the cohort study were as follows: (1) clinical TNM classification of AJCC T1aN0M0; (2) single tumor; (3) absence of vascular invasion or extrarenal spread. The exclusion criteria were as follows: (1) patients with suboptimal MRI or CT images; (2) those who underwent neoadjuvant therapy; (3) lack of clinical or imaging data or follow-up information; (4) those who suffered from sepsis, severe anemia, tumor lysis syndrome before NSS or MWA; (5) had severe coagulation disorders (i.e. prothrombin time > 25 s, prothrombin activity <40%, and platelet count <50 cells  10 9 /L). Finally, 1477 patients (NSS: n ¼ 1267; MWA: n ¼ 210) were included. The NSS and MWA procedures were previously described [14,15]. Outcome measurements and follow-up AKI is defined according to the AKIN criteria (!1.5-fold increase or increase by 26.5 lmol/L in preoperative creatinine within 48 h after procedure). AKI is classified into three stages: stage 1 as creatinine increases 1.5-to 2-fold; stage 2 as creatinine increase 2-to 3-fold; stage 3 as creatinine increase >3-fold (or need for dialysis or a peak sCr > 4 mg/mL with at least a 0.5 mg/dL increase) [16]. In terms of the KDIGO guidelines and relative references, the AKI recovery is defined as the sCr level fall back to within 120% of baseline creatinine level closest to 90 days after AKI episode to allow sufficient time for recovery [17]. After a complete resection or ablation was achieved, routine visit was repeated at the third month and then every 6 months. Followups were closed at times of death or last visits of the patients. Last follow-up date and status were recorded. Reasons for death were measured and recorded. 
OS and CSS were calculated using the days between NSS or MWA and death, or the end of follow-up. Preoperative variables Perioperative variables known to be, or that could potentially be, associated with AKI were examined. These factors were chosen a priori, based on the AKI literature and on our clinical experience with AKI [18][19][20]. The collected data were as follows: (1) patient demographics (sex, age), comorbidities (CCI score), ECOG performance status, laboratory examinations (baseline eGFR, blood sugar, blood uric acid, triglyceride, total cholesterol, calcium, hemoglobin, white blood cell and platelet counts, ALT, AST, ALB, TIBL and DIBL); (2) tumor features (size, laterality and pole, adjacency, blood supply and RENAL Score); (3) procedure parameters (blood loss, warm ischemia time or ablation time). Baseline eGFR was calculated by the CKD-EPI equation, which has been used successfully to assess renal function in elderly cancer patients. Judging standards for tumor adjacency and blood supply were in accordance with relevant references. Tumor adjacency to the renal pelvis or bowel was defined as a distance between the tumor margin and the bowel or renal pelvis of <5 mm, as measured by US [21]. Contrast-enhanced MR or CT imaging was used to assess the vascularity of the tumors. Fast multiplanar spoiled gradient-recalled-echo sequences with fat saturation (125/4.2, 90° flip angle, 256 × 192 matrix, 16-25 s breath hold) or multidetector CT (Lightspeed 16; GE Medical Systems, Milwaukee, WI; 5-mm section thickness, pitch of 1.35:1.0, 120 kV, and 250 mA) were performed dynamically. At visual inspection, tumors whose signal intensity enhancement was greater than or equal to that of normal renal cortex from the late cortical phase (40-50 s) to the delayed phase (210 s) were classified as hypervascular, and tumors whose enhancement was less than that of normal renal cortex throughout all phases (cortical, corticomedullary and delayed phases) were classified as hypovascular [22]. Law of total probability for prediction of survival probability The Law of Total Probability, p(A) = p(A|B1)p(B1) + p(A|B2)p(B2) + … + p(A|Bn)p(Bn), can be understood as follows: an event can arise from many mutually exclusive causes, so the probability of the event is the sum of the probabilities of each cause producing it [23]. Each patient is in one of two mutually exclusive conditions: suffering from AKI or not. Based on the Law of Total Probability, survival probability = AKI risk × overall survival (OS) of AKI patients + (1 − AKI risk) × OS of non-AKI patients. AKI risk and OS can be obtained from the separate AKI nomograms and the Kaplan-Meier survival analyses for NSS and MWA. Statistical analysis Continuous features were summarized as means and standard deviations or medians and interquartile ranges (IQRs). Categorical data were summarized with frequency counts and percentages. Continuous variables were compared using Student's t-test or the Wilcoxon signed-rank test, and categorical variables using Pearson's χ² test. Cox regression analysis and a logistic model were used to test for significant effects of multiple factors on all-cause mortality and AKI, and the coefficients of the logistic model were used to develop the AKI nomogram. The AKI predictions obtained from the nomogram were used to compute the AUC and to perform the decision curve analysis (DCA). Calibration curves were plotted to assess the calibration of the nomogram. 
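The survival-probability formula above reduces to a single weighted average; a minimal sketch (the function name is mine; the example uses the 5-year OS values reported in this study together with a made-up 20% nomogram-predicted AKI risk):

```python
def survival_probability(aki_risk, os_aki, os_non_aki):
    """Law of Total Probability as applied above:
    SP = P(AKI) * OS(AKI patients) + (1 - P(AKI)) * OS(non-AKI patients).
    aki_risk comes from the AKI nomogram; the two OS terms come from the
    Kaplan-Meier curves of the corresponding cohort (NSS or MWA)."""
    return aki_risk * os_aki + (1.0 - aki_risk) * os_non_aki

# Illustration only: 20% AKI risk is hypothetical, 0.735 and 0.948 are the
# reported 5-year OS for AKI and non-AKI patients.
print(survival_probability(0.20, 0.735, 0.948))  # ≈ 0.905
```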
OS and CSS were calculated using the Kaplan-Meier method and compared using the log-rank test. The ability of the Law of Total Probability to predict survival probability was assessed by the group t-test. All tests were two-sided with a significance level set at p < 0.05, and statistical analyses were performed using SPSS 16.0. Comparison of patient characteristics between NSS and MWA cohorts Before Propensity Score Matching, MWA patients were significantly older, with a higher CCI score, lower baseline eGFR (p < 0.0001 for all), lower serum albumin (p < 0.05) and worse physical condition (p < 0.01) in comparison to NSS patients. RCCs were found more frequently on the right side in MWA patients (p < 0.0001) (Table 1). After Propensity Score Matching, all parameters were well balanced (supporting material Table 1). Comparison of AKI incidence and AKI recovery between NSS and MWA cohorts AKI incidences were 27.85% and 17.72% in the NSS and MWA cohorts, respectively (p = 0.032, supporting material Table 1). Of these AKI patients, the numbers meeting stage 1, 2 and 3 AKI criteria were 35 (79.55%), 6 (13.64%) and 3 (6.82%) for NSS and 24 (85.71%), 3 (10.71%) and 1 (3.57%) for MWA, respectively (supporting material Table 1). Our results showed that MWA had a lower incidence of moderate to severe AKI relative to NSS. Nine patients in the NSS cohort and five patients in the MWA cohort recovered from AKI before discharge. Overall, 43.2% (19/44) and 42.9% (12/28) of AKI patients recovered to within 120% of their baseline sCr in the NSS and MWA cohorts, respectively (p = 0.9728). The CKD upstaging proportions in the NSS and MWA cohorts were 45.4% (20/44) and 28.5% (8/28), respectively (p = 0.3111). Patients with higher baseline eGFR and lower CCI score were more likely to recover from AKI (supporting material Table 1). The median eGFR in AKI patients who recovered was 66.62 mL/min per 1.73 m² versus 53.57 mL/min per 1.73 m² in those who did not recover (p = 0.000), and the median CCI score was 3.89 versus 4.60 (p = 0.011, supporting material Table 2). Comparison of survival outcomes between AKI and non-AKI patients Among all patients, the 1-, 3- and 5-year OS of AKI patients were 91.3%, 83.8% and 73.5%, respectively; for non-AKI patients, they were 100.0%, 98.9% and 94.8%, respectively (p < 0.001) (Figure 1). The differences in 1-, 3- and 5-year CSS between AKI patients and non-AKI patients were not significant (Figure 1 and supporting material Table 3). Multivariate analysis indicated that AKI was an independent risk factor for all-cause mortality in RCC patients (supporting material Table 4). AKI nomogram Factors potentially influencing AKI, relating to the patient, the tumor and the procedure, are shown in Table 2. The results showed statistically significant differences in AKI occurrence depending on age, CCI score, baseline eGFR, tumor diameter, blood loss, tumor blood supply, RENAL Score, performance status and treatment modality. Multivariate analysis identified several factors related to AKI occurrence in the 316 patients, including tumor diameter, baseline eGFR, CCI score and RENAL Score. A nomogram incorporating these independent predictors was developed and is presented in Figure 2. Nomogram performance and clinical use The AUC of the prediction nomogram obtained from the 316 patients was 0.864 (95% CI 0.813-0.914) (Figure 3(A)). The calibration curve of the nomogram is presented in Figure 3(B) and demonstrates that the AKI probabilities predicted by the nomogram agreed with the actual probabilities. 
DCA demonstrated the net clinical benefit of applying this model at threshold probabilities ≥5% (Figure 3(C)). The discrimination, calibration and clinical usefulness of the separate nomograms for NSS and MWA are shown in supporting material Figure 3. 3.6. Law of Total Probability to predict survival probability (SP) Discussion Acute kidney injury is prevalent in cancer patients, with an incidence of 25.8-33.8%, and especially in RCC patients, with an incidence of 49.01-54.2% [24,25]. The high prevalence of AKI in cancer patients is related not only to medical interventions, such as antitumor drugs, surgery or ablation, but also to spontaneous tumor lysis syndrome, sepsis, infection, hypercalcemia, abdominal compartment syndrome, urinary tract obstruction, vascular occlusion, contrast agents and other factors [1,26,27]. However, the impact of AKI on survival outcomes in RCC patients is not clear and the predictors of AKI after MWA for RCC have not been studied. Therefore, the current study provides new knowledge on AKI incidence, clinical significance and a prediction nomogram for T1a RCC patients. Our findings may serve as evidence that the choice of RCC treatment strategy should give priority to postprocedural AKI risk. With thermal ablation (TA) used as a first-line treatment for T1a RCC, some studies have investigated acute renal failure (equivalent to AKI stage 3) after TA, reporting an incidence of 0-5.9% [28]. However, AKI after TA has not been reported. The high-power MWA used in our study had modest AKI (17.72%) and AKI stage 3 (3.57%) incidences, lower than those of the NSS group (27.85% and 6.82%). However, subsequent multivariate logistic analysis showed that treatment modality was not independently predictive of AKI occurrence [16]. It can reduce the chronic damage to renal function caused by comorbidities such as chronic nephritis, hyperuricemia or diabetes. Our results demonstrated that, after NSS or MWA, AKI patients had an increased risk of all-cause mortality relative to non-AKI patients, but cancer-specific survival was not significantly different between these two categories. Most studies hold that AKI can induce chronic kidney disease (CKD) and end-stage renal disease, which are direct causes of death. However, another mechanism by which AKI contributes to death is that the release of IL-1, TNF and angiopoietin-2 cytokines induced by AKI promotes an inflammatory reaction, which leads to cardiac cell apoptosis, endothelial dysfunction and worsening microcirculatory dysfunction; these may be more important pathogenic or lethal mechanisms than the decline in eGFR alone [30]. Cox regression analysis indicated that postoperative AKI was an independent risk factor for all-cause mortality. The data sufficiently demonstrate that AKI is a definite early indicator of poor survival prognosis in RCC patients. Tumor diameter, baseline eGFR, and CCI score are independent AKI predictors for both NSS and MWA. [Figure 2. Nomogram for the prediction of AKI after NSS or MWA, based on multivariable logistic regression analysis. Instructions: locate the patient's baseline eGFR on the corresponding axis. Draw a line straight downward to the score axis to determine how many points toward the probability of AKI the patient receives for his/her baseline eGFR. Repeat the process for each additional variable. Add the points for each of the predictors. Locate the final sum on the total score axis. Draw a line straight up to find the patient's probability of AKI. AKI = acute kidney injury; eGFR = estimated glomerular filtration rate; CCI = Charlson comorbidity index.]
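Point scales like those in the nomogram captioned above are commonly built by rescaling each predictor's contribution to the logistic linear predictor so that the most influential variable spans 0-100 points; a rough sketch under that assumption (the coefficients and value ranges below are placeholders, not the fitted model):

```python
def nomogram_points(betas, ranges):
    """Convert logistic-regression coefficients into nomogram point scales.
    betas:  dict {variable: coefficient}
    ranges: dict {variable: (min_value, max_value)} observed in the cohort
    Each variable's span of points is proportional to |beta| * (max - min);
    the most influential variable is rescaled to 0-100 points."""
    spans = {v: abs(betas[v]) * (ranges[v][1] - ranges[v][0]) for v in betas}
    top = max(spans.values())
    return {v: 100.0 * spans[v] / top for v in betas}

# Hypothetical coefficients and ranges, purely for illustration:
betas = {"eGFR": -0.03, "tumor_diameter": 0.6, "CCI": 0.4, "RENAL": 0.3}
ranges = {"eGFR": (15, 120), "tumor_diameter": (0.5, 4.0), "CCI": (0, 10), "RENAL": (4, 12)}
print(nomogram_points(betas, ranges))
```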
[Figure 3. AUC, calibration curve and decision curve analysis of the AKI nomogram. A, AKI nomogram ROC curve. The x-axis indicates 1 − specificity and the y-axis the sensitivity. The area under the ROC curve (AUC = 0.864) represents the strong AKI discriminative ability of the nomogram. B, AKI nomogram calibration curve. The x-axis represents the nomogram-predicted probability and the y-axis the actual AKI probability. A perfect prediction would correspond to the 45° blue dashed line. The red dotted line represents the primary cohort (n = 316) and the black solid line is bias-corrected by bootstrapping (B = 1000 repetitions), indicating observed nomogram performance. C, Decision curve analysis (DCA) of the AKI nomogram, demonstrating the net benefit associated with the use of the nomogram-derived probability, based on multivariable logistic regression analysis, for the prediction of AKI. AKI = acute kidney injury.] Rewa and Bagshaw indicated that the quantity and quality of preserved renal parenchyma are the most important determinants of functional change [31,32]. Laguna indicated that defective quantity or quality of preserved parenchyma can result in serious deficiencies in the compensatory function of residual nephrons [33]. Baseline eGFR directly reflects the quality of preserved parenchyma, because a lower eGFR represents extensive glomerulosclerosis and tubular fibrosis [34]. Tumor diameter indirectly determines the quantity of preserved parenchyma, as a larger tumor necessitates resection or ablation of more peritumoral parenchyma for a safety margin. An intense inflammatory response after NSS or MWA can accelerate the exposure of a latent kidney injury to hypertension, diabetes or inflammatory disease [35,36], which can explain why patients with a high CCI score are more vulnerable to post-procedural AKI. In the separate nomograms, a high RENAL Score and rich tumor blood flow were independent predictors of NSS- and MWA-related AKI, respectively. Some studies reported that ischemia time can predict AKI occurrence after NSS [37], but our results demonstrated that the RENAL Score, rather than ischemia time, was independently predictive of AKI. The studies by Antonelli et al. and Beksac et al. support this conclusion: they found that on-clamp and off-clamp approaches for robotic partial nephrectomy showed a comparable AKI incidence, and that there was no significant difference in AKI incidence between patients who achieved trifecta and those who did not [38,39]. Bertolo et al. confirmed that a worse probability of maintaining ≥90% of baseline renal function was found more often in patients with CCI ≥3 (p = 0.004) and patients with a PADUA score ≥8 (p = 0.023) [40]. Casey and Inderbir et al. indicated that, for medially based hilar tumors, which have higher RENAL nephrometry scores, robot-assisted and laparoscopic partial nephrectomies are virtually impossible if the hilum remains unclamped, because one or more distinct, higher-order arteries immediately supply the tumor or the tumor-bearing segment of the kidney [41]. The situation is quite different for laterally based tumors, as the increased intraparenchymal distance between the main renal artery and the tumor makes it less likely that a dedicated arterial branch immediately supplies the tumor. So there are two speculations to explain why the RENAL Score, rather than ischemia time, bears more responsibility for AKI occurrence after NSS: 1. In the present study, warm ischemia times were generally short, with a median of 22 (IQR 18-27) minutes, within the 25-30 min limit recommended by RCC management guidelines [11,12]. 2. 
AKI is a complex pathophysiological process. It is related not only to ischemia-reperfusion injury [42], but also to the type, quantity and blood-supply distribution of the vessels damaged during tumor resection. Blood flow can produce a heat sink effect. The peripheral renal tissue around the ablation zone showed a congestive status with a temperature of 40-50 °C, which can potentially damage non-neoplastic nephrons [43]. Unfortunately, so far, no research has explored the influence of this thermal sedimentation effect on blood-flow temperature. Meanwhile, the present study also found that serum Cr decreased by more than 10% of baseline in some patients after MWA, which, from another angle, indicates that the thermal sedimentation effect had a definite action on peritumoral renal tissue. This phenomenon has not been reported before. Post et al. demonstrated that microcirculatory perfusion disorder is one of the main AKI pathogenic mechanisms [44]. Based on this evidence, we speculated that the reduction of serum Cr, or AKI occurrence, after MWA was the result of renal microcirculatory perfusion changes caused by the thermal sedimentation effect. The pathological changes of the peritumoral tissue after MWA may help to clarify this. Damage to glomerular capillary loops and renal tubular epithelial cells, or microcirculatory vasodilation without cell damage, may be the corresponding renal pathological changes in patients with AKI or with serum Cr decline, respectively. Microcirculatory vasodilation can increase glomerular perfusion blood flow and further improve the glomerular filtration rate. Whether heat produces damage or an improvement in microcirculatory perfusion may largely depend on the heat intensity carried away, so it is necessary to study the differences in pathological changes and temperature distribution around the tumor margin between AKI patients and those with a significant serum Cr decrease. The high AUC, excellent calibration and great clinical benefit showed that the nomogram has strong AKI discriminative ability, accurate AKI prediction and potent clinical usefulness. The group t-test showed that the average calculated survival probability was closely comparable to the actual OS for the NSS and MWA cohorts. Clinicians can use the AKI risk value and the Law of Total Probability to calculate a patient's survival probability for preoperative counseling. Additionally, when counseling patients, one should emphasize that adjuvant treatment modalities that might be required after the procedure may list nephrotoxicity among their side effects, especially when screening patients with high AKI risk. This study had numerous limitations, one of them being its retrospective design with a relatively small patient series after PSM. The limited sample size might have reduced the statistical power of the comparative analyses, resulting in bias and coincidence. Also, stage 1 AKI accounted for the majority of AKI events in the NSS and MWA cohorts; therefore, we did not analyze the impact of AKI severity on survival outcomes according to AKI stage stratification. Finally, our survival estimates encompassed a relatively prolonged duration of 7 years, but outcomes beyond this point require further study. Conclusion AKI was an early indicator of poor overall survival in RCC patients. It can be predicted by several oncological parameters. The nomogram and the Law of Total Probability can accurately predict AKI risk and SP. 
They may serve as tools for screening patients at high risk of AKI and poor survival prognosis after NSS or MWA.
2020-05-14T13:03:13.158Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "f30e55ac23f9ee98806180d704ffe63ded54b953", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/02656736.2020.1752944?needAccess=true", "oa_status": "GOLD", "pdf_src": "TaylorAndFrancis", "pdf_hash": "5eab32530ff9b1c9de38d6067d0a0d7b0d840842", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
248725141
pes2o/s2orc
v3-fos-license
Changes in healthcare spending attributable to obesity and overweight: payer- and service-specific estimates Background National efforts to control US healthcare spending are potentially undermined by changes in patient characteristics, and in particular increases in rates of obesity and overweight. The objective of this study was to provide current estimates of the effect of obesity and overweight on healthcare spending overall, by service line and by payer, using the National Institutes of Health classifications for BMI. Methods We used a quasi-experimental design and analyzed the data using generalized linear models and two-part models to estimate obesity- and overweight-attributable spending. Data were drawn from the 2006 and 2016 Medical Expenditure Panel Survey. We identified individuals in the different BMI classes based on self-reported height and weight. Results Total medical costs attributable to obesity rose to $126 billion per year by 2016, although the marginal cost of obesity declined for all obesity classes. The overall spending increase was due to an increase in obesity prevalence and a population shift to higher obesity classes. Obesity-related spending between 2006 and 2016 was relatively constant due to decreases in inpatient spending, which were only partially offset by increases in outpatient spending. Conclusions While total obesity-related spending between 2006 and 2016 was relatively constant, examining the effect of the different obesity classes and of overweight provides insight into spending at each level of obesity and overweight across service lines and payer mix. Obesity classes 2 and 3 were the main factors driving spending increases, suggesting that persons with a BMI over 35 should be the focus of policies aimed at controlling spending, such as prevention. Supplementary Information The online version contains supplementary material available at 10.1186/s12889-022-13176-y. Background Obesity has been identified as one of the key drivers of increased healthcare spending and reduced life expectancy in the United States [1][2][3][4][5] and worldwide [6]. Obesity has been linked to a multitude of health conditions, including coronary heart disease [7], chronic renal failure [8], many cancers, sleep apnea, gallbladder disease [9], Type 2 Diabetes [10] and other conditions. The link between obesity and chronic illness is the reason for the link between obesity and reduced life expectancy [3,4]. There has also been an extensive investigation of the impact of obesity on healthcare spending. Obesity was identified as one of the key drivers of increased healthcare spending during the 1996-2006 time period [1], with the effect largely driven by increases in spending on chronic diseases caused by obesity [5]. More recent work has found that the proportion of spending attributable to obesity increased by 29% from 2001 to 2015, from 6.1 to 7.9% [11], with obese adults having higher inpatient and prescription drug spending in particular [12]. The costs of obesity are higher in more obese individuals, both overall and for particular chronic illnesses, such as diabetes [13]. Interestingly, there is some evidence that the effect of obesity on total spending may have moderated in recent years, with a statistically insignificant decrease in total spending from 2010 to 2013, from $3748 [14] to $3429. The more recent economic literature has begun measuring the effect of obesity by Body Mass Index (BMI) categories, mirroring the medical community. This is done using the National Institutes of Health (NIH) Body Mass Index (BMI) categories of overweight (BMI of 25-29.9), Class 1 (30-34.9), Class 2 (35-39.9) and Class 3 (Extreme) (BMI over 40). In clinical research, the new classification system has shown that decreases in life expectancy are concentrated in Class 3 [15]. There is limited evidence about whether healthcare spending is similarly concentrated in higher BMI classes, despite studies addressing BMI up to 45 [13,16]. This study makes a number of new contributions to the existing literature on the effect of obesity and overweight on healthcare spending. First, we measure the effect of obesity and overweight on spending by service line (Emergency care, Inpatient and Outpatient) and payer using the NIH classifications for obesity and overweight; we find that constant obesity-related spending is largely due to a shift from inpatient care to outpatient care, coupled with slight reductions in prescription drug spending. Previous service line- and payer-specific estimates used the more general obese/non-obese framework [17,18], which may miss important nuances if the effect of obesity is concentrated in the higher categories [1]. Second, reforms in the Affordable Care Act (ACA) have shifted payer types, particularly through Medicaid expansions, which may have changed the distribution of payers relative to previous studies. Third, we examine the effect of different obesity classes and overweight on spending, by service line, to understand differences in how utilization occurs for different levels of obesity and overweight. Finally, we provide a careful examination of the suggestive evidence, cited above, that the effect of obesity may have moderated in more recent years. To do this, we analyze trends in obesity rates and obesity-induced spending between 2006 and 2016 and model the changes in spending for the different BMI classes. Methods Our data source is the Medical Expenditure Panel Survey (MEPS) Household Component, which collects detailed information regarding the use of and payment for health care services from a nationally representative sample of Americans [19]. We used the 2006 and 2016 Full Year Consolidated files for our analyses. The MEPS data use a consistent sampling frame over time and represent the US non-institutionalized civilian population. The MEPS sample included 34,655 observations for 2016 and 34,145 for 2006. The insurance categories are drawn from MEPS categorizations and are not mutually exclusive. To analyze the effect of obesity and overweight on healthcare spending, we looked at expenditures across service lines (total, inpatient, non-inpatient, and drugs), as well as by payer. We excluded everyone under the age of 18 and observations for whom we had no insurance or BMI information, which left us with 24,408 observations for 2016 and 22,989 for 2006. In our empirical model, the dependent variables are healthcare expenditures, including total, inpatient, non-inpatient, and drug expenditures. Non-inpatient is defined as outpatient and office-based expenditures. The main explanatory variables are the BMI categories. BMI was used to create dummy variables for four BMI categories: overweight (BMI 25-29.9), obesity class 1 (30-34.9), obesity class 2 (35-39.9) and obesity class 3 (extreme) (above 40). 
BMI was calculated based on self-reported height and weight. The BMI class "normal" (18.5-24.9) was the reference group in all models. Individuals with a BMI less than 18.5 were coded as "underweight"; underweight is controlled for in the model but not reported in the tables. The models controlled for sociodemographic and health characteristics that are not in the causal pathway between obesity and spending. The control variables are drawn from the MEPS data, and include gender, race/ethnicity, smoking status, marital status, region of the country, education, and family income. Age was included and coded as a categorical variable for ages 18-34, 35-44, 45-54, 55-64, 65-74 and 75+. Expenditures were modelled using generalized linear models (GLM) for total and non-inpatient expenditures; inpatient and drug spending were modelled using two-part models (TPM) [20,21]. For all the expenditure classes, we performed a Modified Park test to identify the distribution of the expenditure data and the coefficient of the conditional variance function. The test supported the choice of a GLM with gamma family and log link for all models. We used the Hosmer-Lemeshow test for goodness of fit. We calculated standard errors using a bootstrap with 1000 iterations per model. Differences between coefficients were tested using a standard t-test. Observations with missing data for insurance (n = 265 for 2006 and 256 for 2016) or BMI (n = 741 and 701) were omitted from the analysis. We also estimated the attributable fraction (AF) for obesity and overweight, which is equal to the change in spending with and without obesity and overweight divided by total spending. The AF represents the proportion of spending attributable to the different BMI categories, controlling for other variables in the model. The estimated magnitude of the cost of obesity in previous work has varied considerably, perhaps driven by different study methodologies [22]. The advantage of using the AF methodology is that the estimates can be updated periodically to track the cost effect of BMI. This approach has previously been used for obesity as well as for smoking [23] and falls in older adults [24,25]. Standard errors were calculated using a bootstrap method with 200 replications. We used STATA 15 for all analyses. Expenditure numbers from 2006 were adjusted to 2016 prices using the gross domestic product implicit price deflator (GDP deflator) from the Bureau of Economic Analysis [26]. The general price deflator was preferred to allow for differences in the social value of healthcare interventions [27]. Results We first estimated the marginal effect of obesity and overweight (in dollars), by BMI category, on overall healthcare spending (Table 1). This marginal effect represents the mean association of spending with obesity and overweight, controlling for other factors. The largest difference in spending was for the Obese 3 class; individuals with Class 3 obesity spent an average of $2719 more per person per year in 2016 than those in the normal weight class. This is significantly higher than for those in Obese 2, who spent an average of $1804 more per person per year, and Obese 1, where the mean additional spending was $1029 per person per year. The increase in healthcare spending in Class 3 is problematic because the proportion of individuals in Class 3 increased by 31.5% between 2006 and 2016 (from 3.8 to 5%). Surprisingly, the marginal effect was smaller in 2016 than in 2006 for all obesity classes, after adjusting for inflation. 
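A rough sketch of the BMI classification and the attributable-fraction calculation described in the Methods above (the gamma/log GLM is the one named there, but the column names, covariate list and statsmodels usage are my assumptions, and the two-part models used for inpatient and drug spending are not reproduced):

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

def bmi_class(bmi):
    """NIH BMI classes as defined in the Methods above."""
    if bmi < 18.5:
        return "underweight"
    if bmi < 25:
        return "normal"       # reference group in all models
    if bmi < 30:
        return "overweight"
    if bmi < 35:
        return "obese_1"
    if bmi < 40:
        return "obese_2"
    return "obese_3"          # extreme obesity

def attributable_fraction(df, bmi_cols):
    """AF = (predicted spending with the BMI dummies as observed minus
    predicted spending with all BMI dummies set to 0) / predicted total.
    'spend' and the covariate names are placeholders, not MEPS variable names."""
    formula = ("spend ~ " + " + ".join(bmi_cols)
               + " + age_cat + female + race + smoker + married + region + educ + income")
    fit = smf.glm(formula, data=df,
                  family=sm.families.Gamma(link=sm.families.links.Log())).fit()
    with_bmi = fit.predict(df).sum()
    counterfactual = df.copy()
    counterfactual[bmi_cols] = 0          # everyone moved to the reference (normal) class
    without_bmi = fit.predict(counterfactual).sum()
    return (with_bmi - without_bmi) / with_bmi

# e.g. attributable_fraction(meps_2016, ["overweight", "obese_1", "obese_2", "obese_3"])
```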
The largest decline was for Obese 3, which declined by 10.5%, from $3003 in 2006 to $2719 in 2016. This same trend was found for Obese 2, which decreased 16.67%, from $2165 to $1804, and Obese 1, which decreased 30.6%, from $1482 to $1029. Individuals in the overweight category were marginally significantly different (p < 0.1) from the reference group only in 2006, although the estimated coefficient for 2016 was similar to that in 2006. This time trend varies across payers (Table 2). Being overweight had no effect on spending overall. Although 33% of the population was overweight in 2016, the marginal effect overall (Table 1) and by payer (Table 2) was insignificantly different from zero for all models except private insurance in 2006. The reason for these trends is suggested by the service-line results in Table 3. Meanwhile, there was a small decrease for Obese 1 in non-inpatient spending. Prescription drug spending was relatively flat for Obese 3 and Obese 2 but declined from $643 to $379 for Obese 1. Table 3 suggests a shift in spending for obesity. For Obese 3, the most expensive spending category in 2006 was inpatient spending ($1110), followed by prescription drugs ($1031) and non-inpatient spending ($714). In contrast, the top expense in 2016 was for prescription drugs ($1046), with inpatient spending third ($727). Obese 2 showed the same general pattern: a very slight decline in drug spending, an increase in non-inpatient spending and a decrease in inpatient spending. Overall, changes in the attributable fraction of healthcare spending varied depending on the service line and BMI category (Table 4). For inpatient care, the direction of change differed across the obesity classes. Actual spending increased for prescription drugs and non-inpatient care for Obese 3, with a nearly $5B increase in prescription drug spending alone. Spending in Obese 2 had the largest overall increase, with an increase of nearly $4B in prescription drugs ($10.2B to $14.1B) and $6B in non-inpatient care. Spending for Obese 1 was largely flat for prescription drugs and non-inpatient care. Inpatient spending declined for Obesity Class 2 ($3.2B) and Obesity Class 3 ($3.4B) but increased for Obesity Class 1 ($4.1B). The effect by payer varied (Table 5). Medicare experienced an increase in attributable fraction for Obese 1 (from 3.1% in 2006 to 3.6% in 2016) and Obese 2 (1.8 to 2.4%) and a decline for Obese 3 (from 4.0 to 2.3%). Medicaid also experienced an increase in attributable fraction for Obese 1 (from 2.7 to 3.5%), while private insurance saw a decrease for that same class (from 7.0 to 3.6%). Both Medicaid and private insurance saw decreases in the attributable fraction for Obese 2 (3.8 to 2.2% and 4.2 to 2.9%, respectively). The pattern for Obese 3 was similar, with declines for Medicaid (6.8 to 4.3%) and private insurance largely unchanged (2.6 to 2.8%). Overall spending for Obese 1 decreased from $53.9 billion to $48.2 billion, while it increased for Obese 2 from $34.3B to $37.7B and for Obese 3 from $36.1B to $40.2B. Total spending increased from $124.3B to $126.1B, even though spending for Obese 1, the largest group, declined. The largest increase was in Obese 3 (from $36.1B to $40.2B), an increase that nearly matched the decrease in spending for Obese 1, despite the Obese 1 group being more than three times as large. Discussion In this paper we find that spending associated with obesity and overweight has changed in some important ways over the past 10 years. First, we show that spending on obesity is increasingly focused on individuals in Obesity Class 3 (Extreme). 
These individuals are 5% of the total population and only about one in six obese persons fall into this class. Yet more than a quarter of obesity related costs (26.1%) are concentrated in this group. And this is the group that is proportionately growing the fastest, with a 32% increase over the past decade. For other obesity classes and overweight, spending has been more effectively controlled and total spending has been relatively flat. The models separating the effect of changes in obesity prevalence and the relationship of obesity and spending indicate that the latter is the reason for the moderation in effect. This is largely due to a shift from inpatient care to outpatient care coupled with slight reductions in prescription drug spending. Also, despite the coverage expansions in the ACA, the majority of spending remains paid for by private insurance ($67B), rather than Medicare ($43B) or Medicaid ($16B). The higher spending in private insurance reflects the higher number of individuals with coverage of that type. Spending for overweight persons is insignificantly different from normal weight spending, which may suggest a lost opportunity to intervene. There are a number of limitations to this study. First, the analysis is based on MEPS data. Other data sources may have different spending numbers, particularly due to the inclusion or exclusion of long-term care spending. The advantage of MEPS is its widespread usage as a measure of healthcare spending, which allows comparisons to other studies. Second, our data is based on self-report height and weight as there are no nationally representative data set that includes both measured height/weight and annual medical spending [2]. Previous research concluded that reporting error in weight can lead to bias in estimates of the healthcare consequences of obesity and the extent of underreporting increases with measured weight [28]. Endogeneity is also possible in this study if there are unobserved characteristics associated with both insurance and obesity. "For example, unobserved socioeconomic status could be correlated with obesity creating bias in the obesity coefficient. " Comparisons across different studies should be done with caution because of differences in model design, sampling frame and control variables. Caution should be used in comparing differences between payers, given that patients with different arrangements face very different prices, which may lead to differences in demand or access for a given level of health. For example, individuals in Medicaid may struggle to access primary care which could lead to relatively lower outpatient spending and higher inpatient spending. We do not include the uninsured in our study because our focus is on changes by payer. Future research should examine how obesity and overweight affects the uninsured. We stratify the analysis by age and insurance status to be able to compare results to comparable work done with earlier years, so time trends could be established. It would have been interesting to stratify by gender as well, but it is difficult to treat the gender issue carefully without providing significant additional context and result tables. Finally, the interplay between age, obesity or overweight and chronic illness may become more important as people who became obese at a younger age enter the Medicare program. 
Conclusions Overall, we find that the obesity attributable fraction of healthcare spending has actually declined over the past decade, despite increased obesity and overweight prevalence. Our results suggest this success is due to the shift from inpatient to outpatient settings for care. These findings suggest there are two potential conclusions that may be drawn from this. First, obesity has not been the key driver of increases in healthcare spending over the past decade. Obesity related spending has increased, but other spending (the denominator) has increased more quickly. To understand why costs have increased over the past decade, analysts need to look for other culprits. Second, obesity may be a more important cost driver in the next decade. The proportion of the population which is obese is increasing. Over the past decade, the increased prevalence was offset with changes in the pattern of spending -inpatient to non-inpatient -which moderated the increase. Without further reductions in per capita spending, the effect of increases in the proportion of the population which is obese may have a larger effect on healthcare spending. This is particularly true because of the increase in extreme obesity. Future efforts to control obesity-related spending are likely to be most impactful if they concentrate on individuals with BMI over 40 as well as preventing individuals from progressing to high levels of obesity.
2022-05-13T13:51:34.133Z
2022-05-13T00:00:00.000
{ "year": 2022, "sha1": "6b1f295327927cd2945179d2c71edf94ba6ef0e9", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "6b1f295327927cd2945179d2c71edf94ba6ef0e9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
16182439
pes2o/s2orc
v3-fos-license
Association between Interferon-Inducible Protein 6 (IFI6) Polymorphisms and Hepatitis B Virus Clearance CD8+ T cells are key factors mediating hepatitis B virus (HBV) clearance. However, these cells are killed through HBV-induced apoptosis during the antigen-presenting period in HBV-induced chronic liver disease (CLD) patients. Interferon-inducible protein 6 (IFI6) delays type I interferon-induced apoptosis in cells. We hypothesized that single nucleotide polymorphisms (SNPs) in the IFI6 gene could affect the chronicity of CLD. The present study included a discovery stage, in which 195 CLD patients, including chronic hepatitis B (HEP) and cirrhosis patients, and 107 spontaneous recovery (SR) controls were analyzed. The genotype distributions of rs2808426 (C > T) and rs10902662 (C > T) were significantly different between the SR and HEP groups (odds ratio [OR], 6.60; 95% confidence interval [CI], 1.64 to 26.52, p = 0.008 for both SNPs) and between the SR and CLD groups (OR, 4.38; 95% CI, 1.25 to 15.26; p = 0.021 and OR, 4.12; 95% CI, 1.18 to 14.44; p = 0.027, respectively). The distribution of diplotypes that contained these SNPs was significantly different between the SR and HEP groups (OR, 6.58; 95% CI, 1.63 to 25.59; p = 0.008 and OR, 0.15; 95% CI, 0.04 to 0.61; p = 0.008, respectively) and between the SR and CLD groups (OR, 4.38; 95% CI, 1.25 to 15.26; p = 0.021 and OR, 4.12; 95% CI, 1.18 to 14.44; p = 0.027, respectively). We were unable to replicate these associations in the secondarily enrolled samples. A large-scale validation study should be performed to confirm the association between IFI6 and HBV clearance. Introduction Between 350 and 400 million people worldwide are chronically infected with the hepatitis B virus (HBV) [1,2]. In most HBV-infected patients, spontaneous recovery (SR) driven by the host immune system is common. However, 5% to 10% of patients fail to recover and remain as HBV-induced chronic liver disease (CLD) patients [3]. CLD, including HBV-induced chronic hepatitis B (HEP) and HBV-induced cirrhosis (CIR), is a major cause of hepatocellular carcinoma, which can lead to liver-related death [4]. The high mortality of CLD is a major problem in HBV-endemic countries [5]. In Korea, which is an HBV-endemic area, more than 70% of CLD patients are infected by HBV [6,7]. CD8+ T cells are key factors involved in the chronicity of CLD. The major roles of CD8+ T cells in HBV clearance are the production of interferon (IFN)-γ, which inhibits HBV gene expression and the assembly of HBV RNA-containing capsids, and the induction of apoptosis of virus-infected hepatocytes, which requires physical contact with CD8+ T cells [8][9][10][11]. However, the CD8+ T cells of CLD patients undergo activation-induced apoptosis instead of proliferation in the presence of antigen-presenting cells [12,13]. Apoptosis of antigen-specific CD8+ T cells in CLD patients and lymphocytic choriomeningitis virus (LCMV)-infected type I IFN receptor-null mice is mediated by B-cell lymphoma (Bcl)-2 [12,[14][15][16], indicating that type I IFN is critical to the survival of antigen-specific CD8+ T cells during the transition from acute to chronic HBV infection. Kolumam et al. [16] reported that type I IFN acts directly on CD8+ T cells to allow clonal expansion and memory formation in response to LCMV infection. Type I IFN 
receptor-null CD8+ T cells neither produce antiviral molecules, including IFN-γ, granzyme B, and tumor necrosis factor (TNF)-α, nor show reduced survival after antigen-induced stimulation [16]. Type I IFN signaling on CD8+ T cells is critical for their survival, proliferation, and antiviral functions [16]. IFNs are a well-known family of cytokines with antiviral effects [17,18]. IFNs modulate cellular proliferation and stimulate immune responses through several IFN-stimulated genes (ISGs) [19]. IFN-α-inducible protein 6 (IFI6) is a type I ISG [20][21][22] that maps to chromosome 1p35 [23] and is regulated by the Janus tyrosine kinase-signal transducer and activator of transcription (JAK-STAT) signaling pathway [24]. IFI6 is a mitochondria-targeted protein; it inhibits the release of cytochrome c from mitochondria and delays the apoptotic process initiated and transduced by the TNF-related apoptosis-inducing ligand/caspase 8 pathway [25]. The role of IFI6 is strongly associated with the immune system, but its antiviral effects are not well known [26]. In the present study, we hypothesized that IFI6 may be a survival-promoting factor for CD8+ T cells and therefore a determinant of the chronicity of HEP. The frequencies of IFI6 polymorphisms in CLD patients and SR controls were compared using logistic regression. Subjects for the case-control study The discovery stage included 305 blood samples obtained from the outpatient clinic of the Gastroenterology Department and from the Center for Health Promotion of Ajou University Hospital (Suwon, Korea) without gender or age restrictions between March 2002 and February 2006. Samples were derived from genetically unrelated Korean patients. The experimental protocol was approved by the institutional review board. Samples were divided into SR control (n = 107), HEP (n = 111), and CIR (n = 87) groups, according to serological markers and biopsy results. Three samples in the HEP group were not genotype-replicated and were excluded from the analysis. Finally, 107 SR control, 108 HEP, and 87 CIR patients were analyzed. In the replication stage, 736 blood samples were collected from Ajou University Hospital and Keimyung University (Daegu, Korea) between February 2006 and September 2012. Samples were derived from genetically unrelated Korean patients. The experimental protocol was approved by the institutional review board. Samples were divided into 205 SR controls, 437 HEP patients, and 94 CIR patients according to serological markers and biopsy results. All samples came from patients infected with HBV and were classified into one of the three groups by a pathologist, according to HBV infection status, clinical data, and serological profile. Every 6 months for >12 months, the 218 patients were subjected to serological tests for serum levels of hepatitis B core antibody (Anti-HBc II Reagent Kit; Abbott Laboratories, South Pasadena, CA, USA), hepatitis B surface antigen (HBsAg) (Anti-HBs; Abbott Laboratories), and hepatitis B surface antibody (HBsAb) (HBsAg; Abbott Laboratories). Liver function was evaluated by measuring aspartate aminotransferase (AST), alanine aminotransferase (ALT), albumin, and bilirubin levels using commercially available assays. All samples showed elevated ALT at least once during the follow-up period and were positive for HBV DNA, irrespective of hepatitis B e antigen (HBeAg) positivity. 
Patients in the SR group were HBsAg-negative, HBeAgnegative, anti-HBs-positive, and anti-HBc-positive and had recovered from HBV infection. Patients in the CLD group, including those in the HEP and CIR groups, were HBsAgpositive for more than 6 months with elevated ALT and AST (≥2 times the normal upper limit). Samples that were positive for anti-hepatitis C virus (Genedia HCV ELISA 3.0; GreenCross, Yoingin, Korea) or anti-immunodeficiency virus antibodies (HIV Ag/Ab combo; Abbott Laboratories) were excluded. Sample preparation All blood samples were stored at -80°C for the handling of human genomic DNA. Genomic DNA was purified using G-DEX blood genomic DNA (gDNA) purification kits (Intron Biotechnology Inc., Seongnam, Korea). The gDNA for the discovery analysis was quantified using the picogreen dsDNA quantification reagent following a standard protocol (Molecular Probes, Eugene, OR, USA). The plates were read using a VICTOR 3 1420 Multilabel counter (excitation 480 nm, emission 520 nm; PerkinElmer Inc., Waltham, MA, USA), and a standard curve for gDNA concentration was generated using known concentrations of lambda DNA. The quality of the gDNA analyzed in the replication stage was determined using a NanoDrop ND-1000 UV-Vis Spectrophotometer (Thermo, Eugene, OR, USA). Genomic DNA was diluted to a concentration of 10 ng/μL in 96-well PCR plates. Single nucleotide polymorphism (SNP) selection and genotyping In the discovery stage, six SNPs were selected from a public SNP database (http://www.ncbi.nlm.nih.gov/snp/) for the genotyping assay: 1) polymorphic in Chinese and Japanese; 2) tag SNPs in Asian; 3) might have functionality in protein or expression level. The selected SNPs were 1) one SNP in the 5' flanking region (rs2808426); 2) three intronic SNPs (rs10902662, rs1316896, and rs4908351); 3) one SNP in the untranslated region (rs1141747); and 4) one SNP in the 3′ flanking region (rs2808430). The genotyping was performed using the GoldenGate kit according to a standard protocol (Illumina Inc., San Diego, CA, USA). Oligos were amplified by allele-specific primer extension. After hybridization to a sentrix array matrix, signal intensities were read by BeadArray Reader (Illumina Inc.). Genotyping analysis was performed using GenomeStudio software (version 1.5.16; Illumina Inc.). In the replication stage, rs2808426, which was identified in the discovery stage, was genotyped using Taqman technology. The probes were labeled with FAM or VIC dye at the 5' end and a minor groove binder and nonfluorescent quencher at the 3' end. All reactions were performed following the supplier's protocol. SNP genotyping reactions were performed on the ABI PRISM 7900HT real-time PCR system (Applied Biosystems, Foster City, CA, USA). After the PCR amplification, allelic discrimination was performed on the ABI PRISM 7900HT. Allele calls were made with SDS v2.4 software (Applied Biosystems). Statistical analysis The genetic models for the association test were divided according to additive (AA vs. Aa vs. aa), dominant (AA vs. Aa plus aa), and recessive (AA plus Aa vs. aa) models. The χ 2 test was used to assess the Hardy-Weinberg equilibrium (HWE) in the SR, HEP, CIR, and CLD groups. The difference between groups was determined by the odds ratio (OR). ORs were presented with 95% confidence intervals (95% CIs) and adjusted for age and sex. Each individual haplotype was inferred from the EM algorithm using the SAS haplotype procedure (version 9.1; SAS Institute Inc., Cary, NC, USA). 
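The Hardy-Weinberg check named in the statistical analysis above is a one-degree-of-freedom chi-square goodness-of-fit test; a minimal sketch (the helper name and the example genotype counts are mine):

```python
from scipy.stats import chi2

def hwe_chi_square(n_AA, n_Aa, n_aa):
    """Chi-square goodness-of-fit test for Hardy-Weinberg equilibrium
    from observed genotype counts at a biallelic SNP (1 df)."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)         # frequency of allele A
    q = 1 - p
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    observed = [n_AA, n_Aa, n_aa]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return stat, chi2.sf(stat, df=1)

# Made-up counts purely for illustration:
stat, p_value = hwe_chi_square(80, 24, 3)
print(stat, p_value)
```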
Linkage disequilibrium (LD) blocks were checked by the Gabriel method using Haploview software (version 4.2; Broad Institute, Cambridge, MA, USA). All statistical tests were performed using SAS software, and the significance level was set at p < 0.05. The probability values obtained were corrected for multiple testing by using Bonferroni's correction and a permutation test. The Bonferroni threshold for significance was 0.025 (0.05/2). The Plink program was used to confirm the results and to perform the permutation test (n = 100,000; http://pngu.mgh.harvard.edu/~purcell/plink/). Results The fate of patients infected with HBV is determined by several factors, including host immune reactions. Type I IFNs play a key role in the defense against HBV infection and therefore in the prevention of chronic hepatitis. IFI6 is induced by type I IFN. To test the effect of IFI6 polymorphisms on the chronicity of HEP, samples were collected from SR controls (HBsAg-), who recovered from HBV infection without any treatment, and CLD patients, including HEP and CIR groups (HBsAg+), who were at risk of HBV infection. To first analyze whether variations in the IFI6 gene were associated with susceptibility to HEP in the Korean population, 107 controls in the SR group, 108 patients in the HEP group, and 87 patients in the CIR group were analyzed for six SNPs of IFI6 (n = 302). The characteristics of the study subjects are summarized in Table 1. In the first phase or discovery stage, four out of six SNPs (rs1316896, rs4908351, rs1141747, and rs2808430) were monomorphic. Genetic variants of rs2808426 and rs10902662 did not show evidence of departure from HWE in either of the groups (p > 0.05), and both had minor allele frequencies greater than 1% (Table 2). The results of the genotype analysis showed that the CC genotype was the most common for both the rs2808426 and rs10902662 polymorphisms in all groups. To analyze the genetic association between IFI6 polymorphisms and clearance from CLD, HEP, and CIR, multiple logistic regression analysis with adjustment for gender and age was performed. Comparison between the SR and HEP groups showed that the IFI6 SNPs rs2808426 and rs10902662 in the promoter region were associated with a higher risk linked to the homozygous variant TT genotype in a recessive model (OR, 6.60; 95% CI, 1.64 to 26.52; p = 0.008). After the permutation test, the rs2808426 and rs10902662 SNPs still had significant correlations (p = 0.001 in both genotype analyses), which were maintained after Bonferroni's correction (p = 0.016 in both genotype analyses) (Table 2). The results of the multiple logistic regression analysis comparing the SR and CIR groups showed that the rs2808426 and rs10902662 SNPs were not associated in any genetic model (Table 2). The possible genetic linkage between the rs2808426 and rs10902662 polymorphisms with respect to protection against chronic HBV infection was examined. LD blocks were constructed by the Gabriel method using Haploview software. The complete LD block consisted of rs2808426 and rs10902662 and showed a pairwise |D'| = 1 and r² = 0.942, which reflects strong LD. The variants across IFI6 consisted of a single LD block structure composed of two haplotypes (HTs). Diplotypes were formed from HT1 C-C (C allele of rs2808426; C allele of rs10902662) and HT2 T-T (T allele of rs2808426; T allele of rs10902662). 
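The pairwise |D'| and r² values reported just above follow from standard haplotype-frequency formulas; a minimal sketch (the function and its arguments are mine, not the Haploview implementation):

```python
def ld_stats(p_ab, p_a, p_b):
    """Pairwise LD measures |D'| and r^2 for two biallelic SNPs.
    p_ab: frequency of the A-B haplotype; p_a, p_b: frequencies of
    alleles A and B at the two SNPs."""
    d = p_ab - p_a * p_b
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    d_prime = abs(d / d_max) if d_max > 0 else 0.0
    r2 = d ** 2 / (p_a * (1 - p_a) * p_b * (1 - p_b))
    return d_prime, r2

# e.g. ld_stats(p_ab=0.12, p_a=0.13, p_b=0.13) with hypothetical frequencies
```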
The results of the HT estimation showed that the CC and TT haplotypes accounted for over 99% of the distribution in all groups. Estimated HTs were used for diplotype analysis under the three genetic models by logistic regression, adjusting for age and sex. In the recessive model, the HT1 diplotype frequency was significantly different between the SR group and the CLD (OR, 4.38; 95% CI, 1.25 to 15.26; p = 0.021) and HEP (OR, 6.67; 95% CI, 1.64 to 26.52; p = 0.008) groups. Analysis of the HT2 diplotype showed a significant difference between the SR group and the HEP (OR, 0.15; 95% CI, 0.04 to 0.61; p = 0.008) and CLD (OR, 4.12; 95% CI, 1.18 to 14.44; p = 0.027) groups (Table 3). All diplotype p-values remained significant after the permutation test (p < 0.022), and with the exception of HT2 in the SR and CLD groups, almost all of the diplotype p-values remained significant after Bonferroni's correction (p < 0.042). To replicate the significant associations of the SNP rs2808426, 736 samples, consisting of 205 SR, 437 HEP, and 94 CIR patients, were collected. The clinical information of the patients included in the analysis is summarized in Table 1. The second-stage genotyping was performed using the Taqman assay. The association of rs2808426 with CLD was assessed using the three genetic models, and multiple logistic regression with adjustment for gender and age was used, as in the first-stage analysis. The results of the genotype analysis of the second set of samples in association with CLD are summarized in Table 4. The significance of the results of the first genotype analysis was not maintained in the second genotype analysis. Furthermore, no significant associations were detected in a meta-analysis of the first-stage and second-stage samples (Table 4). Discussion The rs2808426 and rs10902662 SNPs are located in the 5' flanking region and the first intron of the IFI6 gene, respectively. These SNPs by themselves are known to regulate gene expression by causing alternative splicing or by changing the binding of a transcription factor or microRNA [21]. The presence of the rs2808426 SNP in the promoter region of IFI6 led us to screen for transcription factors with binding sites near or on rs2808426 (C > T). The binding of several transcription factors, including isoforms of the glucocorticoid receptor α, STAT4, v-ets erythroblastosis virus E26 oncogene homolog 1 (ETS1), and ETS2, to the protective allele (C) was predicted by ALGGEN PROMO (version 3.0.2; http://alggen.lsi.upc.es/cgi-bin/promo_v3/promo/promoinit.cgi?dirDB=TF_8.3) [27]. Interestingly, the binding of ETS1 to the region containing rs2808426 T was not predicted. Differential binding of ETS1 according to the genotype of rs2808426 may affect the expression of IFI6. Induction of IFI6 expression by type I IFNs involves the formation of IFN-stimulated gene factor 3 (ISGF3) complexes containing activated STAT1/STAT2 and IFN regulatory factor 9, and their translocation into the nucleus, where they bind to the tandem IFN-stimulated regulatory element (ISRE) in the promoter of IFI6 [21,[28][29][30][31]. Tandem binding of ISGF3 to the ISRE is required for maximum expression of IFI6 [32], and the promoter region, including rs2808426, enhances IFI6 expression more than the ISRE region alone [21]. The ISGF3-binding site in the ISRE is separated from the ETS1-binding site by about 1.35 kb. The transcription factor ETS1 may regulate the expression of intercellular adhesion molecule-1 by protein-protein interaction with STAT1, which is a component of ISGF3 [33]. 
Overexpression of ETS1 in the MCF-7 breast cancer cell line enhances the expression of IFI6 up to 18.4-fold [34]. These data led us to speculate that the interaction between ETS1 and STAT1 in the ISGF3 complex may increase the expression of IFI6. The present study investigated the association between the rs2808426 and rs10902662 polymorphisms of the IFI6 gene and the clearance of HBV in the Korean population by multistage comparison between the SR and CLD groups, including the HEP and CIR groups. In the first stage of the analysis, significant associations with the rs2808426 and rs10902662 genotypes and diplotypes were detected. An increased risk associated with the TT genotype of rs2808426 and rs10902662 was detected in the comparisons between the SR group and the CLD and HEP groups. Strong LD was found between the SNPs rs2808426 and rs10902662, spanning most of the promoter region. In addition, diplotype analysis showed that the C-C HT was associated with a higher chance of SR than the T-T/T-T diplotype and that the C-C HT had a protective effect. The results of the first-stage analysis suggested that rs2808426 and rs10902662 may serve as candidate genetic screening markers for HBV clearance or that causative variants that are responsible for HBV clearance may be present in this LD block. The association between IFI6 polymorphisms and HBV-induced chronic disease suggests that these polymorphisms might change the expression level of IFI6 by altering transcription factor binding. Therefore, an increase in IFI6 expression associated with polymorphisms of the gene could inhibit the release of cytochrome c from mitochondria and block the transmission of apoptosis signals through Bim in HBV-specific CD8+ T cells. HBV-specific CD8+ T cells would thus escape from antigen-induced apoptosis, proliferate, and then differentiate into activated CD8+ T cells to eliminate HBV from the host. The results of the first-stage analysis suggested that IFI6 polymorphisms play a significant role. In previous studies, CD8+ T cell-related gene polymorphisms, such as those of secreted phosphoprotein 1, interleukin-18, and cyclin D2, were reported to affect the natural course of chronic HBV infections in the Korean population, but the effect of their genetic association is minor (OR, 0.69 to 1.44) [35,36]. Furthermore, genome-wide association studies of human leukocyte antigen (HLA) region polymorphisms, including HLA-DPA1, HLA-DPB1, and HLA-DQ, demonstrated their association with the chronicity of HBV infection [37][38][39][40][41][42][43]. In our first-stage analysis, the protective effect of the rs2808426 and rs10902662 polymorphisms was stronger than that reported previously in studies addressing the association with HBV (OR, 6.60). The genotype and diplotype differences between groups remained significant after multiple-testing correction by Bonferroni's method and the permutation test. These results support the possibility that genetic variation in IFI6 affects the clearance of HBV. A second set of samples was used to replicate the results of the first-stage analysis. However, in the second association analysis, the comparison of the SR group with the HEP and CIR groups did not yield significant results, even when the first- and second-stage samples were merged in a meta-analysis. This could have been due to variation in the sampling cohort, environmental interactions, inadequate statistical power, or gene interactions [1,[44][45][46][47][48][49]. 
Furthermore, information on factors important for the progression of liver disease was lacking in the samples analyzed, such as data on alcohol consumption [50]. Although our data could not be reproduced, the results showing an association between IFI6 polymorphisms and HBV chronicity are significant. Our study is the first study to investigate the association between IFI6 polymorphisms and HBV clearance as an ISG. In addition, SR patients were used as controls instead of normal healthy subjects to show the effect of genomic background on the chronicity of HBV infection. Normal controls that never contracted HBV are not suitable to show the genetic effects. Future studies should include a larger sample size and additional information in the replication study to validate the significance of the results through epistasis and environmental interactions. In addition, IFI6 promoter variations should be characterized using next-generation sequencing techniques, causal variants should be identified, and mechanisms underlying the effect of IFI6 on HBV clearance that is mediated by HBV antigen-specific CD8+ T cell survival need to be investigated. In the present study, an initial discovery stage showed that the rs2808426 and rs10902662 genotypes and the corresponding diplotype were associated with a higher probability of HBV clearance in a Korean population. However, the results could not be replicated in a second stage with a different patient sample. Further studies should be aimed at showing how IFI6 affects HBV clearance by promoting HBV antigen-specific CD8+ T cell survival. Moreover, identification of causal variants in the IFI6 by including a large number of samples may help clarify the role of IFI6 on HBV clearance.
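For readers who want to reproduce this kind of two-stage case-control workflow, the sketch below illustrates the core step: single-SNP logistic regression under a recessive genetic model with adjustment for age and sex, followed by a Bonferroni adjustment for multiple testing. It is a minimal, self-contained example on synthetic data; the variable names, allele frequencies, effect sizes, and number of tests are hypothetical and are not taken from this study.

```python
# Illustrative sketch: case-control SNP association testing under a recessive
# model with age/sex adjustment. All data are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "genotype": rng.choice([0, 1, 2], size=n, p=[0.55, 0.36, 0.09]),  # copies of the T allele
    "age": rng.normal(45, 12, size=n).round(),
    "sex": rng.integers(0, 2, size=n),          # 0 = female, 1 = male (hypothetical coding)
})
# Recessive coding: indicator is 1 only for the TT genotype (two risk alleles).
df["recessive_TT"] = (df["genotype"] == 2).astype(int)
# Synthetic outcome: 1 = chronic liver disease (CLD), 0 = spontaneous recovery (SR).
logit_true = -0.5 + 0.9 * df["recessive_TT"] + 0.01 * (df["age"] - 45)
df["cld"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_true)))

X = sm.add_constant(df[["recessive_TT", "age", "sex"]])
fit = sm.Logit(df["cld"], X).fit(disp=False)

or_tt = np.exp(fit.params["recessive_TT"])
ci_low, ci_high = np.exp(fit.conf_int().loc["recessive_TT"])
p_raw = fit.pvalues["recessive_TT"]
n_tests = 2 * 3                       # e.g., 2 SNPs x 3 genetic models (hypothetical)
p_bonferroni = min(1.0, p_raw * n_tests)
print(f"OR={or_tt:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), "
      f"p={p_raw:.3g}, Bonferroni-adjusted p={p_bonferroni:.3g}")
```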
2018-04-03T00:55:41.426Z
2013-03-01T00:00:00.000
{ "year": 2013, "sha1": "8736a0fb9d70310573036e464d113f01cbd8d5ee", "oa_license": "CCBYNC", "oa_url": "http://genominfo.org/upload/pdf/gni-11-15.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8736a0fb9d70310573036e464d113f01cbd8d5ee", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
243834527
pes2o/s2orc
v3-fos-license
Phytoecdysteroids Accelerate Recovery of Skeletal Muscle Function Following in vivo Eccentric Contraction-Induced Injury in Adult and Old Mice Background: Eccentric muscle contractions are commonly used in exercise regimens, as well as in rehabilitation as a treatment against muscle atrophy and weakness. If repeated multiple times, eccentric contractions may result in skeletal muscle injury and loss of function. Skeletal muscle possesses the remarkable ability to repair and regenerate after an injury or damage; however, this ability is impaired with aging. Phytoecdysteroids are natural plant steroids that possess medicinal, pharmacological, and biological properties, with no adverse side effects in mammals. Previous research has demonstrated that administration of phytoecdysteroids, such as 20-hydroxyecdysone (20E), leads to an increase in protein synthesis signaling and skeletal muscle strength. Methods: To investigate whether 20E enhances skeletal muscle recovery from eccentric contraction-induced damage, adult (7–8 mo) and old (26–27 mo) mice were subjected to injurious eccentric contractions (EC), followed by 20E or placebo (PLA) supplementation for 7 days. Contractile function via torque-frequency relationships (TF) was measured three times in each mouse: pre- and post-EC, as well as after the 7-day recovery period. Mice were anesthetized with isoflurane and then electrically-stimulated isometric contractions were performed to obtain in vivo muscle function of the anterior crural muscle group before injury (pre), followed by 150 EC, and then again post-injury (post). Following recovery from anesthesia, mice received either 20E (50 mg•kg−1 BW) or PLA by oral gavage. Mice were gavaged daily for 6 days and on day 7, the TF relationship was reassessed (7-day). Results: EC resulted in significant reductions of muscle function post-injury, regardless of age or treatment condition (p < 0.001). 20E supplementation completely recovered muscle function after 7 days in both adult and old mice (pre vs. 7-day; p > 0.05), while PLA muscle function remained reduced (pre vs. 7-day; p < 0.01). In addition, histological markers of muscle damage appear lower in damaged muscle from 20E-treated mice after the 7-day recovery period, compared to PLA. Conclusions: Taken together, these findings demonstrate that 20E fully recovers skeletal muscle function in both adult and old mice just 7 days after eccentric contraction-induced damage. However, the underlying mechanics by which 20E contributes to the accelerated recovery from muscle damage warrant further investigation. INTRODUCTION Lengthening (eccentric) muscle contractions elicit higher force production, with lower energy cost, than shortening (concentric) contractions. While eccentric muscle contractions are performed on a daily basis (e.g., descending stairs), eccentric exercise has also been utilized in rehabilitative settings as an effective countermeasure against muscle atrophy and weakness, as well as a modality to treat tendinopathies (1). If repeated multiple times, eccentric contractions are a useful tool to induce a physiologically-relevant injury to skeletal muscle, resulting in damage to the contractile apparatus and loss of function (2,3). Skeletal muscle possesses the remarkable ability to repair and regenerate after an injury or damage; however, this ability is impaired with aging (4-7). The repair/regeneration of skeletal muscle tissue after damage relies on a series of highlycoordinated and time-dependent processes (8). 
While the exact mechanisms responsible for the impaired regenerative response in aged skeletal muscle continue to be explored, it appears that changes to inflammatory processes (9), protein metabolism (10), and endogenous hormones (11) all play significant roles. Evidence suggests that age-related alterations in immune cell (i.e., macrophage) phenotype may contribute to the impaired regenerative capacity of aged skeletal muscle. Greater increases in M2-like macrophages were observed in aged skeletal muscle, compared to young, in both humans (12) and mice (13) at the same time point during regeneration following eccentric damage. Interestingly, no age-related differences in pro-inflammatory cytokine expression (IL-1B, TNFα, IFNγ) have been observed in regenerating mouse muscle tissue (14). Additionally, the healthy maintenance of skeletal muscle mass is achieved by an intricate balance between protein synthesis and protein breakdown. It has been demonstrated extensively that aged skeletal muscle displays a reduced ability for anabolic stimuli [e.g., resistance exercise (15,16) and dietary protein ingestion (17,18)] to stimulate protein synthesis, resulting in a negative protein balance; a phenomenon aptly named "anabolic resistance." Lastly, correlations exist between declining gonadal hormones and diminished skeletal muscle health in elderly men (11) and women (19). Some previous studies have reported efficacy of hormone therapy (HT) on improving or maintaining skeletal muscle health in certain aged populations (20)(21)(22)(23); however, the long-term potential for adverse side effects with HT [e.g., increased risk for the development of certain types of cancers and cardiovascular events (24,25)] may outweigh the possible benefits. Clearly, the underlying mechanisms responsible for impaired regeneration in aged skeletal muscle are multifaceted. While treatments that target exercise, dietary protein intake, and endogenous hormones have shown marginal success, there is still a great need for developing effective, natural interventions to enhance muscle regeneration with aging. Phytoecdysteroids (PEs) are natural plant steroids found in a variety of hardy plant species, such as Ajuga and Leuzea, as well as commonly consumed spinach (Spinacia oleracea). PEs possess a plethora of medicinal, pharmacological, and biological properties, with no adverse side effects in mammals (26,27). Characterized as polyhydroxylated basic carbon ring structures of 27-29 carbons, PEs elicit immunoprotective, antioxidant, anabolic, hepatoprotective, hypoglycemic, and physical performance enhancing effects (28). While over 250 different PEs have been identified, 20-hydroxyecdysone (20E) is the most widely investigated. It has been suggested that the anabolic effects of 20E are mediated via a G-protein coupled cell surface receptor (29), as opposed to an intracellular androgen receptor. Thus, 20E is considered anabolic, but non-androgenic since it does not increase prostate or seminal vesicle mass in young castrated rats after 10 days of treatment (30), nor does it alter organ or testes mass in aging mice with 28 days of treatment (31). Regarding skeletal muscle, 20E has been reported to increase grip strength in young rats and stimulate protein synthesis via the PI3K/Akt pathway in C2C12 myotubes in vitro (32). Further, Toth et al. 
(33) reported that 7 days of 20E treatment increases fiber cross sectional area in healthy soleus, but not the extensor digitorum longus muscle, as well as enhances muscle growth in regenerating (myotoxin-injected) soleus muscles of young rats. Conversely, we recently reported that 28 days of 20E treatment does not alter muscle mass or fiber size, nor does a single acute treatment of 20E stimulate anabolic signaling in skeletal muscle tissue from sedentary aging C57BL/6 mice (31). From these findings, we concluded that a concurrent stress (e.g., recovery from damage) may be required for 20E to elicit beneficial effects on skeletal muscle. Taken together, it appears that 20E may have the potential to modulate muscle regeneration. With a few exceptions [e.g., (33)], most studies to date have only examined the effect of phytoecdysteroids on muscle size or mass in sedentary, healthy skeletal muscle. No studies have assessed the functional characteristics of skeletal muscle after injury/damage with phytoecdysteroid treatment. Therefore, the purpose of this study was to investigate if 20E accelerates the functional recovery of skeletal muscle after eccentric damage in adult and old mice. Experimental Design Male C57BL/6 adult [7.4 ± 0.1 and 7.8 ± 0.4 months of age in the PLA (n = 4) and 20E (n = 7) treatment groups, respectively] and old [26.4 ± 0.4 and 26.5 ± 0.4 months of age in the PLA (n = 7) and 20E (n = 7) treatment groups, respectively] were used in this study. The ages of the adult mice in this study were the equivalent of 30-35 year-old humans, while the old mice were the equivalent of 70-75 year-old humans (9). First, mice were anesthetized (4% isoflurane and maintained with 2% isoflurane) and pre-eccentric damage in vivo contractile function of the anterior crural muscle group [tibialis anterior (TA), extensor digitorum longus (EDL), and extensor hallucis longus] was measured. Mice were immediately subjected to the eccentric contraction-induced muscle damage protocol, and then in vivo contractile function was reassessed. Upon completion of the post-eccentric damage contractile function test, mice were allowed to recover from anesthesia and then returned to their cage. Once fully recovered from anesthesia, mice were randomly assigned to either the placebo control treatment group (PLA) or the 20-hydroxyecdysone treatment group (20E) and received the first treatment via oral gavage. Mice were administered daily treatment doses, at approximately the same time of day, for 6 days. Twenty-four hours after the seventh and final treatment dose, mice were anesthetized again and 7-day recovery in vivo contractile function was measured, as described above (repeated measures design). Finally, mice were sacrificed under anesthesia and the TA and EDL muscles were harvested, weighed for mass, and mounted for histological analysis. Phytoecdysteroid Treatment Mice assigned to the 20E treatment groups received daily doses of 50 mg • kg −1 body mass (BM) 20E (E6425-HE; Bosche Scientific, New Brunswick, NJ, USA) dissolved in phosphate-buffered saline (PBS) via oral gavage for 7 days. Mice assigned to the vehicle control treatment groups (placebo; PLA) received daily doses of equivalent volume PBS for 7 days. BM was recorded each day. The dose of 50 mg • kg −1 BM is based on the previous studies by Gorelick-Feldman et al. (32) and Lawrence et al. (31). 
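As a worked example of the dosing described above, the short helper below converts the 50 mg·kg−1 body-mass dose into a per-mouse amount and gavage volume. It is only a sketch: the stock concentration and the example body masses are assumed values for illustration, not figures reported in this study.

```python
# Minimal dosing helper for a 50 mg/kg oral gavage, as described above.
# The stock concentration and body masses are hypothetical examples only.
DOSE_MG_PER_KG = 50.0          # 20E dose from the protocol above
STOCK_MG_PER_ML = 12.5         # assumed 20E concentration in PBS (hypothetical)

def gavage_dose(body_mass_g: float) -> tuple[float, float]:
    """Return (mg of 20E, mL of stock solution) for one daily gavage."""
    dose_mg = DOSE_MG_PER_KG * body_mass_g / 1000.0   # convert g body mass to kg
    volume_ml = dose_mg / STOCK_MG_PER_ML
    return dose_mg, volume_ml

for mass_g in (28.0, 32.5, 36.0):   # example adult/old mouse body masses (hypothetical)
    mg, ml = gavage_dose(mass_g)
    print(f"{mass_g:.1f} g mouse -> {mg:.2f} mg 20E in {ml:.3f} mL")
```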
In vivo Contractile Function Testing and Eccentric Contraction-Induced Damage Protocol Contractile function of the anterior crural muscles was measured three times in each mouse: pre-and post-eccentric damage, as well as after the 7-day treatment period, via the torque-frequency relationship, as previously described (34,35). Under anesthesia, hair on the left hindlimb was removed with dilapidation cream and thoroughly rinsed with water. Mice were then placed on a heated (37 • C) platform and the left foot was secured with an aluminum foot-cover and tape to the footplate affixed to the shaft of a dual-mode servomotor (300B-LR, Aurora Scientific, Aurora, ON, Canada). A clamp secured to a micro-manipulator (World Precision Instruments, Sarasota, FL) was used to position and hold the left knee in place during the procedure. The ankle joint was held at 90 • of passive dorsiflexion with respect to the tibia and the tibia was positioned at 90 • with respect to the femur. Sterilized 30-gauge needle electrodes (Grass Instruments, Warwick, RI) were inserted through the skin for stimulation of the left common peroneal nerve, each was positioned and held in place with a micro-manipulator. Single isometric contractions (1 Hz) were used to obtain initial needle electrode placement; optimal stimulation voltage (5-10 volts) and needle electrode placement was confirmed by 5-10 isometric contractions (200ms train duration, 0.1-ms pulse width at 300 Hz). Following needle electrode placement, a torque-frequency curve measured peak isometric torque produced by the anterior crural muscle group at 10 ascending stimulation frequencies, from 20 to 300 Hz, with 2 min rest between each contraction. After completion of the initial pre-eccentric damage in vivo contractile function testing, the anterior crural muscle group was immediately subjected to the eccentric contraction-induced muscle damage protocol using 150 eccentric contractions (300 Hz, 120-ms train of 0.1-ms pulses, with 38 • of angular movement at 2,000 • •s −1 starting in 19 • of dorsiflexion, moving to 19 • of plantarflexion) described by Corona et al. (35). Muscle function was also assessed every 10th eccentric contraction via individual 300 Hz isometric contractions (Figure 1). Warren et al. previously reported that decreases in isometric torque during this eccentric contraction-induced muscle damage protocol are the result of muscle injury, not fatigue (36). After completion of the eccentric contraction-induced damage protocol and a 5min delay, post-eccentric damage in vivo contractile function was measured. Finally, after the 7-day treatment period, mice were anesthetized and 7-day recovery in vivo contractile function was measured as described above. Additionally, separate groups (n = 10/each) of sham-treated adult and old mice were utilized as experimental controls. Mice in the sham-treated groups performed all of the same experimental procedures described above (including the 7-day 20E and PLA treatments), except they were not subjected to the eccentric damage protocol. During the time required to complete the eccentric damage protocol (∼30 min), mice remained anesthetized and resting before continuing with the post-and eventually the 7-day sham-recovery contractile function tests. 
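To make the damage-protocol bookkeeping concrete, the following sketch expresses the isometric torque recorded after every 10th eccentric contraction as a percentage of the initial value, which is how the protocol data are summarized in Figure 1. The torque values used here are synthetic placeholders chosen only to resemble the reported 40-50% decline.

```python
# Bookkeeping sketch for the damage protocol above: isometric torque measured
# after every 10th eccentric contraction, expressed as a percentage of initial.
# The torque values are synthetic placeholders, not measured data.
import numpy as np

contraction_number = np.arange(0, 151, 10)           # 0, 10, ..., 150
torque_mNm = np.array([2.10, 1.95, 1.80, 1.68, 1.58, 1.50, 1.43, 1.37,
                       1.32, 1.27, 1.23, 1.20, 1.17, 1.14, 1.12, 1.10])
percent_initial = 100.0 * torque_mNm / torque_mNm[0]

for n, pct in zip(contraction_number, percent_initial):
    print(f"contraction {n:3d}: {pct:5.1f}% of initial torque")
print(f"overall decline: {100 - percent_initial[-1]:.1f}%")
```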
Contractile Function Data Acquisition and Analysis

The muscle lever system (Aurora Scientific 1300A), stimulator, and force transducer were connected to a signal interface (Aurora Scientific, Model 610A) that sends the analog signal to an analog-to-digital converter card (National Instruments, Austin, TX) on a computer with Dynamic Muscle Control software (Aurora Scientific, DMC 610A). The force output data were analyzed using Dynamic Muscle Analysis software (Aurora Scientific, DMA 610A). Raw (group mean) torque-frequency relationships displayed in Figure 2 and Supplementary Figure 1 were analyzed for statistical significance prior to modeling. To generate the EC50, torque-frequency relationship data were modeled with the following four-parameter logistic fit equation using GraphPad Prism (version 8.4.3, GraphPad Software, San Diego, California, USA):

torque(x) = min + (max − min) / (1 + (EC50 / x)^n)

where x is the stimulation frequency; min and max are the smallest (i.e., twitch) and largest (i.e., peak tetanic) torques estimated from the best-fit torque-frequency relationship curve, respectively; EC50 is the stimulation frequency required to generate 50% of the maximal estimated torque (max − min); and n is the Hill slope coefficient indicating the slope of the steepest portion of the estimated torque-frequency relationship curve (37). The EC50 provides further insight into contractile function from the estimated torque-frequency relationships (38).

Histological Analyses

The TA muscle was harvested from the left hindlimb and mounted for histological analysis following measurement of 7-day recovery in vivo contractile function. Harvested tissues were mounted on cork using a mixture of tragacanth gum and optimal cutting temperature medium (Fisher Scientific, Houston, TX), frozen in liquid nitrogen-cooled isopentane, and stored at −80°C until sectioning. TA muscle samples were cut into 10 µm cross-sections using a cryostat (CryoStar Model HM 505; ThermoFisher Scientific Inc.) and mounted on positively charged microscope slides. Muscle section quality, tissue integrity, and markers of muscle damage were assessed using common histological staining of cytosolic and nuclear components with Mayer's hematoxylin and eosin (H&E) solutions (Millipore Sigma, St. Louis, MO). H&E-stained muscle sections were imaged with 4× and 10× objectives using an Olympus IX81 light microscope and cellSens Imaging Software (Olympus, Waltham, MA). Markers of muscle damage (e.g., edema, overt fiber damage, presence of infiltrating inflammatory cells, and centrally-located myonuclei) (39-41) were assessed in each image using qualitative indices on a scale of 0-3 to provide a muscle damage score: 0 = no apparent muscle damage; 1 = minimal muscle damage; 2 = moderate muscle damage; and 3 = severe muscle damage. Skeletal muscle repair and regeneration after damage are often accompanied by remodeling of connective tissue. Gomori trichrome staining (Millipore Sigma, St. Louis, MO) was performed on TA muscle sections (10 µm), as this technique differentiates skeletal muscle from connective tissue. Gomori trichrome-stained muscle sections were imaged with 4× and 10× objectives using an Olympus IX81 light microscope and cellSens Imaging Software. No analyses were performed on Gomori trichrome-stained sections.

Statistical Analyses

All data are expressed as mean ± SEM. Torque-frequency relationship data were analyzed using a repeated measures three-way factorial ANOVA (treatment, two levels; time, three levels; stimulation frequency, 10 levels).
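The four-parameter logistic fit described in the data acquisition section above can also be reproduced outside GraphPad Prism; the sketch below fits the same model with SciPy to estimate EC50 from a torque-frequency curve. The ten stimulation frequencies follow the protocol's 20-300 Hz range, but the intermediate frequencies and all torque values are assumed for illustration only.

```python
# Sketch of the four-parameter logistic fit described above, used to estimate
# EC50 from a torque-frequency curve. Frequencies span 20-300 Hz as in the
# protocol; intermediate frequencies and torque values are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(freq, t_min, t_max, ec50, hill_n):
    """Torque as a function of stimulation frequency (Hz)."""
    return t_min + (t_max - t_min) / (1.0 + (ec50 / freq) ** hill_n)

freq_hz = np.array([20, 40, 60, 80, 100, 125, 150, 200, 250, 300], dtype=float)
torque = np.array([0.35, 0.60, 1.05, 1.55, 1.90, 2.10, 2.20, 2.27, 2.30, 2.30])

p0 = [torque.min(), torque.max(), 80.0, 3.0]          # initial parameter guesses
params, _ = curve_fit(four_param_logistic, freq_hz, torque, p0=p0, maxfev=10000)
t_min, t_max, ec50, hill_n = params
print(f"min={t_min:.2f}  max={t_max:.2f}  EC50={ec50:.1f} Hz  Hill slope={hill_n:.2f}")
```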
Measures of contractile performance (eccentric damage isometric contractions, twitch torque, maximal tetanic torque, twitch:tetanic ratio, and EC50) and animal body masses were analyzed using a repeated measures three-way factorial ANOVA (treatment, two levels; time, two or three levels; age, two levels). Individual muscle masses were analyzed using a repeated measures two-way factorial ANOVA (treatment, two levels; age, two levels). The a priori level of significance was set at p < 0.05. Following a significant F-ratio, Fisher's LSD pairwise post-hoc comparisons were made. Data were analyzed using SPSS (IBM Corp., Armonk, NY). No statistical analyses were performed for the qualitative analysis of muscle damage scores.

Animal Subjects and Skeletal Muscle Mass

Animal subject characteristics are shown in Table 1. Body mass did not differ between any treatment groups (time × treatment × age interaction, p = 0.875); however, all groups lost body mass between the initial and the 7-day recovery time points (main effect of time, p = 0.008). Furthermore, TA and EDL muscle masses (normalized to body mass; mg•g−1) were significantly lower in the old mice (main effect of age, p = 0.003 for both TA and EDL), regardless of treatment (treatment × age interaction, p = 0.453 for TA and p = 0.326 for EDL). Animal and muscle characteristics for sham-treated groups followed similar patterns of significance as the eccentric damage groups stated above (Supplementary Table 1).

Eccentric Contraction-Induced Muscle Damage

The eccentric contraction-induced muscle damage protocol resulted in a ∼40-50% decline (main effect of time, p < 0.001) in anterior crural muscle group contractile function, measured by maximal isometric torque (Figure 1). Additionally, there were no differences in the loss of contractile function in response to eccentric damage between ages of mice or treatment groups (time × treatment × age interaction, p = 0.740).

In vivo Skeletal Muscle Contractile Function

To investigate if 20E accelerates the recovery of skeletal muscle after eccentric damage in adult and old mice, in vivo skeletal muscle contractile function was measured before, immediately after, and at 7 days of recovery from eccentric damage. Indeed, the eccentric contraction-induced damage protocol significantly reduced isometric torque immediately post-damage in both adult and old mice (main effect of time, p < 0.001; Figures 2B,E, respectively; time × treatment × age interaction, p = 0.753). Most remarkable, however, was the significant time × treatment interaction (p < 0.001) demonstrating that 20E treatment resulted in full recovery of isometric torque at 7 days post-eccentric damage in both adult and old mice, while neither PLA-treated group (adult or old) recovered by 7 days (Figures 2C,F, respectively). Additionally, we performed experiments on adult and old sham-treated mice that had not performed the eccentric damage protocol. In vivo skeletal muscle contractile function was not altered over time or treatment in any of the adult or old sham-treated groups (time × treatment × age interaction, p = 0.766), thus demonstrating the repeatability of our experimental procedures (Supplementary Figure 1).

FIGURE 1 | Percent of initial isometric torque during the 150 eccentric contraction-induced muscle damage protocol. Maximal tetanic isometric torque of the anterior crural muscles was measured after every 10th eccentric contraction of the damage protocol in adult and old mice. There was a significant 40-50% decline in isometric torque after the 150-eccentric contraction muscle damage protocol, regardless of age or treatment condition. Values represent mean ± SEM. PLA, placebo; 20E, 20-hydroxyecdysone. * significantly different than initial; p < 0.001.

Further analysis of contractile function revealed that there was only a significant main effect of time (p < 0.001) in twitch torque (20 Hz), as twitch was significantly lower in all treatment groups and ages immediately after eccentric damage (p < 0.001) and remained lower at 7 days (p = 0.032; Figures 3A,E, respectively), compared to pre-damage. While no time × treatment × age interaction (p = 0.677) existed in maximal tetanic torque (250 Hz), there was a significant time × treatment interaction (p < 0.001). Maximal tetanic torque was significantly lower immediately after eccentric damage in all treatment groups and ages (p < 0.001); however, only the 20E-treated groups in both adult (Figure 3B) and old mice (Figure 3F) fully recovered maximal tetanic torque by returning to pre-damage levels at 7 days of recovery. Similar to twitch torque, there was only a significant main effect of time in the twitch to tetanic ratio (p < 0.001), as the ratio was significantly lower in all groups immediately after eccentric damage (Figures 3C,G), but recovered within 7 days of recovery (time × treatment × age interaction, p = 0.630). Furthermore, there were no age- or treatment-related differences regarding how eccentric damage caused muscle contractile dysfunction or recovery from muscle damage (all treatment × age interactions, p > 0.500). In other words, adult and old mice displayed similar responses to eccentric muscle damage and recovery with 20E treatment. There was no time × treatment × age interaction (p = 0.586) for EC50; however, there was a significant main effect of time (p < 0.001), such that EC50 was significantly higher in all age and treatment groups immediately post-damage (p < 0.001), before beginning to return back to pre-damage levels by 7 days of recovery, i.e., there was still a significant difference between the 7-d and Pre (p = 0.009) time points, but also a difference between the 7-d and Post (p = 0.002) time points (Figures 3D,H). These data signify that a greater stimulation frequency is required to elicit 50% of maximal tetanic torque immediately after eccentric damage.

Histology

H&E staining was performed to observe morphological muscle damage (i.e., edema, fiber damage, presence of infiltrating inflammatory cells, and centrally-located myonuclei). Markers of muscle damage were minimally observed (muscle damage scores ≤ 1.0 on a scale of 0-3) at 7 days post-injury in TA muscles subjected to eccentric damage (Figures 4Av-viii), compared to sham-treated muscles (Figures 4Ai-iv). Muscle damage scores obtained via qualitative/semi-quantitative analyses revealed that muscles in the PLA-treated groups appear to have higher muscle damage scores, compared to 20E-treated groups (39-41). Furthermore, the old PLA-treated mice appeared to have the greatest muscle damage score at 7 days post-injury, compared to any other group (Figure 4B). Gomori trichrome staining of the TA muscles revealed that connective tissue staining appears to be higher in the muscles subjected to eccentric damage (Figures 5E-H) compared to sham-treated muscles (Figures 5A-D).
It does not appear that 20E treatment has any effect on connective tissue staining in regenerating muscle tissue at 7 days post-injury.

DISCUSSION

When skeletal muscle is subjected to contractile stress and/or strain that supersedes the normal capabilities of the muscle, an injury or damage occurs and functional capacity (i.e., muscle torque/force or strength) is diminished. Here in this study, as with many others (34, 35, 42-44), we reaffirm that functional capacity is significantly impaired immediately, and for several days after, 150 eccentric contractions. This eccentric contraction-induced damage protocol results in ∼40-50% reductions in isometric torque of the anterior crural muscles (Figure 1). Consistent with previous studies in rodents (45,46), these declines in functional capacity with eccentric contractions were similar between the adult and old mice and between treatments. In other words, all mice in this study sustained similar degrees of muscle damage, regardless of age or treatment.

FIGURE 3 | Isometric twitch torque, maximal tetanic torque, twitch:tetanic ratio, and EC50 of the anterior crural muscles in Adult and Old mice at the pre-injury, immediately post-injury, and 7-day recovery time points. Analysis revealed significant reductions in twitch torque (20 Hz; A,E) in all groups immediately after (Post) and at 7 days (7-d) following the eccentric contraction-induced muscle damage protocol, regardless of treatment condition. Maximal tetanic torque (250 Hz) was also significantly reduced in both adult (B) and old mice (F) Post eccentric damage, compared to Pre, regardless of treatment condition. Only the 20E-treated adult and old mice fully recovered maximal tetanic torque to pre-injury levels by 7-d. Twitch:Tetanic ratio was significantly reduced at Post in both adult (C) and old mice (G), regardless of treatment condition, compared to Pre, but Twitch:Tetanic recovered by 7-d. EC50 was significantly higher in all age and treatment groups at Post, but EC50 at 7-d was still significantly different than Post and Pre (D,H). Values represent mean ± SEM. PLA, placebo; 20E, 20-hydroxyecdysone; EC50, stimulation frequency required to elicit 50% of maximal tetanic torque. * significantly different than Pre within the same group (p < 0.05); & significantly different than Post within the same group (p < 0.05).

It is important to note that 20E (or placebo) treatments did not begin until the mouse had recovered from anesthesia after the initial muscle function testing session, which included the eccentric damage protocol. Therefore, it is not possible for 20E to have provided any prophylactic or protective effects on muscles suffering eccentric damage. Arguably, functional loss of strength is the most important and reliable indicator of skeletal muscle injury, not to mention the clinical implications that loss of muscle function imparts (3). The force-producing capabilities of the muscle usually return to baseline (pre-injury) levels between 2 and 4 weeks post-eccentric contraction-induced injury (34,42,44,46,47). But this process may take longer depending on the severity of damage incurred (48). The regenerative capacity of skeletal muscle is impaired during aging, and this can lead to prolonged or incomplete recovery from eccentric damage, as well as further loss of muscle mass and strength.
Seminal work by Brooks and Faulkner (46) demonstrated that while young and aged mice respond similarly to eccentric damage (i.e., loss of strength and markers of muscle damage), young mice had recovered muscle function by 4 weeks post-damage, whereas aged mice had not. The most intriguing and novel finding of the current study was that 20-hydroxyecdysone (20E) completely recovered skeletal muscle functional capacity by 7 days postinjury, not only in the adult mice, but also in the old mice. Based on only measuring a single time point and the relatively short recovery period examined in the current study (7 days), it is difficult to forecast just how long it would have taken for the placebo-treated mice in our study to recover muscle function back to pre-injury levels. It seems plausible that some, but not full recovery of muscle function is possible in just 7 days after eccentric muscle damage (35,49,50). This study demonstrates, for the first time, that oral supplementation with phytoecdysteroids accelerates the functional recovery of skeletal muscle in both adult and old mice after eccentric contractioninduced injury. A major tenet of muscle contraction and functional capacity is the ability for a neural impulse to stimulate the release of calcium (Ca +2 ) from the sarcoplasmic reticulum (SR) leading to the development of force at the sarcomere, a process widely known as excitation-contraction (EC) coupling. Traditionally, it was thought that disruption of the force-generating and force-transferring structures of the sarcomere was responsible for the loss of muscle function after eccentric contractioninduced muscle damage (51). However, there is a dissociation between muscle function and histological markers of muscle damage that disputes the latter causal relationship. Recall that the greatest declines in muscle function occur immediately following eccentric contraction-induced injury, yet markers of muscle damage are not fully apparent until 1-2 days after injury (34,47,52). Studies from the past few decades have established that the early functional loss of strength from eccentric contractions (0-5 days post-injury) is primarily the result of EC coupling dysfunction, not contractile structure disruption. Warren et al. (53) were the first to describe eccentric contraction-induced EC coupling dysfunction in mouse soleus muscle in vitro. Later, Ingalls et al. (47), exhibited that the site for EC dysfunction in response to eccentric damage lies at the level of the t-tubule and the SR Ca +2 release channel (i.e., the "triad"), specifically via disruption of the interface between dihydropyridine receptors (DHPR) and ryanodine receptors (RyR1). The decreased twitch to tetanic ratio observed immediately after injury provides an indirect indication of EC coupling dysfunction with eccentric damage; however, this impairment was recovered by 7 days, regardless of treatment or age (50,54). While 20E elicits a rapid, but transient (30-180 s) increase in intracellular [Ca +2 ] in C2C12 myotubes in vitro (29), it is highly unlikely that 20E is producing any long-term effects on Ca +2 kinetics or repair of the dysfunctional EC coupling mechanism resulting in the observed rescue of muscle function at 7 days in this study. It has been estimated that EC coupling dysfunction is responsible for 57-75% of the functional loss of muscle strength acutely after eccentric contractions (51). 
However, in addition to reversing EC coupling dysfunction, other mechanisms also contribute to the recovery of muscle function observed in the days to weeks following eccentric damage. As the normal progression of damage and regeneration processes occur, disruptions to sarcomeric (contractile apparatus) and sarcolemmal (membrane) structures have been observed within days of the eccentric exercise bout (35,55,56). Furthermore, there is sufficient evidence that alterations in skeletal muscle protein metabolism may be responsible for the remaining strength deficits in the 1-2 weeks after eccentric damage (34,42). Therefore, a need to restore protein balance may account for the prolonged recovery time. Skeletal muscle protein synthesis signaling, via activation of the mTORC1 pathway, is stimulated in the range of 1-7 days post-damage in response to various eccentric or lengthening contraction protocols in rodents and humans (57)(58)(59)(60)(61). Phytoecdysteroids, particularly 20E, promote anabolic responses in many tissues, including skeletal muscle. Gorelick-Feldman et al. demonstrated that 20E stimulates protein synthesis (32), primarily through the activation of PI3K/Akt signaling (29), in a dose-dependent manner in C2C12 myotubes in vitro. Furthermore, pre-treatment of myotubes with the G protein inhibitor, Bordetella pertussis toxin (PTX), abolishes the 20E effect on Ca +2 influx and Akt activation, suggesting that 20E functions through a G proteincoupled cell surface receptor (29). We recently reported that 20E does not provide an anabolic stimulus, via activation of the PI3K/Akt/mTORC1 pathway, in skeletal muscle from aging sedentary mice, despite using the same dose of 20E as in the current study (50 mg•kg −1 BM) (31). From this study, we concluded that an additional stimulus, for example exercise or injury, may be required for 20E to elicit anabolic effects on skeletal muscle. The novel observation that 20E treatment fully recovers skeletal muscle function in just 7 days after eccentric damage is much earlier than other studies using similar eccentric damage protocols in mice (34,35,42,47). Therefore, we conclude that 20E accelerates the functional recovery of muscle after eccentric damage. While we cannot definitively state that the recovery of eccentric contraction-induced skeletal muscle dysfunction after just 7 days with 20E supplementation is due solely to the anabolic effects of 20E (via PI3K/Akt/mTORC1 signaling), based on previous literature it seems plausible that 20E may provide an anabolic stimulus to damaged muscle leading to accelerated recovery of muscle function (62). Moreover, since aged skeletal muscle displays anabolic resistance, 20E may be able to stimulate alternative anabolic pathways to those traditionally responsible for the anabolic resistance in aged muscle. Thus, daily 20E treatments, starting immediately after eccentric damage, could be providing an anabolic stimulus required to accelerate muscle recovery after eccentric damage, compared to placebotreated mice. Whether 20E functions via the canonical protein synthesis pathway leading to recovery of skeletal muscle function needs to be investigated in future studies. Skeletal muscle damage triggers a widespread series of events that can essentially be divided into two main stages: tissue degeneration and tissue regeneration. Tissue degeneration is necessary for removing damaged and dysfunctional structures, whereas tissue regeneration works to repair or rebuild muscle structures and regain function. 
Both stages rely heavily on mechanisms of inflammatory and myogenic pathways, not to mention many others. As discussed previously, inflammation is not responsible for the immediate decline of muscle function with eccentric contractions. However, inflammatory processes are essential for the successful regeneration of skeletal muscle and recovery of muscle function after injury (9). One of the hallmarks of histological muscle damage is the ordered infiltration of immune cells; first to respond are the neutrophils that appear in the first 24 h after injury, followed by M1-like macrophages around day 2, and finally M2-like macrophages by day 4 post-injury (8). In the current study we did not assess immune cells or inflammatory mechanisms, but phytoecdysteroids, including 20E, have been suggested to possess anti-inflammatory properties (27). While the anti-inflammatory effects of 20E directly on skeletal muscle are not known, previous reports suggest that the PI3K/Akt/mTORC1 pathway regulates immune (macrophage) cell function (63,64). Theoretically, if 20E treatment is activating the PI3K/Akt/mTORC1 pathway during the 7-day recovery period, this could be potentially beneficial in managing the immune cell/inflammatory response to eccentric muscle damage and accelerating muscle recovery. However, we have no direct evidence that phytoecdysteroids influence inflammatory mechanisms during recovery from eccentric muscle damage, but this area certainly warrants further investigation. Regarding our histological findings, it appears that markers of muscle damage are still apparent, albeit minimal, after 7 days of recovery from eccentric damage. It is interesting that despite a full recovery of muscle function (i.e., isometric torque) at 7 days in our 20E-treated groups, markers of muscle damage remain evident. This is not surprising as similar findings have been described previously wherein muscle function had recovered, but markers of muscle damage and regeneration, particularly centralized nuclei, are still visible weeks after eccentric contraction-induced muscle damage (35,44). As described above, much of the inflammatory/immune response contributing to muscle regeneration has run its course by 7 days post-eccentric injury and what remains is tissue remodeling and growth processes. While the muscle tissue has been the main focus of this study, it is also important to recognize the importance of the association between the muscle fiber and connective tissue, or extracellular matrix (ECM), that surrounds the muscle fibers. Whether the ECM is damaged in response to eccentric contractions is still unclear (65). Generally speaking, aged skeletal muscle tissue contains more ECM than young muscle. One of the major complications with the age-related impairment in muscle regeneration after injury is not only the decreased myogenic potential, but also increased fibrogenesis (growth of connective tissue) (66). During regeneration of aged skeletal muscle, muscle tissue is replaced with connective tissue and, consequently muscle function and muscle quality decline. To our knowledge, no studies have investigated the effect of phytoecdysteroids on connective tissue or components of the ECM during muscle regeneration. If 20E is influencing connective tissue remodeling, it cannot be determined from our histological observations. Therefore, we cannot conclude whether 20E is providing any positive benefit to muscle damage markers or tissue remodeling in the 7-day recovery period after eccentric damage. 
However, future studies should investigate the potential role of 20E with regards to markers of muscle damage and tissue morphology with extended time points after eccentric muscle damage. CONCLUSION In conclusion, eccentric contraction-induced damage results in significant declines in skeletal muscle function. Normal repair/regeneration of skeletal muscle tissue after damage relies on a series of highly-coordinated and time-dependent processes, including, but not limited to inflammatory, myogenic, and protein balance mechanisms. However, the regeneration process is impaired with aging. 20-hydroxyecdysone (20E) possesses anabolic, anti-inflammatory, and antioxidant properties. Here, for the first time, we demonstrate that daily treatment with 20E fully recovers skeletal muscle function in both adult and old mice within just 7 days after eccentric damage. The underlying mechanics by which 20E contributes to the accelerated recovery from muscle damage warrant further investigation. Taken together, it is reasonable to suggest that 20E has potential to be utilized as a supplementary intervention for muscle recovery after damage and in aging. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author/s. ETHICS STATEMENT The animal study was reviewed and approved by the Appalachian State University Institutional Animal Care and Use Committee. AUTHOR CONTRIBUTIONS KZ and RS were responsible for the conceptualization of the project, development of methodology, project administration, monitoring of data collection, statistical analysis, and writing of this manuscript. JG and CH were responsible for data collection, processing, and analysis and editing of this manuscript. All authors significantly contributed to the article and approved the submitted version.
2021-11-08T14:22:58.267Z
2021-11-08T00:00:00.000
{ "year": 2021, "sha1": "9e0a1d944fe5d57e3ffb78fb28216c5037d34189", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fresc.2021.757789/pdf", "oa_status": "GOLD", "pdf_src": "Frontier", "pdf_hash": "9e0a1d944fe5d57e3ffb78fb28216c5037d34189", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [] }
230994339
pes2o/s2orc
v3-fos-license
Outdoor Testing of Anti-Soiling Hydrophobic Coatings: Observations of Cementation It is of an increasing interest for the solar research community to understand and master the effects of environmental conditions on photovoltaic (PV) module performance and reliability. This study demonstrates that soiling is not only an issue for PV installed in dusty and dry regions of The Middle East and North Africa. Soiling is a global problem and the type of soiling and its extent is dependent on the geographical location. Cementation, a process by which particles strongly adhere to the surface, has been observed on all surfaces exposed outdoors in a coastal location of Denmark and experiments are ongoing in two different geographical locations and climates. Applying hydrophobic coatings to PV module cover glass is a potential solution to minimize soiling. Although the use of a hydrophobic coating was initially effective, its gradual degradation was linked to the build-up of surface cementation. Degradation of the hydrophobic surface chemistry increases surface energy and leads to the formation of hard to remove cementation. This results in the retention of droplets and particles causing a reduction in the optical transmission into the module. I. INTRODUCTION Soiling of solar module cover glass is an important but often neglected operational issue facing solar asset managers. Accumulation of dirt, dust, pollen, snow, ice and other particles on the solar module cover glass can significantly reduce the light transmitted to the active absorber and hence reduce the power output from the module [1,2]. Among many active and passive soiling mitigation strategies, the application of a thin layer of an anti-soiling coating on the module cover glass is a simple and promising solution to minimize soiling and to maintain the power output along with the return on investment [2,3]. In principle, hydrophobic coatings are low surface energy coatings that may be well suited for this purpose. Their high water contact angle (WCA) (ideally > 120°) and low roll off angle (RoA) (ideally < 10°) reduces adhesion of incident particles. Also rain or cleaning water droplets remain roughly spherical and roll off the tilted module carrying away the soiling particles [4]. Several approaches to coating chemistry and application methods have been taken, but current formulations are failing to produce a coating able to withstand the challenge of outdoor conditions [5,6]. PV modules are exposed outdoors 24/7, all year round, and subjected to regular cleaning cycles. The coating must be chemically stable and abrasion resistant. We have assessed a number of candidate hydrophobic coatings used in other applications, such as on displays and ophthalmic lenses. These coatings are being tested in the laboratory in parallel with outdoor testing. This strategy informs the development of a more durable second-generation coating designed for solar modules. The laboratory tests follow IEC protocols and include UV exposure, damp heat exposure, temperature cycling, abrasion and sand impact tests. Such accelerated lifetime tests are useful to compare the durability of coatings and to uncover the mechanisms involved in degradation. However, outdoor exposure combines all these environmental stresses together and coating degradation occurs much faster than would be anticipated from the laboratory tests [7]. Moreover, particles and other soiling are present in the outdoor environment that are location dependent (salts, pollens, sand etc.). 
These can be anchored to the coating/glass surface in a process of cementation and are difficult to remove even by mechanical cleaning. Cementation lowers the transmittance through the cover glass. It also impedes the 'self-cleaning' properties of the coating. In this work, we have exposed candidate hydrophobic coatings outdoors in a coastal location of Denmark for up to 24 weeks. The types of cementations formed on the coatings have been monitored and transmittance through the coated coupons was measured and compared to the uncoated and coated cleaned coupons. Similar experiments of outdoor exposure are being conducted in two different locations: rainy inland England and dry and cold mid-continent Colorado. A. Hydrophobic coatings Hydrophobic coatings were applied onto glass coupons. The candidate coatings had different chemistry but generally, the presence of fluorinated surfaces or/and nanoparticles are responsible for their hydrophobicity. For each type of coating, 12 uncoated glass coupons (as a reference) and 12 coated coupons were prepared for each location to be exposed outdoors. One coated and one uncoated coupon of each type was removed from the rack each month for analysis. Figure 1 shows how the coated and uncoated coupons were mounted and exposed outdoors. B. Outdoor exposure The 6 months outdoor exposure of the coatings took place from mid-February till mid-August. The coupons were exposed in the coastal city of Esbjerg in Denmark (55.5°N latitude, 8.5°E longitude) on a ground mounted rack located 2 km away from the sea, 11 km from a weather station and in direct proximity of bushes and plants. Esbjerg is an oceanic location with mild temperatures throughout the year, high relative humidity and frequent light rainfall. C. Characterization Surface and cross-sectional images of the coated and uncoated coupons were obtained using a scanning electron microscope (JEOL 7800F FEGSEM) equipped with an Oxford Instruments EDX detector for elemental analysis. The hydrophobic properties of the coatings during exposure were monitored by measuring water contact angle (WCA) and roll off angle (RoA). The transmittance of the coated and uncoated coupons was measured using Varian Cary UV-Vis spectrophotometer in the wavelength range of 300-800 nm. III. RESULTS AND DISCUSSION Preliminary findings of the outdoor exposure of candidate coatings are presented in this extended abstract. Cementation was observed on two types of hydrophobic coating (one containing nanoparticles and one with a fluorinated surface without nanoparticles). Uncoated glass was also monitored as a control reference. Cementation takes place when impurities become trapped and anchored on the surface during the droplet condensation and evaporation cycles [7]. Cementation typically occurs in locations with high relative humidity, diurnal temperature variations which lead to the formation of dew and places where rainfall is frequent. Cemented particles are difficult to remove, and their presence reduces the optical transmittance, increases the RoA, but can also cause damage to the cover glass surface by abrasion during cleaning. Figure 2 shows a selection of different types of cementations observed on the coated and uncoated surfaces. These results show that there is no apparent trend in the type, shape and size of the cemented particles found on coated and uncoated surfaces. This means that similar particles in similar concentrations were observed on all three coated and uncoated surfaces. 
The cemented particles observed varied in shape. The size was typically between 10-40μm. Chemical analysis of the particles showed that the most common elements found were C, Na, Cl, Fe, Mg, Ca, Al, K suggesting that the cemented particles are of both, organic and inorganic composition. EDX elemental images obtained from a cemented particle on a coated surface are shown in Figure 3. Although the exposure at different locations is at an early stage, it is anticipated that the elements present in the cemented particles will be significantly different depending on the climate of the location. For instance Cl found in the cemented particle in Figure 3 can be likely attributed to the salt carried by the wind in the coastal location in Denmark. The inland location of Colorado is not expected to show this type of cemented particle. Coating degradation and associated cementation caused the WCA of the coated surfaces to progressively decrease over time. The RoA increased even after only a few weeks of exposure. The RoA quickly exceeded the tilt angle at which the coupons were held. Due to the high RoA, the coatings were inefficient in removing particles from the surface. The soiling decreased the transmittance of the coupon. Mechanically assisted cleaning of the coupons partially restored the transmittance, but both candidate coatings were vulnerable to damage by abrasion by the cleaning process. Figure 4 shows the transmittance of a coated coupon prior to and after being exposed outdoors for 3 and 6 months. The transmittance was partially restored when coupons were cleaned using water, IPA and a microfiber cloth. IV. CONCLUSIONS Uncoated and coated glass coupons with hydrophobic coatings have been exposed outdoors to study the effects of soiling. Particle cementation was observed on all coupons exposed outdoors. Particles such as dust, pollen or salt strongly adhere to the surface resulting in reduced transmittance, reduced WCA and increased RoA. No link was found between the type of surface (coated, uncoated) or the type of coating, and cementation (shapes, types and sizes). However, the cemented particles are expected to differ in type and composition for different geographical locations Experiments of outdoor exposure of hydrophobic coatings in more locations are ongoing to confirm this. Hydrophobic coatings have great potential to mitigate the effects of soiling for the solar application. However, the outdoor environment is very challenging and coating degradation occurs surprisingly quickly. Research is needed to improve coating durability. Laboratory and outdoor testing is revealing the vulnerabilities of currently available coatings and guiding the development of new anti-soiling formulations.
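As a minimal illustration of the transmittance comparison described above (exposed coupons versus a clean reference over 300-800 nm), the sketch below computes wavelength-averaged transmittance and the relative loss of a soiled coupon. The spectra are synthetic placeholders on a uniform wavelength grid, not measured data from this study.

```python
# Illustrative post-processing of spectrophotometer data as described above:
# average transmittance over 300-800 nm and relative loss versus a clean
# reference coupon. The spectra below are synthetic placeholders.
import numpy as np

wavelength_nm = np.arange(300, 801, 5)                              # uniform 5 nm grid
t_clean = 0.92 - 0.04 * np.exp(-(wavelength_nm - 300) / 80.0)       # clean coated coupon
t_soiled = t_clean - 0.06 - 0.01 * np.sin(wavelength_nm / 90.0)     # after outdoor exposure

def mean_transmittance(t):
    """Wavelength-averaged transmittance on a uniform grid."""
    return float(np.mean(t))

t_bar_clean = mean_transmittance(t_clean)
t_bar_soiled = mean_transmittance(t_soiled)
loss_pct = 100.0 * (t_bar_clean - t_bar_soiled) / t_bar_clean
print(f"mean T (clean)  = {t_bar_clean:.3f}")
print(f"mean T (soiled) = {t_bar_soiled:.3f}")
print(f"relative transmittance loss = {loss_pct:.1f}%")
```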
2021-01-07T09:00:52.488Z
2020-06-14T00:00:00.000
{ "year": 2020, "sha1": "c0dc935597be53b72ac69cf8363aadc6b9ce7b05", "oa_license": "CCBY", "oa_url": "https://figshare.com/articles/conference_contribution/Outdoor_testing_of_anti-soiling_hydrophobic_coatings_Observations_of_cementation/13602998/files/26086172.pdf", "oa_status": "GREEN", "pdf_src": "IEEE", "pdf_hash": "703fe94feecc04cb1a11a4145a920e5671fdf8ec", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [] }
59421665
pes2o/s2orc
v3-fos-license
A Game-Theoretic Modelling of Risk Preferences of the Students during the Placement Process for Summer Internship , IIM Ahmedabad # corresponding author. Type of Review: Peer Reviewed. DOI: http://dx.doi.org/10.21013/jmss.v9.n3.p5 How to cite this paper: Mandal, G.K., Agarwal, G., Pingali, V. (2017). A Game-Theoretic Modelling of Risk Preferences of the Students during the Placement Process for Summer Internship, IIM Ahmedabad. IRA-International Journal of Management & Social Sciences (ISSN 2455-2267), 9(3), 160-173. doi:http://dx.doi.org/10.21013/jmss.v9.n3.p5 ABSTRACT The placement process for summer placements at IIM Ahmedabad renders a series of decisions to be judiciously taken by every student owing to the limited time availability and a highly competitive environment.In this paper, we have primarily analysed the period between the day students of first year management programme have their CVs frozen (cannot be altered later) and the day the last student of the batch gets an internship offer.The dilemmas faced by a student are dependent on their personal profile but also largely on the sequence of events unfolding during the period.Using the publicly available data of last two years' summer placements, a survey administered to second year students, long form conversational interviews (LFCI) with concerned office bearers over the last one month, and the previously established econometric models for utility derived in various human activities as given in literature have been quantitatively and qualitatively analysed and combined together to generate a set of inequalities attempting to resolve the dilemmas that students face.The various decisions are considered as subgames in a decision tree, and have been solved using the inequalities developed.Significant insights about the proposed rational behaviour has been drawn and interpreted back in nonmathematical terms in the end.In an attempt to maximise the various utility functions, some of the variables have been taken as categorical predictors providing a discrete description of the placement process.Overall, the paper attempts to give a systematic method of analysing and recommend the decisions a rational student should optimally take during the three months period of excessive work load, and high stakes. INTRODUCTION The summer internship process at IIM Ahmedabad is considered as a highly disciplined, time-constrained and rigorous exercise for the first year students of the flagship two-year management programme.Students at the premier institute need to start planning about their summer placements as soon as just one months within their admission, with the submission deadlines for the final unalterable version of resume being generally around August 15, nearly two months after the term starts for the new joiners.In this paper we aim to investigate the process in which a particular student can optimise her time allocation for preparation for placement in companies of her choice, and resolve various dilemmas originating due to trade-offs like increased chance of selection by applying to a number of firms, but decreased quality of submitted applications & firm-specific preparation if she applies to a large number of firms. Let us first try to qualitatively understand the time constraints that a student of first year management programme goes through during the preparation phase, i.e. 
during approximately three months before the summer placement dates (which generally occurs in first week of November).Specifically, the time range for our analysis is between the day CV freezes (which has been assumed as August 15), and the day summer placement ends i.e. last cluster ends with the last student of the batch getting placed (which has been assumed as November 15).The time available with a student during the preparation phase is limited, and can be broadly allocated to these activities: In the above segmentation, the portion of God"s hour of time which is essential for daily human body requirements, like sleep and maintaining personal hygiene have been ignored, and hence in our analysis, the sum of time taken for the above four segments shall be less than the total available time which is God"s our minus the time required for basic human body requirements.It is intuitive to claim that the general trend of the level of performance obtained in all these four sets activities (except for leisure activities, where we consider utility derived from them as an indicator of performance) with respect to the inability to devote time to that particular activity shall follow a trend similar to what has been shown in the curve below.It is a general adaptation of the concept of diminishing marginal returns.This has to be kept in mind while we generate the model for the complete analysis of the impact of time allocation differences.The graph shown below is a polynomial function, however, a similar logarithmic function could have been used as well.An inherent assumption in the analysis is that when a student decides to invest a particular amount of time for any activity, he or she executes the activity without any interruption or diversion of attention from the activity, since we are investigating the case for highly motivated and high-performing individuals of the premier institute. Dilemmas: We have identified five types of dilemmas or decision points that a student encounters in his or her preparation phase.In the final model built by us, to maximize the overall utility of any candidate, we shall incorporate the impact of all these five decision points as predictor variables. Decision 1: Should we fill all the available forms (i.e. should one apply for all the firms)?If not, then what should be the basis for leaving out on applications?A student may leave filling forms of reputed but difficult to join firms, or of less reputed but easier to join firms (i.e.so called "not so good" firms), or all firms of a particular sector, or as per the amount of expected time for the firm specific preparations. 
Decision 2: Which all cohorts (marketing, finance, consulting, general management, others) should I prepare for?A student may decide to prepare for all cohorts, or choose to focus on just one or two of these.It is well understood that different firms that come under same cohort generally require similar type of preparations, but two firms picked from two different cohorts shall require completely different type of preparations.The decision regarding selecting to prepare for just one or two cohorts is generally backed by the personal profile and abilities of a candidates as well as her perception about the chances of her selection in a particular cohort.Note that the abilities of the candidates mean their relative strength which is a result of their past experiences and natural talent, and is not same as the capabilities acquired by them by investing more time in preparations.A student may be risk averse and may not risk leaving any opportunity, hence apply for all cohorts by spreading her preparation time and efforts to all the cohorts, rather than concentrating heavily on just one or two. Decision 3: Which particular firm should be my first priority?This includes the decision about firm specific preparations.A student may choose to prioritise a company based on her knowledge about that company, or her perceived fit (Human resource based aspects like similar values) with the company.This means that she will decide to invest more time in preparations for such firms.Similarly, having proficiency in similar functional area which a particular firm is offering (e.g. a student with expertise in digital marketing may value a firm offering digital marketing roles more) may impact the decisions of a student.However, choosing to invest more time in firm specific preparations comes with an added risk of missing out on the preparations for many other firms or a cohort.Also, many students want to broaden their area of expertise by applying for firms with kind of roles which they are yet to experience, and assuming that they would be able to perform reasonably well in the others. 
Decision 4: Should I participate in cluster-1 or not?Actual summer placement dates at IIMA are divided into 3 clusters and a rolling round.Cluster-1 (or the day 0 & day 1) generally includes consulting and finance firms visiting the campus.Cluster-2, which occurs after Cluster-1 is over, generally includes marketing and general management firms.We have assumed Cluster-3 and an optional rolling round to account for all the other types of firms.Now the decision dilemma that a student faces is whether or not to participate in Cluster-1.If she is hopeful that she is most likely to get placed in Cluster-2, based on her preparations & profile, then she might choose not to participate in Cluster-1 as it will hamper her actual performance in Cluster-2 due to fatigue because of the highly strained and fast-paced requirements of the process.It will also deprive her to be able to invest any time for preparations for her Cluster-2 firm during the peak period.Peak period has been defined as a time frame of 2-4 days the day before a candidate has to appear for an actual interview, where she needs to involve herself in all sorts of relevant documentation, revision, and firm dependent personalisation works.However, participating in cluster-1 shall definitely provide her an increased opportunity to get placed in reputed firms in a quicker time, and be able to be remaining in the zone of mental strain for less length of time.It is interesting to note that, even if a student prefers only cluster-1, she may have to appear for cluster-2 in case of not being able to get placed during cluster-1.However, if a student prefers only cluster-2 and cannot get placed, she cannot go back and appear for the cluster-1 firms.This also adds more complexity to the decision that a student has to take, as discussed mathematically later in the paper. Decision 5: How to allocate time between academics, placement preparation, competitions/club works and Leisure?These four categories have been explained above, and a student needs to figure out the relative amount of time allocation which is best suitable for him.Here, the goal is to get placed in the best possible firm, and the optimisation has to be centred around this goal only. Building the Mathematical Model We now aim to enter the step by step process to generate mathematical behaviour of all these decisions, and then merge all of them to get a mathematical model for the entire placement process.The model should maximize the overall utility which comes from getting placed in the best possible firm from the institute, which in turn depends of many parameters, one of which is the maximum possible utility derived from getting placed in the best possible firm during the summer internship. In that regard, let us first analyse a student"s relative strength for each of the companies in which he is eligible to apply for.At the outset, for a large number of available company (n>30 has been considered as sufficiently large number based on normal statistical analysis which shall be followed in this paper) the strength of the candidate shall appear as a set of random numbers varying from 0 to 100, where 100 means he is the 100 th percentile (1 st rank) in the batch for that i th company.Similarly, a value of 60 means he is in the 60 th percentile i.e. 
better than 60 percent of the batch as far as the strength for selection by that particular firm is concerned. Here i = 1, 2, 3, …, n, where n is the number of companies available to the student (assumed to be the same for the whole batch, as the same set of firms is available to each student). One may argue that some firms require a mandatory two years' professional experience, so fresh undergraduates do not effectively have those firms available and for them the number of available firms should be considered less than n; our model instead assumes that the firm remains available to such a student but that his or her relative strength for that firm is zero. This means all students have n firms available to them. Note that this representative graph is for one student alone, and there would be similar graphs for all students across the set of all possible companies. From now on, unless otherwise mentioned, we conduct the analysis from the point of view of a single student and find out how he or she can optimise his or her choices at each stage. Of course, based on the associated coefficients and factors, this optimisation exercise will vary for every student. For a student, the ultimate aim is to maximise overall utility by getting placed in the best possible firm at IIMA. Let this utility be denoted U_j (later written as just U in student-specific analysis). Then U_j = f(the company one finally gets placed in) = f(g), i.e., the ultimate objective is to maximise f(g), or U_j. Note that "g" is itself a function of many other factors, one of the main ones being the company in which the student does his or her summer internship. Hence, the core of our analysis remains to evaluate, from a game-theoretic perspective, a method to get the best possible summer internship. The other factors acting as predictor variables are set out in the function below. Here, the analysed period refers to the 90 days from 15th August to 15th November, the duration between the day CVs are frozen and the day summer placements end. Now, U_j = f(g), where g = g(where student j interns; academic performance of j during the analysed period; performance of j in case competitions and club activities during the analysed period; factors dependent on student j but outside the analysed period, such as performance in the summer internship itself, academic performance after the analysed period up to term 5, and performance in competitions after the analysed period; uncontrollable exogenous factors, e.g., health). Note that the function "g" has been shown as a function of five factors. Of these, the last two are written with a bar over them, meaning that they lie outside the students' control when viewed from the frame of the analysed period, and hence are taken as given for the decision making that has to happen during the analysed period and not later. Before writing exact notation for this function, we drill down a little further into its components in a similar fashion. The five components of the function "g" are described below.
1. Where student j interns = f(frozen CV; number of students applying to different companies i = 1, 2, 3, …, n; preference order of companies for each student, including student j; expected number of seats offered by different companies i = 1, 2, 3, …, n; whether student j applied to company i or not; student j's time devoted to company i's specific preparation; j's time devoted to preparation for cohort k; student j's relative ability and strength for cohort k excluding (minus) her frozen CV; overall leisure time, which holistically improves the student's performance and well-being; availability of time for placement preparation and rest during the peak period). There are two points worth mentioning here. Firstly, we have not taken actual performance on D-day (interview day) as a predictor, because it has been considered an uncontrollable factor and is therefore already included in the exogenous factors. Secondly, the rare case where a student has received a PPI (pre-placement interview call) for winning a competition has been ignored here, because the number of such students is small enough to ignore.
2. Academic performance in the analysed period = f(actual performance in individual evaluation, which in turn depends on j's time devoted to academics and j's relative strength in the academic courses; actual performance in study-group tasks, which in turn depends on the time devoted by j to group tasks, the time devoted by other members to group tasks, and the relative strength of the group). An inherent simplifying assumption here is that the time devoted by student j is independent of the time devoted by others in any particular activity.
3. Performance in club activities or competitions = f(time devoted by student j; j's relative strength, which is highly contextual).
4. The fourth predictor is constant and will not be written in functional form, as it covers factors dependent on student j but outside the analysed period.
5. Similarly, the fifth predictor is constant and will not be written in functional form, as it covers uncontrollable exogenous factors, e.g., health, which cannot be predicted during the analysed period.
Now that all the functions have been written in plain language, with the predictors on which their behaviour depends, mathematical notation must be introduced so that they can be analysed further. Let us start by examining the behaviour of these functions. The behaviours were finalised as shown in the formulations and graphs presented below, based on 3 LFCIs (long-form conversational interviews with placement committee members), 1 FGD (focussed group discussion of 9 students observed remotely), and one survey administered to 38 second-year students of IIMA who have recently been through the whole summer internship process.
Most of the scatter plots based on the quantitative survey data turned out to be highly scattered, without any strong trend, and R² values for the common logarithmic, exponential, and simple polynomial fits were poor. However, each scatter plot could be taken iteratively, starting with the best-R² curve and merging it with our qualitative understanding of the practical process. If there was logical coherence between the two behaviours, the fitted curve was accepted and then validated against similar analyses in the existing literature; if there were logical inconsistencies, we moved to the next-best R² curve. Empirical formulation of utilities. After all the curves were finalised, we checked the logical consistency of the effects produced by combinations of such behaviours, again validated by drawing analogies with instances in the established secondary literature on economics and econometric analysis (such as econometric models of choice between products), as no direct literature has yet been developed on this subject. Let us first define some of the key notation (the full notation key is collected at the end of the paper). Remember that each of Γ₁, Γ₂, and Γ₃ is a function of its respective set of variables, with different behaviours that we are interested in investigating. The method of zeroing in on the behaviour of each variable is the iterative method explained above. For now, assume that the behaviours of all these variables have been finalised; they are explained in detail later. Once the finalised behaviours of the variables are known, their appropriate combinations can be used to build an empirical model for the utilities Γ₁, Γ₂, Γ₃, U(T_u), and U. The model obtained for these utilities is given below; the overall utility is, of course, the sum of all the component utilities identified:

U = Γ₁ + Γ₂ + Γ₃ + U(T_u) + E

The component Γ₃ captures the utility derived from competitions (c) and clubs (l). For competitions, utility increases with the time invested in the activity, but the marginal benefit is very low for the initial few hours and grows as more and more hours are dedicated to the competitions; this pattern has also been observed historically in most cases. To validate it, we approached 16 winners of case competitions and talked to them: 81.25% of respondents agreed that as they invested more and more time, the quality of their work improved exponentially, because the initial time needed to crack the core of a case is very large, but once the case is cracked it is just a matter of analysing it through various established frameworks, and they could proceed with accelerated analysis. For club-related work, the output takes the form of CV points for the next year and interpersonal-skills development, as accepted by the FGD participants. This is perceived to increase linearly with the amount of time invested in club work. If someone invests no time at all, there is no advantage as such, because the club entry points are already accounted for in the frozen CV; any extra value is added only if the person actually works for the club.
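Returning to the curve-selection step described at the start of this subsection, the sketch below illustrates one way to rank the candidate families (logarithmic, quadratic polynomial, exponential) on a set of (time, utility) survey points by R². This is an illustrative Python sketch under our own assumptions, not the authors' actual procedure; the top-ranked curve would still have to be checked for qualitative coherence before being accepted, as the text describes.

```python
import numpy as np

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

def rank_candidate_fits(t, u):
    """Rank log, quadratic and exponential fits of utility u against time t by R^2."""
    t = np.asarray(t, dtype=float)
    u = np.asarray(u, dtype=float)   # exponential fit below assumes u > 0
    fits = {}
    # logarithmic: u = a + b*log(1 + t)
    X = np.column_stack([np.ones_like(t), np.log1p(t)])
    coef, *_ = np.linalg.lstsq(X, u, rcond=None)
    fits["log"] = r_squared(u, X @ coef)
    # quadratic polynomial
    p = np.polyfit(t, u, 2)
    fits["poly2"] = r_squared(u, np.polyval(p, t))
    # exponential: u = a*exp(b*t), fitted on log(u)
    ce, *_ = np.linalg.lstsq(np.column_stack([np.ones_like(t), t]), np.log(u), rcond=None)
    fits["exp"] = r_squared(u, np.exp(ce[0] + ce[1] * t))
    return sorted(fits.items(), key=lambda kv: kv[1], reverse=True)
```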
Γ₂ = a_g1 + a_g2·log(1 + T_g) + a_I·log(1 + T_I)

The utility function for academic performance, as explained qualitatively earlier, depends on only two variables, while all other predictors are constant for our purposes and can be subsumed into a minimum number of constants as coefficients a_g1, a_g2, a_I, etc. The two predictor variables for academic performance are actual performance in individual evaluation, which depends on j's time devoted to academics, T_I, and actual performance in group evaluation, which depends on the time devoted by the group to academics, T_g. The behaviour was found to be logarithmic, based on the approach explained above and illustrated by scatter plots later. Logarithmic behaviour follows diminishing marginal returns with the time invested. It is worth noting that the natural logarithm would become infinitely negative (or undefined) if the time invested were zero; in the statistically controlled empirical formula, using the argument (1 + T_g) for log_e instead of simply T_g not only resolves that issue but also ensures a behaviour that is practically observable in real life. The empirical equation for the utility derived from interning in a particular company i is slightly more complex because it has many active variables. This utility is denoted Γ₁ and encompasses four major terms. The first two terms relate to the utilities derived from cluster 1, while the remaining two relate to cluster 2. As we focus on only 4 cohorts (2 in each of these clusters) and ignore the other cohorts belonging to cluster 3 as discarded firms (even though that is not necessarily always the case), we may take the utility from those firms to be zero or negligible. (P+Q)_max in the formula is the total number of hours one could stretch a particular activity to, equal in simple terms to God's hours minus sleep time; in theory, a person could use all the remaining time to prepare for one particular cohort. If we believe a person can work for a maximum of 16 hours a day, then over the 90-day window of the analysed period the value of (P+Q)_max would be 16 × 90 = 1,440 hours. This (P+Q)_max is used to normalise the cohort preparation times, e.g., P₁ (before the shortlist comes) and Q₁ (after the shortlist comes), which are already multiplied by another function of time based on the sum of company-specific preparations, i.e., a summation over firms i of terms involving b_1i, D_1i, and T_1i. Here, T_1i is the time invested by student j in company-specific preparation for firm i, while D_1i is a 0/1 indicator variable. The constant b_1i takes care of the relative positioning or strength of a candidate among those who apply for a particular firm, and of factors such as the candidate's risk preference. It differs for each firm, and hence a summation is taken over i to obtain the net utility values. Note that the normalisation not only helps preserve dimensional homogeneity where it might otherwise be violated, but also prevents the utilities from getting squared.
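As a minimal sketch of the Γ₂ form just given, the snippet below shows the diminishing-returns behaviour and why the (1 + T) argument keeps the utility finite at zero invested time; the coefficient values are illustrative placeholders, not estimates from the survey, and the same log(1 + T) device is reused later for the leisure term.

```python
import math

def gamma2(t_group, t_individual, a_g1=1.0, a_g2=2.0, a_i=3.0):
    """Academic-performance utility; coefficients are illustrative placeholders."""
    return a_g1 + a_g2 * math.log(1 + t_group) + a_i * math.log(1 + t_individual)

# At zero invested time the utility is simply a_g1, not minus infinity.
print(gamma2(0, 0))      # 1.0
# Marginal returns shrink as hours grow (diminishing marginal returns).
print(gamma2(10, 40) - gamma2(9, 39), gamma2(100, 400) - gamma2(99, 399))
```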
The factor D₄ is extremely critical, as it relates to Decision 4 mentioned at the beginning of this paper: whether or not to participate in cluster-1. This can be viewed as a case of subgame-perfect Nash equilibrium, as the decision comes at the end of the chain and is contingent on all the decisions taken at other stages of the decision tree; it is analysed in detail in the next section. D₄ is taken as 1 if the candidate chooses to participate in cluster-1. Its use in the first and second terms of the equation for Γ₁ is simple, i.e., those cluster-1 terms are multiplied directly by D₄; it is not so simple in the third and fourth terms, where a candidate may choose to participate in cluster-2 even after appearing in cluster-1, but with reduced chances due to divided attention and energy, lack of rest during the peak period, and so on. Hence those utilities are divided by a factor of the form (1 + D₄), which equals 2 if the candidate chooses to appear in both clusters. The scatter plots of the empirical behaviours, generated from the quantitative survey questionnaire administered to the second-year students (e.g., how many hours did you prepare for cluster-1 firms?), shed further light on the empirical formulation of Γ₁.

Also, U(T_u) = a_u·log(1 + T_u), and E is an adjustor error term, which also encapsulates the time constraints within itself.

Here U(T_u) represents the positive influence on the candidate's overall performance if he or she is also able to enjoy adequate leisure time. This function may look a little confusing, since it says that utility keeps increasing with T_u. The behaviour is indeed true, but as one invests more time in T_u, the time constraint means missing out on investing the available time in more powerful functions with larger coefficients, e.g., time dedicated to club work. The scatter plots used in our analysis are displayed below. These plots, generated from the survey data, loosely match the expected trend in some cases; with a larger number of survey participants, the trends would be expected to become sharper and closer to the stated functions. Of special interest is the graph for cohort preparation, which was later verified with the FGD participants. It was discovered that participants feel they can initially learn about a cohort quickly with minimal effort and time. As they invest more time in preparing for the cohort, the learnings multiply, and because the information is generic and useful for so many firms, the learning process and the expected utility increase rapidly. However, after a certain level of preparation, redundancies start to appear; this shows up as a point of inflexion followed by decelerated growth of the utility function. A need then arises to look into company-wise details for firms belonging to the cohort, and if someone still continues investing more time, the utility growth becomes almost stagnant. This behaviour is similar to a cube-root (power 1/3) function, as written in the formula above. Let us first see the scatter plot for the relative increase in utility from group evaluation with increasing invested time T_g.
An important point to remember the time constraints while analysing any individual utility-time graph.Hence, all the above equations are subject to this constraint: Time required to fill the forms of companies + Time devoted to firm specific preparations + Time devoted cohort specific preparations + Time devoted to individual evaluation + Time devoted to group tasks + Time devoted to clubs + Time devote to competitions + Liesure time = Total time available for allocation during analysed period (God"s hoursleep etc) i.e. ⅀d i D ki + ⅀T i + ⅀P k + ⅀Q k + T g + T I +T c + T l + L(T u ) = T allocation = T Gods -T sleep,hygiene Keeping this constraint in mind, the scatter plots based on the data collected from the surveys have been analysed further. A similar behaviour but with more scatter was observed in the data obtained for individual performance. For competitions, the quality of output and chances of winning increase very rapidly once candidate start investing more time. For club works, the difference in the perceived status of the clubs comes into play, and no definitive trend can be observed.However, students during the FGD felt that club work proportionally influences our career benefits. Cohort preparation time and its effect are interesting as explained above.After a certain level of preparation, redundancies start coming in.That is shown by a point of inflexion and then decelerated growth of the utility function. Company specific preparation, again, follows a logarithmic trend.However, the values must not be negative if no effort is put, hence this has been taken care of in the empirical relations developed.(This is because a log function has negative values between 0 to 1, hence the time invested has been appropriately scaled to be greater than 1, by using 1+ variable, instead of just variable).There was a consensus of company specific preparations among the members of the focussed group discussion and even the data supplied by the survey respondents also rendered the scatter plot of comparatively healthy behaviour Still, it is worth noting that the R-square value is not close to 1 because of the low sample size and many other determinants (availability of materials for preparing for a particular firm) that have been ignored for simplification. The scatter plots have been created to provide a backing to the idea about, and the systematic procedure involved in generating the empirical equations.Now let us see the equation for ℾ 1 in detail, and understand each term sincerely.Consequently, we shall see if that understanding can provide us some systematic method of analysing whatever decision we could have taken by students.We shall try to define the sub-game situation that a candidate goes through during analysed period and try to resolve the same using the developed relationships. 
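Before moving to the subgame analysis, the small sketch below illustrates the feasibility constraint above by totalling the per-activity hours against the 16 h/day × 90 day budget mentioned earlier; the function and argument names are hypothetical, and the per-firm and per-cohort lists are only a convenient stand-in for the summations in the constraint.

```python
T_ALLOCATION = 16 * 90  # hours: assumed 16 usable hours/day over the 90-day analysed period

def within_budget(form_hours, firm_prep, cohort_prep_pre, cohort_prep_post,
                  t_group, t_individual, t_competitions, t_clubs, t_leisure):
    """Check the time-budget constraint; list arguments hold per-firm / per-cohort hours.

    Returns (is_feasible, slack_hours)."""
    total = (sum(form_hours) + sum(firm_prep) + sum(cohort_prep_pre)
             + sum(cohort_prep_post) + t_group + t_individual
             + t_competitions + t_clubs + t_leisure)
    return total <= T_ALLOCATION, T_ALLOCATION - total

# Example: generous placement preparation still leaves slack in the 1,440 h budget.
print(within_budget([5, 5, 2], [60, 40], [120], [80],
                    t_group=100, t_individual=200,
                    t_competitions=0, t_clubs=20, t_leisure=300))
```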
Before the decision tree analysis, let us discuss about few aspects of decision making which do not pertain to the decision tree.For example, how much time to allocate to competition, group task, individual evaluations, placement preparations and competitions needs to be decide.This has been found upon the LFCI (interviews with students) that the first year students have very less chance of winning in competitions in which PGP-2 also participate.We collected the data for last 23 case competitions and found that PGP-2 members were directly or indirectly involved in 73.9% of the winning cases.Hence, it is recommended that after the CV gets frozen, the students are not advised to spend time in competitions.Similarly, it was found that benefit from devoting time to club activity post CV freeze in the analysed period is negligible and a rational student thinking purely from these objectives would refrain from them. Additionally, it has been established that students should devote minimum possible time to the group tasks because the lesser the time given to group task, the more will be the time devoted by the other members of the group to compensate for that loss.This creates two-way benefit for the student from a mathematical point of view.She gets more time T i available, while others value of T iy reduces in a competitive domain (y = another student).Similarly, it is advised that students should devote moderate amount of time to academics during this period as it will have only moderate effect on the ultimate goal as defined earlier.Finally, the students should devote maximum possible time towards preparation for summer placements while taking adequate rest and leisure, as internship is found to have the most significant impact among all the predictors, on the ultimate goal as defined earlier. Decision Tree and Analysis of sub-games For any cohort k =1,2,3,4 the student j shall go through the typical decision making process as shown in the flowchart below.The square boxes denote "decision nodes" i.e. the actions that are to be taken by the student by taking a decision by himself or herself.The circles denote "chance nodes" i.e. the probabilistic event that the student has to observe whatever happens and then respond accordingly.The dotted boxes contain multiple choices at times, which mean that the next action or chance shall apply to all such choices. A student "j" has to finally take the most crucial decision at the action node where she is trying to figure out: This is the subgame situation that the student is encountering marked as "subgame of interest".However, the decisions at the subgame shall be dependent on the events preceding the same, which has been mathematically framed using the notations and definitions discussed earlier. See decision tree in the next page. At the decision node of "whether or not to participate in cluster" i.e. at the sub-game of interest, the utility function as describe above having one portion of utility derived from the summer intern has been used for building inequalities aimed at providing a logical decision. In plain terms, a rational student will take part in cluster 1 only if the following condition gets satisfied: Benefit by participating in cluster 1 -benefit in cluster 2 by not participating in cluster 1 > 0 Using the earlier derived equations, this can be translated into mathematical terms as follows. 
Recall the utility Γ₁ defined earlier. For uniformity of analysis, assume that firm-specific preparation takes a relatively constant amount of time across all preparations, so that it has no variable effect in the forthcoming analysis. The decision-making inequality described above then simplifies accordingly. The values that b attains differ significantly between the two clusters, and the diminishing impact due to the fatigue generated by the cluster-1 process is very minuscule, as established by those FGD members who participated in cluster-2. Moreover, it was found that if a student prepares for only a single cluster, the marginal benefit derived from the additional time devoted is not significant after a certain duration, as the graph also shows; on the other hand, a student would not be able to prepare well for any cohort if she tried to extend this logic and study for three or more cohorts. Hence, the rational decision about whether or not to participate in cluster-1 will be based on the top two cohorts for which the student perceives herself as most suitable (based on the value of b_k, i.e., the combination of preference and relative strength). In a similar manner, sequentially analysing the subgame tree at the previous decision node, i.e., just after the shortlists arrive, determines which cohorts to prepare for: here the rational decision is to study for the two cohorts from which the student has received shortlists, based on the values of b_k for the different cohorts and on the preparation done up to that point. Similarly, proceeding further back in the tree, decisions must be taken even before the shortlists are out, because preparation time is limited. At this earlier decision node, the same mathematical exercise shows that expected utility is maximised by studying for the cohort with the highest value of b_k; the student is therefore advised to prepare for just one cohort before the shortlists are out. Finally, we reach the top decision node, i.e., deciding which forms to fill. Here, based on a similar mathematical exercise and comparison of coefficients, a rational student should fill all forms for finance and consulting roles, because those forms are very short and do not take much time to fill (i.e., Σd_i summed over these two cohorts is small). By contrast, firms offering marketing and general management roles have very long forms, which can take hours of concentrated effort per form. These should be filled only if the b value of the specific cohort is high enough to offset the effect of the longer form-filling time Σd_i, i.e., Effect(b) > Effect(Σd_i). This concludes our analysis of the subgames involved for any student j in the summer placement process. To set these mathematical solutions against the stated decision dilemmas, it is worth revisiting each dilemma and analysing its optimal solution, as done in the next section.
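The cluster-1 participation criterion just analysed, namely that the benefit of participating in cluster 1 (including the discounted chance of still getting placed in cluster 2) must exceed the benefit of going straight to cluster 2, can be sketched as below. This is a stylised illustration only: the cohort-to-cluster mapping, the cube-root preparation value, and the (1 + D₄) fatigue divisor are our own simplifications of the paper's fuller Γ₁ expression, and the coefficients b_k are assumed inputs.

```python
def cohort_value(b_k, prep_hours, total_hours=1440.0):
    # Illustrative diminishing-returns value of preparing a cohort (cube-root shape).
    return b_k * (prep_hours / total_hours) ** (1.0 / 3.0)

def participate_in_cluster1(b, prep, fatigue_divisor=2.0):
    """b and prep are dicts keyed by cohort; consulting/finance are assumed to sit
    in cluster 1 and marketing/general_mgmt in cluster 2."""
    c1, c2 = ["consulting", "finance"], ["marketing", "general_mgmt"]
    u_c1 = sum(cohort_value(b[k], prep[k]) for k in c1)
    # Cluster-2 value is divided by (1 + D4) = 2 when the student also sat cluster 1.
    u_c2_after_c1 = sum(cohort_value(b[k], prep[k]) for k in c2) / fatigue_divisor
    u_c2_only = sum(cohort_value(b[k], prep[k]) for k in c2)
    return (u_c1 + u_c2_after_c1) > u_c2_only

b = {"consulting": 0.8, "finance": 0.6, "marketing": 0.9, "general_mgmt": 0.4}
prep = {"consulting": 120, "finance": 80, "marketing": 150, "general_mgmt": 30}
print(participate_in_cluster1(b, prep))
```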
Decision 1: Which companies' forms to fill? Fill the forms for consulting and finance roles, as they take negligible time, and fill the forms for marketing and general management roles based on the empirical criterion described above, which can be interpreted as follows: fill the forms for these cohorts only if the combined impact of cohort preference and relative strength offsets the loss of time required to fill the forms properly.
Decision 2: Which cohorts to prepare for? Before the shortlists are out, analysis of the subgame tree reveals that the student should rationally prepare for exactly one cohort; after the shortlists are out, she should prepare for exactly the two cohorts that rank highest on a comparison of the coefficients for cohort preference and relative strength.
Decision 3: Which firms to prepare for? This decision remains dependent on variables acting as coefficients (rather than constants), so a qualitative recommendation is as follows: only after the shortlists are announced should a student start firm-specific preparation, based on the two cohorts she has been preparing for and her preference order.
Decision 4: Should a student participate in cluster-1? This is the heart of the subgame-perfect Nash equilibrium analysis, as shown in the tree. The decision should be governed only by the two cohorts for which the student has been preparing after the shortlists were announced.
Decision 5: How should time be allocated between different activities during the analysed period? The student is recommended not to invest time in competitions, to allocate the minimum possible time to club activities and group tasks, to devote moderate time to individual evaluation, and to invest the maximum possible proportion of time in placement preparation while taking adequate rest.
Time-allocation categories referred to in the text: (a) academic activities and courses; (b) preparation for summer placements; (c) club activities and competitions; (d) leisure, recreational and personal activities.
Notation: U = the overall utility from achieving the final goal of getting placed in the best possible firm; Γ₁ = the utility derived from where a student j interns; Γ₂ = the utility derived from academic performance in the analysed period; Γ₃ = the utility derived from j's performance in competitions (c) and club activities (l); U(T_u) = the utility derived from leisure time, which is positive for the overall utility but may work against the time constraint and may have other negative impacts on correlated variables.
Options at the subgame of interest: (a) participate in cluster-1, or (b) do not participate in cluster-1 and go to cluster-2; the decision criterion is the benefit from participating in cluster 1 minus the benefit in cluster 2 from not participating in cluster 1.
Upon interviewing (LFCI) PGP-2 students about their past internship placement process, we found that the decision to participate in a cohort or a cluster is strongly governed by a combination of preference for a cohort, such as marketing, and the student's relative strength for that cohort (e.g., the "marketing nerd"). This combination of preference and relative strength is what appears in the equation in the form of the coefficients b₁, b₂, b₃, b₄.
2018-12-26T21:09:00.127Z
2018-01-05T00:00:00.000
{ "year": 2018, "sha1": "33e3592cd830b103939fea587a38008dc3f781e5", "oa_license": "CCBYNC", "oa_url": "https://research-advances.org/index.php/RAJMSS/article/download/1091/963", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "33e3592cd830b103939fea587a38008dc3f781e5", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Computer Science" ] }
36676828
pes2o/s2orc
v3-fos-license
Evaluation of factors associated with immunoglobulin G, fat, protein, and lactose concentrations in bovine colostrum and colostrum management practices in grassland-based dairy systems in Northern Ireland The objectives of this study were to investigate colostrum feeding practices and colostrum quality on commercial grassland-based dairy farms, and to identify factors associated with colostrum quality that could help inform the development of colostrum management protocols. Over 1 yr, background information associated with dairy calvings and colostrum management practices were recorded on 21 commercial dairy farms. Colostrum samples (n = 1,239) were analyzed for fat, protein, lactose, and IgG concentration. A subset was analyzed for somatic cell count and total viable bacteria count. Factors associated with nutritional and IgG concentrations were determined using both univariate and multivariate models. This study found that 51% of calves were administered their first feed of colostrum via esophageal tube, and the majority of calves (80%) were fed >2 L of colostrum at their first feed (mean = 2.9 L, SD = 0.79), at a mean time of 3.2 h (SD 4.36) after birth, but this ranged across farms. The mean colostral fat, protein, and lactose percentages and IgG concentrations were 6.4%, 14%, 2.7%, and 55 mg/mL, respectively. The mean somatic cell count and total viable count were 6.3 log10 and 6.1 log10, respectively. Overall, 44% of colostrum samples contained <50 mg/mL IgG, and almost 81% were in excess of industry guidelines (<100,000 cfu/mL) for bacterial contamination. In the multivariate model, IgG concentration was associated with parity and time from parturition to colostrum collection. The nutritional properties of colostrum were associated with parity, prepartum vaccination, season of calving, and dry cow nutrition. The large variation in colostrum quality found in the current study highlights the importance of routine colostrum testing, and now that factors associated with lower-quality colostrum on grassland-based dairy farms have been identified, producers and advisers are better informed and able to develop risk-based colostrum management protocols. INTRODUCTION Colostrum is the first secretion produced from the bovine mammary gland postcalving (Jaster, 2005). It is composed of a range of compounds that are rich in nutritional, antimicrobial, and growth properties and are essential for stimulating cellular and humoral immune defense systems that the newborn calf needs to survive (Blum and Hammon, 2000). Colostrum contains 3 major immunoglobulin isotypes-IgG, IgA, and IgM-and a range of subclasses. Immunoglobulin G antibody is the most abundant isotype found in colostrum; it represents over 75% of the total Ig concentration (Korhonen et al., 2000), and consequently the quality of colostrum is assessed with reference to the concentration of this specific immunoglobulin class. Calves are born with a functional immune system, but it is considered naive until it is fully developed (Franklin et al., 2003). Calves will acquire adequate immunocompetence only through passive transfer of immunoglobulins from colostrum. However, absorption of immunoglobulins ceases 24 h after birth (Stott et al., 1979), and the quality of colostrum can vary between animals due to a number of physical and environmental factors (Quigley and Drewry, 1998). Previous research has determined that colostrum is of satisfactory quality if it contains >50 mg/mL of IgG (McGuirk and Collins, 2004). 
Colostrum is the primary source of nutrients to the newborn calf (Blum and Hammon, 2000). Fat, protein, 2069 and lactose are readily available in colostrum and are necessary as metabolic fuels (NRC, 2001), essential for thermoregulation (Le Dividich et al., 1994;Morrill et al., 2012), and needed for protein synthesis and glucogenesis to ensure homeostasis (Quigley, 2001b). Colostrum is also a valuable source of the vitamins and minerals required for general maintenance functions and vital as cofactors for enzymes (Morrill et al., 2012), with a particular role in the supply of fat-soluble vitamins (Spielman et al., 1946). Bacterial contamination is also a good indicator of colostrum quality: industry guidelines recommend <100,000 cfu/mL in bovine colostrum, primarily to prevent transmission to the calf of a wide range of pathogens that have been identified in previous research (Doyle et al., 1987;Meganck et al., 2014). Several studies have shown a wide range of variation in colostrum IgG concentration (Gulliksen et al., 2008;Morrill et al., 2012;Conneely et al., 2013), nutritional properties (Kehoe et al., 2007;Zarcula et al., 2010;Morrill et al., 2012), and bacterial properties (Elizondo-Salazar and Heinrichs, 2009a;Morrill et al., 2012) but no study has explored the variation in these properties on commercial grassland-based dairy farms over an extended period of time and investigated how animal and management factors may influence colostrum quality in this type of production system. The objectives of the current study were to investigate colostrum feeding practices and colostrum quality on commercial grassland-based dairy farms over a 1-yr period, and to identify factors associated with colostrum quality that would help inform the development of colostrum management protocols. Selection and Description of Herds Commercial dairy farms (n = 21) geographically spread across Northern Ireland participated in this study between February 2013 and February 2014; herd size ranged from 85 to 425 lactating dairy cows. Producers were required to collect a colostrum sample from every cow as soon as possible after calving, demon-strate excellent record keeping, maintain a milk record, and show a high level of commitment to the research program. Colostrum feeding practices (Table 1) of the offspring (n = 1,177) of these cows were also monitored. Data Collection and Description Producers completed data collection sheets for each animal. Data collected included herd size; breed of cow; parity; estimated BW of cow precalving; cow immunization regimen; length of dry period; dry cow nutrition; season of calving; BCS at calving; calving difficulty score; colostrum yield; colostrum management, including quantity fed at first and second feed; duration of colostrum feeding; feeding method; and time interval from calving to sample collection. All producers were involved in a milk-recording scheme, and access was granted to obtain individual animal data on previous 305-d milk yield. Sample Collection The farmer collected maternal colostrum (250 mL, mixed thoroughly) from each animal at the time of first milking after parturition. Samples were labeled with farm identification number, dam freeze brand number, and date of calving. Samples were stored in a refrigerator on the farm and collected within 3 d for nutritional and IgG analysis or within 1 d for bacterial analysis. 
All samples were transported in a chilled container to the Agri-Food and Biosciences Institute, Hillsborough, where they were subsampled into 10 aliquots of 25 mL. Samples for bacterial analysis [SCC and total viable count (TVC)] were transported in a chilled container to the laboratory (Agri-Food and Biosciences Institute, Newforge) for immediate analysis. Samples for fat, lactose, and protein concentration analysis were stored in a refrigerator. The remaining aliquots (5 × 25 mL) were stored at −20°C for later IgG analysis. Determination of Colostrum Quality Nutritional and Bacterial Composition. Colostrum fat, protein, and lactose concentration were de- termined using the Foss MilkoScan FT120 (Foss, Warrington, UK). Only samples that could be processed within 24 h of calving were analyzed for TVC (n = 119) and SCC (n = 117). We determined TVC using the pour plate method (Clark, 1967) and counted colonies using a Stuart colony counter (Bibby Scientific Ltd., Staffordshire, UK). We analyzed SCC using the Delta Somascope Lactoscope method (Delta Instruments, Drachten, the Netherlands) as described by Hanuš et al. (2014). Immunoglobulin G. Colostrum samples were removed from a −20°C freezer and thawed in a fridge at 4°C overnight. The IgG concentration was then measured using an ELISA kit for bovine IgG from Bio-X Diagnostics (Jemelle, Belgium). The test was performed on colostrum that had the fat removed though centrifuging before freezing. All kit components were brought to 21°C before use. The wash buffer was diluted 20-fold with distilled water. A calibration curve was developed as per the manufacturer's instructions (BioX, Jemelle, Belgium). The samples were diluted in PBS, and the diluted samples were added to the test plate and incubated at 21°C for 1 h. The test plate was washed 3 times with the wash buffer, and then chromogen solution (100 μL) was added to each well and incubated away from light for approximately 10 min. Stop solution (50 μL) was then added to each well. The optical densities were recorded using a microplate spectrophotometer with a 450-nm filter (Tecan, Magellan, Switzerland), and the concentration of IgG in samples was calculated from the standard reference curve containing known concentrations of IgG provided in the test kit. Any sample that resulted in an IgG concentration above or below the range of the standard reference curve was retested after further dilution according to the test kit recommendations. An interassay coefficient of variation of <15% was observed. Statistical Analysis We carried out univariate analyses to investigate the relationship between each response variable and each explanatory variable in turn (both continuous and categorical), using a linear mixed model methodology and the method of REML in GenStat (16th ed.; VSN International, Hemel Hempstead,, UK). Farm was fitted as a random effect, and the explanatory variables as fixed effects. 
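The univariate analyses just described were run in GenStat with REML, fitting farm as a random effect and each explanatory variable in turn as a fixed effect. As an illustration only, a broadly equivalent random-intercept model can be specified in Python with statsmodels; the file and column names (igg, parity, season, farm) are hypothetical, this is not the software used in the study, and a strictly univariate run would include a single fixed effect at a time.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical layout: one row per cow, with colostral IgG (mg/mL),
# parity group, calving season, and the farm identifier.
df = pd.read_csv("colostrum_records.csv")

# Linear mixed model: fixed effects for parity and season,
# random intercept for farm, fitted by REML (the statsmodels default).
model = smf.mixedlm("igg ~ C(parity) + C(season)", data=df, groups=df["farm"])
result = model.fit(reml=True)
print(result.summary())
```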
We tested the following variables for association with IgG, fat, protein, and lactose concentration: herd size, season of calving, calving difficulty score (1 to 5), calving location, breed, parity, estimated live weight of cow precalving (kg), BCS at calving (1 to 5 scale), length of dry period (wk), first colostrum yield (L), sec-ond colostrum yield (L), immunization regimen (bovine viral diarrhea, leptospirosis, Salmonella, Escherichia coli, rotavirus, coronavirus, and clostridial disease), dry cow nutrition, description of supplements offered to dry cows, time interval from calving to colostrum collection (h), colostral TVC (cfu/mL), colostral SCC (10 3 /mL), and previous 305-d milk yield (kg). For each response variable, we developed a multivariate model to examine more complex associations, again using the linear mixed model methodology with farm as a random effect in all models. Any explanatory variable that had a P-value <0.15 from the REML analysis and a minimum of 900 observations was considered a candidate for the multivariate models. The multivariate analysis was also restricted to a subset of units that had a non-missing value for all variables. In each case, we used backward elimination to establish the multivariate model. At each step, the least significant variable was removed from the model, and the procedure was terminated when all remaining variables were significant at P < 0.05. We converted a range of variables into parametric and categorical variables for statistical analysis. Calving difficulty was indicated by group, where 1 = unobserved/unassisted, 2 = assisted without calving aid, and 3 to 5 = aided by calving aid or vet. Breed of cow was indicated as follows: 1 = Holstein, 2 = Friesian, 3 = Ayrshire, 4 = crossbreed (Jersey crossbreed, Swedish Red crossbreed, and a single Jersey cow grouped with Jersey crossbreeds for analytical purposes). Animals were also grouped by parity number: 1, 2, 3, 4, and ≥5. Season of calving was classified as follows: spring (March, April, and May), summer (June, July, August), autumn (September, October, November), and winter (December, January, February). Immunizations were recorded as yes/no answers to whether the dry cow had received a certain vaccine or not. Likewise, dry cow diet was recorded as yes/no answers according to feed type (i.e., grass silage, concentrate, grazed grass, and straw). Length of dry period was classified as follows: <8, 8 to <12, 12 to <16 and ≥16 wk. Time interval from calving to colostrum collection was grouped as follows: <0.5, <1, <3, 3 to <6, 6 to <12, and ≥12 h. Cow BCS was determined using a scale of 1 to 5, where 1 was extremely thin and 5 was extremely fat (DEFRA, 2011). All variables in the survey were initially tested for association with fat, protein, lactose, and IgG concentration in colostrum. Results shown in Tables 2, 3, 4, and 5 were all independently associated with fat, protein, lactose, or IgG concentration in the univariate and multivariate analyses. Factors Associated with Colostrum Quality in Univariate Analysis Immunoglobulin G. Cows calving in the winter months produced colostrum with greater (P = 0.002) IgG concentration than cows calving in the autumn and spring months (Table 3). Cows with a dry period of 8 to <12 and ≥16 wk had higher IgG concentrations than cows with a dry period of less than 8 wk (P < 0.001; Table 3). Cows immunized against salmonella (58.7 mg/mL) had greater (P = 0.02) IgG concentrations than nonimmunized cows (51.1 mg/mL). 
Previous lactation 305-d milk yield had a significant effect on colostral IgG concentration (P = 0.003); as milk yield increased, the IgG concentration also increased. We observed no differences (P > 0.05) in IgG concentration between animals that were treated with a dry cow tube and those treated with a combination of dry cow tube and teat sealant at the drying off stage. Nutritional Concentration. Colostral fat concentration was greatest in spring-calving cows (P < 0.05), compared with cows calving in the summer, autumn, or winter (Table 3). Fat concentration was also greater (P = 0.03) in colostrum from cows that were immunized against leptospirosis (6.8%) than from nonimmunized cows (5.9%). Colostral protein concentration was greater in cows with a dry period length of ≥16 wk than in cows that were dry for less than 8 wk (P < 0.001) ( Table 3). Cows fed concentrates during the 0 to 3 wk period before parturition had a greater (P = 0.02) colostral fat concentration than non-concentratefed cows (Table 4). Cows vaccinated against infectious bovine rhinotracheitis (13.4%) had lower colostral protein concentration (P = 0.04) than nonvaccinated cows (14.4%). Calculated previous 305 d milk yield had a significant effect on colostral protein concentration (P < 0.001); as milk yield increased, protein concentration also increased. Colostral lactose concentration was greater (P = 0.03) in cows that were immunized against infectious bovine rhinotracheitis (2.8%) than in nonimmunized cows (2.7%). Factors Associated with Colostrum Quality in Multivariate Analysis Immunoglobulin G. Parity was associated with colostral IgG concentration (P < 0.001): cows with a parity of 5+ had greater colostral IgG concentration than lower-parity animals ( Table 5). Colostral IgG concentration was significantly lower (P = 0.01) for samples collected later than 12 h after parturition ( Table 5). Length of dry period, dry cow nutrition, estimated BW gain precalving, and season of calving had no effect (P > 0.05) on colostral IgG concentration. Protein. Parity 5+ animals had the greatest colostral protein concentration compared with cows in their first and second parity (Table 5). Cows fed grass silage at 4 to 6 wk prepartum produced greater protein concentration than cows that were fed grazed grass (P = 0.001). Cows fed concentrates 4 to 6 wk prepartum produced lower protein concentration than cows that were not fed concentrates (P < 0.001). Colostrum protein concentration was highest (P = 0.02) in the winter months compared with other seasons (Table 5). Colostral protein concentration was lower (P = 0.001) for samples collected later than 12 h after parturition. Cows that were not immunized against infectious bovine rhinotracheitis produced higher protein concentration (P = 0.03) than cows that were immunized (Table 5). Fat. Cows in their first parity had a higher (P = 0.03) colostral fat concentration than higher-parity cows (Table 6). Cows with a dry period of 8 to 12 wk had higher fat concentration than cows with a dry period of less than 8 wk, but cows with a dry period of 16 wk or longer had a higher (P < 0.001) fat concentration than cows with a dry period of less than 12 wk. Colostrum fat concentration was higher (P = 0.03) in cows that had been immunized against leptospirosis (7.0%), compared with nonimmunized cows (6.1%). Dry cow nutrition showed a significant association with colostral fat concentration; cows fed grass silage had a higher (P < 0.001) fat concentration than cows fed grazed grass. 
Time from calving to colostrum collection had no effect (P > 0.05) on the colostral fat concentration produced at first milking after parturition (Table 6). Lactose. Colostral lactose concentration decreased as parity increased; we observed the lowest lactose concentration in parity 5+ cows (Table 6). Cows with a dry period length of 16 wk or longer had superior (P = 0.007) lactose concentration compared to cows with a dry period length less than 16 wk. We observed the greatest lactose concentration in colostrum from cows that calved in the spring (Table 6). Lactose concentration was greater (P < 0.001) in samples collected later than 12 h after parturition. Farm Management Practices The mean parity of the cows involved in this survey was 3, ranging from 1 to 14. The mean BW of the cows during the precalving period was 609 kg (SD 70.1). The mean BCS of the cows was 2.9 ± 0.5 at calving (range 1.65-4.5). Almost 85% of colostrum samples obtained were from Holstein and Friesian cows, and the rest were from Ayrshire and crossbreeds. The management of dry cows differed across farms in terms of calving season, immunization regimen, feeding, and housing. The mean birth weight of calves born from cows in this study was 40.9 ± 8.4 kg. On-farm colostrum management practices, including volume, timing, and duration of feeding colostrum to calves are shown in Table 1. Almost 52% of calves were given their first feed of colostrum via esophageal tube, 28% were left to suckle the dam, 17% were bottle-fed, and the remaining 3% were fed using a combination of these methods. The majority of calves (80%) were fed >2 L of colostrum at their first feed [mean 2.9 L (SD 0.79)], and on average calves were fed 3.2 h (SD 4.36) after birth. DISCUSSION Studies conducted in the United States have shown large variability in colostrum IgG concentration between individual dairy cows and farms (Kehoe et al., 2007; Morrill et al., 2012). Currently, no data are available to show the variation in colostrum and factors associated with colostrum quality for dairy herds in Northern Ireland, which are typically grassland-based systems. Although this study is specific to dairy farms in Northern Ireland, we expect that the findings will be relevant to grassland-based systems in other parts of the world. This paper provides data on the nutritional, immunological, and bacterial composition of colostrum, detailing how certain physical and managerial factors are associated with colostrum quality and outlining colostrum management practices in grassland-based dairy systems. In the univariate model of this study, we found that individual farm had an effect on colostrum quality in terms of IgG, fat, protein, and lactose concentration. This finding indicated that different management practices on different farms had a significant effect on colostrum quality and confirmed that colostrum quality varies not only between cows but also between herds. Colostrum IgG Concentration The variation in colostral IgG concentration observed across all 21 farms (Figure 1) was similar to previous reports (Gulliksen et al., 2008; Morrill et al., 2012). Of colostrum samples in this current study, 44% contained <50 mg/mL IgG, and were therefore deemed unsatisfactory in terms of quality. Consequently, a sizable proportion of newborn calves from these herds were at increased risk of receiving colostrum of inadequate quality and experiencing failure of passive transfer (FPT).
Taking into account the variations in IgG concentration, it may be relevant to consider how much colostrum a calf requires to achieve apparent passive transfer (APT). A recent study has suggested an intake of 150 to 200 g IgG (Chigerwe et al., 2012) to achieve APT. Using the equation described by Quigley (2001a), we can determine how much colostrum is required to meet the needs of the calf. This involves making assumptions in relation to BW (40 kg), apparent efficiency of absorption (26.4%), plasma volume (9% of BW), and plasma concentration (10 mg/mL). If calves were fed the historical recommendation of 2 L of colostrum, a colostral IgG concentration of 69 mg/ mL would be required to achieve APT. In the current study, 61% of calves would have experienced FPT if fed 2 L of colostrum. On average, in the current study, calves were fed 2.9 L of colostrum for their first feed. Calves fed 2.9 L of colostrum containing at least 50 mg/mL IgG would have achieved APT, but 39% of calves would have experienced FPT if fed this volume at their first feed based on the colostrum IgG concentration. To manage this risk, feeding 4 L of colostrum would result in only 19% of calves experiencing FPT. A number of management practices can have a positive influence on the colostrum quality produced, but it is unlikely that calves from cows that produce colostrum with IgG below 20 to 29 mg/mL will achieve APT, independent of management practice. As reported by others (Tyler et al., 1999;Morrill et al., 2012;Conneely et al., 2013), we found that increased parity positively influenced colostrum IgG concentration. However, on average, primiparous dams produced colostrum of adequate IgG concentration (50.8 mg/mL), and 44% of animals in their first and second parity produced high-quality colostrum (>50 mg/mL IgG), at an average yield of 5.4 L at the first milking postpartum. Consequently, 72% of the cows in their first and second parity produced an adequate IgG yield to provide the calf with a minimum of 150 g of IgG to achieve APT. This indicates that primiparous colostrum should not be automatically discarded and should be tested for IgG concentration. This study also showed that 73% of colostrum samples from cows in their fifth or greater parity were deemed high quality. Previous research has suggested that this is related to increased antigenic exposure in older cows, so that a greater array of antibodies are transferred from bovine serum to the colostrum (Donovan et al., 1986). In addition, the development of the mammary gland may have a role to play: younger cows may not be fully developed, and the transport of IgG into the mammary gland may be reduced (Devery-Pocius and Larson, 1983). In agreement with others (Annen et al., 2004;Rastani et al., 2005;Mayasari et al., 2015), we found that a short dry period had a negative effect on IgG concentration in the univariate analysis. However, in the multivariate model, this association did not persist, in agreement with previous research (Watters et al., 2008;Shoshani et al., 2014). Overall, it is likely that dry period length does not have a major effect on IgG concentration unless the cow has insufficient time to allow for colostrogenesis, which occurs during the last few weeks of pregnancy. Because the colostrogenesis process begins several weeks before parturition (Brandon et al., 1971;Godden, 2008), it was logical to presuppose that maternal nutrition during the dry period might have an effect on colostral Ig concentration. 
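The back-of-envelope requirement discussed above (Quigley, 2001a) can be reproduced directly from the stated assumptions: a 40-kg calf, plasma volume of 9% of BW, a target plasma IgG of 10 mg/mL, and an apparent efficiency of absorption of 26.4%. The short sketch below (illustrative only; the function name is ours) returns the minimum colostral IgG concentration needed for a given first-feed volume and recovers the ~69 mg/mL figure quoted for a 2-L feed.

```python
def min_colostral_igg(feed_volume_l, body_weight_kg=40.0, aea=0.264,
                      plasma_frac=0.09, target_plasma_igg=10.0):
    """Minimum colostral IgG (mg/mL) needed to reach the target plasma IgG."""
    plasma_volume_l = plasma_frac * body_weight_kg               # 3.6 L for a 40-kg calf
    igg_to_feed_g = plasma_volume_l * target_plasma_igg / aea    # ~136 g of colostral IgG
    return igg_to_feed_g / feed_volume_l                         # g/L is numerically mg/mL

print(round(min_colostral_igg(2.0)))   # ~68 mg/mL, matching the ~69 mg/mL in the text
print(round(min_colostral_igg(2.9)))   # ~47 mg/mL, the mean first-feed volume observed
print(round(min_colostral_igg(4.0)))   # ~34 mg/mL, the risk-managed 4-L feeding option
```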
However, in agreement with others, we observed no relationship between dry cow nutrition and colostral IgG concentration (Blecha et al., 1981;Burton et al., 1984;Hough et al., 1990). A limitation of the current study was the restricted range of feed types offered to the cows, with the majority of dairy producers offering nonlactating cows either grass silage or grazed grass. The interval from parturition to colostrum collection was negatively associated with colostrum IgG, in agreement with previous studies (Moore et al., 2005;Morin et al., 2010;Conneely et al., 2013). Therefore, reducing the time from calving to colostrum collection is a simple way for producers to positively influence the quality of colostrum fed to their calves and reduce the risk of FPT. Colostrum feeding method has been found to affect FPT; Besser et al. (1991) reported that the highest rate of FPT occurred when the calf was left to nurse the dam (61.4%), compared with bottle-feeding (19.3%) and using an esophageal tube (10.8%). In addition, Vasseur et al. (2010) found that 22% of Holstein calves 2 to 6 h old were unable to consume 2 L of colostrum from bottle-feeding. In this study, we observed that over 25% of calves were left to suckle the dam and 17% were bottle-fed; to increase APT in calves, it may be necessary for farmers to use esophageal tubes. Previous research found that feeding calves colostrum that was high in bacteria reduced the apparent efficiency of absorption and resulted in calves achieving a lower serum IgG concentration at 24 h after birth (Elizondo-Salazar and Heinrichs, 2009b). In agreement with others (Fecteau et al., 2002;Swan et al., 2007), we found extremely high levels of bacterial contamination in the colostrum samples. It has been suggested that the bacteriological quality of maternal colostrum is influenced by storage method and management practices (Stewart et al., 2005;Houser et al., 2008). We speculate that this may be the reason for the high bacterial contamination in this study. To avoid the risk of feeding pathogenic bacteria to naive calves best practice guidelines must be in place for producers to help prevent bacterial contamination of colostrum. One such practice is heat-treating, which has been shown by Elizondo-Salazar et al. (2010) to reduce bacteria levels: heating colostrum at 60°C for 30 or 60 min reduced the bacterial load. Nutritional Components Few studies have examined variation in the nutritional components of bovine colostrum (Kehoe et al., 2007;Morrill et al., 2012), and no data are available on dairy production systems in Northern Ireland. As suggested by Quigley et al. (2001b), calves fed colostrum that is low in protein may have a reduced ability to achieve glucogenesis during the first 24 h of life. This metabolic process is essential in neonatal calves to produce glucose (Hammon et al., 2013), which is necessary to provide a source of energy for the brain (Zierler, 1999). Similar to IgG concentration, colostral protein concentration improved as parity increased, but this was expected, because IgG is a protein (Parrish et al., 1950). Cows that calved in the winter produced 9 g/L more protein than spring-calving cows, but several factors tend to differ across seasons, including diet (Heck et al., 2009;Yasmin et al., 2012), housing, and climate (Nardone et al., 1997;Cabral et al., 2016). Dams immunized against infectious bovine rhinotracheitis before calving produced 11 g/L more protein than nonvaccinated cows. 
It is currently unknown why immunization is associated with the nutritional components of colostrum; this points to a need for further research. We found that several management practices affected the level of fat produced in colostrum, including the fact that cows dry for longer than 16 wk produced 15 g/L more fat than cows dry for less than 8 wk. In comparison, Shoshani et al. (2014) reported that cows dry for 60 d had increased fat levels in their milk during the first month of lactation cows that were dry for only 40 d. In our study, heifers produced 22 g/L more fat than cows in parity 5+, in agreement with Morrill et al. (2012). Limited research has been conducted into the effect of dry cow nutrition on colostrum nutritional properties. In the current study, we found a relationship between colostral fat concentration and cow diet at 7 to 9 wk before parturition. Lerch et al. (2015) found that a high-energy/high-protein diet may result in the mobilization of body reserves and affect colostral nutritional composition. Lactose is the primary carbohydrate present in colostrum and milk, and the major role of lactose is to regulate water and as a result osmotic content (Davies et al., 1983;Jenness, 1985). In this study, we found that colostrum lactose concentration was negatively correlated (P < 0.001) with IgG concentration (R 2 = 0.34). Thus, increased lactose concentration may have a dilution effect and may result in reduced IgG concentration. This was likely related to the increase in lactose synthesis that occurs with time after parturition and related to a water dilution effect lowering IgG concentration. CONCLUSIONS In the current study, colostrum quality in grasslandbased dairy systems was highly variable in its nutritional, immunological, and bacterial composition. Colostrum IgG concentration averaged 55 mg/mL, with increased parity and sample collection earlier after parturition associated with the greatest IgG concentrations. Parity, prepartum vaccination, season of calving, and dry cow nutrition all affected the nutritional composition of colostrum. The results of this study also highlighted significant levels of bacterial contamination in colostrum, much greater than industry guidelines and an area for further investigation. Improvements should be made in colostrum feeding practices to reduce the number of calves left to suckle the dam and to feed a greater quantity of colostrum as soon as possible after birth. Because APT of immunity to the newborn is associated with the timing, volume, and quality of the colostrum offered to the calf, the findings from this study indicate the importance of measuring colostrum quality and highlight risk factors that dairy producers and advisers should consider when drawing up best practice management guidelines for colostrum management. ACKNOWLEDGMENTS This study was co-funded by the Department of Agriculture and Rural Development in Northern Ireland, and by AgriSearch (farmer levy). Thanks are due to the 21 producers who participated in the survey and to the staff at the AFBI Hillsborough for collection of colostrum samples and data, the laboratory staff in AFBI Hillsborough for undertaking colostrum nutritional analysis, and to the staff in AFBI Veterinary Sciences Division for assisting with colostrum IgG analysis. Amanda Dunn acknowledges the receipt of a PhD studentship from AgriSearch.
2018-04-03T03:49:57.482Z
2017-01-11T00:00:00.000
{ "year": 2017, "sha1": "0026cba53aac3a6b9d8589f55877d4400942ccfd", "oa_license": "elsevier-specific: oa user license", "oa_url": "http://www.journalofdairyscience.org/article/S0022030217300206/pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "41bd9e9d57b9154fcfb3a0b062b5cad40d09421f", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
245852528
pes2o/s2orc
v3-fos-license
Shear strength behaviour of liquefiable sand of petobo on treated by agarose under direct shear test Liquefaction process is associated with the loss of the shear strength of the saturated loose sands caused by strong earthquakes. Due to mitigitation of liquefaction hazard, an appropriate mitigation of liquefaction using environmentally friendly methods is critical and becoming increasingly important and unavoidable. The laboratory investigation was carried out to study the shear strength behaviour of liquefiable sand of Petobo treated by agarose on different concentration 1%,3% 5%. A series of direct shear test were conducted under three level of vertical stress 10 kPa, 20 kPa, and 30 kPa on the specimen. It was found that the optimum content of agarose which can be considered is at 1%-3%, using stress ratio (τ/σv) analysis shows that stress ratio decreases with increasing the vertical stress on the same agar content. The implication this result that the application of this method must consider variation of material source and characteristic, and the suitable level of vertical stresses. Introduction Petobo sub-district is one of the areas that was affected by liquefaction on due to the 7.5 Mw Palu Central Sulawesi Earthquake in 2018. Lateral spreading, surface fracture and flow sliding occur massively and cover a large area. Although, the mechanisms and effects of liquefaction have been studied by a number of researchers [1] [2] but because the coverage area affected is very broad, massive research in various reviews to study further this phenomenon includes behavior of shear strength characteristic of soil sand which can be useful for the purposes of numerical analysis, empirical methods, experimental studies for the purpose of mitigation and remediation. The mechanism that occurs during the liquefaction process is associated with the loss of the shear strength of the saturated loose sands as a result of the loss of effective stress as a consequence of increased pore water stress due to seismic force propagation [3] [4]. Currently, the most common methods propose to improve the mitigation and remediation of liquefaction hazards are vibration, grouting and injection methods, chemical stabilization using lime, cement and fly ash, geosynthetic stabilization [5] [6]. However, due to environmental considerations some of these methods are not recommended, therefore the need to find sustainable technology is becoming increasingly important and unavoidable. In this regards, several alternative environmentally friendly methods that have been researched, the use of biopolymers such as agarose seems quite promising and effective [7]. Recent previous studies related to the use of agarose as a polymer to improve sandy soil strength have done, include by [8] using the unconfined compression test, using 1%-4% agarose concentration, have certain the strength and deformation characteristics behavior in stiffness and ductility. Likewise other reseacher [9]the results of their study as passive stabilization using coloidal silica, while [10] [7] which specifically focused their research on the dynamic loading aspect using the triaxial test concluded that increasing the behavior of shear strength in agorase content 0.5%, 1% and 2%. 
However, in these studies the types and sources of the test materials used were somewhat limited, so assessing this behaviour still requires a variety of sand sources and study types, including material from liquefaction sites and other forms of testing such as direct shear tests. Therefore, in this study, experiments were conducted to explore the behaviour of agarose using a different sand source from previous studies; in this case the sand was taken from a location affected by liquefaction. Sand was taken from Petobo and tested in a direct shear apparatus at three levels of vertical stress, respectively 10 kPa, 20 kPa and 30 kPa, with agarose contents of 1%, 3% and 5% and an untreated case of 0% agarose. The effectiveness of agarose as a stabilizing agent is assessed through variations in the sand source tested and in the loading and testing conditions, so that agarose could be used as an option for liquefaction mitigation according to the conditions and properties of the material being tested. Sand The sandy soil used in this research was taken from Petobo, located in Palu city, Central Sulawesi, at a depth of 1.5 m-2.0 m. The physical properties are presented in Table 1; with Cu = 4.67 and Cc = 0.87, the material is categorized as poorly graded sand. The minimum and maximum void ratios span a very large range, 0.58 to 0.98, so that the soil can exist in very loose to very dense conditions. The grain size distribution plotted in Figure 1 is compared with the susceptibility curves of the Japanese liquefaction susceptibility criteria for port and harbour facilities [11]. Agarose The commercial agarose used in this study was in powder form, produced by PT Agar Swallow, without preservatives or food hardener. The agarose powder begins to dissolve at 85°C and solidifies at 32°C-40°C, where the viscosity is constant. A preliminary test was carried out to obtain the viscosity of the agarose solution at mixtures of 1%, 3% and 5%. Sample Preparation A quantity of natural sandy soil was sieved, retaining the fraction passing sieve number #8 (2.36 mm) and retained on sieve number #100 (0.15 mm), to obtain clean sand samples with zero fines content. The clean sand specimens were prepared using the wet mixing method. The required dry weight of clean sand to meet the specimen mixing weight was recalculated as a function of the relative density and saturated water content (see the sketch at the end of this subsection). The agarose-water solution content was set for all test specimens by weight, at 1%, 3% and 5%. The mixture of agar and water, previously heated to above 85°C, was poured into a cup that had been filled with a measured quantity of dry sand, then stirred gently until evenly mixed. The mixture was placed into specimen molds, 63 mm in diameter by 35 mm in height, with the weight adjusted to produce wet densities of 15.0 kN/m³ to 15.5 kN/m³ and relative densities of 36%-47%. The same initial wet densities were maintained during test preparation. Trial tests were undertaken to arrive at a sample preparation method that was simultaneously easy and reproducible. All samples were cured for one day before being tested. Figure 2. The test specimen preparation process: (a) mixing agarose solution and dry sand; (b) pouring the mixture into a direct shear test mold; (c) specimens that have been left for one day, prepared and placed in the direct shear test box.
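To make the specimen-preparation step above concrete, the following sketch (ours, not the authors' procedure) back-calculates the dry sand weight needed to fill the 63 mm by 35 mm mold at a target relative density, using the void-ratio range quoted for this sand (e_min = 0.58, e_max = 0.98); the specific gravity Gs = 2.65 is an assumed typical value that is not reported in this excerpt.

```python
# Illustrative back-calculation of the dry sand weight for a direct shear specimen
# from a target relative density; Gs is an assumed value, the rest follows the text.
import math

GAMMA_W = 9.81             # unit weight of water, kN/m^3
E_MIN, E_MAX = 0.58, 0.98  # void-ratio range quoted for the Petobo sand
GS = 2.65                  # assumed specific gravity of the sand solids

def dry_weight_for_mold(target_dr, diameter_m=0.063, height_m=0.035):
    """Dry weight of sand (N) filling the mold at relative density target_dr."""
    e = E_MAX - target_dr * (E_MAX - E_MIN)       # void ratio at the target Dr
    gamma_d = GS * GAMMA_W / (1.0 + e)            # dry unit weight, kN/m^3
    volume_m3 = math.pi / 4.0 * diameter_m**2 * height_m
    return gamma_d * volume_m3 * 1000.0           # convert kN to N

for dr in (0.36, 0.47):
    print(f"Dr = {dr:.0%}: dry weight ~ {dry_weight_for_mold(dr):.2f} N")
```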
Test procedure The testing apparatus used in the current study was a direct shear test for soils. All tests were conducted following the ASTM D-3080 procedure. The shear loading was controlled using an automated system at a rate of 1.0 mm/minute. The specimens were tested under three different vertical normal stress values of 10 kPa, 20 kPa and 30 kPa. Effect of agar content on shear strength parameters The direct shear test results at the three levels of vertical stress for all specimens with the specified agar concentrations are tabulated in Table 2. The calculation of the shear strength parameters, cohesion (c) and angle of internal friction (φ), follows the Mohr-Coulomb formula. Meanwhile, the peak shear strength is obtained from the shear stress vs shear displacement curve of the direct shear test. It was found that 1% agar content increased soil cohesion by 11.84%, from 8.86 kPa to 9.91 kPa, and for 3% agar content the increase was 9.81%, from 8.86 kPa to 9.67 kPa, while 5% agar content decreased the cohesion value by 8.24%, from 8.86 kPa to 8.13 kPa. These results indicate that the agarose mixed into the soil fills the pores between grains and, in proportion to its content, changes the cohesion and friction between soil particles; high concentrations produce weak bonds and low friction between grains. Effect of agar content on peak shear strength behaviour The specimen mixtures were tested under three different normal vertical stress values of 10 kPa, 20 kPa and 30 kPa in the direct shear test, as shown in Table 2. The maximum shear strength was 32.67 kPa and the minimum value was 14.63 kPa, which is lower than the minimum value of the untreated sample. The peak shear strengths at the different vertical stress levels and agarose contents are plotted in Figure 3. The shear test results show that the optimum value of shear strength was reached at 1% among all the agarose concentrations tested. The curves at 10 kPa and 20 kPa vertical stress change only slightly at each given agar content, which differs from the behaviour at 30 kPa vertical stress. At the 5% level of agarose content, the achieved shear stress was lower than that of the untreated soil at all vertical stress levels; this is because the volume of agarose is no longer effective in forming cohesion and friction bonds between grains, so from this result the optimum content of agarose lies between 1% and 5%. Figure 3. Peak shear strength of specimens mixed with agar at three levels of vertical stress, respectively 10 kPa (∎), 20 kPa (•) and 30 kPa (▲); for the untreated sample (0%) and agar contents of 1%, 3% and 5%. Effect of agar content on vertical normal stress According to the Coulomb formula, the shear strength of the soil is a combination of the cohesion value, the shear angle and the applied vertical stress. As a result of changes in the shear angle and soil cohesion, different values of shear strength are obtained. The analysis was carried out using the stress ratio (τ/σv), i.e. the shear stress normalized by the vertical stress level applied to each specimen. It was found that the peak shear stress depends on the level of vertical stress applied, with the value of the stress ratio decreasing as the vertical stress increases, as can be seen from Figures 4-6; a short numerical sketch of the Mohr-Coulomb fit and of this stress-ratio normalization is given below.
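The sketch below (ours, not the authors' code) shows how the two analyses just described can be carried out: a least-squares fit of the Mohr-Coulomb criterion τ = c + σv tan(φ) to peak shear stresses at the three vertical stress levels, and the stress ratio τ/σv at each level. The peak shear stresses used here are placeholder values, not the data of Table 2.

```python
# Sketch (not the authors' code) of fitting tau = c + sigma_v * tan(phi) to peak
# shear stresses and of the stress ratio tau/sigma_v; numbers are placeholders.
import numpy as np

sigma_v = np.array([10.0, 20.0, 30.0])     # applied vertical stress, kPa
tau_peak = np.array([15.0, 22.0, 30.0])    # hypothetical peak shear stress, kPa

slope, intercept = np.polyfit(sigma_v, tau_peak, 1)   # straight-line Mohr-Coulomb fit
cohesion_kpa = intercept                               # c
friction_angle_deg = np.degrees(np.arctan(slope))      # phi

print(f"c ~ {cohesion_kpa:.2f} kPa, phi ~ {friction_angle_deg:.1f} deg")

stress_ratio = tau_peak / sigma_v                      # tau/sigma_v at each level
print("tau/sigma_v:", np.round(stress_ratio, 2))       # falls as sigma_v increases
```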
These results show that the increase in vertical stress increases the shear strength due to the confining stress of the shear ring, and the greatest is at 1% level, and decreases drastically at 5% level which is lower than the shear stress of untreated soil. This indicates that the high vertical stress causes the adhesions and bonds formed between the grains and the agarose gel to break. The implication is that the application of this method must consider vertical stresses, or in other words its application is more suitable in conditions of low vertical stress, this result relevant to previous research by [10]. Summary and Conclusions This study investigated the shear stress behavior of a mixture of sand and agarose using a direct shear test under 10kPa, 20 kPa and 30 kPa vertical stress. Mixing solution of agarose content tested at 1%;3% and 5% for 24 hours curing using sand which has previously been examined for criteria of most liquefiable soil, taken from the Petobo village, previously exposed to liquefaction due to the 7.5Mw earthquake in 2018 so that it represents a liquifiable soil. The effectiveness so far has been observed using the ratio of normal stress to shear stress according to the agarose content. The significant conclusions drawn from this study include the following. The optimum content of agarose which can be considered is at 1% -3%. Whereas at 1% agarosa content, the cohesion value decreases but the shear angle does not change significantly, but for 3% the cohesion and sediment of soil shear changes significantly. So that the choice of 1% level can be selected if the target of modification is to change the cohesion value without changing the friction angle, while 3% changes both the cohesion value and the friction angle. Peak shear strength shows the highest value at 1% content of agarose, for all vertical stress levels, while for 5% peak shear strength is lower than untreated soil at all levels of vertical stress, this is due to the high volume of agaraso is no longer effective in forming cohesion and friction bonds between grains. The high vertical stress causes the adhesions and bonds formed between the grains and the agarose gel to break. The implication is that the application of this method must consider vertical stresses, or in other words its application is more suitable in conditions of low vertical stress. This conclusion is drawn based on a limited number of tests and methods, therefore it can be developed using more test samples, with variations in moisture content and mixed viscosity and saturation, the test method on dynamic loading with the measurement of pore water pressure and long term of treatment must be considered in the next develop research. The results of this study may be different or same as previous research, therefore it is important to conduct more research using different sand sources to assess the significance of agarose as a stabilization material for mitigating of liquefaction hazard.
2022-01-11T20:05:42.321Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "43de5d3b57b1bf1be13ac026c6b673db2006d092", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/1212/1/012035", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "43de5d3b57b1bf1be13ac026c6b673db2006d092", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Physics" ] }
236495190
pes2o/s2orc
v3-fos-license
Expression and role of P-element-induced wimpy testis-interacting RNA in diabetic-retinopathy in mice BACKGROUND As one of the major microvascular complications of diabetes, diabetic retinopathy (DR) is the leading cause of blindness in the working age population. Because the extremely complex pathogenesis of DR has not been fully clarified, the occurrence and development of DR is closely related to tissue ischemia and hypoxia and neovascularization The formation of retinal neovascularization (RNV) has great harm to the visual acuity of patients. AIM To investigate the expression of P-element-induced wimpy testis-interacting RNA (piRNA) in proliferative DR mice and select piRNA related to RNV. METHODS One hundred healthy C57BL/6J mice were randomly divided into a normal group as control group (CG) and proliferative DR (PDR) group as experimental group (EG), with 50 mice in each group. Samples were collected from both groups at the same time, and the lesions of mice were evaluated by hematoxylin and eosin staining and retinal blood vessel staining. The retinal tissues were collected for second-generation high-throughput sequencing, and the differentially expressed piRNA between the CG and EG was detected, and polymerase chain reaction (PCR) was conducted for verification. The differentially obtained piRNA target genes and expression profiles were enrichment analysis based on gene annotation (Gene Ontology) and Kyoto Encyclopedia of Genes and Genomes. RESULTS In the CG there was no perfusion area, neovascularization and endothelial nucleus broke through the inner boundary membrane of retinap. In the EG, there were a lot of nonperfused areas, new blood vessels and endothelial nuclei breaking through the inner boundary membrane of the retina. There was a statistically significant difference in the number of vascular endothelial nuclei breaking through the inner retinal membrane between the two groups. High-throughput sequencing analysis showed that compared with the CG, a total of 79 piRNAs were differentially expressed in EG, among which 43 piRNAs were up-regulated and 36 piRNAs were down-regulated. Bioinformatics analysis showed that the differentially expressed piRNAs were mainly concentrated in the signaling pathways of angiogenesis and cell proliferation. Ten piRNAs were selected for PCR, and the results showed that the expression of piR-MMU-40373735, piR-MMU-61121420, piR-MMU-55687822, piR-MMU-1373887 were high, and the expression of piR-MMU-7401535, piR-MMU-4773779, piR-MMU-1304999, and piR-MMU-5160126 were low, which were consistent with the sequencing results. CONCLUSION In the EG, the abnormal expression of piRNA is involved in the pathway of angiogenesis and cell proliferation, suggesting that piRNAs have some regulatory function in proliferative diabetic-retinopathy. INTRODUCTION As one of the major microvascular complications of diabetes, diabetic retinopathy (DR) is the leading cause of blindness in working age population. Because the extremely complex pathogenesis of DR has not been fully clarified, the occurrence and development of DR is closely related to tissue ischemia and hypoxia and neovascularization[1,2]. The formation of retinal neovascularization (RNV) has great harm to the visual acuity of patients. Patients who are refractory to treatment may experience serious complications, such as neovinal glaucoma, vitreous hemorrhagia, and retinal detachment which can lead to permanent blindness. RNV is a key link in proliferative DR (PDR) and is a complex pathological process. 
Intravitreal injection of anti-vascular endothelial growth factor (VEGF) is an effective method for treating ocular neovascular diseases. However, the procedure can result in complications, side effects, and only short-term effects. Previous studies reported that anti-VEGF treatment cannot prevent the formation of RNV in some patients [3,4]. Therefore, it is particularly important to find other VEGF-independent pathways that promote angiogenesis to identify safe therapeutic targets against DR. P-element-induced wimpy testis (PIWI)-interacting RNA (piRNA) was first found in the ovarian germ cells of Drosophila melanogaster in 2006. This novel short noncoding small RNA has an average length of 26-31 nucleotides[5] and functions by binding with PIWI protein to form a piRNA-induced silencing complex (PIRISC)[6-8]. piRNA plays an important role in transposon silencing, epigenetic regulation, protein regulation of genome rearrangement, spermatogenesis, and germ stem cell maintenance [9,10]. Additionally, piRNA participates in tumor formation, human aging, and neural axon regeneration [11][12][13]. Based on existing studies and literature reports, which highlight the role of piRNA in neovascularization-related diseases [14,15], piRNA may play an important role in the formation and development of DR. However, to the best of our knowledge, no previous study has explored this hypothesis. In the present study, the role of piRNA in RNV diseases was evaluated by sequencing of RNA obtained from the retinal tissues of mice with PDR mice and normal mice. The study aimed to provide theoretical support for the possible alternative clinical treatment of DR. Experimental animals and models We used 100 7-d-old C57 mice of either sex in this study. All experimental animals were purchased from Changsheng Biology Co., Ltd. (Shenyang, China). The animals were fed and related operations were conducted in the animal laboratory of Shengjing Hospital in a stable, specific pathogen-free environment; the temperature was maintained at 23 ± 2 °C, with a 12 h light cycle. The experimental study was approved by the animal ethics committee of Shengjing Hospital of China Medical University (2020PS078K). The animals were randomly divided into two groups, each consisting of 50 animals: normal group as control group (CG) and PDR group as experimental group (EG). Mice in CG were fed with mice and their mothers under normal conditions without any treatment. Briefly, 7-d-old EG mice and their mothers were housed and fed in a closed glass container with an oxygen concentration of 75% ± 2% for 5 d. Next, 12-d-old mice were fed under normal conditions, and the closed container was opened once per day to replace the bedding material, add water, and replace the mother mice [16,17]. All animals were euthanized at the age of 17 d for subsequent histopathological examination and total RNA extraction. Retinal patch staining Mice aged 17 d from the two groups were anesthetized and sacrificed, after which their eyeballs were removed. The eyeballs were fixed in 4% paraformaldehyde for 12 h. The contents of the anterior segment and vitreous cavity were removed, and the retina was carefully separated. The retinas were incubated in isolectin B4-594 (Invitrogen, Carlsbad, CA, United States) in a shaker at 4 °C overnight. Next, the glass slide was covered with an anti-radiation agent, and this agent was also applied to the retina [18]. The retinas were observed under a microscope (Eclipse NI, Nikon, Tokyo, Japan) and images were collected. 
Hematoxylin and eosin staining After the eyeballs of mice were removed, they were fixed in 4% paraformaldehyde for 12 h. Next, they were embedded in paraffin and cut along the sagittal plane of the optic nerve to obtain serial sections with a thickness of 4 μm. Non-continuous sections were acquired from each eye for hematoxylin and eosin (HE) staining, and sections containing the optic nerve were excluded. The number of endothelial cells breaking through the inner limiting membrane in the vitreous cavity was counted, and the average of this number was calculated in each section [19]. Only the vascular nuclei located closely to the retina were counted; thus, vascular nuclei not close to the internal limiting membrane in the vitreous cavity were excluded [20,21]. High-throughput sequencing The total RNA of each retinal tissue sample was extracted using Trizol reagent according to the manufacturer's instructions (Takara, Japan). The quality of RNA was analyzed using NanoDrop ND-2000 (Thermo Fisher Scientific, United States). A small RNA library was constructed by real-time polymerase chain reaction (RT-PCR) using 5' and 3' linkers. Agilent 2100 and Applied Biosystems StepOnePlus Real-Time PCR systems were used to assess the quality and yield of the constructed library (Life Technologies). Finally, the RNA was sequenced by Illumina Hiseq 2000 (Illumina, San Diego, CA, United States). Sequencing result analysis After standardization and quality control of the sequencing data, 26-31 nt piRNAs were selected from small RNA reads, and the differential expression of piRNA was analyzed. Fold change in piRNA expression ≥ 1.5 (P < 0.05) was used as the threshold for determining gene upregulation or downregulation. Gene Ontology (GO) enrichment (http://www.geneontology.org/) was used to analyze the abnormal expression of genes, and KO enrichment (https://www.genome.jp/kegg/pathway) was utilized to determine the biological function of the differentially expressed piRNA and investigate its possible involvement in the disease mechanism [22]. RT-PCR RT-PCR was performed on total RNA extracted from retina samples using Trizol reagent (Takara, Shiga, Japan) according to the manufacturer's instructions. SPSS 22.0 software (SPSS, Inc., Chicago, IL, United States) was used for all statistical analyses. The mean ± SD of relative piRNA expression in PCR analyses was calculated. Student's t-test was used to analyze the difference between the two groups. P < 0.05 indicated a statistically significant difference. Evaluation of fluorescence imaging The samples obtained from the normal control and EGs of mice were stained with isolectin B4-594 to observe the retinal vascular structure. In the normal CG, the retinal blood vessels were intact and clear, and large blood vessels were characterized by an even radial distribution around the optic disc reaching the periphery of the retina. In the DR group, there was no perfusion in the large vessel area and no decomposition of the perfusion area. The large vessels near the optic disc were tortuous and irregular, and they were mainly visible in the middle and periphery. A large number of disordered new vessels and new vascular buds were detected at the retina boundary along with vascular leakage (Figure 1). Quantitative analysis of retinal vascular endothelial cells HE staining images were used to quantitatively analyze the number of retinal vascular endothelial nuclei. 
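Before the histology results below, the differential-expression screen described in the sequencing-analysis subsection (fold change ≥ 1.5 with P < 0.05, assessed per piRNA with Student's t-test) can be illustrated with a short sketch; the example counts and the exact function are ours, not the authors' pipeline.

```python
# Hypothetical illustration of the fold-change / P-value screen described above;
# the example counts are made up and do not come from the study's data.
import numpy as np
from scipy import stats

def screen_pirna(expr_eg, expr_cg, fc_threshold=1.5, p_threshold=0.05):
    """Classify one piRNA as 'up', 'down' or 'ns' from replicate expression values
    in the experimental (PDR) and control groups."""
    fold_change = np.mean(expr_eg) / np.mean(expr_cg)
    _, p_value = stats.ttest_ind(expr_eg, expr_cg)     # Student's t-test
    if p_value < p_threshold and fold_change >= fc_threshold:
        call = "up"
    elif p_value < p_threshold and fold_change <= 1.0 / fc_threshold:
        call = "down"
    else:
        call = "ns"
    return fold_change, p_value, call

eg = np.array([120.0, 135.0, 128.0])   # made-up normalized counts, PDR retinas
cg = np.array([60.0, 72.0, 66.0])      # made-up normalized counts, control retinas
print(screen_pirna(eg, cg))            # e.g. (~1.93, small P, 'up')
```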
In the CG, the structure of the retinal internal limiting membrane was intact and smooth, with occasional vascular endothelial nuclei breaking through the internal limiting membrane on the vitreous side (average number 1.163 ± 0.31). In the EG, the morphology of the internal limiting membrane was irregular, and cells under the internal limiting membrane proliferated and were arranged in a disorderly manner. A large number of clusters of vascular endothelial cells broke through the inner limiting membrane and formed the neovascular lumen (average number 29.42 ± 1.07). There was a significant difference in the number of retinal vascular endothelial cells that broke through the inner limiting membrane between the two groups (P < 0.01) (Figure 2). High-throughput sequencing results There were 79 piRNAs differentially expressed in EG compared with CG mice, among which 43 were upregulated and 36 were downregulated (Figure 3). Verification of results indicating differential gene expression We selected 10 differentially expressed genes according to their gene expression levels determined by high-throughput sequencing and verified the expression levels in the two groups of retina samples by RT-PCR. The primer sequences used in this study are shown in Table 1. The results of quantitative verification by RT-PCR are shown in Figure 4. GO analysis To further analyze the biological functions of differentially expressed genes in EG and CG mice, GO and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses were performed. The results showed that the differentially expressed piRNAs were involved in many biological processes, such as sensory development, the G-protein-coupled receptor signaling pathway, regulation of inflammatory factors, and visual perception. KEGG analysis KEGG analysis of differentially expressed genes and proteins showed that the enriched pathways were mainly related to angiogenesis and cell proliferation and were strongly correlated with RNV (Figure 6). Analysis of protein-protein interaction network We obtained piRNA-target gene pairs correlated with expression level to construct the network analysis diagram. We selected the EPO, HIF-1α, IGF1, and TGF-β2 genes, which are potentially related to RNV, to construct the interaction analysis diagram (Figure 7). DISCUSSION Non-coding RNA is involved in the occurrence and development of several diseases. Numerous studies have shown that non-coding RNA is an important factor in the pathophysiological changes leading to malignant tumors and vascular diseases [23]. piRNA is a type of small non-coding RNA with an important role in animal development, reproduction, and gene regulation[10,24]. To date, 23439 piRNAs have been identified in the human genome, which is equivalent to the number of proteins encoded by mRNA (about 20000) and more than the number of miRNAs. These numbers indicate that piRNAs play an important role in regulating gene transcription [25,26]. After transcription, PIWI-mediated cleavage and amplification of piRNA clusters eventually form mature, biologically active piRNAs that act by forming PIRISC [27]. PIRISC, a silencing complex formed by piRNA and PIWI protein, can inhibit its targets by transcriptional gene silencing and post-transcriptional gene silencing to maintain the genomic integrity of the germline [27,28]. PIRISC regulates transposon expression at the transcriptional level by inducing epigenetic repression via histone H3K9me3 and DNA methylation [29].
Some studies report a larger number of allowed mismatches between target mRNA and piRNA as opposed to between target mRNA and miRNA [30]. siRNA and miRNA are easily and rapidly degraded by nucleases, whereas piRNA is relatively stable in the serum, and thus has the potential to serve as a marker for diagnosis and prediction of disease progression [31]. piRNAs are presently thought to act mainly in somatic cells and cancer tissues and are involved in cell proliferation, apoptosis, cell cycle arrest, angiogenesis, invasion, and metastasis [32]. Moreover, studies have shown that PIRISC may participate in tumorigenesis by inducing abnormal DNA methylation, which leads to genomic silencing [33]. Further, the piRNA-30473/WTAP/HK2 axis promotes the occurrence of breast cancer by regulating the methylation of m6A RNA in diffuse large B-cell expression of ICAM-1 and CXCR4 [36]. Downregulation of piRNA-36712 expression is known to result in upregulation of SEPW1 expression, which in turn inhibits the expression of its downstream gene p53 and therefore it suppresses the formation of malignant tumors [37]. In addition, piRNA plays an important role in gastric cancer, There are many models for the study of DR. At present, the model of RNV and vascular leakage without hyperglycemia has been applied. This model simulates PDR or the process to be developed from non-PDR (NPDR) to PDR. The retinopathy injury caused by hypoxia is due to the occurrence of the release of angiogenesis actors, presented with microhemangiomas, vascular leakage, venous occlusion, capillary in perfusion, neovascularization, and even vitreous hemorrhage, and retinal detachment [16]. There are notable nonperfusion areas in the center of the retina, the central great vessels are tortuous, and the number of vascular nuclei breaking through the inner limiting membrane is significantly increased, which are important factors indicating the success of the EG[39]. HE staining of retinal slices and paraffin sections of EG samples showed that there were large areas of nonperfusion, neovascularization, and vascular endothelial cells breaking through the inner limiting membrane in the retina. DR is one of the most common microvascular diseases of diabetes and also the main cause of blindness in diabetic patients [40]. Studies have shown that DR occurs in both type 1 diabetes and type 2 diabetes [41]. With the increase in the number of diabetic patients, DR has become the main cause of visual impairment in diabetic patients [42]. The whole pathological process of DR includes important pathological changes such as loss of retinal capillary pericytes, thickening of basement membrane, loss of endothelial barrier function, destruction of blood-retinal barrier, and lead to retinal ischemia, which will increase the level of VEGF. Studies have shown that overexpression of VEGF is associated with RNV, which can cause retinal hemorrhage, macular edema, retinal detachment, and neovascularization glaucoma, etc., leading to severe visual impairment and eventually blindness. The development of DR involves two stages: early NPDR and advanced PDR. The former is mainly characterized by increased retinal permeability and intraretinal hemorrhage, while the latter is mainly manifested by RNV. In NPDR, high glucose induced retinopathy mainly includes loss of capillary pericytes, thinning of the vascular layer and destruction of the blood-retinal barrier, which further leads to retinal ischemia and hypoxia. 
When the disease progresses to PDR, neovascularization occurs and eventually leads to severe visual impairment. RNV is a common pathological change in many retinopathies, including DR, retinopathy of prematurity, and age-related macular degeneration. The common feature of clinical treatment of RNV as well as malignant tumors is treatment with targeted drugs, mainly anti-VEGF drugs. However, targeted drugs against RNV have short action time and require multiple intraocular injections, creating safety risks. Based on of the findings of previous studies on the role of piRNA in cancers and neovascularization-related diseases, we compared piRNA expression levels between the DR model and CGs. The results revealed 79 piRNAs with differential expression in EG and CG. Through GO and KEGG analysis, we established that the mRNA of the differentially expressed piRNAs was involved in processes such as angiogenesis, optic nerve development, inflammation, and proliferation of cells. Among them, EPO, HIF-1α, IGF1, TGF-β2, and other genes are closely related to angiogenesis. Interestingly, a change in EPO expression is considered as an important factor affecting the retinopathy of prematurity. EPO treatment can effectively protect the nervous system and optic nerve development of premature infants [42,43]. HIF-1α is stably expressed under hypoxia; it can regulate EPO and VEGF expression and promote RNV [44,45]. Inhibiting the VEGF/VEGFR2 and HIF-1α/VEGF signaling pathways can prevent angiogenesis [46]. TGF-β2 is a pro-inflammatory cytokine precursor related to the pathogenesis of DR. IGF1 has been identified as the direct target of miR-142-5p. It can reduce the level of miR-142-5p by activating the IGF1/IGF1R median signaling pathway (involving p-PI3K, p-ERK, p-Akt, and VEGF activation), eventually leading to cell proliferation and is involved in the pathological process of DR [47,48]. piRNA plays an important role in various diseases. By interacting with PIWI protein, piRNA can participate in cancer formation and neovascularization through DNA methylation. It can also affect the expression of target genes. We examined the differential expression of piRNA in an oxygen-induced retinopathy mouse model and the potential cellular pathways involved in this process. The study identified a set of target genes that can enhance the theoretical understanding of the role of piRNA in RNV. Because the specific mechanism of action has not been studied in detail, studies are needed to explore its mechanism in a larger sample size. Moreover, it can drive further research on new strategies for clinical treatment. We plan to predict the downstream targets of each differentially expressed piRNA and verify the predicted targets using molecular biology methods. These results provide a foundation for further exploration of the molecular mechanism underlying the development of PDR. CONCLUSION Abnormal expression of the piRNAs are involved in pathways of angiogenesis and cell proliferation, which suggests that piRNAs may regulate some functions in proliferative DR. Research background Retinal neovascularization is caused by the progression of ischemic retinal diseases, including diabetic retinopathy, retinopathy of prematurity, age-related macular degeneration, and retinal vein occlusion. Complications of retinal neovascularization can severely impair vision or lead to permanent blindness. 
Research motivation To explore the upstream molecules of vascular endothelial growth factor or ratelimiting steps of angiogenesis, and to reveal new approaches to the treatment of diabetic retinal.
2021-07-30T06:08:30.220Z
2021-07-15T00:00:00.000
{ "year": 2021, "sha1": "8c8c01a1ce08ade391ff1a9a89124ed8e388247c", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.4239/wjd.v12.i7.1116", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8c8c01a1ce08ade391ff1a9a89124ed8e388247c", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
257834178
pes2o/s2orc
v3-fos-license
Primordial black holes and stochastic inflation beyond slow roll: I -- noise matrix elements Primordial Black Holes (PBHs) may form in the early Universe, from the gravitational collapse of large density perturbations, generated by large quantum fluctuations during inflation. Since PBHs form from rare over-densities, their abundance is sensitive to the tail of the primordial probability distribution function (PDF) of the perturbations. It is therefore important to calculate the full PDF of the perturbations, which can be done non-perturbatively using the 'stochastic inflation' framework. In single field inflation models generating large enough perturbations to produce an interesting abundance of PBHs requires violation of slow roll. It is therefore necessary to extend the stochastic inflation formalism beyond slow roll. A crucial ingredient for this are the stochastic noise matrix elements of the inflaton potential. We carry out analytical and numerical calculations of these matrix elements for a potential with a feature which violates slow roll and produces large, potentially PBH generating, perturbations. We find that the transition to an ultra slow-roll phase results in the momentum induced noise terms becoming larger than the field noise whilst each of them falls exponentially for a few e-folds. The noise terms then start rising with their original order restored, before approaching constant values which depend on the nature of the slow roll parameters in the post transition epoch. This will significantly impact the quantum diffusion of the coarse-grained inflaton field, and hence the PDF of the perturbations and the PBH mass fraction. 'Cosmic inflation' (a period of accelerated expansion) has emerged as the leading scenario for the very early Universe, prior to the commencement of the radiation-dominated hot Big Bang [20][21][22][23][24][25]. A period of at least 60-70 e-folds of inflation generates natural initial conditions [26][27][28][29][30][31]. Furthermore quantum fluctuations of the inflaton field can generate the density perturbations from which structure forms. Observations of the anisotropies in the Cosmic Microwave Background (CMB) radiation [32,33] provide strong evidence that structure formation on cosmological scales is seeded by almost scale-invariant, nearly Gaussian, adiabatic initial density fluctuations, consistent with the predictions of the simplest single field slow-roll inflation scenario [25,34]. CMB observations [35,36] are consistent with the inflaton field, ϕ, rolling slowly down an asymptotically flat potential V (ϕ) during the epoch when cosmological scales exit the Hubble radius, 50-60 e-foldings before the end of inflation. However the scales probed by CMB and large scale structure (LSS) observations correspond to only 7-8 e-folds of inflation, and hence a relatively small region of the inflaton potential. On smaller scales, deviations from slow roll may lead to interesting changes in the primordial perturbations. In particular, if the scalar perturbations are sufficiently large on small scales, then PBHs may form when these modes reenter the Hubble radius during the post-inflationary epoch. PBHs are therefore a powerful probe of the inflaton potential over the full range of field values. Large, PBH forming, fluctuations can be generated by a feature in the inflationary potential, such as a flat inflection point (see Fig. 1). 
Such a feature can substantially slow down the already slowly rolling inflaton field, causing the inflaton to enter into an ultra slow-roll (USR) phase, which leads to an enhancement of the power spectrum, P ζ , of the primordial curvature perturbation, ζ. There are several subtleties in calculating the abundance of PBHs formed from inflation models with a feature in the potential. Firstly, the sharp drop in the classical drift speed of the inflaton means that the effects of stochastic quantum diffusion on its motion become non-negligible, and potentially even dominant. Even more importantly, since PBHs form from the rare extreme peaks of curvature fluctuations 1 their mass fraction at formation, β PBH , is sensitive to the tail of the probability distribution function (PDF) P [ζ] of the primordial fluctuations. Consequently, perturbative computations using the power-spectrum may lead to an inaccurate estimation of the PBH mass fraction. Hence the calculation of the full primordial PDF, which can be done non-perturbatively using the 'Stochastic Inflation' (SI) framework [28,[42][43][44][45][46][47][48], is extremely important (see Refs. [37,38,[49][50][51][52][53][54][55]). The stochastic inflation formalism is an effective treatment of the dynamics of the longwavelength (IR) part of the inflaton field coarse-grained on scales much greater than the Hubble radius i.e. k ≤ σ aH, with the constant σ ≪ 1. In this framework, the evolution of the coarsegrained inflaton field is governed by two first-order non-linear classical stochastic differential equations (Langevin equations) which receive constant quantum kicks from the small scale UV modes that are exiting the Hubble radius due to the accelerated expansion during inflation. Hence the small-scale fluctuations constitute classical stochastic noise terms in the Langevin SR-II SR-I USR Figure 1: A schematic illustration of a plateau potential (solid green line). The 'CMB Window' represents field values corresponding to cosmological scales k ∈ [0.0005, 0.5] Mpc −1 that are probed by CMB observations. The blue star represents the CMB pivot scale k * = 0.05 Mpc −1 . The potential has a flat inflection-point like segment (highlighted with pink shading) which results in ultra slow-roll (USR) inflation. After the first slow-roll phase (SR-I) near the CMB Window, the inflaton enters into an USR phase. During this transient phase of USR, the second slow-roll condition (see Eq. (2.7)) is violated, specifically η H ≃ +3. This leads to an enhancement in the primordial perturbations on small scales. Later, the inflaton emerges from the USR into another slow-roll phase (SR-II) before inflation ends at ϕ end . equations denoted by Σ ϕϕ , Σ ππ , and Σ ϕπ corresponding to the inflaton field noise, momentum noise, and the cross-noise terms (defined in Eq. (3.15)). The SI formalism is generally combined with the classical δN formalism [28,[75][76][77][78][79] in order to compute cosmological correlators in this framework. This leads to the emergence of the stochastic δN formalism 2 [42,[46][47][48]56]. The PDF P [ζ] of the primordial curvature perturbation can then be determined by using the techniques of first-passage time analysis for the stochastic distribution of the number of e-folds N with fixed boundary conditions on the coarse-grained inflaton field. 
A convenient analytic approach to obtain the distribution of first-passage e-folds (and hence the PDF P [ζ]) is to solve the corresponding Fokker-Planck equation (FPE) with the same boundary conditions [56,59,69]. In the analysis of stochastic dynamics, the noise terms in the FPE are assumed [48,56,59,69] to be of de Sitter-type 3 , i.e. Σ ππ , Σ ϕπ ≃ 0, and the field noise, Σ ϕϕ = (H/(2π)) 2 , is constant (where H is the Hubble expansion rate during the de-Sitter type phase). However, in the context of PBH formation, since slow roll is usually violated, it is important to compute the stochastic noise matrix elements more accurately, otherwise the determination of the PDF becomes inaccurate, which in turn leads to an imprecise estimation of the PBH mass fraction β PBH . Our aim is to develop analytical and semi-analytical techniques to estimate the full PDF P [ζ] using the stochastic inflation formalism beyond slow-roll. In the present paper, we carry out a thorough analytical and numerical computation of the stochastic noise-matrix elements accurately beyond the slow-roll approximations. In a forthcoming paper [81] we determine the PDF P [ζ] by solving the Fokker-Planck equation beyond slow-roll with appropriate noise matrix elements and discuss the implications for estimating the mass fraction of PBHs. In what follows, we begin with a brief introduction to the classical inflationary dynamics in Sec. 2, with particular focus on the ultra slow-roll dynamics across a flat segment in the inflaton potential. In Sec. 3, we discuss the quantum dynamics in the stochastic inflation framework. We introduce the Langevin equation in Sec. 3.1 and emphasise the importance of the noise matrix elements in the adjoint Fokker-Planck equation in Sec. 3.2. Section 4 is dedicated to the computation of the noise matrix elements which is the primary focus of this work. We numerically compute the noise matrix elements for a slow-roll potential as well as a potential with a slow-roll violating feature in Sec. 4.2.1 before proceeding to carry out a thorough analytical treatment in Sec. 4.2.2 for instantaneous transitions between different phases during inflation. We discuss the potential implications of our results for the computation of the PBH mass fraction and spell out a number of complexities associated with the computation in Sec. 5 before concluding with a summary of our main results in Sec. 6. Appendix A provides a derivation of the Mukhanov-Sasaki equation in spatially flat gauge. Appendix B deals with the analytical solutions of the Mukhanov-Sasaki equation in the absence of any transition, while Appendix C provides analytical expressions for the noise matrix elements in the super-Hubble limit. Appendices D and E are dedicated to the dynamics during instantaneous transitions. We work in natural units with c = ℏ = 1 and define the reduced Planck mass to be m p ≡ 1/ √ 8πG = 2.43 × 10 18 GeV. We assume the background Universe to be described by a spatially flat Friedmann-Lemaitre-Robertson-Walker (FLRW) metric with signature (−, +, +, +). An overdot (.) denotes derivative with respect to cosmic time t, while an overdash ( ′ ) denotes derivative with respect to the conformal time τ . Inflationary dynamics beyond slow roll We focus on the inflationary scenario of a single canonical scalar field ϕ with a self-interaction potential V (ϕ) which is minimally coupled to gravity. The system is described by the action where R is the Ricci scalar and g µν is the metric tensor. 
Specializing to the spatially flat FLRW background metric the evolution equations for the scale factor, a(t), and inflaton, ϕ(t), are where H ≡ȧ/a and V ,ϕ ≡ dV /dϕ. The slow-roll regime of inflation is usually characterised by the first two kinematic Hubble slow-roll parameters ϵ H and η H , defined by where N = ln(a/a i ) is the number of e-folds of expansion during inflation with a i the initial scale factor at some early epoch during inflation before the Hubble-exit of CMB scale fluctuations. The slow-roll conditions correspond to It follows from the definition of the Hubble parameter, H, and ϵ H in Eq. (2.6), that the condition for the Universe to accelerate,ä > 0, is ϵ H < 1. Before proceeding further, we remind the reader of the distinction between the quasi-de Sitter (qdS) and slow-roll (SR) approximations. • Quasi-de Sitter inflation corresponds to the condition ϵ H ≪ 1. Hence, one can deviate from the slow-roll regime by having |η H | ≥ 1 while still maintaining the qdS expansion by keeping ϵ H ≪ 1, which is exactly what happens during ultra slow-roll (USR) inflation. This distinction will be important for the rest of this paper. Under either of the aforementioned assumptions, the conformal time, τ , is given by As discussed in Sec. 1, in order to facilitate PBH formation, we need to significantly amplify the scalar power at small-scales. This can be achieved with a feature in the inflaton potential, such as an inflection point-like feature (as shown in Fig. 1) for which V ,ϕ ≪ 3Hφ. The following criteria need to be satisfied for an inflationary potential to be compatible with observations on cosmological scales [35] while also generating perturbations on smaller scales that are large enough to form an interesting abundance of PBHs: • At the CMB pivot scale, k * = (aH) * = 0.05 Mpc −1 , the amplitude of the scalar power spectrum is P ζ (k * ) = 2.1 × 10 −9 , (2.10) with the scalar spectral index n S and tensor-to-scalar ratio r satisfying n S (k * ) ∈ [0.957, 0.975] , r(k * ) ≤ 0.036 at 95% C.L . (2.11) • A feature in V (ϕ) on a smaller scale k ≫ k * (closer to the end of inflation N e < N * ) to enhance the primordial scalar power spectrum by a factor of roughly 10 7 with respect to its value at the CMB pivot scale. Here N e is the number of e-folds before the end of inflation and N * is the value of N e when the CMB pivot scale made its Hubbleexit. Typically N * ∈ [50,60] depending on the reheating history of the Universe (see e.g. Ref. [82]). Throughout this work we take N * = 60. • The potential steepens, so that inflation ends. Reheating (and the transition to the subsequent radiation dominated epoch) then occurs as the field oscillates around a minimum in the potential. Given that PBH formation requires the enhancement of the inflationary power spectrum by a factor of 10 7 within less than 40 e-folds of expansion (see e.g. Ref. [83]), the quantity ∆ ln ϵ H /∆N , and hence |η H |, must grow to become of order unity, thereby violating the second slow-roll condition in Eq. (2.8). In the particular case of a flat plateau region in the potential at intermediate field values, the inflaton enters a transient period of ultra slow-roll (USR) (see Refs. [56,63,69,84,85]). Since V ,ϕ = 0, from the equation of motion Eq. (2.5), ϕ + 3Hφ = 0 ⇒ −φ/(Hφ) = +3, leading to As a consequence, the inflaton speed drops exponentially with the number of e-folds during this USR phase:φ =φ en e −3 H (t−ten) ∝ e −3 N , whereφ en is the entry velocity to the USR phase at time t en . 
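A minimal numerical sketch of this classical USR behaviour (ours, not code from the paper) is given below, written in terms of the e-fold momentum π = dφ/dN, so that π(N) = π_en e^(-3N); ε_H is evaluated with the standard expression ε_H = π²/(2 m_p²) for a canonical field, and the entry momentum used is an arbitrary illustrative number in Planck units.

```python
# Classical ultra slow-roll decay of the inflaton momentum, as quoted above:
# pi(N) = pi_en * exp(-3N); the field excursion and eps_H follow. Illustrative only.
import numpy as np

def usr_evolution(pi_en, n_efolds, m_p=1.0, n_samples=7):
    """Return N, pi(N), phi(N)-phi_en and eps_H(N) for drift-free USR evolution."""
    N = np.linspace(0.0, n_efolds, n_samples)
    pi = pi_en * np.exp(-3.0 * N)
    dphi = pi_en * (1.0 - np.exp(-3.0 * N)) / 3.0   # phi(N) - phi_en
    eps_H = pi**2 / (2.0 * m_p**2)                  # standard first slow-roll parameter
    return N, pi, dphi, eps_H

N, pi, dphi, eps = usr_evolution(pi_en=-0.05, n_efolds=2.0)
for n, p, d, e in zip(N, pi, dphi, eps):
    print(f"N = {n:4.2f}   pi = {p:+.2e}   phi - phi_en = {d:+.2e}   eps_H = {e:.2e}")
# pi (and hence eps_H, which falls like exp(-6N)) collapses within a couple of
# e-folds, while the total field excursion saturates at pi_en/3; this saturation is
# what the critical entry velocity discussed next quantifies.
```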
Since USR is a transient non-attractor phase, the inflaton dynamics during this phase are sensitive to the initial conditions, in particular to the speedφ en with which the inflaton enters the plateau. In this context, the inflaton potential exhibits three important regimes, namely, the slow-roll SR-I phase for ϕ > ϕ en around the CMB scale, the USR phase at some intermediate field values ϕ ex ≤ ϕ ≤ ϕ en , succeeded by the final SR-II phase for ϕ < ϕ ex before the end of inflation at ϕ = ϕ end . Figure 2 schematically illustrates the three regimes. The flat regime (flat quantum well) 4 is characterised by its width ∆ϕ well = ϕ en − ϕ ex , and height, V well . During this regimeφ =φ en − 3H (ϕ − ϕ en ) . (2.14) The total number of e-folds of expansion during the USR period up to ϕ, where ϕ ex ≤ ϕ ≤ ϕ en is given by In order to amplify the perturbations sufficiently to generate an interesting abundance of PBHs, the USR phase typically has to last for around 2 − 3 e-folds (see Refs. [86,87]). The above Figure 2: A zoomed-in version of Fig. 1 in order to schematically illustrate the intermediate flat quantum well feature (highlighted with pink shading) in the inflaton potential. The height and width of the flat segment are denoted by V well and ∆ϕ well respectively. After exiting the first slow-roll phase (SR-I) near the CMB window, the inflaton enters the flat region at ϕ = ϕ en at intermediate field values. During this USR phase, the effects of quantum diffusion might become significant and hence one should use the stochastic inflation formalism to compute the primordial PDF of ζ. Later, the inflaton emerges from the USR phase to another slow-roll phase (SR-II) at ϕ = ϕ ex , before the end of inflation. expression for N USR (ϕ) can be used in the 'non-linear classical δN formalism' to determine the PDF of primordial fluctuations [51]. From the above expressions, it is clear that the dynamics of inflation during USR is sensitive to the initial conditions {ϕ en , π en }. Let us define the critical entry velocityφ cr to be the speed at which the inflaton must enter the flat quantum well in order to come to a halt at ϕ ex . From Eqs. (2.14) and (2.16) it follows thaṫ ϕ cr = −3 H ∆ϕ well , π cr = −3 ∆ϕ well . (2.17) Ifφ en >φ cr , then the classical speed of the inflaton is large enough to drive it all the way across the quantum well, while forφ en <φ cr , the inflaton comes to a halt at some intermediate point ϕ ∈ (ϕ ex , ϕ en ). Another important constraint comes from requiring inflation to continue, ϵ H < 1, hence from Eq. (2.6) we get In this Section, we have discussed the classical dynamics of the inflaton field beyond slow roll, with the specific example of ultra slow-roll inflation across a flat potential well. We now move on to describe the large-scale quantum dynamics of the inflaton field which is coarse-grained over super-Hubble scales, using the stochastic inflation formalism. This will enable us to study the PDF of the primordial fluctuations generated by the quantum diffusion of the inflaton. Quantum dynamics: stochastic inflation formalism Stochastic inflation is an effective long wavelength IR treatment of inflation in which the inflaton field is coarse-grained over super-Hubble scales k ≤ σ aH, with the constant σ ≪ 1. On the other hand, the Hubble-exiting smaller scale UV modes are constantly converted into IR modes due to the accelerated expansion during inflation. 
Hence the coarse-grained inflaton field follows a Langevin-type stochastic differential equation featuring classical stochastic noise terms sourced by the smaller scale UV modes, on top of the classical drift terms sourced by the gradient of the self-interaction potential V ,ϕ (ϕ). We start with the Hamiltonian equations [80] of the system, Eq. (2.1), for Heisenberg operators of the inflatonφ and its momentumπ ϕ where we choose the number of e-folds N as our time evolution variable for ϕ(N, ⃗ x) and π ϕ (N, ⃗ x) following Refs. [48,56]. We split the inflatonφ(N, ⃗ x) and its conjugate momentumπ ϕ (N, ⃗ x) into the corresponding IR {Φ,Π} and UV {φ,π} parts: where the UV fields are defined aŝ where the field and momentum noise operatorsξ ϕ (N ) andξ π (N ) are given bŷ We assume a window function which imposes a sharp cut off 5 between the IR and UV momentum space modes: It has the advantage of making the calculation of the noise correlation matrix elements more tractable. Physically, the noise termsξ ϕ andξ π in the Langevin Eqs. (3.6) and (3.9) are sourced by the constant outflow of UV modes into the IR modes, i.e. as a UV mode exits the cut-off scale k = σaH to become part of the IR field on super-Hubble scales, the IR field receives a 'quantum kick' whose typical amplitude is given by ∼ ⟨0|ξ(N )ξ(N ′ )|0⟩, where |0⟩ is usually taken to be the Bunch-Davies vacuum. Given that σ ≪ 1, this happens on ultra super-Hubble scales, where the UV modes must have already become classical fluctuations 6 due to the rapid decline of the non-commuting parts of the fields {ϕ k , π k } outside the Hubble radius [90][91][92]. This leads to the classical stochastic description of the dynamics of the coarse-grained quantum fieldsΦ,Π as discussed in the following subsection(s). In this compact notation the expressions for the noise operators, Eqs. (3.8) and (3.9), becomeξ with ϕ i k = {ϕ k , π k } being the field and momentum mode functions respectively. Assuming the sharp k-space window function, Eq. (3.10), it is easy to show that the equal-space noise correlators (auto-correlators) take the form [80] where the noise correlation matrix Σ ij has the form The stochastic nature of the noise leads to a probabilistic description of the system Φ i = {Φ, Π}. One approach to analyse the system is by solving the Langevin equation, Eq. (3.11), numerically for many (tens of millions) stochastic realizations and then proceeding to compute different moments of the physical (stochastic) variables. This method is direct, however cumbersome, non-analytical and requires significant computational power. See Refs. [46,47] for some of the earlier attempts in this direction, while for a more concrete analysis beyond slow-roll, see Ref. [65], and for state-of-the-art numerical simulations, relevant for determining the PDF of primordial fluctuations, see Refs. [66,70,71,74]. There is also an analytically concrete way to study this system, using the first-passage time analysis which involves making a transition from the Langevin equations to an equivalent second order partial differential Fokker-Planck equation (FPE) [42,45,93,94], that describes the time evolution of the PDF of the stochastic variables {Φ, Π}, subject to appropriate boundary conditions. Given our primary goal of computing the full PDF P [ζ], we take this route following Refs. [56,59,69]. The FPE corresponding to the Langevin equation, Eq. 
(3.11), takes the form where L FP (Φ i ) is the second-order Fokker-Planck differential operator and P (Φ i ; N ) is the probability density function of the stochastic process that is related to the probability of finding the phase-space variables at a given value Φ i = {Φ, Π} at some time N . However such a quantity is not of primary concern to us since we are not interested in studying the phase-space dynamics of the inflaton 7 . Rather, we are interested in finding the probability distribution P Φ i (N ) of the number of e-folds N . Note the important difference between our time variable N and the stochastic variable N . N denotes the background expansion of the Universe, while N is the number of e-folds of expansion obtained from the Langevin equations with fixed boundary conditions in the IR field space, ϕ en and ϕ ex . The coarsegrained curvature perturbation ζ cg is related to the stochastic number of e-folds N via the stochastic δN formalism [42,48,56,59,69] where the PDF P Φ i (N ) of the number of e-folds satisfies the adjoint FPE which we discuss below in Sec. 3.2. Note that N (Φ i ) and P Φ i (N ) correspond to N (Φ, Π) and P (Φ,Π) (N ) respectively. Adjoint Fokker-Planck equation and first-passage time analysis The adjoint FPE for the PDF P Φ i (N ) corresponding to the general Langevin equation, Eq. (3.11), is given by Our primary goal is to solve Eq. (3.19), with appropriate boundary conditions for Φ i ≡ {Φ, Π} in order to compute the PDF P Φ i (N ) ≡ P Φ,Π (N ). A physically well-motivated set of boundary conditions includes an absorbing boundary at smaller field values ϕ (A) closer to the end of inflation and a reflecting boundary at a larger field value ϕ (R) closer to the CMB scale. The PDF at the boundaries satisfies The absorbing boundary condition ensures that for Φ < ϕ (A) , the dynamics is heavily drift dominated and quantum diffusion effects are negligible. Similarly, the reflecting boundary condition arises from assuming that the potential is steep enough in the region Φ > ϕ (R) so that a freely diffusing inflaton can not climb back to a region of the potential beyond ϕ (R) . Both the boundary conditions play a crucial role in determining the functional form of the PDF, thus affecting the PBH mass fraction. A convenient method for determining the PDF, as discussed in Ref. [56], involves considering the 'characteristic function' (CF) χ N (q; Φ i ), given by 8 which is the Fourier transform of the PDF P Φ i (N ) w.r.t the dummy variable q (which is a complex number in general). Hence the PDF is the inverse Fourier transform of the CF: Since the PDF satisfies the adjoint FPE, Eq. (3.19), the CF satisfies which is a partial differential equation with one less dynamical variable than the adjoint FPE. The corresponding boundary conditions, Eqs. (3.20) and (3.21), for the characteristic function are given by The characteristic equation, Eq. (3.24), corresponding to a potential V (ϕ) in a general situation is quite difficult to solve. In practice, one has to make crucial approximations regarding the classical drift D i and the quantum noise ξ i . The most common approximation used in the literature assumes that the noise matrix elements Σ ij in Eq. (3.15) are of the de Sitter-type, i.e. (see Sec. 4) We now specialise to the case of quantum diffusion across a flat segment of the inflaton potential, as discussed in Sec. 2 and shown in Fig. 2. 
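Several defining relations in this subsection (the stochastic δN identification, the characteristic function and its inverse, the boundary conditions for the CF, and the de Sitter noise approximation of Eq. (3.26)) are garbled in this extraction. In the conventions of Refs. [56,59] they presumably read as follows; this is a hedged sketch rather than a quotation of the original displays.

```latex
% Stochastic delta-N relation, first-passage characteristic function, and boundary conditions (assumed forms)
\begin{align}
  \zeta_{\rm cg} &= \mathcal{N} - \langle \mathcal{N} \rangle , \\
  \chi_{\mathcal N}(q;\Phi^i) &\equiv \big\langle e^{\,i q \mathcal N} \big\rangle
     = \int e^{\,i q \mathcal N}\, P_{\Phi^i}(\mathcal N)\,\mathrm{d}\mathcal N , \qquad
  P_{\Phi^i}(\mathcal N) = \frac{1}{2\pi}\int_{-\infty}^{+\infty}
     e^{-i q \mathcal N}\, \chi_{\mathcal N}(q;\Phi^i)\,\mathrm{d}q , \\
  \chi_{\mathcal N}\big(q;\Phi=\phi^{(\rm A)}\big) &= 1 , \qquad
  \partial_\Phi\, \chi_{\mathcal N}(q;\Phi)\big|_{\Phi=\phi^{(\rm R)}} &= 0 ,
\end{align}
% with the de Sitter-type approximation for the noise amplitude usually taken to be
\begin{equation}
  \Sigma_{\phi\phi} \simeq \Big(\frac{H}{2\pi}\Big)^{2} , \qquad
  \Sigma_{\phi\pi},\ \Sigma_{\pi\pi} \ \text{suppressed by powers of } \sigma .
\end{equation}
```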
It is helpful to make a change of variables where f is the fraction of the flat well which remains to be traversed and y is the momentum relative to the critical momentum defined in Eq. (2.17), the initial momentum for which the fields comes to a halt at ϕ ex . The CF, Eq. (3.24), then becomes (see Ref. [69]) where V well is the height of the flat quantum well. The corresponding boundary conditions now become Such a system has been solved [69] in two distinct limits, namely • Free stochastic diffusion for which π en ≪ π cr ⇒ y en ≪ 1, implying that the classical drift term, Eq. (3.12), can be safely ignored, in which case the PDF takes the form (see Refs. [56,59]) The full PDF as a function of N is plotted in the left panel of Fig. 3. In the limit N ≫ 1, the PDF exhibits an 'exponential tail' of the form It follows from Eq. (3.32) that the amplitude, A 0 and coefficient of the exponential, Λ 0 , are given by In fact, the exponential tail was shown in Ref. [59] to be a universal feature of the PDF for quantum diffusion across a generic slow-roll potential with absorbing and reflecting boundary conditions, Eqs. (3.20) and (3.21). Larger values of f correspond to more quantum diffusion before exiting the flat quantum well and hence result in more prominent exponential tails. Notice that the PDFs saturate towards f = 1 which is a consequence of the reflective boundary condition given in Eq. (3.21). The full PDF as a function of initial field value Φ is plotted in the right panel of Fig. 3 for realizations which have different values of N . It is clear that for N → 0, starting from f = Φ/∆ϕ well ≃ 0 yields a sharply peaked distribution, in accordance with the absorbing boundary condition given in Eq. (3.20). • Large classical drift where π en ≫ π cr ⇒ y en ≫ 1. In this case the dynamics of the inflaton is primarily governed by its classical drift and hence the PDF is approximately Gaussian even for N ≫ 1 (see Ref. [69]). However, in cases where the power spectrum is amplified sufficiently to form an interesting abundance of PBHs, the inflaton typically enters the intermediate flat USR segment from the CMB scale SR-I phase (see Fig. 2), with speed of the order π en ≃ π cr ⇒ y en ≃ 1. In this case, both classical drift and stochastic diffusion become important (at least initially during the entry into the USR segment) and hence the aforementioned approximations will not be valid. Furthermore, the de Sitter approximations for the noise matrix elements, Eq. (3.26), might breakdown [97] during the transition into the USR phase. Consequently, it becomes important to estimate the noise matrix elements more accurately. We conclude that in order to properly use the stochastic δN formalism to estimate the abundance of PBHs, one must correctly determine the PDF P Φ i (N ) from the adjoint FPE Eq. (3.19) with appropriate boundary conditions. As discussed above, this can be carried out in two important steps: 1. Calculate the noise matrix elements Σ ij from Eq. (3.15) accurately for the transitions between the CMB scale slow roll and subsequent slow-roll violating phases. 2. Determine the form of the PDF P Φ,Π (N ), taking into account the initial momentum with which the inflaton enters the USR segment. In the rest of this paper, we carry out the first task of accurately computing the noise matrix elements, first numerically in Sec. 4.2.1 for a potential with a slow-roll violating feature, and then analytically in Sec. 
4.2.2 for the case of instantaneous transitions between different phases during inflation. We reserve the second task to an upcoming paper [81]. Noise matrix elements in stochastic inflation In this section we calculate the expressions for the noise matrix elements Σ ij , i.e. the correlators of the field and momentum noise operatorsξ i = {ξ ϕ ,ξ π }. We do this initially for standard slow-roll inflation, and compare the estimates for Σ ij computed using the pure de Sitter approximation to those obtained using the slow-roll approximations. The key equations that we use are the following: the definition of the noise operators, Eq. (3.13), which along with a step-like k-space window function, Eq. (3.10), leads to the noise correlators of Eq. (3.14), with the noise correlation matrix Σ ij being given by Eq. (3.15). It is important to note that these UV-noise mode functions are to be computed, not at Hubble crossing, but at k = σaH, where they chronologically become part of the coarse-grained IR field and momentum, and provide quantum kicks. Hence, in order to compute the elements of the noise matrix Σ ij , we need to compute the mode functions ϕ i k = {ϕ k , π k }. This can be done by solving the Mukhanov-Sasaki (MS) equation in terms of conformal time τ defined in Eq. Note that depending upon the situation, the MS equation, Eq. (4.1), written in terms of the number of e-folds N ∼ ln(a) as might be more useful. We note that in terms of N , the MS equation features a friction term, and both the terms inside the square bracket evolve with time. However, in terms of conformal time, τ , it is a simple harmonic oscillator equation with time dependent mass terms (aH) −2 z ′′ /z, while the comoving mode frequency k is fixed, which is why we choose to work with conformal time. where with appropriate initial conditions. The expressions for the mode functions ϕ i k in the spatially flat gauge 10 are given by (see App. A) We introduce a convenient new time variable, T , defined as During quasi-dS expansion, the conformal time τ runs from −∞ to 0, so T runs from ∞ to 0. Modes undergo Hubble-exit at T ≡ k/(aH) = 1, and the sub-and super-Hubble regimes correspond to T ≫ 1 and T ≪ 1 respectively. In terms of T the MS equation, Eq. (4.1), takes the form where For slow-roll inflation, ν 2 is greater than or equal to 9/4 at early times and increases monotonically towards the end of inflation. In the limit where ν is a constant, the MS Eq. (4.7) can be converted to a Bessel equation as shown in App. B. In what follows, we start with the computation of the noise-matrix elements for the case of featureless slow-roll potentials, before proceeding to discuss the case of potentials possessing a slow-roll violating feature. Featureless potentials In the case of a featureless potential for which slow roll is a good approximation up until the end of inflation, the effective mass term (aH) −2 z ′′ /z in the MS Eq. (4.1) is almost a constant and evolves monotonically. Hence the MS Eq. (4.7) can be solved analytically by approximating ν in Eq. (4.8) to be a constant. Let us first demonstrate this calculation for the case of the pure de Sitter limit which is usually employed in the computation of noise matrices in the stochastic formalism. In the pure dS limit, both ϵ H , η H = 0, leading to z ′′ /z = 2a 2 H 2 and ν 2 = 9/4. Since a(τ ) = −1/(Hτ ) in the pure dS approximation, and hence the MS Eq. 
(4.7) reduces to the familiar form The general solution of this equation is given by where the positive and negative frequency Bogolyubov coefficients satisfy the canonical normalisation (Wronskian) condition Imposing the Bunch-Davies boundary conditions given in Eq. (4.5) from which we find the field and momentum mode functions from Eq. (4.4) to be (4.16) Using the above expressions for the mode functions, we derive exact expressions 11 for the noise matrix elements, Eq. (3.15), in the form (recall they are evaluated at k = σaH, hence when T = σ) In the stochastic inflation formalism the field and momentum variables are coarse-grained on ultra-Hubble scales, where σ ≪ 1. For example, taking σ = 0.01, we get Re(Σ ϕπ ) = 10 −4 Σ ϕϕ and Σ ππ = 10 −8 Σ ϕϕ under the pure de Sitter approximation. This motivates the usual practice of dropping the momentum-induced noise terms Σ ϕπ and Σ ππ from the adjoint FPE, Eq. (3.19). Turning now to slow-roll inflation, even though ϵ H , η H ≪ 1, the slow-roll parameters do not exactly vanish unlike in pure dS space. Nevertheless, as long as the quasi-de Sitter expansion is valid (which is justified since ϵ H ≪ 1), the expression for the scale factor in terms of T is still given by Eq. (4.9). Under the slow-roll approximations, the MS equation takes the general form Eq. (4.7) with ν ̸ = 3/2. In fact for realistic SR potentials, ν is roughly equal to 3/2 and evolves slowly and monotonically. Assuming ν to be a constant, and imposing Bunch-Davies initial conditions, the expression for v k takes the form (see where H which leads to the following expressions for the noise matrix elements Σ ij on super-Hubble scales Recalling the definition of T in Eq. (4.6) and the fact super-Hubble scales correspond to k = σaH, hence T = σ, it follows that the above expressions demonstrate that all three noise terms scale as Σ ij ∝ σ 2(−ν+3/2) on super-Hubble scales. This is in contrast to the pure dS limit where the three noise terms in Eqs. (4.17)-(4.19) behave differently, namely, Σ ϕϕ = const., Σ ϕπ ∝ σ 2 and Σ ππ ∝ σ 4 . Hence, during SR inflation for which ν ≃ 3/2, even though the momentum-induced noise terms Σ ϕπ and Σ ππ are small compared to the field noise Σ ϕϕ , they may not be negligible, depending upon the value of (ν − 3/2). As mentioned previously, for most slow-roll potentials, ν evolves slowly and monotonically. The numerically determined noise matrix elements, Σ ij , are shown in Fig. 4 for an example asymptotically flat SR potential, which we choose to be the D-Brane KKLT potential [99][100][101][102] which has the form where M is the mass scale in the KKLT model which we have chosen to be M = 0.5 m p . We have chosen σ = 0.01 as is the standard practice (see Ref. [80]). We notice that the momentum induced noise terms Σ ϕπ and Σ ππ are much higher than their corresponding values in the pure de Sitter limit. In particular, we find the ratio 13 of Σ ϕϕ : |Re(Σ ϕπ )| : Σ ππ to be 1 : 2 × 10 −2 : 4 × 10 −4 for large N e as opposed to the de Sitter analytic estimate of Potentials with a slow-roll violating feature Potentials possessing a feature that generates large, PBH-forming, perturbations, typically exhibit slow-roll violation, during which the quasi-dS approximation is still valid (ϵ H ≪ 1), while η H ≥ 1 (see Ref. [83]). In particular, η H ≃ 3 during an ultra slow-roll phase as discussed in Sec. 2. From Eq. 
(4.3), the expression for the effective mass term z ′′ /z under the quasi-dS approximation becomes In this case, the inflationary dynamics undergoes transitions between a number of phases driven by the behaviour of η H . In single field models in which perturbations grow sufficiently to produce an interesting abundance of PBHs, the inflaton typically undergoes two important transitions (see Ref. [87]). The first transition T-I occurs from the CMB scale SR-I to a near-USR phase, followed by a second transition T-II, from the near-USR phase to the subsequent second slow-roll phase, SR-II, before the end of inflation. For some class of features (see Refs. [87,104]), the second transition T-II also leads to an intermediate constant-roll (CR) phase [105] during which η H is negative, almost constant, and of order unity. As a specific example, we consider a modified KKLT potential with an additional tiny Gaussian bump-like feature [106]: where A,σ and ϕ 0 represent the height, width and position of the bump respectively. The evolution of η H and z ′′ /z for this potential is shown in Fig. 5. Following Ref. [106], we fix M = 0.5 m p , and take the bump parameters to be A = 1.87 × 10 −3 ,σ = 1.993 × 10 −2 m p and ϕ 0 = 2.005 m p . These bump parameter values lead to a peak in the scalar power-spectrum of P ζ ∼ 10 −2 at a k value corresponding to ∼ 10 17 g PBHs, i.e. at the lower end of the asteroid mass window where PBHs can make up all of the dark matter (see e.g. Refs. [14,15]). The inflationary dynamics in this case display the aforementioned three key phases, namely SR-I, USR and CR with η H making sharp (yet smooth) transitions between them, as shown by the dashed blue curve in Fig. 5. However, during the second transition from USR to the CR phase (15 ≤ N e ≤ 30), the effective mass term (aH) −2 z ′′ /z remains nearly constant 14 , as emphasized in Ref. [87]. The evolution of the mode functions (and hence the noise matrix elements) is determined by (aH) −2 z ′′ /z through the MS Eq. (4.1). The expression for the mode functions therefore remains the same in the subsequent CR phase through the second transition because of the duality first noticed by Wands (see Ref. [107]). Hence it is only necessary to follow the evolution through the first transition, T-I, from SR-I to the near-USR phase. 14 The effective mass term remains almost constant during the second transition because of the upward step-like evolution of ηH as a function of Ne. In the quasi-dS approximation, ϵH ≪ 1, the effective mass term, Eq. (4.27), becomes (aH where C is a constant andÑe is the value of Ne at which ηH = 3/2, i.e. an upward step, then (aH) −2 z ′′ /z = C 2 − 1/4 = const. . Note that the effective mass term is only constant for an upwards step in ηH , and not for a downward step, as occurs at the first transition. -19 - In what follows, we will first describe how to compute the noise matrix elements Σ ij numerically for the potential Eq. (4.28), before finding accurate analytic solutions for them. Note that we use this particular model to demonstrate our numerical framework because of its mathematical simplicity and efficiency. However, the results we present are representative of models with a broad range of features, including inflection point-like behaviour. This is because, as shown in Ref. [87], the behaviour of the effective mass term z ′′ /z is similar across these large class of models, hence our primary conclusions will apply to all of them, and not just this modified KKLT model. 
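Since the explicit expressions for the KKLT potential (4.26) and its bump-modified version (4.28) are garbled above, the following Python sketch records the functional forms we assume they take (a quadratic-over-quadratic KKLT-type plateau multiplied by a tiny Gaussian bump, in the spirit of Ref. [106]), together with the parameter values quoted in the text. The overall normalisation V0 and the helper names are illustrative, not taken from the paper.

```python
import numpy as np

# Parameter values quoted in the text (Planck units, m_p = 1); V0 is an illustrative normalisation.
M      = 0.5        # KKLT mass scale
A_bump = 1.87e-3    # bump height A
s_bump = 1.993e-2   # bump width sigma~
phi0   = 2.005      # bump position phi_0
V0     = 1.0

def V_kklt(phi):
    """Featureless KKLT-type plateau; assumed form V0 * phi^2 / (phi^2 + M^2)."""
    return V0 * phi**2 / (phi**2 + M**2)

def V_bump(phi):
    """KKLT plateau with a tiny Gaussian bump; assumed form of Eq. (4.28)."""
    return V_kklt(phi) * (1.0 + A_bump * np.exp(-0.5 * ((phi - phi0) / s_bump)**2))

def eps_V(phi, V=V_bump, h=1e-6):
    """First potential slow-roll parameter eps_V = (1/2)(V'/V)^2, via central differences."""
    dV = (V(phi + h) - V(phi - h)) / (2.0 * h)
    return 0.5 * (dV / V(phi))**2

for p in (1.90, 2.005, 2.10):
    print(f"phi = {p:.3f} m_p :  eps_V = {eps_V(p):.3e}")
```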
Numerical analysis In order to numerically compute the noise matrix elements for the potential Eq. (4.28), our strategy is to split the mode functions ϕ k , π k and v k into their real and imaginary parts (see Ref. [104]) Substituting Eqs. (4.29)-(4.31) into Eq. (3.15), we derive the following compact expressions for the noise matrix elements Σ ij (4.34) As we mentioned earlier, the imaginary part of the off-diagonal term Σ πϕ does not correspond to a stochastic classical noise source [80], hence we only need consider its real part in Eq. (4.33). The evolution of the absolute values of Σ ϕϕ , Re(Σ ϕπ ) and Σ ππ for the potential Eq. (4.28) are plotted in Fig. 6 for σ = 0.01, while Fig. 7 shows the ratios between the momentum-induced noise terms and the field noise, |Re(Σ ϕπ )|/|Σ ϕϕ | and |Σ ππ |/|Σ ϕϕ | around the transition epoch. The transition leads to an enhancement of the momentum induced noise terms relative to the field noise with Σ ππ > |Re(Σ ϕπ )| > Σ ϕϕ . This is followed by a near-exponential fall of each Σ ij during USR, since the slope of Σ ij is almost constant during this epoch. We see that |Σ ππ |/|Σ ϕϕ | ≳ 3 × |Re(Σ ϕπ )|/|Σ ϕϕ |. At late times the noise matrix elements begin to rise again and asymptote to constant values, and the hierarchy between the noise terms gets reversed back to Σ ππ < |Re(Σ ϕπ )| < Σ ϕϕ . We also notice that the asymptotic value of each Σ ij at late times is greater than its corresponding value in the SR-I phase. Figure 7: The ratios of the momentum-induced noise terms and the field noise, |Re(Σ ϕπ )|/|Σ ϕϕ | in green and |Σ ππ |/|Σ ϕϕ | in purple, with σ = 0.01, for the potential Eq. (4.28) with a tiny Gaussian bump as a function of N e around the SR-I to USR transition. The transition from SR-I to USR leads to an enhancement of the momentum induced noise terms, Σ ϕπ and Σ ππ , relative to the field noise, Σ ϕϕ , in the USR epoch. From Figs. 6 and 7, we conclude that the noise matrix elements for a potential with a PBH-forming feature evolve in a more complicated way than for pure de Sitter or pure slow-roll. We next show that the aforementioned interesting features of the noise terms across different epochs, such as during SR-I, immediately after the transition from SR-I to USR, as well as the late time asymptote, can be understood by making appropriate analytical approximations. In the following subsection, we compute the noise matrix elements analytically by assuming the transition T-I from SR-I to the near-USR phase to be instantaneous. We will also demonstrate that in the quasi-dS limit, ϵ H ≪ 1, the noise terms are completely determined by the second slow-roll parameter η H . Analytical treatment for instantaneous transitions In order to compute Σ ij analytically, we consider an approach which captures the key features of the full numerical evolution, namely solving the MS Eq. (4.7) under the following assumptions. 1. We assume the second slow-roll parameter η H to be a piece-wise constant function which makes an instantaneous (yet finite) transition, η H : η 1 → η 2 at time τ = τ 1 , given by where Θ is the Heaviside step function: Hence the piece-wise constant η H in Eq. (4.35) results in a piece-wise constant ν in Eq. (4.37). We notice that the effective mass term z ′′ /z contains a Dirac delta-function arising from the derivative of the Θ function in Eq. (4.35). Note that for η 2 > η 1 (which is the case for the SR-I → USR transition in Fig. 5) we have A > 0 and hence the term containing the Dirac delta-function in Eq. 
(4.37) is negative (since τ < 0 during inflation). This delta-function dip for an instantaneous transition analytically represents the observed dip of finite width and depth for potentials with a smooth feature, as seen in (1/aH) 2 z ′′ /z in Fig. 5 (around N e ∼ 32.5). General solutions to the MS equation in different piece where v E k (τ ) and v L k (τ ) are the mode functions before and after the transition respectively, represented by (4.41) We would ultimately like to derive expressions for the noise matrix elements which can be expressed in terms of the mode functions v k in the following compact form where we take σ = 0.01 as discussed earlier. We start with the computation of noise matrix elements for an instantaneous transition in the pure dS limit where ν 1 = ν 2 = 3/2, before moving on to a general transition between constant values of ν : ν 1 → ν 2 , with ν 2 > ν 1 . Case 1: Instantaneous transition in the pure dS limit with ν 1 = ν 2 = 3 2 In the case of an instantaneous transition at τ = τ 1 in the pure dS limit (first considered in Ref. [109]), we have η 1 = 0 and η 2 = 3 and the system makes a transition from a SR to an exact USR phase. Accordingly, the effective mass term in the MS equation takes the form where the transition strength is A = 3. The expressions for the mode functions, obtained by solving Eq. (4.1) are given (in terms of where α k and β k are constants of integration (to be determined from the Israel junction conditions given in Eqs. (4.39) and (4.40)), while their derivatives are given by where recall T > T 1 corresponds to the epoch before the transition and T < T 1 to the epoch after the transition. Note that we have imposed Bunch-Davies initial conditions on the mode function v E k (T ) before the transition T > T 1 , in accordance with our third assumption as discussed above. The corresponding Fourier modes of the field fluctuations are obtained from Eq. (4.4) (4. 48) as are those of the field momentum fluctuations The Bogolyubov coefficients α k and β k , determined by implementing the Israel junction conditions, Eqs. (4.39) and (4.40), are given by which yields where With a little bit of algebra, we obtain 15 , and the ratio Σ ϕϕ : |Re(Σ ϕπ )| : Σ ππ is given by 1 : 3 : 9, to leading order in T = σ. Following this epoch, the noise terms begin to rise exponentially, and the hierarchy between the field and momentum induced terms gets reversed back to Σ ϕϕ > |Re(Σ ϕπ )| > Σ ππ . At sufficiently late times, when T ≪ 1 ≪ T 1 , the noise matrix elements are due to modes for which α k → 1, while β k ≃ 3i/(2T 1 ) e i2T 1 decays to zero with oscillations. Hence the noise matrix elements asymptote to their corresponding pre-transition (constant) values given by Eqs. (4.17)- (4.19). Comparing the analytical results for an instantaneous transition in the pure dS limit, shown in the left panel of Fig. 8, with the numerical results for a potential with a PBH forming feature shown in Fig. 6, we conclude that the former fails to capture 17 the late time asymptotic properties of Σ ij . Therefore, in the following, we will compute Σ ij relaxing the pure dS approximation. Case 2: Instantaneous transition between two constant values of ν: ν 1 → ν 2 In the case of an instantaneous transition at τ = τ 1 where ν makes a jump 18 between the constant values ν 1 → ν 2 , once again by solving Eq. (4.1), the expressions for the mode functions are given (in terms of and their derivatives are given by By implementing the Israel junction conditions, Eqs. 
(4.39) and (4.40), the constant coefficients of integration C L 1 and C L 2 can be shown to satisfy the algebraic equations which yields where The resulting noise matrix elements, computed using Eqs. (4.42)- (4.44), are shown in the right panel of Fig. 8. In order to compare our results with the numerical calculation in Fig. 6, we choose ν 1 = 1.52 and ν 2 = 1.8. These values correspond to η 1 = −0.02 and η 2 = 3.3 respectively, to match the values of η H during the SR-I and the near-USR epochs for the modified KKLT potential with a Gaussian bump used for the numerical calculation in Fig. 6. As in the pure dS case, immediately after the transition, when T ≲ T 1 ≪ 1, the noise matrix elements fall nearly-exponentially with Σ ij ∼ e 2ANe . The ratio Σ ϕϕ : |Re(Σ ϕπ )| : Σ ππ is approximately 1 : A : A 2 (where A ≡ η 2 − η 1 = 3.32 from Eq. (4.38)), and nearly constant. However, following this epoch the noise terms begin to rise and the hierarchy between the field and momentum induced terms is reversed back to Σ ϕϕ > |Re(Σ ϕπ )| > Σ ππ . At sufficiently late times, T ≪ 1 ≪ T 1 , the coefficient of the negative frequency solution C L 2 becomes negligible, and the behaviour of Σ ij can be understood from the constant ν expressions for the noise terms, Eqs. (4.23)-(4.25). The late time ratio of noise terms is given by Σ ϕϕ : |Re(Σ ϕπ )| : Σ ππ → 1 : ν 2 − 3 2 : ν 2 − 3 2 2 i.e. the values of Σ ij are higher than their pre-transition counterparts in the SR-I phase. This matches the behaviour of the numerically calculated noise matrix elements for the modified KKLT potential with a Gaussian bump shown in Fig. 6. The key results of our analytical calculations for an instantaneous transition are: 1. The expressions for the noise matrix elements in the pre-transition epoch are given by Eqs. (4.23)-(4.25) with ν = ν 1 , resulting in the ratios (4.72) 2. Immediately after the transition, Σ ij ∝ e 2ANe , and 3. At sufficiently late times, the noise terms are again given by Eqs.(4.23)-(4.25), but with ν = ν 2 , yielding the ratios (4.74) Comparing Fig. 8 with Fig. 6, we see that the analytical treatment assuming an instantaneous transition between two constant values of ν, ν 1 and ν 2 (Case 2) captures most of the asymptotic properties of Σ ij for a potential with a PBH forming feature. This is in contrast to the pure dS transition (Case 1) which was not able to capture the late-time asymptote accurately, due to the assumption that ν 1 = ν 2 = 3/2. Furthermore, the pure dS transition also underestimates the momentum induced noise terms Σ ϕπ and Σ ππ in the SR-I phase, as discussed in Sec. 4.1. We conclude this Section by briefly commenting on the degree of correlation between the field and momentum noise terms, ξ ϕ and ξ π , which can be quantified in terms of the ratio 19 where γ is related to the determinant of the noise matrix by γ 2 = 1 − det(Σ ij )/(Σ ϕϕ Σ ππ ). The noise terms are maximally correlated if γ = 1, while γ = 0 implies that ξ ϕ and ξ π are independent (see Ref. [80]). For featureless potentials, we find that γ ≃ 1 under the pure dS approximation, using Eqs. (4.17)-(4.19), as well as under the SR approximations, using Eqs. (4.23)-(4.25). For potentials with a PBH forming feature, we also find that γ ≃ 1 throughout the three asymptotic regimes 20 described by Eqs. (4.72)-(4.74). 
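The expression defining the correlation ratio γ (in the footnote referenced above) is elided in this extraction; it is presumably the normalised field–momentum cross-correlation sketched below, which is consistent with the determinant relation quoted in the text once the imaginary part of Σ_φπ is dropped, as discussed in Sec. 4.2.1.

```latex
% Normalised field-momentum noise correlation (assumed form of the elided definition)
\begin{equation}
  \gamma \;\equiv\; \frac{\big|\mathrm{Re}\,\Sigma_{\phi\pi}\big|}{\sqrt{\Sigma_{\phi\phi}\,\Sigma_{\pi\pi}}}
  \qquad\Longrightarrow\qquad
  \gamma^2 \;=\; 1 - \frac{\det\!\big(\Sigma_{ij}\big)}{\Sigma_{\phi\phi}\,\Sigma_{\pi\pi}} ,
\end{equation}
```
so that γ = 1 (maximal correlation) corresponds to a degenerate noise matrix, det(Σ_ij) = 0, i.e. to effectively a single stochastic degree of freedom.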
This property of maximal correlation between the noise terms is a direct consequence of the fact that for σ ≪ 1, the super-Hubble UV mode functions, ϕ k , are frozen by the time they join the coarse-graining scale k = σaH (see Refs. [71,113] for a detailed discussion on the freezing behaviour of the UV modes). Consequently, we conclude that quantum diffusion can be assumed to be sourced by a single random noise term. Hence our analysis suggests that the dynamics during the three asymptotic regimes given in Eqs. (4.17)-(4.19) can be described by a system with a single stochastic degree of freedom as suggested in Refs. [68,97,114]. This will be discussed in our forthcoming paper [81]. Discussion In Sec. 4 we accurately calculated the stochastic noise matrix elements for a sharp transition from SR to USR, using both analytical and numerical techniques. Our ultimate aim is to determine the PDF of the number of e-folds, P Φ,Π (N ), by solving the adjoint Fokker-Planck Eq. (3.19) (using appropriate boundary conditions) and then calculate the mass fraction of PBHs β PBH . Using the Press-Schechter formalism [115], the PBH mass fraction is usually estimated by integrating the probability distribution of the coarse grained curvature perturbation, P (ζ cg ), above the threshold for PBH formation, ζ c . The PBH mass fraction in the Stochastic formalism is given as (see Refs. [56,57]) where the average number of e-folds, ⟨N (Φ, Π)⟩, can be obtained from Eq. (3.18). While this task is reserved for our upcoming paper, we expect that the sharp decline of the noise terms after the transition will decrease the amount of quantum diffusion of the IR fields across the PBH-forming feature. Therefore we expect the tail of the PDF to decline more rapidly than what is usually found using the pure dS approximation without any transitions. Indeed such behaviour of the PDF was found in Ref. [97] which focused on a sharp transition in pure dS space using the linear potential model of Starobinsky [109]. Numerical simulations carried out in Refs. [66,70,71] show that the canonical computation based on the pure dS noise terms without any transition typically leads to inaccurate estimates of the PBH abundance (over-estimates for some potentials and under-estimates for other potentials). However, it is important to study the relative contributions of the noise terms, Σ ij , the potential, V (ϕ), and the boundary conditions to the PDF separately. An analytical approach is well-suited to this, and this is one of the primary goals of our upcoming paper. In the following, we overview the outstanding complexities in accurately calculating the PBH mass fraction. • Curvature perturbation vs density contrast: While the PBH mass fraction is often calculated from the PDF of the curvature perturbation using Eq. (5.1), the criterion for PBH formation is most accurately formulated in terms of the non-linear density contrast δ l , see e.g. Ref. [39-41, 72, 116]. An accurate computation of the PDF of the density contrast needs the knowledge of all the higher order (n-point) connected correlators 20 We find γ ≪ 1 only for brief transient periods when the noise terms begin to rise after their exponential fall post transition. of the curvature perturbation, ζ. Therefore a high-precision calculation of the PBH mass fraction requires the joint probabilities rather than the one-point PDF, P [ζ] (see Ref. [40,72] for discussion of this issue). 
• Gauge corrections: In this work we compute the mode functions {ϕ k , π k }, and hence the noise correlators of {ξ ϕ ,ξ π }, in the spatially flat gauge, however the Langevin equations are written in the uniform-N gauge. This induces corrections to the noise terms that could be non-negligible when the slow-roll approximations are violated [61,67,68]. However Refs. [61,71] showed that the gauge corrections are negligible for σ ≪ 1. • Choice of coarse-graining parameter: In Sec. 3, we mentioned that the coarse-graining parameter σ needs to be small enough, σ ≪ 1, to ensure that the short-wavelength quantum fluctuations {ξ ϕ , ξ π } act as classical noise on the dynamics of the coarse-grained fields {Φ, Π}. In fact, the physical results are expected to be independent of σ as long as σ ≫ e −1/(3ϵ H ) (see Refs. [45,48]). In our analysis, we have considered σ = 0.01 in order to account for substantial non-linearity in the evolution by including as many modes into the long-wavelength regime as possible without violating the stochastic nature of the noise terms (see Refs. [71,113]). Nevertheless, our results given in Eqs. (4.72)-(4.74) demonstrate that the ratio of the noise terms are rather insensitive to the choice of σ. Numerical simulations of the stochastic dynamics carried out in Refs. [71,113] also indicate that the mass fraction of PBHs do not depend upon the particular choice for the value of σ, as long as it is not arbitrarily small. • Effects of backreaction: We have calculated the noise matrix elements by treating the mode functions {ϕ k , π k } as linear perturbations in a deterministic (non-stochastic) inflationary background, as is the usual practice in perturbation theory. In stochastic inflation, the noise terms should in principle be evaluated in the stochastically evolving background of the coarse-grained IR fields {Φ, Π}. However, numerical simulations demonstrate that such non-Markovian corrections due to the backreaction effects of the stochastic IR background are negligible for single field inflationary potentials with a large class of PBH forming features, such as a flat segment, an inflection point, or a bump/dip and hence can be safely ignored (see Refs. [71,113]). For potentials with a broad class of features, the non-perturbative non-Gaussianity induced by stochastic effects is usually expected to be dominant (see Ref. [71]). The relative significance of the stochastic effects can be inferred from a classicality criterion, expressed in terms of the parameter j cl = |V (ϕ)η H /(24π 2 m 4 p ϵ H )|, obtained from a saddle-point approximation of the stochastic integrals [48]. For a potential with j cl ≪ 1, stochastic effects can safely be ignored (except in the far tail of the PDF). Hence, it is possible to construct potentials for which the stochastic effects are specifically negligible by design, while the classical non-linearities can be significant (see Ref. [51]). In such cases the classical δN formalism can be successfully used to compute the PDF (see also Refs. [53,55]). • Loop corrections: As outlined in Sec. 2, to generate a non-negligible abundance of PBHs, the power spectrum of the primordial scalar perturbations on small scales has to be roughly seven orders of magnitude larger than its measured value on CMB scales, i.e P ζ (k) ≃ 10 −2 . Therefore it is crucial to ask whether such a large enhancement of power at smaller scales might induce non-negligible loop corrections to the CMB scale power spectrum at higher orders in perturbation theory. 
Recently such calculations were carried out perturbatively in Refs. [117][118][119][120][121]. These papers find that the one-loop corrections to the CMB scale power spectrum can become significant if P ζ (k) ≳ O(10 −2 ). This appears to rule out the formation of an interesting abundance of PBHs in single field inflationary models. However this conclusion is currently the subject of debate. It has been argued in Refs. [122][123][124] that the loop corrections are negligible if the transition from USR to the subsequent attractor phase is smooth enough. Nevertheless we stress that the amplitude of the small scale power spectrum required to form an interesting abundance of PBHs depends on the PDF of the perturbations. The standard value, P ζ (k) ≃ 10 −2 , assumes the PDF is Gaussian. This amplitude, and therefore the size of the one-loop corrections to the CMB scale power spectrum, will be different for the non-Gaussian tail usually generated by stochastic effects. Conclusions PBHs can form due to the gravitational collapse of large fluctuations, in the non-perturbative tail of the PDF. An accurate calculation of the full PDF of the perturbations is therefore crucial to calculate their abundance. Stochastic inflation is a powerful framework for computing the cosmological correlators non-perturbatively. Using the stochastic δN formalism, the full PDF can be calculated from the first-passage statistics of the number of e-folds, N , during inflation. However to correctly account for the back-reaction effect of small scale (UV) fluctuations, φ k , on the long wavelength coarse-grained (IR) field, Φ, it is essential to compute the noise matrix elements accurately. Since most single field inflationary potentials with a PBH-forming feature violate the slow-roll conditions, a precise calculation of the stochastic noise matrix elements beyond slow roll is required. In this paper we have done this, both analytically and numerically. After a brief overview of single-field inflationary dynamics beyond slow roll in Sec. 2, we set up the relevant equations underlying the stochastic inflation formalism in Sec. 3. There are two key steps to using the stochastic inflation formalism to calculate the full PDF of fluctuations in slow-roll violating, PBH-producing, models: 1. compute the statistics of both field and momentum-induced noise terms {ξ ϕ , ξ π }, 2. set up the Langevin equations (or, the corresponding adjoint Fokker-Planck equation) without ignoring the inflaton IR momentum Π, We have addressed the first issue here and will focus on the second in a forthcoming paper [81]. In Sec. 4 we computed the matrix elements, Σ ij , defined in Eq. (3.15), which characterise the statistics of the field and momentum noise terms. First, in Sec. 4.1, we derived expressions for Σ ij for featureless potentials where the slow-roll conditions ϵ H , η H ≪ 1 remain valid until almost the end of inflation. We compared the results of our analytical calculations, Eqs. (4.23)-(4.25), and numerical calculations (shown in Fig. 4) for the KKLT potential, Eq. (4.26), with the corresponding estimates under the pure de Sitter approximation, Eqs. (4.17)- (4.19). We found that the dS approximation underestimates the momentum induced noise terms, Σ ϕπ and Σ ππ , by several orders of magnitude, even for a slow-roll potential. In Sec. 4.2, we calculated the noise matrix elements for single field inflationary potentials with a slow-roll violating, PBH-forming feature. 
For the numerical calculations we used the modified KKLT potential featuring a tiny Gaussian bump, Eq. (4.28), as a proto-typical single-field PBH-forming potential. This potential has a sharp transition from the CMB scale SR-I phase to the subsequent near-USR phase, as shown in Fig. 5. Our results, plotted in Figs. 6 and 7, show that following the transition, Σ ij falls exponentially and the momentum induced noise terms dominate the field noise with the hierarchy Σ ππ > |Re(Σ ϕπ )| > Σ ϕϕ . Subsequently, the noise terms return back to their original hierarchy, before growing and tending to constant values. To understand the asymptotic behaviour of the noise terms, we calculated the noise matrix elements analytically using several approximations. Firstly we treated the sharp transition between the SR-I phase and the subsequent near-USR phase as instantaneous, and assumed the second slow-roll parameter η H to be piece-wise constant. By solving the Mukhanov-Sasaki equation, Eq. (4.7), analytically for a constant η H (and hence a constant ν), and applying the Israel junction matching conditions across the transition, we computed the noise matrix elements shown in the right panel of Fig. 8. We found that the behaviour of the noise terms post transition is governed by a single parameter, namely the transition strength, A, which is defined as the difference between the values of the second slow-roll parameter η H post-and pre-transition as given in Eq. (4.38). This analytical computation based on an instantaneous transition ν : ν 1 → ν 2 captures the key features of the noise matrix elements for potentials with a smooth feature (see Eqs. (4.72)-(4.74)). We also compared our calculations with those for an instantaneous transition using the pure dS approximation, i.e. ν 1 = ν 2 = 3/2, which was carried out in Ref. [97] for the Starobinsky model [109], see the left panel of Fig. 8. We found that the dS approximation underestimates the noise terms not only in the SR-I phase (as mentioned before), but also a long time after the transition. However, the pure dS-transition estimates are a good approximation to the behaviour of the noise terms immediately after the transition. Furthermore, for potentials with a pure 'flat' feature (as shown in Fig. 2) rather than a bump, the dS-transition approximations work quite well. In our analytical solutions of the MS equation, we focused on a single sharp transition, T-I. In this case the effective mass term z ′′ /z remains almost constant throughout the USR, T-I and CR phases, as can be seen in Fig. 5. Therefore the expression for the mode functions remains the same after the second transition, due to the Wands duality as discussed in Sec. 4.2. This is a common characteristic of a broad class of single field inflationary models with a PBH forming feature (see Ref. [87]). Our analytical scheme can be extended to situations where the effective mass term z ′′ /z undergoes two or more sharp transitions. We provide the relevant analytical expressions for the mode functions in this case in App. E. We conclude that in order to accurately determine the PDF of the curvature perturbation, P [ζ], beyond slow roll, one must solve the adjoint Fokker-Planck Equation (3.19) using the correct asymptotic forms of the noise matrix elements given in Eqs. (4.72)-(4.74). Our upcoming paper [81] will be dedicated to developing analytical and semi-analytical techniques to solve the adjoint FPE with the knowledge of Σ ij obtained here. 
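To complement the numerical results of Sec. 4.2.1, the sketch below indicates one way to reproduce Σ_ij curves like those in Fig. 6: integrate the Mukhanov–Sasaki equation in e-folds for a mode k and read off the correlators at the coarse-graining crossing k = σaH. The background arrays a(N), H(N) and (aH)^{-2} z''/z are assumed to have been computed separately, the slowly varying prefactor d ln(σaH)/dN is set to one, π_k is taken to be dφ_k/dN, and all function names are ours, not the paper's.

```python
import numpy as np
from scipy.integrate import solve_ivp

def noise_matrix_at_crossing(k, N_grid, a, H, zpp_over_z_aH2, sigma=0.01):
    """Sketch: integrate the MS equation in e-folds for a single mode k and return
    (Sigma_phiphi, Re Sigma_phipi, Sigma_pipi) at the coarse-graining crossing k = sigma*a*H."""
    aH = lambda N: np.interp(N, N_grid, a * H)
    m2 = lambda N: np.interp(N, N_grid, zpp_over_z_aH2)   # (aH)^{-2} z''/z on the background

    def rhs(N, y):                                        # y = [Re v, Im v, Re dv/dN, Im dv/dN]
        v, dv = y[0] + 1j * y[1], y[2] + 1j * y[3]
        d2v = -dv - ((k / aH(N))**2 - m2(N)) * v          # quasi-dS: friction factor (1 - eps_H) -> 1
        return [dv.real, dv.imag, d2v.real, d2v.imag]

    kaH = k / (a * H)                                     # decreasing function of N
    N0 = np.interp(100.0, kaH[::-1], N_grid[::-1])        # start deep inside the horizon, k/aH = 100
    Nc = np.interp(sigma, kaH[::-1], N_grid[::-1])        # crossing of the coarse-graining scale
    v0 = 1.0 / np.sqrt(2 * k)                             # Bunch-Davies initial data
    dv0 = -1j * (k / aH(N0)) * v0

    sol = solve_ivp(rhs, (N0, Nc), [v0.real, v0.imag, dv0.real, dv0.imag],
                    rtol=1e-8, atol=1e-12)
    vr, vi, dvr, dvi = sol.y[:, -1]
    a_c = np.interp(Nc, N_grid, a)
    phi_k = (vr + 1j * vi) / a_c                          # phi_k = v_k / a (spatially flat gauge)
    pi_k = (dvr + 1j * dvi) / a_c - phi_k                 # pi_k = d(phi_k)/dN
    pref = k**3 / (2.0 * np.pi**2)
    return pref * abs(phi_k)**2, pref * (phi_k * np.conj(pi_k)).real, pref * abs(pi_k)**2
```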
While numerical simulations of the Langevin equations can be carried out in full generality, they are often quite timeconsuming, and demand large computational resources. Furthermore, the analytical approach will allow us to calculate the asymptotic behaviour of the PDF and study the effects of the noise terms, Σ ij , the potential, V (ϕ), and the boundary conditions on the PDF separately. It is therefore complementary to the fully numerical simulations of the Langevin equations discussed in Ref. [66,70,71,74]. B Analytical solution of the Mukhanov-Sasaki equation For the featureless slow-roll potentials that we study in Sec. 4.1, ν 2 is greater than or equal to 9/4 and effectively constant. In this case the Mukhanov-Sasaki (MS) Eq. (4.1) can be written as a Bessel equation with constant ν, which can be solved analytically (see Refs. [126,127] using the new time variable, T , defined in Eq. (4.6) All modes undergo Hubble-exit at T = 1, with sub (super)-Hubble scales corresponding to T ≫ (≪)1. In terms of this new time variable, the MS equation takes the form Using the variable redefinition F = v k / √ T , this equation can be transformed into the more familiar Bessel equation: The general solution to Eq. (B.2) (when ν is not an integer) can be written either as a linear combination of Hankel functions of the first and second kind {H B.1 In terms of Hankel functions The general solution to the Bessel Eq. (B.2) in terms of the Hankel functions is given by where the coefficients C 1 and C 2 are fixed by initial/boundary conditions. Hence the solution to the MS equation can be written as In the sub-Hubble limit, T ≫ 1, the Hankel functions take the form while in the super-Hubble limit, T ≪ 1, the Hankel functions take the form The Bunch-Davies conditions, Eq. (4.5), for the mode functions take the form v k (T ) , which yields B.2 In terms of Bessel functions The general solution to the Bessel equation, Eq. (B.2), in terms of the Bessel functions of the first kind of order ±ν is given by where the coefficients C + and C − are again to be fixed by initial/boundary conditions. Hence the solution to MS Eq. (4.1) can be written as Imposing Bunch-Davies initial conditions, Eq. (4.5), we get and hence the final expression for the mode functions becomes (B.14) In this work, we express analytical solutions of the MS equation in terms of Hankel functions. However, our results can alternatively be easily expressed in terms of the Bessel functions by using Eqs. (B.3) and (B.14). C Super-Hubble expansion of the noise matrix elements The full expressions for the noise matrix elements are given in Eqs. (4.42)-(4.44). Since we need to evaluate Σ ij in the super-Hubble limit with T = σ ≪ 1, here we provide expressions for the noise terms, derived from the mode functions, v k (T ), in Eq. (4.20), as a series expansion in T , up to O(T 4 ), for constant ν: The above expressions for Σ ij are valid for any value of ν, and accurately reproduce the de Sitter results for ν = 3/2. We have verified that there are no higher order corrections in the dS limit. D Functional form of z ′′ /z during sharp transitions In this Appendix we derive the analytic expressions for z ′′ /z for the instantaneous transitions that we use in Sec. 4.2.2. Under the quasi-dS approximation, ϵ H ≃ 0, the effective mass term in the MS Eq. (4.1) becomes where this final form of z ′′ /z depends upon the expression for the second slow-roll parameter η H (τ ). 
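The quasi-de Sitter expression for z''/z referred to above, and the relation between η_H and ν used throughout Sec. 4.2, did not survive extraction. A reconstruction consistent with the values ν₁ = 1.52 ↔ η₁ = −0.02 and ν₂ = 1.8 ↔ η₂ = 3.3 quoted in the text is the following sketch.

```latex
% Effective mass term in the quasi-dS limit (eps_H -> 0), for a time-dependent eta_H(tau)
\begin{equation}
  \frac{z''}{z} \;=\; a^2H^2\,\big(1-\eta_H\big)\big(2-\eta_H\big) \;-\; aH\,\frac{\mathrm{d}\eta_H}{\mathrm{d}\tau}
  \;=\; \frac{\big(1-\eta_H\big)\big(2-\eta_H\big)}{\tau^{2}} \;+\; \frac{1}{\tau}\,\frac{\mathrm{d}\eta_H}{\mathrm{d}\tau} ,
\end{equation}
% so that for (piece-wise) constant eta_H one has nu^2 - 1/4 = (1 - eta_H)(2 - eta_H), i.e.
\begin{equation}
  \nu \;=\; \Big|\,\tfrac{3}{2}-\eta_H\,\Big| ,
\end{equation}
```
which reproduces ν = 3/2 both for slow roll (η_H ≃ 0) and for exact USR (η_H = 3), and produces a Dirac delta-function contribution to z''/z whenever η_H jumps.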
In the following, we will assume that η H is piece-wise constant and makes instantaneous, but finite transitions. We will begin with the simplest case where η H makes only one transition and later generalise this to the case of two or more successive transitions. D.3 Generalising to multiple instantaneous transitions If the inflaton potential exhibits a number of tiny features/modulations, then the second SR parameter η H might undergo a number of successive transitions before the end of inflation. E Noise matrix elements for two successive instantaneous transitions In this Appendix, we present the full calculations for the mode functions in a closed form for the case of two successive instantaneous transitions during inflation. This generalises the results presented in Case 1 (Pure dS limit) and Case 2 (transition between two different values ν 1 → ν 2 ) of Sec. 4.2.2. E.1 Pure dS limit For two successive instantaneous transitions SR → USR → SR in the pure dS limit 21 , the effective mass term in the MS equation takes the form where τ 1 is the transition time from SR to USR and τ 2 is the transition time from USR back to SR. A = +3 is the strength of the transition, as discussed before The MS (complex) mode functions generalise those derived in Eq. (4.46) and are given by 21 See Ref. [12] for an extension of the Starobinsky model [109] featuring two successive transitions in the context of PBH formation. and the derivatives of the mode functions generalise those derived in Eq. (4.47) and are given by Here the superscripts 'E', 'I' and 'L' stand for early, intermediate and late respectively. After implementing the Israel junction matching conditions, we obtain the final expressions for the mode functions (E.10) and the intermediate transition coefficients C I 1 and C I 2 are just α k and β k , respectively, derived previously in Eqs. (4.59) and (4.60).
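The display (E.1) above, giving the effective mass for two successive transitions, is also lost in this extraction. Combining the single-transition form with two steps of η_H (0 → 3 at τ₁ and 3 → 0 at τ₂) presumably gives the following sketch; the sign conventions of the original may differ.

```latex
% Effective mass for SR -> USR -> SR in the pure dS limit, with A = 3 (sketch of the elided Eq. (E.1))
\begin{equation}
  \frac{z''}{z} \;=\; \frac{2}{\tau^{2}}
  \;+\; \frac{A}{\tau_{1}}\,\delta\!\left(\tau-\tau_{1}\right)
  \;-\; \frac{A}{\tau_{2}}\,\delta\!\left(\tau-\tau_{2}\right) ,
  \qquad A = 3 ,
\end{equation}
```
with the first (negative, since τ₁ < 0) delta-function marking the SR → USR transition and the second (positive) one marking the USR → SR transition.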
A New Formulation of the $3D$ Compressible Euler Equations With Dynamic Entropy: Remarkable Null Structures and Regularity Properties

We derive a new formulation of the $3D$ compressible Euler equations with dynamic entropy exhibiting remarkable null structures and regularity properties. Our results hold for an arbitrary equation of state (which yields the pressure in terms of the density and the entropy) in non-vacuum regions where the speed of sound is positive. Our work is an extension of our prior joint work with J. Luk, in which we derived a similar new formulation in the special case of a barotropic fluid, that is, when the equation of state depends only on the density. The new formulation comprises covariant wave equations for the Cartesian components of the velocity and the logarithmic density coupled to a transport equation for the specific vorticity (defined to be vorticity divided by density), a transport equation for the entropy, and some additional transport-divergence-curl-type equations involving special combinations of the derivatives of the solution variables. The good geometric structures in the equations allow one to use the full power of the vectorfield method in treating the "wave part" of the system. In a forthcoming application, we will use the new formulation to give a sharp, constructive proof of finite-time shock formation, tied to the intersection of acoustic "wave" characteristics, for solutions with nontrivial vorticity and entropy at the singularity. In the present article, we derive the new formulation and overview the central role that it plays in the proof of shock formation.

Introduction and summary of main results

Our main result in this article is Theorem 3.1, in which we provide a new formulation of the compressible Euler equations with dynamic entropy that exhibits astoundingly good null structures and regularity properties. We consider only the physically relevant case of three spatial dimensions, though similar results hold in any number of spatial dimensions. Our results hold for an arbitrary equation of state in non-vacuum regions where the speed of sound is positive. By equation of state, we mean the function yielding the pressure in terms of the density and the entropy. Our results are an extension of our previous joint work with J. Luk [20], in which we derived a similar new formulation of the equations in the special case of a barotropic fluid, that is, when the equation of state depends only on the density. Our work [20] was in turn inspired by Christodoulou's remarkable proofs [5,8] of shock formation for small-data solutions to the compressible Euler equations in irrotational (that is, vorticity-free) regions as well as our prior work [26] on shock formation for general classes of wave equations; we describe these works in more detail below. A principal application of the new formulation is that it serves as the starting point for our forthcoming work, in which we plan to give a sharp proof of finite-time shock formation for an open set of initial conditions without making any symmetry assumptions, irrotationality assumption, or barotropic equation of state assumption. The forthcoming work will be an extension of our recent work with J. Luk [19], in which we proved a similar shock formation result for barotropic fluids in the case of two spatial dimensions.
Our new formulation of the compressible Euler equations comprises covariant wave equations, transport equations, and transport-divergence-curl-type equations involving special combinations of solution variables (see Def. 1.3). As we mentioned earlier, the inhomogeneous terms exhibit good null structures, which we characterize in our second main result, Theorem 3.2. Its proof is quite simple given Theorem 3.1. As we mentioned above, in [20], we derived a similar new formulation of the equations under the assumption that the fluid is barotropic. The barotropic assumption, though often made in astrophysics, cosmology, and meteorology, is generally unjustified because it entails neglecting thermal dynamics and their effect on the fluid. Compressible fluid models that are more physically realistic feature equations of state that depend on the density and a second thermodynamic state-space variable, such as the temperature, which satisfies an evolution equation that is coupled to the other fluid equations. In the present article, we allow for an arbitrary physical equation of state in which, for mathematical convenience, we have chosen 1 the second thermodynamic variable to be the entropy per unit mass (which we refer to as simply the "entropy" from now on). 1.1. Paper outline. In the remainder of Sect. 1, we provide some standard background material on the compressible Euler equations, define the solution variables that we use in formulating our main results, roughly summarize our main results, and provide some preliminary context. In Sect. 2, we define some geometric objects that we use in formulating our main results and provide some basic background on Lorentzian geometry and null forms. In Sect. 3, we give precise statements of our main results, namely Theorems 3.1 and 3.2, and give the simple proof of the latter. In Sect. 4, we overview our forthcoming proof of shock formation, highlighting the roles that Theorems 3.1 and 3.2 will play. In Sect. 5, we prove Theorem 3.1 via a series of calculations in which we observe many important cancellations. 1.3. Basic background on the compressible Euler equations. In this subsection, we provide some basic background on the compressible Euler equations. 1 For sufficiently regular solutions, there are many equivalently formulations of the compressible Euler equations, depending on the state-space variables that one chooses as unknowns in the system. 2 In our forthcoming proof of shock formation, we will, for convenience, consider spacetimes with topology R × Σ, where Σ := R × T 2 is the space manifold; see Sect. 4 for an overview. In that context, {x α } α=0,1,2,3 denotes the usual Cartesian coordinate system on R × Σ, where x 0 ∈ R is the time coordinate, x 1 is a standard spatial coordinate on R, and x 2 and x 3 are standard (locally defined) coordinates on T 2 . Note that the vectorfields ∂ α := ∂ ∂x α on T 2 can be extended so as to be globally defined and smooth. Equations of state. We study the compressible Euler equations for a perfect fluid in three spatial dimensions under any equation of state with positive sound speed (see definition (1.3.9)). The equation of state is the function (which we assume to be given) that determines the pressure p in terms of the density ≥ 0 and the entropy s ∈ R: Given the equation of state, the compressible Euler equations can be formulated as evolution equations for the velocity v : R 1+3 → R 3 , the density : R 1+3 → [0, ∞), and the entropy s : R 1+3 → (−∞, ∞). 1.3.2. Some definitions. 
We use the following notation 3 for the Euclidean divergence and curl of a Σ t −tangent vectorfield V : div V := ∂_a V^a and (curl V)^i := ε_{iab} ∂_a V^b. In (1.3.2), ε_{ijk} is the fully antisymmetric symbol normalized by ε_{123} = 1. The vorticity ω : R 1+3 → R 3 is the vectorfield ω := curl v. Rather than formulating the equations in terms of the density and the vorticity, we find it convenient to use the logarithmic density ρ and the specific vorticity Ω; some of the equations that we study take a simpler form when expressed in terms of these variables. To define these quantities, we first fix a constant background density ϱ̄ such that ϱ̄ > 0. (1.3.5) In applications, one may choose any convenient value 4 of ϱ̄. We assume throughout 5 that ϱ > 0. (1.3.7) In particular, the variable ρ is finite assuming (1.3.7). In the study of shock formation, to obtain sufficient top-order regularity for the entropy, it is important to work with the Σ t -tangent vectorfield S provided by the next definition; see Remark 1.1 for further discussion. Definition 1.2 (Entropy gradient vectorfield). We define the Cartesian components of the Σ t -tangent entropy gradient vectorfield S as follows, (i = 1, 2, 3): S^i := δ^{ia} ∂_a s. (1.3.8) Remark 1.1 (The need for S and transport-div-curl estimates in controlling s). In our forthcoming proof of shock formation, we will control the top-order derivatives of s by combining estimates for transport equations with div-curl-type elliptic estimates for S and its higher derivatives. At first glance, it might seem like the div-curl elliptic estimates could be replaced with simpler elliptic estimates for ∆s, in view of the simple identity ∆s = div S. Although this is true for ∆s itself, in our proof of shock formation, the Euclidean Laplacian ∆ is not compatible with the differential operators that we must use to commute the equations when obtaining estimates for the solution's higher derivatives. Specifically, like all prior works on shock formation in more than one spatial dimension, our forthcoming proof is based on commuting the equations with geometric vectorfields (see Subsect. 4.3 for an overview) that are adapted to the acoustic wave characteristics of the compressible Euler equations, 6 which have essentially nothing to do with the operator ∆. Therefore, the geometric vectorfields exhibit very poor commutation properties with ∆ and in fact would generate uncontrollable error terms if commuted with it. In contrast, in carrying out our transport-divergence-curl-type estimates, we only have to commute the geometric vectorfields through first-order operators including the transport operator, div, and curl; it turns out that commuting the geometric vectorfields through first-order operators, as long as they are weighted with an appropriate geometric weight, 7 leads to controllable error terms, compatible with following the solution all the way to the singularity. We explain this issue in more detail in Steps (1) and (2) of Subsect. 4.3. Notation 1.1 (State-space variable differentiation via semicolons). If f = f(ρ, s) is a scalar function, then we use the following notation to denote partial differentiation with respect to ρ and s: f_{;ρ} := ∂f/∂ρ and f_{;s} := ∂f/∂s. Moreover, f_{;ρ;s} := ∂²f/(∂s∂ρ), and we use similar notation for other higher partial derivatives of f with respect to ρ and s. The quantity c defined in (1.3.9) is a fundamental quantity known as the speed of sound. To obtain the last equality in (1.3.9), we used a simple chain rule identity. We make the following physical assumptions, which ensure the hyperbolicity of the system when ϱ > 0: • c ≥ 0. 1.3.4.
A standard first-order formulation of the compressible Euler equations. We now state a standard first-order formulation of the compressible Euler equations; these equations are the starting point of our new formulation. Specifically, relative to Cartesian coordinates, the compressible Euler equations can be expressed 9 as follows: Above and throughout, δ ab is the standard Kronecker delta and is the material derivative vectorfield. We note that B plays a critical role in the ensuing discussion. Readers may consult, for example, [8] for discussion behind the physics of the equations and for a first-order formulation of them in terms of , {v i } i=1,2,3 , and s, which can easily seen to be equivalent to (1.3.11a)-(1.3.11c). Modified fluid variables. Although it is not obvious, the quantities provided in the following definition satisfy transport equations with a good structure; see (3.1.2b) and (3.1.3a). When combined with elliptic estimates, the transport equations allow one to prove that the specific vorticity and entropy are one degree more differentiable than naive estimates would yield. This gain of regularity is essential in our forthcoming proof of shock formation since it is needed to control some of the source terms in the wave equations for the velocity and density, specifically, the first products on RHSs (3.1.1a)-(3.1.1b). In addition, the source terms in the transport equations have a good null structure, which is also essential in the study of shock formation. We discuss these issues in more detail in Sect. 4. Definition 1.3 (Modified fluid variables). We define the Cartesian components of the Σ t -tangent vectorfield C and the scalar function D as follows, (i = 1, 2, 3): 1.5. Some preliminary context for the main results. In this subsection, we provide some preliminary context for our main results, with a focus on the special null structures exhibited by the inhomogeneous terms in our new formulation of the compressible Euler equations and their relevance for our forthcoming proof of shock formation. The presence of special null structures in the equations might seem surprising since they are often associated with equations that admit global solutions. However, as we explain below, the good null structures are in fact key to proving that the shock forms. Several works have contributed to our understanding of the important role that the null structures play in the proof of shock formation, including [5,12,19,20,26]. Here we review these works and some related ones and, for the results in more than one spatial dimension, we highlight the role that the presence of good geo-analytic and null structures played in the proofs. The famous work of Riemann [24], in which he invented the Riemann invariants, yielded the first general proof of shock formation for solutions to the compressible Euler equations in one spatial dimension. More precisely, for such solutions, the velocity and density remain bounded even though their first-order Cartesian coordinate partial derivatives blow up in finite time. The most standard proof of this phenomenon is elementary and is essentially based on identifying a Riccati-type blowup mechanism for the solution's first derivatives; see Subsect. 4.1 for a review of these ideas in the context of simple plane wave solutions. In all prior proofs of shock formation in more than one spatial dimension, there also was a Riccati-type mechanism that drives the blowup. 
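To fix ideas, the crudest caricature of this Riccati-type mechanism is the model ODE (a toy computation recorded here only for illustration, not taken from the works under discussion):
\[
\frac{d}{dt} w = k\, w^2, \qquad w(0) = w_0
\qquad \Longrightarrow \qquad
w(t) = \frac{w_0}{1 - k\, w_0\, t},
\]
so that w blows up at the finite time t_* = (k w_0)^{-1} precisely when k and w_0 have the same (non-zero) sign. Heuristically, right-hand-side perturbations that are lower order in w do not prevent this blowup, and this is the role played near the shock by the terms enjoying the special null structure discussed in this subsection.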
However, in the analysis, the authors encountered many new kinds of error terms that are much more complicated than the ones encountered by Riemann. A key aspect of the proofs was showing that the additional error terms do not interfere with the Riccati-type blowup mechanism. This is where the special null structure mentioned above enters into play: terms that enjoy the special null structure are weak compared to the Riccati-type terms that drive the singularity, at least near the shock. In order to explain this in more detail, we now review some prior works on shock formation in more than one spatial dimension. Alinhac was the first [1][2][3][4] to prove shock formation results for quasilinear hyperbolic PDEs in more than one spatial dimension. Specifically, in two and three spatial dimensions, he proved shock formation results for scalar quasilinear wave equations of the form 10 whenever the nonlinearities fail to satisfy the null condition 11 and the data are small, smooth, compactly supported, and verify a non-degeneracy condition. Although Alinhac's work significantly advanced our understanding of singularity formation in solutions to quasilinear wave equations, the most robust and precise framework for proving shock formation in solutions to quasilinear wave equations was developed by Christodoulou in his groundbreaking work [5]. More precisely, in [5], Christodoulou proved a small-data shock formation result for irrotational solutions to the equations of compressible relativistic fluid mechanics. In the irrotational case, the equations are equivalent to an Euler-Lagrange equation for a potential function Φ, which can be expressed in the form (1.5.1). Christodoulou's sharp geometric framework relied on a reformulation of the wave equation (1.5.1) that exhibits good geoanalytic structures (see equation (1.5.2)), and his approach yielded information that is not accessible via Alinhac's approach. In particular, Christodoulou's framework is able to reveal information about the structure of the maximal classical development 12 of the initial data, all the way up to the boundary, information that is essential for properly setting up the shock development problem in compressible fluid mechanics. Roughly, the shock development problem is the problem of weakly continuing the solution past the singularity under suitable jump conditions. We note that even if the data are irrotational, vorticity can be generated to the future of the first singularity. Thus, in the study of the shock development problem, one must consider the full compressible Euler equations. The shock development problem remains open in full generality and is expected to be very difficult. However, Christodoulou-Lisibach recently made important progress [7]: they solved it in spherical symmetry in the relativistic case. Christodoulou's shock formation results for the irrotational relativistic compressible Euler equations were extended to the non-relativistic irrotational compressible Euler equations by , to general classes of wave equations [26] by the author, and to other solution regimes in [22,23,27]. Readers may consult the survey article [12] for an extended overview of some of these works. Of the above works, the ones [5,8] are most relevant for the present article. In those works, the authors proved small-data shock formation results for the compressible Euler equations in irrotational regions by studying the wave equation for the potential function Φ. 
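For orientation, equations of the type (1.5.1) referred to above are typically written, relative to Cartesian coordinates, in the schematic form
\[
(g^{-1})^{\alpha\beta}(\partial \Phi)\, \partial_\alpha \partial_\beta \Phi = 0,
\]
where the components of the inverse metric depend on the first-order derivatives of Φ; we record this standard form only as a guide, and the exact conventions of the cited works may differ.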
The wave equation can be written in the (non-Euler-Lagrange) form (1.5.1) relative to Cartesian coordinates, 13 where the Cartesian components g αβ = g αβ (∂Φ) are determined by the fluid equation of state. In the context of fluid mechanics, the Lorentzian metric g in (1.5.1) is known as the acoustical metric (since it drives the propagation of sound waves). We note that the acoustical metric also plays a fundamental role in the main results of this article (see Def. 2.1), even when the vorticity is non-zero. 11 Klainerman formulated the null condition in three spatial dimensions [16], while Alinhac formulated it in two spatial dimensions [3]. For equations of type (1.5.1), the difference is that in three spatial dimensions, the definition of the null condition involves only the structure of the quadratic part ∂Φ · ∂ 2 Φ of the nonlinearities (obtained by Taylor expansion) while in two spatial dimensions, it also involves the cubic part ∂Φ · ∂Φ · ∂ 2 Φ. 12 Roughly, the maximal classical development is the largest possible classical solution that is uniquely determined by the data; see, for example, [25,28] for further discussion. 13 In discussing [5], it would be better for us to call them "rectangular coordinates" since the equations there are introduced in the context of special relativity, and the Minkowski metric takes the "rectangular" form diag(−1, 1, 1, 1) relative to these coordinates. A simple but essential step in Christodoulou's proof of shock formation was to differentiate the wave equation (1.5.1) with the Cartesian coordinate partial derivative vectorfields ∂ ν , which led to the following system of covariant wave equations: is an array of scalar functions, Ψ ν := ∂ ν Φ (with ∂ ν denoting a Cartesian coordinate partial derivative), g is a Lorentzian metric conformal to g, g( Ψ) is the covariant wave operator of g (see Def. 3.1), and Ψ ν is treated as a scalar function under covariant differentiation in (1.5.2). A key feature of the system (1.5.2) is that all of the terms that drive the shock formation are on the left-hand side, hidden in the lower-order terms generated by the operator g( Ψ) . That is, if one expands g( Ψ) Ψ ν relative to the standard Cartesian coordinates, one encounters Riccati-type terms 14 The presence of a covariant wave operator on LHS (1.5.2) was crucial for Christodoulou's analysis. The reason is that he was able to construct, with the help of an eikonal function (see Subsect. 4.2), a collection of geometric, solution-dependent vectorfields that enjoy good commutation properties with g( Ψ) . He then used the vectorfields to differentiate the wave equations and to obtain estimates for the solution's higher derivatives, much like in his celebrated proof [6], joint with Klainerman, of the dynamic stability of Minkowski spacetime as a solution to the Einstein-vacuum equations. Indeed, in more than one spatial dimension, the main technical challenge in the proof of shock formation is to derive sufficient energy estimates for the geometric vectorfield derivatives of the solution that hold all the way up to the singularity. In the context of shock formation, this step is exceptionally technical and we discuss it in more detail in Sect. 4. It is important to note that the standard Cartesian coordinate partial derivatives ∂ ν generate uncontrollable error terms when commuted through g( Ψ) and thus the geometric vectorfields and the operator g( Ψ) are essential ingredients in the proof. 
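For the reader's convenience, we also record a common normalization of the acoustical metric mentioned above (cf. Def. 2.1 below); the convention here is chosen so that (g^{-1})^{00} = -1, in agreement with Footnote 17, though the paper's exact display may differ:
\[
g = - dt \otimes dt + c^{-2} \sum_{a=1}^{3} (dx^a - v^a dt) \otimes (dx^a - v^a dt),
\qquad
g^{-1} = - B \otimes B + c^{2} \sum_{a=1}^{3} \partial_a \otimes \partial_a,
\]
where B = ∂_t + v^a ∂_a is the material derivative vectorfield. A direct computation with this normalization gives g(B,B) = -1 and g(B, ∂_i) = 0 for i = 1, 2, 3, consistent with the properties of B recorded in Lemma 2.1 below.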
In [26], we showed that if one considers a general wave equation of type (1.5.1), not necessarily of the Euler-Lagrange type considered by Christodoulou [5] and Christodoulou-Miao [8], then upon differentiating it with ∂ ν , one does not generate a system of type (1.5.2), but rather an inhomogeneous system of the form where f is smooth and Q is a standard null form relative to the acoustical metric g; see Def. 2.5. We then showed that the null forms relative to g have precisely the right structure such that they do not interfere with or prevent the shock formation processes, at least for suitable data. The Q are canonical examples of terms that enjoy the good null structure that we mentioned at the beginning of this subsection. More generally, we refer to the good null structure as the strong null condition; see Def. 2.4 and Prop. 2.3. We stress that the full nonlinear structure of the null forms Q is critically important. This is quite different from the famous null condition identified by Klainerman [16] in his study of wave equations in three spatial dimensions that enjoy small-data global existence; in Klainerman's formulation of the null condition, the structure of cubic and higher order terms is not even taken into consideration since, in the small-data regime that he studied, wave dispersion causes the cubic terms to decay fast enough that their precise structure is typically not important. The reason that the full nonlinear structure of the null forms Q is of critical importance in the study of shock formation is that they are adapted to the acoustical metric g and enjoy the following key property: each Q is linear in the tensorial component of ∂ Ψ that blows up. Therefore, near the singularity, Q is small relative to the quadratic terms ∂ Ψ · ∂ Ψ that drive the singularity formation (which we again stress are hidden in the definition of g( Ψ) Ψ ν ). Roughly, this linear dependence is the crux of the strong null condition. In contrast, a typical quadratic inhomogeneous term ∂ Ψ · ∂ Ψ, if present on RHS (1.5.3), would distort the dynamics near the singularity and could in principle prevent it from forming or change its nature. Moreover, in the context of shock formation, cubic or higher-order terms such as ∂ Ψ·∂ Ψ·∂ Ψ are expected to become dominant in regions where ∂ Ψ is large and it is therefore critically important that there are no such terms on RHS (1.5.3). These observations suggest that proofs of shock formation are less stable under perturbations of the equations compared to more familiar perturbative proofs of global existence. The equations in our new formulation of the compressible Euler equations (see Theorem 3.1) are drastically more complicated than the homogeneous wave equations (1.5.2) that Christodoulou encountered in his study of irrotational compressible fluid mechanics and the inhomogeneous equations (1.5.3) that we encountered in [26]. The equations of Theorem 3.1 are even considerably more complicated than the equations we derived in [20] in our study of the barotropic fluids with vorticity. However, they exhibit many of the same good structures as the equations of [20] as well as some remarkable new ones. Specifically, in the present article, we derive geometric equations whose inhomogeneous terms are either null forms similar to the ones on RHS (1.5.3) or less dangerous terms that are at most linear in the solution's derivatives. 
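For concreteness, the standard null forms relative to a Lorentzian metric g, referred to in Def. 2.5 below, take the classical form
\[
Q_g(\partial \phi, \partial \widetilde{\phi}) := (g^{-1})^{\alpha\beta} \partial_\alpha \phi\, \partial_\beta \widetilde{\phi},
\qquad
Q_{(\alpha\beta)}(\partial \phi, \partial \widetilde{\phi}) := \partial_\alpha \phi\, \partial_\beta \widetilde{\phi} - \partial_\alpha \widetilde{\phi}\, \partial_\beta \phi,
\]
up to the precise notational conventions adopted in Def. 2.5. The feature exploited repeatedly below is the one just described: relative to a null frame adapted to g, each such form is linear in the derivative components that can blow up at the shock.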
We find this presence of this null structure to be somewhat miraculous in view of the sensitivity of proofs of shock formation under perturbations of the equations that we described in the previous paragraph. Moreover, in Theorem 3.1, we also exhibit special combinations of the solution variables that solve equations with good source terms, which, when combined with elliptic estimates, can be used show that the vorticity is one degree more differentiable than one might expect; 15 see Def. 1.3 for the special combinations, which we refer to as "modified fluid variables." The gain in differentiability for the vorticity has long been known relative to Lagrangian coordinates, in particular because it has played an important role in proofs of local well-posedness [9-11, 13, 14] for the compressible Euler equations for data featuring a physical vacuum-fluid boundary. However, the gain in differentiability for the vorticity with respect to arbitrary vectorfield differential operators (with coefficients of sufficient regularity relative to the solution) seems to originate in [20]. The freedom to gain the derivative relative to general vectorfield differential operators is important because Lagrangian coordinates are not adapted to the wave characteristics, whose intersection corresponds to the formation of a shock. Therefore, Lagrangian coordinates are not suitable for following the solution all the way to the shock; instead, as we describe in Subsects. 4.2 and 4.3, one needs a system of geometric coordinates constructed with the help of an eikonal function as well the aforementioned geometric vectorfields, which are closely related to the geometric coordinates. We remark that in the barotropic case [20], the "special combinations" of solution variables were simpler than they are in the present article. Specifically, in the barotropic case, the specific vorticity and its curl satisfied good transport equations; compare with (1.3.13a). Similarly, we can prove that the entropy is one degree more differentiable than one might expect by studying a rescaled version of its Laplacian; 16 see (1.3.13b). To the best of our knowledge, the gain in regularity for the entropy is a new observation. As we mentioned above, we exhibit the special null structure of the inhomogeneous terms in Theorem 3.2. Given Theorem 3.1, the proof of Theorem 3.2 is simple and is essentially by observation. However, it is difficult to overstate its profound importance in the study of shock formation since, as we described above, the good null structures are essential for showing that the inhomogeneous terms are not strong enough to interfere with the shock formation processes (at least for suitable data). The gain of differentiability mentioned in the previous paragraph is also essential for our forthcoming work on shock formation since we need it in order to control some of the source terms in the wave equations. Geometric background and the strong null condition In this section, we define some geometric objects and concepts that we need in order to state our main results. 2.1. Geometric tensorfields associated to the flow. Roughly, there are two kinds of motion associated to compressible Euler flow: the transporting of vorticity and the propagation of sound waves. We now discuss the tensorfields associated to these phenomena. The material derivative vectorfield B, defined in (1.3.12), is associated to the transporting of vorticity. We now define the Lorentzian metric g corresponding to the propagation of sound waves. 
Definition 2.1 (The acoustical metric 17 and its inverse). We define the acoustical metric g and the inverse acoustical metric g −1 relative to the Cartesian coordinates as follows: (2.1.1b) 16 Actually, with our future study of shock formation in mind, we formulate a div-curl-transport equation system for the gradient of the entropy; see equations (3.1.3a)-(3.1.3b) and Remark 1.1. 17 Other authors have defined the acoustical metric to be c 2 g. We prefer our definition because it implies that (g −1 ) 00 = −1, which simplifies the presentation of many formulas. Remark 2.1. One can easily check that g −1 is the matrix inverse of g, that is, we have (g −1 ) µα g αν = δ µ ν , where δ µ ν is the standard Kronecker delta. The vectorfield B enjoys some simple but important geometric properties, which we provide in the next lemma. We repeat the simple proof from [20] for convenience. 19 g−orthogonal to Σ t , and unit-length: 20 follows from a simple calculation based on (1.3.12) and (2.1.1a). Similarly, we compute that g(B, ∂ i ) := g αi B α = 0 for i = 1, 2, 3, from which it follows that B is g−orthogonal to Σ t . 2.2. Decompositions relative to null frames. The special null structure of our new formulation of the compressible Euler equations, which we briefly described in Subsect. 1.5, is intimately connected to the notion of a null frame. where δ AB is the standard Kronecker delta. The following lemma is a simple consequence of Def. 2.2; we omit the simple proof. Lemma 2.2 (Decomposition of g −1 relative to a null frame). Relative to an arbitrary g−null frame, we have Definition 2.3 (Decomposition of a derivative-quadratic nonlinear term relative to a null frame). Let where V 0 is the 0 Cartesian component. 20 Throughout we use the notation g(V, W ) := g αβ V α W β . 21 The topology of the spacetime manifold is not relevant for our discussion here. Moreover, we let M A α be the scalar functions corresponding to expanding the Cartesian coordinate partial derivative vectorfield ∂ α at q relative to the null frame, that is, Then 22 denotes the nonlinear term obtained by expressing N ( V, ∂ V ) in terms of the derivatives of V with respect to the elements of N , that is, by expanding ∂ V as a linear combination of the derivatives of V with respect to the elements of N and substituting the expression for the factor ∂ V in N ( V, ∂ V ). 2.3. Strong null condition and standard null forms. In Subsect. 1.5, we roughly described the special null structure enjoyed by the inhomogeneous terms in our new formulation of the compressible Euler equations. We precisely define the special null structure in the next definition, which we recall from [20]. Definition 2.4 (Strong null condition). Let N ( V, ∂ V ) be as in Def. 2.3. We say that N ( V, ∂ V ) verifies the strong null condition relative to g if the following condition holds: for every g-null frame N , N N can be expressed in a form that depends linearly (or not at all) and such that the following hold for Θ, Γ = 0, 1, · · · , 10: (2.3.2) 22 Here and below, we use Einstein's summation convention, where uppercase Latin indices such as A and B vary over 1, 2, 3, 4, lowercase Latin "spatial" indices such as a and b vary over 1, 2, 3, uppercase Greek indices such as Θ and Γ vary over 0, 1, . . . , 10, and lowercase Greek "spacetime" such as α and β indices vary over 0, 1, 2, 3. Remark 2.2 (Some comments on the strong null condition). 
Equation (2.3.2) allows for the possibility that one uses evolution equations to algebraically substitute for terms on LHS (2.3.2), thereby generating the good terms on RHS (2.3.2), which verify the essential condition (2.3.1). As our proof of Prop. 2.3 below shows, this kind of substitution is not needed for null form nonlinearities, which can directly be shown to exhibit the desired structure without the help of external evolution equations. That is, for null forms In the present article, the formulation of the equations that we provide (see Theorem 3.1) is such that all derivative-quadratic terms are null forms. Readers might then wonder why our definition of the strong null condition allows for the more complicated scenario in which one uses external evolution equations for algebraic substitution to detect the good null structure. The reason is that in our work [20] on the barotropic case, we encountered the inhomogeneous terms , which are not null forms. To show that this term had the desired null structure, we used evolution equations for substitution and therefore relied on the full scope of Def. 2.4. In the present article, we encounter the same term, but we treat it in a different way and show that in form plus other terms that are either harmless or that can be incorporated into our definition of the modified fluid variables from Def. 1.3; see the identity (5.1.14) and the calculations below it. A key feature of our new formulation of the compressible Euler equations is that all derivative-quadratic inhomogeneous terms are linear combinations of the standard null forms relative to the acoustical metric g, which verify the strong null condition relative to g (see Prop. 2.3). We now recall their standard definition. Definition 2.5 (Standard null forms). The standard null forms Q g (·, ·) (relative to g) and Q (αβ) (·, ·) act on pairs (φ, φ) of scalar-valued functions as follows: Proof. In the case of the null form Q g , the proof is a direct consequence of the identity (2.2.3). In the case of the null form Q (αβ) defined in (2.3.3b), we consider any g-null frame (2.2.1), and we label its elements as follows: N := {e 1 , e 2 , e 3 := L, e 4 := L}. Since N spans the tangent space at each point where it is defined, there exist scalar functions M A α such that the following identity holds for α = 0, 1, 2, 3: The key point is that the terms in braces are antisymmetric in A and B. It follows that the sum does not contain any diagonal terms, that is, terms proportional to (e A φ)e A φ. In particular, terms proportional to (Lφ)L φ and (Lφ)L φ are not present, which is the desired result. Precise statement of the main results In this section, we precisely state our two main theorems and give the simple proof of the second one. We start recalling the standard definition of the covariant wave operator g . Definition 3.1 (Covariant wave operator). Let g be a Lorentzian metric. Relative to arbitrary coordinates, the covariant wave operator g acts on scalar-valued functions φ as follows: The new formulation of the compressible Euler equations with entropy. Our first main result is Theorem 3.1, which provides the new formulation of the compressible Euler equations. We postpone its lengthy proof until Sect. 5. Remark 3.1 (Explanation of the different kinds of inhomogeneous terms). In the equations of Theorem 3.1, there are many inhomogeneous terms that are denoted by decorated versions of Q. 
These terms are linear combinations of g-null forms that, in our forthcoming proof of shock formation, can be controlled in the energy estimates without elliptic estimates. Similarly, in the equations of Theorem 3.1, decorated versions of the symbol L denote terms that are at most linear in the derivatives of the solution and that can be controlled in the energy estimates without elliptic estimates. In our forthcoming proof of proving shock formation, the Q's and L's will be simple error terms. The equations of Theorem 3.1 also feature additional null form inhomogeneous terms depending on ∂Ω and ∂S, which we explicitly display because one needs elliptic estimates along Σ t to control them in the energy estimates. For this reason, in the proof of shock formation, these terms are substantially more difficult to bound compared to the Q's and L's. Similarly, terms that are linear in ∂Ω, ∂S, C, or D can be controlled only with the help of elliptic estimates along Σ t . Covariant wave equations: Transport equations: Transport-divergence-curl system for the specific vorticity Transport-divergence-curl system for the entropy gradient , and Q (D) are the null forms relative to g defined by , which are at most linear in the derivatives of the unknowns, are defined as follows: are equivalent to the equations that we derived in [20]. However, one needs some observations described in Remark 2.2 in order to see the equivalence. On the other hand, to solve the Cauchy problem for the system (3. These data can be obtained by differentiating the fundamental data and using equations (1.3.11a)-(1.3.11c) for substitution. 23 3.2. The structure of the inhomogeneous terms. The next theorem is our second main result. In the theorem, we characterize the structure of the inhomogeneous terms in the equations Theorem 3.1. The most important part of the theorem is the null structure of the type (iii) terms. Theorem 3.2 (The structure of the inhomogeneous terms). Let denote the array of unknowns. The inhomogeneous terms on the right-hand sides of equations (3.1.1a)-(3.1.3b) consist of three types: where f is smooth and Q is a standard null form relative to g from Def. 2.5 Proof. It is easy to see that Q i (v) , Q (ρ) , Q i (C) , and Q (D) are type (iii) terms and that the same is true for the products on the first through third lines of RHS (3.1.2b) and the terms in braces on the first line of RHS (3.1.3a). Similarly, it is easy to see that , and L i (C) are sums of terms of type (i) and (ii), while the first product on RHS (3.1.1a), the first product on RHS (3.1.1b), and the second product on RHS (3.1.3a) are , in view of Def. 1.3, type (ii). Overview of the roles of Theorems 3.1 and 3.2 in proving shock formation As we mentioned in Sect. 1, in forthcoming work, we plan to use the results of Theorems 3.1 and 3.2 as the starting point for a proof of finite-time shock formation for the compressible Euler equations. In this section, we overview the main ideas in the proof and highlight the role that Theorems 3.1 and 3.2 play. We plan to study a convenient open set of initial conditions in three spatial dimensions whose solutions typically have non-zero vorticity and non-constant entropy: perturbations (without symmetry conditions) of simple isentropic (that is, constant entropy 24 ) plane waves. 25 We note that in our joint work [27] on scalar wave equations in two spatial dimensions, we proved shock formation for solutions corresponding to a similar set of nearly plane symmetric initial data. 
24 Note that the transport equation (1.3.11c) implies that the entropy is constant in the maximal classical development of the data if it is constant along Σ 0 . 25 These simple plane waves have vanishing vorticity and constant entropy, though their perturbations generally do not. 26 In one spatial dimension, wave equations are essentially transport equations and thus their solutions do not experience dispersive decay.
The advantage of studying perturbations of simple isentropic plane waves is that it allows us to focus our attention on the singularity formation without having to confront additional evolutionary phenomena that are often found in solutions to wave-like systems. For example, nearly plane symmetric solutions do not exhibit wave dispersion because their dynamics are dominated by 1D-type wave behavior. 26 In particular, our forthcoming analysis will not feature time weights or radial weights. 4.1. Blowup for simple isentropic plane waves. Simple isentropic plane waves are a subclass of plane symmetric solutions. By plane symmetric solutions, we mean solutions that depend only on t and x^1 and such that v^2 ≡ v^3 ≡ 0. To further explain simple isentropic plane wave solutions, we start by defining the Riemann invariants R_±. The function F appearing in their definition solves the initial value problem (4.1.3), in which we omit the dependence of c on s (since s ≡ constant) and impose the convenient normalization condition F(ρ = 0) = 0. As is well-known, in one spatial dimension, the compressible Euler equations take a simple diagonal form when expressed in terms of R_±. The simple isentropic plane wave solutions described in the previous paragraph typically form a shock in finite time via the same mechanism that leads to singularity formation in solutions to Burgers' equation. For illustration, we now quickly sketch the argument. We assume the simple isentropic plane wave condition R_− ≡ 0, which implies that the system (4.1.3) reduces to {∂_t + f(R_+)∂_1}R_+ = 0, where f is a smooth function determined by F. It can be shown that f is not a constant-valued function of R_+ except in the case of the equation of state of a Chaplygin gas, which is p = p(ϱ) = C_0 − C_1 ϱ^{−1}, where C_0 ∈ R and C_1 > 0. We now take a ∂_1 derivative of the evolution equation for R_+ to deduce the equation {∂_t + f(R_+)∂_1}(∂_1 R_+) = −f′(R_+)(∂_1 R_+)². Since R_+ is constant along the integral curves of ∂_t + f(R_+)∂_1 (which are also known as characteristics in the present context), the above equation may be viewed as a Riccati-type ODE for ∂_1 R_+ along the characteristics of the form d/dt(∂_1 R_+) = k (∂_1 R_+)², where the constant k is equal to −f′(R_+) evaluated at the point on the x^1 -axis from which the characteristic emanates. Thus, for initial data such that ∂_1 R_+ and k have the same (non-zero) sign at some point along the x^1 axis, the solution ∂_1 R_+ will blow up in finite time along the corresponding characteristic, even though R_+ remains bounded; this is essentially the crudest picture of the formation of a shock singularity. Note that there is no blowup in the case of the Chaplygin gas since f′ ≡ 0 in that case; see Footnote 32 for related remarks. 4.2. Fundamental ingredients in the proof of shock formation in more than one spatial dimension. We can view the simple isentropic plane waves described in Subsect. 4.1 as solutions in three spatial dimensions that have symmetry. In our forthcoming work on shock formation in three spatial dimensions, we will study perturbations (without symmetry assumptions) of simple isentropic plane waves and show that the shock formation illustrated in Subsect. 4.1 is stable.
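For the reader's convenience, a standard choice of the Riemann invariants of Subsect. 4.1, whose conventions may differ from the paper's displays, is
\[
R_{\pm} := v^1 \pm F(\rho), \qquad \frac{dF}{d\rho}(\rho) = c(\rho), \qquad F(0) = 0,
\]
in which case, in plane symmetry with s ≡ constant, the compressible Euler equations reduce to the diagonal system
\[
\big\{ \partial_t + (v^1 \pm c)\, \partial_1 \big\} R_{\pm} = 0 .
\]
In particular, when R_- ≡ 0, the speed v^1 + c becomes a function f(R_+), which yields the scalar transport equation for R_+ used in the blowup sketch of Subsect. 4.1.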
For technical convenience, instead of considering data on R 3 , we will consider initial data on the spatial manifold where the factor of T 2 corresponds to perturbations away from plane symmetry. This allows us to circumvent some technical difficulties, such as the fact that non-trivial plane wave solutions have infinite energy when viewed as solutions in three spatial dimensions. Although the method of Riemann invariants allows for an easy proof of shock formation for simple isentropic plane waves, the method is not available in more than one spatial dimension. Another key feature of the study of shock formation in more than one spatial dimension is that all known proofs rely on sharp estimates that provide much more information than the proof of blowup for simple plane waves from Subsect. 4.1. Therefore, in our forthcoming proof of shock formation for perturbations of simple isentropic plane waves, we will use the geometric formulation of the equations provided by Theorem 3.1. We will show that these equations have the right structure such that they can be incorporated into an extended version of the paradigm for proving shock formation initiated by Alinhac [1][2][3][4] and significantly advanced by Christodoulou [5]. The most fundamental ingredient in the approaches of Alinhac and Christodoulou is a system of geometric coordinates (t, u, ϑ 1 , ϑ 2 ) (4.2.1) that are dynamically adapted to the solution. We denote the corresponding partial derivative vectorfields as follows: Here, t is the standard Cartesian time function and u is an eikonal function adapted to the acoustical metric. That is, u solves the following hyperbolic PDE, known as the eikonal equation: Above and throughout the rest of the article, g is the acoustical metric from Def. 2.1. We construct the geometric torus coordinates ϑ A by solving the transport equations where x 2 and x 3 are standard (locally defined) Cartesian coordinates on T 2 ; see Footnote 2 regarding the Cartesian coordinates in the present context. For various reasons, when differentiating the equations to obtain estimates for the solution's derivatives, one needs to use geometric vectorfields, described below, rather than the partial derivative vectorfields in (4.2.2). For this reason, the coordinates (ϑ 1 , ϑ 2 ) play only a minor role in the analysis and we will downplay them for most of the remaining discussion. Note that the Cartesian components g αβ depend on the fluid variables ρ and v i (see (2.1.1a)) and therefore the eikonal equation (4.2.3a) is coupled to the Euler equations. The initial conditions (4.2.3b) are adapted to the approximate plane symmetry of the solutions under study. The level sets of u are known as the "characteristics" or the "acoustic characteristics," and we denote them by P u . The P u are null hypersurfaces relative to the acoustical metric g. As we further explain below, the intersection of the level sets of the function u (viewed as an R-valued function of the Cartesian coordinates) corresponds to the formation of a shock singularity and the blowup of the first Cartesian coordinate partial derivatives of the density and velocity. u is a sharp coordinate that can be used to reveal special structures in the equations and to construct geometric objects adapted to the characteristics. The price that one pays for the precision is that the top-order regularity theory for u is very complicated and tensorial in nature. 
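For reference, the eikonal equation (4.2.3a) satisfied by u is, schematically,
\[
(g^{-1})^{\alpha\beta}(\rho, v)\, \partial_\alpha u\, \partial_\beta u = 0,
\]
supplemented by a time-orientation normalization (for example ∂_t u > 0) and by the initial condition (4.2.3b) adapted to the nearly plane symmetric data; in particular, the level sets P_u of u are null hypersurfaces for the acoustical metric g, as stated above.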
As we later explain, the regularity theory is especially difficult near the shock and leads to degenerate high-order energy estimates. The first use of an eikonal function in proving a global result for a nonlinear hyperbolic system occurred in the celebrated proof [6] of the stability of the Minkowski spacetime as a solution to the Einstein-vacuum equations. 27 Eikonal functions have also played a central role in proofs of low-regularity well-posedness for quasilinear hyperbolic equations, most notably the recent Klainerman-Rodnianski-Szeftel proof of the bounded L 2 curvature conjecture [18]. The paradigm for proving shock formation originating in the works [1][2][3][4][5] can be summarized as follows: To the extent possible, prove "long-time-existence-type" estimates for the solution relative to the geometric coordinates and then recover the formation of the shock singularity as a degeneration between the geometric coordinates and the Cartesian ones. In particular, prove that the solution remains many times differentiable relative to the geometric coordinates, even though the first Cartesian coordinate partial derivatives of the density and velocity blow up. The most important quantity in connection with the above paradigm for proving shock formation is the inverse foliation density. Definition 4.1 (Inverse foliation density). We define the inverse foliation density µ > 0 as follows: µ is a measure of the density of the characteristics P u relative to the constant-time hypersurfaces Σ t . When µ vanishes, the density becomes infinite, the level sets of u intersect, and, as it turns out, the first Cartesian coordinate partial derivatives of the density and velocity blow up in finite time. See Figure 1 on pg. 23 for a depiction of a solution for which the characteristics have almost intersected. Note that by (2.1.1b) and (4.2.3b), we have 28 µ| t=0 ≈ 1. Christodoulou was the first to introduce µ in the context of proving shock formation in more than one spatial dimension [5]. However, before Christodoulou's work, quantities in the spirit of µ had been used in one spatial dimension, for example, by John in his proof [15] of blowup for solutions to a large class of quasilinear hyperbolic systems. In short, to prove a shock formation result under Christodoulou's approach, one must control the solution all the way up until the time of first vanishing of µ. 4.3. Summary of the proof of shock formation. Having introduced the geometric coordinates and the inverse foliation density, we are now ready to summarize the main ideas in the proof of shock formation for perturbations of simple isentropic plane wave solutions to the compressible Euler equations in three spatial dimensions with spatial topology Σ = R × T 2 . For convenience, we will study solutions with very small initial data given along a portion of the characteristic P 0 and "interesting" data (whose derivatives can be large in directions transversal to the characteristics) along a portion of Σ 0 R × T 2 ; see Figure 1 below for a schematic depiction of the setup. Given the structures revealed by Theorems 3.1 and 3.2, most of the proof is based on a framework developed in prior works, as we now quickly summarize. The bulk of the framework originated in Christodoulou's groundbreaking work [5] in the irrotational case, with some key contributions (especially the idea to rely on an eikonal function) coming from Alinhac's earlier work [1][2][3][4] on scalar wave equations. 
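A common convention for the inverse foliation density of Def. 4.1, used in several of the cited works and consistent with the properties described above (the paper's exact display may differ by normalization), is
\[
\frac{1}{\mu} := -(g^{-1})^{\alpha\beta}\, \partial_\alpha t\, \partial_\beta u,
\]
which is positive for the solutions under study; µ → 0 corresponds to infinite density of the level sets of u relative to Σ_t, and, by the form of g^{-1} and the initial condition (4.2.3b), one has µ|_{t=0} ≈ 1, as noted above.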
The relevance of the strong null condition in the context of proving shock formation was first recognized in [12,26]. The crucial new ideas needed to handle the transport equations and the elliptic operators/estimates originated in [19,20]. A key contribution of the present work is realizing i) that one can gain a derivative for the entropy s and ii) that in the context of shock formation, one needs to rely on transport-div-curl estimates for the entropy gradient S in order to avoid uncontrollable error terms; see Remark 1.1 and Step (2) below for further discussion on this point. We now summarize the main ideas behind our forthcoming proof of shock formation. Most of the discussion will be at a rough, schematic level. that are adapted to the characteristics P u ; see Figure 1. Readers may consult [12,19,20] for details on how to use u to construct Z . Here, we only note some basic properties of these vectorfields. The subset spans the tangent space of P u while the vectorfieldX is transversal to P u . L is a g-null (that is, g(L, L) = 0) generator of the P u normalized by Lt = 1, whilȇ Error is a small vectorfield tangent to the co-dimension-two "interesting" data v e r y s m a l l d a t a Figure 1. The vectorfield frame Z at two distinct points in P u and the integral curves of B, with one spatial dimension suppressed. The elements of Z are designed to have good commutation properties with each other and also, as we describe below µ g . In particular, one can show that we have the following schematic relations: 29 In the rest of the discussion, Z denotes a generic element of Z and P denotes a generic element of P or, more generally, a P u -tangent differential operator. It is straightforward to derive the following relationships, which are key to understanding the shock formation, where ∂ schematically denotes 30 linear combinations of the Cartesian coordinate partial derivative vectorfields: where Error denotes small vectorfields. Hence, deriving estimates for the Z derivatives of the solution is equivalent to deriving estimates for the derivatives of the solution relative to the geometric coordinates. The elements of (4.3.1) are replacements for the geometric coordinate partial derivative vectorfields (4.2.2) that, as it turns out, enjoy better regularity properties. Specifically, an important point, which is not at all obvious, is that the elements of ∂ ∂u , ∂ ∂ϑ 1 , ∂ ∂ϑ 2 , when commuted through the covariant wave operator g from LHSs (3.1.1a)-(3.1.1b), generate error terms that lose a derivative and thus are uncontrollable at the top-order. In contrast, the elements Z ∈ Z are adapted to the acoustical metric g in such a way that the commutator operator [µ g , Z] generates controllable error terms. We note that one includes the factor of µ in the previous commutator because it leads to essential cancellations. However, achieving control of the commutator error terms at the toporder derivative level is difficult and in fact constitutes the main step in the proof. The difficulty is that the Cartesian components of Z ∈ Z depend on the Cartesian coordinate partial derivatives of u, which we can schematically depict as follows: Z α ∼ ∂u. Therefore, the regularity of the vectorfields Z themselves depends on the regularity of the fluid solution through the dependence of the eikonal equation on the fluid variables. In fact, some of the commutator terms generated by [µ g , Z] appear to suffer from the loss of a derivative. 
The derivative loss can be overcome using ideas originating in [6,17] and, in the context of shock formation, in [5]. However, as we explain in Step (7), one pays a steep price in overcoming the loss of a derivative: the only known procedure leads to degenerate estimates in which the high-order energies are allowed to blow up as µ → 0. On the other hand, to close the proof and show 30 Throughout, we use the notation A ∼ B to imprecisely indicate that A is well approximated by B. that the shock forms, one must prove that the low-order energies remain bounded all the way up to the singularity. Establishing this hierarchy of energy estimates is the main technical step in the proof. (2) (Multiple speeds and commuting geometric vectorfields through first-order operators). The compressible Euler equations with entropy feature two kinds of characteristics: the acoustic characteristics P u and the integral curves of the material derivative vectorfield B; see Figure 1. That is, the system features multiple characteristic speeds, which creates new difficulties compared to the case of the scalar wave equations treated in the works [1][2][3][4][5]8,19,20,23,27]. Another new difficulty compared to the scalar wave equation case is the presence of the operators div and curl in the equations of Theorem 3.1. The first proof of shock formation for a quasilinear hyperbolic system in more than one spatial dimension featuring multiple speeds and the operators div and curl was our prior work [19,20] on the compressible Euler equations in the barotropic case. We now review the main difficulties corresponding to the presence multiple speeds and the operators div and curl. We will then explain how to overcome them; it turns out that essentially the same strategy can be used to handle all of these first-order operators. Since the formation of a shock is tied to the intersection of the wave characteristic P u (as we clarify in Step (5)), our construction of the geometric vectorfields Z ∈ Z from Step (1) was, by necessity, adapted to g; indeed, this seems to be the only way to ensure that the commutator terms [µ g , Z] are controllable up to the shock. This begs the question of what kind of commutation error terms are generated upon commuting them through first-order operators such as B, div, and curl. The resolution was provided by the following key insight from [19,20]: the elements of Z have just enough structure such that their commutator with an appropriately weighted, but otherwise arbitrary, first-order differential operator 31 produces controllable error terms, consistent with "hiding the singularity" relative to the geometric coordinates at the lower derivative levels. Specifically, one can show that we have the schematic commutation relation [µ∂ α , Z ] ∼X + P, The above discussion suggests the following strategy for treating the first-order equations of Theorem 3.1: weight them with a factor of µ so that the principal part is of the schematic form µ∂. Then by (4.3.6), upon commuting the weighted equation with elements of Z , we generate only commutator terms that do not feature the 1 µ . We stress that the property (4.3.6) does not generalize to second-order operators. That is, we have the schematic relation [µ∂ α ∂ β , Z ] ∼ 1 µ Z Z + · · · , which features uncontrollable factors of 1 µ . This is the reason that in deriving elliptic estimates for 31 Here, by first-order differential operator, we mean one equal to a regular function times a Cartesian coordinate partial derivative. 
the entropy s, we work the divergence and curl of the entropy gradient vectorfield S i = ∂ i s instead of ∆s (see also Remark 1.1); the div-curl formulation allows us to avoid commuting the elements of Z through the (second-order) flat Laplacian ∆ and thus avoid the uncontrollable error terms. (3) (L ∞ bootstrap assumptions). Formulate appropriate uniform L ∞ bootstrap assumptions for the Z -derivatives of the solution, up to order approximately 10, on a region on which the solution exists classically. In particular, these Z derivatives of the solution will not blow up, even as the shock forms. Just below equation (4.3.11f), we explain why the proof requires so many derivatives. The bootstrap assumptions are tensorial in nature and involve several parameters measuring the size of various directional derivatives of the solution. We will not discuss the bootstrap assumptions in detail here. Instead, we simply note that they reflect our expectation that the solution remains a small perturbation of a simple isentropic plane wave at appropriate Z -derivative levels; readers may consult [19,20] for more details on the bootstrap assumptions in the barotropic case and note that in our forthcoming work, we will make similar bootstrap assumptions, the new feature being smallness assumptions on the departure of s from a constant and the derivatives of s. (4) (The role of Theorem 3.2). We now clarify the importance of the good null structures revealed by Theorem 3.2, thereby fleshing out the discussion from Subsect. 1.5. Let V denote the solution array (2.2.4). As we alluded to above, before commuting the equations of Theorem 3.1 with elements of Z , we first multiply the equations by a factor of µ. The main point is that by Theorem 3.2, all derivative-quadratic inhomogeneous terms (that is, the type iii) terms in theorem) in the µ-weighted equations can be decomposed in the following schematic form: where P is as in Step (1) We have therefore explained the good structure of type iii) terms from Theorem 3.2. The only other kind of inhomogeneous terms that one encounters in the µ-weighted equations of Theorem 3.1 are at most linear in ∂ V , that is, µ-weighted versions of the type i) and ii) terms from Theorem 3.2. The linear terms µ∂ V can be decomposed (schematically) as Then, using the L ∞ bootstrap assumptions from Step (2) . For brevity, we do not provide these transport equations in detail here (we schematically displayed the one for µ in (4.3.9)). Instead, we only 32 It turns out that the coefficient of the term ∂ ∂u v 1 on RHS (4.3.9) vanishes precisely in the case of the Chaplygin gas equation of state, which is p = p( ) = C 0 − C 1 , where C 0 ∈ R and C 1 > 0. Since the term ∂ ∂u v 1 is precisely the one that drives the vanishing of µ, our proof of shock formation does not apply for the Chaplygin gas. This is connected to the following well-known fact: in one spatial dimension under the Chaplygin gas equation of state, the compressible Euler equations form a totally linearly degenerate PDE system, which is not expected to exhibit shock formation; see [21] for additional discussion on totally linearly degenerate PDEs. 33 Deriving estimates for µ and the L i is essentially equivalent to deriving estimates for the first derivatives of the eikonal function, that is, for the first derivatives of solutions to the eikonal equation (4.2.3a). For µ, this is apparent from equation (4.2.5). 
note that the inhomogeneous terms in the transport equations exhibit good null structures similar to the ones enjoyed by the simple type i) and type ii) terms from Theorem 3.2. The reason that we must estimate the derivatives of µ and L i is that they arise as source terms when we commute the equations of Theorem 3.1 with the elements Z of (4.3.1). After commuting the equations, one uses the L ∞ bootstrap assumptions from Step (3) to derive suitable pointwise estimates for all of the error terms and inhomogeneous terms in the equations up to top-order. A key point is that all good null structures, such as the structure displayed in (4.3.7), are preserved under differentiations of the equations. Moreover, since the elements Z ∈ Z are adapted to µ g , the commutator terms corresponding to the operator [µ g , Z] also exhibit a similar good null structure. Another key step in the proof is to derive very sharp pointwise estimates for µ capturing exactly how it vanishes. More precisely, through a detailed study of equation (4.3.9), one can show that for the solutions under study, ∂ ∂t µ is quantitatively negative in regions where µ is near 0, which implies that µ vanishes linearly. It turns out that these facts are crucial for closing the energy estimates. (7) (Energy estimates). Using the pointwise estimates and the sharp estimates for µ from Step (6), derive energy estimates up to top-order. This is the main technical step in the proof. Null structures such as (4.3.7) are again critically important for the energy estimates, since our energies (described below) are designed to control error integrals that are generated by terms of the form RHS (4.3.7) and their higher-order analogs. To control some of the terms in the energy estimates, we also need elliptic estimates along Σ t , which we describe in Step (8). As a preliminary step, we now briefly describe, from the point of view of regularity, why our proof fundamentally relies on the equations (3.1.2a)-(3.1.3b) and elliptic estimates. In reality, we need elliptic estimates only to control the solution's top-order derivatives, that is, after commuting the equations many times with the elements of Z . However, for convenience, here we ignore the need to commute the equations and instead focus our discussion on how to derive a consistent amount of Sobolev regularity for solutions to the non-commuted equations. In proving shock formation, we are primarily interested in deriving estimates for solutions to the wave equations (3.1.1a)-(3.1.1b); given suitable estimates for their solutions, the rest of the proof of the formation of the shock is relatively easy. To proceed, we first note that the inhomogeneous terms C and D on the right-hand sides of the wave equations (3.1.1a)-(3.1.1b) are (see Def. 1.3), from the point of view of regularity, at the level of ∂Ω and ∂S plus easier terms that can be treated using energy estimates for wave equations (and that we will therefore ignore in the present discussion). On the other hand, the transport equations (3.1.1c) and (3.1.1e) for Ω and S have source terms that depend on ∂v and ∂ρ. 
This falsely suggests that Ω and S have the same Sobolev regularity as ∂v and ∂ρ which, from the point of view of regularity, would be inconsistent with the inhomogeneous terms ∂Ω and ∂S on the right-hand side of the wave equations; the inconsistency would come from the fact that energy estimates for the wave equations yield control only over ∂v and ∂ρ and thus ∂v and ∂ρ cannot have more L 2 regularity than the wave equation source terms ∂Ω and ∂S. To circumvent this difficulty, one needs to rely on the div-curl-transport-type equations (3.1.2a)-(3.1.3b) and elliptic estimates to control ∂Ω and ∂S in L 2 (Σ t ), using only that ∂v i and ∂ρ are in L 2 (Σ t ). We further explain this in Step (8). A key reason behind the viability of this approach is that even though equations (3.1.2a)-(3.1.3b) are obtained by differentiating the transport equations (3.1.1c) and (3.1.1e) (which feature inhomogeneous terms of the schematic form ∂v), the inhomogeneous terms on RHSs (3.1.2a)-(3.1.3b) do not feature the terms ∂ 2 v or ∂ 2 ρ; this is a surprising structural feature of the equations that should not be taken for granted. The main difficulty that one encounters in the proof of shock formation is that the best energy estimates that we know how to derive allow for the possibility that the high-order energies might blow up as the shock forms. This makes it difficult to justify the uniform (non-degenerate) L ∞ bootstrap assumptions from Step (3), which play a crucial role in showing that the shock forms and in deriving the pointwise estimates from Step (6). It turns out that the maximum possible energy blowup rates can be expressed in terms of negative powers of the quantity defined in (4.3.10). Note that the formation of the shock corresponds to µ → 0. Just below, we will roughly describe the hierarchy of energy estimates. The energy estimates involve energies E_{(Wave);Top} for the "wave variables" {ρ, v^1, v^2, v^3} as well as energies E_{(Transport)} for the "transport variables" {s, Ω, S^1, S^2, S^3, C^1, C^2, C^3, D}. We use the notation E_{(Wave);Top} to denote a wave energy that controls the top-order Z derivatives 34 of the wave variables (here we are not specific about how many derivatives correspond to top-order), E_{(Wave);Top−1} to denote a just-below-top-order wave energy, E_{(Wave);Mid} to denote a mid-order wave energy (we also are not specific about how many derivatives correspond to mid-order), and E_{(Wave);1} to denote the energy after a single commutation, 35 and similarly for the transport equation energies. The hierarchy of energy estimates that one can derive roughly has the structure displayed in (4.3.11a)-(4.3.11f), where K ≈ 20 is a constant and ε̊ is a small parameter representing the size of a seminorm that, roughly speaking, measures how far the initial data are from the data of a simple isentropic plane wave. The difficult parts of the proof are controlling the maximum possible top-order blowup rate µ^{−K}(t) as well as establishing the descent scheme showing that the below-top-order energies become successively less degenerate until one reaches the level (4.3.11e), below which the energies do not blow up. Descent schemes of this type originated in the works [1][2][3][4][5] of Alinhac and Christodoulou and have played a key role in all prior works on shock formation in more than one spatial dimension.
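Schematically, and consistent with the verbal description just given (the precise displays (4.3.11a)-(4.3.11f) may differ in detail), the hierarchy has the shape
\[
\mathbb{E}_{(Wave);Top}(t) \lesssim \mathring{\epsilon}^2\, \mu^{-K}(t),
\qquad
\mathbb{E}_{(Wave);Top-1}(t) \lesssim \mathring{\epsilon}^2\, \mu^{-K+2}(t),
\qquad \cdots, \qquad
\mathbb{E}_{(Wave);Mid}(t) \lesssim \mathring{\epsilon}^2,
\]
where µ(t) schematically denotes the minimal value of µ on the relevant region at time t, the exponent improves by two powers of µ at each step of the descent, and the non-degenerate levels at the bottom of the hierarchy are the ones used to recover the L ∞ bootstrap assumptions.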
From the non-degenerate energy estimates (4.3.11e)-(4.3.11f), Sobolev embedding, and a smallness assumption on the data-size parameter ε̊, one can justify (that is, improve) the non-degenerate L^∞ bootstrap assumptions from Step (3). To close the proof, we need the energies to remain uniformly bounded up to the singularity starting at a level representing, roughly, slightly more than half of the top-order number of derivatives. Consequently, the proof requires a lot of regularity, and top-order corresponds to commuting the equations roughly 20 times with the elements of Z. The precise numerology behind the hierarchy (4.3.11a)-(4.3.11f) is complicated, but the following two features seem fundamental: i) The top-order blowup rate µ_⋆^{-K}(t), since, as we explain below, the blowup-exponent K is tied to universal structural constants in the equations that are independent of the number of times that we commute them. ii) An improvement of precisely µ_⋆^2(t) at each step in the descent, which is tied to the fact that µ_⋆(t) vanishes linearly (as we mentioned at the end of Step (6)). To construct energies that result in controllable error terms, we must weight various terms in them with factors of µ, a difficulty that lies at the heart of the analysis. For example, the energies E_{(Wave)} for the wave variables Ψ ∈ {ρ, v^1, v^2, v^3} are constructed so that, at the level of the undifferentiated equations, we have, relative to the geometric coordinates (4.2.1) and the vectorfields in (4.3.1), the schematic relation (4.3.12). (Footnote 36: The energies are constructed with the help of the vectorfield multiplier method, based on the energy-momentum tensor for wave equations and the multiplier vectorfield T := (1 + 2µ)L + 2X; see [19,20,27].) The energy E_{(Wave);Top}(t) on LHS (4.3.11a) schematically represents one of the quantities E_{(Wave)}[Z^{N_top} Ψ](t), where N_top ≈ 20 is the maximum number of times that we need to commute the equations in order to close the estimates. The factor of µ in (4.3.12) is chosen so that only controllable error terms are generated in the energy identities (it is true, though not obvious, that RHS (4.3.12) has the right strength). Note that some components of the energies become very weak near the shock (that is, in regions where µ is small), namely the products on RHS (4.3.12) that are µ-weighted. This makes it difficult to control the non-µ-weighted error terms that one encounters in the energy identities. To control such "strong" error terms, one uses, in addition to the energies (4.3.12), energies along P_u (known as null fluxes) as well as a coercive friction-type spacetime integral, which is available because (∂/∂t)µ is quantitatively negative in the difficult region where µ is small (as we described in Step (6)). These aspects of the proof, though of fundamental importance, have been well-understood since Christodoulou's work [5] and are described in more detail in [19,20,27]; for this reason, we will not further discuss these issues here. We must also derive energy estimates for the transport equations in Theorem 3.1. Specifically, to control the transport variables Ψ ∈ {s, Ω, S^1, S^2, S^3, C^1, C^2, C^3, D}, we rely on energies with the strength indicated in (4.3.13). As in the case of the wave variable energies, the factor of µ in (4.3.13) is chosen so that only controllable error terms are generated in the energy identities. We now sketch the main ideas behind why the top-order energy estimate (4.3.11a) is so degenerate.
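Before doing so, we record a schematic of the coercivity that the µ-weighted energies provide. The display below is only an assumed rough shape for (4.3.12)-(4.3.13), written in our notation and not the paper's precise statements: L denotes a vectorfield tangent to the characteristics P_u, X̆ a (µ-weighted) vectorfield transversal to them, and the slashed gradient differentiation tangent to the tori P_u ∩ Σ_t.

% Schematic coercivity of the mu-weighted energies (illustrative only).
\begin{align*}
  \mathbb{E}_{(Wave)}[\Psi](t)
    &\gtrsim \int_{\Sigma_t} \Big( \mu\,|L\Psi|^2 + |\breve{X}\Psi|^2
       + \mu\,|{\nabla \mkern-11mu /}\,\Psi|^2 \Big)\, du\, d\vartheta^1 d\vartheta^2, \\
  \mathbb{E}_{(Transport)}[\Psi](t)
    &\gtrsim \int_{\Sigma_t} \mu\,|\Psi|^2 \, du\, d\vartheta^1 d\vartheta^2 .
\end{align*}

The µ-weighted terms on the right-hand sides degenerate as µ → 0, which is the weakness near the shock referred to above.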
We will focus only on the wave equation energy estimates since the transport equation energy estimates are much easier to derive. (Footnote 37: It turns out, however, that the g-timelike nature of the transport operator B (as shown by Lemma 2.1) is important for the transport equation energy estimates; see [20] for further discussion on this point.) The basic difficulty is that, on the basis of energy identities, the integral inequality (4.3.14) is the best that we are able to obtain, where A is a universal positive constant that is independent of the equation of state and of the number of times that the equations are commuted, and ··· denotes similar or less degenerate error terms. Below we explain the origin of the degenerate factor [(∂/∂s)µ_⋆]/µ_⋆ on RHS (4.3.14), whose presence is tied to an issue that we highlighted earlier: the needed top-order regularity properties of the eikonal function are difficult to derive. To apply Gronwall's inequality to the inequality (4.3.14), we need the estimate (4.3.15), which bounds the time integral of this degenerate factor. The proof of (4.3.15) can be derived with the help of the estimates (4.3.16)-(4.3.17), where δ̊_* > 0 is a data-dependent parameter that, roughly speaking, measures the L^∞ size of the term (∂/∂u)v^1 on RHS (4.3.9). We note that to close the proof, one needs to consider initial data such that ε̊ is small relative to δ̊_* (though δ̊_* may be small in an absolute sense). We also note that the estimates (4.3.16)-(4.3.17) fall under the scope of the sharp estimates for µ from Step (6). Moreover, we note that the fact that µ vanishes linearly is important for deriving (4.3.15). Finally, we note that (4.3.15) is just a quasilinear version of the caricature estimate ∫_{s=t}^{1} s^{-1} ds = ln(1/t), in which s = 0 represents the time of first vanishing of µ and s = 1 represents the "initial" data time. After we have derived (4.3.14) and (4.3.15), we can apply Gronwall's inequality (ignoring the terms ··· on RHS (4.3.14)) to obtain the bound (4.3.18). We now briefly explain the origin of the difficult error integral on RHS (4.3.14). Let Ψ schematically denote any of the wave variables {ρ, v^1, v^2, v^3}. The difficulty arises from the worst commutator error terms that are generated when one commutes the elements of Z (see (4.3.1)) through the wave operator µ□_g. To explain the main ideas, we consider only the wave equation verified by Y^N Ψ, where Y^N schematically denotes an order-N differential operator corresponding to repeated differentiation with respect to elements of the set {Y_1, Y_2}; similar difficulties arise upon commuting µ□_g with other strings of vectorfields from Z. Specifically, one can show that upon commuting any of the µ-weighted wave equations (3.1.1a)-(3.1.1b) with Y^N, we obtain an inhomogeneous wave equation of the schematic form (4.3.19), whose dangerous inhomogeneous terms involve the factors XΨ and Y^N tr_{g̸}χ. The term χ on RHS (4.3.19) is the null second fundamental form of the codimension-two tori P_u ∩ Σ_t, that is, the symmetric type (0,2) tensorfield with components χ_{AB} = g(D_{Θ_A} L, Θ_B), where D is the Levi-Civita connection of g. Moreover, tr_{g̸}χ is the trace of χ with respect to the Riemannian metric g̸ induced on P_u ∩ Σ_t by g. Geometrically, tr_{g̸}χ is the null mean curvature of the g-null hypersurfaces P_u. Analytically, Y^N tr_{g̸}χ is a difficult commutator term in which the maximum possible number of derivatives falls on the eikonal function (recall that L ∼ ∂u and thus χ ∼ ∂^2 u). As we mentioned earlier, the main difficulty is that a naive treatment of terms involving the maximum number of derivatives of the eikonal function leads to the loss of a derivative.
This difficulty is visible directly from the evolution equation satisfied by Y^N tr_{g̸}χ, which can be derived from geometric considerations 38 and which takes the schematic form (4.3.20) (recall that P schematically denotes elements of the set (4.3.2)); the two terms displayed explicitly on RHS (4.3.20), one of which is ∆̸ Y^N Ψ, involve one more derivative of Ψ than the energy estimates can accommodate at this order. To handle the term ∆̸ Y^N Ψ, we can use a similar but more complicated strategy first employed in [17] in the context of low regularity well-posedness and later by Christodoulou [5] in the context of shock formation: by decomposing the principal parts of the Y^N-commuted wave equations, one can derive a transport equation (4.3.21) for a "modified" version of µY^N tr_{g̸}χ. The key point is that all inhomogeneous terms on RHS (4.3.21) now feature an allowable amount of regularity, which implies that we can gain back the derivative by working with the "modified" quantity stated in (4.3.22). (Footnote 39: In reality, in three or more spatial dimensions, there remain some additional terms on RHS (4.3.21) that depend on the top-order derivatives of the eikonal function. These terms are schematically of the form of the top-order derivatives of the trace-free part of χ, traditionally denoted by χ̂ (note that χ̂ ≡ 0 in two spatial dimensions). From the prior discussion, one might think that these terms result in the loss of a derivative and obstruct the closure of the energy estimates. However, it turns out that one can avoid the derivative loss for χ̂ by exploiting geometric Codazzi-type identities and elliptic estimates on the codimension-two tori P_u ∩ Σ_t. Such elliptic estimates for χ̂ have been well-understood since [6] and, in the context of shock formation, since [5]. For this reason, we do not further discuss this technical issue here.) We have therefore explained how to avoid the derivative loss that was threatened by the term Y^N tr_{g̸}χ on RHS (4.3.19). However, our approach comes with a large price: the inhomogeneous term on RHS (4.3.19) involves the factor Y^N tr_{g̸}χ, while (4.3.21) yields an evolution equation only for the modified version of µY^N tr_{g̸}χ stated in (4.3.22); this discrepancy factor of µ is what leads to the dangerous factor of 1/µ_⋆ on RHS (4.3.14). Moreover, from a careful analysis that takes into account the evolution equation for µ as well as the precise structure of the factor XΨ on RHS (4.3.19) and the terms on LHS (4.3.21), one can deduce the presence of the factor (∂/∂s)µ_⋆ on RHS (4.3.14), whose precise form is important in the proof of the estimate (4.3.15). We have therefore explained the main ideas behind the origin of the main error integral displayed on RHS (4.3.14). Having provided an overview of the derivation of the top-order energy estimate (4.3.11a), we now describe why the below-top-order energies become successively less singular as one descends below top-order, that is, how to implement the energy estimate descent scheme resulting in the estimates (4.3.11b)-(4.3.11f); recall that the non-degenerate energy estimates (4.3.11e)-(4.3.11f) are needed to improve, by Sobolev embedding and a small-data assumption, the L^∞ bootstrap assumptions from Step (2), which are central to the whole process. A key ingredient in the energy estimate descent scheme is the estimate (4.3.23), valid for constants b > 0, which shows that integrating the singularity in time reduces its strength. The estimate (4.3.23) is easy to obtain thanks to the sharp information that we have about the linear vanishing rate of µ (see (4.3.16)). We note that (4.3.23) is just a quasilinear version of the estimate ∫_{s=t}^{1} s^{-b} ds ≲ t^{1-b} for 0 < t < 1, where s = 0 represents the vanishing of µ.
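The caricature estimates mentioned above can be combined into a toy picture of the top-order blowup and of the gain produced by time integration. The display below is only illustrative and is written in our notation; the constants and the precise forms of (4.3.14), (4.3.15), (4.3.18), and (4.3.23) are schematic assumptions.

% Toy model: Gronwall at top order, then the gain from integrating in time.
\begin{align*}
  \mathbb{E}(t) &\lesssim \mathring{\epsilon}^2
     + A \int_0^t \frac{|\partial_s \mu_\star(s)|}{\mu_\star(s)}\, \mathbb{E}(s)\, ds,
  \qquad
  \int_0^t \frac{|\partial_s \mu_\star(s)|}{\mu_\star(s)}\, ds \lesssim 1 + \ln\frac{1}{\mu_\star(t)}, \\
  \mathbb{E}(t) &\lesssim \mathring{\epsilon}^2
     \exp\!\Big(A\Big(1 + \ln\frac{1}{\mu_\star(t)}\Big)\Big)
     \lesssim \mathring{\epsilon}^2\,\mu_\star^{-A}(t), \\
  \int_{s=t}^{1} s^{-b}\, ds &\lesssim t^{1-b} \quad (b>1),
  \qquad\text{a model for an estimate of the schematic form }
  \int_0^t \mu_\star^{-b}(s)\, ds \lesssim \mu_\star^{1-b}(t).
\end{align*}

The first two lines caricature how (4.3.14) and (4.3.15) combine via Gronwall's inequality to produce a power-law blowup rate as in (4.3.18); the last line caricatures (4.3.23), the mechanism by which each time integration lowers the strength of the singularity during the descent.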
A second key ingredient in implementing the descent scheme is to exploit that below top-order, we can estimate the difficult term Y^N tr_{g̸}χ on RHS (4.3.19) in a different way; recall that this term was the main driving force behind the degenerate top-order energy estimates. Specifically, for N below top-order, we can directly estimate Y^N tr_{g̸}χ by integrating the transport equation (4.3.20) in time, without going through the procedure that led to equation (4.3.21) in the top-order case. This approach results in a loss of one derivative caused by the two explicitly displayed terms on RHS (4.3.20) and therefore couples the below-top-order energy estimates to the top-order ones. However, the integration in time allows one to employ the estimate (4.3.23), which implies that below top-order, Y^N tr_{g̸}χ is less singular than RHS (4.3.20); this is the crux of the descent scheme. We also note that this procedure allows one to avoid the difficult factor of µ, which in the top-order case appeared on LHS (4.3.21) and which drove the blowup-rate of the top-order energies. We have thus explained one step in the descent. One can continue the descent, noting that at each stage, we can directly estimate the difficult term Y^N tr_{g̸}χ by integrating the transport equation (4.3.20) in time and allowing the loss of one derivative coming from the terms on RHS (4.3.20). This procedure couples the energy estimates at a given derivative level to the estimates for the (already controlled) next-highest energy, but it nonetheless allows one to derive the desired improvement in the energy blowup-rate by downward induction, thanks to the integration in time and the estimate (4.3.23). (8) (Elliptic estimates along Σ_t). We now confront an important issue that we ignored in Step (7): to close the energy estimates, we are forced to control some of the inhomogeneous terms in the equations using elliptic estimates along Σ_t. This major difficulty is not present in works on shock formation for wave equations; it was encountered for the first time in our earlier work on shock formation [20] for barotropic fluids with vorticity. A key aspect of the difficulty is that elliptic estimates along Σ_t necessarily involve controlling the derivatives of the solution in a direction transversal to the acoustic characteristics P_u, that is, in the singular direction. We need elliptic estimates to control the source terms on RHSs (3.1.2b) and (3.1.3a) that depend on ∂Ω and ∂S, where ∂ denotes the gradient with respect to the Cartesian spatial coordinates. More precisely, we need the elliptic estimates only at the top derivative level, but we will ignore that issue here and focus instead on the degeneracy of the elliptic estimates with respect to µ. The elliptic estimates can easily be derived relative to the Cartesian coordinates and the Euclidean volume form dx^1 dx^2 dx^3 on Σ_t. However, in order to compare the strength of the elliptic estimates to that of the wave energies (4.3.12) and the transport energies (4.3.13), we need to understand the relationship between the Euclidean volume form and the volume form du dϑ^1 dϑ^2 featured in the energies. Specifically, by studying the Jacobian of the change of variables map between the geometric and the Cartesian coordinates, one can show that there is an O(µ) discrepancy factor between the two forms; see (4.3.24). In the rest of this discussion, our notion of an L^2(Σ_t) norm is in terms of the volume form du dϑ^1 dϑ^2.
That is, we set ‖F‖^2_{L^2(Σ_t)} := ∫_{Σ_t} |F|^2 du dϑ^1 dϑ^2; see (4.3.25). We now explain some aspects of the elliptic estimates that yield control over ∂S, where as before, ∂ denotes the gradient with respect to the Cartesian spatial coordinates. One also needs similar elliptic estimates to obtain control over ∂Ω, but we omit those details; see [20] for an overview of how to control ∂Ω in the barotropic case. Our elliptic estimates are essentially standard div-curl estimates, through which ∂S is controlled in L^2 by divS and curlS plus lower-order terms. With the help of (4.3.24) and (4.3.25), we can re-express the above div-curl estimate as (4.3.26). We now explain the role that (4.3.26) plays in closing the energy estimates. Our main goal is to show how to derive the bound (4.3.27) for ‖√µ ∂S‖^2_{L^2(Σ_t)}, whose explicitly displayed right-hand side is of strength µ_⋆^{-m}(t), where m is a small positive constant and ··· denote error terms that can be controlled without elliptic estimates (for example, via the wave energies). We note that since (4.3.11a) implies that the top-order wave energies can be very degenerate, some of the terms in ··· on RHS (4.3.27) can in fact blow up at a much worse rate than the one µ_⋆^{-m}(t) that we have explicitly displayed. The point of writing the estimate for ‖√µ ∂S‖^2_{L^2(Σ_t)} in the form (4.3.27) is that this form emphasizes the following point: the self-interaction terms in the elliptic estimates are not the ones driving the blowup rate of the top-order derivatives of S; instead, the blowup-rate of ‖√µ ∂S‖^2_{L^2(Σ_t)} is driven by the blowup-rate of the top-order derivatives of the wave variables {ρ, v^1, v^2, v^3}, which are hidden in the ··· terms on RHS (4.3.27). It turns out that, as a consequence, the blowup rates for the top-order wave energies are exactly the same as they are in the isentropic irrotational case. That is, our approach to energy estimates yields the same blowup-exponent K in the energy hierarchy (4.3.11a)-(4.3.11f) as the exponent that our approach would yield in the isentropic irrotational case. To explain how to derive (4.3.27), we start by discussing energy estimates for the transport equation (3.1.3a) for D. We again remind the reader that the elliptic estimate approach to deriving (4.3.27) is needed mainly at the top-order, but for convenience, we discuss here only the non-differentiated equations. Specifically, by deriving standard transport equation energy estimates for the weighted equation µ × (3.1.3a) and by using the L^∞ bootstrap assumptions of Step (2) (which in particular can be used to derive the bound ‖µ ∂_a v^b‖_{L^∞(Σ_t)} ≲ 1), one can obtain the integral inequality (4.3.28), where ··· is as above. From the L^∞ bootstrap assumptions of Step (2), the remaining terms on RHS (4.3.28) can be bounded by ≲ ‖√µ D‖^2_{L^2(Σ_s)} + ···, where ··· denotes terms that can be controlled without elliptic estimates, that is, via energy estimates for the wave equations (3.1.1a)-(3.1.1b) and the transport equations (3.1.1c)-(3.1.1e). From these estimates and (4.3.29), we deduce the bound (4.3.31), in which k is a small positive constant. Again using (1.3.13b) and the bound exp(2ρ) ≲ 1, we deduce from (4.3.31) a corresponding bound for ‖√µ divS‖^2_{L^2(Σ_t)}; the contribution that we have relegated to the terms ··· on RHS (4.3.28) is much less degenerate than the one we have explicitly displayed on RHS (4.3.28) and in particular does not contribute to the blowup-rate of the top-order energies. A full discussion of this issue would involve a lengthy interlude in which we describe the need to rely, in addition to energies along Σ_t, on energies along the acoustic characteristics P_u. For this reason, we omit this aspect of the discussion.
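To make the regularity bookkeeping behind Step (8) concrete, the following is a schematic version of the weighted div-curl estimate; it is an illustration written in our notation, not the precise statement of (4.3.26).

% Schematic weighted div-curl estimate (illustrative only).
\begin{align*}
  \|\sqrt{\mu}\,\partial S\|_{L^2(\Sigma_t)}^2
  \;\lesssim\;
  \|\sqrt{\mu}\,\mathrm{div}\, S\|_{L^2(\Sigma_t)}^2
  + \|\sqrt{\mu}\,\mathrm{curl}\, S\|_{L^2(\Sigma_t)}^2
  + \text{(lower-order terms)}.
\end{align*}

Since (3.1.2a)-(3.1.3b) provide transport equations for (modified versions of) the divergence and curl whose inhomogeneous terms do not involve ∂^2 v or ∂^2 ρ, the right-hand side can be controlled at the same derivative level as the wave energies; this is precisely the regularity gain described in the preliminary discussion of Step (7) above.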
(Footnote 41: We again note that the smallness assumption guarantees, roughly, that the data are near that of a simple isentropic plane wave solution.) One can obtain similar but much simpler estimates for ‖√µ curlS‖^2_{L^2(Σ_t)} directly from equation (3.1.3b). 42 Then inserting these bounds into (4.3.26), we finally obtain the desired bound (4.3.27). We have therefore given the main ideas behind the elliptic estimates. This completes our overview of our forthcoming proof of shock formation.

Proof of Theorem 3.1

In this section, we prove Theorem 3.1. The theorem is a conglomeration of Lemmas 5.1, 5.2, 5.3, 5.4, 5.6, and 5.7, in which we separately derive the equations stated in the theorem. Actually, to obtain Theorem 3.1 from the lemmas, one must reorganize the terms in the equations; we omit these minor details. Using equation (5.1.4b) to substitute for the factor BS^a on the second line of RHS (5.1.9), we deduce

B divS = B(S^a ∂_a ρ) − 2(∂_a v^b) ∂_b S^a + 2(S^a ∂_a v^b) ∂_b ρ + exp(ρ) δ_{ab} (curlΩ)^a S^b.

Proof. Equation (5.1.12a) follows easily from applying the operator div to equation (5.0.1) and noting that since ω = curl v, we have div ω = 0. We now derive (5.1.12b). First, commuting the already established equation (5.1.1) with the operator curl and using the definitions (1.3.8) and (1.3.12) as well as equations (5.0.1) and (5.1.12a), we compute the identity (5.1.13). Next, using the identity ε_{iab} ε_{jkb} = δ_{ij} δ_{ak} − δ_{ik} δ_{aj} and the antisymmetry of ε_{···}, we rewrite the third and fourth terms on RHS (5.1.13) as in (5.1.14). Substituting RHS (5.1.14) for the third and fourth terms on RHS (5.1.13), using equation (5.1.12a) for substitution, and using the identities (curlΩ)^a ∂_a v^i = ε_{ajk} (∂_a v^i) ∂_j Ω^k and ε_{iab} ε_{bjk} = δ_{ij} δ_{ak} − δ_{ik} δ_{aj}, we compute the identity (5.1.17). We now multiply both sides of (5.1.17) by exp(−ρ) and bring the factor of exp(−ρ) under the operator B on the LHS. The commutator term (B exp(−ρ)) × ··· completely cancels

|det g| g^{−1} = −c^{−3} B ⊗ B + c^{−1}
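For reference, the Levi-Civita contraction identity invoked twice in the computations above can be checked directly; the display below is a standard verification and is not part of the quoted proof.

% Check of the contraction identity (summation over b, indices running over 1,2,3).
\begin{align*}
  \epsilon_{iab}\,\epsilon_{jkb}
    &= \delta_{ij}\,\delta_{ak} - \delta_{ik}\,\delta_{aj},
  \qquad\text{e.g. } (i,a,j,k) = (1,2,1,2):\;
  \epsilon_{12b}\,\epsilon_{12b} = \epsilon_{123}\,\epsilon_{123} = 1
  = \delta_{11}\delta_{22} - \delta_{12}\delta_{21}.
\end{align*}

The second identity used above, ε_{iab} ε_{bjk} = δ_{ij} δ_{ak} − δ_{ik} δ_{aj}, follows from the same formula since ε_{bjk} = ε_{jkb}.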
DDIT4 gene expression is switched on by a new HDAC4 function in ataxia telangiectasia

Ataxia telangiectasia (AT) is a rare, severe, and ineluctably progressive multisystemic neurodegenerative disease. Histone deacetylase 4 (HDAC4) nuclear accumulation has been related to neurodegeneration in AT. Since treatment with glucocorticoid analogues has been shown to improve the neurological symptoms that characterize this syndrome, the effects of dexamethasone on HDAC4 were investigated. In this paper, we describe a novel nonepigenetic function of HDAC4 induced by dexamethasone, through which it can directly modulate HIF-1a activity and promote the upregulation of DDIT4 gene and protein expression. This new HDAC4 transcription regulation mechanism leads to a positive effect on autophagic flux, an AT-compromised biological pathway. This signaling was specifically induced by dexamethasone only in AT cell lines and can contribute to explaining the positive effects of dexamethasone observed in AT-treated patients.

| INTRODUCTION Ataxia telangiectasia (AT) is a rare neurodegenerative disease caused by biallelic mutations in the ataxia telangiectasia-mutated (ATM) gene (Chr 11q22.3-23.1), which encodes the ATM protein, a member of the PI3 kinase-like kinase (PIKK) family. 1,2 AT patients show a complex phenotype characterized primarily by early-onset progressive cerebellar ataxia, loss of Purkinje cells, oculocutaneous telangiectasias, immunodeficiency, and proneness to the development of tumors (lymphoma and leukemia) and infections (respiratory infections). [3][4][5][6] Ataxia telangiectasia-mutated, initially characterized through its nuclear functions, since it is activated after DNA damage 7,8 and modulates cell cycle-checkpoint signaling, 9 also has pleiotropic effects in the cytoplasm. These effects are still under investigation. [10][11][12][13][14][15] Unfortunately, there is currently no cure available for AT patients, but only supportive therapies to ameliorate their pain. However, in the last few years, observational studies and clinical trials have shown that treatment with glucocorticoid analogues improves the neurological symptoms of AT patients, although their mechanisms of action have only been partially elucidated. [16][17][18][19][20] The limitations observed with the use of oral corticosteroids, which lead to undesirable side effects, have been overcome with the administration of a sustained-release delivery system via patients' red blood cells. 16,17 Several studies have been carried out in order to gain insight into the biological effects of glucocorticoids in AT patients and in cellular models, highlighting their role in redox balance, gene expression, protein regulation, and organelle dynamics. [21][22][23][24][25][26][27] Li et al reported the role of ATM in balancing HDAC4 function in AT neurons. 28 Among class II HDACs, HDAC4 is implicated in the control of gene expression; it is also important for several cellular functions and is the major player in synaptic plasticity. 29 HDAC4 is expressed particularly in the heart, skeletal muscle, and brain, where it seems to be predominantly localized in the cytoplasm. 30,31 Loss of HDAC4 cytoplasmic distribution induces neuronal cell death. HDAC4 is normally phosphorylated by calcium/calmodulin-dependent kinases (CaMKs), enabling its binding to the chaperone 14.3.3 protein family and leading to its nuclear export while preventing its nuclear import.
[32][33][34] Protein phosphatase 2A (PP2A) in turn, regulates the de-phosphorylation of HDAC4, promoting its nuclear shift. 35 A lack of ATM causes the deregulation of PP2A and subsequent HDAC4 nuclear accumulation, inhibiting the transcription factors myocyte enhancer factor 2A (MEF2A) and cAMP response element-binding protein (CREB), thus promoting the repression of neuronal survival genes and leading to neurodegeneration. 28 In light of the above-mentioned dynamics involved in AT pathology and based on our previous investigations regarding dexamethasone (dex) action in AT cells and patients, 36 we decided to investigate whether dex can alter HDAC4 cellular localization and function in AT fibroblast cell lines. Dexamethasone treatment was found to promote a new non-epigenetic role of HDAC4, consisting in HDAC4 mediated HIF-1a regulation which leads to an ATM-independent DDIT4 transcription involved in the autophagy process that was restored after dex administration. These data can contribute in understanding the beneficial effect of dexamethasone in the treatment of AT. | Cell cultures Fibroblasts WT AG09429 (ATM+/+) and AT GM00648 (ATM−/−) from Coriell Institute (Camden, NJ, USA) were used as a cellular model. The hTERT antigen cell immortalization Kit (Alstem Cell Advancements) was used to immortalize the cells. The selected AT GM00648 hTERT (AT 648 hT) and WT AG09429 hTERT (WT hT) were grown in MEM (Eagle formulation). The medium was supplemented with 2 mmoL/L L-glutamine, 100 U/mL penicillin, and 0.1 mg/mL streptomycin (Sigma Aldrich), 10% fetal bovine serum (Thermo Fisher Scientific), and 10 mM glucose. All cells were incubated at 37°C with 5% CO 2 and treated with 100 nM dex for 48 hours prior to each analysis. Dimethylsulfoxide (DMSO) was used as the drug vehicle and thus was administered in untreated cells as a control. | Western blotting Total proteins were extracted using the Protein Extraction Reagent Type 4 (P4, Sigma Aldrich). Cells were sonicated with 10 pulses of 15 seconds at 45 Watts Labsonic 1510 Sonicator (Braun) and clarified by centrifugation for 10 minutes at 10 000 RCF. Cytosolic and nuclear fractions were obtained lysing the cells in Buffer A (10 mM Hepes/KOH pH 7.9, 1.5 mM MgCl 2 , 10 mM KCl, 1 mM dithiothreitol (DTT), 0.1% Nonidet-P40) completed with protease inhibitors (Roche Applied Science) and phosphatase inhibitors (10 mM NaF, 2 mM Na 3 VO 4 ) in ice for 10 minutes. Cells were centrifuged at 5000 RCF for 10 minutes and the supernatants containing the cytosolic fraction were collected. The pellets were then lysed in P4 and sonicated for 10 pulses of 10 seconds at 45 Watts. After clarification, the supernatants containing the nuclear fractions were collected. Protein concentration was determined by the Bio-Rad Protein Assay, based on Bradford's method. The whole lane normalization strategy was adopted in all western blot analyses using a trihalo compound for protein visualization. [38][39][40] Acquired images were analyzed by Image Lab software 5.2.1 (Bio-Rad). 41 | Indirect immunofluorescence microscopy Cells were grown on Lab-Tek II chamber slide (Nunc). After stimulation, they were fixed with 4% formaldehyde for 10 minutes and then with 100% cold methanol for 10 minutes. They were subsequently permeabilized with 0.5% NP-40 in PBS for another 10 minutes. After performing the blocking procedure for 1 hour at room temperature, primary antibodies were applied in 0.1% Triton X100, 1% BSA in PBS overnight at 4°C. 
The following antibodies were used: anti-HDAC4 (Cell Signaling Technology, Thermo Fisher Scientific), anti-phospho HDAC4 Ser632 (Cell Signaling Technology), and anti-HIF1-a (Cell Signaling Technology, Thermo Fisher Scientific). The following day, slides were incubated with secondary anti-mouse TRITC-conjugated antibody (Sigma-Aldrich) or anti-rabbit FITC-conjugated antibody (Sigma-Aldrich) in 0.1% Triton X100, 1% BSA in PBS for 1 hour at 37°C. After washing procedures, DNA was stained with 4′,6-diamidino-2-phenylindole (DAPI) at a final concentration of 0.2 µg/mL. Washed slides were mounted and embedded with ProLong Antifade (Thermo Fisher Scientific). Slides were observed by Olympus IX51, and the images were acquired by ToupCam camera (ToupTek Europe). Image analyses were performed by ImageJ (NIH). | Quantitative real-time PCR Total RNA was extracted from WT hT and AT 648 hT fibroblast cell lines treated with dex or not treated using the RNeasy mini kit (QIAGEN). Five hundred nanograms of RNA were employed in each experiment to obtain cDNA PrimeScript™ RT Master Mix (Takara). One nanogram of cDNA was used in each PCR reaction for TaqMan Gene Expression Assays (Thermo Fisher Scientific) according to the manufacturer's instructions. PPIC and PPIA gene expressions were used as housekeeping genes. Amplification plots were analyzed using the ABI PRISM 7500 sequence detection system (Applied Biosystems) and the relative expression data were calculated by the ½ ΔCt method. The enrichment of reduced proteins was performed with hybridization between 100 µg samples and 60 µL of 50% streptavidin agarose beads (Pierce) in PBS-containing protease inhibitors. The hybridization on a rotating bascule at 4°C lasted for 2 hours. Biotinylated proteins were purified as reported by Rybak et al 44 and subsequently separated by SDS-PAGE (Novex 8%-16%) and transferred to nitrocellulose. Membranes were probed with the primary antibody anti-HDAC4 and immunoreactive signals were detected as previously described. | Transcription factors array Protein nuclear extracts were obtained from WT hT and AT 648 hT cells, with or without dex treatment, extracted in native conditions, according to the recommendations of the Panomics Protein/DNA arrays II kit. Transcription factor activity was evaluated using the enhanced chemiluminescence detection following the manufacturer's instructions. | Electrophoretic mobility shift assay EMSA Native nuclear proteins of WT hT and AT 648 hT were extracted as described in the TFs array section and used in EMSA. The double-stranded DNA encompassing the HIF-1a-binding site (TACGTG) of the DDIT4 promoter was obtained and labeled by PCR amplification using 5′FAM modified forward primer 5′-GTTCGAC TGCGAGCTTTCTG-3′ and reverse 5′-CCTTCTCTG CGCCACGACCC-3′. DNA-protein binding and gel migration were performed as previously reported. 45 Anti-HDAC4 and anti-HIF-1a were added in super shift assays before probe binding. | RNA interference RNAi experiments were performed with 6 nM siRNA against HIF-1a or HDAC4 (Ambionᴿ Silencerᴿ Select Pre-designed siRNAs) using Lipofectamine RNAiMAX (Invitrogen) according the guidelines provided by the manufacturers. The Select Pre-designed scramble 6 nM siRNA was used as a control. SiRNAs were added in the last 24 hours of 48-hour dex stimulation. RNA and proteins were extracted as previously described. 
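As a worked illustration of the relative-expression calculation used in the qPCR analyses described above, the sketch below applies a standard ΔCt formulation (relative expression = 2^−ΔCt against the housekeeping signal); the 2^−ΔCt form, the function name, and all Ct values are assumptions made for illustration and are not taken from the study's data.

# Minimal sketch of a delta-Ct relative-expression calculation (2^-dCt).
# All Ct values below are hypothetical and purely illustrative.
import math

def relative_expression(ct_target: float, ct_housekeeping: float) -> float:
    """Return 2 ** -(Ct_target - Ct_housekeeping)."""
    return math.pow(2.0, -(ct_target - ct_housekeeping))

# Hypothetical mean Ct of the two housekeeping genes (PPIC, PPIA) per condition.
hk_untreated = (22.1 + 21.9) / 2
hk_dex = (22.0 + 22.2) / 2

ddit4_untreated = relative_expression(ct_target=27.4, ct_housekeeping=hk_untreated)
ddit4_dex = relative_expression(ct_target=25.6, ct_housekeeping=hk_dex)

print(f"DDIT4 relative expression, untreated: {ddit4_untreated:.4f}")
print(f"DDIT4 relative expression, dex:       {ddit4_dex:.4f}")
print(f"fold change (dex / untreated):        {ddit4_dex / ddit4_untreated:.2f}")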
| Autophagy flux monitoring treatments In order to analyze the autophagic flux, treated and untreated cells were incubated for 48 hours and subsequently treated with the vehicle (control group), with chloroquine 100 μM (Sigma-Aldrich) and with chloroquine plus Pepstatin A 10 µg/µL (Sigma-Aldrich) for an additional 4 hours. Proteins were then analyzed as previously illustrated. | Statistical analysis GraphPad Prism was used for statistical analyses and graph generation. Statistical tests were chosen according to sample size and variance homogeneity. The following tests were used: t test for data from IF experiments, Mann-Whitney U test in case of unpaired medians comparisons, Wilcoxon test in case of paired medians comparisons, and Kruskal-Wallis test (nonparametric ANOVA) when more than two groups were compared. Means or medians were considered statistically different with P ≤ .05. | HDAC4 nuclear accumulation by cysteine reduction The first aim of this study was to evaluate if dex was able to alter HDAC4 nuclear localization, thus reversing the phenotype induced by its nuclear dysfunction. The intracellular distribution of HDAC4 ( Figure 1A) and p-HDAC4 ( Figure 1B) was assayed by indirect immunofluorescence (IF) in both WT hT and AT 648 hT cells treated with dex or untreated. Their quantified amounts are reported in Figures 1C,D, respectively. Consistent with the findings of Li et al, AT cells showed a larger amount of nuclear HDAC4 than did WT cells, and its magnitude further increased only in AT 648 hT after dex stimulation. The amount of nuclear p-HDAC4 was found to be slightly increased (not statistically significant) only in AT cells. These findings were also verified by western blot analyses of nuclear and total protein extracts using anti-HDAC4 and anti-p-HDAC4 antibodies ( Figure 1E). AT 648 hT cells showed an increase in nuclear HDAC4 after stimulation with dex compared to untreated cells. No significant difference was found in WT hT cells in terms of protein quantity ( Figure 1F). In agreement with IF quantification, AT 648 hT cells were found to have more nuclear HDAC4 protein than WT hT cells at basal conditions, but the amount of the protein was enhanced after dex only in AT cells. Nuclear HDAC4 phosphorylation status ( Figure 1G), as observed by IF, showed a slight increase in nuclear p-HDAC4/HDAC4 only in the dex-treated AT 648 hT cell line, though the increase was not statistically significant. Western blot analyses of total protein extracts with the anti-HDAC4 antibody showed a significant increase in HDAC4 only in AT 648 hT cells ( Figure 1H). Total phosphorylated HDAC4 was also tested and the ratio between pHDAC4/ HDAC4 showed an increase in treated AT 648 hT cells, while in WT hT cells, the ratio decreased after dex F I G U R E 1 HDAC4 and p-HDAC4 are specifically modulated in AT cells by dex. A and B, Typical images illustrating the nuclear localization of HDAC4 and p-HDAC4 in control and dex-treated WT hT and AT 648 hT cells stained by IF. C and D, Quantification of the signals derived from the IF experiments. At least 200 nuclei were counted for data processing. HDAC4 only accumulated in AT 648 hT cells after dex treatment (P = .012 t-test), while no statistical differences were appreciable for p-HDAC4 quantitation. E-I, Western blot analysis on nuclear protein extracts and total protein extracts of all the tested cell lines. The quantitation of the immunoreactive bands is also reported. 
Nuclear HDAC4 only accumulated in AT 648 hT-treated cells (Wilcoxon test P = .0313 n = 9), while no differences were recorded for nuclear p-HDAC4. The analysis of total protein extracts showed an increment of HDAC4 in AT 648 hT-treated cells (Wilcoxon test P = .313 n = 9), while p-HDAC4 was downregulated in WT hT and upregulated in AT 648 hT-treated cells (Wilcoxon test P = .0216, n = 9) treatment ( Figure 1I). The increased HDAC4 phosphorylation status was in disagreement with its nuclear localization, since the phosphorylated protein should be shuttled to the cytosol. 33 HDAC4 gene expression was also performed by qPCR assay, as illustrated in Supplemental Figure 1S. AT 648 hT cells showed an upregulation of HDAC4 gene expression after stimulation with dex compared to untreated AT 648 hT, whereas no significant differences were found between treated and untreated WT hT cells in terms of mRNA. It is known that numerous posttranslational modifications regulate HDAC4 subcellular localization and activity, reviewed by Mielcarek et al, 46 Di Giorgio and Brancolini 47 and Wang et al. 48 In particular, the reduction of the disulfide bridge between cystein-667 and cystein-669, inhibits its nuclear export, in spite of its phosphorylation status. 43 Therefore, we investigated the possible activity of dex on HDAC4 redox status, and consequently whether HDAC4 nuclear accumulation was related to its reduced state. We evaluated HDAC4 redox status by BIAM assay, as described in the material and methods section. 42,43 As illustrated in Figure 2A,B, HDAC4 reduction was greatly increased only in treated AT 648 hT cells, whereas no significant differences were observed in WT hT cells. As reported by Ago et al, in mice, thioredoxin (TXN) is able to regulate the localization of HDAC4, since the complex TXN-TBP-DNAJB5 reduces its disulfide bridge 667-669, promoting HDAC4 nuclear accumulation regardless of its phosphorylation status. We then proceeded to investigate TXN gene expression by qPCR assay, as reported in Figure 3A. Higher mRNA expression levels of TXN were observed in both treated WT hT and AT 648 hT cell lines, suggesting that TXN overexpression may actually influence the nucleocytoplasmic shuttling of HDAC4 by cysteine reduction. Nuclear factor erythroid 2-related factor 2 (NFE2L2) is a key player in cellular redox balance, and the activation of NFE2L2 results in the induction of genes involved in oxidative stress protection, including TXN. 49,50 Since dex could enhance the cellular nuclear translocation of NFE2L2 in AT lymphoblastoid cell lines (LCLs), 27 NFE2L2 nuclear localization was investigated by western blotting analysis as shown in Figure 3B. Surprisingly, we were not able to record any nuclear shift in the tested cells. However, we did observe a higher nuclear amount of NFE2L2 in AT fibroblasts than WT. The likelihood of HDAC4 nuclear accumulation by reduction is also supported by an autoregulatory feedback loop involving HDAC4 and miR-206. 51 HDAC4 in a reduced state suppresses miR-206 expression, thus avoiding the degradation of HDAC4 mRNA, a specific target of the previously mentioned miR. It has been shown through qPCR analysis that HDAC4 is overexpressed specifically in treated AT cells. MEF2A, and CREB When situated in the nucleus, HDAC4 can play numerous roles, the first of which is the deacetylase function. For this activity, HDAC4 binds directly to HDAC3 in order to activate its deacetylase domain, becoming competent for epigenetic alterations. 
52,53 To test nuclear HDAC4 deacetylase activity, we co-immunoprecipitated HDAC4 and then verified the presence of HDAC3 by western blotting using the anti-HDAC3 antibody, as reported in Supplemental Figure 2S. The interaction between HDAC4 and HDAC3 remained unaltered in AT 648 hT samples. In WT samples, dex seemed to reduce HDAC4/HDAC3 binding. Nuclear HDAC4 has an additional role in MEF2A and CREB activity suppression, promoting the downregulation of neuronal survival genes and leading to neurodegeneration in AT patients. 28

FIGURE 2 HDAC4 cysteines are reduced after dex treatment. A, Representative western blot image of the reduced HDAC4 immunoreactive bands obtained by biotin-modified cysteines captured by monomeric avidin beads and probed with anti-HDAC4 antibodies. B, Western blot quantification. Dex improved the reduced status of HDAC4 only in AT 648 hT cells, promoting its nuclear translocation (P = .0313 Wilcoxon test, n = 5).

The activity of MEF2A and CREB in the investigated cells was therefore tested by a transcription factor (TF) array analysis, which also contained the assays for the above-mentioned TFs. MEF2A and CREB activity was undetectable in both AT 648 hT and WT hT fibroblasts, regardless of dex stimulation (Supplemental Figure 3S). However, among the TFs that were modulated in AT 648 hT by dex, the hypoxia inducible factor-1a (HIF-1a) was noted.

| Dex increases HIF-1a/HDAC4 interaction HIF-1 is a heterodimer consisting of two subunits, oxygen-sensitive HIF-1a and constitutively expressed HIF-b. In hypoxic conditions, HIF-1a becomes stabilized, dimerizes with HIF-b, and can translocate to the nucleus. 54 HIF-1a is involved in the modulation of numerous proteins and enzymes of glucose metabolism and the glycolytic pathway. 55 Tang et al 56 reported that HDAC4 has the ability to stabilize HIF-1a via 14.3.3ζ, promoting epithelial-mesenchymal transition (EMT)/SLC22A1 transcription. Since dex stimulates HIF-1a in AT 648 hT cells but not in WT hT cells, we focused our attention on whether HDAC4 nuclear accumulation could directly modulate HIF-1a activity in the proposed AT cellular model. Accordingly, the interaction between HIF-1a and HDAC4 was assayed by co-immunoprecipitation of HDAC4, and the immunocomplex was tested by western blotting with anti-HIF-1a and anti-14.3.3ζ/δ. Figure 4A shows the enhancement of the interaction between HIF-1a and HDAC4 in AT 648 hT cells after dex treatment; only a weak signal was obtained in WT hT samples. The 14.3.3ζ interaction with HDAC4 was also assessed, and it seemed to decrease in AT cells after dex treatment. HIF-1a nuclear localization was assessed on nuclear protein extracts as reported in Figure 4B, and a significant HIF-1a increase was observable in treated AT 648 hT cells, while we did not find significant differences between treated and untreated WT hT samples. The IF assay with anti-HIF-1a and anti-HDAC4 antibodies ( Figure 4C) showed an HIF-1a fluorescent signal that was higher in dex-stimulated AT 648 hT cells. A colocalization signal with HDAC4 was also observable in these cells. The findings described above led us to investigate several HIF-1a downstream target genes, including SLC22A1, by qPCR, to assess their transcriptional activity.
The expression of SLC22A1 was undetectable (in contrast with Tang et al 56 ) in all the tested samples, while atypical outcomes were obtained testing the expression of vascular endothelial growth factor A (VEGFA), solute carrier family 2, facilitated glucose transporter member 1 (SLC2A1), insulin like growth factor-binding protein 1 (IGFBP-1), and glyceraldehyde-3-phosphate dehydrogenase (GAPDH) (Supplemental Figure 4S). Consequently, additional HIF-1a activities were investigated. ATM-dependent DDIT4 transcription Cam et al 11 reported that ATM is able to phosphorylate HIF-1a in Ser696 in hypoxic conditions, driving DDIT4 F I G U R E 3 TXN is upregulated by dex in NFE2L2 in an independent manner. A, HDAC4 reduction should be mediated by TXN, which is actually overexpressed upon dex treatment in both WT hT and AT 648 hT cells (Wilcoxon test P = .0239 and P = .041, respectively, n = 5). B, The overexpression of TXN was not mediated by NFE2L2 since no further accumulation in the nucleus is observed after dex treatment in all the analyzed cell lines. However, a higher basal amount of nuclear NFE2L2 was observed in AT cells than in WT cells (Test U Mann-Whitney P = .317, n = 5) (also known as REDD1) expression in AT mouse embryonic fibroblasts (MEFs) and in human AT fibroblasts. DDIT4 in turn, indirectly leads to the suppression of the mammalian target of rapamycin (mTORC1), 57 therefore improving the autophagy process, one of the compromised biological pathways in AT cells. 58 This prompted us to assess whether dex might somehow modulate HIF-1a by HDAC4, bypassing the HIF-1a phosphorylation by ATM, and controlling the HIF-1a-mediated DDIT4 transcription. We therefore first assessed DDIT4 mRNA expression and protein levels after dex treatment by qPCR and western blotting, using the anti-DDIT4 antibody as shown in Figure 5A,B, respectively. The DDIT4 transcript was found to be significantly overexpressed only in treated AT 648 hT, whereas no significant changes were observed in WT hT. In contrast, the DDIT4 protein showed a slight increase in WT hT cells after dex treatment, but showed its largest increase in AT 648 hT cells treated with dex. Once it had been established that DDIT4 expression is influenced by dex action, we turned our attention to the possible HIF-1a/HDAC4 interaction in the HIF-1a-binding site localized in the DDIT4 promoter. 59 First, we performed a gel shift assay using a probe surrounding the HIF-1a-binding site. As reported in Supplemental Figure 5S, at least three protein-DNA complexes were observable in all conditions. The super-shift by HDAC4 or HDAC4 and HIF-1a antibodies seemed to affect the composition of the complexes, especially in the treated AT 648 hT sample. In order to obtain further confirmation of the gel shift results, the immunoprecipitation of chromatin (ChIP) on AT 648 hT cells was achieved with HDAC4 and HIF-1a antibodies. The fragments surrounding the DDIT4 promoter HIF-1a-binding site were quantified by qPCR. As shown in Figure 6A, there was no significant difference between treated and untreated AT 648 hT in terms of the amount of HIF-1a in the DDIT4 promoter, while qPCR on anti-HDAC4 ChIP showed a larger amount of HDAC4 in the same promoter locus in dex treated AT 648 hT cells. DDIT4 transcription dependence on HIF-1a/HDAC4 was assayed by gene silencing experiments and DDIT4 expression was evaluated. 
In both HIF-1a and HDAC4 silencing, we observed a reduction in the amount of transcript (a downregulation of about 50%-60%, Figures 6B,C, respectively) only in AT 648 hT cells. In fact, in all WT hT conditions and in untreated AT 648 hT cells, the mRNA levels remained unaffected by siRNA treatments, thus reinforcing the idea that HDAC4 is responsible for HIF-1a DDIT4 transcription. In silencing experiments, the DDIT4 protein amount was also evaluated as reported in Figure 6D (quantified in Supplemental Figure 6SA), and its expression matched the mRNA amount. Additional results, including HIF-1a downregulation in siRNA experiments and the relationship between the amount of HIF-1a and DDIT4 expression, are reported in Supplemental Figure 6SB and C, respectively. Taken together, these findings show that DDIT4 is actually transcribed by the HIF-1a-HDAC4 complex after dex induction, bypassing ATM activity selectively in AT cells. | Autophagy is enhanced by dex without mTORC1 activation DDIT4 can activate the TSC1/2 complex, which converts Rheb to the inactive GDP-bound state, leading to the inhibition of mTORC1 activity 60 and indirectly promoting autophagy. 61,62 Autophagy dysfunctions are involved in several neurodegenerative diseases 63 and these impairments have also been described in AT. 58 Since the above-mentioned results concern DDIT4 increase, we decided to investigate the autophagy pathway in dex-treated AT cells. To detect autophagic flux, microtubule-associated protein light chain 3 (LC3) was assayed as an autophagy marker. During autophagy, the cytoplasmic form LC3-I is recruited to autophagosomes and converted to LC3-II through lipidation, and LC3-II associates with autophagosomal membranes. The amount of the lipidated form LC3-II is correlated with the number of autophagosomes. 64,65 Considering that LC3B-II is rapidly degraded inside autolysosomes, LC3 immunoblotting may not reflect the real autophagy activation. [66][67][68] Hence, to investigate the accurate autophagic flux, we performed LC3B degradation blocking experiments ( Figure 7A). In basal conditions, without inhibitor treatment, the level of LC3B-II decreased in both dex-treated WT and AT cells, although the reduction was more evident in AT samples. Under chloroquine treatment, AT 648 hT showed an increase in LC3B-II, whereas WT cells exhibited a decrease after dex treatment, but both outcomes were not statistically significant. In the chloroquine plus pepstatin condition, no differences were observable in WT hT cells, while a statistically significant LC3B-II accumulation was detected in dex-treated AT 648 hT samples. To monitor autophagic flux, in addition to LC3B, the p62 (SQSTM1/sequestosome 1) marker was also tested. Its degradation reflects an enhancement of the autophagic process. p62-Ubiquitin and LC3B are associated with mature autophagosomes and then degraded into autolysosomes. 69 The p62 protein level was assayed on total protein extracts via western blot analysis with anti-p62 antibody. Figure 7B shows a significant decrease in p62 in AT-treated cells. On the other hand, no F I G U R E 5 DDIT4 gene expression is specifically induced by dex in AT. A, Analysis by qPCR shows that dex specifically modulates the DDIT4 transcript only in AT 648 hT-treated cells (Wilcoxon test P = .0313 n = 7). B, Representative western blot and matching quantification of DDIT4 in total protein extracts. A DDIT4 protein boost was evident in treated AT 648 hT (Wilcoxon test P = .0355 n = 6). 
At the protein level, a small increment was also observable in WT hT dex-treated cells (Wilcoxon test P = .035 n = 6) | 1811 RICCI et al. significant difference was observed in the WT hT sample. Finally, the VPS18 autophagy marker was also evaluated. VPS18 is a central subunit of the VPS-C core complex involved in fusion between endosomes and lysosomes or autophagosomes and lysosomes. 70,71 VPS18 is critical for autophagosome clearance. 72 VPS18 protein level was assessed using anti-VPS18 and the quantifications are illustrated in Figure 7C. The amount of VPS18 protein decreased in AT 648 hT after dex treatment bringing VPS18 to the same levels as those found in WT hT cells, which were unaffected by dex stimulation. The VPS18 gene expression was detected by qPCR (Supplemental Figure 7S). VPS18 mRNA content was increased in AT-treated cells, while there was not a significant difference in the WT hT sample. The results for LC3B, p62, and VPS18 support the positive dex-induced effects on autophagic flux in AT fibroblasts. DDIT4 should stimulate autophagy by acting indirectly on mTORC1 complex, which is an atypical serine/threonine protein kinase. mTORC1 is the master regulator of cell growth and coordinates the cellular response to growth factors and nutrient sufficiency. 73 The main downstream targets of mTORC1, p70 ribosomal protein S6 kinase (p70S6K), and eukaryotic translation initiation factor 4E (eIF4E)-binding protein 1 (4E-BP1), are involved in the translation initiation process. 74 Therefore, the activity of mTORC1 was investigated by testing p70S6K and 4E-BP1 phosphorylation. We expected a decreased phosphorylation of both targets in AT cells treated with dex, but surprisingly no significant differences among the samples were observed (Supplemental Figure 8S). This unexpected outcome led us to test the effects of dex on HIF-1a-silenced fibroblasts. In Figure 8, p-p70S6K normalized signal is reported. Only in dex-stimulated AT 648 hT we observed a large amount of phosphorylated p70S6K, while no differences were observed in p-4E-BP1. This behavior could be due to the mTORC1 activation in AT cells after dex treatment, since its upstream pathway was found to be activated, as reported in Figure 9A. Dex induced AKT phosphorylation, especially in AT 648 hT cells, which F I G U R E 6 DDIT4 is selectively transcribed in AT cells by the HIF-1a-HDAC4 complex after dex stimulation. A, qPCR quantification of ChIP outcome in AT 468 hT cells revealed that the amount of HIF-1a in the DDIT4 promoter was unaltered (Wilcoxon test P = .14, n = 5), while the amount of HDAC4 was markedly increased after dex treatment (Wilcoxon test P = .0355, n = 5). B-C, The silencing of HIF-1a and HDAC4 by siRNAs, decreased DDIT4 expression in treated AT 648 hT cells by approximately 70% (siRNA HIF-1a, Wilcoxon test P = .0313 n = 7) and by approximately 50% (siRNA HDAC4, Wilcoxon test P = .0084 n = 6) when compared to the siRNA SCR dex-treated control. No differences between siRNA SCR and siRNA HIF-1a or siRNA HDAC4-untreated AT cells were observed. DDIT4 transcription is improved by HDAC4 HIF-1a stabilization upon dex stimulation specifically in AT cells. D. DDIT4 protein amounts in HIF-1a and HDAC4 targeting siRNAs in all the tested cell lines. DDIT4 protein levels were also reduced in treated AT 648 hT after HIF-1a and HDAC4 silencing. DDIT4 HIF-1a-silenced western blot quantification is reported in Supplemental Figure S6 also showed increased p-GSKb levels ( Figure 9B). 
The AKT signaling in AT cells should promote mTORC1 activation, but the simultaneous DDIT4 expression counteracts this stimulation at the mTORC1 level. | Inferring HDAC4 and DDIT4 expression in AT patients We have previously described the blood gene expression variation in AT patients enrolled in the EryDex clinical trial (IEDAT EudraCT Number 2010-022315-19), 16 in healthy subjects and in untreated AT patients by microarray analysis. 21 Among the differentially expressed probes in patients who received the treatment and the untreated subjects, we observed an expression increment in HDAC4 and DDIT4 genes. These indications were validated in the present investigation by qPCR, confirming that HDAC4 and DDIT4 gene expression is modulated in AT patients receiving dex ( Figure 10A,B, respectively). HDAC4 expression was found to be statistically different in all three tested groups. This means that dex can improve HDAC4 expression deficiency in AT patients raising it to levels found in healthy subjects. DDIT4 was found to be statistically downregulated in AT patients compared to healthy subjects. Dex improved DDIT4 expression in EryDex AT patients, but not in all subjects (P = .1). F I G U R E 7 Autophagy is specifically improved in dex-treated AT cells. A, Representative western blot of total protein extracts of the LC3-I and LC3-II immunoreactive bands of all the experimental conditions after 4 hour of treatment with chloroquine and chloroquine plus pepstatin A. The western blot quantification shows the LC3-II/LC3-I ratio. The increased LC3 II/I ratio after chloroquine-pepstatin A was only detected in treated AT 648 hT, suggesting an autophagic flux improvement (Wilcoxon test P = .138 n = 8). A slight decrement of LC3 II/I ratio was observed both in unblocked WT hT and AT 648 hT treated with dex (Wilcoxon test P = .0345 n = 8 and P = .035 n = 8, respectively). B, p62/ SQSTM1 western blot and quantification of total protein extracts of all the tested cell lines. The p62 downregulation confirmed the enhancement of the autophagic flux in AT 648 hT after dex stimulation. WT hT did not show any significant differences in terms of p62 content. (Wilcoxon test P = .026 n = 8). C, Western blot quantification of VPS18 of whole protein extracts of all the tested cell lines. AT cells showed a higher basal amount of the VPS18 protein than the WT cells (Mann-Whitney U test P = .0038 n = 7), suggesting an impairment of the autophagosomelysosome fusion. Dex decreased and restored the amount of the VSP18 protein in AT 648 hT bringing its level to that of the WT hT protein (Wilcoxon test P = .0140 n = 7) | DISCUSSION Ataxia Teleangiectasia is a severe syndrome, and no effective disease-modifying treatment is available. Only supporting therapies are used to care for patients. However, in the last few years, observational studies and clinical trials have shown that treatment with glucocorticoids improves symptoms and neurologic functions in patients with AT. The authors of the present study have previously described the influence of dex in AT patients and in LCLs. Since the lack of ATM leads to HDAC4-induced neurodegeneration, 28 we assessed whether dex could reverse this state by re-locating HDAC4 in cells. Our findings have led us to propose a new molecular mechanism for the nonepigenetic regulation of gene expression by HDAC4. 
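As a schematic of how the lysosomal-inhibition experiments above are read out, the sketch below computes an LC3-II/LC3-I ratio per lane and a net-flux proxy (accumulation under chloroquine plus pepstatin A relative to vehicle). The band intensities are hypothetical placeholders rather than the study's densitometry values, and the simple difference used as the flux proxy is one common convention, not the authors' exact quantification scheme.

# Sketch of an autophagic-flux readout from LC3 immunoblot densitometry.
# Intensities are hypothetical and assumed normalized to total lane protein.
def lc3_ratio(lc3_ii: float, lc3_i: float) -> float:
    """LC3-II/LC3-I ratio for one lane."""
    return lc3_ii / lc3_i

def net_flux(ratio_blocked: float, ratio_vehicle: float) -> float:
    """Net flux proxy: LC3-II accumulation caused by blocking degradation."""
    return ratio_blocked - ratio_vehicle

# (LC3-II, LC3-I) per condition; "blocked" = chloroquine + pepstatin A.
samples = {
    "WT hT + dex": {"vehicle": (0.8, 1.0), "blocked": (1.0, 1.0)},
    "AT 648 hT + dex": {"vehicle": (0.6, 1.0), "blocked": (1.6, 1.0)},
}

for cell, lanes in samples.items():
    r_vehicle = lc3_ratio(*lanes["vehicle"])
    r_blocked = lc3_ratio(*lanes["blocked"])
    print(f"{cell}: LC3-II/I vehicle={r_vehicle:.2f}, blocked={r_blocked:.2f}, "
          f"net flux proxy={net_flux(r_blocked, r_vehicle):.2f}")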
In contrast to previously published data by Li et al reporting that nuclear HDAC4 promoted neurodegeneration, we suggest a different role for HDAC4, which was found to act as a direct transcription regulator in AT fibroblasts, leading to an unexpected outcome. Our initial results showed an extra nuclear HDAC4 accumulation in AT cells after dex treatment, by cysteine reduction and not by phosphosignaling. Since this observation of such a dex effect in AT cells was unexpected, we decided to investigate this event thoroughly. The clinical data of patients treated with dex 16,17,21,36 actually showed an improved neurological outcome, and it was hard to consider the HDAC4 nuclear shift as a side effect. In fact, the findings reported in the present paper are not in agreement with those reported in literature. First, the accumulation by cysteine reduction should be TXN mediated, 43,51 and our results showed a dex-dependent increment of TXN expression, but unlike previously published data, concerning AT LCLs, 27 this overexpression seemed to be NFE2L2 independent. Second, cysteine mediated nuclear localization of HDAC4 is not related to its deacetylase activity, as the interaction with the HDAC3 protein remained unaltered in AT after dex treatment. Hence, redHDAC4 probably has some other functions. Third, the effect of HDAC4 on the transcription factors MEF2A and CREB was not observable in the proposed fibroblasts; thus, their repression by HDAC4 was unquantifiable. Several other transcription factors were dex modulated in the TF array experiments. Among these, HIF-1a was selected as a potential partner of HDAC4, and this interaction was investigated. Actually, dex is capable of selectively increasing the interaction between HIF-1a and HDAC4 only in AT cells, and its nuclear amount was increased. The described interaction is able to stabilize HIF-1a transcription activity, 56 but the described gene modulation (SLC22A1) was not evident in AT fibroblasts. In addition, a panel of classical genes regulated by HIF-1a was evaluated, but the results were unusual and contradictory. The HIF-1a pathway analysis led us to investigate the possibility that HDAC4 HIF-1a interaction might be able to modulate DDIT4 expression bypassing ATM activation, which is responsible for inducing HIF-1a activity in hypoxia conditions. Obviously, we were not interested in hypoxia conditions, but simply in testing whether the axis dex-HDAC4-HIF-1a and DDIT4 could activate and restore autophagy, a compromised molecular mechanism in AT cells. 58 It is known that dex induces DDIT4 in some cells such as lymphocytes 62 and thymocytes, 75 in rat skeletal muscle 76 and rat hippocampus. 77 However, in the above-mentioned papers, the administered dex was in the micromolar concentration range, and the overexpression disappeared after 24-36 hours depending on the cell type. In contrast, our data showed an increase in DDIT4 mRNA and protein amounts only in AT fibroblasts, while WT cells were unaffected. Furthermore, the induction was present at nanomolar dex concentrations, and DDIT4 expression was protracted until 72 hours. Moreover, the molecular mechanism of action of DDIT4 induced by dex in all the investigated cell lines was unknown. The present paper illustrates for the first time a likely mechanism of F I G U R E 1 0 HDAC4 and DDIT4 are also modulated by dex in AT patients. 
A, HDAC4 qPCR on patients' samples, previously tested by microarray analyses, revealed that HDAC4 is downregulated in AT patients compared to healthy subjects, and the gene expression in AT patients who received EryDex was restored (Kruskal-Wallis P = .036 followed by Dunn test). B, Like HDAC4, DDIT4 was downregulated in AT patients compared to healthy subjects (Kruskal-Wallis P = .016 followed by Dunn test), and dex improved DDIT4 mRNA levels only in some treated patients. Figure 11. Hypothesized biomolecular pathway induced by dex in AT. Schematic representation of the probable pathways that regulate autophagy and proliferation selectively modulated by dex treatment in AT cells. Only treated AT cells showed a biological switch: the proliferation and survival pathways are predominant over the growing pathway. Autophagic improvement can sustain this switch action through which dex can modulate DDIT4 expression, although this mechanism seems to be limited to AT cells. Indeed, the HIF-1a stabilization and activity by HDAC4 on the DDIT4 gene was observable only in AT cells and not in WT samples. The induction of DDIT4 represented a very important pathway, since it is involved in the autophagy process that was restored after dex administration, as reported in the results section. In addition, the AT fibroblasts used here showed vesicle fusion impairment, as documented by D'Assante et al. Actually, the amount of VPS18 messenger was lower in AT than in WT samples at the basal condition. 58 Furthermore, the analysis of VPS18 protein, markedly higher in AT untreated samples, suggested a large amount of CORVET and HOPS tethering complexes in the cells. This may be due to the large amount of vesicles/autophagosomes that are not able to correctly fuse to lysosomes. The LC3B-II analyses confirm an improvement in autophagic flux in AT cells and further reinforce the idea that the problem in the AT autophagy process is the fusion between autophagosomes and lysosomes, since the LC3B-II/I ratio is statistically boosted in chloroquine-pepstatin A experiments. 68 Dex treatment of AT cells reinstated the levels of VPS18 protein to levels found in WT, and the mRNA level was also restored, which is consistent with the improved autophagic flux. Finally, p62 levels confirmed the enhancement of autophagy. Since autophagy is typically tuned by the mTORC1/DDIT4 pathway, we decided to test the mTORC1 activity, which should have been downregulated. The mTORC1 targets were assayed, but they were inexplicably unaffected after dex administration, and thus unaffected by DDIT4 overexpression. At this point, we wondered if some other signaling was activated upstream. To this aim, we investigated the activity of mTORC1 when the HIF-1a-DDIT4 axis was switched off. Surprisingly, p70S6K was found to be activated only in silenced, treated AT samples, leading us to hypothesize that dex might exert an upstream activation of mTORC1 in AT, probably through AKT signaling, which was strongly activated and whose pathway sustained GSKb phosphorylation in AT. We therefore suggest that DDIT4 may counteract the AKT-dependent mTORC1 stimulation. AKT in turn can be regulated by PDK1 78,79 or by the mTORC2 complex. 80 PDK1 can also directly act on p70S6K, 78 which was always found to be unaffected by dex action in the tested cells; thus, we can assume that AKT phosphorylation is mediated by mTORC2.
How dex can stimulate AKT signaling through the above-mentioned mechanisms remains unclear, even though a short-term glucocorticoid nongenomic pathway was described by Matthews et al. 81 The authors described a dexamethasone AKT activation lasting only a few minutes after treatment; on the contrary, we observed this event for at least 4 days treatment (also noted in another type of AT cells). 22 In light of all of these findings, we can propose that, only in AT-treated cells, there is a signaling branching through which the growing signaling (AKT-mTORC1-p70S6K), reviewed by Jewell and Guan 82 and by Saxton and Sabatini, 83 is switched to survival and proliferation signaling (AKT-GSKb). 84,85 At this point, the described autophagic flux improvement after dex treatment seems to be mTORC1 independent, but the mechanisms of its activation should be further investigated. In any case, we can hypothesize that an additional DDIT4 function can also be exerted in AT cells in the same manner that it can be exerted in human osteosarcoma cells and mouse embryo fibroblasts, as proposed by Qiao et al. In fact, DDIT4 can also control autophagosome-lysosome fusion by inhibiting ATG4b-mediated LC3-II delipidation to LC3-I. 61 Lipidated LC3B is essential for the correct movement forward of lysosomes and a proper fusion. 86 This possible DDIT4 activity through dex in the reported cellular model is in agreement with confirmed findings that the impairment of autophagic flux in AT is due to autophagosome-lysosome fusion deficiency. Nevertheless, the slight autophagy enhancement can endure the described DDIT4-AKT-mediated survival and proliferation signaling from an energy balance standpoint. Based on all of the findings described above, the proposed signaling that occurs specifically in AT fibroblasts treated with dex is shown in Figure 11. Finally, the availability of AT patients' data led us to explore the possibility that the aforementioned biological pathway may occur in dextreated subjects. Indeed, HDAC4 was found to be statistically altered in AT patients compared to healthy subjects, and dex changed this state. DDIT4 varied between healthy and AT subjects; some, but not all patients treated with dex improved their DDIT4 gene expression level. The number of analyzed patients is critical for the correct outcome estimation and the patient's genetic variability might contribute to their response to dex. It could be interesting to extend these gene expression variations in an ongoing phase III clinical trial (EDAT-02-2015 NCT02770807). It has to be noted that in the last few years, DDIT4 has been incongruously described as being involved in several types of malignancies and cancer therapy. 87 In light of the findings reported here and the fact that AT patients are particularly prone to tumor development, we propose that the DDIT4 pathway should be carefully evaluated and further investigated for the thorough treatment of these patients.
2019-12-12T10:17:01.756Z
2019-12-08T00:00:00.000
{ "year": 2020, "sha1": "3294d9c5d8a21e6dabe7a37fb114e29ec7754d34", "oa_license": "CCBYNC", "oa_url": "https://faseb.onlinelibrary.wiley.com/doi/pdfdirect/10.1096/fj.201902039R", "oa_status": "HYBRID", "pdf_src": "Wiley", "pdf_hash": "d6de2ad31157a60e6fc0c5fc15d454baee23dd2b", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
73461715
pes2o/s2orc
v3-fos-license
Active contact tracing beyond the household in multidrug resistant tuberculosis in Vietnam: a cohort study Background Currently in Vietnam contact tracing for multidrug-resistant tuberculosis (MDR-TB) entails passive case finding among symptomatic household contacts who present themselves for diagnosis. Close contacts of MDR-TB cases are therefore not identified adequately. We assessed the added value of active contact tracing within and beyond households using social network questionnaires to identify close contacts of MDR-TB patients in Vietnam. Methods We conducted a cohort study using social network questionnaires in which contacts were identified by MDR-TB patients, including contacts from ‘high risk’ places like work. Contacts of MDR-TB patients were followed up and screened over a period of at least 6 months. This included two active screenings and any unscheduled passive screening of self-referred contacts during the study period. Results Four hundred seventeen contacts of 99 index cases were recruited, 325 (77.9%) and 160/417 (38.4%) contacts participated in the first and second screenings, respectively. The first screening detected one TB case but the bacteria were not MDR. From passive screening, a household contact was diagnosed with TB meningitis but not through our active approach. Social network analysis showed that only 1/17 (5.9%) high-risk places agreed to cooperate and were included in the screening, and no MDR-TB cases were detected. There were two pairs of index cases (identified separately) who were found to be contacts of each other and who had been diagnosed before the study started. Conclusions No new MDR-TB cases were detected using social network analysis of nearly 100 MDR-TB index cases, likely due to a relatively short follow up time, and loss to follow up (lack of cooperation from contacts or high risk places and lack of available resources in the National Tuberculosis Control Programme). Electronic supplementary material The online version of this article (10.1186/s12889-019-6573-z) contains supplementary material, which is available to authorized users. Background The emergence of resistance to anti-tuberculosis drugs, and particularly of multidrug-resistant tuberculosis (MDR-TB) is a serious public health threat and an obstacle to effective global TB control [1]. It is crucial to identify more MDR-TB cases at an earlier stage and provide optimal treatment. Vietnam is ranked 13th among 30 high burden MDR-TB countries (based on estimated incidence by absolute number) with an estimated 5500 MDR-TB among a total of 100.000 notified TB cases per year [2]. Despite the efforts to utilize rapid test to intensify case finding of MDR-TB; in Vietnam, the proportion of MDR-TB cases detected and treated annually is low compared with the estimated number of incident MDR-TB cases (less than 50%, see Additional file 1 for notification and enrollment of MDR-TB cases) [3]. Contact screening of MDR-TB patients is highly recommended by the World Health Organization (WHO) [4]. However, contact investigation of household members only is not sufficient to identify all MDR-TB cases due to transmission outside the household. In rural Vietnam only 1% of index TB patients had a positive household member and 83% of these household TB cases were infected with an isolate that differed from that of their household members [5]. These results are similar to those in higher incidence settings in South Africa, and Malawi [6,7]. 
The WHO also recommends to conduct contact investigation beyond the household for patients with MDR-TB and extensively drug-resistant TB (XDR-TB), and to collect additional information regarding their residence and other social settings where transmission may have occurred such as hotels, shelters and bars [4]. Contact tracing using social network questionnaires is a more comprehensive approach than household contact tracing, which includes the linking person to person or person to place for contact investigation [8,9]. Although screening of close contacts of MDR-TB patients is recommended by the National Tuberculosis Control Programme (NTP) of Vietnam (see Additional file 1 for policy recommended by the NTP Guidelines) [10], there is no system in place to support this. Currently a passive case finding approach is used, where household contacts are advised to seek TB diagnosis when symptomatic. We assessed the added value of active contact tracing within and beyond the household using social network questionnaires (SNQ) among contacts of MDR-TB patients in Vietnam. Study design and setting A cohort study was conducted to analyze the added value of an active screening using SNQ, the questionnaire revealed the patient's contacts through their social network including the frequently met people and visited places. Contacts were either named by patients or identified from eligible places. Close contacts of MDR-TB patients were enrolled and followed up over a period of at least 6 months and screened for TB and MDR-TB. Contacts were screened at enrolment, followed by an appointment on completion of the first screening and a reminder at 6 months by telephone for the second screening. During the study period, study participants were asked to make an unscheduled visit to district TB units and contact with the district health coordinators if they had any symptom suggestive of TB. The screening consisted of (i) standardized clinical assessment, (ii) chest X-ray among those who were not presumed to have TB by clinical assessment, and (iii) microbiological testing by Gene Xpert MTB/RIF (Xpert, Cepheid, the United States) for patients with a history or chest X-ray suggestive of TB. Study population and definitions The study involved patients with rifampicin resistant TB and their eligible contacts (all ages), who were named by index patients or identified from eligible places as defined below. The minimum sample size was estimated at 100 patients (see Additional file 2 for sample size calculation). Inclusion criteria Eligible for enrolment were all patients diagnosed with rifampicin-resistant TB or MDR-TB diagnosed by Xpert or by Genotype MTBDR Plus Line Probe Assay (Hain Lifescience Nehren, Germany) who were living in Hanoi and started MDR-TB treatment between October 2013 and April 2015. Their defined contacts (household contacts or contacts outside the household, either named by patients or from eligible places) during the 3 months preceding MDR-TB diagnosis were eligible for enrolment as contacts. Eligible high-risk places were physically enclosed spaces where the MDR-TB index case spent an average of at least 4 h a day for at least 14 days, or a cumulative total average duration of at least 8 h per week for at least 8 weeks in 3 months prior MDR-TB diagnosis. For children who were less than 18 years old, information was obtained from their parents or responsible family members. 
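As a hedged illustration of the exposure rule stated above, the following sketch simply encodes the two alternative criteria that make an enclosed place an eligible high-risk place; the function and parameter names are illustrative assumptions, not taken from the study's case report form.

```python
# Hedged sketch of the stated eligibility rule; names are illustrative, not from
# the study's case report form.
def is_high_risk_place(hours_per_day, days, hours_per_week, weeks):
    """An enclosed place is eligible if the index case spent >= 4 h/day there for
    >= 14 days, or >= 8 h/week for >= 8 weeks, in the 3 months before diagnosis."""
    daily_criterion = hours_per_day >= 4 and days >= 14
    weekly_criterion = hours_per_week >= 8 and weeks >= 8
    return daily_criterion or weekly_criterion

# Example: a cafe visited ~2 h/day on 20 days, adding up to ~10 h/week over 9 weeks
print(is_high_risk_place(hours_per_day=2, days=20, hours_per_week=10, weeks=9))  # True
```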
Data collection and analysis We modified and contextualized a published case report form (CRF) [11], which was then validated by fine tuning the language to make sure the respondents can understand and answer our questions adequately, and used to interview consenting patients (see Additional file 2 for description of modifications to the questionnaire). The following data were collected: demographics, medical history, social network including their contacts and frequently visited places, as well as tracing information (name, address, telephone number). As soon as patients were diagnosed and enrolled for treatment during one to two weeks at the provincial hospital, informed consent was obtained, and the interviews with the patients were conducted by trained TB health care workers. Completed patient CRFs were entered in a central database (CliRes) and reviewed by the study coordinator to identify eligible places and contacts (see Additional file 2 for operational definitions) for screening. Eligible contacts were registered at district TB units to be followed up by the study team. Eligible places were visited by provincial or district coordinators to obtain informed consent from the place's legal representative, followed by public announcement to call frequent visitors to come for screening to identify additional contacts. Completed contact CRFs were entered in the central database. Social network analysis was applied to identify links among MDR-TB index cases, contacts, and places. In the network illustrated by the link among patients, contacts and places, contacts or index cases linked to more than one patient were considered as the source of transmission, hence the centers of the network. The centrality degree of the contacts was measured by the number of patients attached with each contact. In order to determine if the contacts or places were mutual (i.e. named by at least 2 confirmed MDR-TB patients), demographics and information such as address and telephone number of the places and contacts were collected and compared. Mutual contacts or places were identified when identical information was obtained between pairs of contacts or pairs of places named by different patients. The Social network analysis also looked at the density illustrated by how closely contacts and patients are connected. The number of contacts per patient was used to rank the closeness level between patient and contacts [12]. Statistical analysis Data were entered in MS Access software (Microsoft Inc., USA) and then transferred to SPSS 16.0 for statistical analysis. Descriptive statistics, including frequency, median, interquartile range (IQR), proportion and 95% confidence intervals (95% CIs), were performed where appropriate. The comparisons were tested statistically using Chi-Square test to compare proportions. P-values (2-sided) below 0.05 were considered significant. Characteristic of MDR-TB patients (index cases) Of 112 eligible patients, 99 were enrolled into the study as MDR-TB index cases. All patients were adult, 51(51%) were 35-54 years old and 77 (78%) were male. Four patients were HIV positive (Table 1). Sputum smear and chest X-ray were performed for all patients: 76/99 (77%) had a smear-positive result. Thirty-two patients (32%) had X-ray signs of cavitation, all of whom were smear-positive. Seventy patients (71%) had been previously treated with first-line anti-TB drugs. 
These included patients detected as MDR-TB when starting retreatment or detected later when found to be smear-positive (non-converters) after two months of retreatment. Fourteen patients (14%) had received no or less than one month of TB treatment previously. The remaining 15 patients (15%) included 11 non-converters during their first treatment course and 4 patients previously treated in the private sector with unknown outcome. Characteristics of contacts We identified 496 contacts and 17 high risk places based on information provided in the SNQ: 481 contacts were named by patients and 17 high risk places were approached by visiting for informed consent. Of these, 1 place agreed to cooperate, and subsequently an additional 15 contacts were identified. Seventy-nine contacts were excluded from the study because they were not living in Hanoi (n = 16) and/or had not been in direct contact with the index patient during the 3 months before diagnosis (n = 69) (Fig. 1), leaving 417 eligible contacts whose characteristics are described in Table 2. They included 292 (70.0%) household contacts and 125 (30.0%) non-household contacts. Of the 125 non-household contacts, one contact came from a high-risk place, and the others had been named by index patients. Of the 417 eligible contacts, 189 (45.3%) were males and 86 (20.6%) were children under 15 years of age at the time of identification (see Additional file 2 for a more detailed table). There were no apparent differences in the proportion of contacts participating in the screening by age group (see Additional file 2). Upon first screening, 36/325 (11.1%) contacts interviewed were clinically diagnosed as presumed TB. Chest X-rays were performed for 299 contacts, including 10 with clinically presumed TB, of whom an additional 12 (4.0%) had an abnormal chest X-ray suggestive of TB (Fig. 1). Xpert testing was performed for the total of 48 presumed TB cases identified by clinical assessment and/or by chest X-ray. We detected one drug-susceptible TB case and no rifampicin-resistant/MDR-TB case from the first active screening of contacts (Fig. 2). Among 160 contacts assessed in the second active screening, twenty-seven (including 3 contacts who also participated in the first screening) had presumptive TB by interview and/or chest X-ray. Xpert MTB/RIF testing detected no TB case. From passive screening, a two-year-old child, whose father was an index patient, was diagnosed with TB meningitis but not as part of our study (Fig. 2). This child was not identified as presumed TB in the first screening. She was taken by her parents to the Vietnam National Children's Hospital for diagnosis, and not to the district coordinator, when she later developed fever, cough and loss of consciousness. Social network analysis The median number of eligible contacts per index patient was 3 (IQR: 3-6). These median numbers were 3 (IQR: 2-4) among household contacts and 2 (IQR: 1-4) among non-household contacts. Index patients named 35 places, of which 17 were identified as high-risk places, including 3 workplaces (1 vocational school, 1 private tailoring company, 1 grocery store), 3 internet cafés, 2 hair salons and 9 restaurants. Only 1/17 (5.9%) high-risk places (the vocational school) agreed to cooperate. One presumed MDR-TB case was identified based on clinical diagnosis among the 15 people screened who frequented this high-risk place (Fig. 1), but this person was subsequently not diagnosed with TB. We found no mutual contact and no mutual place among the MDR-TB index cases.
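As a hedged illustration of the social network analysis described in the Methods, the sketch below (made-up data, not the study's records) builds the person-to-person and person-to-place network with networkx, flags potential mutual contacts or places by their degree, and reports the overall network density.

```python
# Made-up data; illustrates the person-to-person and person-to-place network and
# the centrality degree (number of index cases linked to each contact or place).
import networkx as nx

edges = [
    ("index_01", "contact_A"), ("index_01", "contact_B"), ("index_01", "place_cafe"),
    ("index_02", "contact_B"), ("index_02", "contact_C"),
    ("index_03", "contact_D"), ("index_03", "place_cafe"),
]

G = nx.Graph()
G.add_edges_from(edges)

# A contact or place named by two or more index cases would be a candidate
# mutual contact or mutual place (a possible transmission hub).
for node in G.nodes:
    if not node.startswith("index_"):
        degree = G.degree(node)
        if degree >= 2:
            print(f"{node}: linked to {degree} index cases")

# Density: how closely index cases, contacts and places are connected overall.
print("network density:", round(nx.density(G), 3))
```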
Two additional TB cases were detected among household contacts (including the confirmed drug-susceptible case and the child with unknown drug resistance status). Moreover, two pairs of index cases (four patients) were found to be contacts of each other (diagnosed before study started; Fig. 3). No genotyping was done to look at genetic relatedness. Discussion We conducted social network analysis to be able to detect more MDR-TB cases than through passive case finding. Enrolling 99 MDR-TB cases and their contacts did not reveal new MDR-TB cases. One child with (probable) MDR-TB meningitis was missed by our study. Links between MDR-TB cases were found in two instances but did not lead to the detection of new cases. Only one of seventeen high-risk places agreed to participate in the screening, resulting in one additional presumed MDR-TB case identified. The likely reasons why we did not detect additional MDR-TB cases was limited participation of contacts in TB screening. Participation was reasonable (~80%) in the first screening, but dropped considerably to~40% at the second screening. Observation from discussions with staff suggest that participation may have been poor due to the following elements:1) perceived stigma among patients, contacts and high risk places and reluctance to cooperate and to reveal correct contact information, 2) low awareness about TB and its transmission, especially among contacts with low levels of education or contacts belonging to vulnerable groups such as drug users, and 3) participation in the second screening may not have been perceived as in their interest if they were busy and not diagnosed with TB from the first screening. Furthermore, the relatively short follow-up period in our study of 6 months may be another reason. Studies have shown that active TB usually develops within five years after initial infection [13][14][15], and predominantly [16]. The median time from infection to symptoms in secondary cases is estimated to be 1.3 years [16]. Household contacts of MDR-TB patients are considered to be at higher risk to get infected than household members of drug-susceptible TB cases [17,18]. This is because, even though MDR-TB isolates are usually less transmissible [19], family members of MDR-TB cases tend to have been exposed for a longer duration due to delays in correct treatment initiation [17,19]. Therefore, contact investigation is useful for early case detection and treatment to reduce transmission of MDR-TB [4,18]. The pick-up rate for MDR-TB cases may also increase by improving the sensitivity of our diagnostic approach. Future diagnostic approaches should consider: (i) to ensure the quality of sputum and chest X-ray, (ii) expanding TB clinical assessment criteria to any cough and other tuberculosis-related symptoms like chest pain, weight loss, lack of appetite, weakness or fatigue, chills, fever and night sweats (iii) including MTB culture with higher sensitivity [20] as an add-on test following Xpert result, (iv) using multiple rather than a single specimen, to increase the diagnostic yield of Xpert MTB/ RIF [20]. However, resources in low and middle-income countries (LMICs) are generally limited and therefore it may not be feasible to implement all these recommendations. A limitation of our study using Xpert MTB/RIF is that only TB and rifampicin resistance is diagnosed as an indicator for MDR-TB [20]. 
In Vietnam we generally also perform culture and additional sensitivity testing of drugs included in first and second line regimens to confirm MDR-TB and tailor treatment. There is a need to develop a system to identify and manage contacts of MDR-TB cases better, including providing of adequate instructions, and possibly screening. We recommend to use a simpler questionnaire rather than a comprehensive social network approach. This is a more efficient and likely more cost-effective means for MDR-TB case detection in Vietnam and other low and middle-income countries. Information about household contacts and those who have the most frequent contact with patients such as close friends and colleagues should be collected. Depending on available resources, screening may start with a clinical assessment to determine if the person has TB-related symptoms, followed by chest X-ray and Gene Xpert MTB/RIF. This should be combined with health education, i.e. inform contacts with what symptoms they need to come for TB screening. Health education about TB, MDR-TB and its transmission among the general population should be more focused, and results of this study may help in prioritizing risk groups. It is needed to enhance awareness among contacts of MDR-TB and their compliance with screening programmes. Particular attention should be paid to enhance screening of non-household contacts as some studies show the incidence of TB among these contacts to be higher compared with household contacts [5][6][7]. Furthermore, we found a lower screening participation of male contacts in our study, which is in line with findings from our national prevalence survey. Therefore, more efforts are needed to find male tuberculosis patients [21]. Currently, about 50% of the estimated MDR-TB cases in Vietnam have not been previously treated, reflecting significant transmission of MDR-TB among contacts [20][21][22][23][24]. However, the routine case finding strategy for detection of MDR-TB during our study period mainly focused on previously treated TB cases Additional file 1 [10], with only 14% of MDR-TB patients diagnosed being treatment-naive. It is important for Vietnam to pay more attention to management of MDR-TB among new cases including close monitoring of MDR-TB contacts. Given the low yield of MDR-TB case detection from our study, beyond improving contact investigation, other potential groups should be considered to address 50% of undetected MDR-TB burden in Vietnam. Furthermore, diagnostic screening strategies should be enhanced. Approaches can be applied depending on the resources available as follows: microbiological testing by Gene Xpert MTB/RIF for (i) all newly detected TB patients including smear positive and negative (ii) presumptive TB cases who had/have contact with MDR-TB patients. These contacts can be identified by healthcare workers through interviewing TB presumptive cases who come to their health facility for health check up, and (iii) all TB presumptive. While MDR-TB can be cured, social barriers to MDR-TB treatment could be an important factor that needs to be taken into account when designing and implementing a contact tracing program [17]. Home visits by contact investigators are an effective method for interviewing household contacts and encouraging them to be assessed for TB [4]. 
By visiting index patients and their household contacts, the investigator is able to observe the housing conditions, perform an environmental assessment for infection control measures, and discuss and evaluate the risk of exposure, as well as provide counseling to household contacts on symptoms suggestive of TB and when and where to seek health care and social support [4]. Even though we did not find any new MDR-TB case directly through our social network analysis, this approach may still be worth consideration if the key limitations of our study are addressed. The screening process should be simplified, well organized, to increase the participation of contacts, extend the time of follow-up of contacts, and improve diagnostic screening strategy. Given that the low participation rate in our study may have limited case detection, it is recommended to expand health education on transmission of TB and MDR-TB among contacts, reduce stigma attached to TB, improve communication skills of health staff, and increase staff resources to trace contacts and get them involved in the screening. Conclusion In this study of nearly 100 MDR-TB index cases we were not able to find new MDR-TB cases using household contact screening and social network analysis within a follow-up period of 6 months. Screening of identified contacts was complicated by refusals. More staff resources may be needed and better communication skills and community awareness, collaboration of non NTP health facilities is needed to enhance participation and improve MDR-TB case detection.
2019-03-09T00:14:34.979Z
2019-02-28T00:00:00.000
{ "year": 2019, "sha1": "ffc072b5c8a318a0cdba8608cb57c83a3bc1611e", "oa_license": "CCBY", "oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/s12889-019-6573-z", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ffc072b5c8a318a0cdba8608cb57c83a3bc1611e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
222146468
pes2o/s2orc
v3-fos-license
CD133 Role in Oral Carcinogenesis Objective: to investigate CD133 immunoexpression, cancer stem cells marker, in oral epithelial dysplasias (OEDs) and oral squamous cells carcinomas (OSCCs) and understandits possible involvement in the malignant transformation process of these lesions and to better elucidate their biological behavior. Material and methods: Tissue samples of 15 cases of OSCCs and 15 OEDs were subjected to CD133 antibody immunohistochemistry reactions. The analysis used quantitative parameters (number of immunostained cells regardless of immunostaining sublocations). Results: All samples of OSCCs and OEDs showed positive immunostaining, with no significant difference between these groups (p = 0.283). We did not observe statistical difference between the degree of dysplasia and the amount of CD133+ cells (p = 0.899). CD133 immunoexpression showed no association with the OEDs and OSCCs sites. It was observed that nuclear and cytoplasmic immunostaining was more evident with the progression of the malignant process. Conclusion: It is suggested that the CD133 cellular localization together with the histopathological criteria of OEDs classification can contribute to provide more concrete indications about the oral carcinogenesis process. Introduction Oral cancer has been gaining worldwide attention for being the 11th most common carcinoma around the globe (D'Souza and Addepalli, 2018). In Brazil, more than 14 thousand new cases of this disease are estimated for the 2018-2019 biennium, being the 5th most common cancer in me n (INCA,2018). This incidence represents 2.6% of all registered cancers in Brazil, one of the highest in the world and of significant expressiveness in Latin America CD133 Role in Oral Carcinogenesis risk factors for PMDs are the same for OSCCs (Porter et al., 2018). However, not all PMDs will undergo malignant transformation, requiring histopathological findings of Oral Epithelial Dysplasia (OED) in histopathological examination. OEDs have been categorized into three major sections: minimum, moderate and critical (D'Souza and Addepalli, 2018), with higher chances of malignant transformation in the last two of them. However, there is no molecular or even histopathological pathognomonic hallmark that can predict malignant transformation of PMDs (Ganesh et al., 2018). Recent research suggests that Cancer Stem Cells (CSC) hold the key to unlocking effective strategies to curb initiation and growth of several malignant neoplasms (de Moraes et al., 2017), including OSCCs (Saluja et al., 2019). The CSCs are identified by some surface markers, among them CD133, also called Prominin-1. CD133 consists of an N-terminal extracellular domain, five transmembrane domains, and an intracellular cytoplasmic tail with functional tyrosine kinase sites (Udeabor et al., 2012), which will interact with distinct cytoplasmic partners, regulating signal molecules and changing the cancer metabolism, thus promoting the CSC properties (Jang et al., 2017). The identification of CSCs in OEDs and OSCCs may help in understanding the role of CSCs in the oral carcinogenesis process. It is expected that CSC biomarkers can be used together with histopathological parameters indicative of malignancy in order to contribute to a more accurate diagnosis of the risk of malignant transformation, since the association between the degree of oral dysplasia and malignant transformation remains debatable (Speight et al, 2018). 
Thus, this research aimed to evaluate CSCs participation, through CD133 immunoexpression, in process of oral carcinogenesis through OEDs of different degrees and OSCCs. Materials and Methods This study consisted of an observational, analytical, and cross-sectional study, using the diagnosis and immunomolecular analysis of malignant and premalignant lesions. We analyzed 15 cases of OEDs and 15 of OSCCs (size of the sample compatible with the annual demand of patients in the service). All samples were embedded in paraffin and obtained from incisional biopsies from patients of the Stomatology Clinic of the Federal University of Ceará, Sobral Campus. Samples were collected from October 2013 to October 2014. Histomorphometric analysis Specimens were fixed in 10% formalin, embedded in paraffin, sectioned at 5µm, stained with hematoxylin-eosin and mounted on glass slides for histopathological analysis. OEDs specimens were classified according to WHO classification (Barnes et al, 2005). The results of this classification were as follows: 11 were mild dysplasia, 02 were moderate, and 02 were severe. Immunohistochemical reaction For immunohistochemistry, 3µm sections were cut from paraffin embedded material. All tissue samples were processed using standard methods and serial sections were used for immunohistochemical reaction (IHC). After deparaffinization and rehydration, slides were subjected to heat-induced epitope retrieval (citrate in pH 6,0, for 30 minutes at 99ºC) in a Pascal water bath (DakoCytomation). Endogenous peroxidase activity was blocked for 30 min with 0.3% hydrogen peroxide followed by 1% protein blocking for 10 min. The sections were incubated with primary antibodies anti-CD133 (GTX60471, GeneTex ® , San Antonio, TX, USA) for 90 minutes, at room temperature, in the dilution of 1:650. The samples were then incubated with the secondary antibody LSAB Kit (DAKO ® , Carpentaria, CA, USA) for 10 min at room temperature. Next, development was performed using a chromogen solution prepared with DAB (3-30-diaminobenzidine) for 5 min in a dark chamber (DAKO ® , Carpentaria, CA, USA) and Harris hematoxylin was used for counter staining. Finally, coverslips were placed on the samples on glass slides, which were examined under a Leica DM 2000 optical microscope. A positive control (breast carcinoma) was included in each reaction along with the samples. A negative control lacking primary antibody was performed in parallel with incubation of the experimental samples. Quantitative analysis Quantitative analysis of CD133 glycoprotein expression was performed by percentage of stained cells in 5 random areas as examined in X400 magnification using the software Image J (Image and Processing Analysis in Java -Rasband, W.S., ImageJ, National Institutes of Health, Bethesda, Maryland, USA, http:// rsb.info.nih.gov/ij/, 1997-2004) (Adapted from Ravindran and Dervarage, 2012). In case of disagreement, the slides were re-evaluated by the 2 observers until a consensus was reached. Statistical analysis The analysis of CD133 positive cells was submitted to the Kolmogorov-Smirnov normality test, expressed as mean ± standard error of the mean (parametric data) and compared between groups by Student's t test and ANOVA followed by the Bonferroni post-test. Significance index p <0.05 was adopted for all evaluations performed in GraphPad Prism version 5.0 software for Windows®. Results The sample consisted of 30 cases, 15 OED and 15 OSCC samples. 
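As a hedged illustration of the statistical comparisons described in the Methods (not a reanalysis of the study data), the following Python/SciPy sketch applies an unpaired Student's t-test to OED versus OSCC percentages of CD133-positive cells and a one-way ANOVA across dysplasia grades; all values are invented placeholders.

```python
# Invented placeholder values, not the study's data.
from scipy import stats

oscc = [85.0, 78.5, 90.2, 74.1, 88.3]   # % CD133+ cells per OSCC case
oed = [80.1, 60.3, 92.0, 70.5, 84.7]    # % CD133+ cells per OED case

t_stat, p_ttest = stats.ttest_ind(oscc, oed)              # unpaired Student's t-test
print(f"OED vs OSCC: t = {t_stat:.2f}, p = {p_ttest:.3f}")

mild = [78.0, 60.2, 95.1]
moderate = [72.7, 84.0]
severe = [80.1, 78.3]
f_stat, p_anova = stats.f_oneway(mild, moderate, severe)  # one-way ANOVA
print(f"dysplasia grades: F = {f_stat:.2f}, p = {p_anova:.3f}")
```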
The clinical characterization (clinical profile) of the studied sample was performed. Of the 15 cases of OSCC, it was evidenced that males were the most affected, with 60% of the cases. Age ranged from 33 to 83 years, with a mean age of 50.8 years, and the disease was more prevalent in the fourth and fifth decades of life (66.6% of cases). The tongue region was the most prevalent anatomical site, affecting approximately 40% of the cases. The analysis performed in the 15 cases of OED showed that the female sex was the most prevalent (73.3%). Age ranged between 17 and 85 years, with most cases in the seventh decade of life (33.3%). The region of jugal mucosa was the most frequent site of involvement, corresponding to 33.3% of the OED cases analyzed. Immunohistochemical analysis of CD133 revealed nuclear and cytoplasmic immunostaining in OEDs and OSCCs of all evaluated specimens. Interestingly, nuclear immunostaining as well as intensity increased according to the progression of the malignancy process (Figure 1 A, B, C and D). Regarding the immunostaining profile, CD133 positive expression was observed in all samples. The CD133 immunoexpression was 82.6 ± 7.2 and 77.6 ± 16.0 in patients with OSCCs and OEDs, respectively. However, there was no statistically significant difference between the groups studied (p = 0.283) (Graphic 1). Regarding the degree of dysplasia, it was observed that 78.0 ± 18.4 of epithelial cells showed positive immunostaining in cases of mild dysplasia, while 72.7 ± 11.4 and 80.1 ± 1.8 showed positive marking in cases of moderate and severe dysplasia, respectively, with no statistically significant difference among the three groups (Graphic 2). The percentage of immunostaining for CD133 in relation to sex did not show any statistically significant results: male sex (OED 76.4 ± 10.9 and OSCC 82.9 ± 6.3) (p = 0.526) and female sex (OED: 78.0 ± 17.9 and OSCC 82.1 ± 8.9) (p = 0.588) (Graphic 4). Graphic 1. Immunostaining Profile of OED and OSCC Cases. There was no statistically significant difference between the groups (unpaired t-test, p = 0.283). Graphic 2. Immunostaining Profile of Dysplasia Degree. There was no statistically significant difference between the groups (one-way ANOVA test, p = 0.899). Discussion This research found a higher OSCC prevalence among males (60%). The highest incidence of oral cancer in men is demonstrated in several studies (Oliveira et al., 2015; Tandon et al., 2017). This may be because of the greater exposure of men to the risk factors. In addition, gender differences in oral cancer may reflect different cultural behavior and lifestyle factors. The occurrence of oral cancer increases with age in all parts of the world. We detected that the 4th and 5th decades of life were the age groups most affected by OSCCs. However, an alarming increase in the incidence of oral cancers among younger adults has been reported. This happens due to an increase in the usage of tobacco (smoked or chewed) in young adults in comparison to older individuals (Abdulla et al., 2018). Other risk factors that lead to OSCC development in young patients include the influence of environmental carcinogens, stress and viral infections. The oral tongue was the region most affected by OSCC in this study, similarly to other studies (Rivera, 2015). The predilection for this region may be associated with the pooling of carcinogens in saliva, creating risk zones (Brandizzi et al., 2008). However, geographic differences may be related to the intraoral distribution of OSCC.
This happens, for example, in India, where most cases of OSCCs affect the buccal mucosa (Tandon et al., 2017) due to tobacco use, particularly chewing. Among the cases of OED analyzed, most of them affected the female sex (73.3%). PMDs are less common in females; however, when present, they have a higher risk of malignant transformation. It is still unclear why women are more predisposed to malignant transformation compared with men (Speight et al., 2018). Most of the OED patients were in the 7th decade of life in the present study. In a large Swedish study, the highest malignant transformation rate was found in those aged 70-89 years (Napier et al., 2003). Therefore, special attention should be given to patients with OEDs in this age group. The region of jugal mucosa was the most frequent site of involvement in this study. The oral tongue and the floor of the mouth are the sites of major involvement of the OEDs (Gandara-Vila et al., 2018; Speight et al., 2018), and are also considered anatomical sites of high risk for malignant transformation; this location is almost always related to etiologic factors and therefore may vary by geographic location and local habits (Speight et al., 2018). Recent experimental evidence shows that oral cancer is initiated through CSCs, which play a crucial role in cancer malignant progression, therapeutic resistance and recurrence (Baillie et al., 2017). Oral CSCs have been isolated in various forms, among them using specific biomarkers. CD133 is a marker that has been gaining popularity for OSCC identification and it was first described as a hematopoietic stem cell marker (Ravindran and Devaraj, 2012), but currently it can be considered a CSC marker of several solid tumors, such as breast, gastric, pulmonary, hepatic, prostate, pancreas and thyroid (Okamoto et al., 2013; Gao et al., 2014; Yu et al., 2015). However, a single biomarker is not able to unambiguously identify CSCs, since it is likely that there is an overlapping hierarchy of subsets of CSC populations (Baillie et al., 2017). Graphic 3. CD133 Positive Cells Showed No Association with the Different OED and OSCC Sites (One-Way ANOVA Test). Graphic 4. There Was No Statistical Difference Regarding CD133 Immunostaining in OED and OSCC Cases when Comparing the Patients' Genders (Two-way ANOVA Test). Consequently, most of the studies that investigate and characterize CSCs use a combination of biomarkers of this cellular type (Baillie et al., 2017). The markers already described for CSCs of OSCCs are OCT4, NANOG, SOX2, STAT3, CD44, CD24, Musashi-1, ALDH, components of the renin-angiotensin system, and CD29 (Liu et al., 2013; Baillie et al., 2017; de Moraes et al., 2017). We chose CD133 to identify oral CSCs because it is more highly expressed in the CSC population compared to the parental normal population (Pozzi et al., 2015). In addition, although CD133+ CSCs are present in OSCCs, these cells are preferentially expressed in colon, brain, and lung cancer (de Moraes et al., 2017), with little information on the role of CD133+ CSCs in oral carcinogenesis. We found that all cases of OSCCs and OEDs showed CD133 positive marking, with a large percentage of cells immunopositive for both lesions (Figure 1). However, although these cells are the foundation of tumorigenesis, they represent only a small portion of tumor cells (Wang et al., 2016).
The frequency of CSCs CD133+ appears to be variable among OSCCs and Head and Neck Squamous Cell Carcinoma (HNSCCs) with cases of low (Wang et al., 2016;de Moraes et al., 2017) and high cell CD133+ expression (Ravindran and Devaraj, 2012;Liu et al., 2013;Manelli et al., 2015). We believe that these conflicting results occur because the tumor samples have different degrees of malignancy and clinical outcome, since the presence of CSCs have been related to tumors with worse prognosis, recurrent disease, treatment failure and metastasis (Satpute et al., 2013;Jang et al., 2017). Research with OSCCs shows that stage III and IV tumors have higher amounts of CD133+ CSCs (Singh et al., 2018). In addition, the localization of overexpressed CD133 at nucleus and cytoplasm is related to poor prognosis (Huang et al., 2015). We detected high cytoplasmic and nuclear expression in the OSCC samples (Figure 1), which may also explain why our results are diverging from some studies. We did not detect statistically significant differences in CD133 immunoexpression between OSCCs and OEDs (Graphic 1) nor between different degrees of OEDs (Graphic 2). However, it is quite evident that the marking pattern changes between the different lesions, with a higher nuclear and cytoplasmic marking in the more advanced cases of carcinogenesis ( Figure 1A, 1B, 1C and 1D). The release of CD133 from the plasma membrane into the cytoplasm is related to the uptake of glucose under conditions of deprivation (Jang et al., 2017). Therefore, CD133 signaling in the cytoplasm is likely to potentiate the survival of tumor cells under conditions of nutrient restriction or stress (Jang et al., 2017), conditions known to be linked to the oral carcinogenesis process. Research shows that CD133+ CSCs are related to OSCCs of worse staging point to the involvement of CD133+ CSCs in the process of transformation of premalignant oral lesions (Ravindran and Devaraj, 2012;Liu et al., 2013). In addition, CSC CD133+ are present in most OEDs that have undergone malignant transformation for OSCCs (Liu et al., 2013). Thus, CD133 serves as a predictor to identify oral premalignant lesions with a high risk of oral cancer development. We found no difference in expression between CD133 and the localization sites of OSCCs and OEDs (Graphic 3). Similar results were also not observed in other studies (Ravindran and Devaraj, 2012;Liu et al., 2103), suggesting that the expression of this protein is independent of the location of the lesion in the oral cavity. This is probably because the anatomical locations covered in this study (tongue, palate, buccal mucosa and buccal floor) are susceptible to the same etiologic factors of carcinogenesis (tobacco smoke and alcohol consumption). This study has limitations like any other purely immunohistochemistry research, which limited the authors to realize greater conjectures. In addition, despite the size of the sample match the sample calculation, we believe that more and more homogeneous sample could modify the results. In conclusion, the presence of multiple CSC subtypes within OSCCs making investigation of these cellular types reliant on the use of multiple markers. This study has limitations because it uses only CD133 as a biomarker to identify oral CSCs, but provides important evidence on the cellular location of CD133 to be linked to the oral carcinogenesis process. 
Thus, despite the histologic grading of OED currently being the most important indicator for determining the risk of malignant transformation, its histologic classification may involve subjectivity. Thus, the cellular localization of CD133 may provide evidence of development of CSCs. This tool, along with the histopathological findings, may better identify premalignant oral lesions with increased chances of malignancy.
2020-10-06T13:33:19.987Z
2020-09-01T00:00:00.000
{ "year": 2020, "sha1": "cc4d89a601bacbddeb1b0175e3ccb5b6add1d008", "oa_license": "CCBY", "oa_url": "http://journal.waocp.org/article_89248_cf93a790b4ac1aa97e55bbd402b8161a.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "398c084ed08c39d422bee28adf64937d7115fcd1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
1553260
pes2o/s2orc
v3-fos-license
Relative Radiometric Normalization of Multitemporal Images A correct radiometric normalization between both images is fundamental for change detection. The MAD method and its IR-MAD extension, in an implementation on multispectral aerial images, are described in this paper. I. INTRODUCTION This paper analyzes the results of applying an automatic method of radiometric normalization between two multitemporal images of the same zone. This radiometric adjustment is part of the preprocessing for image change detection. Any surface in two images recorded with the same sensor should ideally appear with similar digital-level values, but in practice this does not happen, for several reasons, among them different atmospheric conditions and different lighting on the different recording dates. That is the reason why pixels from the same terrain can show different radiance values and, therefore, different digital levels. In satellite images, radiometric normalization must determine ground absolute reflectivity through correction algorithms, as well as the atmospheric properties at the moment of image acquisition [1]. For aerial images (in which atmospheric effects are not as prominent as in satellite images), and for many change detection applications, a linear radiometric normalization of the multitemporal images is enough. To this end, one of the images is taken as reference and the necessary radiometric correction is applied to the other in order to match the tone of its pixels with those of the reference image. The behaviour of the spectral signal of a reflective Lambertian surface at times t1 and t2 can be accepted as a linear function. This way, the pixels of the image at time t1 must be corrected to achieve radiometric normalization: x'_k = a_k x_k + b_k, where a_k and b_k are the radiometric normalization constants for band k. According to the values taken by these coefficients, also called gain and bias [2], different normalization results will be obtained. Different methods have been analyzed in similar studies [3] and ranked from greater to lesser effectiveness. In aerial images it can be difficult to obtain an absolute normalization due to the lack of atmospheric information associated with the image. Relative normalization based on the intrinsic radiometric information of the images is an alternative method, in which it is not necessary to know the absolute reflectivity of the images [4]. In order to implement the relative radiometric normalization, it is assumed that the relationship between the radiance obtained by the sensors at two different instants from regions with constant reflectivity can be approximated by a linear function. The critical issue of the method is the determination of time-invariant characteristics which can be the basis of the normalization. The MAD (Multivariate Alteration Detection) transformation applied to both images from different times is invariant to arbitrary linear transformations of the intensities of the pixels involved in the transformation. That is the reason why, in the implementation of the change detection method (MAD), preprocessing with radiometric normalization is superfluous. This work proposes the combined use of the MAD transformation applied to non-normalized multitemporal images to select no-change pixels and then their use for a relative radiometric normalization. This is a simple, quick and completely automatic procedure, compared with methods requiring manual selection of characteristics that do not change with time.
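As an illustration of the per-band linear correction just introduced, the following is a minimal Python/NumPy sketch, not the implementation used in this work (which relies on IDL/ENVI): it fits the gain a_k and bias b_k by ordinary least squares on a set of already-selected invariant pixels, whereas tools such as RADCAL use orthogonal regression. Array names and shapes are illustrative assumptions.

```python
# Minimal sketch with illustrative array names; a mask of invariant (no-change)
# pixels is assumed to be available already.
import numpy as np

def fit_gain_bias(subject_band, reference_band):
    """Least-squares fit of gain a and bias b so that a*subject + b ~ reference,
    using the invariant-pixel samples of one spectral band."""
    a, b = np.polyfit(subject_band, reference_band, deg=1)
    return a, b

def normalize_image(subject_img, reference_img, invariant_mask):
    """Apply the per-band correction x'_k = a_k*x_k + b_k to the whole subject
    image; images have shape (bands, rows, cols)."""
    normalized = np.empty_like(subject_img, dtype=float)
    for k in range(subject_img.shape[0]):
        a, b = fit_gain_bias(subject_img[k][invariant_mask],
                             reference_img[k][invariant_mask])
        normalized[k] = a * subject_img[k] + b
    return normalized
```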
Upon completion, this method could be combined, if the results are not satisfactory under visual exploration of the radiometric changes in the normalized image, with a histogram-based transformation that modifies the digital level of each pixel of the image being corrected, taking one of the two images as reference, so that the final histogram of the image is similar to the histogram chosen as base. Similar histograms mean that the mean brightness, contrast and distribution of digital levels are also similar. The IDL programming language has been used to implement this method in the ENVI software environment along with the RADCAL-RUN extension. The method requires a previous transformation, IR-MAD (a modification of the MAD transformation [5]), which improves the location of no-change pixels. The quality of the normalized images is evaluated by the joint use of a t-test and an F-test in order to compare the mean and the variance, respectively. The MAD change detection procedure will be explained concisely in section II. II. THE MAD AND IR-MAD TRANSFORMATIONS The Multivariate Alteration Detection method (MAD) is a change analysis method for multispectral images originally proposed by [6]. The purpose of this method is that the data of the two bitemporal multispectral images are transformed in such a way that the maximum variance in every band is explained at the same time in the difference image. This transformation generates a set of mutually orthogonal difference images (MAD components), which have the same spectral dimension as the original multispectral images that were transformed. The method is based on correlation analysis. Linear combinations are obtained from the two data sets in such a way that the correlation between the first pair of linear combinations is the largest; this is called the first canonical correlation, and the two corresponding linear combinations are the first canonical components. The transformation is as follows [7]: first, the two N-dimensional multispectral images (where N is the number of bands) of a scene acquired at times t1 and t2 are represented by two random vectors, called X and Y, assumed to follow a Gaussian (normal) distribution. A. Canonical Correlation Analysis This analysis includes a linear transformation of each set of multispectral images such that, instead of being ordered by wavelength, the transformed components are ordered according to their mutual correlation. The pair with the greatest mutual correlation between the images is called the first canonical variable (CV), followed in order by the second, third, etc. B. MAD transformation Once the CCA has been presented in the last paragraph, the MAD transformation is defined as the set of differences between the corresponding pairs of canonical variates, MAD_i = U_i - V_i. The first MAD component has maximum variance in the intensity of its pixels. The absolute value of the last MAD component always shows the domain of the greatest change undergone. The correlation between the input bands and the MAD components makes the interpretation of the mode of change easier. For 12 input bands (this is the case with two multitemporal LANDSAT images) the output is 6 MAD components, with which, after the selection of a significant change threshold, the change/no-change image can be represented. Depending on the type of change present, any of its components may exhibit significant change information. In fact, one of the more interesting aspects of this method is that it orders different change categories in different uncorrelated components of the image.
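The following is a hedged sketch of the MAD idea, using scikit-learn's CCA as a stand-in for the classical canonical correlation analysis used in the MAD literature (the two can differ numerically): the MAD variates are obtained as the differences of the paired canonical variates of the two acquisition dates. The toy data are invented.

```python
# Toy data; scikit-learn's CCA stands in for classical canonical correlation
# analysis, and the MAD variates are differences of paired canonical variates.
import numpy as np
from sklearn.cross_decomposition import CCA

def mad_components(img_t1, img_t2, n_components=3):
    """img_t1, img_t2: (n_pixels, n_bands) samples from the two dates.
    Returns the (n_pixels, n_components) MAD variates."""
    cca = CCA(n_components=n_components)
    cca.fit(img_t1, img_t2)
    u, v = cca.transform(img_t1, img_t2)   # paired canonical variates
    return u - v                           # MAD variates carry the change signal

rng = np.random.default_rng(0)
t1 = rng.normal(size=(1000, 3))
t2 = 0.8 * t1 + rng.normal(scale=0.3, size=(1000, 3))   # largely unchanged scene
mads = mad_components(t1, t2)
print(mads.std(axis=0))   # spread of each MAD component
```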
The MAD transformation is invariant to linear transformations applied to the original image (affine transformations). This also means that it is invariant to radiometric and atmospheric corrections that could be applied. That is why it is considered a very robust method to detect changes. This invariance offers the possibility of using the MAD transformation to implement automatically a relative radiometric normalization of multitemporal images, as described subsequently. C. Iteratively reweighted multivariate alteration detection (IR-MAD) This transformation can be implemented in an iterative scheme in which, when the means and covariance matrices are calculated for the next iteration of the MAD transformation, weights are applied to the observations according to the probability of no-change determined in the preceding iteration. It all begins with the original MAD transformation by assigning, for example, the same weight w = 1 to every pixel. In order to choose the weight of pixel j in the next iteration, w_j, the variable Z_j is used to represent the sum of the squares of the standardized MAD components, Z_j = sum_i (MAD_ij / sigma_MAD_i)^2, which for no-change observations approximately follows a chi-square distribution with N degrees of freedom. The images used in this work are multispectral images with three bands corresponding to the visible part of the electromagnetic spectrum. The images were scanned with the Zeiss/Imaging photogrammetric scanner at a resolution of 21 microns. After the aerotriangulation of the set of images, orthophotos were produced with a GSD of 1 meter using the DIGI3D software. Visually, in figure 1, the changes experienced over those years can be observed, as well as the difference in shades between the images. IV. RADIOMETRIC NORMALIZATION In order to implement the radiometric normalization, the RADCAL_RUN extension [4] developed by Dr. M. J. Canty and programmed in IDL over the digital image processing software ENVI 4.7 is used. The image of the year 1995 has been used as the reference image. With the aim of carrying out the radiometric normalization, those pixels whose probability of being no-change pixels exceeds a decision threshold t (usually 95%) are selected. The steps involved in the radiometric normalization are the following [7]: choose weights equal to one for every pixel in the bitemporal scene; then the IR-MAD method is applied to the images. The development of the canonical correlations over the iterations is shown in figure 2. As can be observed, the first iterations are the most important ones; the process stabilizes from the seventh iteration on. In order to evaluate the normalization process, the program holds out one in every three no-change pixels to carry out a reliability test. The mean and the variance are calculated before and after the normalization, together with statistical hypothesis tests on the invariant pixels in both images. 1794 pixels were used for the normalization and 898 pixels for the statistical tests. The results of the Student test for the mean in the red, green, and blue bands are -0.0077, -0.5409 and 0.1284, respectively. With these values the confidence interval has a p-value between 0.89 and 0.99 for the red and blue bands and 0.58 for the green band. As can be seen in figure 4 (in red), the rejection region of the test covers almost all the distribution. In this case we reject the radiometric normalization. By means of a visual analysis the bad result is confirmed, because the radiometric values of the reference image (time 1) and the normalized image (time 2) are not equalized. The process is repeated, but this time with the a priori condition that the probability of belonging to the no-change pixels is 99%.
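Before turning to the results of that second run, a hedged Python sketch (not the RADCAL/IDL code) of two computations described above may help: the chi-square-based no-change weights derived from the sum of squared standardized MAD components, and the t-test/F-test applied to the held-out invariant pixels to accept or reject the normalization of one band. The exact test variants used by RADCAL may differ.

```python
# Illustrative only; the RADCAL implementation in IDL/ENVI may use different
# variants (e.g., orthogonal regression, a different t-test formulation).
import numpy as np
from scipy import stats

def no_change_weights(mads):
    """mads: (n_pixels, n_components) MAD variates. Z = sum of squared standardized
    components is approximately chi-square (dof = n_components) for no-change
    pixels; the weight is the right-tail probability P(chi2 > Z)."""
    z = np.sum((mads / mads.std(axis=0)) ** 2, axis=1)
    return 1.0 - stats.chi2.cdf(z, df=mads.shape[1])

def normalization_accepted(reference, normalized, alpha=0.05):
    """t-test for equal means and F-test for equal variances on held-out invariant
    pixels of one band; True means the normalization is not rejected."""
    _, p_t = stats.ttest_rel(reference, normalized)       # paired test, same pixels
    f_stat = np.var(reference, ddof=1) / np.var(normalized, ddof=1)
    dof = len(reference) - 1
    p_f = 2 * min(stats.f.cdf(f_stat, dof, dof), 1 - stats.f.cdf(f_stat, dof, dof))
    return p_t > alpha and p_f > alpha
```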
With this premise, the number of pixels used for the radiometric normalization decreased considerably, down to 368, and only 184 were used for the tests. This means that the degrees of freedom available for the calculation of the confidence intervals have diminished. The results can be seen in table 2. They have clearly improved with respect to the previous test, and the radiometric normalization can therefore be accepted.

V. CHANGE DETECTION

One application of change detection, among others, is the updating of geographic databases. According to [8], the two main approaches to updating a database are: first, to gradually set up a new database that replaces the old one; and second, to detect, identify, and update only the changes. The latter option is faster and more convenient, which is why automatic change detection is the first and most important step in the updating of geographic databases. The MAD transformation of the images generates three components. In [9] and [10] the MAD method is used as a change detection technique for multispectral satellite images.

VI. CONCLUSIONS

Radiometric normalization of multitemporal multispectral images using the IR-MAD transformation gives good results. The transformation selects invariant pixels even in the presence of changed pixels. The statistics associated with the applied transformation and the threshold t, tables 1 and 2, serve to validate or reject the normalization. In the case of the aerial images used in this work, a final threshold t ≥ 99% was chosen to search for invariant pixels. Finally, the MAD transformation as a change detection method has highlighted the existing changes. The technique depends on the chosen threshold to highlight changes in each component; these thresholds have to be selected empirically, through observation by the image analyst.
2018-01-23T22:41:34.879Z
2010-12-01T00:00:00.000
{ "year": 2010, "sha1": "4d74317f07461bcef9efde0159a9df4f6141ef51", "oa_license": "CCBY", "oa_url": "http://www.ijimai.org/journal/sites/default/files/IJIMAI20101_3_9.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "92c6c7b56f50bf00e905519a84a5dcbb00e3a252", "s2fieldsofstudy": [ "Computer Science", "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Computer Science" ] }
244116974
pes2o/s2orc
v3-fos-license
First Order Transitions Between the Gapped Spin-Liquid and Ferrimagnetic Phases in (1/2,1/2,1) Mixed Diamond Chains with Bond Alternation The ground-state phases of mixed diamond chains with bond alternation $\delta$, and ($S, \tau^{(1)}, \tau^{(2)})=(1/2,1/2,1)$, where $S$ is the magnitude of vertex spins, and $\tau^{(1)}$ and $\tau^{(2)}$ are those of apical spins, are investigated. The two apical spins in each unit cell are connected by an exchange coupling $\lambda$. The exchange couplings between the apical spins and the vertex spins take the values $1+\delta$ and $1-\delta$ alternatingly. This model has an infinite number of local conservation laws. For large $\lambda$ and $\delta \neq 0$, the ground state is equivalent to that of the spin $1/2$ chain with bond alternation. Hence, the ground state is a gapped spin liquid. This energy gap vanishes for $\delta=0$. With the decrease of $\lambda$, the ground state undergoes a transition at $\lambda=\lambda_{\rm c0}(\delta)$ to a series of ferrimagnetic phases with a spontaneous magnetization $m_{\rm sp}=1/p$ per unit cell where $p$ is a positive integer. It is found that this transition is a first order transition for $\delta\neq 0$ with a discontinuous change in $m_{\rm sp}$, while no discontinuity is found for $\delta=0$. The critical behaviors of $m_{\rm sp}$ and $\lambda_{\rm c0}(\delta)$ around the critical point $(\delta,\lambda) =(0, \lambda_{\rm c0}(\delta))$ are also discussed analytically. Introduction In low-dimensional frustrated quantum magnets, the interplay of quantum fluctuation and frustration leads to the emergence of various exotic quantum phases. 1,2) In the one-dimensional cases, the quantized and partial ferrimagnetic phases are often realized in addition to the gapped and gapless spin-liquid phases. The diamond chain [3][4][5][6][7][8][9][10] is known as one of the simplest examples in which an interplay of quantum fluctuation and frustration leads to a wide variety of ground-state phases. Remarkably, this model has an infinite number of local conservation laws and the ground states can be classified by the corresponding quantum numbers. If the two apical spins have equal magnitudes, the pair of apical spins in each unit cell can form a nonmagnetic singlet dimer and the ground state is a direct product of the cluster ground states separated by singlet dimers. 3,4) Nevertheless, in addition to the spin cluster ground states, various ferrimagnetic states and strongly correlated nonmagnetic states such as the Haldane state are also found when the apical spins form magnetic dimers. In these cases, all the spins collectively form a correlated ground state over the whole chain. In the presence of various types of distortion, the spin cluster ground states also turn into highly correlated ground states. Extensive experimental studies have been also carried out on the magnetic properties of the natural mineral azurite that is regarded as an example of distorted spin-1/2 diamond chains. 9,10) On the other hand, if the magnitudes of the two api- * E-mail address: hida@mail.saitama-u.ac.jp cal spins are unequal, they cannot form a singlet dimer. Hence, all spins in the chain inevitably form a many-body correlated state. As a simple example of such cases, we investigated the mixed diamond chain with apical spins of magnitude 1 and 1/2, and vertex spins, 1/2 in Ref. 8. 
In addition to the nonmagnetic gapless spin liquid phase and the ferrimagnetic phase expected from the Lieb-Mattis theorem, 11) we found an infinite series of ferrimagnetic phases with spontaneous magnetizations m sp = 1/p where p is a positive integer (1 ≤ p < ∞). The ferrimagnetic phases for p ≥ 2 are accompanied by the spontaneous translational symmetry breakdown with spatial periodicities of p unit cells. The width and spontaneous magnetization of each ferrimagnetic phase tend to infinitesimal as λ tends to the ferrimagnetic-nonmagnetic transition point. Considering the infinitesimal energy scale around this transition point, these series of ferrimagnetic ground states are expected to be fragile against various perturbations such as lattice distortions, randomness, and finite temperature effects. In the present work, we investigate the effect of bond alternation δ on this model. This type of lattice distortion preserves the infinite number of conservation laws of the undistorted diamond chain, and a similar series of ferrimagnetic phases is found. For finite δ, however, it is verified that the maximal value of p is finite and the spontaneous magnetization m sp has a finite discontinuity at the transition point. This paper is organized as follows. In Sect. 2, the model Hamiltonian is presented. In Sect. 3, the numerical results for the spontaneous magnetization and the first-order transition points are presented. The critical behavior of the spontaneous magnetization and the first-order transition point for small δ is discussed analytically in Sect. 4. The last section is devoted to a summary and discussion.

Hamiltonian

We consider the Hamiltonian in which S l , τ (1) l and τ (2) l are spin operators with magnitudes S l = τ (1) l = 1/2 and τ (2) l = 1. The number of unit cells is denoted by L, and the total number of sites is 3L if the periodic boundary condition S L+1 ≡ S 1 is employed. Here, the parameters λ and δ control the frustration and bond alternation, respectively, as depicted in Fig. 1. The Hamiltonian (1) has a series of local conservation laws. To see this, we rewrite Eq. (1) in terms of the composite spin operators T l = τ (1) l + τ (2) l. It is then evident that the Hamiltonian commutes with every T 2 l , so that we have L conserved quantities T 2 l for all l. By defining the magnitude T l of the composite spin T l through T 2 l = T l (T l + 1), we have a set of good quantum numbers {T l ; l = 1, 2, ...L} where T l = 1/2 and 3/2. The total Hilbert space of the Hamiltonian (2) consists of separated subspaces, each of which is specified by a definite set of {T l }, i.e., a sequence of 1/2 and 3/2. A pair of apical spins with T l = 1/2 is called a doublet (hereafter abbreviated as d) and that with T l = 3/2 a quartet (abbreviated as q).

Ground states for λ ≫ 1

For λ ≫ 1, ∀l T l = 1/2. Hence, this model is equivalent to the spin-1/2 antiferromagnetic Heisenberg chain with bond alternation δ, whose ground state is a gapped spin liquid for δ ≠ 0. For δ = 0, it is a gapless spin liquid.

Ground states for λ ≪ 1

For λ ≪ 1, ∀l T l = 3/2. Hence, this model is equivalent to the spin-1/2-3/2 alternating antiferromagnetic Heisenberg chain whose ground state is a ferrimagnetic state with spontaneous magnetization m sp = 1 per unit cell according to the Lieb-Mattis theorem. 11) Here, m sp is defined as the magnetization per unit cell, where ⟨ ⟩ denotes the expectation value in the ground state with an infinitesimal symmetry breaking magnetic field in the z-direction.

Intermediate λ

We investigate this regime numerically using the infinite-size DMRG (iDMRG) method. The number of states χ kept in each subsystem in the iDMRG calculation ranged from 240 to 360.
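For reference, the following is a hedged LaTeX reconstruction of the Hamiltonian (1), the composite spins, and the definition of m sp, based only on the abstract and the description above; the assignment of the couplings 1 + δ and 1 − δ to the left and right bonds of each unit cell is an assumed convention.

```latex
\mathcal{H}=\sum_{l=1}^{L}\Big[(1+\delta)\,\bm{S}_{l}\!\cdot\!\big(\bm{\tau}^{(1)}_{l}+\bm{\tau}^{(2)}_{l}\big)
+(1-\delta)\,\big(\bm{\tau}^{(1)}_{l}+\bm{\tau}^{(2)}_{l}\big)\!\cdot\!\bm{S}_{l+1}
+\lambda\,\bm{\tau}^{(1)}_{l}\!\cdot\!\bm{\tau}^{(2)}_{l}\Big],
\qquad \bm{T}_{l}\equiv\bm{\tau}^{(1)}_{l}+\bm{\tau}^{(2)}_{l},

\mathcal{H}=\sum_{l=1}^{L}\Big[(1+\delta)\,\bm{S}_{l}\!\cdot\!\bm{T}_{l}
+(1-\delta)\,\bm{T}_{l}\!\cdot\!\bm{S}_{l+1}
+\frac{\lambda}{2}\Big(\bm{T}_{l}^{2}-\frac{3}{4}-2\Big)\Big],
\qquad [\mathcal{H},\bm{T}_{l}^{2}]=0,

m_{\rm sp}=\frac{1}{L}\sum_{l=1}^{L}\big\langle S^{z}_{l}+\tau^{(1)z}_{l}+\tau^{(2)z}_{l}\big\rangle .
```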
We calculate the ground-state energies per unit cell for different configurations of {T l } at the infinite-size fixed point and compare them to find the ground-state configuration. Although we start the iteration with the open boundary condition setting S L+1 = 0, the boundary condition is irrelevant since we measure the ground-state energies per unit cell for the middle segment of the chain at the infinite-size fixed point. However, it is not possible to carry out the calculation for all possible configurations of {T l } for infinite chains. As plausible candidates of the ground states, we consider the configurations (qd p−1 ) ∞ with m sp = 1/p where p takes positive integer values. The configuration (qd p−1 ) ∞ consists of an infinite array of segments qd p−1 with a length of p unit cells, as depicted in Fig. 2. This phase is called the (qd p−1 ) ∞ phase. As for the notations of the configurations and phases, we follow those of Ref. 8. These states are the ground states for δ = 0, 8) and we assume that other types of ground states do not emerge by the introduction of a bond alternation. The λ-dependence of m sp is shown in Fig. 3 for various values of δ. For δ ≠ 0, the spontaneous magnetization m sp jumps from m sp = 0 to a finite value m c sp (δ) at the nonmagnetic-ferrimagnetic transition point λ = λ c0 (δ). For δ = 0, m sp rises starting from an infinitesimal value corresponding to p → ∞ at the critical value of λ given by Eq. (6). 8) The boundary between the (qd p−1 ) ∞ phase with m sp = 1/p and the (qd p ) ∞ phase with m sp = 1/(p + 1) is denoted by λ c (p + 1, p, δ). The transition point λ c0 (δ) shifts to lower values with the increase of δ. It is plotted against δ 2/3 in Fig. 4, suggesting the asymptotic behavior λ c0 (δ) − λ c0 (0) ∝ δ 2/3 . The value of λ c0 (0) estimated from the extrapolation of λ c0 (δ) to δ → 0 is slightly larger than the value (6). However, the deviation is within the last digit of 0.001 and would be attributed to the ambiguity in the extrapolation procedure. The spontaneous magnetization m c sp just below the transition point λ = λ c0 (δ) is also plotted in Fig. 5.

Analytical approach

To examine the critical behavior for δ ∼ 0 and λ ∼ λ c0 (0) analytically, we start with the asymptotic behaviors of the ground-state energies of the nonmagnetic and ferrimagnetic phases for small δ.

Ground-state energy of the nonmagnetic phase with bond alternation

For small δ, the ground-state energy of the nonmagnetic phase E N (δ, L) can be obtained from the well-known result for the spin-1/2 Heisenberg chain with bond alternation δ; here ǫ 0 and C 0 are constants. The energy ǫ 0 corresponds to the ground-state energy of the spin-1/2 antiferromagnetic Heisenberg chain per site. The first term is the contribution from the energy of the apical spins in the d state. The last term, proportional to δ 4/3 , is the correction resulting from the bond alternation δ. The δ-dependence of this term is well established [12][13][14][15] on the basis of the SU(2) invariant conformal field theory, although the logarithmic correction is ignored in the present analysis.

Ground-state energy of the ferrimagnetic phase

In this case, the point δ = 0 is not a critical point. Hence, we can expand the ground-state energy with respect to δ. Considering that the energy is an even function of δ, the lowest order correction in δ is of the order of δ 2 , which is smaller than the corresponding correction ∝ δ 4/3 in the nonmagnetic phase for small δ. Hence, we neglect the effect of bond alternation in the ferrimagnetic phase.
The ground-state energy E F (p, δ, L) in the ferrimagnetic (qd p−1 ) ∞ phase can be expressed as a sum in which the first term is the contribution from the energy of the L/p pairs of apical spins in the q state and the second term is that from the remaining pairs of apical spins in the d state. The energy ǫ(p) is the ground-state energy per unit cell of the Heisenberg model with the configuration (qd p−1 ) ∞ . For large p, we expand ǫ(p) up to second order in 1/p, with expansion coefficients C 1 and C 2 . It should be noted that the limit p → ∞ corresponds to the uniform spin-1/2 antiferromagnetic Heisenberg chain. Hence, ǫ(p) should tend to 2ǫ 0 in this limit.

Phase transition points

The transition point λ c (p + 1, p, δ) between the (qd p−1 ) ∞ and (qd p ) ∞ phases is obtained by equating the ground-state energies of the two phases, which yields Eq. (12). Comparing (12) with the definition (6) of λ c0 (0), we can express these transition points in terms of λ c0 (0). For finite δ, we may assume that the transition to the nonmagnetic phase takes place directly from the (qd p−1 ) ∞ phase with an appropriate p. The transition point λ c0 (δ) is determined by equating the ground-state energies Eq. (7) and Eq. (14), where the spatial periodicity p must satisfy the condition that the ferrimagnetic ground-state energy is minimal with respect to p. Substituting Eqs. (15) and (16) into Eq. (17), we find that, for large enough p, the spontaneous magnetization m sp jumps from 0 to a finite value m c sp at the transition point. Further, substituting (19) into (16), we find the corresponding asymptotic form of λ c0 (δ). These results are consistent with the plots of Fig. 4 and Fig. 5.

Summary and Discussion

The ground-state phases of diamond chains (1) with (S, τ (1) , τ (2) ) = (1/2, 1/2, 1) and bond alternation δ are investigated. For δ ≠ 0, the transition between the gapped spin-liquid and the ferrimagnetic phases is of the first order, with a discontinuity in the spontaneous magnetization. This is in contrast to the case of δ = 0, in which m sp rises from the gapless spin liquid phase with an infinitesimal step, resulting in the infinite series of ferrimagnetic phases. The δ-dependence of the transition point λ c0 (δ) and that of the spontaneous magnetization m sp at the transition point are examined numerically and analytically. Our results show that the nature of the ferrimagnetic phase is closely related to that of the neighboring nonmagnetic phase. The infinite series of ferrimagnetic phases present for δ = 0 is truncated at finite p as soon as the spin gap opens on the nonmagnetic side owing to nonvanishing δ. This implies that the infinite series of ferrimagnetic phases and the accompanying infinitesimal magnetization steps are consequences of the critical nature of the ground state of the uniform spin-1/2 Heisenberg chain, as suggested in Ref. 8. In the spin-1 diamond chain with bond alternation δ, the nonmagnetic phase is equivalent to the ground state of the spin-1 Heisenberg chain with bond alternation δ. 7) In this model, an intermediate ferrimagnetic phase is observed in the close neighborhood of the point (λ, δ) = (λ c (S = 1), δ c (S = 1)) ≃ (1.0832, 0.2598) that corresponds to the endpoint of the Haldane-dimer critical line. [16][17][18][19] In Ref. 8, it has been speculated that for δ = δ c (S = 1) an infinite series of quantized ferrimagnetic phases similar to those discussed in the present model with δ = 0 is realized. 8) The behavior for δ ≃ δ c (S = 1) would also be similar to the present model, although the detailed numerical confirmation is too demanding due to the smallness of the width of this region.
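Treating p as a continuous variable, the δ 2/3 behavior quoted above can be sketched from the asymptotic energies alone; the block below uses only the structure stated in the text (the δ 4/3 correction with coefficient C 0 in the nonmagnetic phase and the 1/p expansion with coefficient C 2 in the ferrimagnetic phase), while the positive constant a, standing for the λ-linear coefficient of the 1/p term, is an assumption introduced here.

```latex
% Per unit cell, near (\delta,\lambda)=(0,\lambda_{c0}(0)):
E_{\rm N}(\delta)\simeq \varepsilon(\lambda)-C_{0}\,\delta^{4/3},\qquad
E_{\rm F}(p,\delta)\simeq \varepsilon(\lambda)+\frac{a\,[\lambda-\lambda_{c0}(0)]}{p}+\frac{C_{2}}{p^{2}},
\qquad a,\,C_{2}>0 .

% Minimizing E_F over p for \lambda<\lambda_{c0}(0):
p^{*}=\frac{2C_{2}}{a\,[\lambda_{c0}(0)-\lambda]},\qquad
E_{\rm F}(p^{*})-\varepsilon(\lambda)=-\frac{a^{2}[\lambda-\lambda_{c0}(0)]^{2}}{4C_{2}} .

% First-order transition where E_F(p^{*})=E_N:
\lambda_{c0}(0)-\lambda_{c0}(\delta)=\frac{2}{a}\sqrt{C_{0}C_{2}}\,\delta^{2/3},\qquad
m^{c}_{\rm sp}=\frac{1}{p^{*}}=\sqrt{\frac{C_{0}}{C_{2}}}\,\delta^{2/3}.
```

Within this sketch, both the shift of the transition point and the magnetization jump vanish as δ 2/3 as δ → 0.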
As discussed in the Introduction, the series of ferrimagnetic phases predicted for δ = 0 are expected to be fragile against various perturbations near λ = λ c0 (0). This implies that a wide variety of exotic phases and phase transitions can emerge from these states in the presence of the perturbations that are expected if materials close to the present model are synthesized experimentally. In this context, it would be important to investigate the effect of lattice distortions that do not preserve the conservation laws (4). 5,6) Also, the effect of randomness would be one of the relevant issues. These studies are left for future investigation. A part of the numerical computation in this work has been carried out using the facilities of the Supercomputer Center, Institute for Solid State Physics, University of Tokyo, and Yukawa Institute Computer Facility at Kyoto University.
2021-11-16T02:16:21.204Z
2021-11-13T00:00:00.000
{ "year": 2021, "sha1": "9fded0b5cef11483b0e9f6cbc0205d113fcf77be", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2111.07054", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "9fded0b5cef11483b0e9f6cbc0205d113fcf77be", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
270037393
pes2o/s2orc
v3-fos-license
Research on Low-Cost High-Viscosity Asphalt and Its Performance for Porous Asphalt Pavement To develop a cost-effective, high-viscosity asphalt for porous asphalt pavement, we utilized SBS, tackifier, and solubilizer as the main raw materials, identified the optimal composition through an orthogonal experiment of three factors and three levels, and prepared a low-cost high-viscosity asphalt. We compared its conventional and rheological properties against those of rubber asphalt, SBS modified asphalt, and matrix asphalt, employing fluorescence microscopy and Fourier transform infrared spectroscopy for microstructural analysis. The results indicate that the optimal formula composition for high-viscosity asphalt was 4–5% styrene-butadiene-styrene (SBS) + 1–2% tackifier +0–3% solubilizer +0.15% stabilizer. The components evenly dispersed and the performances were enhanced with chemical and physical modification. Compared with SBS modified asphalt, rubber asphalt, and matrix asphalt, the softening point, 5 °C ductility, and 60 °C dynamic viscosity of high-viscosity asphalt were significantly improved, while the 175 °C Brookfield viscosity was equivalent to SBS modified asphalt. In particular, the 60 °C dynamic viscosity reaches 383,180 Pa·s. Rheological tests indicate that the high- and low-temperature grade of high-viscosity asphalt reaches 88–18 °C, and that high-viscosity asphalt has the best high-temperature resistance to permanent deformation and low-temperature resistance to cracking. It can save about 30% cost compared to commercially available high-viscosity asphalt, which is conducive to the promotion and application of porous asphalt pavement. Introduction With the advent of the automobile era, the challenges to road traffic safety have intensified.According to investigation results concerning highway accidents during rain, the accident rate on rainy days is approximately eight times higher than on sunny days, featuring a high incidence of severe accidents, such as consecutive rear-end collisions and hydroplaning.Furthermore, the noise generated by the interaction between the road surface and tires during high-speed travel significantly impacts the living environment of residents along these routes [1].In this context, porous asphalt pavement has emerged as the optimal solution for mitigating frequent rain-related accidents, owing to its superior noise reduction, skid resistance, water mist suppression, enhancement of driving safety in wet conditions, and reduction in surface runoff and water pollution [2][3][4][5].Porous or permeable asphalt pavement is characterized by asphalt mixtures with large pores, allowing surface rainwater to penetrate the structural layer and infiltrate into the base layer [6][7][8].Compared to conventional pavement, the cushion layer of porous asphalt pavement requires high permeability, consisting primarily of graded crushed stone, gravel, or a mixture thereof, forming various granular cushion layers.The soil foundation beneath Polymers 2024, 16, 1489 2 of 19 the cushion layer typically comprises sandy soil with excellent permeability, meeting the structural requirements for pavement [9][10][11].Water damage is widely recognized as the primary cause of damage to asphalt pavement.In permeable pavement, rainwater penetration into the structural layer causes the material to remain damp or saturated for extended periods, leading to a decline in material performance.Particularly during rainy conditions, the dynamic water pressure between the tires and pavement 
increases the likelihood of aggregate peeling and mixture loosening [12][13][14][15]. High-viscosity asphalt, with its excellent viscoelastic properties, high-and lowtemperature performance, and water stability, ensures that porous asphalt mixtures have strong adhesion, water damage resistance, and good resistance to rutting.The author used polyphosphoric acid (PPA) to improve the physical and rheological properties of high-viscosity modified asphalt; its 60 • C dynamic viscosity reaches 163,735 Pa•s, far higher than the other three types of asphalt [16,17].This has, therefore, attracted considerable academic attention.Ilyin et al. investigated the effects of polymer and solid nanosized additives on the rheological properties of asphalt pavement at an earlier time, and the results showed that the addition of polymeric modifiers (SBS) or devulcanized rubber particles substantially increases the storage and loss moduli and decreases the intensity of reduction in the storage modulus with temperature by several orders of magnitude [18].Bahram Shirini analyzed the disparity in efficacy between rubber asphalt with different contents of rubber powder and 5% SBS modified asphalt.The findings indicated that incorporating rubber powder and SBS could enhance the material's resilience to high-temperature deformation, moisture damage resistance, and traction, but it will reduce the water permeability of asphalt mixtures [19].Investigations conducted by Punith et al. reveal that blending wood fibers into rubber modified asphalt improves its cohesive properties, deformation resistance, moisture integrity, and endurance against fatigue [20].Sangiorgi et al. examined the roadway suitability of a high-viscosity asphalt blend formulated through a compound alteration of discarded rubber and SBS, noting enhancements in both its resilience to low-temperature fissuring and aerosolized particle reduction, albeit at the expense of diminished penetrability by water and resilience to lasting deformation [21].Substances such as SBS, rubber powder, or TAFPACK-SUPER (TPS) are commonly employed to prepare high-viscosity asphalt. Currently, the TPS modifier is the most commonly used high-viscosity modifier for high-viscosity asphalt in Japan.TPS primarily consists of thermoplastic rubber, complemented by minor quantities of resin (tackifier) and plasticizer, known for their effective performance.However, the elevated cost restricts its widespread use and application [22,23].More researchers are now concentrating on developing high-viscosity asphalt with low cost and high performance.Raqiqa tur Rasoo1 employed recycled rubber powder and SBS to develop composite modified asphalt, and experimental findings indicate that recycled rubber powder and SBS are highly compatible, disperse evenly in the composite modified asphalt, and notably enhance its 60 • C dynamic viscosity [24].Geng Litao used SBS and lime milk to produce high-viscosity asphalt and mixture, examining its fatigue and anti-aging characteristics, and the tests revealed that the mixture exhibits strong anti-aging and fatigue resistance [25].Alam et al. investigated the effects of polysulfate and SBS on matrix asphalt at different concentrations.By adjusting the proportions of aromatics, resin, and asphaltene in asphalt, the 60 • C dynamic viscosity can be enhanced [26].Zhang et al. 
developed high-viscosity asphalt using SBS as the main modifier, incorporating furfural essential oil as a plasticizer and sulfur as a crosslinking agent. Their findings demonstrated that plasticizers facilitate the swelling and dispersion of SBS in asphalt and enhance the 60 °C dynamic viscosity, while crosslinking agents help form a stable polymer network, effectively boosting the asphalt's aging resistance [27,28]. Cong et al. explored the impact of various carbon blacks on the performance of SBS modified asphalt, noting that carbon black enhances both the conductivity and thermal properties of asphalt, thus improving its high-temperature and anti-aging capabilities [29]. Wu et al. likewise reported improved aging and high-temperature stability of SBS modified asphalt, with optimal results at a 6% concentration [30].

With the development of testing technology, many researchers suggest using dynamic shear rheological testing to characterize the performance of modified asphalt; it can also provide more reliable information for studying the performance of high-viscosity asphalt. Rheology, a branch of mechanics, mainly studies the flow and deformation processes of materials. Countries around the world classify asphalt mainly through penetration-based and viscosity-based classification systems [31]. With the improvement of technology and the understanding of rheology, the United States implemented the Strategic Highway Research Program (SHRP) in the late 1980s, and the research results on asphalt and asphalt mixtures are collectively referred to as SUPERPAVE (Superior Performing Asphalt Pavements). Among them, the Performance Grade (PG) system is the most prominent [32]. Unlike previous asphalt grading standards, the evaluation method and indicators of PG are based on the road performance of the asphalt binder, so it is applicable to both ordinary and modified asphalt. At present, the research methods and indicators for the rheological properties of modified asphalt are developed and refined on the basis of PG [33]. PG classification mainly distinguishes the high- and low-temperature grades and the fatigue performance of asphalt materials. Shenoy [34] corrected the rutting factor of asphalt and obtained G*/[1 − 1/(tan δ · sin δ)] as a new indicator of the high-temperature performance of asphalt. Xing investigated the dynamic shear rheological properties of different types of high-viscosity asphalt mastics, and the results showed that increasing the viscosity of the modified asphalt or increasing the specific surface area of the mineral powder can effectively reduce the temperature sensitivity of the rutting factor of high-viscosity asphalt mastics and improve their high-temperature deformation resistance [34]. Overall, the research methods and evaluation indicators for asphalt rheological properties have been continuously improved and developed on the basis of PG, so rheological methods are more closely related to road performance than empirical testing. At present, rheological means have been widely used to study the viscoelasticity, compatibility, stability, and aging resistance of polymer modified asphalt.
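As a small illustration of the two high-temperature indicators mentioned above, the sketch below computes the SHRP rutting factor G*/sin δ and Shenoy's refined parameter G*/[1 − 1/(tan δ · sin δ)] from a single (G*, δ) measurement; the numerical values in the example are invented for demonstration only.

```python
import math

def rutting_factor(g_star_pa: float, delta_deg: float) -> float:
    """SHRP rutting parameter G*/sin(delta), in the same units as G*."""
    d = math.radians(delta_deg)
    return g_star_pa / math.sin(d)

def shenoy_parameter(g_star_pa: float, delta_deg: float) -> float:
    """Shenoy's refined parameter G*/(1 - 1/(tan(delta)*sin(delta)))."""
    d = math.radians(delta_deg)
    return g_star_pa / (1.0 - 1.0 / (math.tan(d) * math.sin(d)))

# Hypothetical DSR reading: |G*| = 1.8 kPa, delta = 82 degrees.
g_star, delta = 1800.0, 82.0
print(rutting_factor(g_star, delta))    # ~1818 Pa
print(shenoy_parameter(g_star, delta))  # ~2098 Pa, more sensitive to the phase angle
```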
However, the high-viscosity asphalt often features high polymer content, limited polymer-asphalt compatibility, and unresolved storage stability issues.This performance decline impacts the original modified asphalt.Compatibility challenges between polymer additives and asphalt continue to hinder the use of high-viscosity asphalt in engineering applications and large-scale production [35,36].This study formulates a high-performance, cost-effective high-viscosity asphalt using orthogonal experimental design and conducts a thorough comparative analysis of its conventional performance, microstructure, and rheological properties.Analyzing the applicability and economic value of high-viscosity asphalt in porous asphalt pavements, offers insights for further performance optimization and wider application.Compared to SBS modified asphalt and rubber asphalt, the highviscosity asphalt for porous asphalt pavements exhibits superior performance in high and low temperatures, 60 • C dynamic viscosity, and aging resistance.Additionally, it is significantly more cost-effective than commercially available high-viscosity asphalt, underscoring its considerable importance for the promotion and application of porous asphalt pavements. Materials The primary raw materials for the high-viscosity asphalt comprised SK90# matrix asphalt, manufactured in Suwon, South Korea; SBS1301, a linear type produced by Yueyang Petrochemical Plant in Yueyang of China; a tackifier primarily composed of C5 petroleum resin with a molecular weight of 1500; a liquid solubilizer of rubber oil with the model of Naphthenic acid 4010, sourced from Henan Leimo Chemical Products Co., Ltd. in Zhengzhou of China; a stabilizer of sulfur powder; and rubber powder procured from Shandong Hengfeng Rubber Powder Co., Ltd. in Binzhou, Shandong of China, featuring a fineness of 0.25 mm.The basic properties of SBS are detailed in Table 1.This study used SK90# matrix asphalt and evaluated its basic performance according to the "Test Specification for Asphalt and Asphalt Mixtures in Highway Engineering" (JTG E20-2011) [37], as shown in Table 2.The basic properties of rubber powder are shown in Table 3. Rubber asphalt and SBS modified asphalt were both self-made in the laboratory.Rubber asphalt was composed of 82% matrix asphalt and 18% rubber powder; SBS modified asphalt was composed of 93.5% matrix asphalt, 4.5% SBS, 2% rubber oil, and an additional 0.15% stabilizer added to the entire asphalt system. Test Method 2.2.1. Conventional Performance Test of Modified Asphalt In accordance with the procedures outlined in the 'Highway Engineering Asphalt and Asphalt Mixture Test Procedures' (JTG E20-2011), this study evaluated the performance indicators of asphalt, including 25 • C penetration, softening point, 5 • C ductility, 60 • C dynamic viscosity, 175 • C Brookfield viscosity, peeling rate, and segregation difference [37]. 
Microscopic Analysis of Modified Asphalt (1) Fluorescence microscopic dispersion observation test The microscopic photography method offers a direct approach to examining the distribution and phase interface behaviors of polymers within the asphalt system, serving as an effective technique for the microscopic analysis of the modification mechanisms in polymer-modified asphalt [38,39].Fluorescence microscopy was employed to observe the microstructure of various modified asphalts.A 0.5 g sample of asphalt was placed on a glass slide and heated to 100 • C on a heating table to ensure even spreading.Subsequently, the glass slide was positioned under an objective lens for the observation of the sample's dispersion state. (2) Asphalt infrared spectrum test Using a Fourier transform infrared spectrometer (FTIR) and KBr compression method, infrared spectroscopy tests were conducted on different asphalt samples.These were conducted by taking about 100 mg of KBr in an agate mortar, grinding it into fine powder, putting it into a grinding tool, applying 8-10 tons of pressure on the tablet press, and keeping it there for 2 min.The tablet should be uniform and transparent, without cracks.The background of the tablet was collected, and a thin layer of asphalt sample was evenly applied to the KBr tablet for sample collection.During the experiment, the scanning frequency was set to 32, the resolution was 4 cm −1 , and the scanning wavenumber range was 400 cm −1 ~4000 cm −1 . Rheological Performance Test of Modified Asphalt (1) Asphalt PG high-temperature grading test A dynamic shear rheometer (DSR) was used for the asphalt PG testing.Test parameters for the original asphalt setting were as follows: a strain value of 12%, angular frequency of 10 rad/s, and a spacing of 1 mm between the parallel plates.Test parameters for the asphalt after aging were as follows: a strain value of 10%, angular frequency of 10 rad/s, parallel plates with an interval of 1 mm between the top and bottom, and vibration loading test at a temperature level of 6 • C. The complex modulus (G*) and phase angle (δ) were tested using a DSR.The rutting factor (G */sin(δ)) of the asphalt was calculated based on the G* and δ.According to the American SHRP research program specification, the G*/sin (δ) of the original asphalt was not less than 1.0 kPa, the G*/sin (δ) of the thin film oven test (TFOT) residual asphalt was not less than 2.2 kPa, and the |G*|•sin (δ) of the asphalt after accelerated aging using a pressure aging vessel (PAV) was not more than 5000 kPa.Through a DSR test of residual asphalt after the TFOT stage of different asphalt samples, the high-temperature grading of asphalt can be obtained [32]. (2) Low-temperature bending creep stiffness test The low-temperature bending creep test of asphalt after the TFOT and PVA was carried out at −12 • C, −18 • C, and −24 • C by bending beam rheometer (BBR).Creep stiffness modulus S and creep rate m were measured.The low-temperature grade of the asphalt sample was determined under the condition that the creep stiffness modulus S is not more than 300 MPa and the creep rate m is not less than 0.3. (3) Asphalt temperature scanning test The dynamic shear rheometer was used to scan the temperature of the selected matrix asphalt and modified asphalt at a stress level of 100 Pa.The temperature scanning range was 30 • C to 80 • C. 
The asphalt sample was evenly applied to a 25 mm parallel plate, with a 1 mm gap between the upper and lower plates, for the oscillatory scan. The complex modulus, phase angle, and rutting factor obtained from the test were used to analyze the temperature sensitivity of the different asphalts. (4) Multiple Stress Creep Recovery (MSCR) Test The MSCR test uses the dynamic shear rheometer to conduct repeated loading and unloading tests on asphalt at different stress levels. The two stress levels are 100 Pa and 3200 Pa, with 10 cycles at each stress level. Each cycle consists of a 1 s loading stage and a 9 s unloading (recovery) stage. This study conducted creep recovery tests on the different types of asphalt at 60 °C and obtained the relationship between time and strain for the matrix asphalt and the modified asphalts.

Orthogonal Design

To determine the optimal formulation of high-viscosity asphalt, SBS (A), tackifier (B), and solubilizer (C) were utilized as the primary raw materials. Among them, SBS and tackifier can improve the high- and low-temperature performance of asphalt, while the solubilizer can adjust the compatibility and stability of the system and further improve the low-temperature performance. An orthogonal experiment with three factors and three levels was designed. The factors and levels are detailed in Table 4, and an L9 (3^4) orthogonal array was used for the experimental design. The orthogonal experimental array is presented in Table 5. The following evaluation indexes are used as the responses of the orthogonal experiment: 25 °C penetration, 5 °C ductility, softening point, segregation (softening point) difference, and 60 °C dynamic viscosity.

The preparation process for the modified asphalt involves the following: (1) placing SK90# matrix asphalt in a constant temperature oven set at 140 °C for three hours to allow it to flow; (2) heating the matrix asphalt to approximately 180 °C in a heating sleeve, followed by the addition of SBS and tackifier while stirring, and shearing at a high speed of 4000-5000 rpm until uniform; and (3) adding the solubilizer and stirring continuously for four hours to achieve a stable system. The process for preparing high-viscosity asphalt using SBS, tackifier, and solubilizer is illustrated in Figure 1.

Study on Composition Design of High-Viscosity Asphalt

Following the orthogonal experimental design, high-viscosity asphalt samples were prepared under the various factor levels. The conventional performance metrics, stability, and dynamic viscosity of the high-viscosity asphalt samples across the different experimental schemes were compared and evaluated, as detailed in Table 6. An analysis of the results of the orthogonal test for high-viscosity asphalt is shown in Table 7.
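A minimal sketch of the range (R) analysis used for Tables 7 and 8 is shown below; the L9(3^4) array layout is standard, but the response values are placeholders rather than the measured data from Table 6.

```python
import numpy as np

# L9(3^4) orthogonal array: rows = runs, columns = factors A, B, C (plus an error column).
# Level indices are 0, 1, 2.
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])

# Placeholder responses for one index (e.g. 60 °C dynamic viscosity of the nine runs).
y = np.array([120.0, 180.0, 210.0, 260.0, 310.0, 240.0, 330.0, 290.0, 380.0])

def range_analysis(array, response, n_levels=3):
    """Return per-factor level means K and the range R = max(K) - min(K)."""
    n_factors = array.shape[1]
    K = np.zeros((n_factors, n_levels))
    for f in range(n_factors):
        for lv in range(n_levels):
            K[f, lv] = response[array[:, f] == lv].mean()
    R = K.max(axis=1) - K.min(axis=1)
    return K, R

K, R = range_analysis(L9, y)
ranking = np.argsort(R)[::-1]        # factors ordered by influence (largest R first)
best_levels = K.argmax(axis=1)       # level with the largest mean response per factor
print("R =", R, "ranking:", ranking, "best level per factor:", best_levels)
```

The `argmax` choice assumes a larger-is-better response (e.g. softening point or dynamic viscosity); for indexes where smaller is better, the minimizing level would be selected instead.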
Given that the range (R) serves as the primary indicator for assessing the significance of each test factor on the outcomes, a larger R value indicates a more substantial impact of the corresponding factor on the results, whereas a smaller R value signifies a lesser impact. Therefore, the range can be utilized to identify the optimal composition of the asphalt formula components. The factors influencing the various evaluation indices are ranked, and the optimal scheme is determined through statistical analysis, as depicted in Table 8. The influence of the various factors on the conventional performance of high-viscosity asphalt is shown in Figure 2, and their influence on the stability and 60 °C dynamic viscosity of high-viscosity asphalt is shown in Figure 3. Owing to the variation in optimal preparation combinations tailored to distinct indicators, the comprehensive balance method was employed to thoroughly analyze the five optimal preparation combinations previously mentioned.

(1) The influence of SBS quantity ratio (factor A) on various indicators

The influence of the SBS quantity ratio (factor A) on the various indicators is evident, as depicted in Figures 2c and 3a. The SBS content exhibits the most considerable variation in terms of softening point and 60 °C dynamic viscosity, signifying a notable impact on enhancing these properties of asphalt. As illustrated in Figures 2a,b and 3b, the SBS range for ductility and penetration is substantial, whereas the range for segregation difference is minimal. This suggests that SBS enhances the low-temperature flexibility and viscosity of asphalt and demonstrates good compatibility with it.
(2) The influence of tackifier (factor B) on various indicators

The variation in tackifier content across the different indicators is relatively narrow. As depicted in Figures 2 and 3, an increase in tackifier content corresponds with increases in both the ductility and softening point of asphalt. At a dosage of 1%, the penetration and segregation difference exhibit their greatest values; when the dosage reaches 2%, the asphalt's ductility, softening point, and dynamic viscosity attain their peak values. Despite the tackifier's relatively minor proportion in the asphalt, an optimal quantity significantly enhances asphalt performance, particularly in terms of low-temperature flexibility and dynamic viscosity.

(3) The influence of solubilizer (factor C) on various indicators

The solubilizer significantly affects the penetration, ductility, and segregation of asphalt. The solubilizer increases the hardness of the asphalt, leading to a reduced softening point and ductility, and it remains in a particulate state within the asphalt. In segregation tests, higher solubilizer content correlates with larger differences in softening point between the upper and lower sections of the asphalt sample and with decreased storage stability. However, as illustrated in Figures 2c and 3a, modified asphalt with 3% solubilizer showed a significant increase in softening point and dynamic viscosity compared to its counterpart without solubilizer, whereas at a 6% concentration there was a noticeable decline.

Higher levels of SBS and tackifier, along with lower levels of solubilizer, significantly modify the asphalt, with each modifier offering complementary benefits. The high-viscosity asphalt designed for porous pavement in this study requires excellent high-temperature stability and low-temperature crack resistance. The most important indicators for high-viscosity asphalt are the 5 °C ductility and the 60 °C dynamic viscosity. Usually, the 60 °C dynamic viscosity must reach a minimum of 50,000 Pa·s to provide sufficient bonding performance, so that the prepared asphalt mixture does not undergo scattering and water-induced loss. SBS and tackifier contribute significantly to the asphalt's performance at both high and low temperatures while forming a stable system, whereas a high solubilizer content reduces compatibility with the asphalt and affects performance at various temperatures. Therefore, using high levels of SBS and tackifier, along with low levels of solubilizer, is advisable for preparing high-viscosity asphalt for porous asphalt pavement. Consequently, the optimal modifier ratios for high-viscosity asphalt are 4-5% SBS, 1-2% tackifier, and 0-3% solubilizer. Furthermore, according to a calculation based on commercially available raw materials, the cost of this high-viscosity asphalt is about 5000 CNY/ton, which is approximately a 30% cost saving per ton compared to commercial high-viscosity asphalts.

Study on the Conventional Performance of High-Viscosity Asphalt

High-viscosity asphalt was formulated using SK90# matrix asphalt at 91%, SBS at 5%, tackifier at 2%, and solubilizer at 2%, with stabilizer added at 1.5‰ of the whole asphalt system. The conventional properties of the high-viscosity asphalt, SK90# matrix asphalt, rubber asphalt, and SBS modified asphalt were compared and evaluated. The three major indicators for each type of asphalt are depicted in Figure 4, while the bonding performances are detailed in Table 9.
The three main indicators of asphalt are penetration, ductility, and softening point. Penetration reflects the relative consistency of asphalt under given conditions: the greater the penetration, the softer the asphalt. Ductility reflects the low-temperature performance of asphalt: the greater the ductility, the better the low-temperature performance. The softening point characterizes the high-temperature performance of asphalt: the higher the softening point, the better the high-temperature performance.

As depicted in Figure 4, compared to SK90# matrix asphalt, high-viscosity asphalt shows a 51.7 °C increase in softening point and a 28.9 cm increase in 5 °C ductility. High-viscosity asphalt thus significantly enhances performance at both high and low temperatures. Considering the contribution of the three modifiers, SBS thickens the asphalt and increases its elasticity, thereby enhancing both high-temperature stability and low-temperature flexibility. The performance metrics of high-viscosity asphalt also surpass those of SBS modified asphalt: compared to SBS modified asphalt, it shows a 26.2% reduction in penetration, a 25.4 °C increase in softening point, and a 5 cm improvement in 5 °C ductility. This highlights the role of the tackifier and solubilizer in enhancing asphalt performance at various temperatures.
The tackifier is a high molecular weight elastomer.At high temperatures, the tackifier, being harder than the matrix asphalt, absorbs more stress with less deformation.At low temperatures, it becomes softer than the matrix asphalt, absorbing less stress but undergoing larger deformation.The solubilizer exhibits relatively high hardness, significantly contributing to the asphalt's high-temperature performance.This enhancement of the softening point leads to an improved performance of the self-made high-viscosity asphalt at both high and low temperatures. Higher 60 • C dynamic viscosity indicates stronger adhesion between asphalt and aggregates.The 175 • C Brookfield viscosity measures asphalt's viscosity, with higher values indicating a higher construction temperature for the mixture, which characterizes the construction and workability of asphalt mixtures. As indicated in Table 9, modified asphalt exhibits significantly higher 60 • C dynamic viscosity compared to SK90# matrix asphalt.The blending of asphalt and rubber powder mainly belongs to physical blending, accompanied by the physical swelling and dissolution of rubber, and will not undergo chemical reactions with asphalt, resulting in poor compatibility with asphalt [40]. Under long-term shear and swelling, SBS can be completely dissolved in the matrix asphalt, and after the addition of stabilizers, SBS is re-crosslinked.The tackifier and solubilizers in high-viscosity asphalt swell to form viscoelastic particles that adsorb in the asphalt network, further increasing the 60 • C dynamic viscosity of asphalt. The higher the 60 • C dynamic viscosity, the more conducive it is to improving the bonding strength of high-viscosity asphalt binder and reducing the scattering and detachment of porous asphalt mixture.The adhesion of different asphalt is evaluated by the peeling rate.The larger the peeling rate, the poorer the adhesion between asphalt and aggregates.According to Table 9, the 60 • C dynamic viscosity value of high-viscosity asphalt is much higher than the specification requirement of over 50,000 pa.s.Compared with SBS modified asphalt and rubber asphalt, the 175 • C Brookfield viscosity of highviscosity asphalt is moderate, meeting the technical requirement of less than 3.0 pa.s.The order of adhesion strength is high-viscosity asphalt > SBS modified asphalt > rubber asphalt > SK90# matrix asphalt, which shows that high-viscosity asphalt is suitable for laying porous asphalt pavement and has good construction workability. Observation of Fluorescence Dispersion of Modified Asphalt Optical microscopy is an effective auxiliary analytical device for studying the thermal stability of polymer modified asphalt.Currently, micrographs have been used as a direct method to study the distribution behavior and phase interface behavior of polymers in asphalt systems.The dispersibility of modifiers in asphalt was evaluated by directly observing the distribution of polymers in asphalt [39].The microstructure of different modified asphalts was observed by fluorescence microscope, as shown in Figure 5. As shown in Figure 5, the modified asphalt has a good dispersion effect.Each modified material is uniformly dispersed in the matrix asphalt without obvious agglomerates or obvious phase interfaces.Rubber particles and polymer-like particles are sheared and dissolved, forming a homogeneous and stable system. 
Figure 5a shows the microstructure of the rubber asphalt. After high-speed shearing, the rubber particles are uniformly dispersed; after a long period at high temperature, the rubber particles absorb the oil fractions of the asphalt and swell, while small dark rubber particles remain in the system. Figure 5b shows the microstructure of the SBS modified asphalt. After long-term high-speed shear, SBS undergoes swelling, dissolution, and re-crosslinking, resulting in a clear network-like structure in the system. Figure 5c shows the microstructure of the high-viscosity asphalt. The modifier is evenly distributed in the asphalt and flocs are present in the asphalt system; these are the crosslinked network nodes in the asphalt, which increase the fusion between components and serve as a "link".

Infrared Spectroscopy Analysis of Asphalt

The FTIR spectrometer is used to conduct infrared spectroscopy tests on the different asphalt samples. The infrared spectra of the four types of asphalt are shown in Figure 6.
As depicted in Figure 6, the various asphalts exhibit significant absorption peaks at 2800 cm−1~3000 cm−1, attributed to the CH2 stretching vibration of alkanes or cycloalkanes. A weak vibration absorption peak at 2729 cm−1 corresponds to the C-H stretching vibration. The infrared spectral analysis peaks of polymers occur in two distinct regions: 4000 cm−1~1300 cm−1 and 1300 cm−1~600 cm−1. The vibration absorption effect of functional groups is pronounced in the high-frequency region, which facilitates analysis and is significant for identifying functional groups. The low-frequency region is highly sensitive to the asphalt components, and small changes can result in a strong vibration absorption effect; therefore, this region is often referred to as the fingerprint region [41,42]. At the wavelength of 2361 cm−1, rubber asphalt, SBS modified asphalt, and high-viscosity asphalt exhibit vibrations, indicating the presence of asymmetric vibrations associated with cumulated double bonds or stretching vibrations of triple bonds such as -C≡C and -C≡N. The absorption peaks of SBS modified asphalt and high-viscosity asphalt at 1458 cm−1 and 1376 cm−1 are formed by the in-plane stretching vibration of -C-H in -C-CH3 and -CH2. The 1601 cm−1 and 1493 cm−1 wavelengths represent the skeletal absorption peaks of the benzene nucleus. There is a significant difference in the absorption peaks between the modified asphalts and the matrix asphalt in the fingerprint region. In the 1000-650 cm−1 region, there is a benzene ring substitution zone, which produces benzene ring skeleton (C-C) vibrations and bending vibrations (C-H). The 694 cm−1 and 757 cm−1 wavelengths are vibration absorption peaks of mono-substituted benzene rings, 965 cm−1 is a twisting vibration absorption peak of C=C, and 911 cm−1 is an out-of-plane swing vibration absorption peak of CH2, which is a characteristic absorption peak of the polymers (SBS, tackifier). It is evident that high-viscosity asphalt has a large absorption area in this region, while rubber asphalt and matrix asphalt do not exhibit these absorption peaks. The shapes of the spectra of rubber asphalt and matrix asphalt are basically the same, indicating that the modification of rubber asphalt mainly involves physical blending. SBS modified asphalt and high-viscosity asphalt have many similar characteristic absorption peaks, and no characteristic absorption peaks disappear or are newly generated, exhibiting soluble physical co-mixing and re-crosslinking, thereby significantly improving the 60 °C dynamic viscosity and the asphalt performance.

Analysis of Asphalt Temperature Scanning Test

Temperature scanning tests were conducted on the SK90# matrix asphalt and the three types of modified asphalt to obtain the relationship of the complex modulus G*, phase angle δ, and rutting factor G*/sin(δ) with temperature, as shown in Figures 7-9. The complex modulus of the four asphalt types shows similar patterns, decreasing as the temperature rises. Asphalt with a higher modulus generally resists deformation better at high temperatures. A larger phase angle indicates more pronounced viscous characteristics of the asphalt, reflecting strain hysteresis. Therefore, with rising temperatures, asphalt's resistance to
deformation decreases while its viscosity characteristics increase.

As depicted in Figure 7, at a consistent temperature, SK90# asphalt has the lowest modulus, while rubber asphalt has the highest among the four types. For instance, at 58 °C, the complex modulus of rubber asphalt is 6.74 times greater than that of SK90# asphalt. Adding rubber powder significantly enhances the asphalt's modulus.

Figure 8 shows that at the same temperature, the phase angles of the three modified asphalts are smaller than that of the matrix asphalt. For example, at 50 °C, the phase angle of high-viscosity asphalt is 32.6° lower than that of the base SK90# asphalt. Modifiers significantly affect the viscoelastic properties of asphalt. With increasing scanning temperatures, the phase angle gap between rubber asphalt and matrix asphalt widens, indicating more viscous components. Conversely, the phase angle of high-viscosity asphalt decreases, particularly above 60 °C, showing fewer viscous components. This results in greater elasticity and improved high-temperature performance.

The rutting factor characterizes asphalt's resistance to high-temperature deformation. Figure 9 illustrates that the rutting factor of asphalt decreases as the temperature increases [43]. Comparing the four types of asphalt, SK90# exhibits lower rutting factors, suggesting that the modified asphalts offer superior rutting resistance. The non-uniformity of rubber asphalt, due to the blending of rubber powder with matrix asphalt, significantly impacts its rheological test results. These test results may not accurately represent the material's road performance. High-viscosity asphalt, which has a higher rutting factor, shows a slower decrease in this factor with rising temperatures, indicating stronger resistance to high-temperature deformation. This makes it less susceptible to temperature variations and more effective at resisting high-temperature rutting.

Permanent Deformation Resistance of High-Viscosity Asphalt

The temperature scanning test reveals that asphalt's viscosity and elasticity vary with temperature changes. Evaluating asphalt's high-temperature performance based solely on viscosity and elasticity can yield unreliable conclusions. Consequently, the multiple stress creep recovery (MSCR) test is employed to more accurately assess high-temperature performance. The creep recovery rate (R) and the non-recoverable creep compliance (Jnr) are used as high-temperature performance evaluation indicators. The test temperature for the MSCR test is 60 °C.
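For reference, R and Jnr are obtained per creep-recovery cycle from the strain trace recorded by the DSR; a minimal sketch of that arithmetic is given below. It assumes the standard cycle structure (1 s of creep followed by 9 s of recovery, repeated at each stress level) and per-cycle strain readings as inputs; it illustrates the calculation, not the processing script used in this study.

def mscr_cycle(eps_start, eps_creep_end, eps_recovery_end, stress_kpa):
    # Strain accumulated during the 1 s creep step and strain remaining after
    # the 9 s recovery step, both measured from the strain at cycle start.
    eps_c = eps_creep_end - eps_start
    eps_r = eps_recovery_end - eps_start
    recovery = 100.0 * (eps_c - eps_r) / eps_c   # percent recovery R
    jnr = eps_r / stress_kpa                     # non-recoverable compliance (1/kPa)
    return recovery, jnr

def mscr_averages(cycles, stress_kpa):
    # Average R and Jnr over the cycles run at one stress level (typically ten);
    # `cycles` holds (eps_start, eps_creep_end, eps_recovery_end) per cycle.
    values = [mscr_cycle(*c, stress_kpa) for c in cycles]
    r_avg = sum(v[0] for v in values) / len(values)
    jnr_avg = sum(v[1] for v in values) / len(values)
    return r_avg, jnr_avg

A negative average R, as reported for the matrix asphalt in Table 10, simply means that the strain at the end of recovery exceeds the strain at the end of creep, i.e., deformation keeps growing after unloading.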
The creep recovery rate R and creep compliance Jnr can be calculated from the time and strain parameters to characterize the delayed viscoelastic properties and the high-temperature resistance to permanent deformation of the four asphalts. The average creep recovery rates R3.2 and R0.1 and the average creep compliances Jnr3.2 and Jnr0.1 at stress levels of 3.2 kPa and 0.1 kPa are given in Table 10.

As shown in Table 10, asphalt displays varying creep recovery abilities at the two stress levels. The creep deformation recovery rate of modified asphalt is higher than that of its matrix counterpart. For matrix asphalt, this rate is negative at stress levels of 0.1 kPa and 3.2 kPa, indicating limited recovery ability. At 60 °C, the test temperature significantly exceeds its softening point, resulting in a lack of macro-level elasticity in the asphalt; during unloading, deformation often increases due to gravity. The non-recoverable creep compliance Jnr reflects the asphalt's deformation recovery capacity. A higher Jnr indicates more unrecoverable deformation and reduced deformation resistance in the asphalt pavement. There is a difference of nearly three orders of magnitude between the non-recoverable creep compliance of matrix asphalt and that of the modified asphalts. Under load, matrix asphalt undergoes the most deformation, which is almost entirely irreversible.

Modified asphalt exhibits robust deformation recovery capabilities. The creep recovery performances of SBS modified and high-viscosity asphalt are comparable, whereas rubber asphalt's rate is the lowest. Particularly at 3.2 kPa, rubber asphalt's recovery rate is less than half that of the other two types. High-viscosity asphalt, incorporating tackifier, solubilizer, and SBS, demonstrates high elasticity and robust resistance to high-temperature rutting.

Comparing the non-recoverable creep compliance Jnr of the modified asphalts at different stress levels, at 0.1 kPa the order is rubber asphalt > high-viscosity asphalt > SBS modified asphalt. At 3.2 kPa, it is rubber asphalt > SBS modified asphalt > high-viscosity asphalt. This comparison reveals that high-viscosity asphalt maintains good creep recovery across stress levels, with increasing deformation resistance and excellent high-temperature rutting resistance as load levels rise.

Analysis of Low-Temperature Crack Resistance Performance

The creep rate (m) reflects the asphalt's stress relaxation capacity under lower-temperature loads. A higher creep rate indicates stronger stress relaxation and improved low-temperature performance. Creep stiffness (S) measures the asphalt's deformation resistance at lower temperatures. Higher creep stiffness suggests greater stress for the same strain, resulting in harder asphalt with reduced low-temperature crack resistance. As depicted in Figure 10, both the creep stiffness modulus S and creep rate m vary with temperature.
Figure 10 illustrates that lower temperatures result in a higher stiffness modulus and a lower creep rate, aligning with the rheological and stress relaxation properties of asphalt under these conditions. Across the temperature range, the stiffness modulus of modified asphalt remains lower than that of matrix asphalt, indicating the latter's greater rigidity at low temperatures. Modified asphalt's high elasticity enhances its low-temperature flexibility. The asphalt's creep rate decreases consistently as the temperature drops, showing a strong linear correlation, together with effective stress relaxation and robust elastic recovery under low-temperature conditions.

Figure 10 shows that at the same temperature, the creep stiffness modulus ranks, from highest to lowest, as follows: SK90# matrix asphalt, SBS modified asphalt, high-viscosity asphalt, and rubber asphalt. Creep rates, from highest to lowest, are as follows: high-viscosity asphalt, SBS modified asphalt, rubber asphalt, and matrix asphalt. Comparing the S and m values indicates that high-viscosity asphalt offers superior low-temperature flexibility. Despite its lower creep stiffness modulus, rubber asphalt's comparatively low creep rate leads to more significant damage at low temperatures, adversely affecting road durability.

As the temperature drops, the m value decreases uniformly across all four asphalt types. While SBS and rubber modifications slightly increase asphalt's creep rate, high-viscosity modifiers have a more pronounced effect. Thus, the combined use of viscosity enhancers and solubilizers in high-viscosity asphalt maximizes the benefits of both, boosting stress relaxation and improving low-temperature performance.
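As a point of reference, S and m at a given test temperature are read from the bending-beam creep curve at a loading time of 60 s, with m taken as the local slope of log S versus log t. A small sketch with placeholder stiffness values (not data from this study) is shown below, together with the standard Superpave low-temperature criteria (S ≤ 300 MPa and m ≥ 0.300), which are assumed here rather than quoted from the paper.

import numpy as np

# Standard BBR loading times (s) and illustrative stiffness readings (MPa).
times = np.array([8.0, 15.0, 30.0, 60.0, 120.0, 240.0])
stiffness = np.array([420.0, 362.0, 305.0, 251.0, 206.0, 168.0])

# Fit log10(S) as a quadratic in log10(t); the m-value is the magnitude of the
# slope d(log S)/d(log t) evaluated at 60 s.
a, b, c = np.polyfit(np.log10(times), np.log10(stiffness), 2)
lt60 = np.log10(60.0)
s_60 = 10.0 ** (a * lt60 ** 2 + b * lt60 + c)
m_60 = abs(2.0 * a * lt60 + b)

# Low-temperature pass/fail check at the BBR test temperature.
print(f"S(60 s) = {s_60:.0f} MPa, m(60 s) = {m_60:.3f}, "
      f"passes: {s_60 <= 300.0 and m_60 >= 0.300}")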
Analysis of PG Results for High-Viscosity Asphalt

Through the DSR test of the residual asphalt after the TFOT stage for the different asphalt samples, the high-temperature grading of each asphalt can be obtained. The PG results for the four types of asphalt are shown in Table 11. Modified asphalts exhibit improved high-temperature grades over matrix asphalt. Rubberized and SBS modified asphalts have a high-temperature grade of 76 °C, three grades higher than matrix asphalt, while high-viscosity asphalt reaches 88 °C, five grades higher. Following short-term aging, modified asphalts' high-temperature grades typically decrease due to their viscosity, which impedes smooth flow and uniform film formation during the thin-film oven aging process, unlike matrix asphalt. Additionally, under prolonged high temperatures, asphalt's asphaltene content increases, while its resin, aromatic, and saturated components decrease. This change, coupled with repeated loading, leads to irreversible deformations due to the increased viscous components.

Both matrix and rubber asphalts are graded at PG -12 °C for low temperatures, with a sharp increase in creep stiffness modulus and a decreasing creep rate as temperatures drop, resulting in reduced low-temperature crack resistance. Despite both SBS and high-viscosity asphalt having a low-temperature grade of PG -18 °C, high-viscosity asphalt outperforms SBS in creep stiffness and creep rate, offering superior stress relaxation and flexibility at low temperatures, thus enhancing crack resistance. The synergistic use of viscosity enhancers and solubilizers in the self-made modified asphalt combines the strengths of both, significantly improving its low-temperature capabilities.

The analysis indicates that all three modified asphalt types enhance both the high- and low-temperature performance of asphalt. High-viscosity asphalt, in particular, shows a more pronounced improvement in these areas. Using high-viscosity modifiers yields the best results in temperature performance enhancements. This modification also significantly boosts rut resistance, leading to optimal overall performance.
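The high-temperature grade reported in Table 11 can be thought of as the highest standard grading temperature at which the rutting factor of the short-term-aged residue still meets the Superpave threshold (commonly taken as 2.2 kPa for RTFO/TFOT residue, with 1.0 kPa applying to the unaged binder). The sketch below interpolates the failure temperature from a temperature sweep and snaps it down to the 6 °C grade grid; the thresholds, the grid, and the example numbers are assumptions consistent with the grades reported, not the authors' calculation.

import numpy as np

PG_HIGH_GRID = [52, 58, 64, 70, 76, 82, 88, 94]

def high_temperature_grade(temps_c, rutting_factor_kpa, threshold_kpa=2.2):
    # Temperature at which G*/sin(delta) of the aged residue falls to the
    # threshold, found by log-linear interpolation along the sweep.
    temps = np.asarray(temps_c, dtype=float)
    gsd = np.asarray(rutting_factor_kpa, dtype=float)
    fail_t = np.interp(np.log10(threshold_kpa), np.log10(gsd)[::-1], temps[::-1])
    passing = [t for t in PG_HIGH_GRID if t <= fail_t]
    return max(passing) if passing else None

# Illustrative sweep: a binder failing between 88 and 94 degrees C grades as PG 88-XX.
print(high_temperature_grade([64, 70, 76, 82, 88, 94], [12.0, 7.1, 4.4, 2.9, 2.3, 1.6]))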
Conclusions

A three-factor, three-level orthogonal experiment was used to explore the effect of the raw material ratio on the performance of modified asphalt, and the optimal ratio range was determined based on the relevant technical indicators of porous asphalt pavement. A high-viscosity asphalt was prepared according to the optimal ratio, and matrix asphalt, rubber asphalt, and SBS modified asphalt were selected as control groups. The microstructure, conventional properties, and rheological properties of the different types of asphalt were compared and analyzed. The primary conclusions are summarized as follows:

(1) The optimal quantity ratio of high-viscosity asphalt is determined to be an SBS content of 4-5%, a tackifier content of 1-2%, a solubilizer content of 0-3%, and a stabilizer content of 0.15%.

(2) Compared with SBS modified asphalt, rubber asphalt, and matrix asphalt, the softening point, 5 °C ductility, and 60 °C dynamic viscosity of high-viscosity asphalt were significantly improved, while the 175 °C Brookfield viscosity was equivalent to that of SBS modified asphalt. In particular, the 60 °C dynamic viscosity reaches 383,180 Pa·s. Rheological tests indicate that the high- and low-temperature grade of high-viscosity asphalt reaches PG 88-18, and high-viscosity asphalt has the best high-temperature resistance to permanent deformation and low-temperature resistance to cracking.

(3) The components are evenly dispersed and the performance is enhanced through combined chemical and physical modification. The SBS and thickener exhibit soluble physical co-mixing and re-crosslinking, thereby significantly improving asphalt performance.

(4) The comprehensive performance of high-viscosity asphalt has been greatly improved, and it can save about 30% in costs compared to commercially available high-viscosity asphalt, which is conducive to the promotion and application of porous asphalt pavement.

Figure 1. The process of preparing high-viscosity asphalt.
Figure 2. The influence of various factors on the conventional performance ((a) 25 °C penetration, (b) 5 °C ductility, and (c) softening point) of high-viscosity asphalt.
Figure 3. The influence of various factors on the stability and 60 °C dynamic viscosity of high-viscosity asphalt ((a) 60 °C dynamic viscosity, (b) segregation difference).
Figure 4. Three major indicators of each asphalt.
Figure 6. The infrared spectra of four different types of asphalt.
Figure 7. The variation in complex modulus of different asphalt with temperature.
Figure 8. The variation in phase angle of different asphalt with temperature.
Figure 9. The variation in rutting factor of different asphalt with temperature.
Table 1. Basic properties of SBS.
Table 3. Basic properties of rubber powder.
Table 4. Factor levels of asphalt modifier formulation.
Table 5. Orthogonal experimental design table.
Table 6. Orthogonal experimental results of high-viscosity asphalt.
Table 7. Analysis of orthogonal test results for high-viscosity asphalt. (Ki is the arithmetic mean value of the corresponding test data when i is taken as the factor level of any column, i = 1, 2, 3; the range R is the maximum difference of Ki at this level.)
Table 8. Comprehensive analysis table of orthogonal experiments.
Table 9. The bonding performances of asphalt.
Table 10. Calculation results of creep recovery rate and irreversible creep compliance.
Table 11. The PG results for four types of asphalt.
2024-05-26T15:07:11.780Z
2024-05-24T00:00:00.000
{ "year": 2024, "sha1": "d3c3a79bf62bc8d5bd7d84ec6b77c62f0d74898e", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4360/16/11/1489/pdf?version=1716543167", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "381ae472ca846301be0d8dca6a12ff73fad84b7e", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
234818794
pes2o/s2orc
v3-fos-license
Conceptualizing the Experiential Affordances of Watching Online TV This article investigates the experiential affordances of watching online TV as outcomes of the material underpinnings of online TV and the actions taken by viewers. Potential experiential changes derive from how online TV services can be considered libraries of content affording self-scheduling action possibilities. Such changes need to be situated in the slow-to-change conditions of television viewing. We draw on a qualitative study of how viewers respond to the action possibilities and constraints of online TV services. We argue that potentials for individualized viewing are counterbalanced by television viewing as a social activity. Next, self-scheduling ties in with viewing as a deliberate action, appropriated to create experiences where attentiveness is tailored to what is narratively required. Finally, flow schedules are replaced with programed paths constraining the agency of viewers. Introduction The history of television is commonly narrated as one marked by continuous technological and cultural transformation. Such accounts are reflected in how the development of television is depicted according to techniques of distributing content; from the broadcast/TVI-era, via the cable/TVII-era, to the digital/TVIII-era (Dunleavy 2017;Johnson 2019). These eras are characterized by different logics (from scarcity to abundance; from mass to niche audiences) and adherent branding strategies. The emergence and increasing use of online TV services may in this context represent a distinct phase, requiring re-thinking definitions of television as a medium (Jenner 2016;Johnson 2019). The trivial observation that television content is increasingly consumed online might have consequences for how audiences relate to and experience television. Yet few scholars examine present transformations and continuities in viewing experiences from the perspective of audiences, and the changes in how people watch television signal a need to revive audience studies (Gray 2017;Turner 2019). Lotz et al. (2018, 42) likewise note that audience studies are required "before we can theorize the experiential dimensions of internet-distributed television in ways that are conceptually robust." In this article, we follow the leads of Gray (2017), Lotz et al. (2018), and Turner (2019), and ask, how can we conceptualize the experiential affordances of watching online TV? We frame our analytical object according to Johnson's (2019) definition of online TV as a subset of internet-distributed television services. Online TV, in Johnson's framework, includes SVOD services and online players from legacy broadcasters; these are closed environments offering editorially selected content oriented toward the creation of viewing experiences. Applying affordances as the analytical notion allows us to attend to experiential outcomes as positioned between the material underpinnings of online TV and the actions undertaken by viewers (Chemero 2003;Gibson 1979;Lüders 2019;Nagy and Neff 2015). Affordances are hence not properties of an object, but refer to the relational space between the material features and the perceiving and acting being. The materiality of online TV is understood as imbued with values and interests reflecting the economic and political contexts within which services are designed and marketed. An analysis of affordances consequently needs to be sensitive to how action possibilities and constraints reflect ideological positions. 
Brand rhetoric accentuating user choice may for example reflect a move from considering viewers as citizens toward considering viewers as customers (Johnson 2019). Relatedly, Burroughs (2019, 11) argues that streaming companies promote a conception of algorithms as "just delivering to audiences what they have already told the algorithm that they want to consume." Netflix has re-branded itself as "the future of television" aligned with natural and inherent audience needs (Tryon 2015, 107). Such rhetorical discourses should be addressed critically, since they are often promoted by actors who have an interest in their realization (Enli and Syvertsen 2016).

We first address the material level of online TV in order to identify experiential aspects related to action possibilities and constraints. Next, we explicate and develop these components through an analysis of interviews conducted with twenty Norwegian study participants. We start from the metaphor of online TV services as libraries of content affording self-scheduling possibilities. Our findings delineate how individualized viewing patterns are counterbalanced by the continued social position of TV; how self-scheduling ties in with deliberate watching; and finally, how service providers create programed paths through content libraries, constraining the agency of viewers.

From Schedule to Library: The Changing Materiality of Television

Online TV represents a technological transformation whereby "the television set is transformed into an internet-connected device that carries simultaneously its earlier associations with viewing linear television schedules and newer associations with on-demand and interactive engagement" (Johnson 2019, 17). Herein lies a fundamental question for our objective: what can we expect to change and what can we expect to stay the same when viewers turn to online TV services? From a high-level perspective, potential changes derive from the materiality of online TV, while potential continuities relate to the established context for watching television.

The schedules of linear television structure viewing patterns, with programs and between-slots (announcements, advertisements) sequenced into a flow of continuous television (Bruun 2020; Ihlebaek et al. 2014; Williams [1974] 2003). Internet-distributed television, however, offers content libraries instead of schedules (Lotz 2018, 117). Although the materiality of television changes, we cannot simply infer changes in viewer behavior, but need to consider long-established viewing practices, as well as how the material level of technologies shapes but never fully determines experiences (Bucher and Helmond 2017; Lüders 2019; Nagy and Neff 2015). Failing to attend to the experiential outcome as partly constituted by human agency, or failing to consider continuities of practices, risks resulting in exaggerated visions of change.

Television as an online library affords self-scheduling. Yet how self-scheduling is appropriated needs to be situated within "our knowledge of the slow-to-change conditions that underpin identity, sociality, and community," which next implies the need "to map the changing media environment in relation to prior communicative practices which, in turn, shape that environment" (Livingstone 2003, 338). Viewers may for example have opportunities to watch shows that fit their individual preferences but may still experience a pull toward the social role of television: watching together with family and friends and talking about the same shows (Lull 1990).
Within this context of what changes and what remains the same, there are some more disputed consequences of online TV. We will next delineate how these services are discussed regarding the agency of the viewer.

Agency, Control, Flow

Human agency presupposes an acting subject, but this subject is always placed relative to other people and to her or his surroundings: the subject is always a socially and culturally entangled entity (Mansfield 2000). An analysis of how viewers experience watching online TV therefore needs to place the viewer relative to what is materially enabled and constrained. On the surface, being able to self-schedule signifies a viewer-centered notion of agency and control, making self-determined experiences possible (Bruun 2020; Enli and Syvertsen 2016; Lotz 2018). Positioning viewers as in control also reflects how Netflix, in particular, uses terms such as "user freedom" and "active audiences" rhetorically to promote itself as the future of television and as a service in tune with user needs and demands (Burroughs 2019; Tryon 2015).

Propositions of how agency is potentially recalibrated predate online TV services, evident for example in Uricchio's argument that the remote control signaled a shift toward "flow as a set of choices and actions initiated by the viewer" (Uricchio 2004, 170), implying the conditions of flow shift toward agencies as imbricated into viewer-program interfaces. Similarly, scholars have posited that digital video recorders and digital television represent a move away from the logic of broadcast flow (see Johnson 2019, 122 for a discussion). Williams' ([1974] 2003) notion of flow captured how the television experience is planned as sequences of programs, where each program item is subordinate to how items are stitched together to lure viewers into an evening of "watching television". Williams' flow leaves the viewer with limited agency and instead positions the viewer as submissive to clever programing.

Williams' notion of flow holds a prominent position in television studies, including how flow is reconceptualized to account for online TV (Cox 2018; Gray and Lotz 2019). Yet whether agency has shifted toward viewers is a disputed claim (Van Esler 2020), and as argued by Gray and Lotz (2019, 132), "the structuring forces Williams gestured towards persist." That is, we should direct our attention toward how interfaces, algorithms, and menus work to create streaming flows, replacing the sequenced scheduling-flow of linear television. Likewise, but more radically, Johnson (2019) and Cox (2018) position user agency more as an illusion than a reality. Certain programs are made more visible than others, search functions are downplayed in favor of pre-organized catalogs, and while recommendation algorithms depend on patterns of use, viewers cannot determine the criteria informing how algorithms work (Johnson 2019). Cox (2018, 439-440) reasons that the types of interactions facilitated "foster a sense of user control that often downplays the industrial control exercised on users through these very same interactive features." Johnson and Cox question to what extent viewers are liberated from structuring forces, but also, as Johnson contends, the success of online TV services may derive from their structured and closed nature (Johnson 2019, 43).

Self-scheduled Viewing Experiences

Online TV services may steer our attention by, so to speak, how the library shelves are organized and which titles prominently face the viewer.
But they also appear to enable undisturbed viewing. User agency, at least regarding when to watch (and without scheduling sequences and advertisement breaks), may here entail potentials for being immersed in the viewing experience (Steiner and Xu 2020). We here contrast the viewing experience with Ellis' ([1982] 2001) depiction of the television glance as sporadic rather than the sustained cinematic gaze. The glance does not imply any effort of being "invested in the activity of looking. . ." (p. 137). The TV-looker is no cinephile, but instead a casual onlooker with lazy eyes hoping for relaxation and diversion (Ellis [1982] 2001). Our argument follows a long-standing critique of Ellis' chasm between the cinematic gaze and the television glance. Television has always also been a gaze-medium (Wheatley 2016), and the distracted glance of the inattentive viewer does not accurately portray television viewing (Caldwell 1995). Moreover, notions such as "Quality TV," emphasizing the lineage to cinema and art cinema (Feuer 2007), narrative complexity (Mittell 2006), and complex dramas (Dunleavy 2017) allude to the gazing and attentive viewer. Mittell's and Dunleavy's accounts point to how at least segments of contemporary television productions appear well suited for the viewing experiences enabled by online TV. The complex serial, with ongoing stories and plot developments, diverse settings, and dynamic characters, demands attention from viewers, who (are at least assumed to) turn to these narratives with commitment (Dunleavy 2017; Mittell 2006).

Narrative complexity and immersive viewing are to some extent included as predictive factors for binge-viewing (Perks 2015; Pittman and Sheehan 2015; Steiner and Xu 2020; Tukachinsky and Eyal 2018). Online TV certainly affords opportunities for sequential viewing. But since most studies of on-demand viewing revolve around understanding why viewers feel compelled to keep watching, binge-viewing ends up serving as a proxy for knowledge on the new television experience (Turner 2021, 229). Binge-viewing, with its implicit link to high-profile quality shows, additionally aligns with the branding rhetoric of online TV services. These services have a strategic interest in associating "binge-worthiness" with "quality" and "cult" texts (Jenner 2017, 312). However, online TV services have become mundane and normal (at least in a Norwegian context), and viewers turn to them for a variety of television genres. Complex serials are far from exclusive to online TV services, and neither do these only offer programs that demand attentiveness. Viewing practices may consequently be more varied than what is captured with binge-viewing, encompassing different levels of attentiveness depending on the intricacy unfolding on the screen.

Method and Data

In order to explore and sensitize the experiential affordances of watching online TV, we rely on interviews with twenty users of online TV services. Interviewees include ten men and ten women between twenty-one and seventy-two years, with a median age of 33.5. The objective of investigating the conditions underpinning experiential affordances implies that a small-scale sample is advantageous, enabling a hermeneutical analysis whereby the interview material is continuously monitored in relation to theoretical developments (Crouch and McKenzie 2006).
Participants were recruited using printed fliers, sharing of a Facebook post, and snowballing from personal and professional networks (avoiding interviews with persons in own networks) and were interviewed between 2017 and 2019. Interviews lasted between one and one and a half hours and were transcribed verbatim. Transcribed interviews were coded in NVivo 12. The analysis combined inductive and deductive approaches. First, codes were inductively identified by reading through the transcripts. Next, these codes were linked with theoretical sensitizing concepts of relevance for our objective of investigating and conceptualizing the experiential affordances of watching online TV. Table 1 presents the participants and the types of online TV services that they use or have used previously. HBO refers to the SVOD-service HBO Nordic, launched in the Nordic countries in 2012. The NRK player refers to the on-demand service from the Norwegian public broadcaster NRK. Note to Table 1: Services marked with * refer to discontinued subscriptions or use. Camilla and Kristian are married and were interviewed together (a two-and-a-half-hour interview). Education: Low = upper secondary education; Higher = higher education less than four years; High = higher education four years or more.

Analysis: Continuities and Transformations

We first address how the social position of television viewing counterbalances individualized viewing patterns. Next, experiential affordances of watching online TV are different from linear experiences in how television is adapted to the temporalities of life with patterns of deliberate watching and immersive modes of engagement. Whereas deliberate watching ties in with a sense of agency and control, the final part of the analysis delineates experiences of limited viewer agency in how online TV providers devise programed paths through content libraries.

The Ritual and Social Role of Television

Participants portray the position of television as an integral part of everyday life, facilitating ways of being together that largely reflect the persistence of established conditions for viewing television (Livingstone 2003). In place of settled flow-schedules, participants co-schedule watching together as a leisure-time ritual taking place in front of the television. Hence, while online TV may have a stronger component of individualized viewing, viewing as a collective activity remains central. The allure of watching television as an activity that lends itself well to the pursuit of togetherness (Lull 1990) does not change substantially. As a social activity imbricated with rituals, watching together appears particularly salient for couples sharing the same TV-preferences:

Kristian (36): When I get home [from work], we usually watch something on Netflix or HBO, or whatever we're currently watching. We don't have much dinner-culture, so we end up in the couch watching Netflix together. That's pretty much a ritual actually.

For Kristian and his wife Camilla (interviewed together) busy work-lives are paused by these at-home opportunities to rewind, share, and cultivate their largely overlapping taste in television. Maria (31) similarly depicts how watching together with her partner represents time off, though in her case, scheduled later in the evening once their new-born baby has been put to sleep: "I don't think I actually ever want to watch anything alone. We're currently watching Handmaid's Tale on HBO. And, oh my, we watched Chernobyl. . . We were a bit late to Handmaid's Tale, so we have this total binge."
If anything has changed regarding watching together, it is the opportunity to adjust viewing to diverse temporalities of lives. Watching television has always also been a social activity, but settled program schedules structured when we gathered to watch together. On-demand viewing, however, imposes a sort of contract, where watching the next episode alone is to be avoided. Ingrid (72) states that for programs she and her partner both like, "we're not unfaithful, we watch it together." Possibilities to decide what shows to watch when and where are kept in balance by self-imposed social arrangements, safeguarding shared experiences when structural schedules disappear:

Vilde (31): We're now watching Taboo, and he can't watch without me. It's about partnership in a way to watch together and talk about it, because a lot is art and ways of telling stories. . . Overall, we share the same taste when it comes to TV. . . . Watching TV is part of our everyday life, it's part of my life.

The continued importance of watching together does not imply that all participants always watch television together with partners, family members, or friends. Idiosyncratic preferences are saved for moments of solitude. Annette (27) explains how she uses home-alone time to watch programs her partner is not interested in. Other participants largely watch alone because they live alone or because they do not share the same taste as partners or family members.

The ritual and social role of television extends beyond watching together, to talking about television as a way of social bonding. Vilde (31) states that she needs to know what her friends are interested in, "it's a way of knowing them, right. If I have no insight into their life, then they have this community where I'm lost." Richard (25) reasons that keeping up with what friends watch "is about close relations with friends. When people you know watch the same show, it gets this social function. Like watching the same as your friends becomes a social thing." The notion that online TV facilitates fragmented and individualized viewing is too simplistic not only due to the continued programing control of service providers (Cox 2018; Johnson 2019), but also because of a human need for belonging, pulling viewers toward shared repertoires of programs:

Maria (31): You share the same references. It becomes this common culture, or culture bubble. . . . At one point, you could almost take for granted that everyone had watched Game of Thrones. We can still meet people and have a good time together without having watched the same programs. But it's nice also because you can discover new things. Like, if you liked Making a Murderer, you will probably also like these true crime shows. It's a way of having a shared media life.

As an experiential dimension, the social appears as an unlikely condition to change rather than a "slow-to-change condition" (Livingstone 2003, 338). Online TV, if considered primarily materially, enables a form of individualized viewing aligned with industry conceptions of the algorithmic audience (Burroughs 2019), but the social and ritual roles of television pull toward the continued importance of togetherness.

Self-scheduled Deliberate Watching and Re-watching

Self-scheduling has experiential outcomes related to how television is adapted to life schedules rather than life adapted to television schedules. This change ties in with viewing as a deliberate action and a need to control how everyday time is spent.
Finally, self-scheduling combined with the materiality of online TV as a library implies windows to revisit selected television productions. Adapting television to life schedules represents a continuum from slightly adjusting when programs are watched to catching up with productions that "everybody" talks about. Regarding slightly adjusted time-shifting, Jan (67) reflects, "It used to be very fixed, like at 6:00 pm, it was Dagsnytt Atten [daily news magazine]. But now I watch it when I want to." Current affairs programs such as Dagsnytt Atten are still released at certain hours of the day. Likewise, HBO has largely retained the structure of weekly releases of episodes. Yet, once programs are released, they remain accessible:

Nina (52): I've been watching Handmaid's Tale lately. And Girls. And like, when you need to wait for next week. And you still don't miss it [the next episode]. I've never been able to follow series before.

Interviewer: Like before you needed to sit down by the TV every Tuesday at 9:30 pm to follow a series?

Nina (52): Exactly. I never used to watch TV-series. . . Like with Twin Peaks early 1990s. I watched it, but not all episodes. It was impractical, because sometimes I was busy elsewhere.

For Nina, adapting life to television schedules was never really an option, and while she still had to wait for the next episode of Handmaid's Tale (since she watched it while season one unfolded), she could adjust the pace to her own life. Conceptually, this sense of agency depicts how viewers tune in to watch specific programs. As stated by Bård (24), "You sit down to watch a series or a film. You don't sit down to watch TV, like random crap. It becomes the activity." Bård here offers a different perspective compared to how the flow model of broadcasting contrasts with watching discrete units of content (Williams [1974] 2003). Online TV may seem to represent a return to specific programs rather than "what's on." Kevin (21), who is an avid gamer, depicts a Williams'-like flow of linear television and Twitch (see Spilker et al. 2020 for an analysis of Twitch-experiences as flow) compared to the focused viewing experience of online TV:

I find Twitch and [linear] TV to be much more similar than Netflix, HBO and YouTube because there you are specifically looking for something good, whereas Twitch and TV in general is more like, you can just keep it on in the background even if it's not particularly good, just to have something to talk about.

Kevin continues by elaborating how he pauses the video if he is interrupted, since he cannot allow himself to miss out on the intricacy of what unfolds on the screen. We may here distinguish between the considered decision to watch a specific item (deliberate watching) as a general tendency across genres, and the specific viewing experience as more or less immersive depending on genres. When Kevin and his buddies decide to watch comedy series, which they regularly do when together, this is a deliberate action, but the viewing experience is not of the kind where Kevin needs to pause the video due to interruptions.

An underexplored feature of online TV concerns the value ascribed to "a show that becomes part of a library in perpetuity" (Lotz 2018, 146), or the "afterlife" achieved "through unprecedented succession of exhibition and consumption 'windows'" (Dunleavy 2017, 11-12). Opportunities to look back and revisit old productions highlight the archival function of on-demand services (Tryon 2015).
For some, such as Anne (57), re-watching old favorites relates to a sense of personal history and cultural heritage. She enjoys going back in time and finding programs that were once part of her life: "I've done that quite a bit. Watched Nitimemordet [crime series from the 1970s] for example. And other old series. And there was this old children's show that was made from my hometown." Whereas reruns of productions have been a central part of linear television schedules (Weispfenning 2003), online TV represents a media ecology where back catalogs are made available for viewer-initiated reruns. Among the participants, re-watching old favorites is common, whether this is revisiting sitcoms such as Friends and The Office; Kristian's (36) and Camilla's (31) annual indulgence into their "ultimate guilty pleasure, Buffy"; or as Erik (27) describes, re-watching The Wire, which he used to follow with his dad. The Wire, being an epitome of the complex serials of the last twenty years (Dunleavy 2017; Mittell 2006), may be considered to warrant re-watching in order to follow and detangle multifaceted story arcs. For Morten (46), one of the participants with the most affection for complex serials, the quality of a production explains his re-watching patterns. He lists numerous favorites and genres that he regularly watches. Yet when he mentions serials such as The Wire, Fargo, and True Detective, he adds that he has watched these "at least twice":

To me quality is that I can watch it again, and discover something new. . . I've watched True Detective two-three times. I still find it fantastic. You discover new patterns all the time. I don't mind watching series that don't talk to me in the same way, but that's more entertainment, and feels more like a waste of time.

While far from all participants are viewers as involved as Morten, sentiments related to time are common. Participants may still watch television to pass time, but deliberate watching ties in with how to spend time prudently. Maria (31), for instance, reckons online TV has made her watch more television, and that time therefore needs to be well spent: "since screen time adds up to quite an amount, I try to be critical to what I'm watching, compared to oh well this is what's on TV2 at the moment."

Programed Paths through Content Libraries

Our notion of self-scheduled deliberate watching denotes watching discrete programs and not the interface experience, or how providers devise paths through content libraries by way of organizing and recommending content. Yet the agency associated with self-scheduling could influence the interface experience, partly explaining why some participants refrain from considering the structure of interfaces and (various levels of) personalization as influencing what they watch. Other participants contest the notion of user agency characterizing the rhetoric of online TV services. Such oppositional interpretations relate to three concerns: how content is organized; the inadequacy of recommendations; and how interfaces appear structured to inhibit diversity of content. We structure this final part of the analysis accordingly, but also include sentiments that help discern the subtleness of programed paths.

Wariness with how content is organized relates to a sense of being steered toward a fraction of available content, corroborating Van Esler's (2020, 7) portrayal of online TV interfaces as shepherding viewers in certain directions.
Metaphorically, most parts of the library are gated off in low-level and hard-to-reach bunkers. Finding content can nevertheless, or exactly therefore, be challenging. Some participants convey irritations, but do not detail much beyond expressing annoyance. Anne (57) laments how "there is a lot to browse through." Erik (27) likewise points to the experience of "swiping through Netflix to find something new to watch, it can be hopeless." Kristian (36), however, addresses similar issues by underlining the control exercised by online TV providers and Netflix in particular:

I think Netflix is a bit too aggressive with categorizing content. It's more difficult to just browse than to be served what Netflix believes you want to watch. . . . What they push you towards in the first ten categories is just a small spectrum of what they have. Which means it's difficult to use Netflix the way I use Tidal [music streaming service] where I end up with weird stuff that other people have never heard of. We rarely do that on Netflix.

Kristian, who has a background in computing, continues by considering why online TV services appear to push viewers toward a small spectrum of content, querying whether this relates to the larger data files involved in streaming video and the need to cache content in servers near end-users. Such domain-specific considerations are evidently not common among the interviewees. However, the organization of content is but one of several centripetal forces shepherding viewers toward the same content. Certain productions are prominently positioned in the television library, but these are often the same blockbusters that serve as common cultural references among peers and in society. Prestigious series need cultural critics to prioritize them, creating what Tryon (2015, 107) terms a "culture of 'just-in-time promotion'." Reflecting back to the analysis of the social and ritual role of television, we add that peers contribute in a similar manner. Markus (36) states that he often watches programs peers recommend: "I've just watched The Man in the High Castle, an Amazon-production. And that was also because someone recommended it to me." Kevin (21) similarly says, "I definitely think something is worth watching if there's a lot of coverage in the media. And if my buddies recommend something, and you hear a lot of people talking about a show."

Two productions, in particular, came across as television "everybody" talked about: Game of Thrones and the Norwegian "teen-drama" SKAM. Both shows were prominently featured on the interfaces of HBO and NRK, but participants express how the buzz surrounding these shows was more influential. Thomas (43) and Ruth (26) stated that they would sign up for HBO to experience for themselves why "everyone" talked about Game of Thrones. Vilde (31), Markus (36), and Erik (27) mention Game of Thrones as the reason for initially turning to HBO. SKAM similarly raised awareness of the NRK player among the younger participants. Bård (24) never used to consider NRK interesting, but then "SKAM happened, and I realized they have some really good programmes. . . . Like they produce content for my generation. SKAM had a positive effect in the sense that I now use the NRK player." It should be noted that SKAM was widely brought up and commended by participants regardless of age, and it certainly represented a shared cultural reference recommended and talked about among peers.
For HBO and NRK, Game of Thrones and SKAM served as flagship productions that attracted viewers to their services. If prime time depicts the scheduling slots of content that appeals to a mass audience, then online TV services similarly organize their libraries with prime shelves. Since prime shelves feature the most popular content, delineating the extent to which decisions on what to watch are influenced by how the interface is structured or by a general buzz around certain programs becomes difficult. Discomfort rather relates to perceptions that there is nothing but prime shelves, or that these are experienced as hiding the full catalog of content.

Relatedly, the extent to which online TV services personalize interfaces and algorithmically recommend content (which varies between service providers) does little to open new paths but is instead experienced as providing more of the same. Some participants consider recommendations in a neutral or affirmative manner. Ruth (26), for instance, notes how she does not "mind recommendations. . . as they are recommended based on what you've watched." Vilde (31) finds personalized recommendations "alright, although they don't always match what I like." Perceptions of personalization and algorithms are hard to discern since they represent invisible frames (Johnson 2019) that often escape people's awareness (Gran et al. 2020; Lüders 2020). Gran et al. (2020) find that the highly educated exhibit higher levels of algorithmic awareness and more critical attitudes toward recommendations. Our study includes few participants with low education (see Table 1), but our aim is to extrapolate analytical generalizations concerning the mechanisms at play. To this end, what some participants do not say is telling. Participants who do not refer to recommendations as influencing their attention might be interpreted as resistant to being guided, or as not "seeing" how these interfaces work.

Other participants recognize how service providers exert control by devising attention-steering paths. Among these, portrayals of resistance are quite common and tie in with reflections on how they are categorized as viewers. As Morten (46) asserts, "When HBO and Netflix try to tell me what I like, I'm like, 'no, I won't, I'm certainly not watching that'. I get a bit cranky; they don't know me, and they try to compartmentalize me." Lars (49) avidly keeps track of algorithmic recommendations in Spotify, but when it comes to Netflix, "I don't think the recommendations are accurate at all. It may be because my oldest son tends to use my profile and hence disturbs everything." In this case, "more of the same" becomes a problem since what is recommended is not calculated based only on the data input from what Lars prefers to watch. However, as Camilla (31) suggests, data input can be contaminated also by one's own viewing patterns:

My Netflix is kind of crazy. Because with programmes we watch together, we usually watch using my husband's profile. Whereas, when my head is really knackered, I watch the Lego Batman Movie. And get recommendations for children's TV. So, I've tried to watch more of what I'm interested in.

Camilla's reaction aligns closely with the if-then structure and the "you-get-the-infrastructure-you-deserve" logic of algorithms (Gran et al. 2020, 14): if she wants more accurate recommendations, then she needs to realign her viewing behavior in order to feed Netflix with better behavioral data.
Since there is no way to backtrack past behavior, for example by signaling what programs should not be considered when predicting recommendations, the only remaining option is to get recommendations back on track by refraining from watching content that would pollute data-input. Discontentment with how content is organized and how recommendations work ties in with perceptions that content libraries are more diverse than what surfaces as immediately accessible. Reflecting on how online TV services tailor interfaces and recommendations to match her viewing patterns, Sara (26) points to how the result is a feedback loop where she is never challenged:

What I miss is like, "try something new," right? It's not the amount of content, which is the problem, it's the sorting of content. Most places have these equations for what you're likely to want, and they give you that so that you'll return. It works, but it also limits what people come across. It's comfortable to get what you expect and what fits with your perspectives. But you lose the opportunity to widen your horizon.

Sara illustrates the tension between the industry's interest in guiding viewers in certain directions (for instance toward prestige shows that generate buzz and promote critical acclaim) and her interest in exploring diversifying paths. Sara's quest for programs that "widen your horizon" can even be seen as an argument for a television schedule and challenges the prevailing market discourse of online TV (Burroughs 2019; Johnson 2019; Tryon 2015). Participants who contest the notion of infrastructures as objective and neutral do not so much react to the control service providers retain in guiding their attention, but rather to how they are categorized as viewers. Sara's reflections align with a yearning to at least be treated as a viewer with complex and unpredictable preferences.

Toward Experiential Affordances

Our conceptual framework and analysis portray how the experiential affordances of watching online TV are relationally contingent on technical materiality, viewer agency, and social context. Our study first suggests that self-scheduled action possibilities for individualized viewing are counterbalanced by the continued social position of television. Viewers tailor online TV situations to mirror linear ways of watching together (Livingstone 2003; Lull 1990). Togetherness extends beyond watching in the company of others, and television retains its social position as a means for bonding and keeping up with what peers watch. Second, viewing practices are characterized by what we term deliberate watching, denoting the considered decision to watch, and sometimes re-watch, specific programs. Whereas deliberate watching applies across genres, viewers additionally adjust their attentiveness to the social setting and what is narratively required. Self-scheduling combined with the uninterrupted viewing experience of online TV situates the viewer in a potentially immersive viewing position well suited for what complex storytelling requires. Third, interfaces and recommendations create paths through content libraries, yet in ways that elicit varied perceptions and reactions: from not considering these paths as influential to explicitly recognizing these mechanisms and adjusting, or resisting adjusting, viewing behavior accordingly. Some viewers hence appear cognizant of the shepherding strategies of online TV providers in directing them along certain paths.
If Williams ([1974] 2003) recognized how broadcasting services sequenced program items as flows with the aim of retaining viewers for an evening of watching television, then at least some viewers demonstrate an awareness of how the programing of services and interfaces devises paths guiding their attention as viewers. In the flow model of scheduled television, the viewing experience is intricately interwoven and almost inseparable from the sequencing of program items (Williams [1974] 2003). For Williams ([1974] 2003), watching television inherently encompassed tensions between choice (selecting another program, channel, or turning off) and flow. Yet, the scheduled flow defined watching television: "even when we have switched on for a particular 'programme', we find ourselves watching the one after it and the one after that" (p. 94). Online TV seems to require separating the viewing experience from the interface experience, at least as analytical categories. The viewing experience relates to discrete programs, and in our study, we sensitize this as deliberate watching. The interface represents a different layer where service providers devise attention-guiding mechanisms when viewers navigate (or are navigated) to find new items to watch.

While our analysis substantiates a continued need to scrutinize how structuring forces persist (Cox 2018; Gray and Lotz 2019; Johnson 2019), we also note how these forces appear interwoven with peer recommendations, to the extent that separating between agency, peer recommendations, and programed paths becomes impalpable. This may seem unhelpful if the aim is to demarcate the power of online TV providers, but it points to the placement of the subject relative to its surroundings. First, the subject placed within a technology-saturated world finds her/himself in-between choice, being in control and being controlled. The flow model of online TV may as such be contingent on viewers transposing self-scheduling agency to a sense of control on the interface level, and by a juxtaposition of programs prominent in interfaces, peer conversations, and popular discourse. Second, the subject as socially and contextually embedded points to the inadequacy of investigating the power of media without including the perspective of audiences and the meanings and values they ascribe to their experiences. By inquiring into the experiences of audiences, this study depicts accounts of social embeddedness, agency gratification, annoyances, and to some extent resistance. These accounts elucidate how neither audiences nor service providers are in complete control. The power service providers have in directing viewers is elusive and relational. Johnson (2019) concludes that online TV services are epitomes of how the interests of citizens and democracies have been replaced by commercial, neoliberal notions of user choice; replacing public service with services designed to enhance the experience for the viewer as a customer. It is consequently noteworthy that some participants reflect critically on how content is organized and object to how they are categorized as viewers. Their objections reflect an ideological stance that services should refrain from addressing viewers merely as easily categorized customers. If considered from the branding rhetoric of the algorithmic audience (Burroughs 2019, 10-11), services that feed on behavioral data position the viewer with the co-responsibility to sway recommendations by acting like a viewer who wants to be challenged.
Ultimately, "improving" recommendations entails "improving" viewing behavior, adjusting to the co-responsibility logic of algorithmic infrastructure (Gran et al. 2020). Positioning the viewer as in control comfortably alleviates service providers from responsibility, and conceals how viewers meet interfaces and mechanisms designed to guide their attention. Our affordance approach does not explicate to what extent programed paths are reflected in what participants watch (which our interview data moreover cannot elucidate). However, our analysis depicts how online TV as materially "the same" for all participants encompasses experiences conditioned by levels of interface and algorithmic awareness. A robust conceptualization of watching online TV hence needs to recognize the material level as well as acting human subjects.
A Novel Approach to Obesity from Mental Function
Abstract
Obesity is well recognized as a serious problem worldwide. Regular exercise and modest food intake are the basic strategies for a healthy body weight. However, it is very difficult to lose weight, and it is even more difficult to avoid weight regain. Recent basic and clinical studies suggest that part of this difficulty might be explained by impairment of the central nervous system due to obesity. Indeed, aspects of mental function such as cognitive impairment, depression, vulnerability to stress, distorted body image, low self-esteem, and dysregulation of hedonic hunger contribute to the development of obesity. The link between such mental disorders and obesity is likely to be bidirectional. Brain inflammation and an imbalance of neuronal plasticity caused by dysregulation of metabolic signals are candidate mechanisms for the mental disorders associated with obesity.
The Mechanisms of Weight Control
Animals, including humans, are able to maintain an almost stable body weight owing to regulatory systems for energy expenditure and food intake. On the other hand, in human society, body shape varies from the extremely thin, as in anorexia nervosa, to the extremely fat, as in obesity. Body weight is determined by an interaction between genetic, environmental, and psychosocial factors. There are physiological periods when humans increase their body weight, such as the growth phase, pregnancy, and aging. Obesity arises when energy intake far exceeds these physiological demands. The recent drastic increase in overweight and obesity is mainly due to decreased physical activity and increased energy intake. Regular exercise and modest food intake are well recognized for weight control. The health and psychosocial benefits of sustained weight loss are well established, even though this knowledge is not sufficient to motivate long-term behavioral change. It is most important for weight loss therapy that motivation for weight control outweighs motivation for food intake. The interaction of the hypothalamus, the classical homeostatic energy regulatory site, with extra-hypothalamic brain areas related to the regulation of emotion, cognition, and reward is central to the regulation of food intake. Vulnerability to stress, distorted body image, low self-esteem, and dysregulation of hedonic hunger, which are determined by these brain areas, contribute to the development of obesity. In this context, such mental function is now recognized as a pivotal player in the management of weight loss therapy.
Mental Aspect of Obesity
In adults, a high prevalence of mental disorders, including cognitive impairment, is observed in obesity [1][2][3][4][5]. Among mental disorders, eating disorders are often comorbid with obesity [6]. In particular, binge-eating disorder is thought to be present in 20-40% of obese patients [6]. Moreover, according to the Diagnostic and Statistical Manual of Mental Disorders (DSM)-IV TR, obesity is categorized as an eating disorder [7]. Some obese individuals are even characterized as having a mental disorder of "compulsive food consumption" similar to drug addiction. A recent functional Magnetic Resonance Imaging (fMRI) study suggests that anorexia nervosa, which might be regarded as the opposite phenotype to obesity, might involve motivation and reinforcement for starving and hedonic responses to hunger [8]. This result suggests that obesity might involve motivation and reinforcement for the consumption of palatable food and fear of hunger, as compared with normal weight participants [9]. These findings suggest the existence of "compulsive food consumption". This "compulsive food consumption" is difficult to modify; even if weight loss is achieved, the neural plasticity "fixed" by palatable food leads individuals to crave more palatable food and thus substantially regain weight. Moreover, a weakened top-down inhibition signal for food cravings and inadequate sensing of ingested nutrients, resulting in the hyperphagia of obesity, have been detected in fMRI studies [10]. Obesity is also associated with an increased risk of developing depression and a higher likelihood of current depression [11][12][13][14]. Obese individuals tend to have higher depression scores, and the projected increase in the rates of overweight and obesity in future years could generate a parallel increase in obesity-related depression. According to the DSM-IV, an episode of major depressive disorder can be classified clinically as depression with melancholic features or depression with atypical features. Unlike melancholic depression, which is characterized by a loss of appetite or weight, atypical depression and seasonal depression decrease activity and increase appetite and weight. Epidemiologic studies have demonstrated that the incidence of cognitive impairment is higher in obese individuals than in individuals with normal body weight [4,5]. In the study of Anstey et al., the risk of cognitive impairment appeared to be highest for those who were underweight or obese in midlife [15]. Increasing evidence suggests that obesity is associated with impairment of certain cognitive functions, such as executive function, attention, visuomotor skills, and memory [4,16]. The link between such mental disorders and obesity is likely to be bidirectional: obesity can lead to mental disorders and, in turn, mental disorders can be an obstacle to the treatment of obesity and the attainment of long-term weight-loss goals, thereby contributing to weight gain [6].
The mechanisms underlying this bidirectional relationship, however, remain largely unknown. According to the results of animal and human studies, brain inflammation and an imbalance of neuronal plasticity caused by dysregulation of metabolic signals are candidate mechanisms that damage neurons and result in the mental disorders associated with obesity [17][18][19].
Obesity and Brain Inflammation
Adiposity is thought to have a direct effect on neuronal degradation [5]. Microglia, macrophage-like cells of the central nervous system that are activated by pro-inflammatory signals causing local production of specific interleukins and cytokines, play a pivotal role in brain inflammation [20]. Experimental studies in animals have confirmed neurologic vulnerability to obesity and a high-fat diet and further demonstrated that diet-induced metabolic dysfunction increased brain inflammation, reactive gliosis, and vulnerability to injury, especially in the hypothalamus [21,22]. Recent studies with animals and humans have shown that other brain structures, such as the hippocampus and orbitofrontal cortex, are also affected [20,23,24]. Anti-inflammatory agents, regular treadmill running, and calorie restriction were reported to be effective in improving these inflammatory changes in mice [22,25,26].
Obesity and Imbalance of Neuronal Plasticity Modulated by Metabolic Signals
To explain the mutual relationship between obesity and mental function, research has focused on the imbalance of neural plasticity caused by dysregulation of metabolic signals. Leptin, an adipocyte-derived hormone; insulin, secreted from pancreatic β-cells; ghrelin, a stomach-derived hormone; and glucagon-like peptide (GLP)-1, secreted from the L cells of the intestinal tract, have turned out to be the main metabolic signals linking obesity and the imbalance of neural plasticity. Leptin is reported to induce an antidepressant-like activity in the hippocampus, which is considered to be an important region for regulation of the depressive state in rodents [27,28]. We previously demonstrated that the development of depression associated with obesity might be due in part to impaired leptin activity in the hippocampus [28]. Given the high comorbidity of metabolic disorders, such as diabetes and obesity, with depression, several lines of evidence suggest that insulin signaling in the brain is also an important regulator in depression related to obesity. Clinical investigations show a relationship between insulin resistance and depression, but the underlying mechanisms are still unclear [29]. Ghrelin also plays a potential role in defense against the consequences of stress, including stress-induced depression and anxiety, and prevents their manifestation in experimental animals [30]. There might be different subtypes of depression which are better treated with leptin, insulin, or ghrelin. Postulated mechanisms by which obesity results in cognitive impairment include the effects of hyperglycemia, hyperinsulinemia, poor sleep with obstructive sleep apnea, and vascular damage to the central nervous system [31,32]. In animal studies, chronic dietary fat intake, especially saturated fatty acid intake, contributes to deficits in hippocampus- and amygdala-dependent learning and memory in rodents with diet-induced obesity through changes in neuronal plasticity [33,34]. Several lines of electrophysiological and behavioral evidence demonstrate that leptin and insulin enhance hippocampal synaptic plasticity and improve learning and memory [32,35].
Therefore, it is likely that impairment of the actions of leptin or insulin might be attributable to cognitive deficits in obesity and diabetes mellitus [36,37]. Through both direct and indirect actions, leptin and insulin diminish perception of food reward-the palatability of food-while enhancing the response to satiety signals generated during food consumption that inhibit feeding and lead to meal termination. By contrast, ghrelin enhances hedonic and incentive responses to food-related cues [38]. Orexin signaling is required in these ghrelin's action on food reward [38]. Ghrelin is also reported to mediate stress-induced food-reward behavior in mice [39]. GLP-1 is turned to be an important player in reward from animal studies. Recently, GLP-1 analogue liraglutide in addition to an energydeficit diet and exercise program, led to a sustained, clinically relevant, dose-dependent weight loss in human [40]. This successful result might arise, at least in part, from improvement of dysregulation of reward circuit. In obesity, dysregulation of these metabolic signals might change neural plasticity in many brain regions resulting in behavioral change. Literature reviews and numerous empirical studies which described significant improvements in psychosocial functioning after bariatric surgery support these ideas [41]. Remarks Mental aspect of obesity has been catching light very recently. To assess and treat mental aspect of obesity was only vaguely recognized so far. Being overweight and obesity might be a phenotype of overadaptation for coping with continuous dynamic metabolic changes to protect brain. Such over-adaptation via dysregulation of brain inflammation and imbalance of neural plasticity might result in mental disorders. Clinical studies suggest that mental disorders associated with obesity can be reversible by body weight loss therapy [42][43][44]. We need clinical prospective data on how body weight, adiposity and muscle mass correlate with brain inflammation and imbalance of neural plasticity, eventually mental functions.
The analysis of landslide preparedness on senior high school students in karanganyar regency, central java Preparedness is a part of the disaster management process. It is closely related to the risks emerged by disasters. Landslide is a routine disaster which always occurs every year in Karanganyar. The research is set to determine the preparedness of landslide of Senior High School students in Karanganyar. It is descriptive qualitative research using survey method in collecting data. The results reveal that the percentage of students who have good preparedness in facing landslide is still very low. It can be seen from the percentage, 25.5% of students are less ready and 21.8% of them are even not ready. Conversely, 3% of students are very ready and 18.8 of them are ready, while 30.8% of other students are almost ready. This case shows that the capacity of Senior High School students in Karangnyar is still very low in facing landslide which occurs almost every year in Karanganyar. Introduction Indonesian territory is crossed by zero degrees latitude or equator. Regarding the division of climate in the world, the area crossed by the equator has a tropical or hot climate. It means the area gets more solar radiation than areas that are not crossed by the equator. Having such a hot or tropical climate, the division of seasons in Indonesia only consists of two seasons, namely the rainy season and the dry season [1].The existence of these two seasons is characterized by changes in temperature, weather, and quite extreme wind. Besides, the land of Indonesia is very fertile because of its topographic conditions where the surface and the rock are diverse, both physically and chemically [1]. On the contrary, this condition can cause some adverse consequences for humans, for instance, the occurrence of hydro-meteorological disasters such as floods, landslides, forest fires, and droughts [2]. Human knowledge about the environment varies in various levels. It can negatively affect the natural environment if human has no or little understanding about environment [3]. Along with the development of time and increasing human activity, environmental damage tends to get worse and trigger an increase in the number of occurrences and intensity of hydro-meteorological disaster [4] (floods, landslides, and droughts) which occur alternately in many regions in Indonesia [5]. Based on data from the Indonesian National Board for Disaster Management/BNPB, there have been 1,304 disasters ranging from 01-01-2019 to 31-03-2019. There are 649,654 affected victims, 367 people die and disappear, and 1,385 people are injured. This disaster causes damage to existing houses and public facilities [5]. The largest number of disasters is in Java, especially Central Java. Karanganyar is one of the districts located in Central Java which has a considerable vulnerability to natural disasters such as landslides, hurricanes, floods, fires, land movements, collapsed houses, and past accidents [6]. Based on the data released by the BPBD Karanganyar Regency from January Attitude and concern for disaster risk Policies, relevant regulations (Regional regulation and Decree) • Types of preparedness policies to anticipate natural disasters. 
• Relevant regulations (Regional regulation and Decree) Emergency Planning • Plans to respond to emergencies • Evacuation plans, including locations and evacuation sites, maps, routes, and evacuation signs • First-aid plan, rescue, safety, and security in the event of a disaster • Plan for fulfilling basic needs Results and Discussion Preparedness in facing disasters can be measured using preparedness parameters. Preparedness parameters are employed to simplify the measurement of individual preparedness. Based on the Disaster Preparedness Framework issued by LIPI and UNESCO [22], the preparedness of students in facing disasters can be measured by several parameters 1) Knowledge and attitude, 3) Emergency Planning, 4) Early Warning Systems, and 5) Resource Mobilization Capacity. The following are the results of research that has been conducted in three State Senior High Schools in Karangnyar Regency. Landslide Preparedness of State Senior High School Jumapolo Based on the results of the study, the preparedness of the State Senior Hight School Jumapolo students as a whole is almost ready with a percentage of 55%, while there are only a small percentage of students who are ready to face the landslide disaster. It can be seen in the score percentage of the following graph: Figure 1. Landslide Preparednes of SHS Jumapolo Based on the results of the analysis, the largest percentage of students' readiness in Jumapolo is in the almost ready and ready class. This is influenced by several factors including: 1) The landslides that often occur in the students' surrounding environment make them familiar with disasters and ready to face them. 2) The condition and position of the school that is in a slope or hilly area. Landslide Preparedness of State Senior High School Kerjo Based on the results of the study, the preparedness of SMA N Kerjo students as a whole is almost ready. Meanwhile, there are only a small percentage of the students who are ready to face the landslide disaster. This can be seen in the score percentage of the following graph: Based on the analysis, the largest percentage of students' readiness in SMA N Kerjo is in the almost ready class and the percentage of students who are ready and less ready in facing landslides. This is influenced by several factors including: 1) The landslides that often occur in the students' surrounding environment make them familiar with disasters and ready to face them. 2) Diverse dwellings and environmental conditions around schools that are quite vulnerable, as well as access to schools that pass through hills that is quite vulnerable to landslides. 3) The condition and position of schools that are in a slope or hilly area. Landslide Preparedness of State Senior High School Karangpandan Based on the results of the study, the preparedness of SMA N Karangpandan students as a whole is not ready. There are only a small percentage of the students who are ready to face the landslide disaster. This can be seen in the score percentage of the following graph: Figure 3. Landslide Preparednes of SHS Karangpandan Based on the analysis, the biggest percentage of students' readiness in SMAN Karangpandan is in the not ready and less ready class. This is influenced by several factors including: 1) The location of the school is in a fairly gentle slope and landslides are very rare, 2) The condition of schools that are already adequate in terms of facilities and infrastructure makes students careless and lacks of personal preparedness in facing disasters. 
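For readers who want to reproduce the classification step, the sketch below shows one way a composite preparedness index could be mapped onto the five categories used in this study (very ready, ready, almost ready, less ready, not ready). The cut-off values are assumptions for illustration only, loosely following the LIPI/UNESCO preparedness index convention cited above; they are not reported in this paper.

```python
# Hypothetical sketch: mapping a preparedness index (0-100) to the five
# categories used in the study. The cut-off values are assumptions for
# illustration and are not taken from the paper itself.
def preparedness_category(index: float) -> str:
    assumed_cutoffs = [
        (80, "very ready"),
        (65, "ready"),
        (55, "almost ready"),
        (40, "less ready"),
    ]
    for lower_bound, label in assumed_cutoffs:
        if index >= lower_bound:
            return label
    return "not ready"

if __name__ == "__main__":
    # Example: a student scoring 58 would fall in the "almost ready" class.
    print(preparedness_category(58))
```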
Landslide Preparedness of State Senior High School in Karanganyar Based on the results of the study, the preparedness of the State Senior High School students in Karanganyar shows that most students are still in the almost ready category. Besides, there is only a small percentage (3%) of students who are ready to face landslides. This can be seen in the score percentage of the following graph: Figure 4. Landslide Preparedness of SHS in Karanganyar. The following is a comparison of students' preparedness in facing landslides in each school: Figure 5. Landslide Preparedness of SHS in Karanganyar. In the graph above, it can be seen that the school which has the best preparedness compared to the other schools is SMA N Kerjo, while the school which has the worst preparedness is SMA N Karangpandan. This is marked by the "not ready" category having the highest percentage. This case is certainly inseparable from a variety of factors, especially the knowledge and attitude indicator, because the school is one of the institutions that have a large influence on the attitudes and behavior of students, including Senior High School students. In a school, the knowledge of preparedness given by teachers will affect the attitude of students in taking action when a disaster occurs. As stated by Notoatmodjo (2007) in Aprilin et al. (2018), an increase in the knowledge possessed by an individual will correlate with an increase in the behavior of that individual [1]. This case shows that in a school, teachers have an important role in providing knowledge about disaster preparedness because it will become the main reason for someone to carry out protection activities or preparedness efforts [13]. Preparedness knowledge provided by the teacher must be relevant to the condition of the region so that the students have a better understanding of it. Ultimately, it is expected that it can improve their preparedness attitude. Based on the data above, it is revealed that SMA N Jumapolo has a higher level of preparedness compared to the other two schools, even though the three sub-districts where the schools are located have a moderate level of vulnerability [6]. Preparedness certainly cannot be separated from the occurrence of landslides in Jumapolo, as stated by the Head of Jumapolo Subdistrict, Mr. Sri Suboko, at SMA 1 Jumapolo. He said that almost every year, there are always landslides in four villages in Jumapolo, namely Jumantoro Village, Kadipiro, Kedawung, and Giriwondo [7]. However, Karangpandan Subdistrict is also a potential landslide area when there is rain of high intensity and long duration, along with several other sub-districts: Tawangmangu, Ngargoyoso, Matesih, Jatiyoso, and Jenawi. Regional conditions greatly influence the knowledge and ways of thinking of society, including Senior High School students in the age range of 15-18 years. This is the age at which a person's cognitive domain reaches a high level, namely in making plans, deciding on strategies and decisions, and problem-solving. The right plans and strategies must be taken when a landslide occurs, which is called an emergency plan, namely the stages of preparing effective and efficient actions during a disaster [23]. This is in line with the statement by MPBI/UNESCO (2007) in Tirtana and Satria (2018) that knowledge is always the beginning of a person's action and awareness.
So it is expected that the students have good preparedness by having maximum disaster knowledge capacity. The emergency plan is related to the evacuation, relief and rescue process to minimize the number of disaster victims [22]. For example, is by preparing emergency equipment in the form of items needed by the victim. It aims to reduce the impact they feel, such as first aid kits, flashlights, dry food, or food reserves, drinks, a collection of important telephone numbers, duplicate house keys, copies of important documents, and others [24]. For other parameters, related to policies or regulations concerning disaster, early warning systems, and resources mobilization. These are broader parameters, it means that these parameters emphasize government efforts along with institutions dealing with disaster issues (BPBD Karanganyar Regency) in making policies or regulations, for example, the prohibition on building houses in areas with steep slopes. Karanganyar BPBD chief executive, Bambang Djatmiko states that the EWS (Early Warning System) had been installed in several sub-districts to increase disaster risk [15]. Early warning systems include warning signs and information distribution. However, the most important thing is a good system which can be understood by the community, so that the community knows what to do when there are warning signs of a fire [24]. In this case, the school also has an important role to make students understand the signs of disaster. It can be implemented by conducting simulations/training in schools, especially in SMA N Karangpandan, which currently has a relatively low level of preparedness. In 2017, monitoring of four villages is still carried out conventionally or because it does not have an early warning system (EWS) in Jumapolo [6]. Resource mobilization capacity also includes preparedness parameters because it relates to various existing resources to restore or prepare an emergency condition [22]. Parameters certainly include the planned authority of the institution, not by each individual. However, Senior High school students also need to be informed about this, so that they know the needed resources by individuals (themselves) and the surrounding community in their recovery efforts. Ultimately, they can take the right steps or actions when a landslide disaster occurs [7]. Conclusion Preparedness is the basis of efforts to reduce disasters risk. The level of disaster is determined by the disaster potentiality and preparedness in facing disasters. The high risk of existing natural disasters can be reduced by increasing the capacity of the affected communities. The school which has the best preparedness compared to the other schools is SMA N Kerjo, while the school which has the worst
PySTPrism: Tools for Voxel-based Space–Time Prisms PySTPrism: Tools for Voxel-based Space–Time Prisms The observed movements of humans and animals are realizations of complex spatiotemporal processes. Recent advances in location-aware technologies have rendered trajectory data ubiquitous. Examining the sequenced, instantaneous locations found in movement trajectory data for information reconstructing the location or state of the mover between observed points comprises a primary focus in Time Geography and related disciplines. The PySTPrism toolbox introduced in this paper provides a straightforward and open-source implementation of the Probabilistic Space Time Prism, in addition to related tools from Time Geography. PySTPrism is implemented in Python using the ArcPy module in ArcGIS Pro Desktop. Motivation and significance The observed movements of humans and animals are realizations of complex spatiotemporal processes [1,2]. Movement captured as sequences of time-stamped locations, termed trajectory datasets, may be conceptualized as a complex signal reflecting the decisions, context, and internal states affecting the mover [3,4]. With recent advances in location-aware technologies, trajectory data has become ubiquitous. Examining the sequenced, instantaneous locations found in movement trajectory data for information reconstructing the location or state of the mover between observed points comprises a primary focus in Time Geography and related disciplines [5,6]. Here, time-geographic questions concerning accessibility, utilization of space, and spatiotemporal interactions among movers and the environment have been proven relevant to biological, ecological, and sociological inquiries, as well as conservation and planning studies, and to some extent virtual reality and cybernetics. In these studies, trajectory data have been analyzed from a time-geographic perspective towards clearer understandings of animal interactions [7], habitat use [8,9], planning for conservation efforts [10][11][12] and human accessibility to transit systems [13] or spatiotemporal positioning across networks [14,15] and positioning in virtual environments [16]. Part of Hägerstrand's original framework for time geography as a discipline [17], the Space-Time Prism approach and its derivatives represent a common methodological factor among many of the studies mentioned here, in efforts to quantify the constrained and uneven movement opportunities available to humans and animals traversing through space. Alternatively, a range of methods analyze trajectory data without specific application of the Space-Time Prism, instead examining and summarizing the parameters of trajectories towards insight about the moving objects that produced them. Examples include temporally-aware variants of kernel density estimators [18], methods which decompose trajectories into symbolic sequences [19], methods which characterize trajectories in terms of their tortuosity [20], compare observed trajectories to those simulated in random walks [21], and compare observed trajectories to one another for similarity [22,23]. Additionally, trajectories may be analyzed for clusters [24], and the examination of single moving objects within a larger group [25]. While all of these alternatives consume movement data as input, their focus is not immediately on the bounding and evaluation of movement opportunity in space. 
The Space-Time Prism (STP) is a constraints-based approach used to delineate and characterize the movement opportunities available to a moving object in terms of space, time and velocity [26,27]. Given information describing observed starting and ending spatiotemporal point locations for a moving object (also known as space-time anchors), the time elapsed between anchors, and an estimate of the moving object's maximum attainable speed, the classical STP calculation constructs a bounding volume capturing the set of possible locations the object would have been feasibly able to visit over the course of its travel along its spacetime path between anchors. Essentially, a particular location is included in the prism if it could have been visited by the object traveling between anchors, given a movement budget defined in terms of speed and available time (Fig. 1). The set of locations accessible to an object at a particular instance in time is termed a space-time disk. Infinitely many disks may comprise a spacetime prism, and taken individually, disks exhibit their greatest area at the midpoint between anchors, and converge to zero area at the anchors (where object position is relatively certain). Eq. (1) describes the classical STP calculation applied to voxels. Voxels (volume-elements) are discretized positions in both space (X and Y) and time (Z), having a fixed spatial and temporal resolution [8,28]. From a computing perspective, voxels offer an effective simplification of the concepts and calculations involved in constructing the space-time prism easily represented as raster data. where: ∥x x − x x ∥ is a Euclidean distance calculation between the current voxel centroid location x a and either the starting x i , or ending x j , space-time anchor locations in a given anchor pair. t a is the temporal midpoint associated with x a , and (t a − t i ) s ij , ( t a − t j ) s ij expressions calculate the distance the object could have successfully traversed between the anchors, given the time elapsed and remaining between x i and x j , respectively, considering the object's expected maximum speed, s ij . The classical STP is of course limited in that accessibility is defined as a binary condition. The literature recognizes movement opportunities to be uneven within the STP volumes, with influences such as context, behavior, and anchor positional uncertainty contributing to this variation [1,8,29]. Extensions to the classical STP abound in the literature as a response to this limitation, including methods which apply random walks [30], Brownian bridges [31], kinematic constraints [32], awareness of inhomogeneous mover context [29,33], spatial interpolations functions [8] and hybrid methods such as behavioral-contextual agent-based simulation [1]. PySTPrism is a GIS toolbox for generating and generalizing space-time prisms for moving objects, and forms the impetus for this document as its introduction. The toolbox contains 4 tools, including tools generating two respective space-time prism variants, a means to combine prism results at the disk level, and a trajectory data pre-processing tool. The prism generation and generalization implementations released with this PySTPrism toolbox were used directly to generate probabilistic space-time prism and probability surface results as-presented in a range of peer-reviewed studies concerning animal movement, interaction, habitat use and conservation planning [7][8][9]11,12]. 
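To make the voxel test behind Eq. (1) above concrete, the sketch below restates it as a small Python function: a voxel is inside the classical prism if it lies within reach of the first anchor given the time already elapsed and within reach of the second anchor given the time remaining. This is a minimal illustration of the classical prism condition as described in the text, not the PySTPrism implementation itself; the function name, variable names, and example coordinates are placeholders.

```python
import math

def in_classical_stp(xa, ta, xi, ti, xj, tj, s_ij):
    """Classical space-time prism test for one voxel (a minimal sketch).

    The voxel centred at xa with temporal midpoint ta is reachable if the
    object could travel from anchor xi (time ti) to xa and still reach
    anchor xj (time tj) without exceeding its maximum speed s_ij.
    """
    d_from_start = math.dist(xa, xi)   # Euclidean distance from the first anchor
    d_to_end = math.dist(xa, xj)       # Euclidean distance to the second anchor
    reachable_from_start = d_from_start <= (ta - ti) * s_ij
    can_still_reach_end = d_to_end <= (tj - ta) * s_ij
    return reachable_from_start and can_still_reach_end

# Example: midpoint voxel of a 100 m, 100 s leg travelled at up to 1.5 m/s.
print(in_classical_stp((0.0, 50.0), 50.0, (0.0, 0.0), 0.0, (0.0, 100.0), 100.0, 1.5))  # True
```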
As a result of conversations with colleagues from disciplines outside computational movement analysis, the Authors resolved to package and release this implementation as the PySTPrism toolbox. The wide applicability of Hägerstrand's initial construction, and Downs' straightforward probabilistic extension of the space-time prism concept are still being realized in efforts far removed from the initial research questions treated by these methods. PyST-Prism promotes the analysis of movement across disciplines by rendering foundational methodologies more accessible to all researchers. In the following sections, the methods operationalized in PySTPrism are described and demonstrated using an abstract sample trajectory tracking an imagined moving object over a 200second duration, having space-time anchor captures at 100 s intervals, each of which represent locations 100 m apart in an appropriate coordinate space. Software description As an open-source extension to the popular ESRI ArcGIS Pro desktop application, PySTPrism seeks to provide researchers from a range of disciplines with a simple means to construct spacetime prisms for their respective research targets. Software architecture Four tools are contained in the PySTPrism toolbox. These include the space-time prism generators Generate Probabilistic Voxel Space-Time Prism and Generate Voxel Space-Time Prism, along with a probabilistic aggregation function for space-time disks, Calculate Probability Surface, and a data pre-processing tool, Save Pre-Processed Trajectory. The toolbox is implemented in Python 3.x, relying on the ArcPy module, a general application programming interface exposing functionality inherent to ArcGIS Pro. Additionally, some functionality in the PySTPrism toolbox leverages operations requiring an Advanced license level for Ar-cGIS Pro, as well as licensure for the Spatial Analyst extension. The tools can be accessed interactively through their respective geoprocessing tool GUIs in ArcGIS Pro, or programmatically as python objects within or outside of an ArcGIS Pro session. The tools packaged with PySTPrism accept input trajectory data as vector point feature classes subscribing to a projected coordinate system, and expect each point to carry a corresponding timestamp attribute value held in a DATE type field. PySTPrism returns raster datasets representing space-time disks as the interchange format for results. Software functionalities 2.2.1. Generate Probabilistic Voxel Space-Time Prism The Generate Probabilistic Voxel Space-Time Prism tool is discussed first in this document as it is the main tool implemented in the PySTPrism toolbox. Generate Probabilistic Voxel Space-Time Prism operationalizes the voxel based probabilistic space-time prism (PSTP) approach introduced in Downs, (2014) [8]. The PSTP approach exposed in PySTPrism serves as a reference implementation for Downs' PSTP and has been directly employed in its current and earlier iterations to several studies concerning animal movement and interaction in space and time [7][8][9]11,12]. Generate Probabilistic Voxel Space-Time Prism accepts a reference (path-to-data) to a point feature class representing the observed space-time anchors from a moving object trajectory and the name of the field recording anchor timestamps. 
The tool returns a file geodatabase at the User's choice of location containing a series of raster datasets each representing a probabilistic voxel space-time disk comprising the prism, where each disk's Z -axis height is synonymous with the duration of time it represents. In short, Generate Probabilistic Voxel Space-Time Prism extends the operation shown in Eq. (1) to include application of a distancedecay function assigning visit probabilities, P ( STP xa ) , to voxel locations (Eq. (2)). where: ∥x x − x x ∥ is a Euclidean distance calculation between the current voxel centroid location x a and the intersection point x s of its host space-time disk k, and the space-time path. Additional parameters accepted by Generate Probabilistic Voxel Space-Time Prism include: the desired interval of time (Z, in seconds) each prism disk raster is meant to represent, the cellular (X/Y) resolution for the disks (in map units, inherited from the input trajectory's projected coordinate system), the velocity multiplier, a factor capturing the notion that an observed moving object may not have been moving at its top speed while tracked, and expand edges, a multiplier that expands the analysis extent for raster outputs, avoiding situations where disk values are truncated at the edge of the analysis extent. Care should be exercised while selecting disk temporal interval, disk cell size, velocity multiplier and expand edges factor. Trade-offs between temporal and spatial resolution should be considered as overall computational effort constructing the prism is sensitive to these parameter decisions. Users should consider the overall scale of the observed movement trajectory and the scale of the relationship or phenomena they are exploring as expressed by the trajectory in selecting these parameters. Values for velocity multiplier should be set with consideration of known movement characteristics of the subject. For example, the trajectory observed for a duck meandering through an urban greenspace setting is not generally indicative of the animal's top speed. Increasing the velocity multiplier in this situation would help capture the capability of the animal. Finally, computational effort is sensitive also to the expand edges parameter, as this parameter effectively multiplies the amount of disk cells undergoing PSTP calculation. Generate Voxel Space-Time Prism The Generate Voxel Space-Time Prism (VSTP) tool provides an implementation of Hägerstrand's [17] classical STP (Section 1), discretized on the basis of voxels per the early work of Huisman and Forer [28]. VSTP represents the foundational logic and approach from which PSTP was developed, and is included in PySTPrism as both a reference implementation for the classical STP and a tool applicable in situations where a binary measure accessibility is the target (for example, in the simple alibi query). The interface exposed in PySTPrism is identical for the VSTP and PSTP tools, and both are parameterized identically. The main difference between VSTP and PSTP is the addition of a distancedecay approach in PSTP, assigning a visitation probability to each voxel. The VSTP relates spatiotemporal accessibility in terms of a binary 1/0 result assigned to voxels comprising raster space-time disks. Calculate Probability Surface The Calculate Probability Surface (CPS) tool performs the probabilistic OR operation Eq. (3) across an arbitrary number of input space-time disks. 
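One plausible reading of the probabilistic OR in Eq. (3) is the inclusion-exclusion form P(A or B) = P(A) + P(B) - P(A)*P(B), applied per voxel under an independence assumption and folded over the sequence of disks; the recursion over the disk sequence is spelled out in the next paragraph. The NumPy sketch below illustrates that reading; it is not the toolbox code, and raster input/output (for example via ArcPy) is omitted.

```python
import numpy as np

def probabilistic_or(disk_a: np.ndarray, disk_b: np.ndarray) -> np.ndarray:
    # Inclusion-exclusion form of the probabilistic OR, assuming the two
    # disks give independent per-voxel visit probabilities:
    # P(A or B) = P(A) + P(B) - P(A) * P(B)
    return disk_a + disk_b - disk_a * disk_b

def probability_surface(disks: list) -> np.ndarray:
    # Fold the pairwise OR over the whole disk sequence, mirroring the
    # recursive aggregation the CPS tool is described as performing.
    surface = disks[0]
    for disk in disks[1:]:
        surface = probabilistic_or(surface, disk)
    return surface

# Example: three aligned 2 x 2 disks with per-voxel visit probabilities.
disks = [np.array([[0.2, 0.0], [0.5, 0.1]]),
         np.array([[0.1, 0.3], [0.4, 0.0]]),
         np.array([[0.0, 0.0], [0.6, 0.2]])]
print(probability_surface(disks))
```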
Given a sequence of inputs, the base calculation involves obtaining the OR result from the first (A) and second (B) raster space-time disks in the series, with the result applied recursively as (A) in the subsequent calculation, and so on until the sequence is exhausted. CPS represents a means to aggregate the information held in either binary (VSTP) or probabilistic (PSTP) space time disks either serially (in sequence, with disks taken from a single prism), or laterally (among disks representing the same timeframe, taken from separate prisms). The CPS calculation is carried out on the basis of voxel centroid locations. The CPS approach has been integral to several studies, including those exploring animal interaction with the built environment [12], and conservation planning studies seeking to place animal crossing structures optimally [11]. In these studies, aggregate understandings of larger-scale movement processes (as captured by CPS) were compared with contextual factors in the study area towards answering conservation and planning questions. The CPS tool accepts a list of raster datasets (space-time disks) as input, and returns a single raster reflecting the CPS result. The CPS result is saved to the User's choice of geodatabase container per the value entered in the Output Geodatabase and Output Probability Surface Raster parameters of the tool. Users should ensure that input space-time disks share a consistent spatial and temporal resolution (voxel X /Y /Z dimensions). At this time, CPS provides no validation or check asserting that input spacetime disks share the same spatial and temporal resolution, etc. Comparison of mismatched disks in terms of resolution, or disks which are out-of-sequence in lateral disk aggregations can render results difficult to interpret or meaningless overall. Save pre-processed trajectory The Save Pre-Processed Trajectory tool is a pre-processing tool for point feature classes representing trajectory data. The operations encapsulated in the Save Pre-Processed trajectory tool happen automatically as part of the VSTP and PSTP routines, however this tool allows users to examine the movement parameters of the input prior to calculating prisms. The Save Pre-Processed trajectory tool performs calculations adding fields recording the offset distance (in map units), time elapsed (in seconds) and velocity (in map units per second) observed between fix locations present in the feature class supplied as the Input Point Features argument. The pre-processed result is saved as a separate copy of the Input Point Features at the User's choice of file geodatabase location. This tool is meant as an assist for initial exploration of the movement characteristics captured in a timestamped point pattern. Movement characteristics (distance, time elapsed, velocity) calculated between any two sequential trajectory fix locations A and B are written to fields associated with fix A. The final point location in the trajectory will reflect 0 for movement characteristics values. Illustrative examples The following demonstrates use of the Generate Probabilistic Voxel Space-Time Prism from the toolbox GUI interface and examines the result in terms of the PSTP methodology. In the following demonstration, an abstract sample trajectory having a total duration of 200 s, consisting of 3 space-time anchor locations, where each subsequent anchor location is 100 m from the previous location will be used (Fig. 2, Left). 
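As a quick illustration of the per-fix quantities the Save Pre-Processed Trajectory tool reports (offset distance, time elapsed, velocity), the sketch below applies them to the abstract 200-second, three-anchor trajectory just introduced. The coordinates are placeholders consistent with the 100 m spacing between anchors; this is not the toolbox code, which operates on point feature classes through ArcPy.

```python
import math
from datetime import datetime, timedelta

# Placeholder anchor coordinates consistent with the 100 m spacing and
# 100 s interval described above (the actual geometry is shown in Fig. 2).
t0 = datetime(2020, 1, 1, 0, 0, 0)
fixes = [
    ((0.0, 0.0), t0),
    ((0.0, 100.0), t0 + timedelta(seconds=100)),
    ((70.7, 170.7), t0 + timedelta(seconds=200)),
]

# Per-fix movement parameters, as described for the pre-processing tool:
# offset distance, time elapsed, and velocity to the next fix.
for (xy_a, t_a), (xy_b, t_b) in zip(fixes, fixes[1:]):
    distance = math.dist(xy_a, xy_b)            # map units
    elapsed = (t_b - t_a).total_seconds()       # seconds
    velocity = distance / elapsed               # map units per second
    print(f"{distance:.1f} m over {elapsed:.0f} s -> {velocity:.2f} m/s")
# The final fix would carry zeros for these values, as noted above.
```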
The progression of this abstract object's trajectory through space is artificial, with direct northward movement for the first 100 m and 100 s, followed by a 45-degree turn to the northeast for the remaining 100 m and 100 s. Given a point feature class representing this or any applicable trajectory, the user first selects appropriate values parameterizing the desired spatial and temporal resolution of the results, adjustments for object speed and analysis extent, as well as a target location on-disk for the results. Per the selections reflected in Fig. 2 (right), the user has specified for space-time disks comprised of 10 × 10 m cells with a temporal interval (the duration in time which the disk captures, alternatively, its voxel height) of 20 s. Additional attenuation of the results is achieved by setting the ''velocity multiplier'' and ''expand edges factor'' parameters introduced in Section 2.2.1. The results geodatabase stored at the directory location specified in ''Output Folder for Prism FGDB'' contains a series of probabilistic space time disks which may be visualized in context or used in subsequent analyses (Fig. 3). Here, each respective probabilistic space-time disk, stored individually as raster datasets, represents a 20-second duration of occupancy probability for the tracked object, between 12:00:00 AM 1/1/2020 and 12:03:20 AM 1/1/2020. Returned space-time disks are exclusive of bounds at space time anchors, as disk areas converge to zero at known or observed space-time anchor locations. This is because it is assumed there exists no positional uncertainty to be related or estimated at these observed locations, where object location is known. With respect to the visualization shown in Fig. 3, cell locations with darker hues correspond to a higher probability of occupancy for the moving object over the disk interval at those cell locations. Discretized to 20 s intervals, an overlay of 8 PSTP disks modeling the 160 s of uncertainty between the start and terminal space-time anchors comprise the probability mass depicted in Fig. 3. Additional examples outlining usage of PySTPrism can be found in the user documentation bundled with the tool repository. Impact The functionality exposed in PySTPrism enables researchers and analysts from a variety of disciplines and industries to examine moving object trajectories using space-time prism approaches. Recent advances in location-aware technologies demonstrate a clear need for widely applicable methods analyzing movement data. PySTPrism provides an accessible GUI interface to proven methods from Time Geography using the Ar-cPy API driving the very popular ArcGIS Pro desktop GIS software. The PySTPrism toolbox is distributed freely under a permissive MIT open-source license. The terms of this license encourage both academic and commercial application of the PySTPrism feature set while simultaneously inviting collaboration from all parties on future versions of the toolbox. For these reasons, PySTPrism represents a significant reduction in the barrier to entry and requisite knowledge necessary to apply space-time prism methodologies to an unbounded class of research questions dealing with the movement process. PySTPrism seeks to proliferate space-time prism analysis among disciplines and industries yet to consider the time-geographic perspective on their data. Possible applications for PySTPrism in disciplines removed from computational movement analysis are numerous. 
These range from emergency response studies, where for example missing-persons searches or readiness planning for medical emergencies at large or festival events may be aided by application of PSTP, to marketing studies interested in the potential paths of shoppers through markets. The Authors of PySTPrism expect the toolbox will be immediately beneficial to studies adjacent to or closely related to computational movement analysis. For example, forthcoming work examining the habitat use of Amazonian Black Skimmers (Rynchops niger cinerascens) using PSTP and CSP, as-implemented in the toolbox, represents a collaboration between PySTPrism authors and avian ecologists examining the species. Additionally, PySTPrism as a reference implementation of the PSTP methodology encourages reproducibility in results within and between researchers, analysts, journalists etc. The straightforward construction of voxel-based space-time prisms from raster space-time disks also encourages extension of the methods included so far in PySTPrism, inviting alternative distance-decay functions for use in PSTP, or alternative aggregation approaches extending or altering CSP. Extensions incorporating contextual influences on movement from field-based time geography [29,33] are also invited as extensions to PySTPrism. Conclusions Voxel-based space-time prisms provide a computationally accessible and easy-to-interpret characterization of accessibility and space utilization for moving objects over time. The PyST-Prism toolbox makes voxel-based space-time prism methods available to a wide audience, encouraging reproducible application of space-time prisms to moving object trajectories of interest to a range of disciplines and industries. The toolbox materials are distributed under a permissive MIT open source license and have been implemented in Python 3.6 using the ArcPy interface to ArcGIS Pro 2.4 or newer versions. The functionality present in the toolbox may be accessed interactively using GUI interfaces to the included operations, or programmatically using Python's import routine, exposing the PySTPrism operations as Python objects. The toolbox provides four tools, two of which construct variants of the voxel-based, space-time prism, one which aggregates prism results on the basis of visit probability, and one tool used for basic trajectory data exploration. Data interchange formats and related considerations for the toolbox are intentionally simple and handled in terms of ESRI (ArcGIS Pro) data formats. Tools present in PySTPrism drive existing and forthcoming research examining the movement trajectories of animals for new knowledge about their habitat use patterns and interactions with their environment and each other. The release of PySTPrism encourages the widespread application of proven methods from time geography on new research questions. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Degradation Mechanism of 2,4-Dichlorophenol by Fungi Isolated from Marine Invertebrates 2,4-Dichlorophenol (2,4-DCP) is a ubiquitous environmental pollutant categorized as a priority pollutant by the United States (US) Environmental Protection Agency, posing adverse health effects on humans and wildlife. Bioremediation is proposed as an eco-friendly, cost-effective alternative to traditional physicochemical remediation techniques. In the present study, fungal strains were isolated from marine invertebrates and tested for their ability to biotransform 2,4-DCP at a concentration of 1 mM. The most competent strains were studied further for the expression of catechol dioxygenase activities and the produced metabolites. One strain, identified as Tritirachium sp., expressed high levels of extracellular catechol 1,2-dioxygenase activity. The same strain also produced a dechlorinated cleavage product of the starting compound, indicating the assimilation of the xenobiotic by the fungus. This work also enriches the knowledge about the mechanisms employed by marine-derived fungi in order to defend themselves against chlorinated xenobiotics. Introduction The progress of industrialization and increase in various human activities increased the use of various chemicals in various consumer products, drugs, pesticides, food additives, fuels, and industrial solvents. Pollution of air, water, and soil can occur as a result of the improper disposal of said chemicals [1]. There is recently heightened concern among policymakers and scientists with regard to the effects of human and wildlife exposure to chemical compounds in the environment, particularly the aquatic environment [2]. Application of pesticides, insecticides, and herbicides constitutes the main source of water pollution with phenolic compounds through an agricultural source. Table 1. Percentage of 2,4-dichlorophenol (2,4-DCP) removal in resting-cell reactions after 10 days for all isolated fungal strains, which were identified based on their internal transcribed spacer (ITS) sequence. Information (region and depth) about the invertebrate host of each strain is given. Locations: Red Sea (Red), east Mediterranean Sea (Med E), and Andaman Sea (Andaman). Expression of Catechol Dioxygenase Activities Extracellular catechol dioxygenase activities were measured in the culture broth of the tested strains following induction with 2,4-DCP. All tested strains seemed to mostly express catechol 1,2-dioxygenase (C12O) instead of catechol 2,3-dioxygenase (C23O) activity. C23O activity is expressed in very low levels, and just two strains-Tritirachium sp. ML197-S3 and Aspergillus sp. ML147-S2-presented detectable activity at 63 h of 0.36 and 0.33 U·mg −1 , respectively. As seen in Figure 1, the majority of strains, except for Aspergillus sp. ML147-S2, presented the maximum C12O activity at 63 h. Additionally, it is clear that the strain Tritirachium sp. ML197-S3 had the ability to express high C12O activity (41.50 U·mg −1 ) compared to the rest of the tested strains, being 28-fold higher than the second highest strain (Cladosporium sp. ML6-S1). Intracellular C12O activity of the strain Tritirachium sp. ML197-S3 was also measured at the peak point of extracellular activity. While the activity detected in the extracellular medium was found to be 0.55 U, the intracellular one was 0.13 U, which constitutes only 19% of the total C12O activity detected. 
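The 19% share quoted above follows directly from the two measurements:

\[
\frac{0.13\ \text{U}}{0.55\ \text{U} + 0.13\ \text{U}} = \frac{0.13}{0.68} \approx 0.19
\]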
Aspergillus awamori NRRL 3112, which had the ability to grow on several phenolic compounds, expressed 0.043 U·mg −1 of extracellular C12O when grown on 2,4-DCP [16]. The same strain, when grown on phenol and catechol, expressed higher specific C12O activity both intra-and extracellularly. ML147-S2-presented detectable activity at 63 h of 0.36 and 0.33 U•mg −1 , respectively. As seen in Figure 1, the majority of strains, except for Aspergillus sp. ML147-S2, presented the maximum C12O activity at 63 h. Additionally, it is clear that the strain Tritirachium sp. ML197-S3 had the ability to express high C12O activity (41.50 U•mg −1 ) compared to the rest of the tested strains, being 28-fold higher than the second highest strain (Cladosporium sp. ML6-S1). Intracellular C12O activity of the strain Tritirachium sp. ML197-S3 was also measured at the peak point of extracellular activity. While the activity detected in the extracellular medium was found to be 0.55 U, the intracellular one was 0.13 U, which constitutes only 19% of the total C12O activity detected. Aspergillus awamori NRRL 3112, which had the ability to grow on several phenolic compounds, expressed 0.043 U•mg −1 of extracellular C12O when grown on 2,4-DCP [16]. The same strain, when grown on phenol and catechol, expressed higher specific C12O activity both intra-and extracellularly. Bacterial intradiol dioxygenases were extensively investigated regarding their reaction mechanism, substrate specificity, and structures [5]. C12O enzymes are known to be expressed by various microorganisms able to degrade phenolic pollutants. Bacteria able to assimilate 3-hydroxybenzoate [17], phenol [18], benzoic acid [19], and α-naphthol [20] express such enzymes that were characterized. On the contrary, the reports about C12O expression by fungi, mostly related to their biochemical characterization, are scarce. Phenol-induced C12O activity was detected in Bacterial intradiol dioxygenases were extensively investigated regarding their reaction mechanism, substrate specificity, and structures [5]. C12O enzymes are known to be expressed by various microorganisms able to degrade phenolic pollutants. Bacteria able to assimilate 3-hydroxybenzoate [17], phenol [18], benzoic acid [19], and α-naphthol [20] express such enzymes that were characterized. On the contrary, the reports about C12O expression by fungi, mostly related to their biochemical characterization, are scarce. Phenol-induced C12O activity was detected in various filamentous fungi [21,22], and only a few reports on isolation and characterization from Candida strains were published [23,24]. Identification of 2,4-DCP Metabolites The bacterial mechanisms for the detoxification and assimilation of chlorophenols are well known and were reviewed recently [25]. On the other hand, the studies regarding the defensive mechanisms of fungi for the handling of chlorophenols and especially 2,4-DCP are limited. In general, fungi are known to utilize a two-step process for the detoxification of xenobiotics. During phase I, they express enzymes, typically belonging to the cytochrome P450 family, which modify the initial compound by adding functional groups, such as -OH. The resulting compounds are then further modified by phase II enzymes that include several non-specific transferases (sulfo-, glycosyl-, glutathione-, etc.). Their products are less toxic and are excreted from the cells without any further modification [26,27]. Based on the MS analysis, several compounds were detected. 
However, the emphasis was given to compounds related to the 2,4-DCP metabolism study. For the annotation of 2,4-DCP metabolites, various software tools and online databases were employed. The elemental composition (EC) of the compounds was predicted with software tools such as Sirius, Rdisop, and mMass, according to the accurate mass, the relative intensities of the first and second isotopes of each compound, the composition rules of H/C and NOPS/C, the Ring Double Bond Equivalent value (RDBE), and an applied m/z tolerance of 10 ppm. The online databases Metlin and Chemspider, as well as data from the literature, were used to annotate the metabolites with an applied m/z tolerance of 10 ppm. All the compounds characterized as metabolites were detected only in DCP-treated cell cultures. Table 2 depicts the 2,4-DCP metabolites based on the analysis. Nine metabolites of 2,4-DCP (10) were identified, some of which still retain a chlorine substituent. For Aspergillus sp. ML147-S2 and Tritirachium sp. ML197-S3, six metabolites were detected, while, for the remaining two strains, only four metabolites were detected. All of the tested strains were able to hydroxylate 2,4-DCP to generate an ortho diol (9), which is appropriate for further dioxygenase reactivity [25,28]. All of the selected strains are able to substitute the two chlorines of 2,4-DCP by hydroxyl groups, leading to trihydroxybenzene (or hydroxyquinol) (5). All strains except for Aspergillus sp. ML147-S2 were able to transform compound (9) into tetrahydroxybenzene (4). The aforementioned reactions are performed by phase I enzymes, mostly cytochrome P450 monooxygenases. This intense dechlorination capacity of the selected strains revealed a specific detoxification activity proportionally related to the number of chlorines on the aromatic ring. The further glycosylation of (9) to (8) by Aspergillus sp. ML147-S2 and Tritirachium sp. ML197-S3 is aimed at enhancing compound removal from the cells as the final step of detoxification. Finally, except for Cladosporium sp. ML6-S1, all the strains were able to convert compound (9) to the corresponding glutamine conjugate (3). Complete dechlorination of the starting xenobiotic is particularly important, since it substantially decreases its toxicity. However, there was also a non-dechlorinated product of dichlorocatechol in a glycosylated form (8). This metabolite was found in strains Aspergillus sp. ML147-S2 and Tritirachium sp. ML197-S3, and, even though it is much less lipophilic than its precursor, it still contains both chlorine atoms and is probably a dead-end product. Dichlorocatechol could also be partially dechlorinated by all strains, except for Cladosporium sp. ML6-S1, to form the respective glutamine conjugate (3). In the reaction of Tritirachium sp. ML197-S3, based on HRMS data, 2-hydroxymuconic acid (2) was detected. This compound may derive from a dioxygenase-catalyzed cleavage of (5). This could be correlated to the high C12O activity detected in this strain. This is a very significant finding, since there is a high probability of 2,4-DCP being assimilated by this particular strain. However, this result is in contrast to the degradation mechanism of 2,4-DCP by bacteria, where the cleavage of the ring is performed before the dehalogenation of the compound [29]. Metabolic Pathways of 2,4-DCP As previously reported for exclusively mesophotic fungi [11], dioxygenase activities were detected, although no ring-cleavage products were found.
However, hydroxyquinol was detected, which was the only fully dechlorinated metabolite. Despite that, the metabolites of mesophotic fungi were more diverse compared to the fungi in the present study. However, several compounds were detected in both cases, such as dichlorocatechol, its glycoside, and its glutamine conjugate, as well as chlorophenol and its sulfated and cysteine conjugates. The isolated Aspergillus strains from the two studies, Aspergillus sp. ML147-S2, Aspergillus creber TM122-S3, and Aspergillus sp. TM124-S1, had only two metabolites in common: sulfated chlorophenol and the glutamine conjugate of chlorocatechol. On the other hand, Penicillium chrysogenum ML156-S8 had only one common metabolite with Penicillium sp. TM38-S1 [11]. Based on literature data and the MS results presented in Table 2, the metabolic pathway utilized by the investigated fungal strains for the detoxification of 2,4-DCP was envisaged. Structural configurations presented in Figure 2 are tentative and are based on the most probable conformation according to the compound dynamics and data reported in the literature. The isomers were suggested according to MS and literature data. For the metabolites where there is no information about the most probable isomer, an asterisk was added next to the molecule. The number next to each compound is that corresponding to Table 2. The biotransformation yield of 2,4-DCP by the studied isolates cannot be directly correlated with the metabolites detected. Undoubtedly, catechol dioxygenase activities and the presence of a ring cleavage product are very important factors that probably enhance the overall biotransformation yield; however, they are not the only ones. As seen with the identified metabolites, different enzymes take part in the initial 2,4-DCP transformation. When other enzymes can act on these initial metabolites, then the first enzymes can presumably act even more on 2,4-DCP. What is also important is the formation of compounds that can be further processed by the microbial metabolism and not just dead-end products. Ultimately, the overall biotransformation yield is not as crucial as the quality of the produced metabolites and, more specifically, their dechlorination, since our main goal is the detoxification of the starting pollutant. Isolation and Identification of Invertebrate Symbionts After sampling, invertebrates were transferred carefully to the laboratory to be processed. Small pieces of tissue were taken from the samples before being frozen.
Invertebrate samples (1 cm 3 ) were ground in sterile seawater and heated at 50 • C for 1 h. The suspension was serially diluted, plated on Difco™ Marine Broth Agar (BD Biosciences San Jose, CA, USA), and incubated at 28 • C for six weeks. A single colony was picked from the agar and cultivated as a pure culture on Difco™ Potato Dextrose Agar and Difco™ Marine Broth Agar (BD Biosciences San Jose, CA, USA) (media and cultivated for five days at 28 • C. The strain spores and mycelium were recovered by a gentle scratch of the agar plate surface using a scalpel, and they were conserved at −20 • C in 10% glycerol solution. Genomic DNA of the purified strains was isolated using a DNeasy Plant Mini Kit (Qiagen, Hilden, Germany), according to the manufacturer's instructions. The ITS region was amplified with primers ITS1F (5 -CTTGGTCATTTAGAGGAAGTAA-3 ) and ITS4 (5 -TCCTCCGCTTATTGATATGC-3 ) using described polymerase chain reaction (PCR) conditions. Amplicons were sequenced by Sanger sequencing (GATC, Eurofins Genomics, Ebersberg, Germany), and the sequences were aligned against the non-redundant database of NCBI using the BLASTn program. Culture Conditions and Resting-Cell Reactions The culture procedure followed was the same as previously reported [11]. Fungal strains were grown on Difco™ Marine Agar 2216 (BD Biosciences San Jose, CA, USA) plates containing 100 µg·mL −1 Ampicillin at 27 • C for five days. Mycelia from these were used to inoculate submerged cultures with Difco™ Marine Broth 2216 (BD Biosciences San Jose, CA, USA) (pH 7.6) at 27 • C and 160 rpm. After five days, the biomass was filtered using 0.2-µm-pore Supor ® polyethersulfone (PES) membrane disc filters (Pall Corporation, Port Washington, NY, USA) and used as a biocatalyst (10% w/v) in 15-mL reactions containing 1 mM 2,4-DCP in ultrapure water. Reactions with just 2,4-DCP were used as controls for the abiotic transformations. Furthermore, for each strain, control reactions with the same amount of biomass but no addition of 2,4-DCP were also realized. All reactions were left at 27 • C and 120 rpm for 10 days. Samples were withdrawn on the third and sixth days and analyzed after filtration. On the final day, the remaining reaction was extracted with equal volume of chloroform, and it was analyzed after drying and resolubilization in ultrapure water. Detection and Quantification of 2,4-DCP The quantification of 2,4-DCP was performed using the same method, as previously reported [11] using a SHIMADZU LC-20AD HPLC equipped with a SIL-20A autosampler (Kyoto, Japan). A C-18 reverse-phase NUCLEOSIL ® 100-5 (Macherey-Nagel, Dueren, Germany) served as the stationary phase and 40% aqueous acetonitrile served as the mobile phase at a flow rate of 0.8 mL·min −1 . Detection took place with the photodiode array detector Varian ProStar (Varian Inc., Palo Alto, CA, USA) at 285 nm. The total running time was 16 min and the retention time of 2,4-DCP was 12.4 min. Identification of 2,4-DCP Metabolites by LC-MS The analysis was performed on an ESI-LTQ-Orbitrap Discovery XL mass spectrometer (Thermo Scientific, San Jose, CA, USA) connected to an Accela UHPLC system (Thermo Scientific, San Jose, CA, USA). A Fortis UPLC C18 (2.1 × 100 mm, 1.7 µm) reverse-phase column (Fortis Technologies Ltd., Neston, UK) was used for the analysis. The mobile phase was a mixture of 0.1% (v/v) formic acid/water (solvent A) and acetonitrile (solvent B). Sample analysis was carried out in both positive (ESI+) and negative (ESI−) ion mode. The flow rate was 0.4 mL·min −1 . 
A gradient method of 30 min was used for the analysis as follows: 0 to 24 min: 95% A, 5% B; 24 to 28 min: 5% A, 95% B; 28 to 30 min: 95% A, 5% B. The column temperature was maintained at 40 • C and the injection volume was 10 µL. The conditions for the HRMS in each ionization mode were set as follows: for the positive ion mode, the capillary temperature and voltage were set at 320 • C and 40 V, respectively. The sheath gas flow was set to 40, and the aux gas flow was set to 8 arb units. The spray voltage was set to 3.6 kV, and the tube lens voltage was set to 120 V. For the negative ion mode, the capillary temperature and voltage were set to 320 • C and −20 V, respectively. The sheath gas flow was set to 40, and the aux gas flow was set to 8 arb units. The spray voltage was set to 2.7 kV, and tube lens voltage was set to −80 V. In both positive and negative ion mode, analysis was performed using the Fourier-transform mass spectrometer (FTMS) (Thermo Scientific, San Jose, CA, USA) full-scan ion mode. The Orbitrap resolution was set to 30,000 full width at half maximum (FWHM) and the data-dependent acquisition mode of the three most intense ions was used for studying the MS/MS fragmentation pattern in parallel to the acquisition of full-scan mass spectra. Data acquisition was performed for a mass range of 100-1000 Da, and the spectra were acquired in the centroid mode. Measurement of Enzymatic Activities Induction of enzymatic activities began by introduction of 1 mM 2,4-DCP (final concentration) in 50-mL fungal cultures that were left to grow for three days as mentioned above. Samples were withdrawn at frequent intervals and centrifuged at 14,000 × g for 10 min (10 • C) to remove the biomass. The supernatant was used as crude extracellular enzyme for the detection of catechol dioxygenase activities. After the last sample was taken, the remaining biomass was lysed in order to measure intracellular enzymatic activities. Lysis was initiated by adding 670 units of Lyticase (#L4025; Sigma-Aldrich, St. Louis, MO, USA) per gram of biomass in 0.1 M potassium phosphate buffer pH 7.5 and left to incubate at 30 • C for 1 h. Afterward, 10 mL of the same buffer with 1.2 M sorbitol and 0.5 mM MgCl 2 was added to the reaction. The biomass suspension was sonicated at 4 • C in a VC505 Vibra-Cell Processor (Sonics & Materials Inc., Newtown, CT, USA) for four cycles of sonication (1 min of 8-s pulse followed by 8-s pause). The resulting suspension was centrifuged for 20 min at 20,000 × g (4 • C), and the resulting supernatant was used as intracellular crude enzyme for enzymatic assays. A typical assay (250 µL final volume) contained 1 mM catechol as substrate in 50 mM Tris-HCl pH 7 buffer. The reaction began with the addition of 25 µL of crude enzyme and its time-course was recorded on a SpectraMax-250 microplate reader (Molecular Devices, Sunnyvale, CA, USA) equipped with SoftMaxPro software (version 1.1, Molecular Devices, Sunnyvale, CA, USA) set at 35 • C. One unit (U) of catechol 1,2-dioxygenase (C12O) activity is defined as the amount of enzyme that produces 1 µmol cis,cis-muconic acid per minute under the assay conditions. One unit (U) of catechol 2,3-dioxygenase (C23O) activity is defined as the amount of enzyme that produces 1 µmol 2-hydroxymuconic semialdehyde per minute under the assay conditions. The products of C12O and C23O were detected at 260 nm and 375 nm, respectively, and they were quantified according to Lin and Milase [18] and Hupert-Kocurek et al. [31]. 
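The unit definitions above imply a standard Beer-Lambert conversion from an absorbance slope to enzyme activity. The Python sketch below is a minimal illustration of that conversion, not the authors' calculation: the extinction coefficient and effective path length are assumed, commonly cited values and are not taken from this study.

# Minimal sketch: convert an absorbance slope into catechol dioxygenase
# activity (U = umol product per minute), following the unit definition above.
def activity_units(delta_A_per_min: float,
                   epsilon_M_cm: float,
                   path_cm: float,
                   reaction_volume_uL: float) -> float:
    """Return enzyme activity in U (umol product formed per minute)."""
    # Beer-Lambert: dC/dt [M/min] = (dA/dt) / (epsilon * path length)
    dC_dt_molar = delta_A_per_min / (epsilon_M_cm * path_cm)
    reaction_volume_L = reaction_volume_uL * 1e-6
    return dC_dt_molar * reaction_volume_L * 1e6  # mol -> umol

# Example for a C12O assay followed at 260 nm (cis,cis-muconic acid).
# epsilon ~16,800 1/(M*cm) is a commonly cited literature value, and the
# effective path length of a 250-uL microplate well is an assumption here.
units_C12O = activity_units(delta_A_per_min=0.05,
                            epsilon_M_cm=16_800,
                            path_cm=0.7,
                            reaction_volume_uL=250)
print(f"C12O activity in the well: {units_C12O:.4f} U")
# Specific activity (U/mg) would additionally require the protein
# concentration of the crude extract, which is not modelled here.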
Conclusions The present work aimed towards the expansion of the biocatalytic toolbox for the bioremediation of chlorinated aromatic pollutants. Bioprospecting of novel microorganisms was achieved by accessing marine regions at various depths and collecting invertebrates. Fungal symbionts of these invertebrates were isolated, identified, and screened for their ability to transform high 2,4-DCP concentrations. The most competent strains were studied further in order to elucidate the mechanisms which they use in order to cope with this chlorinated pollutant. Since these strains originate from pristine habitats, their enzymatic arsenal is not evolved specifically for the biotransformation of this xenobiotic. In fact, these strains seemed to employ non-specific pathways, which nevertheless could lead to less toxic and, in some cases, fully dechlorinated metabolites. Surprisingly, one of the strains had the ability to cleave the aromatic structure of the pollutant following its dechlorination. This suggests the assimilation of the xenobiotic by this strain, which is the main objective of every bioremediation process. When tested for ring-cleavage activities, the same strain expressed the highest catechol dioxygenase activity, demonstrating the key enzyme for efficient bioremediation. 2,4-DCP can be considered as a model chlorinated aromatic pollutant, and strains with the ability to detoxify this compound are candidates for the bioremediation of other chlorinated xenobiotics. In our following studies, we will focus on the complete elucidation of the detoxification mechanism of chlorinated pollutants by these fungal strains using transcriptomic and genomic analyses [32].
2020-05-13T13:03:51.306Z
2020-05-01T00:00:00.000
{ "year": 2020, "sha1": "e1b9294128f28f4aaa2b660bf75c9fc0a4dafc1f", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/ijms21093317", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4526d6d495de0e977b8cdedb58c0993ff80255d4", "s2fieldsofstudy": [ "Engineering", "Biology" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
7472688
pes2o/s2orc
v3-fos-license
Odontogenic Myxoma of the Maxilla: A Report of Unusual Pediatric Case Odontogenic myxoma (OM) is a rare, benign neoplasm with highly aggressive local behavior, found exclusively in the jaws. OM commonly occurs in the second and third decades of life; it is quite rare in the maxilla, and rarer still with complete invasion of the maxillary sinus. The lesion often grows without symptoms and presents as a painless swelling. The radiographic features are variable, and the diagnosis is therefore not easy. This article presents a case of OM of the maxilla in a 13-year-old boy, which was previously diagnosed as a fibro-osseous lesion on CT. INTRODUCTION Myxoma is a benign tumor of primitive mesenchymal tissue, closely mimicking the structure of the mucoid connective tissue of the umbilical cord. The odontogenic myxoma is a rare, benign tumor which does not metastasize but shows local aggressiveness, involving the maxilla and mandible. When involving the maxilla, odontogenic myxomas can invade the maxillary sinus, and are then diagnosed at later stages only after having grown into a large mass. 1 According to literature review, odontogenic myxomas (OM) represent between 1% and 17.7% of all odontogenic tumors. 2 CASE REPORT A 13-year-old male was referred to the Dental Department at Krishna Devaraya College of Dental Health Sciences Center for definitive management of a right-sided maxillary lesion, which had been diagnosed as a fibro-osseous lesion on CT 2 years earlier. A three-year history of a slow-growing mass causing intermittent pain in the right midface was reported. The patient denied any visual disturbance. Physical examination revealed fullness of the right midface which was mildly tender to palpation. The overlying skin was not erythematous and he demonstrated no lymphadenopathy or trismus (Fig. 1). Intraoral examination revealed a firm, nontender swelling expanding the buccal cortex of the maxilla, extending from the right lateral incisor to the second molar and measuring around 4 × 2 cm in diameter; there was no mobility of the overlying teeth, but displacement of teeth was noted (Fig. 2). The occlusal radiograph showed a large multilocular radiolucent area with a well-defined sclerotic margin extending from the right lateral incisor to the distal aspect of the right second molar, with 'spider web' and 'tennis racket' pattern appearance, with which a preliminary diagnosis of OM was made (Fig. 3). A computed tomographic (CT) scan, axial and coronal views, demonstrated a lytic lesion with expansion and thinning of the overlying buccal cortex and radiopaque foci spread throughout the lesion, involving the right maxillary antrum (Fig. 4). An incisional biopsy confirmed the diagnosis of odontogenic myxoma. The surgical management involved a combined intra- and extraoral approach. The tumor was resected with a margin of normal tissue. This involved a maxillary ostectomy (Fig. 5). Macroscopically, the surgical specimen consisted of a segment of the complete right maxilla and antrum with a gelatinous mass of glistening mucoid substance (Fig. 6). Microscopically, the tumor was composed of loosely arranged spindle cells with serpentine nuclei within a variably myxoid and fibrous stroma (Fig. 7). Postoperative recovery was uneventful. The patient has since been seen regularly for follow-up, and treatment planning for dental rehabilitation is currently underway. He will be monitored long-term for signs of recurrence clinically and radiographically.
DISCUSSION Myxoma is a benign tumor of mesenchymal origin that is slowly proliferative, locally aggressive and has a high rate of recurrence. Virchow coined this term in 1863. The tumor consists of stellate and sometimes spindle-shaped cells set in a myxoid stroma containing mucopolysaccharide, through which very delicate reticulin fibers course in various directions. 5 According to the WHO's classification of odontogenic tumors, in 1992, the myxoma is considered a tumor of the odontogenic mesenchyme with or without the presence of odontogenic epithelium. 6 Usually, when it involves bony tissue, it affects the facial bones. 7 Odontogenic myxoma is believed to originate from the dental papilla or follicular mesenchyme. The evidence for its odontogenic origin arises from its location in the tooth-bearing areas of the jaws, its occasional association with missing or unerupted teeth and the presence of odontogenic epithelium. 8 Odontogenic maxillary myxoma was first mentioned in the literature by Thoma and Goldman in 1947. OM usually affects adolescents and young adults, between the second and third decades of life, and very rarely affects people below 10 years of age or above 50 years. It shows an equal gender predilection, and the mandibular tooth-bearing area is favored over the maxilla. 9,10 The present case can be considered an unusual variant because of the patient's 3-year history of the lesion, which might therefore have started at an early age, and because of its site, i.e. the maxillary posterior segment with involvement of the antrum. OM clinically presents as an asymptomatic lesion discovered during routine dental or radiological examination in its initial stages, or as a lesion associated with painless jaw expansion in later stages. Lesions at more advanced stages may be associated with pain, paresthesia, facial asymmetry, ulceration, teeth displacement and root resorption. 11,12 Our case showed only facial asymmetry and displacement of teeth. Therefore, the clinical differential diagnosis should include ameloblastoma, odontogenic keratocyst, radicular cyst, dentigerous cyst, lateral periodontal cyst, intraosseous hemangioma, simple bone cyst, giant cell granuloma, aneurysmal bone cyst and metastasis of malignant tumors which show a slow growth pattern. 13 Macroscopically, the lesion appears to be a soft gelatinous yellowish-grey mass which is often nonencapsulated. The cut surface of the lesion exhibits a characteristic slimy appearance. Histopathologically, OM consists of triangular stellate cells with long processes intermeshing with each other. The intercellular matrix is mucoid, and the cytoplasm is slightly basophilic, finely granular and with a well-defined nucleus; mitotic figures are few. Cells may show pleomorphism. Bone may rarely be present, with islands of inactive odontogenic epithelium. 14 Recent ultrastructural studies have shown that myxoma is a tumor of fibroblasts modified in such a way as to form a matrix composed of glycosaminoglycans without forming collagen fibrils; these cells are designated 'myxoblasts'. Histological differential diagnosis should be made with rhabdomyosarcoma, myxoid liposarcoma, neurogenic sarcoma, neurofibroma, lipoma, fibroma, chondromyxoid fibroma and nodular fasciitis. 15,10 Conventional radiography (occlusal and orthopantomography) and CT are useful in the detection of OM and are also helpful to estimate the size, extent and margins of the tumor.
Because of wide variation of radiographic presentation of OM, apart from most specific radiographic patterns, CT must be used to confirm conventional radiographic findings. Literature studies have described this tumor as both a unilocular or multilocular radiolucency and as having a distinct or diffuse margin. 16 Barros in 1969, proposed two stages of radiologic patterns, first stage osteoporotic appearance, with more prominent medullary spaces separated by thin septa of bone. During this stage, the lesion acquires its classic radiographic appearance, consisting of multilocular radiolucency with well-developed locules, composed of trabeculae tending to intersect at right angles, forming locules straight, thin, elongated and lacy, Eversole called this as 'Lichen planus of jaw bone'. It has varied radiographic presentations: Soap-bubble or honey comb, spider web, tennis racket appearances. Other shapes include small or large triangles, diamonds, squares, rectangles, and X,Y and V figures. Second stage consists of the breakout or destructive phase consisting of loss of internal locules, significant expansion and perforation of the cortex with invasion into surrounding soft tissues. in maxilla there is extension into the antrum. Sometimes the peripheral margin of the septa may be arranged at right angles to the margin, giving a 'hair brush' or 'sun burst' appearance. 17 Due to its varied radiographic feature, the present case was misdiagnosed as fibroseous (fibrous dysplasia) lesion based on CT findings which led to delay in accurate treatment leading to vast anatomic destruction, leading to facial disfigurement. OM should be considered in the differential diagnosis of both radiolucent and mixed radiolucent-radiopaque lesions of both jaws in all age groups. 1 Radiographic differential diagnosis of odontogenic Myxoma should include: Intraosseous hemangioma, cherubism, aneurismal bony cyst, fibrous dysplasia, ameloblastoma, central giant cells lesion, traumatic bony cyst and odontogenic cysts (radicular, lateral periodontal, dentigerous and keratocyst). 18 The current treatment for OM includes resection with bony margins of at least 1.0 to 1.5 cm and leaving behind one uninvolved anatomic boundary. Maxillectomy and sometimes resection of the orbital floor are required for OM in the upper jaw. 19 Treatment for the present case was same. Enucleation, curettage and peripheral ostectomy are inadequate because of its gelatinous and nonencapsulated nature which makes the lesion to recur. Period of the greatest recurrence rate is seen in the first 2 years. 20 Defects resulting from maxillary resection can be replaced by prosthetic obturator or tissue reconstruction procedures. 21 Due to varied radiographic presentation it makes difficult to diagnose lesion based on radiographic features alone. Proper diagnosis requires clinical, histological and radio graphic correlation. Because of its high rate of recurrence, especially due to its gelatinous, nonencapsulated and mucous aspect, surgical treatment through bone resection is the most indicated treatment modality, and the patient must be followed up closely for years.
2017-06-11T02:36:09.924Z
2011-04-15T00:00:00.000
{ "year": 2011, "sha1": "849373aa69e0465e18764148e218c65af8e71054", "oa_license": "CCBY", "oa_url": "https://doi.org/10.5005/jp-journals-10005-1123", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "849373aa69e0465e18764148e218c65af8e71054", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
265218798
pes2o/s2orc
v3-fos-license
Toothbrushing and Access to Dental Services in Peruvian Children Objective. The aim was to determine the association between access to dental services and toothbrushing in Peruvian children under 12 years old. Methods. This was a cross-sectional study based on the 2021 database of the Demographic and Family Health Survey. Records of children under 12 years old for whom answers about their toothbrushing were provided were included. Variables were evaluated descriptively, followed by a bivariate analysis; multivariate tests were performed using Poisson regression with a multilevel regression analysis. Results. General toothbrushing was 96.32% (n = 34 198), and daily toothbrushing was 88.05% (n = 28 444). Access to dental services was associated with general toothbrushing (aPR: 1.18; 95% CI: 1.14-1.22; P < .001), daily toothbrushing (aPR: 1.08; 95% CI: 1.04-1.12; P < .001) and toothbrushing at least twice a day (aPR: 1.12; 95% CI: 1.07-1.17; P < .001). Conclusion. Access to dental services was associated with general toothbrushing, daily toothbrushing and toothbrushing at least twice a day. Introduction Access to health services refers to a process involving factors such as availability, efficacy, acceptability, and the presence of barriers and facilitators for its use, considering that its main purpose is to satisfy the population's needs. 1,2 From a dental perspective, access is limited and inequitable, resulting in a more significant disease burden, mainly affecting lower-income populations. 3-6 On the other hand, the high prevalence of oral pathologies such as dental caries is a problem for global public health, particularly in developing countries, highlighting the necessity to reinforce effective preventive habits against this disease. 7 Evidence suggests that diet control, access to information, use of fluorides, and toothbrushing are the main preventive habits. 8 Thus, access to dental care is essential in ensuring the provision of preventive tools for this condition. 9 Regarding toothbrushing, it is urged that this habit should begin before the first year of life to ensure consistency in its practice over time; it is also recommended to practice it at least twice daily, given its preventive purposes against dental caries. It is essential to emphasize that toothbrushing is a beneficial, affordable, widespread, and culturally accepted act; hence, it is an effective measure for public health. 10,11 At the national level, specific reports indicate a limited frequency of toothbrushing, mainly in younger individuals and members of families with limited economic resources. 12,13 Promoting healthy practices, like toothbrushing, 14 in a context of low access to dental care is challenging, especially in vulnerable communities. Scientific evidence shows that the successful establishment of the toothbrushing habit is linked to receiving information on the prevention of oral cavity diseases, which in many countries is provided in the dental office. 15,16 On the other hand, it was observed that the poorest adults have the same or greater predisposition to follow preventive practices than others with a higher socioeconomic level; contradictorily, their use of dental services was the lowest. 17 In this regard, the Peruvian Dental Caries Clinical Practice Guideline suggests that before the first year of life, children should receive a dental evaluation, an opportunity to make parents aware of the initiation and frequency of brushing.
18 In this sense, evidence on the association between these 2 factors has yet to be identified, and it is necessary to consider that Peruvian children have an inadequate frequency of this good oral health practice. Therefore, the aim is to generate studies that address this problem, starting with this one, which seeks to determine the association between access to dental services and toothbrushing in Peruvian children under 12 years old. Methodology A cross-sectional study was performed, where the population comprised the 2021 Demographic and Family Health Survey (ENDES) database, carried out by the National Institute of Statistics and Informatics of Peru (INEI). It is worth noting that a trained team executes the ENDES survey every year. This team conducts in-home interviews and administers questionnaires to the designated population. It had a 2-stage, stratified, probabilistic, balanced, and independent sampling at the departmental level, by both urban and rural area. The sample size for the year 2021 amounted to 36760 dwellings, which meant 168145 children under 12 years old; only the data provided by survey respondents regarding their toothbrushing habits were used for analysis, leading to a final sample size of 32023. As a criterion for managing the records, incomplete ones were eliminated (Figure 1). The ENDES survey comprises 3 questionnaires: health, household, and individual; it should be clarified that the person providing the information is the individual (over 15 years old) responsible for the children's health care. 19 Regarding the variables established, general toothbrushing (Does [NAME] brush his/her teeth with a toothbrush?), daily toothbrushing (Does [NAME] brush his/her teeth every day?) and toothbrushing at least twice a day (How many times a day does [NAME] brush his/her teeth?) were considered dependent. In contrast, access to dental services, time since last dental care (measured in years) and place of dental care were determined as independent. According to the information given, the healthcare provider that carried out the service was classified as either the Ministry of Health of Peru (MINSA), Social Health Insurance of Peru (ESSALUD), Armed Forces (FF.AA) and Police (PNP), or the private sector. In addition, covariates were defined as the natural region, categorized into Metropolitan Lima, the rest of the coast, the highlands and the jungle. The area of residence was also evaluated, organized into urban and rural; place of residence, divided into capital, city, town, and countryside; and altitude, measured at less than 2500 meters above mean sea level (MAMSL) or at 2500 MAMSL and more. The wealth index is a measure of a household's ability to access and enjoy goods and services; a score was assigned to each household and its residents using a formula utilized by the United States Demographic and Health Surveys Program. This made it possible to classify each dwelling according to quintiles from the poorest to the richest. 20,21 Covariates also included health insurance coverage, whether public or private; sex of the individual; and age, divided into 2 groups: 0 to 5 and 6 to 11 years old. Data Collection Procedures The 2021 database was downloaded from the INEI's official web page (http://iinei.inei.gob.pe/microdatos/), obtaining the survey's modules and technical datasheet, and finally exported to STATA SE 17.0.
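The quintile classification of the wealth index described above can be illustrated with a short, hypothetical pandas sketch; the column names and scores are invented, and the actual ENDES/DHS scoring formula is not reproduced here.

# Illustrative only: given a wealth score per household, assign quintiles
# from "poorest" to "richest" with a quantile cut.
import pandas as pd

households = pd.DataFrame({"wealth_score": [-1.2, -0.4, 0.1, 0.8, 1.9, 0.3]})
labels = ["poorest", "poorer", "middle", "richer", "richest"]
households["wealth_quintile"] = pd.qcut(households["wealth_score"], q=5, labels=labels)
print(households)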
Statistical Analysis For the statistical analysis, absolute and relative frequencies of the variables were obtained through a descriptive analysis, followed by a bivariate analysis using the Chi-square test to find associations between the variables under study.It is essential to clarify the use of the svy command, which allowed representative estimates to be established following the survey design, where the sampling patterns were distinguished according to stratum, primary sampling unit, and weights.Furthermore, Poisson logistic regression was utilized to conduct multivariate tests to determine the crude prevalence ratios (PR) and adjusted prevalence ratios (aPR) based on the previously demonstrated significance variables.The variables' association with toothbrushing was analyzed using multilevel regression; it should be noted that the 24 regions of Peru were established as the level of analysis.With this information, a variance component model was built (null model) using general toothbrushing, daily toothbrushing and toothbrushing at least twice a day as dependent variables, but without inserting explanatory variables; the null models estimated the general variability of the dependent variables and attributed it to the regions justifying proceeding with the analysis (P < .001).Subsequently, a series of explanatory variables were included to analyze the association with each independent variable and covariates.Four models were created: unadjusted model 1 of access to dental services, covariates.This research used a confidence level of 95%, and as an indicator of statistical significance, a value of P < .05 was defined in all tests. Ethical Approval and Informed Consent The study was approved by the Institutional Ethics Committee of Universidad Peruana Cayetano Heredia (CIE-UPCH), with a SIDISI code of N° 206253.It should be noted that these databases are publicly accessible, and the records are coded to maintain the anonymity of the respondents.For this research, CIE-UPCH waived the requirement for informed consent due to the survey characteristics. Results General toothbrushing was 92.76% (n = 32 023), daily toothbrushing was 84.28% (n = 25 511), while minimum toothbrushing 2 times daily was 79.95% (n = 19 871).Access to dental care was 52.96% (n = 17 519), 12.72% (n = 2071) reported that their care was less than 2 years ago, and the main place of dental care was the Peruvian Ministry of Health with 46.37% (n = 10 306) (Table 1).In a bivariate manner, access to dental care, time since last dental care, and place of dental care presented an association with general toothbrushing (P < .05);concerning daily toothbrushing, it was associated with access and place of dental care (P < .05).While toothbrushing at least twice a day was associated with access to dental care and place of that care (P < .05). Similarly, the 3 variables of interest were associated with the natural region, area of residence, place of residence, altitude, wealth index, and age.On the other hand, sex was associated with general toothbrushing (P < .05)(Table 2). 
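The prevalence-ratio estimation described in the Statistical Analysis subsection above can be sketched as follows. The study itself used Stata (svy commands, survey weights and multilevel models with the 24 regions as the grouping level); the Python/statsmodels version below is only a simplified illustration with simulated data and hypothetical variable names, using cluster-robust standard errors by region as a stand-in for a full multilevel model.

# Sketch: Poisson regression on a binary outcome yields coefficients whose
# exponentials can be read as prevalence ratios (PR); clustering by region
# gives robust standard errors.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "brushes_daily": rng.integers(0, 2, n),   # binary outcome (hypothetical)
    "dental_access": rng.integers(0, 2, n),   # binary exposure (hypothetical)
    "age_years": rng.integers(0, 12, n),
    "region": rng.integers(0, 24, n),         # 24 regions, used for clustering
})

model = smf.glm("brushes_daily ~ dental_access + age_years",
                data=df, family=sm.families.Poisson())
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["region"]})

prevalence_ratios = np.exp(result.params)     # PR = exp(coefficient)
ci = np.exp(result.conf_int())                # 95% CI on the PR scale
print(prevalence_ratios)
print(ci)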
Discussion From the dental perspective, low access to dental care is one of the leading public health problems.This situation prevents the optimal extension of hygienic habits to the population, such as toothbrushing, 14 and there are limitations to its application in vulnerable communities, especially in those individuals in the extreme stages of life.Among the findings of this research, there is evidence of the association between access to dental care and general toothbrushing, daily and at least twice daily, as well as the place of dental care with general toothbrushing, daily and at least twice daily.Research conducted by Hernández-Vásquez and Azañedo suggests that children who have not had a dental visit in the last 6 months may tend to lower levels of brushing compared to those who have had a recent dental visit.Likewise, the study in Mexican children by Vallejos-Sánchez et al mentions that those who had dental care 1 year before the study had a higher probability of frequent brushing. 12,22These suggest that regular contact with the oral health professional reinforces healthy preventive practices such as tooth brushing. Regarding toothbrushing, this study found that approximately 90% of the sample reported that they typically brushed their teeth; in addition, comparable results were achieved with daily practice, but the effectiveness decreased when it was considered that the activity should be performed at least twice daily.On this matter, the Peruvian Ministry of Health, through its Clinical Practice Guideline for preventing, diagnosing, and treating dental caries in children, recommends that this preventive habit be performed at least twice daily, starting from the first tooth's eruption, approximately after 6 months of age. 18On the other hand, a national database study carried out in Iran states that the frequency of children who brush their teeth twice daily is 4 times less than those who do it once; it has also been suggested that this occurrence may be related to socioeconomic and demographic factors, as well as the healthcare system that the child and their family are associated affiliated. 23Studies conducted on the Peruvian population confirm that the frequency of toothbrushing increases with age, and this practice is widely adopted and established, 12,13 aligning with the findings of this study. It is essential to consider that the information produced by this research covered only the year 2021, within a national context perceived as a "new social coexistence" due to the COVID-19 pandemic. 24owever, the situation experienced in the country in 2020 would have impacted different sectors of Peruvian society, including oral health.The scientific evidence developed from health emergency reports that the time since the last dental care in Peruvian children would have increased by 1.39 years, indicating that this dilation in the search for timely care would be associated with the year of the pandemic, noting that in previous years, variables such as place of dental care, natural region of residence and age already showed significant differences, but the year itself denotes greater relevance. 
25 Regarding toothbrushing, it was observed that the pandemic negatively affected the practice of daily toothbrushing at least twice a day, which could be explained by the complex context faced by the country: the multiple measures taken to contain contagion impacted the economy and social habits of Peruvian households, so that more pressing concerns may have been prioritized over a preventive habit. Similarly, it was found that factors such as geographic region, area and place of residence, altitude, health insurance coverage, economic level, age, and sex were associated with brushing. 26 Among the limitations of this research is the use of secondary information sources such as the ENDES survey, where the information collected could present inaccuracies due to self-reporting. Regarding the study design, the cross-sectional type cannot infer causality from the associations or results found. Additionally, given the nature of the survey, it is not feasible to classify individuals by months of age, only by years completed. Therefore, it was decided to include those from 0 years of age, even though they did not yet have teeth. The Peruvian Clinical Practice Guidelines on dental caries recommend starting toothbrushing at 6 months of age, 18 so it can be expected that these infants do not brush their teeth. Finally, it was observed that the survey lacks sufficient variables to assess the need for access to dental care, which suggests modifications for future versions. Despite the abovementioned limitations, the present study provides a first understanding of the link between access to dental health services and establishing preventive habits, such as toothbrushing, in Peruvian children. It is essential to recognize that access to dental care in territories with emerging economies is still fragile and scarce, even though the country has been developing strategies such as Universal Health Insurance, intending to narrow the gap between people who do not have insurance and those who, for financial reasons, have not been able to obtain the necessary care. The multilevel regression analysis allowed us to understand the relevance of the geographical variables; although these influence toothbrushing, access to dental services remains associated with it. Finally, policymakers at the national level should evaluate the importance and success of the application of these hygiene habits in oral health, not limiting evaluation to coverage indicators, since, according to this study, access to a dental health service alone does not guarantee that individuals in vulnerable situations, such as children under 12 years old, enjoy optimal oral health and quality care. Conclusion Access to dental services was positively associated with general toothbrushing, daily toothbrushing, and toothbrushing at least twice a day. Likewise, specific covariates such as natural region, place of residence, wealth index, and age showed an association with the three variables of interest; additionally, area of residence and altitude were also associated. Figure 1. Flow chart of sample screening inclusion and exclusion. Table 1. Toothbrushing, Access to Dental Services and Characteristics of Children Under 12 Years Old in Peru, 2021. Table 2. Toothbrushing According to Access to Dental Services and Characteristics of Children Under 12 Years Old in Peru, 2021. Table 3.
Association Between Toothbrushing and Access to Dental Services in Children Under 12 Years Old in Peru, 2021. a: Adjusted by natural region, area of residence, place of residence, altitude, wealth index, sex and age. b: Adjusted by natural region, area of residence, place of residence, altitude, wealth index and age.
2023-11-17T05:23:12.579Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "3a7018643c2e3c9cf2162e3994c55809d31b05ee", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/2333794X231209672", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3a7018643c2e3c9cf2162e3994c55809d31b05ee", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
234767075
pes2o/s2orc
v3-fos-license
Reviewing the Clinical Implications of Treating Narcolepsy as an Autoimmune Disorder Abstract Narcolepsy type 1 (NT1) is a lifelong sleep disorder, primarily characterized clinically by excessive daytime sleepiness and cataplexy and pathologically by the loss of hypocretinergic neurons in the lateral hypothalamus. Despite being a rare disorder, the NT1-related burden for patients and society is relevant due to the early onset and chronic nature of this condition. Although the etiology of narcolepsy is still unknown, mounting evidence supports a central role of autoimmunity. To date, no cure is available for this disorder and current treatment is symptomatic. Based on the hypothesis of the autoimmune etiology of this disease, immunotherapy could possibly represent a valid therapeutic option. However, contrasting and limited results have been provided so far. This review discusses the evidence supporting the use of immunotherapy in narcolepsy, the outcomes obtained so far, current issues and future directions. Introduction Narcolepsy is a chronic sleep disorder, primarily associated with excessive daytime sleepiness (EDS) and cataplexy, a sudden and transient loss of muscle tone triggered mainly by intense, usually positive, emotions, during wakefulness. Other symptoms, including sleep-related hallucinations, sleep paralyses, and fragmented nocturnal sleep point to an intrinsic REM sleep dysfunction (ICSD3). 1 In most cases, symptom onset is in the first two decades of life, with up to 65% of the cases presenting before the age of 20 years. 2,3 According to the American Academy of Sleep Medicine (AASM), 1 two distinct subtypes are identified, Narcolepsy type 1 (NT1) and Narcolepsy type 2 (NT2). NT1 results from the loss of hypothalamic hypocretin (orexin)-producing neurons as documented by reduced or undetectable levels of hypocretin-1 (hcrt-1) in the cerebrospinal fluid (CSF) and is clinically marked by cataplexy, whereas NT2 is characterized by normal CSF hcrt-1 concentration and absence of cataplexy. The CSF hcrt-1 deficiency observed in NT1 is due to the destruction of a small group of hypocretin secreting neurons in the lateral hypothalamus. 4 In NT2, a less severe loss of these neurons or an altered hypocretin receptor signalling 5,6 has been postulated. About 10% of NT2 cases transform into the NT1 phenotype, indicating disease progression over time, at least in some cases. [7][8][9][10] Narcolepsy is classified as a rare disorder with a prevalence of 20-50/100,000 individuals worldwide 11,12 but is however poorly and lately recognized 13,14 and burdened by a high socioeconomic impact. Indeed, narcolepsy patients have lower education and higher unemployment rate compared to the general population, resulting in reduced incomes and lowered life standards. [15][16][17][18] Moreover, they present higher frequency of other medical/psychiatric comorbidities and concurrent medication usage, and reduced rates of marriage/cohabitation. Despite the availability of several symptomatic treatments, 4,19 complete control of symptoms is only rarely achieved. 20,21 The necessity to find a cure for this lifelong and disabling condition has driven the investigation of new treatments targeting the underlying mechanisms of the disease. In this review, we will discuss the implications of treating narcolepsy as an autoimmune disorder, the therapeutic approaches used so far and their outcomes as well as the future directions. 
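The subtype definitions given above can be summarized as a simple decision rule. The Python sketch below is illustrative only: the CSF hypocretin-1 cut-off (commonly cited as 110 pg/mL or below, or under one third of mean normal values) is an assumption based on the ICSD-3 criteria referred to here, and a real diagnosis also requires sleep studies (polysomnography/MSLT) and exclusion of other causes.

from typing import Optional

def narcolepsy_subtype(cataplexy: bool, csf_hcrt1_pg_ml: Optional[float]) -> str:
    """Schematic NT1/NT2 split based on cataplexy and CSF hypocretin-1."""
    # <=110 pg/mL is the commonly cited ICSD-3 deficiency threshold (assumed here).
    hcrt_deficient = csf_hcrt1_pg_ml is not None and csf_hcrt1_pg_ml <= 110
    if cataplexy or hcrt_deficient:
        return "NT1 (cataplexy and/or CSF hypocretin-1 deficiency)"
    return "NT2 (no cataplexy, normal CSF hypocretin-1)"

print(narcolepsy_subtype(cataplexy=True, csf_hcrt1_pg_ml=40.0))    # -> NT1
print(narcolepsy_subtype(cataplexy=False, csf_hcrt1_pg_ml=250.0))  # -> NT2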
Evidence of Autoimmune Etiology in Narcolepsy Autoimmune disorders are pathological conditions characterized by an aberrant immune response against "selfantigens" due to the loss of tolerance, which leads to inflammation, cell injury or dysfunction and clinical manifestations. Formal demonstration of the autoimmune nature of a disease requires several pieces of evidence. 22 Direct evidence is provided by the passive transfer of pathology by antibodies or T-cells from an affected individual to laboratory animals or to cells in culture. Indirect evidence comes from the simulation of disease in animal models either by active immunization or by manipulation of the immune system, or by isolation of self-reactive Tcells/autoantibodies from the organ targeted by the autoimmune attack. Finally, circumstantial evidence derives from different clinical observations such as: a) presence of genetic susceptibility (ie, recurrence in the same family and human leukocyte antigen (HLA) association); b) presence of antibodies in relation with a specific clinical phenotype; and c) response to immunotherapy. 23 The loss of the hypocretin secreting cells represents the core feature of NT1. Nevertheless, the pathological mechanisms leading to the highly selective destruction of these hypothalamic cells, with sparing of the neighboring melanin-concentrating hormone neurons, are still unknown. However, the specificity of this loss itself, the strong association with the HLA DQB1*06:02 24 and other genetically determined features of the immune system pointed towards the hypothesis of the autoimmune etiology of narcolepsy. This hypothesis was further supported by circumstantial evidence coming from epidemiological studies showing an association between NT1 and infections, which can provoke autoimmune reactions through different mechanisms such as bystander activation, molecular mimicry, superantigens and epitope spreading. 25 A questionnaire-based study revealed an increased frequency of narcolepsy among subjects diagnosed with strep throat before the age of 21 26 and elevated streptococcal antibodies levels were found in patients' sera taken within 3 years from disease onset compared to agematched controls. 27 Lately, a link between narcolepsy and the influenza A virus subtype H1N1 (A/H1N1) was observed. In China, the incidence of narcolepsy increased three-fold within 6 months of the peak of the A/H1N1 influenza pandemic. 28 Furthermore, an increase in the incidence of narcolepsy was observed in subjects vaccinated with the H1N1 Pandemrix ® vaccine. [29][30][31][32][33] These observations led to speculate that the autoimmune destruction of the hypocretin neurons could be triggered by molecular mimicry with H1N1 flu antigens. Indeed, a study reported the presence of hypocretin-specific CD4+ Tcells cross-reactive to H1N1 hemagglutinin (HA) protein in narcolepsy patients, 34 although this finding was not replicated. 35 Indeed, HLA class II molecules, such as DQB1*06:02, are responsible for the presentation of antigenic peptides to CD4+ T-cells, possibly implying a prominent role of T-cells in the pathogenesis of narcolepsy. Subsequently, the discovery that polymorphisms in other genes involved in the immune response, such as the T-cell receptor alpha (TCRalpha) 36 and the purinergic receptor subtype P2RY11 (P2RY11) 37 among others, [38][39][40] are associated with an increased risk of developing the disease, further implicated the T-cells in its pathogenesis. 
Indeed, recently an increased hypocretin-specific CD4+ T-cell response was found in blood samples from 19 NT1 patients compared to controls. 41 However, unexpectedly, the majority of these cells were mainly HLA-DR- and not HLA-DQ6-restricted. 41 Another study showed the presence, in NT1 patients, of DQ0602-restricted CD4+ T-cells cross-reactive to hypocretin and the HA protein of the pandemic 2009/2010 A/H1N1 influenza virus. 34 Higher frequency of hypocretin-responsive CD4+ and CD8+ T-cells was also observed in NT1 children. 42 However, it remains unclear whether these autoreactive CD4+ T-cells play a direct pathogenic role; one argument against this 43 is that neuronal cells do not express HLA class II molecules but only class I, which are recognized by CD8+ and not CD4+ T lymphocytes. Nevertheless, a role of CD8+ T-cells is supported by the association with several HLA class I alleles 44,45 and by the finding, in nine NT1 patients, of mutations in P2RY11, encoding for a receptor highly expressed in cytotoxic CD8+ T lymphocytes. 46 The potential ability of CD8+ T-cells to destroy the hypocretinergic neurons has been further supported by the serendipitous pathological observation of extensive CD8+ T-cell infiltrates and gliosis in the hypothalamus of a patient with NT1 secondary to Ma2 antibody-mediated encephalitis. 47 A similar proof of concept was demonstrated in a transgenic mouse model where cytotoxic CD8+, but not CD4+, HA-reactive T-cells were able to destroy hypocretin neurons expressing HA as a neo-antigen. 48 Interestingly, a higher frequency of autoreactive CD8+ T-cells was recently documented in NT1 patients' blood samples compared to controls. 49 Moreover, CD8+ T-cells were identified in the CSF of a patient with recent-onset NT2 who later progressed into NT1 with hypocretin deficiency. 41 These observations suggest a direct involvement of CD8+ T-cells in the destruction of hypocretin neurons. Finally, it has recently been observed that NT1 patients display effector CD4+ T-cells with an unconventional profile which might have cytotoxic activity. 50 Although this evidence points to a T-cell-mediated process associated with narcolepsy, these findings are mainly based on blood sample studies, and the primary role of these cells is not clear yet. Since B cells are usually involved in CD4+ T-cell-mediated responses, 51 several studies investigated the presence of neuronal autoantibodies. Screening for antibodies against specific antigens including the hypocretin precursor-peptide, 52 hypocretin 1 and 2 and their receptors (HCRTR1 and 2) 53 and other neuronal antigens in narcoleptic patients 54-57 produced negative or inconclusive results. Antibodies directed against Tribbles homologue 2 (TRIB2) were detected in 14% of narcoleptic patients but also in a small percentage (5%) of control sera, 58 a finding replicated also in other cohorts 59,60 but not in post-H1N1 cases. 61 TRIB2 is highly expressed in hypocretinergic neurons, but also in other cell types; moreover, it is an intracellular antigen, and as such is unlikely to play a primary role in the destruction of hypocretin-producing cells. Subsequently, HCRTR2 antibodies were found in 85% of post-Pandemrix ® narcolepsy cases as well as in controls (35%), 62 but these findings have not been reproduced in idiopathic cases. 63,64 Several studies showed the presence of antibodies directed against different neuronal targets in NT1 patients, but usually only in a small percentage of cases, 65 suggesting that the humoral response is not primarily relevant in the pathogenesis of the disease.
Therefore, despite significant progresses, most evidence supporting the autoimmune etiology of narcolepsy is still circumstantial. According to Witebsky's postulate, 23 definitive proof such as the passive transfer of the disease to healthy individuals by autoreactive T-cells and/or autoantibodies or the active immunization with autoantigen able to induce the disease in animal models, are still missing. 22 Immunotherapies Soon after the autoimmune etiology of narcolepsy was hypothesized, the first attempts to arrest the pathogenic process using immunotherapy were made. These early attempts were based on the same approaches used for the treatment of classical neurological autoimmune disorders, and included corticosteroids, plasmapheresis (PLEX) and intravenous immunoglobulins (IVIG). These therapies indeed exert their action at multiple levels and could possibly be effective independently from the primary involvement of a humoral or cellular immune response. More recently, therapies specifically targeting B-and Tcells were attempted in limited cases, as discussed below. Immunotherapies with Pleiotropic Effects Corticosteroids exert a broad range of effects on the immune system, from inflammatory cytokines synthesis inhibition to impairment of function and survival of multiple types of immune cells, including neutrophils, monocytes, macrophages and B and T lymphocytes. 66 Because of these wide effects, corticosteroids are widely employed to treat a variety of neurological inflammatory and autoimmune disorders and were therefore the first immunomodulatory treatment used in NT1 (Table 1). An 8-year-old boy was treated two months after the acute onset of NT1 with prednisone (1 mg/kg/day) for 3 weeks. However, no clinical improvement or modification of sleep parameters was observed. 67 Similar negative results were reported in a 29-year-old woman who received intravenous methylprednisolone (IVMP) for a transverse myelitis of possible autoimmune etiology nine years after the onset of NT1. 68 Conversely, Coelho et al 69 described two men with a longstanding NT1 history, who reported the disappearance of EDS, and in one case also of cataplexy, upon prednisone treatment (40 mg/day) for other inflammatory conditions (inflammatory intestinal disease and asthma). However, these cases were not corroborated by CSF hypocretin-1 levels measurements nor the improvement was confirmed by repeated sleep studies after treatment and, in consideration of the long interval between disease onset and treatment, it is likely that the arousing effects of the steroids played a major role in the control of EDS. In other cases, corticosteroids were used as add-on therapy. However, overall results were negative. A 10year-old child with a 3-month NT1 history was initially treated with IVIG infusion (1g/kg/day for 2 days) but later switched on prednisolone (1.3 mg kg/day) therapy for 3 weeks, due to side effects. Both EDS and cataplexy improved during steroid therapy but reappeared following treatment tapering. Despite the reported symptoms improvement, no changes of CSF hypocretin-1 levels were detected. 70 Similarly, two children, one with post-Pandemrix and one with sporadic NT1, respectively, received IVIG (1 g/kg/day for 2 days), followed by IVMP (20 mg/kg/day for 4 days) infusions repeated 3 times at monthly intervals. 
71 In the first child, a marked improvement of EDS and cataplexy was observed, but symptoms gradually reemerged within 1-2 weeks from treatment, although two follow-up MSLTs showed normal sleep latencies. In the second child, only a transient amelioration of sleepiness and cataplexy was noted. In both cases, hypocretin levels remained low, and indeed dropped despite treatment in patient 1. However, the improvement of cataplexy that occurred during IVMP treatment suggests that steroids could exert an immunomodulating effect, explaining the improvement of cataplexy in these cases 71 and in one of the previously reported cases. 69 Nevertheless, corticosteroids have several effects on the central nervous system; 72 therefore, although unlikely, a direct anticataplectic activity, ie, through their action on noradrenergic and serotoninergic neurotransmission, cannot be excluded. 73
PLEX is a very well-established treatment for antibody-mediated disorders. However, only a transitory benefit was observed in the single NT1 case treated so far with this procedure. 74 Indeed, despite an early treatment, within 2 months after onset, and an initial amelioration, the symptoms reemerged after a few days. The patient was switched to IVIG therapy, but no further improvement was noticed (Table 1). The transient benefit observed with PLEX could be related to the removal of antibodies or other molecules (ie, cytokines) by this procedure, whilst its short duration, as well as the lack of response to IVIG, suggests a placebo effect, or a different pathogenic mechanism, as discussed by the same authors. Indeed, the lack of a sustained effect of PLEX is in line with the lack of evidence supporting a central role of antibodies in the pathogenesis of narcolepsy (see above).
IVIG have been more extensively employed in NT1, although mostly in single cases and small case series (without any randomized controlled trial approach to date), possibly because the first few observations seemed to provide positive results. A summary of the results of these studies is given in Table 2 and Figure 1. Dauvilliers et al 75 treated four typical NT1 patients with low CSF hypocretin-1 with IVIG. The three cases treated close to onset showed a reduction of the frequency and severity of cataplexy, whereas an improvement of the mean sleep latency on the maintenance of wakefulness test (MWT) was observed in the patient with a 9-year disease history. These positive effects continued over time. 76 ESS scores improved during IVIG treatment in all cases. However, the concentration of hypocretin-1 in the CSF remained unchanged in two of the three available cases, whereas in one patient a slight increase was detected. This study pinpointed the importance of an early intervention, before the complete loss of hypocretinergic neurons, in ensuring a good outcome. This hypothesis seemed to be confirmed by a case where IVIG treatment, started 15 days after NT1 onset, led to a reduction of cataplexy frequency and normalization of CSF hypocretin-1 levels. However, treatment discontinuation was followed by a progressive reoccurrence of NT1 symptoms after 4 months. 77 In three other cases treated within 1-4 months from symptom onset, mixed effects were observed, with some improvement of EDS, and in two cases also of cataplexy, although no significant changes of polysomnographic parameters were documented.
78 A 16-year-old girl with NT1 and severe bizarre hallucinations, with an inflammatory CSF (positive oligoclonal bands and pleocytosis) and undetectable hcrt-1 levels at the baseline examination, was treated with IVIG and showed only a transient improvement of the hallucinations. Interestingly, the CSF pleocytosis gradually disappeared after treatment, but CSF hcrt-1 levels remained unchanged. 79 Among 4 NT1 children with undetectable CSF hypocretin-1, treated with IVIG for 6 months within 1 year from onset, only one case showed a significant reduction of EDS and cataplexy frequency, whereas in the others no persistent clinical changes were noted. 80 The short-lived benefit induced by therapy was confirmed by three further pediatric cases treated close to onset 81,82 and in 4 idiopathic adult cases with various disease durations. 83 The often-transient nature of the benefit obtained by treatment suggested a possible placebo effect, as indeed demonstrated in a double-blind placebo-controlled single-case trial. A woman with a 7-year history of NT1 received IVIG or placebo alternately and reported an improvement of her cataplexy whilst under either treatment. 84 A larger longitudinal, non-randomized, retrospective study including a pediatric NT1 population evaluated the effects of IVIG in 22 patients compared to 30 controls who received standard therapy. 85 The study failed to show an effect of the IVIG treatment, although among patients with more severe symptoms, those receiving IVIG achieved remission earlier than controls. On the other hand, an improvement of symptoms in the treatment group was observed already before IVIG administration. Shorter disease duration did not correlate with response to treatment. The results of this study, however, cannot be considered conclusive, since the non-randomized study design could have introduced a selection bias. Indeed, baseline symptom scores were higher in patients treated with IVIG compared to controls. 85 This selection bias, together with the significant spontaneous amelioration of cataplexy and EDS documented in non-treated NT1 children, underlines the probable significant bias in the observations obtained without any randomized study design. 86 Indeed, in the absence of prospective randomized studies, the utility of IVIG in the treatment of idiopathic NT1 remains unclear to date. In most cases, results have been disappointing or short-lasting. The benefits noted in some anecdotal reports could be related to both a placebo effect and a spontaneous improvement of symptoms over time, as observed occasionally. [85][86][87][88] On the other hand, it cannot be excluded that in rare instances immunotherapy could partially reverse a hypofunction of hypocretin cells preceding their actual destruction, explaining the disappearance of cataplexy in a few individual cases. Limited efficacy of IVIG therapy was also observed in post-vaccine narcolepsy, even when treatment was administered close to onset. 89,90

Immunotherapies Targeting B and T Cells
A summary of the results of the studies employing these therapies is given in Tables 3 and 4. Rituximab is a monoclonal antibody targeting the CD20 antigen, mainly expressed on the surface of B cells. Upon binding to its target, this antibody mediates cell lysis with consequent B-cell depletion. This therapy has proven effective in several antibody-mediated neurological disorders, including myasthenia gravis and autoimmune encephalitis.
Recently, a 13-year-old boy with post-Pandemrix NT1, treated with IVIG without benefit, received two rituximab infusions after the onset of a severe psychiatric disorder characterized by daytime hallucinations, behavioral problems and severe aggressiveness requiring commitment to a psychiatric department. 90 After treatment, an improvement of narcolepsy symptoms and of the behavioural and psychiatric disorders was observed. However, this beneficial effect lasted only for 2 months and subsequent infusions did not have any effect. Another patient, a 28-year-old man, was treated with 5 courses of rituximab (1000 mg every 6 months) 5 months after the onset of NT1. Despite a subjective improvement of EDS 1 month after each infusion, no change of cataplexy frequency was observed. Moreover, longitudinal assessment of CSF hcrt-1 levels showed a progressive decrement from 100 to around 60 pg/mL. 91 More recently, a 5-year-old boy with NT1 and a suspected neuromyelitis optica spectrum disorder (NMOSD) was treated with a combination of IVIG, steroids and rituximab. A transitory improvement was noticed only after high-dose IVMP treatment. 92 The failure of B-cell depletion to revert the disease course supports the lack of a major role of humoral immunity in driving the pathological process and suggests that treatments targeting T-cells could be more effective. A 79-year-old man with a very long NT1 history was treated for a T-cell lymphoma with alemtuzumab, 93 a humanized monoclonal antibody directed against the CD52 antigen that causes CD4+ T-cell suppression. 94 During treatment, the patient reported disappearance of cataplexy, but not of other disease manifestations. Interestingly, methotrexate, another immunosuppressive treatment acting on several immune cells, including T-cells, 95 which was administered before alemtuzumab, did not impact narcolepsy symptoms. Why and how alemtuzumab could selectively affect cataplexy, particularly in a patient with a 58-year history of cataplexy, is unclear. It is possible that this drug exerts other effects besides T-cell suppression, such as neuroprotection and repair. 96 Rarely, narcolepsy can develop together with multiple sclerosis (MS), either secondarily to hypothalamic demyelinating lesions or as a concomitant disorder. 97 In a small series of NT1 cases with concomitant MS, two patients reported EDS improvement upon treatment with IVIG and long-term steroid therapy, respectively. 97 Among the five patients receiving disease-modifying MS drugs, no response was observed with glatiramer acetate (n=1) or beta-interferon (n=2) and, of the two patients treated with natalizumab, only one reported reduction of EDS. 97,98 It is unclear if this improvement is related to a direct effect of natalizumab on the pathogenic mechanism underlying narcolepsy or to a more complex effect, since natalizumab treatment was also shown to improve fatigue and EDS in MS patients without narcolepsy. 99,100 Natalizumab is a recombinant monoclonal antibody directed against the cell adhesion molecule alpha4-integrin expressed on the surface of human leukocytes. This treatment is expected to prevent cellular infiltration of the CNS and could be a promising treatment for NT1. Recently, a 21-year-old woman was treated with IVIG followed by natalizumab 3 months after NT1 onset. Despite treatment, no symptom improvement was noted and, on the contrary, CSF hypocretin-1 levels dropped from 70 pg/mL to 17 pg/mL.
101 Although disappointing, this result could be explained by the relatively delayed treatment, administered when most of the hypocretin-secreting cells had already been lost. Cyclophosphamide (CYC) is an alkylating agent with antineoplastic, immunosuppressive and immunomodulating effects. Its cytotoxic activity is mainly due to DNA cross-linkage leading to cell apoptosis. 102 CYC induces B- and T-cell depletion, thereby inhibiting both the humoral and the cell-mediated immune response. However, it can also exert a beneficial immunomodulatory effect by reducing the number of regulatory T-cells and inducing T-cell growth factors. 103,104 CYC is highly effective in the treatment of several autoimmune conditions, although it is burdened by several toxicities. 105 To our knowledge, to date, no patients with NT1 treated with CYC have been reported. However, disappearance of hypersomnia was reported in a 36-year-old woman who developed NT2 in the context of neurolupus and was treated with 4 monthly CYC and IVIM infusions. 106 On the other hand, no benefit was observed in four patients with paraneoplastic NT1. [107][108][109][110]

Side Effects of Immunotherapy
Corticosteroids and PLEX have been largely used in clinical practice to treat autoimmune disorders and their adverse events are well known. In the few NT1 patients who underwent these treatments, side effects included acne and dermatitis 70 and a severe catheter infection. 74 Although IVIG treatment is considered safe, side effects, even severe ones, can occur and were indeed also observed in NT1 patients. In adult cases, headache, sometimes associated with a stiff neck 101 or a rise in CSF leukocytes, 89 was the most common complaint, whereas one patient presented a not better-defined allergic reaction. 83 In children, side effects included infectious episodes (a flu-like syndrome and viral gastroenteritis), 78 skin reactions with urticaria and petechiae, 80 hypotension and nausea, 90 and headache, fever, and flushing 70 requiring therapy withdrawal. Although no serious adverse events have occurred until now in NT1 patients treated with monoclonal antibodies, these drugs have potentially lethal side effects, ie, severe infusion-related reactions, secondary autoimmunity 94 and life-threatening infections secondary to immunosuppression, in particular progressive multifocal leukoencephalopathy (PML). The risk of PML is higher in patients treated with drugs reducing T-cell trafficking to the brain, such as natalizumab, but rare cases have also been reported in association with rituximab. 111 Therefore, clinicians should keep this risk in mind and arrange adequate monitoring. Scammell and colleagues 101 proposed a treatment trial with natalizumab of a maximum of 1 or 2 years, to reduce the risk of PML in NT1 patients.

Open Issues and Future Directions
Most of the information on the effects of immunotherapy in narcolepsy derives from uncontrolled case studies, with small sample sizes, different treatment schemes and highly heterogeneous outcome measures. Therefore, current data are not sufficient to support the use of immunotherapy for narcolepsy, and randomized controlled clinical trials are needed to provide substantial evidence and avoid bias related to placebo effects and to spontaneous disease improvement. To date, most of the attempts to reverse NT1 symptoms have been disappointing, and this could be related to several open issues that need to be addressed and that are summarized below.

What is the Best Immunomodulating/Immunosuppressive Treatment for Narcolepsy?
Since the autoimmune basis of narcolepsy remains unproven, and our understanding of the immune process involved is still limited (ie, associated or pathogenic), it is difficult to provide a definite answer to this question. However, mounting evidence underpins the primary role of T-cells in NT1 disease pathogenesis. Many of the studies on immunotherapy have focused on treatments acting mainly on the antibody-mediated immune response, possibly explaining the lack of a meaningful effect. T-cell-targeting drugs have only rarely been employed; therefore, agents such as natalizumab or alemtuzumab could be promising treatments for future clinical trials.

When to Treat?
This is an apparently easy question with a seemingly easy answer: the sooner, the better. Indeed, since immunotherapy might prevent the loss of the hypocretinergic neurons, it should be given before this process reaches completion. On this assumption, many of the previous studies focused on treating patients close to disease onset. However, cataplexy manifests when most, about 80%, of the hypocretin-secreting neurons are already lost; 112 therefore, it could possibly be more effective to treat patients who have not yet developed cataplexy. Although the natural history of hypocretin cell death is unclear, most NT1 patients report experiencing EDS months or years before the appearance of cataplexy, 13 implying a progression over time, as documented in a few cases by CSF hcrt-1 assessment. [7][8][9][10]41 These observations show that, at least in a few cases, narcolepsy symptoms evolve over time along with changes of CSF markers, suggesting that in HLA DQB1*06:02-positive subjects with EDS and other narcolepsy features, CSF markers should be monitored over time. This could potentially offer a window of opportunity to intervene with immunotherapies in early stages in order to try to prevent or reduce the hypocretinergic neuronal loss. Interestingly, Latorre et al 41 observed the presence of reactive CD8+ T-cell clones in the CSF of a recently diagnosed NT2 patient who later evolved into NT1, suggesting that these cells could be a potentially reliable early marker. The identification of early markers of progression could be crucial to identify those patients who could benefit from immunotherapy before the complete destruction of hypocretinergic neurons leading to NT1. On the other hand, CSF hypocretin loss can progress quickly and shortly after symptom onset, 113 even before the appearance of sleep-onset rapid eye movement periods; therefore, prompt diagnosis and CSF examination are crucial for the identification of cases who can benefit from immunotherapy.

How to Assess the Clinical Outcome?
Many of the cases reported so far used heterogeneous measurements, including subjective sleepiness scales, CSF hypocretin-1 levels, the multiple sleep latency test (MSLT) and the MWT, alone or in various combinations, or patients' self-reports, to monitor the response to therapy. In certain cases, patients subjectively experienced an improvement of EDS or cataplexy, but objective sleep parameters often failed to match these subjective evaluations, showing the intrinsic limitations of the current methodology for the assessment of treatment efficacy, as well as its poor correlation with clinically relevant outcome measures. 21 For some narcolepsy manifestations, including cataplexy and sleep paralysis, there are no appropriate assessment tools, and only recently has an overall narcolepsy severity scale been proposed.
114 Most of the previous studies on IVIG efficacy involved children and employed the same tools used to evaluate adults, with only a few studies adopting age-appropriate scales. 80,85 However, children manifest significantly different NT1 symptoms compared to adults, thus calling for specific evaluation approaches. Recently, Wang et al 115 proposed a subjective sleepiness scale and a cataplexy diary for pediatric narcolepsy, and, although not yet validated, they have already been applied in a clinical trial. 116 There is an urgent need for new standardized assessment tools of disease severity aimed at better following the disease course and documenting treatment effects.

Conclusion
To date only symptomatic treatments are available for narcolepsy, with new drugs recently showing promising results; 4,19 however, chronic pharmacological treatments are met with frequent side effects and may not sufficiently impact on the disease burden. 21 No disease-modifying cure is available, calling for future research on treatment strategies as well as on diagnostic approaches able to identify patients who will develop NT1 among those complaining only of EDS. NT1 is considered an immune-mediated disease; nevertheless, the absence of definitive proof represents a limit to designing targeted clinical trials on immunotherapy. To date, indeed, only case series and case reports are available. However, although not yet conclusive, the evidence gathered so far suggests that it is time for randomized, double-blind, placebo-controlled trials that would contribute to answering the question of whether immunotherapy is useful in narcolepsy.

Disclosure
Maria Pia Giannoccaro and Fabio Pizza report no conflicts of interest in this work. Rocco Liguori reports personal fees from Argenx, Biogen, Sanofi-Genzyme, Argon Healthcare s.r.l., Amicus Therapeutics s.r.l. and Alfasigma for Advisory Board consultancy and lecture fees from Dynamicom Education, SIMG Service, Adnkronos Salute Unipersonale s.r.l. and DOC Congress s.r.l., outside the submitted work. Giuseppe Plazzi participated in advisory boards for UCB Pharma, Idorsia, Jazz Pharmaceuticals and Bioprojet.
2021-05-19T05:17:02.241Z
2021-05-01T00:00:00.000
{ "year": 2021, "sha1": "662f546a59f36ec1abef14099daad1bc200e2539", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=69299", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "662f546a59f36ec1abef14099daad1bc200e2539", "s2fieldsofstudy": [ "Psychology", "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
13367005
pes2o/s2orc
v3-fos-license
BRENDA, AMENDA and FRENDA: the enzyme information system in 2007

The BRENDA (BRaunschweig ENzyme DAtabase) enzyme information system (http://www.brenda.uni-koeln.de) is the largest publicly available enzyme information system worldwide. The major parts of its contents are manually extracted from primary literature. It is not restricted to specific groups of enzymes, but includes information on all identified enzymes irrespective of the enzyme's source. The range of data encompasses functional, structural, sequence, localisation, disease-related, isolation and stability information on enzymes, as well as ligand-related data. Each single entry is linked to the enzyme source and to a literature reference. Recently the data repository was complemented by text-mining data in AMENDA (Automatic Mining of ENzyme DAta) and FRENDA (Full Reference ENzyme DAta). A genome browser, membrane protein prediction and full-text search capacities were added. The newly implemented web service provides instant access to the data for programmers via a SOAP (Simple Object Access Protocol) interface. The BRENDA data can be downloaded in the form of a text file from the beginning of 2007.

INTRODUCTION
The BRENDA (BRaunschweig ENzyme DAtabase) enzyme information system (1,2) is a manually annotated repository for enzyme data. Originally intended and published as a series of books (3) in 1987, it was transformed into a publicly available database in 1998 and has been curated and continuously improved at the University of Cologne since then. Its contents are not restricted to specific groups of enzymes, but include information on all enzymes that have been classified in the EC scheme of the IUBMB (International Union of Biochemistry and Molecular Biology), irrespective of the enzyme's source. The range of data includes the catalyzed reaction, detailed description of the substrate, cofactor and inhibitor specificity, kinetic data, structure properties, information on purification and crystallization, properties of mutant enzymes, participation in diseases and amino acid sequences. Each single entry is linked to the enzyme source (organism and, if applicable, the tissue and/or the protein sequence) and to the literature reference. Data queries can be performed in a number of different ways, including an EC-tree browser, a taxonomy-tree browser, an ontology browser and a combination query of up to 20 parameters. The newly implemented web service provides instant access to the data for programmers via a SOAP (Simple Object Access Protocol) interface.

Data statistics
In summary, BRENDA contains 1.3 million manually annotated data entries, on average 300 single entries per EC number (Table 1). Enzymes of 7500 different organisms are covered. With 170 000 single entries, human enzymes are the most thoroughly described in the literature (Figure 1), followed by enzymes of Rattus norvegicus (132 000 entries) and Escherichia coli (93 000 entries).

New information fields
pI value: The isoelectric point is now included. This value is of significance for the purification procedure, allowing conclusions about the solubility of the enzyme and its mobility in electrophoretic procedures. Ki value: 14 014 inhibition constants are presently included in the database. Each value is connected to the enzyme, to the inhibitor and, where available, to the 2D structure of the molecule. Engineered enzymes: The reactivity of mutant enzymes can reveal detailed insight into the catalytic process and may give valuable clues about the active sites, the mechanism of the reaction or the regulation.
Meanwhile, 19 000 engineered enzymes are described in the database. For each single modification of the protein sequence, the properties of the resulting enzyme are described. Kinetic data for these enzymes are included in the respective database sections.

MOLECULAR STRUCTURE-BASED QUERIES
When searching for molecules which interact with the enzyme (substrates, products, cofactors, inhibitors, activating substances, etc.), different query procedures are possible. Using the name of the compound: This option returns not only the data stored for the ligand under the given name but also applies the integral molecular thesaurus based on the INChI (IUPAC International Chemical Identifier) (4) codes of 53 000 molecular structures stored as molfiles. Performing a substructure search with the integrated JME Editor (5): The result page of this function displays the images, names and synonyms of the found compounds and their function when interacting with the enzyme, and also provides a button for an immediate BRENDA search.

ONTOLOGIES
The BRENDA Ontologies section allows searching in all publicly available ontologies of biochemical, anatomic, developmental, chemical and medical terms, such as Gene Ontology (6) or MeSH (7), published in open biomedical ontology format (http://obo.sourceforge.net). If possible, terms are cross-linked to other ontologies and BRENDA enzyme data. The use of umbrella terms makes it possible to search, for example, for complete classes of chemical compounds in the BRENDA database.

NEW DATABASES AT THE BRENDA HOST
FRENDA
FRENDA (Full Reference ENzyme DAta) is an additional database to BRENDA available to the academic community with BRENDA release 6.2 (June 2006). FRENDA aims at providing an exhaustive collection of indexed literature references containing organism-specific enzyme information. Compared to a standard PubMed (8) query, FRENDA also returns all references on the enzyme published under one of its synonyms. FRENDA currently covers 1.4 million enzyme/organism combinations from 550 000 distinct references, automatically extracted from more than 16 million PubMed abstracts (June 2006) (8). The scientific articles are pre-filtered using MeSH terms; only references declared as 'enzyme' hits are used (1.6 million remaining abstracts). FRENDA uses a dictionary-based approach for recognizing named entities (enzymes, organisms) in titles and abstracts. The dictionaries are compiled from BRENDA and NCBI Taxonomy (8). In a two-step approach, references with enzyme hits in title, abstract or MeSH terms are searched for co-occurring organism names (scientific names and synonyms). The results of this indexing process were classified into four reliability categories depending on the occurrence of search terms in title and/or abstract and/or MeSH terms. This classification is provided with the commentaries in the FRENDA database. The manual evaluation of the quality of the FRENDA approach, using 250 randomly chosen results, indicates a precision of 64.8% with a recall of 72% from a set of 250 manually annotated enzyme-related literature references. A minimal sketch of this dictionary-based co-occurrence indexing is given below.

AMENDA
As a subset of FRENDA, AMENDA (Automatic Mining of ENzyme DAta) currently covers organism-specific information on enzyme localization (>30 000 records, compared with 17 000 records in BRENDA) and source tissues (150 000 records, compared with 38 000 records in BRENDA) from a text-mining procedure (J. Barthelmes, C. Ebeling and D. Schomburg, unpublished data).
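The following is a minimal, illustrative sketch of the dictionary-based co-occurrence idea described above. The toy dictionaries, the field weighting and the reliability rule are assumptions made for the example; the real FRENDA/AMENDA pipeline uses the full BRENDA synonym lists, the NCBI Taxonomy and four reliability categories.

```python
# Minimal sketch of a FRENDA-style dictionary-based co-occurrence indexer.
# Assumptions: the dictionaries below are stand-ins for the BRENDA enzyme
# synonym list and the NCBI Taxonomy names; the reliability rule is
# illustrative, not the four categories actually used by FRENDA.

from dataclasses import dataclass

@dataclass
class Record:
    pmid: str
    title: str
    abstract: str
    mesh_terms: list

def find_hits(text, dictionary):
    """Return canonical names whose synonyms occur in the text (case-insensitive)."""
    text = text.lower()
    return {canonical for synonym, canonical in dictionary.items() if synonym in text}

def index_record(record, enzyme_dict, organism_dict):
    """Two-step indexing: first enzyme hits, then co-occurring organism names."""
    fields = {
        "title": record.title,
        "abstract": record.abstract,
        "mesh": " ".join(record.mesh_terms),
    }
    results = []
    for field_name, text in fields.items():
        enzymes = find_hits(text, enzyme_dict)
        organisms = find_hits(text, organism_dict)
        for e in enzymes:
            for o in organisms:
                # Hits in the title are treated as more reliable than hits found
                # only in the abstract or MeSH terms (illustrative rule).
                reliability = 1 if field_name == "title" else 2
                results.append((record.pmid, e, o, reliability))
    return results

# Toy dictionaries (synonym -> canonical name); real ones come from BRENDA / NCBI.
enzymes = {"alcohol dehydrogenase": "EC 1.1.1.1", "adh": "EC 1.1.1.1"}
organisms = {"escherichia coli": "Escherichia coli", "e. coli": "Escherichia coli"}

rec = Record("12345", "Alcohol dehydrogenase from E. coli",
             "We purified ADH from Escherichia coli ...", ["Enzymes"])
print(index_record(rec, enzymes, organisms))
```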
Search terms for enzyme names, organism names, localization and source tissues are compiled from BRENDA enzyme synonyms, the BRENDA tissue tree (http://obo.sourceforge.net/cgi-bin/detail.cgi?brenda) and the NCBI Taxonomy (8). AMENDA is based on the FRENDA co-occurrence approach. Protozoa, viruses and bacteria are excluded from the tissue search. References with enzyme/organism hits are searched for occurrences of tissue terms (singular and plural) and localization terms in title, abstract, and MeSH terms and are further evaluated based on text-mining criteria. The text-mining approach described above was tested on 200 randomly selected results. A precision of 76.0% and a recall of 11.7% for the combined search terms enzyme-organism-tissue/localization were achieved. In a way similar to FRENDA, the commentaries indicate the individual reliability level for each data set.

BRENDA GENOME EXPLORER
The BRENDA Genome Explorer is an enzyme-centered genome visualization tool for browsing and comparing enzyme annotations in full genomes. It closes the gap between genomic and enzymatic data and allows the alignment of genomes at a given enzyme-coding gene and its orthologs, thus making it possible to visually compare the genomic environment of the gene in different organisms (Figure 2). The underlying genome database is compiled from EBI Genomes (9) and ENSEMBL (10) and supplemented by UniProt (11) annotations. It can be searched for specific proteins via names, EC numbers, or UniProt accessions, allowing for a highly target-oriented search.

TRANSMEMBRANE PROTEIN PREDICTION
Transmembrane helices for enzymes are predicted with TMHMM (TransMembrane Hidden Markov Model), developed by Sonnhammer et al. (12). With the aid of this tool it is possible to predict the number, the size and the location of transmembrane helices, thereby discriminating between soluble and membrane-bound enzymes.

ACCESSIBILITY
BRENDA is accessible via the various search options (quick search, advanced search, ontologies, sequence search, Genome Explorer, etc.). The database can be downloaded as a text file. Access to AMENDA and FRENDA requires a registration.

SOAP-BASED WEB SERVICE
Web services provide a simple way to access the data collection without the need for downloading, parsing and preparing an entire database for local queries. Web services are independent of the internal organization of the database and avoid parsing problems caused by changes in the text file structure. BRENDA now provides a SOAP-based (http://www.w3.org/TR/soap) web service comprising 148 methods covering 52 data fields. Flexible queries can be performed directly from programs written in different programming languages (Perl, Java, C++, Python, PHP) on data fields such as substrate, Km value and pH optimum. For any given record returned, a set of complete literature references can be retrieved using unique reference identifiers. Every data field may be queried by providing at least one of the three parameters EC number, organism, or, if applicable, ligand structure identifier. The ligand structure identifier, which can be queried with the name of a chemical compound, is used to ensure that all synonyms for a given molecular structure are also retrieved. The BRENDA web service also gives access to the data using identifiers from other databases like UniProt (11) or NCBI Taxonomy (8), as well as ontologies like Gene Ontology (6) or the BRENDA Tissue Ontology. A minimal client sketch is given below.
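The sketch below shows how such a SOAP query could look from Python, using the third-party zeep client. The WSDL location and the method name getKmValue are assumptions made for illustration only; the authoritative list of the 148 methods, their exact signatures and any required authentication are given in the web service documentation referenced in the text.

```python
# Minimal sketch of a SOAP client for the BRENDA web service using the 'zeep'
# library.  Both the WSDL URL and the method name are hypothetical placeholders;
# consult http://www.brenda.uni-koeln.de/soap for the real method catalogue.

from zeep import Client

# Assumed WSDL endpoint (hypothetical; check the SOAP documentation page).
WSDL_URL = "http://www.brenda.uni-koeln.de/soap/brenda.wsdl"

client = Client(WSDL_URL)

# Query Km values for EC 1.1.1.1 in Escherichia coli.  Providing the EC number
# and the organism follows the rule described above that every data field can
# be queried by EC number, organism, or ligand structure identifier.
# Authentication parameters, if required by the service, are omitted here.
result = client.service.getKmValue(ecNumber="1.1.1.1", organism="Escherichia coli")

for entry in result:
    print(entry)
```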
The ontology-based search allows for queries based on entire branches of the hierarchy, avoiding a complex search for all leaves in the given branch. For example, an ontology-based search for the term 'brain' or the respective Gene Ontology identifier will return all tissues and cell types under the umbrella term 'brain'. The same method can also be applied to search for whole groups of organisms. The documentation of the BRENDA web service, including examples in different programming languages, is available at http://www.brenda.uni-koeln.de/soap.

CONCLUSIONS
In the past year the BRENDA enzyme information system has made a big step forward, not only through a formidable increase in the annotation speed but also through the inclusion of data based on text-mining approaches and the development of different new methods for data access. The new funding by an EU grant makes it possible to increase the annotation speed even further, bringing the backlog down to less than one year, and will also allow a substantial increase in the percentage of ligands with full structural information.
2014-10-01T00:00:00.000Z
2007-01-01T00:00:00.000
{ "year": 2007, "sha1": "a233e816eaee911ed33b3618f92b78efce9179a4", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/nar/article-pdf/35/suppl_1/D511/3888992/gkl972.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a233e816eaee911ed33b3618f92b78efce9179a4", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Biology", "Medicine" ] }
213519184
pes2o/s2orc
v3-fos-license
An agent-based simulation model of a dynamic real-time traffic signal controller

Traffic management has a key role within intelligent transportation systems (ITS). An efficient traffic control system leads to less fuel consumption, gas emissions, and transportation delay. The main goal of this study is to minimize the driver waiting time at intersections and avoid traffic jams. To achieve this goal, an adaptive traffic signal controller model was proposed and verified in this work. A multi-agent based adaptive controller for a dynamic vehicular ad-hoc network (VANET) in a single four-way intersection was modelled and simulated using Matlab/Simulink/SimEvents. The inputs of the controller are the position and the speed of the vehicles at each approach. In fact, the need for inter-vehicle communication is eliminated in this work through the process of virtual road segmentation and the deployment of road-side units (RSUs). The output of the controller is the green time for each approach, where the green time is calculated based on the traffic density and on the queue length. The proposed model was verified by intensive simulations using four main parameters: green phase time; inter-arrival times; service time; and average waiting time (AWT). The performance (AWT) of the proposed adaptive controller was compared to that of a fixed-time controller. The simulation results showed that the proposed controller outperformed the fixed-time controller for significant variance (i.e. variance > 10) in the means of inter-arrival times for vehicles approaching the traffic signal.

An earlier adaptive approach was based on the number of vehicles at each stream; the adaptive time of the red/green light was calculated using the conflict-directions matrix described in [5]. The simulation of a fuzzy traffic controller was performed in [6] for a multi-lane isolated signalized intersection. A model of a two-lane intersection was built, with different values of waiting time and queue length for each road segment. The maximum values of waiting time and queue length represented the input of the fuzzy controller. The main goal of this work is to minimize the driver waiting time at intersections and avoid traffic jams. Indeed, an optimal traffic signal controller can achieve this goal. Traffic signal optimization is one of the most effective methods to reduce traffic congestion [7−13]. The problem can be modeled as an optimization problem, where the objective function is the average waiting time and the optimization parameters are the green times for each phase. Thus, the goal is to find the best values for these parameters in order to minimize the objective function. However, this problem is hard and blind due to abnormal and random conditions [14,15]. Therefore, the objective of this paper is to find the optimal solution for this problem. Conventional simulation techniques are suitable for fixed-time approaches [16]. However, they are not sufficient for heterogeneous environments such as a real-time vehicular ad-hoc network (VANET) with a highly dynamic topology and QoS guarantees [17]. To overcome the limitations of conventional simulation techniques, agent-based simulation techniques were used to model and analyze such dynamic environments, where the whole system is decomposed into a set of cooperative sub-agents with a well-defined communication protocol between them [17−19]. Most of the above-mentioned adaptive systems are based on the queue length and/or traffic flow on the intersection approaches.
Although these systems consider the present situation of the traffic, the future state is neglected. The future status of the traffic situation is denoted by the traffic density, which represents the number of vehicles in a one-kilometer length of the street. Traffic density can be calculated from the position and the speed of approaching vehicles, which can be obtained by utilizing inter-vehicle communication. In this paper, a multi-agent model of a dynamic real-time traffic signal controller was built using Matlab/Simulink/SimEvents and compared to other controllers. The inputs of the controller are the position and the speed of vehicles at each road, based on the assumption that data communication is possible between traffic signal systems, vehicles, and roadside units. The output of the controller is the green time for each road, where the green time is calculated based on the traffic density and on the queue length. Since every vehicle sends its position and speed, the controller can predict the time of arrival for each vehicle. The main contributions of this work are as follows:
- The design and simulation of an adaptive controller model for a dynamic real-time VANET in a single four-way intersection using Matlab/Simulink/SimEvents.
- The use of an agent-based methodology for system design to eliminate the limitations of conventional simulation techniques.
- Eliminating the need for inter-vehicle communication through the process of virtual road segmentation and the deployment of road-side units (RSUs) for communication.
The rest of this paper is organized as follows. Section 2 presents the agent-based system model. In Section 3, a discrete simulation model is described in detail. Section 4 verifies the simulation model by comparison with fixed-time models. The conclusion of this research and future research directions are provided in Section 5.

2. Methodology and system model
Using an agent-based methodology, three main phases were defined: 1) Decomposition, where the system is decomposed into five sub-agents: vehicle, queue, server, controller, and roadside unit (RSU). 2) Modelling, where the functionality of each sub-agent is defined. 3) Protocol, where the interaction between sub-agents is defined.

2.1 Sub-agents design
The decomposition of the agent-based system along with the communication links between agents is shown in Figure 1. The sub-agents of the proposed multi-agent model are defined as follows: 1) Vehicle: This sub-agent has an ID that is the plate number (the MAC address in the communication protocol), as well as a specific speed. An exponential distribution with mean (1/y) was used to generate the inter-arrival time for the vehicle, while a uniform distribution was used to model its speed. Each vehicle has the ability to know its position (coordinates) via GPS signals. 2) Queue: queuing and de-queuing processes are performed by this agent. An arriving vehicle is queued based on its arrival time, while the vehicle at the top of the queue is de-queued to be served by the server sub-agent. This agent is to be downloaded at the traffic light signal. 3) Server: An exponential function with mean (1/y) was used to model the service time of the vehicle, where y is the vehicle service rate, which is affected by metrics such as the size of the vehicle, its location in the lane, and the lane dimensions. This agent is to be downloaded at the traffic light signal. 4) Controller: It is the coordinator of the whole process.
It collects system information from other sub-agents and evaluates the system parameters needed to govern the functionalities of all sub-agents. Such a sub-agent is to be downloaded at the traffic light signal. 5) Roadside unit (RSU): This sub-agent monitors the virtual road segment. It is the intermediate agent between the controller and the vehicles. Each sub-agent has a specific ID (SID).

Figure 1 An agent-based model. The system is decomposed into five sub-agents: vehicle, queue, server, controller, and roadside unit (RSU)

Figure 2 illustrates the various parts of the overall system. To model the proposed system, assumptions and network parameters should be well defined, i.e. the mobility model of the vehicles, the RSU and node parameters, and the communication scheme.

Figure 2 Overall model description at a four-way single intersection

2.2.1 Assumptions
In this work, the mobility model was defined in the previous work [5], where vehicles are moving in the same direction without changing their speed. The virtual road segment is monitored by the RSU; the transmission range of the RSU reaches all vehicles in the road segment. To ensure proper communication, we assumed that each RSU has the ability to communicate with its adjacent RSUs and that each node is in the center of its transmission circle.

2.2.2 System parameters
The parameters for both the road segment and the mobile node (vehicle) are defined as follows: 1) Road segments: X road segments with the following attributes: a) ID of the segment (S_ID). b) Segment dimensions (L, W): L and W are the length and width of the segment. 2) Mobile nodes: N mobile nodes (vehicles) are assumed in each road segment, and each node has the following attributes: a) Node address (N_ID): Each vehicle node in the segment has a unique address that is the plate number (analogous to a MAC address). b) Road segment to which the node belongs (R). c) Node speed (N_S): Nodes are moving in the same direction with speeds uniformly distributed between 20 and 60 km/h. d) Node position (P_x, P_y). e) Strength of the signals received by the node from the RSUs. No more than two signals can be received (α1, α2). These parameters are the key to defining the segments' boundaries.

2.2.3 Communication schemes
The communication scheme between RSUs is defined according to the position of the RSU as follows: 1) Intra-segment scheme: the RSU and the traffic light are located in the same segment. In this case, the communication between the RSU and the controller is direct. 2) Inter-segment communication: in this case, the RSU and the traffic light belong to different segments. Accordingly, a layer of cooperation between adjacent RSUs exists.

2.3 Sub-agents interaction
The process begins when the controller sub-agent initiates a system-begin-request, every time t = T, and sends it to the RSU that serves the segment where the traffic light exists. This request is broadcast to every RSU in the road. Upon receiving the request, each RSU broadcasts a control message in the virtual segment it serves, requesting the vehicle nodes to send their system parameters, node position (P_x, P_y) and node speed (N_S). Accordingly, each vehicle node responds by sending these parameters, with the RSU as the destination address. One problem in such a model arises when a vehicle receives requests from two different RSUs, i.e., when the vehicle is an intermediate one. This problem was solved by embedding a signal strength comparator in the vehicle.
Accordingly, the vehicle responds to the strongest one; that is, if α1 > α2, then the vehicle belongs to RSU1. Upon receiving these vehicle parameters, each RSU sends them to the controller sub-agent. According to the location of the RSU, the communication follows one of the two modes previously mentioned in Section 2.2.3. Once the controller receives the parameters, it interacts with the queue sub-agent, requesting its size (Q), that is, the number of vehicles in the virtual segment where the traffic light exists. Upon receiving this information, the controller then calculates the traffic density (TD_x) for approach x as given in Equation (1), where m is the number of virtual segments in the approach, N_i is the number of vehicles in each virtual segment i, L_i is the length of segment i, and Q_x is the size of the queue in approach x; our system is a single intersection where four approaches exist. Once the traffic density is obtained for approach x, the controller calculates the green interval indicator (GII) for this approach as given in Equation (2). Accordingly, the controller calculates the green time (G_x) for approach x as given in Equation (3), where T_c is the cycle time and T_y is the yellow time. The controller then evaluates the service time (S) for each vehicle in the queue, given that the queue applies a First-In-First-Out (FIFO) scheduling algorithm in which the vehicle at the top of the queue is the nearest to the traffic light and should be served first. The vehicle's service time is calculated as given in Equation (4), where S_i is the service time for vehicle number i in the queue, such that the first service time is S_1 and S_0 is initialized to zero. Upon evaluating the service times of the vehicles, the controller interacts with the queue and requests it to de-queue the vehicle from the top of the queue and pass it to the server. The controller also passes the service time of the de-queued vehicle to the server, which serves the vehicles in turn.
The output of the controller is the port number of the input switch. The green time for each phase is calculated as a portion of the cycle time based on the traffic signal of that phase. The service time subsystem is used to model the delay time of the vehicle while crossing the traffic signal. The maximum and minimum service times are assumed four and two seconds respectively. The maximum service time is for the first vehicle in the queue. Then service time declines for the preceding vehicle until reach the minimum. The phase subsystems, i.e. Phase1 to Phase4, generates a number of vehicles and their speed randomly at each phase (road) as detailed in Figure 5. It consists of an entity generator where entities are generated upon a port connected to an event-based random number. Inter-arrival time is modelled as an exponentially distributed random variable. A set-attribute block is used to set speed attribute for each entity. Speed is assumed to be a random variable that uniformly distributed between 20 and 60 km/h. The delay time between the entity generator and the traffic signal depends on the speed of the entity. The delay time is modelled by an infinite server where the service time is the delay time. The delay time is calculated from the distance and the speed by dividing the distance by the speed. The traffic density which represents the number of vehicles in one-kilometer length of the 4.1Simulation parameters The proposed algorithms were assessed using extensive simulation experiments. 50 runs using Independent and identically distributed (IID) random variables for 10000 simulation time units were conducted. The yellow time (T y ) is set to 4 seconds. The speed of the vehicles is random variable that uniformly distributed between 20 and 60 km/h. The length of the road segment is considered to be 1 Km. 4.2.1Model verification using the green phase In this simulation, two cycles of 65 seconds length each were generated by the model with zero-vehicle generation as shown in Figure 6. Figure 6 Green phases for the four road approaches of the traffic signal during two cycles of 65 seconds. The green phases are equal for the invariant-traffic generation Figure 6 shows that the green phases are equal for the 4-road approaches used in our system. The green times are equal, as expected, since there is no variance in the vehicle density and arrival rate. Such equality verified the model, where the main cause is the invariant-traffic generated by our proposed model. 4.2.2Model verification using the inter-arrival times In this simulation, we monitor the generated interarrival times of the vehicle nodes for the four road approaches by running the proposed system for 250 seconds. Figure 7 provides a verification of the proposed model, where it shows the variability of the inter-arrival times as the simulation time goes by. 4.2.3Model verification using the service time In this simulation, two cycles of 65-second length each were generated by the model with zero-vehicle generation as shown in Figure 8. Figure 8 provides a verification of the proposed system through monitoring the service time for each approach, where it depicts the randomness in such system parameter. It shows that in each green phase the generated service times are random and uniformly distributed between [0.5, 2] seconds. 4.2.4Model verification using the average waiting time (AWT): In this simulation, the model is verified using AWT metric. AWT is calculated for distinct inter-arrival time (IAT) averages. 
The simulation results are shown in Table 1. The first column, in the Table, is the mean of the exponentially distributed inter-arrival time for vehicles generation, the mean is the same for the four road approaches. The average waiting time is calculated for two different cycles of 65 and 80 seconds. It is apparent that the average waiting time increases as the mean of inter-arrival time of vehicle generation decreases, and thus the generated model is verified. 4.3Impact of the average waiting time: In this simulation, the performance of the proposed adaptive controller (TD+Q) was measured in comparison with a fixed time controller that uses a preset green time based on a prior knowledge of the traffic flow. The performance metric was the average waiting time for all approaches at a traffic signal. The simulation was performed by measuring such metric for different variances of the vehicles, inter-arrival times in the range [0, 130]. The simulation results shown in Figure 9 illustrated that the adaptive controller outperforms the fixed-time controller when there is a significant variance (i.e. variance > 10) in the means of inter-arrival times for vehicles approaching the traffic signal, whereas the fixed-time outperforms the adaptive controller when the variance of the means of the inter-arrival times is less than 10. In order to find the optimal cycle time for the proposed system, the simulations were performed by measuring the average waiting time for several interarrival time variances and the cycle time. The simulation results indicated that the optimal cycle time is around 50 seconds regardless of the interarrival time variance, as shown in Figure 10. Figure 9 The average waiting times vs. the inter-arrival times variance for an adaptive and fixed-time controller 5.Conclusions and future work In this study, a model for an adaptive traffic density controller (TD + Q) was developed and verified for different traffic density parameters. The average waiting time (AWT) of the proposed model was then compared to a fixed-time baseline controller. The simulation results revealed that the proposed controller outperformed the baseline controller for variant inter-arrival times due to its adaptability to the variations in input parameters. The variance impact of the inter-arrival times was investigated for both controllers. It has been shown that the proposed controller outperformed the baseline one as the variance of inter-arrival times increased. The advantage of the fixed-time controller compared to the adaptive one is the simplicity of the controller. However, it does not adapt to variable traffic situations which degrades the performance. The only limitation of applying the proposed model is the assumption of inter-vehicles communication for reporting the position and the speed of the vehicles. This limitation can be overridden through the process of virtual road segmentation and the deployment of road-side units (RSUs) for communication. As a future work, the cycle length and the sequence of phases will be considered as adapted parameters. Another future direction is to consider more than one intersection as a distributed control system.
2020-02-06T09:03:44.550Z
2020-01-30T00:00:00.000
{ "year": 2020, "sha1": "1243421b668fb4f976587e0887ac4228b932301f", "oa_license": null, "oa_url": "https://www.accentsjournals.org/PaperDirectory/Journal/IJACR/2020/1/1.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "25fa2b8b4976dbde4fc74a89fb50ebb221b70c23", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }