Pairing of Supermassive Black Holes in unequal-mass galaxy mergers
We examine the pairing process of supermassive black holes (SMBHs) down to scales of 20-100 pc using a set of N-body/SPH simulations of binary mergers of disk galaxies with mass ratios of 1:4 and 1:10. Our numerical experiments are designed to represent merger events occurring at various cosmic epochs. The initial conditions of the encounters are consistent with the ΛCDM paradigm of structure formation, and the simulations include the effects of radiative cooling, star formation, and supernovae feedback. We find that the pairing of SMBHs depends sensitively on the amount of baryonic mass preserved in the center of the companion galaxies during the last phases of the merger. In particular, due to the combination of gasdynamics and star formation, we find that a pair of SMBHs can form in 1:10 minor mergers provided that the galaxies are relatively gas-rich (gas fractions of 30% of the disk mass) and that the mergers occur at relatively high redshift (z ∼ 3), when dynamical friction timescales are shorter. Since 1:10 mergers are among the most common events during the assembly of galaxies, and mergers are more frequent at high redshift, when galaxies are also more gas-rich, our results have positive implications for future gravitational wave experiments such as the Laser Interferometer Space Antenna.
INTRODUCTION
Compelling dynamical evidence indicates that supermassive black holes (SMBHs) with masses ranging from 10^6 to above 10^9 M_⊙ reside at the centers of most galactic spheroids (e.g., Ferrarese & Ford 2005). The masses of SMBHs correlate with various properties of their hosts, e.g. luminosity or mass (Magorrian et al. 1998; Häring & Rix 2004) and velocity dispersion (Ferrarese & Merritt 2000; Gebhardt et al. 2000). In the currently favored model for structure formation, the ΛCDM cosmology, galaxies grow hierarchically through mergers and accretion of smaller systems (e.g., White & Rees 1978). Thus, if more than one of the merging galaxies contained a SMBH, the presence of two or more SMBHs in the merger remnant would be almost inevitable during galaxy assembly (Begelman et al. 1980). However, it is unclear whether the dynamical processes at play are efficient in forming a close SMBH pair with separations of ∼10-100 pc, which may subsequently shrink to a bound binary and eventually merge via gravitational wave radiation. Such black hole coalescence events are expected to give rise to gravitational wave bursts that should be detectable by the Laser Interferometer Space Antenna (LISA) (Vecchio 2004).
SMBH pairing has been shown to proceed quickly when both compact objects are hosted by steep stellar cusps approaching each other from close distances (Milosavljević & Merritt 2001), or when embedded in a circumnuclear gaseous disk under appropriate thermodynamic conditions, but whether the large-scale merger can lead the SMBHs to such a favorable configuration is still a matter of debate. Previous studies found that, following a galaxy merger, the relative distance of the SMBHs in the remnant is very sensitive to the structure of the merging galaxies and to their initial orbit (Governato et al. 1994). Kazantzidis et al. (2005) showed that pairing is efficient in equal-mass disk galaxy mergers with cosmologically relevant orbits, while the presence of a dissipative component is necessary for the pairing of SMBHs in 1:4 mergers. Other recent studies (e.g., Springel et al. 2005; Johansson et al. 2009) focused on the effect of energetic feedback from black hole accretion onto the surrounding galaxy, but were not designed to follow the orbital evolution of SMBHs. Substantially less effort has been devoted to examining the fate of SMBHs in minor mergers (but see Boylan-Kolchin & Ma 2007), which are much more frequent in ΛCDM cosmologies (Lacey & Cole 1993; Fakhouri & Ma 2008). Investigating the necessary conditions for SMBH pair formation in this regime is of primary importance for the search for gravitational waves and for SMBH demographics and activity.
In this Letter, we report on the efficiency of the SMBH pairing process using a set of N-body/SPH simulations of disk galaxy mergers, with mass ratios q = 0.25 and 0.1, constructed to represent mergers occurring at various cosmic epochs. The choice of the initial conditions, in particular the masses of the SMBHs, is such that the corresponding SMBH coalescence events would be detectable with LISA (Sesana et al. 2005).
SIMULATION SET-UP
The galaxy models were initialized as three-component systems following the methodology outlined in Hernquist (1993). They comprise a Hernquist spherical stellar bulge (Hernquist 1990), an exponential disk with a gas mass fraction f_g, and an adiabatically contracted dark matter halo (Blumenthal et al. 1986) with an initial NFW profile (Navarro et al. 1996). A collisionless particle representing the SMBH was added at the center of each galaxy.
Our reference model is a Milky Way-type galaxy, with a virial velocity V_vir = 145 km/s, a disk mass fraction M_d = 0.04 M_vir, and a bulge mass fraction M_b = 0.008 M_vir. The mass of its central SMBH is M_BH = 2.7 × 10^6 M_⊙, consistent with the updated M_BH-M_bulge relation (Häring & Rix 2004). The disk scale height and the bulge scale radius are z_0 = 0.1 R_d and a = 0.2 R_d, respectively, R_d being the exponential disk scale length. R_d is determined following the model by Mo, Mao, & White (1998) (MMW hereafter), which yields disk galaxies lying on the Tully-Fisher relation. Models at redshift z = 0 were initialized with a halo concentration parameter c = 12 (Bullock et al. 2001). We also ran mergers with initial conditions rescaled to z = 3 according to MMW, keeping V_vir fixed, as expected for the progenitors of our z = 0 models (Li et al. 2007). Considering high-redshift mergers is crucial, because the merger rate increases with look-back time, and a large fraction of the gravitational wave signal from coalescences of SMBH binaries is predicted to originate from this cosmic epoch at the corresponding mass scale (Sesana et al. 2005; Volonteri et al. 2003). Following MMW, all masses, positions, and softening lengths were rescaled by a factor H(z = 3)/H_0, i.e. the ratio between the Hubble constant at z = 3 and its present-day value for a ΛCDM "concordance" cosmology (H_0 = 70 km s^−1 Mpc^−1, Ω_m = 0.3, Ω_Λ = 0.7). The halo concentration was chosen according to Bullock et al. (2001), c = 3. Satellite galaxies were initialized with the same structure, with the mass in each component scaled down by q. The resulting SMBH pairs fall in the typical range of masses whose coalescences will be detectable with LISA (Sesana et al. 2005). We chose orbital parameters for the mergers that are common for merging halos in cosmological simulations (Benson 2005): the barycenters of the two galaxies were placed at a distance equal to the sum of their virial radii and set on parabolic orbits with pericentric distances of 20% of the virial radius of the more massive halo. All mergers we considered were coplanar and prograde.
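As a rough numerical check of this rescaling, the sketch below computes H(z = 3)/H_0 for the stated concordance cosmology and the scalings implied by the MMW relations at fixed V_vir (R_vir = V_vir/[10 H(z)], M_vir = V_vir^3/[10 G H(z)]). It is a minimal reconstruction from the published relations, not code from the simulations.

```python
import math

# Concordance cosmology quoted in the text
Omega_m, Omega_L = 0.3, 0.7

def hubble_ratio(z):
    """E(z) = H(z)/H0 for a flat LambdaCDM cosmology."""
    return math.sqrt(Omega_m * (1.0 + z) ** 3 + Omega_L)

E3 = hubble_ratio(3.0)  # ~4.46

# At fixed V_vir, MMW give R_vir and M_vir proportional to 1/H(z),
# so lengths and masses of the z = 3 progenitors shrink by E(z):
print(f"H(3)/H0 = {E3:.2f}; lengths and masses rescaled by 1/{E3:.2f} = {1/E3:.2f}")
```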
All simulations were performed with GASOLINE, a TreeSPH N-body code (Wadsley et al. 2004). We ran collisionless ("dry", with f_g = 0) and gasdynamical ("wet") mergers with the same gas fraction in the primary and secondary galaxies, either f_g = 0.1 or 0.3. In wet runs, atomic gas cooling was allowed; star formation (SF) was treated according to Stinson et al. (2006). Gas particles are eligible to form stars if their density exceeds 0.1 cm^−3 and their temperature drops below 1.5 × 10^4 K, and the energy deposited by a Type-II supernova on the surrounding gas is 4 × 10^50 erg. With this choice of parameters, our blast-wave feedback model was shown to produce realistic galaxies in cosmological simulations (Governato et al. 2007). A summary of our set of simulations is presented in Table 1.
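The star formation recipe above amounts to a simple per-particle eligibility test. A minimal sketch in Python, with illustrative variable names (not GASOLINE's actual parameter keywords):

```python
N_SF_THRESHOLD = 0.1  # minimum gas density [atoms cm^-3]
T_SF_CEILING = 1.5e4  # maximum gas temperature [K]
E_SN_II = 4e50        # SN II energy deposited in surrounding gas [erg]

def eligible_for_sf(n_h: float, temp: float) -> bool:
    """A gas particle may spawn stars only if it is both dense and cold enough."""
    return n_h > N_SF_THRESHOLD and temp < T_SF_CEILING

print(eligible_for_sf(n_h=0.5, temp=8e3))   # True: dense, cold gas
print(eligible_for_sf(n_h=0.05, temp=8e3))  # False: too diffuse
```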
In each galaxy (except for a very high-resolution test, see §3.1), we employed 10^6 particles for the halo and, initially, 2 × 10^5 star particles and 10^5 gas particles, when included. The force softening was 100 pc in our reference model, scaled down by q^(1/3) in the satellites and by H(z)/H_0 in high-z runs, yielding a force resolution of ∼20 pc in the satellite galaxy for q = 0.1 at z = 3. With such a high particle number, the mass of star particles in the satellite is an order of magnitude lower than M_BH, ensuring that the SMBH dynamics is not affected by spurious two-body collisions. In what follows, we define two SMBHs as a "pair" if their relative orbit shrinks down to a separation equal to twice the softening. From these distances, a SMBH binary may form in ∼1 Myr.
Collisionless Mergers
In collisionless runs, the satellite is not able to dissipate the energy gained through tidal shocks at pericentric passages (Gnedin et al. 1999; Taffoni et al. 2003). For q = 0.25, dynamical friction on the dark matter halo of the more massive galaxy is efficient, and the satellite sinks down to ∼10 kpc from the center after three orbits. At that point the central density of the satellite has decreased considerably because of tidal heating (Kazantzidis et al. 2004). Its innermost region is then tidally disrupted, leaving the small SMBH at a distance of a few kiloparsecs. The dynamical friction timescale has now greatly increased, because the mass of the small "naked" SMBH is orders of magnitude lower than that of the satellite's core that once surrounded it. No pair is formed, and the smaller SMBH is left wandering a few kiloparsecs away from the center of the remnant (Fig. 1). We note that estimating the dynamical friction timescale of the SMBH correctly from the simulation is not trivial in this regime, because the dark matter component is still dynamically important at kiloparsec distances from the center of the remnant. Even at high resolution, the mass of the dark matter particles of the primary galaxy is comparable with that of the SMBH, hence dynamical friction could be altered by discreteness effects. However, the dynamical friction timescale needed for the "naked" SMBH to reach the center of the remnant can be estimated using Chandrasekhar's formula (Colpi et al. 1999), and it turns out to be longer than a Hubble time. Hence we conclude that the two SMBHs will not form a pair.
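For concreteness, the sketch below evaluates the Chandrasekhar-based sinking-time estimate, t_df ≈ (1.17/ln Λ) r² v_c / (G M) (Binney & Tremaine's formula for a circular orbit in an isothermal background). The orbital radius, circular velocity, and Coulomb logarithm are assumed illustrative values, not quantities measured from the runs:

```python
G = 4.301e-6  # gravitational constant [kpc (km/s)^2 / Msun]

def t_df_gyr(r_kpc: float, v_c_kms: float, m_sun: float, ln_lambda: float = 10.0) -> float:
    """Chandrasekhar dynamical friction timescale for a circular orbit."""
    t = (1.17 / ln_lambda) * r_kpc**2 * v_c_kms / (G * m_sun)  # [kpc / (km/s)]
    return t * 0.978                                           # -> [Gyr]

# Naked SMBH of the q = 0.25 satellite (0.25 x 2.7e6 Msun), stranded a few kpc out:
print(f"{t_df_gyr(r_kpc=3.0, v_c_kms=145.0, m_sun=0.25 * 2.7e6):.0f} Gyr")  # ~51 Gyr >> Hubble time
```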
In the q = 0.1 case, dynamical friction is rather ineffective. The sinking time for the satellite is longer than a Hubble time for a z = 0 merger, owing to the low initial mass of the satellite and to mass loss due to tidal stripping (Colpi et al. 1999). However, since mergers are much more common at higher redshift, when orbital times are shorter by a factor H(z)/H_0, we performed a q = 0.1 merger starting at z = 3 (see §2), an epoch at which these SMBH pairs are predicted to be most typical. The merger is completed in ∼2.5 Gyr. Similarly to the q = 0.25 case, a wandering SMBH is left at several kiloparsecs from the center of the primary (Fig. 1). In order to check that the tidal disruption of the core was not affected by numerical heating, we ran the same merger with five times higher mass resolution in the stellar component of both galaxies, and correspondingly higher force resolution; no significant difference in the SMBH orbit was found (Tab. 1). Therefore, even in this case the two SMBHs do not form a pair.
Dissipational Mergers with Star Formation
The presence of a star-forming gaseous component crucially affects the orbital decay of the SMBH via its dynamical response to tidal forces and torques.
The orbits of dry and wet q = 0.25 mergers differ only after the first couple of orbits (∼6 Gyr, see Fig. 1). At the second pericenter, tidal forces excite a strong bar instability in the satellite. Dissipation in the gas and torques exerted by the stellar bar onto the gas drive a gaseous inflow toward the center of the satellite (see inset in Fig. 1), increasing the central star formation rate by a factor of 3. Thus, the potential well of the satellite deepens, ensuring the resilience of its central part to tidal stripping and shocks even when it plunges near the center of the primary. As a consequence, the small SMBH continues to sink fast, because it remains embedded in the massive core of the satellite. A pair of SMBHs is formed in this case, confirming previous results (Kazantzidis et al. 2005).
In q = 0.1, z = 3 wet mergers, both f_g = 0.1 and 0.3 were employed; the latter should be a more realistic assumption, since disk galaxies at z = 3 are believed to have a higher gas mass fraction (e.g., Franx et al. 2008). In these cases, star formation and supernovae feedback affect the structure of the interstellar medium (ISM) in the disks quite dramatically (see also Governato et al. 2007). The disks develop a clumpy and irregular multi-phase structure, and turbulent velocities of the gas become a significant fraction (30%) of the circular velocity in this mass range (V_vir = 64 km s^−1). Star formation in the center of the satellite is enhanced compared to the same model evolved in isolation, even though the star formation rate integrated over the whole galaxy is unchanged (see Fig. 2).

[Fig. 2 caption — Left panel: total bound mass profiles. The more gas-rich satellite develops a higher concentration during the merger, compared to the other cases. Tidal truncation and gas removal cause a factor of ∼2 difference in mass at 2 kpc between the isolated and merging cases. Right panel: bound mass in stars formed after the start of the simulation. The total amount of stars formed depends roughly only on the initial f_g, but SF is more localized in the center during the merger because of tidal forces. The more gas-rich merging satellite undergoes the strongest central SF burst, developing a higher central density.]

The concentrated star formation and the turbulent motions of the gas stabilize the disks against bar instability (see inset in Fig. 1). In the absence of bar-driven torques, the strong gaseous inflow seen for higher-mass objects (q = 0.25) does not happen for q = 0.1. Yet, tidal torques drive some gas towards the center, explaining the enhanced central star formation (Fig. 2). During the first three orbits, the f_g = 0.1 case preserves its initial central density owing to the mild mass inflow, rather than lowering it as in the collisionless case, while the f_g = 0.3 satellite develops a steeper stellar cusp. Once the satellites go through the second pericentric passage, their ISM is prone to ram pressure stripping by the gas disk of the primary galaxy outside the ram pressure stripping radius (Marcolini et al. 2003). Nearly 90% of the gas is swept away when they first enter the disk of the primary, while what remains is stripped during the next orbit: at t = 2 Gyr, the satellites have lost all their gas content, even in their central region. From this point onward, the satellite with initial f_g = 0.3 is a cuspy, gas-poor object, subject to dynamical friction in the stellar and gaseous background. Its sinking is relatively fast because the steeper stellar density profile implies a larger bound mass, enhancing dynamical friction relative to the dry merger case. Moreover, its response to tidal shocks is nearly adiabatic (Gnedin et al. 1999), preserving it from tidal disruption. On the contrary, the satellite of the f_g = 0.1 run undergoes a slower decay because of the lower bound mass. It then experiences a higher number of tidal shocks at pericentric passages, which further decrease its density until its complete disruption. As a consequence, the f_g = 0.3 merger leaves the lighter SMBH at 70 pc from the more massive one (Fig. 1, Tab. 1) in a gas-rich environment, where the dynamical friction timescale for the SMBH to sink to the center, based on Chandrasekhar's formula, is very short (< 1 Gyr).
Instead, in the f_g = 0.1 case the final distance of the SMBHs is ∼400 pc; at such separations the dynamical friction timescale is a few Gyr. Hence the pairing will occur within one Hubble time in both cases, but will be considerably faster in the f_g = 0.3 case.
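As an aside, the stripping condition at work here can be illustrated with the classic Gunn & Gott (1972) criterion: gas is removed wherever the ram pressure ρ v² exceeds the disk's gravitational restoring force per unit area, ≈ 2π G Σ_* Σ_gas. The sketch below uses assumed, order-of-magnitude input values; the stripping radii in the simulations follow Marcolini et al. (2003).

```python
import math

G = 6.674e-8                   # gravitational constant [cgs]
M_P = 1.673e-24                # proton mass [g]
MSUN, PC = 1.989e33, 3.086e18  # solar mass [g], parsec [cm]

def is_stripped(n_ambient: float, v_kms: float, sigma_star: float, sigma_gas: float) -> bool:
    """Gunn-Gott: rho * v^2 > 2 pi G Sigma_star Sigma_gas.
    n_ambient in cm^-3, v in km/s, surface densities in Msun/pc^2."""
    ram = n_ambient * M_P * (v_kms * 1e5) ** 2
    restoring = 2.0 * math.pi * G * (sigma_star * MSUN / PC**2) * (sigma_gas * MSUN / PC**2)
    return ram > restoring

# Satellite ISM plowing through the primary's gas disk (assumed values):
print(is_stripped(n_ambient=0.1, v_kms=100.0, sigma_star=10.0, sigma_gas=1.0))  # True
```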
DISCUSSION AND CONCLUSIONS
Our results show that the formation of a SMBH pair in unequal-mass mergers of disk galaxies is very sensitive to the details of the physical processes involved. None of the collisionless cases we studied led to SMBH pairing: tidal shocks progressively lower the density in the satellite until it dissolves, leaving a wandering SMBH in the remnant. The inclusion of gas dynamics and SF significantly changes the outcome of the merger. For higher mass ratios (q = 0.25) at z = 0, bar instabilities funnel gas to the center of the satellite, steepening its potential well and allowing it to survive tidal disruption down to the center of the primary. Therefore, in this case the presence of a dissipative component is necessary and sufficient to pair the SMBHs at ∼200 pc scales, creating favorable conditions for the formation of a binary. The smaller satellites considered here (q = 0.1, z = 3) are more strongly affected by both internal SF and the gasdynamical interaction between their ISM and that of the primary galaxy. Torques in the early stages of the merger funnel gas to the center less efficiently, owing to the absence of a stellar bar and the stabilizing effect of turbulence. As a result, ram pressure strips away all of the ISM of the satellite. If satellites develop a higher central stellar density by rapidly converting their gas into stars before ram pressure removes it, they can retain enough bound mass to ensure the pairing of the two SMBHs. Gas-rich satellites (f_g = 0.3) undergo a stronger burst of SF during the first orbits, and therefore meet this requirement better than f_g = 0.1 satellites. Yet in both cases the central density of the cusp remains high enough to permit its survival, allowing the pairing of the two SMBHs within a Hubble time.
The pairing of the two SMBHs takes less than 1 Gyr in the gas-rich systems that should be common at z = 3. Therefore, if the M_BH-M_bulge relation approximately holds at z = 3 as in the local Universe, the galaxies considered here should lead to the formation of representative SMBH pairs at such cosmic epochs (Volonteri et al. 2003). These pairs are also expected to contribute significantly to the high-z gravitational wave signal in the LISA band (Sesana et al. 2005). Since we show that gas-dynamical processes allow such an efficient pairing of the SMBHs, the results of this Letter strengthen the case for the observability of these coalescence events. On the other hand, we show that the timing between galaxy mergers and mergers of their SMBHs is sensitive to the gaseous content of the merging galaxies. Hence, SMBH coalescence events do not necessarily trace galaxy mergers directly. This will have important implications for the interpretation of the LISA data stream.
We note that the orbital evolution of the SMBH pairs, in the dynamical range considered here, has only a weak dependence on the masses of the two SMBHs. As shown in Figure 2, the stellar mass enclosed inside two softening lengths from the center of our galaxy models (hence close to our resolution limit) already exceeds M_BH by more than an order of magnitude. This is the effective mass that determines how quickly the SMBHs will sink. Therefore, lowering M_BH or increasing it by up to an order of magnitude would have no effect on sinking timescales before the disruption of the satellite. Instead, after disruption, the analytic estimate for the dynamical friction timescale of the naked SMBH would change linearly with M_BH. Similarly, if nuclear star clusters with masses ∼10 M_BH (Wehner & Harris 2006; Ferrarese et al. 2006) were present around the SMBHs, their sinking timescales would still be longer than a Hubble time in our dry mergers, where the final SMBH separation exceeds 1 kpc, while the pairing would now occur in well below a Gyr in all our wet mergers. Hence, either a larger M_BH or the presence of a nuclear star cluster would enhance even further the difference between dry and wet mergers.
Lastly, a general limitation of our simulations is that they lack gas accretion onto the SMBHs and associated energy feedback. Additional heating from the active SMBH should reduce the binding energy of the gas, making it more susceptible to stripping processes, and perhaps inhibiting the formation of a steep stellar cusp. This would reduce the efficiency of the pairing process, but the effect will strongly depend on when the SMBH becomes active during the merger. Although it is unlikely that these effects will change the overall picture presented in this Letter, they will have to be explored in a forthcoming paper.
"year": 2008,
"sha1": "3194c1049758c04b29793ad46d197aac8ccbea5c",
"oa_license": null,
"oa_url": "https://www.zora.uzh.ch/id/eprint/30887/1/Callegari_AstrophyJLetters_2009_VM-V.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "94c0dcf269c82ede1c075772e01a71dee970d69d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Impostor Phenomenon and L2 willingness to communicate: Testing communication anxiety and perceived L2 competence as mediators
The Impostor Phenomenon (IP) describes experiences of perceived intellectual fraudulence despite the existence of objectively good performances, and it is a robust predictor of experiences and outcomes in higher education. We examined the role of the IP in the domain of second language (L2) acquisition by testing its relations with a robust predictor of L2 use, willingness to communicate (WTC). We collected self-reports of 400 adult Iranian L2 learners and tested the associations between the IP and WTC. As expected, we found a negative association between IP and WTC (r = −0.13). When testing a mediation model with perceived competence and communication anxiety as parallel mediators, we found evidence for full mediation via perceived competence. Our findings show the importance of considering self-evaluations in the domain of L2 acquisition. Further implications and limitations are discussed.
Introduction
The Impostor Phenomenon (IP; Clance, 1985) describes individual differences in feelings of intellectual fraudulence despite the existence of objective positive performance feedback (e.g., grades and recommendation letters). Those with high expressions of the IP ("Impostors") constantly dismiss positive evaluations of their performance, foremost by attributing their success to chance and luck instead of their ability (e.g., Brauer and Proyer, 2022). The IP relates to low engagement in striving for opportunities to advance one's career and educational achievements (e.g., Neureiter and Traut-Mattausch, 2016, 2017; Parkman, 2016; Cisco, 2020). We aimed to extend the knowledge on the IP in the educational domain by investigating its role in second language (L2) acquisition among adult learners. IP-typical self-perceptions of inability might hinder the process of learning a second language, as the latter requires learners to engage in active learning, experience progress throughout their learning process, and have positive experiences when utilizing their skill set. We expected that the IP is negatively associated with L2 learning because the IP robustly relates to experiences of underachievement, irrespective of actual achievements (Sakulku and Alexander, 2011). To address this, we collected data from Iranian L2 learners and tested the association between the IP and willingness to communicate (WTC), an immediate predictor of L2 use (MacIntyre et al., 1998). Moreover, we considered indirect effects of communication anxiety and perceived competence on the association between the IP and WTC. To our knowledge, no study has hitherto examined the role of the IP for WTC in L2, and we aimed to narrow this gap in the literature to extend the understanding of self-perceptions of intellectual fraudulence in the domain of L2 acquisition.

The Impostor Phenomenon

Clance (1985) described the IP as an inclination to underestimate one's own abilities and a fear of being exposed as an intellectual fraud. Impostors are convinced that they are "intellectual phonies," and despite the existence of objective indicators of their successes, such as grades or recommendation letters, they discount their positive performance feedback and assume that they would not be able to repeat their successes (Clance, 1985). The IP has been studied regarding its underlying mechanisms and consequences. Foremost, the IP-typical attributional style explains how Impostors perceive the causes of their successes. Impostors show external-instable-specific attributions of positive performance outcomes, while attributions of events in social contexts and negative events are unrelated to the IP (e.g., Brauer and Wolf, 2016). Studies analyzing self-reports, vignettes, and experiments showed that the IP robustly relates to externalizing success and experiencing negative emotions after positive feedback (e.g., Thompson et al., 1998; Brauer and Wolf, 2016; Badawy et al., 2018; Vaughn et al., 2020; Brauer and Proyer, 2022). Thus, Impostors discount their ability by attributing their positive performances and successes externally to chance and luck. Finally, Impostors' attributions are biased, because their actual performance is unrelated to the IP (e.g., Cozzarelli and Major, 1990; Brauer and Proyer, 2022).
The IP is unrelated to age, study fields, and vocations (Sakulku and Alexander, 2011). Findings on gender differences are mixed, with some studies showing that women experience on average higher levels of the IP than men, but effect sizes are small (e.g., Chrisman et al., 1995; Brauer and Wolf, 2016; Badawy et al., 2018). Comparisons between students and working professionals showed that the IP is more pronounced among university students, with robust effects of medium size (Hedges' gs ≈ 0.50; Brauer and Proyer, 2017, 2019; Neureiter and Traut-Mattausch, 2016). The IP relates to numerous negative consequences concerning mental health (e.g., greater levels of anxiety, depressiveness, and neuroticism; Sakulku and Alexander, 2011; Vergauwe et al., 2015) and one's career. For example, Impostors show less inclination to career planning and striving, less motivation to lead, lower occupational self-efficacy, fewer resources for adapting in their careers, and lower job satisfaction, to name but a few (Vergauwe et al., 2015; Neureiter and Traut-Mattausch, 2016, 2017).
Despite the well-acknowledged negative links between the IP and learning in different domains (e.g., Blondeau and Awad, 2018; Lee et al., 2022), there is as yet limited knowledge on the IP and its role in learning a second/foreign language. Yamini and Mandanizadeh (2011) found that the IP relates negatively to self-efficacy concerning L2 writing abilities in 94 Iranian learners of English as L2. Expanding on their research, we anticipated that language learners with inclinations to the IP would be more prone to experience language anxiety in terms of communication anxiety, fear of negative evaluation, and test anxiety. Based on the well-documented negative association between language anxiety and learners' WTC in the L2 (e.g., Elahi Shirvan et al., 2019; Dewaele and Dewaele, 2020; Alrabai, 2022), we anticipated the IP to relate negatively to students' WTC in the target foreign language.
The present study
We aimed to extend the knowledge on the role of the IP among L2 learners. As a criterion, we examined WTC, which describes "the intention to speak or to remain silent, given free choice" and "a readiness to enter into discourse at a particular time with a specific person or persons, using a L2" (MacIntyre et al., 1998, p. 547). WTC relates robustly to self-evaluations, teacher ratings, and objective indicators of L2 proficiency and is considered the most immediate predictor of L2 use (e.g., MacIntyre et al., 1998; Barabadi et al., 2021). In accordance with the literature, we treat WTC as an indicator of inclinations to engage in L2, which is important for the learning process of a foreign language. Speaking a foreign language during the learning process can expose L2 learners' grammatical and vocal expressions of the new language and go along with experiences of shame and anxiety (Teimouri, 2018; Wilson and Lewandowska-Tomaszczyk, 2019). We expected that the IP relates negatively to WTC (H1), considering that Impostors' academic self-concept, which is characterized by, for example, low academic self-esteem, low self-efficacy, and the discounting of abilities, translates into being less inclined to expose their performance (i.e., speaking the foreign language) in front of others such as teachers and co-students. Additionally, we examined the role of communication anxiety and self-perceived competence as mediator variables. Communication anxiety describes "worry and negative emotional reaction aroused when learning or using a second language" (MacIntyre, 1999, p. 27). According to Bravata et al. (2020), anxiety is deeply linked to the IP in that it is typically presumed to precede and trigger impostor feelings among young people and adults alike. Also, its components have been well studied in relation to the IP. For example, fear of negative evaluation is one significant aspect of anxiety, and Chrisman et al. (1995) found that Impostors usually report greater levels of fear of being negatively evaluated by others. It seems that in the context of learning, students who experience certain levels of the IP are likely to demonstrate a higher tendency toward fear of negative evaluation by their teachers or peers. Accordingly, we assessed communication anxiety as a potential mediator variable to examine its indirect effect on the IP-WTC association. We expected a positive association between the IP and communication anxiety (H2a).
MacIntyre and Charos (1996) suggested that L2 learners' self-assessment of their L2 competence might be more important for their L2 communication than their actual L2 ability. Thus, self-perceptions of competence play a role in learners' inclinations to L2 use. There is robust evidence that the IP is characterized by low academic self-evaluations, low academic self-esteem, and inclinations to underestimate one's abilities (e.g., Cozzarelli and Major, 1990; Thompson et al., 1998; Badawy et al., 2018; Brauer and Proyer, 2022), and that self-perceived competence relates positively to WTC (e.g., Dewaele, 2008; Piechurska-Kuciel, 2011; Lockley, 2013). Further, Yamini and Mandanizadeh (2011) showed that the IP relates negatively to self-efficacy in L2 abilities. Therefore, we also took self-perceived competence into account as a potential mediator variable and expected that the IP relates negatively to self-evaluations of competence (H2b). Using a parallel mediation analysis model, we examined whether the hypothesized negative association between the IP and WTC (expected in H1) would be explained by indirect effects of anxiety and self-perceived competence (see Figure 1 for the model).
Materials and method

Sample and procedure
Our sample comprised N = 400 (15% male and 84% female) participants who studied Teaching English as a Foreign Language (TEFL). The majority (96.5%) were BA students and the remainder were master's or PhD students. Since most participants were BA students of TEFL, we present the main courses that they are required to take during their BA program: four language skills, grammar, study skills, linguistics, research, testing, teaching methodology, material development, and a practicum. Overall, BA students must complete 136 credit courses during the course of eight semesters. For each "two-credit" course, students have to attend 16 sessions. Participants' mean age was 21.1 years (SD = 4.1; median = 20). We collected all data online using GoogleDoc. At the time of data collection, all university classes were held online during the COVID-19 pandemic. We recruited our participants from three Iranian universities (Ferdowsi University of Mashhad, University of Tehran, and University of Bojnord) and shared the link to the online questionnaire in L2 classes. Institutional approval is not required.

[Figure 1. Parallel mediation model testing the association between the Impostor Phenomenon and willingness to communicate under consideration of perceived competence and communication anxiety.]
Instruments
We used self-report questionnaires to assess the study variables. Participants gave their responses to the items on 5-point Likert-type scales (1 = strongly disagree; 5 = strongly agree).
Impostor Phenomenon
We used the Clance Impostor Phenomenon Scale (CIPS; Clance, 1985) for the assessment of the IP in its Persian translation. The CIPS comprises 20 items (e.g., "When people praise me for an achievement, I fear that I will not be able to meet their expectations in the future."). There is good evidence for the reliability and validity of the instrument in different language versions (for an overview, see Mak et al., 2019), and we found good internal consistency in our study (α = 0.84).
Willingness to communicate
We assessed willingness to communicate (WTC) with seven items that were originally developed by Weaver (2005) and adapted by Khajavy et al. (2018) for use in the context of foreign language learning. A sample item is "I am willing to speak English about a topic which is written on a board or a piece of paper." In our study, the internal consistency was high (α = 0.87).
Communication anxiety
We used Dewaele and Dewaele's (2020) 8-item scale to assess L2 communication anxiety. This scale assesses the extent to which L2 learners experience communication anxiety when speaking English as a foreign language. A sample item is "I get nervous and confused when I am speaking in my FL class." The results of the current study provided evidence for the reliability and validity of this scale; the internal consistency was high (α = 0.87).
Perceived L2 competence
We assessed perceived L2 competence (PC) with eight items from Peng and Woodrow (2010) and Fushino (2010). This scale assesses the extent to which L2 learners perceive their competence in the L2. A sample item is "I am able to do a role-play standing in front of the class in English (e.g., ordering food in a restaurant)." The internal consistency was α = 0.91.
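For reference, the internal consistencies reported above follow from the standard Cronbach's α formula, α = k/(k−1) · (1 − Σ var_item / var_total). A minimal sketch with a hypothetical response matrix (the real item data are not reproduced here):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents x n_items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of single-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the sum score
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(0)
demo = rng.integers(1, 6, size=(400, 20))        # e.g., 20 CIPS-like items
print(round(cronbach_alpha(demo), 2))            # random data -> alpha near 0
```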
Data analysis
We examined the intercorrelations between our study variables by computing bivariate correlations for preliminary analyses. Power analyses with G*Power (Faul et al., 2007) showed that our sample size allowed us to detect small effects (ρ = 0.16) with 90% power and a 5% type I error rate (two-tailed).
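The G*Power result can be reproduced with the Fisher z approximation for a bivariate correlation; a minimal sketch mirroring the values reported above:

```python
import math
from scipy.stats import norm

rho, alpha, power = 0.16, 0.05, 0.90
z_a = norm.ppf(1 - alpha / 2)                 # 1.96
z_b = norm.ppf(power)                         # 1.28
n = ((z_a + z_b) / math.atanh(rho)) ** 2 + 3  # required sample size
print(math.ceil(n))                           # ~407, in line with the N = 400 collected
```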
We tested our hypotheses with regression analyses in Mplus 8.6 (Muthén & Muthén, 1998-2017): First, we tested the baseline model with the IP as the independent variable and WTC as the outcome. Secondly, we tested our parallel mediation model, which included anxiety and PC as mediator variables for the association between the IP and WTC (see Figure 1 for a visualization of the model). We used maximum likelihood estimation and report unstandardized path coefficients (b), standard errors (SE), 95% confidence intervals (CI), p-values, and the determination coefficient R². We computed bootstrapped 95% CIs (k = 5,000 samples) for all parameters. In line with MacKinnon et al. (2007), we tested the statistical significance of the indirect effects on the basis of the 95% CIs. Accordingly, the existence of an indirect effect is supported when the CI does not include zero (MacKinnon et al., 2007). In all analyses, we controlled for gender as a covariate to account for potential gender differences as described in the literature (e.g., Badawy et al., 2018; Fleischhauer et al., 2021).
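A minimal sketch of such a parallel mediation test with bootstrapped CIs, analogous to the Mplus model; the column names (ip, anxiety, pc, wtc, gender) are hypothetical stand-ins for the study variables:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def indirect_effects(df: pd.DataFrame):
    """a*b products for the two mediators, controlling for gender."""
    a1 = smf.ols("anxiety ~ ip + gender", df).fit().params["ip"]
    a2 = smf.ols("pc ~ ip + gender", df).fit().params["ip"]
    outcome = smf.ols("wtc ~ ip + anxiety + pc + gender", df).fit()
    return a1 * outcome.params["anxiety"], a2 * outcome.params["pc"]

def bootstrap_ci(df: pd.DataFrame, k: int = 5000, seed: int = 0):
    rng = np.random.default_rng(seed)
    draws = np.array([indirect_effects(df.iloc[rng.integers(0, len(df), size=len(df))])
                      for _ in range(k)])
    # An indirect effect is supported when its 95% CI excludes zero
    return np.percentile(draws, [2.5, 97.5], axis=0)
```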
Preliminary analyses
The descriptive statistics are displayed in Table 1. The means and SDs of the scale scores were comparable to prior studies (e.g., Brauer and Wolf, 2016; Barabadi et al., 2021). The kurtosis (≤ 0.92) and skewness (≤ 0.58) of the study variables did not indicate deviations from normality. Associations of gender with the IP, anxiety, and PC were negligible (rs ≤ 0.05, ps ≥ 0.153), but women showed a tendency toward higher WTC (r = −0.12, p = 0.016). Age was unrelated to all study variables (rs ≤ 0.10).
The correlations among the study variables were as expected, with a positive association between the IP and anxiety (r = 0.45), a positive association between PC and WTC (r = 0.62), as well as negative correlations of anxiety with PC (r = −0.61) and WTC (r = −0.45; all coefficients controlled for gender; Table 1). In line with H1, the IP related negatively to WTC (r = −0.12, p = 0.014), but the effect was small. As hypothesized, the IP related positively to communication anxiety (r = 0.45, p < 0.001; H2a), and higher IP went along with perceiving oneself as lower in competence (r = −0.15, p = 0.003; H2b).
Regression analyses
The baseline model showed the expected negative association between the IP and WTC (β = −0.12, p = 0.015; Table 2), but the effect size was small. The model explained 2.9% of the variance in WTC. Thus, we found support for H1.
Next, we tested the mediation model with communication anxiety and perceived competence as mediator variables (see Figure 1; Table 2). The direct path between the IP and WTC became negligible and non-significant (b = 0.00, 95% CI [−0.006, 0.007]) after adding anxiety and PC. We found that anxiety and PC yielded a statistically significant negative total indirect effect (b = −0.01, 95% CI [−0.014, −0.004]) on the association between IP and WTC. The inspection of the single indirect effects showed an indirect effect of PC (b = −0.01, 95% CI [−0.009, −0.002]), whereas anxiety did not explain the IP-WTC relationship (b = 0.00, 95% CI [−0.007, 0.000]). Thus, higher IP related to lower self-perceptions of competence and thereby less WTC. The mediation model explained 39.8% of the variance in WTC. In summary, the IP-WTC association was fully mediated via PC (Muthén et al., 2017).
Discussion
Our study aimed to extend the knowledge regarding the IP in the educational domain by examining its role in L2 learners. We tested the relationship between the IP and WTC, a robust predictor of L2 use (MacIntyre et al., 1998). As expected, our data showed that the IP goes along with lower WTC among L2 learners. However, the effect size was small, suggesting that the IP might play only a minor role in L2 learning when evaluating the direct effect. We additionally assessed whether communication anxiety and perceived competence might yield indirect effects on the IP-WTC association. We found that the IP-WTC association was mediated by the indirect effect via perceived competence, whereas the contribution of anxiety was negligible. Perceived competence has been identified as a robust predictor of WTC in prior studies (e.g., Dewaele, 2008; Piechurska-Kuciel, 2011; Lockley, 2013), and this aligns well with Impostors' tendency to discount their abilities and underestimate their competence (e.g., Brauer and Proyer, 2022). This is also consistent with Yamini and Mandanizadeh's (2011) finding that the IP is associated with lower self-efficacy and, as a result, lower self-perceived writing competencies in L2. Our findings show that the IP's negative association with PC robustly relates to WTC and that Impostors' inclination to dismiss their competencies is detrimental to engaging in communicating in the foreign language. Thus, while the IP plays only a minor role in terms of its direct effect on WTC, Impostors' tendencies to perceive themselves as low in competence contribute indirectly to understanding inhibitions in WTC in L2 learners. One might argue that there might be redundancy between the IP and PC, but our correlation analyses showed that the expected overlap was far from redundancy, with less than 2.3% shared variance. Although our expectations concerning the direction of the associations between communication anxiety and the IP and WTC were met in line with prior findings (e.g., Chrisman et al., 1995; Barabadi et al., 2021), the indirect effect of communication anxiety on the IP-WTC association did not reach statistical significance. Thus, although anxiety robustly related to the IP and WTC with comparatively strong correlation effects, its mediating role for the IP-WTC relationship was negligible. It could be argued that although our expectations concerning the directions of the effects were met for the relations between the IP and anxiety, and anxiety and WTC, their joint effect was negligible when perceived competence was considered in the model. It is desirable that future research clarifies the statistical robustness and practical relevance of this finding, as well as extends research on anxiety for L2 learning (e.g., by considering fine-grained antecedents of anxiety such as shame; Teimouri, 2018; Wilson and Lewandowska-Tomaszczyk, 2019). Finally, one might argue that Impostors are more affected by discounting their positive potential (i.e., competence) than by negative emotions such as anxiety (e.g., Yamini and Mandanizadeh, 2011; Brauer and Proyer, 2022).

[Table 1 note: above the diagonal, zero-order correlations; below the diagonal, second-order correlations controlled for age and gender. *p < 0.05. **p < 0.01. ***p < 0.001, two-tailed. Harman's one-factor test showed that 25.9% of the variance in the study variables was explained by a single factor, below the cut-off (50%) that would suggest the existence of robust common method variance.]
In conclusion, the IP showed a minor effect size in relation to WTC, which is fully mediated by Impostors' inclinations to experience low competence when it comes to their ability to speak a foreign language. Our findings have several implications. While Impostors' dismissal of their abilities has been evaluated with regard to indicators of generalized abilities in terms of intelligence or grades (e.g., Cozzarelli and Major, 1990; Brauer and Proyer, 2022), our findings highlight that the IP and its tendency to discount success, performance, and competencies also affect domains such as WTC in the context of learning a foreign language. We argue that identifying the discounting of competencies as a robust mediator contributes to understanding the previously documented consequences of the IP, as well as providing an avenue for future research and the development of training programs and interventions to reduce the IP. Considering that there is increasing evidence showing that the dismissal of abilities is a core criterion of the IP, future studies could use this knowledge to examine the efficacy of training programs. For example, Proudfoot et al. (2009) examined an intervention program among workers that increased "self-serving" attributional styles and showed positive effects on outcomes such as well-being and job satisfaction. An intervention aimed at training to internalize positive performance feedback from teachers and peers, as well as formal feedback (e.g., grades), might help to reduce the IP. Interventions could also increase Impostors' self-perceptions of competence, academic self-esteem, and self-efficacy to a more veridical level, and support them in achieving goals such as learning a foreign language, irrespective of personal or professional motivations. Considering the role of the IP and its mental health correlates, we would expect such training to help alleviate the effects on outcomes such as anxiety (including its fine-grained components such as communication and test anxiety), fear of negative evaluation, and depressiveness (e.g., Chrisman et al., 1995; Yamini and Mandanizadeh, 2011; Brauer and Wolf, 2016; Bravata et al., 2020).
Limitations and future directions
Our findings must be interpreted in light of several limitations. Although there is robust evidence that the IP longitudinally predicts perceptions of competence (e.g., Cozzarelli and Major, 1990; Brauer and Proyer, 2022) and that the IP precedes WTC from a theoretical perspective, our findings must be interpreted with caution, because longitudinal research is needed to examine the causal pathways between the study variables in the narrow sense, especially for the indirect effect identified in our mediation model. Secondly, we only collected data from Iranian L2 learners as part of TEFL, and replications in other countries and with alternative L2 target languages are important to generalize our findings. Thirdly, our findings are based on self-reports and should be extended by supplementing self-reports of WTC with L2 abilities assessed through standardized tests and teacher evaluations to reduce IP-typical biases and common method variance (Campbell and Fiske, 1959). Fourth, our sample is not representative, as it comprises comparatively young participants of high educational status from Iran, which limits the generalizability of our findings. Finally, we collected the data during the COVID-19 pandemic, which has affected teaching and learning (e.g., through illness or loneliness during isolation).
Future research could extend our findings in multiple ways. Although WTC is a robust predictor of L2 use, we have not assessed external indicators of L2 use. For example, it would be interesting to examine the associations between the IP and objective indicators of L2 acquisition such as test and exam data, and to incorporate teacher ratings of students' L2 proficiency. While it is well supported that WTC robustly relates to objective indicators such as tests or teacher ratings (e.g., Barabadi et al., 2021), using different sources of information would also allow us to examine the discrepancies between Impostors' self-perceived competencies and external evaluations of their abilities. For example, initial research testing Impostors' self-evaluations of creativity against their results in a situational judgment test of creative styles has shown that the IP goes along with discrepancies between self-evaluations and test data, indicating that Impostors underestimate their creative abilities (Proyer and Brauer, 2020). Similarly, we expect that Impostors would systematically underestimate their L2 abilities in comparison to an external criterion (e.g., teacher evaluations or language-test scores). Further, future studies should extend the nomological net of the IP in the domain of L2 learning. One could argue that the IP would negatively correlate with other concepts relating to WTC such as communication confidence (Lee and Drajati, 2019; Lee and Hsieh, 2019) and motivation (Khajavy et al., 2016; Lee and Drajati, 2019). In this regard, it is likely that the IP would positively correlate with emotions that negatively affect learners' WTC, like generalized anxiety and boredom (e.g., Pawlak et al., 2016), and negatively with emotions that positively influence WTC in L2 such as enjoyment (Teimouri, 2018; Dewaele and Dewaele, 2020; Alrabai, 2022). This line of research could help in assessing how the IP affects different aspects of language learning and in building an overarching theoretical framework that supports guiding research on the IP in the educational context.
Data availability statement
The datasets presented in this study can be found in the Open Science Framework under https://osf.io/kyg5c/.
Ethics statement
Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. The patients/participants provided their written informed consent to participate in this study.
Author contributions
KB and EB: conceptualization. EB, EA, FA, and MS: data collection. KB and RS: formal analysis. KB, EB, and RS: roles/writing - original draft. KB, EB, EA, FA, MS, RS, and LV: writing - review and editing. All authors contributed to the article and approved the submitted version.
"year": 2022,
"sha1": "57598765910ea7ae0c386989a257d004192ad37c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "57598765910ea7ae0c386989a257d004192ad37c",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Postoperative Adjuvant Hepatic Arterial Infusion Chemotherapy With FOLFOX in Hepatocellular Carcinoma With Microvascular Invasion: A Multicenter, Phase III, Randomized Study
PURPOSE To report the efficacy and safety of postoperative adjuvant hepatic arterial infusion chemotherapy (HAIC) with 5-fluorouracil and oxaliplatin (FOLFOX) in hepatocellular carcinoma (HCC) patients with microvascular invasion (MVI). PATIENTS AND METHODS In this randomized, open-label, multicenter trial, histologically confirmed HCC patients with MVI were randomly assigned (1:1) to receive adjuvant FOLFOX-HAIC (treatment group) or routine follow-up (control group). The primary end point was disease-free survival (DFS) by intention-to-treat (ITT) analysis, while secondary end points were overall survival, recurrence rate, and safety. RESULTS Between June 2016 and August 2021, a total of 315 patients (ITT population) at five centers were randomly assigned to the treatment group (n = 157) or the control group (n = 158). In the ITT population, the median DFS was 20.3 months (95% CI, 10.4 to 30.3) in the treatment group versus 10.0 months (95% CI, 6.8 to 13.2) in the control group (hazard ratio, 0.59; 95% CI, 0.43 to 0.81; P = .001). The overall survival rates at 1 year, 2 years, and 3 years were 93.8% (95% CI, 89.8 to 98.1), 86.4% (95% CI, 80.0 to 93.2), and 80.4% (95% CI, 71.9 to 89.9) for the treatment group and 92.0% (95% CI, 87.6 to 96.7), 86.0% (95% CI, 79.9 to 92.6), and 74.9% (95% CI, 65.5 to 85.7) for the control group (hazard ratio, 0.64; 95% CI, 0.36 to 1.14; P = .130), respectively. The recurrence rates were 40.1% (63/157) in the treatment group and 55.7% (88/158) in the control group. The majority of the adverse events were grade 0-1 (83.8%), with no treatment-related deaths in either group. CONCLUSION Postoperative adjuvant HAIC with FOLFOX significantly improved DFS, with acceptable toxicities, in HCC patients with MVI.
INTRODUCTION
Hepatocellular carcinoma (HCC) accounts for 90% of cases of primary liver cancer, of which 70% of patients are ineligible for curative treatments. 1,2 At present, surgical resection remains the mainstay of curative treatment. 3 However, the recurrence rate after surgical resection in patients with HCC can be 70%-80%. 2 The incidence of microvascular invasion (MVI) in HCC is about 30%-50%, and the expected 1- and 2-year disease-free survival (DFS) of MVI-positive patients is about 50%-60% and 30%-40%, respectively. [4][5][6] Besides, multiple retrospective studies substantiated MVI as a key risk factor for the early recurrence of HCC after surgical resection and a better predictor of DFS and overall survival (OS). [6][7][8] Despite the availability of various adjuvant therapies to reduce recurrence and prolong OS, there is no global consensus on the recommendation of adjuvant therapies for HCC after surgical resection. Moreover, the overall outcomes of these interventions are variable, and improving the prognosis of these patients remains a major challenge. 9 Although several studies substantiated that hepatic arterial infusion chemotherapy (HAIC) has a higher response rate than systemic chemotherapy, with longer OS and tolerable toxicity in patients with advanced HCC, only the Japanese guidelines recommend HAIC as a treatment option for advanced HCC. 1 In addition, studies of HAIC with the 5-fluorouracil and oxaliplatin (FOLFOX) regimen, either alone or in combination with sorafenib, evidenced an improvement in the prognosis of patients with intermediate and advanced HCC. [10][11][12] Although there was no direct comparison between HAIC and the current standard first-line treatment, such as the combination of atezolizumab and bevacizumab, given that the overall response rate of atezolizumab and bevacizumab in the IMbrave150 study was only 27.3%, 13 previous studies suggested that the response rate of HAIC in advanced HCC was significantly better. Although it was not possible to directly compare the results of different studies, these data still demonstrated the potential efficacy of HAIC. Recently, we reported our preliminary findings of a phase III, randomized controlled trial in which adjuvant HAIC after hepatectomy may be associated with survival benefits in HCC patients with MVI. 14 In this study, we report the updated efficacy and safety data with an extended follow-up.
Study Design and Participants
Details on the study design, inclusion criteria, and exclusion criteria were described previously in our preliminary report. 14 Briefly, a phase III, multicenter, prospective, open-label, randomized controlled clinical trial was conducted in China at the following five centers: Sun Yat-sen University Cancer Center (SYSUCC), Guangzhou, China; the First Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China; the First People's Hospital of Foshan, Foshan, China; Zhujiang Hospital of Southern Medical University, Guangzhou, China; and the First Affiliated Hospital of Jinan University, Guangzhou, China. The main inclusion criteria were: patients aged 18 years or older and younger than 75 years with histologically confirmed HCC with MVI; treatment-naive; Eastern Cooperative Oncology Group performance score of ≤ 2; absence of macrovascular invasion, distant metastasis, and intrahepatic or extrahepatic recurrence at radiological follow-up (4-6 weeks after surgery); and adequate hematologic, hepatic, and renal functions (details are in the study Protocol [online only] and the Data Supplement [online only]). Furthermore, patients with histologically proven positive resection margins (R1 resection); severe functional impairment of organs (heart, brain, lung, kidney, and liver); allergy to related drugs or intolerance to HAIC; previous or concomitant antitumor therapy; or a history of organ transplantation, neurologic or psychiatric diseases, human immunodeficiency virus infection, esophageal or gastric variceal bleeding, hepatic encephalopathy, or cardio-cerebrovascular events within 30 days of random assignment were excluded.
This clinical study complied with all local laws and regulations and was conducted in accordance with the ethical principles of the Declaration of Helsinki. Before the study, all patients provided their written informed consent. The study protocol was approved by the Institutional Review Board and Institutional Ethics Committee of SYSUCC (Institutional Review Board Approval No.: B2017-006-01). The study has been registered at ClinicalTrials.gov (identifier: NCT03192618). Furthermore, this study was reported as per the Consolidated Standards of Reporting Trials (CONSORT) reporting guidelines.
Trial Design and Treatment
Surgical resection procedures were described in previously reported studies. 15,16 All resection margins were negative. All patients had at least seven paraffin-embedded tissue blocks, with a mean of 7.2 (median, 8; range, 7-10) blocks per tumor available for pathologic examination. Slides were re-examined under a double-headed microscope to resolve discrepancies, and a consensus was reached. The presence of MVI was defined as a tumor within a vascular space lined by the endothelium that was visible only via microscopy. 4,5

CONTEXT

Key Objective: To our knowledge, no standard treatment has been proposed as the adjuvant therapy for hepatocellular carcinoma (HCC) patients with microvascular invasion, and our study is the first phase III trial to evaluate the value of hepatic arterial infusion of oxaliplatin, fluorouracil, and leucovorin (FOLFOX-HAIC) as the adjuvant therapy in this population.

Knowledge Generated: FOLFOX-HAIC significantly improved the disease-free survival (20.3 v 10.0 months, P = .001) compared with routine follow-up in HCC patients with microvascular invasion. There was no significant difference in the incidence of operation-related adverse events between the two groups (P = .597).

Relevance (E.M. O'Reilly): These data are intriguing and provide ongoing support for the continued investigation of hepatic artery infusional therapy in patients with HCC.

After surgery (4-6 weeks), all patients were randomly assigned to receive either one to two cycles of adjuvant HAIC (treatment group) or routine follow-up without any adjuvant treatment (control group) in a 1:1 ratio by using a simple random assignment method. Random assignment was performed using a computer-generated random assignment sequence at the Clinical Trial Center of SYSUCC. Details of the random allocations were provided in sequentially numbered, opaque, sealed envelopes prepared by a statistician (Li Jibin), who participated in the statistical analysis and data review. The random assignment and allocation concealment were conducted according to practical guidance. 17

The HAIC procedure was performed as per previously reported studies. 11,12,14,18 After successful percutaneous femoral artery puncture and catheterization, superior mesenteric arteriography and hepatic arteriography were performed. After confirming that the patients met the inclusion criteria according to the results of arteriography, the hepatic artery was intubated to the predetermined position, and patients with the indwelling catheter were shifted to the ward. No implanted port system was used. The catheter was connected to the injection pump in the ward, and the following chemotherapeutic agents were continuously pumped: oxaliplatin, 85 mg/m², from 0 to 3 hours on day 1; leucovorin, 400 mg/m², from 3 to 4.5 hours on day 1; fluorouracil, 400 mg/m², from 4.5 to 6.5 hours on day 1; and fluorouracil, 2,400 mg/m², over 46 hours from days 1 to 3. The patient remained bedridden during chemotherapy. When chemotherapy ended, the catheter was removed, and the patient was discharged after complete hemostasis at the puncture site. The time interval between two cycles of HAIC was set at 4-5 weeks. In the control group, patients with recurrence confirmed by imaging received hepatic arteriography and subsequent transarterial chemoembolization (TACE).
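For illustration, the per-cycle doses implied by this regimen can be tabulated for a patient of given body surface area; the BSA value below is hypothetical:

```python
REGIMEN = [
    # (drug, dose [mg/m^2], infusion window)
    ("oxaliplatin",  85,   "hours 0-3, day 1"),
    ("leucovorin",   400,  "hours 3-4.5, day 1"),
    ("fluorouracil", 400,  "hours 4.5-6.5, day 1"),
    ("fluorouracil", 2400, "46-hour continuous infusion, days 1-3"),
]

def cycle_doses(bsa_m2: float):
    """Absolute per-cycle doses for a given body surface area."""
    return [(drug, dose * bsa_m2, window) for drug, dose, window in REGIMEN]

for drug, mg, window in cycle_doses(bsa_m2=1.7):
    print(f"{drug:>12}: {mg:6.0f} mg ({window})")
```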
End Points and Follow-Up
The primary end point was DFS, defined as the interval between random assignment and the first documented diagnosis of HCC recurrence or death from any cause, whichever occurred first. The secondary end point was OS, defined as the duration from the date of random assignment to death from any cause. Patients who had not experienced recurrence or death at the time of data analysis were censored as alive and event-free at the date of last follow-up. The recurrence rate (on the basis of angiographic and/or radiologic findings) was also assessed. Safety assessment included continuous monitoring of adverse events (AEs) throughout the trial, graded according to the National Cancer Institute Common Terminology Criteria for Adverse Events version 4.03. 19 More specifically, AEs were evaluated twice a day during hospitalization. During the home-stay period, patients could contact the investigators by phone if they had serious AEs; other AEs were documented at the time of scheduled review. All patients were followed up at intervals of 2-3 months, as per our previous studies. 16,20 At each follow-up visit, physical examination, blood tests (serum alpha-fetoprotein level and liver function), and enhanced abdominal computed tomography or magnetic resonance imaging scans were performed. Once suspicious recurrence/metastasis was detected, further examinations including hepatic arteriography or biopsy were conducted. Recurrence/metastasis was confirmed based on cytologic/histologic evidence or the noninvasive diagnostic criteria for HCC of the European Association for the Study of the Liver. Patients with recurrence in both groups received subsequent treatment according to the decision of the multidisciplinary team of each center.
Statistical Analysis
The sample size estimation was based on the assumptions that the median DFS of the control group would be 12.0 months and that adjuvant HAIC could improve the median DFS of the treatment group to 18.0 months. To detect this difference with a power of 90% and a two-sided α of .05, we estimated that the required number of events would be observed if 131 patients were enrolled in each group, with an enrollment period of 24 months and a follow-up period of 24 months.
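The order of magnitude of this calculation can be reproduced with Schoenfeld's approximation for the number of events required by a log-rank test, assuming exponential survival so that the hazard ratio equals the inverse ratio of medians. This is an illustrative sketch, not necessarily the trial's exact method.

import math
from scipy.stats import norm

alpha, power = 0.05, 0.90
median_control, median_treatment = 12.0, 18.0  # months, per the design assumptions

# Under exponential survival, HR = median_control / median_treatment.
hr = median_control / median_treatment  # ~0.667

z_a = norm.ppf(1 - alpha / 2)
z_b = norm.ppf(power)

# Schoenfeld's formula for required events in a 1:1 randomized log-rank comparison.
events = 4 * (z_a + z_b) ** 2 / math.log(hr) ** 2
print(f"required events ~ {math.ceil(events)}")  # ~256 events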
Clinical and pathologic differences in the distribution of baseline characteristics between the treatment and control groups were compared using Pearson's χ² test or Fisher's exact test (categorical variables). Normally distributed continuous variables were described as mean ± standard deviation, and non-normally distributed variables as median and range. Depending on data normality, Student's t test or the Mann-Whitney test was used to assess differences in continuous variables between the two groups.
The efficacy analyses were performed in the intention-to-treat (ITT) population, which included all randomly assigned patients, and in the per-protocol (PP) population, which included patients who completed two cycles of adjuvant HAIC. Safety analyses of AEs associated with HAIC were conducted among those who received at least one dose of the trial regimen. Cumulative survival probabilities were estimated using the Kaplan-Meier method, and group differences were compared using log-rank tests in the ITT population and in the PP population. We calculated hazard ratios (HRs) using the Cox proportional hazards model; the proportional hazards assumption was confirmed based on Schoenfeld residuals. 21 Exploratory subgroup analyses were conducted according to prognostic factors including age, tumor number, tumor diameter, tumor distribution, Milan criteria, alpha-fetoprotein, HBV-DNA, cirrhosis, and Edmondson-Steiner grade. The treatment effect in each subgroup was evaluated using an unadjusted Cox proportional hazards model, and interaction effects were evaluated by adding interaction terms to the Cox proportional hazards models.
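A minimal sketch of this survival workflow in Python, using the lifelines package rather than SPSS; the dataframe schema and values below are illustrative, not trial data.

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# One row per patient: time to recurrence/death or censoring, event indicator, arm.
df = pd.DataFrame({
    "dfs_months": [10.2, 20.3, 8.9, 19.3, 23.7, 5.9],
    "event":      [1, 0, 1, 1, 0, 1],
    "arm":        [0, 1, 0, 1, 1, 0],  # 1 = adjuvant HAIC, 0 = control
})

# Kaplan-Meier curves per arm.
for arm, label in [(1, "HAIC"), (0, "control")]:
    sub = df[df["arm"] == arm]
    KaplanMeierFitter().fit(sub["dfs_months"], sub["event"], label=label)

# Log-rank comparison between arms.
res = logrank_test(
    df.loc[df.arm == 1, "dfs_months"], df.loc[df.arm == 0, "dfs_months"],
    event_observed_A=df.loc[df.arm == 1, "event"],
    event_observed_B=df.loc[df.arm == 0, "event"],
)

# Cox model; check_assumptions tests proportional hazards via scaled Schoenfeld residuals.
cph = CoxPHFitter().fit(df, duration_col="dfs_months", event_col="event")
cph.check_assumptions(df)
print(res.p_value, cph.hazard_ratios_)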
All the analyses were performed using the SPSS software, version 24.0 (SPSS Inc, Chicago, IL). A two-tailed P < .05 was considered statistically significant.
Patient Characteristics and Treatment Administration
Between June 2016 and August 2021, a total of 351 patients were screened, and 315 patients were randomly assigned to receive adjuvant FOLFOX-HAIC (treatment group, n = 157) or follow-up without any adjuvant treatment (control group, n = 158) and were included in the ITT population. Among them, 14 patients from the treatment group and 15 patients from the control group were excluded from the PP population. The reasons for exclusion and the patient disposition process are summarized in Figure 1. Overall, 148 patients in the treatment group underwent at least one cycle of HAIC, and these patients were included in the safety analyses. The baseline demographics and clinical characteristics were comparable between the two groups (Tables 1 and 2).
Finally, 24 patients (15.3%) received only one cycle of HAIC in the ITT population. In the PP population, 124 patients (86.7%) completed the planned two cycles of HAIC and 18 patients (12.6%) received only one cycle. Among these 18 patients, 14 (9.8%) refused the second cycle of HAIC for personal reasons, and four (2.8%) were switched to TACE because intrahepatic recurrence was found during the hepatic arteriography of the second cycle. In addition, one patient (0.7%) did not undergo HAIC as planned but was switched to TACE because intrahepatic recurrence was found during the hepatic arteriography of the first cycle.
Patients who were diagnosed with recurrent HCC through hepatic arteriography were included in the survival analysis, because hepatic arteriography was not performed in the control group. Moreover, because tumor recurrence was confirmed in the aforementioned five patients, they were treated with TACE with epirubicin, lobaplatin, and lipiodol instead of HAIC.
Efficacy Analysis
The study was censored on September 30, 2021. The median follow-up period was 23.7 months. The median DFS was 20.3 months in the treatment group versus 10.0 months in the control group in the ITT population (P = .001; Fig 2A), whereas it was 19.3 months (95% CI, 12.2 to 26.4) and 8.9 months (95% CI, 5.9 to 11.8), respectively, in the PP population (HR, 0.52; 95% CI, 0.38 to 0.72; P < .001; Fig 2C). OS did not differ significantly between the two groups in either population (Figs 2B and 2D). The Cox proportional hazards model was examined as applicable (on the basis of Schoenfeld residuals, P = .19 for DFS and P = .54 for OS analyses in the ITT population; P = .23 for DFS and P = .45 for OS analyses in the PP population). The results of subgroup analyses were consistent with those in the whole enrolled population, indicating that almost all subgroups derived a DFS benefit from adjuvant HAIC in the ITT population (P < .001; Fig 3A) and in the PP population (P < .001; Fig 3C). Patients without liver cirrhosis also benefited from adjuvant HAIC in terms of OS, both in the ITT population (P = .038; Fig 3B) and in the PP population (P = .043; Fig 3D).
Among the patients who had recurrence, 48 (76.2%) patients in the treatment group and 59 (67.0%) patients in the control group underwent subsequent antitumor therapies (Data Supplement). The patterns of recurrence were similar between the two groups (Data Supplement).
Safety Analysis
The overall incidence of operation-related AEs is summarized in the Data Supplement. There was no significant difference in the incidence of operation-related AEs between the two groups (P = .597). There were no deaths due to HAIC or surgery.
DISCUSSION
Intrahepatic recurrence of HCC after hepatectomy is frequent, owing to intrahepatic dissemination or micrometastases of primary cancer cells, 22 and MVI is recognized as a risk factor for recurrence. 16,23 Considering the high risk of recurrence, local adjuvant therapy may offer better survival benefits than systemic adjuvant therapy in patients at risk of HCC recurrence. Although adjuvant TACE has shown survival benefits in HCC patients with MVI after curative resection, complications caused by embolization limit its applicability. 16,24 Moreover, there is no universally accepted adjuvant therapy for HCC patients with MVI. At this juncture, the results from our current study substantiate that adjuvant HAIC with FOLFOX provides meaningful survival benefits. In addition, our study suggests that FOLFOX-HAIC has an acceptable safety profile and is well tolerated.
The results of the EACH study confirmed the value of systemic chemotherapy with the FOLFOX regimen in the treatment of advanced HCC. 25 Recently, several retrospective studies and a few randomized trials substantiated the survival benefits of FOLFOX-HAIC, either alone or in combination with sorafenib, in patients with advanced HCC with and without MVI. 10,11 Earlier, Lyu et al 10 conducted a retrospective study comparing survival outcomes in patients with advanced HCC undergoing FOLFOX-HAIC with and without sorafenib and reported that FOLFOX-HAIC improved survival compared with sorafenib in a large number of patients. Furthermore, in the present study, a trend toward superior OS was observed in the treatment group compared with the control group. However, the 1-, 2-, and 3-year rates of OS and the overall regression comparisons showed no significant difference. We believe that a longer follow-up period might reveal the benefits of adjuvant HAIC in terms of OS.
In this study, most of the DFS benefit occurred within the first 2 years. The main reason for early recurrence of HCC after surgery is the existence of small metastases in the residual liver, which is the high-risk consequence of MVI in HCC. 26,27 The continuous infusion of FOLFOX drugs has the potential to eliminate micrometastases in the liver parenchyma and blood circulation. Therefore, HAIC mainly reduces early recurrence, which is in accordance with the treatment principle and the investigators' expectations.
The role of chemotherapy drugs in locoregional therapy is undoubtedly important. A study has shown that the chemotherapeutic drugs, rather than embolization, played a dominant role in TACE treatment. 28 The continuous infusion of chemotherapeutic drugs can ensure an adequate local drug concentration in the liver, so that the efficacy is not inferior to that of TACE; at the same time, it avoids the complications of embolization and reduces the damage to liver function. Besides, we performed arterial catheterization in every cycle rather than using an implanted port system, to avoid port-related complications such as local infection, thrombosis, and toxicity caused by leakage of chemotherapeutic drugs. 29 Although this study is a complete multicenter, prospective, randomized controlled study, it has certain limitations. First, MVI grade was not used as a randomized stratification factor in the initial design of the study, and most centers have begun to grade MVI (M1 or M2) only in the last 1 to 2 years; therefore, it was not possible to evaluate the MVI grade for the early enrolled cases. As a result, MVI grade, which affects prognosis, was not included in the analysis of this study. However, we will add this factor to the design and analysis of subsequent clinical and basic research. Second, although the incidence of grade 3 or higher AEs was quite low, the proportion of patients who refused to complete two cycles of HAIC for various reasons was relatively high (9.8%, 14/143), suggesting that adjuvant HAIC still had a certain impact on patients' quality of life. Third, since all patients enrolled in this study are Chinese, the value of adjuvant HAIC in HCC patients of different ethnic groups and hepatitis backgrounds needs further study. Finally, the current HAIC plan requires patients to stay in bed (>50 hours), which will indeed affect treatment compliance and further necessitates optimization of the trial protocol or chemotherapy regimen.
In conclusion, this study showed that postoperative adjuvant HAIC with FOLFOX significantly improved DFS, with acceptable toxicities, in HCC patients with MVI. | 2022-12-18T16:13:29.972Z | 2022-12-16T00:00:00.000 | {
"year": 2022,
"sha1": "01b42d34cdec4e87cbfb31456c6ca8724ddd3ccf",
"oa_license": "CCBYNCND",
"oa_url": "https://ascopubs.org/doi/pdfdirect/10.1200/JCO.22.01142?role=tab",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "84e9bda40afed75d0d4f886d2adfe583c7d0cdfa",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
56282494 | pes2o/s2orc | v3-fos-license | Predicting continuous form of soil-water characteristics curve from limited particle size distribution data
Detailed information derived from a soil moisture characteristics curve (SMC) helps in water flow and solute transport management. Hence, prediction of the SMC from soil particle size distribution (PSD), which is easy to measure, would be convenient. In this study, we combine an integrated robust PSD-based model and a Van Genuchten SMC model to predict a continuous form of SMC using sand, silt and clay percentages for 50 soils selected from the UNSODA database. We compare the performance of the proposed approach with some previous prediction models. The results indicated that the SMC can be predicted and modelled properly by using sand, silt, clay and bulk density data. The model’s bias was attributed to the high fine particle and organic carbon (OC) content. We concluded that independence of the proposed method from the database and any empirical coefficients make predictions more reliable and applicable for large-scale water and solute transport management.
INTRODUCTION
The soil's unsaturated zone forms a pivotal part of the hydrological cycle as it connects surface water to groundwater through the porous medium of soil. Therefore, a comprehensive evaluation of unsaturated soil proves useful for studying water flow and solute transport (Harter and Hopmans 2004). One of the most important challenges in soil physics is the estimation of the hydraulic conductivity curve (HCC) and the soil moisture characteristic curve (SMC) (Futter et al., 2007; Balland et al., 2008). The SMC, which indicates the functional relationship between soil water content and matric potential, is used to model solute transport and water flow in the vadose zone (Hunt et al., 2013). However, due to temporal and spatial variability, direct measurement of the hydraulic properties is labour-intensive, costly and inaccurate (Schaap and Leij 1998; Christiaens and Feyen 2001; Islam et al., 2006; Abbasi et al., 2011). Therefore, considerable efforts have been made to estimate the SMC indirectly (Antinoro et al., 2014).
Easily available soil properties have been used extensively as a basis for alternative methods to estimate the HCC and SMC. In recent years, researchers have paid considerable attention to predicting the SMC in terms of pore size distribution (PoSD) using basic soil physical properties (Nimmo et al., 2007; Mohammadi and Meskini-Vishkaee, 2013). These approaches, which are dubbed transfer functions, can be classified into three groups:

• Statistical techniques (pedo-transfer functions) or neural network models determine the correlation of basic soil properties (for instance sand, silt and clay percentages and organic matter content) to SMC points or parameters (Dashtaki et al., 2010; Vereecken et al., 2010; Abbasi et al., 2011). Available and reliable soil databases provide a variety of inputs for statistical models and, therefore, these models have been widely used (Hwang and Choi, 2006). For instance, ROSETTA software uses neural network analysis to estimate soil hydraulic parameters with hierarchical pedo-transfer functions. However, some researchers have shown that ROSETTA software did not estimate the Van Genuchten model (VG model) parameters properly (Yang and You, 2013).

• Physico-empirical models express the relation of particle size distribution (PSD) with PoSD. Arya and Paris (1981) made the first attempt to develop a physico-empirical model, which connects the soil moisture content and void volume. They estimated the pore diameter from the particle size (AP model).
Therefore, the objectives of this study were (i) to adjust the MV-VG model for the prediction of the SMC using only sand, silt and clay percentages; (ii) to compare the performance of the proposed approach with the results from the MV-VG model using the UNSODA (Unsaturated Soil Database) database; and (iii) to evaluate the performance of the adjusted MV-VG model against the ROSETTA software prediction results.
Scaling approach
Empirical parameters of the soil water characteristic curve and database-dependent models are error sources in models describing soil hydraulic functions. Elimination of such systematic error using scaling approaches greatly improves the SMC accuracy. Meskini-Vishkaee et al. (2014) proposed a scaled SMC model based on the VG model, assuming that the residual water content equals zero (θr = 0):

θ(h) = θs [1 + (αh)^n*]^(−m)    (1)

where θ (L³·L⁻³) is the soil moisture content, θs (L³·L⁻³) is the saturated soil moisture content, h (L) is the matric suction, m and α are fitting coefficients, and the parameter n* is the scaled pore size distribution index.
The scaled index n* is defined in terms of a fitting coefficient n and a scaling factor λ (Eqs 2 and 3). The parameter ξmax (−) equals 1.41432, and ξ (−) is a coefficient depending on the arrangement state of the soil particles (Eq. 4), in which e (−) is the void ratio given by

e = (ρs − ρb)/ρb    (5)

where ρb (M·L⁻³) and ρs (M·L⁻³) are the bulk and particle densities, respectively.
Developed soil water characteristic curve
Mohammadi and Vanclooster (2011) presented a conceptual robust model (MV model) to predict the soil matric suction, hi, from the particle size by assuming the pore space geometry (Eq. 6), where hi (L) is the matric suction of the i-th fraction size and ri (L) is the radius of the i-th fraction size. The simplifying assumptions of the MV model, which ignore the considerable effects of clay surface forces, lead to under-prediction in the dry range of the SMC; nevertheless, the MV model predicts the water characteristic curve accurately because it is independent of any database and uses no empirical parameters. Following the Arya and Paris model (AP model), the mathematical relation between the moisture content (θi) and hi is given by Eqs 7 and 8, where wi (−) is the particle mass fraction of the i-th fraction. Eq. (1) is the scaled form of the Van Genuchten SMC model when θr = 0.
Combining Eqs 6 to 8 gives Eq. 9. Eq. 9 is fitted to the PSD data to estimate the VG model parameters (m, n and α), which are then used as the input parameters in Eq. 1 as an SMC predictor model. Since Eq. 9 includes three variable fitting parameters, it should be used to fit the full range of PSD data containing at least 4 measured points. For limited-availability data points, Eq. 9 can be rewritten as Eq. 10 under the assumption m = 1 − 1/n.
Eq. 10 can be used to fit PSD data when only the sand, silt and clay percentages are known. In summary, fitting Eq. 10 or Eq. 9 allows the estimation of the SMC parameters (n, m, α). Given that ρb is known, the scaling factor and subsequently n* can be calculated, and the continuous form of the SMC is then predicted using Eq. 1; a sketch of this fitting procedure follows below.
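Although Eqs 9 and 10 themselves are not reproduced above, the fitting step can be sketched in Python: we fit the scaled VG form of Eq. 1 (with m = 1 − 1/n, as in the limited-PSD case) to retention points derived from a PSD, using SciPy's trust-region-reflective solver, which is analogous to the MATLAB trust-region algorithm used by the authors. The data arrays and initial guesses here are illustrative, not values from the paper.

import numpy as np
from scipy.optimize import curve_fit

# Illustrative retention points derived from a PSD (matric suction h in cm,
# water content theta in cm^3/cm^3); real values would come from Eqs 6-8.
h = np.array([10.0, 50.0, 100.0, 330.0, 1000.0, 5000.0, 15000.0])
theta = np.array([0.42, 0.38, 0.33, 0.26, 0.18, 0.11, 0.07])
theta_s = 0.445  # saturated water content (the average reported in Table 2)

def vg_scaled(h, alpha, n):
    # Scaled van Genuchten curve with theta_r = 0 and m = 1 - 1/n.
    m = 1.0 - 1.0 / n
    return theta_s * (1.0 + (alpha * h) ** n) ** (-m)

# method='trf' is SciPy's trust-region-reflective least-squares algorithm.
popt, pcov = curve_fit(vg_scaled, h, theta, p0=[0.01, 1.3],
                       bounds=([1e-4, 1.001], [1.0, 10.0]), method='trf')
alpha_fit, n_fit = popt
print(f"alpha = {alpha_fit:.4f} 1/cm, n = {n_fit:.3f}")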
MATERIALS AND METHODS
Fifty soil samples from the UNSODA database (Nemes et al., 2001) having PSD data with at least 4 fractions were selected to estimate the SMC. The selected codes are presented in Table 1. The UNSODA database contains unsaturated hydraulic characteristics of 790 soil samples from all over the world, especially Europe and America, and these data are used to develop estimates for water flow and solute transport management.
Equation 9 was used to fit the full range of PSD data with at least 4 measured points (Method 1: full PSD method), and Eq. 10 was used to fit the PSD data by assuming that only the sand, silt and clay percentages are known (Method 2: limited PSD method). To evaluate the unknown coefficients of Eq. 9 and Eq. 10, the trust region algorithm of Matlab 8.3 software (The Mathworks Inc., Natick, MA, USA) was used.
The parameters e, ξ and λ were easily calculated using the available bulk and particle densities. In most UNSODA soil samples, θs data are available. For those samples with no θs data, we used the suggestion of Chan and Govindaraju (2004), who assumed the saturation moisture content to be equal to the moisture content corresponding to the lowest matric potential.
The ROSETTA software was also used to estimate the SMC parameters of the VG model using the SSCBD model option (sand, silt and clay percentages and bulk density as model predictors).
Statistical analysis
To calculate the accuracy of each prediction, the root mean square error (RMSE) between the measured and predicted moisture content was computed:

RMSE = sqrt[ (1/N) Σi=1..N (θi(p) − θi(m))² ]    (11)

where N is the number of measured moisture contents, and θi(p) and θi(m) are the predicted and measured moisture content at the i-th matric suction, respectively. The coefficient of determination (R²) is also presented to evaluate the correlation between the measured and predicted moisture content. Relative improvement (RI) was calculated to compare the prediction methods (McBratney, 2002):

RI = (RMSEf − RMSEs)/RMSEf    (12)

where RMSEf and RMSEs are the RMSE of Method 1 (as the reference model) and of Method 2 or ROSETTA (as the comparative approaches), respectively. A positive value of the dimensionless RI indicates that the accuracy of the predicted moisture contents improves by using Method 2 or the ROSETTA approach.
To compare the measured and predicted moisture content for the dataset of 50 UNSODA soils, the mean absolute error (MAE) and mean bias error (MBE) were used, defined as

MAE = (1/N) Σi=1..N |Pi − Mi|    (13)

MBE = (1/N) Σi=1..N (Pi − Mi)    (14)

where Mi and Pi are the measured and predicted values of moisture content, respectively, and N is again the number of measured and predicted points. MAE is a statistical criterion showing the average error magnitude, and MBE shows the average bias of each method; a positive MBE value indicates over-prediction. Methods 1 and 2 are based on the MV model. This model assumes that all soil particles are spherical and that soil structure can only influence soil bulk density. The effects of soil organic matter content, particle surface energy, and lens and film water volume are not represented by this model (Mohammadi and Vanclooster, 2011; Mohammadi and Meskini-Vishkaee, 2013). Therefore, the under-prediction by Method 1 and Method 2 can be partially attributed to the assumptions of the MV model. For all sample soils represented in Fig. 1, Method 1 and Method 2 provide consistent predictions, especially for the wet range of the SMC.
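The four criteria above (Eqs 11-14) are straightforward to compute; a minimal sketch in Python, assuming NumPy arrays of matched measured and predicted moisture contents:

import numpy as np

def rmse(measured, predicted):
    # Root mean square error (Eq. 11).
    return np.sqrt(np.mean((np.asarray(predicted) - np.asarray(measured)) ** 2))

def relative_improvement(rmse_reference, rmse_comparative):
    # Positive RI: the comparative method improves on the reference (Eq. 12).
    return (rmse_reference - rmse_comparative) / rmse_reference

def mae(measured, predicted):
    # Mean absolute error (Eq. 13).
    return np.mean(np.abs(np.asarray(predicted) - np.asarray(measured)))

def mbe(measured, predicted):
    # Mean bias error (Eq. 14); positive values indicate over-prediction.
    return np.mean(np.asarray(predicted) - np.asarray(measured))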
RESULTS AND DISCUSSION
The results of fitting Eq. 9 (Method 1: full PSD method) and Eq. 10 (Method 2: limited PSD method) are presented in Table 2.
Table 2 shows that the average value of θs is 0.445 for all selected soils, varying from 0.324 for sandy loam soils to 0.557 for silty clay loam soils. For Methods 1 and 2, the average values of n* were 1.374 and 1.229, respectively. Regarding Eqs 3 to 5, the λ value is computed using bulk and particle densities; the average λ value (0.756) is the same for Method 1 and Method 2. For Method 1 and Method 2, the geometric average values of α are 0.0101 and 0.0129, respectively. The prediction results of Method 1, Method 2 and the ROSETTA software are summarized in Table 3 by comparing the statistical criteria, including RMSE, R² and RI. The average RMSEs of Method 1, Method 2 and the ROSETTA software are 0.048 (varying from 0.023 for silty clay soils to 0.080 for sandy soils), 0.034 (varying from 0.009 for sandy soils to 0.064 for loam soils) and 0.069 (varying from 0.027 for sandy loam soils to 0.139 for silty clay loam soils), respectively. In terms of RMSE, Method 1 and Method 2 predicted consistently better than the ROSETTA software. The RMSEs derived from Method 1 and Method 2 are also smaller than the values of 0.060 and 0.2071 obtained with the scaling approach of Meskini-Vishkaee et al. (2014) and the MV model of Mohammadi and Vanclooster (2011), respectively. The RMSE of the ROSETTA software in the current study (0.069) is approximately the same as that reported for ROSETTA by Meskini-Vishkaee et al. (2014) (0.0745). Comparison of the performances of Method 1 and Method 2 with that of the ROSETTA software reveals that ROSETTA is not capable of predicting the SMC accurately in fine-textured soils because of the fine particles. The average value of R² for all selected UNSODA soil textures is 0.958, 0.975 and 0.910 for Method 1, Method 2 and the ROSETTA software, respectively (Table 3). In terms of R² values, Method 1 and Method 2 performed consistently better than the ROSETTA software. The small difference between the R² values of Method 1 and Method 2 is not statistically significant.
Overall comparison of developed model
Comparison of Method 1 and Method 2 according to the average RI value (0.126) indicated that, in general, the accuracy of the MV-VG model does not increase with an increased number of measured points on the PSD curve. However, the average RI value per texture class varies from −0.425 for clay soils to 0.637 for loamy sand soils. The RI values comparing Method 1 and Method 2 for clay, loam, silty clay loam, and silty clay textured soils were slightly negative, revealing that a low number of model inputs reduces the SMC accuracy in fine to moderately textured soils (Table 3). The average RI value for the comparison of ROSETTA and Method 2 is 0.342 (varying from −0.230 for loam soils to 0.788 for silty loam soils), which indicates that, although ROSETTA and the limited PSD method require the same input data, Method 2 (limited PSD method) predicts the SMC more accurately.
In many pedological studies, sand, silt and clay percentages are measured routinely, and this information is usually available in most soil survey reports. Method 2 can therefore be used to predict the SMC easily. Moreover, Method 2 does not need any empirical coefficient or database-dependent parameter. This advantage allows for prediction of the soil hydraulic characteristics regardless of spatio-temporal variations; thus the SMC can be estimated for large-scale studies.
Table 4 presents statistical criteria comparing the measured vs. predicted moisture content using the mean absolute error, mean bias error and R² of linear regression. In terms of MAE, Method 2 predicts the SMC more accurately. As can be seen in Table 3, this is also evident from comparing the average values of RMSE and R² obtained for Method 2 (0.034, 0.975), Method 1 (0.048, 0.958) and the ROSETTA software (0.069, 0.910). The MBE values show that Method 2 over-predicts the SMC, while Method 1 and ROSETTA under-predict.
Comparison of measured and predicted soil moisture content of the full dataset for Method 1, Method 2 and the ROSETTA software is shown in Figs 2(a)-2(c), respectively. In general, the 1:1 line marks equal measured and predicted moisture content and reveals the bias of the predictions across the dataset. Linear regression (dashed red line) is used to evaluate the best-fitting line through the predicted and measured moisture contents. The slope values of the linear regression between measured and predicted moisture content were 0.94 (Method 1), 0.93 (Method 2) and 0.77 (ROSETTA software). The R² values of the linear regressions were 0.917, 0.950 and 0.827 for Method 1, Method 2 and the ROSETTA software, respectively. According to the slope values, Figs 2(a)-2(c) and the MBE criterion, Method 2 slightly over-predicts moisture contents, while the ROSETTA software and Method 1 under-predict. Comparison of the proposed methods revealed that Method 1 and Method 2 generally predict the SMC more accurately than the ROSETTA software, according to the statistical criteria including RMSE, RI, MBE and R².
CONCLUSIONS
In this study, we adopted the MV-VG model for the prediction of the SMC using only sand, silt and clay percentages, and we evaluated the performance of this approach against experimental data and the results of the ROSETTA software. Results showed that the continuous form of the SMC can be predicted accurately assuming that sand, silt and clay percentages are the only known properties of the soil. Full PSD data are not usually available, whereas sand, silt and clay percentages are measured conventionally in all soil analyses. In general, the advantages of the proposed method for proper SMC prediction are: (i) the method does not depend on a database or any empirical parameter; (ii) the proposed approach predicts continuous forms of the SMC for all tested soils; and (iii) in comparison with the well-known ROSETTA software, the method is capable of predicting the SMC more accurately, especially in the dry range of the SMC. Since sand, silt and clay percentages are readily available soil properties whose spatio-temporal variability is approximately constant, the proposed method can be used as an alternative for predicting the SMC in large-scale studies.
Figure 1 (a-h). Examples of measured and predicted water retention curves for Method 1 (Eq. 9), Method 2 (Eq. 10) and ROSETTA software for: (a) clay soil, (b) loam soil, (c) sandy soil, (d) sandy clay loam soil, (e) sandy loam soil, (f) silty loam soil, (g) silty clay soil, (h) silty clay loam soil.
Table 3. Comparison of average RMSE, R² and RI values of Method 1 (Eq. 9), Method 2 (Eq. 10) and ROSETTA software in predicting the SMC. The standard deviations are presented in parentheses.
Table 4. Statistical comparison of measured vs. predicted moisture content (optimal values: MAE = 0, MBE = 0, R² = 1). (a) R² is the determination coefficient of the linear regression in Figs 2(a)-2(c).
| 2018-12-18T00:15:05.592Z | 2018-07-31T00:00:00.000 | {
"year": 2018,
"sha1": "892327bc3f14aede5a56e65f4fd5772b327fbe33",
"oa_license": "CCBY",
"oa_url": "https://watersa.net/article/download/6634/7879",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "892327bc3f14aede5a56e65f4fd5772b327fbe33",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
202185201 | pes2o/s2orc | v3-fos-license | Using computer climate generator versus conventional lapse rate to model skyscrapers
The values of temperature and humidity at the top of skyscrapers are different from those near the ground. Thus, different mechanical systems, air flow rates, and other design parameters are required for such tall buildings. Conventionally, air temperature is assumed to decrease linearly with increasing altitude at a lapse rate of −6.5 °C/km. Using a computer-based climate generator for Dubai at an altitude of 600 m, this study examines how conditions in a hot and humid region differ from the conventional lapse rate. We address three issues: whether the conventional lapse rate is always a good indicator of the climate profile, whether building design conditions change with altitude, and by how much the predicted energy consumption changes with altitude. Our first conclusion is that the conventional lapse rate may not always be a good indicator of the climate profile. The lapse rate is influenced by humidity. When humidity is low, the lapse rate tends to be higher and can reach up to −9.8 °C/km under adiabatic conditions. Conversely, when humidity is high, condensation occurs as temperature drops with increasing elevation, releasing heat of vaporization that warms the air and reduces the lapse rate. Under certain conditions, temperature inversion can occur, and the temperature at 600 m altitude may be higher than the temperature near the ground. Our second conclusion is that the linear lapse rate is not always a good predictor of design conditions. During the summer, there is a tendency to underestimate the lapse rate due to low relative humidity. In contrast, during winter, there is a tendency to overestimate the lapse rate due to low temperatures and high relative humidity. Last but not least, the linear lapse rate is not always a good indicator of energy consumption. Based on simulations, we found that differences in the lapse rate and the air density influenced the energy consumed by the air conditioning system in an office building. Specifically, between altitudes of 11 and 600 m, the energy consumption differed by approximately 5%.
Introduction
The temperature and humidity values at the top of skyscrapers are different from those near the ground. Thus, different mechanical systems, air flow rates, and other design parameters are required for such tall buildings. However, there are no design parameters to meet these requirements, even within the ANSI/ASHRAE Standard 169-2013 (Climatic Data for Building Design Standards) [1]. Therefore, we must develop these design parameters because, globally, cities are becoming increasingly more centralized, taller, and more stratified. In general, it is well known that temperature reduces linearly as altitude increases according to the conventional lapse rate of −6.5 °C/km. On the other hand, according to a previous study, it is also known that the lapse rate can vary depending on the weather and other surrounding conditions [2]. Phillips et al. examined these conditions through calculations using a computer-based climate generator [3]. The data and results of these simulations, which can be obtained from RWDI (https://rwdi.com/en_ca/) in the EPW file format, would be useful for the design of high-rise buildings in the future. Our study examines how the conventional lapse rate in a hot and humid region differs from the insights provided by the latest computer-based climate generator. Using Dubai and an altitude of 600 m as our case study, we focus on three issues: whether the conventional lapse rate is always a good indicator of the climate profile, whether design conditions change with altitude, and by how much the predicted energy consumption changes with altitude.
Lapse rate
As it is an important background concept for this paper, this section describes lapse rate. Equation (1) expresses the well-known relationship between altitude and air temperature, in which the value of Γ, also known as the conventional lapse rate, is −6.5 °C/km [4] .
Γ = dT/dz    (1)

where Γ is the lapse rate [°C/km], T is the temperature [°C], and z is the altitude [km].

Figure 1 depicts the various lapse rates that define the change of air temperature with altitude [2]. As shown, the lapse rate is not a constant value. Generally, the lapse rate varies with humidity: when the air is humid, the magnitude of the lapse rate is reduced, and in drier air it increases. When the relative humidity (RH) is high, dew condensation occurs as the air temperature drops with increasing elevation. Because of this phase change, latent heat is released into the atmosphere, and this heat raises the air temperature; thus, when the RH is high, the lapse rate is small. When the air contains a lot of moisture and no heat transfer into or out of the surrounding air occurs, this is known as the wet adiabatic lapse rate; under these conditions, the rate of temperature decrease is −5.5 °C/km. Conversely, if the amount of moisture in the air is small enough that condensation is minimal, the magnitude of the lapse rate will be greater than 6.5 °C/km. When the air contains so little moisture that condensation does not occur at all, this is known as the dry adiabatic lapse rate; under these conditions, the rate of temperature decrease is −9.8 °C/km. In addition, Figure 1 also illustrates a possible inversion (Line 1), in which the air temperature is higher aloft than at the ground surface.

Figure 1. Diagram of the various lapse rates defining the change of atmospheric temperature with altitude [2].
Climate profile
This section explains the difference between the climate profile calculated using the conventional lapse rate (based on the ASHRAE ground-level weather file) and that calculated by the computerbased climate generator. Assuming a skyscraper in Dubai, we compared both calculations at an altitude of 600 m. Figure 2 shows the dry-bulb temperature profile at ground level in Dubai according to ANSI/ASHRAE Standard 169-2013 [5] WMO#412170 data. The dotted line in Figure 2 shows the temperature profile at 600 m using the constant conventional lapse rate of −6.5 °C/km. The shape of the corresponding ground-level temperature profile is identical to that of the ASHRAE data; however, the profile is shifted to the left. In this study, we used the values calculated by RWDI as the simulation values from the computerbased climate generator [3] . Figure 3 shows the dry-bulb temperature at 600 m altitude, as simulated by RWDI. This graph also shows a dotted line representing the temperature profile using the constant conventional lapse rate. However, in this case, the ground-level temperature profiles are not identical in shape. An important observation is that the summer lapse rate is significantly higher than that of the winter. Figure 4 compares the dry-bulb temperature and RH (ANSI/ASHRAE Standard 169-2013) for Dubai at ground level. Likewise, Figure 5 compares the dry-bulb temperature and RH (RWDI) at an altitude of 600 m. These charts plot the averages for each hour of each month. As shown, the dry-bulb temperature is almost the exact inverse of the RH. At ground level in summer, the dry-bulb temperature exceeds 40 °C and the RH is only approximately 30%. During this season, there is some amount of moisture in the air, but because the temperature is very high, the RH stays low. When the temperature drops and as the altitude rises, the RH increases, but not as high as to cause dew condensation to occur. Therefore, the temperature lapse rate is high. On the other hand, the winter temperatures are almost the same at both ground level and 600 m altitude because the design temperature in the winter is approximately 10 °C, and the RH is much higher than in the summer. When the temperature drops, the moisture in the atmosphere condenses and heat is released. Therefore, the lapse rate decreases. In addition, Figure 5 shows that, at an altitude of 600 m, the daytime and night-time outside temperatures are nearly equal throughout the year, and it is assumed that there is minimal influence from the rapid changes in the ground surface temperature. This suggests that using cold air at night (e.g., night purge ventilation) is not an effective strategy for saving energy on the upper floors of highrise buildings. Therefore, our first conclusion from this assessment of the weather data and simulations is that the conventional lapse rate may not always be a good indicator of the climate profile. temperature at ground level in summer was set at 45.9 °C. With this value, and using conventional lapse rate, the design temperature at 600 m altitude was calculated as 42.0 °C. However, this value differs from the summer design temperature condition according to the RWDI simulation. On the other hand, this temperature is very close to the value obtained according to the dry adiabatic lapse rate of −9.5 °C/km. We determined that this value is unusually high, despite the drop-in temperature due to increase in altitude and the fact that condensation did not occur. 
These summer results suggest that, in high-temperature climates, the design temperatures for high-altitude locations may be lower than those obtained according to the conventional lapse rate. Next, we considered the winter season. Because of the low temperatures, the RH in winter is relatively high; thus, we compared our results with the wet adiabatic lapse rate. Our results (Table 1) show that the design temperature obtained by simulation at 600 m altitude was higher than that obtained according to the wet adiabatic lapse rate, and even higher than the ground-surface design temperature. This observation can be attributed to the temperature inversion that occurs in Dubai for many hours during the coldest part of the day: the ground cools much faster at night, so the air temperature near the ground drops faster than at higher altitudes.
Energy simulation
In this section, we examine how the energy use changes at 600 m altitude. We simulated energy usage by using data on outside air conditions as presented in the previous section.
Outline of the simulation
We built a model of a typical floor of an office building at an elevation of 600 m, and used this model to calculate the daytime energy consumption. We used the EnergyPlus version 8.8.0 software for the energy simulation. Figure 8 shows the plan and elevation views of our model, and Table 2 describes the outline of the model building. We created a one-floor model of the office building. The plan was a 45 m square with a 15 m square core at the center. The model height was 3000 mm. All four exterior surfaces had the same elevation, and there was a 1200 mm high glass window (Z = 900-2100 mm). There was no exchange of heat with the upper and lower floors. The model was divided into an interior zone and a perimeter zone in each direction; the perimeter zone was set at 5 m from the outer wall. Thus, the model had nine zones, including the core. An interior wall separated the core from the office, and we set different values of internal gain for the core and the office; these values were obtained from ASHRAE 90.1-2013. Figure 9 shows the diagram of the heating, ventilation, and air conditioning (HVAC) system and plant. Table 3 describes the outline of the HVAC system and Table 4 describes the outline of the heating and cooling plant system. The HVAC system is a single-duct VAV system that can reheat each zone. The room is ventilated using a supply fan and a return fan, and the VAV distributes the air volume necessary for the load of each zone. Chilled water is produced by an electric HP chiller, and hot water is produced by a gas boiler and delivered to the heating coil and the VAV unit. The ventilation air flow rate was set according to ASHRAE 62.1-2013 [7].
Climate profile for simulation conditions
We ran the energy simulation using four different weather cases. Case 1 is the standard reference condition, taken from ANSI/ASHRAE Standard 169-2013 for Dubai at ground level. In Case 2, we changed only the pressure in the Case 1 weather file to represent conditions at 600 m altitude. Equation (2) shows the relationship between height and pressure for the international standard atmosphere [6]:

p = 101,325 (1 − 2.25577 × 10⁻⁵ z)^5.25588    (2)

where p is the pressure [Pa] and z is the altitude [m]. We used this approximate formula to calculate the pressure at 600 m altitude, and we changed all the values of the field atmospheric station pressure in the weather file to 94,400 Pa. We derived this value by substituting 589 m (the difference between the 0.6 km altitude and the Dubai ground level of 0.011 km, according to ASHRAE 169-2013) into Equation (2); a numerical check is sketched at the end of this section. It should be noted that this approximate formula includes the influence of the temperature lapse rate in its constants. Case 3 used the weather file for Dubai at 600 m altitude simulated by the computer-based climate generator of RWDI. In Case 4, we changed both the pressure and the temperature in the Case 1 weather file: the pressure was changed to 94,400 Pa, as in Case 2, and the dry-bulb temperature was reduced uniformly by 3.9 °C according to the conventional lapse rate of −6.5 °C/km. The absolute humidity was kept constant, and when the dew point temperature exceeded the dry-bulb temperature, the dew point temperature was reduced to match the dry-bulb temperature.

Table 5. Weather files for simulation

Table 6 shows the results of the energy simulation under the categories heating, cooling, lighting, equipment, fan, pump, and heat rejection for each weather condition separately. In addition, we calculated the total energy consumption and the energy consumption per area, and compared the normalized results of Cases 2-4 relative to Case 1. The energy consumption for cooling and heating in Case 2 was less than in Case 1 because the air density in Case 2 was lower than in Case 1: as the air density decreased, the mass of the air also decreased, so there was less air mass to heat and cool. The regulation of ventilation volume in ASHRAE 62.1-2013 [7] is set by volume at an air density of 1.2 kg(DA)/m³ at 1 atm and 21 °C (General notes for Table 6.2.2.1, Air density), so it was not necessary to change the amount of ventilation due to the decrease in air density. Because the ventilation volume was not changed in the simulation, we determined that the reduced cooling and heating energy use was mainly due to the difference in energy for treating the outside air. We attributed this finding to the fact that the HVAC system was a VAV system: the air density is lower at 600 m altitude, so the amount of heat that the air can carry in the same volume becomes smaller, and thus the cooling capacity is reduced. As a result, the total energy usage in Case 2 increased slightly, mainly due to the increase in fan power. In Case 3, the energy used for both heating and cooling was reduced because the outside-air conditioning load was lower in both the summer and winter periods; however, for the same reason as in Case 2, the energy consumed by the fans of the VAV system increased because of the low air density. Overall, the energy consumed was reduced by 3.8%. In Case 4, the cooling energy consumption decreased, but the heating energy consumption increased.
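Returning to Equation (2), the 94,400 Pa figure used for Cases 2 and 4 can be checked numerically. This is a sketch using the standard ISA constants, which we assume Eq. (2) follows.

def isa_pressure_pa(z_m):
    # International Standard Atmosphere pressure below 11 km; the constants fold in
    # the -6.5 degC/km temperature lapse rate, as noted in the text.
    return 101325.0 * (1.0 - 2.25577e-5 * z_m) ** 5.25588

print(round(isa_pressure_pa(589)))  # ~94,446 Pa, consistent with the ~94,400 Pa used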
The increase in heating energy in Case 4 can be attributed to the underestimation of the temperature at 600 m altitude by the conventional lapse rate. In addition, the energy consumed by the fans was influenced by the reduction in pressure, but this effect was partly offset by the stronger influence of the reduced cooling load. Thus, the total energy reduction of 4.6% in Case 4 was the largest among all the cases we simulated. Therefore, our conclusion is that there is a difference of approximately 5% in the simulated energy consumption depending on the method used to predict the outdoor air temperature at high altitude. Specifically, the conventional lapse rate is not always a good indicator of energy consumption.

Table 6. Result of the energy simulation
Conclusion
In designing skyscrapers, the conventional lapse rate should be used with caution for predicting the climate profile, design conditions, and energy consumption. Owing to the increasing rate of construction of high-rise buildings, there is an urgent need to identify and use appropriate climate data for high altitudes. | 2019-09-10T20:24:06.016Z | 2019-08-09T00:00:00.000 | {
"year": 2019,
"sha1": "59d43f78f36c320bfd93c97fac4ca4bb998de5de",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/294/1/012038",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "795a936713bf563a2c914178310aa524a02b86c9",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
20830088 | pes2o/s2orc | v3-fos-license | Craniotomy or Decompressive Craniectomy for Acute Subdural Hematomas: Surgical Selection and Clinical Outcome
Objective Craniotomy (CO) and decompressive craniectomy (DC) are the two main surgical options for acute subdural hematomas (ASDH). However, the optimal selection of surgical modality is unclear, and the decision may vary with the surgeon's experience. To clarify this point, we analyzed the preoperative findings and surgical outcome of patients with ASDH treated with CO or DC. Methods From January 2010 to December 2014, data for 46 patients with ASDH who underwent CO or DC were retrospectively reviewed. The demographic, clinical, and imaging findings and clinical outcomes were analyzed and statistically compared. Results Twenty (43%) patients underwent CO and 26 (57%) patients received DC. In the DC group, preoperative Glasgow Coma Scale was lower (p=0.034), and more patients had non-reactive pupils (p=0.004). Computed tomography findings of the DC group showed more frequent subarachnoid hemorrhage (p=0.003). Six-month modified Rankin Scale showed favorable outcome in 60% of the CO group and 23% of the DC group (p=0.004). DC was done in patients with more unfavorable preoperative features (p=0.017). Patients with few unfavorable preoperative features (<6) had good outcome with CO (p<0.001). Conclusion In selective cases with few unfavorable clinical findings, CO may also be an effective surgical option for ASDH. Although DC remains the standard surgical modality for patients with poor clinical status, CO can be an alternative considering the possible complications of DC.
Introduction
Acute subdural hematomas (ASDH) are observed in one third of patients with severe traumatic brain injury. 22) An ASDH forms between the dura and arachnoid membranes, usually due to tearing of bridging veins or arterial rupture. Management of ASDH may vary from simple observation to different surgical evacuation techniques. The two most frequent surgical modalities are craniotomy (CO) and decompressive craniectomy (DC). The CO procedure removes skull bone and subdural hematoma, followed by replacement of the original skull bone. DC also removes skull bone and hematoma but leaves the bone flap off to allow expansion of edematous brain tissue, with or without an additional expansile duroplasty. 2) Many studies have reported the effectiveness of DC for ASDH. 1,5,6,10,17,18) However, not all patients show severe postoperative brain swelling after evacuation of the hematoma, in which case the theoretical benefit of DC is questionable. DC also carries disadvantages owing to the lack of bone closure. 2,9,10,16) The optimal surgical modality for ASDH still remains to be clarified.
The objective of this study is to analyze the surgical outcomes of CO and DC for evacuation of ASDH by comparing the preoperative clinical features, computed tomography (CT) images, and postoperative complications, which may inform the selection of the surgical modality.
Materials and Methods
We retrospectively reviewed 46 cases of ASDH surgically treated with CO or DC in our hospital from January 2010 to December 2014. Demographic and preoperative medical data were reviewed including age, sex, and presence of medical illness causing coagulopathy or use of antiplatelet agents. Preoperative data that may affect the surgical outcome were also collected such as time from trauma to surgery or time from clinical deterioration to surgery, preoperative Glasgow Coma Scale (GCS), pupillary light reflex, and presence of major extracranial injury. Preoperative CT scans were analyzed for measurement of midline shift, presence of intracerebral hemorrhage (ICH) or petechial hemorrhage, obliteration of basal cistern and third ventricle, and presence of subarachnoid hemorrhage at basal cistern.
All of the patients underwent surgery for evacuation of ASDH through a frontotemporoparietal CO of size 10×12 cm or larger. The decision for CO or DC was made by the attending neurosurgeon (Figure 1). Cases in which evacuation of the ASDH was not the main goal of surgery were excluded; thus, evacuation of large traumatic ICH, decompression for cerebral swelling, and surgery other than frontotemporoparietal CO were excluded from this study.
Postoperative midline shift was measured from the immediate postoperative CT scan. Measurement of swelling above the bone flap was done for patients who underwent DC, using CT scans taken within postoperative days 3 to 7, when maximal brain swelling was observed. An imaginary line across the absent bone flap was drawn, and the brain tissue above this line was measured (Figure 2). Medical records and CT scans were reviewed for patients who underwent cranioplasty. Postoperative outcome was recorded using the modified Rankin Scale (mRS) 6 months after the initial surgery. Outcome was defined as good for patients with mRS scores 1-3, and poor for patients with scores 4-6.
Preoperative clinical features were classified for further analysis, where an unfavorable feature was defined as age over 70 years, anticoagulation or antiplatelet use, time to surgery >4 hours, preoperative GCS <8, one or both non-reactive pupils, and comorbid major extracranial injury. Preoperative CT findings of ICH or petechial hemorrhage, obliterated basal cistern or 3rd ventricle, and presence of subarachnoid hemorrhage were also classified as unfavorable preoperative features. Data were analyzed using the Statistical Package for Social Sciences (SPSS) software for personal computers (SPSS ver. 21; IBM Corp., Armonk, NY, USA). Unpaired Student's t-test or the Mann-Whitney test was used for continuous variables, and the chi-squared test or Fisher's exact test was used for categorical variables. A probability value of less than 0.05 was considered statistically significant.

Figure 1. (A, B) A female patient with acute subdural hematoma (ASDH) after traumatic brain injury who underwent craniotomy and evacuation of hematoma without remarkable postoperative brain swelling. (C, D) A 78-year-old male with ASDH who underwent decompressive craniectomy; preoperative (C) and postoperative (D) CT scans show brain swelling, but removal of bone aids in control of raised intracranial pressure.
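Equivalent comparisons can be reproduced outside SPSS; a minimal sketch with SciPy follows, where the continuous values are hypothetical and the 2×2 table uses the counts later reported for the CO group (13/2 good/poor outcomes with <6 unfavorable features vs. 0/5 with ≥6).

from scipy import stats

# Continuous variable (e.g., age) compared between CO and DC groups (illustrative values):
age_co = [63, 71, 58, 66, 60]
age_dc = [65, 70, 72, 61, 59]
t, p_t = stats.ttest_ind(age_co, age_dc)         # if normally distributed
u, p_mw = stats.mannwhitneyu(age_co, age_dc)     # otherwise

# Categorical variable: good vs. poor outcome by <6 vs. >=6 unfavorable features (CO group).
table = [[13, 2], [0, 5]]
odds, p_fisher = stats.fisher_exact(table)       # small expected counts -> Fisher's exact test
print(p_t, p_mw, p_fisher)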
Demographic and preoperative clinical factors
Forty-six patients with ASDH who received either CO or DC met the inclusion criteria. Twenty (43%) patients underwent CO, and 26 (57%) underwent DC. The mean age of the CO group was 63.4 years, and that of the DC group was 65.5 years. Males were prevalent in both groups. Neither age nor gender distribution showed a significant difference between the groups.
Preoperative clinical data showed the presence of coagulopathy or use of antiplatelet agents in 5 of 20 (25%) patients of the CO group and 12 of 26 (46%) patients of the DC group. Time from trauma to surgery and time from clinical deterioration to surgery are compared in Table 1, and preoperative GCS was lower in the DC group (p=0.034). Preoperative pupillary reflex also showed more frequent one or both non-reactive pupils in the DC group (40% in CO vs. 77% in DC, p=0.004). Major combined extracranial injury occurred in 2 patients in both groups (Table 1).
Preoperative CT findings
The mean preoperative midline shift on preoperative CT was 12.9 mm in the CO group and 13.3 mm in the DC group (p=0.512). The number of patients with ICH or petechial hemorrhage was 8 in the CO group and 13 in the DC group (40% in CO vs. 50% in DC, p=0.500). Obliteration of the basal cistern and 3rd ventricle was seen in 5 patients in the CO group and 13 in the DC group (25% in CO vs. 50% in DC, p=0.085). More patients showed preoperative subarachnoid hemorrhage in the DC group (25% in CO vs. 69% in DC, p=0.003).
Postoperative findings and patient outcome
The mean postoperative midline shift was larger in the DC group (6.4 mm in CO vs. 9.1 mm in DC), but the difference was not statistically significant (p=0.095). Reoperation was done in 4 of 20 (20%) patients in the CO group, due to recollection of subdural hematoma in 2 patients and epidural hemorrhage in 2 patients. In the DC group, reoperation was done in 3 of 26 (12%) patients, due to subgaleal hematoma in 1 patient and growth of traumatic ICH in 2 patients. Cranioplasty was done in only 12 of 26 (46%) patients, mainly because of patient condition. Six-month postoperative mRS scores were 3 or less in 12 (60%), 4 or 5 in 7 (35%), and 6 in 1 (5%) of the 20 patients in the CO group. In the DC group, mRS scores were 3 or less in 6 (23%), 4 or 5 in 7 (27%), and 6 in 13 (50%) of the 26 patients. The difference in 6-month mRS scores between the CO and DC groups was statistically significant (p=0.004) (Table 2).
Number of unfavorable preoperative features and clinical outcome
The mean number of unfavorable preoperative features was 4.1 in the CO group and 5.8 in the DC group (p=0.017) (Table 3). In the CO group, among patients with <6 unfavorable preoperative features, 13 showed good outcome and 2 showed poor outcome; among those with ≥6 features, no patient showed good outcome and 5 showed poor outcome (p<0.001). In the DC group, among patients with <6 unfavorable preoperative features, 4 showed good outcome and 5 showed poor outcome; among those with ≥6 features, 2 showed good outcome and 15 showed poor outcome (p=0.06).
Discussion
ASDH are present in approximately one third of patients with severe traumatic brain injury. 22) Despite advances in emergency medical services and surgical techniques, ASDH remains one of the most lethal of all intracranial injuries. Various surgical modalities such as simple burr hole trephination, CO, and DC are used for evacuation of ASDH. The Brain Trauma Foundation guidelines published in 2006 recommended that ASDH with thickness greater than 10 mm, or midline shift greater than 5 mm on CT scan, should be treated surgically. 1) Mortality rates of ASDH range from 55% to 79% even with surgical intervention of any modality. 19) Ransohoff et al. 18) reported a recovery rate of 40% for ASDH treated with hemicraniectomy followed by hematoma removal. After that report, DC has been recommended as a surgical modality of choice for ASDH, and performing DC in ASDH patients has seemed attractive. 1,4,7,8,13,14,20) Girotto et al. 7) reported that a DC of adequate size, early surgery, and a GCS of 6 to 8 contributed significantly to better outcome by reducing morbidity and mortality. Meier and Gräwe 17) reported that DC benefits the overall outcome of patients with traumatic brain injury. The rationale behind performing DC lies in the control of postoperative brain swelling and overwhelming intracranial hypertension, but little is known about the degree of postoperative swelling after evacuation of hematoma. The empirical decision for DC or CO is made by the neurosurgeon based on the patient's clinical status and CT findings, which may be confounding unless brain swelling is noted intraoperatively after evacuation of the hematoma.
The analysis of postoperative brain swelling is difficult, since the postoperative CT findings or intracranial pressure measurements will vary depending on whether the bone flap is replaced or removed. Differing trauma settings among patients make a randomized trial of CO versus DC impossible and unethical. Nevertheless, several retrospective series have compared the outcomes of CO and DC. 3,5,12,15,21) Woertgen et al. 21) compared surgical outcomes in ASDH, which were not significantly different between CO and DC. They concluded that signs of herniation at presentation and increasing age had the most influence on patient outcome. Thus, preoperative clinical features influenced outcome most, and DC does not seem to have a therapeutic advantage over CO in ASDH. A more recent study by Chen et al. 3) also reported similar results in 102 patients, where the DC group had a higher mortality rate, which may be due to poorer preoperative clinical status. The study by Li et al. 15) is notable in that they tried to diminish the effect of preoperative clinical status by using the CRASH-CT prognostic model. Predicted outcome was calculated in 85 patients in a retrospective fashion. Favorable outcomes were observed in 45% of CO versus 42% of DC patients (p=0.83), but the standardized morbidity ratio (observed/expected unfavorable outcome) was 0.90 for the CO group and 0.75 for the DC group.
Our study showed poorer outcome in the DC group compared with the CO group (poor mRS in 77%, 20 of 26 patients in the DC group vs. 40%, 8 of 20 patients in the CO group; p=0.004). These results may be due to more patients with low GCS score (GCS<8), unresponsive pupils, and comorbid CT lesions in the DC group. Our results carry a similar selection bias, in that neurosurgeons tend to perform DC when a patient's preoperative clinical status is poor. To clarify this point, we counted the number of unfavorable features for each patient that may influence poor outcome. On average, the DC group had more adverse features than the CO group, which can explain the poor outcome in the DC group.
One notable finding is that, in patients with few unfavorable features (<6), good outcome (mRS less than 3) was achieved in the majority of patients in the CO group. However, similar results were not obtained in the DC group with few unfavorable features. This implies that further stratification is needed to identify the unfavorable clinical features with the largest impact on outcome. Nonetheless, it seems that some patients with few preoperative unfavorable features can benefit from CO without the need for bone removal.
Furthermore, neurosurgeons should be aware of the various possible complications of DC. Subgaleal hemorrhage, herniation through the cranial defect, subdural effusion, syndrome of the trephined (sinking skin flap syndrome), and hydrocephalus are reported complications of DC. 11,23) In our series, 1 patient underwent reoperation due to subgaleal hematoma, and 2 patients had severe sinking of the skin flap that made cranioplasty difficult and resulted in complications. DC also has the disadvantage of requiring subsequent cranioplasty, which harbors additional risk of complications. 2,11,16) Gooch et al. 9) reported that the immediate postoperative complication rate of cranioplasty after DC was as high as 34%, including infection, wound breakdown, intracranial hemorrhage, and bone resorption. We also experienced complications of cranioplasty in our patients (4 of 12; epidural hematoma 2, infection 1, cerebrospinal fluid leakage 1), which interrupted their recovery. In this context, there may be some advantage to CO in the evacuation of ASDH.
However, this study is a retrospective single-center study with a small patient population. The limitation of selection bias hinders any conclusion on the role of CO or DC for ASDH. We think further investigation with a larger patient population and carefully selected criteria is needed to clarify the optimal surgical modality for patients with ASDH.
Conclusion
In selected cases with few unfavorable clinical findings, CO may also be an effective surgical option for ASDH. Although DC remains the standard surgical modality for patients with poor clinical status, CO can be an alternative considering the possible complications of DC. A controlled prospective study with a larger patient population is needed to clarify this point.
■ The authors have no financial conflicts of interest. | 2017-09-22T13:56:01.068Z | 2016-04-01T00:00:00.000 | {
"year": 2016,
"sha1": "0f9c6e998f349df4a47fdeee9a35119b5305c24e",
"oa_license": "CCBYNC",
"oa_url": "https://europepmc.org/articles/pmc4866560?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "0f9c6e998f349df4a47fdeee9a35119b5305c24e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
236640518 | pes2o/s2orc | v3-fos-license | The horizontal shear fracture of the pelvis
Purpose Various classification systems describe fractures of the acetabulum and pelvis separately. Horizontal shear fractures involve the pelvic ring and both acetabula and have not been previously described. The aim of this study is to describe the horizontal shear fracture of the pelvis. Methods At a level 1 trauma centre over 10 years from December 2008 to December 2018, 1242 patients had pelvic and acetabular fractures. Six patients had horizontal shear fractures, comprising 0.5% of all pelvic and acetabular fractures. Demographic, clinical and radiological data was collected. Clinical outcomes were pain and mobility level, sciatic nerve symptoms, further acetabular or pelvic surgery, or total hip arthroplasty. Radiological outcomes included fracture displacement, implant migration, femoral head osteonecrosis, and post-traumatic arthritis. Outcomes were assessed at a minimum 12 month follow-up. Results The median patient age was 35 years. Five of six shear fractures were due to motorcycle crashes. No mortalities occurred. At follow-up, three patients reported pain, two patients had difficulty mobilising associated with traumatic sciatic nerve injury, and one patient underwent total hip arthroplasty for femoral head osteonecrosis. No fracture displacement or implant migration occurred. The Matta arthritis grade was excellent or good in all except one hip. Median follow-up time was 1.8 (range 1.1–7.8) years. Conclusion The horizontal shear fracture of the pelvis is a high-energy injury characterised by separation of the anterior and posterior pelvic ring through the acetabula. Good outcomes can be achieved with open reduction and internal fixation of displaced fractures.
Introduction
Pelvic ring and acetabular fractures are often considered separately in current classification systems. For acetabular fractures, Letournel described the classic elementary and associated fracture types [1], whilst for pelvic ring injuries, the Tile stability-related classification [2] or Young and Burgess [3] mechanism-related classification is commonly used. Both pelvic and acetabular fractures are also included in comprehensive fracture classifications such as the Arbeitsgemeinschaft für Osteosynthesefragen (AO) Foundation and Orthopaedic Trauma Association (OTA) systems [4]. In patients with acetabular fractures, 1% are bilateral [5]. Specifically, bilateral transverse type acetabular fractures have only been described in two case reports previously [6,7]. We have observed a unique type of fracture, the horizontal shear pattern which has not yet been reported. These are bilateral transverse acetabular fractures separating the anterior and posterior pelvic ring through the acetabula. This study's objective is to describe the horizontal shear fracture, associated clinical features, subsequent management, and outcomes associated with this pattern.
Materials and methods
All patients with pelvic and acetabular fractures from a level 1 trauma centre's prospective pelvic and acetabular fracture database from December 2008 to December 2018 were retrospectively reviewed to identify the patients of interest. The database contains patient medical record numbers, age, sex, and whether a pelvic or acetabular fracture was sustained. Radiological images for all patients were reviewed using our institution's radiology imaging system. This included radiographs, computerised axial tomography (CT) scans, magnetic resonance imaging scans, and bone scans. The key inclusion criterion was any patient with the horizontal shear fracture pattern. This was defined by bilateral transverse type acetabular fractures (OTA classification fracture type 62B1) with separation of the anterior and posterior pelvic ring through these acetabulum fractures. The transverse pattern was defined by Letournel's classification [1]. Patients with bilateral transverse acetabular fractures who additionally sustained an associated posterior wall fracture, comminuted fracture pattern, or pelvic ring fracture were included. All other pelvic and acetabular fracture types were excluded.
Detailed demographic data, aetiology, clinical features, investigation results, and management were collected retrospectively for each included patient via our institution's digital clinical information system. The injury severity score was recorded and patients were classified as polytrauma victims by the Berlin definition [8]. Patients were investigated with pre-operative radiographs and CT scanning of the pelvis with 3-dimensional reconstructions. The images of patients with bilateral transverse acetabular fractures were reviewed in detail to describe the fracture pattern. Fractures were classified into infratectal (62B1.1), juxtatectal (62B1.2), or transtectal (62B1.3) types according to Letournel's classification [1]. Clinical and radiological outcomes were recorded at the patient's most recent follow-up after history, examination, and pelvis radiographs were performed. Minimum follow-up required was 12 months.
Clinical outcomes included pain and mobility level adapted from Matta [5], sciatic nerve symptoms, further acetabular or pelvic surgery, other related surgeries, or total hip arthroplasty. These outcomes were patient reported. Sciatic nerve symptoms are specifically related to weak ankle dorsiflexion or "foot-drop". Related surgeries were any pertaining to the bilateral acetabular fractures. Radiological outcomes measured on follow-up radiographs included posttraumatic hip arthritis, femoral head osteonecrosis, fracture displacement, and implant migration. Post-traumatic hip arthritis was assessed on plain X-ray as described by Matta's classification [5]. Femoral head osteonecrosis was defined on radiographs using the updated Ficat classification [9].
Heterotopic ossification was assessed and graded according to the Brooker classification [10].
Institutional ethics waiver was obtained prior to completing this study (Reference Number: AU201908-07). Patients were contacted and consent was obtained to utilise their images for the illustrative purposes of the study. Detailed statistical analysis was not required for this study. One patient was lost to local long-term follow-up as they were reviewed at a different hospital beyond 2 months post-operatively and thus were not included in the outcomes reported at follow-up.
Demographics
During the 10-year period, 1242 patients had pelvic or acetabular fractures. Acetabular fractures affected 283 patients. Six patients had horizontal shear pelvic fractures, representing an incidence of 0.5% of all pelvic and acetabular fractures (see Fig. 1). All six were included in this study. Five patients were young males in motorcycle crashes, which involved going over the handlebars and, in one case, T-boning a car. Estimated crash speeds ranged from 45 to 80 km per hour. One young female was crushed by a horse. The median age was 35.5 (range 17-49) years. See Table 1 for demographic data.
Clinical presentation and associated injuries
On presentation, patients could not weightbear. Four patients were not intubated and had bilateral pelvic pain and tenderness, and were haemodynamically stable. The other two patients were intubated and required massive transfusion protocol (MTP) activation. One had an MTP after suffering a haemothorax and a proximal femoral shaft fracture managed with intramedullary nailing while another required angioembolisation for internal iliac artery bleeding. Three patients sustained a sciatic nerve injury including one neuropraxia and two with loss of ankle dorsiflexion or "foot-drop". Posterior pelvic ring injuries included unilateral sacral alar fractures in three patients and a unilateral anterior sacroiliac joint (SIJ) injury in one patient. No posterior pelvic ring injury required definitive fixation as sacral alar fractures were undisplaced, the SIJ injury was anterior only, and patients were kept non-weightbearing for 6 weeks. All upper limb fractures were managed non-operatively. No bladder injuries were found on examination, following urinary catheterisation or on pelvis CT scan. The median hospital length of stay was 20.5 days (range 8-32 days).
Radiological investigations
All patients had pelvis radiographs and CT scans. Radiological analysis was performed for all six horizontal shear fracture cases (12 acetabula). In all cases, both transverse acetabular fractures occurred essentially in the same axial plane at a similar level. As viewed on sagittal CT images, the fracture line angle was variable at the anterior column (7 horizontal, 5 oblique) whilst characteristically exiting horizontally at the posterior column (11 horizontal, 1 oblique). Nine of 12 fractured acetabula were displaced (horizontal shear translation ≥ 2 mm, range 2-14 mm). This was measured at the location of maximal displacement. See Table 2 for radiological characteristics. Such displacement was characterised by posterior translation of the antero-inferior half of the pelvic ring (see Fig. 2). In the case of bilateral posterior wall fractures (62B1b), one resulted from posterior hip dislocation (see Fig. 3).
Surgery and post-operative management
An orthopaedic trauma team at a level 1 tertiary centre managed all patients. All nine displaced fractures underwent open reduction and internal fixation (ORIF). Three of the six patients required emergency surgery prior to ORIF. One patient had a temporary pelvic external fixator, definitive femoral intramedullary nailing and thigh fasciotomy; the second had a trauma laparotomy before transfer and then also had an external ventricular drain for intra-cranial haemorrhage on arrival; and the third had a closed reduction of their hip dislocation and skeletal traction. The median time from admission to surgery for ORIF of the horizontal shear fracture was 4 days (range 2-16 days). The patient who did not undergo ORIF was the youngest patient, had an undisplaced horizontal shear fracture, and was managed by 8 weeks of non-weightbearing followed by progressive weightbearing. In operative cases, through a Kocher-Langenbeck approach, the posterior column was fixed using either one or two low-profile pelvic reconstruction plates (DePuy Synthes, Warsaw, Indiana, USA), see Fig. 4. Posterior wall fractures were reduced and also held with posteriorly placed reconstruction plates. One patient with severely comminuted fractures had ilioinguinal approaches with anterior column fixation using bilateral supra-pectineal plates (Stryker, Kalamazoo, Michigan, USA). No sacral fractures required fixation after the acetabular columns were reduced and stabilised. Post-operatively, patients were allowed to sit up in bed and commence bed-based range of motion exercises, but were kept non-weightbearing for 6 weeks with privileges for stationary bike riding, swimming, and walking in shoulder-depth water. At 6 weeks, repeat radiographs and clinical review occurred at our clinic before commencing walking again with progressively increasing weight. Patients were allowed to return to physical work once fully weightbearing, typically after 3 months. Successive follow-up with serial radiographs occurred through our clinic at 3, 6, 12, and 24 months post-operatively; after 24 months, follow-up was organised only in case of new hip-related symptoms.
Outcomes at follow-up
All three patients reporting pain at follow-up had transtectal (62B1.3) type fractures and one had ipsilateral femoral head osteonecrosis, whilst another developed bilateral heterotopic ossification (right, Brooker grade 3; left, grade 2). This patient suffered severe brain injury, traumatic shock requiring massive transfusion, significantly comminuted fractures, and was the only patient to have bilateral ilioinguinal and Kocher-Langenbeck approaches. Two of the three patients with any degree of post-traumatic arthritis sustained associated posterior wall fractures. Nine of ten hips demonstrated minimal post-traumatic changes according to Matta's classification. The single hip with a poor grade developed femoral head osteonecrosis managed with total hip arthroplasty at > 1 year post-operatively. Both patients with difficulty mobilising were those who sustained sciatic nerve injuries with foot-drop. One had a tibialis posterior tendon transfer for unilateral foot-drop 1 year after injury and one patient utilised ankle-foot orthoses bilaterally. When comparing fracture and implant positions on serial follow-up radiographs, no fracture displacement or implant migration occurred. See Table 3 for a summary of clinical outcomes, which were recorded at the last clinical review for those five patients with greater than 12 month follow-up (median 22, range 13-94 months).
Discussion
The horizontal shear fracture of the pelvis is characterised by bilateral transverse acetabulum fractures. This injury is uncommon, accounting for 0.5% of all pelvic and acetabular fractures over 10 years at our institution. In these fractures, the anterior ring typically shears posteriorly ≥ 2 mm relative to the posterior pelvic ring through the bilateral transverse acetabular fractures. This is the first report of this pattern. These fractures are high-energy injuries normally following motorcycle crashes. The proposed mechanism is a large horizontal force transferred through both femora into the acetabula, for example, as the patient hits the ground with flexed hips after coming off the motorcycle or being ejected from a vehicle. Other mechanisms for this injury may include a crush injury with significant weight, for example under a horse, with the hips flexed and a force again directed along both femora into the acetabula. We propose that these specific fractures are rare because they require both hips to be concurrently positioned at a similar degree of flexion and slight abduction at impact, and both femora, rather than one, must simultaneously strike the ground or object to produce the horizontal shearing injury. The likelihood of this exact combination of events is low, and without it, the other more common patterns of unilateral acetabular fractures are instead produced. The horizontal shear fractures are frequently juxtatectal (62B1.2) or transtectal type (62B1.3), comminuted, have horizontal trajectories through the columns, and typically exit above the ischial spine. Posterior wall fractures may also occur. Sacral fractures and sciatic nerve injuries affected half of the patients. The Matta post-traumatic arthritis grade was excellent or good in 90% of hips. Patients who reported any pain or difficulty mobilising at follow-up sustained related sequelae including femoral head osteonecrosis, heterotopic ossification, or foot-drop related to sciatic nerve injury.

This study's limitations are acknowledged. First, the number of patients with this pattern is small. With larger studies, these initial findings may be expanded upon and the generalisability of findings may be improved. However, the pattern is uncommon, constituting just 0.5% of pelvic and acetabular fractures at our institution over 10 years. Matta also found bilateral acetabular fractures accounted for only 1% of patients with acetabulum fractures [5]. Unfortunately, epidemiological studies have not reported the prevalence of bilateral acetabular fractures regardless of country or number of patients, possibly due to their sheer rarity [11][12][13][14][15][16]. This is a retrospective study, and therefore detailed functional outcomes such as modified Merle d'Aubigné scores are not reported [17]. It would be of interest to compare the functional scores of patients sustaining horizontal shear fractures to other fracture patterns. The minimum follow-up for patients was 1 year; longer follow-up would be valuable to determine if patients remain free of post-traumatic arthritis.
The most prevalent unilateral acetabular fracture type is the posterior wall fracture (20-32%) followed by both column fractures (17-20%) [1,12,18]. There is one report of bilateral posterior wall fractures; however, there are no reports of bilateral both-column fractures [19]. Bilateral acetabulum fractures are most commonly central fracture dislocations [20][21][22][23]. These often occur secondary to seizures [20][21][22][23], sustained myoclonus [24], or cerebrovascular accident-related convulsions [25]. Central fracture dislocations have occurred with bilateral femoral neck fractures [25]. Seizures also caused bilateral T-type acetabulum fractures in one case [26], whilst osteoporosis caused bilateral anterior column posterior hemi-transverse fractures in another [27]. One study reported on eight cases of bilateral acetabular fractures most commonly following motor vehicle crashes, which included two cases of transverse fractures with posterior wall fractures [28]. Bilateral anterior column fractures were reported following motor vehicle crashes [29] and in one paediatric case following an ice-hockey collision [30]. Motor vehicle crashes have also produced an anterior wall fracture with contra-lateral posterior wall fracture [28,31] and bilateral posterior column and posterior wall fractures [19]. Our aim was to describe this unique pattern of bilateral horizontal shear lines through the transverse plane of the acetabula, separating the pelvic ring into an anterior and a posterior ring.
Bilateral transverse acetabular fractures are rare, with only two case reports to date [6,7], and this pattern has not been reported in larger epidemiology studies [11][12][13][14][15]. One occurred in a 24-year-old female ejected from her vehicle and another in a 65-year-old pedestrian struck by a car and thrown 20 feet. Such mechanisms were similarly high-energy to those in this study. As found in our study, associated posterior ring fractures were reported, with the 24-year-old patient having a left sacral fracture and right sacroiliac joint widening [6]. Traumatic sciatic nerve injuries have a reported incidence of 16% in patients with acetabular fractures [32]. Neither the 24-year-old nor the 65-year-old sustained sciatic nerve injuries; however, they did not have associated posterior wall fractures and likely did not experience the posterior shear-type mechanism that we found in motorcycle crashes. Pure transverse acetabular fractures are less prevalent (3-9%) than the associated transverse with posterior wall fractures (8-21%) [1,12,14,18]. Transverse with posterior wall fractures occurred in five patients in our group, and unsurprisingly, this pattern is the one most commonly found with sciatic nerve injury [33]. All fractures were juxtatectal or transtectal, similar to the findings in unilateral transverse acetabular fractures [1].
Two patients in our group required a massive transfusion protocol. One required internal iliac artery angioembolisation. In the previous reports, the 24-year-old patient had blood transfusions but no documented massive transfusion protocol [6]. Of note, no deaths occurred with level 1 trauma centre care in our study or in either case report of bilateral transverse acetabular fractures [6,7]. The 24-year-old patient had bilateral sacroiliac joint screw fixation and an anterior supra-acetabular external fixator, whilst the 65-year-old's diagnosis was delayed, with left-sided protrusion managed with skeletal traction [6,7]. These management options differed from ours. In horizontal shear fractures, the displaced acetabulum needs anatomical reduction and buttressing of the posterior column through internal fixation. This fixation also restores continuity between the anterior and posterior rings.
Published data regarding the follow-up of patients with bilateral transverse acetabular fractures are scarce [6,7]. We identified one patient with a poor grade of traumatic arthritis with an ipsilateral posterior wall fracture. The prevalence of post-traumatic arthritis following acetabular fracture is 26.6% [32] and posterior wall fractures are reported to have greater risk of post-traumatic arthritis [34]. One case of femoral head osteonecrosis occurred in our group in a hip which was not dislocated. Femoral head avascular necrosis occurs in 5.6% of patients with acetabular fractures and 9% in those with posterior dislocation [32]. The osteonecrosis was managed definitively with total hip arthroplasty. The rate of hip arthroplasty following acetabular fractures that result in post-traumatic arthritis or osteonecrosis is reported at 15% at a median of 4 years [35]. Regarding functional outcomes, a modified Merle d'Aubigné score was not available for our patients. On analysis of 906 patients correlating modified Merle d'Aubigné scores with fracture pattern, in transverse fracture types, 86.3% had excellent or good scores and transverse with posterior wall fractures were excellent or good in 83.0% [32].
Young and Burgess described a mechanism-based classification [3]. The horizontal shear mechanism involves bilateral transverse acetabular fractures with a separation of the anterior and posterior ring. When comparing combined acetabular and pelvic ring injuries to isolated acetabular fractures, transverse fractures and T-type fractures more commonly occur with pelvic ring injuries [36]. This may be explained by transverse type fractures occurring after a high-energy force directed through the acetabulum sufficient to additionally cause posterior ring injuries [37]. Our study supports this, as four of six patients had posterior pelvic ring injuries. Following a motorcycle crash or other significant force directed from anterior to posterior, the horizontal shear pattern can have concomitant posterior wall fractures, hip dislocation, or posterior pelvic ring injury.
In conclusion, we have described the horizontal shear fracture of the pelvis, characterised by transverse fractures through both acetabula causing separation of the anterior and posterior pelvic ring. It is an uncommon fracture and requires significant energy to produce, typically following motorbike crashes. The most common associated injuries include posterior wall fractures, posterior pelvic ring fracture, and sciatic nerve injury. Conceptually, it may fit into a mechanism-based system such as the Young and Burgess classification. Understanding this horizontal shear pattern will facilitate the management of patients with these injuries. Good radiological and clinical outcomes were achieved by treating displaced horizontal shear fractures with open reduction and internal fixation, which allowed restoration of the acetabulum, buttressing of the posterior column, and re-establishment of stable continuity between the anterior and posterior pelvic ring.
Author contributions Conceptualization: ZB; methodology: BJ and ZB; formal analysis and investigation: BJ and ZB; data curation: BJ and ZB; writing-original draft preparation: BJ; writing-review and editing: BJ and ZB; resources: ZB; visualisation: BJ; supervision: ZB; project administration: BJ and ZB.
Funding The authors declare no funding was received for conducting this study.
Conflict of interest
The authors have no conflicts of interest to declare in relation to this study.
Ethics approval Institutional ethics waiver was obtained prior to completing this study by the institution's research and governance department (Reference Number: AU201908-07). This was due to the retrospective nature of the study and being considered to be low and negligible risk research activity. The research adheres to the Declaration of Helsinki. This study was carried out according to STROBE guidelines for observational studies.
Consent to participate Informed consent was obtained from participants to include their results in the study; particularly, permission was obtained for the inclusion of imaging for the illustrative purposes of this study.
Consent to publish
Informed consent was obtained from participants to publish deidentified radiological images for the illustrative purposes of this study.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/. | 2021-08-02T13:33:27.502Z | 2021-08-02T00:00:00.000 | {
"year": 2021,
"sha1": "79acfc8e821430ec0e37b6896601a9da76beb8ed",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00068-021-01764-3.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "79acfc8e821430ec0e37b6896601a9da76beb8ed",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55473511 | pes2o/s2orc | v3-fos-license | Assessment of Bacteria, Fungi and Protozoa in Three Theobroma Cacao Soils in Ondo State, Nigeria
The microbial community of 3 cocoa soils in Ondo State, Nigeria was investigated. Fourteen bacterial isolates, 8 fungi and 9 protozoa were obtained. The bacteria include Actinomyces sp., Bacillus spp., Corynebacterium sp., Lactobacillus sp., Micrococcus sp., Staphylococcus sp. and Streptomyces sp. The fungi were Aspergillus flavus, A. niger, Fusarium sp., Penicillium sp., Phytophthora palmivora, Phytophthora spp., Rhizopus sp., Saccharomyces sp., Trichoderma spp. and Alternaria sp., while the protozoa include Balantiophorus, Biomyxa spp., Bodo spp., Colpoda spp., Tetramitus spp., Naegleria spp. and Uroleptus spp. Differences in the populations of the microorganisms in the 3 soils might be due to environmental factors of the fields, and this might account for the quantity and species of microorganisms obtained. The determination of the microorganisms present in cocoa fields is crucial, as they may be exploited for the control of black pod disease, which is presently one of the most important diseases affecting cocoa production in south-western Nigeria.
Introduction
Cocoa is cultivated in most tropical regions throughout the world as an economically important crop for smallholder farmers (Holmes et al., 2004). The number and kinds of microorganisms present in soils depend on many environmental factors: the amount and type of nutrients available, moisture, degree of aeration, pH, and temperature, among others (Prescott et al., 1999). Soil bacteria and fungi play pivotal roles in various biochemical cycles and are responsible for the recycling of organic compounds (Wall and Virginia, 1999). The best soil for cocoa production is forest soil rich in humus; it should be well-drained and free-flowing to allow easy penetration of roots, capable of retaining moisture during summer, and able to allow circulation of air and moisture. Cocoa is grown on soils with a wide range of pH from 6.0-7.5, where major nutrients and trace elements are available (Drenth and Guest, 2004). The beneficial roles of the cocoa microbial community include organic matter decomposition, mineralization of nutrients, biological degradation, acting as bio-filters for cleaning up soil, and improvement of soil structure. The level of spoilage microbes reflects the microbial quality and wholesomeness of a food product, as well as the effectiveness of measures used to control or destroy such microbes (Pierson and Smoot, 2001). This work examines the microbial flora of soils from three cocoa plantations, including the protozoa, whose study has received little attention. The study was done to assess the microorganisms present in these fields, in order to fully exploit their potential.
MATERIALS AND METHODS
Sample collection
Soil samples were collected from three different fields in Odegbo, Araromi Quarters; Idele, along Supare road; and a cocoa farm at Ilale, along the Adekunle Ajasin University permanent site, Akungba-Akoko, Ondo State, Nigeria. The modified method of Mpika et al. (2011) was used for sample collection. The samples were obtained from three separate locations within each field with a soil auger from the topsoil at a depth of up to 10 cm, after removing the leaf litter to obtain random and uniform samples. In each plot, a bulk sample of 600 g of soil was collected, made up of 3 samples taken at the base of 3 cacao trees bearing many healthy pods. Each bulk sample was carefully labeled, mixed and divided into three parts, which were put in previously sterilized polyethylene bags.
pH readings
The Fisher Accumet pH meter (Model 600, Fisher Scientific Co., U.S.A.) was used for determining the pH of the samples. Water and 0.1 M KCl solution were used at a 1:2.5 soil/solution ratio in a sterile beaker. The electrode of the pH meter was inserted into the suspension and the readings were taken once stable.
Determination of Physico-chemical parameters
The organic carbon (OC) content was determined by the modified K2Cr2O7 digestion of the Walkley-Black wet oxidation method. A flame photometer was used for measuring Na and K, while an Atomic Absorption Spectrophotometer was used for Mg and Ca.
Cultivation of microorganisms
Preparation of media for isolation
The medium used for the isolation of bacteria and protozoa was Nutrient Agar (NA), while Sabouraud Dextrose Agar (SDA) was used for fungi. The media were prepared according to the manufacturer's instructions and specifications, sterilized in an autoclave for 15 min at 121 °C and 103.42 kPa, and allowed to cool.
Cultural methods for bacteria and fungi
The pour plate technique was used for inoculation of the soil samples. Petri dishes were arranged on a working bench for each of the samples collected. One gramme of each soil sample was suspended in 9 ml of sterile distilled water, mixed thoroughly and serially diluted to 10⁻⁴, 10⁻⁵ and 10⁻⁶ for bacteria and protozoa, while 10⁻³, 10⁻⁴ and 10⁻⁵ were used for fungi. A 1 ml aliquot was then dispensed into each sterile Petri dish, and molten NA or SDA was poured into each dish. The plates were then gently swirled for 10 s to aid even distribution of both the sample and the medium, and were allowed to cool and set before incubation. Incubation of the plates was done at 37 °C for bacteria and 25 °C for fungi, in an inverted position, for 48 hours until reasonable growth occurred. The isolates were preserved and maintained on NA and SDA slants, and kept at 4 °C in the refrigerator until further analysis. The cultural characteristics of the colonies were observed.
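To make the plate-count arithmetic explicit, each colony count converts to cfu/g as count × dilution factor / volume plated. The following is a minimal sketch in Python; the function name is ours, and we assume the dilution label denotes the total dilution of the original 1 g sample:

```python
def cfu_per_gram(colony_count, dilution_exponent, volume_plated_ml=1.0):
    """Estimate colony-forming units per gram of soil from one plate count.

    colony_count      -- colonies counted on the plate after incubation
    dilution_exponent -- e.g. 5 for the 10^-5 serial dilution
    volume_plated_ml  -- aliquot dispensed into the Petri dish (1 ml here)
    """
    dilution_factor = 10 ** dilution_exponent
    return colony_count * dilution_factor / volume_plated_ml

# Example: 40 colonies on a 10^-5 plate -> 4.0e+06 cfu/g, within the
# range reported in the Results for the Odegbo and Ilale soils.
print(f"{cfu_per_gram(40, 5):.1e} cfu/g")
```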
Isolation and identification of protozoa
The method of Subba Rao (1999) was used for isolating the protozoa. Escherichia coli, a good example of an edible bacterium for soil protozoa, was used.
Cultures of E. coli were first cultivated on NA in 9 sterile plates at 37 °C for 24 hours. After the incubation, 1 ml of each of the 10⁻⁴, 10⁻⁵ and 10⁻⁷ soil dilutions was transferred onto each of the bacterial cultures and appropriately labeled. The plates were sealed with masking tape and incubated for 10 days at 30 °C. Staining was done by preparing a smear on microscope slides, flooding it with Giemsa stain, and allowing it to dry for 30 s before viewing under the microscope with an oil-immersion lens. The isolates were identified and classified into groups based on morphology (shape and organ of locomotion): ciliates; amoebae, which move by means of a temporary foot or "pseudopod" (testate amoebae make a shell-like covering, while naked amoebae do not); and flagellates, which use a few whip-like flagella to move (Minchin, 2003).
Identification of bacteria
Gram staining was done; organisms that retained the purple colour of crystal violet were considered Gram positive, and those that retained the red colour of safranine, Gram negative. Biochemical tests carried out included the catalase test and sugar fermentation tests with glucose, sucrose, lactose and mannitol. Others were the indole, starch hydrolysis, motility and ornithine tests (Ederer and Clark, 1970).
Identification of fungi
Pure cultures of fungal isolates were characterized between 48 and 96 h after incubation. They were viewed under the microscope with lactophenol cotton blue, and classified based on colony types and morphology of the spores according to the descriptions of various identification books, including Barnett and Hunter (1998), Williams-Woodward (2001), Dayan (2004) and Chaturvedi and Ren (2011). Radial growth of the isolates was measured daily in some cases to ease identification.
RESULTS
All the soils were observed to be slightly acidic (pH 6.30-6.45), which is suitable for cocoa cultivation. However, the acidity was higher in Odegbo, followed by Ilale and Idele. The organic carbon (OC), K, Ca and Mg were higher in Odegbo than in the other locations (Table 1).
Eight bacterial isolates, 8 fungi and 9 protozoa were obtained. The total bacterial count ranged from 2.0 to 4.0 × 10⁶ cfu/g in Odegbo, 1.0 to 9.0 × 10⁶ cfu/g in Idele and 2.0 to 4.0 × 10⁶ cfu/g in Ilale. The highest and lowest bacterial counts, 9.0 × 10⁶ and 1.0 × 10⁶ cfu/g respectively, were observed in Idele, as shown in Table 2. Odegbo and Ilale had the highest number of bacterial species (6) in the soil samples, while Idele had the least (5). Actinomyces sp., Bacillus sp., Corynebacterium sp. and Micrococcus sp. were obtained from all three cocoa fields, an indication that these microorganisms are predominant in cocoa soil (Table 3).
DISCUSSION
The results showed a high proliferation of bacteria, fungi and protozoa, with a greater proportion of bacteria. Cocoa soils are therefore among the preferred sites of indigenous microorganisms. Most of the bacteria reported in this study have been shown to be present in cocoa soils by previous workers (Amir and Pineau, 1998). Odegbo and Ilale had higher bacterial and fungal counts, probably due to the nature of the soil and the use of synthetic chemicals and pesticides, which can alter the soil microbial balance and cause soil microorganisms to grow when used as carbon and energy sources (Deacon, 2005). The number of microorganisms may increase depending on the organic matter content of any particular soil. The bulk of soil bacteria are heterotrophic and utilize readily available sources of organic energy from sugars, starch, cellulose and protein. Actinomycetes, which were found in all the fields, grow on complex substances such as keratin, chitin and other complex polysaccharides, and thus play an active role in humus formation.
Soil fungi are mostly heterotrophs. Sporulating fungi such as Mucor, Penicillium and Aspergillus appear on agar plates more profusely than non-sporulating ones (Saritha and Sreeramulu, 2013). According to Adebola and Amadi (2010a), Rhizopus spp. could possibly serve as a good biological control agent against Phytophthora palmivora. Many beneficial fungi and bacteria that occur naturally in association with cocoa have been reported to show potential as antagonists of major cocoa pathogens (Bong et al., 2000; Samuel and Habber, 2003; Adebola and Amadi, 2010b).
The highest population of protozoa found in cocoa soil belongs to the flagellates (class Mastigophora). This could be due to litter content, soil depth, pore size and water potential (Stout and Heal, 1967). Other reports highlight that protozoan abundance and diversity may be greater in environments with relatively high levels of environmental stress.
Encystment of protozoa was observed, indicating that the cells had accumulated sufficient reserves when the conditions became unsuitable for their activities. Conversely, where nutritional resources are low, encystment is limited and many cells die if the soil dries out (Couteaux and Ogden, 1988).
CONCLUSION
Natural populations of microorganisms (bacteria, fungi and protozoa) in cocoa soils were obtained in this study. Almost all soil-living organisms have a different micro-environment in which they live (Rana, 2005; Subba Rao, 1999). It was observed that the total bacterial counts were higher than the fungal counts in samples from the three fields. This predominance of bacteria over fungi in cocoa soils has been observed by several authors (Okoh et al., 1999). The biodiversity was variable qualitatively and quantitatively. Differences in the populations of microorganisms in the 3 soils might be due to physiological features of the fields, and this might account for the quantity and species of microorganisms obtained. Protozoan isolates from the field beside the stream were higher in number than those from the other fields. This might be due to a high level of water potential, which enhances movement of the organisms.
The study was done within the limits of the facilities available. Modern molecular approaches (e.g., nucleic acid probes) should be employed to obtain a detailed overview of the microbial diversity. The potential of the isolated microorganisms, especially the fungi and bacteria, could be exploited in the control of black pod disease caused by Phytophthora palmivora and P. megakarya; this would instill hope in cocoa farmers whose revenues constantly decline due to this disease.
"year": 2015,
"sha1": "066b5cc88f61b2cbcc339c825f8704b10489e78b",
"oa_license": "CCBY",
"oa_url": "https://www.ijsciences.com/pub/pdf/V4201507768.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "86a7a96681380cc116f9336b11c617fb18ad1455",
"s2fieldsofstudy": [
"Agricultural And Food Sciences",
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
119283558 | pes2o/s2orc | v3-fos-license | Fusion and quasifission dynamics in the reactions $^{48}$Ca+$^{249}$Bk and $^{50}$Ti+$^{249}$Bk using TDHF
Background: Synthesis of superheavy elements (SHE) with fusion-evaporation reactions is strongly hindered by the quasifission (QF) mechanism which prevents the formation of an equilibrated compound nucleus and which depends on the structure of the reactants. New SHE have been recently produced with doubly-magic $^{48}$Ca beams. However, SHE synthesis experiments with single-magic $^{50}$Ti beams have so far been unsuccessful. Purpose: In connection with experimental searches for $Z=117,119$ superheavy elements, we perform a theoretical study of fusion and quasifission mechanisms in $^{48}$Ca,$^{50}$Ti+$^{249}$Bk reactions in order to investigate possible differences in reaction mechanisms induced by these two projectiles. Methods: The collision dynamics and the outcome of the reactions are studied using unrestricted time-dependent Hartree-Fock (TDHF) calculations as well as the density-constrained TDHF method to extract the nucleus-nucleus potentials and the excitation energy in each fragment. Results: Nucleus-nucleus potentials, nuclear contact times, masses and charges of the fragments, as well as their kinetic and excitation energies strongly depend on the orientation of the prolate $^{249}$Bk nucleus. Long contact times associated with fusion are observed in collisions of both projectiles with the side of the $^{249}$Bk nucleus, but not on collisions with its tip. The energy and impact parameter dependences of the fragment properties, as well as their mass-angle and mass-total kinetic energy correlations are investigated. Conclusions: Entrance channel reaction dynamics are similar with both $^{48}$Ca and $^{50}$Ti projectiles. Both are expected to lead to the formation of a compound nucleus by fusion if they have enough energy to get in contact with the side of the $^{249}$Bk target.
I. INTRODUCTION
The synthesis of superheavy elements is one of the most fascinating and challenging tasks in low-energy heavy-ion physics. Nuclear mean-field theories predict a superheavy island of stability as a result of new proton and neutron shell closures. Most recent theoretical calculations yield a magic neutron number N = 184, but there is no consensus yet about the corresponding magic proton number, with predictions [1-6] ranging from Z = 114 − 126. Experimentally, two approaches have been used for the synthesis of these elements. The first method uses targets containing doubly-magic spherical nuclei such as 208 Pb (or alternatively 209 Bi). By bombarding these targets with heavy-ion beams ranging from chromium to zinc, researchers at the GSI Helmholtz Center in Germany and at Riken were able to produce several isotopes of elements Z = 107 − 112. The beam energy was kept low to minimize the excitation energy ('cold fusion') [7][8][9][10]. The second approach, pioneered at JINR in Russia, uses actinide targets instead. In contrast to the spherical 208 Pb target nuclei used at GSI, all of the actinide target nuclei exhibit quadrupole deformed ground states. Target materials ranging from 238 U to 249 Cf were irradiated with a 48 Ca beam. Despite the fact that the excitation energy is found to be substantially higher in these experiments ('hot fusion'), researchers at JINR were able to create isotopes of elements Z = 113 − 118 [11][12][13][14][15], with lifetimes of milliseconds up to a minute. Recently, hot-fusion experiments were also carried out at GSI, LBNL, and RIKEN [10,[16][17][18][19][20][21] which confirmed the discovery of elements Z = 112 − 117. However, attempts to synthesize even heavier elements such as Z = 119, 120 with beams of 50 Ti and 54 Cr instead of 48 Ca have so far not been successful. The experimental community is asking for theoretical guidance as to why 48 Ca beams seem to be so crucial in forming superheavy elements. For example, the reaction 48 Ca + 249 Bk produces superheavy element 117 with cross-sections of 2 − 3 picobarns. By contrast, an upper cross section limit of only 50 fb was reported for the production of isotopes of element 119 in the reaction 50 Ti+ 249 Bk at GSI-TASCA [22].
Experimentally it is found that capture reactions involving actinide target nuclei result either in fusion or in quasifission. Fusion produces a compound nucleus in statistical equilibrium, while quasifission leads to a reseparation of the fragments after partial mass equilibration without formation of an equilibrated compound nucleus [23]. Furthermore, if the nucleus does not quasifission and evolves to a compound system, it can still undergo statistical fission due to its excitation. The evaporation residue cross-section is dramatically reduced due to the quasifission (QF) and fusion-fission (FF) processes.
Most dynamical models [51][52][53][54][55] argue that for heavy systems a dinuclear complex is formed initially, and that the barrier structure and the excitation energy of this precompound system determine whether it survives or breaks up via quasifission. The challenge for nuclear theory is to describe the entrance channel dynamics leading either to fusion or to quasifission, accounting for the complex interplay between dynamics and structure. Microscopic dynamical theories are natural candidates to describe such reactions. Here, we simulate heavy-ion collisions in the framework of the time-dependent Hartree-Fock (TDHF) theory, which provides a fully microscopic mean-field approach to nuclear dynamics.
In this paper we will concentrate on the theoretical analysis of the 48 Ca+ 249 Bk experiments [12,13,21] in which element Z = 117 was produced. This system will be compared to the 50 Ti+ 249 Bk reaction, which appears to have a very low cross section limit for synthesizing element Z = 119. Our goal is to investigate potential differences in mechanism between these two reactions by calculating dynamical observables such as nuclear contact times, mass and charge transfer, excitation energies, and heavy-ion potentials.
A brief introduction to the theoretical framework is provided in section II, followed by a presentation and discussion of the results in section III. Conclusions are drawn in section IV.
The TDHF equations for the single-particle wave functions can be derived from a variational principle. The main approximation in TDHF is that the many-body wave function Φ(t) is assumed to be a single time-dependent Slater determinant at all times. It describes the time-evolution of the single-particle wave functions in a mean-field corresponding to the dominant reaction channel. During the past decade it has become numerically feasible to perform TDHF calculations on a 3D Cartesian grid without any symmetry restrictions and with much more accurate numerical methods [57,60,90,[102][103][104][105][106]. Furthermore, the quality of effective interactions has been substantially improved [107][108][109][110].
Recently, we have developed a new dynamic microscopic approach, the density-constrained time-dependent Hartree-Fock (DC-TDHF) method [111], to calculate nucleus-nucleus potentials V(R), mass parameters M(R), and precompound excitation energies E*(R) [112], directly from microscopic TDHF dynamics. The basic idea of this approach is the following: At certain times t or, equivalently, at certain internuclear distances R(t), the instantaneous TDHF density is used to perform a static energy minimization while constraining the proton and neutron densities to be equal to the instantaneous TDHF densities. This can be accomplished by solving the density-constrained density-functional problem

E_{DC}(R(t)) = \min_{\rho_n,\,\rho_p} \left\{ E[\rho_n,\rho_p] - \sum_{q=n,p} \int d^3r\, v_q(\mathbf{r}) \left[ \rho_q(\mathbf{r}) - \rho_q^{\mathrm{TDHF}}(\mathbf{r},t) \right] \right\},

where E[\rho_n, \rho_p] is the TDHF density-functional (calculated with Skyrme interactions). The quantities v_{n,p}(\mathbf{r}) are Lagrange multipliers which represent external fields that constrain the densities during the minimization procedure. This means we allow the single-particle wave functions to rearrange themselves in such a way that the total energy is minimized, subject to the TDHF density constraint. In a typical DC-TDHF run, we utilize a few thousand time steps, and the density constraint is applied every 10-20 time steps. We refer to the minimized energy as the "density constrained energy" E_{DC}(R(t)). The ion-ion interaction potential V(R) is obtained by subtracting the constant binding energies E_{A_1} and E_{A_2} of the two individual nuclei,

V(R) = E_{DC}(R) - E_{A_1} - E_{A_2}.

The calculated ion-ion interaction barriers contain all of the dynamical changes in the nuclear density during the TDHF time-evolution in a self-consistent manner.
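The constrained minimization can be sketched schematically as follows. This is not the authors' production code: `skyrme_energy` and `constrained_hf_solve` are hypothetical callables standing in for a full Skyrme functional and a static HF solver, and the simple gradient update of the constraining fields is one possible way to enforce the constraint:

```python
import numpy as np

def density_constrained_energy(rho_tdhf, skyrme_energy, constrained_hf_solve,
                               n_iter=100, eta=0.1):
    """Schematic DC-TDHF step: minimize E[rho_n, rho_p] subject to
    rho_q = rho_q^TDHF (q = n, p), cf. the equation above.

    rho_tdhf             -- dict {'n': ..., 'p': ...} of instantaneous TDHF densities
    skyrme_energy        -- callable E[rho] (hypothetical Skyrme functional)
    constrained_hf_solve -- callable returning static HF densities in the
                            presence of the external fields v_q (hypothetical)
    """
    # The fields v_q(r) play the role of the Lagrange multipliers of the text.
    v = {q: np.zeros_like(rho_tdhf[q]) for q in ('n', 'p')}
    for _ in range(n_iter):
        rho = constrained_hf_solve(v)          # wave functions rearrange under v_q
        for q in ('n', 'p'):
            # steepest-descent update driving rho_q toward rho_q^TDHF
            v[q] += eta * (rho[q] - rho_tdhf[q])
    return skyrme_energy(rho)                  # the density-constrained energy E_DC
```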
In addition to the ion-ion potential it is also possible to obtain coordinate-dependent mass parameters. One can compute the "effective mass" M(R) [113] using the conservation of energy in a central collision,

M(R) = \frac{2\left[ E_{\mathrm{c.m.}} - V(R) \right]}{\dot{R}^2},

where the collective velocity \dot{R} is directly obtained from the TDHF time evolution and the potential V(R) from the density constraint calculations. At large distance R, the mass M(R) is equal to the reduced mass µ of the system. At smaller distances, when the nuclei overlap, the mass parameter generally increases. This microscopic approach also applies to reactions involving deformed nuclei when calculations are done in an unrestricted three-dimensional box where the nuclei can be given arbitrary orientations with respect to the collision axis [114][115][116].
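A short sketch of how M(R) follows from this energy-conservation relation, assuming V(R(t)) from the density constraint and R(t) sampled along a central TDHF trajectory (the array names are ours):

```python
import numpy as np

def effective_mass(E_cm, V, R, t):
    """M(R) = 2 [E_cm - V(R)] / Rdot^2 along a central collision.

    E_cm -- center-of-mass energy (MeV)
    V    -- DC-TDHF potential V(R(t)) (MeV), same sampling as R
    R    -- internuclear distance R(t) (fm)
    t    -- TDHF time (fm/c)
    Returns M in MeV/c^2; it diverges near the classical turning point
    where Rdot -> 0, so points there must be excluded in practice.
    """
    Rdot = np.gradient(R, t)   # collective velocity from the TDHF evolution
    return 2.0 * (E_cm - V) / Rdot**2
```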
Using the density-constrained energy defined above, we can compute the excitation energy of the system at internuclear distance R(t) as

E^*(R(t)) = E_{\mathrm{TDHF}} - E_{DC}(R(t)) - E_{\mathrm{coll}}(R(t)),

where E_{\mathrm{TDHF}} is the conserved TDHF energy. The last term denotes the collective kinetic energy,

E_{\mathrm{coll}}(t) = \frac{m}{2} \sum_{q=n,p} \int d^3r\, \frac{\mathbf{j}_q^2(\mathbf{r},t)}{\rho_q(\mathbf{r},t)},

where j(r,t) is the local current density from TDHF and m the nucleon mass. The index q denotes the isospin index for neutrons and protons (q = n, p).
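These two relations translate directly into a grid evaluation; the following is a minimal sketch in which the array layout, variable names, and the value of the nucleon mass constant are our assumptions:

```python
import numpy as np

M_NUCLEON = 938.92  # assumed average nucleon mass in MeV/c^2

def collective_kinetic_energy(j, rho, dV, eps=1e-10):
    """E_coll = (m/2) * sum_q integral d^3r  j_q^2 / rho_q  on a Cartesian grid.

    j   -- dict {'n','p'} of current densities, shape (3, Nx, Ny, Nz)
    rho -- dict {'n','p'} of number densities, shape (Nx, Ny, Nz), fm^-3
    dV  -- grid volume element (fm^3)
    """
    e_coll = 0.0
    for q in ('n', 'p'):
        j2 = np.sum(j[q]**2, axis=0)     # |j_q|^2 at each grid point
        mask = rho[q] > eps              # skip essentially empty cells
        e_coll += 0.5 * M_NUCLEON * np.sum(j2[mask] / rho[q][mask]) * dV
    return e_coll

def excitation_energy(E_tdhf, E_dc, E_coll):
    """E*(R) = E_TDHF - E_DC(R) - E_coll(R)."""
    return E_tdhf - E_dc - E_coll
```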
A. Unrestricted TDHF calculations: fusion and quasifission
In this paper, we focus on fusion and quasifission in the reactions 48 Ca+ 249 Bk and 50 Ti+ 249 Bk. In our TDHF calculations we use the Skyrme SLy4 and SLy4d energy density functionals [107,117] including all of the relevant time-odd terms in the mean-field Hamiltonian. Both interactions are constructed using the same fitting procedure, apart from one-body center-of-mass corrections, included in SLy4, which are small in heavy systems such as those studied here. The 48 Ca+ 249 Bk calculations were done with SLy4d while the calculations for 50 Ti+ 249 Bk used the SLy4 parametrization. The reason for switching to SLy4 was the availability of pairing force parameters for this interaction. To describe these reactions with a high degree of accuracy, the shapes of the individual nuclei must be correctly reproduced by the mean-field theory. In some cases, it is necessary to include BCS pairing, which increases the number of single-particle levels that must be taken into account by about 50 percent. Static Hartree-Fock (HF) calculations without pairing predict a spherical density distribution for 48 Ca while 249 Bk shows prolate quadrupole and hexadecapole deformation, in agreement with experimental data. However, static HF calculations without pairing predict a prolate quadrupole deformation for 50 Ti due to partial filling of π f 7/2 with occupation numbers 0 or 1, thus breaking spherical symmetry. When BCS pairing is added, these occupation numbers are lower than 1 and distributed around the Fermi surface, restoring a spherical density in 50 Ti. Therefore, we include BCS pairing (using fixed partial occupations) in the TDHF runs for 50 Ti+ 249 Bk while pairing has been left out for the system 48 Ca+ 249 Bk to speed up the calculations.
Numerically, we proceed as follows: First we generate very well-converged static HF wave functions for the two nuclei on the 3D grid. The initial separation of the two nuclei is 30 fm. In the second step, we apply a boost operator to the single-particle wave functions. The time-propagation is carried out using a Taylor series expansion (up to orders 10-12) of the unitary mean-field propagator, with a time step ∆t = 0.4 fm/c. For reactions leading to superheavy dinuclear systems, the TDHF calculations require very long CPU times: a single TDHF run at fixed E c.m. energy and fixed impact parameter b takes about 1-2 weeks of CPU time on a 16-processor LINUX workstation. A total CPU time of about 6 months was required for all of the calculations presented in this paper. Let us first consider the reaction 50 Ti+ 249 Bk at E c.m. = 233 MeV, which is the energy used in the GSI-TASCA experiment [22]. The numerical calculations were carried out on a 3D Cartesian grid which spans (66 × 52 × 30) fm. In Fig. 1 we show contour plots of the mass density in the x − z plane as a function of time. In this case, the initial orientation of the 249 Bk nucleus has been chosen such that the 50 Ti projectile collides with the "side" of the deformed target nucleus. We observe that at an impact parameter b = 0.5 fm TDHF theory predicts fusion. Our conceptual definition of fusion is an event with a long contact time exceeding 25-35 zs; in addition, we require a mononuclear shape without any neck formation.
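The Taylor-expanded propagator described above can be sketched as follows; `apply_h` is a hypothetical callable applying the single-particle Hamiltonian, and the mean field is frozen during the step here, whereas production codes update it self-consistently:

```python
import numpy as np

HBAR_C = 197.327  # hbar*c in MeV fm (c = 1 units, time in fm/c)

def taylor_propagate(psi, apply_h, dt=0.4, order=10):
    """One step psi(t+dt) ~ sum_k [(-i dt/hbar) h]^k psi / k!  (orders 10-12).

    psi     -- complex single-particle wave function on the 3D grid
    apply_h -- callable returning h|psi> for the current mean field
    """
    z = -1j * dt / HBAR_C
    term = psi.copy()        # k = 0 term of the expansion
    result = psi.copy()
    for k in range(1, order + 1):
        term = z * apply_h(term) / k   # accumulates z^k h^k psi / k!
        result = result + term
    return result
```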
By contrast, at an impact parameter b = 1.0 fm TDHF theory predicts quasifission, see Fig. 2. As the nuclei approach each other, a neck forms between the two fragments which grows in size as the system begins to rotate. Due to the Coulomb repulsion and centrifugal forces, the dinuclear system elongates and forms a very long neck which eventually ruptures leading to two separated fragments. In this case, the contact time is found to be 16 zs.
B. Nucleus-nucleus potentials (DC-TDHF)
In Fig. 3 we plot the microscopic DC-TDHF nucleus-nucleus potential barriers for the 48 Ca+ 249 Bk system. The dashed lines correspond to potentials calculated with constant reduced mass, while the solid lines include the influence of the coordinate-dependent "effective mass" M(R). We observe that the coordinate-dependent mass changes only the interior region of the potential barriers. The barriers are depicted for two extreme orientations of the 249 Bk nucleus (tip and side). The experimental energies were E c.m. = 204-218 MeV in the Dubna experiments [13] and E c.m. = 211-218 MeV in the GSI-TASCA experiment [21]. We conclude that the highest experimental energy E c.m. = 218 MeV is above both barriers but the lowest experimental energy E c.m. = 204 MeV is slightly below the barrier for the side orientation of 249 Bk. In Fig. 4 we plot the corresponding potential barriers for the 50 Ti+ 249 Bk system, together with the energy E c.m. = 233 MeV of the GSI-TASCA experiment [21]. We note that the chosen experimental energy is 22.0 MeV above the barrier E B (tip) and 8.6 MeV above the barrier E B (side).
C. Energy dependence for central collisions
We define the contact time as the time interval between the time t 1 when the two nuclear surfaces (defined as the isodensity surfaces at half the saturation density, ρ 0 /2 = 0.08 fm −3 ) first merge into a single surface and the time t 2 when the surface splits up again. The fragment masses shown in Figure 5 clearly favor production of a heavy fragment near 208 Pb (with a 91 Rb light fragment) due to magic shell effects at all energies. A similar phenomenon was already observed in TDHF calculations of reactions with 238 U [44,101]. In some cases, a light fragment with N = 50 is formed, indicating an influence of this magic number in the dynamics as well.
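A hedged sketch of how the contact-time definition above can be evaluated from stored density frames is given below; the array layout, frame cadence, and function names are our assumptions, not those of the production code.

```python
import numpy as np
from scipy import ndimage

RHO_HALF = 0.08  # half the saturation density rho_0/2, in fm^-3

def contact_time(densities, times, threshold=RHO_HALF):
    """Contact time from a sequence of 3D mass densities rho(x, y, z).

    The system is taken to be 'in contact' while the region above the
    half-saturation isodensity forms a single connected surface; the
    contact time is t2 - t1 between first merging and final splitting.
    """
    in_contact = np.array(
        [ndimage.label(rho > threshold)[1] == 1 for rho in densities]
    )
    if not in_contact.any():
        return 0.0
    t1 = times[np.argmax(in_contact)]                              # first merged frame
    t2 = times[len(in_contact) - 1 - np.argmax(in_contact[::-1])]  # last merged frame
    return t2 - t1
```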
Recently, we have developed an extension to TDHF theory via the use of a density constraint to calculate the excitation energy of each fragment directly from the TDHF density evolution. This gives us new information on the repartition of the excitation energy between the heavy and light fragments, which is not directly available in standard TDHF calculations unless one uses advanced projection techniques [92]. The corresponding fragment excitation energies are shown in Fig. 5 (c).

D. Impact parameter dependence

It is interesting to note the atypical rise of the contact time between impact parameters b = 1 fm and b = 2 fm, see Fig. 7 (a). As shown in Fig. 7 (b), for these impact parameters the light fragment is in the region of the neutron-rich 100 Zr isotope. The microscopic evolution of the shell structure seems to have a tendency to form a composite system with a longer lifetime when the light fragment is in this region. This was also discussed in the 40,48 Ca+ 238 U quasifission study of Ref. [94], where it was explained as being due to the presence of strongly bound deformed isotopes of Zr in this region [118,119].
Next we consider collisions of 48 Ca with the tip of 249 Bk (dashed lines in Fig. 7). No fusion events are found for this initial orientation. Quasifission reactions with contact times of 6 − 12 zs are found at impact parameters b = 0 − 4 fm, with light fragment masses A L = 82 − 101 and excitation energies E * L = 24 − 39 MeV. Impact parameters b > 5 fm yield deep-inelastic collisions (DIC), multi-nucleon transfer, and inelastic collisions. We have repeated these calculations at a lower center-of-mass energy of E c.m. = 211 MeV; the results are shown in Figure 8. For the tip orientation, all observables are quite similar to those obtained at E c.m. = 218 MeV. However, for the side orientation, we find that the contact time decreases more rapidly with impact parameter than at the higher energy. As a result, both the mass transfer and the fragment excitation energies also decrease faster. Fusion is found for impact parameters b < 0.5 fm (for the side orientation of 249 Bk only). Figure 9 shows the corresponding results for the 50 Ti+ 249 Bk system.

Experiments at Dubna and at GSI-TASCA have produced several isotopes of superheavy element 117 with cross-sections of 2 − 3 picobarns in the reaction 48 Ca + 249 Bk. However, attempts to synthesize isotopes of element 119 in the reaction 50 Ti+ 249 Bk have been unsuccessful so far. One possible reason could be different excitation energies in these systems. In order to investigate this conjecture, we have calculated the total excitation energy for both systems as a function of impact parameter. This quantity can be calculated with the DC-TDHF method for both fusion and quasifission. The results are displayed in Figure 10. We find the interesting result that the total excitation energies of the two systems are almost identical at impact parameters b = 0 fm (fusion) and b = 1 fm (quasifission). For impact parameters b > 1.5 fm, the total excitation energy of the 50 Ti+ 249 Bk system is found to lie between the two curves calculated for 48 Ca + 249 Bk. We conclude that the excitation energy of the fused system or of the quasifission fragments does not exhibit strong differences between 48 Ca- and 50 Ti-induced reactions.
E. Mass-angle distributions
In this section we study mass-angle distributions (MADs) arising from quasifission. MADs have proven to be an efficient experimental tool to understand quasifission dynamics and how this mechanism is affected by the structure of the reactants [24-26, 35, 39, 41-47, 120-123]. TDHF calculations can help the analysis and interpretation of experimental MADs [41,44,45,47].
The MAD is obtained by plotting the scattering angle θ c.m. as a function of the mass ratio M R = m 1 /(m 1 + m 2 ), where m 1 and m 2 are the masses of the fission-like fragments. In Fig. 11 we show TDHF calculations of mass-angle distributions for 48 Ca+ 249 Bk at E c.m. = 218 MeV. For the quasifission events, which occur on time scales between 5.6 zs and 13.6 zs, our TDHF calculations show a strong correlation between scattering angle and mass ratio. The reason for this correlation is that the mass transfer between the two fragments increases with the rotation (contact) time (see Figure 7a,b), which in turn affects the scattering angle. Hence, the MADs for quasifission events can be used as a clock for the rotation period of the system [24,26].
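A minimal sketch of how one MAD entry is formed from a single TDHF run follows; the variable names and the two-component momentum convention are illustrative assumptions.

```python
import numpy as np

def mad_point(m_light, m_heavy, p_perp, p_beam):
    """One (M_R, theta_c.m.) point of a mass-angle distribution.

    m_light, m_heavy : fragment masses from the TDHF exit channel
    p_perp, p_beam   : transverse and beam-axis components of the
                       light fragment's final c.m. momentum
    """
    m_r = m_light / (m_light + m_heavy)             # mass ratio
    theta = np.degrees(np.arctan2(p_perp, p_beam))  # scattering angle
    return m_r, theta

# In two-body c.m. kinematics the conjugate fragment appears at
# (1 - M_R, 180 deg - theta), which is why full MADs are symmetric
# about M_R = 0.5.
```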
For the tip orientation of the 249 Bk nucleus (blue curve in Fig. 11) TDHF shows quasifission at impact parameters b = 0 − 4 fm and a deep-inelastic reaction at b = 5 fm. No fusion events are predicted by TDHF for the tip orientation. On the other hand, for the side orientation of the 249 Bk nucleus (red curve) the TDHF calculations show fusion at impact parameter b = 0 fm, quasifission at impact parameters b = 0.3 − 3.0 fm, and a deep-inelastic reaction at b = 4 fm.
In general, collisions with the side of 249 Bk yield an increase in the mass ratio for quasifission. The maximum value for the light fragment, M R = 0.368, is obtained at impact parameter b = 0.5 fm. Note that as a result of the single-Slater-determinant approximation, TDHF is a deterministic theory that provides us only with the most probable reaction products for the MADs, rather than with the full mass distribution.
In fusion-fission reactions a compound nucleus is formed which subsequently decays by fission at a time-scale that is much longer than observed in quasifission, with no memory of the entrance channel and therefore no mass-angle correlation. In experiments, fission fragments are usually more symmetric than in quasifission, producing a peak around M R = 0.5. Even though our TDHF calculations predict fusion for the small impact parameter range b < 0.3 fm (and only for the side orientation of the 249 Bk nucleus), it is not possible to obtain a fully equilibrated nucleus undergoing fission in TDHF calculations because of limitations of the mean-field approach.
In Fig. 12 we show TDHF calculations of the mass-angle distribution for the same system, but at the lower energy E c.m. = 211 MeV. The MAD for the tip orientation of the 249 Bk nucleus (blue curve) looks quite similar to the one obtained at the higher energy. However, for the side orientation (red curve) we find a different mass-angle distribution: the scattering angles for the light fragment are confined to a small region θ c.m. = 96 − 123 deg, and the fragments are more asymmetric than at E c.m. = 218 MeV.
In Fig. 13 we show TDHF calculations of quasifission mass-angle distributions for 50 Ti+ 249 Bk. The overall pattern is similar to that found for 48 Ca+ 249 Bk, with a strong mass-angle correlation and with a maximum M R for the light fragment extending almost to 0.4. In the details, however, differences are observed in the position in the MADs of events associated with specific impact parameters.
F. Mass-TKE distributions
Correlations between mass and total kinetic energy (TKE) of the fragments have often been measured in experimental studies of quasifission [24,25,30,33,34,37,38,50,124,125]. Plots of fragment mass versus TKE are often used to separate quasi-elastic events from fully damped events such as quasifission and fusion-fission. In between, deep-inelastic collisions are characterized by a partial damping of the initial kinetic energy and a relatively small (compared to quasifission) mass transfer. Fully damped events are expected to have a TKE close to the Viola systematics for fission fragments [126,127].
In TDHF, the TKE of the fragments is simply obtained from the exit channel of the collision. For well separated fragments, it is straightforward to compute the kinetic energy of each fragment (i = 1, 2) at time t according to

E_i(t) = (1/2) M_i [dR_i(t)/dt]^2,

where M i is the final mass of fragment i (neglecting nucleon emission) and R i its distance from the center of mass of the total system. Although the fragments no longer interact via the strong nuclear interaction, they are close enough to feel the Coulomb repulsion of the other fragment. The TKE is then estimated as the sum of the kinetic energies of the fragments after their separation and their Coulomb potential energy, assuming that the fragments are point-like charges,

TKE = E_1(t) + E_2(t) + Z_1 Z_2 e^2 / R(t),

where R(t) = R 1 (t) + R 2 (t). Figures 14(a) and (b) show the resulting mass-TKE distributions for the 48 Ca+ 249 Bk and 50 Ti+ 249 Bk TDHF calculations, respectively. The figures also show the TKE expected from the Viola systematics accounting for fragment mass asymmetry [127] and assuming that the fragments have the same N/Z ratio as the compound nucleus (dashed lines). Overall, we observe that the TKE values are distributed around the Viola estimates, indicating that most of the relative kinetic energy has been dissipated in the collision. However, the distributions associated with side and tip orientations are well separated, with the side (tip) collisions leading essentially to a TKE below (above) the Viola systematics.
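The two estimates described above can be written compactly; in the sketch below the variable names are illustrative, and the constants in viola_tke are those of the commonly quoted mass-asymmetric generalization of the Viola systematics, which we assume is the form used in Ref. [127].

```python
E2 = 1.44  # e^2/(4*pi*eps0) in MeV fm

def tke_from_exit_channel(m1, v1, m2, v2, z1, z2, r12):
    """TKE from a TDHF exit channel: the fragments' kinetic energies
    plus their residual point-charge Coulomb energy.

    m1, m2 : fragment masses in MeV/c^2
    v1, v2 : fragment speeds dR_i/dt relative to the total c.m., in c
    z1, z2 : fragment charge numbers
    r12    : separation R = R1 + R2 between fragment centers, in fm
    """
    e_kin = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2
    return e_kin + z1 * z2 * E2 / r12

def viola_tke(z1, a1, z2, a2):
    """Viola systematics generalized to asymmetric splits (assumed form):
    TKE = 0.755 Z1*Z2 / (A1^(1/3) + A2^(1/3)) + 7.3 MeV."""
    return 0.755 * z1 * z2 / (a1 ** (1 / 3) + a2 ** (1 / 3)) + 7.3
```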
One could argue that assuming that the fragments have the same N/Z as the compound nucleus is a crude approximation, in particular for systems with large asymmetry in the exit channel, as observed here. Therefore, we have also computed the TKE according to the Viola estimate using the masses and charges of the fragments in the exit channel obtained from TDHF. The results are plotted in Figs. 15(a) and (b), which show the ratio of the final TDHF TKE to the TKE from the Viola estimate with TDHF mass and charge partitions. The previous conclusions are still valid, i.e., the tip (side) orientations are associated with more (less) final TKE than the Viola systematics. This means that less damping occurs in collisions with the tip than with the side. This conclusion, however, does not depend on whether the projectile is 48 Ca or 50 Ti, indicating again that the reaction dynamics is relatively similar in both systems.
IV. CONCLUSIONS
The Time-Dependent Hartree-Fock (TDHF) theory provides a dynamic quantum many-body description of nuclear reactions. The only input is the effective nucleon-nucleon interaction (Skyrme), which is fitted to the static properties of a few nuclei; otherwise there are no adjustable parameters. TDHF has proven to be a valuable tool for elucidating some of the underlying physics of heavy-ion reactions in the vicinity of the Coulomb barrier. In this paper, we have studied the transition between various reaction mechanisms, including fusion, quasifission, deep-inelastic collisions, and quasi-elastic reactions, in collisions of 48 Ca+ 249 Bk and 50 Ti+ 249 Bk, which have been used to synthesize elements Z = 117 and 119. Quasifission is the primary reaction mechanism that limits the formation of superheavy nuclei.
In addition, heavy-ion interaction potentials are obtained with the Density-Constrained Time-Dependent Hartree-Fock (DC-TDHF) method. Because of the prolate deformation of the 249 Bk nucleus, these potentials (and other observables) depend strongly on the relative orientation of 249 Bk. In particular, we present results for the "tip" and "side" orientations. Using TDHF, we calculate nuclear contact times, the masses and charges of the two fragments, and their pre-compound excitation energies. Specifically, we study the energy dependence of these quantities for central collisions, and we calculate the impact parameter dependence at selected fixed energies. Finally, we present results for mass-angle and mass-TKE distributions. The orientation of the actinide plays a crucial role in all of these observables.
In agreement with experiments at Dubna and at GSI-TASCA, our TDHF and DC-TDHF calculations predict fusion in the reactions 48 Ca + 249 Bk resulting in isotopes of element 117. While experimental attempts at GSI-TASCA to synthesize element 119 in the reaction 50 Ti+ 249 Bk have so far not been successful, the TDHF calculations do find fusion in this system also. In fact, our calculations do not show significantly different behaviors of the entrance channel dynamics between the two projectiles. | 2016-06-02T14:33:57.000Z | 2016-06-02T00:00:00.000 | {
"year": 2016,
"sha1": "0d787727a356e5741ac78a051032f18752d89cf4",
"oa_license": "publisher-specific, author manuscript",
"oa_url": "https://link.aps.org/accepted/10.1103/PhysRevC.94.024605",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "0d787727a356e5741ac78a051032f18752d89cf4",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
260598127 | pes2o/s2orc | v3-fos-license | VISITORS’ AWARENESS OF DIVERSITY AND ABUNDANCE OF TREES IN URBAN GREEN SPACES AND THEIR ABILITY OF ACCURATE SPECIES IDENTIFICATION
Istanbul's historical groves are important green areas consisting of large tree communities that have been protected and developed over a long historical process, and they meet the recreational needs of city residents. The purpose of this study, after revealing the richness and composition of woody plant species in a historical city grove, is to understand whether or not visitors notice the plant richness of the grove and to test visitors' ability to identify tree species using photographs of the most common tree species in the grove. Results showed that 9388 woody plants belonging to 211 species, 111 genera and 55 families were found in Emirgan Grove. Perceived plant species richness within the grove was often well below the actual richness recorded in the woody plant inventory. Frequency of visits had a significant relationship with visitor awareness of tree species, but it did not contribute meaningfully to species identification knowledge. Moreover, there was no indication that more abundant trees were better identified by visitors. Trees with a distinctive appearance, fragrant or conspicuous flowers, and typical fruits were identified better than the most abundant trees. These results can help in developing education programs to improve citizens' awareness and knowledge of plant species in urban areas.
Introduction
Trees, as the most important components of urban green spaces, perform many ecosystem functions that contribute significantly to the quality of life of the people living in the region. Planting, caring for, and protecting trees in a city, and thus ensuring their sustainable management, is one of the most effective strategies contributing directly and indirectly to the goal of improving the quality of life of city residents (Turner-Skoff and Cavender, 2019; Jones, 2021).
High species and genus diversity is recommended in urban areas as one of the most important ways to maintain a healthy and sustainable urban tree population, in which damage due to epidemic plant diseases, invasive pests (Guo et al., 2019), and the possible effects of global climate change is minimized (Raupp et al., 2006; Tubby and Webber, 2010; Sjömann et al., 2012). Strategic recommendations that a tree species, genus, and family should not exceed a certain percentage of the total tree population in urban green areas (Barker, 1975; Grey and Deneke, 1986; Moll, 1989; Santamour, 1990; Miller and Miller, 1991) are important guidelines for the selection of tree species, increasing tree species diversity, and sustainable management of tree populations in such areas. In order to develop robust strategies to improve species diversity and age structure in an urban green space, the tree composition must first be revealed (Miller, 1997; Pauleit, 2003; Ningal et al., 2010; Keller and Konijnendijk, 2012; Nielsen et al., 2014; Sjöman et al., 2016; Thomsen et al., 2016). A tree inventory provides baseline data for understanding current woody plant diversity and composition, age and size diversity, and the number of native and non-native species in an urban green area, and is an important tool in decision-making for sustainable management of the tree population (Morgenroth and Östberg, 2017).
Public biodiversity awareness and broad-based support in society are important to the success of urban biodiversity conservation attempts. In this sense, species identification skills are considered the first step to increasing the awareness of biodiversity in society and providing community support for conservation efforts (Elder et al., 1998; Hooykaas et al., 2022).
'Grove' is defined as a small piece of forest or a large tree community or afforested area near the city (Uzun et al., 2000), parks or gardens with planted trees (Stark, 2014), or tree communities whose woody understory vegetation has been cleared to a large extent (Phibbs, 1991). Istanbul's historical groves are important green areas consisting of large tree communities that have been protected and developed in a historical process, and they meet the recreational needs of city residents. The Emirgan grove has been one of the most popular historical green areas of İstanbul since the 17th century. In 1960, the tulip festival was held for the first time in the Emirgan grove (Yaltırık et al., 1997). Since 2005, the tulip festival has been held in April every year, and a large number of people visit the grove during this period. Many native and exotic woody plant species were planted here, and at present the area has a rich woody plant diversity. Studies on the plant diversity awareness level and tree identification knowledge of people using green spaces in Istanbul, which is located on the border of the European continent and is the most populated city in Europe, are extremely limited. No studies are available that quantify tree species identification by visitors of urban green areas in İstanbul.
In this study, our aim is (1) to determine the woody plant inventory of the Emirgan grove, which is one of the oldest historical groves of İstanbul and to reveal the woody plant composition and diversity, and (2) to evaluate species identification skills of the visitors of this historical grove by finding answers to the following questions: Do species identification skills of the visitors differ between gender, age, education level, activities, and frequency of visits?
Is there a relationship between the abundance of the trees in the grove and visitors' species identification skills?
Study area
The study was conducted in Emirgan Grove, which covers a total area of 43 ha and is located on a hill in the north of Istanbul, overlooking the Bosphorus (Figure 1). There are 3 mansions built in the 19th century in the Emirgan Grove, which dates back to the 17th century, when it was known as the "Feridun bey gardens". The privately owned area was purchased by the Istanbul Municipality and opened to the public in 1943 (Evyapan, 1972), and today it is one of the most popular recreation areas in the city.
Figure 1. Study area (Photograph İBB archive)
Emirgan Grove is a reconstructed space composed of lawns, shrubs, monumental trees, 7 km of paths, a grotto, ornamental ponds, and a small lake with a magnificent view of the Bosphorus. While the surroundings of the mansions are rich and well-maintained areas in terms of species diversity, the other, larger part of the grove has the appearance of a naturalistically designed park with plenty of trees (Buğdaycıoğlu, 2004), and a small sloping part has the appearance of a forest dominated by natural vegetation.
Plant data
Between July and November 2020, a full inventory of shrubs and trees was carried out in Emirgan Grove. The grove was divided into 23 parcels in total, considering the road network on the map. Each woody plant species was identified using Krüssmann (1984-1986) and Akkemik (2020), and trees were classified into three diameter categories at breast height (8-20 cm, 20-40 cm, and more than 40 cm). The plants were classified as native versus exotic.
Questionnaire data
In July-September 2021, we conducted face-to-face semi-structured interviews (n = 80) in the grove. We briefly explained the purpose of the study to the visitors, asked them to participate, and conducted the survey with those who voluntarily accepted. The questionnaire consisted of 10 questions in Turkish (see Appendix A) and lasted 25 min on average. The age, gender, education level, visiting frequency, and activity types of the visitors were recorded. Afterwards, following Muratet et al. (2015), the visitors' perception of plant species richness in the grove, their estimate of the number of plant species, and their opinion about the importance of species richness in the grove were recorded. In the second part of the interview the visitors were shown photographs of the general appearance, bark, leaf, flower, fruit, or cone of 12 species, 10 of which were the most common species and 2 of which were not in the grove at all (Appendix B). The visitors were asked whether they had seen the trees shown in the photos, in order to understand whether they had noticed the plants during their visits to the grove. Then they were asked to give the common names of the trees if they knew them. Finally, we recorded the visitors' recommendations about the grove.
We preferred to conduct our surveys during weekdays, since the visitor profile varies considerably between weekends and weekdays. During this period, weekend visitors were mostly people who visited the grove for a picnic.
Data analysis
Basic descriptive statistics were derived to understand the visitors' typology. Because the data failed the normality test, the non-parametric Kruskal-Wallis one-way analysis of variance on ranks was chosen to compare differences in visitors' recognition and identification of plants between gender, age, education level, and visiting frequency groups. Biplot comparisons were applied to test whether the identification score and detection score were associated with the abundance of species in the grove. All of the statistical analyses were performed using the statistical software R 4.1.2 (R Core Team, 2021). Data manipulation and descriptive statistics were conducted using the R package "dplyr" (Wickham et al., 2023), and plots were generated with the R packages "ggplot2" (Wickham, 2016), "ggsci" (Xiao, 2018), and "ggrepel" (Slowikowski, 2022). The basal area of each tree was registered as abundance data, and the relative abundance and relative importance values were calculated using the "importancevalue.comp" command of the "BiodiversityR" package (Kindt and Coe, 2005).
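The analysis itself was carried out in R with the packages listed above; as a hedged illustration of the two core steps, the following Python sketch mirrors them with made-up column names (the actual survey and inventory variables may differ).

```python
import pandas as pd
from scipy.stats import kruskal

def test_group_differences(surveys, score, factor):
    """Kruskal-Wallis test of a non-normally distributed score
    ('detection_score' or 'identification_score') across the levels
    of a demographic factor, as summarized in Table 2."""
    groups = [g[score].values for _, g in surveys.groupby(factor)]
    return kruskal(*groups)

def relative_importance(trees):
    """Simplified relative importance value per species from an
    inventory with 'species' and 'basal_area' columns (a stand-in
    for BiodiversityR's importancevalue.comp, which also uses
    relative frequency)."""
    per_species = trees.groupby("species").agg(
        n=("species", "size"), ba=("basal_area", "sum"))
    rel_density = 100 * per_species["n"] / per_species["n"].sum()
    rel_dominance = 100 * per_species["ba"] / per_species["ba"].sum()
    return (rel_density + rel_dominance).sort_values(ascending=False)
```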
Species richness and composition
A total of 211 different species were encountered, with 9388 individuals being measured. 3435 individuals are in the diameter class below 20 cm, 3946 are in the 20-40 cm diameter category, and 2007 are greater than 40 cm. The top three species made up almost one third of the total plants (31.8%), the top 10 most frequent species accounted for 58% of the total plants, and the top 30 tree species account for 82.62% of the total woody plants in the grove (Table 1). The relative abundance of the most common taxa in urban green spaces is a good tool for assessing plant diversity (Kendall et al., 2014). The results show that the relative abundance of each family was less than 30%, and the relative abundance of each genus was less than 20%, in the study area. At the species level, the most abundant tree in the grove is F. angustifolia, with a total number of 1482 trees (15.79%). When Fraxinus ornus (0.1%) and F. excelsior (2.1%) are also included, the genus Fraxinus is the most common genus, with 17.9% relative abundance. The next two most frequent species were C. libani and P. pinea, with 813 and 687 individuals, respectively. Pinus is the second most abundant genus in the grove, and it made up 11.3% of the plants throughout the grove.
Native and exotic trees distribution
The number of exotic tree species in the grove is 137, constituting 65% of the total number of species. However, when the number of individuals belonging to exotic taxa is considered, their share of the total tree population is 29%, with 2686 trees. While F. angustifolia, Q. robur, Cercis siliquastrum, F. ornus, and L. nobilis are species that are both found in the natural vegetation and planted abundantly as ornamental plants in the grove, Q. coccifera and P. latifolia are seen only in the natural vegetation on the slopes.
Size class distribution of trees
Information on the diameter size of tree species in urban green areas gives important clues about the age of the trees, although diameter may differ depending on the tree species and habitat as well as tree age. The size class distribution of the thirty most important tree species in Emirgan Grove is shown in Table 1. Tree size was divided into 3 classes: a young age group (lower than 20 cm), a middle age group (20-40 cm), and an old age group (more than 40 cm). The medium size class (stem diameter of 20-40 cm) had the greatest proportion of all stems recorded (42%). This could be explained by the presence of a large number of native, slow-growing small trees throughout the study site, such as Q. coccifera, P. latifolia and F. ornus, as well as by dense plantings in recent years.
The top five tree species in the young age group are C. libani (236 individuals), L. nobilis (233), F. angustifolia (202), C. siliquastrum (179), and Q. coccifera (154) (Figure 2), which account for 2.5%, 2.5%, 2.2%, 1.9% and 1.6% of the total trees, respectively, but there are many individuals of these tree species in other diameter classes as well. This indicates that these species have been preferred as park trees from the past to the present, except for Q. coccifera, which grows spontaneously on the slopes in natural vegetation. F. ornus, L. nobilis, and C. siliquastrum are also found naturally on these slopes and are planted abundantly as ornamental trees in other parts of the grove. Pyrus calleryana 'Chanticleer' (15 trees), Cupressus x leylandii (20 trees), and Parrotia persica (23 trees) were found only in the young age class, which indicates that they have been planted in İstanbul urban parks in recent years. In addition to these, Carpinus betulus 'Pyramidalis' and Acer platanoides are among the species that have been planted in recent years. A large number of young R. pseudoacacia individuals that reproduce from root and stump sprouts are also present in the grove.
Visitor profiles
The gender, education, visiting frequencies, and ages of the visitors were determined. Female visitors, and visitors with either undergraduate or graduate degrees, each make up more than half of the visitors (Figure 3). While more than half of the male visitors visit the grove once a week or more, 40% of female visitors have this visitation frequency. Female visitors are mostly between the ages of 30 and 45, while male visitors are mostly between 35 and 55. The grove is mostly visited for walking by visitors of all genders (Figure 3). Picnicking is another popular activity for women. In her study of the same area, Kart (2005) found that the grove is most often used by visitors for resting, picnicking, and walking, respectively.
Visitors' awareness and knowledge in woody plant diversity at the grove
Visitors stated that the grove is rich in woody plant diversity (Figure 4). 50% of the male visitors estimated that there were 50-100 tree species in the grove, and only 1 male visitor guessed correctly. Among female visitors, 4 estimated correctly, while approximately 25% guessed that there were 50-100 tree species (Figure 4).
Figure 4. Visitors' awareness, estimates, and knowledge in woody plant richness
Visitors generally stated that the rich woody plant diversity in the grove will make positive contributions to the environment, and they said that it will mostly support the presence of animals, improve air quality, and increase the aesthetic value of the grove ( Figure 4).
Visitors' awareness, estimates, and knowledge in woody plant richness
We expected that more frequent visitors would be aware of the plant richness of the grove and would identify more tree species. We thought that more common trees would be recognized more often, and that personal characteristics and the purpose of the visit might also affect the results.
As a result of the non-parametric analysis applied to both variables (awareness and plant species identification), which did not show a normal distribution, no significant difference was found between genders or education levels in terms of awareness and plant identification skills (Table 2). Significant differences were found between the age groups of the visitors in terms of awareness of plant species; plant awareness increased with increasing age. While the frequency of visiting the grove had a significant relationship with noticing tree species, it had no significant relationship with species identification (Table 2). We observed that more frequent visitors to the grove noticed that many of these tree species are found in the grove, even if they could not identify them. However, it is not possible to say that there is a relationship between the frequency of occurrence of tree species and the awareness of visitors (Figure 5). Cedrus, Q. robur, C. siliquastrum, and P. pinea stood out as the most recognized trees in the grove (Figure 5). C. siliquastrum and Tilia spp. were recognized more often, independently of their abundance compared to other species. Since C. siliquastrum is a species that is commonly planted in Istanbul and easily attracts attention during its flowering season, and Tilia (linden) has a distinct scent, these species were recognized more frequently, independently of their occurrence in the grove. Some visitors even stated that they had not seen a linden tree but had smelt its scent during their visits, therefore concluding that linden was present in the grove. A high number of visitors said they had seen A. glutinosa in the grove, although it is not present in the grove at all. This species, which occurs naturally in the forests of Istanbul, was misidentified as Morus by some visitors. Another species that is not present in the grove, Salix babylonica L., was reported as present near the lake; visitors mistook the old Styphnolobium japonicum (L.) Schott 'Pendula' by the lake for it. The reason L. nobilis, which is abundant at the site, was marked as "not present/not seen" was traced to the fruit and flower images: the fruit and flowers of L. nobilis, which are not very noticeable among the leaves, were unknown to many. It is not possible to say that there is a positive relationship between the abundance of tree species in the grove and their accurate identification by visitors (Figure 6).
Figure 6. The relationship between the species identification score and the rate of occurrence of the species in the grove
As a matter of fact, F. angustifolia, which is the most abundant tree in the grove, is one of the least identified trees. Only 1 visitor correctly identified all 12 tree species. Quercus sp., P. pinea, and T. tomentosa were the most identified trees, while F. angustifolia, C. betulus and A. glutinosa were the least identified trees (Figure 6). Since Q. robur and Q. coccifera are known by their acorns rather than their leaves, they were identified as oak or oak acorn without specifying the species. P. pinea was identified as pine by 53% of the visitors, while it was named 'umbrella pine' by 18% of the visitors. Therefore, P. pinea was the most known tree in the grove, with a 71% visitor identification rate. Three visitors identified Cedrus as cedar, but many visitors (62%) named it 'pine', probably because conifers are commonly called pine in Turkey.
Tree species richness and composition
The historical Emirgan grove, located on the shore of the Bosphorus, has a rich woody plant diversity with 211 tree and shrub species. However, the ten most common species accounted for 58% of all trees. Similar results for the ten most common tree species were found in many other cities, such as Chicago, 46% (Nowak et al., 2010); Lisbon, 73% (Soares et al., 2011); and the urban parks of Taipei, 79% (Jim and Chen, 2009). The higher the tree age and tree species diversity in an urban green area, the lower the risk of tree loss in the event of a disease or pest epidemic (Alvey, 2006). A tree population in different age classes and with high species diversity is also a necessity in order to ensure uninterrupted ecosystem functions of urban green spaces (Kendal et al., 2014). According to the 10/20/30 rule, most widely accepted and proposed by Santamour (1990), the tree population in urban green spaces should include no more than 10% of any one species, 20% of any one genus, and 30% of any one family. Our results showed that Emirgan grove has high tree species richness and diameter structural diversity. Except for F. angustifolia, no tree species exceeds 10%, no genus exceeds 20%, and no family exceeds 30% of the total tree population in the grove. The percentage of F. angustifolia should be reduced from 15.79% to 10% of the tree population, or at least the species should no longer be planted in the grove. Information on the diameter size class distribution of trees in urban green areas is crucial knowledge for the sustainable management of the area and also gives important clues about changes in tree species preferences over time (Muthulingam and Thangavel, 2012; Xie, 2018). F. angustifolia, A. hippocastanum, Q. robur, and T. tomentosa were planted widely in the past and present, while P. calleryana 'Chanticleer', C. x leylandii, Parrotia persica, A. platanoides, and T. cordata were planted in recent years. Tree age diversity is also an important tool to prevent disruption of ecosystem services in an urban area due to age-related removal of trees or the simultaneous failure of young trees.
The proportion of exotic species in the woody flora of Emirgan grove (65%) was well above that reported for the urban green areas of Istanbul, 52% and 55% (Çoban et al., 2020). The proportion of exotic species in urban flora was also reported as 51% in Central European cities (Lososová et al., 2012), 35% (ranging from 19-46%) in the urban flora of North American cities (Clemants and Moore, 2003), and 77% in Bangalore, India (Nagendra and Gopal, 2011). The presence and growth of these exotic tree species in Emirgan grove provide important knowledge for urban tree species selection aimed at increasing species diversity in Istanbul's green areas, because planting exotic species that have never been tested before in order to increase species diversity could lead to negative results (Raupp et al., 2006; Sjömann et al., 2012). Therefore, to increase species diversity in the urban green areas of Istanbul, species that have been growing and developing in locations such as the Atatürk Arboretum and the historical groves should be used instead of species that have never been tested before (Sjöman and Nielsen, 2010; Hirons et al., 2020).
The number of exotic woody species was higher than the number of native species in the grove. Although exotic woody species prevailed in the total species count, at about 65%, their share of the total tree population is 29%. Similar results were found in ten Nordic cities (Sjömann et al., 2012). Some exotic ornamental shrub species especially add to the species richness of the grove. While the ratio of exotic species in the entire population is not at a worrying level, it should not be disregarded that this ratio could be the result of the large amount of natural vegetation present on the slopes of the grove.
Tree species awareness and identification skills of visitors
The failure of the visitors to provide correct answers when estimating the species richness can be explained by the fact that people cannot easily visualize all the species they have seen after the visit, since the area of the grove is very large. Indeed, Southon et al. (2018) concluded that perceived species richness at the plot level is more accurate than site-level assessments.
We expected that more frequent visitors would be aware of the plant richness of the grove, notice the most common trees, and identify more tree species accurately, because as the interaction between humans and nature increases, that is, as the number of visits increases, visitors start to learn about species diversity by observing plants in the environment, and species awareness increases. However, while there was a meaningful relationship between the number of visits and the awareness of tree species, the frequency of visits did not contribute to species identification knowledge. On the other hand, we had expected that people would notice and recognize the trees they saw more often; although the abundance of trees somewhat contributed to their recognition, it was not effective for species identification. People who used the grove more frequently were more likely to notice different tree species, but we could not find a strong relationship between the accuracy of perceived species richness estimates and activity type, gender, age, or education. Visitors' plant species identification skills were found to be poor in Emirgan Grove. Similarly, poor species identification skills were reported in the Netherlands. The results of our study showed no remarkable difference in tree species identification skills between male and female visitors. In contrast, Şat Güngör et al. (2018) stated that women had more plant identification knowledge than men, and also that more highly educated people had more knowledge than less-educated people.
The visitors were better able to identify trees that have a distinctive habitus, fragrant or conspicuous flowers, and typical fruit, rather than simply the more abundant trees. The F. angustifolia individuals, which are the most abundant trees in the grove, are scattered all over the grove from the entrance, and although some of them are old and magnificent trees, they could not be identified by the visitors. On the other hand, P. pinea, with its typical umbrella-shaped crown, was the tree most identified by visitors. Salix babylonica, which is not found in the grove, is distinguished by its distinctive appearance. Likewise, Quercus sp. can be distinguished easily by its typical fruit.
Conclusions
This is the first study to explore visitors' identification of the most common tree species found in an urban green area in Turkey. To achieve this, we first made a plant inventory to reveal the species richness and composition of the grove. Afterward, we collected data on visitors' awareness of tree species diversity and their species identification skills. Trees that have a distinctive appearance, fragrant or conspicuous flowers, and typical fruits were identified better than the most abundant trees. The limited number of samples may not be sufficient to generalize the results of the study. However, this study provides sufficient information to reveal the woody plant diversity of the area, the visitors' identification of the species found in the grove, and their perception of plant diversity.
Using "citizen science" to observe urban biodiversity is a practice that has been widely used in recent years. This study has demonstrated that in order to work with the public on efforts to observe, preserve, and plan urban plant biodiversity there is a need for serious preliminary work. Urban green spaces should not only be viewed as spaces for rest, exercise, and recreation, efforts to educate the public on biodiversity should also carried out. Many visitors have realized how little they know about tree species after the survey and have requested informative sources such as information boards, tags, and brochures.
The main priority in the grove should be to protect the oldest and rarest trees. To protect these trees, and to plan future tree populations, local-, regional-, and national-level policies as well as regulations are needed. Including the public along with local governments in these efforts will be beneficial in obtaining effective results. People usually care for and preserve what they know and have more knowledge about. Therefore, it is important in conservation studies that people recognize the plants commonly found in their environment and understand the benefits that preserving and increasing biodiversity will provide. | 2023-08-06T15:25:58.503Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "8614560d370351c0fd4e0c55df66ab85141f3642",
"oa_license": null,
"oa_url": "https://doi.org/10.15666/aeer/2104_28972912",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "9531c06dd9e558c562c62e9bb8eb16c6d5b89e54",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
24975391 | pes2o/s2orc | v3-fos-license | A SHORT REVIEW OF THE ANTHELMINTIC ROLE OF MIRAZID
Mirazid® is a patented preparation from a plant that has been used in folk medicine since ancient Egyptian times (myrrh). It was registered in Egypt for the treatment of schistosomiasis and fascioliasis. Over 32 independent studies of the efficacy of Mirazid have been reviewed and their results analyzed. The majority of these studies reported cure rates higher than 90%, which were even higher in mixed than in single trematodal infections, both in humans and in farm animals. Only two groups of investigators reported lower cure rates, as they used lower doses and estimated cure rates at a shorter period after treatment than recommended by the innovators. HEADINGS – Fascioliasis. Schistosomiasis mansoni. Trematoda. Anthelmintics. Plant extracts.

*Green Clinic and Research Center, Alexandria, Egypt. Correspondence: Dr. Mostafa Yakoot, MD – Green Clinic and Research Center, Alexandria, Egypt. E-mail: yakoot@yahoo.com

Figure 1. Reported cure rates (0-100%) in the reviewed studies: Waheeb A (Azhar) 2000; Mowafy T (Banha) 2001; Gaballah M 2001; El-Gohary Y 2003; Abo-madyan A 2004; Mohammad E 2004; Massoud A 1998; Massoud A a,b 1999; Barakat R 2005.
Infections with parasitic helminths, especially blood, liver and intestinal flukes, continue to be a major health problem in many countries in the subtropics and tropics.
Mirazid® is a patented special preparation from the oleo-resin of a plant that has been used in folk medicine since the era of the ancient Egyptians (myrrh) (2).
It had already been registered by the Ministry of Health in Egypt, after extensive preclinical, animal, and clinical studies, including large multicenter phase III human clinical trials in five big university centers, proving its safety and efficacy in the treatment of schistosomiasis and fascioliasis (5). Over 32 published and unpublished clinical trials have been done to test the efficacy of Mirazid in the treatment of infection by many species of helminths, both in animals and in humans. The great majority of these independent studies reported cure rates higher than 90% in human clinical trials, as well as in studies on experimentally and naturally infected farm animals (5). Only two groups of investigators reported low cure rates. This has led to conflicting evidence regarding the efficacy of this newly marketed drug. To resolve this conflict we have critically reviewed all published and unpublished studies done on Mirazid and tried to reach a conclusion. We found that the high cure rates of over 90% in human infections with schistosomiasis, fascioliasis, and other single or mixed parasitic infections were reported with the recommended protocol of a dose of 10 mg/kg body weight, with a minimum of 600 mg (two capsules), daily 1 hour before breakfast for 6 successive days. The cure rates, as defined by the absence of ova in stools, were estimated at least 2 or 3 months after the course of treatment, while ova counts by the Kato technique were repeated monthly for those cases still passing ova in stools (5) (Figure 1). For animal studies, the repeatedly tested protocol with cure rates over 90% for sheep naturally infected with fascioliasis, dicrocoeliasis, or monieziasis, which is recommended by the innovators, is to use the special veterinary administrable emulsified suspension (offered free by the manufacturer to any local or international investigator) in a dose of 6 mL, 1 hour before the first feed in the morning, for 6 to 8 days. Also, as in humans, the primary endpoint is the cure rate or the worm load reduction, estimated at 2 and 3 months after treatment (5).
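As a rough illustration of the human dosing rule described above, the sketch below computes a daily dose; the rounding up to whole 300 mg capsules is our assumption and is not stated in the source.

```python
CAPSULE_MG = 300  # one capsule

def daily_dose_mg(weight_kg):
    """Daily dose: 10 mg/kg body weight with a minimum of 600 mg
    (two capsules), taken 1 h before breakfast for 6 successive days.
    Rounding to whole capsules is an assumption for illustration."""
    mg = max(10 * weight_kg, 2 * CAPSULE_MG)
    n_capsules = -(-mg // CAPSULE_MG)  # ceiling division
    return int(n_capsules) * CAPSULE_MG

# Example: a 75 kg adult -> max(750, 600) = 750 mg -> 3 capsules (900 mg).
```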
In the studies that reported low cure rates (5), by contrast, the dose, the duration of treatment, and the endpoints were different.
Botros et al. (5) used a dose of only one 300 mg capsule once daily for 3 days in human patients infected with Schistosoma mansoni, and estimated the cure rates at only 4 to 6 weeks after treatment. Barakat et al. (5) stated that they used Mirazid at a dose of two capsules daily for only 3 successive days (not 6 days) and estimated the cure rates at 3 and 4 weeks after treatment. Botros et al. (5) estimated the reduction in worm loads in ovine fascioliasis at only 4 weeks, while it should be done at 8 and 12 weeks as recommended by the innovators. They did not mention the batch number of the Mirazid capsules they used, or how they emulsified and administered the capsule contents to the animals, so it cannot be verified that they did not mistakenly buy the cheaper imitation products on the market containing whole myrrh extract, which is not equivalent to the patented Mirazid preparation manufactured under good manufacturing and quality control practices.
The need for this long follow-up period has been explained by the results of scanning electron microscopic studies and by hatching and sedimentation tests done by many independent investigators (5). It was found that the adult worms first become contracted and lose their anchorage to the vessel walls; then the process of tegumental destruction and attack by the immune system takes place along a rather smoother and slower curve than is the case with chemical comparators, which explains in part the higher tolerability of Mirazid in patients with advanced liver and/or renal disease. This results in the excretion of a progressively decreasing egg count over a period that is longer in heavy infections, and it may take up to 3 months for the ova to disappear completely as the adult worms become fully disintegrated. The presence of more than one species of worm infection was also studied. It was found by statistical analysis that mixed parasitic infection with any of the intraluminal trematode, nematode, or cestode worms (which proved in other studies to be more responsive to lower Mirazid doses and shorter durations) led to higher and faster cure rates than single Schistosoma or Fasciola infection (6). This was explained by the assumption that priming the immune system with shared antigens of the more sensitive intestinal worms may enhance the process of immune eradication of the more resistant Schistosoma or Fasciola worms. Many investigators have also shown that myrrh extract can act on snails as well as on immature forms; thus it can be beneficial as a preventive measure before or following exposure (1).
These facts, added to the known high safety and tolerability of myrrh, reflected in its FDA GRAS (Generally Recognized As Safe) status, and complemented by extra benefits such as its proven tumoricidal (4), anti-dyspeptic, anti-ulcer, anti-inflammatory, anti-hypercholesterolemic, anti-hyperglycemic, anti-bacterial, and anti-fungal activities (3), make its benefits far outweigh its risks, especially in patients with advanced liver or multisystem dysfunction.
To conclude, as stated by Southgate et al. (5), it would be decisive for an independent body such as the World Health Organization to organize a multinational study with the correct recommended protocol and to issue guidelines for the treatment of schistosomiasis and fascioliasis and the place of Mirazid in such indications. | 2018-01-13T09:27:39.392Z | 2010-10-01T00:00:00.000 | {
"year": 2010,
"sha1": "2adc39382a0e78c239514f3bb4880aafa2b35758",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/ag/a/jgk8gFQfxWXX6trvxV9pwrf/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2adc39382a0e78c239514f3bb4880aafa2b35758",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
29627109 | pes2o/s2orc | v3-fos-license | Alate Aphid (Hemiptera: Aphididae) Species Composition and Richness in Northeastern USA Snap Beans and an Update To Historical Lists
Abstract Recent aphid-vectored viruses in the northeastern U.S. led to extensive surveys of aphid (Hemiptera: Aphididae) species composition. We report the species composition and richness of alate aphids associated with processing snap bean (Phaseolus vulgaris L.; Fabales: Fabaceae) agroecosystems from field surveys conducted during 5 yr in New York and 3 yr in Pennsylvania. Rates of species accumulation were similar between the 2 states, and asymptotic, suggesting reasonably adequate sampling intensity. Our results suggest that about 95 to 100 aphid species are present as alates within these agroecosystems, a surprisingly high percentage (∼14 to 18%) of the total aphid richness. Host records suggest that 61% of the alate aphid species we collected from pan traps placed within snap bean fields were dispersing through this agroecosystem, originating from woody plants in the surrounding landscape. We compiled this information with a recent study of aphid species composition from peach orchards and an exhaustive inspection of museum samples, and present an updated list of the aphid species in Pennsylvania.
Key Words: Aphis glycines, host, pan traps, peach, Phaseolus vulgaris, virus vectors

Aphids are a small but diverse group of insects, with an origin in the Jurassic and a total of 4800 species worldwide (Grimaldi & Engel 2005; Dixon 1985a; Dixon 1985b). They are primarily phloem feeders, and when present in high densities they can damage their host plant. Aphids excrete excess carbohydrates from their diet of phloem sap, providing a nutrient-rich substrate for sooty mold fungi to grow. Sooty mold can be a major problem in a number of agricultural crops because the mold can either render produce unmarketable or reduce the plant quality of the commodity. Aphids are also important vectors of viruses that can kill their host plant or substantially reduce crop yield and quality (Agrios 2005). Some viruses are transmitted by aphids in a non-persistent, stylet-borne manner. They are acquired quickly by their aphid vector during short tasting probes, adhere to the stylet lining by binding to helper component proteins or directly to the stylet, and remain there until they are flushed out during another tasting probe (Ng & Falk 2006). Non-persistently transmitted viruses can be vectored by alates, sometimes by multiple aphid species (Gildow et al. 2008), regardless of whether or not there is reproduction on the plant host, and the epidemiology of these viruses can be influenced heavily by the alate aphid community.
Several viruses of this type have been introduced recently, or have increased in frequency, in the northeastern U.S. One, plum pox virus (PPV), threatened the stone fruit industry following its arrival in the U.S. This virus causes sharka disease in the parts of Europe and South America where it is endemic (Roy and Smith 1984; Rosales et al. 1998). Type D isolates were detected in the U.S., in Pennsylvania, in 1999 (Damsteegt et al. 2001), and surveillance and eradication efforts against this invasive species included the destruction of approximately 23% of the non-cherry stone fruit orchards of Pennsylvania (Wallis et al. 2005). As part of these efforts, studies were conducted to determine the potential aphid species that might serve as a reservoir or route of transmission in the region where this virus was first detected (Wallis et al. 2005). Soon thereafter, in the early 2000s, snap bean crops (Phaseolus vulgaris L.; Fabales: Fabaceae) in the northeastern and midwestern U.S. showed virus-like symptoms (leaf mosaic and blistering, deformed pods) and experienced dramatic yield loss (Larsen et al. 2002). Among the viruses detected were alfalfa mosaic virus, bean common mosaic virus, bean pod mottle virus, bean yellow mosaic virus, clover yellow mosaic virus, clover yellow vein virus (ClYVV), cucumber mosaic virus (CMV), tobacco streak virus and white clover mosaic virus (Grau et al. 2002; Larsen et al. 2002; Shah et al. 2006). CMV was the most prevalent virus detected in these snap bean fields (Larsen et al. 2002; Shah et al. 2006). As is the case with PPV, CMV is transmitted by aphids in a non-persistent, stylet-borne manner (Nault 1997). CMV-infected plants were often found in clumps in snap bean fields, a pattern consistent with aphid-initiated virus epidemics (Shah et al. 2005). CMV epidemics also occurred more frequently in New York than in Pennsylvania. The CMV epidemics coincided with the appearance of a newly invasive aphid, Aphis glycines Matsumura (Nault et al. 2009), and as was the case with stone fruit, the threat of viral epidemics led to extensive surveys of the alate aphid species composition in the affected crop.
These recent surveys of aphids collected from snap bean fields in Pennsylvania and New York, and from peach orchards in Pennsylvania, were quite extensive. Also in Pennsylvania, J. O. Pepper specialized in aphid identification and actively collected aphids for most of the 20th century. His collections were centered around his home in central Pennsylvania (State College) and included much of the surrounding forest and farmland. The bulk of his collection is housed in the Frost Entomological Museum (University Park, Pennsylvania), and he also contributed slides to the United States National Collection (Beltsville, Maryland). Pepper (1965) reported 345 species in a published list of the aphids of Pennsylvania and their host plants. To date, this is the most comprehensive published list of aphids for the state. However, since taxonomy and systematics are in flux, the names that Pepper published are currently out of date and in need of revision.
The purpose of this study was to identify the species composition and estimate aphid species richness in snap bean agroecosystems in the northeastern states from field surveys, and generate a current list of aphid species in this region using field survey data, literature, and an examination of the J. O. Pepper aphid collection.
MATERIALS AND METHODS
Detailed methods for alate aphid collection in snap bean fields in Pennsylvania and New York were published in Nault et al. (2009). To summarize, we used water pan traps baited with a green ceramic tile (Webb et al. 1994) and filled with a 20% propylene glycol solution in snap bean fields in both states, from 2002 to 2006 in NY and from 2004 to 2006 in PA. Traps were installed in a total of 56 fields in western NY (12 each yr, except for 2004, which had 8 fields) and 18 fields in Centre County, PA (6 each yr). The traps in Centre County formed an approximately 40 mile transect in the southern portion of the county, roughly following state routes 45 and 192. The traps were checked weekly for aphids from the early trifoliate stage (early to mid Jul) until field harvest. Collection methods in the peach (Prunus persica (L.) Stokes; Rosales: Rosaceae) orchards are documented in Wallis et al. (2005), and also used water pan traps baited with a green tile. Trapping occurred during 2 yr in 2 orchards in central Pennsylvania.
For both studies, aphids were removed from pan traps, stored in 70% ethanol (EtOH), then transferred to potassium hydroxide and heated for 1 h or until clear. Cleared aphids were rinsed for 10 min each in a sequence of 95% EtOH, absolute EtOH, and clove oil. Once rinsed, each aphid was placed on a drop of Canada balsam on a glass slide and positioned to expose diagnostic features before a coverslip was placed on top. Aphids collected in New York were identified by R. Eckel (RVWE Consulting, Frenchtown, New Jersey), whereas those from Pennsylvania were identified by W. Sackett and A. Bachmann using keys by Smith et al. (1992) and Blackman & Eastop (2000). Voucher specimens are located at the New York State Agricultural Experiment Station in Geneva, New York, and the Department of Entomology, Pennsylvania State University, University Park, Pennsylvania.
Species rarefaction curves were calculated for the Pennsylvania and New York collections individually and for both states combined using EstimateS (Colwell 2005).
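The rarefaction calculation itself is straightforward; for readers without access to EstimateS, the following Python sketch implements the standard individual-based (Hurlbert) estimator of expected species richness. The abundance vector is illustrative, not our trap data.

```python
from math import comb

def expected_richness(counts, n):
    """Expected species count in a random subsample of n individuals,
    given per-species abundances from the full sample (Hurlbert 1971)."""
    N = sum(counts)
    if not 0 <= n <= N:
        raise ValueError("subsample size must be between 0 and total individuals")
    # P(species i is missed entirely) = C(N - Ni, n) / C(N, n)
    return sum(1 - comb(N - Ni, n) / comb(N, n) for Ni in counts)

# Illustrative abundances: a few common species and many rare ones,
# the typical shape of aphid pan-trap catches.
abundances = [300, 150, 60, 20, 10, 5, 2, 1, 1, 1]
for n in (25, 100, 250, 500):
    print(n, round(expected_richness(abundances, n), 1))
```

Plotting expected richness against subsample size n reproduces the rarefaction curves used below to compare states despite their unequal total catches.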
A complete list of aphids from Pennsylvania was compiled using the J. O. Pepper Aphid Slide Collection, which is housed at the Frost Entomological Museum (University Park, Pennsylvania), as well as species recorded in Pepper (1965). We searched the slide collection in addition to using Pepper (1965) because Pepper continued to collect aphids and make slides into the late 1980s, but did not publish any updates to his original 1965 paper. Because the collection and Pepper (1965) contained aphid species names from the early 20th century, we consulted 2 online aphid databases to ensure that the species names were current. These records were then combined with our field surveys and those of Wallis et al. (2005) to create a more current list of the aphids of Pennsylvania (Tables 3-11).
RESULTS
In snap bean fields in New York and Pennsylvania, a total of 8,821 aphids were identified, with 7,484 from New York and 1,337 from Pennsylvania. A total of 97 species were caught: 71 from New York and 41 from Pennsylvania. Only 254 aphids (2.8% of the total catch) could not be identified. Of the aphids captured, those species representing 1% or greater of the total number caught in either state are listed in Table 1 (originally published in Nault et al. 2009) with their abundances. A comprehensive list of all aphid species found in Pennsylvania and New York snap bean fields is shown in Table 2, along with their host associations based on Blackman & Eastop (1994, 2000). From this host information we estimated that 61% of the species dispersing through snap bean fields in both states were most likely coming in from the surrounding forests, because their hosts are woody, not herbaceous, species (Fig. 1).
Species accumulation followed asymptotic patterns (Fig. 2), suggesting reasonably adequate sampling of the aphid species present as alates in commercial snap bean fields. Overall, fewer aphids were collected in Pennsylvania, but based on the rarefaction curves, a similar number of total species was represented in a sample of the same number of individuals.
Combining the lists of aphids collected from snap bean fields and peach orchards with those published by J. O. Pepper in 1965, we developed a new, more comprehensive list of the aphids present in Pennsylvania. We found 7 species present in our collections that were not present in the slide collection housed in the Frost Entomological Museum (University Park, Pennsylvania) or published in Pepper (1965) (Table 3). One of these aphids, Aphis glycines Matsumura, was introduced to the US around the turn of the 21st century and is now widespread throughout the Midwest, the Northeast, and southeastern Canada (Ragsdale et al. 2011).
DISCUSSION
Our passive trapping in snap bean fields alone yielded a surprisingly high percentage of the species present throughout Pennsylvania and New York (~14% and ~18%, respectively). Our sampling method concentrated on only one habitat (commercial snap bean fields), but did intercept aphids moving from the surrounding forests and hedgerows. The trapping areas feature a high degree of landscape heterogeneity and crop diversity, including plant species that serve as hosts for many of the aphid species that represented less than 1% of the total capture (Pfleeger et al. 2006). These aphids were captured in very small numbers (mostly singletons) and are not important contributors to the plant virus epidemics reported by Wallis et al. (2005) and Nault et al. (2009).
Of the aphids we captured, 2 species were especially notable: Therioaphis trifolii Monell, which comprised 31.8% of the identified aphids, and A. glycines, which represented 18.2% of the identified aphids. Both of these aphids were introduced to North America (A. glycines from Asia and T. trifolii from Europe) and were quite destructive to crops (soybean and alfalfa, respectively) immediately after their introduction. Aphis glycines continues to cause significant economic damage in soybean (Ragsdale et al. 2011). While not known to colonize Phaseolus spp., both species are competent vectors of the legume strain of CMV (Gildow et al. 2008).
The intermittent appearance of CMV in central Pennsylvania snap bean crops could be influenced by a unique agricultural landscape. Agricultural fields are located in valleys bordered by the low, but steep, forested ridges of the Appalachian Mountains. The ridge-and-valley system might act as a barrier, keeping CMV out for most of the season. We did not search for a CMV reservoir beyond testing a few alfalfa fields, which were also negative for CMV. It is possible that, much like our A. glycines population, legume strains of CMV may be transient. If this is the case, migrating aphids may be scrubbed of virions when they land in the bordering forests, which contain many non-host plants.
The Pepper (1965) aphid list, together with the Pepper slide collection, allowed us to compile a comprehensive list of the aphids present in Pennsylvania, but the nomenclature was in need of updating. Our efforts to update the nomenclature and incorporate our more recent sampling resulted in a modern list of the aphids of Pennsylvania that includes recently introduced species. | 2017-10-20T14:53:02.564Z | 2014-10-09T00:00:00.000 | {
"year": 2014,
"sha1": "f634aa7b512cdd391bb290d31cb9c96a98b74154",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1653/024.097.0356",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "b00a363d35db760d58712da01f9770aa7d0b1b8a",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
267233242 | pes2o/s2orc | v3-fos-license | The Assessment of Infection Risk in Patients with Vitiligo Undergoing Dialysis for End-Stage Renal Disease: A Retrospective Cohort Study
Vitiligo is an autoimmune condition that causes patchy skin depigmentation. Although the mechanism by which vitiligo induces immunocompromise is unclear, other related autoimmune diseases are known to predispose those affected to infection. Individuals with vitiligo exhibit epidermal barrier disruption, which could potentially increase their susceptibility to systemic infections; patients with renal disease also show a predisposition to infection. Nevertheless, there is little research addressing the risk of infection in dialysis patients with vitiligo in comparison to those without it. A retrospective analysis was performed on patients with end-stage renal disease (ESRD) in the United States Renal Data System who started dialysis between 2004 and 2019 to determine if ESRD patients with vitiligo are at an increased risk of bacteremia, cellulitis, conjunctivitis, herpes zoster, or septicemia. Multivariable logistic regression modeling indicated that female sex, black compared to white race, Hispanic ethnicity, hepatitis C infection, and tobacco use were associated with an enhanced risk of vitiligo, whereas increasing age and catheter (versus arteriovenous fistula) access type were associated with a decreased risk. After controlling for demographics and clinical covariates, vitiligo was found to be significantly associated with an increased risk of bacteremia, cellulitis, and herpes zoster but not with conjunctivitis and septicemia.
Introduction
Vitiligo, an autoimmune skin condition, impacts nearly 3 million Americans, with approximately 40% of adult vitiligo cases remaining undiagnosed [1]. The pathogenesis of vitiligo relates to the destruction of melanocytes by innate and adaptive immunological pathways, with an involvement of oxidative stress and pro-inflammatory cytokines, disruption of melanocyte adhesion, and dysregulation of CD8+ T-cells, resulting in patchy dyspigmentation of affected lesion sites [2-4]. As a result, vitiligo can negatively impact patients' quality of life, especially those with higher body surface area coverage [5], resulting in an increased risk of psychiatric disorders including major depressive disorder [6,7]. Vitiligo is also associated with other comorbid autoimmune conditions, like diabetes mellitus, and connective tissue diseases, like discoid lupus erythematosus, as well as ocular and psychiatric conditions [8]. Studies have demonstrated that patients with vitiligo are at an increased risk of obesity and renal diseases, such as chronic kidney disease and end-stage renal disease (ESRD) [7,9].
Patients with ESRD require dialysis or kidney transplants as a treatment. Patients undergoing dialysis are at an increased risk for systemic infections due to vascular and catheter access, as well as native and acquired immunosuppression [10,11]. Sepsis and other infections are the second leading cause of death in these patients, following cardiovascular disease [12,13]. Patients with vitiligo also have a disruption of their epidermal barrier, potentially making them more susceptible to systemic infections through this compromised barrier [14-17].
Based on the characteristic pathogenesis of vitiligo, it is reasonable to hypothesize that vitiligo, as a comorbidity in patients with ESRD undergoing dialysis, may lead to an increased risk of infections compared to the general population, potentially due to epidermal barrier dysfunction. However, there is a lack of research examining the correlation between vitiligo and infection risk in patients on dialysis. To address this gap in knowledge, we queried the United States Renal Data System (USRDS) for patients undergoing dialysis with a diagnosis of vitiligo to analyze whether vitiligo serves as an independent risk factor for bacteremia, septicemia, cellulitis, herpes zoster, and conjunctivitis in these patients.
Dataset Study and Cohort
The USRDS, funded by the National Institute of Diabetes and Digestive and Kidney Diseases, is a national data system that collects, analyzes, and distributes information about chronic kidney disease (CKD) and ESRD in the United States (US). The USRDS collaborates with organizations including the Centers for Medicare & Medicaid Services (CMS), the United Network for Organ Sharing (UNOS), and the ESRD networks to produce a dataset that includes demographics and CMS medical claims submitted to Medicare for all US patients on dialysis; all US patients undergoing dialysis are automatically enrolled in Medicare. In this study, the USRDS database was used to determine whether vitiligo is an independent risk factor for some infections in patients with ESRD. This research was deemed not Human Subjects Research by the Augusta University Institutional Review Board (reference #1592144-1).
Individuals in the USRDS from 18 to 100 years of age were eligible for inclusion in this study if they initiated dialysis between 2004 and 2019. Those who were less than 18 or greater than 100 years of age, or had missing or unknown data on age, race, sex, ethnicity, access type, or dialysis type, were excluded. Patients with ESRD with a diagnosis of vitiligo were identified in this database using International Classification of Disease (ICD)-9 (709.01) and ICD-10 (H02.73-H02.739 or L80) codes. The total sample size was 1,526,270, and the analysis compared those with and those without a diagnosis of vitiligo.
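As a sketch of how the inclusion/exclusion and case-definition rules above can be applied, the Python/pandas fragment below encodes the same logic. The table and column names are illustrative assumptions, not the actual USRDS schema, and the ICD strings are simplified to a dot-free claim format.

```python
import pandas as pd

# ICD-9 709.01 and ICD-10 L80 / H02.73x, per the codes listed above,
# written as dot-free prefixes (an assumption about claim formatting).
VITILIGO_PREFIXES = ("70901", "L80", "H0273")

def build_cohort(patients: pd.DataFrame, claims: pd.DataFrame) -> pd.DataFrame:
    # Inclusion: adults 18-100 who initiated dialysis in 2004-2019.
    in_age = patients["age_at_dialysis"].between(18, 100)
    in_years = patients["dialysis_start"].dt.year.between(2004, 2019)
    # Exclusion: missing demographics, access type, or dialysis type.
    demo_cols = ["age_at_dialysis", "race", "sex", "ethnicity",
                 "access_type", "dialysis_type"]
    complete = patients[demo_cols].notna().all(axis=1)
    cohort = patients.loc[in_age & in_years & complete].copy()

    # Flag vitiligo from any claim carrying a qualifying ICD code.
    has_code = claims["icd_code"].str.startswith(VITILIGO_PREFIXES, na=False)
    vitiligo_ids = set(claims.loc[has_code, "patient_id"])
    cohort["vitiligo"] = cohort["patient_id"].isin(vitiligo_ids)
    return cohort
```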
Study Design
A retrospective cohort design was employed using data from the USRDS to analyze vitiligo and its association with different infectious outcomes in patients with ESRD.
Outcome Variables
Infectious outcomes of interest included bacteremia, cellulitis, conjunctivitis, herpes zoster, and septicemia [12,13,18-20]. Infections following the incident date of dialysis were identified using ICD-9 and ICD-10 codes from hospital, detailed, and physician/supplier claims. The person-years at risk was determined as the difference between the first date of the specific infection diagnosis and the incident date of dialysis. For those without an infectious outcome, this value was calculated as the difference between the first date of dialysis and either death or 31 December 2019.
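This censoring rule translates directly into code; a minimal Python sketch follows, again with illustrative column names rather than USRDS field names.

```python
import pandas as pd

ADMIN_CENSOR = pd.Timestamp("2019-12-31")  # end of follow-up, per the text

def person_years(row: pd.Series) -> float:
    """Years at risk for one patient and one infection outcome:
    dialysis start to first infection, else to death or 31 Dec 2019."""
    if pd.notna(row["first_infection_date"]):
        end = row["first_infection_date"]
    elif pd.notna(row["death_date"]):
        end = row["death_date"]
    else:
        end = ADMIN_CENSOR
    return (end - row["dialysis_start"]).days / 365.25

# Applied per outcome, e.g.: cohort["pyr"] = cohort.apply(person_years, axis=1)
```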
Main Independent Variable-Vitiligo Diagnosis
A diagnosis of vitiligo after the incident date of dialysis was determined from hospital, detailed, and physician/supplier claims using ICD-9 and ICD-10 codes.
Statistical Analysis
Statistical analyses were performed using SAS 9.4 (SAS, Inc., Cary, NC), with statistical significance assessed at an alpha level of 0.05. Descriptive statistics, including frequencies and percentages or means and standard deviations as appropriate, were determined for all variables overall, by vitiligo status, and by each type of infection.
Logistic regression was used to examine the association of each demographic or clinical risk factor with vitiligo, and to examine the association of vitiligo, as well as each demographic and clinical risk factor, with each infection. An offset parameter equal to the natural log of person-years at risk was used in the estimation of relative risks. For vitiligo and for each infection, each risk factor was first assessed in a simple, bivariate model. All risk factors were then entered into a comprehensive full logistic regression model for the vitiligo outcome or for each infectious outcome, and a backward model-building strategy was used to create the final model, as previously described in Schwade et al. [21] and Momin et al. [22].
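The original models were fit in SAS; as a rough Python analogue, a log-link GLM with ln(person-years) as an offset yields exponentiated coefficients that approximate the reported relative risks. The Poisson family used here is a common substitute for this purpose (not the authors' exact specification), and the covariate names are illustrative; `df` is assumed complete in these columns.

```python
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_relative_risk(df, outcome="bacteremia"):
    """Log-link GLM of an infection indicator with ln(person-years)
    as offset; exp(coefficient) approximates a relative risk."""
    offset = np.log(df["pyr"])  # person-years at risk, computed earlier
    model = smf.glm(
        f"{outcome} ~ vitiligo + age_at_dialysis + C(sex) + C(race) + C(access_type)",
        data=df,
        family=sm.families.Poisson(),
        offset=offset,
    )
    result = model.fit()
    return np.exp(result.params), result  # RR estimates plus the full fit
```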
Results
Table 1 displays the overall descriptive statistics and the vitiligo status of patients, as well as the results of the simple and final logistic regression models on vitiligo, to assess potential correlates or confounders. Briefly, the average age of all 1,526,270 subjects with ESRD was 63.5 years (SD = 14.9), with the majority being white (66%) and male (57.2%). Nearly all (99.9%) subjects were on hemodialysis, and 80.8% had a catheter for their access type. Only 676 subjects (0.04%) had a diagnosis of vitiligo. The dialysis modality could not be examined in logistic regression models, as all subjects with vitiligo were on hemodialysis. All other demographic and clinical variables were associated with vitiligo in simple models, and the final multivariable logistic regression model indicated that black compared to white race, female sex, Hispanic ethnicity, tobacco use, and hepatitis C infection were associated with an increased risk of vitiligo, whereas increasing age and catheter access type compared to arteriovenous fistula (AVF) were associated with a decreased risk of vitiligo.
Table 2 provides the descriptive statistics of all variables by bacteremia, septicemia, cellulitis, herpes zoster, and conjunctivitis. The percentage with vitiligo was higher among those with each type of infection compared to those without the specific infection.
Table 3 presents the results of the simple models and the final model, examining the association of vitiligo with each type of infection when controlling for demographic and clinical covariates. In simple logistic regression models, vitiligo was associated with an increased risk of bacteremia, septicemia, cellulitis, and herpes zoster but not conjunctivitis. Controlling for demographic and clinical covariates, vitiligo remained associated with an increased risk of bacteremia, cellulitis, and herpes zoster but was no longer significantly associated with septicemia; vitiligo again showed no association with conjunctivitis. Increasing age, female sex, catheter compared to AVF access, tobacco use, hepatitis C, and other races compared to the white race were associated with an increased risk for all five infectious outcomes. Hemodialysis compared to peritoneal dialysis was associated with an increased risk of bacteremia, septicemia, and cellulitis. Graft usage compared to AVF access was associated with an increased risk of bacteremia, septicemia, conjunctivitis, and cellulitis. Alcohol dependence presented a heightened risk of bacteremia and septicemia despite a decreased risk of cellulitis. Black race compared to white race was associated with a decreased risk of bacteremia, septicemia, cellulitis, and herpes zoster. Hispanic ethnicity was associated with a decreased risk for all five infections.
Discussion
There is a lack of knowledge surrounding vitiligo and its role in increasing the risk of infection for patients with other comorbid conditions. In this study, we aimed to address this knowledge gap by evaluating the infection risk in patients with vitiligo undergoing dialysis for ESRD. Our analysis of the data revealed that vitiligo was diagnosed in 0.04% of the included patients with ESRD. Although the use of health insurance claims has previously been shown to demonstrate high diagnostic performance for vitiligo, it is worth noting that the lower proportion of patients identified within this population compared to national estimates may be attributable to underreporting among groups with higher prevalence (non-white) and/or due to the presentation of the disease (unilateral, segmental) [1,23].
The patients identified were more likely to be of black compared to white race, of Hispanic ethnicity, of female sex, a tobacco user, and/or to have a hepatitis C diagnosis. Thus, our analysis demonstrated a significant association between vitiligo and race/ethnicity, with black and Hispanic persons being at a higher risk. While some studies have reported that vitiligo affects all ethnic groups equally, numerous studies have described potential reporting and diagnostic variances attributed to the greater visibility of vitiligo on darker skin tones [24,25]. These findings emphasize the importance of considering race and ethnicity in the diagnosis and treatment of vitiligo. Additionally, despite our finding of an elevated risk of vitiligo in females, previous epidemiological investigations have reported conflicting results regarding gender predisposition: some studies have identified no disparity between males and females, while others have reported a higher prevalence in one sex or the other [26,27]. However, female patients may be more likely to seek early care due to the cosmetic and social implications of vitiligo, potentially contributing to the observed gender difference in our study [26]. Nevertheless, the prevalence of 0.04% observed in our study is low compared to the 0.76% to 1.11% prevalence among adult Americans determined in a recent study [1], although it should be noted that about 40% of those cases were reported to be undiagnosed. It is possible that the low prevalence is unique to the ESRD population or that underdiagnosis is even more of an issue in these patients compared to the general population. Alternatively, the use of ICD codes may result in undercounting, particularly as physicians treating these patients may be more concerned with the management of ESRD. However, our results, which show that vitiligo is an independent risk factor for certain infections, such as bacteremia and cellulitis, suggest the importance of physicians' awareness of this diagnosis.
Tobacco use emerged as a significant clinical risk factor for vitiligo in our study. Proposed mechanisms for this observation relate to the production of reactive oxygen species (ROS) by the use of tobacco products [28,29]. However, it should be noted that other studies have suggested that tobacco use may suppress vitiligo, with tobacco smoke enhancing skin pigmentation in melanocytes [30,31], potentially by inhibiting the monoamine oxidases implicated in vitiligo pathogenesis [32]. Although the disparate results obtained in our study may relate to a particular population (those with ESRD), these contradictory results underscore the incomplete understanding of the effects of tobacco use in patients with vitiligo.
The hepatitis C virus (HCV) has been shown to trigger adult-onset vitiligo, potentially due to the deposition of immune complexes causing extrahepatic inflammatory responses, particularly in melanocytes and keratinocytes [33]. Thus, the association between HCV and the risk of vitiligo is perhaps not unexpected. Indeed, special consideration should be paid to HCV-infected patients receiving hemodialysis, since prior studies have shown increased rates of bacteremia and death compared to the general hemodialysis population [34].
Our analysis found an association between increasing age and a decreased risk of vitiligo. Studies have established that the incidence of vitiligo is highest among young individuals (10-30 years old), with nearly 80% developing the condition by age 30, and a decreasing incidence with increasing age, especially after the age of 50 [25,35]. One plausible explanation for this observation could be that the risk of developing autoimmune conditions like vitiligo decreases as individuals age. Vadasz et al. explain that increases in protective immune mechanisms, such as increased peripheral T-regulatory cell production in the elderly, play a vital role in protection against the development of autoimmune diseases [36].
Catheter access was correlated with a decreased likelihood of having vitiligo versus AVF access. Most of the literature to date recommends AVF as the first access type to be considered, due to better outcomes with this type of access versus catheter access [37,38]. However, one study in 2013 documented disparities in AVF placement in older patients on hemodialysis, indicating that those of increasing age, female sex, and black race are less likely to receive AVF placement as their initial access type [39]. In our study, several of these demographic factors were associated with an increased likelihood of a vitiligo diagnosis, suggesting some confounding of this relationship.
In simple logistic regression models, vitiligo presented an increased risk of bacteremia, septicemia, cellulitis, and herpes zoster but not conjunctivitis. Controlling for demographics and clinical covariates determined that patients with ESRD with a vitiligo diagnosis had a significantly increased risk of a diagnosis of bacteremia, cellulitis, and herpes zoster but not septicemia or conjunctivitis. The increased risk for some infections may potentially be due to factors including disruption of the skin barrier, destruction of melanocytes, and dysregulation of immune regulation in patients with vitiligo.
Vitiligo is characterized by the selective loss of melanocytes caused by autoimmune dysregulation, leading to the dyspigmentation of affected sites [25]. These areas of lesioned skin confer issues beyond cosmetic appearance, including delayed barrier recovery compared to uninvolved sites [14], which can impact barrier function and allow more ready entry of microorganisms. Indeed, keratinocytes in vitiligo lesions have been demonstrated to exhibit decreased levels of aquaporin 3, a water and glycerol channel known to be involved in epidermal barrier formation [40,41]. The levels of E-cadherin, which mediates keratinocyte-keratinocyte interactions, are also reduced [15]. The nonlesional skin of patients with vitiligo also shows altered epidermal lipid composition, with markedly decreased levels of ceramides, which are required for a competent permeability barrier [15]. In addition, vitiligo lesions show impaired innate immunity. Melanocytes can be mobilized by the activation of their innate immune pattern-recognition receptors in response to a host of extracellular bacterial and intracellular viral pathogens to promote the expression of type-I interferons (IFN α/β), cytokines (IL-1), and chemokines (CXCL-8/IL-8, CCL-2/MCP-1) [42]. Indeed, increased rates of hospitalization for herpes zoster have been observed among patients with chronic inflammatory skin conditions, including vitiligo [43].
Other parameters associated with one or more of the infections include increasing age, female sex, hemodialysis (versus peritoneal dialysis), catheter or graft versus AVF access, tobacco use, alcohol dependence, and hepatitis C infection. The association between increasing age and an increased risk of all five infections studied is not unexpected, as aged individuals show an increased susceptibility to infections [44,45]. Better outcomes with AVF use in patients undergoing dialysis are also expected based on previous studies [38]. Tobacco use was likewise associated with an increased risk of all five infections studied; tobacco induces physiological changes and immune system dysregulation [46-48]. Additionally, in patients undergoing dialysis, HCV infection and bacteremia are common comorbidities, and the presence of HCV further increased the risk of bacteremia, septicemia, and cellulitis, as also indicated in our findings [49-52]. HCV weakens the immune system, allowing other pathogens introduced, e.g., via vascular access in dialysis patients, to thrive. Chronic alcohol use is also known to predispose people to the development of infections, especially bacteremia and sepsis [53]. Alcohol downregulates the immune system and inhibits the release of the proinflammatory cytokines [53-55] needed to fight pathogens. On the other hand, previous studies investigating gender differences in the risk of bacterial infections have yielded inconsistent results. Multiple studies have reported a higher incidence of bacterial infections, sepsis, and cellulitis in men compared to women [56-59], although the mechanisms are unclear.
While our study reports that black and other races (compared to white race) and Hispanic ethnicity are associated with a decreased risk of bacteremia, septicemia, and cellulitis, multiple previous studies have reported conflicting findings on this matter. One such study reported a higher incidence of bacterial infections and sepsis in black and Hispanic individuals even after controlling for income and geographic variances [60]. Similarly, another study reported a higher likelihood of cellulitis and other skin and soft tissue infections in African American and Hispanic individuals [61]. These differences may be due to underlying biological reasons, external factors, or the specific population examined (patients with ESRD).
Preventative measures most impactful for reducing infectious outcomes in this population may include obvious precautions, such as proper hand sanitization between patients, and education for healthcare professionals on the increased infection risk among individuals with vitiligo compared to patients with baseline ESRD, in conjunction with increased patient surveillance [62]. Other less apparent methods that could potentially reduce infection risk in ESRD patients with vitiligo include the use of flags in electronic medical records to highlight patients at risk for healthcare-associated infections. In addition, treatments to improve the epidermal barrier, as discussed in [63], may also prove beneficial.
This study has several limitations due to its reliance on the USRDS dataset. The diagnoses examined in this study were based on billing codes submitted to Medicare or derived from CMS Form 2728. It is important to note that these diagnoses were not based on actual clinical data. Consequently, it is difficult to determine the specialty or healthcare provider responsible for the billing or the extent of physical inspection by the physician. In addition, bias may be introduced as a result of the possibility that very light skin types may not consult a physician if experiencing depigmentation; as such, hypo- or depigmentation may not be as visible or troublesome, while those with darker skin types may be more motivated to seek help for pigmentation issues. Similarly, older patients may be less bothered or, due to flexibility issues, less aware of depigmented lesions and therefore may also be underdiagnosed. Further, it is important to note that this study lacks the ability to account for the clinical severity of vitiligo diagnosis, preventing the stratification of patients based on disease severity. Additionally, this study is unable to address potential coding idiosyncrasies, as well as instances of inaccurate or missed codes within the dataset. However, these limitations are somewhat mitigated by the large USRDS dataset, which captures billed diagnoses and therapies for all patients with ESRD in the United States, providing substantial statistical power for the analysis.
Conclusions
In conclusion, this study aimed to address the knowledge gap surrounding vitiligo and its association with an increased risk of infection for patients undergoing dialysis for ESRD. Those with a vitiligo diagnosis were more likely to be black, of Hispanic ethnicity, female, tobacco users, alcohol dependent, and/or to have a hepatitis C diagnosis. Increasing age and catheter access were associated with a decreased risk of vitiligo. ESRD patients with vitiligo had an increased risk of bacteremia, cellulitis, and herpes zoster, potentially attributable to their disrupted skin barrier, melanocyte destruction, and immune system dysregulation. However, no significant association was found with septicemia or conjunctivitis. Overall, these findings highlight the importance of physician surveillance for infection in ESRD patients with vitiligo, although further research, preferably prospective, is clearly warranted.
Table 1.
Descriptive statistics overall and by vitiligo, and logistic regression results on vitiligo.
Note: Black-shaded cells indicate that a variable was not examined due to zero frequencies in simple models or that a variable did not remain in the final model. RR = relative risk, aRR = adjusted relative risk, CI = confidence interval, HD = hemodialysis, PD = peritoneal dialysis, AVF = arteriovenous fistula.
Table 2.
Descriptive statistics by infectious outcomes.
Table 3.
Logistic regression results of vitiligo on infectious outcomes.
Note: Black-shaded cells indicate that a variable did not remain in the final model. RR = relative risk, aRR = adjusted relative risk, CI = confidence interval, HD = hemodialysis, PD = peritoneal dialysis, AVF = arteriovenous fistula. | 2024-01-26T16:58:32.854Z | 2024-01-01T00:00:00.000 | {
"year": 2024,
"sha1": "61c0bf13fcda8c41ecd457eda99191401d405a70",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-0817/13/1/94/pdf?version=1705911323",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0c6dfc52335ce63ec18e10213054e0a3c65fe9c0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
210118627 | pes2o/s2orc | v3-fos-license | Pulmonary Arterial Hypertension In Systemic Sclerosis: Challenges In Diagnosis, Screening And Treatment
Abstract
Systemic sclerosis (SSc) is a chronic, multisystem autoimmune disease characterized by vasculopathy, fibrosis and immune system activation. Pulmonary hypertension and interstitial lung disease account for the majority of SSc-related deaths. Diagnosis of SSc-PAH can be challenging due to its nonspecific clinical presentation, which can lead to delayed diagnosis. Many screening algorithms have been developed to detect SSc-associated pulmonary arterial hypertension (SSc-PAH) in early stages. Currently used PAH-specific medications are largely extrapolated from IPAH studies due to the smaller number of patients with SSc-PAH. In this review, we discuss the current state of knowledge in the epidemiology and risk factors for the development of SSc-PAH, and the challenges and potential solutions in the diagnosis, screening and management of SSc-PAH.
Introduction
Systemic sclerosis (SSc) is a chronic, multisystem autoimmune disease characterized by vasculopathy, fibrosis and immune system activation. SSc is clinically classified into two subsets based on the extent of skin involvement: 1) limited cutaneous SSc (lcSSc), with skin involvement restricted to the distal limbs below the elbows and knees, with or without facial involvement, and 2) diffuse cutaneous SSc (dcSSc), with skin involvement extending proximal to the elbows and knees. The natural history of these two cutaneous subtypes differs, with diffuse SSc being characterized by more rapid onset of skin and internal organ involvement. Systemic manifestations of SSc include the hallmark of puffy fingers or skin thickening, as well as myopathy, joint involvement and contractures, interstitial lung disease, gastrointestinal dysmotility and cardiac involvement. Vascular manifestations of SSc include Raynaud's phenomenon, digital ulcers, scleroderma renal crisis and pulmonary hypertension.
Systemic sclerosis has the highest case-specific mortality of the autoimmune diseases. 1 In modern-day studies, pulmonary hypertension and interstitial lung disease account for the majority of SSc-related deaths. In this review, we discuss the current state of knowledge in the epidemiology and risk factors for the development of SSc-associated pulmonary arterial hypertension (SSc-PAH), and the challenges and potential solutions in the diagnosis, screening and management of SSc-PAH.
Pulmonary Hypertension (PH) In SSc
Pulmonary hypertension (PH) is classified according to the 6th World Symposium on Pulmonary Hypertension, Nice, 2018 2 (Table 1). In SSc, PH can occur secondary to pulmonary vascular disease (WHO Group 1), cardiac involvement (WHO Group 2) and SSc-interstitial lung disease (WHO Group 3). This review will concentrate on SSc-PAH, or WHO Group 1 disease, as it is by far the most frequent PH manifestation in SSc. Until very recently, and pertaining to most of the literature presented in this review, SSc-PAH was considered as isolated pulmonary arterial hypertension, defined as mean pulmonary artery pressure (mPAP) >25 mmHg on right heart catheterization (RHC) and pulmonary capillary wedge pressure ≤15 mmHg without evidence of significant pulmonary parenchymal disease. The 6th World Symposium on Pulmonary Hypertension recently updated the definition of PAH to mPAP >20 mmHg and included PVR ≥3 Wood units, based on recent data from normal subjects. 2
Survival in SSc-PAH Lags Behind Other Causes of PAH
Survival in SSc-PAH remains below that of idiopathic PAH (IPAH) or PAH from other causes. Kawut et al. called attention to this in 2003 when they published a 55% one-year survival in SSc-PAH patients compared to 84% one-year survival in other PAH patients. 3 Today, with more therapy options available, survival has improved, but SSc-PAH survival continues to lag behind: three-year survival of SSc-PAH in more recent publications is 56-75%. 4,5 In a recent US-based multicenter observational study of SSc-PAH patients, long-term survival (eight years) was 49%. 5 Current PAH medications have been mainly studied in IPAH, despite the overall worse outcome in SSc-PAH, with a threefold higher risk of death, a lesser response to PAH therapy, and potential differences in the pathophysiology of these diseases. It is only in the last few years that multicenter clinical trials to evaluate medications in targeted SSc populations have been performed, with no results published at the time of this review article. These trials have faced challenges in enrollment due to the low frequency of SSc-PAH and the need for stable PAH-related therapy at the time of trial initiation (Table 2).
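For reference, the two hemodynamic definitions above reduce to a simple decision rule, sketched below in Python (pressures in mmHg, PVR in Wood units); note that an actual PAH diagnosis additionally requires excluding PH of groups 2-5.

```python
def meets_pah_criteria(mpap, pcwp, pvr=None, updated=True):
    """Hemodynamic screen for pre-capillary PAH from RHC values.

    updated=True: 6th World Symposium definition (mPAP > 20 mmHg,
    PCWP <= 15 mmHg, PVR >= 3 Wood units). updated=False: the older
    mPAP > 25 mmHg definition used by most studies cited in this review.
    """
    precapillary = pcwp <= 15
    if updated:
        return mpap > 20 and precapillary and pvr is not None and pvr >= 3
    return mpap > 25 and precapillary
```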
Epidemiology of Systemic Sclerosis Associated Pulmonary Arterial Hypertension
In the literature, the prevalence of SSc-PAH varies depending on the population studied, the criteria used to define PAH, and the method of choice for diagnosis: right heart catheterization (RHC) vs echocardiography (ECHO). The prevalence of SSc-PAH was reported to range between 13 and 35% with ECHO and between 8 and 12% with RHC. 6-9 This difference could be explained by the poor reliability of ECHO in estimating pulmonary arterial pressures. Importantly, the previously mentioned change in the definition of PAH to mPAP >20 mmHg, based on recent data from normal subjects, will assuredly increase the prevalence of SSc-PAH. 10 The incidence of SSc-PAH has also been estimated in a French prospective multicenter cohort using the older criteria.
The Rationale for Screening in SSc-PAH
PAH occurs at a markedly higher rate in SSc compared to the general population. Prevalence estimates for PAH in SSc are 8-15%, depending on the series. 6-9 In striking contrast, the prevalence of PAH in the general population is 15-52 cases per million, or approximately 0.0015-0.0052%. 11 Initial clinical manifestations of SSc-PAH are mostly nonspecific and include dyspnea, fatigue, and exercise intolerance. Historically, the majority of SSc-PAH patients presented at an advanced stage, with diagnosis often delayed for more than two years from symptom onset, 12 likely related to the nonspecific nature of initial symptoms. It is well recognized that patients who have less severe disease at the time of diagnosis have a better survival rate, both in SSc 13 and in idiopathic PAH. Studies have demonstrated that early PH-specific treatment may improve long-term outcomes in these patients. 14 The combination of high prevalence and nonspecific early symptoms making diagnosis difficult, together with the proven benefit of early diagnosis and therapy, provides a strong rationale for a screening approach for PAH/PH in SSc patients. A small but well-reported study in France demonstrated improved SSc-PAH-related survival when patients were diagnosed during a systematic PAH detection program compared to those diagnosed on symptoms alone. 7 In summary, the above highlights the importance for rheumatologists of maintaining a high index of suspicion in order to screen SSc patients in a timely and appropriate manner for this life-threatening complication.
Risk Factors for Developing PAH in SSc
Given the dismal prognosis of SSc-PAH and the reported survival benefit of early treatment, early recognition, diagnosis and intervention are key. Identification of risk factors could help clinicians appropriately stratify patients and closely monitor at-risk groups. To date, several clinical characteristics and serological markers have been proposed as risk factors associated with the development of PAH in patients with SSc.
Clinical risk factors can be categorized as patient- and disease-specific factors. Older age at the time of SSc diagnosis and male gender are patient-specific factors that have been associated with a higher risk of PAH development. 15-17 Disease-specific risk factors include the presence of calcinosis, gastroesophageal reflux disease, and digital ulcers, more severe Raynaud's phenomenon, an increased number of telangiectasias, and decreased nailfold capillary density. 16,18-20 Although it is promising that most of these disease-specific risk factors are microvascular complications of SSc, reflecting the spectrum of a systemic progressive vasculopathy, they have shown inconsistent results between studies in the literature and would benefit from validation in large cohorts.
Serological markers have long been investigated as markers of PAH development in SSc, and four primary SSc-related antibodies have been associated with an increased risk of PAH. These antibodies include anti-centromere, anti-Th/To, anti-U1 ribonucleoprotein (RNP), and anti-U3 RNP. 16,18 Anti-U3 RNP and anti-Th/To antibodies can be difficult to obtain accurately through commercial testing, but both are associated with nucleolar staining on antinuclear antibody (ANA) testing by indirect immunofluorescence. Therefore, it is reasonable to state that patients with one of these four antibodies or a nucleolar-pattern ANA result should be carefully monitored for signs and symptoms of PAH, and should undergo routine clinical screening. Besides the proposed clinical and serological risk factors, there are two established predictors of SSc-PAH that have been incorporated into clinical practice as screening strategies. First, a decline in the diffusing capacity of the lung for carbon monoxide (DLCO) and a high forced vital capacity (FVC)/DLCO ratio (FVC/DLCO >1.6) have been shown to be strong predictors of PAH development. 16,21 Second, N-terminal pro-B-type natriuretic peptide (NT-proBNP), which is released from cardiac myocytes in response to wall stress. Studies have shown that plasma NT-proBNP levels are higher in patients with PAH, correlate strongly with hemodynamic parameters and WHO class, and have a high positive predictive value for PAH. 21-23 Screening strategies using DLCO and NT-proBNP are discussed in the screening section below.
Diagnosis Of Pulmonary Hypertension In SSc: Challenges And Solutions
Diagnosis of SSc-PAH is challenging for clinicians for several reasons. First, patients with SSc-PAH present with nonspecific symptoms such as fatigue, exercise intolerance, and dyspnea, which could have multiple potential causes in these patients, including ILD, musculoskeletal involvement, cardiac involvement and deconditioning. PH in SSc could also be due to SSc-unrelated etiologies (such as chronic hypoxic lung disease or chronic thromboembolic pulmonary hypertension), which are critical to distinguish given the differences in prognosis and treatment. Therefore, it is important to have a high index of suspicion, especially in at-risk populations, and to perform an appropriate diagnostic work-up to accurately rule out other causes. Furthermore, SSc patients can have multifactorial PH due to a combination of group 1 PAH, group 2 PH secondary to left heart disease, and group 3 PH secondary to ILD. Given the distinct therapeutic implications, the dominant cause of PH should be determined in these patients with multifactorial PH.
Another diagnostic challenge is pulmonary veno-occlusive disease (PVOD). PVOD has a clinical presentation similar to, though often more acute than, that of SSc-PAH, combined with similar hemodynamic characteristics. PVOD has a poor prognosis and worsens despite PH-specific treatment. 24 Interestingly, histological review of explanted lungs from 18 patients with SSc-PH due to ILD revealed that 15 patients had concomitant PVOD pathology. 25 A separate pathologic study reported that four of eight patients with SSc-PAH had PVOD. 26 These results suggest a high incidence of PVOD in patients with PH due to ILD and a potential contribution to prognosis in this population. Therefore, consideration of PVOD is helpful for accurate prognostication and to prompt early referral of these patients for lung transplantation evaluation.
Objective Testing
As mentioned earlier, the current gold standard technique for the diagnosis of PH is RHC. However, RHC is an invasive test with associated complication risks and high costs, making it inappropriate as a screening tool for PH. Additionally, it may not be available at every hospital due to lack of equipment or trained personnel. In contrast, ECHO is a noninvasive, widely available, relatively low-cost tool that can be used for PH screening. However, it is not a definitive diagnostic test, in part due to its operator-dependent nature and its indirect estimation of pulmonary arterial pressures (PAP) from right ventricular pressure (RVP) and tricuspid regurgitation velocity (TRV). TRV is a view-dependent measurement, and a technically adequate signal may not be obtained due to body habitus or in cases of absent or severe TR. 27 In one study, ECHO estimation of PAP was possible in only 44% of patients with advanced lung disease. 28 Studies also show a wide range of variability in correlations between invasive and ECHO measurements of PH, especially in the presence of ILD. 28 These results highlight that ECHO only provides a probability of PH; therefore, results should be interpreted in the context of individual patients, and patients with high suspicion should be referred for RHC for definitive diagnosis.
Exercise ECHO
Studies suggest that an abnormal increase in PA pressures with exercise in patients with SSc can be an early clue to PAH reflecting subclinical RV dysfunction, and exercise ECHO can be a helpful tool to detect changes that are not yet apparent at rest. 29 However, data regarding the use of exercise ECHO are currently limited, and more studies need to be performed to evaluate the usefulness of this noninvasive tool.
Screening of Pulmonary Hypertension in SSc: Challenges and Potential Solutions
Despite the reported survival benefit of using a screening program in SSc patients, one of the challenges for PH screening is physician nonadherence to screening recommendations. 30 Potential reasons for physician nonadherence were queried in a survey study performed in Australia, which revealed the cost of screening and concern about inability to interpret the results as contributory reasons. 8 Around 40% of participants reported requiring better guidelines, a reminder system, and guideline simplification to screen more effectively. 8 As these studies suggest, lack of consensus on which screening algorithm to use constitutes an important barrier to screening.
Consensus Recommendations
Four screening algorithms have been published recently and will be discussed below, highlighting their key differences (Figure 1). These include the consensus recommendations from the European Society of Cardiology/European Respiratory Society (ESC/ERS), the Australian Scleroderma Interest Group (ASIG), and the American College of Chest Physicians/American Heart Association (ACCP/AHA), and the DETECT algorithm. Both the ESC/ERS and ACCP/AHA guidelines recommend initial screening upon SSc diagnosis by ECHO, with subsequent RHC if the ECHO screen is positive. 31,32 These guidelines are cognizant of the limitations of using ECHO, as discussed above.
Comparatively, the ASIG algorithm incorporates NT-proBNP and PFTs for initial screening, with subsequent ECHO if the initial screen is positive. 33 Positive findings prompting ECHO referral are DLCO <70% predicted with FVC/DLCO ≥1.8, and/or NT-proBNP >210 pg/mL at initial screening. If the ECHO indicates high risk, the patient is referred for RHC. If all testing is negative at baseline, NT-proBNP and PFTs are repeated in one year and annually thereafter. 33
The DETECT Algorithm
The DETECT study targeted a population at high risk for PAH, enrolling 644 patients with DLCO <60% predicted from multiple countries in North America, Europe and Asia. 34 The resulting DETECT algorithm identified a combination of clinical, laboratory, electrocardiographic and PFT parameters for initial screening, with subsequent ECHO if high risk, and RHC if the ECHO results are also high risk. 34 These variables are telangiectasias, NT-proBNP, serum urate, anti-centromere antibody, FVC/DLCO ratio, and right axis deviation on electrocardiogram. Based on these variables, risk is calculated via a web-based calculator (http://detect-pah.com) and considered "high" if >300. High-risk patients are referred for ECHO, and the ECHO variables are then entered into the web calculator, which yields risk points; if the risk points exceed 35, the patient is referred for RHC.
The following limitations to the use of the DETECT algorithm should be noted: (1) the online DETECT calculator is currently not accessible to US residents, (2) it has not been validated for use in SSc patients with DLCO ≥60% predicted, and (3) it does not provide further recommendations if the initial screening or ECHO results are low risk.
In addition to clinical performance, the cost-effectiveness of these screening algorithms is an important area for further study. For instance, using ECHO in every asymptomatic patient as the initial screening step in the ESC/ERS algorithm may not be cost-effective. On the other hand, the DETECT algorithm uses non-ECHO variables for its first step, but includes PFT and laboratory tests, which can also be costly.
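To make the gating logic of these algorithms concrete, the Python sketch below encodes the first-step thresholds stated above. Only the published cutoffs are used; the DETECT point scores themselves come from the proprietary web calculator and are deliberately not reproduced here, and the function names are ours.

```python
def asig_step1_refer_to_echo(dlco_pct, fvc_pct, ntprobnp_pg_ml):
    """ASIG first step: refer for ECHO if the PFT criterion is met
    (DLCO < 70% predicted with FVC/DLCO >= 1.8) or NT-proBNP > 210 pg/mL."""
    pft_positive = dlco_pct < 70 and (fvc_pct / dlco_pct) >= 1.8
    return pft_positive or ntprobnp_pg_ml > 210

def detect_gate(step1_points, step2_points=None):
    """DETECT two-step gate: >300 points refers to ECHO; on ECHO,
    >35 points refers to RHC. Scoring is external (web calculator)."""
    if step1_points <= 300:
        return "routine follow-up"
    if step2_points is None:
        return "refer for echocardiography"
    return "refer for RHC" if step2_points > 35 else "clinical follow-up"
```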
PAH Specific Management: Challenges and Potential Solutions
Medications currently used in SSc-PAH are mainly extrapolated from IPAH studies. Unlike in IPAH, calcium channel blockers are not used for PAH specifically, given the rarity of vasoreactivity in SSc-PAH (1%) and the low probability of observing a sustained response. 38 However, this does not preclude the use of these medications for the management of Raynaud's phenomenon.
Anticoagulation has long been discussed as an adjunct to SSc-PAH treatment, given observations of in situ microvascular thrombosis in the lung histology of these patients. An observational study of SSc-PAH patients showed no survival benefit associated with warfarin exposure. 39 Therefore, given the lack of reported survival benefit and the high risk of gastrointestinal bleeding in SSc patients due to gastric antral vascular ectasia, telangiectasias or erosive esophagitis, anticoagulation is currently not recommended in these patients. However, the evidence behind this recommendation is weak, and further studies need to be performed. Currently, an Australian multicenter, double-blind, placebo-controlled trial investigating apixaban use in SSc-PAH is ongoing, which may shed light on anticoagulation use in these patients (ACTRN12614000418673). 40 Treatment regimens include four classes of medications: endothelin-receptor antagonists (ERAs), prostacyclin analogs (PAs), phosphodiesterase-5 (PDE-5) inhibitors and guanylate cyclase stimulators (GCS). We discuss the evidence behind the use of each class here. All currently registered ongoing trials in SSc-PAH, and PH trials with planned CTD-PAH enrollment, are summarized in Table 2.
Endothelin Receptor Antagonists
ERAs include the nonselective endothelin-A and -B receptor antagonists bosentan and macitentan, and the endothelin-A selective receptor antagonist ambrisentan. ERAs can be used alone or in combination in WHO class II and III patients, and in combination with parenteral therapy in WHO class IV patients. Evidence behind the use of ERAs alone is limited. A randomized clinical trial in PH included a small group of SSc-PAH patients (N=44), and subgroup analysis showed stabilization of six-minute walk distance (6MWD) in these patients treated with initial bosentan therapy compared to placebo. 41-44 Following this result, nonselective cohort studies of bosentan monotherapy in SSc-PAH patients showed improvement in functional class, 6MWD and hemodynamics after an average of 3-6 months of use, and then stabilization of these measures after nine months to one year, likely due to the progressive nature of the disease. 44-46 Data on ambrisentan or macitentan monotherapy specifically in SSc-PAH are restricted to two primary studies. First, a single-center, 24-week, open-label study of ambrisentan in patients with exercise-induced SSc-PAH showed significant improvement in hemodynamics and 6MWD. 47 Second, the randomized, double-blind, placebo-controlled trial of macitentan (SERAPHIN) showed a reduction in morbidity and mortality with macitentan use in patients with PAH. 48 The trial included 70 and 82 CTD-PAH patients in the drug and placebo arms, respectively; however, no subgroup analysis was done to report outcomes in these patients.
Phosphodiesterase-5 Inhibitors
PDE5 inhibitors include oral sildenafil, tadalafil and vardenafil. To our knowledge, no studies have examined tadalafil or vardenafil as monotherapy in SSc-PAH, and data on sildenafil use specific to SSc-PAH are minimal. An open-label, uncontrolled study of sildenafil from India showed improvement in hemodynamic measurements and 6MWD within three months in 17 patients with SSc-PAH. 49 Post hoc subgroup analysis of the double-blind, placebo-controlled SUPER-1 trial showed improvement in exercise capacity, hemodynamics and functional class with 12 weeks of sildenafil use in patients with CTD-PAH (including 38 patients with SSc-PAH). 50
Combination Therapy of ERA and PDE5 Inhibitors
Given the loss of clinical improvement after a variable period of time on bosentan monotherapy, one option is adding another agent to bosentan. In a retrospective study including patients with SSc-PAH and IPAH, the addition of sildenafil to bosentan monotherapy in patients with clinical deterioration was associated with improvement in functional class and 6MWD in IPAH, but not in SSc-PAH patients, and there were higher rates of liver toxicity and mortality in the SSc-PAH group. 51 It should be noted that sildenafil and bosentan interact, leading to increased bosentan levels and reduced sildenafil levels.
Studies have also investigated outcomes with combinations of ERAs and PDE5 inhibitors started at different time points in the disease course. An open-label clinical trial of upfront combination therapy with ambrisentan and tadalafil showed significant improvement in hemodynamics (reduction in PVR by 55% and RV mass by 14%), functional class, Borg dyspnea score and quality of life in patients with SSc-PAH. 52 Similarly, a subgroup analysis of the AMBITION trial examined the effect of upfront combination ambrisentan-tadalafil treatment versus monotherapy with either agent on the risk of clinical failure, and on changes in NT-proBNP and 6MWD, in SSc-PAH patients. 53 It showed a lower risk of clinical failure (21% vs 40%), as well as greater improvement in NT-proBNP and 6MWD, in the combination group compared to pooled monotherapy. A retrospective analysis of SSc-PAH patients from the PHAROS registry also showed that patients initially treated with an ERA alone had an increased risk of clinical worsening compared to patients on a PDE5 inhibitor alone or on combination PDE5 inhibitor and ERA therapy. 54 Apart from improvements in functional and quality-of-life measures with combination therapy, a recently published retrospective study of a Spanish nationwide SSc-PAH cohort examined survival rates in patients treated with monotherapy (ERA or PDE5 inhibitor alone) and with upfront (initiation of drugs within <12 weeks of each other) or sequential (initiation of drugs ≥12 weeks apart) combination therapy. The study showed higher survival rates in patients treated with upfront and sequential combination therapy than with monotherapy; however, sequential combination therapy had a higher survival benefit than upfront combination therapy at one (95.8% vs 94.1% vs 78%), three (80.5% vs 51.8% vs 40.7%) and five years (56.5% vs 34.5% vs 31.6%). 55 All in all, based on the accumulating evidence over the last six years, initial oral combination treatment is recommended in SSc-PAH patients with WHO class II disease.
Prostacyclin Agonists
The prostacyclin agonists include parenteral epoprostenol, parenteral and inhaled treprostinil and iloprost, and oral selexipag. Except for oral selexipag (which can be used in class II), the parenteral and inhaled agents are often reserved for WHO class III and IV patients in clinical practice.
The efficacy of parenteral epoprostenol was studied in a 12-week, open-label study, which showed improvement in exercise capacity and hemodynamics compared to conventional therapy in 111 patients with SSc-PAH. 56 An open-label extension of the same study could not provide long-term outcomes for these patients due to technical limitations, but reported a 3-year survival rate of 52%, higher than in historical cohorts. 57 Similarly, subcutaneous treprostinil showed significant improvement in hemodynamics and a nonsignificant trend toward improvement in quality of life and 6MWD in a subgroup analysis of a double-blind, placebo-controlled, 12-week trial. 58 Inhaled iloprost has been reported to improve hemodynamics, functional class and quality of life after a mean follow-up of 13.2 months in five patients with CREST syndrome-related pulmonary hypertension. 59 A 24-week open-label study was performed in patients with PAH, including 13 patients with CTD-PAH; however, no subgroup analysis was done to specifically report efficacy in this group. 60 Lastly, selexipag, the only oral agent among the prostacyclin analogs, has been shown to improve a morbidity/mortality composite endpoint and hospitalization rate compared to placebo in a subgroup analysis of the double-blind, placebo-controlled GRIPHON trial. 61
Guanylate Cyclase Stimulators
Riociguat is an oral guanylate cyclase stimulator. Data are limited to subgroup analyses of the PATENT-1 and PATENT-2 trials, which showed slight improvement in 6MWD, functional class and hemodynamic parameters compared to placebo with 12 weeks of riociguat use in patients with CTD-PAH (including 40 SSc-PAH patients). 62
Outcomes
Three-year survival in SSc-PAH ranges from 61.4% to 75% depending on the availability of treatments at the time of diagnosis and the distribution of prognostic indicators in different cohorts, and was around 50-56% in newly diagnosed patients. 5,9,63 Predictors of mortality in patients with SSc-PAH have been investigated in multiple studies, including the large PHAROS and REVEAL registries, and include age >60 years, male sex, systolic blood pressure ≤110 mmHg, pericardial effusion, PVR >32 Wood units, DLCO <39% predicted, poor functional status, 6MWD <165 m and BNP >180 pg/mL. 5,13,63,64 Causes of death were SSc-related within four years of SSc diagnosis, while SSc-related and unrelated causes were equally distributed ≥4 years after SSc diagnosis. 64 SSc-related causes of death were PAH-related in the majority of patients, with infection, renal crisis, and cancer following as SSc-unrelated causes of death. 5,64

Conclusion

SSc-PAH is a serious complication and the leading cause of death in patients with SSc. Diagnosis of SSc-PAH can be challenging due to its nonspecific clinical presentation, which can lead to delayed diagnosis. Early recognition and treatment of SSc-PAH are known to improve survival in these patients. Therefore, many screening algorithms have been developed to detect SSc-PAH in its early stages. Currently used PAH-specific medications are largely extrapolated from IPAH studies due to the smaller number of patients with SSc-PAH. There is great interest in current and future drug trials dedicated to the SSc-PAH population given its overall worse survival and weaker response to treatment.
Disclosure
Robyn T Domsic is a consultant for Eicos Sciences. The authors report no other conflicts of interest in this work.
From Exercise to Cognitive Performance: Role of Irisin
The beneficial effects of exercise on the brain are well known. In general, exercise offers an effective way to improve cognitive function in all ages, particularly in the elderly, who are considered the most vulnerable to neurodegenerative disorders. In this regard, myokines, hormones secreted by muscle in response to exercise, have recently gained attention as beneficial mediators. Irisin is a novel exercise-induced myokine that modulates several bodily processes, such as glucose homeostasis, and reduces systemic inflammation. Irisin is cleaved from fibronectin type III domain containing 5 (FNDC5), a transmembrane precursor protein expressed in muscle under the control of peroxisome proliferator-activated receptor-γ coactivator-1α (PGC-1α). The FNDC5/irisin system is also expressed in the hippocampus, where it stimulates the expression of the neurotrophin brain-derived neurotrophic factor (BDNF) in this area, which is associated with learning and memory. In this review, we aimed to discuss the role of irisin as a key mediator of the beneficial effects of exercise on synaptic plasticity and memory in the elderly, positioning it among the main promoters of the beneficial effects of exercise on the brain.
Introduction
Physical activity refers to body movement that is produced by the contraction of Skeletal Muscle (SkM) and that increases energy expenditure. It includes activities in the workplace (e.g., typing), around the house (e.g., household chores, such as cleaning) and during leisure time (e.g., walking, swimming, dancing, cycling). Regular physical exercise (PE) is considered a definite intervention to maintain cognitive performance in older adults. In particular, cognitive stimulation and regular PE have been successfully related to improvement in brain plasticity [20,21]. However, age-related cognitive decline continues to occur, including in the deterioration of memory [22].
Memory enables the storage and recovery of data in order to adapt to environmental stimuli. In 1968, Atkinson and Shiffrin suggested the "modal" model of memory, in which memory was characterized by (i) a sensory store, (ii) short-term memory (STM), and (iii) long-term memory (LTM). Through the sensory store, memory can briefly register sensory inputs; through STM, it can archive and use data for a short period. STM is also defined as primary or active memory, and comprises different memory systems. LTM permits us to collect an indefinite amount of information for an indefinite time [23,24].
The anatomical seat of memory is mainly the hippocampus [25], a neural structure with complex efferent and afferent connections to various brain cortices that exerts a fundamental role in learning [26]. Consequently, STM and LTM are archived mainly in the hippocampus, but also in other cortical regions. This memory network allows us to understand environmental stimuli more rapidly, planning and enacting adaptive responses. The brain can create new memories to learn more and/or use previous memories to inform behavior [27-29].
Recent studies have shown that functional and morphological changes occur in parallel in the aging brain, most importantly at the hippocampal and pre-frontal cortex levels [30]. At present, numerous interventions have been proposed to ameliorate memory deficits and slow down the onset of mild cognitive impairment or dementia. Among these multifactorial interventions, approaches include stress reduction (e.g., meditation), dietary regimens, and regular PE [31]. PE has been described as having particularly beneficial effects. In fact, regular PE helps to prevent serious diseases including diabetes, cancer, and cardiovascular and neurodegenerative diseases. Moreover, PE can improve mood and quality of sleep, and reinforce the immune response [32-37].
Older adults are characterized by significant atrophy of grey and white matter, primarily in the hippocampus and pre-frontal cortex. PE is associated with improvement of cardiorespiratory fitness and reduced loss of grey and white matter in the temporal, frontal and pre-frontal regions [38]. At cellular and molecular levels, PE increases neurogenesis, cerebral plasticity and synaptogenesis in the hippocampus. In these processes, PE acts on the downstream expression and release of BDNF, improving cerebral oxygen uptake and, in turn, enhancing memory formation [39]. PE has also been shown to promote memory by increasing dopaminergic activity in the basal ganglia [40].
Several studies support the hypothesis that a multifactorial approach is more effective in counteracting aging-related memory decay. The combination of PE and mental exercises improves cognitive function in the healthy elderly more effectively than treatment with one stimulus alone. Specifically, the application of a combination of home-based PE and digitalized cognitive stimulation for 16 weeks ameliorated the performance of verbal episodic memory [41]. Additionally, elderly people who underwent a combined intervention for 12 weeks showed greater improvement in executive and memory functions [42].
The mechanisms by which PE exerts its positive effects on health are numerous; however, the expression and release of myokines represent one of the most important. In 2007, Pedersen and coworkers were early proponents of the idea that SkM releases a large number of molecules, which were named myokines [43]. Some years later, the same authors suggested a new model in which SkM is a secretory organ, synthesizing and secreting myokines in response to muscle contraction. The main function of these factors is to regulate muscular function and metabolism [44]. Subsequently, myokines have been increasingly recognized as a protective element against the negative effects of physical inactivity and physiological aging. The most important myokines include β-aminoisobutyric acid, BDNF, decorin, fibroblast growth factor-21, follistatin-like protein-1, insulin-like growth factor-1, irisin, leukaemia inhibitory factor, meteorin-like, myonectin, myostatin, and the immune mediators IL-4, IL-6, IL-7, IL-8, and IL-15 [45,46].
The myokines act at a systemic level, exerting specific roles in different organs and tissues, including modulatory activity on the central nervous system (CNS) [47]. Many of the protective effects of PE on the CNS are facilitated by components of the neurotrophin family, particularly BDNF. As ascertained by studies on animal models, it acts in a paracrine or autocrine manner on energy balance, improving insulin signaling, regulating motoneuron survival, and playing an essential role in synaptic plasticity, neuronal survival and differentiation [48-50]. In humans, several studies on healthy subjects suggested a significant association between PE, peripheral levels of BDNF, cognitive performance, and the volume of the hippocampus [51-53]. Growing evidence is accumulating on the pleiotropic functions of irisin and on its precise mechanism of action in the brain. However, evidence regarding the correlation between PE and irisin effects at a systemic level, as well as the association between irisin response and cognitive functions in older populations, is still limited. In the following sections, we describe irisin structure and expression, and speculate on its role in contributing to the improvement of cognitive performance by PE.
Irisin Structure
About 20 years ago, two independent groups identified FNDC5 as a protein exerting a role in the differentiation of myoblasts. These first findings suggested that the gene was highly expressed in SkM, but also in cerebral and cardiac tissues [54,55]. Human FNDC5 is a type I membrane protein of 212 amino acids (aa). The N-terminal is a signal sequence needed for final maturation and cleavage, the C-terminal is the cytoplasmatic domain, and in the middle there are a fibronectin III (FNIII) domain, an unknown domain, and a hydrophobic transmembrane domain [7,56,57]. Irisin represents the segment of FNDC5 that is cleaved under stimuli such as PE or cold. The portion contains 112 aa, formed by residues 29 to 140, including the tail at the C-terminus, the central FNIII domain, and the tail at the N-terminus [7,57]. The resulting peptide has a molecular weight of 12 kDa and dimerizes through the FNIII domain [56]. Irisin undergoes post-translational modification by N-glycosylation at two different residues. This modification, in addition to dimerization, brings the molecular weight up to 35 kDa. Protein stability and irisin secretion are regulated by N-glycosylation. The process is strictly dependent on the presence of the signal peptide at the N-terminal and is important for irisin activity. The loss of glycosylation does not allow irisin to exert its main function in the browning stimulation of white adipose tissue (Figure 1) [58].
In contrast to rodents, the human FNDC5 gene has an ATA as a start codon instead of ATG, generating a transcript with very low translation efficiency [59]. More recently, Albrecht et al. (2020), identifying other non-canonical start codons, suggested that in SkM there are several transcripts of the human FNDC5 gene [60]. Studies on transcriptome profiling through RNA sequencing (RNA-Seq) by the Functional Annotation of the Mammalian Genome/Genotype-Tissue Expression Project established that the FNDC5 gene is mainly expressed in SkM, the heart, and several regions of the brain, mainly the cerebellum, but also the hippocampus, cortex, and medulla oblongata [61].
A receptor for irisin has not yet been identified. The only evidence is provided by an interesting study by Kim et al. (2018), which showed that this myokine exerts its biological function by binding the integrin family of proteins. The integrins are ubiquitously expressed transmembrane receptors, consisting of eighteen α- and eight β-subunits, forming a total of 24 different heterodimers that are also able to recognize soluble ligands. Kim and coworkers described how the binding of irisin to the αVβ5 integrin heterodimer occurs in human adipocytes and osteocytes. Using the integrin inhibitor RGD peptide, which binds to αVβ5 in a selective manner, they also showed that any signaling response induced by irisin was significantly suppressed in these cells [62].
Irisin Functions
Bostrom and colleagues were the first to show that irisin levels in the blood increase after PE, describing an increase of 65% in blood concentration in mice submitted to regular running for 21 days [7]. The level of irisin after PE depends on the type of physical activity, with aerobic training inducing serum irisin more strongly than resistance exercise [63,64].
In general, the level of irisin is influenced by lifestyle, characterized by specific residential place and associated activities, as suggested by the differences recorded between rural and urban inhabitants. Irisin concentration is lower in urban citizens, with a mean value of 3.6 ng/mL, while active individuals living in rural areas had a mean value of 4.3 ng/mL [65]. PE increases the level of irisin in the blood of healthy people [7] and of people with metabolic disorders [66]. Its circulating level is also related to the phenotype of different diseases, such as obesity, type 2 diabetes [67], chronic renal disease [68] and hypothyroidism [69].
The first identified function of irisin was the "browning" of adipose tissue, in which irisin increases the expression of the mitochondrial protein uncoupling protein-1 (UCP-1) in mature fat cells, allowing the conversion of the white adipose tissue (WAT) phenotype to the brown adipose tissue (BAT) phenotype. The process ends with the formation of a third adipose tissue phenotype, named beige/brite adipose tissue. Irisin and PGC-1α regulate the expression of UCP-1 and thermogenesis in BAT, driving the metabolism of glucose and lipids toward increased energy consumption [70,71]. Furthermore, irisin is implicated in glucose homeostasis, acting on different cell types and tissues involved in glucose metabolism, such as adipose tissue, SkM, the liver, and pancreatic β cells. Due to this property, irisin is able to improve insulin sensitivity under insulin resistance (IR) conditions [72]. A decrease in irisin levels was associated with an increased risk of metabolic syndrome and hyperglycemia in obese adults. This myokine shows negative associations with fasting insulin and glycosylated hemoglobin [73]. Previous studies also suggested its negative correlation with fasting glucose and HOMA-IR in school-age students of both genders [74], while others reported positive associations with insulin concentration, fasting glucose, and HOMA-IR [66,75-77].
Irisin has other specific functions, including in the heart and liver, where it exerts antiapoptotic effects on cardiomyocytes and hepatocytes through the induction of autophagy [45], and protects cells from ischemia-reperfusion injury [78,79]. At the bone level, it has a favorable effect and represents a key molecule in the crosstalk between this tissue and SkM. Specifically, irisin increases the mass and strength of cortical bone, positively modifying its geometry by reducing the secretion of osteoblast inhibitors and driving the expression of bone-specific genes [80]. In immune system functioning, irisin mediates the positive effect of regular/moderate physical activity, contributing to a reduction in systemic inflammation and consequently protecting from the development of diseases associated with chronic inflammation [81].
The functions of irisin in the brain are described in more detail in the section "Irisin: A New Bridge between Exercise and Cognitive Functions". In brief, this myokine increases the proliferation of hippocampal neuronal cells [15] and reduces the neuronal damage mediated by pro-oxidant stimuli [14]. The FNDC5/irisin system is important for long-term potentiation and memory in the mouse hippocampal region, being involved in establishing synaptic plasticity and memory [82], and may contribute to the antidepressant effect of PE together with serotonin, via the activation of the PGC-1α/BDNF pathway [83].
Expression of FNDC5/Irisin
In this section, the mechanisms involved in regulating FNDC5/irisin expression in SkM and the CNS are considered. Although the expression of the FNDC5 gene occurs in several tissues in both humans and rodents, we have focused on the SkM because it represents the major peripheral source of irisin [7,66]. We also considered the expression in the brain, due to the relevant role of irisin of CNS origin in cognitive functions.
Skeletal Muscle
The expression of the FNDC5 gene in SkM cells differs between muscle fiber types. Generally, slow-type fibers show higher expression compared to fast-type fibers at rest; PE is able to induce FNDC5 expression in all types of fibers, following an expression pattern regulated by exercise type and duration. For example, in a mouse model, aerobic exercise (i.e., running wheel) induces equal expression of FNDC5 in different muscle fibers [7,84-86].
While the FNDC5 gene is expressed in different tissues, the main sources of circulating irisin are the SkM during PE, and adipose tissue. Therefore, irisin can be considered both a myokine and an adipokine [7,84]. In the SkM, irisin expression is mainly mediated by PGC-1α through its interaction with a number of transcription factors implicated in energy requirement [87-89]. These proteins can induce the expression of FNDC5 in different manners: Yang et al. (2018) recently suggested in vitro that in C2C12 myotubes the expression of FNDC5 is modulated by the cAMP response element-binding protein (CREB) through its binding with PGC-1α [90]. It is important to remember that CREB is activated by aerobic exercise and that cAMP signaling activates CREB in the SkM during exercise to manage metabolic adaptation [91,92]. In addition, endurance exercise induces the expression of FNDC5 in the quadriceps via the PGC-1α/estrogen-related receptor alpha (ERRα) pathway [8]. Retinoic acid (RA) is another inducer of FNDC5 expression in muscle. RA is a natural ligand of the retinoid X receptor (RXR). This receptor also represents a ligand-activated transcription factor that binds to RA responsive elements (RARE) in the regulatory sequences of genes regulated by PGC-1α [93]. The expression of FNDC5 is increased by treatment with RA in differentiated C2C12 myocytes, also in a PGC-1α-independent manner [94] (Figure 2). During PE, PGC-1α accelerates mitochondrial biogenesis and regulates glucose/fatty acid metabolism, favoring the switch from fast to slow fiber contraction. In this process, the increased expression and activity of PGC-1α is mediated by the increased Ca2+ influx into the fibers' cytoplasm [95].
Conversely, other conditions can inhibit the expression of FNDC5 in SkM. For example, FNDC5 expression is reduced by myostatin. This myokine exerts an opposite effect to FNDC5, inhibiting the differentiation of myoblasts [96]; when it is silenced, the mRNA expression levels of PGC-1α and FNDC5 increase significantly [97] (Figure 2). Comparably, the expression of PGC-1α and FNDC5 may be suppressed by the protein Mothers against decapentaplegic homolog 3 (SMAD3) in C2C12 mouse myoblasts, while in knockout Smad3 mice, aerobic exercise increases serum irisin in comparison to wild-type mice [86]. Furthermore, Varela-Rodriguez et al. (2016) showed that 48 h of fasting decreased the circulating level of irisin and the expression level of FNDC5 in SkM, while the intraperitoneal injection of insulin for 2 weeks had a comparable action on FNDC5/irisin levels in plasma and SkM [98].
Finally, it is worth mentioning the growing interest in the potential use of irisin as a bona fide biomarker and potential target for the complex management of sarcopenia and muscle loss, even in subjects after spinal cord injury [80,99-101]. Future studies are warranted to investigate the role of irisin as a biological bridge between exercise and muscle metabolism.
Brain
Several studies suggested that the FNDC5/irisin system is expressed in the brain, where its specific roles have not yet been well established. Studies on animal models showed that this expression depends on the brain region: irisin is expressed in the ventromedial and arcuate hypothalamic nuclei in primates [102], and in the cortex, hippocampus, and other areas such as the vestibular nuclei of the medulla oblongata and the Purkinje cells of the cerebellum in mice [9].
Irisin expression in the CNS is regulated by several environmental, physiological and pathological conditions. Aerobic PE significantly increases FNDC5 mRNA in the mouse hippocampus [8,103]. Recently, Yu et al. (2020) suggested that additional stimuli, such as environmental enrichment (EE), can increase the expression of FNDC5 in the pre-frontal cortex [104], where it is useful in promoting neurogenesis and general cerebral activity, increasing the neuronal capacity to recover from wounds [105,106]. With regard to pathological conditions, the expression of FNDC5 is reduced in the pre-frontal cortex and hippocampus of Alzheimer's disease (AD) patients [107]. Nevertheless, the injection of recombinant irisin into the hippocampus and/or the lateral ventricle of rodent models reduces stress-mediated anxiety, depression and memory dysfunction [108-110].
There is increasing evidence relating to FNDC5/irisin brain expression. Wrann et al. (2013) described that PGC-1α is responsible for the induction of FNDC5 expression in primary neurons and the hippocampus [8]. More recently, it was found that FNDC5 induction in the hippocampus is regulated by the activation of cAMP/PKA [104], and that hippocampal FNDC5 expression is also induced by the lactate released from SkM during PE [107]. The FNDC5 promoter has been described as containing an ERRα-binding element (ERRE) in its upstream region, where PGC-1α increased FNDC5 expression by activating ERRα, which then triggered a negative feedback onto PGC-1α/ERRα in primary cortical neurons [8]. Further studies are needed to investigate the upstream regulatory sequences of the FNDC5 promoter, in order to better define how different stimuli can influence FNDC5 expression in the brain. These analyses could clarify which binding elements are regulated by different transcription factors.
A number of studies suggested that irisin can also be endocytosed and that it exerts a role in mediating endocytosis and exocytosis. Regarding the endocytosis process, Lourenco et al. (2019) suggested that irisin binds to an as yet unrecognized receptor in the CNS, particularly on the surface of hippocampal neurons and astrocytes, starting an endocytosis process [107]. In addition, the subcutaneous injection of irisin increased glucose uptake at the brain level, suggesting its potential role in augmenting the endocytosis of glucose transporters [13]. Regarding exocytosis, the only evidence comes from Zhang et al. (2018), who showed that irisin increased insulin secretion in mouse pancreatic islet cells in response to glucose [111].
Irisin: A New Bridge between Exercise and Cognitive Functions
As mentioned above, irisin exerts its role at a systemic level, including in the CNS [46,47]. Since 2016, several findings on irisin's impact on cognitive functions have been made. These findings suggest that the beneficial effects of PE in counteracting memory degradation are mediated by irisin of both peripheral and central origin. In turn, the impact of irisin on cognition is to a large extent elicited by the induction of the neurotrophin BDNF.
In general, BDNF is essential for brain development due to its actions in neuronal survival, differentiation, and migration, as well as its role in dendritic arborization and in regulating synapse genesis and plasticity. Consequently, BDNF is fundamental for hippocampal function and learning [112-114]. It has been widely described that higher levels of BDNF have a beneficial effect on many cognitive processes, such as verbal, recognition, spatial, and episodic memory [115,116]. In humans, a mutation in the BDNF gene (i.e., Val66Met) is associated with a decreased level of BDNF, with the affected subjects characterized by a higher level of mood disturbances, such as increased anxiety and depression, alterations in episodic memory, and reduced volume of specific regions of the brain [117,118].
BDNF is related to the positive effect of PE on the CNS, in particular acting on the above-cited neuronal survival/differentiation and synaptic plasticity [48,49,53]. In humans, circulating BDNF has been successfully associated with PE, cognitive performance and hippocampal volume [51,52]. In several studies, Vaynman and colleagues suggested that the blockage of BDNF signaling with a specific antibody (anti-TrkB) significantly reduced the improvement in acquisition and retention during spatial memory tasks induced by PE; this inhibition was paralleled by a decrease in the expression of synaptic proteins [119,120]. Accordingly, other studies suggested associations between PE, circulating BDNF levels and hippocampal volume [121,122], which declines with advanced age [123].
Notably, the activation of the FNDC5/irisin system in the brain is an important inducer of BDNF. The overexpression of FNDC5 in primary cortical neurons increases BDNF expression, and in a similar manner BDNF expression is significantly abrogated by RNAi-mediated knockdown of the FNDC5 gene [124]. An animal model showed that the irisin precursor FNDC5 could mediate the beneficial CNS effects of endurance exercise by upregulating BDNF expression in the hippocampus [8]. Accordingly, the above-mentioned human polymorphism of BDNF, Val66Met, was shown to affect both BDNF and FNDC5 expression in the brain after PE [125]. Furthermore, in a mouse model of AD, the neurogenesis mediated by PE in the hippocampus was associated with a similar induction of BDNF and FNDC5 expression, facilitating improvement in cognitive functions [126]. Schnyder et al. (2015) suggested that endurance PE elevates systemic irisin levels and induces the expression of FNDC5 in the hippocampus via PGC-1α, leading to BDNF expression. This process culminates in the induction of neurogenesis in this region [127]. In two recent studies, Lourenco and coworkers investigated the relationship between potential alterations of the FNDC5/irisin system and AD. They showed that silencing FNDC5 with a specific small hairpin RNA in the mouse brain determined the loss of long-term potentiation (LTP) at the hippocampal level. A similar loss of LTP was induced in a model of AD obtained by injecting amyloid-β oligomers (AβOs), causing memory and behavioral defects. The injection of recombinant irisin in its glycosylated form was able to reverse these effects on LTP loss and behavioral alterations. In an additional approach, the same authors injected an adenovirus expressing FNDC5 into the brain, injecting AβOs after six days and obtaining an analogous recovery of the animals. PE also reversed the behavioral defects of the AβO injection, supporting the idea from previous data that the induction of FNDC5 in the hippocampus is mediated by PE. The following year (2020), Lourenco and colleagues suggested a positive correlation between cerebrospinal fluid irisin and BDNF levels, and memory, in a study on AD patients and control subjects [107,128]. These findings supported earlier evidence in animals of a relationship between FNDC5/irisin-BDNF and neuroplasticity in the brain, as an element of the pathway linking PE and cognitive functions [8,15]. Moreover, irisin may contribute to the antidepressant effect of PE together with serotonin, via the upstream activation of the PGC-1α/BDNF pathway [83].
In another study, Li et al. (2017) described a role for irisin in the brain and cognitive modulation without mentioning BDNF, suggesting that irisin also has a protective effect against adverse environmental stimuli. They described an irisin-mediated reduction of the neuronal damage obtained by inducing an oxidative stress condition; this beneficial effect was caused by the inhibition of the expression and secretion of canonical proinflammatory cytokines [14].

Altogether, these findings suggest that the brain activation of the FNDC5/irisin system could be the mediator by which PE induces neurogenesis at a molecular level, highlighting that an important association exists between irisin and BDNF [116]. See Figure 3 for a summary representation.

Figure 3. FNDC5/Irisin signaling in the brain. Schematic representation of irisin action on a neuron. Irisin stimulates synaptic plasticity, neurogenesis and cognitive improvement by inducing the expression of brain-derived neurotrophic factor (BDNF). Physical exercise further induces brain FNDC5 expression and irisin release. cAMP: cyclic adenosine monophosphate; CREB: cAMP response element-binding protein; ERRα: estrogen-related receptor α; FNDC5: fibronectin type III domain containing 5; PGC-1α: PPARγ coactivator 1α; PKA: cAMP-dependent protein kinase.

Conclusions

This comprehensive review has shown how the interest in the myokine irisin has grown exponentially. Aerobic exercise has a significant impact on cognitive function in aging, and the FNDC5/irisin system acts as an important exercise-related factor on the aging brain. Its functions depend on irisin of peripheral origin or on its direct expression in the CNS, both of which are induced by PE. The administration of glycosylated recombinant irisin may improve cognitive function in animals, mimicking the effect of endurance exercise on specific brain regions such as the hippocampus. Physiological irisin levels decrease with age, although irisin expression in muscle is higher in elderly high-fitness men than in elderly low-fitness men; this is not true for the young [129-132]. Thus, it is possible that the aging-induced reduction in circulating irisin level can be restored by sustained endurance training, and that this effect might be age-specific.

In the future, it will be imperative to completely bridge the gap in the scientific literature on the relationship between exercise-linked irisin and its consequences on cognition in aging, considering that the therapeutic potential of exercise-linked irisin is very relevant, and that it could be highly useful in preserving cognitive performance and in improving the treatment of neurodegenerative diseases.
Implementation of Artificial Intelligence in Modeling and Control of Heat Pipes: A Review
Heat pipe systems have attracted increasing attention recently for application in various heat transfer-involving systems and processes. One of the obstacles to implementing heat pipes in many applications is their difficult-to-model operation, due to the many parameters that affect their performance. A promising alternative to classical modeling that is emerging for accurate modeling of heat pipe systems is artificial intelligence (AI)-based modeling. This research reviews the applications of AI techniques for the modeling and control of heat pipe systems. This work discusses the AI-based modeling of heat pipes, focusing on the influence of the chosen input parameters and the utilized prediction models in heat pipe applications. The article also highlights various important aspects related to the application of AI models for modeling heat pipe systems, such as the optimal AI model structure, the models' overfitting under small-dataset conditions, and the use of dimensionless numbers as inputs to the AI models. Also, the application of hybrid AI algorithms (such as metaheuristic optimization algorithms with artificial neural networks) is reviewed and discussed. Next, intelligent control methods for heat pipe systems are investigated and discussed. Finally, future research directions are included for further improving this technology. It was concluded that AI algorithms and models can predict the performance of heat pipe systems accurately and improve their performance substantially.
Introduction
Heat pipes are among the most efficient passive heat transfer technologies, capable of transporting large quantities of heat over long distances by latent heat (the phase-change process) [1,2]. Figure 1 shows a schematic diagram of a conventional heat pipe. A heat pipe consists of a sealed container/evacuated tube and a wick structure. It is partially filled with working fluid at liquid/vapor equilibrium and does not incorporate moving parts [3]. The heat pipe is basically divided into three sections: (1) an evaporator section, where the heat is absorbed from the source and the working fluid evaporates; (2) a condenser section, where the heat is dissipated into the surrounding environment (sink) and the working fluid returns to its liquid state; and (3) an adiabatic section [4].
Predicting the thermal performance of a heat pipe is difficult, as it is influenced by several parameters, which can be classified into three categories: (1) operational parameters such as heat input and filling ratio; (2) property parameters such as surface tension and thermal conductivity; and (3) geometrical parameters such as the lengths of the evaporator and condenser sections and the shape of the cross-section [24]. As several parameters affect the operation and performance of heat pipe systems and their effects overlap, the modeling of heat pipes is of very high complexity. A promising alternative to classical modeling that can overcome these issues is artificial intelligence (AI)-based modeling. The development of artificial intelligence (AI) technologies has been of great benefit to scientific research [25-27]. AI algorithms mainly refer to artificial neural networks (ANN), fuzzy logic (FL), genetic algorithms (GA), particle swarm optimization (PSO), etc. Such algorithms have been widely used to develop reliable and accurate prediction models for different systems in different fields [28-31]. AI models can be used to model various operational aspects of heat pipe systems while implicitly taking the internal complexities of the system into consideration.
Recently, Ahmadi et al. summarized the applications of machine learning methods in modeling various types of heat pipes [32]. However, at the time of publication of that work, the progress on utilizing machine learning models for the modeling of heat pipe systems was significantly limited compared to the current progress, as several works on this topic have been published in recent years [33-40], and thus a recent review that covers the current state of the art is required. Furthermore, the previous work by Ahmadi et al. [32] did not discuss various aspects of the implementation of AI models for the modeling of heat pipe systems, including the optimal AI model structure, the models' overfitting under small-dataset conditions, the use of dimensionless numbers as inputs to the AI models, and the AI-based control of heat pipe systems. This work covers this research gap by providing a recent review of the current progress while discussing several important aspects related to the successful implementation of AI models in heat pipe systems, to provide a practical guideline for future research on this topic. Moreover, this work provides critical future research directions for improving the current technologies. This work reviews AI methods for modeling and controlling systems in heat pipe applications. Section 2 provides an overview of different fundamental AI methods. Next, Section 3 investigates and discusses the use of AI technology in predicting and controlling heat pipe systems. Finally, the conclusions and future research directions are introduced in Section 4.
Background
Several AI techniques have previously been applied in the literature for the modeling and optimization of heat pipe systems. These techniques include artificial neural networks, fuzzy logic, the adaptive neuro-fuzzy inference system (ANFIS), and metaheuristic optimization algorithms. In this section, a brief background on the applied AI techniques is included. Artificial neural networks (ANNs) were the first AI algorithms used for modeling heat pipe systems. An ANN is a computational model capable of simulating the brain's behavior and performing various computational tasks by predicting a number of outputs from a number of inputs [41]. There are various types of neural networks based on different training algorithms, such as the multilayer perceptron neural network (MLPNN), the radial basis function (RBF) neural network, the convolutional neural network (CNN), etc. ANNs are employed in forecasting, control, modeling, and pattern classification applications [42].
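As a concrete illustration of this workflow, the following minimal Python sketch trains a small feed-forward ANN to map heat pipe operating parameters to thermal resistance. The data are synthetic, and the chosen inputs (heat input, filling ratio, inclination angle) and all hyperparameters are illustrative assumptions rather than the setup of any study cited below.

```python
# Minimal sketch: feed-forward ANN mapping heat pipe operating
# parameters to thermal resistance. The data are synthetic and the
# feature set is an illustrative assumption, not any cited study.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(10, 100, n),   # heat input Q [W]
    rng.uniform(0.2, 0.8, n),  # filling ratio [-]
    rng.uniform(0, 90, n),     # inclination angle [deg]
])
# Synthetic stand-in for measured thermal resistance [K/W]
y = (5.0 / np.sqrt(X[:, 0]) + 0.5 * (X[:, 1] - 0.5) ** 2
     + 0.001 * X[:, 2] + rng.normal(0, 0.02, n))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

scaler = StandardScaler().fit(X_train)          # normalize the inputs
model = MLPRegressor(hidden_layer_sizes=(10,),  # one small hidden layer
                     activation="tanh", max_iter=5000, random_state=0)
model.fit(scaler.transform(X_train), y_train)

pred = model.predict(scaler.transform(X_test))
print(f"test R^2 = {r2_score(y_test, pred):.3f}")
```

The same pattern (scale the inputs, train a regressor, evaluate on held-out data) underlies most of the ANN studies reviewed below, differing mainly in the network type, the feature set, and the training algorithm.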
Another type of AI algorithm that has been applied in the modeling of heat pipe systems is fuzzy logic, which uses fuzzy if-then statements to perform model prediction and decision-making. Fuzzy logic can be applied in different applications such as temperature control, aircraft control, robotics, and many other control applications [43]. A hybrid AI model that combines the principles of ANNs and fuzzy logic and has been applied for modeling of heat pipe systems is the adaptive neuro-fuzzy inference system (ANFIS). ANFIS can model a large class of complex nonlinear systems to an anticipated grade of precision, and has previously been applied in the literature for forecasting, modeling, and control tasks [44]. Finally, metaheuristic optimization algorithms have been used in the literature to model heat pipe systems, mostly via hybridization with other types of AI models. Such algorithms include genetic algorithms (GA) [36], particle swarm optimization (PSO) [45], and the grey wolf optimizer (GWO) [46]. In summary, Table 1 compares the AI algorithms applied to model heat pipe systems.

Table 1. Comparison of the AI algorithms applied for modeling heat pipe systems.

ANN. Example applications: optimizing photovoltaic systems [47] and modeling hydrogen production [25].

Fuzzy logic. Advantages: flexible and allows modifications; output decisions can be interpreted easily; can handle multiple different inputs at the same time. Disadvantages: mainly dependent on the expertise of the designer; inaccurate designs result in wrong outputs; requires extensive testing with equipment.

Metaheuristic optimization. Advantages: faster convergence speed compared to classical optimization algorithms; lower computational cost; broad applicability; easy to hybridize with other algorithms. Disadvantages: not guaranteed to perform effectively on all tasks; some problems could result in significantly longer processing times; can be trapped in local optima; requires careful parameter tuning.
Current Progress and Discussion
Several works in the literature have discussed the application of various AI techniques for the modeling and optimization of heat pipe systems. AI techniques have been used to predict heat pipe systems' performance and operational parameters, and to optimize design and operational variables for maximizing the system's performance and response. In this section, the current progress on the application of AI techniques in heat pipe systems is reviewed and discussed.
Optimal ANN Structure for Heat Pipe Modeling
The performance and accuracy of an artificial neural network are influenced by its structure (number of hidden layers and neurons) and training algorithm. Artificial neural network (ANN) modeling was conducted by Patel and Mehta [52] to predict the thermal performance of a closed loop pulsating heat pipe (CLPHP). Eighteen different ANN models (radial basis, generalized regression, linear layer, cascade forward backpropagation, feed-forward backpropagation, feed-forward distributed time delay, layer recurrent and Elman backpropagation) involving different activation functions (linear (PURELIN), logistic sigmoid (LOGSIG), tangent sigmoid (TANSIG), and radial basis Gaussian function) were tested. It was found that a generalized regression neural network with a radial basis Gaussian function had the lowest mean absolute relative deviation among all ANN models and predicted the thermal performance of the CLPHP within an error range of ±1.81% compared to the experimental data. Furthermore, thermal performance prediction models for a pulsating heat pipe (PHP) using an artificial neural network (ANN) were discussed by Patel and Mehta [53]. A feed-forward backpropagation neural network was adopted, and eleven ANN models with different numbers of neurons were constructed based on 1652 experimentally obtained data points. The most accurate model was that with 14 neurons (R = 0.9447).
Another study by Salehi et al. [54] designed an optimized neural network using a genetic algorithm to predict the heat transfer characteristics of a silver/water nanofluid two-phase closed thermosyphon thermally enhanced by a magnetic field. The genetic algorithm was applied to optimize the number of neurons in the hidden layers, the learning rate coefficient, and the momentum. The optimal model was achieved with a two-hidden-layer structure with nine and six neurons, and its results showed excellent accuracy compared to the experimental data. A 96-neuron artificial neural network (ANN) model was constructed by Chavda [39] to investigate the thermal performance of a two-layer screen mesh-type cylindrical heat pipe using silver nanofluid. The built ANN models were divided into three categories, in which the heat pipe's performance was predicted based on: (1) one output parameter (thermal resistance); (2) two output parameters (thermal resistance and thermal conductivity); and (3) three output parameters (thermal resistance, thermal conductivity, and overall heat transfer coefficient). A single-layer feed-forward backpropagation network with six hidden-layer neurons predicted the values with the lowest prediction error for one output parameter (normalized mean square error NMSE = 0.000041). For two output parameters, a cascade feed-forward backpropagation network with 11 hidden-layer neurons predicted the thermal performance of the heat pipe with minimum error (NMSE = 0.00000019), and a feed-forward backpropagation network with 12 hidden-layer neurons predicted the values with the least prediction error for three outputs (NMSE = 0.000001). Lee and Chang [55] presented the application of a nonlinear autoregressive with exogenous inputs (NARX) neural network to study the thermal dynamics of a pulsating heat pipe in both the time and frequency domains. There was good agreement between the predicted and experimentally measured results, which demonstrates the effectiveness of the model/method in analyzing PHP dynamics.
In summary, optimizing the structure of the used AI model is highly important, as it directly affects the model's prediction accuracy and processing time. The current works mainly relied on trial-and-error for finding the optimal structure of the AI model with the best prediction capability.
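The sketch below illustrates how such a trial-and-error structure search could be automated: it simply loops over a few candidate hidden-layer layouts (including layouts of the kind reported above, e.g., 14 neurons, or a two-layer 9-6 structure) and keeps the one with the lowest validation MSE. The variables X_train, y_train, X_val, and y_val are assumed to hold a pre-scaled heat pipe dataset, as in the earlier sketch.

```python
# Minimal sketch of a structure search: train one MLP per candidate
# hidden-layer layout and keep the layout with the lowest validation
# MSE. X_train/y_train/X_val/y_val are assumed to be pre-scaled arrays
# from a (hypothetical) heat pipe dataset, as in the sketch above.
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

candidates = [(6,), (10,), (14,), (9, 6), (12, 8)]  # layouts to try
best_layout, best_mse = None, float("inf")

for layout in candidates:
    net = MLPRegressor(hidden_layer_sizes=layout, activation="tanh",
                       max_iter=5000, random_state=0)
    net.fit(X_train, y_train)
    mse = mean_squared_error(y_val, net.predict(X_val))
    print(f"{layout}: validation MSE = {mse:.4f}")
    if mse < best_mse:
        best_layout, best_mse = layout, mse

print("selected structure:", best_layout)
```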
Overfitting of Trained Prediction Models without Validation Sets
Some heat pipe modeling studies have constructed successful prediction models without including a dataset for validation (only training and testing), and a few studies have excluded the testing dataset (only training and validation), meaning there are no established criteria for building accurate prediction models. Implementing an AI-based heat pipe model that was only evaluated on specific scenarios, without validating its generalizability to other scenarios, could be highly detrimental, as the model could perform very poorly and thus worsen the system's performance. For example, Kahani and Vatankhah [37] investigated the effect of Al2O3 as a working fluid on the thermal performance of a wickless heat pipe (WHP) by developing an optimized artificial neural network (multilayer perceptron, MLP) using 52 experimentally obtained datasets (75% for training and 25% for testing). The effect of different parameters on a heat pipe solar collector (HPSC) was analyzed by Sivaraman and Mohan [56] using an artificial neural network (ANN). The study implemented a multilayer feed-forward ANN architecture consisting of two layers with six inputs and one output. 168 data points were used for training the network and 66 for testing, without validation. The simulated and experimental results were found to be very close, with a mean square error of 0.9234. However, such a small dataset with no validation set could result in an overfitted model, in which the model merely memorizes the received data and does not generalize well to unseen cases.
Meanwhile, Khandekar et al. [57] adopted a fully connected feed-forward multilayer ANN configuration using a backpropagation momentum learning algorithm to model pulsating heat pipe thermal performance. Two models were analyzed: the first was trained with 52 datasets (out of 72) within the typical operation range, i.e., with fill ratios between 20% and 85%; the second was trained with the whole dataset (72 sets). Both models showed satisfactory results, but the model trained on the typical PHP operation range provided better results than the model trained on the whole dataset. This demonstrates that the output of an ANN model might be negatively affected by training datasets that represent different phenomenological regimes of the system, as the ANN model is a typical black box, unaware of the physical phenomena guiding the system dynamics.
In summary, training AI models with a small dataset results in the model only performing well on the training set and achieving lower performance on the testing set. Moreover, using a validation set helps in verifying the robustness and accuracy of the models in cases different from the training case.
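As a minimal sketch of this practice, the code below splits a (hypothetical) heat pipe dataset three ways, so that model selection and final reporting never touch the same data, and additionally enables early stopping to limit memorization on small datasets; the 70/15/15 split ratios and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch: three-way split so that model selection (validation)
# and final reporting (test) never use the same data. X and y are
# assumed to be a small tabular heat pipe dataset as in the earlier
# sketches; the 70/15/15 ratios are an illustrative assumption.
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.30, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.50, random_state=0)

# early_stopping=True makes scikit-learn carve out an internal
# validation fraction and halt training once the score stops
# improving, which limits memorization on small datasets.
net = MLPRegressor(hidden_layer_sizes=(10,), early_stopping=True,
                   validation_fraction=0.15, n_iter_no_change=25,
                   max_iter=5000, random_state=0)
net.fit(X_train, y_train)

print("val  MSE:", mean_squared_error(y_val, net.predict(X_val)))
print("test MSE:", mean_squared_error(y_test, net.predict(X_test)))
```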
Prediction Models' Input and Output Parameters
Artificial intelligence technologies (mostly artificial neural networks) have proven reliable and efficient for predicting the thermal performance of different heat pipes. The input parameters of artificial neural networks are mainly those that have a significant influence on heat pipe operation, such as heat flux/input, filling ratio, number of turns, and lengths of the evaporator and condenser, while the thermal resistance is one of the most common output parameters of the network, usually used to measure the performance and efficiency of the heat pipe system.
Jia-qiang et al. [58] used a function chain neural network to predict the heat transfer performance of a looped copper-water oscillating heat pipe based on grey relational analysis (GRA). GRA was used to determine the main influencing factors based on experimentally obtained data. It was found that the charging ratio, inclination angle, and heat input are the main influencing factors (relational grade greater than 0.5). Thus, two function chain neural networks were built, with three inputs (charging ratio, inclination angle, and heat input) and four inputs (charging ratio, inclination angle, heat input, and number of turns), respectively. The relative error and fitting degree of both networks were almost the same (4% error for the three-input model and 5% error for the four-input model) when tested several times under different conditions; still, the four-input network was more complicated than the three-input one. Thus, the results suggest that only input variables with a relational grade greater than 0.5 should be considered when constructing a function chain neural network, to save computing time while guaranteeing acceptable fitting precision.
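For illustration, the sketch below implements a simplified, per-series variant of grey relational analysis on synthetic data: each candidate input series is normalized, compared point-wise against the normalized target series, and assigned a grey relational grade, with the 0.5 retention threshold mirroring the rule above. The distinguishing coefficient ζ = 0.5 is the textbook default; the variable names and data are hypothetical.

```python
# Minimal sketch of grey relational analysis (GRA) for input ranking:
# inputs whose grey relational grade against the target exceeds 0.5
# would be kept, as in the rule quoted above. The data are synthetic.
import numpy as np

def minmax(v):
    """Normalize a series to [0, 1] (min-max normalization)."""
    return (v - v.min()) / (v.max() - v.min())

def grey_relational_grade(x, y, zeta=0.5):
    """Grade of input series x against reference series y."""
    dx, dy = minmax(x), minmax(y)
    delta = np.abs(dy - dx)                       # pointwise deviation
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coeff.mean()                           # grade = mean coefficient

rng = np.random.default_rng(1)
n = 60
target = rng.uniform(0.5, 5.0, n)                 # e.g., thermal resistance
inputs = {
    "charging ratio":  target * 0.8 + rng.normal(0, 0.3, n),   # related
    "heat input":      target * -0.6 + rng.normal(0, 0.4, n),  # related
    "number of turns": rng.uniform(2, 10, n),                  # unrelated
}

for name, series in inputs.items():
    g = grey_relational_grade(series, target)
    print(f"{name:16s} grade = {g:.3f} -> {'keep' if g > 0.5 else 'drop'}")
```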
To predict the thermal resistance of pulsating heat pipes filled with ethanol, Ahmadi et al. [59] proposed four models, including a multilayer perceptron (MLP), a radial basis function network combined with a genetic algorithm (GA-RBF), a least squares support vector machine (LSSVM), and a conjugated hybrid of particle swarm optimization and an adaptive neuro-fuzzy inference system (CHPSO ANFIS). The filling ratio, the thermal conductivity of the tube, the inclination angle, the lengths of the adiabatic, condenser and evaporator sections, the heat input, and the inner and outer diameters were used as input parameters. A genetic algorithm (GA) was applied to the RBF model to obtain the optimum number of parameters, and PSO was applied to the ANFIS model to train the FIS and optimize the tuning process. The results showed that the GA-RBF model was the most accurate in predicting the PHP's thermal resistance, with a determination coefficient (R²) of 0.9892, as shown in Figure 2. The same input parameters were used by Ahmadi et al. [35] to estimate the thermal resistance and thermal conductivity of a pulsating heat pipe (PHP) with water as the working fluid using a group method of data handling (GMDH) neural network. The maximum relative error was approximately 35.8%, reaching less than 5% for thermal resistances higher than 10 K/W. In addition, the average relative deviation decreases and approaches zero for effective thermal conductivities higher than 10,000 W/(m·K). The results demonstrated that the GMDH method is an effective tool for predicting the thermal performance/heat transfer characteristics of PHPs and can be applied to PHPs filled with various operating fluids such as ethanol, acetone, etc.
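In the spirit of these hybrid GA-ANN models, the following is a minimal sketch of a tiny genetic algorithm that evolves the neuron counts of a two-hidden-layer MLP against validation MSE. The population size, mutation scheme, and generation count are illustrative assumptions, and X_train, y_train, X_val, and y_val are assumed as in the earlier sketches; this is not a reconstruction of the cited GA-RBF procedure.

```python
# Minimal sketch of a hybrid GA + ANN: a tiny genetic algorithm evolves
# the neuron counts of a two-hidden-layer MLP. Fitness values are
# recomputed on each call for brevity (caching is omitted).
import random
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

random.seed(0)

def fitness(genome):
    """Lower validation MSE = fitter genome (genome = neuron counts)."""
    net = MLPRegressor(hidden_layer_sizes=tuple(genome),
                       max_iter=3000, random_state=0)
    net.fit(X_train, y_train)
    return mean_squared_error(y_val, net.predict(X_val))

def mutate(genome):
    """Randomly nudge one layer's neuron count (kept in [2, 20])."""
    g = genome[:]
    i = random.randrange(len(g))
    g[i] = min(20, max(2, g[i] + random.choice([-2, -1, 1, 2])))
    return g

pop = [[random.randint(2, 20), random.randint(2, 20)] for _ in range(8)]
for gen in range(10):
    scored = sorted(pop, key=fitness)              # best (lowest MSE) first
    parents = scored[:4]                           # truncation selection
    children = [mutate(random.choice(parents)) for _ in range(4)]
    pop = parents + children                       # next generation

best = min(pop, key=fitness)
print("best hidden-layer sizes:", best)
```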
In a study conducted by Wen [40], two types of artificial neural networks, a multilayer perceptron (MLP) and the group method of data handling (GMDH), were employed to model the thermal resistance of vertically oriented oscillating heat pipes filled with acetone. Heat load, filling ratio, lengths of the different heat pipe sections, inner and outer diameters, and number of turns were the models' inputs. The results demonstrated that both models accurately predict the OHP's thermal performance. However, the more complex architecture of the MLP model (MSE = 0.0045, R² = 0.9893) and its ability to employ more powerful functions in training the network explain its higher accuracy relative to the GMDH model (MSE = 0.0144, R² = 0.9651). Similarly, Wang et al. [60] presented a fully connected feed-forward neural network model to predict the thermal resistance of a closed vertical meandering pulsating heat pipe (PHP) with water as the working fluid. The input parameters were the same as those of Wen [40] except for the outer diameter. The model results indicated a satisfactory prediction of the PHP thermal performance (MSE = 0.0025, correlation coefficient R = 0.9962).
Nanofluids are being used in heat transfer applications due to their high heat transfer properties and high thermal conductivity compared to base fluids. Nanofluid concentration and thermal conductivity are essential parameters that should be considered when analyzing nanofluid-filled heat pipe systems. Shanbedi et al. [61] designed an MLPNN model to predict the temperature performance of a two-phase closed thermosyphon using two synthesized nanofluids: carbon nanotube (CNT)/water and CNT-Ag/water. According to the experimental results, the appropriate weight fraction ranges for obtaining a suitable ΔT were 0.91-1.1 wt%, 0.2-0.3 wt%, and 0.95-1 wt% across the CNT/water and CNT-Ag/water cases. These results indicate that the weight fraction of nanoparticles is a crucial parameter for predicting the thermal efficiency of a two-phase closed-loop thermosyphon. The MLPNN model attained a correlation coefficient (R) above 0.99 and a small RMSE value of 0.3338, with only minor prediction errors reported. Three artificial intelligence approaches, a multilayer feed-forward neural network (MLFFNN), an adaptive neuro-fuzzy inference system (ANFIS), and a group method of data handling (GMDH) type neural network, were employed by Malekan et al. [38] to investigate the thermal resistance of a closed-loop oscillating heat pipe (OHP) filled with γ-Fe₂O₃/water and Fe₃O₄/water nanofluids. The input parameters were the heat input, the thermal conductivity of the working fluids, and the ratio of inner diameter to OHP length. Several MLFFNN, ANFIS, and GMDH models were built and tested. The MLFFNN model with one hidden layer of five neurons and the Levenberg-Marquardt training algorithm was the most accurate, with an RMSE value of 0.0508, while the GMDH models showed the highest error value of 0.0569. Moreover, the prediction of the thermal performance (thermal resistance) of a two-phase closed thermosyphon was conducted by Shanbedi et al. [62] using an adaptive neuro-fuzzy inference system (ANFIS). Two water-based nanofluids were used: pristine carbon nanotube (CNT) and CNT functionalized with ethylenediamine (CNT-EDA). The considered input parameters were nanofluid type, nanofluid concentration, input power, length, and temperature difference. The R² of the model was 0.9999, indicating very high accuracy and reliability.
Maddah et al. [63] predicted the efficiency of a CuO/water nanofluid in a heat pipe heat exchanger using a three-layer feed-forward neural network and the Levenberg-Marquardt training algorithm. Filling ratio, nanofluid concentration, and input power were selected as the input parameters, and the output parameter was the heat exchanger efficiency. The predicted results matched the experimental results with high accuracy, indicated by a testing R value of 0.9978.
In summary, various input and output parameters have been used in the literature for modeling heat pipe systems. Some parameters are general and apply to all types of heat pipes, while others are specific to particular types. Thus, carefully choosing the input and output parameters is crucial for achieving good overall modeling performance.
Dimensionless Numbers as Input Parameters
Predicting the thermal performance (thermal resistance) of some types of heat pipes (PHPs, for example) is sometimes difficult, as many parameters affect the operation, such as heat input, inner diameter, filling ratio, etc. Therefore, heat transfer correlations (dimensionless numbers) have been used to develop reliable heat transfer prediction methods.
A novel method for predicting the thermal performance of a closed pulsating heat pipe with different working fluids under a variety of operational conditions using a fully connected feed-forward neural network was proposed by Wang et al. [24]. The input parameters were the Kutateladze number (Ku), Bond number (Bo), Morton number (Mo), Prandtl number (Pr), Jacob number (Ja), number of turns (N), and the ratio of the evaporation section length to the diameter (L_e/d), while the output parameter of the ANN model was the thermal resistance. The fluid property parameters were evaluated at the coolant temperature, since it is known in the initial design stage. A backpropagation learning algorithm was used to build the ANN due to its adaptability. The results indicated that the developed model is reliable and can predict the thermal performance accurately (MSE = 0.0138, R = 0.9824); the working fluid strongly influenced the variation of the predicted values. In a similar approach, Liang et al. [64] investigated the thermal performance of miniature revolving heat pipes (MRVHPs) using a backpropagation neural network combined with a genetic algorithm (GA-BPNN). Here, Bo, Ja, Pr, Fr, and the filling ratio were the input parameters, and Ku was the output parameter. The maximum error for estimating the best filling ratio under several operational conditions was 11.4% (for a heating load of 200 W and a rotation speed of 500 rpm), while the remaining cases were within 10%. The trained model achieved an R value of 0.9260, indicating its accuracy in modeling the heat pipe system.
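A practical prerequisite for such models is computing the dimensionless inputs from fluid properties and geometry. The sketch below uses standard textbook definitions of the Bond, Prandtl, and Jacob numbers; the property values are rough placeholders for water near saturation, not data from the cited papers.

```python
# Sketch: computing common dimensionless inputs for a PHP model from
# fluid properties. Property values below are placeholders, not data
# from the cited studies.

def bond_number(rho_l, rho_v, d, sigma, g=9.81):
    """Bo = (rho_l - rho_v) * g * d^2 / sigma."""
    return (rho_l - rho_v) * g * d**2 / sigma

def prandtl_number(cp, mu, k):
    """Pr = cp * mu / k."""
    return cp * mu / k

def jacob_number(cp, dT, h_fg):
    """Ja = cp * dT / h_fg (sensible-to-latent heat ratio)."""
    return cp * dT / h_fg

# Placeholder properties roughly representative of water near 100 C.
features = {
    "Bo": bond_number(rho_l=958.0, rho_v=0.6, d=2e-3, sigma=0.059),
    "Pr": prandtl_number(cp=4216.0, mu=2.8e-4, k=0.68),
    "Ja": jacob_number(cp=4216.0, dT=10.0, h_fg=2.257e6),
}
print(features)  # these would form part of the ANN input vector
```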
Qian et al. [65] proposed a novel heat transfer prediction model for oscillating heat pipes based on the extreme gradient boosting algorithm (XGBoost), which requires a smaller dataset than ANNs and can evaluate the contribution of each parameter to the output and the final decision. The ratio of inner diameter to evaporator section length (D_i/L_e), Ku, and Ja were the most important parameters influencing the output.
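A minimal sketch of this gradient-boosting approach is shown below, including the per-feature importance scores that distinguish it from plain ANNs. The data and target relationship are synthetic; only the feature names follow the study described above.

```python
# Sketch of a gradient-boosting predictor for OHP thermal resistance with
# per-feature importance, as in XGBoost-style approaches. Synthetic data;
# feature names follow the paragraph above, with placeholder scales.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(2)
n = 300
X = np.column_stack([
    rng.uniform(0.005, 0.05, n),  # D_i / L_e ratio
    rng.uniform(1e-4, 1e-2, n),   # Kutateladze number (placeholder scale)
    rng.uniform(0.001, 0.1, n),   # Jacob number (placeholder scale)
])
# Placeholder target: thermal resistance driven mainly by the first two features.
y = 0.5 / (X[:, 0] * 100) + 20 * X[:, 1] + rng.normal(0, 0.05, n)

model = XGBRegressor(n_estimators=200, max_depth=3)
model.fit(X, y)
for name, imp in zip(["Di/Le", "Ku", "Ja"], model.feature_importances_):
    print(name, round(float(imp), 3))  # contribution of each input
```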
In summary, the use of dimensionless numbers as inputs and outputs of AI models is a promising method for modeling the highly complex behavior of heat pipe systems, since each dimensionless number combines several physical parameters of the system at once.
AI-Based Prediction Models for Heat Pipe Applications
AI technologies are not limited to simulating the performance of individual heat pipe systems; they have also been implemented to model heat pipe applications such as solar energy systems (e.g., solar collectors) and electronic cooling. With AI models, the difficulty of the modeling process becomes significantly lower than with classical and computational methods such as computational fluid dynamics (CFD), because AI methods can model the entire system, with all its internal processes, directly from the collected experimental data. This data-driven approach can be faster and computationally less expensive than full physical models of the developed systems.
For instance, the precision of various data-based and energy balance-based methods for modeling the performance of heat pipe solar collectors (HPSC) over a whole year under the climatic conditions of Western Australia was investigated and compared by Shafieian et al. [34]. The models included an artificial neural network (multilayer perceptron, MLP), a thermal resistance network (TRN), an adaptive neuro-fuzzy inference system (ANFIS), and fuzzy methods. The input parameters were the inlet temperature of the HPSC, ambient temperature, and solar radiation, whereas the outlet temperature (the main contributing parameter to the thermal efficiency of solar collectors) was the output parameter. In terms of R², the best prediction method for the HPSC's performance was the ANN (R² = 0.98079, 0.98974, 0.98903, and 0.99209 for spring, summer, autumn, and winter, respectively), followed by ANFIS and TRN. Due to large errors, the fuzzy method was not recommended for modeling HPSCs. Sivaraman and Mohan [56] studied the effects of different parameters on heat pipe solar collectors (HPSCs) using a multilayer feed-forward ANN. It was found that a decrease in the total length to inner diameter ratio of the heat pipe (L/d_i) improves HPSC performance. This is justifiable since the transport capability of a heat pipe increases with increasing internal diameter, which mainly determines heat transport. The ANN analysis of the HPSC showed that the collector with L/d_i = 52.63, L_c/L_e = 0.3333, and a water inlet temperature of 34 °C performed better than the other cases for a water flow rate of 0.0033 kg/s. The results demonstrated that the proposed model can successfully predict the effects of different parameters on HPSC performance, as indicated by an R² value of 0.9234. Two different types of ANN for predicting the thermal performance of hybrid solar collectors (heated gas plus solar radiation as heat sources) were compared by Facão et al. [66]. Different configurations of multilayer perceptrons (MLP) and radial basis functions (RBF) were considered; the MLP, despite being simpler, showed slightly better performance than the RBF. Tolon et al. [67] performed a thermodynamic analysis of evacuated tube heat pipe (ETHP) solar energy systems integrated into sustainable buildings using an artificial neural network. The ANN was applied to analyze the effects of radiation (I), mass (m), and ambient temperature (T_air) as input parameters on the exergy of the system. A backpropagation neural network (BPNN) with two hidden layers was the chosen form of ANN. The results indicated that the effect of mass on the exergy was approximately double that of radiation and ambient temperature, and the trained model achieved excellent prediction capability, indicated by a small average error value of 0.0006 at the end of the training process.
Furthermore, Taheri et al. [33] presented a new design of a liquid-cooled, heat pipe-based heat sink for the thermal management of a printed circuit board (PCB). Two ANN methods (radial basis function (RBF) and multilayer perceptron (MLP)) were used to predict the PCB steady-state temperature (based on the experimentally obtained results) under operating conditions not covered by the experiments. The results indicated that both ANN methods provide practically accurate estimates for the heat sink module, but the RBFNN gave more precise predictions (R² = 0.7223 for the MLPNN and R² = 0.9966 for the RBFNN).
In summary, various AI models have been used to model heat pipe systems employed in different applications. AI models can represent the heat pipe alone or the entire system that incorporates it. Modeling the operation of the entire system is beneficial when the internal interactions and processes are very complex, so that it is better to capture them implicitly in the AI model.
Hybrid AI Methods for Working Condition Optimization
Optimization algorithms can be coupled with a heat pipe's performance prediction model to optimize the operating conditions and achieve the optimal working rate (highest efficiency). Optimization of the operating conditions/parameters of a finned heat pipe was conducted by Naresh [36] using a combined artificial neural network (ANN) and genetic algorithm (GA). The objective was to find the number of fins and the fill ratio that minimize the thermal resistance for a given heat input. The network was trained using the Levenberg-Marquardt algorithm. The optimum average fill ratio and number of fins were found to be 52% and seven, respectively. Jalilian et al. [68] investigated the behavior of a pulsating heat pipe flat-plate solar collector (PHPFPSC) using artificial neural networks and optimized the solar collector's parameters using a genetic algorithm. Multilayer perceptrons (MLP), specifically two-layer (one hidden layer) and three-layer (two hidden layers) networks, were used because of the nonlinearity of PHPs and solar collectors. The results demonstrated that the evaporator length, inclination angle, and filling ratio were the factors that most influenced the system's efficiency. The optimal parameter values were an evaporator length of 108.3 cm, a filling ratio of 56.9%, and an inclination angle of 25.01°; the resulting optimal thermal efficiency was 61.4%, which was 4.0% higher than in the nonoptimal case. Moreover, the results indicated that a decrease in the temperature of the input water of the water tank increases the system's thermal efficiency (by about 1% per 1 °C decrease). The average error was less than 7.5%, which indicates that neural networks can predict the performance of PHPFPSC systems with high accuracy. Using a similar approach, the simulation and optimization of a pulsating heat pipe (PHP) was conducted by Jokar et al. [69] using a combination of a multilayer perceptron (MLP) neural network and a genetic algorithm (GA). The optimum operating point obtained by the GA was a heat flux (q″) of 39.93 W, a filling ratio (FR) of 38.25%, and an inclination angle (IA) of 55.6°, and the GA results were validated by comparison with experimental results.
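The ANN + GA pattern shared by these studies can be summarized as: train a surrogate model of the target quantity, then let a genetic algorithm search the input space of that surrogate. The sketch below illustrates this loop with a hypothetical analytic surrogate standing in for a trained network and a deliberately simple, mutation-only GA; none of the numbers reproduce the cited studies.

```python
# Sketch of the ANN + GA pattern: a trained surrogate predicts thermal
# resistance from (fill ratio, number of fins), and a small genetic
# algorithm searches for the settings minimizing it.
import numpy as np

rng = np.random.default_rng(3)

def surrogate_resistance(fill_ratio, n_fins):
    """Placeholder for a trained ANN; minimum near 52% fill, 7 fins."""
    return (fill_ratio - 0.52) ** 2 + 0.002 * (n_fins - 7) ** 2

def evolve(pop_size=40, generations=60, mut_scale=(0.05, 1.0)):
    # Population columns: fill ratio in [0, 1], number of fins in [1, 12].
    pop = np.column_stack([rng.uniform(0, 1, pop_size),
                           rng.uniform(1, 12, pop_size)])
    for _ in range(generations):
        fitness = surrogate_resistance(pop[:, 0], np.round(pop[:, 1]))
        parents = pop[np.argsort(fitness)[: pop_size // 2]]  # selection
        kids = parents[rng.integers(0, len(parents), pop_size - len(parents))]
        kids = kids + rng.normal(0, mut_scale, kids.shape)    # mutation only
        pop = np.vstack([parents, kids])
        pop[:, 0] = np.clip(pop[:, 0], 0, 1)
        pop[:, 1] = np.clip(pop[:, 1], 1, 12)
    best = pop[np.argmin(surrogate_resistance(pop[:, 0], np.round(pop[:, 1])))]
    return best[0], int(round(best[1]))

fill, fins = evolve()
print(f"optimum fill ratio ~ {fill:.2f}, fins ~ {fins}")
```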
In summary, hybrid models combining optimization algorithms with AI-based models have shown excellent performance in modeling and optimizing various types of heat pipe systems. Incorporating optimization algorithms into the modeling process improves the overall model accuracy and allows the operational parameters of the heat pipe system to be optimized.
Intelligent Control Methods for Heat Pipes
Control algorithms are usually applied on top of AI models to establish intelligent control systems for heat pipes. As shown in Table 2, all of the intelligent control systems applied in the literature so far are based on fuzzy logic models.
Control Method | Target Parameter | Ref.
PID | Temperature | [45]
Nonlinear adaptive fuzzy controller | Energy (heat) wastage in the heat pipe radiator | [70]
Fuzzy incremental control | LHP temperatures, condensing pressure, and mass flow rate | [71]
Dual intelligent model (fuzzy fusing rules) | Temperature control and heat flux tracking effects | [72]

Dong et al. [71] proposed a fuzzy incremental control (FIC) algorithm for a loop heat pipe space cooling system (LHP-SCS) consisting of an LHP with ammonia as the working fluid and a variable-emittance radiator with a MEMS louver. This intelligent control technique offers small overshoots, no steady-state error, and robust operating properties. The proposed FIC strategy was compared with the traditional PID approach: the former improved the heat flux tracking effect and temperature control, with overshoot values more than 15% lower for the considered control parameters and settling times more than 30% shorter. Furthermore, it showed potential for more stable thermal and hydraulic conditions, allowing safe operation of the LHP structures and working fluid. Dual-driven intelligent combination control (TQ-ICC) of a heat pipe space cooling system (HP-SCS) was developed by Yunze et al. [72] to improve the temperature control and heat flux tracking effects. The combination control strategy improves the final control action by feeding temperature regulation and heat flux tracking errors to the proposed dual-driven system and adaptively adjusting their contributions using a fuzzy fusing rule. The results suggested that the proposed model can considerably enhance the thermal control effects and promote safe operation of the heat pipe space cooling system, as indicated by a more than 75% shorter settling time and a more than 89% smaller overshoot compared to a base PID controller. Zhang et al. [70] designed and simulated a nonlinear adaptive fuzzy controller with a new type of contraction-expansion factor function to control a heat pipe radiator. The controller was designed to resolve the energy (heat) wastage in the heat pipe radiator caused by its complex nonlinear behavior. The model was found to be feasible and adaptive.
A particle swarm optimization (PSO) algorithm was also used by Xi et al. [45] to tune the proportional-integral-derivative (PID) control parameters for heat pipe temperature control during a vacuum thermal test. The temperature control model was constructed from the heat response data of the heat pipe, and the time integral of the absolute value of the control error was used as the objective function. Compared to the attenuation curve method, the PSO method achieved better results, reducing overshoot by more than 63%, shortening the time to reach steady state, and reducing the maximum overshoot by more than 15%.
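The following sketch illustrates the PSO-tuned PID idea on a toy first-order plant, with the integral of the absolute control error as the objective. The plant model, parameter ranges, and PSO constants are assumptions for illustration, not the configuration used by Xi et al. [45].

```python
# Sketch of PSO-tuned PID gains for a temperature control loop, using the
# integral of the absolute control error as the objective. The first-order
# plant is a hypothetical stand-in for the heat pipe's thermal response.
import numpy as np

rng = np.random.default_rng(4)
dt, steps, setpoint = 0.1, 600, 1.0

def iae(gains):
    """Simulate a first-order plant under PID and return integral |error|."""
    kp, ki, kd = gains
    y, integ, prev_err, cost = 0.0, 0.0, setpoint, 0.0
    for _ in range(steps):
        err = setpoint - y
        integ += err * dt
        u = kp * err + ki * integ + kd * (err - prev_err) / dt
        prev_err = err
        y += dt * (-y + u) / 5.0        # plant: 5 * dy/dt = -y + u
        cost += abs(err) * dt
    return cost

# Plain PSO: particles track personal bests; the swarm tracks a global best.
n, dims = 20, 3
pos = rng.uniform(0, 5, (n, dims))
vel = np.zeros((n, dims))
pbest, pbest_cost = pos.copy(), np.array([iae(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)]

for _ in range(50):
    r1, r2 = rng.uniform(size=(2, n, dims))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 5)
    cost = np.array([iae(p) for p in pos])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
    gbest = pbest[np.argmin(pbest_cost)]

print("tuned (Kp, Ki, Kd):", np.round(gbest, 2), "IAE:", round(iae(gbest), 3))
```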
In summary, AI-based control of heat pipe systems has been discussed in the literature for controlling parameters such as temperature and heat release. However, ANNs have not yet been used for the intelligent control of heat pipe systems; the implementation of ANNs for this task therefore remains an open gap, and future research on it is highly recommended.
Summary
Several studies in the literature have discussed the application of various AI techniques for the modeling and optimization of heat pipe systems. Table 3 summarizes the progress made in modeling heat pipe performance using AI techniques. Several AI techniques, including MLPNN, GMDH, and ANFIS, have been used to model different parameters of heat pipe systems, including the thermal resistance, the water outlet temperature, the heat transfer rate, and the thermal efficiency. The most frequently used AI model is the MLPNN, owing to its ease of application and its ability to model highly nonlinear relationships between parameters. Several types of heat pipe systems have been optimized using AI, including PHPs, OHPs, thermosyphons, and heat pipe heat exchangers.
Conclusions and Future Research Directions
This work reviewed and discussed the recent developments of AI technologies in heat pipe applications. Most of the studies reviewed in this work primarily focus on predicting the thermal performance of heat pipe systems. Hybrid AI algorithms and intelligent control systems for heat pipes were also covered. The following highlights can be concluded from this review:

1. Most of the work on AI in heat pipes involves pulsating/oscillating heat pipes. This is mainly due to the difficulty of experimentally analyzing the effects of different parameters on the performance of a heat pipe, since its performance depends on several parameters. Furthermore, the numerical modeling of PHPs using computational fluid dynamics (CFD) is relatively complex due to the chaotic nature of PHPs, which shows the potential of AI-based modeling methods.

2. ANN is the most widely used AI technology in predicting the performance of heat pipes and has proven to be one of the most effective techniques for predicting performance accurately. As a result, it can be used for the efficient design of heat pipes.

3. Multilayer perceptron neural networks (MLPNNs) achieved the highest accuracy in predicting the performance of heat pipe systems compared to other similar models.

4. The AI model structure (number of hidden layers and neurons) is an important factor that influences the model's prediction accuracy.

5. The most common influencing input parameters are heat flux, filling ratio, and the length of each heat pipe section (evaporator and condenser sections). In the case of nanofluid-filled heat pipes, nanofluid properties (such as concentration and thermal conductivity) are the most common input parameters that should be considered, as they most significantly influence the operation.

6. Optimization algorithms, and combinations of optimization algorithms with AI models, can identify the optimum operating conditions of heat pipe systems.

7. Fuzzy (and hybrid fuzzy) controllers are the most widely used controllers for heat pipes and heat pipe systems.
Despite the wide application of AI models for modeling various heat pipe systems, some important research gaps still need to be addressed to improve this technology and move it closer to practical application. These research gaps include:

1. Hybrid models combining metaheuristic optimization algorithms with AI-based models have shown excellent performance in modeling heat pipe systems. However, the current progress on hybridizing AI models with optimization algorithms is very limited, and further research on this topic is highly recommended. Optimization algorithms that have not previously been hybridized with AI models for heat pipe modeling include the grey wolf optimizer [74], ant colony optimization (ACO) [75], and the whale optimization algorithm (WOA) [76].

2. Several recent AI models have performed strongly on tasks to which they have not yet been applied in heat pipe modeling. These include recurrent neural networks (RNNs) [77,78] and transformer networks [79,80], which have shown excellent performance in sequential data prediction and modeling tasks, as well as generative adversarial networks (GANs) [81,82], which have performed well across AI tasks in general and modeling tasks in particular. Further work on applying these state-of-the-art models to various aspects of heat pipe systems is highly recommended and is expected to push the current limits of AI-based heat pipe modeling.

3. The progress made on the AI-based control of heat pipe systems is very limited, and various types of intelligent control algorithms and target parameters have not yet been discussed in the literature. Most importantly, ANNs have shown excellent performance in different system control tasks but have not previously been used to control the operation of heat pipe systems. Thus, developing ANN-based methods for the operational control of heat pipe systems is highly recommended and is expected to achieve higher performance than the currently applied techniques.
Figure 1. Schematic diagram of heat pipe structure and operation.
Table 1. Summary and comparison between various AI algorithms.
Table 2. Intelligent control methods for heat pipes.
Social Influences in Sequential Decision Making
People often make decisions in a social environment. The present work examines social influence on people’s decisions in a sequential decision-making situation. In the first experimental study, we implemented an information cascade paradigm, illustrating that people infer information from decisions of others and use this information to make their own decisions. We followed a cognitive modeling approach to elicit the weight people give to social as compared to private individual information. The proposed social influence model shows that participants overweight their own private information relative to social information, contrary to the normative Bayesian account. In our second study, we embedded the abstract decision problem of Study 1 in a medical decision-making problem. We examined whether in a medical situation people also take others’ authority into account in addition to the information that their decisions convey. The social influence model illustrates that people weight social information differentially according to the authority of other decision makers. The influence of authority was strongest when an authority's decision contrasted with private information. Both studies illustrate how the social environment provides sources of information that people integrate differently for their decisions.
Introduction
Individuals often ignore their own opinion in favor of the opinions of others. Early experimental results of Asch and Sherif impressively illustrated how the judgments of others influence individuals' judgments [1][2][3]. People sometimes follow the behavior of others even when they provide inaccurate information. The present article focuses on a decision-making problem in which several individuals sequentially make decisions and have the potential to influence each other. This situation has been studied by economists who focused on conformity behavior that results from the cognitive integration of socially inferred information improving individual decisions [4,5]. In contrast, social psychologists have additionally emphasized conformity behavior that is motivated by maintaining or building acceptance and belonging. Following a cognitive modeling approach, we sought to examine to what extent individual decisions are affected by different types of social influence. Specifically, we are interested in how socially inferred information and normative expectations of an authority have an impact on individual decisions. Imagine a physician confronted with the task of diagnosing a type of flu strain in a patient showing several symptoms. The symptoms speak in favor of Influenza A, but symptoms are only probabilistically related to flu strains. Thus the physician knows that her diagnosis will be correct only with a certain probability. Meanwhile she knows that her colleague has diagnosed a case of the relatively harmless Influenza C in the same patient. What should she do: rely on the symptoms that she has observed or follow her colleague's judgment? If she follows her colleague's judgment this would be a typical case of conformity behavior, because she is disregarding the evidence the patient's symptoms provide. Can such a conformity decision be reasonable?
To explain why people conform it is helpful to distinguish two types of social influence: normative social influence and informational social influence [6]. Normative social influence describes behavior that has been driven by the desire to achieve a valued, coherent self-identity and to convey a particular impression to others [7]. The influence is based on people's motivation to gain approval and avoid rejection by conforming with others' expectations. The physician's decision to conform may be motivated by the desire to avoid looking ridiculous in front of others because she was incapable of diagnosing the harmless Influenza C. In contrast, informational social influence arises from useful and valid information that another's opinion or behavior provides to improve a decision or judgment [8,9]. If, for instance, the physician's colleague was very experienced and potentially had additional information for a diagnosis, this informational influence would lead the physician to the correct inference that her colleague's diagnosis is very likely correct, making her own conforming decision the best she can do.
Dual-motive views of social influence have already been proposed in several domains, appearing in conformity research [6,10], group polarization research [11,12], and persuasion research [13]. Criticism of such views has mainly focused on the problem of how the two types of influence can be separately measured and, consequently, how they interact [14][15][16]. In many conformity studies individuals' behavior is examined under two conditions: In the public condition, individuals act under the surveillance of others, whereas in the private condition, responses are given anonymously. If behavior in the public condition differs from behavior in the private condition, this is usually attributed to salient beliefs of the person being socially influenced by the fact that others will positively evaluate his or her conformity behavior. Nevertheless, normative social influence cannot be excluded in the private condition. Social expectations of others can also emerge when their presence is imagined, so they hold across public and private contexts [13]. Moreover, priming studies have suggested that individuals' tendency to conform can even arise automatically, outside conscious awareness or voluntary control [17,18].
Whereas normative social influences are difficult to control in experimental settings, the utility of informational social influence for improving one's decisions or judgments is still poorly understood. Most social influence paradigms create conformity by having the influence source give unrepresentative information and then focusing on incorrect answers as dependent measures. Hence, the analysis of informational social influence is tainted by an overemphasis on normative social influence that leads to incorrect conformity decisions [14,16] and pays little attention to the fact that conformity can also improve behavior. For instance, in situations where individuals have to decide under uncertainty others can provide useful accurate information.
In sum, many social psychologists agree that conformity can result from informational and normative social influence. How the two types influence behavior is often difficult to measure, and whether and how they might work together is an even more complicated question. In the present study we examined a sequential decision-making task that allowed us to identify the different types of social influence on individual behavior. More specifically we examined decision making using the "information cascade paradigm" [4,5].
Information Cascades and Conformity Behavior
Bikhchandani et al. argued that people's judgments, in principle, are based on private and public information [5]. For instance, a person's own examination of a judgment situation provides access to information others have not obtained, which is private information. In addition, the person can consider information that is commonly available to everyone; this is public information. In a situation in which several individuals make the same decision sequentially, the decisions made by others preceding an individual's own decision provide public information to that individual. An informational cascade occurs when it is optimal for an individual, having observed others' preceding decisions, to follow the behavior of the preceding person, ignoring his or her own private information. Bikhchandani et al. showed that such decisions are rational when following a Bayesian analysis of the problem [5], which we demonstrate below.
Anderson and Holt examined whether information cascades actually occur [4]. In their experiment, one of two urns was randomly selected by the experimenter. The two urns contained the same number of balls, but the composition of the balls' color differed for the urns. For instance, both urns could contain three balls, with two white and one black ball for the first urn (Urn A) and two black and one white ball for the second urn (Urn B). The participants knew the compositions of the two urns but did not know which urn was randomly selected by the experimenter. Participants decided sequentially which of the two urns had been selected. Before making a decision, each participant drew one ball from the selected urn and observed its color, which was not revealed to the other participants (i.e., private information) and the drawn ball was afterward put back into the urn. Thereafter, each participant publicly announced his or her decision. Thus, participants had private information, which was the color of the drawn ball from the chosen urn, and public information, which were the decisions of the preceding participants (but not their private signals). To make a correct prediction, participants could use both types of information.
More precisely, according to a Bayesian analysis of the problem, the posterior probability that Urn A was selected can be determined by applying Bayes's theorem:

$$p(A \mid n_a, n_b) = \frac{p(n_a, n_b \mid A)\,p(A)}{p(n_a, n_b \mid A)\,p(A) + p(n_a, n_b \mid B)\,p(B)} \quad (1)$$

where $p(n_a, n_b \mid A)$ is the likelihood of observing the numbers $n_a$ and $n_b$ of "a" and "b" signals given that Urn A was selected, where "a" speaks for Urn A and "b" speaks for Urn B. Signals are either obtained from private draws or inferred from the public decisions of others. It is easier to work with the log odds of the posterior probability that Urn A was selected relative to the posterior probability that Urn B was selected. When assuming equal a priori probabilities with which the two urns are selected, the log odds are defined as

$$\ln \frac{p(A \mid n_a, n_b)}{p(B \mid n_a, n_b)} = n_a \ln \frac{p(a \mid A)}{p(a \mid B)} + n_b \ln \frac{p(b \mid A)}{p(b \mid B)} \quad (2)$$

(for details see S1 Text). When the log odds ratio is positive, the posterior probability that Urn A was selected is larger than the posterior probability that Urn B was selected, whereas a negative ratio makes Urn B more likely to have been selected. Under the assumption of equal priors and equal likelihoods of observing "a" or "b" signals, it can easily be seen with Eq 2 that solely the difference in the number of "a" and "b" signals is decisive (regardless of the absolute number of signals). For more details on the Bayesian solution to this problem see also Phillips and Edwards [19], Grether [20], Anderson and Holt [4], or Hung and Plott [21].
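As a computational illustration, the following sketch implements the updating of Eqs 1 and 2 for the urn compositions used throughout this article (p(a|A) = 2/3, p(a|B) = 1/3, equal priors).

```python
# Minimal sketch of the Bayesian updating in Eqs 1-2 for the two-urn task.
# Likelihoods follow the urn compositions given in the text
# (p(a|A) = 2/3, p(a|B) = 1/3); equal priors are assumed.
import math

def log_odds(n_a, n_b, p_a_given_A=2/3, p_a_given_B=1/3):
    """Log posterior odds for Urn A after n_a 'a' signals and n_b 'b' signals."""
    return (n_a * math.log(p_a_given_A / p_a_given_B)
            + n_b * math.log((1 - p_a_given_A) / (1 - p_a_given_B)))

def posterior_A(n_a, n_b):
    lo = log_odds(n_a, n_b)
    return 1.0 / (1.0 + math.exp(-lo))

print(posterior_A(2, 1))  # two 'a' signals, one 'b' signal -> 0.667 for Urn A
```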
The following example illustrates the Bayesian analysis of the sequential decision problem. Suppose there are three people, named John, Jim, and Jack, facing the decision problem. John draws, unobserved by the others, the first ball and publicly decides for Urn A. After John's decision, Jim draws a ball and also decides for Urn A. Now it is Jack's turn. He draws a "b-ball," which indicates the selection of Urn B, but since John and Jim decided for Urn A, Jack infers that John has drawn an "a-ball." In addition, Jack infers that Jim also drew an a-ball, because had Jim drawn a b-ball he probably would have decided for Urn B, to avoid being misled by a potential mistake of John. Thus, Jack infers that two a-balls (n_a = 2) and one b-ball (n_b = 1) have been drawn and can calculate the log odds for Urn A:

$$\ln \frac{p(A \mid n_a = 2, n_b = 1)}{p(B \mid n_a = 2, n_b = 1)} = (2 - 1)\,\ln \frac{2/3}{1/3} = \ln 2 \approx 0.69,$$

which are positive, so that Urn A should be selected despite the private signal supporting Urn B. Any subsequent decision makers should also follow the decisions of the first and second decision makers, so that an information cascade emerges. If a fourth and fifth person drew b-balls it would still be rational for them to decide for Urn A. Thus, although after the fifth person three b-balls and only two a-balls have been drawn, making Urn B the most likely selected urn, all individuals would be acting rationally by selecting Urn A according to a Bayesian analysis of the private and public information available to them. Anderson and Holt observed a high proportion of individuals' decisions in line with the illustrated Bayesian updating process [4], a result that has been replicated by a multitude of empirical studies [21][22][23]. However, compared to the Bayesian solution, participants in cascade experiments seem to overweight their private information relative to the public information [24][25][26]. A meta-analysis led Weizsäcker to the overall conclusion that people often overweight their private information in comparison to public social information [27].
However, we think that this conclusion needs to be limited to the artificial cascade paradigm examined. We think that people are often strongly influenced by other people's behavior in many real-life situations and thus overweight social relative to private information. Research illustrating the strong impact of social influences on behavior and decision making is widespread; for an overview, see, for instance, Cialdini and Goldstein [28]. Here we illustrate with the sequential decision-making paradigm described above how the impact of social influence can increase depending on the social context in which it is embedded. Moreover, we follow a cognitive modeling approach to identify the importance people give to private as compared to social information.
Social influence model
To identify the importance people give to different sources of information we suggest a social influence model. For this model we modify Eq 2 by separating one component containing private information from another component containing public information:

$$\ln \frac{p(A \mid \cdot)}{p(B \mid \cdot)} = \beta_{bias} + (2 - \beta_{soc})\,f(x_{priv}) + \beta_{soc} \sum_{i} f(x_{soc,i}) \quad (3)$$

where $f(x) = \ln \frac{p(x \mid A)}{p(x \mid B)}$, $x_{priv}$ is the privately observed signal, and the $x_{soc,i}$ are the signals inferred from the public decisions of others. The social importance parameter β_soc (0 < β_soc < 2) specifies how much weight a person gives to the social as compared to the private information. In the case of β_soc > 1 the decision maker overweights social information, and in the case of β_soc < 1 the decision maker overweights private information. The prior weight β_bias represents any initial bias toward one of the two choice options. When β_soc = 1 and β_bias = 0 the social influence model is equivalent to the Bayesian solution expressed by Eq 2. Note that the log odds LO of Eqs 2 or 3 can easily be retransformed into posterior probabilities by

$$p(A \mid \cdot) = \frac{1}{1 + e^{-LO}} \quad (4)$$

The larger the posterior probability is for one option, the larger the probability that a person chooses this option should be. Accordingly we define the choice probability with which a person chooses an option as a function of the option's posterior probability of being correct:

$$p(\text{choose } A) = \frac{e^{\theta\,p(A \mid \cdot)}}{e^{\theta\,p(A \mid \cdot)} + e^{\theta\,p(B \mid \cdot)}} \quad (5)$$

where θ (0 < θ < 10) represents a free sensitivity parameter that specifies how sensitive a person's response is to the different posterior probabilities. A large sensitivity parameter implies that the option with the higher posterior probability will be chosen with a higher probability. In sum, the social influence model allows us to quantify the importance given to information inferred from others' decisions (public social information) relative to private information. By specifying the Bayesian solution as a special case of the model, we can test whether people deviate from the normative solution of probability theory.
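The model of Eqs 3-5 is straightforward to implement. The sketch below codes the three equations directly; the parameter values plugged in are the group-level estimates later reported for Study 1, and the example trial is hypothetical.

```python
# Sketch of the social influence model (Eqs 3-5): log odds with separate
# weights for private vs. social signals, logistic retransformation, and a
# softmax choice rule with sensitivity theta. Parameter values are the
# group-level estimates reported for Study 1; the trial is hypothetical.
import math

LLR = math.log((2/3) / (1/3))  # log likelihood ratio of one 'a' signal

def choice_prob_A(private_signal, social_signals,
                  beta_soc=0.78, beta_bias=-0.12, theta=6.08):
    """private_signal, social_signals: +1 for an 'a' signal, -1 for 'b'."""
    f_priv = private_signal * LLR
    f_soc = sum(s * LLR for s in social_signals)
    lo = beta_bias + (2 - beta_soc) * f_priv + beta_soc * f_soc   # Eq 3
    p_A = 1.0 / (1.0 + math.exp(-lo))                             # Eq 4
    p_B = 1.0 - p_A
    return math.exp(theta * p_A) / (math.exp(theta * p_A)
                                    + math.exp(theta * p_B))      # Eq 5

# Private 'b' signal after two public decisions for Urn A:
print(choice_prob_A(private_signal=-1, social_signals=[+1, +1]))
```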
In the first study, we did not differentiate between different types of social influence. In the second study, we manipulated the hierarchical rank of a previous decision maker to increase that decision maker's social influence, and the model allowed us to test whether this manipulation affects the social influence. This was achieved by embedding the rather artificial cascade paradigm in a clinical decision-making context. To do this, we drew on the authority principle, which states that people are willing to follow the suggestions of someone that they see as a legitimate authority [29][30][31]. The principle works within hierarchical relationships, which are asymmetrical in nature and involve the management of dominance "in ways that maximize the interests of the more dominant individual and limit harm to the less dominant individual" [31]. We understand the authority principle as a specific type of normative social influence, since it is based on the deference to authority norm, which is a prevailing norm in most organizations [28]. However, manipulating normative social influence by confronting participants with a decision of a higher ranked person is a relatively weak induction of normative influence when compared to a much more "pressurizing" homogeneous majority opinion.
We examined the impact of social influence in two experiments by testing to what extent individual decisions are affected by social influences according to the following two hypotheses:

1. The informational influence hypothesis follows from a Bayesian view of information usage. This hypothesis states that people try to be as accurate in their judgments as they can be, efficiently inferring information from others' behavior and integrating the socially inferred information with their own private information to derive a decision. This decision can be the opposite of a decision that is reached from private information alone. Decision makers who behave in a manner consistent with the informational influence hypothesis will make decisions in line with the Bayesian model specified above (i.e., Eq 2). The social influence model allowed us to test whether decision makers weight all available information equally to make a decision, regardless of whether it is private or public information.
2. The authority influence hypothesis predicts that people's behavior will also be influenced by the hierarchical status of other decision makers. In line with the authority principle, people will make decisions that conform to higher ranked others' decisions more often, even if other available public information and their own private information suggest doing otherwise. Behavior that is consistent with the authority influence hypothesis should be better described by the social influence model, which allows decision makers to give greater weight to the information that is inferred from the behavior of the higher ranked other person.
The aim of the following studies was to test these two hypotheses.
Study 1
The purpose of Study 1 was primarily to test the informational influence hypothesis. The experimental task was constructed in such a way as to minimize normative social influence on people's decisions, so that conformity behavior would largely express the informational social influence of others. If people's decisions were consistent with the Bayesian model, as suggested by Bikhchandani et al. [5], this would indicate that individuals' decisions reflect a process of rational information integration of privately and socially inferred information. In Study 1 we fit the social influence model to participants' decisions to see if and how people's behavior deviates from the Bayesian solution.
The experimental task was similar to that used by Anderson and Holt [4]. However, to increase our experimental control, participants were not confronted with real urns from which balls were drawn. Instead they had to make judgments for a series of hypothetical scenarios (see Huck & Oechssler for a similar experimental procedure [32]). This allowed us to systematically vary the information given to each participant. In contrast, in Anderson and Holt's experiment participants had to announce their decisions to a group, so that normative social influence cannot be ruled out completely. In Study 1 participants were additionally asked to estimate the probability that their predictions were correct, so that we could compare it to the posterior probabilities derived by the Bayesian model (see Eq 2).
Method
Ethics statement. The study was conducted in accordance with the Declaration of Helsinki and the ethical guidelines of the American Psychological Association. Before the start of the experiment, all participants filled out a written informed consent, which informed them about the goals and the completion of the experiment and clearly indicated that they could abandon the experiment at any time without consequences. Prior to the experiment, the investigator collected the signed informed consent forms. No participant abandoned the experiment. All questionnaire data were entered into our database in an anonymized form such that data could not be assigned to individual subjects.
Participants. Forty-two students from different departments at the University of Basel participated in the 30-min experiment. Two participants were excluded from further analysis because they said after the experiment that they had not understood the experimental task. Participants received course credit or a book voucher worth 10 Swiss francs. In addition, participants were informed that one of their decisions would be selected randomly, and if that decision was correct they would be rewarded with 2 Swiss francs. If their corresponding confidence rating lay within the range of ±5% of the Bayesian solution they would receive an additional 2 Swiss francs.
Procedure. Participants received a questionnaire with a description of the urn decision scenario featuring two urns, each containing three balls, where Urn A had one black and two white balls and Urn B had two black and one white ball. Participants were instructed that one urn was randomly chosen at the beginning of the task by the experimenter and a maximum of four people had the task of sequentially inferring which of the two urns was randomly chosen.
They were told that up to four people each sequentially drew one ball from the selected urn, which they replaced in the urn after they privately observed the ball's color. Thereafter each person announced which urn he or she considered most likely to have been chosen. Thus each person knew the predicted urn of her or his predecessors (but not the color of their drawn balls). It was also explained that each person in the urn scenario had observed his or her predecessors' decisions. Participants were told they should play the role of the person who made the last decision, in a total of 24 different scenarios.
After the situation description, participants received the 24 scenarios in a randomized order, in which the color of the ball that the last person had drawn and the decisions of the preceding person(s) were provided. The 24 scenarios presented 12 different decision tasks, where all possible combinations of up to four decision makers were specified. Decision sequences where participants were confronted with an unreasonable preceding decision (according to the Bayesian solution) were not included in our scenarios. The 12 decision tasks were presented in two different ways; that is, the decision sequences were mirrored in terms of the color of the balls and the decisions of the preceding people. Thus, each participant decided twice on the same decision task. Participants were asked to predict for each scenario which urn (A or B) was most likely to have been randomly chosen by the experimenter. In addition, they had to judge the probability with which they thought their decision was correct (on a scale of 50-100%). Tables 1 and 2 together summarize the 12 decision tasks with the corresponding posterior probabilities.
Results
We first analyzed whether participants' decisions were in line with the Bayesian solution. The fifth column of Table 1 shows the proportion of choices in line with the Bayesian solution (see Eq 1). For all tasks in which the posterior probability was in favor of one alternative (Scenarios 1-9), 86.9% of all choices were consistent with the Bayesian prediction. In particular, when the Bayesian prediction was in favor of a participant's private signal, 90.2% of all choices were consistent with the prediction. To determine whether information cascades occurred, Scenarios 6 and 8 are crucial. Here the Bayesian solution predicted that the private signal should be disregarded in favor of the previous decisions. A high degree of cascade behavior consistently occurred: Of all 160 choices, 120 (75.5%) were consistent with the Bayesian prediction.
In situations with posterior probabilities of p = 0.50 (Scenarios 10-12), private and public information canceled each other out. These scenarios allowed us to test whether public social information has a stronger influence than private information. As shown in Table 2, in 79.9% of all choices, participants decided in line with their private signal, thus giving more weight to their own information than to the public information. In sum, the results show that participants used the information provided by others' decisions in a way that is consistent with a Bayesian analysis of the decision problem, supporting the informational influence hypothesis.
To examine in more detail how much weight participants gave to public information relative to private information, we estimated the importance (β soc ), the bias (β bias ), and the sensitivity (θ) parameters of the social influence model on the basis of the observed data. We estimated the model by following a Bayesian approach for each participant [33][34][35]. This approach provides a posterior probability distribution of each of the model's free parameters. For each parameter, we first specified a prior distribution expressing the initial belief in every possible parameter. For the β bias parameter we assumed a prior truncated normal distribution with a mean of zero and a standard deviation of 10, truncated at +1 and -1. For the social importance parameter β soc we assumed a prior uniform distribution ranging from 0 to 2 (specified by a beta distribution). Likewise we assumed a uniform prior distribution ranging from 0 to 10 for the sensitivity parameter θ (specified by a beta distribution). According to the Bayesian approach, the prior distributions are then updated on the basis of the data and the model's likelihood function (i.e., Eq 5). Technically we relied on JAGS [36] through the rjags interface in R [37]. For the sampler we chose a thinning factor of 100 (to minimize autocorrelation) and an initial burn-in of 10,000 (to produce more representative samples from the posterior). The final Markov chains had a net length of approximately 50,000. Group estimates for the parameters of the model were derived by averaging the posterior distributions of all participants (by averaging the results of the Markov chains). The derived distributions of the means can be used to calculate summary statistics, for example, median and 95% highest density interval (HDI), among others.
For the social influence model the median estimated sensitivity parameter was θ = 6.08 (95% HDI = 5.57 to 6.58), which implies that participants reacted rather sensitively to the different posterior probabilities. For instance, with a value of 6.08 for the sensitivity parameter, Urn A will be chosen with a probability of 0.89 given a posterior probability of 0.67 for Urn A. For β_bias the estimated median parameter value was -0.12 (95% HDI = -0.22 to -0.01), which indicates a slight a priori tendency to favor Urn B. Fig 1 illustrates the different weights given to public as compared to private information according to the estimated parameters of the social influence model. The median importance parameter for public information was β_soc = 0.78 (95% HDI = 0.71 to 0.86), which shows that participants weighted public information less strongly than private information. When contrasting the weight given to private information (2 - β_soc) with the weight given to public information (β_soc), a median positive difference of 0.44 results (95% HDI = 0.28 to 0.59), illustrating the overweighting of private information. In sum, the analysis shows that participants overweight private as compared to public information, inconsistent with the Bayesian model, which weights all information equally. In addition to making choices between the two urns, the participants had to judge the probability that their choices were correct. The probability judgments, reported in the last columns of Tables 1 and 2, did not match the Bayesian posterior probabilities. Whereas the average probability judgment of 0.59 was higher than the posterior probability of 0.50 in Scenarios 10-12, for scenarios with posterior probabilities of 0.67, 0.80, and 0.89 the average probability judgments of 0.61, 0.69, and 0.74, respectively, were lower. These results appear similar to the standard conservatism phenomenon reported in the early literature on probability judgments [38], according to which people tend to give moderate probability judgments. However, our social influence model offers an alternative explanation for these deviations. The social influence model predicts the probability with which people will select one or the other option (see Eq 5). These predictions follow from the model's predicted subjective posterior probabilities that one or the other option is correct. Therefore people's probability judgments can also be compared to the subjective posterior probabilities that the model predicts. Importantly, the model was estimated solely on the basis of participants' choices, ignoring their probability judgments, so predicting participants' probability judgments represents a strong generalization test of the social influence model. The model also predicted overweighting of small probabilities and underweighting of large probabilities. For instance, the model correctly predicted a confidence level of 61% compared to people's observed confidence level of 59% in situations in which the normative Bayesian account predicts a posterior probability of 50%. Similarly, for situations with a normative posterior probability of 89%, the model predicted a probability judgment of 83% compared to the empirically observed probability judgment of 75%. Thus, the social influence model can predict the observed deviations of people's probability judgments from the normative account.
According to the social influence model, these deviations result from overweighting individual as compared to public information. For instance, in the normative indifference situation with posterior probabilities of 50%, overweighting private information leads to increased confidence, whereas in situations with normative high posterior probabilities, overweighting private information leads to more moderate probability levels.
Discussion of Study 1
Study 1 shows that whether people go against their own private signal depends on whether the posterior probabilities speak for or against their private signal. In situations where private and public information cancelled each other out, participants preferred private information over public information. These results suggest that private information and socially inferred information are cognitively integrated. Furthermore, the results replicate Anderson and Holt's findings [4], where the participants made real draws from urns. Our hypothetical scenarios have the advantage of maximizing experimental control. For instance, the scenarios minimize potential normative social influences of other people present in a public setting. Therefore, our results illustrate the impact of informational social influence leading to conformity behavior. The results of the social influence model show that participants overweight private as compared to public information, in contrast to the equal weighting of the Bayesian model. Likewise, participants' probability judgments did not correspond to the Bayesian solution. These deviations from the Bayesian posterior probabilities are explained by the social influence model, according to which people overweight their private information as compared to social information, which on average in the tested situations led to more moderate probability judgments. Importantly, the model predicted these deviations from the Bayesian account without being fitted to the observed probability judgments.
Study 2
The purpose of Study 2 was to investigate decision making in a real-life situation in which both informational and authority influences can affect people's decisions. Therefore, in Study 2 the decision problem of Study 1 was embedded in a medical decision-making context. Participants had to take on the role of an assistant physician who had to diagnose, on the basis of particular symptoms, which of two diseases a patient had developed. The task was analogous to that of Study 1: The assistant physicians had information about others' decisions, namely the previous diagnoses of other physicians recorded in the patient's record. The other physicians' decisions were often not supported by the private information available to the assistant physician. Again, these decisions represent informational social influence for the assistant physician. To examine the social influence of the hierarchical status of the preceding decision makers, the cascade paradigm offers the opportunity to control the strength of authority influence by varying the hierarchical ranking of the preceding decision makers. At the same time, we can control the strength of informational influence by determining the validity of the available information that the decision makers in a sequence draw on. In the following, we explain how we manipulated both types of social influence to examine their relative impact.
To manipulate authority influence, the hierarchical ranking of the influence source was varied: The preceding decisions were made either by a colleague (another assistant physician) with the same hierarchical ranking or by a supervisor (the medical director) with a higher hierarchical ranking. This manipulation varied the strength of the authority influence by focusing on the legitimate power of previous decision makers in relation to the assigned hierarchical ranking of the participant's role. Although our participants did not expect any negative consequences when deciding against the diagnosis of the medical director, we argue that the tendency to conform should emerge as a result of the perceived hierarchical status difference, in line with priming studies on conformity [17,18]. To control the strength of informational social influence, participants were told that the average accuracy of diagnoses on the specific decision problem was the same for the assistant physicians and the medical director. This allowed us to test the informational influence hypothesis and the authority influence hypothesis within the same task.
In Study 2, 40 scenarios were employed in which participants were confronted with the same 12 decision tasks of Study 1. To test our hypotheses, we created all possible variations of these decision tasks in terms of the hierarchical rankings of the previous decision makers. More specifically, 40 scenarios covering all possible decision sequences of up to four decision makers were created, in which the medical director and assistant physicians decided at all positions in the decision sequence with the corresponding diagnoses. Again, we excluded scenarios with unreasonable preceding decisions (according to a Bayesian analysis), for example, scenarios where two decisions favoring the same diagnosis were followed by an opposing decision. In sum, we created four scenarios with one previous decision maker (the assistant physician or the medical director as the previous decision maker, favoring or opposing participants' private information), 12 scenarios with two previous decision makers, and 24 with three previous decision makers (see Tables 3-6 for the scenarios used in Study 2).
Next we structured the scenarios according to the corresponding Bayesian predictions, resulting in four groups of scenarios (scenarios with a posterior probability of 0.50, 0.67, 0.80, and 0.89; see Tables 3-6). This study design allowed us to test both social influence hypotheses. According to the informational influence hypothesis, we should obtain no differences in participants' decision making and probability judgments in (the four groups of) scenarios where the Bayesian solution is the same. However, according to the authority influence hypothesis, participants' decisions should vary depending on (a) whether the decision of the higher ranked decision maker (the medical director) supports or speaks against participants' privately held information and (b) whether the medical director is one of the preceding decision makers or not. Because the informational value of the previous decisions was the same regardless of the hierarchical status of the preceding decision makers, changes in participants' decision making and probability judgments within a scenario group (i.e., a group of scenarios with the same Bayesian solution) could be traced back to the impact of the hierarchical status of previous decision makers.

Table 3. Participants' decisions and probability judgments for the 13 decision scenarios of Study 2 in which the posterior probability of one disease according to a Bayesian analysis was 0.67.

Therefore, we calculated the average proportion of participants' decisions in favor of their private information for the following three types of scenarios (within each of the four groups of scenarios, i.e., of scenarios with a posterior probability of 0.50, 0.67, 0.80, and 0.89; see Tables 3-6):

1. Scenarios in which only assistant physicians were the preceding decision makers (baseline condition)

2. Scenarios in which the medical director's decision supported participants' private information
3. Scenarios in which the medical director's decision spoke against participants' private information
In accordance with the authority influence hypothesis, we predicted that (a) participants would decide more strongly according to their private information (and would be more confident) when the medical director supported it, relative to decisions in the baseline condition (i.e., comparing scenario type 2 to type 1); and (b) participants would decide less according to their private information (and would be less confident) when the medical director's decision spoke against it, relative to decisions in scenarios of the baseline condition (i.e., comparing scenario type 3 to type 1).
Method
Ethics statement. The study was conducted in accordance with the Declaration of Helsinki and the ethical guidelines of the American Psychological Association. Before the start of the experiment, all participants filled out a written informed consent form, which informed them about the goals and procedure of the experiment and clearly indicated that they could abandon the experiment at any time without consequences. Prior to the experiment, the investigator collected the signed informed consent forms. No participant abandoned the experiment. All questionnaire data were entered into our database in anonymized form such that they could not be assigned to individual subjects.

Table 6. Participants' decisions and probability judgments for the 10 decision scenarios in Study 2 in which the posterior probability of both diseases according to a Bayesian analysis was 0.50 (i.e., the posterior probabilities predicted an indifference situation between both diseases).

Participants. Forty students from different departments at the University of Basel participated in the experiment, which took approximately 1 h. Participants received course credit or a book voucher worth 10 Swiss francs. In addition, participants were informed that one of their diagnoses would be selected randomly, and if that diagnosis was correct according to the Bayesian solution they would be rewarded with 2 Swiss francs. If their corresponding confidence rating lay within a range of ±5% of the Bayesian solution, they would receive an additional 2 Swiss francs.
Procedure. First, participants received a description of a hypothetical situation in a hospital. They were asked to imagine themselves in the position of an assistant physician who had to make a decision concerning a patient's disease. Participants were told about two possible diseases, which were a priori equally likely: sigmoid diverticulitis and appendicitis. Both diseases were probabilistically related to two independently occurring symptoms. Participants were informed that the patient suffered from one of the two symptoms; this constituted the participant's private information. The first symptom, regurgitation, was more often observed when patients suffered from sigmoid diverticulitis; that is, the conditional probability of observing the symptom when the patient suffered from the disease was p(regurgitation|sigmoid diverticulitis) = 0.67, whereas the conditional probability of observing the symptom when the patient suffered from appendicitis was p(regurgitation|appendicitis) = 0.33. The second symptom, twinges in the lower left part of the abdomen, was more often observed when patients suffered from appendicitis; that is, p(lower abdominal twinges|appendicitis) = 0.67, whereas the symptom was less often observed when patients suffered from sigmoid diverticulitis; that is, p(lower abdominal twinges|sigmoid diverticulitis) = 0.33.
In addition, the scenarios provided public information concerning the previous diagnoses made by other assistant physicians and/or the medical director, which were recorded in the patient's record. Participants were informed about the average accuracy of the assistant physicians and the medical director when making an independent diagnosis, that is, a diagnosis without knowing other physicians' diagnoses. Participants were told that an independent diagnosis of the assistant physician and the medical director was correct in two of three cases (p = 0.67). Thus, the decisions of all preceding decision makers (independent of their hierarchical rank) had the same chance of being correct.
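Because every informative signal (the symptom or an independent diagnosis) has the same validity of 0.67, the Bayesian posterior depends only on the net number of informative signals favoring a disease. A minimal sketch of this computation, under the standard cascade-paradigm assumption that decisions made after a cascade has started reveal nothing about their makers' private information:

```python
def posterior_from_signals(k_net, likelihood_ratio=2.0):
    """Posterior probability of a disease given k_net independent signals
    favoring it on net; each signal has validity 2/3, i.e. a likelihood
    ratio of 0.67 / 0.33 = 2."""
    odds = likelihood_ratio ** k_net
    return odds / (1.0 + odds)

# Reproduces the four scenario groups used in Tables 3-6:
for k in range(4):
    print(k, round(posterior_from_signals(k), 2))  # 0.5, 0.67, 0.8, 0.89
```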
After the initial situation was described, 40 decision scenarios were given to the participants in a randomized order. The 40 scenarios provided the participants with the symptom of the patient and the previous diagnoses. Tables 3-6 summarize the 40 decision scenarios with the corresponding posterior probabilities. For each scenario participants were asked to predict which disease (appendicitis or sigmoid diverticulitis) the patient had developed. In addition, they were asked to judge the probability with which they thought their diagnosis would be correct (on a scale of 50-100%).
Results
The purpose of Study 2 was to examine individuals' decision making in relation to the predictions of the informational and authority influence hypotheses. We have broken down our analysis into three parts: First, we present the results of testing the informational influence hypothesis. Next, we describe the results of examining the authority influence hypothesis. Finally, we fit the observed decisions with the social influence model, describing the interplay between informational and authority influences.
Informational social influence. To examine whether participants behaved according to the Bayesian analysis of the decision problem, we first analyzed their decisions. The fifth column of Tables 3-5 shows the proportion of participants who made choices in line with the posterior probabilities derived from the Bayesian analysis (see Eq 2). For all scenarios in which the posterior probability was in favor of one disease (Scenarios 1-30), 92.0% of all choices were consistent with the Bayesian prediction. In particular, when the Bayesian prediction was in favor of a participant's private information, 95.1% of all choices were consistent with the prediction. To determine if informational cascades occurred, Scenarios 3, 4 and 9-13 are crucial (Table 3). Here the Bayesian solution predicts that the private signal should be ignored in favor of the previous decisions. Consistently, a high degree of cascade behavior occurred: Of all 280 decisions, 230 (82.1%) were consistent with the Bayesian prediction.
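For readers unfamiliar with the cascade logic tested in these scenarios, the following sketch simulates a sequence of fully Bayesian decision makers in the paradigm's terms (signal validity 2/3, ties broken in favor of the private signal): once the public evidence outweighs any single private signal, every subsequent rational choice ignores the private draw, which is exactly the behavior probed by Scenarios 3, 4, and 9-13. The function and its tie-breaking convention are our illustration, not the authors' code:

```python
import random

def simulate_cascade(true_urn="A", n_agents=8, validity=2/3, seed=1):
    """Simulate sequential Bayesian decision makers. Each agent sees all
    previous choices and one private signal of the given validity; ties
    between public and private evidence are broken in favor of the
    private signal (an illustrative convention)."""
    rng = random.Random(seed)
    choices = []
    net_public = 0  # net number of revealed signals favoring Urn A
    for _ in range(n_agents):
        correct = rng.random() < validity
        favors_a = correct if true_urn == "A" else not correct
        signal = 1 if favors_a else -1
        if abs(net_public) >= 2:
            # Cascade: no single private signal can flip the posterior,
            # so the choice follows the crowd and reveals nothing.
            choice = 1 if net_public > 0 else -1
        else:
            total = net_public + signal
            choice = signal if total == 0 else (1 if total > 0 else -1)
            net_public += choice  # outside a cascade the choice reveals the signal
        choices.append("A" if choice == 1 else "B")
    return choices

print(simulate_cascade())  # runs of identical choices once a cascade has started
```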
In situations with posterior probabilities of p = 0.50 (Scenarios 31-40), private and public information canceled each other out. As in Study 1, these scenarios allow us to test whether public information has a stronger influence than private information. As shown in Table 6, in only 49.7% of all diagnoses did participants decide in line with their private signal. To explain this result it is important to examine the effect of the authority influence, presented in the next section. Overall, the results show that participants used information provided by others' decisions in a manner consistent with a Bayesian analysis of the problem, supporting the informational influence hypothesis.
Authority influence. To examine the authority influence on participants' decisions, we first analyzed whether participants in general decided against or with the diagnosis of the medical director. We had 1,119 diagnosis decisions in scenarios where the medical director was one of the preceding decision makers. Of these, 838 (74.89%) were in line with the diagnosis of the medical director. However, to evaluate the impact of authority, it is crucial to focus on participants' diagnoses and probability judgments with regard to (a) the Bayesian prediction of each decision scenario and (b) the comparison of scenarios with and without the medical director as preceding decision maker supporting or opposing participants' private information. Therefore, we drew on four scenario groups (see Tables 3-6) in which each scenario had the same posterior probability of one disease (Scenarios 1-30) or the posterior probabilities predicted an indifference situation (Scenarios 31-40).
The authority influence hypothesis predicts that (a) participants should decide more strongly according to their private information (and should be more confident) when the medical director's decision supports their private information relative to decisions in scenarios of the baseline condition where the medical director was not one of the preceding decision makers. Likewise, (b) participants should make fewer decisions according to their private information (and should be less confident) when the medical director's decision speaks against their private information relative to decisions in scenarios of the baseline condition.
We began with scenarios for which the posterior probability of one disease according to a Bayesian analysis was 0.67 (see Table 3). The average proportion of participants' decisions favoring the private information was higher in scenarios where the medical director supported participants' private information compared to the baseline scenarios where the medical director was not one of the previous decision makers (z = -5.12, p = 0.001 according to a Wilcoxon signed-rank test). Moreover, we found a lower average proportion of decisions according to private information in scenarios where the medical director decided against participants' private information compared to the decisions in the baseline scenarios (z = -4.85, p = 0.001). Participants' average probability judgments (M = 0.69) were higher in scenarios where the medical director's decision supported participants' private information compared to the baseline condition (M = 0.65), t(39) = -2.54, p = 0.015. However, the average probability judgments for scenarios where the medical director's decision spoke against participants' private information (M = 0.66) did not differ significantly from the average probability judgments for the baseline scenarios (M = 0.65, p = 0.07).
Next, we present the results of comparing participants' decisions in scenarios for which the posterior probability of one disease according to a Bayesian analysis was 0.80 (see Table 4). We found no significant difference in the average proportions of decisions according to the private information between scenarios where the medical director's decision favored the private information and the baseline scenarios (p = 0.65). However, participants decided less often according to their private information in scenarios where the decision of the medical director spoke against their private information compared to their decisions in the baseline scenarios (z = -2.23, p = 0.026), supporting our authority influence hypothesis. No significant differences in the probability judgments were observed between scenarios where the medical director's decision favored the private information and the baseline scenarios. However, participants' decisions against the medical director's decisions showed significantly lower average probability judgments (M = 0.67) compared to the average probability judgments in the baseline scenarios (M = 0.77, t(39) = 4.36, p = 0.001).
In scenarios in which the posterior probability of one disease was 0.89 (Table 5), we found no significant differences in participants' average proportion of decisions in line with their private information between scenarios where the medical director's decision corresponded to participants' private signal and the baseline scenarios (p = 0.32). Their probability judgments, however, differed significantly between the two conditions, t(39) = -3.55, p = 0.001, with higher confidence for decisions that corresponded with the medical director's decision.
Last, we analyzed decisions and probability judgments in scenarios where the posterior probabilities for both diseases were the same (0.50), predicting indifference between the diagnoses (see Table 6). We found no significant differences between scenarios where the decision of the medical director favored participants' private information and the baseline scenarios. Consistent with the authority influence hypothesis, we found a significantly lower average proportion of participants' decisions in line with their private information in scenarios in which the medical director's decision spoke against participants' private information compared to the baseline scenarios where the medical director was not one of the preceding decision makers (z = -3.42, p = 0.001). Participants' average probability judgments were significantly higher in scenarios where the medical director supported the private information (M = 0.69) compared to the average probability judgments in the baseline scenarios (M = 0.63), t(39) = -5.18, p = 0.001. Moreover, the average probability judgments in scenarios where the decisions of the medical director spoke against participants' private information (M = 0.65) were also significantly higher than the average probability judgments in the baseline scenarios, t(39) = -2.54, p = 0.015.
In sum, we found strong empirical evidence for our authority influence hypothesis when comparing participants' decisions in scenarios without the medical director as preceding decision maker (baseline scenarios) with scenarios in which the medical director's decision contradicted participants' private information. Here, the consistently lower average proportion of decisions according to private information indicates a tendency to follow authority influence. The analysis of the impact of authority influence supporting participants' private information provides evidence that for scenarios with a posterior probability of 0.67, participants more often decided according to their private information (compared to their decisions in the baseline scenarios), whereas this influence was not observed for scenarios with posteriors of 0.80 and 0.89. This could be due to a ceiling effect, because for the scenarios with high posterior probabilities we had already observed high proportions of decisions in line with private information in the baseline scenarios. However, the probability judgments were consistently higher in scenarios with a supporting decision of the medical director compared to the probability judgments in the baseline scenarios, illustrating an authority influence.
The social influence model. Finally, we estimated the social influence model on the basis of participants' decisions. The goal in Study 2 was to distinguish informational from authority influence. Therefore, we decomposed the public information component within Eq 3 into two components instead of only one: one referring to information from higher ranked decision makers and one referring to information from equally ranked decision makers, providing

ln[p(A|D) / p(B|D)] = β bias + (3 − β HR − β ER) · ln LR private + β HR · ln LR HR + β ER · ln LR ER,   (Eq 6)

where β HR refers to the importance given to the information derived from the decisions of the higher ranked (HR) medical director and β ER refers to the importance given to the information derived from the decisions of the equally ranked (ER) assistant physicians. In the case of β HR = 1 and β ER = 1, the social influence model specified by Eq 6 is identical to the pure Bayesian model (see Eq 2). To estimate the four free parameters (β bias, β HR, β ER, and θ; see Eq 6) of the social influence model for every participant in Study 2, we applied the same Bayesian approach as used in Study 1 (except that we used a precision of 0.01 instead of 0.1, with SD = 1/√precision, for the prior distribution of β bias).

The median estimated sensitivity parameter for the social influence model in Study 2 was θ = 7.37 (95% HDI = 6.90 to 7.85), thus slightly higher than in Study 1. The median parameter estimate for β bias was 0.06 (95% HDI = -0.02 to 0.13), indicating no prior bias toward one of the two decision options. As can be seen in Fig 3, the median importance parameter β HR was 1.12 (95% HDI = 1.05 to 1.19), which was higher than the median importance parameter β ER = 0.85 (95% HDI = 0.78 to 0.91). The contrast between the two parameters, β HR − β ER, was positive, with a median difference of 0.27 (95% HDI = 0.15 to 0.39). The median weight for the private information of 1.03 (95% HDI = 0.97 to 1.10) shows that participants gave more weight to private information than to public information derived from the decisions of the equally ranked physicians (Mdn difference = 0.19, 95% HDI = 0.08 to 0.30).
However, we found no difference in the importance given to private information and information derived from the higher ranked physician (Mdn difference = -0.09, 95% HDI = -0.21 to 0.04). Therefore, the results of the social influence model show that people gave greater weight to public information derived from a higher ranked individual than to public information derived from equally ranked individuals. Furthermore, in line with the results of Study 1, people overweighted private information when compared to social information derived from equally ranked people.
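A compact sketch of the Study 2 model as reconstructed in Eq 6 above. The constraint that the private weight equals 3 − β HR − β ER is our reading of the text (it reproduces the reported private weight of 1.03 from the reported β HR = 1.12 and β ER = 0.85), and the scenario at the end is a hypothetical illustration:

```python
import math

LOG_LR = math.log(2)  # each informative signal: likelihood ratio 0.67/0.33 = 2

def subjective_posterior(l_priv, l_hr, l_er, b_bias=0.06, b_hr=1.12, b_er=0.85):
    """Subjective posterior for one disease under the reconstructed Eq 6.
    l_priv, l_hr, l_er are signed log-likelihood ratios carried by the
    private symptom and by the higher-/equally-ranked physicians'
    informative diagnoses; defaults are the reported median estimates."""
    log_odds = (b_bias
                + (3 - b_hr - b_er) * l_priv  # private weight = 1.03
                + b_hr * l_hr
                + b_er * l_er)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Director's diagnosis favors disease A while the private symptom favors B:
# the normatively indifferent case tips slightly toward the director.
print(round(subjective_posterior(l_priv=-LOG_LR, l_hr=LOG_LR, l_er=0.0), 3))  # 0.531
```

This quantifies, under our assumed reconstruction, the Study 2 finding that a single authority decision can tip an otherwise indifferent case.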
As in Study 1, we compared participants' actual probability judgments to those predicted by the model (see Fig 2B). Again, this test of the social influence model was performed purely on the predicted subjective probabilities derived from the model, whose parameters were estimated on the basis of participants' decisions. Thus, participants' probability judgments were not used at all to fit the model. Again, the social influence model was able to predict people's probability judgments very accurately.
Discussion of Study 2
The results of Study 2 support the view that individuals are affected by both informational and authority influences: The majority of participants made decisions that can be regarded as rational when considering the sequential decision problem from a Bayesian perspective. This held for scenarios in which, according to a Bayesian analysis, the posterior probability of one disease was above 0.50. Authority influence was observed when the decision of the medical director contradicted participants' private information (as opposed to the baseline condition), independent of the corresponding posterior probability of the scenarios. The average proportion of decisions according to private information and the corresponding probability judgments were consistently lower, illustrating the authority influence. With regard to the impact of authority influence supporting participants' private information, only the analysis of participants' probability judgments reveals a consistent pattern, that is, higher confidence in their own decision when the previous decision of the medical director was in line with participants' private information. Finally, the results of our social influence model reveal that people treat public information differently due to its normative quality and independently of its validity. Moreover, the social influence model was also able to predict people's probability judgments quite accurately, importantly without making use of the confidence data to estimate the model's parameters.
General Discussion
The primary goal of our studies was to examine how individuals' decisions are influenced by the decisions of others. Therefore, we tried to manipulate informational and authority influences by embedding a social decision task in different contexts. Using the cascade paradigm, we were able to trace back the effects of the two influence types on people's decisions. Study 1 shows that individuals do integrate socially inferred information to make a decision consistent with a Bayesian analysis. Study 2 shows the impact of authority and informational social influences on individual decision making. Authority influence affects people's judgments most when the decision of a higher ranked individual speaks against participants' private information. In these types of situations, people show stronger conformity behavior and lower confidence in their own private information compared to situations in which they are confronted with opposing decisions of individuals whose hierarchical rank is similar to their own. Additionally, we found a consistent authority influence on participants' probability judgments when previous authority decisions supported participants' private information.
As a consequence, one can assume that the impact of authority influence should foster the emergence of information cascades. In Study 1 the majority of our participants decided in indifference situations according to their private information (on average 79.9% of all participants, Scenarios 10-12, see Table 2). In Study 2, when authority influence was exerted, the majority of our participants decided in indifference situations against their private information (on average 61.5% of all participants, Scenarios 36-40, see Table 6). Whereas a cascade ordinarily requires that two decision makers have unfortunately obtained private information indicating the wrong state of affairs and that subsequent decision makers follow them, the results of Study 2 reveal that a single authority decision will suffice to start a cascade, regardless of subsequently obtained private information.
The results of Studies 1 and 2 show that people apparently use social information to make decisions in a way that is generally consistent with a Bayesian perspective on the sequential decision problem. However, quantifying social influences with our computational model based on participants' choices shows that the weight people give to social and private information is context dependent, and therefore the weights deviate from the pure Bayesian analysis that weights both kinds of information equally. In line with recent studies on cascade behavior [24][25][26][27], we found that participants assigned higher weights to private information relative to public information in an urn-and-balls setting (Study 1). In contrast to the procedures used in recent studies on cascade behavior, embedding the decision task in a real-life context reveals that people take public information derived from higher ranked individuals more seriously than public information derived from equally ranked individuals, whereas they overweight private information as compared to social information derived from equally ranked persons. Therefore, we argue that normative social influence cannot be neglected when analyzing the occurrence of information cascades in real-life settings. Moreover, the model, which was estimated only on the basis of participants' choices, was also able to predict people's probability judgments. For Studies 1 and 2 the model was able to explain why people's probability judgments deviate from the posterior probabilities of the Bayesian account.
The current research sheds new light on the motivational grounds of conformity by clarifying the different roles of informational and authority social influence. The findings of both studies highlight the cognitive aggregation of available public and private information as a decisive factor in the occurrence of conformity. According to the informational influence hypothesis, people evaluate the validity of socially inferred information and integrate it to make a decision. One might therefore assume that people are principally influenced by the information of others and that authority influence accounts for conformity behavior only marginally. However, in both studies we used a task in which participants' decisions could be objectively evaluated. Thus the impact of authority influence should have affected people's decisions less than in tasks in which objectively correct solutions are barely, if at all, identifiable (e.g., judging people's attractiveness [39]). Therefore, the results apply to social influence situations where the intellective properties of a task are salient [12,40,41].
Moreover, our results and conclusions are limited with regard to the operationalization of authority influence. As mentioned above, confronting participants with a decision of a higher ranked person is a relatively weak induction of normative social influence. Thus, following the decision of a higher ranked person may happen for different reasons. For instance, the medical director may be responsible for the assistant physicians' decisions; in deviating from the director's decision one runs the risk of publicly undermining the director's responsibility. Alternatively, following the director's decision may also be driven by the desire to gain the director's approval, especially when other assistants have decided otherwise. Consequently, future research needs to address these different mechanisms underlying authority influence in more detail.
From an applied perspective, our results reveal that the emergence of informational cascades can be fostered by authority influence. In particular, our results reveal that in situations in which the decisions of higher ranked individuals should have been given the same importance as those of other individuals due to equal decision accuracy, people still assigned more importance to the decisions of the higher ranked individual. Here, the majority of our participants decided against their private information and thereby started a cascade independent of subsequently obtained private information. From this one may conclude that even when people act rationally according to a Bayesian perspective, a group of decision makers might not make good decisions as a whole. Thus, interventions to support sequential decision-making processes should focus more on changing the design of redundant systems than on changing the individual. Here it is important to change the structure of how individuals make decisions. For instance, one can think of systems in which individuals first decide without knowing the decisions of their predecessors, after which the single decisions are aggregated in a group context. This has the advantage that all available private information is integrated into the group's decision.
Improving the reliability of sequential decision-making structures should also include reflections on the incentives that individuals expect. Our studies focused on situations in which people wanted to maximize their individual outcomes; however, social influence situations may differ with respect to their underlying incentive structure. On the one hand, there can be incentives for following the group regardless of being correct. On the other hand, social influence situations can provide incentives to follow the group and make a correct decision. For example, Hung and Plott provided evidence on how information cascades developed when decision makers were positively rewarded when their personal decision was identical to the majority decision [21]. They demonstrated that the attainment of a group goal led to a tendency to place more weight on public information than on private information. Therefore, it seems important to consider the incentives people expect in sequential decision-making structures and whether these incentives correspond to their individual goals.
Our studies show that people cognitively integrate both private and public information for making decisions. They attach importance to the inferred information not solely based on its validity but also by taking into account the normative qualities of this information. Therefore, people make smart decisions that aim at being accurate and consistent with their social environment.
Supporting Information

S1 Text. The Bayesian analysis of the sequential decision problem. (DOCX)
"year": 2016,
"sha1": "b7cc5d7e69a48be42377eb2f3ebc36381b35198c",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0146536&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b7cc5d7e69a48be42377eb2f3ebc36381b35198c",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
Animal Ethics and Politics Beyond the Social Contract
Article abstract
This paper is divided into three sections. First, I describe the wide plurality of views on issues of animal ethics, showing that our disagreements here are deep and profound. This fact of reasonable pluralism about animal ethics presents a political problem. According to the dominant liberal tradition of political philosophy, it is impermissible for one faction of people to impose its values upon another faction of people who reasonably reject those values. Instead, we are obligated to justify our political actions to each other using reasons that everyone can accept. Thus, in the second section I suggest that our condition of reasonable pluralism inspires us to turn toward some form of contractarianism. The social contract tradition emerged precisely as an attempt to think about how a society characterized by deep moral disagreement could nonetheless agree about the basic principles of justice. I will show, in this section, that although the social contract tradition would seem to contain the best tools for thinking about how to deal with moral disagreement, it fails to help us think through the important issues of animal ethics. In the concluding section, I suggest some ways in which political philosophy might move beyond contractarianism when thinking about this issue, including embracing an agonistic style of politics.
INTRODUCTION
Moral questions surrounding the human treatment of non-human animals are both inescapably pressing and inescapably difficult. They are inescapably pressing because our consciousnesses have been sufficiently raised about this issue to be (at least abstractly) aware of the scale and scope of the harms that are daily inflicted upon helpless animals all around the world. Because of this, moral and political philosophers, even if they do not work directly on issues concerning animals, nowadays need to have an answer to the question, "How does your moral or political philosophy deal with the human treatment of non-human animals?" For a long time, philosophers felt no need to have an answer to this question, but today an inability to answer the question betrays a deficiency of some kind in the philosopher's thought. These moral questions about the treatment of animals are inescapably difficult because the pre-reflective moral intuitions of people about these issues are widely divergent, and furthermore philosophical reflection on issues of animal ethics has yielded widely divergent positions. Thus, we are faced with both the necessity and the difficulty of thinking through the moral issues involved in the human treatment of non-human animals.
I am interested in exploring the following question: given the depth of the disagreement we experience about matters of animal ethics, is there any hope for an overlapping consensus on these issues, and if not, what are the implications? This paper will be divided into three sections. First, I will describe the wide plurality of views about questions of animal ethics, showing that our disagreements on these topics are deep and profound. This condition of reasonable pluralism presents a political problem. According to the dominant liberal tradition of political philosophy, it is impermissible for one faction of people to impose its values upon another faction of people who reasonably reject those values. Instead, we are obligated to justify our political actions to each other using reasons that everyone can accept. Thus, in the second part of the paper I will suggest that our condition of reasonable pluralism inspires us to turn toward some form of contractarianism. The social contract tradition emerged precisely as an attempt to think about how a society characterized by moral and religious disagreement could nonetheless agree about the basic principles of justice that would govern shared political institutions. I will show, in this section, that although the social contract tradition contains some well-developed tools for thinking about how to deal with the fact of reasonable pluralism, it fails to help us think through the important issues of animal ethics. In the concluding third section, I will suggest some ways in which political philosophers might move beyond contractarianism when thinking about these issues. First, I suggest the plausibility of a two-tiered moral system that proposes "contractarianism for humans, utilitarianism for animals." Second, I recognize that this proposal will not satisfy the demands of all reasonable people, such as the radical animal liberationist. Ultimately, those in deepest disagreement about animal ethics should recognize themselves as engaged in agonistic political struggle outside of any implicit social contract with each other, thus blurring the lines between advocacy, extremism, and terrorism.
THE AXES OF DISAGREEMENT
Our moral disagreements about the human treatment of non-human animals can be separated out into three categories, or axes, of disagreement. First, there is the issue of moral membership, which poses the question, "Which beings are part of the moral community?" Beings that are part of the moral community are in some way deserving of moral consideration. But who or what counts? This is a complex question with many possible answers. For some, only humans count as having moral membership. 1 For others, cognitively advanced non-human animals (like great apes) count, 2 but other animals do not. Still others count all sentient life as members of the moral community. Questions about what counts as sentience, and how to measure it, include both philosophical and empirical issues. Still others are even willing to extend moral membership beyond sentient life to include non-sentient nature. 3 To further complicate the question, the issue of moral membership is (arguably) not all-or-nothing, but rather membership can come in gradations. Indeed, controversy here arises even when trying to assign moral membership to different classes of humans. For some philosophers there are some humans who are not fully moral persons, such as stem cells or fetuses, which might be thought to have only partial moral membership, and thus deserving of only partial moral consideration (leaving open the possibility that their concerns can be overridden by countervailing concerns). With non-human animals, some believe that moral consideration should be accorded in proportion to cognitive complexity: the more complex, the more consideration. For others, all animals should have absolute consideration in the form of absolute rights (which are thought to be unoverrideable). Providing a full account of the necessary and sufficient conditions for membership in the moral community is a matter of profound disagreement, and inevitably leads one into complex empirical and metaphysical issues.
Second, there is the issue of moral obligation, which poses the question, "What is the nature of our moral obligations to animals (assuming that they are members of the moral community)?" This question is addressed differently in each of the many moral and religious traditions. For some deontological thinkers, we are obligated to respect the rights of animals, with "rights" understood as moral side-constraints against unwanted interference by others. 4 For some utilitarian thinkers, we are obligated to maximize the utility of all sentient beings, including animals (with "utility" being understood in different ways even within the utilitarian tradition itself). 5 For some care ethicists, we are obligated to practice certain forms of care toward those animals who are caught up in interactions with humans and who are dependent upon us, with "care" being understood in different ways. 6 For others, however, humans have no direct moral obligation towards animals. We may have indirect moral obligations toward animals; namely, we may be obligated to respect other humans' claims over animals they own. Or perhaps, as Immanuel Kant famously argued, we should not abuse animals because doing so would corrupt our moral character (not because of the harm inflicted upon the animal). Furthermore, each of the major religious traditions contains a great deal of material about how humans ought to treat animals. 7 Given the fact that each of these moral and religious traditions has a large number of reasonable adherents, it makes it hard to imagine the possibility of agreement about the content of our moral obligations towards animals. Providing a full account of our moral obligations to non-human animals is a matter of profound disagreement, and inevitably leads one into complex debates in moral and religious philosophy.
Third, there is the issue of the relationship between the moral and the legal, which poses the question, "Which of our moral obligations (whatever they happen to be) are legally enforceable?" This question highlights the fact that the moral domain is not completely coextensive with the legal domain, since there are some moral infractions which are not (and should not be) legally punishable, and there are some morally virtuous acts which are not (and should not be) legally rewarded. For example, there are some cases in which lying is a legally punishable offence (such as lying under oath), but there are other cases in which lying, although morally reprehensible, is not legally punishable (such as lying to a spouse about one's true feelings about the relationship). 8 In the case of animal ethics, it is unclear which moral offences against animals trigger a justification for legal and political regulation and punishment. Jan Narveson, for example, argues that the state should prohibit wanton cruelty toward animals and should act so as to preserve endangered species, but that other, more morally ambiguous practices should be devolved to private judgment: people may individually choose how to treat and whether or not to consume animals, but none of these private views should be coercively legislated so as to regulate the practices of others. 9 We might call this the position of "liberal toleration." Those who accord animals a higher moral status, of course, feel not only entitled but also obligated to impose their moral views on others through legislation, since abstaining from this would permit the continued exploitation and slaughter of countless animals. Descending from the more egregious harms of slaughter and experimentation, there might be cases like the ownership of domesticated animals where even those who find the practice immoral might recognize that the use of state power to enforce that view would be problematic. Providing in detail the necessary and sufficient conditions under which moral values can be legitimately imposed with state force is a matter of profound disagreement, and inevitably leads one into complex debates in political and legal philosophy.
Clearly the issues surrounding the human treatment of non-human animals divide people over questions of metaphysics, moral philosophy, political philosophy, and legal philosophy. One response to the presentation of this plurality of views is to insist, "Yes, there are many views, but only one view, namely my own, is true!" Those attracted to this response are convinced that one of the views on offer is uniquely reasonable, with all the other views being obviously unreasonable; that is, the latter views could be arrived at only through some fairly obvious epistemic or moral missteps. I want to suggest that this would be a mistake. The debate about animal ethics features a reasonable pluralism of views. This is not to say that all the views are reasonable, only that some broad set of the views are reasonable. The reasonable views of others on this issue cannot be dismissed out of hand, but must be respected.
This fact of reasonable pluralism on the subject of animal ethics reflects a broader condition of reasonable pluralism that has long been recognized in the liberal tradition, from the classical social contract thinkers through contemporary articulations of "political liberalism." 10 Pluralism appears as a historical fact that demonstrates that the free exercise of human reason does not issue in identical judgments on all issues of religion, morality, and philosophy, but rather in a rich diversity of such judgments. In Political Liberalism, Rawls argues that "a plurality of reasonable yet incompatible comprehensive doctrines is the normal result of the exercise of human reason within the framework of the free institutions of a constitutional democratic regime." 11 Rawls puts the point more strongly when he later argues that pluralism regarding moral worldviews (or "comprehensive doctrines") is the "inevitable outcome of free human reason." 12 Rawls refers to the cause of this reasonable pluralism as "the burdens of judgment," which expresses the fact that people reasoning in good faith are beset by so many complexities that disagreement is to be expected (and thus respected) even in epistemically ideal conditions. 13 Galston likewise notes, "Modern liberal-democratic societies are characterized by an irreversible pluralism, that is, by conflicting and incommensurable conceptions of the human good." 14 Reason itself, it seems, breeds a pluralism of values. 15 This fact of reasonable pluralism poses a challenge. If we accept that the views of (some) others are reasonable, we will be uncomfortable with enlisting state power to impose our own views upon others who have reasonable objections to them. 16 This discomfort at imposing sectarian views on others, and the hope of living under a regime affirmed by all reasonable people, lies at the heart of liberal political philosophy. As Jeremy Waldron articulates the view, "a social and political order is illegitimate unless it is rooted in the consent of all those who have to live under it; the consent or agreement of these people is a condition of its being morally permissible to enforce that order against them." 17 Nagel echoes, "The task of discovering the conditions of legitimacy is traditionally conceived as that of finding a way to justify a political system to everyone who is required to live under it." 18 Finally, Fred D'Agostino writes, "No regime is legitimate unless it is reasonable from every point of view." 19 Some articulation of this commitment can be found in the writings of all contemporary liberal political philosophers.
The challenge faced by the contractarian tradition (and, indeed, by all of modern political philosophy), then, is clear: we seek consensus about justice, but recognize the fact of reasonable pluralism. Is it possible to overcome pluralism? Is the hope of living under a mutually agreeable regime realistic? The social contract tradition has attempted to answer this question. Social contract thinkers have recognized that our personal values and judgments differ, and this can lead to social conflict. The proposed solution is to set up some kind of hypothetical fair bargaining situation, in which we temporarily bracket those values that divide us and restrict ourselves to values or interests that we share, which thereby permits us to deliberate and (hopefully) agree upon social rules and institutions that can achieve the benefits of social cooperation and avoid the costs of social conflict. 20 Many philosophers recognize the power of the social contract tradition to deal with the problems generated by the fact of reasonable pluralism, and thus many of those concerned about animal ethics have turned to the social contract tradition. But we face a problem when we turn our attention to the issue of animal ethics. Namely, none of the major social contract thinkers included animals or their interests in the contract. Up until very recently, only humans have counted. Animals have been vulnerable outsiders to the contract, and therefore outside the purview of justice. This blindness in the social contract tradition has been exposed as a serious flaw, and many philosophers are now attempting to find ways to deal with the question of animals within the social contract tradition.
In what follows, I will review some of these attempts. I present three separate strands of the social contract tradition (Hobbesian, Lockean, and Kantian), and evaluate whether or not they can adequately deal with the reasonable pluralism we experience about animal ethics. I hope to show that each of them fails in some way. This will lead me to draw a pessimistic conclusion: although contractarianism is a powerful and sophisticated framework for thinking through matters of justice in conditions of reasonable pluralism, it is incapable of offering a viable solution for this particular issue.
VARIETIES OF CONTRACTARIANISM
The first strand of contractarianism comes from Thomas Hobbes. Hobbes imagines a hypothetical social contract in which bargainers are motivated solely by self-interested calculations, stripped of all the moral and religious values that divide them. Can self-interest alone generate shared moral obligations? At first glance, it seems unlikely: why would self-interested bargainers ever agree to "moral" constraints on their behavior? Well, in most cases it is in my self-interest to give up my right to harm you on the condition that you give up your right to harm me. The potential benefits I might gain from harming you are dwarfed by the potential harms that you might inflict on me, and the same goes for you. Thus, mutual non-aggression (secured by state power) will likely be a matter of agreement for Hobbesian bargainers, no matter what moral and religious values they are committed to. Other basic rights can be secured through this kind of idealized bargaining. The difficult questions of morality can (seemingly) be reduced to the simple ground of prudence. 21 The power of this version of contractarianism is that it is able to generate moral obligations without any controversial moral inputs at all. Regardless of which moral or religious ideals you hold, you are likely to agree with the principle of mutual non-aggression simply because of your self-interest.
Despite these benefits, the problems of Hobbesian contractarianism are many. Returning to the question of the human relationship to non-human animals, it is clear that the Hobbesian contract entirely excludes animals from consideration. Why? We humans have the opportunity to use animals to our benefit in all kinds of ways: by owning them as pets, by using them in scientific experiments, by eating them, etc. Under the terms of the Hobbesian contract, we would only be obligated to curtail our self-interested activities towards animals if doing so was reciprocated by animals altering their behavior in ways that provided us humans more benefit than could be gained by self-interestedly exploiting them. 22 However, it is clear that the human use of animals benefits humans' narrow self-interest more than does the non-use of animals. Therefore, it is in our interests to remain in a pre-contract (state of nature) situation vis-à-vis non-human animals. This places animals entirely outside the realm of justice, and leaves them vulnerable to all kinds of human-interested use and abuse.
This problem of the Hobbesian contract applies not only to animals, but to certain classes of humans as well, namely infants and the disabled. These beings cannot offer anything in return for non-aggression. Nothing in our narrow self-interest discourages us from exploiting these beings, using them, perhaps, for scientific experimentation. 23 This conclusion is, to be sure, entirely unacceptable, and clashes strongly with the moral intuitions of most human moral agents. 24 The Hobbesian contract hopes to generate our moral obligations out of a non-moral bargaining situation, but in starting with such a morally stingy framework, it is completely incapable of explaining why we should take into consideration the interests of those beings who cannot buy off our non-aggression. Not even minimal obligations of non-cruelty toward vulnerable groups can be secured. The repulsiveness of this outcome exposes the unacceptability of Hobbesian contractarianism. As Chris Tucker and Chris Macdonald rightly note, "Tying morality to mutual advantage seems to exclude those lacking productive capacities. Any moral theory which fails to afford consideration to children and the congenitally handicapped is hardly worthy of its name." 25 We turn, then, to the second strand of contractarianism, which traces back to John Locke. For Locke, the pre-contract state of nature is not the moral free-for-all that Hobbes describes. Instead, Locke insists that people have pre-political rights; that is, rights that are grounded in our nature, not created through human political agreement. For Locke, these natural rights include the right to life, liberty, and property. Robert Nozick, a contemporary Lockean, describes our natural rights as "moral side-constraints" against the unwanted interference by others. 26 Thus, Lockean bargaining over the social contract takes place against the backdrop of natural rights. This element of the Lockean tradition responds to the main problem identified in the Hobbesian tradition, which is that vulnerable humans, like infants and the disabled, are not afforded protection because doing so is not in the narrow self-interest of the bargainers. For Locke, all humans have pre-political rights that afford them protection against the aggression of others.
But where does this leave animals? For Locke, animals do not have natural rights, and therefore human interactions with animals are properly governed, as with Hobbes, on the basis of our self-interest alone. Following Martha Nussbaum, however, we might insist on extending some package of pre-political rights or entitlements to animals as a constraint on the outcome of human bargaining. 27 While this extension of pre-political rights to animals would provide them with moral side-constraints against self-interested use and abuse by humans, is it a defensible philosophical move? I think not. Recall what motivated us to engage with the social contract tradition in the first place. We find ourselves experiencing reasonable disagreement about the moral questions surrounding the human treatment of non-human animals. Recognizing this, we are inclined to resist straightforwardly imposing the values of one group upon another group who reasonably disagrees. This partisan first-order moral disagreement inspires us to ascend to a non-partisan second-order position where our deliberation is undertaken in a hypothetical and idealized contract situation in which our partisan views are temporarily bracketed. This is done with the hope that the agreement reached at the second order can allow us to return to and better navigate our first-order disagreements. One of the things that we disagree about at the first-order level is whether animals have rights and, if so, what the content of those rights is. When someone proposes building pre-political rights into the (background of the) contract situation, then partisan first-order commitments are being smuggled into our supposedly non-partisan second-order deliberation. This defeats the purpose of the social contract move. If we are not going to bracket at least some of the controversial partisan views that divide us, then deliberating about a social contract will be just as hopeless as our everyday moral disagreements are when none of our values are bracketed. The insistence on extending natural rights to animals is of course a reasonable position, but, since people can reasonably reject it (so it seems to me), it cannot structure the contract situation.
We then turn, finally, to the third and last strand of the social contract tradition, which is most fully articulated by John Rawls, but is rooted in the thinking of Immanuel Kant. Rawlsian contractarianism pictures bargainers as motivated merely by self-interested calculations (like in the contractarianism of Hobbes), but these bargainers are placed behind a "veil of ignorance," which removes from them certain knowledge that might bias their bargaining strategy, and this information includes categories such as race, gender, social class, natural ability, and more. This veil of ignorance helps ensure that the outcome of the bargaining is truly impartial, since no bargainer will be inclined to seek advantages for a particular group to the detriment of others. 28 So Rawls is able to embrace the moral minimalism of Hobbes (since the bargainers have no controversial moral or religious values, but are motivated strictly by self-interest), to drop the question-begging natural rights assumption of Locke, but to avoid (at least some of) the repugnant conclusions of Hobbes by including the veil of ignorance (which helps to guarantee that the bargainers take into account the interests of all moral agents). On this last point, although Rawls had little to say about the issue of disability, many have argued that his position can be extended to include the disabled, by simply making "ability" a piece of information removed by the veil of ignorance. If I were a self-interested bargainer and did not know my level of ability, then I would surely extend basic rights to the disabled, as a way of protecting myself if I happened to be disabled when the veil of ignorance was lifted.
How do animals figure into Rawls' impartial social contract? For Rawls, impartiality only covers humans; that is, the bargainers are partial toward the human species and do not take into account the interests of animals. Many political philosophers who find this unsatisfying, then, propose making "species membership" a piece of information removed by the veil of ignorance. If the bargainers are forced to decide upon rights and obligations without knowing to which species they will belong when the veil of ignorance is lifted, then they will (out of self-interest) extend certain basic rights to all or most non-human animals. 29 While this tweak to the contract situation would indeed result in the extension of moral and political protection to animals, is it a defensible philosophical move? Again, I think not. The reasons for this skepticism are identical to those behind my objection, discussed above, to building pre-political animal rights into the background of the Lockean contract situation. Namely, placing species membership behind the veil of ignorance is not a neutral way of framing the contract, because doing so already chooses sides in the first-order disagreement about whether or not animals count as moral agents. Contractarianism requires a morally thin starting point, and promises to generate morally thick principles of morality and justice. Thus, if people experience disagreement about the nature and limits of the veil of ignorance, then the contract situation itself is a matter of reasonable disagreement, and is therefore not able to help us deal with first-order disagreement.
What I have shown in this section is that all three major versions of contractarianism fail to help us deal with our disagreements over animal ethics because how we construct the contract situation is itself contestable. 30 While contractarianism promises to help us rise above our moral disagreements and find common ground, no such common ground can be found. Our disagreements about animal ethics follow us up into our disagreements about how to structure the contract situation.
BEYOND CONTRACTARIANISM?
While contractarianism is the tradition that most systematically thinks through the problem of reasonable pluralism, it fails to help deal with the many disagreements we have about animal ethics. This failure poses serious problems for political philosophy. My main contribution in this article is to point out the failure and the reasons for its importance, but I am not entirely clear about how the failure is to be resolved or overcome. So, to conclude, I will outline a few provisional and tentative thoughts about how philosophers and activists concerned about animal ethics might think about this issue beyond the contractarian framework.
First, political philosophy might benefit by dropping the insistence, so common to moral and political philosophy, upon a "unified theory" of morality or justice. That is, perhaps we should not expect a single set of principles to adequately deal with all moral and political questions. One way of breaking away from this assumption is spelled out by Nozick, who outlines a possible two-tiered system of morality under the slogan "utilitarianism for animals, Kantianism for people." 31 He explains, "It says: (1) maximize the total happiness of all living beings; (2) place stringent side constraints on what one may do to human beings. Human beings may not be used or sacrificed for the benefit of others; animals may be used or sacrificed for the benefit of other people or animals only if those benefits are greater than the loss inflicted." 32 So Nozick insists upon a minimal baseline commitment that all positions must meet in order to be considered reasonable: "Animals count for something." 33 Any position that grants no moral status to any non-human animals can safely be dismissed as unreasonable, and we should feel entitled to impose legal obligations on adherents of these views regardless of their protests.
But this moral minimalism regarding animals would rule out only cruelty toward animals that does not have sufficient compensating benefits for humans - a far cry from the moral demands of many animal advocates. Nozick recognizes that his proposal is too modest for many people, himself included, but he takes it as a reasonable moral minimum. Perhaps the difficulties outlined in this article might be tackled with the following philosophical strategy: the rights and responsibilities of humans vis-à-vis other human members of the political community should be hashed out through some version of contractarianism, while the obligations of humans vis-à-vis non-human animals should be worked out through other moral theories, starting with a minimalist utilitarian core, and working outward. 34 I find this two-tiered system of morality quite compelling, and it meshes with many of my moral intuitions about animal ethics. However, we are not looking for a view that best mirrors my moral intuitions - we are looking for an overlapping consensus that might capture the moral intuitions (or at least most of the moral intuitions) of the entire political community. When we broaden our scope to think about the moral intuitions of all other reasonable citizens on the issue of animal ethics, we must accept that the two-tiered moral system, which applies seemingly "weaker" moral standards to our treatment of animals (as compared to our treatment of humans), will surely not satisfy all reasonable people. Some animal advocates will view utilitarianism as an unacceptable moral theory for animals (asking, "if it's not good enough for humans, why impose it on animals?"). The moral minimalism of Nozick will be rejected by many philosophers and activists who demand stronger protections for animals. The social contract tradition and political liberalism seem unhelpful in resolving this stand-off. Political liberals insist that we bracket our "non-shared" values when we engage in "public reason" about matters of basic justice. But what does this require of us in this context? Which values get bracketed? If political liberalism prohibits citizens from coercively imposing their values upon others who reasonably disagree, then what is to be done when any policy that we choose concerning the treatment of animals will inevitably generate loud yet reasonable objections from some segment of the political community?
When we experience these deep yet reasonable disagreements about matters of basic justice, we are forced into agonistic politics. In the context of agonistic politics, we should mutually recognize the right of each other to violate the basic political liberal prohibition against coercively imposing one's views over the reasonable objections of others. That is, we should feel entitled to engage in political action without bracketing our non-shared moral values, and we should reconcile ourselves to the fact that others will do so as well. 35 The agonistic contest will hopefully be one mostly between Adversaries, not Enemies, in which both sides agree to willingly abide by the results of the democratic struggle when they lose, even as they keep preparing for the next political battle. In sum, while agonistic politics suspends the basic liberal requirement to abstain from imposing values on others over their reasonable objection, 36 our disagreements about animal ethics are so deep and profound that we cannot avoid imposing controversial values when legislating on this issue one way or the other. 37 The moral minimalist will likely permit the continued slaughter of animals (provided that the practices are relatively humane) and animal experimentation for medical research, among other human uses of animals (whenever the harms to the animals are outweighed by the compensating benefits to humans). The animal liberationist, on the other hand, will insist upon the complete elimination of these practices throughout society, enforced and punishable by law. This impasse (which spans metaphysical, moral, and political questions) is enormous: common ground is entirely lacking, and neither side is obviously and uniquely reasonable (or unreasonable). This divide cannot be adequately dealt with in the tradition of social contract theory, which insists upon a bracketing of non-shared values, since what should be bracketed is itself a matter of reasonable contestation. 38 In the terms of the social contract tradition, then, we should recognize that, when it comes to issues surrounding animal ethics, we confront each other in the state of nature, without an agreed upon social contract to regulate our interactions. This leads to some troubling implications regarding political action - namely, the lines become blurred between important categories like advocacy, extremism, and terrorism. The animal liberationist faces only weak moral constraints (but many prudential constraints) on radical political action for the sake of animal liberation. The moral minimalist, on the other hand, is permitted to try to capture and use state power to punish those who engage in radical actions for the sake of animal liberation. This does not, of course, sanction all violent action in the name of (or in opposition to) the liberation of animals, but it should open us up to the reasonableness of certain forms of radical political action. This issue requires much more elaboration than I can give it here. Surely it is not the case that anything goes in conditions of agonistic politics, since we do have socially contracted constraints on our behavior vis-à-vis other humans (such as the bodily and civil liberties that we are likely to agree about), and possibly also vis-à-vis the legitimately acquired private property of others. Thus, while extremist political action may be permissible in the context of agonistic politics, there are undoubtedly limits within which reasonable political activism must remain.
Moving beyond contractarianism means moving beyond the comfortable hope that our disagreements can be settled on the basis of shared values, shared interests, or an agreed upon decision-procedure. But move beyond we must. The metaphysical, moral, political, and legal questions that surround the issues of animal ethics do not admit of clear rational answers. Only so much ground can be won in the rational world of philosophy. The rest must be won in the wild realm of agonistic politics.
NOTES
[...] and there are some legal obligations that are not themselves moral obligations (such as driving on the right side of the road, wherein the legal obligation is merely a way of prudentially coordinating behaviour).
9 Narveson, "On a Case for Animal Rights", op. cit., p. 45.
10 For a compelling account of the philosophical continuity between the social contract tradition and contemporary "political liberalism" (or "public reason liberalism"), see Gerald Gaus, "Public Reason Liberalism", unpublished.
11 Rawls, John, Political Liberalism, New York, Columbia University Press, 1993, p. xviii.
12 Ibid., p. 37; my italics.
13 Rawls describes the burdens of judgment as follows: "(a) The evidence-empirical and scientific-bearing on the case is conflicting and complex, and thus hard to assess and evaluate. (b) Even where we agree fully about the kinds of considerations that are relevant, we may disagree about their weight, and so arrive at different judgments. (c) To some extent all our concepts, and not only moral and political concepts, are vague and subject to hard cases; and this indeterminacy means that we must rely on judgments and interpretation (and on judgments about interpretations) within some range (not sharply specifiable) where reasonable persons may differ. (d) To some extent (how great we cannot tell) the way we assess evidence and weight moral and political values is shaped by our total experience, our whole course of life up to now; and our total experiences must always differ… (e) Often there are different kinds of normative considerations of different force on both sides of an issue and it is difficult to make an overall assessment. (f) Finally…any system of social institutions is limited in the values it can admit so that some selection must be made from the full range of moral and political values that might be realized" (Ibid., pp. 56-57). | 2019-05-03T13:11:37.347Z | 2014-09-01T00:00:00.000 | {
"year": 2015,
"sha1": "8ea27871552dcacf2c8ad9d67746434ead498b0a",
"oa_license": "CCBY",
"oa_url": "http://www.erudit.org/fr/revues/ateliers/2014-v9-n3-ateliers01748/1029066ar.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f59b4a03ca307fb2835411373dba544dd3145e9f",
"s2fieldsofstudy": [
"Philosophy"
],
"extfieldsofstudy": [
"Political Science"
]
} |
7634180 | pes2o/s2orc | v3-fos-license | Total kidney and liver volume is a major risk factor for malnutrition in ambulatory patients with autosomal dominant polycystic kidney disease
Background In patients with autosomal dominant polycystic kidney disease (ADPKD), malnutrition may develop as renal function declines and the abdominal organs become enlarged. We investigated the relationship of intra-abdominal mass with nutritional status. Methods This cross-sectional study was performed at a tertiary hospital outpatient clinic. Anthropometric and laboratory data including serum creatinine, albumin, and cholesterol were collected, and kidney and liver volumes were measured. Total kidney and liver volume was defined as the sum of the kidney and liver volumes and adjusted by height (htTKLV). Nutritional status was evaluated by using modified subjective global assessment (SGA). Results In a total of 288 patients (47.9% female), the mean age was 48.3 ± 12.2 years and the mean estimated glomerular filtration rate (eGFR) was 65.3 ± 25.3 mL/min/1.73 m2. Of these patients, 21 (7.3%) were mildly to moderately malnourished (SGA score of 4 and 5) and 63 (21.7%) were at risk of malnutrition (SGA score of 6). Overall, patients with or at risk of malnutrition were older, had a lower body mass index, lower hemoglobin levels, and poorer renal function compared to the well-nourished group. However, statistically significant differences in these parameters were not observed in female patients, except for eGFR. In contrast, a higher htTKLV correlated with a lower SGA score, even in subjects with an eGFR ≥45 mL/min/1.73 m2. Subjects with an htTKLV ≥2340 mL/m showed an 8.7-fold higher risk of malnutrition, after adjusting for age, hemoglobin, and eGFR. Conclusions Nutritional risk was detected in 30% of ambulatory ADPKD patients with relatively good renal function. Intra-abdominal organomegaly was related to nutritional status independently from renal function deterioration. Electronic supplementary material The online version of this article (doi:10.1186/s12882-016-0434-0) contains supplementary material, which is available to authorized users.
Background
Malnutrition increases mortality, morbidity, and the duration of the hospital stay in various clinical settings, including inpatient settings in general as well as in liver failure and cancer patients [1]. In chronic kidney disease (CKD), the prevalence of malnutrition increases to 30-40% of patients, and protein-energy malnutrition is one of the strongest predictors of morbidity and mortality [2,3]. In previous studies, nutritional markers such as serum albumin, creatinine, body mass index (BMI), and subjective global assessment (SGA) score were independent predictors of death and treatment failure in CKD [4,5]. Pre-transplant nutritional status also influences the outcomes of kidney transplantations [6]. Therefore, efforts have been made to establish guidelines for properly assessing the nutritional status of CKD patients and intervening to improve their outcomes [7]. However, the value of nutritional markers has not been meticulously evaluated in patients with early stages of CKD.
Autosomal dominant polycystic kidney disease (ADPKD) is the most common hereditary kidney disease, and can progress to end-stage renal disease (ESRD) as kidney cysts grow. The prevalence of liver cysts in ADPKD patients was 58% in patients aged 15-24 years and up to 94% in patients older than 35 years [8]. Many difficult-to-manage complications can develop as cysts grow and cause massive organomegaly. In a previous study, the mass effect due to organomegaly was reported to cause pressure-related symptoms (46.5%), pain (58.8%), gastrointestinal symptoms (32.4%), and obstructive complications, which can lead to leg edema (20.4%), ascites (16.6%), and infection (3.1%) [9]. In these patients, pressure effects from the enlarged organs may also result in poor oral intake and eventually malnutrition. Occasionally, massive organomegaly requires volume reduction interventions to relieve symptoms and to improve the patient's quality of life [10].
In ADPKD, the mass effects from increased kidney and liver volume may aggravate malnutrition, even in the early stages of kidney disease [11]. Therefore, assessment of nutritional status is advised even in the early stages of ADPKD with significant organomegaly, to ensure timely interventions and the subsequent improvement of clinical outcomes, as in polycystic liver disease patients [11,12]. However, traditional anthropometric parameters, such as body weight and BMI, are of limited value because of the fluid-filled kidneys and liver. In this study, we evaluated the nutritional status of ambulatory ADPKD patients using SGA as a standard method, and identified intra-abdominal organ volume as an independent risk factor for malnutrition.
Patient population
ADPKD patients who visited the polycystic kidney disease clinic at Seoul National University Hospital from December 2013 to March 2014 were included in this study. Patients aged 18 years and older who agreed to participate in the study were included. Abdominal computed tomography (CT) scans of ADPKD patients were taken every other year for clinical purposes as part of a standardized evaluation protocol in the outpatient clinic [13]. Patients with active cancer, active infection, CKD stage 5 at the time of enrollment, ESRD treated with renal replacement therapy, or a history of volume-reductive therapies of the liver (transarterial embolization, liver resection, or transplantation) due to severe polycystic liver disease were excluded. Electronic medical records were reviewed retrospectively, and 31 patients were identified who met the exclusion criteria.
Since this study was a cross-sectional one using clinical data and did not involve further invasive intervention, treatment, or costs to patients, it received a consent exemption and was approved by the Institutional Review Board of Seoul National University Hospital (H-1407-083-594). Patient records were de-identified and analyzed anonymously. This study was performed in accordance with the Declaration of Helsinki.
Subjective global assessment and clinical data collection
The SGA score is a method of nutritional assessment that has been well validated in various settings and is based on a clinical history and physical examination. SGA has been validated in CKD patients as a predictor of complications and outcomes [10-13]. Based on these results, SGA has been recommended in the Kidney Disease Outcomes Quality Initiative guidelines as a nutritional assessment tool, especially for CKD patients [7]. SGA is frequently used as a reference method for evaluating new nutritional assessment techniques.
The modified SGA, which has been validated in many studies of CKD patients [14-17], was performed to evaluate the nutritional status of ADPKD patients according to the standardized protocol in our clinic from December 2013. A well-trained internist performed the SGA to ensure consistency. SGA consists of a medical history (weight changes, dietary intake, gastrointestinal symptoms, functional capacity, and comorbidities related to nutritional needs) and a physical examination. In detail, a clinician inspected subcutaneous fat below the eyes, over the triceps or biceps, and at the chest, and examined the temples, clavicles, and the backs of the hands for muscle wasting. The presence of edema or ascites was assessed by physical examination. Based on these components, the clinician uses a seven-point scale to reflect an overall judgment of the patient's nutritional status. The SGA score was interpreted as follows: 7, well nourished; 6, at risk; 5, mildly malnourished; 3-4, moderately malnourished; and 1-2, extremely malnourished. Laboratory tests, including serum hemoglobin, creatinine, total protein, albumin, and total cholesterol, were performed at the same time. Estimated glomerular filtration rates (eGFR) were calculated with the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation, using isotope dilution mass spectrometry-traceable creatinine [18].
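For readers who want to reproduce the renal function estimates, the sketch below implements the 2009 CKD-EPI creatinine equation in Python. The coefficients are the published ones, but the function name and example values are our own, and the exact equation variant used by the study should be checked against reference [18].

```python
def ckd_epi_egfr(scr_mg_dl: float, age: float, female: bool, black: bool = False) -> float:
    """2009 CKD-EPI creatinine equation, returning eGFR in mL/min/1.73 m2.

    scr_mg_dl: IDMS-traceable serum creatinine in mg/dL, as used in the study.
    """
    kappa = 0.7 if female else 0.9       # sex-specific creatinine threshold
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = 141.0 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209 * 0.993 ** age
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# Example: a 48-year-old woman with serum creatinine 1.1 mg/dL
print(round(ckd_epi_egfr(1.1, 48, female=True), 1))  # roughly 59 mL/min/1.73 m2
```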
Volume measurement of kidneys and liver
In our polycystic kidney disease clinic, abdominal CT scans were taken every other year. The latest abdominal CT scan at the time of nutritional assessment was used to measure total liver volume (TLV) and total kidney volume (TKV). The mean time interval between the CT scan and the nutritional assessment was 12.5 ± 12.6 months. TLV was calculated by adding the products of slice thickness and the area measured on a set of contiguous images generated by CT, using Rapidia 2.8 CT software (INFINITT Healthcare Co. Ltd, Seoul, Korea). TKV was estimated by using the ellipsoid method [19]. Height-adjusted TLV (htTLV, mL/m) and height-adjusted TKV (htTKV, mL/m) were used in this study. Height-adjusted total kidney and liver volume (htTKLV, mL/m) was defined as the sum of the htTLV and htTKV values.
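The volume definitions above reduce to simple arithmetic, illustrated by the minimal sketch below. The function names and example measurements are hypothetical; in the study, the liver tracing was done in the Rapidia software and the kidney ellipsoid approximation follows reference [19].

```python
import math

def ellipsoid_kidney_volume(length_cm: float, width_cm: float, depth_cm: float) -> float:
    """Ellipsoid approximation for one kidney: pi/6 * L * W * D (1 cm^3 = 1 mL)."""
    return math.pi / 6.0 * length_cm * width_cm * depth_cm

def liver_volume_from_slices(slice_areas_cm2: list, slice_thickness_cm: float) -> float:
    """TLV as the sum of (traced slice area x slice thickness) over contiguous CT images."""
    return sum(area * slice_thickness_cm for area in slice_areas_cm2)

def ht_adjusted_tklv(tkv_ml: float, tlv_ml: float, height_m: float) -> float:
    """htTKLV (mL/m) = (TKV + TLV) / height, i.e., htTKV + htTLV."""
    return (tkv_ml + tlv_ml) / height_m

# Example: two kidneys measured on CT plus a liver traced on 40 slices of 5 mm,
# for a patient 1.70 m tall; compare the result against the 2340 mL/m cut-off.
tkv = ellipsoid_kidney_volume(18, 10, 9) + ellipsoid_kidney_volume(17, 9, 9)
tlv = liver_volume_from_slices([150.0] * 40, 0.5)
print(round(ht_adjusted_tklv(tkv, tlv, 1.70)))
```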
Statistical analyses
For statistical comparisons between genders, we used Student's t-test for variables with a normal distribution and the Mann-Whitney U test for variables with a non-normal distribution (height, weight, protein, albumin, htTLV, htTKV, and htTKLV). For SGA scores, all patients were classified into three groups: mildly to moderately malnourished (an SGA score of 4-5), at risk (an SGA score of 6), and well nourished (an SGA score of 7), because no patient had an SGA score of less than 4 [20]. For statistical analysis, we used the linear-by-linear association test or the Jonckheere-Terpstra test to calculate the p for trend across the three SGA groups. P-values <0.05 were considered to indicate statistical significance.
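As a rough Python analogue of these tests (the study itself used SPSS and MedCalc, as noted below), one might write the following. The data here are synthetic stand-ins, and since SciPy lacks a built-in Jonckheere-Terpstra test, Kendall's tau against the ordered group code is shown as a comparable ordered-alternative trend test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-ins for the study variables (not the actual patient data)
age_m, age_f = rng.normal(48, 12, 150), rng.normal(49, 12, 138)
vol_m, vol_f = rng.lognormal(7.3, 0.5, 150), rng.lognormal(7.2, 0.5, 138)

t_stat, p_t = stats.ttest_ind(age_m, age_f)                # normal variables
u_stat, p_u = stats.mannwhitneyu(vol_m, vol_f,
                                 alternative="two-sided")  # non-normal variables

# Trend across the three ordered SGA groups (well nourished / at risk / malnourished)
group = np.repeat([0, 1, 2], [204, 63, 21])
values = rng.lognormal(7.2, 0.5, 288) * (1 + 0.15 * group)
tau, p_trend = stats.kendalltau(group, values)
print(p_t, p_u, p_trend)
```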
Receiver operating characteristic (ROC) curve analysis was used to evaluate htTKLV as a discriminating parameter for malnutrition (SGA score ≤5), in contrast with the well-nourished group (score 7). In this analysis, we excluded patients with an SGA score of 6 to analyze the effect of htTKLV on definite malnutrition. The Youden index was used to determine the optimal cutoff value. Binomial logistic regression was used to test the significance of the htTKLV threshold after adjusting for age, hemoglobin, and eGFR. All statistical analyses were conducted using SPSS version 22 (IBM Corporation, Armonk, NY, USA) and MedCalc for Windows version 14 (MedCalc Software, Ostend, Belgium).
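The ROC/Youden procedure can be sketched with scikit-learn as below. This is not the study's code: the data are simulated, so the numbers will not reproduce the reported AUC of 0.727 or the 2340 mL/m cut-off; it only illustrates how the optimal threshold is picked.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Synthetic htTKLV values (mL/m): SGA 7 = well nourished (label 0),
# SGA 4-5 = malnourished (label 1); SGA-6 patients are excluded, as in the paper.
rng = np.random.default_rng(1)
httklv = np.concatenate([rng.lognormal(7.4, 0.4, 204),
                         rng.lognormal(7.9, 0.4, 21)])
y = np.concatenate([np.zeros(204), np.ones(21)])

fpr, tpr, thresholds = roc_curve(y, httklv)
auc = roc_auc_score(y, httklv)

# Youden index J = sensitivity + specificity - 1 = TPR - FPR;
# the optimal cut-off maximizes J over all candidate thresholds.
j = tpr - fpr
cutoff = thresholds[np.argmax(j)]
print(f"AUC = {auc:.3f}, optimal cut-off = {cutoff:.0f} mL/m")
```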
Nutritional status of subjects
Mild to moderate malnutrition was detected in 7.3% of all patients. Only two patients (0.7%) had an SGA score of 4, and 19 patients (6.6%) had a score of 5. Sixty-three patients (21.9%) were at risk of malnutrition (a score of 6), and 204 (70.8%) were well nourished (a score of 7). No statistical difference was observed in the distribution of SGA scores between genders (Table 1).
Hypertension showed a higher prevalence in the lower SGA groups (100, 87.3, and 79.4%, respectively; p for trend = 0.011). However, there was no statistically significant difference in liver cyst prevalence among the SGA groups (p for trend = 0.16) (Table 1).
When we compared the various parameters between patients with SGA scores of 4-5 (malnutrition) and those with scores of 6-7 (at risk or well nourished), similar results were obtained (Additional file 1: Fig. 3).
ROC curve analysis was used to compare the volume parameters and identify a threshold discriminating malnutrition (an SGA score of 4-5) from a well-nourished state (an SGA score of 7). Since SGA score category 6 can be ambiguous due to the limitations of SGA itself, we constructed the ROC curve using the data of patients with SGA scores of 7 (normal) and 4-5 (malnutrition). The area under the curve (AUC) of htTKLV was larger (0.727) than that of htTKV (0.687) and htTLV (0.645). The cut-off value for htTKLV was 2340 mL/m, with a sensitivity of 66.7% and a specificity of 81.4% (Fig. 4). By comparison, in an ROC curve analysis comparing an at-risk or malnourished state (an SGA score of 4-6) with a well-nourished state (an SGA score of 7), similar but less significant results were obtained (the AUCs of htTKLV, htTKV, and htTLV were 0.658, 0.646, and 0.571, respectively), and the cut-off value for htTKLV was 2190 mL/m, with a sensitivity of 53.6% and a specificity of 76.5% (data not shown).
It is well known that the enlargement of the kidneys is closely related to renal insufficiency in ADPKD patients [21]. As expected, the eGFR fell as the SGA score decreased (Fig. 2), and the proportion of patients with lower SGA scores increased in our patients as the CKD stages increased from 1 to 3 (Fig. 5). When we stratified by CKD stage, even in stage 1 and 2 CKD, 15.4 and 20.9% of patients, respectively, were either malnourished or at risk of malnutrition. Among stage 3 and 4 CKD patients, 43.4 and 42.8%, respectively, were either malnourished or at risk of malnutrition. In patients with stage 4 CKD, the proportion of patients with a lower SGA score was slightly lower than among stage 3B CKD patients, which may have been due to the relatively small number of patients in stage 4 CKD or because we excluded patients with severe organomegaly who had already undergone surgical intervention. In order to minimize the confounding effect of renal failure, subgroup analysis was performed in patients with an eGFR ≥45 mL/min/1.73 m2 (CKD stages 1-3A). In these patients, only htTKLV showed a significant association with SGA scores (Fig. 3d).
[Table 1 legend: TKLV, TKV, TLV, htTKLV, htTKV, and htTLV are shown as median and interquartile range. BMI, body mass index; CKD, chronic kidney disease; eGFR, estimated glomerular filtration rate; htTKV, height-adjusted total kidney volume; htTKLV, height-adjusted total kidney and liver volume; htTLV, height-adjusted total liver volume; SGA, subjective global assessment; TKV, total kidney volume; TKLV, total kidney and liver volume; TLV, total liver volume.]
Using 2340 mL/m as the cut-off value of htTKLV based on the ROC curve analysis, logistic regression analysis was used to estimate the odds ratio between the malnourished (an SGA score of 4-5) and the well-nourished group (an SGA score of 7), using variables that showed statistical significance among the SGA groups. Patients with an htTKLV ≥2340 mL/m showed a higher risk of malnutrition (an SGA score of 4-5) (odds ratio = 8.74, 95% confidence interval 3.30-23.13, p < 0.001), even after adjusting for age, hemoglobin, and eGFR.
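An illustrative Python analogue of this adjusted model (the study used SPSS) is sketched below with statsmodels on simulated data. The covariates follow the paper, but all values, and hence the resulting odds ratio, are synthetic.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 225  # SGA 4-5 plus SGA 7 patients
X = np.column_stack([
    rng.integers(0, 2, n).astype(float),  # htTKLV >= 2340 mL/m (0/1 flag)
    rng.normal(48, 12, n),                # age
    rng.normal(13, 1.5, n),               # hemoglobin
    rng.normal(65, 25, n),                # eGFR
])
# Toy outcome generated so that the flag carries some signal and the fit converges
logit = 0.5 * X[:, 0] - 2.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
odds_ratios = np.exp(model.params[1:])   # first entry is the htTKLV flag
conf_int = np.exp(model.conf_int()[1:])  # 95% confidence intervals
print(odds_ratios[0], conf_int[0])
```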
Discussion
Although previous studies have assessed the association of htTKV with renal function outcomes [22] and poor quality of life [23], this is the first study to assess the nutritional status of ADPKD patients and its relationship with htTKLV. In our previous study, an htTLV value >1600 mL/m was associated with an increase in pressure symptoms [9]. Therefore, we hypothesized that an enlarged liver and/or kidneys may exert a mass effect on the nearby areas of the gastrointestinal tract, causing gastrointestinal symptoms and eventually affecting the nutritional status of ADPKD patients. With this in mind, we measured htTKV and htTKLV, defining htTKLV to reflect the total mass effect of the enlarged kidneys and liver. We showed that htTKLV was the sole significant predictor of malnutrition after adjusting for other risk factors, including renal function. Even in subjects with relatively good renal function (eGFR ≥45 mL/min/1.73 m2), htTKLV was significantly associated with SGA scores of 4 and 5. Based on the ROC curve analysis and binomial logistic regression, htTKLV values ≥2340 mL/m, which is three times larger than the mean liver volume of healthy individuals [24], raised the risk of malnutrition by more than eightfold in ADPKD patients. When compared with htTLV and htTKV, htTKLV showed a closer relationship to malnutrition on the ROC curve, suggesting that total organ volume, instead of the size of each organ, may be responsible for the mass effect and the corresponding symptoms. However, since we excluded patients with severe polycystic liver disease who underwent surgical therapy (n = 16; mean htTLV, 5136 ± 2563 mL/m), the statistical association of htTLV with SGA scores could have been underestimated.
[Fig. 1 Correlations between the SGA score and the anthropometric nutritional parameters: a body weight (Bwt) and b body mass index (BMI). SGA 7, well nourished; SGA 6, at risk; SGA 5, mildly malnourished; SGA 3-4, moderately malnourished.]
This study also shows that the prevalence of malnutrition in ADPKD should not be ignored. Most patients had no malnutrition (70.8%). However, even in an outpatient clinic, 7.3% of patients were mildly to moderately malnourished (SGA scores of 4 and 5), and 21.9% of patients were at risk of malnutrition (an SGA score of 6). In previous studies, the prevalence of malnutrition in stage 4 and 5 CKD has been reported to be 20-30% [16,25], and 10-60% of dialyzed patients have been found to have malnutrition (SGA score ≤ B by using conventional SGA or ≤5 by using modified SGA) [26]. Cuppari et al. [20] found that approximately 11% of patients with stage 2-5 CKD had protein-energy wasting (SGA ≤5) and 32% showed signs of protein-energy wasting (SGA 6). It is not proper to compare our data with those of Cuppari et al. [20], since most participants in their study were in the advanced stages of CKD (48.9% in stage 3 and 40.3% in stage 4), unlike our patient sample (58% in stage 1-2 CKD) (Fig. 5). Moreover, it is surprising that the prevalence of malnutrition risk (SGA ≤6) is up to 30% in patients treated in an ambulatory setting with relatively good renal function. We further analyzed whether these findings in ADPKD could be due to the increased volume of the kidneys and liver, which can cause various gastrointestinal symptoms and malnutrition even in the early stages of CKD. In our study, SGA score correlated with htTKLV even in the patients with relatively well preserved kidney function (eGFR ≥45 mL/min/1.73 m2 or CKD stages 1-3A).
[Fig. 2 Correlations between the SGA score and laboratory markers: a estimated glomerular filtration rate (eGFR), b hemoglobin (Hb), c albumin, and d total cholesterol.]
[Fig. 3 Correlations between the SGA score and abdominal volume: a height-adjusted total kidney and liver volume (htTKLV), b height-adjusted total kidney volume (htTKV), and c height-adjusted total liver volume (htTLV); d correlation between SGA score and abdominal volume in subjects with an eGFR ≥45 mL/min/1.73 m2.]
Renal insufficiency itself can contribute to malnutrition and protein-energy wasting [27]. We also observed increased proportions of patients with malnutrition as the CKD stage advanced. The proportion of patients with a lower SGA score was slightly lower in stage 4 CKD than among stage 3B patients, which may have been due to the relatively small number of patients in stage 4 CKD or because we excluded patients with severe organomegaly who had already undergone surgical intervention. In addition, when we analyzed the parameters against SGA scores, most anthropometric or laboratory parameters that are widely used as markers of nutritional status failed to show an association with SGA scores; only renal function did. This significant relationship between renal function and SGA scores suggests that regular assessment of nutritional status is needed in ADPKD patients as the disease progresses.
The association of parameters with SGA scores differed between genders. In male patients, older age, lower body weight, lower BMI, and lower hemoglobin levels were related to lower SGA scores, but these relationships were not seen in female patients. One explanation would be that, because women have relatively less muscle mass than men, changes in body weight and BMI caused by malnutrition might be too small to detect in our Asian female population. Moreover, enlarged cysts, ascites, or edema, which are frequent complications in ADPKD patients, may mask the reduction in muscle mass or body fat proportion. Laboratory parameters such as hemoglobin, total protein, albumin, or total cholesterol were not sensitive enough to detect changes in nutritional status during the early stages of malnutrition. Thus, further studies, including analyses of muscle mass, will be needed to confirm this finding in other cohorts and to clarify the reason. In addition, other markers of nutritional status should be developed for patients with ADPKD, especially for female ADPKD patients.
In this study, we found that 23.5% of patients had a nutritional problem (SGA score ≤6) even in early-stage CKD (stages 1-3A). In addition, an increased htTKLV was an independent risk factor after adjusting for kidney function in a multivariate logistic regression model. Other nutritional biomarkers, such as prealbumin, insulin-like growth factor-1, or transferrin, were not assessed in this study. htTKLV could provide valuable information about nutritional status as well as the progression of disease, but it is cumbersome to measure with current methods. Therefore, developing new tools for the nutritional assessment of ADPKD patients is necessary, and such tools would be useful for improving long-term patient outcomes.
Even though this is the first observational study showing the impact of abdominal mass on nutritional status in ADPKD, it has several limitations. The relatively small number of patients in the low-SGA group may have undermined the statistical power, especially in females. Moreover, our hypothesis that the mass effect from the enlarged liver and kidneys may be related to nutrition needs to be further verified by comparison with other, non-ADPKD CKD groups, which is not possible for now because of the lack of data on nutritional status in the early CKD stages. Further comprehensive studies are also needed to understand the factors affecting nutritional status in ADPKD patients, including socioeconomic status and detailed food intake recorded using diaries. Although SGA is a well-validated method for nutritional assessment in patients with a range of conditions and is easy to perform, its ability to detect subtle changes in long-term nutritional status needs to be validated. Therefore, this study should be replicated in other, larger cohorts, preferably with a multicenter design including non-Asian populations. Also, to eventually improve patient outcomes, more studies are needed to identify possible effective therapies, such as nutritional counseling and dietary intervention, to prevent and treat malnutrition in ADPKD patients. This is a cross-sectional study, but there was a time interval between the SGA assessment and the CT scan of 12.5 ± 12.6 months. We used routinely performed CT scans, taken every other year in the ADPKD outpatient clinic, to measure TKV, TLV, and TKLV. Since the annual growth of polycystic liver disease is about 0.9 to 3.2%, we considered it acceptable to use the information from the latest regular follow-up CT image instead of taking a new one at the time of nutritional assessment [28]. Because of this limitation, there were 4 patients who underwent renal aspiration and sclerotherapy between the CT scan and the SGA assessment. However, when we analyzed the data after excluding these 4 patients, similar results were obtained, and we therefore report the data for all patients.
Conclusions
In conclusion, detecting marginal malnutrition in patients in ADPKD outpatient clinics and initiating proper support can play an important therapeutic role, especially in patients who have decreased renal function or an increased htTKLV. | 2018-04-03T04:59:55.493Z | 2017-01-14T00:00:00.000 | {
"year": 2017,
"sha1": "7efed6805a2fa2abb25668681148e6afa75337fc",
"oa_license": "CCBY",
"oa_url": "https://bmcnephrol.biomedcentral.com/track/pdf/10.1186/s12882-016-0434-0",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7efed6805a2fa2abb25668681148e6afa75337fc",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
31107411 | pes2o/s2orc | v3-fos-license | Argument Mining on Twitter: Arguments, Facts and Sources
Social media collect and spread on the Web personal opinions, facts, fake news, and all kinds of information users may be interested in. Applying argument mining methods to such heterogeneous data sources is a challenging open research issue, in particular considering the peculiarities of the language used to write textual messages on social media. In addition, new issues emerge when dealing with arguments posted on such platforms, such as the need to make a distinction between personal opinions and actual facts, and to detect the source disseminating information about such facts to allow for provenance verification. In this paper, we apply supervised classification to identify arguments on Twitter, and we present two new tasks for argument mining, namely facts recognition and source identification. We study the feasibility of the approaches proposed to address these tasks on a set of tweets related to the Grexit and Brexit news topics.
Introduction
Argument mining aims at automatically extracting natural language arguments and their relations from a variety of textual corpora, with the final goal of providing machine-processable structured data for computational models of arguments and reasoning engines (Peldszus and Stede, 2013; Lippi and Torroni, 2016). Several approaches have been proposed so far to tackle the two main tasks identified in the field: i) argument extraction, i.e., detecting arguments within the input natural language texts and then detecting their boundaries, and ii) relation prediction, i.e., predicting the relations holding between the arguments identified in the first task 1. Social media platforms like Twitter 2 and newspaper blogs allow users to post their own viewpoints on a certain topic, or to disseminate news read in newspapers. Since these texts are short, lack standard spelling, and use specific conventions (e.g., hashtags, emoticons), they represent an open challenge for standard argument mining approaches (Snajder, 2017). The nature and peculiarity of social media data also raise the need to define new tasks in the argument mining domain (Addawood and Bashir, 2016; Llewellyn et al., 2014).
In this paper, we tackle the first standard task in argument mining, addressing the research question: how to mine arguments from Twitter? Going a step further, we also address the following subquestions that arise in the context of social media: i) how to distinguish factual arguments from opinions? ii) how to automatically detect the source of factual arguments? To answer these questions, we extend and annotate a dataset of tweets extracted from the streams about the Grexit and the Brexit news. To address the first task of argument detection, we apply supervised classification to separate argument-tweets from non-argumentative ones. Considering only argument-tweets, in the second step we again apply a supervised classifier to distinguish tweets reporting factual information from those containing opinions only. Finally, we detect, for all those arguments recognized as factual in the previous step, what the source of such information is (e.g., the CNN), relying on the type of the Named Entities recognized in the tweets. The last two steps represent new tasks in the argument mining research field, of particular importance in social media applications.
1 We refer the reader interested in more details on argument mining to (Peldszus and Stede, 2013; Lippi and Torroni, 2016) as survey papers, and to the proceedings of the Argument Mining workshop series (https://argmining2017.wordpress.com/).
2 www.twitter.com
Mining arguments on Twitter
In this section, we describe the approaches we have developed to address the following tasks on social media data: i) argument detection, ii) factual vs. opinion classification, and iii) source identification. Our experimental setting - whose goal is to investigate the tasks' feasibility on such peculiar data - considers a dataset of tweets related to the political debates on whether or not Great Britain and Greece should leave the European Union (i.e., the #Brexit and #Grexit threads on Twitter).
Experimental setting
Dataset. 3 The only available resource of annotated tweets for argument mining is DART (Bosc et al., 2016a). From the highly heterogeneous topics contained in this resource (i.e., the letter to Iran written by 47 U.S. senators; the referendum for or against Greece leaving the EU; the release of the Apple iWatch; the airing of the 4th episode of the 5th season of the TV series Game of Thrones), and considering the fact that tweets discussing a political topic generally have a more developed argumentative structure than tweets commenting on a product release, we selected for our experiments the subset of the DART dataset on the thread #Grexit (987 tweets). Then, following the same methodology described in (Bosc et al., 2016a), we extended this dataset by collecting 900 tweets from the thread on #Brexit. From the original thread, we filtered away retweets, accounts with a bot probability >0.5 (Davis et al., 2016), and almost identical tweets (Jaccard distance, empirically evaluated threshold). Given that tweets in DART are already annotated for task 1 (argument/non-argument, see Section 2.2), two annotators carried out the same task on the newly extracted data. Moreover, the same annotators annotated both datasets (Grexit/Brexit) for the other two tasks of our experiments, i.e., i) given the argument tweets, annotation of tweets as either containing factual information or opinions (see Section 2.3), and ii) given the factual argument tweets, annotation of their source when explicitly cited (see Section 2.4). Tables 1, 2 and 3 contain statistical information on the datasets.
Inter-annotator agreement (IAA) (Carletta, 1996) between the two annotators has been calculated for the three annotation tasks, resulting in κ=0.767 on the first task (calculated on 100 tweets), κ=0.727 on the second task (on 80 tweets), and Dice=0.84 (Dice, 1945) 4 on the third task (on the whole dataset). More specifically, to compute IAA, we sampled the data applying the same strategy: for the first task, we randomly selected 10% of the tweets of the Grexit dataset (our training set); for task 2, again we randomly selected 10% of the tweets annotated as argument in the previous annotation step; for task 3, given the small size of the dataset, both annotators annotated the whole corpus. Classification algorithms. We tested Logistic Regression (LR) and Random Forest (RF) classification algorithms, relying on the scikit-learn tool suite 5. For the learning methods, we used an exhaustive Grid Search through a set of predefined hyper-parameters to find the best performing ones (the goal of our work is not to optimize the classification performance but to provide a preliminary investigation on new tasks in argument mining over Twitter data). We extract argument-level features from the dataset of tweets (following (Wang and Cardie, 2014)), which we group into the following categories:
• Lexical (L): unigram, bigram, WordNet verb synsets;
• Twitter-specific (T): punctuation, emoticons;
• Syntactic/Semantic (S): we have two versions of dependency relations as features, one being the original form, the other generalizing a word to its POS tag in turn. We also use the syntactic tree of the tweets as a feature. We apply the Stanford parser (Manning et al., 2014) to obtain parse trees and dependency relations;
• Sentiment (SE): we extract the sentiment from the tweets with the Alchemy API 6, the sentiment analysis feature of IBM's Semantic Text Analysis API. It returns a polarity label (positive, negative, or neutral) and a polarity score between -1 (totally negative) and 1 (totally positive).
As baselines we consider both LR and RF algorithms with a set of basic features (i.e., lexical).
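A minimal sketch of this setup with scikit-learn is shown below. TF-IDF unigrams/bigrams stand in for the lexical features (the Twitter-specific, syntactic, and sentiment features would be appended as extra columns), and the toy tweets, labels, and grid values are illustrative rather than the paper's actual configuration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# Toy tweets modeled on the examples in Section 2.2 (1 = argument, 0 = non-argument)
tweets = [
    "Tsipras thinks Junker is an unelected Eurocrat #justsaying #democracy #grexit",
    "Soon the Greeks will be out of the euro and they have done themselves a favour #Grexit",
    "A no vote means leaving the euro and that would be a disaster #Greferendum",
    "#USAvJPN #independenceday happy 4th of july #Grefenderum Wireless Festival",
    "Watching the news tonight #grexit",
    "Good morning Athens #Grexit",
]
labels = [1, 1, 1, 0, 0, 0]

# Grid search over the regularization strength of the LR classifier
pipe = Pipeline([
    ("feats", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])
grid = GridSearchCV(pipe, {"clf__C": [0.01, 0.1, 1.0, 10.0]}, cv=2)
grid.fit(tweets, labels)
print(grid.best_params_, grid.predict(["Greece will be out of the euro #Grexit"]))
```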
Task 1: Argument detection
The task consists in classifying a tweet as being an argument or not. We consider as arguments all those text snippets providing a portion of a standard argument structure, i.e., opinions in the form of claims, facts mirroring the data in the Toulmin model of argument (Toulmin, 2003), or persuasive claims, following the definition of argument tweet provided in (Bosc et al., 2016a,b). Our dataset contains 746 argument tweets and 241 non-argument tweets for Grexit (which we use as the training set), and 713 argument tweets and 187 non-argument tweets for Brexit (the test set). Below we report an example of an argument tweet (a) and of a non-argument tweet (b).
(a) Junker asks "who does he think I am". I suspect elected PM Tsipras thinks Junker is an unelected Eurocrat. #justsaying #democracy #grexit
(b) #USAvJPN #independenceday #JustinBieberBestIdol Macri #ConEsteFrioYo happy 4th of july #Grefenderum Wireless Festival
We cast the argument detection task as a binary classification task, and we apply the supervised algorithms described in Section 2.1. Table 4 reports on the obtained results with the different configurations, while Table 5 reports on the results obtained by the best configuration, i.e., LR + All features, per each category.
6 https://www.ibm.com/watson/alchemy-api.html
Most of the misclassified tweets are either ironical, e.g.:
If #Greece had a euro for every time someone mentioned #Grexit and #Greferendum they would probably have enough for a bailout. #GreekCrisis
which was wrongly classified as an argument, or contain reported speech, e.g.:
Jeremy Warner: Unintentionally, the Greeks have done themselves a favour. Soon, they will be out of the euro http://t.co/YmqXi36lGj #Grexit
which was wrongly classified as a non-argument. Our results are comparable to those reported in (Bosc et al., 2016b) (they trained a supervised classifier on the tweets of all topics in the DART dataset but the iWatch, used as test set). Better performances obtained in our setting are most likely due to a better feature selection, and to the fact that in our case the topics in the training and test sets are more homogeneous.
Task 2: Factual vs opinion classification
This task consists in classifying argument-tweets as containing factual information or being opinion-based (Park et al., 2015). Our interest focuses in particular on factual argument-tweets, as we are then interested in the automated identification of their sources. This would allow ranking factual tweet-arguments depending on the reliability or expertise of their source for subsequent tasks such as fact-checking. Given the huge amount of work in the literature devoted to opinion extraction, we do not address any further analysis of opinion-based arguments here, referring the interested reader to (Liu, 2012).
An argument is annotated as factual if it contains a piece of information which can be proved to be true (see example (a) below), or if it contains "reported speech" (see example (b) below). All the other argument tweets are considered as "opinion" (see example (c) below).
To address the task of factual vs. opinion argument classification, we apply the supervised classification algorithms described in Section 2.1. Tweets from the Grexit dataset are used as the training set, and those from the Brexit dataset as the test set. Table 6 reports on the obtained results, while Table 7 reports on the results obtained by the best configuration, i.e., LR + All features, per each category.
Most of the misclassified tweets contain reported opinions/reported speech and are wrongly classified by the algorithm as opinion - such behaviour could be expected given that sentiment features play a major role in these cases, e.g., [example tweet lost in extraction] which was wrongly classified as fact.
Task 3: Source identification
Since factual arguments (as defined above) are generally reported by news agencies and individuals, the third task we address - one that can be of value in the context of social media - is the recognition of the information source that disseminates the news reported in a tweet (when explicitly mentioned). For instance, in:
The Guardian: Greek crisis: European leaders scramble for response to referendum no vote. http://t.co/cUNiyLGfg3
the source of information is The Guardian newspaper. Such annotation is useful to rank factual tweet-arguments depending on the reliability or expertise of their source in news summarization or fact-checking applications, for example.
Our dataset contains 79 factual argument tweets where the source is explicitly cited for Grexit (training set), and 40 factual argument tweets where the source is explicitly cited for Brexit (test set). Given the small size of the available annotated dataset, to address this task we implemented a simple string-matching algorithm that relies on a gazetteer containing a set of Twitter usernames and hashtags extracted from the training data, and a list of very common news agencies (e.g. BBC, CNN, CNBC). If no matches are found, the algorithm extracts the NEs from the tweets through the system of (Nooralahzadeh et al., 2016), and applies the following two heuristics: i) if an NE is of type dbo:Organisation or dbo:Person, it considers such NE as the source; ii) it searches the abstract of the DBpedia 7 page linked to that NE for the words "news", "newspaper", or "magazine" (if found, such entity is considered as the source). In the example above, the following NEs have been detected in the tweet: "The Guardian" (linked to the DBpedia resource http://dbpedia.org/page/The_Guardian) and "Greek crisis" (linked to http://dbpedia.org/page/Greek_government-debt_crisis). Applying the mentioned heuristics, the first NE is considered as the source. Table 8 reports on the obtained results. As a baseline, we use a method that considers all the NEs detected in the tweet as sources.
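The following sketch illustrates the two-step heuristic on a toy example. The gazetteer entries, the NE-triple input format, and the helper name are our own simplifications: the real system relies on the external NE linker cited above and on live DBpedia lookups.

```python
import re

# Illustrative gazetteer: usernames/hashtags from training data plus common news agencies
GAZETTEER = {"bbc", "cnn", "cnbc", "reuters", "guardian", "theguardian"}
NEWS_WORDS = ("news", "newspaper", "magazine")

def find_source(tweet, named_entities):
    """Return the information source of a factual tweet, or None.

    named_entities: (surface_form, dbpedia_type, dbpedia_abstract) triples, assumed
    to come from an external NE linking system and DBpedia lookups.
    """
    # Step 1: match tokens, usernames, and hashtags against the gazetteer.
    for token in re.findall(r"[@#]?\w+", tweet.lower()):
        if token.lstrip("@#") in GAZETTEER:
            return token
    # Step 2: NE-based heuristics -- first the DBpedia type, then the linked abstract.
    for surface, dbo_type, abstract in named_entities:
        if dbo_type in ("dbo:Organisation", "dbo:Person"):
            return surface
        if any(word in abstract.lower() for word in NEWS_WORDS):
            return surface
    return None

nes = [("The Guardian", "dbo:Newspaper", "The Guardian is a British daily newspaper ...")]
print(find_source("The Guardian: Greek crisis: European leaders scramble ...", nes))
```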
[Table 8 values: 0.69, 0.64, 0.67.] Most of the errors of the algorithm are due to information sources not recognized as NEs (in particular, when the source is a Twitter user), or NEs that are linked to the wrong DBpedia page. However, in order to draw more interesting conclusions on the most suitable methods to address this task, we would need to increase the size of the dataset.
Discussion and Future work
This paper investigated argument mining tasks on Twitter data. The main contribution is twofold: first, we propose one of the very few approaches to argument mining on Twitter, and second, we propose and evaluate two new tasks for argument mining, i.e., facts recognition and source identification. These tasks are particularly relevant when applied to social media data, in line with the popular open challenges of fact-checking and source verification, to which these results contribute.
The issue of argument detection on Twitter has already been addressed in the literature. Bosc et al. (2016a,b) address a binary classification task (argument-tweet vs. non-argument) as the first step of their pipeline. Goudas et al. (2015) experiment with machine learning techniques on a Greek-language dataset extracted from social media. They first detect argumentative sentences, and second identify premises and claims. However, none of them is interested in distinguishing facts from opinions or in identifying the arguments' sources. An argumentation-based approach is applied to Twitter data to extract opinions in (Grosse et al., 2015), with the aim of detecting conflicting elements in an opinion tree to avoid potentially inconsistent information. Both the goal and the adopted methodology are different from ours.
As this is work in progress, several open issues remain for future research. Among them, we are currently extending the dataset of annotated tweets both in terms of annotated tweets per topic and in terms of addressed topics (e.g. Brexit after the referendum, Trump), in order to have more instances of facts and sources. On such an extended dataset, we plan to run experiments using the three modules of the system as a pipeline.
Moreover, we plan to extend our pipeline by also considering the links provided in the tweets to verify their sources, i.e., if a tweet claims to report information from the CNN but the link actually redirects to an advertisement website, the source is not indubitably the CNN. | 2017-12-09T18:11:42.234Z | 2017-09-07T00:00:00.000 | {
"year": 2017,
"sha1": "29e4e14a5613be06a39e76bf0d8a4c8217573c2f",
"oa_license": "CCBY",
"oa_url": "https://www.aclweb.org/anthology/D17-1245.pdf",
"oa_status": "HYBRID",
"pdf_src": "ACL",
"pdf_hash": "29e4e14a5613be06a39e76bf0d8a4c8217573c2f",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
260435691 | pes2o/s2orc | v3-fos-license | Prevalence of type 2 diabetes mellitus in elderly in a primary care facility: An ideal facility
In the 2011 census, 5.3% of the Indian population was >65 years of age. This number has grown steadily over the past few years and is rising steeply. The healthcare burden of elderly diabetics is immense, and only proper diagnosis and treatment can prevent further complications. According to the most recent surveillance data in the U.S., the prevalence of diabetes among adults aged ≥65 years varies from 22 to 33%, depending on the diagnostic criteria used. At CSIR-NEERI, India, we have a healthcare system wherein a fixed and limited number of patients are treated for their lifetime by qualified practitioners, with a negligible financial burden of the treatment costs on the patients. The patients have regular monthly follow-up, and hence we diagnose diabetes, evaluate its control, and detect microvascular and macrovascular complications in all patients. We performed a retrospective analysis of all elderly patients following up at the NEERI Hospital to find the exact prevalence of T2DM in the elderly. We observed that of a total of 585 elderly people, 178 had T2DM (a prevalence of 30.42%). The sex ratio of diabetic males to females was almost equal (1:0.97). Obesity was present in 114 people (64%). A high prevalence of hypertension was found in the diabetic elderly population (80%). Comparing our prevalence rates with those of a few other studies, we found that our rates are quite high. The contributing factors may be urban living, a high prevalence of central obesity, and Asian ethnicity; over and above this, data on all patients undergoing treatment are available. We treated all diabetics with persistent systolic BP values >130 mmHg and diastolic BP values >80 mmHg as hypertensive, in order to achieve a reduction in cardiovascular mortality and morbidity. This paper aims to raise awareness of the disease burden in a real primary care setup. It is not a cross-sectional study but a study with 100% inclusion of beneficiaries. It reports real-world urban diabetes prevalence, along with the prevalence of associated hypertension and central obesity.
INTRODUCTION
In the 2011 census, 5.3% of the Indian population was >65 years of age. [1] This number has grown steadily over the past few years and is still rising steeply. The segment of people >80 years of age is increasing at the fastest rate. Furthermore, whereas the majority of those >65 are now between the ages of 65 and 75 years, there will be a demographic shift over the next few decades such that the majority of the geriatric population will be ≥75 years of age. [2] With the enhancement of diagnostic and treatment facilities, better healthcare, and greater awareness, we have a growing population of elderly people. With the rise in this population group, there is an increase in the illness burden and hence in the healthcare burden of each individual. Moreover, as age advances, this is the population segment that is neglected worldwide. Proper evaluation of their problems, correct diagnosis, and suitable treatment are the key factors in reducing the illness burden. This enhances the quality of life of the patients, which is of utmost importance.
Of all diseases, type 2 diabetes mellitus (T2DM) is the single disease affecting the largest number of elderly people, along with hypertension. Diabetes and its complications take a major toll on the quality of life of the elderly and on the healthcare costs of society. Diabetes further increases the risk of cardiovascular mortality in older people. The management of diabetes in the elderly requires special care and attention. According to the most recent surveillance data, the prevalence of diabetes among U.S. adults aged ≥65 years varies from 22 to 33%, depending on the diagnostic criteria used. [3] The epidemic of type 2 diabetes is clearly linked to increasing rates of overweight and obesity in the U.S. population, but projections by the Centers for Disease Control and Prevention (CDC) suggest that even if diabetes incidence rates level off, the prevalence of diabetes will double in the next 20 years, in part due to the aging of the population. Other projections suggest that the number of cases of diagnosed diabetes in those aged ≥65 years will increase 4.5-fold (compared to 3-fold in the total population) between 2005 and 2050. Older adults with diabetes have the highest rates of major lower-extremity amputation, myocardial infarction (MI), visual impairment, and end-stage renal disease of any age group. [3] There is also sufficient evidence that diabetes mellitus (DM) is strongly linked to sudden cardiac death. [4] Hence, it is necessary to screen the entire population for the presence of T2DM and treat them according to the guidelines. This is practically a difficult task, but when small and fixed groups are treated by the same dedicated diabetes practitioners, the results can be encouraging.
In the Council of Scientific and Industrial Research-National Environmental Engineering Research Institute (CSIR-NEERI), India, we have a healthcare system wherein we treat a limited and fixed population. The burden of healthcare costs is borne by the Institute and is negligible for the patient. We have two qualified physicians and one family practitioner treating these patients. All the patients follow up with us from the day they enter the institution and thereafter for their lifetime. Since the facility is free for patients, all prescriptions are taken from these doctors, and outside medical facilities are used only very rarely. This assures continuous follow-up. In addition, we arrange regular camps 2-3 times a year to diagnose diabetes, evaluate its control, and diagnose microvascular and macrovascular complications.
AIM
To find the prevalence of T2DM in the entire elderly population (≥60 years of age) taking treatment at NEERI Hospital.
Secondary aims
• To find the sex ratio of the diabetic elderly population,
• To find the association with central obesity,
• To find the association with hypertension.
MATERIALS AND METHODS
A retrospective analysis of all elderly patients following up at NEERI Hospital was done. The elderly population was defined as people aged 60 years or more on May 1, 2013. The prevalence of T2DM was calculated. T2DM was diagnosed according to the American Diabetes Association guidelines: fasting plasma glucose ≥126 mg/dl (fasting for at least 8 hours), 2-hour plasma glucose ≥200 mg/dl and/or glycosylated hemoglobin ≥6.5%, or patients on antidiabetic medication. The numbers of males and females were noted, as was the number of patients with central obesity. Central obesity was considered present if the abdominal circumference at the umbilicus in the supine position was >90 cm for males and >80 cm for females. Associated hypertension was evaluated. Systolic blood pressure (BP) >130 mmHg and diastolic BP >80 mmHg were considered abnormal for diabetic patients.
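The diagnostic rules above translate directly into a simple classification routine. The following is a minimal Python sketch; the function and field names are our own illustrative choices, not the study's actual analysis code.

```python
# Minimal sketch of the diagnostic rules above; names are hypothetical.
def has_t2dm(fpg_mg_dl, pg2h_mg_dl, hba1c_pct, on_antidiabetic_meds):
    """ADA criteria as stated: FPG >= 126 mg/dl, 2-h PG >= 200 mg/dl,
    HbA1c >= 6.5%, or current antidiabetic medication."""
    return (fpg_mg_dl >= 126 or pg2h_mg_dl >= 200
            or hba1c_pct >= 6.5 or on_antidiabetic_meds)

def has_central_obesity(waist_cm, sex):
    """Waist at umbilicus, supine: >90 cm (males) or >80 cm (females)."""
    return waist_cm > (90 if sex == "M" else 80)

def has_hypertension(sbp_mmhg, dbp_mmhg):
    """Thresholds used for diabetic patients in this study."""
    return sbp_mmhg > 130 or dbp_mmhg > 80

# Example: a hypothetical 62-year-old male record
print(has_t2dm(131, 180, 6.1, False))   # True (FPG criterion)
print(has_central_obesity(94, "M"))     # True
print(has_hypertension(128, 84))        # True (diastolic criterion)
```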
Patients come to the hospital for monthly check-ups and monthly medicines. During these visits, risk stratification of the patients and close monitoring of their symptoms, signs, and laboratory values are done as per recommendations. Moreover, we regularly conduct 2-3 camps every year to diagnose diabetes, evaluate its control, and diagnose microvascular and macrovascular complications. We obtain an annual health checkup for all patients, especially those at high risk. Our system ensures 100% follow-up of all patients, and therefore we do not miss a single case of diabetes.
Statistical method
The prevalence of T2DM in the elderly population was calculated using the direct standardization method. Multiple logistic regression analysis was conducted to look for the association of various (categorical) parameters with diabetes.
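As an illustration of these two computations, here is a minimal Python sketch of a crude prevalence estimate and a multiple logistic regression. The data are synthetic and the covariates are placeholders; this is not the study's analysis code.

```python
# Sketch of crude prevalence and logistic regression on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 585                                # cohort size reported in the paper
diabetes = rng.binomial(1, 0.3042, n)  # ~30.42% crude prevalence
obesity = rng.binomial(1, 0.4, n)      # hypothetical covariate
hypertension = rng.binomial(1, 0.5, n) # hypothetical covariate

# Crude prevalence: cases / population
print(f"prevalence = {diabetes.mean():.4f}")

# Multiple logistic regression: association of categorical predictors
X = sm.add_constant(np.column_stack([obesity, hypertension]))
fit = sm.Logit(diabetes, X).fit(disp=0)
print(fit.params)           # intercept and coefficients
print(np.exp(fit.params[1:]))  # exponentiate for odds ratios
```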
RESULTS
A total of 585 elderly people are following up at NEERI Hospital. Of these, 178 have been diagnosed to date with T2DM. Thus, the prevalence of T2DM in the elderly population is 30.42%.
Of these, 90 (50.56%) are males and 88 (49.43%) are females; thus almost equal numbers of both sexes are affected, the ratio being 1:0.97. One hundred and fourteen patients (64.04%) have central obesity. Eighty percent of patients had associated hypertension.
DISCUSSION
The rising geriatric population, almost 5.3% of the entire population, is a major concern for health economists. [1] A large geriatric population means a higher number of patients with various chronic diseases, and an increasing percentage of lifetime healthcare costs is accounted for by this population. As our population lives longer and medical advances continue, individuals will have a greater chance of developing diseases that occur more commonly in later life; many individuals will also live with chronic illnesses such as diabetes for many more years than is possible at present. [2] According to the most recent surveillance data, the prevalence of diabetes among U.S. adults aged ≥65 years varies from 22 to 33%, depending on the diagnostic criteria used. [3] Postprandial hyperglycemia is a prominent characteristic of type 2 diabetes in older adults, contributing to observed differences in prevalence depending on which diagnostic test is used. Using the A1C or fasting plasma glucose diagnostic criteria, as is currently done for national surveillance, one-third of older adults with diabetes are undiagnosed. The epidemic of type 2 diabetes is clearly linked to increasing rates of overweight and obesity in the U.S. population, but projections by the CDC suggest that even if diabetes incidence rates level off, the prevalence of diabetes will double in the next 20 years, in part due to the aging of the population. [3] Other projections suggest that the number of cases of diagnosed diabetes in those aged ≥65 years will increase 4.5-fold (compared to 3-fold in the total population) between 2005 and 2050. [1] In a study from a South Indian area, prevalence rates of T2DM and impaired glucose tolerance (IGT) were surveyed. In urban areas, 211 subjects (23.7%) had diabetes and 101 (12.4%) had IGT. In the rural area, 56 (9.9%) had diabetes and 82 (14.9%) had IGT. In rural South India, the age-adjusted rates for known diabetes in middle-aged and elderly subjects were unexpectedly high, considering the poor socioeconomic circumstances, decreased health awareness, and decreased access to medical facilities. [5,6] In a study conducted in Trivandrum, the capital city of Kerala State, the overall prevalence of T2DM was found to be 16.3%. This is comparable to the prevalence of diabetes among Indians residing in Singapore. The prevalence is even higher among people of Indian origin in Fiji. These data suggest that increasing life expectancy (as in Kerala State) and changes in lifestyle and nutrition may result in a substantially higher incidence of diabetes in India than currently established. [7] Data from the 2011 National Diabetes Fact Sheet in the U.S. (released January 26, 2011) state that among people aged 65 years or older, 26.9% have diabetes. [8] The overall 10-year incidence of diabetes and impaired fasting glucose was 9.3% and 15.8%, respectively, in a study conducted in Australia on the 10-year incidence of diabetes in older Australians. [9] In a community-based study of the prevalence of T2DM from California using World Health Organization (WHO) criteria, the results among participants with metabolic syndrome aged 50-89 years were as follows: 16.5% of males and 12.7% of females. [10] In another study, from Denmark (1982), T2DM was prevalent in 10% at age 70 years and 12% at age 80 years (WHO criteria). [11]
In yet another study, from Finland, 33.8% of men and 37.9% of women were found to have abnormal glucose tolerance according to WHO criteria. [12] In a Swedish study, the prevalence of T2DM by WHO criteria was 7.6% in men and 4.0% in women, respectively; however, this was an older study [Figure 1]. [13]
The differences in the prevalence rates at various places could be due to differences in ethnicity, [14] the lifestyle of the patients, and the population screened. In our case, prevalence rates are high due to ethnicity, urban lifestyle, central obesity, and the fact that all the geriatric patients who take treatment at our hospital are screened for diabetes.
Hence, we can see that, on screening all the patients in a community, the rate of diagnosis of T2DM increases. The more people are diagnosed with T2DM, the better will be their management and hence the secondary prevention of complications.
In our study, the prevalence of diabetes in elderly males and females was almost equal. This shows that when complete surveillance is done, the sex difference in the prevalence of diabetes in the elderly is negligible. The difference in the male-to-female ratio in other studies may be due to a smaller number of females being included in those studies. One more reason for the high female prevalence rate in our population is that many of these females are widows whose husbands died of cardiovascular causes, both related and unrelated to DM.
Obesity was found in 64% of patients in our study. With advancing age, lean body mass decreases and percent adiposity increases, but there may be little or no change in total body weight. Hence, it is necessary to look for central obesity and not body mass index, which may not reflect true obesity. Aging is associated with sarcopenia, the universal and involuntary decline in skeletal muscle mass. This results in loss of muscle strength and contributes to the eventual inability of the elderly individual to carry out tasks of daily living. A major mechanism of insulin action is facilitating glucose uptake by the muscle. A reduction in lean body mass implies an eventual inability to dispose of glucose, reduced metabolically active lean tissue mass, and reduced physical activity. [7,15] Hypertension is well recognized as an insulin-resistant state. Hypertension is a common comorbidity among persons with diabetes, and its prevalence increases with advancing age. In people with type 2 diabetes, hypertension is a major risk factor for cardiovascular disease. Elderly patients with hypertension and DM have a higher mortality risk than similarly aged controls without DM. The United Kingdom Prospective Diabetes Study blood-pressure trial demonstrated the benefits of more intensive BP control in individuals with type 2 diabetes. Persons randomized to tight BP control (mean treated BP 144/82 mmHg) with an angiotensin-converting enzyme (ACE) inhibitor or beta-blocker had a 24% relative risk reduction in diabetes-related end points, 32% fewer diabetes-related deaths, and 44% fewer strokes compared with those in the less-tight control arm (mean treated BP 157/87 mmHg). Hence, detecting hypertension among diabetics and maintaining its control is essential for reducing the cardiovascular mortality of these patients. [16,17] We have 80% of elderly diabetic patients being treated for hypertension. By keeping stricter criteria, we tend to treat more patients and thus reduce their cardiovascular mortality.
CONCLUSION
Thus, the prevalence of T2DM in the elderly population is 30.42%. Almost equal numbers of both sexes are affected, the ratio being 1:0.97; 64.04% have central obesity, and eighty percent of patients had associated hypertension.
The health burden of elderly diabetics is immense. The associated complications complicate matters further: both life span and quality of life are badly affected, and there is an enhanced workload on the medical care system. But in a system like ours, wherein a fixed and limited number of patients are treated for their lifetime by qualified practitioners with negligible financial burden from treatment costs, the scenario is quite different. The high prevalence of elderly diabetics, i.e., 30.42%, in our setup is probably due to higher rates of detection, ethnicity, lifestyle, and obesity. The duration of diabetes is not within the purview of this paper, but many of these diabetics have been diagnosed with diabetes for a long time.
Like many other studies showing the importance of primary healthcare provision for the health outcomes of beneficiaries and the reduction of treatment costs, [18][19][20] the salient features to follow from this study are:
• A fixed and limited group followed for a lifetime for their chronic illness by a specific trained and qualified practitioner.
• Screening and treating early, while closely monitoring the high-risk population.
This paper aims to raise awareness of the disease burden in a real primary care setup. It is not a cross-sectional study but a study with 100% inclusion of beneficiaries. This is real-world urban diabetes prevalence, together with the associated hypertension and central obesity prevalence.
"year": 2013,
"sha1": "fe8e9b4f08c378319df52cee9f3260c481479342",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/2230-8210.119647",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "62b1defa582bd28cc57b50871fad672c460c1b74",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Design and Fabrication of a Vertical Axis Wind Turbine with introduction of Plastic Gear
This project presents the design of a vertical axis wind turbine using kinetic theory, an aerodynamic model, Hooke's law, and Young's modulus. An aerodynamic model method was used to design the blades, of which there are three for effective harnessing of the wind. A bearing was introduced for easy rotation and noise reduction. Plastic gears were used so that one revolution of the shaft carrying the blades produces forty-six (46) revolutions of the alternator, which then generates electric power. The power generated was 65 W under a wind speed of 0.8 m/s.
Introduction
Energy is absolutely essential to our life. It makes life easier, more comfortable, and better for us, and it enables the activities associated with developed countries, such as transportation, technological advancement, and the ability to produce food and material goods. Energy is a fundamental ingredient for development [1] and has always been a vital and indispensable input to the economic needs of our present civilization [2]. It functions as the driving potential for industrialization. Research shows that growth in the world population is directly proportional to growth in energy demand; therefore, to meet the increasing energy demand resulting from the fast-growing population of the past few decades, wind energy, as one of the renewable energy resources, has been widely developed. Wind energy is free, renewable energy, unlike fossil fuels, coal, and natural gas, which pollute the environment: there are no air-pollution emissions from the consumption of wind energy. As a result, the development of wind energy has been drawing attention from academia to industry [3]. Historically, energy use has been directly related to the gross national product (Jonathan and Brian, 2016) [4], which is a measure of the market value of the total national output of goods and services. Even a casual look at our civilization shows the important part played by the supply and control of energy. Wind is a natural phenomenon involving the movement of air masses caused primarily by the differential solar heating of the earth's surface. Variation in the energy received from the sun affects the strength and direction of the wind. The way in which aeroturbines transform the energy of moving air into rotary mechanical energy suggests the use of electrical devices to convert wind energy to electricity [5]. Wind energy has been used for decades for water pumping as well as for the milling of grains [6].
Wind energy has now been developed into one of the major alternative energy sources. Over 159,000 megawatts of wind generation were operational by the end of 2009 [7,8], with 38,312 megawatts added in 2009 alone [9]. The reasons for this growth are clear and straightforward. Wind is a boundless energy resource that is clean and renewable. By its very nature, wind power has the potential to reduce environmental impacts on wildlife and human health. Improvements in power electronics, materials, and wind turbine designs continually lower the cost of wind-generated electricity, making it economically viable today compared with most fossil fuels. Vertical axis wind turbines can use drag instead of lift; drag is resistance to the wind, like a brick wall, and drag-type blades on a vertical axis are designed to resist the wind and are consequently pushed by it. The Darrieus rotor (1931) [10] is a well-known vertical axis design. Windmills, both vertical and horizontal axis, have many uses, including: hydraulic pumping, motors, air pumps, oil pumps, churning, creating friction, heat generation, electric generation, Freon pumping, and centrifugal pumping [11].
Methodology
Theory
The power in the wind can be computed using the concepts of kinetics. The windmill works on the principle of converting the kinetic energy (K.E.) of the wind into mechanical energy, and the same principle is used in this design. The kinetic energy of any particle is equal to one half its mass times the square of its velocity, ½mv². The amount of air passing in unit time through an area A with velocity V is AV, and its mass m is equal to this volume multiplied by the density ρ of air:

m = ρAV (1)

(m is the mass of air traversing the area A swept by the rotating blades of a windmill-type generator). Substituting this value of the mass in the expression for K.E. gives the wind power

P_w = ½ρAV³ (2)

where P_w = power of the wind (W), ρ = air density (kg/m³), A = area of the segment of wind being considered (m²), and V = undisturbed wind speed (m/s). At standard temperature and pressure, ρ ≈ 1.225 kg/m³.
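As an illustration of Eq. (2), the following minimal Python sketch computes the available wind power; the function name and the use of the standard air density are our choices, not part of the original design.

```python
# Minimal sketch of Eq. (2): P_w = 1/2 * rho * A * V^3
RHO_STP = 1.225  # air density at standard conditions, kg/m^3

def wind_power(area_m2: float, wind_speed_ms: float, rho: float = RHO_STP) -> float:
    """Kinetic power carried by wind through a swept area, in watts."""
    return 0.5 * rho * area_m2 * wind_speed_ms ** 3

# Example: 1 m^2 swept area at 5 m/s carries about 76.6 W
print(wind_power(1.0, 5.0))  # 76.5625
```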
Aerodynamic Model of Turbine Blade
In order to model the performance of a vertical-axis wind turbine there are four main possible approaches [6]. In the streamtube approach adopted here, the induced velocity in the upstream part of the rotor is

V_u = a_u V_0 (6)

where V_u is the upstream induced velocity, V_0 is the free-stream air velocity, and a_u is the upstream interference factor, which is less than 1 as the induced velocity is less than the ambient velocity. In the middle plane between the upstream and downstream halves there is an equilibrium induced velocity V_e:

V_e = V_0 (2a_u − 1) (7)

In the downstream part of the rotor, the corresponding induced velocity is

V_d = a_d V_e (8)

where V_d is the downstream induced velocity and a_d is the downstream interference factor, which is smaller than the upstream interference factor. The resultant air velocity that the blade sees depends on the induced velocity and the local tip speed ratio:

W_u² = V_u² [(TSR + sin θ)² + cos² θ] (9)

where W_u is the resultant air velocity, θ is the azimuthal blade position, and TSR is the local tip speed ratio, defined as

TSR = ωR / V_0 (10)

where R is the rotor radius and ω is the angular speed.

Figure 4: Schematic diagram.
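A compact numerical sketch of Eqs. (6)-(10) follows. The interference factors and rotor parameters here are hypothetical, and Eq. (9) is used in the standard double-multiple-streamtube form reconstructed above, so this is illustrative rather than the paper's own computation.

```python
import math

# Hypothetical inputs (not from the paper)
V0 = 5.0      # free-stream wind speed, m/s
a_u = 0.8     # upstream interference factor (< 1)
a_d = 0.7     # downstream interference factor (< a_u)
R = 1.0       # rotor radius, m
omega = 20.0  # angular speed, rad/s
theta = math.radians(30)  # azimuthal blade position

V_u = a_u * V0              # Eq. (6): upstream induced velocity
V_e = V0 * (2 * a_u - 1)    # Eq. (7): equilibrium induced velocity
V_d = a_d * V_e             # Eq. (8): downstream induced velocity
TSR = omega * R / V0        # Eq. (10): local tip speed ratio
W_u = V_u * math.sqrt((TSR + math.sin(theta)) ** 2
                      + math.cos(theta) ** 2)  # Eq. (9)

print(f"V_u={V_u:.2f}  V_e={V_e:.2f}  V_d={V_d:.2f}  "
      f"TSR={TSR:.2f}  W_u={W_u:.2f}")
```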
Design of Shaft
In the design of the shaft, stiffness and torsional rigidity were taken into account, since insufficient rigidity can result in poor performance. The ASME code equation for a solid shaft having little or no axial loading is given as

d³ = (16 / (π τ_allow)) √[(K_b M_b)² + (K_t M_t)²]

where d is the shaft diameter, τ_allow is the allowable shear stress, M_b and M_t are the bending and twisting moments, and K_b and K_t are the combined shock and fatigue factors for bending and torsion.
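To make the sizing step concrete, here is a minimal sketch that solves the ASME solid-shaft equation above for the required diameter. All numerical values are hypothetical placeholders rather than this turbine's actual loads.

```python
import math

def shaft_diameter(Mb, Mt, Kb=1.5, Kt=1.0, tau_allow=40e6):
    """Required solid-shaft diameter (m) from the ASME code equation:
    d^3 = 16 / (pi * tau_allow) * sqrt((Kb*Mb)^2 + (Kt*Mt)^2)."""
    d_cubed = 16.0 / (math.pi * tau_allow) * math.hypot(Kb * Mb, Kt * Mt)
    return d_cubed ** (1.0 / 3.0)

# Hypothetical loads: 10 N*m bending, 5 N*m torsion
print(f"d = {shaft_diameter(10.0, 5.0) * 1000:.1f} mm")
```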
Discussion
The results obtained show that in regions of low wind speed, wind energy can still be harnessed using a small-scale wind turbine and a gear mechanism to produce an encouraging voltage and power. This design produces an average direct voltage of 38 V and an electric power of 65 W, and shows that the output can be improved further to obtain a desirable electric power. It was found that wind speed and rotor area are the major parameters affecting both the wind power harnessed and the mechanical power; the blade shape (blade aerodynamics) also determines the effectiveness of the blade in harnessing the wind. The rotational speed produced by the wind is amplified by the plastic spur gears in the base. This shows that in areas or regions without powerful wind, this method can be used to maximize the available wind energy harnessed and convert it to electrical power. Figure 7 shows the effect of wind speed on the power output of the turbine at different efficiencies. It was observed that at speeds lower than 2 m/s the turbine produces little power; however, at speeds higher than 2 m/s the efficiency of the turbine shows a major difference in power output. At a wind speed of 5 m/s, the power outputs per unit area were found to be 7.66, 19.14, 38.28 and 49.94 W/m² at turbine efficiencies of 10, 25, 50 and 60%, respectively. From this analysis, the device would not be effective in areas where the average wind speed is less than 2 m/s. The effect of rotor size on the power output, shown in Figure 8, revealed that an increase in rotor diameter gives a significant increase in power output; in particular, a wind speed of 5 m/s results in power outputs of 24.09, 60.24, 120.47 and 144.57 W at turbine efficiencies of 10, 25, 50 and 60%, respectively.
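These per-unit-area figures follow directly from Eq. (2) scaled by the turbine efficiency, P/A = ½ρV³η. A minimal check, assuming ρ = 1.225 kg/m³ as above (under this assumption the 60% entry computes to ≈45.9 W/m²):

```python
RHO = 1.225  # kg/m^3, assumed standard air density

def power_per_area(v, eta, rho=RHO):
    """Extracted power per unit swept area: eta * 1/2 * rho * v^3."""
    return eta * 0.5 * rho * v ** 3

for eta in (0.10, 0.25, 0.50, 0.60):
    print(f"eta={eta:.0%}: {power_per_area(5.0, eta):.2f} W/m^2")
# eta=10%: 7.66   eta=25%: 19.14   eta=50%: 38.28   eta=60%: 45.94
```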
Conclusion
The fabrication of this design shows that light materials like fiber plastic, which do not add much to the shaft's weight, are of great help in multiplying the revolutions of the alternator and in turn producing electrical energy. It was found that lightweight gears boost the limited wind energy that the blades can harness.
Recommendation
Wind energy is a good source of renewable energy. It is environmentally friendly and better than fossil fuels and the like; therefore, more research is recommended, especially in the area of using gear mechanisms to increase power output.
1. More research can be done on the best material for manufacturing gears for this purpose.
2. Further work can be done on the best arrangement of gear meshing to ease operation and maintenance.
3. Further work can be done so that regions with low wind speeds, such as 5 m/s, can benefit from wind energy through the introduction of gears.
"year": 2019,
"sha1": "6b6a709cb33b47434ebcf3976c9d2ba50a818d21",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1378/4/042098",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "8fb4b93645f9e5b31e5dc74d32a137832d1c67f1",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
Tissue-specific signatures of metabolites and proteins in asparagus roots and exudates
Comprehensive untargeted and targeted analysis of root exudate composition has advanced our understanding of rhizosphere processes. However, little is known about exudate spatial distribution and regulation. We studied the specific metabolite signatures of asparagus root exudates, root outer (epidermis and exodermis), and root inner tissues (cortex and vasculature). The greatest differences were found between exudates and root tissues. In total, 263 non-redundant metabolites were identified as significantly differentially abundant between the three root fractions, with the majority being enriched in the root exudate and/or outer tissue and annotated as ‘lipids and lipid-like molecules’ or ‘phenylpropanoids and polyketides’. Spatial distribution was verified for three selected compounds using MALDI-TOF mass spectrometry imaging. Tissue-specific proteome analysis related root tissue-specific metabolite distributions and rhizodeposition with underlying biosynthetic pathways and transport mechanisms. The proteomes of root outer and inner tissues were spatially very distinct, in agreement with the fundamental differences between their functions and structures. According to KEGG pathway analysis, the outer tissue proteome was characterized by a high abundance of proteins related to ‘lipid metabolism’, ‘biosynthesis of other secondary metabolites’ and ‘transport and catabolism’, reflecting its main functions of providing a hydrophobic barrier, secreting secondary metabolites, and mediating water and nutrient uptake. Proteins more abundant in the inner tissue related to ‘transcription’, ‘translation’ and ‘folding, sorting and degradation’, in accord with the high activity of cortical and vasculature cell layers in growth- and development-related processes. In summary, asparagus root fractions accumulate specific metabolites. This expands our knowledge of tissue-specific plant cell function.
Introduction
Plant roots secrete a wide range of compounds that function in the mobilization of low-availability nutrients from the soil or govern interaction with other organisms in the rhizosphere. Metabolites may be exported to the soil by diffusion along a concentration gradient, by channel proteins, or by ATP-or proton-driven transporters against a concentration gradient 1 . Numerous transporters have been identified which govern the exudation of specific compounds, mostly in the plasma membrane of cells 2 . However, it is largely unknown where these exometabolites are synthesized within the root, the specificity of root exudation with respect to different compound classes, and how the biosynthetic pathways are spatially partitioned within the root. Considerable efforts have been made to profile the array of released compounds 3,4 . Especially secondary metabolites are important molecules with significant impact on the rhizosphere ecosystem, acting as allelochemicals that are exuded to mediate plant growth in the vicinity 5 or to mobilize nutrients 6 . They also act as signaling molecules that attract or repel microorganisms in the rhizosphere 2,7 . The roles of some groups of secondary metabolites are well described in this context, e. g., flavonoids, strigolactones, or terpenes 8 , while others are less understood. Such specialized metabolites are often accumulated in particular anatomical structures and cell types of the root 9 but information about the localization of metabolites is lost when homogenized sample material is investigated. It is crucial to track the spatial dynamics of metabolite accumulation to provide insights into tissue and cell typespecific metabolite compartmentalization. However, a systematic assessment of metabolites present in root tissues and the exudate fraction has not yet been attempted in monocot and dicot plant species.
Asparagus (Asparagus officinalis L.) is a perennial vegetable consumed worldwide, with a high nutritional value and low-calorie intake. Spears are rich in antioxidants, such as polyphenols, flavonoids, and ascorbic acid as well as amino acids [10][11][12] , while roots are traditionally used as a medicinal product, mainly due to their accumulation of saponins and fructans 13,14 . Asparagus grows from a root system of fleshy storage roots attached to an underground rhizome. Small feeder roots attached to storage roots absorb nutrients and water and are short-lived, while storage roots continue to grow throughout the plant's life. Roots exert considerable antimicrobial activities, producing specialized metabolites (steroid-terpenes, alkaloids, flavonoids, among others) that allow shaping their rhizosphere microbiota over the lifespan of an asparagus bed 14 . Asparagus can therefore serve as an excellent model in elucidating long-term rhizodeposition processes.
Here we describe an integrative approach to define tissue-specific resolution in the metabolome and proteome of storage roots. Until now, root metabolite profiling of asparagus focused on selected metabolite classes, such as saponins [15][16][17] , fructans 18,19 , and flavonoids 20 . Asparagus storage roots consist of an epidermis and a suberized exodermis, the cortex, and the endodermis-surrounded stele 21 . In our approach, roots were dissected into epidermis/exodermis and cortex/vasculature (Fig. S1). The metabolomes of both compartments were compared with that of root exudates. We hypothesized that root tissues have individual metabolite signatures, which differ from those of root exudates. The differentially accumulated metabolite distribution of selected compounds was validated by matrix-assisted laser desorption ionization-mass spectrometry imaging (MALDI-MSI).
Proteins control the biosynthesis of plant metabolites and proteomic techniques have expanded our knowledge about biosynthetic pathways. So far, only transcriptome analyses have been performed in asparagus with the aim to elucidate the biosynthesis of specific compounds 17,22,23 . However, the detection of a particular gene product in a transcript-based experiment does not indicate the presence or absence of the resulting protein product. Further, quantitative differences in the transcript of a particular gene may not necessarily correlate with the corresponding protein abundance or the accumulation of related metabolites. Typical studies measure the protein composition of whole tissue, which leads to an average assessment of the proteome, overlooking cell type-specific dynamics.
Only a few cell type-specific proteome studies have aimed at understanding responses to plant development and specific stresses 24 . Therefore, the objective of this study was to investigate the compartmentalization of biosynthetic pathways to enhance our understanding of the intricate regulation of metabolic pathways and networks at the cellular level.
Metabolome profiling of asparagus roots and root exudates
Measurements of exudates, outer and inner root tissues by negative ion electrospray ionization (ESI−) led to the detection of 1915, 1437, and 1309 mass/retention time pairs, respectively. ESI+ measurements contained slightly more signals, with 2127, 2413, and 1942 in exudates, outer and inner tissue, respectively. The principal component analysis revealed a distinct separation between the three fractions, for both ionization modes (Fig. S2).
The distribution of compounds between the root fractions was visualized using Venn diagrams. A total of 613 ESI− and 1040 ESI+ signals were common to exudates, outer and inner root tissues (Fig. 1). The largest number of unique features was found for root exudates; more features from outer tissue were common to root exudates as compared to inner tissue. Paired t-tests (p < 0.05, fold change >2) confirmed the above observation and revealed that metabolite profiles of outer and inner tissue were more similar to each other than to exudates. Table S1 presents those features that were successfully annotated based on their accurate mass and tandem mass spectrum, including their compound class annotation according to KEGG pathway analysis. In total, 263 non-redundant metabolites were found to have significantly different abundances across the three root fractions. Under ESI−, 139 annotated metabolites were significantly changed, with the largest group being 'lipids and lipid-like molecules' (38%) and the second largest being 'phenylpropanoids and polyketides' (16%, Fig. 2A). Under ESI+, 170 annotated metabolites were significantly changed, with 55% being 'lipids and lipid-like molecules' (Fig. 2B). The second-largest group was annotated as 'phenylpropanoids and polyketides', accounting for 12% of annotated compounds.
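The feature-filtering logic described above (a per-feature t-test at p < 0.05 combined with a two-fold abundance change) can be sketched as follows. The arrays and thresholds are illustrative stand-ins for the MetaboScape feature table, not the authors' actual pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_features = 1000
# Hypothetical intensity matrices: rows = features, columns = replicates
outer = rng.lognormal(mean=5.0, sigma=1.0, size=(n_features, 9))
inner = rng.lognormal(mean=5.0, sigma=1.0, size=(n_features, 9))

# Per-feature t-test and fold-change filter (p < 0.05, fold change > 2)
t, p = stats.ttest_ind(outer, inner, axis=1)
fold = outer.mean(axis=1) / inner.mean(axis=1)
significant = (p < 0.05) & ((fold > 2) | (fold < 0.5))
print(f"{significant.sum()} differentially abundant features")
```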
In general, most of the differentially abundant metabolites accumulated either in the outer tissue or in the exudate or both, while only a limited number was enriched in the inner tissue. Prenol lipids and purines were mainly present in root exudates. The identified prenol lipids belonged to mono-, di-, tri-and sesquiterpene groups (including the phytohormones abscisic acid and gibberellin precursor A12) with largely unknown function in the rhizosphere. Purine nucleosides can be transported out of the cells or might be derived from exuded purine nucleotides that have been hydrolyzed by strong extracellular apyrase activity. Glycerophospholipids were enriched exclusively in the outer tissue. These compounds are major structural constituents of cell membranes and root hairs in particular integrate glycerophosphocholines and glycerophosphoethanolamines into their membranes. Steroids and steroid derivatives were also found mainly in outer tissue. Most compounds in this group were annotated as saponins, which represent a plant protective chemical barrier with antimicrobial activity. Fatty acyls and organoheterocyclic compounds were highly abundant in the exudate and epidermis. Most of the annotated fatty acyls were linoleic acids and their derivatives. Free linoleic acid possesses antifungal activity and could have a protective function. Linoleic acid is the most abundant fatty acid in plant membranes and the enrichment reflects the synthesis of new membranes in the epidermis and root hairs. The annotated organoheterocyclic compounds included asparagusic acid, nicotinic acid derivatives, and B vitamins (riboflavin, niacin), which act as regulators for microbial interactions in the rhizosphere, among others. Phenylpropanoids and polyketides were highly abundant in the exudate and outer tissue, annotated mainly as cinnamic acid derivatives which are major components of root waxes that form the lipid barrier on the root surface; p-coumaric, ferulic, and caffeic acids are also known allelochemicals and antimicrobials. Different members of other metabolite families were found in all three root fractions, including organic acids and their derivatives, and organic oxygen compounds.
Tissue type-specific localization of metabolites in asparagus roots

MALDI-MSI was performed to verify the observed region-specific metabolite accumulation in asparagus roots. Three metabolites were selected for the analysis: riboflavin-5-sulfate and protodioscin, both highly abundant in the outer tissue, and raffinose, which was enriched in the inner tissue. A MALDI-MS method was established using authentic standards of protodioscin and raffinose. For riboflavin-5-sulfate, a riboflavin-5-phosphate standard was used, given the similar molecular masses and the identical MS/MS fragmentation pattern of the riboflavin moiety. α-Cyano-4-hydroxycinnamic acid matrix and negative ionization were used for detection of riboflavin-5-sulfate, with 2,5-dihydroxybenzoic acid matrix and positive ionization for protodioscin and raffinose. The identification of the three substances was based on the molecular mass and MS/MS fragmentation of ions present in methanolic extracts of outer or inner tissue (Fig. S3), since the ion abundances of riboflavin-5-sulfate and protodioscin were too low for on-tissue MS/MS measurements.

[Fig. 2 caption: Metabolite abundance is presented by color coding, where orange is the highest abundance, yellow ocher is medium abundance and bright yellow indicates the lowest abundance of the respective metabolite, as mean of all biological and experimental replicates. On the right-hand side, the abundance of selected compounds is shown, based on all replicate measurements (exudate: n = 15, outer and inner tissue: each n = 9, blank: n = 18). The median, 10th, 25th, 75th, and 90th percentiles are plotted as vertical boxes with error bars. Letters indicate significantly different fractions (Kruskal-Wallis One Way ANOVA on Ranks, followed by Dunn's test for multiple comparisons, p < 0.05). Table S1 provides the annotation of the respective compounds, the confidence level of annotation, measured and theoretical masses, retention times, molecular formulas, ion intensities, p values, and compound class annotation up to level 3 subclass. deriv., derivatives.]
MALDI-MSI confirmed the results of the metabolome profiling (Fig. 3). Riboflavin-5-sulfate was detected exclusively in the outer tissue. The highest abundance of protodioscin was found in the outer tissue, although traces were also detected in the periphery of the cortex. Ion signatures of raffinose were more intense in the cortex as compared to the outer tissue. In order to quantify how strongly those compounds discriminate outer and inner tissue, a receiver operating characteristic (ROC) curve approach was used. The true positive rate of detection was plotted against the false positive rate for each m/z value. Area under the curve (AUC) values between 0 and 1 are obtained describing the discriminatory power of an m/z value based on its normalized relative abundance. The closer the AUC is to 0 or 1, the higher the discriminatory power of the m/z value. AUC values were 0.94 ± 0.03 (±standard deviation, n = 6) for riboflavin-5-sulfate, 0.87 ± 0.10 for protodioscin, and 0.72 ± 0.13 for raffinose, indicating that riboflavin-5-sulfate has the highest ability to distinguish between root tissues.
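The per-m/z discrimination analysis amounts to computing an ROC AUC with pixel-level tissue labels as ground truth and normalized ion intensities as the score (the study performed this step in SCiLS Lab). A minimal generic sketch with synthetic pixel data follows; the arrays are hypothetical.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
# Hypothetical per-pixel data for one m/z value:
# label 1 = outer tissue pixel, 0 = inner tissue pixel
labels = np.concatenate([np.ones(500), np.zeros(500)])
# Normalized intensities, higher on average in the outer tissue
intensity = np.concatenate([rng.normal(1.0, 0.3, 500),
                            rng.normal(0.4, 0.3, 500)])

# AUC close to 1 (or 0) means strong discrimination between tissues
print(f"AUC = {roc_auc_score(labels, intensity):.2f}")
```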
Spatial distribution of root proteomes
The proteome characteristics of asparagus root outer and inner tissue were investigated using a label-free LC-MS approach. The analysis resulted in 598,054 peptide spectrum matches, indicating 127,241 peptides and 2861 proteins. The principal component analysis revealed a close grouping of technical replicate runs and a clear separation between outer and inner tissue-derived protein samples, explaining 62.7% of the observed variation (Fig. S4). The data set was filtered according to the parameters described in the Materials and Methods section and 1924 identified proteins were subjected to statistical analysis. A total of 104 proteins were found exclusively in outer tissue samples, 76 proteins were found only in the inner tissue, and 405 proteins were differentially abundant (p < 0.05, Benjamini-Hochberg corrected for false-discovery rate, Table S3).
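Benjamini-Hochberg control of the false-discovery rate, as applied to the tested proteins, can be sketched in a few lines. This is a generic illustration of the procedure with toy p-values, not the software actually used in the study.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of discoveries under BH FDR control."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    # BH step-up: find the largest rank k with p_(k) <= (k/m) * alpha
    below = p[order] <= (np.arange(1, m + 1) / m) * alpha
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        rejected[order[: k + 1]] = True  # reject all hypotheses up to rank k
    return rejected

pvals = np.random.default_rng(3).uniform(size=1924) ** 2  # toy p-values
print(benjamini_hochberg(pvals).sum(), "proteins pass FDR < 0.05")
```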
In order to characterize individual protein functions, KEGG orthology assignments were performed, permitting annotation of 74.5% of all differentially abundant proteins (Fig. 4, Table S3). Proteins from both tissues were included in the categories 'carbohydrate metabolism', 'energy metabolism', and 'amino acid metabolism', which describe broad and basic metabolic functions. However, differential abundance could be due to the expression of tissuespecific isoforms (e.g., of cysteine synthase, glutamine synthetase, malate dehydrogenase, sucrose synthase), but also due to the enhancement of different cellular processes. For instance, proteins involved in sucrose synthesis (sucrose-phosphate synthase, sucrose-phosphatase) had a higher abundance in the inner tissue, while proteins involved in cell wall-related carbohydrate metabolism (alpha-galactosidase, trifunctional UDP-glucose 4,6-dehydratase/UDP-4-keto-6-deoxy-D-glucose 3,5-epimerase/ UDP-4-keto-L-rhamnose-reductase) were more abundant in the outer tissue. The categories that were specifically enriched for outer tissue proteins were 'lipid metabolism', 'biosynthesis of other secondary metabolites' and 'transport and catabolism', reflecting its main function in the formation of a hydrophobic barrier, secretion of secondary metabolites into the rhizosphere, and water and nutrient uptake. Major pathways for inner tissue-derived proteins included 'transcription', 'translation' and 'folding, sorting and degradation', indicating that the cortical and vascular cell layers investigated in our study are highly active in growth-and development-related processes.
To identify proteins that may be involved in root exudation or other processes at the plant-soil interface, the dataset of proteins found exclusively or with higher abundance in the outer tissue was searched for proteins that are located at the plasma membrane or directed to the apoplast. Software tools for predicting the subcellular localization of proteins (CELLO2GO, LOCTree3, TargetP 2.0, WoLF PSORT) yielded contrasting results. Hence, the asparagus protein sequences were blasted against the UniProtKB/SwissProt database, and the putative subcellular localization and putative function were extracted manually based on experimental data of heterologous proteins, if available. Thirty proteins were found that are putatively localized to the plasma membrane, and ten proteins to the apoplast (Table 1). Most prominent were proteins involved in vesicle trafficking and endocytosis, indicating that vesicle transport might be an important mechanism of exudation in asparagus; endosomal interactions also contribute to tip growth of root hairs. Further, proteins related to lipid metabolism, cell wall metabolism, and signaling/stress response were found, which relate to the primary function of the root epidermis as a barrier and boundary between the plant and its environment. Our analysis identified two ATP-binding cassette (ABC) transporter G family proteins (gi:1150689455, gi:1150740767) as highly abundant in the epidermis. The initial proteome dataset contained two additional ABC transporter G proteins that were expressed exclusively in the epidermis (gi:1150748405, gi:1150679400), however, these proteins were each identified by a single peptide and thus did not meet our quality threshold.
Spatial distribution of biosynthetic pathways
Several metabolite classes were identified that showed a tissue type-specific accumulation (Fig. 2). To gain more insight into these distribution patterns, proteins related to their underlying biosynthetic pathways were examined, using the asparagus KEGG pathways for α-linolenic acid metabolism, steroid biosynthesis, and phenylpropanoid biosynthesis (Fig. 5, Fig. S4). α-Linolenic acid is a polyunsaturated fatty acid, structural component of storage and membrane lipids, and a precursor of the signaling molecule jasmonic acid. A number of proteins involved in earlier metabolic steps were exclusively expressed or had significantly higher abundance in the inner tissue, including a linoleate 9S-lipoxygenase isoform (gi:1150749578), quinone-oxidoreductase (gi:1150698529), and allene oxide cyclase (gi:1150676538). Downstream proteins related to (15Z)-12-oxophyto-10,15-dienoate metabolism were more highly abundant in the outer tissue; a 12-oxophytodienoate reductase isoform (gi:1150734677) and one acyl-coenzyme A oxidase isoform (gi:1150714278) were exclusively expressed there.
Phenylpropanoid metabolism generates a vast array of secondary metabolites. One scopoletin glycosyltransferase (gi:1150677396) was exclusively expressed in the inner tissue. We further investigated the localization of enzymes involved in the biosynthesis and metabolism of different types of specialized metabolites (Table S5). Glycosyltransferases govern the transfer of a glycosyl moiety to a substrate compound, which can be phenylpropanoids, flavonoids, hormones, or xenobiotics. Thirteen glycosyltransferases were identified in asparagus roots by the proteome approach. For most, the specific substrate is unknown but deduced based on sequence similarity searches against UniProtKB. Two proteins were tissue-specific: UDP-glycosyltransferase 92A1 (gi:1150750624), with unknown substrate specificity, was expressed in the outer tissue, and scopoletin glucosyltransferase (gi:1150677396) was found only in the inner tissue. Six proteins were significantly differentially expressed between the tissues, all of them more abundant in the outer one. Notably, the increased occurrence of UDP-glycosyltransferases in the outer tissue was accompanied by a higher abundance of their co-substrate UDP-glucose. Cytochrome P450 monooxygenases mediate multiple oxidative processes, especially in the biosynthesis of specialized metabolites. Twelve cytochrome P450 enzymes were found in this analysis. P450 71A1 (gi:1150681655), involved in cyanogenic glycoside biosynthesis, was exclusive to the outer tissue. Three enzymes had significantly higher expression in this tissue, two involved in phenylpropanoid metabolism and one in fatty acid biosynthesis (gi:1150714305, gi:1150734984, gi:1150748158). P450 90B1 (gi:1150690507), functioning in brassinosteroid metabolism, was significantly highly expressed in the inner tissue.

[Fig. 5 caption: Heatmaps depicting proteins that were identified by the proteome analysis as involved in the biosynthetic pathways of α-linolenic acid, terpenoid and steroid, and phenylpropanoid biosynthesis. Colors represent the normalized protein expression ranging from minimum (dark blue) to maximum (dark red). Missing colors indicate the absence of the protein in the respective tissue. Asterisks indicate significantly different protein abundances (t-test, p < 0.05). Further information related to protein accession numbers, abundance ratio, and significance testing is provided in Supplementary Table S5.]

Glutathione-S-transferases (GSTs) and glutathione conjugation have multiple functions in plants, including detoxification processes and oxidative stress alleviation. However, they are also essential for the vacuolar accumulation of phenylpropanoids as well as for transport processes via ABC transporters. Sixteen GSTs were identified in asparagus roots, but the substrate specificity for most of them is unknown. Four GST isoforms were identified exclusively in the outer tissue and four other isoforms were significantly upregulated there. Four GST isoforms had higher abundance in the inner tissue, with two of them functioning in scavenging reactive oxygen species via ascorbate (gi:1150727243, gi:1150682857). Overall, of the 41 proteins with putative activity as UDP-glycosyltransferases, cytochrome P450 enzymes, or GSTs, 31 had higher abundance in the outer tissue, reflecting the enhanced synthesis, storage, and transport of specialized metabolites.
Discussion
This study demonstrates that the analysis of specific root fractions provides valuable insights into cellular function. Metabolites in plant roots exert a variety of functions, such as fueling primary metabolism and root growth, rhizosphere communication, and plant defense. By combining metabolomics and proteomics, we were able to dissect specific metabolic profiles in the three analyzed fractions, and relate those profiles with protein abundances of spatially resolved biosynthetic pathways.
Most differentially abundant compounds found in our study were annotated as 'lipids and lipid-like molecules' that accumulate mainly in the outer tissue and the exudate of asparagus roots. Plant lipid metabolism generates compounds with functions in surface protection, intra-and extracellular signaling, membrane organization, and environmental adaptation 25,26 . α-Linolenic acid metabolism is an essential pathway in this regard. Underlying biosynthetic proteins were also more abundant in the outer tissue. Together with the increased abundance of proteins involved in vesicle trafficking, this reinforces the role of these cells in synthesizing and secreting lipids and their derivatives.
Purines and pyrimidines were predominantly found in exudates of asparagus roots. It is known that nucleosides and nucleobases can pass plant membranes via several transport proteins 27 , and plant roots can take up and metabolize nucleosides for degradation, or utilize them in more efficient salvaging processes 28 . However, the extracellular nucleotide ATP has been identified as a plant-surface signaling molecule, with functions in stress and wounding responses of roots [29][30][31] .
The limited lifespan of an asparagus bed and the problematic replanting are in part associated with the accumulation of pathogenic soil-borne microorganisms, such as Fusarium species, and also with the root exudation of autotoxic compounds, including trans-cinnamic acid 32 and caffeic acid 33 . Besides caffeic acid and its derivatives, our study identified several other cinnamic acid derivatives with allelochemical properties accumulated in root exudates (p-coumaric acid, ferulic acid). This could indicate the presence of yet unconsidered autotoxic metabolites relevant for future crop improvement. In general, most differentially abundant compounds belonging to the family of 'phenylpropanoids and polyketides' accumulated in the outer tissue and exudate; hydroxycinnamic acids integrate especially into root surface lipids 34 . Concomitant with the presence of metabolites, proteins involved in their biosynthesis were also found more highly abundant or exclusively expressed in the outer tissue, compared to the inner tissue. In particular, a number of putative UDP-glycosyltransferases, cytochrome P450s, and glutathione-S-transferases, catalyzing the final biosynthetic steps of a wide range of compound classes, were outer tissue-specific, indicating a metabolic flow of intermediates from the cortex to the epidermis. Within the cell, secondary metabolites are stored in vacuoles to avoid self-toxification and unspecific compound modifications. Transport to vacuoles and to the apoplast for rhizodeposition occur either in a Golgi-dependent way via vesicle trafficking 35 or in a Golgi-independent way by specific transporters 36 . Both distribution systems are present in asparagus root outer tissue but their specific roles have not yet been determined.
Saponins usually accumulate in underground tissues of plants and have also been found in root exudates 37 . Saponins are part of the constitutive defense system acting as deterrents, toxins, and digestibility inhibitors 38 . More recently, their role in plant development, especially root growth, root hair morphology, root cap, and root epidermis formation has been described 37 . Metabolite analysis resulted in the annotation as saponins of 42 compounds in ESI+ mode and four compounds in ESI− mode. Most saponins, including protodioscin, were significantly enriched in the outer tissue, while other saponins were more abundant in the exudate or inner tissue. The spatial localization of protodioscin was validated by MALDI-MSI. Protodioscin has multiple medicinal properties 39 . In Asparagus cochinchinensis and Asparagus racemosus, protodioscin accumulated in all root tissues, but was highest in the epidermis of dried storage roots 16 . A similar epidermis specificity was shown for saponins in Panax roots 40 and avenacins in oat 41 . Despite the clear spatial separation of the metabolite in root tissues, the same degree of separation was not observed for the biosynthetic pathway, as investigated by proteome analysis. Primary enzymes for protodioscin synthesis are cycloartenol synthase (gi: 1150672824) and obtusifoliol 14-alpha demethylase (gi: 1150669044), both having higher abundance in the inner tissue. Neither glucosylation through sterol 3-betaglucosyltransferase UGT80A2 (gi:1150698245) nor deglucosylation via a furostanol glycoside 26-O-beta-glucosidase (gi: 1150670467) demonstrated tissue specificity, indicating that steroidal saponin synthesis occurs in both tissues and metabolites might be directed to the outer tissue by vesicle transport. Uncompleted biosynthesis of triterpenoid avenacins in oat disrupts membrane trafficking and causes reduced root growth and root hair deficient phenotypes 41 , but comparable insights into steroidal saponin sequestration are lacking.
Numerous studies have investigated proteome responses of entire and developing plant roots towards biotic or abiotic stresses but tissue and cell type-specific investigations are scarce. Root hair cells have become a model to study single-cell proteomes due to the relative simplicity of their preparation and separation from the epidermis [42][43][44] . In contrast to this and probably due to the relatively low amount of cells required, transcriptome analyses of specific root tissue and cell types have been applied to a greater extent, shedding light on cell differentiation and functioning 45 . We demonstrate in our study that tissue type-specific proteome analyses are particularly useful for studying the molecular mechanisms of processes, which have varied effects on different layers of root cells, similar to the studies on microdissected root tissues of tomato 46 . However, taking into account the broad range of compound classes released from asparagus roots, the number of potential plasma membrane transporters identified in our study was relatively low. Subcellular proteome analysis of enriched plasma membranes from root epidermis would reveal numerous new candidates for the rhizodeposition process 47 , improving knowledge on import and export activities at the plasma membrane. Further, the present proteome study does not differentiate between the epidermis and root hairs, the latter representing tubular extensions of epidermal cells that largely account for import and export processes in the root. Thus, epidermis cells and root hairs should be analyzed separately, as has been demonstrated for Arabidopsis roots 48 .
Plant material and growth conditions
Plants were grown as previously described 49 . Briefly, seeds of Asparagus officinalis L. 'Backlim' were sown in trays containing a 1:2 mixture of sand and standardized plant growth substrate (Fruhstorfer Erde type P, Germany) and cultivated at 25°C in the dark until the first stem developed. Plantlets were then exposed to 25/20°C, 75/85% relative humidity and a 12/12 h day-night cycle with a light intensity of ca 400 µmol m⁻² s⁻¹ and watered as required. After the development of the third stem, single plants were transferred to pots containing a 1:1 mixture of sand and standardized plant growth substrate and cultivated at 23/18°C and 75/85% relative humidity with a 16/8 h photoperiod (ca 400 µmol m⁻² s⁻¹). The experiment was performed in triplicate.
Collection of root exudates and separation of root tissues
Root exudates were collected from five plants per experiment (n = 15) using a protocol modified from Xu et al. 50 . The cultivation substrate was carefully removed and the plants transferred to glass beakers filled with distilled water. After 1 h, plants were transferred to fresh glass beakers filled with double-distilled water (ca 1 L) and exudates were collected for 3 h. During this time, the water in the beakers was aerated. Exudates were filtered through a 0.22 µm mixed cellulose ester membrane (Carl Roth GmbH, Germany) to remove cellular debris and external microorganisms. Exudates were freeze-dried and subjected to metabolite analysis. After the collection of exudates, the roots were harvested and weighed fresh to determine the exudate-root biomass ratio. A control ('blank') was carried out without roots.
For the analysis of metabolites and proteins in root epidermis/exodermis (outer tissue) and root cortex/vasculature (inner tissue), tissues were separated using forceps. The quality of preparation was assessed visually (Fig. S1) and microscopically. For metabolite analysis, three plants per experiment were harvested and analyzed (n = 9). For proteome analysis, three plants per experiment were harvested and the material was pooled to give one sample per experiment (n = 3).
For root tissues, between 50 and 150 mg were blended with 500 µL 80% methanol (v/v) in a pre-cooled homogenizer for 2 × 45 s at 6500 Hz. The debris was sedimented in a tabletop centrifuge at maximum speed and room temperature for 15 min. The pellet was re-extracted with another 500 µL methanol (80% v/v) and the supernatants combined. The sample was taken to dryness at 30°C in a concentrator, redissolved in 500 µL methanol (80% v/v with internal standards) per 100 mg starting material using 5 min in an ultrasonic bath, 15 min shaking and another 5 min in the ultrasonic bath. All samples for LC-MS analysis were suspended in HPLC mobile phase A (water (Chromasolv LC-MS ultra, Honeywell/Riedel-de Haën) with 0.1% v/v formic acid; LC-MS HiPerSolv CHROMANORM, VWR, Germany; 80:20 sample:solvent), incubated overnight at −20°C, centrifuged for 10 min at maximum speed and filled into LC vials.
For the acquisition of CID (collision-induced dissociation) mass spectra, the same parameters as above were used with additional settings for data-dependent acquisition (AutoMSMS): mode, CID; intensity threshold, 600; number of precursors, 3; precursor background subtraction, on; active exclusion, on after 3 spectra, release after 1 min; smart exclusion, on, 5×; isolation and fragmentation settings, size- and charge-dependent; width, 3-15 m/z; collision energy, 10-70 eV; charge states included: 1+, 2+, 3+.
LC-MS and MS/MS data were processed with MetaboScape 4.0 (Bruker Daltonik) using Bruker's T-ReX 3D algorithm with the following settings: intensity threshold, 1500 counts; minimum peak length, 7 spectra; feature signal = intensity; mass recalibration, auto-detect. Recursive feature extraction: minimum peak length (recursive), 3 spectra; minimum number of features for recursive extraction, 6 of 213. Bucket filter: presence of features in a minimum number of analyses, 6 of 213.
Annotation of compounds was based on (1) an in-house library of analytical standards and known plant metabolites according to mass, retention time and spectrum; (2) known metabolites from asparagus 12,16 and the KNApSAcK database 52 , considering mass and spectral similarity for compound class; (3) spectral similarity to the NIST17 and WEIZMASS databases 53 , the Sumner Spectral library (Bruker Daltonik), MoNA (https://mona.fiehnlab.ucdavis.edu/), GNPS (https://gnps.ucsd.edu/), ReSpect 54 and an in-house database via the spectral library search function of MetaboScape; (4) similarity to already annotated compounds from the data set 55 .
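Steps (3) and (4) above rely on spectral similarity scoring, which in practice usually reduces to a cosine comparison of binned MS/MS spectra. The sketch below illustrates that idea; it is a generic illustration rather than MetaboScape's internal algorithm, and the bin width, m/z range, and example peak lists are assumed values.

```python
import numpy as np

def cosine_score(spec_a, spec_b, bin_width=0.1, mz_max=1500.0):
    """Cosine similarity between two MS/MS spectra given as (m/z, intensity) lists.

    Peaks are binned onto a common m/z grid so the spectra become comparable
    vectors; bin_width and mz_max are assumed illustration values.
    """
    n_bins = int(mz_max / bin_width)
    vec_a, vec_b = np.zeros(n_bins), np.zeros(n_bins)
    for mz, intensity in spec_a:
        vec_a[int(mz / bin_width)] += intensity
    for mz, intensity in spec_b:
        vec_b[int(mz / bin_width)] += intensity
    denom = np.linalg.norm(vec_a) * np.linalg.norm(vec_b)
    return float(vec_a @ vec_b / denom) if denom else 0.0

# Example: a measured fragment spectrum against a library entry (made-up peaks).
measured = [(85.03, 120.0), (127.04, 300.0), (271.06, 950.0)]
library = [(85.03, 100.0), (127.05, 280.0), (271.06, 1000.0)]
print(f"cosine score = {cosine_score(measured, library):.3f}")  # close to 1 = likely match
```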
PCAs and Student's t-tests were calculated with MetaboScape 4.0 (Bruker Daltonik). For t-tests, zeros were replaced with the smallest intensity value (20) in the data set. Venn diagrams were created using only compounds that were detected in at least two of the three replicate samples from the specific root fractions 56 . Box plots and the non-parametric tests shown within were created with SigmaPlot 14.0 (Systat Software, Germany).
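As a minimal sketch of the zero replacement, per-compound t-testing, and replicate-presence filtering described above (the data are simulated and the replicate grouping is assumed for illustration; MetaboScape performs these steps internally):

```python
import numpy as np
from scipy import stats

# Simulated feature table: rows = compounds, columns = replicate intensities.
rng = np.random.default_rng(0)
outer = rng.lognormal(8, 1, size=(100, 9))   # outer tissue, 9 replicates (3 experiments x 3 plants)
inner = rng.lognormal(8, 1, size=(100, 9))   # inner tissue, 9 replicates

# Zero replacement: non-detects get the smallest intensity in the data set,
# mirroring the handling described in the text.
outer[outer < 500] = 0                       # fabricate some non-detects
floor = min(outer[outer > 0].min(), inner[inner > 0].min())
outer[outer == 0] = floor

# Per-compound Student's t-test between the two tissues.
t_vals, p_vals = stats.ttest_ind(outer, inner, axis=1)

# Presence filter for the Venn diagrams: detected (above the floor) in at
# least two of the three replicates of every experiment (grouping assumed).
detected = (outer.reshape(100, 3, 3) > floor).sum(axis=2) >= 2
print(f"{(p_vals < 0.05).sum()} compounds differ at p < 0.05; "
      f"{detected.all(axis=1).sum()} pass the presence filter")
```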
MALDI mass spectrometry imaging of root cross sections
Roots from two plants per experiment were harvested for MALDI mass spectrometry analysis (n = 6), snap frozen in liquid nitrogen, and stored at −80°C until further analysis. At least two tissue sections per plant were analyzed by MALDI-MSI.
MALDI-MSI was performed as described earlier 57 . In brief, intact roots were transferred to a cryostat (CM3050S, Leica, Germany) with the chamber cooled to −20°C and the sample holder to −18°C, and cut at a thickness of 16 µm. These sections were immediately thaw-mounted onto a conductive indium tin oxide-coated glass slide (ITO slide, Bruker Daltonik) and held in a desiccator for 15 min. The matrix was α-cyano-4-hydroxycinnamic acid (CHCA, Bruker Daltonik) diluted to 7 g L⁻¹ in 50% acetonitrile/0.2% trifluoroacetic acid (Sigma-Aldrich), or 2,5-dihydroxybenzoic acid (DHB, Sigma-Aldrich) diluted to 30 g L⁻¹ in 50% methanol/0.2% trifluoroacetic acid (Sigma-Aldrich), and was applied to the slide surface with an ImagePrep device (Bruker Daltonik) using the pre-set method for the respective matrix.
MSI experiments were performed using an ultrafleXtreme MALDI-TOF instrument (Bruker Daltonik) operated in negative ionization mode for the CHCA matrix or positive ionization mode for the DHB matrix. The laser raster was set to 40 µm and the m/z range was 200-1200. For measurements performed in negative mode, the method was calibrated with a mix of authentic standards (1 mM each). ROC curve analysis was performed on tissue sections from two plants per replicate (n = 6) using SCiLS Lab software (version 2019c Pro, Bruker Daltonik). Tandem mass spectrometry measurements (LIFT mode, ultrafleXtreme MALDI-TOF instrument) of selected MSI m/z values were performed on methanolic extracts of outer and inner tissue and compared with standards (protodioscin, riboflavin-5-phosphate, raffinose; Sigma-Aldrich).
Protein extraction and label-free protein quantification of root tissues
Proteins in the outer and inner root tissues were extracted using RapiGest SF (Waters), then reduced and digested following the method of Kaspar et al. 58 . Desalting of peptides was done using Peptide Desalting Spin Columns (Pierce, Thermo Scientific, United States) following the manufacturer's instructions. Peptides were resuspended in 2% acetonitrile/0.1% trifluoroacetic acid to a concentration of 100 ng µL⁻¹, and 6 µL of protein digest were analyzed using nanoflow liquid chromatography on a Dionex UltiMate 3000 system (Thermo Scientific) coupled to a Q Exactive Plus mass spectrometer (Thermo Scientific) as described previously 59 , with the following modifications. Peptides were separated using an Acclaim PepMap 100 C18 analytical column (75 µm × 25 cm, 2 µm, 100 Å, Thermo Scientific) and eluted via a 100 min gradient from 2 to 44% solvent B (80% acetonitrile). Each sample was measured in triplicate. The raw files were processed using Proteome Discoverer 2.4 and the Sequest HT engine (Thermo Scientific), searching the NCBI A. officinalis Annotation Release 100 (as released on 1 March 2017). Precursor ion mass tolerance was set to 10 ppm and fragment ion mass tolerance to 0.02 Da. False discovery rate (FDR) target values for the decoy database search of peptides and proteins were set to 0.01 (strict level for highly confident identifications). Protein abundance quantification was done using the Top N average method (N = 3). Differential protein expression was validated using a t-test (p < 0.05, Benjamini-Hochberg corrected for FDR) after an analysis of variance (ANOVA) test, implemented in the Proteome Discoverer software (Thermo Scientific). The result lists were filtered, and only proteins that were identified by at least two peptides, or by one peptide representing at least 10% protein coverage, were kept for further investigation.
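The peptide-based retention rule is easy to express programmatically. The following sketch uses hypothetical field names rather than Proteome Discoverer's actual export schema:

```python
from dataclasses import dataclass

@dataclass
class Protein:
    accession: str       # hypothetical identifier
    n_peptides: int      # number of distinct identified peptides
    coverage_pct: float  # protein sequence coverage in percent
    p_adj: float         # Benjamini-Hochberg adjusted t-test p-value

def keep(prot: Protein) -> bool:
    """Retain proteins identified by at least two peptides, or by one
    peptide representing at least 10% protein coverage."""
    return prot.n_peptides >= 2 or (prot.n_peptides == 1 and prot.coverage_pct >= 10.0)

def is_differential(prot: Protein, alpha: float = 0.05) -> bool:
    """Significance call on the FDR-corrected p-value (p < 0.05)."""
    return prot.p_adj < alpha

results = [
    Protein("XP_000001", 5, 34.2, 0.003),
    Protein("XP_000002", 1, 6.8, 0.020),   # dropped: one peptide, <10% coverage
    Protein("XP_000003", 1, 12.5, 0.300),  # kept, but not significant
]
kept = [p for p in results if keep(p)]
print([p.accession for p in kept if is_differential(p)])  # -> ['XP_000001']
```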
Functional annotation of proteins was performed using BlastKOALA and the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway database 60 . Proteome raw data have been deposited at MassIVE (https://massive.ucsd.edu/ProteoSAFe/static/massive.jsp?redirect=auth) under the dataset ID MSV000086166.
"year": 2021,
"sha1": "e1214c360e209cd7822c602af0b916adf23ce310",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41438-021-00510-5.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "78891517259904df094f09250e50bb0fe56d63f8",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Environmental and cortisol-mediated control of Ca2+ uptake in tilapia (Oreochromis mossambicus)
Ca2+ is a vital element for many physiological processes in vertebrates, including teleosts, which live in aquatic environments and acquire Ca2+ from their surroundings. Ionocytes within the adult gills or larval skin are critical sites for transcellular Ca2+ uptake in teleosts. The ionocytes of zebrafish were found to contain the transcellular Ca2+ transporters epithelial Ca2+ channel (ECaC), plasma membrane Ca2+-ATPase 2 (PMCA2), and Na+/Ca2+ exchanger 1b (NCX1b), providing information about the molecular mechanism of transcellular Ca2+ transport mediated by ionocytes in fish. However, more evidence is required to establish whether or not a similar mechanism of transcellular Ca2+ transport also exists in other teleosts. In the present study, ecac, pmca2, and ncx1 were found to be expressed in the branchial ionocytes of tilapia, thereby providing further support for the mechanism of transcellular Ca2+ transport through ionocytes previously proposed for zebrafish. In addition, we also reveal that low Ca2+ water treatment of tilapia stimulates Ca2+ uptake and expression of ecac and cyp11b (the latter encodes a cortisol-synthesis enzyme). Treatment of tilapia with exogenous cortisol (20 mg/l) enhanced both Ca2+ influx and ecac expression. Therefore, increased cyp11b expression is suggested to enhance Ca2+ uptake capacity in tilapia exposed to low Ca2+ water. Furthermore, the application of cortisol receptor antagonists revealed that cortisol may regulate Ca2+ uptake through the glucocorticoid and/or mineralocorticoid receptor (GR and/or MR) in tilapia. Taken together, the data suggest that cortisol may activate GR and/or MR to execute its hypercalcemic action by stimulating ecac expression in tilapia.
Introduction
The maintenance of Ca2+ homeostasis is important because Ca2+ is involved in many physiological activities, such as muscle contraction, neuron excitation, and bone formation in vertebrates (Wendelaar Bonga and Pang 1991). Fish, which live in aquatic environments with inconsistent Ca2+ levels, have to maintain their body fluid Ca2+ homeostasis through an efficient Ca2+ regulation mechanism. The major organ for ionoregulation in fish is the gills, which are responsible for over 95% of Ca2+ uptake from water in freshwater-adapted species (Flik et al. 1995). The skin serves as the main organ for ionoregulation at early developmental stages of fish, before the gills are fully developed (Hwang et al. 1994). Ionocytes in the gills or larval skin are vital sites for ion uptake in fish. In an early study in trout, branchial Ca2+ uptake was demonstrated to be active and transcellular (Perry and Flik 1988). The understanding of the Ca2+ absorption mechanism in fish gills or skin progressed swiftly after the discovery of the epithelial Ca2+ channel (ECaC) (Qiu and Hogstrand 2004; Pan et al. 2005; Shahsavarani and Perry 2006); ECaC mRNA and/or protein expression was specifically identified in the gill and/or skin ionocytes of zebrafish, trout, and medaka (Pan et al. 2005; Shahsavarani and Perry 2006; Liao et al. 2007; Hsu et al. 2014). Furthermore, Liao et al. (2007) revealed that ECaC, plasma membrane Ca2+-ATPase 2 (PMCA2), and Na+/Ca2+ exchanger 1b (NCX1b) are co-expressed in the same group of ionocytes in zebrafish. Based on the above studies, the following model of transcellular epithelial Ca2+ transport in the gills/skin was proposed: external Ca2+ is absorbed through apical ECaC, and the absorbed Ca2+ is then extruded into the plasma by basolateral PMCA and NCX.
The Mozambique tilapia (Oreochromis mossambicus), a euryhaline teleost, is capable of surviving at up to approximately four times the salt content of seawater (Stickney 1986); this organism was previously used to investigate the correlation between the morphology of gill ionocytes and declining environmental Ca2+ (Chang et al. 2001). The regulation of Ca2+ balance in developing larvae is dependent upon external Ca2+ levels. Upon acute exposure to low Ca2+, both Ca2+ influx and net uptake were increased in newly hatched larvae (Hwang et al. 1996; Chou et al. 2002). When small-bodied or growing female tilapia were transferred to a low-Ca2+ environment, significant upregulation of Ca2+ influx was observed (Flik et al. 1986; Chang et al. 2001). Moreover, orthologues of zebrafish ECaC, PMCA2, and NCX1b have also been identified in tilapia (Pan et al. 2005; Liao et al. 2007). Based on findings in zebrafish (Liao et al. 2007; Hwang and Chou 2013), it may be assumed that apical ECaC and basolateral PMCA2 and NCX1 in ionocytes are responsible for transcellular epithelial Ca2+ transport in tilapia. However, there are no published accounts of comprehensive studies of the role of these Ca2+ transporters (ECaC, PMCA2, and NCX1) in Ca2+ regulation, or molecular evidence of their expression in ionocytes, in any fish species other than zebrafish.
Previous studies indicated that plasma cortisol levels are upregulated in trout exposed to a low ambient Ca2+ level (Perry and Wood 1985; Flik and Perry 1989). Lin et al. (2011) revealed that low Ca2+ water treatment stimulated expression of cyp11b (encoding an enzyme involved in the final step of cortisol synthesis) in zebrafish. Due to the hypercalcemic action of cortisol (Perry and Wood 1985; Flik and Perry 1989; Shahsavarani and Perry 2006; Lin et al. 2011), these responses were suggested to assist the maintenance of body fluid Ca2+ homeostasis in low Ca2+ environments. However, few studies have further explored the hypercalcemic effect of cortisol on transcellular epithelial Ca2+ transporters in fish. Cortisol treatment was shown to stimulate ecac mRNA expression in the gills of trout (Shahsavarani and Perry 2006). Moreover, ecac expression was found to be enhanced by cortisol treatment, while expression of both pmca2 and ncx1b was unaffected in zebrafish embryos, suggesting that ECaC is a regulatory target of cortisol. However, it is unclear whether ECaC is the main target of cortisol signaling in terms of transcellular Ca2+ transport in teleosts other than zebrafish and trout. Hormones exert their activity by binding specific receptor(s). In cell lines transfected with teleost glucocorticoid receptor (GR) or mineralocorticoid receptor (MR), cortisol treatment activated the transcription of a plasmid containing a glucocorticoid response element (GRE) (Trapp and Holsboer 1996; Colombe et al. 2000; Bury et al. 2003; Greenwood et al. 2003; Sturm et al. 2005). In addition, cortisol treatment affected the mRNA expression of different ion transporters through GR and/or MR in Atlantic salmon (Kiilerich et al. 2007). GR and MR mRNA signals were detected in the branchial ionocytes of tilapia (Aruna et al. 2012). Thus, cortisol may exert its hypercalcemic function through GR and/or MR in fish. Cortisol acts via GR, but not MR, to stimulate Ca2+ uptake and ecac expression in zebrafish, but it is unclear whether this regulation also occurs in other teleosts.
The purpose of the present study is to enhance our comprehensive understanding of fish Ca 2+ transport and cortisol control in terms of body fluid Ca 2+ homeostasis. We initially hypothesized that (1) ECaC, PMCA2, and NCX1 are responsible for transcellular epithelial Ca 2+ transport in tilapia, and (2) cortisol acts via GR and/or MR to regulate Ca 2+ uptake by modulating expression of these Ca 2+ transporters in tilapia. To test these hypotheses, we designed experiments to answer the following specific questions: (1) are ecac, pmca2, and/or ncx1 expressed in ionocytes in tilapia? (2) Does the external Ca 2+ level regulate ecac, pmca2, and ncx1 expression in tilapia? (3) Does cortisol modulate Ca 2+ uptake and the mRNA expression of ecac, pmca2, and ncx1 in tilapia? And finally, (4) does cortisol regulate Ca 2+ uptake through the GR and/or MR?
Animals
Tilapia (Oreochromis mossambicus), 1-50 g in body weight, were taken from stocks at the Institute of Cellular and Organismic Biology, Academia Sinica, and kept in freshwater (local tap water; [Ca2+], 0.20 mM; [Mg2+], 0.16 mM; [Na+], 0.5 mM; [K+], 0.3 mM; [Cl−], 0.45 mM) at 27 °C under a 14 h:10 h light:dark photoperiod. Tilapia larvae were acquired as follows: fertilized eggs were collected from the mouths of female tilapia and incubated in aerated freshwater. Fertilized eggs that hatched at the same time were used in the experiments. All experiments were conducted on yolk-sac larvae, and no feeding occurred. The incubation water was changed daily to maintain water quality. For sampling, fish (adults and hatched embryos) were anesthetized with buffered MS-222 (Sigma-Aldrich, USA) and then dissected. Sampling was performed in accordance with the guidelines of the Academia Sinica Institutional Animal Care and Utilization Committee (Approval No.: RFiZOOHP2002086).
Acclimation experiment
Artificial fresh waters with high (2 mM) and low (0.02 mM) Ca2+ levels were prepared with double-deionized water (model Milli-RO60; Millipore, Billerica, MA, USA) supplemented with adequate CaSO4·2H2O, MgSO4·7H2O, NaCl, K2HPO4, and KH2PO4. The Ca2+ concentrations of the high- and low-Ca2+ media were 2 and 0.02 mM, respectively, but all other ion concentrations were the same as those in local tap water ([Na+], 0.5 mM; [Mg2+], 0.16 mM; and [K+], 0.3 mM). Variations in ion concentrations were maintained within 10% of the predicted values by monitoring with an atomic absorption spectrophotometer (Hitachi Z-8000, Tokyo, Japan). For acclimation, hatched embryos and adults were incubated in high- and low-Ca2+ media for 3 days and 2 weeks, respectively. Fish were sampled for assays at the end of the acclimation period.
Cortisol and receptor antagonist incubation
Cortisol dosages were selected with reference to previous studies (Lin et al. 1999, 2015a; Cruz et al. 2013a). Cortisol (hydrocortisone, Sigma-Aldrich, USA) was first prepared as a stock solution in dimethyl sulfoxide (DMSO), and the stock was then diluted to the final working concentrations (0, 10, and 20 mg/l) in local tap water. Hatched tilapia embryos were treated with cortisol media for 3 days and then sampled for subsequent analysis. Incubation media were refreshed every day to maintain consistent cortisol levels. During incubation, neither significant mortality nor abnormal behavior was observed. Doses of GR and MR antagonists were selected with reference to a previous study (Kiilerich et al. 2007). In this study, 10 µg/ml of RU486 (a GR antagonist, Sigma-Aldrich, USA) or spironolactone (an MR antagonist, Sigma-Aldrich, USA) was used, and the medium was changed every day. Although the dosages of cortisol and antagonists used in the present study are higher than in some studies (Pippal et al. 2011; Kumai et al. 2012), they have been proven to work in cultured gills and fish larvae in previous studies (Lin et al. 1999, 2015a; Kiilerich et al. 2007; Cruz et al. 2013a). In addition, these dosages did not damage the tilapia larvae.
Preparation of total RNA
After anesthesia with 0.03% MS-222 (Sigma), appropriate amounts of tilapia tissues or embryos were collected. For RNA extraction, the samples were homogenized in 1 ml Trizol reagent (Invitrogen, Carlsbad, CA, USA) and processed according to the manufacturer's protocol. Finally, the quantity and quality of total RNA were assessed from the absorbance at 260 nm and the ratio of the absorbances at 260 and 280 nm, as measured using a NanoDrop ND-2000 (Thermo Scientific, Wilmington, DE, USA).
Reverse transcription-PCR analysis
mRNA was purified from the total RNA extracted from tilapia tissues with a commercial kit (Oligotex, Qiagen, Hilden, Germany). For cDNA synthesis, 0.36 µg of mRNA was reverse transcribed in a final volume of 20 µl containing 0.5 mM dNTPs, 2.5 µM oligo(dT)18, 5 mM dithiothreitol, and 200 units PowerScript reverse transcriptase (Clontech, CA, USA) for 1.5 h at 42 °C, followed by a 15 min incubation at 70 °C. For PCR amplification, 2 µl cDNA was used as template in a 50 µl final reaction volume containing 0.25 mM dNTPs, 2.5 units EX-Taq polymerase (Takara, Shiga, Japan), and 0.2 µM of each primer. The GenBank accession numbers of the sequences used for the primer sets were as follows: ecac, GenBank BankIt Submission ID 1884659; pmca2, AAK15034; ncx1, AY283779; gapdh, FN673690.
In situ hybridization
PCR fragments of tilapia ecac, pmca2, and ncx1 were obtained by PCR and inserted into a pGEM-T Easy vector (Promega, WI, USA). After linearization by restriction enzyme digestion, the plasmids were subjected to in vitro transcription with T7 and SP6 RNA polymerase (Roche, Penzberg, Germany) to produce sense and antisense transcripts, respectively. DIG-labeled RNA probes were examined with RNA gels and a dot-blot assay to confirm their quality and concentration.
Excised gills were fixed with 4% paraformaldehyde for 3 h at 4 °C and then washed several times with phosphate buffered saline (PBS). Fixed samples were immersed in PBS containing 30% sucrose overnight, and embedded in OCT compound embedding medium (Sakura, Tokyo, Japan) at −20 °C. Frozen cross-sections of 10 µm were cut with a CM 1900 rapid sectioning cryostat (Leica, Heidelberg, Germany) and attached to poly-L-lysine coated slides (Erie, New Hampshire, USA). After brief washing with PBST, slides were incubated with hybridization buffer (HyB) containing 50% formamide, 5× SSC, and 0.1% Tween-20 for 5 min at 65 °C. Prehybridization was performed for 2 h at 65 °C with HyB+ (hybridization buffer with an additional 500 ng/ml yeast tRNA and 50 µg/ml heparin). For hybridization, samples were incubated with 100 ng RNA probe in 200 µl HyB+ at 65 °C overnight. Next, the slides were washed at 65 °C for 10 min in 75% HyB and 25% 2× saline sodium citrate (SSC), 10 min in 50% HyB and 50% 2× SSC, 10 min in 25% HyB and 75% 2× SSC, 10 min in 2× SSC, and finally 30 min in 0.2× SSC at 70 °C (this final wash was repeated twice). Further washes were performed at room temperature for 5 min in 75% 0.2× SSC and 25% phosphate buffered saline with 0.1% Triton X-100 (PBST), 5 min in 50% 0.2× SSC and 50% PBST, 5 min in 25% 0.2× SSC and 75% PBST, and 5 min in PBST. After the series of washes, slides were incubated for 2 h in blocking solution containing 5% sheep serum and 2 mg/ml bovine serum albumin (BSA) in PBST, and then incubated with a 1:2500 dilution of antibody (Roche, Basel, Switzerland) in blocking solution for another 2 h at room temperature. Finally, sections were washed with PBST plus blocking reagent and transferred to staining buffer. The staining reaction was performed with 5-bromo-4-chloro-3-indolyl phosphate (BCIP) and nitroblue tetrazolium chloride (NBT) in staining buffer until the signal was strong enough for analysis.
Immunohistochemistry
Sections were washed several times with PBST after in situ hybridization. Blocking was performed in 3 % BSA at room temperature for 2 h, and sections were then incubated with α5 mouse anti-Na + /K + -ATPase (2.5 µg/ml in PBS) at 4 °C overnight. Samples were washed in PBS for 30 min twice, and then incubated with goat anti-mouse IgG conjugated with FITC (7.5 µg/ml in PBS; Jackson Immunoresearch Laboratories, West Grove, PA, USA) for 1 h at room temperature. Images were acquired with a Leica TCS-NT confocal laser scanning microscope (Leica Lasertechnik, Heidelberg, Germany).
Measurement of Ca2+ influx
Measurement of Ca2+ influx was performed as described previously (Chen et al. 2003). High- and low-Ca2+ freshwater-acclimated tilapia were transferred to tracer media containing 45Ca2+. The plot of radioactivity against incubation time was linear within 8 h. Samples (200 µl) were collected from the tracer media at 0.5 and 2.5 h after transfer. Counting solution (Ultima Gold, Packard, USA) was added to the samples, and the radioactivities were counted with a liquid scintillation β-counter (LS6500, Beckman, Fullerton, CA). Ca2+ influx rates were calculated using the following formula:

influx = (Qi × Vi − Qf × Vf) / ([(SAi + SAf)/2] × t × W)

where Qi and Qf (cpm ml−1) refer to the initial (0.5 h) and final (2.5 h) radioactivities in the tracer media, Vi and Vf (ml) refer to the initial and final volumes of the tracer media, SAi and SAf (cpm mmol−1) are the initial and final specific activities, t (2 h) is the incubation time, and W (g) is the fish body weight.
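A small helper makes the bookkeeping of the influx formula explicit. Averaging the initial and final specific activities is our reading of the reconstructed formula above, and the numbers in the example are illustrative rather than measured values from this study:

```python
def ca_influx(q_i, q_f, v_i, v_f, sa_i, sa_f, t, w):
    """Ca2+ influx rate from tracer disappearance (mmol h^-1 g^-1).

    q_i, q_f : radioactivity in the tracer medium (cpm/ml) at 0.5 h and 2.5 h
    v_i, v_f : medium volumes (ml) at the two sampling times
    sa_i, sa_f : specific activities (cpm/mmol); their mean is used here
    t : incubation time (h); w : fish body weight (g)
    """
    tracer_taken_up = q_i * v_i - q_f * v_f   # total cpm removed from the medium
    sa_mean = (sa_i + sa_f) / 2.0             # assumed: average specific activity
    return tracer_taken_up / (sa_mean * t * w)

# Illustrative numbers only:
rate = ca_influx(5.0e4, 4.6e4, 1000, 999, 2.0e9, 1.9e9, 2.0, 0.05)
print(f"influx = {rate:.3e} mmol h^-1 g^-1")
```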
Statistical analysis
Group datasets were confirmed to be normally distributed by the Anderson-Darling normality test (p > 0.05). Data are presented as the mean ± SD and were analyzed by one-way analysis of variance (ANOVA) and Student's t test.
Expression of mRNA encoding Ca2+ transporters in different tissues
The mRNA expression patterns of the tilapia genes encoding ECaC, PMCA2, and NCX1 were first evaluated by RT-PCR (Fig. 1). In adult fish, expression of ecac, pmca2, and ncx1 was ubiquitous among all tissues examined, including brain, heart, gills, intestine, liver, spleen, testis, and kidneys.
Localization of ecac, pmca2, and ncx1 in the gills
Tilapia gills were removed and processed to prepare cryosections. Sections were subjected to in situ hybridization against tilapia ecac, pmca2, or ncx1 mRNA, and then double-stained with the Na+/K+-ATPase α5 antibody (an ionocyte marker), revealing that ecac, pmca2, and ncx1 are expressed in the ionocytes of gill filaments (Fig. 2). All ecac signals were co-localized with ionocytes (Fig. 2g). However, only some pmca2 and ncx1 signals co-localized with ionocytes (Fig. 2h, i).
Ca2+ influx and expression of branchial Ca2+ transporters in adult tilapia acclimated to low or high Ca2+ water
Tilapia were treated with low (0.02 mM) or high (2.0 mM) Ca2+ water for 2 weeks prior to sampling for the investigation of Ca2+ influx and mRNA expression. Ca2+ influx was higher in tilapia in low than in high Ca2+ water (Fig. 3a). Furthermore, branchial ecac expression was approximately threefold higher in tilapia acclimated to low Ca2+ than in tilapia acclimated to high Ca2+ water. However, expression of branchial pmca2 and ncx1 did not differ between treatments (Fig. 3b).
Ca2+ influx and related gene expression in tilapia larvae treated with low or high Ca2+
The effects of different Ca2+ levels on tilapia larvae were examined by acclimating newly hatched tilapia embryos to low (0.02 mM) or high (2.0 mM) Ca2+ water for 3 days. After 3 days, larvae were sampled for analysis of Ca2+ influx and gene expression. Similar to adults, tilapia larvae acclimated to low Ca2+ water also exhibited significantly upregulated Ca2+ influx and ecac expression (Fig. 4a, b), while the expression of pmca2 and ncx1 was not modulated by the external Ca2+ level (Fig. 4b). The role of cortisol in Ca2+ uptake in tilapia was further clarified by studying the expression of cortisol-related genes in tilapia larvae. The expression of cyp11b was significantly higher in larvae acclimated to low than to high Ca2+ medium, but the expression of gr and mr was not modulated (Fig. 4b).
The effect of exogenous cortisol on Ca2+ influx and Ca2+ transporter expression in tilapia larvae
As described above, expression of cyp11b, which encodes a cortisol synthesis enzyme, was enhanced in low Ca2+ water. The effect of cortisol on Ca2+ uptake in tilapia was further examined by treating newly hatched tilapia embryos with exogenous cortisol (0, 10, or 20 mg/l cortisol in local tap water) for 3 days. After the treatment period, larvae were sampled to analyze Ca2+ influx and transporter expression. Treatment with exogenous cortisol (10 and 20 mg/l) resulted in significant stimulation of Ca2+ influx and ecac expression, but did not modulate the expression of pmca2 or ncx1 (Fig. 5). Application of exogenous cortisol (20 mg/l cortisol in high Ca2+ water) also clearly enhanced ecac expression in tilapia larvae (Fig. 6). Furthermore, the enhanced level of ecac transcription was similar to that observed in tilapia larvae acclimated to low Ca2+ (Fig. 5b).
Effects of GR or MR antagonist on ecac expression in tilapia larvae treated with exogenous cortisol
The regulatory mechanism of cortisol on Ca2+ uptake in tilapia was clarified by exposing cortisol-treated tilapia larvae to 10 µg/ml RU486 or spironolactone (GR and MR antagonists, respectively). Treatment with either the GR or the MR antagonist dramatically decreased the stimulatory effect of exogenous cortisol on ecac transcription (Fig. 7).
Discussion
In the present study, expression of ecac, pmca2, and ncx1 was detected in several tissues in tilapia (Fig. 1), consistent with observations in zebrafish (Pan et al. 2005; Liao et al. 2007). Universal expression of these Ca2+ transporters in tilapia may be related to the maintenance of intracellular Ca2+ homeostasis, similar to the reported situation in mammals (Lee et al. 1994; Guerini 1998). Ionocytes in the adult gills or the larval skin are vital sites for Ca2+ uptake in fish (Flik et al. 1995; Hwang et al. 2011). Liao et al. (2007) first identified mRNA signals of ecac, pmca2, and ncx1b in ionocytes of zebrafish, thereby providing comprehensive molecular evidence for the model of transcellular epithelial Ca2+ transport in ionocytes. The present study also identified ecac, pmca2, and ncx1 mRNA signals in ionocytes of tilapia (Fig. 2), in agreement with the findings of Liao et al. (2007). In zebrafish, there are at least four subtypes of ionocytes, which are specifically responsible for the regulation of (1) Cl−, (2) Na+, (3) Ca2+, and (4) K+ and acid-base balance, respectively (Hwang and Chou 2013). There are also four subtypes of ionocytes in tilapia (Inokuchi et al. 2009; Hwang et al. 2011). Herein, we observed that some ionocytes express ecac mRNA (Fig. 2c): these ecac-expressing ionocytes may belong to a previously identified subtype or a new subtype. However, technical limitations prevented us from further classifying the ecac-expressing cells in the present study. This issue awaits further exploration in the future.
In the present study, normalized branchial ecac expression was observed to be higher (at least ~300-fold) than that of pmca2 and ncx1 in tilapia (Fig. 3b). In zebrafish gills, the normalized mRNA expression of ecac is also much higher than that of ncx1b and pmca2 (Liao et al. 2007). In fact, several studies have shown that ECaC plays a dominant role in fish Ca2+ regulation. In zebrafish, a loss-of-function mutation of the ECaC gene resulted in a significant decrease of Ca2+ content and defective bone structure (Vanoevelen et al. 2011). Treatment with low Ca2+ water stimulated both Ca2+ absorption and ecac expression in zebrafish, but did not affect the expression of ncx1b and pmca2 (Pan et al. 2005; Lin et al. 2011, 2012, 2014; Lafont et al. 2011). Similarly, low Ca2+ medium treatment also stimulated branchial protein and mRNA expression of ECaC in trout, and intra-arterial infusion with CaCl2 was found to suppress gill ecac expression in trout (Shahsavarani and Perry 2006). Moreover, low Ca2+ water treatment stimulated Ca2+ uptake and ecac expression in both adult and larval tilapia. These results further reinforce the findings of previous studies indicating that modulation of ecac expression is vital for teleosts to cope with environmental Ca2+ challenges.
Fig. 7 Effects of GR and MR antagonists on ecac expression in tilapia larvae treated with cortisol. Expression of mRNA was analyzed by qPCR, and values were normalized to β-actin. Letters (a, b, c) indicate a significant difference (p < 0.05) using Tukey's multiple comparison test following one-way ANOVA. Values are the mean ± SD (n = 6).
Many studies have indicated that hormones and signal transduction pathways may be involved in the adjustment of Ca2+ uptake in fish upon Ca2+ challenge (Evans et al. 2005; Lin et al. 2012, 2014). Cortisol is a hypercalcemic hormone, but there is little molecular evidence for the effects of cortisol on fish Ca2+ uptake (Shahsavarani and Perry 2006; Lin et al. 2011). Here, expression of cyp11b, which encodes the enzyme required for the final step of cortisol synthesis, was found to be significantly upregulated by low Ca2+ medium treatment in tilapia (Fig. 4b). Exogenous cortisol treatment was found to cause upregulation of both Ca2+ influx and ecac expression in tilapia (Fig. 5). Moreover, exogenous cortisol treatment also enhanced ecac expression in tilapia larvae treated with high Ca2+ (Fig. 6). These results indicate that cortisol is a hypercalcemic hormone in tilapia, reinforcing the findings in other species. Lin et al. (2011) revealed that expression of both cyp11b and ecac is stimulated in zebrafish embryos treated with low Ca2+. Exogenous cortisol treatment was previously reported to enhance Ca2+ uptake by increasing ecac expression in zebrafish. In trout, exogenous cortisol treatment was also reported to stimulate branchial Ca2+ uptake or ecac expression (Flik and Perry 1989; Shahsavarani and Perry 2006; Kelly and Wood 2008). Taken together, it appears that the hypercalcemic effects of cortisol are of physiological significance in terms of fish body fluid Ca2+ homeostasis.
Cortisol is the main corticosteroid hormone in fish and may exert its actions through GR and/or MR. Although several studies have addressed cortisol's effects on Ca2+ regulation, only the earlier study by Lin et al. (2011) precisely investigated the role of the cortisol receptor in Ca2+ regulation, in the zebrafish model. Cortisol was demonstrated to increase Ca2+ uptake through GR alone, and protein and mRNA expression of GR was identified in Na+/K+-ATPase-rich cells (i.e., ecac-expressing ionocytes) in zebrafish (Cruz et al. 2013b). Prior to the current study, it was unknown whether GR (or MR) mediated the effects of cortisol on Ca2+ uptake in teleosts other than zebrafish. Here, we exposed cortisol-treated tilapia larvae to either GR or MR antagonists, both of which antagonized the stimulatory effect of exogenous cortisol on ecac expression (Fig. 7); these findings imply that both GR and MR mediate the effect of cortisol on ecac expression in tilapia. A previous study reported that GR and MR mRNA are expressed in the branchial ionocytes of tilapia (Aruna et al. 2012). This result raises the possibility that cortisol may directly regulate ecac expression in tilapia. In the present study, spironolactone was used as an MR antagonist; however, the antagonist properties of spironolactone are disputed across fish species and experimental designs. Spironolactone showed antagonist properties in the gills of freshwater-acclimated killifish and in cultured salmon gills (Scott et al. 2005; Kiilerich et al. 2007), and it appears to also function as an antagonist in the present study. In trout, spironolactone showed antagonist and agonist properties in in vivo and in vitro studies, respectively (Sloman et al. 2001; Sturm et al. 2005). On the other hand, spironolactone acted as an agonist of the zebrafish MR overexpressed in mammalian cell lines (Pippal et al. 2011) and had no effect on the gills of seawater-acclimated killifish (Shaw et al. 2007). To reinforce the present study's findings, it will be necessary to further clarify the action of spironolactone on the tilapia MR in cell line experiments.
In summary, the mRNA expression of three Ca2+ transporters (ECaC, PMCA2, and NCX1) was specifically detected in branchial ionocytes, and exposure to low Ca2+ water resulted in significant stimulation of both Ca2+ influx and ecac expression in tilapia, similar to previous findings in zebrafish (a stenohaline species) (Liao et al. 2007; Lin et al. 2011). One of the underlying mechanisms is probably the GR- and/or MR-mediated hypercalcemic action of cortisol in tilapia (a euryhaline species), in contrast to the GR-mediated mechanism previously reported in zebrafish. From the point of view of comparative physiology, the present study enhances our understanding of the effects of cortisol on fish body fluid Ca2+ homeostasis.
"year": 2016,
"sha1": "c71f48162e247849c35a3b854f3cd9259d6ad9a4",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00360-016-0963-7.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "c71f48162e247849c35a3b854f3cd9259d6ad9a4",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": []
} |
Li-Fi technology-based long-range FSO data transmit system evaluation
Visible light is used by a technology known as Light Fidelity (Li-Fi) to establish wireless internet connections at very high speeds. This article describes a line-of-sight communication system between a transmitter and receiver based on LED technology. Li-Fi transmits data using LED light, which can be faster and more efficient than Wi-Fi. Since light is practically ubiquitous, it can be used for communication as well. Li-Fi is a subset of optical communication, a cutting-edge technology; a Li-Fi device enables wireless intranet communication by emitting visible light. This article presents an in-depth study and analysis of Li-Fi, a novel technology that transmits data at high speed over a wide spectrum using light as the transmission medium.
The term Li-Fi was coined by Harald Haas, a professor of mobile communications at the University of Edinburgh. Data is transmitted over the visible-light portion of the electromagnetic spectrum using visible light communication (VLC), an idea that dates back to the 1880s. From January 2010 to January 2012, funding was provided for the D-Light project at the Edinburgh Institute for Digital Communications [2]. To drive this technology, Haas promoted it in a 2011 TED Global talk, which benefited the emerging market. PureLiFi is an OEM company offering market-ready Li-Fi system products designed to integrate with existing LED lighting systems.
In October 2011, businesses and industry associations established the Li-Fi Consortium to significantly enhance optical communications networks and to get around radio-spectrum restrictions. Li-Fi, which is covered by the IEEE 802.15.7 standard committee, is not the same as the general-purpose VLC devices sold by some businesses. In 2012, VLC technology was demonstrated using Li-Fi. In August 2013, data rates of over 1.6 Gb/s were demonstrated with a single-color LED. According to a press release from September 2013, Li-Fi, like VLC systems in general, does not have to meet strict line-of-sight requirements. In October 2013, firms in China stated that they were developing Li-Fi development kits. The Li-Fi wireless network BeamCaster was unveiled in April 2014 by the Russian company Stins Coman. While future speeds of up to 5 Gb/s are anticipated, the current module transmits data at a rate of 1.25 Gb/s. Sisoft set a new record in 2014 by transmitting data at 10 Gb/s across the light spectrum emitted by LED lights. The latest integrated Li-Fi systems use CMOS optical receivers with light-sensitive avalanche photodiodes; in July 2015, operating such photodiodes in Geiger mode, in which a photon charges the diode, was reported to improve energy consumption and boost receiver sensitivity.
This process can also be aided by computational techniques, in which receivers detect faint signals from a long distance. Li-Fi is not a brand-new technological breakthrough; infrared light has long been used in remote controls. The key advance came in 2011, when the first gigabit-class Li-Fi link was demonstrated, supported by Fraunhofer IPMS and Ibisentelcom. Li-Fi has a unique opportunity to complement radio frequency (RF) technologies: Wi-Fi is excellent for widespread wireless coverage within buildings, while Li-Fi is ideal for high-density wireless data coverage with minimal interference, so the two technologies can be viewed as complementary [3]. Bandwidth, distance, data quality, security, dependability, power availability, transmission, power consumption, environmental impact, device-to-device communication, interference, device accounting, and market readiness can all be compared between the transmitter and receiver technologies. From these considerations we infer that Li-Fi technology will be superior in the future. Our project's main goals are as follows: to design and construct a long-range data transmission system based on Li-Fi technology; to implement the entire system in order to assess its actual impact and validate our efforts; and to investigate the performance of the system for future reference and upgrades.
Literature review
This section reviews the literature, including work from the past year related to our efforts; by studying it, we can overcome the weaknesses of previous projects and improve their effectiveness. The development of Li-Fi technology aims to increase data throughput, reduce power usage, and improve performance. Li-Fi is a bidirectional network solution that provides a user experience very similar to Wi-Fi. Over time, connectivity requirements will increase dramatically [4], and we need a network with higher spectral capacity to meet these demands. With Li-Fi, we can use a spectrum that is 100,000 times larger than the radio-frequency spectrum. Li-Fi is now capable of delivering unparalleled data rates and capacity. It is a type of optical wireless technology that includes infrared, ultraviolet, and visible-light transmission [3]. Li-Fi is distinguished by the fact that the same light energy used for lighting can also be used for communication [4]. Li-Fi technology is simple but effective. Photons are emitted from an LED bulb when a continuous current is applied to it, and this appears as light. Thanks to semiconductor technology, LED bulbs allow very rapid changes in current, and hence in light output, that may be detected by a photodetector. High-speed information may therefore be sent via an LED light. Like low-cost optoelectronic devices such as remote controls, Li-Fi uses direct modulation techniques. LED light bulbs can also support very high data rates due to their high intensity [5]. High bandwidth density reduces the need to share bandwidth with other users, which improves the user experience: Li-Fi has a data density about a thousand times greater than that of Wi-Fi, so more data per square meter is provided [6]. Li-Fi communication can function even in direct sunlight, since modulated light can still be recognized. Because the system detects rapid variations in light intensity rather than the slowly fluctuating levels caused by sunlight, and because light waves in Li-Fi are strongly modulated, the sun merely provides a constant light level that the receiver can simply filter out. Integrated Li-Fi wireless technology is being used in new 'smart lighting' installations, for short-range Li-Fi, for connecting to local network settings, and for communities worldwide.
Visible Light Communication (VLC) technology transfers data at the speed of light. Li-Fi is being utilized to provide low-cost, long-lasting, secure, and high-quality service. VLC poses no health risk to the human body because, unlike systems that employ microwaves, it uses sustainable and environmentally friendly green technology. Along with EPP, VLC, and simple wireless plug-and-play technology come the system's benefits and applications. LEDs are more practical than present fluorescent tubes, and VLC systems work at light speed even in direct sunlight. Visible light is not affected by Wi-Fi or other RF interference with system users or by electromagnetic interference, and the spectrum is free, unregulated, and unlicensed up to THz frequencies. The hybrid system is made up of many components in a layered structure: the structure of a complex system, a channel model, and modulation schemes; the MAC and PHY layers are separated in the framework. The Li-Fi VLC was created to complement PLC. We attempted to complete this project by reading the aforementioned material, and we completed it successfully by avoiding the faults of the previous year's project.
Research method
We detail our study process, the project block diagram, the circuit diagram, the project's working principle, and the final project view in this section. This review relied on specific criteria and settings to select its related articles, from the beginning stages of the search procedure to the final stages of producing this work. A critical component of every inquiry is the use of proper keywords to discover possible research areas. The phrase "Li-Fi" is one of the most common search terms for previous research on Li-Fi technology; it has appeared in all Li-Fi studies, including Haas' work and other relevant articles [7].
As a result, it was concluded that this keyword is sufficient and acceptable to cover the crucial areas in this evaluation. The papers considered for the study were all written in English. Both review studies, which offer a literature overview and are valuable sources of knowledge, and journal research papers, which present original research, were employed. All work in this review was produced within the ten-year time frame since 2011, when Li-Fi was first made public. Li-Fi offers a wide range of applications; as a result, including them all in a single document would be difficult. Instead, focusing on a few areas of Li-Fi research and emphasizing them provides more interesting findings.
As a result, the closing paragraph in this section concentrates on the inclusion and exclusion methods utilized in this study [8]. All of the included research involved Li-Fi-related simulation studies. We selected these studies because we wanted to present simulation-based work on Li-Fi. Only a few countries and industries have adopted Li-Fi as a communication system, and it is still not widely used worldwide; presenting relevant numerical simulations would therefore encourage researchers and developers to test Li-Fi before it is formally deployed. Due to methodological limitations, all OWC papers that do not contain Li-Fi-based systems as part of their communication and equipment analyses were eliminated [7]. The procedure for this project is as follows: (a) develop an idea for the design and construction of a long-range data transmission system based on Li-Fi technology, (b) design a block diagram and schematic to determine which components we need, (c) assemble all the components and program the microcontroller to control the entire system, and (d) place all the components on a printed circuit board and solder them. Finally, all the components are mounted on the board and the system is tested. In our project, we developed a long-range data transmission system based on Li-Fi technology [9]. The current from the AC source enters the DC output circuit through an adapter. This circuit includes an audio amplifier, a lithium-ion battery, a laser light, a small solar panel, and a speaker. Li-Fi is a free-space wireless communication technique that uses light to convey data between devices. On the transmitter side, the laser light, DC power supply, and audio amplifier are connected to a 3.5 mm jack, which is connected to the audio source. On the receiver side, we have a solar panel, a lithium-ion battery-powered audio amplifier, and a speaker. When the 3.5 mm jack is connected to the audio source on the transmitter side, the laser or LED illuminates, but there is no variation in light intensity while the audio source is turned off. When sound is played, the light intensity varies accordingly; when the volume is increased, the intensity of the LED or laser varies more strongly. The photovoltaic panel is so sensitive that even small variations in intensity result in a variation in voltage at the panel's output. As a result, when light from the LED falls on the panel, the voltage fluctuates with the intensity of the light. The photovoltaic voltage is fed into the amplifier, which amplifies the signal, and the audio is output through the speaker attached to the amplifier [10]. Output is produced as long as the solar cell receives light from the LED.
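The working principle, intensity modulation of the light source by the audio signal followed by direct detection at the solar panel, can be captured in a few lines of simulation. The bias and gains below are assumed toy values, not measurements from our circuit:

```python
import numpy as np

fs = 44_100                                  # audio sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)               # 10 ms of signal
audio = 0.5 * np.sin(2 * np.pi * 100 * t)    # a 100 Hz test tone

# Transmitter: the LED/laser must stay on, so the audio rides on a DC bias
# (intensity modulation).
bias = 1.0
intensity = bias + audio                     # emitted optical intensity

# Receiver: the solar-panel voltage is modeled as linear in intensity;
# removing the mean (AC coupling) leaves the audio for the amplifier.
panel_gain = 0.8
panel_voltage = panel_gain * intensity
recovered = panel_voltage - panel_voltage.mean()

# The recovered waveform matches the transmitted tone up to the link gain.
error = np.max(np.abs(recovered - panel_gain * audio))
print(f"max reconstruction error: {error:.2e} (arbitrary units)")
```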
Experimental setup, results, and discussion
This project focused on two areas: hardware and software. The most apparent component of any information system is the hardware. An audio amplifier, a lithium-ion battery, a laser light, a mini solar panel, and a speaker comprise the hardware. "Software" refers to the entire collection of procedures, techniques, and programs necessary for a computer system to function; here, the software aids in the design of circuits, and we use Proteus for schematic capture. The hardware and software are described in depth below. The software is Proteus 8.9. The hardware components are the PAM8403 audio amplifier module, a mini photovoltaic panel, a communication module, a laser light, a DC power supply adapter, and a speaker [11].
Audio amplifier module (PAM8403): An audio power amplifier boosts weak electronic signals, such as those from a radio receiver or a musical instrument pickup, to a level that can drive loudspeakers or headphones. Audio power amplifiers are found in many different types of audio equipment: sound reinforcement, broadcasting, and domestic sound systems, as well as instrument amplifiers such as guitar amplifiers. In a conventional multichannel audio chain, the power amplifier is the last electronic stage before the signal is routed to the loudspeakers. PAM8403 stereo audio amplifier module: The PAM8403 is an amplifier board that can drive two speakers at 3 W + 3 W and is powered by a typical 5 V input. Anyone looking for a Class-D stereo audio amplifier, suitable for lithium-ion battery power, that fits on a small board should definitely consider this option. With this amplifier, users can output high-quality audio from a stereo input [12]. It can also drive speakers directly from its output, as seen in the project prototype photographs below. The PAM8403 amplifier board has the following features: operating voltage, 2.5-5 V DC; dual-channel stereo with a high maximum output (3 W + 3 W) into a 4 Ω load at 5 V DC and 10% THD; maximum gain, 24 dB; filterless architecture; low EMI and low quiescent current; operating temperature range, -30 to +80 °C; short-circuit protection and thermal shutdown; and up to 90% efficiency. The PAM8403 is the primary power amplifier IC, as seen in the figure below. Aside from the IC, the module is made up of a few other components, such as capacitors and resistors. The amplifier board is dual-channel with a total output power of 6 W (3 W + 3 W). The left-channel audio input is marked "L", the ground is marked "G", and the right-channel audio input is marked "R"; the power supply is 5 V. L± indicates the left-channel positive and negative outputs, and R± indicates the right-channel positive and negative outputs. The PAM8403 includes built-in short-circuit protection, which is crucial for trouble-free operation, because every large amplifier system requires it [13,14].
Since the PAM8403 amplifier IC does not require a heat sink, it is an excellent choice for bespoke speaker applications. It can drive 4 Ω or 8 Ω speakers directly. A good speaker with a maximum output power of 3 W must be used. Since this is a stereo amplifier board, the input section includes two inputs, L and R, with a common ground. It will generate 3 W + 3 W audio output from any form of audio input that requires amplification, while running from lithium-ion batteries. At 5 V DC input into a 4 Ω load, this amplifier module has a peak gain of 24 dB at a THD of 10%. It works smoothly without a heatsink, which frees up space on the board; regardless of the heatsink, it also provides thermal protection, an important function for a low-wattage amplifier module. Applications include LCD monitor and TV projector speaker output, improving the speaker output of lithium-ion battery-powered notebook laptops, portable speakers, portable DVD players, and game machines, and any compact wireless amplifier project with a 5 V supply.
A solar panel is made up of many electrically connected photovoltaic modules mounted on a structural support [15]. Solar cells that have been pre-packaged and linked together form a photovoltaic module. The solar panel can be utilized in commercial and domestic applications as part of a larger solar power delivery and generation system. Each module is rated for its DC output power under conventional test conditions, which generally varies from 100 to 320 watts under International Electrotechnical Commission (IEC) specifications. The size of a module is dictated by its efficiency for a given maximum power: a 230-watt module with an efficiency of 8% requires twice as much area as a 230-watt module with an efficiency of 16%. Due to a single solar panel's capacity limitations, the majority of systems use multiple solar panels [16]. A photovoltaic system is made up of a panel or array of solar cells, an inverter, and, in some cases, a battery, a solar tracker, and interconnection cables. Photovoltaic solar modules only generate power when the sun shines; they do not store energy, so to assure the flow of power when the sun is not shining, a portion of the electricity produced must be stored. The most obvious answer is to employ batteries, which store electrical power chemically. Batteries are series-connected sets of rechargeable cells (devices that convert chemical energy into electrical energy). Batteries are made up of two electrodes submerged in an electrolyte solution, which generate an electric current when connected by a circuit. The current is generated by reversible chemical reactions within the cell between the two electrodes and the electrolyte. Secondary batteries are rechargeable batteries: electric energy is stored as chemical energy in the cells while the battery is charged, and when the battery is discharged, the chemical energy contained in it is released and transformed into electrical energy [17-20]. A robust outer poly frame encloses and safeguards high-quality, specially designed solar modules and polycrystalline solar cells. The maximum output power is 0.66 W, the maximum operating voltage is 6 V, and the maximum charging current is 110 mA; the minimum output power is 0.55 W, at an operating voltage of 5.5 V and a charging current of 100 mA.
The installation or integration of small epoxy solar panels into a product is simple. No frames or special adjustments are required for construction, and installation requires only a minimal amount of room. Comparable amorphous thin-film solar cells produce only half as much electricity as these. They need no additional frames or modifications and are ready to use right away; connections are made simply by soldering or crimping the copper tape. Trays are made of thin, incredibly strong, weather-resistant substrates, or they can be custom-designed, injection-molded trays that are laser-cut and wrapped in UV- and weather-resistant materials, made for the product in question.
Possibilities include making your own solar-powered models or toys, as well as small crafts, science experiments, electrical applications, and charging small DC batteries [21]. Laser light waves travel together with their peaks in alignment, or in phase. This explains why laser beams can be focused on such a small spot and are so brilliantly focused and narrow. Because laser light stays concentrated and does not disperse the way light from a flashlight does, laser beams can cover very long distances. A laser is a device that produces light by optical amplification based on the stimulated emission of electromagnetic radiation; the first laser was built in 1960 by Theodore H. Maiman at Hughes Research Laboratories [22,23]. The coherent light that a laser emits sets it apart from other light sources. Lasers can be concentrated on a small spot thanks to spatial coherence, making it possible to use them for processes like lithography and laser cutting. Additionally, spatial coherence enables a laser beam to remain collimated, which enables the use of lidar and laser pointers over long distances.
Lasers, which have the highest degree of spatial coherence, are the only practical way to create light with an extremely narrow spectrum. Alternatively, femtosecond-long, broad-spectrum ultrashort light pulses may be produced via temporal coherence. Electroacoustic transducers, also known as loudspeakers, are devices that convert electrical audio signals into the desired sound. A loudspeaker system, also known as a "box" or "speaker", is made up of one or more of these speaker drivers, an enclosure, and electrical connections, which may or may not include a crossover. The driver of a loudspeaker can be thought of as a linear motor attached to a diaphragm that converts the motion of the motor into the motion of air, i.e., sound. An audio signal, usually from a microphone, recording device, or radio broadcast, is electronically amplified to a power level that drives the motor, producing an acoustic equivalent of the original, unamplified electronic signal. Proteus is a proprietary software toolset used primarily for electronic design automation. Electronics designers and engineers use the program primarily to develop schematics and electronic prints for PCB production. The original version of what is now referred to as the Proteus Design Suite, PC-B, was developed in 1988 for DOS by the company's CEO John Jameson. Support for schematic capture was added in 1990, and Windows environments were adopted at the same time. Proteus first included mixed-mode SPICE simulation in 1996, followed by microcontroller simulation in 1998. Shape-based autorouting was introduced in 2002, and 3D visualization of printed circuit boards was added in 2006. 2011 saw the creation of a dedicated IDE for simulations, and 2015 saw the addition of MCAD import and export; support for high-speed design was introduced in 2017. Receiving and transmitting modules are critical components of such a system. Figure 2 depicts these components in further detail; an approximation of the components and their functional purpose is provided, together with a model of dynamic laser-signal routing across the transceivers. A VLC channel comprises a line-of-sight (LOS) path and a non-line-of-sight (NLOS) path; the geometry of VLC dispersion inside a single room is depicted. Each receiver is considered to have a photodetector. The straight-line path between the transceivers is known as the LOS path, and the corresponding Euclidean distance is denoted by d_{i,u}; the emission and incidence angles associated with the LOS path are denoted by φ_{i,u} and ψ_{i,u}, respectively. Many different types of LEDs, with modulation bandwidths ranging from several tens of MHz to around 150 MHz or even more, were used to broadcast data. A photodetector or photovoltaic array is frequently used to detect the signal on the receiver side [24]. Several models of a light-emitting diode's radiation intensity and their corresponding values of the Lambertian radiation parameter m are depicted. To obtain precise indications of the operation of the Li-Fi network, consider a system in which user 1 sends data to user 2. The signal is sent from user 1's transmission module to the reception module on the ceiling, where it is picked up by a cluster of LED lights mounted on the ceiling. Photons are converted into electric current once they have been collected by the photodetector.
The optical energy is converted into an electric current, which then enters the microcontroller that controls the LED panel. The microcontroller then forwards the signal as data to the second user. The power of the intermediate transmitter is defined as the ratio of the received signal to the transmitted signal, which is determined by the structural features of a particular bulb; this is critical to remember. When transmitting information from the set of LED bulbs on the ceiling to user 2's receiver, the previously outlined steps apply. Table 1 includes the data required to execute the computations [25-28]. To calculate how much energy arrives at the receiver from the transmitter during the transfer of a light signal, the LOS channel gain, according to [13], may be computed as follows:

H_LOS = ((m + 1) A_pd / (2π d_i,u²)) · cos^m(φ_i,u) · g_f · g_c(ψ_i,u) · cos(ψ_i,u),  for 0 ≤ ψ_i,u ≤ ψ_max,

where the Lambertian emission order is

m = −ln 2 / ln(cos Φ_1/2),   (2)

and Φ_1/2 is the half-intensity angle, i.e., the emission direction at which the radiated intensity drops to one half of its value on the principal optical axis; g_c(ψ) = n²/sin²(ψ_max) is the gain of the optical concentrator.
A_pd: physical area of the photodetector; g_f: gain of the optical filter.
Table 1. Data required to execute the computations
  Height between ceiling and user (photodetector): h = 2 m
  Photodetector area: A_pd = 1 cm²
  Optical filter coefficient: g_f = 1
  Refractive index: n = 1.5
  Half-intensity radiation angle: Φ_1/2 = 60° (1.047 rad)
  Photodetector field of view: ψ_max = 90° (1.571 rad)
  Optical transmitter power: P_opt

Based on the dataset obtained, the resulting H_LOS between the light source and the photodetector is 4.686×10⁻⁵ for the shortest route and 2.929×10⁻⁵ at the largest distance (according to the input data). For the purpose of simplicity, only first-order reflections are considered in the NLOS route. A first-order reflection path is divided into two segments whose lengths are denoted by d_i,w and d_w,u. The emission and incidence angles are φ_i,w and θ_i,w for the first segment, and θ_w,u and ψ_w,u for the second segment. Because the path-length differences throughout the room are quite small, delays among these diverse paths may be ignored; in other words, the signals arriving over the various paths are assumed to reach the receiver at the same time. The following expression is used to determine the NLOS gain of a Li-Fi channel:

H_NLOS = ((m + 1) A_pd / (2π² d_i,w² d_w,u²)) · ρ_w A_w · cos^m(φ_i,w) cos(θ_i,w) cos(θ_w,u) cos(ψ_w,u),

where A_w signifies a small wall reflection area and ρ_w is the reflection coefficient of the wall.
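For concreteness, the Python sketch below evaluates the Lambertian LOS and first-order NLOS gains defined above. The geometry values in the example call are illustrative assumptions, so the outputs are not meant to reproduce the exact figures quoted in the text.

```python
import numpy as np

def lambertian_order(phi_half_rad):
    # Lambert radiation parameter m from the half-intensity angle, Eq. (2)
    return -np.log(2.0) / np.log(np.cos(phi_half_rad))

def h_los(d, phi, psi, A_pd=1e-4, g_f=1.0, n=1.5, psi_max=np.pi / 2,
          phi_half=np.deg2rad(60.0)):
    """DC gain of the line-of-sight path between transmitter and photodetector."""
    if psi > psi_max:
        return 0.0
    m = lambertian_order(phi_half)
    g_c = n**2 / np.sin(psi_max)**2          # gain of the optical concentrator
    return (m + 1) * A_pd / (2 * np.pi * d**2) * np.cos(phi)**m \
           * g_f * g_c * np.cos(psi)

def h_nlos(d_iw, d_wu, phi_iw, th_iw, th_wu, psi_wu, A_w, rho_w,
           A_pd=1e-4, g_f=1.0, phi_half=np.deg2rad(60.0)):
    """First-order wall-reflection (NLOS) gain, following the formula above."""
    m = lambertian_order(phi_half)
    return (m + 1) * A_pd / (2 * np.pi**2 * d_iw**2 * d_wu**2) \
           * rho_w * A_w * g_f * np.cos(phi_iw)**m * np.cos(th_iw) \
           * np.cos(th_wu) * np.cos(psi_wu)

# Example: ceiling luminaire 2 m above the detector, detector directly below.
print(h_los(d=2.0, phi=0.0, psi=0.0))
```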
Let us do the computation, substituting the distance between the wall and the photodetector for the parameter x: H_NLOS(0) = 4.074×10⁻¹⁰, H_NLOS(2) = 1.91×10⁻¹³, H_NLOS(6) = 2.205×10⁻¹⁴, H_NLOS(9) = 9.863×10⁻¹⁵. As the calculations above show, even in the best case H_NLOS(0) = 4.074×10⁻¹⁰ is five orders of magnitude smaller than the best LOS value H_LOS. As a result, we shall disregard the NLOS contribution throughout the remainder of this study.
Thereafter: H_Li-Fi = H_LOS + H_NLOS ≈ H_LOS. The photons arriving at the light sensor carrying the received signal are transformed into an electric current, whose value can be quantified as

i = R_pd · H_Li-Fi · (P_opt / κ),

where R_pd is the detector responsivity; P_opt is the transmitted optical power per Li-Fi access point; κ is a transformation factor from optoelectronics to power production; and the factor P_opt/κ is equal to the strength of the transmitted signal.
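A minimal sketch of this photocurrent computation follows; the values of R_pd and of the transmitted signal strength P_opt/κ are assumptions chosen for illustration, since neither is specified numerically in the text.

```python
R_pd = 0.4          # photodetector responsivity, A/W (assumed)
P_signal = 1.0      # transmitted signal strength P_opt / kappa, W (assumed)
H = 4.686e-5        # channel gain H_Li-Fi ~ H_LOS, from the text

P_rx = H * P_signal   # received optical power
i_pd = R_pd * P_rx    # electric current in the photodetector
print(P_rx, i_pd)
```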
The results of calculations performed in MATLAB with the input data supplied in Table 1 are the received signal power and the value of the electric current in the photodetector. The received signal has a maximum power of 0.043079 W and a lowest value of 0.00011096 W. It must be noted that the light sensor may receive signals ranging from 4×10⁻⁶ W to 500×10⁻³ W, so the values received are within the allowable range. The maximum electric current value is 0.06847 A, while the smallest value is 0.00017643 A. After completing the system, we monitored it continuously during operation; it performed as expected, and every piece of equipment worked correctly in our tests.
Finally, we successfully completed our project and made sure everything ran as planned. First, we started the system and tested it under different settings: when the laser light hits the solar panel, the audio is transmitted. We created this technique to transport audio using laser light, and our major goal was to employ a wireless transmission method. Although we had some difficulties building this system, we were able to finish it thanks to the dedication and great assistance of our supervisor. Because of its precision, our project has several advantages, some of which are listed below. Unlike Wi-Fi, which operates on the radio frequency spectrum, Li-Fi operates on the visible light spectrum, which is still underutilized. Li-Fi addresses the problem of radio frequency signal interference thanks to the vast range of the light wave frequency spectrum. Li-Fi requires low usage and maintenance expenses. Light waves, since they do not penetrate walls, give greater privacy, security, and control than Wi-Fi. The system is user-friendly and requires very little energy, provides audio transmission over a wireless Li-Fi link, and the project is small, inexpensive, and simple to use. This project has several possible applications in today's modern and practical world, some of which are listed below: relief from RF spectrum congestion, mobile connectivity, smart lighting, hazardous environments, and RF avoidance.
Conclusions
This article takes an in-depth look at an audio transmission system that uses Li-Fi technology. The notion of Li-Fi is currently causing quite a stir all over the world. The system can be used with existing lighting infrastructure and requires no substantial modifications. Visible light communication is a fast-evolving technique in the field of wireless technology, and Li-Fi is a wireless data transfer system that is both fast and inexpensive. The growing need for larger bandwidth; faster, safer data and audio transfer; and environmentally and demonstrably human-friendly technologies heralds the beginning of a massive wireless revolution. This new technology is typically touted as ecologically benign and safe. It can also be used in potentially hazardous situations, such as thermal and nuclear power plants, without causing electromagnetic interference. As a consequence, Li-Fi may effectively replace Wi-Fi. We are considering adding numerous features to our project in the future to obtain better outcomes, such as data transmission and video distribution methods. Adding a noise termination circuit at the receiver end may also help minimize output noise in the future.
Author contribution
In this study, we successfully investigated various transmission and reception techniques while transmitting and receiving data via FSOC using light technology, and we have shown that our technique yields results that stand out from others. The contributions to the paper are as follows: Omar Faruq 1 : study conception, physical project supervision, design, analysis and interpretation of results, draft preparation; Kazi Rubaiyat Shahriar Rahmana 2 , Nusrat Jahan 3 : data analysis, data collection, coding, operation, and draft preparation; Sakib Rokoni 4 , Mosa Rabeya 5 : physical project making, calculation, operation, and draft preparation. All authors approved the final version of the manuscript.
Declaration of competing interest
The authors declare that they have no known financial or non-financial competing interests in any material discussed in this paper.
Funding information
The author(s) did not receive any funding for the project making, research, writing, or publication of this work. | 2023-06-03T15:17:02.123Z | 2023-06-01T00:00:00.000 | {
"year": 2023,
"sha1": "297010adcb814a17547571e9cc0e57d7e0dcb425",
"oa_license": "CCBY",
"oa_url": "https://sei.ardascience.com/index.php/journal/article/download/192/170",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d3957a41e08b33fb00231cabd77c14b6359a5f27",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
55011170 | pes2o/s2orc | v3-fos-license | Research on Dynamic Modeling and Application of Kinetic Contact Interface in Machine Tool
A method is presented that combines theoretical analysis and experiment to obtain the equivalent dynamic parameters of a linear guideway through four steps, described in detail. The dynamic modeling of the linear guideway is studied comprehensively through statics analysis, vibration model analysis, dynamic experiment, and parameter identification. Based on contact mechanics and elastic mechanics, the mathematical vibration model and the expressions for the basic mode frequencies are deduced. Then, the equivalent stiffness and damping of the guideway are obtained by means of a single-degree-of-freedom mode fitting method. Moreover, the investigation above is applied to a gantry-type machining center; by comparing with the simulation model and experimental results, both the applicability and correctness of the approach are validated.
Introduction
Dynamic analysis and simulation of numerically controlled (NC) machine tools is a very important research direction in modern advanced manufacturing and equipment technology. In an NC machine tool, the linear rolling guideway is not only a significant functional component but also an important kinetic contact interface, meaning that its characteristics have a direct effect on machining precision and performance. Therefore, modeling of the contact interface, which takes the dynamic properties of the NC machine tool into consideration, is a prerequisite for establishing an overall dynamic model; without it, no practical conclusion can be obtained.
For the dynamic analysis and prediction of machine tools, the modeling of the contact interface and the accurate identification of its parameters are the main difficulty. In recent years, domestic and overseas research on contact interfaces has focused on three aspects: the mechanism of the contact interface, its modeling, and parameter identification. Some researchers, represented by Wen et al. [1,2], put forward scale-independent fractal models of the normal and tangential contact stiffness and revealed the nonlinear relationship between the contact stiffness and the interface parameters. Researchers represented by Zhang et al. [3] and Mao et al. [4] built fundamental property models of the contact interface and multinode dynamic models; Dhupia et al. [5] applied a frequency-domain joint-part model with weakly nonlinear characteristics to basic machine tool modeling and predicted the processing performance. For identifying the parameters of the joint part, the main approaches are the frequency response function identification method [6], the response coupling method [7], and contact interface parameter optimization based on finite element modeling [8]. At present, research on joint parts is still at a contending stage, and numerous problems of mechanism and modeling remain to be solved.
There are certain difficulties in applying mechanism models to engineering: although the fundamental performance models and parameter statistics obtained are versatile, they must be based on a great number of experiments.
On the basis of previous research [9-11], and starting from the material and structural aspects, this paper puts forward a combined "analysis-experiment" dynamic modeling method for the linear guideway and studies the identification of the parameters of the guideway contact interface. Furthermore, this paper builds a four-in-one joint-part research method of "static stiffness model, vibration model, experimental parameters, parameter identification" and applies this method to the guideway contact part and to the overall dynamic modeling of a gantry machining center. The research results provide an effective, feasible, and practical way to predict machine performance in the design phase.
Modeling of the Rolling Guideway Contact Part

First of all, the relationship between stress and deformation is obtained by analyzing the statics of the rolling linear guideway under general loading; the static stiffness is solved by building a static stiffness model of the guideway, providing the input parameters for the guideway vibration model. Secondly, the Lagrange method is employed to analyze the vibration properties of the linear rolling guideway, and the basic modal frequencies of the vibration model are solved analytically, providing the input for the dynamic model parameter identification. Finally, with the help of a hammer-hitting experiment, the multipoint frequency response functions of the guideway are acquired. Combined with the basic modal frequencies of the guideway vibration, the single-degree-of-freedom mode fitting method is used to obtain the stiffness and damping values of the equivalent guideway model.
Static Stiffness Model.
Consider the linear rolling guideway under a load in the vertical direction, a load in the horizontal direction, and a moment about the longitudinal axis (shown in Figure 2). The relation [12] between stress and deformation is obtained via mechanical analysis of the linear guideway under this general loading. The resulting system of equations is nonlinear and can be solved by numerical computation.
Substituting a series of different externally applied loads into the algorithm above yields the corresponding relative displacements of the guideway under each loading condition. Numerical differentiation of these load-displacement curves then yields the stiffness in the vertical and horizontal directions. On the basis of the computing method above and MATLAB, and taking the Schneeberger MRB35 linear rolling guideway as an example, the measured parameters of the guideway are shown in Table 1.
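As an illustration of this step, the following Python sketch solves a load-deflection equilibrium numerically and differentiates the result to obtain the stiffness. Since the paper's actual nonlinear equation system is not reproduced in the text, a generic Hertz-type contact law δ = C·F^(2/3), with an assumed compliance constant C, stands in for it.

```python
import numpy as np
from scipy.optimize import fsolve

C = 2.0e-7  # contact compliance, m / N^(2/3) (assumed)

def residual(delta, F_ext):
    # Equilibrium: elastic contact force balances the external load.
    # abs() guards against negative trial values during the root search.
    F_contact = (np.abs(delta) / C) ** 1.5
    return F_contact - F_ext

loads = np.linspace(1e3, 2e4, 20)                       # external loads, N
defls = np.array([fsolve(residual, x0=1e-5, args=(F,))[0] for F in loads])

# Stiffness k = dF/d(delta), by numerical differentiation of the curve
k = np.gradient(loads, defls)
print(k[:3])
```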
Figure 3 shows the relation between the vertical bearing load and the vertical deformation while the load is within the bearing capacity. In Figure 3, the line marked with red five-pointed stars represents the data computed from the theoretical model, and the line marked with blue points represents the experimental data provided by the guideway manufacturer. As shown in the figure, the relation between loading and deformation is close to linear and can be approximately treated as such; in the simplified model, it can be replaced by a linear relation.
Dynamic Model.
Ignoring the mass of the roller, the vibration model of the linear rolling guideway considers only the normal stiffness of the roller (the normal direction being perpendicular to the contact interface) and simplifies the roller between the two contact interfaces as a spring perpendicular to the contact interface, as shown in Figure 4. The solution for the spring stiffness is derived in [9,10].
Expressions for the simplified normal spring stiffness of a single sphere roller and of a single cylinder roller follow from Hertz contact theory [9,10]. The Lagrange formalism uses variations of the generalized coordinates to represent the virtual displacements of the particles in the system; this approach is more convenient than Newton's laws of motion for solving some problems (e.g., small-oscillation theory and rigid body dynamics).
The Lagrange equation of the linear rolling guideway is given by

d/dt (∂T/∂q̇_j) − ∂T/∂q_j = Q_j,  j = 1, 2, ..., n,

where T is the kinetic energy of the system, q_j is a generalized coordinate, q̇_j is the corresponding generalized velocity, Q_j is the generalized force corresponding to q_j, and n is the number of generalized coordinates, i.e., the number of degrees of freedom of the system. If all forces applied to the particles are potential forces, with U representing the potential function of the system, the generalized force corresponding to the generalized coordinate q_j is

Q_j = −∂U/∂q_j.

Defining L = T − U, L is named the Lagrange function or dynamic potential. The Lagrange equation in a potential field can then be written as

d/dt (∂L/∂q̇_j) − ∂L/∂q_j = 0.

The Lagrange function of the linear rolling guideway system shown in Figure 4 can be established directly.
The overall kinetic energy T of the system can be expressed in terms of the generalized coordinates and velocities.
The potential function U is expressed in terms of the following quantities. The generalized coordinates of the slider are its two translational displacements and its angular displacements in pitching, yawing, and rolling. The remaining symbols denote: the mass of the slider; the moments of inertia of the slider about the coordinate axes; h₁, the distance from the center of roller 1 to the origin of coordinates; h₀, the distance from the center of roller 1 to the center of roller 3; the angle between the normal direction of the contact interface and the horizontal direction; the elastic potential energy of a normal-direction spring; the number of rollers in the ith raceway (i = 1, 2, 3, 4); the coordinate of each roller along its raceway, as shown in Figure 5 (the angle between the raceway axis and the corresponding coordinate axis is π/2 minus the contact angle); and the length of a single raceway roller.
From Eq. (11), the vibration frequency of the guideway in the direction perpendicular to the guide axis is obtained; from Eq. (12), the vibration frequency of the pitching motion; and from Eq. (13), the vibration frequency of the yawing motion. The lateral translational displacement and the angular rolling displacement couple together, so Eqs. (10) and (14) are solved simultaneously to obtain the corresponding vibration frequencies.
Then the simultaneous equation above can be written as follows.
Solving Eq. (19), and noting that the translational displacement and the angular rolling displacement couple together, the two frequencies obtained are the low-order rolling frequency ω_RL and the high-order rolling frequency ω_RH. The solutions of the vibration model above are the vibration frequencies of the linear rolling guideway.
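The coupled modes can be illustrated numerically: the sketch below solves det(K − ω²M) = 0 for a two-degree-of-freedom translation/rolling model. All parameter values are assumed placeholders, not the MRB35 data.

```python
import numpy as np
from scipy.linalg import eigh

m, J = 2.5, 1.2e-3                      # slider mass [kg], moment of inertia [kg m^2] (assumed)
k_y, k_t, k_c = 4.0e8, 9.0e4, 3.0e5     # translational, torsional, coupling stiffness (assumed)

M = np.diag([m, J])
K = np.array([[k_y, k_c],
              [k_c, k_t]])

w2, _ = eigh(K, M)                      # eigenvalues are the squared angular frequencies
freqs = np.sqrt(w2) / (2 * np.pi)
print(freqs)                            # low- and high-order coupled mode frequencies [Hz]
```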
The parameters of the experimental guideway, a Schneeberger MRB35-V2, are displayed in Table 1. The vertical translational frequency computed analytically from the vibration model is 3529 Hz.
Dynamic Parameter Identification of Rolling Guideway
Because the contact part of the guideway has multiple modes, as revealed by the hammer-hitting experiment, several groups of vertical and horizontal "spring-damper" elements can be used as a simplified equivalent model of the guideway contact part. We make use of the results of the vibration model above and combine them with experiment to identify the dynamic parameters of the equivalent model. The experimental setup of the hammer-hitting test is shown in Figure 6. The measurement at the four points A, B, C, and D is shown in Figure 7.
The vibration spectra at the four points A, B, C, and D are displayed in Figure 8 (including the transfer function amplitude and the correlation coefficient). The measured value of the vertical vibration frequency is 3450 Hz, whose error is less than 3% compared with the theoretical calculation.
As shown in Figure 8, the slider-roller-guideway is a system with multiple modes. The necessary parameters are found by combining the measurements with the analytical results of the theoretical vibration model, namely the vertical vibration frequency f_v and the horizontal vibration frequency f_h, and the dynamic stiffness is then identified in both the vertical and horizontal directions. The damping ratio and damping value are calculated using the half-power method. The details of the calculation are given in Eqs. (24)-(29):

k_v = m (2π f_v)²,   (24)
k_h = m (2π f_h)²,   (25)
ζ_v = (f_v2 − f_v1) / (2 f_v),   (26)
c_v = 2 ζ_v √(k_v m),   (27)
ζ_h = (f_h2 − f_h1) / (2 f_h),   (28)
c_h = 2 ζ_h √(k_h m).   (29)

In these equations, m represents the mass; f_v and f_h represent the vertical and horizontal vibration frequencies; and f_v1, f_v2, f_h1, and f_h2 represent the corresponding half-power frequencies in the vertical and horizontal directions.
The measured vibration frequency in the vertical direction is f_v = 3450 Hz, and in the horizontal direction f_h = 1075 Hz. The results can then be computed using Eqs. (24)-(29).
Substituting these values into Eqs. (24)-(29) yields the stiffness and damping values in the vertical and horizontal directions.
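For illustration, the following sketch implements the half-power identification of Eqs. (24)-(29). The mass and the half-power frequencies f1, f2 are assumed values, since only the resonance frequencies are quoted above.

```python
import numpy as np

def identify(m, f_n, f1, f2):
    w_n = 2 * np.pi * f_n
    k = m * w_n**2                    # equivalent stiffness, as in Eqs. (24)/(25)
    zeta = (f2 - f1) / (2 * f_n)      # damping ratio from half-power bandwidth
    c = 2 * zeta * np.sqrt(k * m)     # equivalent viscous damping
    return k, zeta, c

# Measured basic frequencies from the experiment (vertical / horizontal);
# m, f1, f2 are illustrative assumptions:
print(identify(m=2.5, f_n=3450.0, f1=3400.0, f2=3520.0))
print(identify(m=2.5, f_n=1075.0, f1=1050.0, f2=1105.0))
```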
The Dynamic Analysis and Experimental Verification of Machine Tool Considering Contact Interface of Guideway
4.1. The FEM Model of the Machine Tool. We conduct the research on a gantry machining center driven by a linear electric motor (shown in Figure 9) and use FEM software to analyze the dynamic properties of the machining center. The guideway in the machine tool system, shown in Figure 10, is represented by an equivalent system of four vertical spring-damper elements and eight horizontal spring-damper elements, whose dynamic parameters are obtained using the method described above.
After adding the material property parameters, constraints, and loads, we apply modal analysis to the machining center and compute the first six natural frequencies and mode shapes of the overall machine tool (shown in Table 2).
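As a minimal illustration of how the identified spring-damper elements enter such a model, the sketch below computes the damped modes of a one-degree-of-freedom lumped system via the state-space matrix; all numbers are assumed placeholders rather than the machine-tool FEM data.

```python
import numpy as np

m = 400.0                      # effective slider + table mass, kg (assumed)
k = 4 * 7.5e8                  # four vertical springs in parallel (assumed per-element k)
c = 4 * 2.0e3                  # four vertical dampers in parallel (assumed per-element c)

M = np.array([[m]])
K = np.array([[k]])
C = np.array([[c]])

# Damped modes from A = [[0, I], [-M^-1 K, -M^-1 C]]
Z = np.zeros_like(M); I = np.eye(1)
A = np.block([[Z, I],
              [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
lam = np.linalg.eigvals(A)
print(np.abs(lam.imag) / (2 * np.pi))   # damped natural frequency, Hz
print(-lam.real)                        # decay rates, 1/s
```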
The experiment indicates that the FEM solution has a relatively small error compared to the experimental result, and the FEM simulation matches the practical situation well.
Conclusion
Based on research into the static properties of the linear rolling guideway, this paper investigates the dynamic performance of the rolling guideway, a simplified model of the guideway, and the identification of its dynamic parameters. The research results are then applied to a gantry machining center driven by a linear electric motor, which solves the problem of dynamic modeling of the contact interface in an NC machine tool. From the research described above, the following conclusions are obtained.
(1) Based on Hertz contact theory, the relation between the bearing load and the deformation of the linear rolling guideway is found to be approximately linear; in the simplified model it can be replaced by a linear model.
(2) Based on the Lagrange method, we investigate the vibration properties of the linear guideway and deduce its dynamic equations. The modal analysis shows that the slider-roller-guideway system has multiple vibration modes, namely pitching, yawing, vertical, low-order rolling, and high-order rolling vibrations, which means that it is a complex multimode system. Therefore, we use multiple groups of vertical and horizontal "spring-damper" elements as the simplified equivalent model.
(3) The paper proposes a guideway contact interface modeling method combining four stages: static stiffness model, vibration model, experimental parameters, and parameter identification. This method is effective for solving the stiffness and damping values of the equivalent guideway model. Moreover, it solves a key problem in machine tool modeling, namely the problem of contact interface modeling.
(4) In connection with the research and development of a practical machine tool, we establish an overall FEM model of a gantry machining center driven by linear motors, which includes the guideway contact interface model; moreover, the natural frequencies and modes of the machine tool are obtained. By comparing the computed solution with the experimental result, the efficiency and accuracy of the guideway contact interface model are verified.
(5) The spring model is used to replace the roller in the guideway system to improve the understanding of the stiffness and damping in the joint. However, when the interaction within the joint needs to be considered, the spring model is not suitable for simplified modeling of the roller; this will be discussed in future research.
Figure 1: Mind map of the guideway dynamic model research method.
Figure 3: The relationship of vertical stress and deformation of the MRB35 guideway.
Figure 8: The frequency response function (acceleration/force) of the linear guideway.
Table 2: The first six natural frequencies and modes of the machine tool computed by the FEM method.
Table 3: The first six natural frequencies of the overall machine tool (unit: Hz). | 2018-12-05T05:46:29.714Z | 2016-12-15T00:00:00.000 | {
"year": 2016,
"sha1": "f09fd161cf93ddaf56a49b961748dbe2c55c7cf8",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/sv/2016/5658181.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f09fd161cf93ddaf56a49b961748dbe2c55c7cf8",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
118453232 | pes2o/s2orc | v3-fos-license | Kondo model in nonequilibrium: Interplay between voltage, temperature, and crossover from weak to strong coupling
We consider an open quantum system in contact with fermionic metallic reservoirs in a nonequilibrium setup. For the case of spin, orbital or potential fluctuations, we present a systematic formulation of real-time renormalization group at finite temperature, where the complex Fourier variable of an effective Liouvillian is used as flow parameter. We derive a universal set of differential equations free of divergencies written as a systematic power series in terms of the frequency-independent two-point vertex only, and solve it in different truncation orders by using a universal set of boundary conditions. We apply the formalism to the description of the weak to strong coupling crossover of the isotropic spin-1/2 nonequilibrium Kondo model at zero magnetic field. From the temperature and voltage dependence of the conductance in different energy regimes we determine various characteristic low-energy scales and compare their universal ratio to known results. For a fixed finite bias voltage larger than the Kondo temperature, we find that the temperature-dependence of the differential conductance exhibits non-monotonic behavior in the form of a peak structure. We show that the peak position and peak width scale linearly with the applied voltage over many orders of magnitude in units of the Kondo temperature. Finally, we compare our calculations with recent experiments.
I. INTRODUCTION
For many decades, the Kondo model has attracted a great amount of interest in condensed matter physics. The Kondo effect was first discovered 1 and analyzed 2 in bulk metals which contain magnetic impurities, where the exchange coupling J between a localized spin-1/2 and the conduction electrons leads to a screening of the spin and to an increased resistivity at low temperatures (see Ref. 3 for a review). More recently, it was first predicted theoretically 4,5 and then confirmed experimentally 6,7 that the Kondo effect also occurs in quantum dots in the Coulomb blockade regime, where the net spin on the dot can form a single impurity that is exchange-coupled to the conduction electrons in two or more reservoirs. It turns out that the Kondo effect causes an enhancement of the conductance through the quantum dot at low temperatures, and that the conductance can reach the unitary value 2e²/h for very low temperatures and zero bias voltage. 8 Quantum dots do not only permit us to control the coupling between the impurity and the conduction electrons, but also allow us to study the behavior of the impurity in a nonequilibrium setup by applying a finite bias voltage. 9,10

A. Previous theoretical work

From a theoretical point of view, the Kondo model can be deduced from the single impurity Anderson model by integrating out the charge degrees of freedom using the Schrieffer-Wolff transformation. 11 Various methods have been applied to the Anderson and Kondo models in three different regimes. Equilibrium. Methods that have been applied successfully to the Anderson and Kondo models in equilibrium include Fermi-liquid theory, 12 the Bethe Ansatz, 13-16 conformal field theory, 17,18 and the numerical renormalization group 19-21 (NRG). An important result is that the zero bias conductance G_{V=0}(T) through a single impurity at finite temperature is unitary at T = 0, and is a universal function of the ratio T/T_K, where the Kondo temperature T_K is a characteristic energy scale that governs the low-energy behavior of the impurity. In two-loop poor man's scaling methods 3,22 it is defined by

T_K = D √J₀ exp(−1/(2J₀)),   (2)

where D is the band width of the reservoirs, and J₀ is the dimensionless exchange coupling between the impurity spin and the conduction electrons. The Kondo temperature is related to the width of the peak in G_{V=0}(T) at T = 0. Therefore, a precise definition of a characteristic low-energy scale is the temperature at which the conductance drops to half its maximum value,

G_{V=0}(T_K*) = e²/h.   (3)

We denote this energy scale by T_K*, in contrast to T_K, which is not uniquely defined in the literature.
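As a quick numerical illustration of the scale separation implied by Eq. (2) (assuming the two-loop form reconstructed above), a short sketch:

```python
import numpy as np

def T_K(D, J0):
    # Two-loop poor man's scaling estimate, Eq. (2)
    return D * np.sqrt(J0) * np.exp(-1.0 / (2.0 * J0))

D = 1.0                        # band width in arbitrary units
for J0 in (0.04, 0.02, 0.01):
    print(J0, D / T_K(D, J0))  # J0 = 0.04 gives D/T_K ~ 10^6
```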
Expansions in the strong coupling regime. In the case that both temperature and voltage are much smaller than the Kondo temperature, Fermi liquid theory has been used 12,23,24 to obtain an expansion of the differential conductance up to second order in T/T_K and V/T_K. The result is

G(V, T) = (2e²/h) [1 − c_T (T/T_K)² − c_V (V/T_K)²],   (4)

where the ratio of the coefficients c_V and c_T is

c_V / c_T = 3/(2π²).   (5)

For the ratio of c_V and c_T it is not important which definition of T_K is chosen. If one uses the energy scale T_K* instead of T_K in the expansion (4), we write

G(V, T) = (2e²/h) [1 − c_T* (T/T_K*)² − c_V* (V/T_K*)²].   (6)

This defines uniquely the coefficients c_T* and c_V*, which are universal numbers (i.e., independent of the details of the high-energy cutoff function) of O(1). Recently, the coefficient c_T* has been determined from very precise numerical renormalization group calculations with the result 25

c_T* ≈ 6.58,  c_V* = (3/(2π²)) c_T* ≈ 1.00,   (7)

which serves as a quality benchmark for the reliability of other many-body methods in the regime of very low energies. A delicate issue for the precise calculation of these coefficients is the fact that the band width D has to be many orders of magnitude larger than the Kondo temperature T_K in order to obtain universal results in the scaling limit

D → ∞,  J₀ → 0,  T_K = const.   (8)
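The ratio (5) fixes c_V* once c_T* is known; as a one-line check of Eq. (7):

```python
import numpy as np

c_T_star = 6.58
c_V_star = 3.0 / (2.0 * np.pi**2) * c_T_star
print(c_V_star)   # ~ 1.00
```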
Numerically, as explained in Ref. 26, it is very difficult to achieve this for the Kondo model, whereas for the underlying Anderson impurity model it has only recently become possible to extrapolate the universal value of c_T*. 25 In this paper, we will show that our analytical method allows for a different way to achieve universality directly for the Kondo model.
Weak coupling regime. If there is an energy scale in the system, such as the temperature, the voltage, or the magnetic field, which is much larger than the Kondo temperature, perturbative renormalization group (RG) methods can be used. They perform an expansion of the physical quantities in terms of a renormalized, but still small, coupling, provided that the RG flow of the coupling does not cause divergencies. These methods were pioneered by poor man's scaling 22 and include the following:
• Scaling methods that include a phenomenological decay rate Γ as a cutoff for the RG flow. 27-30
• The flow equations method, where the competition between terms of different orders in the coupling constant prevents divergencies during the RG flow. 31
• The real-time renormalization group (RTRG), which, unlike the previous methods, can explain the emergence of a decay rate Γ even in the lowest order truncation of the RG equations. The RTRG has been used with either the reservoir bandwidth 32 or an imaginary frequency cutoff, which cuts off the Matsubara poles of the Fermi distribution function, 33,34 as the flow parameter.
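To illustrate why such perturbative schemes break down at the Kondo scale, the following sketch integrates the one-loop poor man's scaling flow for the dimensionless coupling; the factor of 2 in the beta function is convention-dependent, so this is a schematic illustration rather than the paper's equations.

```python
import numpy as np

# One-loop flow dJ/dl = 2 J^2 with l = ln(D/Lambda) has the analytic
# solution J(l) = J0 / (1 - 2 J0 l), which diverges at Lambda ~ D e^{-1/(2 J0)},
# i.e., at the (one-loop) Kondo scale.
J0 = 0.04
l = np.linspace(0.0, 12.4, 200)          # flow "time" l = ln(D / Lambda)
J = J0 / (1.0 - 2.0 * J0 * l)            # renormalized coupling along the flow
print(J[0], J[-1])                        # weak coupling grows toward strong coupling
print(np.exp(-1.0 / (2.0 * J0)))          # Lambda/D at which J diverges
```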
B. Recent developments
The purpose of this paper is twofold. In the first part, we will describe in all detail the idea of the E-flow scheme of the RTRG, as proposed in Ref. 35. We note that this scheme is essentially different from the one developed in Ref. 33, where a cutoff of the Matsubara poles of the Fermi functions was used, and the RG equations were derived by the principle of invariance when reducing the cutoff. In contrast, the E-flow scheme uses the Fourier variable E itself as flow parameter, yielding a physical result for all quantities at each stage of the RG flow. The technical derivation of the RG equations is very different compared to Ref. 33 since one does not make use of the principle of invariance. Instead, one can set up directly a systematic and well-defined perturbative expansion of the derivatives of all physical quantities w.r.t. E in terms of the effective two-point vertex. Since E can be considered in the whole complex plane, the RG equations can be solved along arbitrary paths in the complex plane. This provides a natural scheme to define analytic continuations of all retarded quantities into the lower half of the complex plane, even on a purely numerical level. For these reasons, the E-flow scheme is a natural RG scheme capable of addressing the physics of nonequilibrium stationary states, together with the full time evolution starting from a system that is initially decoupled from the reservoirs (for more general initial conditions for quantum quenches and time-dependent Hamiltonians, see Refs. 44 and 45). Technically, the E-flow scheme allows for a systematic resummation of all logarithmic divergencies at high and low energies (i.e., short and long times) simultaneously and provides the possibility to solve the RG flow also starting from the infrared regime. As we will explain below, the latter turns out to be important to determine the universal part of the solution. The supplementary part of Ref. 35 contains a short description of the ideas of the E-flow scheme, whereas the present paper will reveal all technical details. Moreover, we will also go beyond Ref. 35 and develop a scheme which can be generalized to all orders, and we will show that it is sufficient to set up a systematic power series in terms of the frequency-independent two-point vertex only. We will focus on fermions and consider the case of a generic quantum dot in the Coulomb blockade regime (i.e., charge fluctuations are suppressed) which is coupled to noninteracting reservoirs with a flat density of states (DOS). Other extensions for charge fluctuations or frequency-dependent DOS are also possible and have recently been started in connection with the interacting resonant level model 44 and the Ohmic spin-boson model. 46 An important issue of this paper concerns universality, i.e., the way one can set up the scaling limit (8), which determines that part of the solution which is independent of the specific choice of the high-energy cutoff function. Whereas the limit D → ∞ can be performed directly for the RG equations (since all frequency integrals are convergent), it is necessary to find appropriate universal initial or boundary conditions to solve the differential equations. This is achieved by using a perturbative calculation for various quantities at high energies, together with the boundary condition of unitary conductance for E = V = T = 0. In this way, no specific form for the high-energy cutoff function is needed. In comparison to Ref. 35, we propose an improved scheme to set up the initial conditions which, for the Kondo model, guarantees universality already for exchange couplings of the order of J₀ ∼ 0.04, i.e., by using Eq. (2), for D/T_K ∼ 10⁶.
Furthermore, we will discuss critically the crucial issue of why the E-flow scheme can sometimes even provide quantitatively reliable information for the strong coupling regime although the RG equations are truncated in a perturbative manner. We will explain why this issue is related to the complex nature of the flow parameter E, such that the stationary case is not related to any fixed point of the RG but corresponds to some intermediate point in the RG flow where the solution is still analytic in E. In contrast, the fixed points correspond to a flow parameter E* = ±Ω − iΓ*, where Ω > 0 are the oscillation frequencies and Γ* > 0 the relaxation/decoherence rates of the time evolution.
In the second part of the paper, we will apply the E-flow scheme to the special case of the isotropic spin-1/2, single-channel Kondo model in nonequilibrium at zero magnetic field. In contrast to Ref. 35, we will consider the general case that both temperature and voltage are nonzero (and not only one of these scales) and analyze the interplay between temperature and voltage. We discuss situations where this interplay leads to a nonmonotonic temperature-dependence of the conductance at fixed finite voltage, and compare our results to recent experiments. Furthermore, due to our improved scheme for the initial conditions, we will present a new result for the universal coefficient c_V* and compare it to the known result (7). Surprisingly, we find that the deviation in third order truncation is only ∼ 1%, providing evidence that our solution for the nonlinear conductance is reliable in the whole range of voltages. This paper is organized as follows. In Sec. II, we present the generic model of a quantum dot in the Coulomb blockade regime and the special case which is considered in more detail here, namely, the isotropic Kondo model. In Sec. III, we introduce the description of the dynamics of the system in terms of superoperators in Liouville space, which forms the basis of the RTRG. Section IV describes the E-flow scheme of the RTRG for the generic model. Section IV A explains the general idea of the method, whereas readers who are interested in the technical details can find a step-by-step derivation of the RG equations in Secs. IV B-IV G. Section V demonstrates how the E-flow scheme of the RTRG can be applied to the isotropic Kondo model. Section VI presents the results of our calculations and a comparison with recent experiments. Finally, we summarize the most important ideas and results of this paper in Sec. VII. We use units e = k_B = ℏ = 1 throughout this paper.
II. MODEL
We consider a system which consists of a quantum dot with fixed charge (Coulomb blockade regime) and external non-interacting reservoirs. The quantum dot and the reservoirs are coupled in such a way that spin and/or orbital fluctuations can be induced on the dot. The total Hamiltonian of the system is

H_tot = H + H_res + V,   (14)

where the term H = Σ_s E_s |s⟩⟨s| (15) corresponds to the isolated quantum dot with eigenstates |s⟩ and eigenvalues E_s.
The reservoirs are described in the continuum representation by

H_α = Σ_σ ∫ dω (ω + μ_α) a_{+ασ}(ω) a_{−ασ}(ω),   (16)

where the operators a_{ηασ}(ω) are creators and annihilators (for η = + and −, respectively) for electrons with spin σ in reservoir α, and ω is the energy relative to the chemical potential μ_α. We will often use multiindices

1 ≡ η₁α₁σ₁ω₁   (17)

to simplify the notation, and sum or integrate implicitly over indices which appear twice in a term. If no ambiguities can occur, the subindex of 1 will be left out, e.g., 1 ≡ ηασω. The reservoir operators fulfill the anticommutator relation

{a₁, a₁'} = δ_{1 1̄'} ρ(ω),   (19)

where ρ(ω) is a dimensionless high-energy cutoff function for the leads with band width 2D, and 1̄ is a shorthand notation for switching the index η, i.e., 1̄ ≡ −η, ασω. We note that the DOS of lead α with spin σ is given by ρ_{ασ}(ω) = ρ⁽⁰⁾_{ασ} ρ(ω), where the constant ρ⁽⁰⁾_{ασ} is absorbed in the field operators such that the anticommutation relation (19) is fulfilled. Finally, the term

V = (1/2) g_{11'} : a₁ a₁' :   (23)

describes the coupling between quantum dot and reservoirs, where g_{11'} = g_{ηασ,η'α'σ'}(ω, ω') is an operator that induces spin and/or orbital fluctuations on the quantum dot, and : ... : denotes normal ordering of the reservoir operators. Note that g_{11'} can be non-zero only if η = −η' because V should not change the charge on the quantum dot.
A special case of this generic model, which will be examined more closely in this paper, is the isotropic spin-1/2, single-channel Kondo model with spin-unpolarized leads. In this case, the coupling operator g_{11'} takes the exchange form g_{11'} = (J₀/2) S · σ_{σσ'} (for η = −η' = +), where S is the spin-1/2 operator on the quantum dot, and σ is the vector of Pauli matrices.
The operator that corresponds to the electron current from reservoir γ to the quantum dot is

I_γ = −dN_γ/dt = −i [H_tot, N_γ],

where N_γ is the number of electrons in reservoir γ. The current operator can be written in the same normal-ordered form as the coupling (23), with the vertex g_{11'} replaced by a current vertex that restricts the first index to reservoir γ and weights with the direction η of the charge transfer. The current at time t is given by

I_γ(t) = Tr_tot I_γ ρ_tot(t),

where ρ_tot(t) is the total density matrix of the system at time t.
A. Superoperators and Fourier transform
Following the procedure described in Ref. 33, we introduce the concept of superoperators in Liouville space, which act on ordinary operators in Hilbert space. In particular, the Liouvillian L_tot is the superoperator which, when applied to an arbitrary operator b, yields the commutator of that operator with the Hamiltonian of the system:

L_tot b = [H_tot, b].

It can be used to write a simple expression for the reduced density matrix ρ(t) of the quantum dot at time t, provided that the density matrix at time t₀ can be factorized into an arbitrary dot part ρ(t₀) and a product ρ_res = Π_α ρ_α of grandcanonical density matrices for the reservoirs:

ρ(t) = Tr_res e^{−iL_tot(t−t₀)} ρ(t₀) ρ_res.   (36)

In this equation, Tr_res denotes the trace over the reservoir degrees of freedom only. Together with the trace over the quantum dot degrees of freedom, denoted by Tr, it yields the total trace Tr_tot = Tr Tr_res.
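As a concrete illustration of the superoperator concept (not part of the original formalism, and using an assumed toy Hamiltonian), the commutator action of L_tot becomes an ordinary matrix after vectorizing the operator it acts on:

```python
import numpy as np

H = np.array([[0.5, 0.2],
              [0.2, -0.5]])            # toy dot Hamiltonian (assumed)
I = np.eye(2)

# With row-major vectorization, L b = [H, b] becomes the matrix
# L = H (x) 1 - 1 (x) H^T  (Kronecker products).
L = np.kron(H, I) - np.kron(I, H.T)

b = np.array([[0.0, 1.0],
              [0.0, 0.0]])
lhs = (L @ b.flatten()).reshape(2, 2)  # superoperator acting on vec(b)
rhs = H @ b - b @ H                    # direct commutator [H, b]
print(np.allclose(lhs, rhs))           # True
```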
In the same way as L_tot, a current superoperator L_γ can be defined, such that the current at time t is then given by

I_γ(t) = Tr Tr_res L_γ e^{−iL_tot(t−t₀)} ρ(t₀) ρ_res.   (37)

In the following, it will be convenient to use the Fourier transforms

ρ(E) = ∫_{t₀}^{∞} dt e^{iE(t−t₀)} ρ(t),  I_γ(E) = ∫_{t₀}^{∞} dt e^{iE(t−t₀)} I_γ(t)

(note that all functions are only defined for t > t₀, such that the Fourier transform is identical to the Laplace transform, where −iE denotes the Laplace variable; our definition is similar to the definition of the Fourier transform of retarded response functions, such that all nonanalytic features occur in the lower half of the complex plane). These can be used to calculate the stationary density matrix ρ_st and the stationary current I^γ_st:

ρ_st = −i lim_{E→i0⁺} E ρ(E),   (38)
I^γ_st = −i lim_{E→i0⁺} E I_γ(E).   (39)

B. Description in terms of the effective Liouvillian of the system

Following the procedure described in Ref. 33, the Liouvillian L_tot = [H_tot, ·] can be split into three parts,

L_tot = L⁽⁰⁾ + L_res + L_V,

where each of these corresponds to one of the terms in the Hamiltonian H_tot = H + H_res + V. Using the bare quantum dot superoperator G⁽⁰⁾ᵖᵖ'_{11'}, called the vertex, and the lead superoperator Jᵖ₁, which are defined by their action on an arbitrary operator b, the coupling superoperator L_V can be written in normal-ordered form, bilinear in the lead superoperators and weighted by the vertex. In principle, it would be possible to also include the Keldysh indices p and p' in the multiindices 1 and 1'. However, it will be shown later that only the sum of the vertex over the Keldysh indices, i.e.,

Ḡ⁽⁰⁾_{11'} = Σ_{pp'} G⁽⁰⁾ᵖᵖ'_{11'},

remains in the final RG equations (in renormalized form). Therefore, it is more convenient to treat the Keldysh indices separately for the time being.
In analogy to the vertex G⁽⁰⁾ᵖᵖ'_{11'}, a current vertex I^{γ(0)pp'}_{11'} can be defined; it enables us to find the representation of L_γ [cf. Eq. (44)]. We expand Eqs. (36) and (37) in L_V, perform the trace Tr_res over the reservoir degrees of freedom, apply Wick's theorem w.r.t. the reservoir degrees of freedom, and define the irreducible kernel Σ(E), which is the sum of all diagrams that are connected by reservoir contractions (see Ref. 33 for details). A contraction between two vertices corresponds to the term

γᵖᵖ'_{11'} = δ_{1 1̄'} p' f(p' ω̄),   (49)

where ω̄ is a shorthand notation for ω̄ = ηω, and

f(ω) = 1/(e^{ω/T} + 1)   (50)

denotes the Fermi function at temperature T. Analogously, the irreducible current kernel Σ_γ(E) is the sum of all connected diagrams where the first vertex G⁽⁰⁾ is replaced by a current vertex I^{γ(0)}. We define an effective Liouvillian L(E) of the system, which contains all effects that are due to the coupling to the reservoirs, by

L(E) = L⁽⁰⁾ + Σ(E).   (51)

This permits us to rewrite the reduced density matrix (36) and the current (37) in a form where the reservoirs do not appear explicitly:

ρ(E) = [i/(E − L(E))] ρ(t₀),   (52)
I_γ(E) = Tr Σ_γ(E) [i/(E − L(E))] ρ(t₀).   (53)

Defining the Liouvillian and the current kernel in time space by the inverse of the Fourier transform, we can write (52) and (53) in time space as

dρ(t)/dt = −i ∫_{t₀}^{t} dt' L(t − t') ρ(t'),   (54)
I_γ(t) = Tr ∫_{t₀}^{t} dt' Σ_γ(t − t') ρ(t').   (55)

The formal analogy of Eq. (54) to the von Neumann equation demonstrates most clearly that L(t) is an effective Liouvillian containing memory effects. Since L(t), Σ_γ(t) ∼ θ(t) are retarded response functions, i.e., only defined for positive times t > 0, L(E) and Σ_γ(E) are analytic functions in the upper half of the complex plane and the usual Kramers-Kronig relations hold. Applying the inverse Fourier transform to (52) and (53), we obtain an explicit formula for the time evolution for t > t₀,

ρ(t) = (i/2π) ∫ dE e^{−iE(t−t₀)} [1/(E − L(E))] ρ(t₀),   (56)
I_γ(t) = (i/2π) ∫ dE e^{−iE(t−t₀)} Tr Σ_γ(E) [1/(E − L(E))] ρ(t₀),   (57)

where the contour of integration is slightly above the real axis to ensure convergence. Closing the integration contour in the lower half of the complex plane, we see that the individual terms of the time evolution follow from enclosing the poles and branch cuts of the resolvent 1/[E − L(E)] and the current kernel Σ_γ(E). The stationary solution follows from Eqs. (38) and (39):

L(i0⁺) ρ_st = 0,   (58)
I^γ_st = Tr Σ_γ(i0⁺) ρ_st.   (59)

The remaining challenge is to find a way to calculate the irreducible kernels Σ(E) [or, equivalently, L(E)] and Σ_γ(E).
IV. RG FORMALISM
In this section, we will discuss how the effective Liouvillian L(E) of the system can be evaluated using a real-time renormalization group (RTRG) approach. In contrast to the approach presented in Ref. 33, where a cutoff was defined by cutting off the Matsubara poles of the Fermi functions, we use an alternative flow scheme, proposed in Ref. 35, in which the Fourier variable E itself serves as the flow parameter. This new approach is called the E-flow scheme in the following.
Here, we derive the E-flow RG equations for the case where only fermionic two-point vertices G^{p₁p₂}_{12} are present in the bare perturbation theory and describe either spin, orbital, or potential fluctuations. Moreover, we assume that the bare vertices do not depend on the frequency variables ω̄ᵢ = ηᵢωᵢ.
As already summarized in Sec. I C, the E-flow scheme is technically very different from the Matsubara RG scheme described in Ref. 33. Therefore, the detailed description of the E-flow scheme in this section does not rely on the Matsubara scheme, and we will only use the diagrammatic rules of the perturbative expansion in terms of the bare vertices as a starting point, as described in Ref. 33.
Before entering the technical details on how to determine RG equations within the E-flow scheme from the specific diagrammatic rules, we will first motivate what the idea of the E-flow scheme of RTRG is and why it is the most appropriate choice for the determination of the time evolution.
A. The idea of the E-flow scheme of RTRG

For small couplings between the quantum dot and the reservoirs, the most obvious choice to calculate the effective Liouvillian L(E) is to use a perturbative expansion in terms of the bare vertices G⁽⁰⁾ᵖᵖ'_{11'}. These vertices are dimensionless and we denote their order of magnitude by O(G). The expansion of L(E) can then formally be written as

L(E) = L⁽⁰⁾ + Σ_{n≥2} L⁽ⁿ⁾(E),   (60)

where L⁽ⁿ⁾(E) ∼ O(Gⁿ) denotes the Liouvillian in order n, L⁽⁰⁾ = L is the bare Liouvillian, and the term with n = 1 is missing due to normal-ordering. For the current kernel Σ_γ(E) an analogous expansion holds, but there the lowest order term n = 0 is absent as well. The problem with the series (60) is that, for n ≥ 2, the internal frequency integrations can be logarithmically divergent at large energies for D → ∞. In order n, the divergencies occur in the form ln^k[D/max{|E|, Δ}], where Δ = T, V, ... is some physical energy scale (except E), and k ≤ n − 1. From the perturbative series it can be seen that the Fourier variable E always occurs in linear combination with the internal frequencies in the form E + ω̄ᵢ, i.e., the imaginary part of the Fourier variable always acts as a high-energy cutoff. Thus, for |E| larger than any other physical energy scale, all logarithmic divergencies occur in the form ln^k[D/(−iE)]. By convention, we have chosen −iE in the argument of the logarithm such that, for the natural choice of the logarithm, all branch cuts point in the direction of the negative imaginary axis. This leads to exponentially decaying integrands for the integrals around the branch cuts needed to obtain the time evolution from Eq. (56). Concerning the precise position of the branching points, it can be shown 46,47 that they are given by the non-zero poles zᵢ^± = ±Ωᵢ − iΓᵢ of the resolvent 1/[E − L(E)] in the lower half of the complex plane (Ωᵢ, Γᵢ > 0), shifted by multiples of the voltage, i.e., generically the branching points appear at zᵢ^± + mV with some integer m. In Fig. 1, we show the position of the poles and the branch cuts of the resolvent 1/[E − L(E)] for the specific example of the isotropic Kondo model considered in this paper, where the non-zero pole of the resolvent is given by −iΓ* with the spin relaxation rate Γ*. At finite temperature, it turns out that all branch cuts are replaced by an infinite number of poles separated by the Matsubara frequencies.

Figure 1: Position of the poles and branch cuts of the resolvent 1/[E − L(E)] for the isotropic Kondo model at zero temperature and finite voltage V. There are two pole positions, at E = 0 and E = −iΓ*, together with branch cuts of L(E) starting at E = −iΓ* + nV with some integer n.
In order to get rid of the divergencies, the idea of the E-flow scheme of RTRG is not to consider an expansion of the effective Liouvillian L(E) in terms of the bare vertices G⁽⁰⁾ᵖᵖ'_{11'} but to consider an expansion of the second derivative ∂²L(E)/∂E² of the effective Liouvillian in terms of effective vertices Gᵖᵖ'_{11'}(E_X). The latter quantities are defined as the sum over all connected diagrams with two outgoing reservoir lines. The quantities X ≡ 12...n, containing all possible sets of indices, determine a shift of the Fourier variable by linear combinations of the chemical potentials of the leads via E_{12...n} = E + Σᵢ₌₁ⁿ ηᵢ μ_{αᵢ}. As shown below, this expansion can be achieved by a unique resummation of certain subclasses of diagrams, which has also the effect that only the full effective Liouvillian L(E_X) occurs in this series. Most importantly, we will show that if the second derivative is taken for the effective Liouvillian, the resulting series no longer contains any logarithmic divergence at high energies, such that we can take the limit D → ∞ to calculate the frequency integrals of any diagram in any order of the effective vertices. The same can be shown to hold for the first derivative of the effective vertices, such that the RG equations within the E-flow scheme can be symbolically written as

∂²L(E)/∂E² = F_L[L, G],   (61)
∂Gᵖᵖ'_{11'}(E)/∂E = F_G[L, G],   (62)

where F_{L/G} denote some functionals which have to be determined from the diagrammatic rules, see the next section. We note that the RG equations involve only the two-point vertex Gᵖᵖ'_{11'}(E_X), whereas in Ref. 35 a set of coupled RG equations for all n-point vertices occurs. Moreover, as we will see in the next section, it can be shown that the right-hand side of the RG equations can be rewritten as a well-defined power series in terms of the frequency-independent two-point vertex only (i.e., the index 1 no longer involves the frequency). This simplifies the analysis of the RG equations in higher order truncation schemes beyond third order.
Similar RG equations can be set up to calculate the effective current kernel and vertex. Since the limit D → ∞ can be taken, these RG equations are universal, i.e., independent of the specific choice of the high-energy cutoff function (20). If this limit is taken, the RG equations are only valid in the regime |E| ≪ D, and a corresponding initial condition has to be set up in this regime. In principle, it is also possible to include the high-energy cutoff function on the right-hand side of the RG equations (61) and (62), such that the RG equations are valid for all values of E in the complex plane and can include a specific microscopic choice to describe the physics at high energies. In this case, the initial conditions of the RG equations at E = iΛ₀ with Λ₀ ≫ D are just given by the bare values of the Liouvillian and the vertices. However, the advantage of the E-flow scheme is that the scaling limit can be built in directly, such that the limit D → ∞ can be performed from the very beginning before solving the RG equations, and only the universal part of the solution is obtained. To achieve this, we need to find an appropriate initial condition for Λ₀ ≪ D. In this regime, and neglecting terms of O(1/D), the bare perturbation series contains the band width D only within the logarithmic terms ln^k[D/(−iE)]. All these logarithmic terms are generated by the universal RG equations if E = iD is used as initial value. Therefore, in order to set up universal initial conditions, we set E = iD in the bare perturbation series after having neglected all terms of O(|E|/D), in order to remove all logarithmic and nonuniversal terms in the initial condition. Furthermore, D is taken much larger than all other physical energy scales to avoid nonuniversal terms of O(Δ/D). In addition, we consider only those lowest order terms of the perturbative expansion which are universal, i.e., independent of the choice of the high-energy cutoff function. If the lowest-order term is non-universal, we take zero for the initial condition. As a consequence of this procedure, only the universal part of the solution is picked out, and the band width D enters only as the initial value E = iD of the Fourier variable but does not appear explicitly in the initial value of the Liouvillian or the vertices. Together with the initial values of the vertices, the band width D will finally enter into some characteristic non-universal low-energy scale T_K of the problem, like the Kondo temperature for the Kondo problem. Once this scale is defined, the scaling limit (8) is defined such that this scale stays constant in the limit where the band width D → ∞ and the bare couplings are sent to zero (such that only the lowest order terms of the perturbative expansion dominate the initial condition).
The prescribed way to determine the initial condition at E = iD works very well for all dimensionless quantities, like the vertices and the first derivative ∂L(E)/∂E of the effective Liouvillian. However, for the effective Liouvillian L(E) itself a problem occurs, since it contains terms which are proportional to E. As proposed in Ref. 46, the effective Liouvillian can be decomposed into two terms,

L(E) = L_Δ(E) + E L'(E),   (63)

where L_Δ(E) ∼ Δ contains all terms proportional to some physical scale except E, and E L'(E) contains all terms proportional to E. The quantities L_Δ(E) and L'(E) are slowly varying logarithmic functions of E, and the above procedure to determine the initial condition at E = iD can be applied to them. However, setting E = iD in (63) leads to a term iD L'(iD) proportional to D itself. Furthermore, it turns out that the coefficient in front of this term is non-universal for the Kondo problem. Neglecting this term in the initial condition would lead to a large error, since the term diverges linearly in D. Therefore, we have to find a different way to set up the initial condition for L(E). One way is to set up directly RG equations for the quantities L_Δ(E) and L'(E), as proposed in Ref. 46, which can be used very effectively for a generic weak-coupling solution of the RG equations. 48 However, the decomposition (63) is not unique, and some ambiguity is left when describing problems in strong coupling. Therefore, for the Kondo problem, we choose here a different strategy: we first solve the RG equations with all other physical scales Δ = T, V, ... set to zero, starting the RG flow at E = 0. This point corresponds to the stationary case, and it is known exactly that the conductance is unitary at this point. This boundary condition is used as an input to fix the unknown initial condition of the Liouvillian. The RG flow is then first solved for Δ = 0 starting from E = 0 up to E = iD, and the result is used as initial condition for the RG flow at finite Δ ≪ D.
Since the RG equations involve the Liouvillian and the two-point vertex at the shifted variables E_X = E + μ̄_{1...n}, an initial condition is needed for all these values. In Ref. 35, the same initial condition has been taken at all these points but, for the Kondo model, it turns out that for this choice the solution of the RG equations in the low-energy regime |E| ≪ T_K is not independent of the initial value E = iD, even if D differs by many orders of magnitude from the physical scales ∆ ∼ T_K, T, V. The problem is that there is an instability of the low-energy solution against exponentially small changes (of the order of T_K) of the Liouvillian at high energies. Therefore, the relative difference between L(E) and L(E + nV), with n ≠ 0, is important and cannot be neglected for large E at fixed voltage V. In this paper, we will solve this problem by solving the RG equations at T = V = 0 from E = 0 up to E = iD and, subsequently, from E = iD to E = iD + nV, providing different initial conditions for all quantities at the shifted variables. Using this procedure, one finds that the scaling limit is achieved already for values of the exchange coupling of the order of J_0 ∼ 0.04, i.e., by using Eq. (2), for D/T_K ∼ 10^6.

The E-flow scheme is a new concept in RG methods, since it uses a complex flow parameter. This allows the solution of the RG equations along an arbitrary path in the complex plane, and all effective quantities can be analytically continued from the upper to the lower half of the complex plane. Only if a branching point is encircled does the solution not return to the same value. Thus, even numerically one can determine the precise position of all branching points and can fix the shape of the branch cuts in a convenient way. To calculate the time evolution, it is not necessary to calculate the integrals in Eqs. (56) and (57) along the real axis, which is numerically not very convenient due to strongly oscillating integrands. Choosing the shape of the branch cuts along the negative imaginary axis starting from a branching point/pole of the resolvent 1/[E − L(E)] at position z_B = z_i^σ + mV, one can close the integration contour of (56) and (57) in the lower half of the complex plane and can address each individual term of the time evolution separately by calculating the integration around each individual branch cut. This requires the knowledge of the effective Liouvillian for z = z_B − ix ± 0^+, with x > 0, which can be determined by solving the RG equations along the path starting at Λ ∼ D down to Λ = −∞. Using (56) for ρ(t), the branch cut integral leads to a term of the form e^{−i z_B t} F(t) for the time evolution, where the position of the branching point/pole determines the exponential and the pre-exponential function F(t) is given by the integral around the corresponding branch cut. A similar equation holds for the time evolution of the current by using Eq. (57). Due to the exponentially decaying integrand, the long-time behavior of F(t) can be determined by analyzing the scaling behavior of the Liouvillian close to the branching point z_B. 46 Since each term of the time evolution has a different oscillation frequency and a different decay rate due to the different positions of the branching points, it is very hard to distinguish the different terms if a method is used which can only calculate the sum of all terms. Thus, the E-flow scheme is a very natural and effective scheme for a systematic determination of the time dynamics for problems in dissipative quantum mechanics.
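Schematically, the resulting representation of the time evolution has the structure (a sketch of the structure only; the full expressions follow from the branch-cut integrals of Eqs. (56) and (57))

ρ(t) ≃ ρ_st + Σ_{z_B} e^{−i z_B t} F_{z_B}(t),

where −Im z_B > 0 is the decay rate and Re z_B the oscillation frequency of the term associated with the branching point/pole z_B, and each pre-exponential function F_{z_B}(t) is obtained from the integration around the corresponding branch cut.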
Within the E-flow scheme, also the notion of fixed points of the RG flow has to be generalized. In conventional RG methods, the flow parameter is a real cutoff Λ, and the fixed points are defined as those points where the RG flow of all quantities stops for Λ → 0. Within the E-flow scheme, there is no unique path for the flow parameter. For each given set of initial conditions, there is a certain set of branching points z_B in the lower half of the complex plane where the RG flow stops. Thus, if the RG flow is solved along the path z_B + iΛ, the fixed point is defined as the value of all quantities obtained for Λ → 0. This means that the fixed point itself is associated with z_B, such that z_B can equivalently be called a fixed point. If the initial conditions are changed, then the positions of the branching points can change as well, i.e., it makes no sense in general to associate several fixed points with a single branching point. Therefore, in the following, we will denote the branching points as fixed points of the RG. As already mentioned above, the scaling behavior around these fixed points determines the long-time behavior of the pre-exponential functions of the time evolution.
The stationary solution requires only the knowledge of the effective Liouvillian (or the effective current kernel) close to E = 0, see Eqs. (58) and (59). Around this point the effective Liouvillian is analytic for the isotropic 1-channel Kondo model, where the branching points are located at −iΓ* + nV. Therefore, in contrast to other RG methods, the RG flow is still sufficiently far away from the fixed points, and the expansion in E, T, or V is analytic around this point. This is the reason why, even for the strong-coupling case T, V ∼ T_K, there is some hope that the stationary conductance G(T, V) can be quite close to the exact value even if the RG equations are truncated perturbatively in the effective vertices. Although this truncation is not controlled in a strict mathematical sense, since the effective vertices are still of order 1/3 at E = 0 and T, V ∼ T_K, one can check the reliability of the method by comparing the results in second- and third-order truncation. Despite the fact that it cannot be anticipated whether the result will converge by increasing the truncation order, a nearly identical result in second and third order gives some hint that an asymptotic series may be present, leading to a very good result already in a low-order truncation. As we will see, this is indeed the case for the isotropic 1-channel Kondo model, where we can additionally check the quality of our results by comparing with the temperature dependence of the conductance at zero voltage obtained from numerically exact NRG calculations.
Close to the branching points z_B, the situation is very different. Here, the Liouvillian is non-analytic, and no finite-order truncation scheme will lead to a reliable result in the strong-coupling case, where the vertices are of O(1) close to the branching point. This can indeed be checked for the Kondo model at T = V = 0, where completely different results are obtained in second- and third-order truncation for the time evolution. 35 The same holds for quantum-critical models, like the 2-channel Kondo model, where the branching point starts at E = 0, such that even stationary quantities cannot be calculated by a perturbative truncation scheme. Only for weak-coupling problems, where the effective vertices stay small close to the branching points, is a truncation in finite order controlled. For such models, the E-flow scheme is a useful method to resum systematically all powers of logarithmic terms g ln(T_K t) in leading or sub-leading order to calculate the long-time behavior, where g is some small dimensionless coupling constant. This has been demonstrated recently for the Ohmic spin-boson model, 46 where different power-law exponents have been obtained for the time evolution compared to previous results. To the best of our knowledge, this is not possible within any other RG method at the moment.
As already noted in Ref. 46, the values of the effective vertices close to the branching points can be of order O(1) even if they are small at the stationary point E = 0. This is the case, e.g., for the 1-channel Kondo model at bias voltages large compared to the Kondo temperature. Thus, weak-coupling problems for stationary quantities can turn into strong-coupling ones concerning the long-time evolution. The physical reason is that the long-time dynamics is not cut off by any decay rate Γ_i, since the cutoff parameter of the RG flow determining the term of the time evolution associated with the branching point z_B is given by the linear combination |E − z_B| ∼ 1/t, which tends to zero for t → ∞.
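As an illustration, assuming the leading-order form J(E) ≈ 1/(2 ln(iE/T_K)) quoted around Eq. (358) in Sec. V G, the coupling controlling the branch-cut contribution at time t is

J(|E − z_B| ∼ 1/t) ≈ 1/(2 ln[1/(T_K t)]),

which is small for t ≪ 1/T_K but grows without bound as t approaches 1/T_K, even when the coupling at the stationary point E = 0 is cut off by a large voltage and remains small.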
B. RG equations for the effective Liouvillian and the effective vertex
We will now derive the basic RG equations (61) and (62) for the effective Liouvillian and the effective vertices. We start from the diagrammatic representation of the effective Liouvillian in terms of the bare vertices as derived in Ref. 33. Each diagram consists of a sequence of vertices connected by dot propagators, denoted by the resolvents R_X(E) = 1/[E_X + ω̄_X − L(E_X + ω̄_X)], where we have already resummed all self-energy insertions to obtain the full effective dot propagator. The reservoir field operators of the vertices are connected by reservoir contractions γ ≡ γ^{pp′}_{11′}(ω, ω′) as defined in Eq. (48). The diagrammatic series can then be written as in Eq. (68). Here, L^{(0)} and G^{(0)} are the bare Liouvillian and the bare vertices as defined in Eqs. (41) and (42). The resolvents, with E_X = E + μ̄_X, are determined by the set of indices X associated with the reservoir lines crossing over the corresponding resolvent, where each index is taken from the vertex connected to this line and standing left to the resolvent. N_p is the number of crossings of reservoir lines, and S = Π_k m_k! is a symmetry factor arising if two vertices are connected by m_k reservoir lines. (Π γ)_irr denotes the product over all reservoir contractions, where the subindex "irr" means that only connected diagrams without any self-energy insertions are allowed. Using Eq. (48), we see that only pairs of indices (1, 1′) can be connected by reservoir lines. Thus, the lowest-order diagram for the effective Liouvillian is the one given in Eq. (72), where Ḡ denotes the bare vertex averaged over the Keldysh indices. Implicitly, we always sum over all indices and integrate over all frequencies ω_i.
Similar to the effective Liouvillian, one can also set up a diagrammatic series for the effective vertex G^{p_1...p_n}_{1...n}(E), which is defined by the sum of all connected diagrams with n free reservoir lines with indices 1 . . . n and Keldysh indices p_1 . . . p_n. In the following, we will call these objects n-point vertices. Since we consider here only bare vertices with n = 2, the effective vertices must have an even number of external lines. The diagrammatic series for G^{p_1...p_n}_{1...n}(E) is exactly the same as for the effective Liouvillian L(E) (which can be considered as a zero-point vertex), with the following additional rules: (i) By convention, all reservoir lines are directed to the right, and the corresponding frequencies and chemical potentials of the external lines have to be included in the resolvents.
(ii) If the sequence of the external indices from left to right is given by P_1 . . . P_n, where P is any permutation of 1 . . . n, the diagram gets a factor (−1)^P, i.e., a minus sign for an odd permutation. This minus sign accounts correctly for the minus sign from crossings of external lines if an effective vertex is used in a certain diagram instead of a bare vertex.
(iii) If the external lines are associated with different vertices, one has to sum over all permutations of the external lines. If two external lines are associated with the same vertex, only one sequence of the indices has to be considered.
(iv) The external vertices are normal-ordered, i.e., if an effective vertex is used instead of a bare one in a certain diagram it is not allowed to connect the effective vertex with itself.
These rules give, e.g., the second-order diagram (73) for the effective two-point vertex. Note that we integrate only over the internal frequency variable ω̄_3, but not over the external ones, ω̄_{1/2}. If we want to exhibit the frequency dependence of the effective vertices explicitly, we will also use the representation G^{p_1 p_2}_{12}(E; ω̄_1, ω̄_2), where on the right-hand side the indices i ≡ η_i α_i σ_i no longer contain the frequency variable. Furthermore, when omitting the Keldysh indices, we define by G_{1...n}(E) the n-point vertex averaged over the Keldysh indices.
Once the effective vertices are defined, one can use them instead of bare ones in the diagrammatic series by resumming subclasses of connected diagrams. According to rule (i), within a certain diagram the energy argument of the effective vertex has to be chosen identical to the one of the resolvent standing left to this vertex, i.e., combinations of the form R_X(E) G(E_X + ω̄_X) will occur. If the first vertex from the left in a diagram is replaced by an effective one, it has the energy argument E. For example, replacing both vertices in Eq. (72) by effective ones, we obtain the expression (76). However, we note that, because of double counting, it is not possible to replace all bare vertices by effective ones in the diagrammatic series while omitting certain diagrams. For example, when inserting Eq. (73) for the two two-point vertices into Eq. (76), we find a double counting of third-order diagrams for the effective Liouvillian. The same happens for the diagram (73) of the two-point vertex when we replace the two bare vertices by effective ones. Only for the n-point vertices with n > 2 does a straightforward inspection show that all diagrams can be resummed in a unique way such that only two-point vertices remain. Furthermore, as will be explained below, after this resummation all internal frequency integrations are well-defined in the limit D → ∞. This is the reason why we need a reformulation of the diagrammatic series in terms of the RG equations (61) and (62) only for the effective Liouvillian and the two-point vertices.
Using similar proofs as in Ref. 33, one can show that the effective vertices fulfill, for fermions and n even (the case which we consider here), the symmetry properties (77) and (78), where the c-transformation is defined in matrix notation. The property involving the c-transformation guarantees that the reduced density matrix ρ(t) given by Eq. (56) is Hermitian, which is related to the Hermiticity of the original Hamiltonian. 33 The property Tr L(E) = 0 guarantees conservation of probability, Tr ρ(t) = Tr ρ(t_0).
Furthermore, we note that all n-point vertices are analytic functions in the upper half of the complex plane w.r.t. the Fourier variable E and the external frequencies ω̄_i. This follows from the fact that these variables occur only in the argument of the resolvents standing between the bare vertices, in the form R(E + ω̄_i + . . . ), together with the property that the resolvent is an analytic function in the upper half of the complex plane. Here, we have assumed that the bare vertices are frequency independent. If an effective vertex is used instead of a bare one in a diagram, it appears in the form of Eq. (80) (after integrating out all δ-functions between the internal frequencies of connected vertices), where the indices 1, . . . , m and m + 1, . . . , n belong to the contractions which point to the left or to the right direction, respectively. Using the diagrammatic series for Eq. (80), we again see that this quantity is analytic w.r.t. E and all ω̄_i. Therefore, even if effective vertices are taken instead of bare ones, the internal frequency integrations can be closed in the upper half of the complex plane, and only the nonanalytic features of the functions γ^{p′}(ω) defined in Eq. (49) have to be considered; these are the Matsubara poles of the Fermi functions and the pole iD of the high-energy cutoff function D(ω). This is very helpful for practical calculations. In particular, as we will show in the following by using a proper reformulation of the diagrammatic series in terms of RG equations and effective vertices, it will turn out that the limit D → ∞ can be performed and only the Matsubara poles of the Fermi functions remain. In this case, it is useful to split the Fermi function into symmetric and antisymmetric parts via f(ω) = 1/2 + f_a(ω), with f_a(ω) = f(ω) − 1/2 = −(1/2) tanh(ω/2T). When inserted in Eq. (49), this leads to the decomposition γ^{p′}(ω) = p′ γ_s(ω) + γ_a(ω) of the reservoir contraction. Thus, for D → ∞, the internal frequency integration can be written as a sum over the Matsubara poles of the antisymmetric part of the Fermi functions on the positive imaginary axis, where ω_n = (2n + 1)πT denote the fermionic Matsubara frequencies.
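As a concrete illustration (signs depend on the conventions of Eq. (49)): f_a(ω) = −(1/2) tanh(ω/2T) has simple poles at ω = iω_n with residue −T, so for a function h(ω) which is analytic in the upper half of the complex plane and decays there, closing the contour upwards gives

∫ dω f_a(ω) h(ω) = −2πi T Σ_{n≥0} h(iω_n),

which for T → 0 turns into the integral −i ∫_0^∞ dx h(ix) along the positive imaginary axis.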
We will return to this point at the end of this section when analyzing the analytic structure of the RG equations. We next turn to the central question as to how the limit D → ∞ can be performed. The convergence of the frequency integrals at high energies can easily be checked by counting the number of integrations and resolvents. For example, for the diagram (72) of the effective Liouvillian we have two frequency integrations, and thus we need three resolvents for convergence. However, since there is only one resolvent present, we see that we need at least two derivatives w.r.t. E to obtain a convergent integral. For the diagram (73) of the two-point vertex we have one frequency integral, so we need two resolvents for convergence. Since there is only one resolvent present, we need one derivative w.r.t. E for convergence. This is the reason why we consider a perturbative expansion for ∂²L(E)/∂E² and ∂G^{p_1 p_2}_{12}(E)/∂E to perform the limit D → ∞. Using the diagrammatic representation (68), the E-derivative can only act on the resolvents R_X(E) occurring between the bare vertices. If a specific resolvent of a certain diagram is chosen for the derivative, one can classify the diagrams by the number of contractions running over this resolvent, i.e., contractions which connect vertices standing left to the resolvent with vertices standing right to it. Cutting all these contractions virtually, the diagram splits into two parts, and one can subsequently resum all connected diagrams to the left and to the right of the resolvent. This resummation leads to effective vertices such that no contraction is left which connects effective vertices standing both either left or right of the resolvent. As a result, we obtain diagrams which contain only the effective vertices instead of bare ones. Moreover, only diagrams are allowed where all internal contractions connect effective vertices which are on different sides of the resolvent where the derivative is taken. This leads to Eq. (86) for the second derivative of the effective Liouvillian and Eq. (87) for the first derivative of the two-point vertex, up to O(G³). In these diagrams, the slash indicates the E-derivative ∂/∂E and a double circle represents the full effective two-point vertex (a convention which we will use in all following diagrams). Symmetry factors 1/n!, arising either from the factor S in Eq. (68) or from the E-derivatives (1/n!)∂ⁿ/∂Eⁿ, are explicitly quoted for convenience. Most importantly, even if one neglects the frequency dependence of the effective vertices, all frequency integrations in these equations converge in the infinite band width limit D → ∞. This is the reason why the frequency dependence of the vertices can be treated systematically in a perturbative way, as will be shown in the following sections.
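The power counting behind these statements can be summarized as follows: each resolvent decays like 1/ω̄ at large frequencies, so a term with n_ω frequency integrations and n_R resolvents converges for D → ∞ only if n_R ≥ n_ω + 1, and each derivative ∂/∂E raises the power of a resolvent by one, e.g.

(∂²/∂E²) 1/(E + ω̄_1 + ω̄_2 − L) ∼ 1/ω̄³,

which is why two derivatives render the Liouvillian diagram (72) convergent and one derivative suffices for the vertex diagram (73).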
In O(G⁴), diagrams involving the four-point vertex can occur for the derivative of G^{p_1 p_2}_{12}(E), like, e.g., the one shown in Eq. (88). Neglecting the frequency dependence of the two effective vertices, the frequency integrations do not converge, since two resolvents and two integrations are present. However, all n-point vertices with n > 2 can be expressed in terms of two-point vertices such that all frequency integrations are convergent. For this reason, these vertices are called irrelevant, which means that they can be treated perturbatively. For example, the lowest-order terms of O(G³) for the four-point vertex are given by Eq. (89), involving a sign factor (−1)^P, where (P_1, P_2, P_3, P_4) denotes a permutation of (1, 2, 3, 4). Obviously, the frequency integration converges for D → ∞, and we can insert this expression for the four-point vertex into Eq. (88). This leads to the terms (90) for the derivative of G^{p_1 p_2}_{12}(E), where the frequency integrations are all convergent for D → ∞. Thus, we see that the frequency dependence of the four-point vertex is very important for the convergence of the frequency integrations in Eq. (88). The reason is that the frequency dependence of the four-point vertex is not logarithmic but, as can be seen from Eq. (89), behaves as 1/ω̄_i for large frequencies. Therefore, instead of writing complicated coupled RG equations for all higher-order effective vertices, it is more convenient to directly resum the diagrams for ∂G^{p_1 p_2}_{12}(E)/∂E, such that only two-point vertices occur left and right of the resolvent where the derivative is taken. A straightforward inspection shows that this leads directly to the diagrams of Eq. (90) in O(G⁴); in all orders, there is no problem with double counting, and all frequency integrations are convergent for D → ∞. A similar procedure can be used for ∂²L(E)/∂E² to obtain a series for the second derivative of the Liouvillian in terms of the two-point vertex with convergent frequency integrations in all orders. As a result, we obtain the RG equations (61) and (62) for the effective Liouvillian and the two-point vertex.
We note that the limit D → ∞ can only be performed if all bare quantities are replaced by effective ones and the derivative w.r.t. the Fourier variable E is taken. The effective Liouvillian and the two-point vertices contain the band width D implicitly via the initial value E = iD (see Section IV F for the determination of the initial conditions). Therefore, concerning these quantities, the infinite band width limit has to be taken in the sense of the scaling limit, i.e., the bare coupling constants are simultaneously sent to zero such that a characteristic low-energy scale T_K^* remains constant. In this paper, we will restrict ourselves to a truncation scheme where all terms in O(G²) or O(G³) are considered on the right-hand side of the RG equations, which, in the following, will be called truncation in second and third order, respectively. Therefore, we will restrict ourselves to the RG equations (86) and (87) in the following. Nevertheless, if desired, the systematic construction of the RG equations makes it straightforward to go beyond third-order truncation schemes.
Since the limit D → ∞ has been performed, all internal frequency integrations for any diagram of the RG equations of the effective Liouvillian or the two-point vertex can be replaced by the sum over the Matsubara poles of the antisymmetric part of the Fermi function, see Eq. (84). In particular, this means that the symmetric part p′γ_s(ω) = p′/2 of the contraction γ^{p′}(ω) does not contribute in the limit D → ∞, and only the antisymmetric part γ_a(ω) = f(ω) − 1/2 remains. Since the latter is independent of the Keldysh indices, we find that we need to consider only the vertices G_{1...n}(E) averaged over the Keldysh indices. This is very helpful for practical calculations and an important advantage over other nonequilibrium formalisms, such as the Keldysh formalism, where the whole matrix structure in Keldysh space has to be taken into account. Although the symmetric part of the Fermi function does not enter the RG equations, we note that it is important for the determination of the initial condition (see Sec. IV F).
Following Ref. 33, another important consequence of the fact that only the vertices averaged over the Keldysh indices occur in the RG equations is that the effective Liouvillian occurring in the resolvents between the effective vertices cannot contain the zero eigenvalue. This follows from the properties of the projector P_0(E) onto the zero eigenvalue, as shown in Ref. 33. This allows for a general analysis of the analytical properties of the RG equations even before calculating the sum over the Matsubara frequencies. Replacing the real frequencies by positive Matsubara frequencies via ω̄_X → i Σ_{j∈X} ω_{n_j}, each resolvent R(E_X + i Σ_{j∈X} ω_{n_j}) occurring between the two-point vertices has a pole for E_X + i Σ_{j∈X} ω_{n_j} = z_i^σ, where z_i^± are the non-zero poles of the resolvent R(E). Since all ω_n are positive, there is an infinite set of poles at E = z_i^σ + nV − imπT for any two integers n, m with m > 0. For T → 0, the infinite set of poles turns into a branch cut with branching point z_i^± + nV pointing in the direction of the negative imaginary axis. Thus, with our choice of how to calculate the internal frequency integration, we have determined the shape of the nonanalyticities in the lower half of the complex plane or, equivalently, have chosen a specific way of how to analytically continue the effective vertices into the lower half of the complex plane. In the original form (68) of the diagrammatic series with integrations over the real axis, all branch cuts would appear on the real axis, which would be very inconvenient for an evaluation of the time evolution via inverse Fourier transform.
Although all internal frequency integrations can be written as sums over the Matsubara frequencies, such a procedure is not very convenient for an explicit solution of the RG equations, since a set of differential equations has to be solved numerically where the frequency dependence of the vertices is parametrized by an infinite set of Matsubara frequencies. Because of the huge number of variables, this is numerically very complicated. Therefore, in the following sections, we will show how we can set up systematic RG equations only for the frequency-independent vertices, where the indices i = η_i α_i σ_i will no longer contain the frequency variable from now on. Furthermore, we will show how the frequency dependence in the argument of the effective Liouvillian L(E_X + ω̄_X) occurring in the resolvents between the effective vertices can be systematically eliminated. The procedure consists of two steps. First, we will transform the E-derivative on the right-hand side of the RG equations (61) and (62) to frequency derivatives and use integration by parts to shift them to the derivative of the Fermi functions. Secondly, we will use a perturbative expansion for the frequency dependence of the two-point vertices and the effective Liouvillian.
C. Transforming the E-derivatives to frequency derivatives

Before using a parametrization of the frequency dependence of the vertices and the resolvents to calculate analytically the integrations over the internal frequencies for the various diagrams of the RG equation, it is useful to first replace the derivative w.r.t. E by a frequency derivative. This is possible since the resolvents R_{1...n} = R(E_{1...n} + ω̄_{1...n}) depend only on the sum of the Fourier variable and the frequencies ω̄_i. Therefore, the E-derivative can be written as a frequency derivative ∂/∂ω̄_i, and we can apply integration by parts to calculate the frequency integrals. For example, for the first term on the right-hand side of Eq. (87) we obtain Eq. (93) [where we have permuted the two indices of the vertices by using antisymmetry]. The cross indicates the frequency derivative with respect to the frequency ω̄ of either the corresponding reservoir contraction or the frequency argument of one of the vertices. The first transformation is exact and follows from integration by parts. For the derivation of the second line, we have used the relations (94) and (95), which follow analogously to Eq. (87) from the fact that, in the original diagrammatic series, a frequency associated with an external line occurs only in those resolvents which lie below the external line, but not in other resolvents (here we assume that the bare vertices are frequency independent). Note that these relations can only be applied if it is specified whether the external line involving the frequency derivative is directed towards the left or towards the right. Analogously, we can treat the first term on the right-hand side of Eq. (86) by two integrations by parts, using a corresponding identity. We obtain Eq. (97), where, in the second step, we have again used Eqs. (94) and (95).
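The elementary step behind this transformation is that the resolvent depends only on the sum E + ω̄, so that ∂R(E + ω̄)/∂E = ∂R(E + ω̄)/∂ω̄ and, schematically,

(∂/∂E) ∫ dω̄ γ(ω̄) R(E + ω̄) = ∫ dω̄ γ(ω̄) (∂/∂ω̄) R(E + ω̄) = − ∫ dω̄ [(∂/∂ω̄) γ(ω̄)] R(E + ω̄),

where the boundary terms vanish due to the decay of the integrand at large frequencies.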
Inserting Eqs. (97) and (93) into (86) and (87), respectively, we see that many diagrams in O(G³) cancel each other, which enables us to write the final third-order RG equations in the very compact and generic form of Eqs. (98) and (99).
D. Frequency dependence of the vertices
To evaluate the frequency integrals on the right-hand side of the RG equations (98) and (99) explicitly, we need a consistent approximation for the frequency dependence of the vertices and the Liouvillian. In contrast to n-point vertices with n > 2, this can be achieved for the two-point vertex, since the frequency dependence of G_{12}(E; ω̄_1, ω̄_2) is logarithmic. Therefore, we can use a perturbative expansion in terms of the frequency-independent vertex G_{12}(E). Although such an expansion will contain arbitrary powers of logarithmic terms in the frequencies, it does not lead to any divergence of the integrals when inserted into the RG diagrams. To expand G_{12}(E; ω̄_1, ω̄_2) in terms of the frequency-independent two-point vertex G_{12}(E), we start from the diagrammatic series in terms of the bare vertices and split each resolvent R_X(E) as in Eq. (100), where X = X_in ∪ X_ex consists of internal indices X_in and external ones X_ex. The first term is the resolvent with all external frequencies set to zero; the second term, ∆_{X_ex} R_X(E), falls off like (1/ω̄_{X_in})² w.r.t. all internal frequency variables ω̄_{X_in}. Inserting Eq. (100) for each resolvent, we obtain a sequence of resolvents R_X(E)|_{ω̄_{X_ex}=0} and ∆_{X_ex} R_X(E) between the bare vertices. Since R_X(E)|_{ω̄_{X_ex}=0} are the resolvents without the external frequencies, we can resum all diagrams between two subsequent ∆_{X_ex} R_X(E) in terms of the two-point vertices at zero external frequency, similar to the procedure described in the previous section. Up to O(G³), this gives Eq. (102). Here, the filled dots indicate that the corresponding frequency of the vertex is set to zero. A contraction with an open circle and index X′ indicates that the resolvent R_X(E) corresponding to the vertical cut at the position of that circle has to be replaced by ∆_{X′} R_X(E). If several contractions with open circles appear at the same position of a certain resolvent, X′ contains the set of all indices of these contractions. Since ∆_{X_ex} R_X(E) falls off like (1/ω̄_{X_in})², all frequency integrals are convergent in the limit D → ∞. We note that the left resolvent in the last diagram on the right-hand side of Eq. (102) involves the external frequency ω̄_1, since one can sum up the two diagrams where the external frequency does not occur and where ∆_1 R appears; this is a generic feature, as each such pair produces a valid diagram such that the two terms can be added up to the full resolvent R_X(E).
To get rid of the remaining frequency dependence of the vertices w.r.t. the internal frequency variables in Eq. (102), one can iterate this equation and obtains Eq. (103) up to O(G³). Here, the last two diagrams occur due to the internal frequency dependence of the two vertices of the second diagram on the right-hand side of Eq. (102). Note that in the last diagram we have to set ω̄_{12} = 0 for the right resolvent, since the right vertex of the second diagram on the right-hand side of Eq. (102) does not depend on the external frequencies. Proceeding in this way in all orders, we obtain a systematic perturbative expansion of the frequency-dependent two-point vertex in terms of the frequency-independent ones which is free of any divergence for D → ∞. We note again that, for large external frequencies |ω̄_i| ≫ |E|, this expansion involves arbitrary powers in ln|ω̄/E|, i.e., it is not a meaningful expansion to determine the high-frequency behavior of the vertex. However, setting ω̄_{1/2} = 0 in Eq. (99) and inserting Eq. (103) for the dependence of the two-point vertices on the internal frequencies in Eqs. (98) and (99), there is no divergence at high frequencies since, due to the presence of the resolvents and the derivatives of the Fermi functions, the integrand falls off either like (1/ω̄)² or exponentially w.r.t. all internal frequency variables, such that additional logarithmic powers do not change the convergence.
Equation (103) refers to the case where the two external lines are directed to the right; similar equations can be written for the other cases. We note that the sign factors for the terms on the right-hand side, where the indices 1 and 2 are interchanged, account explicitly for the crossing of the two external lines. Therefore, when using these relations in a certain diagram, the sign factor must not be written explicitly, since it is automatically accounted for in the diagrammatic rules.
Setting ω̄_{1/2} = 0 in Eq. (99) and inserting Eq. (103) into (98) and (99), we obtain the RG equations (104) and (105). At zero temperature, the evaluation of the RG equations is simplified considerably, because all diagrams in Eqs. (104) and (105) which contain a contraction with a circle and a cross vanish. The reason is that the cross indicates a contraction which is differentiated with respect to the frequency: at zero temperature and for D → ∞, the derivative of the Fermi function is a δ-function at zero frequency, such that the difference ∆R, evaluated at vanishing frequency shift, yields zero.
E. Frequency dependence of the propagator
Finally, to calculate the integrals over the internal frequency variables on the right-hand side of the RG equations (104) and (105), one needs a consistent approximation for the frequency dependence of the resolvent R_X(E) = 1/[E_X + ω̄_X − L(E_X + ω̄_X)] [Eq. (107)], where ω̄_X = Σ_{i∈X} ω̄_i contains the integration variables ω̄_i together with the frequencies of external lines. This requires a perturbative expansion of the difference L(E_X + ω̄_X) − L(E_X) in terms of the frequency-independent vertices. Treating this difference similarly to the two-point vertex as described in the previous section, we find that the frequency integrals do not converge in the limit D → ∞, analogous to the fact that, for infinitesimal differences, two derivatives w.r.t. E are needed to guarantee convergence (see Sec. IV B). Therefore, we need a convenient discrete version of the second derivative. We use the definition ∆²_ω̄ L(E) = L(E + ω̄) − L(E) − ω̄ ∂L(E)/∂E for the second variation at a finite shift ω̄, which, for ω̄ = δE → 0, reduces to (δE)²/2 times the usual second derivative. The second variation is of second order in the two-point vertex, since a Taylor expansion produces the terms (1/n!) ω̄ⁿ ∂ⁿL(E)/∂Eⁿ with n ≥ 2. For all these terms, we can use the procedure described in Section IV B by starting from the diagrammatic series in terms of the bare vertices, taking the derivatives of the resolvents, and resumming the diagrams in between to the full two-point vertices. This gives convergent terms in the limit D → ∞ and, for all n ≥ 2, the diagrams are of O(G²), since at least one resolvent is needed for the derivative. However, this procedure is not very practical, since all terms with n ≥ 2 contribute to the lowest order G². To resum all terms, we apply the same procedure separately for the difference L(E + ω̄) − L(E) and for ω̄ ∂L(E)/∂E, following Sections IV D and IV B, respectively. All terms for L(E + ω̄) − L(E) which contain more than one difference are at least of O(G³) and contain already convergent frequency integrals for D → ∞. For the other terms, with only one ∆_{ω̄} R_X(E), this is not the case, and here we need the difference to the corresponding term of ω̄ ∂L(E)/∂E, where the derivative of the resolvent is taken. This means that Eq. (110) is changed to a form containing the discrete version of the second derivative of the resolvent, as can be seen after some straightforward manipulations leading to Eq. (112), where L_X(E) = L(E_X + ω̄_X). According to Eq. (109), the last term is of O(G²), which gives at least O(G⁴) for ∆²_ω̄ L(E). Therefore, in order O(G²) we obtain for ∆²_ω̄ L(E) the expression (76), where R_{12}(E) has to be replaced by the first term of (112). All frequency integrations exist in the limit D → ∞, such that the symmetric part of the Fermi function does not contribute, and only the two-point vertex averaged over the Keldysh indices is needed. This gives the result (113) for the lowest-order contribution to the second variation, where, in lowest order, only the frequency-independent vertices enter. Using Eq. (109), we can now approximate the frequency dependence of the resolvent R_X(E) given by Eq. (107). We use L(E_X + ω̄_X) = L(E_X) + ∆_{ω̄_X} L(E_X), together with the corresponding relation for the resolvent, and expand the resolvent in ∆²_{ω̄_X} L(E_X) ∼ O(G²). This gives Eq. (115), with the quantities χ and Z defined in Eq. (116). The first term on the right-hand side of Eq. (115) is sufficient for the RG equations (104) and (105), since we neglect O(G⁴). This gives explicitly the RG equations (117) and (118), where we have introduced the definition (119). After inserting the spectral decomposition of the effective Liouvillian, all frequency integrals can be calculated analytically, which will be done later for the specific example of the Kondo model.
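To make the role of the discrete second variation concrete, the following minimal numerical sketch (all functions and numbers are hypothetical toy choices, not the actual Liouvillian) checks that ∆²_ω̄ f(E) = f(E + ω̄) − f(E) − ω̄ f′(E) starts at second order in the shift, which is the property used above to organize the expansion:

import numpy as np

# Toy scalar stand-in for the effective Liouvillian (hypothetical numbers; the
# real L(E) is a superoperator): a slowly varying logarithmic function of E,
# mimicking the behavior assumed for L_Delta(E) and L'(E) in the text.
def L(E):
    return -0.1j / np.log(1j * E / 0.01)

def dL(E, h=1e-6):
    # central finite difference for the E-derivative
    return (L(E + h) - L(E - h)) / (2.0 * h)

def second_variation(E, w):
    # discrete second variation: Delta^2_w L(E) = L(E + w) - L(E) - w L'(E)
    return L(E + w) - L(E) - w * dL(E)

E = 10.0j
for w in (1.0, 0.1, 0.01):
    print(f"w = {w:5.2f}   |Delta^2_w L| = {abs(second_variation(E, w)):.3e}")
# The printed values shrink by roughly a factor 100 when w shrinks by 10, i.e.
# Delta^2_w L = O(w^2), consistent with the Taylor argument given in the text.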
For completeness, we note that the RG equations can be systematically improved by going beyond O(G³). In this case, one also needs the second term on the right-hand side of Eq. (115), i.e., the second variation ∆²_ω̄ L(E) is needed up to O(G²) by using Eq. (113). To evaluate Eq. (113) up to O(G²), the first term on the right-hand side of Eq. (115) is sufficient to approximate the frequency dependence of the resolvents. In this way, one finds an explicit expression which, when used in (113), gives the final result for the second variation; this has to be used in Eq. (115) in order to calculate the frequency dependence of the resolvent up to O(G²), as needed for the evaluation of the RG equations up to O(G⁴).
F. Initial conditions
To determine the initial condition as described in Sec. IV A, we consider the lowest-order diagrams for the effective Liouvillian and the two-point vertex, as given by Eqs. (72) and (73). Taking the unrenormalized values for the Liouvillian and the vertices on the right-hand side of these equations, denoted by L^{(0)} and G^{(0)}, one obtains the perturbative result (128) for the effective Liouvillian. The last two terms of Eq. (128) are non-universal. We assume that the last term, linear in D, vanishes; otherwise, the frequency dependence of the unrenormalized vertices has to be taken into account, and the precise form of the high-energy cutoff function as defined by the original model containing charge fluctuations becomes important. This gives the condition (130) which, as we will show, is fulfilled for the Kondo problem considered in this work (this proof can be generalized to generic 2-level models, see Ref. 48). As a consequence, also in the first term on the right-hand side of Eq. (128), we can replace E_{12} → μ̄_{12}. In contrast, all other terms for the effective Liouvillian and the two-point vertex have universal coefficients, independent of the specific choice of the high-energy cutoff function. Note that the first term for the Liouvillian has a different algebra compared to the third one due to the additional factor of the Keldysh index p. Therefore, it can happen that the third term gives zero whereas the first one is finite, as is the case, e.g., for the current kernel of the Kondo model (see later). Omitting all non-universal terms together with the logarithmic ones (which are anyhow generated by the RG equations), and using the property (130), we take a universal form for the initial condition at E = iD. As already explained in Sec. IV A, we cannot use this result for the initial condition of the effective Liouvillian, since we have neglected non-universal terms proportional to E, which are very large. Therefore, when applying the formalism to the Kondo model, we will set up another universal boundary condition to fix the effective Liouvillian. For the two-point vertex, the term of O(G²) in the initial condition is only used for those matrix elements where the first term of O(G) vanishes.
Leaving out the nonuniversal and logarithmic terms, the initial Liouvillian is independent of E. Therefore, we take ∂L(E)/∂E = 0 at E = iD [Eq. (133)].

G. Current and differential conductance

To find the average of the current flowing into reservoir γ [Eq. (53)], one needs the current kernel Σ_γ(E) in Fourier space. As shown in Ref. 33, it can be determined analogously to L(E) by replacing the first vertex from the left in all diagrams of Eqs. (104) and (105) by the current vertex I^γ_{12}(E). To calculate the differential conductance, one needs the variation δI_α for an infinitesimal variation δµ_α of the chemical potentials of the reservoirs. Within the E-flow scheme, the RG equation for ∂δL(E)/∂E [or, equivalently, for ∂δΣ_α(E)/∂E by replacing the first vertex by the current vertex] can be straightforwardly established by applying the variation to the original diagrammatic series, where the chemical potentials occur in the denominator of the resolvents explicitly via the energy argument E_X = E + μ̄_X and implicitly in the effective Liouvillian. Therefore, the variation of each resolvent can be split into two terms, where the first term contains the variation from the change of the argument and the second one the variation δL_X(E) = (δL)(E_X + ω̄_X) of the effective Liouvillian. Fixing the positions of the resolvents where the variation δR_X(E) and where the differentiation ∂R_X(E)/∂E is taken, we can resum the rest of the diagrams for ∂δL(E)/∂E in terms of two-point vertices, analogously to the procedure described in the previous sections. This leads to Eq. (135), where δL ∼ O(δµ G), represented by δ in the diagrams, can be calculated from the first (lowest-order) term on the right-hand side of Eq. (135) and subsequently inserted into the second and third terms on the right-hand side of Eq. (135). Applying integration by parts twice to the first term on the right-hand side of Eq. (135) yields Eq. (137). At zero temperature, the diagrams containing a contraction with a circle and a cross vanish, cf. the remark after Eqs. (104) and (105). Writing the diagrammatic Eq. (137) explicitly yields Eq. (138). The initial condition for δL(E) follows from Eq. (131) as Eq. (139), and the corresponding initial condition for δΣ_α(E) is obtained by replacing the first vertex from the left by the current vertex.
V. RG EQUATIONS FOR THE KONDO MODEL
We now apply the RG equations (117), (118), and (138) to the isotropic spin-1/2 Kondo model at zero magnetic field. In that case, the Hamiltonian is the exchange Hamiltonian

H = H_res + (1/2) Σ_{αα′} J^{(0)}_{αα′} S · s_{αα′},   s_{αα′} = Σ_{σσ′} a†_{ασ} σ_{σσ′} a_{α′σ′},

where S is the spin-1/2 operator on the quantum dot, σ is the vector of Pauli matrices, and J^{(0)}_{αα′} are the bare exchange couplings between the reservoirs. In the important case that the Kondo model is derived using a Schrieffer-Wolff transformation, the couplings fulfill the additional constraint J^{(0)}_{αα} J^{(0)}_{α′α′} = J^{(0)}_{αα′} J^{(0)}_{α′α} (see, e.g., Ref. 32 for a detailed derivation).
A. Initial condition for the vertex superoperators

According to Eq. (42), the bare vertex superoperator G^{(0)pp′}_{11′} is defined in terms of g_{11′} by its action on an arbitrary operator b. Since the operator g_{11′} fulfills the symmetry property following from the definition (141), it is sufficient to consider the bare vertex for the case η = −η′ = + in the following. We use a shorthand notation in which the multi-indices 1, 1′ on the left-hand side contain only the reservoir and spin indices, and not η, η′. Since the operator g_{11′}, which induces spin fluctuations on the quantum dot, is proportional to the spin-1/2 operator S [cf. Eq. (24)], we need superoperators which multiply an arbitrary operator b with S from the left and from the right to find a suitable representation of the initial bare vertex G^{(0)pp′}_{11′}. We thus define a vector superoperator L^p for p = ±, which multiplies any operator b with S from the left (p = +) or from the right (p = −), respectively. We can then write the bare vertex in terms of L^p. For the bare vertex averaged over the Keldysh indices [see Eq. (45)], we get Eq. (150). Similarly, according to Eq. (46), we introduce a shorthand notation for the vertex which is first multiplied with the first Keldysh index and then averaged over the Keldysh indices; the bare current vertex averaged over the Keldysh indices is just this vertex multiplied with c^γ_{11′}, leading to Eq. (151). The bare vertex superoperators of the Kondo model are thereby fully specified.

B. Superoperator algebra

Before we proceed with the parametrization of all superoperators, we define a set of convenient basis superoperators which form a closed algebra.
Vector superoperators
We define three vector basis superoperators L_1, L_2, and L_3 in terms of L^+ and L^−. This means that the bare vertex (150) can be expressed using L_2, and the current vertex (151) using L_1 + L_3. Note that this would be the case even if we had not included the terms ∼ L^+ × L^− in the definition of L_{1,3}. However, such terms are generated by the RG, and including them in the basis superoperators makes the calculations simpler.
It should be noted that no other independent vector superoperators can be found by combining L^+ and L^− in an arbitrary way.
Scalar superoperators
We define the two scalar superoperators L_a and L_b in terms of L^+ and L^−; these are the only independent scalar superoperators that can be formed from L^+ and L^−.
Trace of the basis superoperators
The trace of some of the basis superoperators is zero. This means that applying them to any operator b will yield an operator with zero trace: Tr L_a = 0, Tr L_2 = 0, Tr L_3 = 0.
For the other two basis superoperators, L_1 and L_b, there exist operators b for which Tr L_1 b or Tr L_b b are non-zero. We note the corresponding trace properties [cf. Eq. (161)], where σ acts on the quantum dot, and not on the reservoir spins as in the rest of this paper. Therefore, only L_1 and L_b are relevant for the current vertex and the current kernel.
Scalar multiplication of scalar and vector superoperators
The scalar and vector basis superoperators have only a few non-zero products among each other.
Scalar products of vector superoperators
The vector superoperators L_{1,2,3} have only a few non-zero scalar products among themselves.
Vector products of vector superoperators
The vector superoperators L_{1,2,3} also possess only a few non-zero vector products among themselves; closely related to these vector products are the corresponding commutator relations.
Extending the basis superoperators to the reservoir spin space
The vector superoperators L_{1,2,3} and the scalar superoperators L_{a,b} act on operators of the local dot. In the superoperators that are of interest in the context of the isotropic Kondo model, i.e., the effective (current) vertex, the effective Liouvillian, and the current kernel, they always appear together with the vector of the reservoir Pauli matrices or the identity matrix in the reservoir spin space. The reason is that any other combination of dot and reservoir superoperators would violate spin-rotational invariance. Therefore, it is convenient to define new superoperators L^{1,2,3,a,b}_{σσ′}, which act both on operators of the local dot and on the reservoir spin state. It will turn out that the L^{1,2,3,a,b}_{σσ′} are sufficient to describe not only the initial conditions of all superoperators, but also all terms which are generated by the RG in leading and sub-leading order.
Multiplication of these superoperators is defined in the natural way; the results of all such multiplications can be derived from the properties of the superoperators L_{1,2,3} and L_{a,b} and the multiplication properties of the Pauli matrices. They are summarized in Table I. Sometimes it is also necessary to multiply the Pauli matrices in reverse order in the RG equations. To make this more convenient, we define transposed superoperators. The result (L_i^T L_j^T)^T of the multiplication of these transposed superoperators only differs in some minus signs from L_i L_j. The results are summarized in Table II.
The trace over the reservoir spin indices only is denoted by Tr_σ. We obtain Tr_σ L^{a,b} = 2L_{a,b} and Tr_σ L^{1,2,3} = 0. Using the superoperator algebra defined in the previous subsection, the bare vertex (150) can be written in the form (179), and similarly the vertex G̃^{(0)} in the form (181). Analogously, one obtains the bare current vertex. However, the trace over L_3 vanishes [cf. Eq. (160)], such that this part does not contribute to the current. Therefore, when the trace is taken from the left, it is sufficient to include the term ∼ L^1_{σσ′} in the current vertex, leading to Eq. (183). In the following, we will omit the trace when considering the current vertex or the current kernel, and always imply implicitly that it is taken from the left. To find a convenient parametrization of all superoperators during the entire RG flow, we use the symmetry properties [cf. Eqs. (77) and (78), which also apply for the current vertex and the current kernel], charge conservation, and spin-rotational invariance. Spin-rotational invariance limits the terms which contribute to the superoperators to the basis superoperators L^{a,b,1,2,3}_{σσ′} introduced in the previous subsection.
From Eq. (184) and charge conservation, we can deduce that the effective vertex and the effective current vertex can be described using G_{11′}(E) and I^γ_{11′}(E), which depend only on the reservoir and spin indices (such that 1 ≡ ασ on the right-hand side of the following equations). Because the trace of G_{11′}(E) is zero, G_{11′}(E) cannot contain any terms ∼ L_1, L_b. Comparing with the bare vertex (179) shows that G_2 describes the exchange coupling and is related to L^+ or L^−, which multiply any operator with S either from the left or from the right. On the other hand, G_3 is generated from higher-order terms during the RG flow, and the corresponding superoperator L_3 has a more complicated matrix structure in Liouville space, which mixes all states. G_3 is important for the generation of the current rate.
Finally, G_a is a term that does not induce any spin flips but can be interpreted as potential scattering. It will turn out that no contributions to G_a are generated by the RG, even in next-to-leading order.
For the current vertex, only the superoperators which have a non-zero trace are of interest, because the others do not contribute to the current. Therefore, we can make an ansatz containing only these terms. Comparing with the bare current vertex (183) yields the initial values; I^{γ1} thus corresponds to an exchange coupling which is responsible for the current flow. It will turn out that the coupling I^{γb} is not important. Similar considerations apply for the effective Liouvillian and the current kernel: the former has zero trace, and for the latter, only superoperators with non-zero trace are of interest. Moreover, only the scalar basis superoperators L_{a,b} are suitable for them. This motivates the ansatz L(E) = −iΓ(E) L_a for the Liouvillian, and an analogous representation of the current kernel in terms of L_b. Because of the symmetry properties (184)-(186), the quantities G^χ_{αα′}(E), I^{γχ}_{αα′}(E), Γ(E), and Γ_γ(E) fulfill the relations (199)-(202).

D. Spin dynamics, current, and differential conductance

The information about the physical observables is contained in Γ(E) and Γ_γ(E). Γ(E) is the spin relaxation/decoherence rate. Due to Eq. (161), the expectation value of the spin is given for any local density matrix ρ by ⟨S⟩ = (1/2) Tr L_1 ρ.
Using Eq. (52) and the representation L(E) = −iΓ(E)L_a for the Kondo model, we obtain the spin dynamics, where we have used that L_1 L_a = L_1.
To obtain an expression for the current, we substitute Eq. (195) into Eq. (53). Using Eqs. (161) and (163) then yields the current in terms of the current rate, where we have used that e = ℏ = 1 in our units. The stationary current then follows according to Eq. (39). To find a convenient description of the differential conductance, we express variations of the current rate through a tensor H^γ_{αα′}(E), defined in Eq. (208). Current conservation implies a corresponding sum rule. The conductance tensor G^γ_{αα′}(E) is defined in Eq. (211) and fulfills an analogous relation. The conductance tensor permits us to write the variation of the current as a sum over the variations of the chemical potentials µ_α = eV_α.
E. Shorthand notations
Before we discuss the initial conditions for the RG flow and the RG equations for all couplings and rates, we summarize some shorthand notations which will be useful in the following.
For the vertex G (and similarly for the current vertex I^γ), we use different notations, where the multi-index 1 always contains the reservoir index, and optionally the spin index and the index η. A hat always indicates that the index η is not included. Analogously, we define energies which are shifted by the chemical potentials of the leads. We will replace the relevant vertex functions G_2, G_3, and I^{γ1} by the more convenient functions J_{αα′}(E), K_{αα′}(E), and I^γ_{αα′}(E). We note that the last symbol, I^γ_{11′}, is not unambiguous, since it was also defined for the full current vertex with 1 ≡ ηασ. However, it will always be clear from the context whether we consider the case 1 ≡ α or 1 ≡ ηασ.
From the symmetry properties (199)-(202) of the original vertex functions, we can conclude the corresponding relations for these new functions.
F. Initial conditions
The initial conditions for the RG flow at high energies E are determined using a perturbative calculation as outlined in Sec. IV F. The idea is to find those terms in lowest order in J^{(0)}_{αα′} which are universal and not logarithmically divergent.
Vertex functions
The initial values for J_{αα′}(E) and I^γ_{αα′}(E) are already known, cf. Eqs. (192) and (194). The initial condition for K_{αα′}(E), which is defined in terms of G^3_{αα′}(E_{α′α}), is determined by considering the lowest-order diagrams for the effective vertex and disregarding non-universal terms and logarithmic terms; the result follows from Eq. (132). For the Kondo model, the bare vertices G^{(0)} and G̃^{(0)} are given by Eqs. (179) and (181), respectively. Using the superoperator algebra then yields the initial condition for the vertex. This fixes the initial value for the vertex function K_{αα′}(E) and for the corresponding simplified function, where, in the last equation, we imply matrix multiplication w.r.t. the reservoir indices.
Effective Liouvillian and current kernel
The perturbative solution for the effective Liouvillian for T, V ≪ |E| ≪ D is given by Eq. (128). The condition (130) is fulfilled, such that all terms ∼ D vanish. Nevertheless, as already outlined in Section IV F, the problem with the initial condition for the effective Liouvillian is that it contains nonuniversal terms which cannot be neglected because they are proportional to the Fourier variable and hence very large for E = iD. Therefore, we will use an alternative scheme to find the initial condition for the Liouvillian, which will be discussed in Sec. V I.
However, for the derivative of the Liouvillian and for its variation, the linear terms in E can be omitted, and we can use Eqs. (133) and (139). This gives the corresponding initial values. For the perturbative calculation of the current kernel Σ_γ(E) for T, V ≪ |E| ≪ D, we have to replace the first vertex G^{(0)} by the current vertex. Inserting Eqs. (179), (181), and (183) for the vertices, we find that, except for the first one, all terms are zero due to the trace relations, where 1 ↔ 2 was used in the first relation, and Tr_σ L_1 L_2 = Tr_σ L_1 = 0 in the second one. Here, W(E) denotes any function of E. The first term of Eq. (243) can be evaluated using the superoperator algebra. Using the representation Σ_γ(E) = iΓ_γ(E) L_b for the current kernel, this results in the initial value for the current rate. The variation of the current rate follows accordingly, and so do the initial values of the tensors H^γ_{12} and G^γ_{12}, defined in Eqs. (208) and (211), respectively.
Summary of the initial conditions
At high energies, E = iD, we found the following initial conditions for the isotropic Kondo model without magnetic field:
G. RG equations
We will now discuss how the generic RG equations for the effective Liouvillian (117), for the effective vertex (118), the variation of the effective Liouvillian (138), and the corresponding equations for the current vertex and the current kernel are evaluated for the isotropic Kondo model.
Strategy for the selection and evaluation of diagrams
It has been shown in Sec. V F 3 that the initial conditions for the vertex functions at E = iD fulfill relations of the form I^γ ∼ J and K ∼ J², where we have omitted the reservoir indices. For arbitrary E, we use J(E) as a reference scale for the RG flow, in the sense that we calculate all quantities, such as vertex functions and rates, in leading and subleading order in J_{12}(E). We will see that these relations still hold during the RG flow. It will be shown later [cf. Eq. (358)] that the behavior of the scale J at large E is J(E) ≈ 1/(2 ln(iE/T_K)) (note that we omit the reservoir indices and the Fourier variable for simplicity), which implies ∂J/∂E ∼ (1/E)J². This means that, for J and other quantities which are ∼ J at large E, we have to include the terms ∼ (1/E)J² (leading order) and ∼ (1/E)J³ (subleading order) on the right-hand side. This includes the current vertex I^γ, which is ∼ J according to the initial condition (194), and the rate Γ, which is ∼ J because we will show that the right-hand side of its RG equation is ∼ (1/E)J² in leading order [cf. Eq. (355)]. In the RG equations for these quantities, we discard terms which are of higher order in J or which carry a prefactor ∆/E, where ∆ is an energy scale, like, e.g., the voltage V, which fulfills |∆| ≪ |E|.
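For orientation, a minimal numerical sketch of this leading-order behavior (a toy scalar version assuming the standard one-loop flow dJ/dΛ = −2J²/Λ along E = iΛ and the one-loop form of T_K; the actual E-flow equations are matrix-valued and include the subleading terms discussed above):

import numpy as np
from scipy.integrate import solve_ivp

# One-loop toy flow of the exchange coupling along the imaginary axis E = i*Lambda,
# assuming dJ/dLambda = -2 J^2 / Lambda (poor-man's-scaling form).
D, J0 = 1.0, 0.04
TK = D * np.exp(-1.0 / (2.0 * J0))   # one-loop scale; T_K/D ~ 3.7e-6 for J0 = 0.04

sol = solve_ivp(lambda lam, J: -2.0 * J**2 / lam,
                t_span=(D, 10.0 * TK), y0=[J0], rtol=1e-10, dense_output=True)

for lam in (D, 1e-2 * D, 1e-4 * D, 10.0 * TK):
    J_num = sol.sol(lam)[0]
    J_log = 1.0 / (2.0 * np.log(lam / TK))  # analytic solution 1/(2 ln(Lambda/T_K))
    print(f"Lambda/D = {lam / D:8.1e}   J = {J_num:.4f}   analytic = {J_log:.4f}")
# The coupling grows only logarithmically as Lambda approaches T_K, illustrating
# why D/T_K ~ 10^6 (J0 ~ 0.04) already realizes the scaling limit, while
# J(Lambda ~ T_K) becomes of order one.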
On the other hand, K and Γ_γ are ∼ J² according to the initial conditions (238), such that the corresponding leading and subleading terms have to be kept on the right-hand side of their RG equations in order to cover all important contributions. There are still two special cases which have to be considered separately, namely, the coupling G_a and the variation δΓ of the rate Γ: • We will show later [cf. Eq. (327)] that the part G_a of the vertex is G_a ∼ (V/E)J² in leading order. Therefore, its contribution to the right-hand side of the RG equations for the effective vertex, the current vertex, and the effective Liouvillian (which are all ∼ J in leading order) carries both a higher power of J and a prefactor V/E, and can thus be neglected. Therefore, it is not necessary to include G_a in the calculations.
• The leading-order term on the right-hand side of the RG equation for δΓ [Eq. (138)] is ∼ (1/E)J², which makes δΓ itself ∼ J in leading order. Considering subleading terms (∼ (1/E)J³ on the right-hand side, which cause contributions ∼ J² to δΓ) is not necessary, because it would only result in additional terms ∼ (1/E)J⁴ beyond the subleading order on the right-hand side of Eq. (138) and, as we will see later [cf. Eq. (364)], terms ∼ (1/E)J⁵ beyond the subleading order in the analogous RG equation for δΓ_γ.
An overview of the orders up to which the terms on the right-hand side have to be considered for the different vertex couplings and rates is shown in Table III.
Propagators and frequency integrals
For the isotropic Kondo model, the effective Liouvillian $L(E) = -i\Gamma(E)L^a$ has a threefold degenerate eigenvalue $-i\Gamma(E)$. The fourth eigenvalue, zero, cannot occur in resolvents between vertices (according to Ref. 33, this follows from the fact that only the vertices which are averaged over the Keldysh indices appear in the RG equations). Therefore, we can always replace the quantities $\chi(E)$ and $Z(E)$, defined in Eq. (116), which are superoperators in Liouville space, by complex numbers; we use shorthand notations for these in the following. Consequently, also the propagator is a complex number. This permits us to simplify the RG equations (117), (118), and (138) by factoring out all frequency-dependent parts, and separating the frequency integrations from the evaluation of the frequency-independent vertex superoperators in Liouville space.
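Schematically (a hedged reading of the structure, assuming the standard RTRG resolvent form; the precise definitions of $\chi(E)$ and $Z(E)$ are those of Eq. (116)): since only the threefold degenerate eigenvalue $-i\Gamma(E)$ of $L(E)$ can appear between vertices, a resolvent reduces to a scalar,
\[
\frac{1}{E + \bar\omega - L(E + \bar\omega)} \;\longrightarrow\; \frac{1}{E + \bar\omega + i\,\Gamma(E + \bar\omega)} ,
\]
which is the sense in which $\chi(E)$, $Z(E)$, and the propagator become complex numbers.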
The frequency integrals which are required for the evaluation of the RG equations involve the function $F_{1\ldots n}(\omega)$, which is a shorthand notation for the quantity defined in Eq. (119). The evaluation of these integrals will be discussed in Appendix A.
Note that the first two of these integrals fulfill a relation which can be shown using integration by parts.
Analogous equations can be found for the current kernel and the current vertex by replacing the leftmost effective vertex G with a current vertex I γ .
Summation over η indices
To perform the sum over the $\eta$ indices in the RG equations, we use Eqs. (189) and (190) and the shorthand notations (222)-(224) for the quantities with a hat, which do not depend on the $\eta$ indices any more. We adopt the same notation for $Z_{12}$, $\chi_{12}$, and the integrals $F^i_{12}$ and $F^i_{12,34}$, and define that $Z_{12}$, $\chi_{12}$, $F^i_{12}$, and $F^i_{12,34}$ do not depend on the $\eta$ indices any more, with the convention $\eta_1 = -\eta_2 = \eta_3 = -\eta_4 = +$.
We now perform the summation in the different terms in the RG equations for $L(E)$, $\delta L(E)$, and $G_{12}(E)$.
Analogous expressions hold for the subleading terms. c. Terms in the RG equation for $G_{12}(E)$: We only consider the case $\eta_1 = -\eta_2 = +$ here, i.e., we consider the renormalization of $G_{12}(E)$.
For the first leading order term, we obtain the expression given in Eq. (294), and analogously for the term where the indices 1 and 2 are exchanged. For the first subleading term that contributes to the renormalization of the effective vertex, note that $\eta_1 = -\eta_2 = +$ implies $\eta_3 = -\eta_4 = -$ in the first term, and $\eta_3 = -\eta_4 = +$ in the second one, where 1 and 2 are interchanged.
For the third subleading term, we interchange the indices 3 and 4 in the contribution with $\eta_3 = -\eta_4 = -$ to merge both terms; the result can then be combined with the first subleading term into Eq. (300). d. Summary: After the summation over the $\eta$ indices, the RG equations take the compact form of Eqs. (301)-(303).
The corresponding equations for the current kernel, its variation, and the current vertex can be obtained by replacing the first effective vertex by a current vertex in these RG equations.
Summation over the reservoir spin indices
To perform the summation over the spin index $\sigma$ in $1 \equiv \alpha\sigma$ in Eqs. (301)-(303), we proceed term by term. a. Terms in the RG equation for $L(E)$: The leading order term in Eq. (301) contains a product of effective vertices (note that we frequently omit the Fourier argument in this section to improve the readability of the equations). According to the multiplication Table I, the only combinations which yield a non-zero trace over the spin degree of freedom in this product are $\chi = \chi' = a$ and $\chi = \chi' = 2$. It will be shown later that $G^a \sim \frac{V}{E} J^2$, such that it only contributes to the renormalization of $L(E)$ beyond the subleading order. Therefore, these terms can be omitted.
Including the Fourier arguments and the $F$-integral, the leading order contribution in Eq. (301) follows. In the products $\sim GGG$ that contribute to $L(E)$, only the contribution $\sim L^2$ of the effective vertex is needed.
Including the term $\sim L^a$ would lead to terms in $\Gamma(E)$ which are beyond the subleading order in $J$, and the term $\sim L^3$ cannot be included in any product of three vertices which is non-zero when summed over all spin degrees of freedom, according to the multiplication Tables I and II and the property $\mathrm{Tr}_\sigma L^{2,3} = 0$.
Therefore, the products of three vertices which appear in Eq. (301) can be evaluated according to Tables I and II. When performing the trace over the spin degree of freedom, this yields $\frac{1}{2}L^a$ and $-\frac{1}{2}L^a$, respectively. The sum of the subleading terms in Eq. (301), including the Fourier variables and the $F$-integrals, is thus a combination of the integrals $F_{13,12} + F_{12,13}$ and $F_{32,12} + F_{12,32}$ multiplying $L^a$.
b. Terms in the RG equation for $\delta\Gamma(E)$: As discussed earlier, we only need the leading order term for $\delta\Gamma(E)$ (cf. Table III). Therefore, only the first term from the RG equation (302) is required. It differs from the one in the corresponding equation (301) only in the additional factor $\delta\mu_{12}$. The spin summation can thus be done similarly, and the result is analogous to Eq. (306). c. Current kernel: The RG equation for the current kernel and for its variation can be obtained from the respective Eqs. (301) and (302) for the effective Liouvillian and its variation by replacing the first vertex by a current vertex.
In the leading order term, the spin trace $\mathrm{Tr}_\sigma L^\chi L^{\chi'}$ is non-zero only for $\chi = 1$. For the variation of the current kernel, we also need products of the form $I^\gamma_{12}\,\delta L\, G_{21}$, which can be evaluated using $\delta L(E) = -i\,\delta\Gamma(E)\,L^a$. Combining these terms and including the Fourier arguments and $F$-integrals yields the leading order contribution to the variation of the current kernel. The subleading terms for the current kernel and its variation contain products of three vertices. We are only interested in contributions which have a nonzero trace. Therefore, only the term $\sim L^1$ in the current vertex, the term $\sim L^2$ in the first vertex $G$, and the term $\sim L^3$ in the last vertex $G$ are relevant (cf. Tables I and II). When performing the trace over the spin degree of freedom, the required superoperator products yield $6L^b$ and $-6L^b$, respectively. Consequently, the subleading terms which contribute to the variation of the current kernel follow. d. Terms in the RG equation for $G_{12}(E)$: First, we consider the leading order terms in Eq. (303), which contain the product of two effective vertices $G$. Note that the spin indices are contained in the matrices $L^{a,2,3}$, which are defined in Eqs. (173) and (174), on the right-hand side. All matrices $L^{a,b,1,2,3}$ which appear in the final results on the right-hand side of equations in this section carry the spin indices $\sigma_1$ and $\sigma_2$, which are left out here to improve the readability, i.e., $L^{a,b,1,2,3} \equiv L^{a,b,1,2,3}_{\sigma_1\sigma_2}$.
If the spin indices are reversed, we find the analogous expression (note that an overall minus sign has been added for convenience because it also appears in the RG equations where the spin indices are interchanged). We will first discuss why the terms containing $G^a$ can be omitted here and in all other RG equations. We get the leading order part of the RG equation for $G^a_{12}(E)$ by including the Fourier arguments in the contributions $\sim L^a$ from Eqs. (320) and (322) and adding the $F$-integrals. In the case $V = 0$, all Fourier arguments and $F$-integrals are equal, and the right-hand side is thus zero. For $V \ll |E|$, an expansion of the effective vertices and the $F$-integrals determines the leading contribution to the renormalization of $G^a_{12}(E)$ (where we have used $\frac{\partial J}{\partial E} \sim \frac{1}{E} J^2$ in leading order), and the leading contribution to $G^a_{12}(E)$ itself is thus $\sim \frac{V}{E} J^2$. As discussed in Sec. V G 1, this observation allows us to omit all terms which contain $G^a$, because they would only cause contributions beyond the subleading order to the RG equations of the physical observables.
In the subleading terms in Eq. (303), different products of effective vertices occur, which are evaluated in Appendix B 1; the final results are given there [cf. the results starting at Eq. (B22)]. e. Terms in the RG equation for the current vertex $I^\gamma_{12}(E)$: We have to replace the first vertex in each of the terms on the right-hand side of Eq. (303) by a current vertex in order to obtain the RG equation for the current vertex. According to Eqs. (315) and (319), the part $\sim L^b$ of the current vertex does not contribute to the current kernel and can therefore be neglected, so that only one relevant leading order contribution to the renormalization of the current vertex remains. The subleading terms are evaluated in Appendix B 2, cf. Eqs. (B36)-(B40); they involve the integrals $F^{(2)}_{12,13}$ and $F^{(2)}_{12,32}$, factors $\delta\mu_{12}$, and components $I^{\gamma 1}$ of the current vertex.
For large $|E|$, the integrals which appear in the leading order terms of the RG equations can be approximated by their asymptotic form. Therefore, the right-hand sides of the RG equations of couplings and rates take a simple power-counting form (we leave out reservoir indices because we are only interested in the overall scale). On the other hand, considering subleading terms $\sim J^2$ for $\delta\Gamma(E)$ on the right-hand side of Eq. (350) would add terms to $\frac{\partial}{\partial E}\delta\Gamma_\gamma(E)$ which are two orders in $J$ higher than the leading contribution (363). These terms beyond the subleading order can be neglected.
To solve the RG equations (347)-(353) numerically, we use the initial conditions (250)-(260). Furthermore, we use the special form (142) for $J_{12}$ and consider (for simplicity) the case of two reservoirs with $\alpha \equiv L/R \equiv \pm$ and symmetric coupling $x_L = x_R = \frac{1}{2}$. The chemical potentials are written as $\mu_\alpha = \alpha \frac{V}{2}$, where $V$ denotes the bias voltage. In this case, we obtain from Eqs. (208), (211), (214), and (217) expressions in which $G^L_{LR}(E) = G(E)/G_0$ is the conductance $G(E)$ in units of the universal conductance $G_0 = \frac{2e^2}{h}$. The variation of the stationary current follows from a relation involving $G_{st} = G(0)$, the stationary conductance.
For the special case of two reservoirs with symmetric couplings, we obtain the corresponding initial conditions. We note that these initial conditions do not contain nonuniversal terms of $O(\frac{1}{D})$ and higher orders in the bare coupling $J_0$. Therefore, to extract only the universal part of the solution up to subleading order, one has to use the scaling limit (8). Furthermore, as explained in Section IV F, the missing initial condition for $\Gamma(E)$ at $E = iD$ has to be determined from another reference point, since this energy scale is related to the Kondo temperature and is not universal. As shown in the next section, one can set up the scaling limit and the initial condition for $\Gamma$ by studying the analytic solution for $T = V = 0$.
H. Analytic solution for T = V = 0, the scaling limit and the initial condition for Γ

In the special case of zero temperature and zero voltage, the RG equations can be solved analytically (except for $\Gamma$). We set $E = i\Lambda$ and start the RG flow at $\Lambda = \Lambda_0 \equiv D$. In accordance with the initial conditions (379), (380), and (381), we find that the vertices $J_{12}$ and $K_{12}$ do not depend on the lead indices [we omit the variable $E = i\Lambda$ in all quantities for simplicity here], and that the current vertex can be parametrized by a single coupling $J_I$. By substituting the integrals (A14)-(A18) into the RG equations (347)-(353), we obtain the $T = V = 0$ flow. We then define a new flow parameter $\lambda$ and transform the RG equations to it. Integrating Eq. (396), we obtain the invariant $T_K$. The nonuniversal invariant $T_K$ sets the low-energy scale.
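As a numerical illustration of such a flow invariant (a toy sketch based on the standard one-loop flow $\partial J/\partial\ln\Lambda = -2J^2$, not Eq. (396) itself), one can verify that $T_K = \Lambda\,e^{-1/(2J(\Lambda))}$ stays constant along the flow:

```python
# Toy check that T_K = Lambda * exp(-1/(2J)) is an invariant of the
# one-loop flow dJ/dln(Lambda) = -2 J^2 (a sketch, not Eqs. (347)-(353)).
import numpy as np
from scipy.integrate import solve_ivp

def rhs(loglam, y):
    J = y[0]
    return [-2.0 * J**2]

J0 = 0.05                          # bare coupling at Lambda_0 (illustrative)
l0, l1 = np.log(1e6), np.log(1e2)  # flow from Lambda = 1e6 down to 1e2
sol = solve_ivp(rhs, (l0, l1), [J0], dense_output=True, rtol=1e-10, atol=1e-12)

for loglam in np.linspace(l0, l1, 5):
    J = sol.sol(loglam)[0]
    T_K = np.exp(loglam - 1.0 / (2.0 * J))  # should stay constant along the flow
    print(f"Lambda = {np.exp(loglam):10.3e}   J = {J:.6f}   T_K = {T_K:.6e}")
```

For the parameters above, $J$ grows from $0.05$ to about $0.63$ while the printed $T_K$ stays fixed, which is the sense in which the invariant sets the low-energy scale.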
If not written explicitly, we will use the energy unit $T_K = 1$ in the following. Note that this invariant is identical to the Kondo temperature defined in Eq. (2), since in the limit $\Lambda \equiv D \to \infty$ we get $J = J_0 \to 0$. Taking ratios, one can eliminate $\lambda$ from some of the RG equations. Integrating the RG equations for $Z$, $K$, and $J_I$, we obtain another set of invariants. Inserting this solution for $K$ and $J_I$ into the RG equation (401) for $G$ and integrating it, we find another invariant. The invariants $c_Z$, $c_K$, $c_I$, and $c_G$ are fixed by comparing with the initial conditions (375), (379), (380), (381), and (378) in the scaling limit (8). We obtain $c_Z = c_K = c_I = 1$ and $c_G = 0$, leading to universal results for these quantities. The remaining equation for $\Gamma$ is a complicated differential equation and cannot be solved analytically. Furthermore, since only the ratio $\Gamma/T_K$ is universal, it is impossible to set up a universal initial condition at high energies from a perturbative calculation of $\Gamma$. Therefore, we study the numerical solution of the differential equation for $\Gamma$ by starting at $\Lambda = 0$ and using the exact and universal result $G(0)/G_0 = 1$ as boundary condition. Using Eqs. (409) and (399), the initial condition for $\Gamma(0)/T_K$ can then be calculated. Although this procedure (first solving the $T = V = 0$ RG equations from $\Lambda = 0$ up to $\Lambda_0$ and, subsequently, at finite $T$ or $V$, solving backwards from $\Lambda_0$ down to $\Lambda = 0$) is in principle possible, it is numerically not the most accurate one. The reason is that, at $T = V = 0$, the universal conductance is not precisely reproduced numerically by the two subsequent steps. Therefore, we describe in the following section a numerically more precise procedure.
A. Differential conductance at finite temperature and voltage
We have solved the RG equations derived in the previous section numerically for the second and third order truncation schemes. Thus we have obtained results for the differential conductance $G(T, V)$ for transport through a Kondo quantum dot at finite temperature and voltage. Figure 2 shows a three-dimensional plot of $G(T, V)$ calculated in third order. The temperature and voltage are scaled by $T^*_K$ and $T^{**}_K$, respectively, as defined by Eqs. (3) and (10). The plateau in the upper left corner of Fig. 2 corresponds to the unitary conductance $G_0 = \frac{2e^2}{h}$, which is reached if both temperature and voltage are several orders of magnitude smaller than the Kondo temperature. Figure 3 shows the $T$-dependence of the differential conductance for six different fixed values of $V$. For $V = 0$, the temperature dependence of the conductance has already been compared to numerically exact NRG calculations in Ref. 35, where a deviation below 3% has been found in the whole temperature regime, independent of the truncation order. For finite voltage, the results for the second and third order truncation schemes agree quite well if the temperature and the voltage are scaled by the corresponding values of $T^*_K$ and $T^{**}_K$, respectively. In the range which is plotted in Fig. 2, the maximal deviation in the differential conductance $G(T, V)$ between both truncation schemes is less than 15%. In contrast, the ratio $T^*_K/T^{**}_K$ itself depends significantly on the truncation order; this is expected to hold also for other ratios of low-energy scales within our method when applied to the strong coupling regime (see the next section). We obtain $T^*_K/T^{**}_K \approx 1.044$ in second order truncation, and $T^*_K/T^{**}_K \approx 0.62$ in third order truncation. Since the result in third order truncation is expected to lie closer to the correct result (see also the next section), the last result may serve as a guideline for more precise calculations in the future. We note that our prediction is quite close to the result $T^*_K/T^{**}_K \sim 0.66$ obtained in Ref. 41, where the GW approximation within the $\sigma G \sigma W$ formalism has been used for the symmetric Anderson model. Taking our result, the Fermi liquid coefficients $c^{**}_T$ and $c^{**}_V$ can be calculated from Eqs. (7) and (12), leading to the prediction $c^{**}_T \approx 16.95$, $c^{**}_V \approx 2.58$. (432) Figure 3 clearly shows that the differential conductance as a function of temperature at fixed voltage exhibits a pronounced maximum. Figure 4 shows how the position $T_{max}(V)$ of this maximum depends on the voltage $V$ (if the RG equations are solved using the third order truncation). For $V \gg T^*_K$, i.e., in the regime where there is a pronounced maximum, the function $T_{max}(V)$ can be approximated quite well by a linear fit over six orders of magnitude in the ratio $V/T^{**}_K$. Figure 5 shows that the behavior is very similar if the conductance is calculated within the second order truncation scheme.
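A minimal sketch of how such a linear fit can be read off (the arrays below are hypothetical placeholders standing in for points extracted from solutions of the RG equations, not data from Fig. 4):

```python
# Fit T_max(V) ~ a*V + b over several decades in V (placeholder values only).
import numpy as np

V_over_TK2    = np.logspace(0, 6, 13)       # V / T_K**, hypothetical grid
Tmax_over_TK1 = 0.3 * V_over_TK2 + 0.1      # hypothetical linear behavior
a, b = np.polyfit(V_over_TK2, Tmax_over_TK1, 1)   # large V dominates the fit
print(f"T_max(V) ~ {a:.3f} * V + {b:.3f}")
```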
This linear fit from Fig. 4 appears to be a reasonable approximation also for the width of the peak of the function $G_V(T)$ at fixed $V$, see Fig. 6. We define the peak width as the difference $T_2(V) - T_1(V)$, where the $T_i(V)$ denote suitably defined temperatures on either side of the maximum. Temperature acts as a cutoff of the RG flow, as can be seen from the form of the integrals in Eqs. (279)-(283). However, by looking at the final RG equations (347)-(353), one can see that the cutoff provided by the voltage $V$ is very different and by no means an overall cutoff for all quantities. In particular, the quantities for $E = i\Lambda$ and for $E = \pm V + i\Lambda$ have a considerably different flow as a function of $\Lambda$ and are coupled to each other in a complicated way. For finite $V \gg T^{**}_K$ and $T = 0$, this leads to the effect that the conductance shows a maximum for $E = i\Lambda \sim iT^{**}_K$ as a function of the flow parameter $\Lambda$. This in turn leads to a maximum for the temperature dependence of the conductance, since temperature is an overall cutoff for all quantities. Besides this subtle technical explanation, a more physical interpretation can be given in terms of the spectral function of the Kondo model, which is believed to have side-peaks at $\omega \sim \pm V$.49,50 Therefore, when the temperature is of the order of the voltage, these side peaks can be reached and give rise to an enhanced conductance.
B. Expansion of the differential conductance for small temperature and/or voltage

We now consider temperatures and voltages much smaller than the Kondo temperature and calculate numerical approximations for the coefficients $c^*_T$ and $c^*_V$, which appear in the Fermi liquid result (6), using the differential conductance obtained in the second and third order truncation schemes. We note again that due to our improved scheme for determining the initial condition of the RG flow at finite voltage (see Section V I), we obtain here an improved result for the Fermi liquid coefficient $c^*_V$ in comparison to Ref. 35. Our result for the coefficient $c^*_T$ is the same as in Ref. 35, but since recent NRG calculations25 have obtained an improved value for $c^*_T$, the quality of our results has to be revisited also for this quantity. Figures 7 and 8 visualize how the coefficients $c^*_T$ and $c^*_V$ can be determined in second and third order, respectively, using a suitable plot of the differential conductance. When comparing the results in second and third order to the known results (5) and (7), it is important to note that the Fermi liquid coefficients are ratios of various low-energy scales in different energy regimes, where the scales $T'_K$, $T''_K$ characterize the curvature of the function $H(T, V) = 1 - G(T, V)/G_0$ with respect to temperature and voltage at $T = V = 0$. Analogously, one can write the Fermi liquid coefficients $c^{**}_T$ and $c^{**}_V$, defined by Eq. (6), as such ratios. The rather precise result for one of these ratios in third order truncation might be an accident, since the ratio of two energy scales can have a significantly different error than the energy scales themselves. Overall we observe that all ratios depend significantly on the truncation order and, if the ratio of two energy scales from the same energy regime is taken, the result improves when increasing the truncation order. For example, one such ratio improves from 49% deviation in second order to 18% error in third order truncation. Furthermore, we expect that a perturbative truncation of the RG equations should lead to a better improvement for larger energy scales when increasing the truncation order. Therefore, we speculate that our result (431) for $T^*_K/T^{**}_K$ in third order truncation might have an even better quality than the corresponding result for the ratio $T^*_K/T'_K$. This is also in accordance with the fact that the result (9) is in agreement with experiment and in agreement with another recent effective action method.39 Moreover, as already mentioned in Sec. VI A, our result (431) for the ratio $T^*_K/T^{**}_K$ is very close to the result of Ref. 41. This provides evidence that our results in third order truncation are quite reliable for temperatures and voltages close to the Kondo temperature. Furthermore, speculating that $T^*_K$ and $T^{**}_K$ are approximately correct in third order truncation, the precise result for the ratio involving $T''_K$ in the same order indicates that $T''_K$ is quite reliable, i.e., it seems that the voltage dependence can also be trusted for small voltages $V \ll T_K$. In contrast, the poor result for $T^*_K/T'_K$ in third order truncation indicates that $T'_K$ deviates significantly from the correct result, i.e., the temperature dependence is not so well described for $T \ll T_K$.
Therefore, it seems that the rather precise result for $T^*_K/T'_K$ in second order truncation is an accident and originates from the fact that both $T^*_K$ and $T'_K$ are incorrect by approximately the same factor, whereas, in third order truncation, $T^*_K$ is more precise than $T'_K$, leading to the counterintuitive effect that the quality of $T^*_K/T'_K$ decreases with increasing truncation order.
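For reference, the energy-scale definitions used in the discussion above can plausibly be summarized as follows (a hedged reconstruction consistent with the numbers quoted in this section, not a verbatim restatement of the paper's dropped equations):
\[
H(T,V) \;=\; 1 - \frac{G(T,V)}{G_0} \;\approx\; \Big(\frac{T}{T'_K}\Big)^{2} + \Big(\frac{V}{T''_K}\Big)^{2}, \qquad T, V \ll T_K ,
\]
\[
c^{*}_{T} = \Big(\frac{T^{*}_{K}}{T'_{K}}\Big)^{2},\quad c^{*}_{V} = \Big(\frac{T^{*}_{K}}{T''_{K}}\Big)^{2},\qquad c^{**}_{T} = \Big(\frac{T^{**}_{K}}{T'_{K}}\Big)^{2},\quad c^{**}_{V} = \Big(\frac{T^{**}_{K}}{T''_{K}}\Big)^{2}.
\]
Indeed, $(2.57/0.62)^2 \approx 17.2$ is close to the quoted $c^{**}_T \approx 16.95$, and $c^{**}_V/c^{**}_T = (T'_K/T''_K)^2 \approx 2.58/16.95 \approx 0.152$ reproduces the exact Fermi liquid ratio $3/(2\pi^2)$.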
In summary, we have presented arguments that the voltage dependence of the conductance seems to be quite reliable in third order truncation, whereas the temperature dependence needs to be improved in the regime $T \ll T_K$. Nevertheless, our arguments are partially based on speculations and need further substantiation by improved calculations for the nonequilibrium Kondo model in the strong coupling regime. Furthermore, we note that the third order RG scheme is in principle capable of reproducing the Fermi liquid result $c_V/c_T = \frac{3}{2\pi^2}$ exactly when an analytical solution of the RG equations is done for either $V = 0$ and $T \ll T_K$ or $T = 0$ and $V \ll T_K$, and $c_V/c_T$ is expanded systematically in orders of $J$; see Ref. 35 for details. The numerical solution contains all terms up to a certain order in $J$, but higher order terms which are not treated consistently cause deviations from the exact value for $c_V/c_T$. We note that our improved scheme to determine $c^*_V$ can also be applied to the S = 1 Kondo model and to the calculation of the Fermi liquid coefficients of the static magnetic susceptibility, for which the authors of Ref. 38 report changes to their results in third order truncation.51 For S = 1, their results for $c_V/c_T$ deviate by $\sim 9\%$ from the exact value $c_V/c_T = \frac{3}{2\pi^2}\,\frac{4+10S}{5+8S} \approx 0.164$, as derived in Ref. 38.
C. Comparison with experiments
We have compared our calculations with experimental results obtained by Kretinin et al.37 They measured the differential conductance at finite temperature and voltage in an InAs nanowire-based quantum dot. This system can be described by the Kondo model, provided that both temperature and voltage are sufficiently small to suppress charge fluctuations. In previous publications, only the results for $G(T = 0, V)$ and $G(T, V = 0)$ have been compared between theory and experiment; here we present the comparison where both temperature and bias voltage are finite. In Figs. 9 and 10, either the temperature or the voltage is fixed, and the differential conductance $G$ is plotted as a function of the other quantity. Both figures compare the results for six different fixed values of the temperature or the voltage, respectively.
We find good agreement between our calculations and the experiment if temperature and voltage are much smaller than the Kondo temperature. If either of these quantities is too large, charge fluctuations become important, which cannot be described properly by the Kondo model that our calculations are based on.
In an earlier publication, 52 results obtained with the method presented here had been used to determine the Kondo temperature of an experimental device and the temperature at which the experiment had been performed.
VII. SUMMARY AND OUTLOOK
In this paper, we presented a real-time renormalization group approach that extends the flow scheme introduced in Ref. 35, which uses the Fourier variable as the flow parameter. We showed how universal RG equations can be set up in all orders and that only an expansion in the frequency-independent effective two-point vertex is needed to guarantee convergence of all frequency integrals. The RG equations can be solved in various truncation orders, providing a consistency check for the reliability of the results. Whereas in this paper the RG equations have been solved explicitly up to third order truncation for the Kondo model, we have also outlined the procedure for determining all terms in fourth order truncation. This might be helpful for future applications to test the reliability of the results even further.
We have shown that universality can be achieved for the Kondo model by using appropriate boundary conditions, including the universal stationary conductance at zero temperature and zero bias voltage. With our procedure it is possible to arrive at stable results already for initial cutoffs which are about six orders of magnitude larger than the Kondo temperature. This is a significant improvement compared to other methods trying to solve directly for the universal properties of the Kondo model instead of the more involved Anderson impurity model. We applied the method to the nonequilibrium spin-$\frac{1}{2}$ Kondo model at zero magnetic field but arbitrary temperature and voltage. We found that the temperature-dependent conductance $G_V(T)$ at fixed voltage $V$ exhibits non-monotonic behavior. The position and width of the appearing local peak were shown to scale linearly with the applied voltage over approximately six orders of magnitude in units of the Kondo temperature. We compared our results to recent experiments and found good agreement in the regime where the Kondo model is expected to describe the experimental system accurately.
To characterize the temperature and voltage dependence of the conductance in different energy regimes close to and far below the Kondo temperature, we have defined four different energy scales $T^*_K$, $T^{**}_K$, $T'_K$, and $T''_K$. The scales $T'_K$/$T''_K$ are defined from the curvature of $G_{V=0}(T)$/$G_{T=0}(V)$ at $T = 0$/$V = 0$, and the scales $T^*_K$/$T^{**}_K$ by the half width at half maximum of the peak of $G_{V=0}(T)$/$G_{T=0}(V)$ around $T = 0$/$V = 0$. All these energy scales are proportional to the Kondo temperature $T_K$. We found that the shape of the conductance $G(T, V)$ is independent of the truncation order when $T$ and $V$ are scaled in units of $T^*_K$ and $T^{**}_K$, respectively, providing evidence for the reliability of our result for the temperature and voltage dependence of the conductance. However, an interesting issue is the determination of the three independent universal ratios of the four characteristic energy scales, which turn out to depend crucially on the truncation order. The ratio $T'_K/T''_K \approx 0.39$ is known exactly from Fermi liquid relations, relating the temperature and voltage dependence for $T, V \ll T_K$. Numerically exact results exist for the ratio $T^*_K/T'_K \approx 2.57$ from recent NRG calculations, relating the temperature dependence for $T \sim T_K$ to the one for $T \ll T_K$. Our method predicts in third order truncation the result $T^*_K/T^{**}_K \approx 0.62$ for the remaining unknown ratio, relating the temperature and voltage dependence at energies close to $T_K$. We presented evidence for the reliability of this result in third order approximation, based on the result $G_{T=0}(V = T^*_K) \approx \frac{2}{3} G_0$, which has been confirmed experimentally and by another recent effective action method. From a comparison of our results for the other two ratios in third order truncation with the exact ones, we obtained evidence that our results for the voltage dependence of the conductance are quite accurate for all voltages, whereas the ones for the temperature dependence need to be improved for $T \ll T_K$.
Concerning future directions, the E-flow scheme offers a systematic method to avoid $\frac{1}{E^n}$ and logarithmic divergencies by resumming self-energy insertions and vertex corrections. Since approximation schemes in different truncation orders can be defined, its reliability can be tested by itself, in particular in those regimes where the vertices start to grow. So far, applications were successful for 2-level models where the dynamics of the local system is driven by spin fluctuations (Kondo model), energy fluctuations (spin-boson model), or charge fluctuations (interacting resonant level model). In the future, it is of particular interest to understand the interplay between these fluctuations, as described, e.g., by the Anderson impurity model (spin and charge fluctuations) or by quantum dots coupled to a bosonic environment (charge/spin and energy fluctuations). In particular, the Anderson impurity model is expected to be a suitable model to extract the universal behavior in the Kondo regime without resorting to the boundary condition of universal conductance, as used in this paper. This is motivated by recent NRG studies,26 where it was shown that universality is reached much faster for the Anderson impurity model compared to the Kondo model. Furthermore, the Anderson impurity model allows for the study of potential scattering terms away from the particle-hole symmetric point and logarithmic energy renormalizations in the mixed-valence regime. Other interesting applications for the E-flow scheme are generic n-level quantum dots and models with quantum critical behavior, like, e.g., the sub-Ohmic spin-boson model or multi-channel Kondo models.
We note that $F_{12}(0) = 0$. In the special case that no bias voltage is applied, i.e., all chemical potentials are the same, the integrals simplify. For finite $T$, the Fermi function, its antisymmetric part, and its derivative can be expanded in terms of the Matsubara frequencies $\omega_n = 2\pi T\,(n + \frac{1}{2})$ (A22), for any integer number $n$. The integrals are calculated by closing the integration path in the upper half of the complex plane and applying the residue theorem. Note that $\chi_{ij}$ is an analytic function in the upper half of the complex plane, such that the only poles in the upper half plane are the poles of the Fermi function or its derivative, which are first and second order poles, respectively.
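For orientation, the standard pole decomposition of the Fermi function, which the residue evaluation described here relies on, reads (a well-known identity, quoted as a sketch rather than the paper's exact expressions):
\[
f(\omega) = \frac{1}{e^{\omega/T} + 1} = \frac{1}{2} - \sum_{n=0}^{\infty} \frac{2T\,\omega}{\omega^2 + \omega_n^2}, \qquad \omega_n = 2\pi T\Big(n + \frac{1}{2}\Big),
\]
with first-order poles at $\omega = \pm i\omega_n$ of residue $-T$; the derivative $f'(\omega)$ correspondingly has second-order poles at the same points.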
In the following, we will often use shorthand notations for recurring combinations. The polygamma functions are derivatives of the logarithm of the Gamma function $\Gamma(z)$. They can be used to evaluate series of resolvents that contain Matsubara frequencies. The first polygamma function, also called the digamma function, is given by
\[
\psi(z) = -\gamma + \sum_{n=0}^{\infty}\left(\frac{1}{n+1} - \frac{1}{n+z}\right),
\]
where $\gamma$ is the Euler-Mascheroni constant. The digamma function permits us to evaluate series of the form
\[
\sum_{n=0}^{\infty}\left(\frac{1}{n+z_1} - \frac{1}{n+z_2}\right) = \psi(z_2) - \psi(z_1).
\]
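A quick numerical sanity check of this identity (using scipy's digamma; the arguments are arbitrary test values, not quantities from the paper):

```python
# Check: sum_{n>=0} [1/(n+z1) - 1/(n+z2)] = psi(z2) - psi(z1).
import numpy as np
from scipy.special import digamma

z1, z2 = 0.7, 1.9                 # arbitrary test arguments
n = np.arange(2_000_000)
lhs = np.sum(1.0 / (n + z1) - 1.0 / (n + z2))
rhs = digamma(z2) - digamma(z1)
print(lhs, rhs)                   # agree to ~1e-6 (truncation of the tail)
```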
Evaluating the integrals that depend on two $E$-arguments can be done analogously. The calculations are straightforward, but lengthy. Therefore, we only list the results here.
No simple expression has been found for the integral $F^{(2)}_{12,12}$; we therefore use an approximation which, for $|E| \gg T, V$, neglects only contributions of $O\!\left(\frac{T,V}{|E|^2}\,J^3\right)$ in the RG equations, which are beyond subleading order, consistent with the strategy described in Section V G 1. It has been verified that replacing the exact expressions for the integrals by this approximation does not change the results noticeably. The approach used to evaluate the remaining series in $F^{(4)}_{12,34}$ is as follows (a numerical sketch follows after this list): • Sum the terms from $k = 0$ to $k = k_0 - 1$ explicitly.
• Replace the remaining series, starting from $k = k_0$, by an integral.
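A minimal numerical sketch of this head-sum-plus-tail-integral strategy, for the hypothetical example $f(k) = 1/(k+z)^2$ whose exact sum is the trigamma function $\psi'(z)$ (not the actual series appearing in $F^{(4)}_{12,34}$):

```python
# Sum the first k0 terms explicitly, then approximate the tail by an integral.
import numpy as np
from scipy.integrate import quad
from scipy.special import polygamma

z, k0 = 0.37, 50
f = lambda k: 1.0 / (k + z)**2

head = sum(f(k) for k in range(k0))      # explicit part, k = 0 .. k0-1
tail, _ = quad(f, k0 - 0.5, np.inf)      # midpoint rule for sum_{k>=k0} f(k)
exact = polygamma(1, z)                  # trigamma: sum_{k>=0} 1/(k+z)^2
print(head + tail, exact)                # agree to ~1e-6 for these values
```

Shifting the lower integration limit to $k_0 - \frac{1}{2}$ (midpoint rule) cancels the leading error of the naive integral approximation of the tail.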
Effective vertex $G_{12}(E)$
In the subleading terms in Eq. (303), different products of effective vertices occur (without Fourier arguments and $F$-integrals), which all have the form of a product of three factors $G^{\chi}_{n_1 n_2}$ with corresponding spin factors $L^{\chi}_{n_1 n_2}$, where $n_i \in \{1, 2, 3, 4\}$. Note that the $G^{\chi}_{n_1 n_2}$ only depend on the reservoir indices, and the $L^{\chi}_{n_1 n_2}$ only on the spin indices, and that we only consider $\chi, \chi', \chi'' \in \{2, 3\}$, such that all occurring $L^{\chi}_{n_1 n_2}$ contain the Pauli matrix $\sigma_{n_1 n_2}$. To evaluate these subleading terms, we first evaluate the Pauli matrix products using the multiplication rule
\[
\sigma^i_{13}\,\sigma^j_{32} = \delta^{ij}\,\delta_{12} + i\,\epsilon^{ijk}\,\sigma^k_{12} .
\]
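A small numpy check of this multiplication rule (purely illustrative):

```python
# Numerical check of the quoted Pauli multiplication rule:
# sigma^i sigma^j = delta_ij * 1 + i * eps_ijk * sigma^k.
import numpy as np

sigma = {
    "x": np.array([[0, 1], [1, 0]], dtype=complex),
    "y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "z": np.array([[1, 0], [0, -1]], dtype=complex),
}
eps = {("x", "y"): ("z", +1), ("y", "z"): ("x", +1), ("z", "x"): ("y", +1),
       ("y", "x"): ("z", -1), ("z", "y"): ("x", -1), ("x", "z"): ("y", -1)}

for i in sigma:
    for j in sigma:
        lhs = sigma[i] @ sigma[j]
        if i == j:
            rhs = np.eye(2, dtype=complex)          # delta_ij term
        else:
            k, sign = eps[(i, j)]
            rhs = 1j * sign * sigma[k]              # i * eps_ijk * sigma^k term
        assert np.allclose(lhs, rhs)
print("sigma^i sigma^j = delta_ij + i eps_ijk sigma^k: verified")
```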
To evaluate the products (B3) further, we note that only the third superoperator $L^{\chi''}_k$ can be $L^3_k$, because all products where $L^3_k$ is multiplied with a component of $L^2$ from the right are zero according to the rules in Secs. V B 7 and V B 8. Therefore, we only have to consider products in which the Pauli matrix product is one of Eqs. (B5)-(B9), and we omit all terms $\sim \delta_{12}$. This means that two out of the indices $i$, $j$, and $k$ are always equal, and we can use the results from Secs. V B 6, V B 7, and V B 8, and the commutator relations (172), to evaluate the frequently occurring products. For the terms where the last superoperator in the product is a component of $L^2$, this yields $L^2_{34} L^2_{13} L^2_{42} = L^2_{34} L^2_{42} L^2$ and $L^2_{13} L^2_{42} L^2_{34} = L^2_{32} L^2_{14} L^2$, and similarly for the products where the last factor is a component of $L^3$. The final result for the products in Eq. (B1) follows. We then use the approach of Appendix B 1 to evaluate the subleading terms on the right-hand side of the RG equation for the current vertex, which is obtained by replacing the first vertex in each of the terms in Eq. (303) by a current vertex. Only the part $\sim L^1$ of the current vertex contributes to the current kernel according to Eqs. (315) and (319). The only way to obtain contributions $\sim L^1$ from the products above is to consider the part $\sim L^1$ of the current vertex and the part $\sim L^2$ of both effective vertices, where
\[
L^1_{n_1 n_2}\, L^2_{n_3 n_4}\, L^2_{n_5 n_6} = \sum_{i,j,k \in \{x,y,z\}} L^1_i\, L^2_j\, L^2_k\, \sigma^i_{n_1 n_2}\, \sigma^j_{n_3 n_4}\, \sigma^k_{n_5 n_6}, \qquad \text{(B29)}
\]
and we have to consider the products of Pauli matrix components from Eqs. (B5)-(B9), except for the terms $\sim \delta_{12}$, which do not contribute to the renormalization of $I^{\gamma 1}$. The products of superoperators which need to be evaluated according to these Pauli matrix products then yield the final results for the current vertex, Eqs. (B36)-(B40).
"year": 2014,
"sha1": "230a40ca69ca0fb929d0afdb68647a8df177cb2b",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1405.3150",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "230a40ca69ca0fb929d0afdb68647a8df177cb2b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Decreased plasma neurotrophin-4/5 levels in bipolar disorder patients in mania
Izabela G. Barbosa, Isabela B. Morato, Rodrigo B. Huguet, Fabio L. Rocha, Rodrigo Machado-Vieira, Antônio L. Teixeira
Interdisciplinary Laboratory of Medical Investigation, School of Medicine, Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG, Brazil. Instituto de Previdência dos Servidores do Estado de Minas Gerais (IPSEMG), Belo Horizonte, MG, Brazil. Experimental Therapeutics and Pathophysiology Branch, National Institute of Mental Health, Bethesda, MD, USA, and Laboratory of Neuroscience (LIM-27), Institute and Department of Psychiatry, Universidade de São Paulo (USP), São Paulo, SP, Brazil.
Introduction
Neurotrophins constitute a family of proteins responsible for orchestrating complex processes in the central nervous system (CNS), such as cellular proliferation, differentiation, growth, migration, modulation of neuronal excitability, and synaptic transmission. As a result, neurotrophins have been implicated in the pathophysiology of a wide variety of neurodegenerative and psychiatric disorders, and have also been regarded as potential therapeutic targets. The neurotrophin family comprises brain-derived neurotrophic factor (BDNF), nerve growth factor (NGF), neurotrophin-3 (NT-3), and neurotrophin-4/5 (NT-4/5).
The pathophysiology of bipolar disorder (BD) is complex, and neurotrophin dysfunctions seem to play a pivotal role in the neurobiology of the disease.1 BDNF is the most abundant neurotrophin in the CNS, particularly in the amygdala, hippocampus, and prefrontal cortex, brain areas directly involved in emotional regulation and several aspects of cognition (including attention, memory, and executive functioning).2 Several studies and meta-analyses have reported decreased BDNF levels in patients with BD during acute mood episodes in comparison with healthy controls.3,4 Our group demonstrated that BD patients in mania exhibit lower plasma NGF levels in comparison with controls and BD patients in euthymia.5 The biological effects of the neurotrophins are mediated through the tropomyosin-related kinase (Trk) family of receptor tyrosine kinases (TrkA, TrkB, and TrkC) and the p75 neurotrophin receptor, a member of the tumor necrosis factor receptor superfamily. Despite recent reports of neurotrophic signaling dysfunction in BD, little attention has been paid to the study of NT-3 and NT-4/5, neurotrophins that modulate basal synaptic transmission and long-term potentiation in the hippocampus.
The main aim of the present study was to evaluate plasma levels of NT-3 and NT-4/5 in patients with BD during manic episodes and compare these levels with those of healthy controls. As a secondary aim, the levels of these neurotrophins were correlated with clinical parameters.
Methods
The present study included 40 medicated patients with type 1 BD and 25 controls matched for age, gender, and educational attainment. Patients were recruited at Hospital Governador Israel Pinheiro, Belo Horizonte, state of Minas Gerais, Brazil. The diagnosis of BD was independently confirmed by two psychiatrists using the Mini-International Neuropsychiatric Interview (MINI-Plus).6 All patients were assessed with the Young Mania Rating Scale (YMRS)7 and the Hamilton Depression Rating Scale (HDRS).8 YMRS and HDRS were administered to evaluate the severity of manic and depressive symptoms, respectively. Mania was defined according to the psychiatric evaluation. BD patients with a YMRS score > 13 were defined as being in mania. Remission was defined by a YMRS score < 7 and an HDRS score < 7 points for at least 8 consecutive weeks. The control group was recruited from the local population. Controls were required to not have any psychiatric disorder, family history of psychiatric disorder, or cognitive deficits; the MINI-Plus interview was used to exclude psychiatric disorders. The study was approved by the local ethics committees. All participants provided written informed consent.
Peripheral blood samples (10 mL) were drawn at 8-10 a.m. from each subject by venipuncture into a heparin-containing vacuum tube at the moment of the clinical interview. The blood was immediately centrifuged twice at 3,000 g for 10 min, and plasma samples were kept frozen at -70°C until assayed. Plasma NT-3 and NT-4/5 levels were measured using enzyme-linked immunosorbent assay (ELISA) kits for NT-3 and NT-4/5 (DuoSet, R&D Systems, Minneapolis, MN, USA), in accordance with manufacturer instructions. Concentrations were expressed as pg/mL.
All variables were tested for normality of distribution by means of the Kolmogorov-Smirnov test. Descriptive statistics were used to report socio-demographic and clinical features of the sample. Comparisons between dichotomous variables were assessed with the chi-square or Fisher's exact test, as appropriate. Between-group differences (patients vs. controls) were assessed with the Mann-Whitney U test. Differences among the three groups (patients in mania vs. patients in remission vs. controls) were compared with the Kruskal-Wallis test. Multiple comparisons among levels were checked with Dunn's post-hoc test. Spearman's correlation analysis was performed for NT-3 and NT-4/5 levels, age, disease duration, and YMRS and HDRS scores. All statistical tests were two-tailed and performed at a significance level of p < 0.05. Statistical analyses were performed using SPSS version 17.0.
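For orientation, a hedged sketch of the same nonparametric pipeline in Python (the authors used SPSS 17.0; this scipy-based version is an illustration, not their code, and all data arrays are synthetic placeholders, not the study's measurements):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
controls  = rng.normal(60, 15, 25)   # NT-4/5, pg/mL (placeholder values)
mania     = rng.normal(40, 15, 22)
remission = rng.normal(55, 15, 18)

# Normality (Kolmogorov-Smirnov), two-group (Mann-Whitney U) and
# three-group (Kruskal-Wallis) comparisons, and Spearman correlation:
print(stats.kstest(controls, "norm", args=(controls.mean(), controls.std())))
print(stats.mannwhitneyu(mania, controls, alternative="two-sided"))
print(stats.kruskal(controls, mania, remission))

age_mania = rng.normal(48, 14, 22)   # placeholder ages for the mania group
print(stats.spearmanr(mania, age_mania))

# Dunn's post-hoc test (pairwise comparisons after Kruskal-Wallis) is not in
# scipy; the scikit-posthocs package provides posthoc_dunn() for this step.
```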
Results

Eighteen BD patients were in remission (14 female; age [mean ± SD] 49.78±8.92 years) and 22 BD patients were in mania (15 female; age [mean ± SD] 48.04±14.22 years). The mean ± SD length of illness was 26.61±11.85 years in BD patients in remission and 19.0±13.43 years in BD patients in mania (p = 0.09, Mann-Whitney U test). The mean YMRS and HDRS scores in mania were 26.0 and 4.77, respectively. Four of the 18 patients in remission had nicotine dependence, as did nine of the 22 patients in mania (p = 0.31). In the control group, 19 out of 25 controls were female and the mean age was 49.16±7.36 years. No control subjects had nicotine dependence. There were no significant differences in NT-3 plasma levels among the three groups (Figure 1A), whereas NT-4/5 plasma levels were significantly lower in BD patients in mania than in controls (p < 0.05, Dunn's post-hoc test). There were no significant differences between BD patients in mania vs. BD patients in remission (median [interquartile range] for BD patients in remission, 37.09 [34.27-69.60] pg/mL; p > 0.05, Dunn's post-hoc test) or BD patients in remission vs. controls (p > 0.05, Dunn's post-hoc test) (Figure 1B).
Plasma levels of NT-4/5 and NT-3 were not associated with the presence of psychiatric or clinical comorbidities, substance dependence, nicotine dependence, or use of any mood stabilizer (i.e., atypical antipsychotics, lithium, or anticonvulsants).NT-3 and NT-4/5 plasma levels did not correlate with age, disease duration, educational attainment, or HDRS and YMRS scores.
Discussion
In the present sample, BD patients in mania had decreased circulating levels of NT-4/5 in comparison with healthy subjects and BD patients in remission. To the best of our knowledge, this is the first study to report lower plasma levels of NT-4/5 in patients with BD.
Previous studies evaluating NT-4/5 levels have reported discordant results. Walz et al.9 demonstrated increased NT-4/5 serum levels in BD, regardless of mood state, compared with controls. Another study found that mRNA NT-4/5 expression in total blood of BD patients in depression was not significantly different from that of controls.10 The reasons for such discordant results are unclear.
NT-4/5 and BDNF exert their specific biological activities through the same receptors (TrkB and p75), but NT-4/5 seems to be more potent than BDNF in terms of influencing neurite outgrowth.11 In this line, and given the consistent finding of decreased circulating BDNF levels in BD patients during acute mood episodes,3,4 decreased plasma levels of NT-4/5 in BD patients in mania were to be expected.
Regarding NT-3 plasma levels, our result is consistent with a previous study that evaluated mRNA expression of NT-3 in peripheral blood cells from BD patients and did not find any difference in comparison with controls.10 Other studies, however, have reported altered NT-3 levels in BD.13,14 These discordant results might be due to distinct inclusion criteria (exclusively type 1 BD in the present study vs. type 1 and 2 BD in previous studies) and due to methodological differences, such as serum vs. plasma measurements.
It is difficult to draw a definitive conclusion regarding the neurotrophin profile of patients with BD. Previous studies have reported decreased BDNF levels in acute mood episodes3,4 and decreased NGF levels in mania,5 and the present study found decreased NT-4/5 levels in mania. It thus seems reasonable to assume that all neurotrophins are decreased during acute mood episodes (particularly manic episodes) in BD, which is in line with evidence that the related signaling pathways are altered in BD.15 Longitudinal studies controlling for methodological issues and confounding factors are necessary to confirm this assumption, as there are several conflicting reports. Furthermore, it is still uncertain whether plasma or serum levels reflect the distribution of neurotrophins in the CNS. Notably, no previous study has evaluated NT-3 or NT-4/5 levels in the CNS of BD patients.
This study has strengths and limitations that must be considered when interpreting its results. The diagnostic interviews of both patients and controls were performed using the same protocol, overcoming a limitation of previous studies. In addition, the exclusion of patients with other medical conditions, such as inflammatory diseases, can be regarded as a strength of the study. The lack of strict control for confounding factors, such as body mass index, medication use, and number of cigarettes, must be considered a limitation, as must the small sample size. Furthermore, the question of whether these neurotrophin levels represent primary-causal or secondary-reactive changes remains unaddressed.
In conclusion, our findings reinforce the view that neurotrophin dysfunction is present in BD, especially during acute mood episodes. These findings support a role for NT-4/5 as a potential therapeutic target in BD.
Figure 1 NT-3 (A) and NT-4/5 (B) plasma levels in controls, patients with BD in mania, and patients with BD in remission. Bars represent median values. BD = bipolar disorder; NT = neurotrophin. p < 0.05, Kruskal-Wallis test with post-hoc Dunn's test.
"year": 2014,
"sha1": "0a0741ad1d414aa0a1192501f37f10605fc243cb",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/rbp/a/FmC8mRNn7J96G4XWb6y4hSh/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "76f7d97c6f4ff1d23bb4f3e9f034a871dc2b8c3f",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
Philosophical Insights into the 'great' of Great River Culture through Chuang-tzu's 'Autumn Floods'
The Great River Culture, deeply embedded in the fabric of Chinese history, symbolizes the enduring legacy of the Yellow River and its influence on the cultural and spiritual development of the Chinese nation. Understanding the concept of ‘great’ within this cultural context is vital for appreciating its profound impact. In Chuang-tzu’s ‘Autumn Floods,’ the exploration of the relative nature of ‘small’ and ‘great’ provides an invaluable perspective for this understanding. This paper uses the method of literature analysis to delve into the philosophical content expressed by Chuang-tzu in the seven questions and answers between the Deity of the Yellow River and the Deity of the Northern Sea. Chuang-tzu elucidates that understanding the ‘small’ is necessary to discuss the great, the distinction between ‘small’ and ‘great’ is not constant, and the separation between the Dao and objects, along with the concept of reverting to the true essence. Hence, the ‘small’ and the ‘great’ are interdependent, with no fixed division between them. Discussing the ‘great’ only in the tangible and limited sense of ‘seeing things as objects’ is far from sufficient. The higher level of the intangible and infinite Dao represents the true essence of the concept of ‘great’ in river culture.
INTRODUCTION
The rise of river civilizations is generally considered the beginning of the world's oldest civilizations. Starting around 5400 BC, river civilizations accounted for about eighty percent of human civilizations worldwide, including the four well-known ancient civilizations, all of which were river cultures. This demonstrates that river cultures had inherent advantages in the conditions necessary for the emergence and development of civilizations. Therefore, in the early stages, river cultures exhibited a high level of development in various fields, including politics, economy, and the arts, on a global scale. River cultures were often located in regions with fertile and vast lands, abundant resources, suitable climate, and plentiful water sources. These favorable natural conditions allowed for self-sufficiency in material aspects, unlike maritime cultures, which needed to expand and explore externally. The ideological and cultural aspects of river civilizations were developed and continuously evolved based on their own production and living practices, demonstrating a certain degree of independence in their development. The intellectual and cultural contributions of river civilizations can be considered among the earliest and most significant in the history of human thought. The philosophical ideas of each river civilization were unique and brilliantly distinctive in their respective eras. Regarding the river culture of China, it served as a cradle for economic and political development, and also gave birth to a rich cultural heritage. Particularly notable is the philosophical thought that emerged within this river culture. Ancient Chinese literature is replete with discussions on the philosophical ideas inherent in river culture, with Chuang-tzu's work being a classic example. Chuang-tzu's text offers a comprehensive exploration of the philosophical concepts embedded in river culture, with 'Autumn Floods' being a quintessential representation of these ideas. This text encapsulates the profound insights derived from the context of the river culture, reflecting on themes such as the natural world, human existence, and the underlying principles of life and society, all of which were deeply influenced by the unique environmental and cultural aspects of the ancient river civilizations of China.
Chuang-tzu's 'Autumn Floods' presents a metaphorical tale between the Deity of the Yellow River and the Deity of the North Sea, encapsulating Chuang-tzu's philosophical exploration of the concepts of 'small' and 'great.' The narrative begins with a vivid depiction of the rivers swelling during the rainy season, a scene both majestic and beautiful, filling the Deity of the Yellow River with pride. However, his encounter with the vast North Sea leads to a humbling realization of his own insignificance, sparking a series of seven dialogues with the Deity of the North Sea. These dialogues serve as a conduit for Chuang-tzu to express his philosophical insights. River culture holds a significant position in Chinese civilization. The 'great' of river culture does not just refer to the vastness and majesty of the Yellow River and the Yangtze River, but also encompasses a richer philosophical significance. Understanding the concept of 'great' on a deeper philosophical level is a critical issue. By drawing on Chuang-tzu's 'Autumn Floods,' we can better interpret the philosophical essence of the 'great' in river culture, fundamentally enriching its cultural depth.
LITERATURE REVIEW
Numerous scholars have delved deeply into Chuang-tzu's 'Autumn Floods,' highlighting its vital role in Taoist thought and its literary merit. These scholarly analyses, examining the text from diverse perspectives, enrich our understanding of Chuang-tzu's philosophy.
Wang Changmin (2016) highlights its incorporation of pivotal Daoist ideas from the Spring and Autumn and Warring States periods, while Ren Pengfei (2019) contrasts Chuang-tzu's inclusive and detached philosophical stance with Laozi's more austere approach. Wang Weiwei (2020) and Sun Mingjun (2021) further dissect the nuanced interpretations of Daoist concepts within the text, such as 'equalizing things' and the harmony between heaven and man. Zheng Songwen (2022) explores the contemporary relevance of 'objectification,' a term originally coined in Chuang-tzu's works, and Liu Guangtao (2022) emphasizes the profound philosophical and aesthetic depth in 'Autumn Floods,' reflecting Chuang-tzu's intellectual breadth and intricate logic. Sun Mingjun (2023) argues that 'Autumn Floods' reaches a philosophical pinnacle, comparable to 'On the Equality of Things,' in its exploration of Daoist concepts of inaction. At the same time, it presents a critique of Confucian thought.
However, these studies are not without flaws. The modern interpretations, while significant, tend to rely overly on contemporary viewpoints and philosophical structures, often overlooking the original context of Chuang-tzu's ideas. Such an approach can distort or oversimplify Chuang-tzu's original thoughts, thereby missing out on a deeper comprehension of the text. Consequently, while these analyses offer invaluable insights into 'Autumn Floods,' they should be balanced with an awareness of modern-centric biases, focusing more on the text's significance within its authentic historical and cultural setting.
METHODOLOGY
Employing literature analysis, this paper meticulously examines the allegorical story of the Deity of the Yellow River and the Deity of the North Sea in 'Autumn Floods,' shedding light on Chuang-tzu's philosophical dialectics concerning the relative nature of 'small' and 'great.' The approach used in this article is a classic cross-era interpretation. While Chuang-tzu's philosophical interpretation of river culture is based in his own era, this article aims to discuss not just the river culture of Chuang-tzu's time, but the continuous evolution and influence of river culture from its inception to the present day. Specifically, it involves exploring the essence and impact of river culture from a broader historical perspective. This entails examining how the foundational elements and philosophical concepts of river culture, as presented in classical texts like 'Chuang-tzu,' have been adapted, interpreted, and integrated throughout various periods in history, up to the modern era. By doing so, the article seeks to understand the enduring influence of river culture and its evolving significance across different historical contexts.
RESEARCH RESULT
The Deity of the Yellow River's initial question reveals his epiphany of personal insignificance amidst the grandeur of nature. His pride, rooted in the forceful convergence of rivers into his domain, is dwarfed by the vastness of the North Sea. The Deity of the North Sea's responses highlight the limitations of perspective, drawing parallels to a well-dwelling frog's inability to grasp the expanse of the sea and a summer insect's unawareness of winter's frost. These dialogues emphasize the importance of understanding the smaller realities to comprehend greater truths, transcending fixed notions of scale. Chuang-tzu navigates through this discourse, unveiling a dynamic world where perspectives shift with time and context, where opposites coexist and contribute to a holistic understanding of existence, and where the pursuit of Dao leads to a profound realization of the inherent unity and balance in the natural world.
The Structure of 'Autumn Floods'
The 'Autumn Floods' utilizes perspectives from 'Equalizing Things,' vigorously arguing for the infinite relativity of the size and rightness/wrongness of all things, and the extreme impermanence of human status and honor, with the aim of encouraging people to shed falsehoods, embrace truth, and comply with nature, without harming their innate nature in pursuit of fame and fortune.
The chapter begins with seven exchanges between the deities of the Yellow River and the Northern Sea, spanning nearly two thousand characters. The opening discusses the self-contentment of the deity of the Yellow River and his admiration for the vast ocean, humorously deeming himself a joke in the eyes of the truly knowledgeable. It then connects to the topic of the 'distinction between great and small' from 'Free and Easy Wandering,' discussing the principle of the well frog being unable to speak of the sea. It continues by denying the difference in size on a quantitative level, shifting from the 'quantity' to the 'quality' of all things. Finally, it moves from discussing all things to expounding on the Dao of Heaven, which encompasses and transcends all things, as in 'all things are equal, who can say which is short or long,' while the Way is present in all things, as in 'what to do, what not to do, it will naturally transform.' This leads into the theme of 'equalizing things' discussed in the 'Equalizing Things' chapter. The chapter interweaves metaphor and reasoning, poetry and prose, in a captivating manner.
Although 'Autumn Floods' sits in the middle of the outer chapters, it holds a high status and has been highly praised by scholars and literati throughout the ages. Lin Yunming of the Qing Dynasty said, 'The main idea of this chapter originates from the "Equalizing Things" of the inner chapters, breaking and creating anew. Having reached the pinnacle, it uses words and changes as if with a divine axe, a masterpiece through the ages, opening countless methods for future generations.' Mr. Zhu Wenxiong said, 'This chapter on the distinction between big and small seems to come from "Equalizing Things." However, when it says "do not use humanity to destroy heaven," it shows the greatest of the Way still lies in inaction, which is also the essence of "The Grand Master."' He also said, 'This chapter is Chuang-tzu's most satisfactory work.' The chapter views the world from the height of the Dao, recognizing that objects are constantly changing and, due to the limitations of subjective and objective conditions, these changing objects are beyond the exhaustive understanding of humans, thus leading to the relativity of human value judgments. Chuang-tzu's macroscopic perspective of 'demonstrating through the Dao' frees cognition from being confined to narrow knowledge and leads human understanding towards the vast realm of infinite relativity.
'Autumn Floods' is composed of two major parts. The first part describes the conversation between the Northern Sea god and the Yellow River god; a question-and-answer format makes up the main body of this section. This long dialogue can be further divided into seven fragments. The first fragment, up to 'Aren't you doing the same by making yourself greater than the water,' discusses the river deity's 'small' yet self-perceived 'great,' in contrast to the sea deity's 'great' yet self-perceived 'small,' illustrating the relative nature of understanding things. The second fragment, up to 'How then can we know that the heavens and the earth are sufficient to exhaust the realm of the utmost great,' points out the difficulty in truly knowing things and determining their size, showing that cognition is often affected by the uncertainty of things themselves and the infinity of all things. The third fragment, up to 'This is the extreme division,' follows the previous dialogue, further explaining the difficulty in understanding things, which are often 'indescribable in words' and 'ungraspable in thought.' The fourth fragment, up to 'the house of small and big,' starts from the relativity of things and goes deeper to point out that neither size nor status is absolute, and thus ultimately should not be discerned. The fifth fragment, up to 'It will naturally transform,' based on the view of 'all things being equal' and 'the Way having no beginning or end,' states that human cognition of external things must be inactive, only waiting for their 'natural transformation.' The sixth fragment, up to 'Speaking from the extreme to get to the essence,' discusses why it is important to value the Dao, indicating that understanding the Dao leads to comprehending the principles of things and recognizing the laws of change in things. The seventh fragment, up to 'This is called returning to the true,' is the final part of the conversation between the river deity and the sea deity, proposing the idea of returning to the true nature, that is, not using humanity to destroy the natural, pushing the concept of 'natural transformation' a step further.
The second part consists of six independent fables, each standing on its own, unconnected to the others or to the first part's dialogue between the sea deity and the river deity. These fables contribute little to the overall theme, giving a sense of disconnection.
Philosophical Discourse on the Concepts of 'Small' and 'Great' in 'Autumn Floods'
1. Understanding the 'Small' to Discuss the 'Great'
In Chuang-tzu's tale, the Deity of the Yellow River's initial question to the Deity of North Sea is actually an admission of his own smallness. The Deity of the Yellow River once reveled in the grandeur of his domain, swollen with torrents during the rainy season, feeling invincible and immensely proud. This pride is abruptly humbled by the vastness of the North Sea, which reveals to him his own narrowness. The Deity of North Sea's response employs familiar analogies, a frog in a well and a summer insect unaware of winter's cold, to depict the limitations of a narrow viewpoint. The Deity of North Sea then presents the fundamental premise for discussing the 'great': the necessity to first understand the 'small.' Unlike the Deity of the Yellow River, whose pride stemmed from the seasonal abundance of the rivers, the Deity of North Sea exhibits a humble spirit, recognizing the dynamic relationship of sizes and roles among the rivers, sea, and the cosmos.
The Fluidity of the 'Small' and 'Great' Divide
The Deity of North Sea's explanation of 'understanding the 'small' to discuss the 'great'' inherently suggests that the division between small and great is not fixed but relative. The Deity of the Yellow River's further inquiries, though seeking to understand this concept, remain confined to a more concrete level. His questioning only leads to the Deity of North Sea's more direct responses, denying any fixed standard in categorizing the 'small' and 'great.' These responses involve concepts like the boundlessness of quantity and the endless passage of time, which render the endeavor of capturing the true scale of things elusive and futile. The Deity of North Sea emphasizes that all attempts to understand the physical dimensions and the beginnings and ends of things within these infinite parameters are inherently bound to be confounded.
The Nature of 'Small' and 'Great' within the Limitations of Human Understanding
In reflection, humans, limited by their short lifespans and constrained wisdom, often grapple with the vast unknown. As Chuang-tzu puts it, our finite knowledge pursues the infinite, leading to inevitable exhaustion and perplexity. Trying to comprehend the true nature of the vast and the minuscule in a constantly changing world often leads to confusion. The attempts of the Deity of the Yellow River, and of humanity, to understand the magnitude and triviality of things are challenged by the dynamic and ever-evolving nature of the universe. This understanding suggests that the perceived 'great' of the Great River Culture is not a constant but changes with time, perception, and context, much like the relative sizes of rivers and seas against the earth and cosmos.
In summary, Chuang-tzu's narrative transcends the physicality of size, advocating a philosophical understanding that challenges and redefines conventional perceptions of the 'small' and 'great.' It invites a deeper appreciation for the dynamic and relative nature of existence and underlines the significance of embracing change and perspective as integral to wisdom.
The Distinction Between Dao and Objects
In Chuang-tzu's 'Autumn Floods,' the Deity of North Sea's explanation to the Deity of the Yellow River doesn't focus on strictly defining the small and the great. Instead, it emphasizes the variability of time, all things, and human perspectives, a concept the Deity of the Yellow River struggles to grasp. The Deity of the Yellow River's subsequent questions, more in line with a layman's perspective, shift from tangible to intangible aspects. Chuang-tzu articulates that while the minutest can be termed 'fine' and the largest 'vast,' both terms apply to physical entities. However, the immeasurable, formless aspects of existence, which cannot be quantified, lie beyond this tangible scope. This leads to the conclusion that while spoken language can describe the superficial aspects of things, it fails to capture their deeper, intrinsic essence.
Despite the Deity of the Yellow River's confusion about distinguishing between the 'small' and 'great,' the Deity of North Sea elaborates on the difference from the perspective of Dao and objects. In the physical realm, distinctions of size exist due to the diverse and uneven nature of all things, enabling one to draw conclusions of 'great' or 'small' based on perspective. However, in the realm of Dao, there is no distinction of size. The Dao exists beyond the physical, and neither the minutest nor the vastest entities can connect with it. Chuang-tzu emphasizes that the realm of Dao belongs to the indescribable and ungraspable aspects of existence.
This distinction is further explored through the story of Duke Huan of Qi and Wheelwright Bian (Lun Bian) in Chuang-tzu's 'Heavenly Dao.' Duke Huan, engrossed in reading, is criticized by Wheelwright Bian for focusing on the chaff of ancient texts. Bian uses his experience in wheel-making to convey his understanding of Dao, explaining that the perfect pace in crafting a wheel cannot be captured in words, nor can it be taught or learned. He implies that the essence of Dao, encapsulated in his craft, is lost when the ancient masters pass away, leaving only superficial remnants in texts. Chuang-tzu thus suggests that the pursuit of Dao cannot be fulfilled through books or speech alone, as they are confined to the physical realm.
Consequently, discussing the 'great' of the Great River Culture merely in terms of physical attributes fails to touch upon its deeper essence. To truly understand its 'great,' one must perceive it through the lens of Dao, where there is no distinction between the 'small' and 'great.' The Great River Culture, in its essence, is a manifestation of Dao: fluid, encompassing, and beyond physical constraints.
Reflecting Their True Nature
As the discussion in Chuang-tzu's 'Autumn Floods' ascends to the broader scope of Dao, fixed notions of small and great dissolve, influenced by the variance of time and circumstances. Historical examples, such as the different outcomes of rulers who either abdicated or fought for power, illustrate this point. Chuang-tzu advocates grasping the duality present in everything, recognizing that the nature of things is in constant flux and dependent on context.
In conclusion, the dialogue between the Deity of the Yellow River and the Deity of North Sea in Chuang-tzu's 'Autumn Floods' delves deep into Chuang-tzu's philosophical ponderings on the 'small' and 'great.' The key is understanding the 'small' to truly discuss the 'great,' acknowledging the fluidity and relativity of these concepts, and recognizing the differences in perceiving them through the physical world versus the Dao. The essence of the Great River Culture, therefore, lies not just in its physical manifestations but in its embodiment of Dao, transcending physical constraints and embracing a more profound, spiritual core.
CONCLUSIONS AND RECOMMENDATIONS
The story of the Deity of the Yellow River and the Deity of North Sea in Chuang-tzu's 'Autumn Floods' encapsulates Chuang-tzu's philosophical musings on the 'small' and 'great.' Firstly, acknowledging one's 'small' is vital to discussing the 'great.' This understanding fosters a spirit of humility and inclusion within the Great River Culture. Secondly, the distinction between small and great is not static but ever-changing, influenced by numerous factors. Therefore, the 'great' of the Great River Culture encompasses a spirit of constant evolution and transformation. Thirdly, the real essence of the Great River Culture, its true 'great,' lies not in the physical realm but in the realm of Dao, transcending physical distinctions. Finally, understanding and embracing Dao brings a recognition that the apparent divisions of 'small' and 'great' are influenced by various factors, and that their true nature is found in their fluidity and interconnectedness.
FURTHER RESEARCH
In writing this article, the researcher acknowledges that many shortcomings remain in terms of language, writing, and form of presentation, given the researcher's limited knowledge and abilities. Therefore, to improve the article, the researcher welcomes constructive criticism and suggestions from all parties.
ACKNOWLEDGMENT
This paper was submitted to the Sarawak-Great River Culture Forum. The researcher thanks the organizing bodies (North China University of Water Resources and Electric Power, University of Technology Sarawak, and the Sarawak Chinese Cultural Association) for the opportunity to present this work. Gratitude is also extended to the scholars at the forum for their contributions to this article. | 2024-05-18T15:58:59.918Z | 2023-10-29T00:00:00.000 | {
"year": 2023,
"sha1": "251f9bdad7705aab32964be6fc7e1af10f0568bb",
"oa_license": "CCBY",
"oa_url": "https://journal.formosapublisher.org/index.php/ajpr/article/download/8516/8600",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "3d47249a20cf3f9a696512aefe8257ba4f6f2a7e",
"s2fieldsofstudy": [
"Philosophy",
"History"
],
"extfieldsofstudy": []
} |
251335129 | pes2o/s2orc | v3-fos-license | Usefulness of Automated Hb-HPLC Analyzer Based on Reverse-Phase Cation-Exchange Chromatography for Hemoglobin A1C Determination in the Setting with High Prevalence of Hemoglobin E Disorder
Dear Editor, diabetes is a common endocrine disorder that affects millions of people worldwide. For the management of people living with diabetes, good glycemic control is required, and laboratory monitoring of the patient's glycemic control is used in general practice. In people living with diabetes, glycohemoglobin (GHb) plays an important role in determining glycemic control. The product of an irreversible non-enzymatic glycation of the beta chain of hemoglobin A, GHb is assessed as hemoglobin (Hb) A1C. In patients with diabetes, HbA1C is regularly used to determine long-term glycemic management, and its monitoring is accepted as a useful tool for patient management. At present, a new analyzer for HbA1C is available and allows convenient point-of-care analysis.
The measurement of HbA1C in patients with Hb variants or derivatives might be hampered by a number of patient- and laboratory-related issues. The challenge of employing hemoglobin A1C measurement in hemoglobinopathy-prevalent areas is recognized. [1] This poses a major problem for using HbA1C to monitor diabetes in areas with a high prevalence of hemoglobinopathy. In our setting, Southeast Asia, HbE disorder is highly prevalent, and its effect on HbA1C measurement, especially in homozygous HbE, is well recognized. Here, the authors report the clinical usefulness of a new automated Hb-HPLC analyzer based on reverse-phase cation-exchange chromatography for hemoglobin A1C determination in a setting with a high prevalence of hemoglobin E disorder. The new analyzer is the ARKRAY ADAMS A1c HA-8180T analyzer, which is proven for accuracy in the determination of HbA1C. [2] For this analyzer, the inter- and intra-run coefficients of variation are 0.43% and 0.29%, respectively, and the measurement range for HbA1C is 3-20%. The system can report flags for HbE and other hemoglobinopathies, including HbS, HbC, and HbD, based on identification of the variant peak. The new tool can provide an analytical result in a short amount of time and can be utilized as a point-of-care testing analyzer. It is superior to the traditional analyzer in that it can report results for HbA1C and other hemoglobinopathies on a single analyzer.
This new tool was implemented in our setting, a primary medical center, one year ago. The tool can perform dual analysis of HbA1c and HbA2 in a single run. In our area of Southeast Asia, the diagnostic accuracy of the analyzer for the specific analysis of HbA1C and hemoglobinopathy has been reported. [3] Here, the authors retrospectively reviewed the records of HbA1C analysis in this setting. The analyzer was able to detect 38 individuals (0.3%) with homozygous HbE from 12,578 HbA1C tests over a one-year period. None of these individuals had a previous diagnosis of HbE disorder. In these cases, the HbA1C result was discarded, and fructosamine was used for monitoring of diabetes instead. Considering the cost and benefit of the new analyzer in our setting, the cost is 5 USD per analysis, while the cost for the classical HbA1C analyzer is 6 USD per analysis, and classical HbE analysis by electrophoresis costs 12 USD per analysis. Therefore, for detecting an HbA1C result with concurrent HbE, the new analyzer can reduce cost by up to 13 USD per case. This result implies the usefulness of the new analyzer in helping to detect unknown interference with HbA1C measurement from HbE disorder in a setting with a high prevalence of hemoglobinopathy.
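To make the quoted saving explicit, below is a minimal sketch of the cost arithmetic, assuming the per-test prices stated above; the 13 USD figure follows from replacing a classical HbA1C run plus a separate electrophoresis with a single run on the new analyzer.

```python
# Per-test prices (USD) as reported above.
NEW_ANALYZER = 5.0      # single run reporting HbA1C plus hemoglobinopathy flags
CLASSICAL_HBA1C = 6.0   # classical HbA1C analyzer
ELECTROPHORESIS = 12.0  # classical HbE analysis by electrophoresis

# For a case where concurrent HbE must be detected alongside HbA1C, the
# classical pathway needs two separate tests; the new analyzer needs one run.
classical_pathway = CLASSICAL_HBA1C + ELECTROPHORESIS  # 18 USD
saving_per_case = classical_pathway - NEW_ANALYZER     # 13 USD
print(f"Saving per detected HbE case: {saving_per_case:.0f} USD")
```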
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest. | 2022-08-05T15:09:42.660Z | 2022-05-01T00:00:00.000 | {
"year": 2022,
"sha1": "0e7d63d6c28cef853515a4b3ae4929b8772b24a0",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/ijem.ijem_54_22",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "31a4fe09cca94cd8351e269fbeb07301700c2ce0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235441460 | pes2o/s2orc | v3-fos-license | Extent of Surgery and the Prognosis of Unilateral Papillary Thyroid Microcarcinoma
It remains controversial whether patients with papillary thyroid microcarcinoma (PTMC) benefit from total thyroidectomy (TT) or thyroid lobectomy (TL). We aimed to investigate the impact of the extent of surgery on the prognosis of patients with unilateral PTMC. Patients were obtained from the Surveillance, Epidemiology, and End Results database from 2004 to 2015. Cancer-specific survival (CSS) and overall survival (OS) were evaluated by Cox regression and Kaplan–Meier curves with propensity score matching. Of 31167 PTMC patients enrolled, 22.2% and 77.8% underwent TL and TT, respectively. Patients with TT were more likely to be younger and female, to present tumors of multifocality, extrathyroidal extension, cervical lymph node metastasis (CLNM), and distant metastasis, and to receive radioactive iodine (RAI), compared with those receiving TL. The multivariate Cox regression model showed that TT was not associated with improved CSS and OS compared with TL, with hazard ratios (HR) and 95% confidence intervals (CI) of 0.53 (0.25-1.12) and 0.86 (0.72-1.04), respectively. In addition, the Kaplan–Meier curves further confirmed the similar survival between TL and TT after propensity score matching. The subgroup analysis showed that TT was associated with better CSS for patients < 55 years, those with tumors of gross extrathyroidal extension, CLNM (N1b), and cases not receiving RAI, with HR (95% CI) of 0.13 (0.02-0.81), 0.12 (0.02-0.66), 0.11 (0.02-0.64) and 0.36 (0.13-0.90), respectively. TT predicted a trend of better OS for patients with N1b and distant metastasis after adjustment. In addition, TT was associated with better CSS than TL for patients with risk factors such as N1b combined with gross extrathyroidal extension and/or multifocality after matching. In conclusion, TL may be enough for low-risk PTMC patients. TT may improve the prognosis of unilateral PTMC patients with 2 or more risk clinicopathologic factors, such as CLNM, multifocality, extrathyroidal extension, and a younger age, compared with TL.
INTRODUCTION
Papillary thyroid microcarcinoma (PTMC) is defined as a papillary thyroid carcinoma (PTC) ≤ 1.0 cm in diameter, which has been increasingly detected in recent decades across the world with the popularity of ultrasound and fine-needle aspiration cytology (1, 2). PTMC is generally an indolent disease with an excellent prognosis (3). Recently, active surveillance has been recommended as an alternative approach for low-risk PTMC according to the American Thyroid Association (ATA) guidelines (4).
Thyroid lobectomy (TL) alone is considered sufficient for unifocal and intrathyroidal PTMC in the absence of clinically detectable cervical nodal metastasis (4). TL may be appropriate for PTMC patients when no evidence of extrathyroidal disease is found (5). However, the rate of pathological cervical lymph node metastasis (CLNM) was 48.0% for PTC (6), and 42.4% for PTMC when prophylactic central lymph node dissection was performed (7), which does pose a risk for local recurrence (8). Postoperative local lymph node recurrence is associated with reoperations and the consequent excess morbidity from reoperations (9).
Besides TL, total thyroidectomy (TT) is commonly performed on unilateral PTMC patients. TT is associated with more complications, such as hypocalcemia, recurrent laryngeal nerve injury, and the need for lifelong hormone replacement. However, it remains unclear whether patients with unilateral PTMC benefit from TL or TT (10). We expect that PTMC patients with risk clinicopathologic features may benefit from more aggressive surgical treatment. However, this remains unclear due to the excellent prognosis of PTMC and the limited number of qualified cases. In this study, we aimed to compare the prognosis between patients receiving TT and TL with a large sample size.
Ethics Statement
The patients were enrolled from the Surveillance, Epidemiology, and End Results (SEER) program (https://seer.cancer.gov/) from 2004 to 2015. This study was deemed exempt from institutional review board approval because it used deidentified patient information.
Study Population
Medical records were drawn using the International Classification of Diseases for Oncology code site C73.9. Histotypes of PTC with values of 8050 (papillary carcinoma), 8260 (papillary adenocarcinoma), 8340 (papillary carcinoma, follicular variant), and 8341 (papillary microcarcinoma) were included. Values of 8050, 8260 and 8341 were classified as PTC, and 8340 as follicular variant PTC (FVPTC). The demographic and clinicopathologic characteristics and treatment, along with survival data, were recorded. Race was categorized into white, black, and other (American Indian/AK Native, Asian/Pacific Islander). Extrathyroidal extension was divided into minimal extension and gross extension. Cervical lymph node metastasis was determined by the derived AJCC N stage, 6th ed. (2004+), which includes N0 (without nodal metastasis) and N1 (N1a, N1b, and N1). N1a means nodal metastasis to level VI (pretracheal, paratracheal, and prelaryngeal/Delphian lymph nodes). N1b represents nodal metastasis to unilateral, bilateral, or contralateral cervical or superior mediastinal lymph nodes. N1 (NOS) means regional nodal metastasis. Patients with multiple primary tumors, tumors in both thyroid lobes, non-positive histology, age < 18 years, or unknown or indefinite data of interest were excluded. Only patients with unilateral PTMC were included.
Statistical Analysis
Age and year of diagnosis were expressed as median (upper and lower quartile) because of their skewed distributions and analyzed with the Mann-Whitney U test. Categorical variables were presented as percentages and analyzed using the chi-square test. The cancer-specific survival (CSS) and overall survival (OS) were estimated by Kaplan-Meier curves and compared by log-rank tests. A Cox proportional hazards model was established to estimate risk factors for CSS and OS with hazard ratios (HR) and 95% confidence intervals (CI). Propensity score matching (PSM) was performed using R software (ver. 3.3.3, http://www.r-project.org/) with the package 'MatchIt'. One-to-one matching with a caliper of 0.1 was used to balance demographic, pathologic and treatment covariates between TL and TT (11). The matched variables included age, year of diagnosis, sex (male vs. female), multifocality (solitary vs. multiple nodules), extrathyroidal extension (no vs. yes), cervical lymph node metastasis (no vs. yes), distant metastasis (no vs. yes), and radioactive iodine (RAI). Subgroup analyses stratified by clinicopathologic characteristics were performed (12). Statistical significance was set at a two-sided p value < 0.05. The other data were analyzed with Stata software (Stata/MP ver. 14.2, StataCorp., College Station, TX) and GraphPad Prism (ver 7.0, GraphPad Software, Inc).
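The original matching was done with R's 'MatchIt'; as an illustration of the same idea, below is a minimal Python sketch of logistic-regression propensity scores followed by greedy one-to-one nearest-neighbor matching with a caliper. The column names, and the convention that the caliper is expressed in standard deviations of the logit of the propensity score (MatchIt's default), are assumptions for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def match_one_to_one(df, treat_col, covariates, caliper=0.1):
    """Greedy 1:1 nearest-neighbor matching on the logit of the
    propensity score; the caliper is in SDs of that logit."""
    # Propensity scores from a logistic regression of treatment on covariates.
    X = pd.get_dummies(df[covariates], drop_first=True).astype(float).values
    y = df[treat_col].values
    ps = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    logit = pd.Series(np.log(ps / (1 - ps)), index=df.index)
    max_dist = caliper * logit.std()

    controls = set(df.index[y == 0])
    pairs = []
    for t in df.index[y == 1]:
        if not controls:
            break
        # Closest still-unmatched control on the logit scale.
        c = min(controls, key=lambda i: abs(logit[i] - logit[t]))
        if abs(logit[c] - logit[t]) <= max_dist:
            pairs.append((t, c))
            controls.remove(c)
    return pairs  # list of (treated_index, matched_control_index)

# Hypothetical usage with the covariates listed above:
# pairs = match_one_to_one(df, "TT", ["age", "year", "sex", "multifocality",
#                                     "ETE", "CLNM", "M", "RAI"])
```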
Patient Characteristics
The flow chart of selection is shown in Supplementary Figure 1. Finally, a total of 31167 patients with unilateral PTMC were enrolled, including 6929 (22.2%) undergoing TL and 24238 (77.8%) undergoing TT (Table 1). Compared with patients receiving TL, those receiving TT were more likely to be younger, diagnosed in a later year, and female, to present with tumors of multifocality, extrathyroidal extension (minimal and gross extension), CLNM (N1a, N1b and N1), and distant metastasis, and to be treated with RAI (Table 1).
In the multivariate Cox regression, increasing age, male sex, black race, tumors of gross extrathyroidal extension, N1a, N1b, N1 (NOS), distant metastasis, and treatment with chemotherapy were associated with compromised OS in PTMC patients (HRs and 95% CIs reported in Table 2).
In the univariate Cox regression analysis, TT was associated with improved OS compared with TL, with an HR (95% CI) of 0.74 (0.62-0.88) (Supplementary Table 1). In the multivariate Cox regression model, there was a trend toward better CSS and OS with TT over TL, with HR (95% CI) of 0.53 (0.25-1.12) and 0.86 (0.72-1.04), respectively. However, the differences were not statistically significant (Table 2).
Kaplan-Meier Curves Before and After PSM
Kaplan-Meier curves showed no differences in CSS between the TT and TL groups (Figure 1A). However, the median OS of TT was significantly longer than that of TL before matching (Figure 1B). After balancing the baseline characteristics between TL and TT, the differences between the two groups were significantly reduced (Supplementary Figure 2). The matching process yielded a total of 6929 paired cases. The differences in baseline covariates were well balanced after matching (Supplementary Table 2). However, there were no significant differences in CSS and OS between patients with TT and TL (Figures 2A, B).
We expected that patients with risk clinicopathologic characteristics may benefit from TT. Patients with tumors of multifocality gained improved CSS from TT over TL (p = 0.049) after matching (Figure 3A). In addition, patients with tumors of extrathyroidal extension and CLNM showed marginally improved CSS from TT over TL (p = 0.050 and 0.054, respectively) (Figures 3B, C).
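For readers wishing to reproduce this kind of comparison, below is a minimal sketch of Kaplan-Meier estimation and a log-rank test using the Python lifelines library; the original analysis used Stata and GraphPad Prism, and the column names here are assumptions for illustration.

```python
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def compare_survival(df, time_col="months", event_col="death", group_col="TT"):
    """Plot Kaplan-Meier curves for the two surgery groups and
    return the log-rank p value."""
    kmf = KaplanMeierFitter()
    ax = None
    for label, grp in df.groupby(group_col):
        kmf.fit(grp[time_col], event_observed=grp[event_col],
                label=f"{group_col}={label}")
        ax = kmf.plot_survival_function(ax=ax)

    # Two-group log-rank test, as used for Figures 1-3.
    groups = [grp for _, grp in df.groupby(group_col)]
    result = logrank_test(groups[0][time_col], groups[1][time_col],
                          event_observed_A=groups[0][event_col],
                          event_observed_B=groups[1][event_col])
    return result.p_value
```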
Subgroup Analysis by Multivariate Cox Regression Analysis
In consideration of the trend toward improved prognosis with TT, we further performed subgroup analyses to identify those who might benefit from TT over TL. Compared with TL, TT was associated with improved CSS for patients < 55 years, those with tumors of gross extrathyroidal extension, those with N1b, and those not receiving RAI (Table 4).
DISCUSSION
In the present study, we investigated the extent of surgery and the prognosis of patients with unilateral PTMC. TT was not associated with improved CSS and OS compared with TL in the total population after PSM. However, TT predicted better CSS for patients < 55 years and those with tumors of gross extrathyroidal extension, N1b, or not receiving RAI. After balancing the covariates between the TL and TT groups, we found that TT improved CSS for patients with tumors of multifocality, extrathyroidal extension, and CLNM compared with TL. Importantly, we found that patients with multiple risk clinicopathologic factors, such as CLNM, extrathyroidal extension, and multifocality, were more likely to benefit from TT over TL.
The optimal extent of surgery for PTMC has been controversial. Most single-institutional studies and meta-analyses failed to discern any differences in the prognosis of PTMC patients who underwent TT or TL, which might result from the indolent behavior of PTMC, the short follow-up duration, and the relatively small sample sizes (13)(14)(15). There may be a trend toward a lower mortality rate with TT than with TL. However, the limited number of mortality events prevented establishing a definitive correlation between the extent of surgery and the prognosis of patients with PTMC (14).
In previous studies, data on extrathyroidal extension, pathological type, chemotherapy and RAI were missing or incomplete, and subgroup analyses were not performed. Lee et al. did not find any significant differences in the risk of death and locoregional recurrence between TT and TL in a matched cohort of 506 paired PTMC patients from 1986 to 2006 (15). Some single-institutional studies found that the recurrence rate of patients undergoing TT was similar to that of those undergoing TL (3, 15). However, a recent meta-analysis showed that TT was associated with lower recurrence rates than TL (14, 17). For PTMC of multifocality, TL may result in a higher rate of thyroid bed and lymph node recurrence than TT (18). The low recurrence rate of TT might result from a more radical resection of the contralateral thyroid lobe and cervical lymph nodes (5, 19), while rates of transient and permanent hypoparathyroidism were higher for TT than TL (5). We found that patients undergoing TT were more likely to be younger, and to present with tumors of multifocality, extrathyroidal extension, CLNM, and distant metastasis. These features are associated with nodal metastasis, tumor recurrence, and an unfavorable prognosis (7, 8, 20). We found that patients undergoing TT showed a trend toward improved CSS and OS compared with patients receiving TL. Relative treatment effects may vary according to the heterogeneity of the study population; certain high-risk subsets may benefit most from the treatment (21). We thus expected that a subpopulation of PTMC patients may benefit from TT.
The subgroup analysis revealed that patients < 55 years and those with tumors of gross extrathyroidal extension had improved CSS from TT compared with TL. Younger age and extrathyroidal extension are risk factors for CLNM (7, 8). Of note, TT failed to improve the prognoses of patients with minimal extrathyroidal extension and N1a. For patients with N1b, TT significantly improved CSS compared with TL. These findings highlight the importance of detecting nodal metastasis in the lateral neck. The preferred hierarchy of treatment for PTC with distant metastasis includes TT, nodal dissection, postoperative RAI therapy, and thyrotropin suppression therapy. For refractory disease, kinase inhibitors are recommended (4). We found that patients with distant metastasis may benefit from TT over TL. However, the number of patients with distant metastasis was relatively small, and the result needs to be validated in subsequent studies. The present study found that patients not receiving postoperative RAI might benefit from TT, with improved CSS compared with TL. TT might help eradicate recurrent disease in the contralateral lobe and potentially metastatic lymph nodes, which would be beneficial for those not receiving RAI.
We did not observe any differences in prognosis between TT and TL for PTMC patients with multifocality in the multivariate model, which is consistent with a previous study (22). However, after PSM between TL and TT, TT showed improved CSS for multifocal tumors in unilateral PTMC patients compared with TL, while OS was similar. TL may be a safe treatment approach for selected unilateral PTMC patients with multifocal and node-negative tumors (3), which is consistent with our results. The prognostic significance of multifocal tumors in PTC remains controversial (23). However, when tumors of multifocality were present together with CLNM, extrathyroidal extension, or both, TT was associated with improved CSS compared with TL. Therefore, a more radical surgical treatment may be considered for tumors with more risk factors.
The study should be interpreted in consideration of several limitations. First, selection bias was inevitable even though we adjusted for covariates and performed PSM analysis. In addition, data such as disease recurrence and thyroid-stimulating hormone suppression were not available in the database. Additionally, the results of the subgroup analysis should be interpreted with caution given the limited samples and events evaluated. Last but not least, occult thyroid cancer in the contralateral thyroid lobe and the exact number of metastatic lymph nodes may also influence the outcome. The strengths of this study lie in the latest and largest sample to date, the comprehensive variables adjusted for, and the subgroup analysis together with the PSM analysis.
In conclusion, we for the first time investigated the association between the extent of thyroid surgery and the prognosis of unilateral PTMC patients. The present results suggested that there was no statistical difference in prognosis between TL and TT for unilateral PTMC patients overall. TL is appropriate for unilateral PTMC without risk factors. However, PTMC patients with risk features such as a younger age, multifocality, gross extrathyroidal extension, and N1b may benefit from TT over TL, especially those with multiple risk factors. These findings may have an impact on the treatment of unilateral PTMC. Large-sample and long-term follow-up studies are warranted to validate the present findings.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article and Supplementary Material. Further inquiries can be directed to the corresponding author.
AUTHOR CONTRIBUTIONS
HZ: conception, data acquisition. HZ and LC: data analysis and drafting the article. HZ and LC: revised it critically for important intellectual content. HZ: investigation, project administration, and supervision. All authors contributed to the article and approved the submitted version. | 2021-06-16T13:17:36.585Z | 2021-06-16T00:00:00.000 | {
"year": 2021,
"sha1": "d0f799df98906b7bb96b0f1bf49b7222e334ccef",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fendo.2021.655608/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d0f799df98906b7bb96b0f1bf49b7222e334ccef",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
267608897 | pes2o/s2orc | v3-fos-license | Global research hotspots and trends on robotic surgery in obstetrics and gynecology: a bibliometric analysis based on VOSviewer
Objective Over the last two decades, the quantity of papers published in relation to robotic surgery in obstetrics and gynecology has continued to grow globally. However, no bibliometric analysis based on VOSviewer has been performed to evaluate the past and present of global research in the field. In this study, we aimed to analyze the bibliometric characteristics of papers on robotic surgery in obstetrics and gynecology to reveal research hotspots and trends in this field. Methods The Web of Science Core Collection was searched for scientific papers on robotic surgery in obstetrics and gynecology published between January 1, 1998 and December 31, 2023. Bibliometric metadata of each selected paper was extracted for analysis. The results were visualized by VOSviewer (version 1.6.18). Results A total of 1,430 papers met the inclusion criteria. The United States had the highest total link strengths and contributed the most papers (n = 793). The Mayo Clinic produced the largest number of papers (n = 85), and Professor Pedro T Ramirez contributed the most papers (n = 36). The number of citations ranged from 0 to 295 with a total sum of 29,103. The Journal of Minimally Invasive Gynecology published the most relevant papers (n = 252). Keywords were classified into six clusters based on co-occurrence data, of which cluster 1, cluster 4 and cluster 6 had more main keywords with the largest average publication year. Conclusions This is the first VOSviewer-based bibliometric analysis of robotic surgery research in obstetrics and gynecology. The United States was the leading country, and the Journal of Minimally Invasive Gynecology was the most productive journal in the field. Scientists and institutions from around the world should push their boundaries to bring about deep collaboration. The main research topic has always been the use of robotic surgery in the treatment of gynecologic malignancies. More randomized controlled trials need to be conducted to compare surgical outcomes of robotic surgery with other surgical approaches. Robotic sacrocolpopexy for pelvic organ prolapse has become a new research hotspot, and robotic surgery for sentinel lymph node detection in gynecologic malignancies are more potential directions for future research.
Introduction
Minimally invasive surgery has gradually become a common surgical approach with the continuous development of surgical techniques. The advent of robotic surgical systems is an exciting development in the field of minimally invasive surgery. Robotic surgical systems provide several benefits, including enhanced precision during the operation as well as a clearer three-dimensional surgical field of view, thereby ensuring the safety of the operation (1). Additionally, surgeons can mitigate fatigue by sitting comfortably at the front of the operating table. Nevertheless, robotic surgical systems are not without limitations, owing to their high costs and vulnerability to signal disruption between the console and the apparatus during surgical procedures (2). In general, robotic surgical systems will progress toward greater cost-effectiveness, reduced weight, and enhanced stability.
The advancement of robotic surgery is closely linked to the evolution of information technology and the development of precision machine manufacturing technology. In 1994, an American company, Computer Motion, developed the Aesop system, the first endoscopic surgical system designed to assist in minimally invasive surgery. Although this system was not able to operate independently of instructions, it represented a pivotal advancement in the field of robotic surgery (3). In 1998, Computer Motion developed the Zeus system, which enabled voice-activated interaction for endoscope manipulation and surgical instrument operation under the guidance of a physician. The da Vinci system, a third-generation robotic system developed by Intuitive Surgical, was approved for clinical use in 2000 (3).
Research on the utilization of robotic or computer-assisted techniques in minimally invasive obstetric and gynecologic surgery has increased since the late 1990s (4). Much of that research involved the da Vinci robot, which was approved by the U.S. Food and Drug Administration as a modified laparoscopic approach for gynecologic surgery in April 2005 (5). Currently, robotic surgical systems are rapidly gaining popularity in obstetrics and gynecology. Their applications include but are not limited to sacrocolpopexy (6, 7), hysterectomy (8, 9), myomectomy (10, 11), tubal anastomosis (12, 13), and lymphadenectomy (14, 15).
In bibliometric analysis, statistical and mathematical approaches are used to measure the quality and quantity of documents, books and other communication media (16, 17). As an increasing number of scientific discoveries emerge and published studies are read and cited by other scholars, bibliometric indicators including impact factor, CiteScore, Eigenfactor score, SCImago Journal Rank, H-index, etc., have become increasingly significant (18). In addition to its widespread use in the fields of physics, chemistry, and computer science, bibliometric analysis has opened up new perspectives in the field of medicine (19)(20)(21)(22)(23)(24). By analyzing bibliometric indicators, scholars can understand the influence of publications, countries, organizations, authors, and journals in a particular field (25). Moreover, the greater strength of bibliometric analysis is that it can summarize large amounts of data to report developments and emerging trends in a field (26).
Over the last two decades, the number of papers on robotic surgery in obstetrics and gynecology has grown exponentially. However, to our knowledge, no bibliometric analysis based on VOSviewer has been performed to evaluate the past and present of global research in the field. Thus, this study aimed to identify papers on robotic surgery in obstetrics and gynecology and then to analyze their bibliometric characteristics to help reveal research hotspots and trends in this field.
Data source and search strategy
We searched for scientific papers related to robotic surgery in obstetrics and gynecology via the Web of Science Core Collection (WoSCC), which includes the Science Citation Index Expanded, Social Sciences Citation Index, Arts & Humanities Citation Index, Conference Proceedings Citation Index-Science, Conference Proceedings Citation Index-Social Science & Humanities, Emerging Sources Citation Index, Current Chemical Reactions, and Index Chemicus. We accessed the Web of Science (WoS) by logging into the institutional account of Shandong University. The retrieval strategy was as follows: (Topic = robotic surgical procedure* OR robot surgery OR robotic surgery OR robot assisted surgery OR robotic assisted surgery OR robot enhanced surgery OR robotic enhanced surgery OR Aesop OR Zeus OR da Vinci). The WoS category was restricted to "obstetrics and gynecology". The first robotic surgery on a human was performed in 1998 (27). Therefore, the time span of our study was set from January 1, 1998 to December 31, 2023. The publication type was limited to original articles and reviews. Two researchers first worked together to screen publications that met the requirements by reading the abstracts on October 31, 2022, and updated the search on January 18, 2024. Disagreements were settled by a third investigator. The detailed processes of inclusion and exclusion are displayed in Figure 1. This study did not require ethics committee approval, as it was a retrospective bibliometric analysis of previously published articles.
Data export
Data from all selected papers, including authors, organizations, countries/regions, keywords, times cited, titles, publication years, source journals and corresponding impact factors, were exported and saved in Microsoft Excel 2019 and EndNote Desktop, respectively. The impact factor was defined in accordance with the Journal Citation Report (2022). These data were subsequently analyzed both qualitatively and quantitatively.
Visualization maps of data
VOSviewer is software that can be used for bibliometric visualization and analysis of literature (28). Visualization maps of authors, countries/regions and organizations based on co-authorship data, as well as keyword visualization maps based on co-occurrences, can be constructed using this software (29, 30). Different nodes in a visualization map indicate different specific terms, such as keywords, authors, organizations, and countries/regions. The parameters and settings for using VOSviewer were as follows: Method = Association strength, Attraction = 2, Repulsion = 0, Resolution = 1, and Minimum cluster size = 1.
In network visualization maps of countries/regions, authors, and organizations based on co-authorship data, the size of a node indicates co-authorship frequency. A line between two countries/regions/authors/organizations indicates their collaboration. The line thickness between two nodes corresponds to the link strength (LS), which varies depending on the number of papers co-authored. Stronger collaboration is indicated by thicker lines. Countries/regions/authors/organizations with high levels of collaboration are depicted by nodes of the same color. The sum of all LS for a given term is the total link strength (TLS), which shows the collaboration strength between the term and other terms.
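As a concrete illustration of how LS and TLS are computed, below is a minimal sketch over a toy list of papers, each represented by its set of authors; the pairwise link strength counts co-authored papers, and the TLS of a node sums its link strengths. The toy data are, of course, hypothetical.

```python
from collections import Counter
from itertools import combinations

# Toy co-authorship data: each paper is the set of its authors.
papers = [
    {"A", "B", "C"},
    {"A", "B"},
    {"B", "D"},
]

# Link strength (LS): number of papers each unordered author pair co-wrote.
link_strength = Counter()
for authors in papers:
    for pair in combinations(sorted(authors), 2):
        link_strength[pair] += 1

# Total link strength (TLS): sum of an author's link strengths to all others.
tls = Counter()
for (a, b), ls in link_strength.items():
    tls[a] += ls
    tls[b] += ls

print(dict(link_strength))  # {('A','B'): 2, ('A','C'): 1, ('B','C'): 1, ('B','D'): 1}
print(dict(tls))            # {'A': 3, 'B': 4, 'C': 2, 'D': 1}
```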
For visualization maps of keywords based on co-occurrence data, three different types of maps, the network visualization map, the density visualization map, and the overlay visualization map, have their own meanings. In the network visualization map, the size of a node represents the corresponding frequency of occurrence. A larger node means that the keyword appears more times, whereas a smaller node indicates that it appears fewer times. The keywords with the same color form a cluster, and each cluster represents a research hotspot (31). Through the network visualization map of keywords, we can identify the research hotspots represented by each cluster (31). In the density visualization map, the color of a keyword depends on its occurrence frequency. The red keywords appear most frequently, followed by the yellow, green and cyan keywords. With the density visualization map of keywords, we can determine the research focus in this field. In the overlay visualization map, different colors represent different years. The average publication year (APY), based on the average occurrence time of a keyword, was used to evaluate the novelty of the keyword. Combining the above information, we can understand the global research status and predict future research trends in this field.
Results
A total of 1,430 papers were retrieved from the WoSCC using our specific search terms and restrictive conditions (Supplementary Data S1). Figure 2 shows the number of publications published per year from 1998 to 2023. In terms of publication type, the majority of the papers were original research articles (n = 1,249; 87.3%), while the rest were review articles (n = 181; 12.7%). Among the original research articles, there were 371 retrospective studies and 142 prospective studies. In addition, only 38 of the original research articles were randomized controlled trials (RCTs). As for languages, there were 1,393 papers in English (97.4%).
Countries/regions
The papers originated from 69 countries/regions, and the top 10 countries/regions are listed in Table 1. The United States had the highest total link strength and contributed the most papers (n = 793).
Authors
The top 10 authors and organizations are also listed in Table 1. Professor Pedro T Ramirez contributed the most papers (n = 36), followed by Professor Giovanni Scambia (n = 35) and Professor Javier F Magrina (n = 27). A total of 185 papers published by the top 10 authors accounted for 12.9% of all studies in this field. Eight of the top 10 authors were from the United States, with one author from Italy and one author from Sweden. Based on the co-authorship data, a network visualization map of the authors' co-authorship was constructed, as shown in Figure 4. For the purpose of author co-authorship analysis, the minimum number of papers for an author was set at five. A total of 189 authors met this threshold and were selected for inclusion in the co-authorship analysis. Professor Pedro T Ramirez (TLS = 113) was also the author with the highest total link strength, followed by Professor Giovanni Scambia (TLS = 89) and Professor Pamela T Soliman (TLS = 79). The closest collaboration was between Professor Sarfraz Ahmad and Professor Robert W Holloway (LS = 20).
Organizations
For organizations, the Mayo Clinic produced the most papers (n = 85), followed by the University of Texas System (n = 65) and the University of North Carolina (n = 61) (Table 1). Among the top 10 organizations in this field, eight were organizations in the United States, and two were Italian organizations. Similarly, we constructed a network visualization map of organizations based on the co-authorship data, as shown in Figure 5. For the purpose of organizations' co-authorship analysis, the minimum number of papers for an organization was set at eight. A total of 65 organizations met this threshold and were selected for inclusion in the co-authorship analysis. The organization with the highest total link strength was the University of North Carolina (TLS = 49), followed by Duke University (TLS = 43) and the Mayo Clinic (TLS = 37). The closest collaboration was between Skane University Hospital (Sweden) and Lund University (Sweden) (LS = 10).
Citations and journals
The number of citations for each of the 1,430 papers related to robotic surgery in obstetrics and gynecology ranged from 0 to 295, with a sum of 29,103. The average number of citations per paper was 20.35. The 1,430 papers were published in 74 journals. Among these journals, the Journal of Minimally Invasive Gynecology published the most papers (n = 252), followed by Gynecologic Oncology (n = 151) and the International Urogynecology Journal (n = 100). The top 10 journals ranked by the number of papers and their impact factors are listed in Table 3. Among these 10 journals, Obstetrics and Gynecology had the highest average number of citations per paper (n = 59.48), followed by Gynecologic Oncology (n = 44.86) and the American Journal of Obstetrics and Gynecology (n = 37.83). Regarding the impact factors of the top 10 journals, the above three journals still had the highest impact factors, which were the American Journal of Obstetrics and Gynecology, Obstetrics and Gynecology and Gynecologic Oncology in descending order.
Co-occurrence analysis of keywords
VOSviewer identified a total of 2,050 keywords from the 1,430 papers based on co-occurrence data. After restricting the minimum number of keyword occurrences to 12, a total of 66 items were included. We manually unified and standardized the keywords and finally identified 37 keywords. We then constructed a network visualization map using these keywords, which were classified into six clusters (Figure 6). The research hotspots were identified according to the keywords contained in each cluster, as shown in Table 4. Cluster 1 was the largest cluster in this study, and prominent keywords in this cluster were endometriosis, fertility, fibroids, infertility, myomectomy, pregnancy, recurrence and trachelectomy. For cluster 2, the main keywords were endometrial cancer, complications, outcomes, quality of life and survival. The primary keywords in cluster 3 were cost, learning curve, simulation and training. In cluster 4, the dominant keywords were cervical cancer, indocyanine green, ovarian cancer, sentinel lymph node and uterine cancer. Cluster 5 consisted of keywords such as gynecologic oncology, hysterectomy and same-day discharge. The keywords mesh, pelvic organ prolapse and sacrocolpopexy were frequently used in cluster 6.
Along with the network visualization map of co-occurrence terms, an overlay visualization map was constructed in which keywords were colored according to the APY (Figure 7). The purple color indicates keywords appearing relatively early in the time course, while the red color reflects recent occurrences. This overlay visualization map showed that cluster 1, cluster 4 and cluster 6 had more main keywords with the largest APY, including sentinel lymph node, indocyanine green, endometriosis, recurrence, fibroids, infertility, sacrocolpopexy and pelvic organ prolapse. Furthermore, some main keywords in other clusters, such as same-day discharge, simulation and survival, also had a relatively large APY. This indicated that the topics related to these keywords had recently received increasing attention.

Figure 6 caption: Network visualization map of keyword co-occurrence analysis conducted by VOSviewer. The size of a node indicates the frequency of keyword occurrence, and keywords are classified into six clusters: application of robotic surgery in gynecologic benign diseases (cluster 1), surgical outcomes of robotic surgery for endometrial cancer (cluster 2), cost and learning curve of robotic surgery for gynecologic diseases (cluster 3), robotic surgery for sentinel lymph node detection in gynecologic malignancies (cluster 4), robotic surgery for gynecologic oncology (cluster 5), and robotic sacrocolpopexy for pelvic organ prolapse (cluster 6).
A density visualization map of the keywords according to their occurrence frequency was also constructed, as shown in Figure 8. The main keywords included endometrial cancer (occurrences: 185), hysterectomy (occurrences: 159), cervical cancer (occurrences: 116), sacrocolpopexy (occurrences: 96) and pelvic organ prolapse (occurrences: 94), which appeared the most frequently.
Principal results
This is a VOSviewer-based bibliometric analysis to identify papers on robotic surgery in obstetrics and gynecology as well as to analyze their bibliometric characteristics. In this study, we used the WoSCC to find 1,430 relevant papers from 1998 to 2023. The number of papers in this research field increased from year to year on the whole, peaking in 2021 with 141 published papers, except for a slight decrease in 2018, 2022 and 2023. According to the network visualization map of countries/regions based on co-authorship data, we can conclude that the distribution of related research on robotic surgery in obstetrics and gynecology is imbalanced, although research in this field has attracted the attention of many countries around the world. The economic environment plays an important role in the level of research and development (42). Correspondingly, the majority of countries in this research field are European countries with high economic levels. Our results showed that approximately half of the papers were published in the United States, reflecting the dominance of the United States in this field. This situation has also been observed in bibliometric analyses in other fields, such as endometrial carcinoma (43) and robotic surgery research in urology (44). This may be due to the high level of funding for academic activities and the long history of research on robotic surgery in the United States. In addition, the United States has the closest cooperation with other countries in this research field, whereas many other countries have research partnerships with only a handful of countries. Therefore, cooperation between countries should be further strengthened.
With respect to the authors, Professor Ramirez PT, from the University of Texas MD Anderson Cancer Center, ranked first in the number of papers published, with 36 papers related to robotic surgery in obstetrics and gynecology. Almost all of the top 10 authors with the most contributions stemmed from the top 10 organizations. Of all the organizations, the Mayo Clinic had the highest number of papers of any organization worldwide, with 85 relevant papers identified, accounting for 5.9% of all papers in this field. The majority of the top 10 organizations with the most contributions were from the United States. In addition, cooperation among authors and organizations was noted. Professor Ramirez PT was the author with the broadest connections to other scientists, and the University of North Carolina was the institution with the most partnerships with other institutions in this field. The network visualization maps of the co-authorship analysis reveal that collaboration among authors was limited to small groups, and collaborative links between research organizations were also lacking. This phenomenon suggests that scientists and institutions from around the world should push their boundaries to bring about deep collaboration. Only then can we promote rapid development in this field for the benefit of patients.
Our results showed that the average number of citations for the top 10 papers on robotic surgery in obstetrics and gynecology was approximately ten times the average for all papers. The number of citations can be viewed as a direct measure of the recognition a paper has received in its field of study (45). For papers, the number of citations can be related to a number of factors, such as the year of publication and accessibility. In terms of the year of publication, even the most cited papers were not cited when they were originally published, and older papers may have more citations due to cumulative effects (46). We found that the majority of the top 10 most cited papers were published around 2010. Therefore, the time factor should be taken into account when evaluating the impact of a paper through citation analysis (47). In terms of accessibility, open access (OA) means that anyone can have free and unrestricted online access to scientific journal literature (48). It has been shown that OA journals have higher citation metrics than non-OA journals (49).
By collecting journal information, we identified 74 journals that published papers in the field of robotic surgery in obstetrics and gynecology. The Journal of Minimally Invasive Gynecology topped the list with 252 papers, which accounted for approximately one-fifth of the total number of papers published in all journals. Among the top 10 journals ranked by the number of papers, the American Journal of Obstetrics and Gynecology had the highest impact factor, and Obstetrics and Gynecology had the highest average number of citations per paper. In general, Gynecologic Oncology, the American Journal of Obstetrics and Gynecology, and Obstetrics and Gynecology were the journals with the highest overall comprehensive levels in this research field, both in terms of impact factor and average number of citations per paper. It is worth noting that the second- and fourth-ranked journals were related to gynecologic cancer research. This result reflects the fact that scholars have focused on the application of robotic surgery in gynecologic malignancies.
Keywords in the network visualization map were divided into six clusters. Cluster 1 was related to the applications of robotic surgery in gynecologic benign diseases, mainly including endometriosis, uterine fibroids and infertility. A study based on one of the largest published samples assessed the perioperative outcomes of robotic-assisted laparoscopic surgery for the treatment of deep infiltrating endometriosis (DIE) (50). The researchers in this study did not observe an increase in bleeding or in intra-operative or post-operative complications. They concluded that laparoscopic surgery for DIE may require multidisciplinary surgical teams to perform complex surgical procedures, and that DIE may be one of the most promising indications for robot-assisted laparoscopic surgery. Cluster 2 reflected the surgical outcomes of robotic surgery for endometrial cancer, with the keywords quality of life, outcomes, survival, and complications. Quality of life is a very important aspect of reporting outcomes. Among the 1,430 papers on robotic surgery research in obstetrics and gynecology in this study, we identified 64 papers on quality of life. Kurt G et al.'s paper, 'Comparison of health-related quality of life of women undergoing robotic surgery, laparoscopic surgery or laparotomy for gynecologic conditions: A cross-sectional study,' demonstrated that women in the robotic group had better quality of life than those in the laparoscopic or laparotomy groups after gynecologic surgery (51).
Cluster 3 was associated with the cost and learning curve of robotic surgery for gynecologic diseases. Professor Lenihan JP's paper titled "What is the learning curve for robotic assisted gynecologic surgery?" was published in the Journal of Minimally Invasive Gynecology in 2008 (39). This study showed that a surgeon with advanced laparoscopic skills needs 50 cases to stabilize operating times for the various procedures in women requiring benign gynecologic interventions. The authors predicted that the constant development of instruments suitable for gynecology and of computer-based surgical simulators by the da Vinci System development team, as well as the standardization of general surgical protocols by inter-institutional robotic surgeons, would significantly shorten the learning curve. Cluster 4 focused on robotic surgery for sentinel lymph node detection in gynecologic malignancies. The paper titled "Detection of sentinel lymph nodes in minimally invasive surgery using indocyanine green and near-infrared fluorescence imaging for uterine and cervical malignancies", published in Gynecologic Oncology in 2014 by Professor Jewell EL et al., was cited 212 times (35). The results of this study suggested that near-infrared fluorescence imaging with indocyanine green intracervical injection using a robotic platform had a high detection rate of bilateral sentinel lymph nodes and compared favorably with the use of blue dye alone or other modalities. The use of blue dye in combination with indocyanine green appears unnecessary.
Cluster 5 focused on robotic surgery for gynecologic oncology. A paper titled "Robotic radical hysterectomy in early stage cervical cancer: A systematic review and meta-analysis" was published in Gynecologic Oncology in 2015 (52). This study found that robotic radical hysterectomy may be superior to abdominal radical hysterectomy, with lower estimated blood loss, fewer wound-related complications, and shorter hospital stays. Robotic radical hysterectomy and laparoscopic radical hysterectomy appeared to be equivalent in terms of intraoperative and postoperative short-term outcomes, so the choice of procedure can be based on the preference of the surgeon and patient. Cluster 6 was mainly related to robotic sacrocolpopexy for pelvic organ prolapse. The most frequently cited paper on robotic surgery in obstetrics and gynecology was published in Obstetrics and Gynecology by Professor Paraiso MFR et al. in 2011 (32). The objective of that research was to compare the efficacy of laparoscopic vs. robotic sacrocolpopexy in the treatment of patients with post-hysterectomy vaginal prolapse. It concluded that, compared with the conventional laparoscopic approach, robotic sacrocolpopexy was associated with additional costs, increased post-operative pain, and longer procedures, without improvement in any of the clinical outcome measures at perioperative, 6-month, or 1-year follow-up. A similar conclusion was reached in a subsequently published randomized controlled trial with a high number of citations in the research field (40).
Through the analysis of the network visualization and density visualization maps generated from the keywords, we concluded that although robotic surgery has been applied to many diseases in the field of gynecology, the main research topic has been gynecologic malignant tumors. In the study of malignant tumors, endometrial cancer and cervical cancer have always been the focus. In addition, researchers typically compared the outcomes of robotic surgery for a given disease with other surgical approaches to assess the feasibility and safety of robotic surgery. While many studies have demonstrated the advantages of robotic surgery in the treatment of diseases in obstetrics and gynecology, the control groups in these studies have mostly been retrospective cohorts in which the surgery was performed over different time periods. In contrast, in an RCT, researchers compare one or more treatment groups with a control group and randomly assign patients to the treatment or control group. RCTs are considered the highest level of evidence for establishing causality in clinical studies, as randomization minimizes differences in group characteristics that could affect outcomes (53). Moreover, there are still many questions regarding the cost and training of robotic surgery in obstetrics and gynecology, and the data from existing studies are still very limited, with only 46 papers on cost-effectiveness analysis and 56 papers on training or learning analysis among the 1,430 papers in this field. Overall, rigorous scientific research and long-term data are necessary to determine the appropriate use of robotics in obstetrics and gynecology (54).
Combined with the analysis of the overlay visualization map, we found that the keywords sacrocolpopexy and pelvic organ prolapse have attracted attention in recent years and have a high frequency of occurrence. This means that the study of robotic sacrocolpopexy for pelvic organ prolapse has become a new research hotspot. Other main keywords that have appeared in recent years include sentinel lymph node, indocyanine green, fibroids, infertility, endometriosis, recurrence, same-day discharge, simulation, and survival, although their frequency of occurrence was not high. Some of the topics related to these keywords may become new research hotspots in the future. Robotic surgery for sentinel lymph node detection in gynecologic malignancies is a particularly promising topic, as the network visualization map contained a cluster of keywords related to this research direction.
Strengths, limitations and recommendations for future research
In contrast to a traditional literature review, this bibliometric analysis examined the papers on robotic surgery in obstetrics and gynecology with the help of VOSviewer to understand the global research status and predict future research trends in the field. This study can also help researchers identify influential authors, organizations, and journals in this field. Scholars interested in the field of robotic surgery in obstetrics and gynecology can conduct academic activities or seek collaboration with relevant scholars or institutions. At the same time, our results can guide researchers in this field to submit their manuscripts to appropriate journals. However, there were still several limitations to this study.
First, the bibliometric analysis was based on the WoSCC, so relevant papers from other databases were omitted. Second, although we broadened the search terms as much as possible, there may still have been omissions. Third, the citation analysis did not exclude the effects of self-citation, citations of lectures or conferences, or the potential preference of authors to cite specific journal papers (55,56). Some bibliometric indicators, such as the Eigenfactor score, can help to avoid the bias caused by journal self-citation by removing citations from one paper in a journal to another paper in the same journal (25). Fourth, we may have missed some valuable papers that had recently been published. Fifth, there may be classification errors by the WoSCC when indexing articles. Sixth, there may be other publications that were not categorized as "obstetrics and gynecology".
Future work should involve a more detailed methodology for a bibliometric review of the field to address some of the limitations of this study. Researchers should increase research on the application of robotic surgery for benign gynecologic diseases. Moreover, researchers need to conduct more RCTs and design more prospective studies. The cost and training of robotic surgery should also be of concern to researchers.
Conclusions
To our knowledge, this is the first VOSviewer-based bibliometric analysis of robotic surgery research in obstetrics and gynecology. The United States was the leading country, and the Journal of Minimally Invasive Gynecology was the most productive journal in the field. Scientists and institutions from around the world should push their boundaries to bring about deep collaboration. The main research topic has always been the use of robotic surgery in the treatment of gynecologic malignancies. More randomized controlled trials need to be conducted to compare the surgical outcomes of robotic surgery with other surgical approaches. Robotic sacrocolpopexy for pelvic organ prolapse has become a new research hotspot, and robotic surgery for sentinel lymph node detection in gynecologic malignancies is a promising direction for future research.
In summary, this study illustrates research hotspots and trends in robotic surgery in obstetrics and gynecology using the VOSviewer-based bibliometric analysis method. At the same time, this study identifies the most prolific researchers and institutions in this field, which helps scholars find suitable scientific research collaborators and lays the foundation for international cooperative research in this field.
FIGURE 1 Flow diagram on inclusion and exclusion of papers related to robotic surgery in obstetrics and gynecology.
FIGURE 2 Number of papers published per year from 1998 to 2023.
FIGURE 3 Network visualization map of countries/regions' co-authorship analysis. The size of a node indicates co-authorship frequency. A line between two nodes indicates collaboration between two countries/regions. The thickness of a line corresponds to the link strength, which varies with the number of papers co-authored; stronger collaboration is indicated by thicker lines. Countries/regions with high levels of collaboration are depicted by nodes of the same color.
FIGURE 5 Network visualization map of organizations' co-authorship analysis. The size of a node indicates co-authorship frequency. A line between two nodes indicates collaboration between two organizations. The thickness of a line corresponds to the link strength, which varies with the number of papers co-authored; stronger collaboration is indicated by thicker lines. Organizations with high levels of collaboration are depicted by nodes of the same color.
TABLE 1 The top 10 authors/organizations/countries ranked by the number of papers.
a The three sub-tables in this table are independent of one another.
Table 2 lists the top 10 most cited papers in this field.
TABLE 2 The top 10 most cited papers on robotic surgery in obstetrics and gynecology.
a OA, open access.
TABLE 3 The top 10 journals ranked by the number of papers.
a Impact factor (IF) was defined according to the Journal Citation Report (2022).
FIGURE 6
TABLE 4 The clusters formed by keyword co-occurrence analysis.
FIGURE 7 Overlay visualization map of keyword co-occurrence analysis conducted by VOSviewer. Keywords are assigned different colors based on the APY (average publication year).
which may be attributed to incomplete inclusion in the WoSCC in 2023. Especially in the last decade, robotic surgery in obstetrics and gynecology has entered a period of rapid development, with the number of publications increasing at a much faster rate than in the previous decade. The results indicate that research in this field is attracting increasing attention from researchers. | 2024-02-11T16:07:06.125Z | 2024-02-09T00:00:00.000 | {
"year": 2024,
"sha1": "71fa8fe62f099403c4c059f145240e2713dcf551",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fsurg.2024.1308489/pdf?isPublishedV2=False",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a37e5306a4e936a3a6a203eda44570993954342f",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257664378 | pes2o/s2orc | v3-fos-license | Axial superior facet slope may determine anterior or posterior atlantoaxial displacement secondary to os odontoideum and compensatory mechanisms of the atlantooccipital joint and subaxial cervical spine
Objective To introduce novel parameters for determining the direction of os odontoideum (OO) with atlantoaxial displacement (AAD) and the compensations of cervical sagittal alignment after displacement. Methods Analysis was performed on 96 cases receiving surgery for upper cervical myelopathy caused by OO with AAD from 2011 to 2021. Twenty-four patients were included in the OO group and divided into the OO-anterior displacement (AD) group and the OO-posterior displacement (PD) group by displacement. Seventy-two patients were included as the control (Ctrl) group and divided into the Ctrl-positive (Ctrl-P) group and the Ctrl-negative (Ctrl-N) group by the axial superior facet slope (ASFS) in a neutral position. The ASFS, the sum of the C2 slope (C2S) and the axial superior facet endplate angle (ASFEA), was measured and calculated by combining supine cervical CT with standing X-ray. Cervical sagittal parameters were measured to analyse the atlantoaxial facet and the compensations after AAD. Results The atlas inferior facet angle (AIFA), ASFS, and ASFEA in Ctrl-P significantly differed from those in OO-AD. C0-C1, C1-C2, C0-C2, C2-C7, C2-C7 SVA, and C2S in Ctrl-P significantly differed from those in the OO-AD group. C2-C7 SVA and C2S in Ctrl-N were significantly smaller than those in the OO-PD group. C1-C2 correlated negatively with C0-C1 and C2-C7 in the OO group. Slight kyphosis of C1-C2 in OO-AD contrasted with lordosis of C1-C2 in Ctrl-P, inducing increased extension of C0-C1 and C2-C7. Mildly increased lordosis of C1-C2 in OO-PD compared with C1-C2 in Ctrl-N triggered augmented flexion of C0-C1 and C2-C7. Conclusion The ASFS was vital in determining the direction of OO with AAD and in explaining the compensations. The ASFS and ASFEA could provide pre- and intraoperative guidelines. Key Points • ASFS may determine the directions and compensatory mechanisms of AAD secondary to OO. • ASFS can be obtained as the sum of ASFEA and C2S.
The atlantooccipital complex can serve as the moving object during AAD, based on the recognition of the axial superior facet (ASF) as the sliding surface. Few reports have focused on the ASF, so it is vital to explore its characteristics by comparing AD with PD.
Nontraumatic L5 spondylolisthesis features forward translation and rotational displacement of the L5 vertebra on the S1 upper endplate, while posterior sliding of L5 is seldom mentioned (except for the loss of L5/S1 lordosis and retrolisthesis in a slump-sitting posture [5]). The reason for this is the orientation of the upper sacral endplate, inclined from anteroinferior to posterosuperior in the upright position [6]. This made us consider whether the displacement is related to the anatomy and position of the ASF in the upright position. Therefore, the axial superior facet endplate angle (ASFEA), an anatomical parameter derived from supine cervical CT, and the axial superior facet slope (ASFS) were proposed to explore their relationship with the direction of AAD, allowing the ASF to be characterised from standing spinal X-ray examinations and limiting the need for standing cervical CT examinations.
Hitherto, little attention has been given to the regulation of cervical-global sagittal balance [7], especially the compensations of the atlantooccipital joint and subaxial cervical spine after AAD. However, whether AAD can trigger compensations as spondylolisthesis does remains unknown. Herein, we found that the ASFS might determine the displacement orientation of AAD secondary to OO and trigger compensations between the atlantooccipital joint and subaxial cervical spine, thereby providing guidelines for treating AAD.
Data collection
Research was performed on patients admitted from 2011 to 2021 undergoing surgery for upper cervical myelopathy caused by OO with AAD. Patients who underwent surgery for spinal compression caused by OO with AAD and who had clear radiographical data were included. Patients with AAD caused by trauma, rheumatoid arthritis (RA), inflammation, infection, tumour, history of surgery on the cervical spine and OO combined with atlas occipitalisation were excluded. Twenty-four patients diagnosed with OO with AAD were included in the OO group. Seventy-two patients with cervical degeneration without an occipitocervical disorder were included in the control group (Ctrl) at a 1:3 ratio with matching by age and sex to the OO group. Patients in the Ctrl group had cervical disc degenerative diseases with supine cervical CT data, standing cervical lateral radiographs and no occipitocervical abnormalities.
Measurements of cervical sagittal parameters
Cervical sagittal parameters are shown in Table 1 and Fig. 1. C7S can be used to replace T1S to avoid an unclear upper T1 endplate [8,9].
ASFEA measurements are shown in Fig. 2, and the specific steps are as follows: (1) Mid-sagittal CT reconstructions of the axis and ASF were captured. (2) Two images were aligned, overlapped and cropped according to positioning lines such that the final image clearly demonstrated the axial inferior endplate and ASF simultaneously. (3) ASFEA was measured as the angle between the line connecting the anterior and posterior points of the ASF and the extension line of the axial lower endplate. In addition, C2S was measured, and ASFS was obtained as the sum of ASFEA and C2S (Fig. 3).
When the angle vertex lay in front of or behind the cervical spine, the values were positive or negative, respectively. Positive values of C0-C1, C1-C2, C0-C2, C2-C7, C0-C7, and ASFEA indicated kyphosis, and negative values indicated lordosis. Positive values of C2S, C7S, and ASFS implied that the lower endplate of C2, C7, and ASF were inclined from anteroinferior to posterosuperior compared with the horizontal plane (AIPS), while negative values represented inclination from anterosuperior to posteroinferior (ASPI).
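The parameter arithmetic above is simple enough to express directly. The following minimal Python sketch computes the ASFS from the two measurements and applies the sign conventions just described; the function names and example values are ours, not from the paper.

```python
def asfs(asfea_deg: float, c2s_deg: float) -> float:
    """Axial superior facet slope: ASFS = ASFEA + C2S (degrees).

    ASFEA is measured on supine CT, C2S on a standing lateral
    radiograph, per the measurement protocol above.
    """
    return asfea_deg + c2s_deg


def displacement_tendency(asfs_deg: float) -> str:
    """Apply the sign convention: positive ASFS means the facet is
    inclined anteroinferior-to-posterosuperior (AIPS), so the
    atlantooccipital complex tends to slide forwards; negative
    ASFS (ASPI) implies a backward-sliding tendency."""
    if asfs_deg > 0:
        return "AIPS: tendency toward anterior displacement (AD)"
    if asfs_deg < 0:
        return "ASPI: tendency toward posterior displacement (PD)"
    return "neutral: facet approximately horizontal"


# Hypothetical measurements (degrees):
print(displacement_tendency(asfs(asfea_deg=-5.0, c2s_deg=12.0)))
```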
Consistency analysis
Two surgeons measured and calculated ASFS independently every 2 weeks. Mean measurements were used for further accuracy. Intraobserver and interobserver reliability were analysed.
Statistical analysis
Data were analysed using SPSS 28.0 (SPSS, Inc.). Parameters in the two groups were compared by t tests. Reliability was calculated by the intraclass correlation coefficient (ICC) [10]. Correlation analysis was performed using the Pearson method. A p value < 0.05 was considered to indicate statistical significance.
Table 1 Definitions of the cervical sagittal parameters (imaging modality in parentheses)
- C0-C2 Cobb angle (C0-C2): angle between McGregor's line and the axial lower endplate (X-ray)
- C2-C7 Cobb angle (C2-C7): angle between the axial lower endplate and the C7 lower endplate (X-ray)
- C0-C7 Cobb angle (C0-C7): angle between McGregor's line and the C7 lower endplate (X-ray)
- C2-C7 SVA: distance between the gravity line through the center of C2 and the posterosuperior margin of C7 (X-ray)
- C2 slope (C2S): angle between the axial lower endplate and the horizontal line (X-ray)
- C7 slope (C7S): angle between the C7 lower endplate and the horizontal line (X-ray)
- Sagittal inferior C1 facet angle (AIFA): following a previous study [20], the angle between the line connecting the anteroinferior edge of the lateral mass and the upper edge of the posterior arch of the atlas and the line connecting the two points at the edge of the C1 articular surface of the lateral mass joint (CT)
- ASFEA: see "Materials and methods" (CT)
- ASFS: angle between the line connecting the anterior and posterior points of the ASF and the horizontal line, equal to the sum of C2S + ASFEA (X-ray and CT)
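As a concrete illustration of these comparisons, the sketch below reproduces the two basic operations (independent-samples t test and Pearson correlation) in Python with SciPy rather than SPSS; all values are hypothetical, and the ICC used for reliability would need a dedicated routine (e.g., pingouin's intraclass_corr) not shown here.

```python
import numpy as np
from scipy import stats

# Hypothetical C2S values (degrees) for two groups:
ctrl_p = np.array([14.2, 11.8, 16.5, 13.0, 15.1])
oo_ad = np.array([7.9, 9.4, 6.2, 8.8, 10.1])

# Independent-samples t test, alpha = 0.05 as in the paper
t_stat, p_val = stats.ttest_ind(ctrl_p, oo_ad)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")

# Pearson correlation between two sagittal parameters within a
# group, e.g. C1-C2 versus C2-C7 (hypothetical values, degrees)
c1_c2 = np.array([-28.0, -25.5, -30.2, -22.4, -27.1])
c2_c7 = np.array([9.5, 7.2, 12.8, 4.1, 8.9])
r, p_r = stats.pearsonr(c1_c2, c2_c7)
print(f"Pearson r = {r:.2f}, p = {p_r:.4f}")
```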
Positive ASFS values indicated AIPS of the ASF, and the atlantooccipital complex tended to slide forwards; conversely, it tended to slide backwards. OO-AD was compared with Ctrl-P (positive, ASFS > 0), and OO-PD was compared with Ctrl-N (negative, ASFS < 0) to analyse the inferior C1 facet and ASF morphology.
Compensations of the atlantooccipital joint and lower cervical spine for OO-AD and OO-PD
Comparisons of cervical sagittal parameters are shown in Table 4, and the correlation analysis results are shown in Table 5.
Correlation analysis showed that C1-C2 correlated positively with C0-C2; C0-C1, C2-C7, C2-C7 SVA, C2S, and C7S values correlated positively with each other except for a weak negative correlation between C2-C7 and C7S; and C1-C2 and C0-C2 correlated negatively with C0-C1, C2-C7, C2-C7 SVA, C2S, and C7S values. The weakest correlations were in the OO-PD group, possibly due to sample size limitations, nonobvious compensations and slight PD. Strong correlations were observed in the OO-AD group, possibly due to sufficient sample size, thus inducing apparent compensations. The strongest correlations were observed in the OO group.
Discussion
AAD has been described in diseases such as traumatic odontoid fracture, OO, and RA [12], with AD occurring more frequently than PD. Morphologic remodelling of axial LAJs during AAD has attracted much attention. LAJs with non-union odontoid fractures gradually remodel into a fish-lip or dome shape during AAD, resulting in irreducibility [13,14]. Salunke found that the AIFA was linked with disease progression and irreducibility [15]. Ma also classified the morphology of lateral C1-C2 joint facets in congenital AAD patients based on the AIFA [16]. These results reflect the atlantoaxial facet morphology but fail to consider OO patients and the positional parameters of the atlantoaxial facets.
In AAD, the ASF is the sliding surface, while the atlantooccipital complex is the sliding object. Hence, the ASF better reflects the mechanisms of AAD. A previous study assessed the atlantoaxial superior facet morphology in AAD and showed the significance of sagittal joint inclination, similar to ASFEA, in determining AAD severity [17]. However, it was more anatomical than positional. Yuan proposed a novel cervical parameter, sagittal atlantoaxial joint inclination (SAAJI), illustrating the atlantoaxial articular surface on the parasagittal view for irreducible AAD, which resembled ASFS [18]. However, horizontal lines were difficult to determine on supine CT, which restricted its clinical application. Additionally, the sagittal slippage angle of the atlantooccipital inferior joint facet uses the eye-ear plane as the horizontal plane, which fails to reflect real conditions while standing [19]. In addition, the specificity of patients and the ASF were not mentioned.
These studies focused on morphologic changes in LAJs after long-term remodelling in AAD while neglecting initial factors propelling AD rather than PD. A previous study revealed that PD was usually found with odontoid destruction in RA [20]. Mechanisms of irreducible PD with OO showed that once the C1 facet posteriorly crosses the medial hump of the C2 facet, it tends to slide further and locks in this position, making the PD irreducible. Theoretically, the odds of AD should equal those of PD, which is in contrast to the clinical observation that AD cases outnumber PD cases. The displacement in patients with OO-related fracture or RA is accidental to some extent based on the trauma to or pathological destruction of the atlantoaxial joints. Thus, only in cases of OO and type II transverse odontoid fracture without obvious displacement would the atlantoaxial vertebrae experience a long process ranging from instability and subluxation to irreducibility, during which the displacement of the atlas compared with the axis and the remodelling of LAJs advance together and have mutual effects. When traced back to the initial process of displacement, however, the facet surface of LAJs showed no significant changes, but AD was obviously more prevalent than PD, prompting us to consider the relationship between ASFS and the direction of AAD.
To determine the cause-effect relationship between the ASFS and OO diagnosed with AAD, it would have been advisable to conduct a cohort study measuring the ASFS before AAD occurs. However, these patients are normally diagnosed with neurologic symptoms after AAD, or even if they are occasionally diagnosed at an early stage, surgery is immediately recommended to avoid neurologic sequelae [1], making it difficult to collect cases from prospective cohort observations. The alternative was to study the ASFS in people without AAD and compare it with the ASFS in patients with OO with AAD to indirectly understand the relationship between the ASFS and the direction of AAD in OO. Patients with cervical disc degenerative disease were included in the Ctrl group, which could better reflect the true conditions under physiological circumstances. Therefore, we studied patients with and without AAD to indirectly demonstrate the crosstalk between the ASFS and displacement directions.
Table 4 Comparisons of cervical sagittal parameters. Cervical sagittal parameters in the Ctrl and OO groups measured on X-ray examination according to the definitions are listed in the table. Parameters were compared between the Ctrl-P and OO-AD groups and between the Ctrl-N and OO-PD groups to further assess changes in cervical sagittal curves and analyse their correlation.
Herein, the ASFEA and ASFS were consequences of AAD after remodelling, which cannot represent the initial conditions of the atlantoaxial facet surfaces. The displacement of the atlantooccipital complex on the ASF and the remodelling of LAJs have mutual influences, which cannot be interpreted as only causes or results [3, 21-23]. Moreover, their relationship should be recognised as an interaction, as one condition could significantly affect the other while also accelerating the displacement process. Since it was impossible to track the ASFEA and ASFS before AAD, atlantoaxial parameters from non-AAD patients were used as a substitute approach to reflect them before AAD. Notably, the skeletal system remains stable in adults, leading to the presumption that the ASFEA and ASFS stay constant in adults, corresponding to pelvic incidence (PI) and neutral sacral slope (SS) in a previous report [24]. The OO patients were 51.13 ± 13.50 years old on average (ours is not a specialised children's hospital), suggesting that AAD might start after adulthood. Therefore, it was better to measure the ASFS and ASFEA in the non-AAD group to simulate conditions before AAD. In this study, the displacement in OO patients (AD (20 cases, 83.3%), PD (4 cases, 16.7%)) was consistent with that in previous reports [1,4]. In the control group (neutral), a positive ASFS was found in 57 (79.2%) patients and a negative ASFS in 15 (20.8%), whereas a positive ASFS was found in 68 (94.4%) patients on hyperflexion radiography. These findings indicate that the distribution of displacement in OO approximated that of the ASFS in the Ctrl group, underscoring the potential clinical significance of the ASFS for determining displacement direction in OO patients.
Specifically, when the ASFS was 0° in OO patients before AAD, the ASF remained relatively horizontal, maintaining the stability of the LAJs and surrounding structures. Less mechanical loading would be applied to the LAJs in the absence of remodelling. However, a positive ASFS would create a tendency to slide forwards under gravity, given that the facet surfaces were AIPS. In turn, LAJs with AD would undergo further remodelling of the facet surfaces, increasing the ASFS, and contracture of the periarticular stabilisers would induce abnormalities of the LAJs, leading to anterior instability, subluxation, and luxation of the LAJs [19]. Conversely, a negative ASFS tended to slide backwards under gravity owing to ASPI.
Most AAD patients were AD with a positive ASFS or PD with a negative ASFS, leading to a higher prevalence of AD than PD, which was aggravated by head-lowering movements during daily work [25]. Kauppi reported that during neck hyperextension, contact between the posterior arch of the atlas and the spinous process restrained PD, while such restriction did not work completely during flexion, partially explaining the lower incidence of PD [26]. We further illustrated that AAD can initiate even in a neutral position, possibly due to ASFS and ASFEA imbalances.
Furthermore, sacral osteotomy has been reported in the correction of pelvic parameters (PI) for maintaining spinal-pelvic sagittal balance [27], hinting at possibilities of adjusting the ASFS via ASF osteotomy or C2-C3 fusion, thereby preventing AAD and preserving atlantoaxial functions without spinal syndromes. Osteotomy on the inferior C1 facet and ASF could convert AAD irreducibility and thereby independently realise an intraoperative reduction of LAJs by a posterior approach [15]. This further shows the significance of the ASFEA and ASFS intraoperatively; otherwise, the opposite osteotomy may be performed, which would aggravate the irreducibility of AAD or deteriorate the deformity of LAJs. Generally, the ASFS and ASFEA could help surgeons adjust treatments appropriately in early stages to delay or reverse AAD progression, especially to facilitate the osteotomy direction based on the ASFEA [18,28]. More importantly, the ASFS and ASFEA predicted the direction of AAD, which was useful for instructing patients to adjust their neck posture and effectively suspend displacement.
Lordosis of the ASFEA became kyphosis, a change averaging 25°, and the AIFA decreased by approximately 5° due to remodelling after OO-AD. The ASFS in OO-AD increased, which further explained the crosstalk between the ASF and displacement direction. Atlantoaxial parameters in the comparison between the Ctrl-N and OO-PD groups were not significantly different, possibly due to the sample size and blockage by the odontoid process.
Notably, sagittal imbalance of the cervical spine has been shown to induce compensations in adjacent segments. Cervical parameters have been used to analyse sagittal compensations of the cervical spine, such as in C0-C2, C1-C2, C2S, and T1S [29,30], and crosstalk between C2S and cranial slope (CS) has been reported in the superior cervical spine [7]. Plain standing spinal films enabled the measurement of C2S, propelling us to further determine ASFS and ASFEA. We showed that ASFS = C2S + ASFEA, where ASFEA was an anatomical parameter that remained stable regardless of position, indicating a positive correlation between ASFS and the functional parameter C2S. The results showed that both ASFS and C2S correlated with C1-C2, C0-C2, and C2-C7, indicating that AIPS facets of a positive ASFS induced larger C1-C2 and smaller C0-C1 and C2-C7 values in AD patients, while ASPI facets of a negative ASFS induced smaller C1-C2 and larger C0-C1 and C2-C7 values in PD patients, thereby maintaining balance during standing. A larger ASFS in AD patients led to lordosis of C0-C2 and C2-C7, showing that craniocervical compensations were involved in maintaining a horizontal view. However, patients with an excessively large ASFS developed severe cervical lordosis in the late stages of AD, which triggered unsustainable compensations, leading to failure to maintain a horizontal gaze while standing.
To maintain an upright standing posture and horizontal gaze, the spine as a whole system is balanced by adjusting the spinal segments. Consequently, the musculoskeletal system coordinates spinal alignment to adjust physiologic curves and maintain a horizontal view [31]. Compensations in adjacent vertebrae usually arise when spinal misalignment appears in localised segments [32]. Herein, sagittal deformity of C1-C2 led to compensations of C0-C1, C2-C7, C2-C7 SVA, C2S, and C7S, indicating that sagittal parameters exerted apparent effects on adjacent segments but minor impacts on distal segments. A negative C1-C2 in Ctrl-P represented lordosis, while a positive C1-C2 in OO-AD indicated kyphosis, with an approximate difference of 33°. Changes in the C0-C1 and C2-C7 curves indicated compensations of approximately 10° and 30°, respectively, driven by C1-C2 kyphosis, which further resulted in decreases in C2-C7 SVA and C2S. No significance was found in C7S, showing that the thoracolumbar spine was not involved in compensating for the C1-C2 deformity. Significance was found in C2-C7 SVA and C2S when OO-PD was compared with Ctrl-N, indicating that OO-PD patients developed slight morphologic alterations of the atlantoaxial joints, inducing compensations in the atlantoaxial adjacencies. However, this finding carries less weight considering the small sample size of the OO-PD group.
Understanding C1-C2 deformity and compensations in AAD can help surgeons adjust fusion angles intraoperatively. Although the reported fusion angle was 20-22° for AAD with OO, another study argued that it was unnecessary to recover the C0-C2 fusion angle to the normal range intraoperatively [33,34]. Herein, a horizontal gaze was maintained by C2-C7 adjustments due to the absence of C0-C1 compensations in C0-C2 fusion. The fixation angle of C0-C2 should be emphasised to avoid lower cervical spine degeneration. In contrast, the C1-C2 fixation angle allowed a larger range to maintain a horizontal view after C1-C2 fusion by adjusting C0-C1 and C2-C7 to compensate for C1-C2 deformity. Limitations of this study include the sample size and inherent limitations of cohort studies. Therefore, relationships between the ASFS and the direction of AAD in OO patients remain speculative.
Conclusions
The ASFS and ASFEA were introduced for determining the direction of AAD. A positive ASFS indicated AIPS, suggesting a tendency toward AD due to forward sliding, while a negative ASFS indicated ASPI and a tendency toward PD. Compensations after displacement included OO-AD inducing kyphosis of C1-C2 with increased extension of C0-C1 and C2-C7, while OO-PD triggered the opposite compensations. Therefore, recognition of the ASFS and ASFEA provides referential guidelines pre- and intraoperatively. | 2023-03-23T06:17:31.450Z | 2023-03-22T00:00:00.000 | {
"year": 2023,
"sha1": "5958c4c6af553b31dde0f413d8e54ed38a1e5e11",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00330-023-09544-w.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "a7cea290bec734707be27b23ed4bf5edcf411de7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237210757 | pes2o/s2orc | v3-fos-license | Spectrophotometric determination of chlorophylls in different solvents related to the leaf traits of the main tree species in Northeast China
The accurate detection of leaf chlorophyll (Chl) is of substantial importance for the immediate assessment of forest conditions to manage and conserve forest ecosystems. We compared 80% acetone, 95% ethanol, and dimethyl sulfoxide (DMSO) over a range of incubation times (2, 4, 6, 8, 18, 26, and 32 h) to determine the Chl contents of 12 tree species in northeast China. The results showed that extracting the maximum Chl (a+b) contents for most tree species with 80% acetone and 95% ethanol required a minimum of 18 h, while DMSO required incubation periods of 2-6 h and 18-32 h to extract 90% of the Chl from the broadleaved and coniferous tree species, respectively. We observed that the amount of Chl extracted with DMSO was significantly higher than that extracted with 80% acetone and 95% ethanol, particularly for conifer species, with the exception of Phellodendron amurense, Fraxinus mandshurica, and Tilia amurensis, for which the maximum amount of Chl was extracted with acetone. The DMSO-extracted Chl exhibited the lowest degree of variation among the three solvents. The leaf mass area (LMA), leaf thickness, and diameter of the primary leaf vein were significantly negatively correlated with the Chl a, Chl b, and Chl (a+b) contents for the 12 tree species. There were no significantly different slopes or intercepts between the curves for LMA and Chl a, Chl b, or Chl (a+b) at the different incubation times for the same solvent or between the different solvents at a given incubation time (P > 0.05).
Introduction
Chlorophyll (Chl) is one of the most fundamental and important physiological parameters in forest ecology. The accurate measurement of the Chl content is of substantial significance for the management and protection of forest ecosystem function. The traditional process for determining the foliar Chl content (Chl a, Chl b, and Chl (a+b), the two most widely distributed forms of Chl occurring naturally in trees) involves extraction of leaf tissue with acetone, methanol, ethanol, or dimethyl sulfoxide (DMSO), followed by spectrophotometric measurements. Researchers have found that solvents can vary in their ability to extract Chl from different plants. It is therefore practical to determine the most effective solvent for a particular set of samples.
Site description and plant material
The experiment was conducted at the Demonstration Base of Urban Forestry of Northeast Forestry University, Harbin, Heilongjiang Province, northeast China (45°43′N, 126°37′E). The demonstration base covers 43.95 hm². The regional climate is a temperate monsoon climate, characterized by warm summers, cold winters, a short growing season, and abundant precipitation, with an annual average temperature of 3.5 ºC and annual precipitation of 569.1 mm, occurring primarily from June to September. The base was farmland before 1949, and its original vegetation was valley meadow steppe. We investigated the main tree species of northeast China: Betula platyphylla (Bp), Tilia amurensis (Ta), Quercus mongolica (Qm), Ulmus davidiana var. japonica (Ud), Acer mono (Am), Phellodendron amurense (Pa), Fraxinus mandshurica (Fm), Juglans mandshurica (Jm), Pinus koraiensis (Pks), Picea koraiensis (Pkn), Larix gmelinii (Lg), and Pinus sylvestris var. mongolica (Ps), a set of species whose leaf tissues differ considerably.
Chlorophyll extraction
For each species, six fully developed, healthy, outermost fresh green leaves from the top third of the south-facing crown of each of three sample trees were randomly chosen to measure the Chl contents at approximately 9 a.m. on sunny days. The leaves were placed in labeled plastic bags in coolers with ice and immediately transported to the laboratory for Chl extraction.
We determined the Chl using the same procedures and conditions for sampling and pigment extraction, and measured all samples with the same spectrophotometer, to reduce the potential for large errors in the results (Linder, 1974). Briefly, the leaf discs for broadleaves and the needles were cut into pieces approximately 2 mm in length, and the fresh mass (FM) of the leaf was determined with an analytical balance (Sartorius BT224S, Sartorius Scientific Instruments Co., Ltd., China). Six replicates of each species were placed in 10 ml of 80% acetone, 95% ethanol, or DMSO and incubated in a water bath maintained at 65 ºC for up to 32 h in the dark. The absorbance of the solution was measured at 664 nm and 647 nm for 80% acetone, 664 nm and 649 nm for 95% ethanol, and 665 nm and 649 nm for DMSO for Chl a and Chl b at 2, 4, 6, 8, 18, 26, and 32 h for the broadleaved species, and at 4, 8, 18, 26, and 32 h for the conifer trees, using a UV-visible spectrophotometer (WFJ-2100, INESA Analytical Instrument Co., Ltd., Shanghai, China). The Chl contents (mg·g⁻¹) were determined by applying the absorbance values to the published equations reported by Lichtenthaler (1987) [6] for acetone and ethanol, and by Wellburn (1994) [29] for DMSO. Chl (a+b) was calculated as the sum of Chl a and Chl b. All procedures were performed under diffused light to eliminate the exposure of the leaf materials to direct or bright sunlight.
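Each of the cited equation sets expresses Chl a and Chl b as linear combinations of the two absorbance readings. The sketch below shows that arithmetic and the conversion to mg per g fresh mass; note that the coefficients in it are placeholders for illustration only, and the actual values must be taken from Lichtenthaler (1987) or Wellburn (1994) for the matching solvent and wavelengths.

```python
# Chl a and Chl b are linear combinations of two absorbance readings:
#   Chl_a = k1*A_red - k2*A_blue ; Chl_b = k3*A_blue - k4*A_red
# (concentrations in ug/mL of extract). The coefficients depend on
# the solvent and must be taken from Lichtenthaler (1987) for 80%
# acetone / 95% ethanol or Wellburn (1994) for DMSO; the numbers
# below are placeholders, not the published values.

def chl_content(a_red, a_blue, k, volume_ml, fresh_mass_g):
    """Return (Chl a, Chl b, Chl a+b) in mg per g fresh mass.

    a_red, a_blue : absorbances at the solvent-specific wavelengths
                    (e.g., 664/647 nm for 80% acetone).
    k             : dict of coefficients {"k1", "k2", "k3", "k4"}.
    volume_ml     : extract volume (10 mL in this study).
    fresh_mass_g  : fresh mass of the leaf sample.
    """
    chl_a = k["k1"] * a_red - k["k2"] * a_blue          # ug/mL
    chl_b = k["k3"] * a_blue - k["k4"] * a_red          # ug/mL
    to_mg_per_g = volume_ml / (1000.0 * fresh_mass_g)   # ug/mL -> mg/g FM
    return (chl_a * to_mg_per_g, chl_b * to_mg_per_g,
            (chl_a + chl_b) * to_mg_per_g)


# Placeholder coefficients for illustration only:
k_demo = {"k1": 12.0, "k2": 3.0, "k3": 21.0, "k4": 5.0}
print(chl_content(a_red=0.65, a_blue=0.30, k=k_demo,
                  volume_ml=10.0, fresh_mass_g=0.10))
```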
Leaf traits measurement
An additional 30 leaf samples from the three trees of each species were collected to determine the leaf traits, including the leaf thickness (LT), primary leaf vein diameter (LVDa), leaf mass area (LMA) and leaf water content (LWC). To measure the leaf thickness, calipers were placed on a leaf at a representative point of the midrib, closed until they securely grasped the leaf, and then slowly opened until the leaf would slide out when gently pulled; this caliper distance was recorded as the leaf thickness. The fresh mass of each leaf, with the petiole removed, was determined with an analytical balance, and the leaves were then scanned (Model T210, Founder Technology Instrument Co. Ltd., Beijing, China) to obtain high-resolution images from which the leaf area and leaf vein diameter were measured using ImageJ software (NIH, Bethesda, MD, USA). Finally, the leaf samples were dried at 85 ºC for at least 26 h, and the dry mass was recorded. The LMA was calculated as the ratio of the leaf dry mass to the leaf area. The LWC was calculated as the ratio of the difference between the fresh mass and the dry mass to the fresh mass.
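The two derived traits are simple ratios, illustrated by the short sketch below; the unit conventions (g m⁻² for LMA, LWC as a fraction) and the input values are our assumptions, since the paper does not state them here.

```python
def leaf_traits(fresh_mass_g: float, dry_mass_g: float,
                leaf_area_cm2: float) -> tuple[float, float]:
    """Return (LMA, LWC).

    LMA = dry mass / leaf area, expressed here in g/m^2 (assumed unit).
    LWC = (fresh mass - dry mass) / fresh mass, as a fraction.
    """
    lma = dry_mass_g / (leaf_area_cm2 / 10_000.0)  # cm^2 -> m^2
    lwc = (fresh_mass_g - dry_mass_g) / fresh_mass_g
    return lma, lwc


# Hypothetical leaf: 0.42 g fresh, 0.15 g dry, 35 cm^2 area
lma, lwc = leaf_traits(0.42, 0.15, 35.0)
print(f"LMA = {lma:.1f} g/m^2, LWC = {lwc:.2f}")
```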
Statistical analysis
We analyzed the effects of species, solvent, and time on the Chl contents using repeated measures ANOVA. Species was treated as a fixed factor, extraction time as a fixed repeated factor, and the individual tree as a random factor. The mean Chl content among all the leaves within an individual tree and measurement period was used in the analysis (i.e., n = 3). The treatment means were compared using Fisher's Least Significant Difference test across extraction times and solvents. The ratio of the maximum to the minimum was used to describe the variation of the Chl content with extraction time. All analyses were performed using the mixed model procedure (PROC MIXED) of SAS Version 9.3 (SAS, Inc., Cary, NC, USA) with α = 0.05.
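For readers without SAS, a rough Python analogue of this model is sketched below with statsmodels: fixed effects for species, extraction time, and their interaction, plus a random intercept per tree. This approximates, rather than replicates, PROC MIXED's repeated-measures covariance options, and the data frame is fabricated for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Fabricated long-format data: species x time x tree, 2 replicates
rng = np.random.default_rng(0)
rows = [(sp, t, f"{sp}-{tree}", rng.normal(2.0, 0.2))
        for sp in ("Bp", "Lg") for t in (4, 8, 18)
        for tree in (1, 2, 3) for _ in range(2)]
df = pd.DataFrame(rows, columns=["species", "time_h", "tree_id", "chl_ab"])

# Fixed effects: species, time, interaction; random intercept: tree
model = smf.mixedlm("chl_ab ~ C(species) * C(time_h)",
                    data=df, groups=df["tree_id"])
print(model.fit().summary())
```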
We explored the relationships between the Chl a, Chl b, and Chl (a+b) contents and the leaf traits through correlation analysis, and fit the relationships between the Chl contents and LMA with regression analysis, selecting the model with the highest R² from the curve-fitting procedure of SPSS 18.0 (SPSS, Inc., Chicago, IL, USA). Ordinary least squares regression techniques were used to test the effects of incubation time and solvent on the LMA versus Chl relationships [27].
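Since the best-fitting model turned out to be a power function (see Results), the fitting step can be illustrated with SciPy in place of SPSS: a model Chl = a·LMA^b fitted by least squares on hypothetical data, with R² computed from the residuals.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_model(x, a, b):
    return a * np.power(x, b)

# Hypothetical data: Chl declines with LMA, as reported in the text
lma = np.array([45.0, 60.0, 80.0, 110.0, 150.0, 210.0])  # g/m^2
chl = np.array([3.1, 2.6, 2.2, 1.8, 1.5, 1.2])           # mg/g

(a, b), _ = curve_fit(power_model, lma, chl, p0=(10.0, -0.5))
resid = chl - power_model(lma, a, b)
r2 = 1.0 - np.sum(resid**2) / np.sum((chl - chl.mean())**2)
print(f"Chl = {a:.2f} * LMA^{b:.2f}, R^2 = {r2:.3f}")
```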
Results
The Chl a, Chl b, and Chl (a+b) extracted by 80% acetone increased as the extraction time was extended, reaching the highest values at 18, 26, and 32 h, with no significant differences among these three time points (P>0.05), except that Chl a, Chl b, and Chl (a+b) for Lg, Ud and Am peaked at 4 and 6 h, with the former following a concave curve and the latter a single-peak curve (Figure 1). The Chl a, Chl b, and Chl (a+b) for Fm increased sharply at 18, 26, and 32 h, and the ratio of the maximum to the minimum was as high as 4. The extreme value ratios of the three indices for the remaining species were 1.1-2.2, with the exception of the Chl b of Ta.
The Chl a, Chl b, and Chl (a+b) extracted by 95% ethanol also increased with extended extraction. The highest values occurred at 18, 26, and 32 h, and most differences during these periods were non-significant (P>0.05), except that Chl a, Chl b, and Chl (a+b) for Pa and Jm peaked at 2 and 4 h and then decreased slightly (Figure 1). The extreme value ratios of Chl a, Chl b, and Chl (a+b) for Ps, Pkn and Pks were the highest, followed by Fm, with the remaining species lowest; the values were 2.5-3.6, 1.6-2.5, and 1.0-1.5, respectively.
The Chl a, Chl b and Chl (a+b) extracted by DMSO for Ps, Pkn and Pks increased with the extraction time and reached their highest values at 26, 32 and 18 h, respectively, which were significantly higher than those of the other time points (P<0.05) (Figure 1). The extreme value ratios were 1.3-1.9. The Chl a and Chl b values for the other nine species decreased or increased with extended time, and the amplitude varied between species. For example, the Chl a contents for Jm and Fm were the highest at 2-8 h (P>0.05), with extreme value ratios of 1.2 and 1.6, whereas the Chl b readings were the highest at 18, 26, and 32 h, with extreme value ratios of 1.6 and 3.3. The extreme value ratios of Chl a and Chl b for the remaining seven tree species were in the range of 1.1-1.4. The extreme value ratios of Chl (a+b) for the eight tree species were in the range of 1.0-1.1, with the exception of Fm. The DMSO-extracted Chl (a+b) for the coniferous tree species was significantly higher than that extracted by 80% acetone and 95% ethanol (P<0.05) during the same period. The DMSO extraction of Chl (a+b) for Ps, Pkn, Pks and Lg was 1.4-1.7, 1.3-2.2, and 2.2-3.9 fold greater than that of 80% acetone, respectively. The Chl (a+b) extracted by 95% ethanol for Lg was 1.4-2.2 fold greater than that of 80% acetone (P<0.05), whereas the Chl (a+b) extracted by 95% ethanol for Ps, Pkn and Pks was lower than that of 80% acetone; specifically, the former was 0.4-0.7 times the latter from 2 to 8 h (P<0.05).
The Chl (a+b) extraction from the broad-leaved trees could be divided into three groups. First, DMSO extracted the highest Chl content from Bp, Jm, and Qm; the extraction amount of DMSO was 1.1-1.6 fold that of 95% ethanol and 80% acetone, and the ratio between 95% ethanol and 80% acetone was 0.9-1.1. Second, the extraction efficiencies of Chl (a+b) by DMSO and 95% ethanol for Ud and Am were similar and were 1.1-2.4 fold that of 80% acetone. Third, the extraction of Chl (a+b) by DMSO and 95% ethanol for Pa, Fm and Ta was similar at 2-8 h and was 1.3-2.4 fold that of 80% acetone; however, at 18-32 h, the extraction amounts ranked from high to low were 80% acetone, 95% ethanol and DMSO, with the 80% acetone extraction 1.1-1.4 fold that of DMSO. The Chl a, Chl b, and Chl (a+b) extracted by 80% acetone, 95% ethanol and DMSO over the range of incubation times for the 12 tree species were significantly negatively correlated with the LMA, LT, and LVDa and mostly non-significantly correlated with the LWC (Table 1). Since the LMA, LT, and LVDa were significantly positively correlated with each other, we explored the relationships between Chl a, Chl b, and Chl (a+b) and the LMA through regression analyses. Power equations described the relationship between the Chl content and LMA marginally better than the other models (Table 2). There were no significantly different slopes or intercepts between the different incubation times for the same solvent (P>0.05) or between the different solvents at a given incubation time (P>0.05), with the exception that the intercepts for Chl a extracted by 95% ethanol were significantly higher than those by 80% acetone and DMSO at incubation times of 4 h and 8 h (P<0.05).
Discussion
The Chl extraction efficiency of the solvents differed depending on the plant materials. The Chl (a+b) extracted by DMSO has been reported to be higher [1,9,20] or lower [9,25] than that extracted by 80% acetone and 95% ethanol. Comparisons between ethanol and acetone also indicated differences between species [9,19]. For example, Minocha et al. (2009) [9] found that the Chl extracted by DMSO was the highest for five conifer tree species, but the data for six broadleaved species differed: the Chl (a+b) extracted by 95% ethanol for Fagus grandifolia and by DMSO for Q. velutina was the highest. For Prunus serotina and Liriodendron tulipifera, the Chl (a+b) extracted using 95% ethanol and DMSO was similar and significantly higher than that extracted by 80% acetone. There was no significant difference between the extractions for B. alleghaniensis and Tsuga canadensis using 80% acetone, 95% ethanol and DMSO. Our results showed that DMSO was a better solvent for Chl extraction, with the exception that the highest extraction of Chl for Pa, Fm, and Ta was obtained with 80% acetone, and these results support the finding that DMSO extracts conifer species efficiently, as indicated by the results of Minocha et al. (2009) [9] and Barnes et al. (1992) [1].
The extraction time for Chl depends on the diffusivity of solvents within the particular intact plant tissue. The reported solvent extraction times varied from 15 min to 7 h [4,11,21,28] and up to 26 h [25] for DMSO. At 65 ºC, Chl (a+b) was extracted from the leaves of Trifolium subterraneum with DMSO, and over 99% of the Chl was extracted within 1 h [20]. However, the Chl (a+b) at 7, 26, and 48 h for P. virginiana, Helianthus annuus, Fragaria vesca, Andropogon gerardii, and Cymbopogon citratus was similar [25]. With the increase in extraction time over 4, 6, 8, 26, and 48 h, the Chl a and Chl b contents extracted by DMSO at 25, 40, 60, and 80 ºC from the leaves of C. unshiu Marc. cv. Okitsu increased, except that Chl a extracted at 80 ºC for 48 h decreased slightly [3]. The Chl content for A. sessilis was very stable over a protracted extraction with hot acetone [5]. Chl extraction has also been performed with 95% ethanol at 70 ºC for 30 min for birch, beech, ash, and sycamore. Our study showed that the Chl contents extracted by 80% acetone and 95% ethanol for most of the tree species increased with prolonged extraction and reached their highest values only after at least 18 h. The Chl content extracted by DMSO for the thicker conifer leaves of Pks, Pkn, and Ps was the highest from 18 to 32 h. More than 90% of the Chl was extracted within 2-6 h for the remaining nine tree species, and although the Chl a for Jm and the Chl b for Fm decreased and increased, respectively, with extended extraction, the Chl (a+b) remained stable.
The time for which sample tissues must be incubated in solvents is determined by the leaf thickness and degree of cutinization [1,10,21,22], because mechanical disruption of the cells does not take place. As Hiscox & Israelstam (1979) [4] and Barnes et al. (1992) [1] delineated, the Chl extraction from leaf tissues with DMSO requires incubation for various times, depending on the degree of cutinization and thickness of the leaf. Nikolopoulos et al. (2008) made the first attempt to determine the influence of leaf anatomy on the extraction efficiency of DMSO for 19 plant species [10]. They observed that the linear correlation between each specific anatomical parameter and the extraction efficiency of DMSO was poor (R² = 0.35 for SLA, R² = 0.44 for leaf density and R² = 0.28 for LT). Our study showed that the LMA, LT and LVDa were significantly negatively correlated with the extraction efficiency of the solvents, but in most cases not the LWC. This result supports the hypothesis that the Chl extraction time depends on the diffusivity of the solvents within the particular intact plant tissues, as determined by the leaf thickness and degree of cutinization.
The temperature used in Chl extraction with solvents differs among references. 80% acetone and 95%-98% ethanol were often used at 4 ºC or room temperature to extract the Chl for at least 26, 48, or 72 h, resulting in poor pigment stability or incomplete extraction, which could be solved by heating the solvents [14]. In the range of 8 to 30 ºC, the temperature had little effect on the Chl extraction by 80% acetone [24]. 80% acetone at 60 ºC and 65 ºC was used to extract Chl from the leaves of A. sessilis [5] and walnut [30], resulting in slightly lower values of Chl (a+b) than the highest ones obtained at 50 ºC. 95%-98% ethanol at 65 ºC [9], 70 ºC [8], and 80 ºC [16] has been used to extract Chl from leaves. DMSO extraction of Chl was performed primarily at 65 ºC [11,15,28] and also at 70 ºC [8,26]. The Chl extraction by DMSO at 40 ºC was not complete for the thick, highly cutinized leaves of C. citratus [25] and fern species [1], and 65 ºC was required for complete extraction. The Chl (a+b) extraction of the Citrus unshiu cv. Okitsu leaves by DMSO at 60 ºC was similar to the highest value obtained at 80 ºC [3]. Minocha et al. (2009) also supported heating the solvents to 65 ºC [9]. Therefore, the selection of 65 ºC as the optimum temperature for Chl extraction is feasible. Prolonged heating may result in a lower Chl value due to the destruction of Chl. It has been reported that Chl a is less thermally stable than Chl b. Scott & Robson (1991) found that the Chls were undisturbed by an additional incubation of 2 h, but that an extraction time of 3 h or longer destroyed the extracted Chl a, resulting in a decrease in Chl a and a slight increase in Chl b under the conditions of extraction in DMSO at 65 °C [20]. However, Hiscox & Israelstam (1979) suggested extraction times as long as 6 h for Chl from pine needles [4]. Barnes et al. (1992) also clearly demonstrated that the period of incubation in warm DMSO resulted in no significant degradation of Chl a or Chl b [1]. Jinasena et al. (2016) also showed that the Chl content for A. sessilis was very stable over prolonged extraction with hot acetone, with no Chl degradation over a long period of heating [5].
The Chl absorption wavelengths and the various calculation formulas used can also lead to different results for the same solvent. For example, readings were taken at 646 and 663 nm [26] or at 649 and 665 nm [25] for the Chl extracted by DMSO, and the Chl content was calculated based on the formula of Wellburn (1994) [29]. Some researchers believed that the DMSO absorption spectra of Chl a and Chl b were the same as those in 90% acetone [4,18,22] and suggested reading the absorbance at 645 and 663 nm and using the classical Arnon formula to calculate the Chl content [15,20,28]. It has been noted that there is a significant error in such calculations of the Chl extracted by DMSO [1,13], because the Arnon formula was derived for 80% acetone rather than 90% acetone. Furthermore, Barnes et al. (1992) found that the Chl content extracted by DMSO was underestimated by approximately 10% when using the Arnon formula [1]. Parry et al. (2014) also found that the Chl content extracted by DMSO from 22 types of plants was underestimated by 7.84% when calculated with the acetone formula (absorption wavelengths 646.6 and 663.6 nm) compared with the DMSO formula (absorption wavelengths 649.1 and 665.1 nm) [11]. Therefore, the measurement wavelengths and the corresponding formulas should be strictly matched, whether using acetone, ethanol, or DMSO [6].
Conclusions
Solvents play a major role in the process of extracting Chl. The spectrophotometric absorbance properties of the Chl molecules facilitate their qualitative and quantitative analysis using different solvents, and the contributions of these solvents to the extraction in various species were compared here. Furthermore, suitable solvents were identified in relation to the leaf traits. Our results clearly indicate that Chl extraction by DMSO, 80% acetone and 95% ethanol depends on the leaf morphological characteristics, such as the thickness, LMA and degree of cutinization. This study revealed that DMSO was the most effective solvent, extracting the greatest amount of Chl for most of the species sampled. | 2021-08-19T20:05:15.627Z | 2021-08-01T00:00:00.000 | {
"year": 2021,
"sha1": "0cd95df24e1a2048eb44b84d7bf21748f7a40ebb",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1755-1315/836/1/012008/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "0cd95df24e1a2048eb44b84d7bf21748f7a40ebb",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
244503310 | pes2o/s2orc | v3-fos-license | Differences in Fat-Free Mass According to Serum Vitamin D Level and Calcium Intake: Korea National Health and Nutrition Examination Survey 2008–2011
We analyzed the differences in fat-free mass (FFM) according to serum vitamin D level (VitD) and daily calcium intake (Ca) in 14,444 adults aged over 19 years. We used data from the 4th and 5th Korea National Health and Nutrition Examination Surveys (2008–2011). FFM was measured using dual-energy X-ray absorptiometry. VitD was classified as insufficient or sufficient (cutoff: 20 ng/mL). Ca was classified as unsatisfactory or satisfactory (recommended daily intake: 700 mg). In men, the FFM of group 2 (VitD ≥ 20 ng/mL; Ca < 700 mg), group 3 (VitD < 20 ng/mL; Ca ≥ 700 mg) and group 4 (VitD ≥ 20 ng/mL; Ca ≥ 700 mg) was 0.50 kg (95% confidence interval (CI), 0.084–0.92), 0.78 kg (95% CI, 0.26–1.29) and 1.58 kg (95% CI, 0.95–2.21) higher than that of group 1 (VitD < 20 ng/mL; Ca < 700 mg), respectively. In women, a 1 ng/mL increase in VitD was associated with a 0.023 kg increase in FFM (95% CI, 0.003–0.043) and a 1 g increase in Ca was associated with a 0.62 kg increase in FFM (95% CI, 0.067–1.16). High VitD and Ca were associated with a high FFM.
Introduction
The prevalence of vitamin D deficiency in Korea in 2014 was 75.2% in men and 82.5% in women [1] and the average daily calcium intake of individuals aged over 50 years was only 470 mg/day in 2008-2010 [2].
Vitamin D and calcium levels are known to be related to body muscle mass and bone mass. There is ample evidence on the association of serum vitamin D levels with overweight, obesity [3,4], body fat mass [5] and regulation of adipogenesis and fat metabolism [6]. Furthermore, low serum vitamin D levels have been found to increase the risk of muscle weakness and sarcopenia [7].
Likewise, many studies have found associations of calcium intake with body weight [8], body adiposity [9] and body composition [10,11], possibly because calcium plays a significant role in the regulation of lipogenesis, lipolysis and energy metabolism [12]. In a 10-year longitudinal study, low serum calcium levels were found to reflect significant muscle loss in adults aged over 50 years and low calcium intake was significantly associated with muscle loss in women [13].
Muscle mass is an important source of energy expenditure. A previous study found that skeletal muscle metabolism is a major determinant of resting energy expenditure [14]. Therefore, factors that increase muscle mass can even lead to a decrease in body fat mass. In a randomized controlled trial of the combined effect of vitamin D and calcium, it was found that calcium and vitamin D intake promoted visceral fat loss in individuals with a very low calcium intake [15]. Moreover, it has been reported that calcium and vitamin D supplementation improves muscle function [16].
The prevention of sarcopenia is crucial because it can impair physical capability, increase the risk of falls and lead to dependence [17]. By elucidating the relationship among vitamin D, calcium and muscle mass, it may be possible to prevent sarcopenia through the improvement of nutrition intake. However, few studies have simultaneously considered vitamin D and calcium and analyzed their relationship with body composition. Therefore, we grouped the Korean general population based on their serum vitamin D level and daily calcium intake and analyzed the differences in body composition, especially fat-free mass (FFM), among them.
Study Design and Participants
The Korea Disease Control and Prevention Agency has conducted the Korea National Health and Nutrition Examination Surveys (KNHANES) since 1998 to comprehensively examine the health, nutritional and socioeconomic status of Korean individuals. We screened 28,377 participants aged over 19 years whose data were collected in the fourth and fifth surveys (2008–2011). Among them, 13,933 were excluded because of decreased renal function (estimated glomerular filtration rate < 30), history of diagnosed cancer, inappropriate fasting duration before sample collection (>24 h or <8 h), inappropriate nutritional intake (<500 or >5000 kcal/day), excessive water intake per kilogram body weight (≥90 g/kg) and missing survey records or test results. Consequently, data from 14,444 participants (5856 men and 8588 women) were used in this study.
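A minimal sketch of these exclusion steps in Python/pandas follows; the column names (age, egfr, cancer_dx, and so on) are hypothetical, not the actual KNHANES variable names.

```python
import pandas as pd

# Sketch of the exclusion steps; the column names are hypothetical, not the
# actual KNHANES variable names.
def apply_exclusions(df: pd.DataFrame) -> pd.DataFrame:
    keep = (
        (df["age"] >= 19)                     # adults
        & (df["egfr"] >= 30)                  # renal function preserved
        & (~df["cancer_dx"])                  # no diagnosed cancer
        & df["fasting_h"].between(8, 24)      # fasting 8-24 h
        & df["kcal_day"].between(500, 5000)   # plausible energy intake
        & (df["water_g_per_kg"] < 90)         # water intake limit
    )
    return df.loc[keep].dropna(subset=["ffm", "vitd", "ca_mg"])
```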
All procedures were approved by the Ethics Committee of the Korea Disease Control and Prevention Agency (approval numbers 2011-02CON-06-C, 2010-02CON-21-C, 2009-01CON-03-2C and 2008-04EXP-01-C) and were carried out in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments. Signed informed consent was obtained from all KNHANES participants. The KNHANES data are publicly available.
Measurements
Blood samples and body composition data were collected on the same day. Body composition was measured using dual-energy X-ray absorptiometry (Hologic Discovery, Hologic, Marlborough, MA, USA). This included measurement of the whole-body total FFM (including bone mineral content (BMC)) and the whole-body total BMC. FFM was defined as the FFM value obtained from dual-energy X-ray absorptiometry minus the BMC.
We retrieved data on the participants' basic characteristics such as age, body mass index (BMI; kg/m²), FFM (kg), serum 25-hydroxy vitamin D level (ng/mL), daily calcium intake (mg), daily nutritional intake (kcal), water intake per kilogram body weight (g/kg), smoking status (non-smoker, past smoker, or current smoker), alcohol consumption status (more than one drink per month for the past 1 year), education level (elementary school or lower, middle school, high school, or college graduate or higher), average monthly household income (10,000 KRW), occupation and survey year.
The participants' physical activity (PA) level was measured in metabolic equivalents (METs) according to guidelines for the processing and analysis of International Physical Activity Questionnaire data [18]. Serum vitamin D levels were classified as insufficient or sufficient based on a cutoff of 20 ng/mL [19] and daily calcium intake was classified as unsatisfactory or satisfactory based on a recommended daily intake of 700 mg [20].
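As an illustration of the PA scoring mentioned above, IPAQ-style MET-minutes per week can be computed as follows; the MET values (3.3 walking, 4.0 moderate, 8.0 vigorous) follow the standard IPAQ scoring protocol, and the example inputs are hypothetical.

```python
# IPAQ-style scoring in MET-minutes/week; the MET values (3.3 walking,
# 4.0 moderate, 8.0 vigorous) follow the standard IPAQ scoring protocol.
def ipaq_met_min_per_week(walk_min, walk_days, mod_min, mod_days,
                          vig_min, vig_days):
    return (3.3 * walk_min * walk_days
            + 4.0 * mod_min * mod_days
            + 8.0 * vig_min * vig_days)

print(ipaq_met_min_per_week(30, 5, 20, 3, 0, 0))  # 735.0 MET-min/week
```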
We categorized the participants as follows: group 1 (serum vitamin D level < 20 ng/mL and daily calcium intake < 700 mg); group 2 (serum vitamin D level ≥ 20 ng/mL and daily calcium intake < 700 mg); group 3 (serum vitamin D level < 20 ng/mL and daily calcium intake ≥ 700 mg); and group 4 (serum vitamin D level ≥ 20 ng/mL and daily calcium intake ≥ 700 mg).
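This 2×2 categorization can be expressed directly in code; the function below is a sketch using the cutoffs stated above.

```python
def assign_group(vitd_ng_ml: float, ca_mg: float,
                 vitd_cut: float = 20.0, ca_cut: float = 700.0) -> int:
    """Groups 1-4 as defined above; 1 = both below the cutoffs, 4 = both at/above."""
    if vitd_ng_ml < vitd_cut:
        return 1 if ca_mg < ca_cut else 3
    return 2 if ca_mg < ca_cut else 4
```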
Statistical Analysis
Statistical analyses were performed using STATA version 14.0 (StataCorp., College Station, TX, USA) and the level of significance was set at p < 0.05. Sampling for the KNHANES was performed using two-stage stratified cluster sampling rather than simple random sampling, and we weighted the data during the analysis to reflect this. Linear regression analysis and the chi-square test were used to compare and analyze the basic characteristics of the participants by sex and group.
Linear regression analysis was used to analyze the differences in FFM among the four groups. We performed multiple linear regression analysis with adjustments for age, BMI (<25 vs. ≥25 kg/m²), daily nutritional intake, water intake per kilogram body weight, smoking status, alcohol consumption status, PA level, education level, average monthly household income, occupation and survey year.
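A sketch of such an adjusted model using statsmodels follows; a weighted least-squares fit with the sampling weights approximates the survey-weighted point estimates (full design-based variance estimation would also require the strata and cluster variables). All column names are hypothetical.

```python
import statsmodels.formula.api as smf

# df: analysis dataset (e.g., from the exclusion sketch above).
model = smf.wls(
    "ffm ~ C(group) + age + C(bmi_ge_25) + kcal_day + water_g_per_kg"
    " + C(smoking) + C(alcohol) + pa_met + C(education) + income"
    " + C(occupation) + C(survey_year)",
    data=df,
    weights=df["survey_weight"],
).fit()
# Group 4 effect relative to group 1 (the patsy reference level):
print(model.params["C(group)[T.4]"], model.conf_int().loc["C(group)[T.4]"])
```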
A sensitivity analysis was performed by changing the serum vitamin D level cutoff to 10 ng/mL or 30 ng/mL and changing the recommended daily calcium intake to 800 mg or 1000 mg. We also conducted an analysis with serum vitamin D levels and daily calcium intake values as continuous variables.

Basic Characteristics of the Participants by Sex

Table 1 shows the basic characteristics of the participants by sex. The average age of the 14,444 participants was 44.77 years and 59.46% of them were women. The average FFM of all the participants was 43.03 kg, with higher values in men than in women (51.14 ± 0.13 vs. 35.95 ± 0.078 kg, p < 0.001). The mean serum vitamin D level of all the participants was 18.00 ng/mL, with higher levels in men than in women (19.34 ± 0.20 vs. 16.83 ± 0.15 ng/mL, p < 0.001). The average daily calcium intake of all the participants was 0.51 g, with higher levels in men than in women (0.57 ± 0.006 vs. 0.45 ± 0.005 g, p < 0.001). Higher values in men than in women were also observed for BMI, total energy intake, water intake per kilogram body weight, PA level and average monthly household income. Furthermore, the proportions of current smokers, alcohol drinkers (≥1 time/month), highly educated participants (≥college) and participants with an occupation were also higher for men than for women.
Basic Characteristics of the Participants by Group
In men, the FFM, water intake per kilogram body weight, PA level and the proportions of participants with a BMI ≥ 25 kg/m² and an occupation were the highest in group 4. The total energy intake, average monthly household income and proportion of highly educated participants (≥college) were the highest in group 3. The proportion of current smokers was the highest in group 1 (Table 2). In women, the PA level was the highest in group 4. The total energy intake, water intake per kilogram body weight, average monthly household income and proportions of highly educated participants (≥college) and participants with an occupation were the highest in group 3. The proportion of participants with a BMI ≥ 25 kg/m² was the highest in group 2. The proportion of alcohol drinkers (≥1 time/month) was the highest in group 1 (Table 3).
Sensitivity Analysis
Linear regression analyses of changes in FFM by group were performed for the following cutoff combinations: serum vitamin D level cutoff of 10 ng/mL and recommended daily calcium intake of 700 mg (Table S1); serum vitamin D level cutoff of 30 ng/mL and recommended daily calcium intake of 700 mg (Table S2); serum vitamin D level cutoff of 10 ng/mL and recommended daily calcium intake of 800 mg (Table S3); serum vitamin D level cutoff of 20 ng/mL and recommended daily calcium intake of 800 mg (Table S4); serum vitamin D level cutoff of 30 ng/mL and recommended daily calcium intake of 800 mg (Table S5); serum vitamin D level cutoff of 10 ng/mL and recommended daily calcium intake of 1000 mg (Table S6); serum vitamin D level cutoff of 20 ng/mL and recommended daily calcium intake of 1000 mg (Table S7); serum vitamin D level cutoff of 30 ng/mL and recommended daily calcium intake of 1000 mg (Table S8).
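The sensitivity grid can be generated by looping over the cutoff combinations, reusing the assign_group and model sketches above (the covariate set is abbreviated here for brevity):

```python
import statsmodels.formula.api as smf  # model sketch as above

results = {}
for vitd_cut in (10, 20, 30):            # ng/mL
    for ca_cut in (700, 800, 1000):      # mg/day
        df["group"] = [assign_group(v, c, vitd_cut, ca_cut)
                       for v, c in zip(df["vitd"], df["ca_mg"])]
        fit = smf.wls("ffm ~ C(group) + age",  # plus the full covariate set
                      data=df, weights=df["survey_weight"]).fit()
        results[(vitd_cut, ca_cut)] = fit.params.filter(like="group")
```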
In men, the FFM of group 2, group 3 and group 4 was higher than that of group 1 (Tables S1 and S7). The FFM of group 2 and group 4 was higher than that of group 1 (Tables S3, S4 and S6). The FFM of group 3 and group 4 was higher than that of group 1 (Tables S2, S5 and S8).
In women, the FFM of group 2 and group 4 was higher than that of group 1 (Tables S1, S3 and S6). The FFM of group 3 and group 4 was higher than that of group 1 (Table S5). The FFM of group 4 was higher than that of group 1 (Tables S2 and S4). For certain cutoffs, there was no significant difference in the FFM among the groups (Tables S7 and S8). When the serum vitamin D level and daily calcium intake were analyzed as continuous variables (Table S9), in men, a 1 ng/mL increase in serum vitamin D level was associated with a 0.061 kg increase in FFM (95% CI 0.035-0.086) and a 1 g increase in daily calcium intake was associated with a 1.13 kg increase in FFM (95% CI 0.59-1.68). In women, a 1 ng/mL increase in serum vitamin D level was associated with a 0.023 kg increase in FFM (95% CI 0.003-0.043) and a 1 g increase in daily calcium intake was associated with a 0.62 kg increase in FFM (95% CI, 0.067-1.16).
Discussion
In this study, we found that high serum vitamin D level and daily calcium intake were associated with a high FFM.
A recent study reported that vitamin D insufficiency, along with high BMI, was related to paraspinal muscle atrophy in postmenopausal women [21]. However, there was no evidence that vitamin D supplementation had beneficial effects on muscle health, according to a recent meta-analysis [22]. A recent study reported that low calcium intake may be a predictor of muscle loss in women aged over 50 years [13].
We derived these results by analyzing the relationship between these variables using various cutoffs of vitamin D level and calcium intake and by treating them as continuous variables. In recent years, there has been increasing interest in vitamin D and calcium, and the same is true for sarcopenia. However, the serum vitamin D level and daily calcium intake required to sustain muscle health have not been clearly determined. The clinical practice guidelines formulated by the Endocrine Society Task Force on Vitamin D [23] defined vitamin D deficiency as a serum vitamin D level less than 50 nmol/L (20 ng/mL) and a Korean guideline also defines vitamin D deficiency in this manner [19]. In another study, a daily calcium intake of at least 668 mg/day was found to be sufficient to maintain bone mass [20]. Therefore, we used a serum vitamin D level cutoff of 20 ng/mL and a recommended daily calcium intake of 700 mg. However, a sensitivity analysis performed by varying these cutoffs yielded significant results only for some cutoff combinations. Moreover, in women, the results of the analyses of groups created using the abovementioned cutoffs were not significant in many cases, but they were significant when these variables were analyzed as continuous variables. Therefore, the serum vitamin D level and daily calcium intake required to sustain sufficient muscle health should be further explored.
The mechanisms by which vitamin D and calcium act on body fat and muscle are unclear.
In 1972, it was suggested that muscle and fat could be important reservoirs for vitamin D [24]. It was found that injected radioactive cholecalciferol was rapidly distributed from the serum and that adipose tissue and voluntary muscle were the principal sites of vitamin D storage in humans. In addition, vitamin D receptors are present in human skeletal muscle tissue [25]. Through those receptors, vitamin D may promote the expression of actin, troponin C and components of the sarcoplasmic reticulum [26]. It can also stimulate fatty acid oxidation and mitochondrial metabolism [27]. Vitamin D supplementation has been found to reduce weight gain and fat accumulation in mice, possibly due to the robust induction of genes involved in fatty acid oxidation and mitochondrial biogenesis and function. Body fat may inhibit vitamin D synthesis via inflammatory mechanisms mediated by leptin and interleukin-6 [28]. In a longitudinal study involving 859 participants, the associations between changes in serum vitamin D levels and body adiposity were studied over 2.6 years. Those who recovered from vitamin D deficiency had a lower body fat percentage and lower serum leptin levels than those who did not recover from vitamin D deficiency, which suggests that there may be an association between adiposity and vitamin D levels mediated by leptin. Body fat has been found to be inversely associated with serum vitamin D levels in healthy black and white women [29]. In addition, healthy, premenopausal, African American women with a low calcium and vitamin D intake were found to be likely to have excessive adiposity [30].
Calcium enhances adipose tissue apoptosis through molecular mechanisms such as uncoupling protein 2 expression [31]. A high dietary calcium intake without caloric restriction reduces adipocyte triglyceride accumulation and results in a net reduction of fat mass in both mice and humans. This indicates that calcium can reduce not only adipocyte size but also adipocyte number. People who increase their dietary calcium intake tend to excrete increased amounts of fat and energy in their feces [32]. It has been reported that the total amounts of fat and energy excreted in the feces over one week were higher with a high-calcium, normal-protein diet than with a low-calcium, normal-protein diet. Calcium intake has also been linked to appetite regulation in humans [33]. A previous review reported that the intake of various foods and nutrients can be regulated by calcium.
In this study, high total energy intake was found in those with satisfactory daily calcium intake and obesity was more prevalent in those with sufficient serum vitamin D levels. Although obesity itself may increase FFM, daily calcium intake and serum vitamin D levels were significantly associated with FFM even after controlling for obesity and total energy intake as confounding variables.
The limitations of this study are as follows: First, because this was a cross-sectional study, we could not evaluate the cause-effect relationship of FFM with serum vitamin D level and calcium intake. Therefore, randomized controlled trials should be conducted to confirm whether increasing the serum vitamin D level and daily calcium intake can actually increase the FFM. Second, although the KNHANES has been conducted since 1998, fat mass and FFM were measured only from 2008 to 2011. Therefore, these results may not reflect the latest data. Considering the increasing interest in sarcopenia, a national survey should be conducted to measure the body composition of the population. Third, a seasonal variation in vitamin D levels has been reported [34], suggesting that the time of sample collection can be important, but this was not considered in this analysis. These limitations should be considered when designing future studies.
Conclusions
In this study, high serum vitamin D levels and daily calcium intake were associated with a high FFM. Vitamin D [35] and calcium [36] are known to have various positive effects in patients with metabolic disorders. Therefore, improving nutrition intake can be expected to help prevent metabolic diseases in adults, both directly and indirectly through changes in muscle mass.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/jcm10225428/s1. Table S1: Linear regression analysis of changes in whole body total fat-free mass by group (serum vitamin D level cutoff, 10 ng/mL; recommended daily calcium intake, 700 mg), Table S2: Linear regression analysis of changes in whole body total fat-free mass by group (serum vitamin D level cutoff, 30 ng/mL; recommended daily calcium intake, 700 mg), Table S3: Linear regression analysis of changes in whole body total fat-free mass by group (serum vitamin D level cutoff, 10 ng/mL; recommended daily calcium intake, 800 mg), Table S4: Linear regression analysis of changes in whole body total fat-free mass by group (serum vitamin D level cutoff, 20 ng/mL; recommended daily calcium intake, 800 mg), Table S5: Linear regression analysis of changes in whole body total fat-free mass by group (serum vitamin D level cutoff, 30 ng/mL; recommended daily calcium intake, 800 mg), Table S6: Linear regression analysis of changes in whole body total fat-free mass by group (serum vitamin D level cutoff, 10 ng/mL; recommended daily calcium intake, 1000 mg), Table S7: Linear regression analysis of changes in whole body total fat-free mass by group (serum vitamin D level cutoff, 20 ng/mL; recommended daily calcium intake, 1000 mg), Table S8: Linear regression analysis of changes in whole body total fat-free mass by group (serum vitamin D level cutoff, 30 ng/mL; recommended daily calcium intake, 1000 mg), Table S9: Linear regression analysis of changes in whole body total fat-free mass according to serum vitamin D level and daily calcium intake. | 2021-11-24T16:10:46.316Z | 2021-11-01T00:00:00.000 | {
"year": 2021,
"sha1": "da90208132ba1b499ebb68d39ad14852b8635a9a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/10/22/5428/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "42897eaa396bd99a0b3b52fef3627fbfc33bd2b4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
51928737 | pes2o/s2orc | v3-fos-license | Effects of interpregnancy interval on pregnancy complications: protocol for systematic review and meta-analysis
Introduction Interpregnancy interval (IPI) is the length of time between a birth and conception of the next pregnancy. Evidence suggests that both short and long IPIs are associated with an increased risk of adverse pregnancy and perinatal outcomes. Relatively less attention has been directed towards investigating the effect of IPI on pregnancy complications, and the studies that have been conducted have shown mixed results. This systematic review will aim to provide an update to the most recent available evidence on the effect of IPI on pregnancy complications. Method and analysis We will search electronic databases such as Ovid/MEDLINE, EMBASE, CINAHL, Scopus, Web of Science and PubMed to identify peer-reviewed articles on the effects of IPI on pregnancy complications. We will include articles published from start of indexing until 12 February 2018 without any restriction to geographic setting. We will limit the search to literature published in English language and human subjects. Two independent reviewers will screen titles and abstracts and select full-text articles that meet the eligibility criteria. The Newcastle-Ottawa tool will be used to assess quality of observational studies. Where data permit, meta-analyses will be performed for individual pregnancy complications. Subgroup analyses by country categories (high-income vs low and middle-income countries) based on World Bank income group will be performed. Where meta-analysis is not possible, we will provide a description of data without further attempt to quantitatively pool results. Ethics and dissemination Formal ethical approval is not required as primary data will not be collected. The results will be published in peer-reviewed journals and presented at national and international conferences. PROSPERO registration number CRD42018088578.
Introduction
The length of time between birth and the beginning of the following pregnancy (interpregnancy interval (IPI)) has been linked to an increased risk of adverse outcomes in infants and their mothers. [1][2][3][4] To reduce this risk, the WHO and the American College of Obstetricians and Gynecologists suggest an interval of at least 2 years and a minimum of 18 months following a live birth, respectively. 2 3 IPI is viewed as a potential modifiable risk factor for adverse maternal and perinatal outcomes for planned pregnancies.
The importance of birth spacing has been a focus for perinatal researchers and policy-makers for nearly a century. 5 Studies have revealed that both short and long IPIs are potentially associated with increased risk of adverse perinatal outcomes, including stillbirth, small for gestational age, preterm delivery and neonatal death. 1 3 4 6 Conversely, the effect of IPI on complications during pregnancy has received less attention.
There is a growing body of literature that recognises the association between short IPIs and risk of premature rupture of membranes (PROM), 7 8 placental abruption, placenta praevia, 9 uterine rupture for women who previously delivered by caesarean section 10 11 and gestational diabetes. 12 Similarly, long IPIs have long been associated with increased risk of pre-eclampsia 13 14 and labour dystocia. 4 Although previous reviews 1 15 have suggested that IPI is associated with risk of pregnancy complications, these reviews did not identify a sufficient number of studies to evaluate the effect of IPI on pregnancy complications.
The two systematic reviews investigating the effect of IPI on maternal health/outcomes were published 10 years and 5 years ago, respectively, 1 15 and there has since been increasing attention paid to this area and a number of publications. 12 16-19 Meanwhile, the reviews have been either limited to a few maternal outcomes of interest (ie, maternal haemorrhage, PROM) 9 or have not included results from studies published in the last decade. 1 A further systematic review of the effect of IPI on pregnancy complications is warranted, with a view to meta-analysis of the outcomes. This systematic review will explore the effect of IPI on pregnancy complications. The information obtained from this review is important to inform women, their families and clinicians regarding IPI. The main purpose of the systematic review is to update, compile and critically review the evidence on the effects of IPI on pregnancy complications.

Strengths and limitations of this study
► The proposed systematic review and meta-analysis will adhere to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines.
► The review aims to provide an update to the most recent available evidence on the effect of interpregnancy interval on pregnancy complications.
► Two independent reviewers will screen titles and abstracts, assess study eligibility and perform the quality assessment.
► This review will only include published literature in the English language.
Methods and design

Population
The systematic review will include multiparous women with information on length of interval between two consecutive pregnancies. We will not exclude studies that implemented restrictions on age, ethnic group, parity and socioeconomic status.
Study design
This systematic review will include all observational prospective or retrospective studies that have assessed the effects of IPI on various pregnancy complications according to birth interval categories. Randomised controlled trials (RCTs) are unlikely to be identified given the nature of the exposure of interest but will be included if available.
Comparator(s)/control
When assessed as a categorical variable, the reference IPI category will be 18-23 months.
Outcomes
The outcomes of interest in this review are pregnancy complications, defined as gestational diabetes, gestational hypertension, pre-eclampsia, uterine rupture, placental abruption, placenta praevia, PROM and labour dystocia.
Data sources and search strategy
We will conduct electronic searches in Ovid/MEDLINE, EMBASE, CINAHL, Scopus, Web of Science and PubMed databases, using a combination of medical subject headings (MeSH) and keywords related to IPI and pregnancy complications. We will include articles published from the start of indexing until 12 February 2018 without any restriction on study type or geographic setting. A search strategy was developed (see table 1 for search criteria and online Supplementary file 1 for the detailed search strategy for each database).
The search strategy will be piloted across each database to improve the effectiveness of the final search. We will also check the reference list of primary studies that will be selected for full-text evaluation for additional potentially relevant studies not identified by the electronic search. We will include studies published in peer-reviewed journals conducted with human populations and restricted to English language. Corresponding authors will be contacted to request information not presented in the manuscripts that are required for the review.
Eligibility criteria

Inclusion criteria
The studies to be included in this review are required to fulfil two criteria.
Study design criterion: all observational studies evaluating the association between IPI and pregnancy complications.
Exposure criterion: studies that investigate IPI or birth interval as the primary exposure. IPI is defined as the length of time between the end of a pregnancy and the start of the next pregnancy. Birth interval is defined as the time elapsed between the end of one pregnancy and the end of the next pregnancy.
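As an illustration of how the exposure could be operationalized, the sketch below computes the IPI in completed months and assigns it to interval categories; the category bounds other than the stated 18-23 month reference (see Comparator(s)/control above) are illustrative assumptions.

```python
from datetime import date
from dateutil.relativedelta import relativedelta

def ipi_months(end_of_pregnancy: date, next_conception: date) -> int:
    """IPI in completed months between a pregnancy end and the next conception."""
    delta = relativedelta(next_conception, end_of_pregnancy)
    return delta.years * 12 + delta.months

def ipi_category(months: int) -> str:
    # Bounds other than the 18-23 month reference are illustrative.
    for lo, hi, label in [(0, 5, "<6"), (6, 11, "6-11"), (12, 17, "12-17"),
                          (18, 23, "18-23 (reference)"), (24, 59, "24-59")]:
        if lo <= months <= hi:
            return label
    return ">=60"

print(ipi_category(ipi_months(date(2015, 3, 1), date(2016, 11, 15))))  # 20 months
```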
Exclusion criteria
Studies will be excluded based on three criteria. (1) Non-primary studies: case series or reports, editorials, letters to the editor or reviews without original data. (2) Studies with insufficient information on adjusted effect (eg, unclear adjustment variable, missing CI estimates).
(3) Studies that do not investigate IPI as a primary exposure.

Study selection process and software
All unique studies identified from each electronic database will be imported into an EndNote library. For reproducibility and to expedite a future update of the review, this library will be published as online Supplementary data. Further screening of titles and abstracts will be accomplished by two independent investigators. Results will be stored using Covidence, a web-based software tool that allows users to (1) collate search results, (2) screen abstracts and full-text articles, (3) extract data from selected articles, (4) conduct risk of bias assessments and (5) resolve disagreements and export data. In accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), a flow diagram will be used to report the screening process. From the set of studies screened by title and abstract, two reviewers will independently screen full-text articles based on the eligibility criteria. Any discrepancies between the two reviewers for studies that have been included or excluded will be discussed first; if an agreement cannot be reached, a third investigator will be consulted for moderation. The reason for excluding each study will also be recorded.
Risk of bias (quality) assessment
The quality of included studies will be assessed by two independent reviewers using the Newcastle-Ottawa Scale for assessing the quality of cohort and cross-sectional studies. 20 Any disagreement that arises between the reviewers will be resolved through discussion with a third reviewer.
Data extraction
Data will be extracted from all included studies by two independent reviewers using a specifically developed data extraction form in line with the eligibility criteria and outcomes of interest. For each study, the following data will be extracted: (1) author names, (2) publication year, (3) study period, (4) geographic location, (5) World Bank income category (at the time of publication), (6) study design, (7) sample size, (8) exposure, (9) outcome measure of interest, (10) adjustment or matching variables, (11) effect size and (12) response rate (where indicated).
Data synthesis and analysis
The final review will include data presented in summary tables and a narrative synthesis to describe the variables listed in the data extraction section. Where data permit, meta-analyses will be performed for individual pregnancy complications. We will apply random effects meta-analysis using the generic inverse variance method to explore the association between IPIs and pregnancy complications. 21 22 We will calculate pooled odds ratios (ORs) from all studies that provided adjusted ORs or risk ratios with 95% CIs for each pregnancy complication (outcome of interest). Egger's weighted regression test will be used to assess publication bias. 23 The I² statistic will be reported as a measure of heterogeneity between studies. 24 Where meta-analysis is not possible, we will present data without quantitatively synthesising it. If the same data are presented in multiple studies, then those providing the most information will be considered.
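A minimal sketch of generic inverse-variance random-effects pooling using the DerSimonian-Laird estimator (one common implementation of the cited approach); the input ORs and CIs are invented for illustration.

```python
import numpy as np

def dersimonian_laird(log_or: np.ndarray, se: np.ndarray):
    """Random-effects pooled OR via the DerSimonian-Laird estimator."""
    v = se ** 2
    w_f = 1.0 / v                                     # fixed-effect weights
    theta_f = np.sum(w_f * log_or) / np.sum(w_f)
    q = np.sum(w_f * (log_or - theta_f) ** 2)         # Cochran's Q
    k = len(log_or)
    c = np.sum(w_f) - np.sum(w_f ** 2) / np.sum(w_f)
    tau2 = max(0.0, (q - (k - 1)) / c)                # between-study variance
    w = 1.0 / (v + tau2)                              # random-effects weights
    theta = np.sum(w * log_or) / np.sum(w)
    se_t = np.sqrt(1.0 / np.sum(w))
    i2 = 100.0 * max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
    return np.exp([theta, theta - 1.96 * se_t, theta + 1.96 * se_t]), i2

# Invented example: three adjusted ORs with 95% CIs, converted to log scale.
ors = np.array([1.4, 1.1, 1.8])
lo, hi = np.array([1.1, 0.9, 1.2]), np.array([1.8, 1.4, 2.7])
se = (np.log(hi) - np.log(lo)) / (2 * 1.96)
print(dersimonian_laird(np.log(ors), se))
```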
Subgroup analyses
Subgroup analyses by country categories based on World Bank income group (high-income countries vs low and middle-income countries) will be performed.
Confidence in cumulative evidence
The quality of the findings on each outcome of interest across studies will be assessed using Grading of Recommendations, Assessment, Development and Evaluations (GRADE) guidelines, which are developed by the GRADE Working Group. 25 The GRADE approach will allow us to determine the quality of the evidence of each outcome. The GRADE system classifies the quality of evidence as very low (very uncertain effect estimates), low (further research will likely change the effect estimate), moderate (further research may change the estimate and our confidence in it) or high (further research is very unlikely to change our confidence in the estimate of effect).
Patient and public involvement
Members of the community Healthy Pregnancies Consumer Reference Group will provide community and consumer perspectives to this study. This group will provide an insight into issues that affect their pregnancy planning decisions, contextualise results and provide participant experience.
Ethics and dissemination
Formal ethical approval is not required as primary data will not be collected. This protocol adheres to the PRISMA for Protocols (PRISMA-P) guidelines. 26 In addition, the findings of the systematic review will be reported according to the PRISMA statement. 27

Review registration
This review has been registered with the International Prospective Register of Systematic Reviews (PROSPERO) under the identification code CRD42018088578.
Updates to study protocol
If any updates to the study protocol are required, these will be listed and included as supplementary information along with the final manuscript and updated on the PROSPERO register.
Discussion
Families want to know the best time to conceive their next child in order to have a safer pregnancy and a healthy baby. Clinicians need evidence-based recommendations to provide advice on the optimal IPI leading to fewer maternal and perinatal complications. For planned pregnancies, IPI is modifiable, and such recommendations may therefore be useful for preventing adverse maternal/pregnancy outcomes. The current WHO recommendations, which suggest that women wait at least 2 years after delivering a live birth, 2 were based on a review of observational studies predominantly in low-income and middle-income populations, which may not be generalisable to high-income countries. Context-specific and updated evidence is warranted to clarify whether the evidence from studies investigating the effect of IPI on pregnancy complications is sufficient for decision-making.
This will be a comprehensive systematic review investigating the effect of IPI on pregnancy complications. Previous reviews have been limited to a few maternal outcomes of interest 15 or have not included results from studies published in the last 10 years. 1 A systematic review investigating the effect of IPI on pregnancy complications is now warranted. Systematic documentation and synthesis of the literature on the effect of IPI on various pregnancy complications will be important to set and revise evidence-based guidelines for IPIs. By updating the current state of knowledge in IPI research, this review will provide a basis for guiding future studies and future global policies for family planning. | 2018-08-14T19:12:27.132Z | 2018-08-01T00:00:00.000 | {
"year": 2018,
"sha1": "3061801d17251e698df8894413a77370171cbfed",
"oa_license": "CCBYNC",
"oa_url": "https://bmjopen.bmj.com/content/bmjopen/8/8/e025008.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3061801d17251e698df8894413a77370171cbfed",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
56296146 | pes2o/s2orc | v3-fos-license | 3D Residual Stresses in Selective Laser Melted Hastelloy X
3D residual stresses in as-manufactured EOS NickelAlloy HX, produced by laser powder bed additive manufacturing, are analysed on the surface closest to the build plate. Due to the severe thermal gradients produced during the melting and solidification process, profound amounts of thermal strain are generated, which can result in unwanted geometrical distortion and affect the mechanical properties of the manufactured component. Measurements were performed using a four-circle goniometer Seifert X-ray machine, equipped with a linear sensitive detector and a Cr-tube. Evaluation of the residual stresses was conducted using the sin²ψ method on the Ni {220} diffraction peak, together with a material removal technique to obtain in-depth profiles. An analysis of the material is reported. The analysis reveals unwanted residual stresses and a complicated non-uniform grain structure containing large grains with multiple low angle grain boundaries together with nano-sized grains. Grains are, to a large extent, not equiaxed but rather elongated.
Table 1. Chemical composition, EOS NickelAlloy HX [wt.%].
Ni: Bal.; Cr: 20.5−23.0; Fe: 17.0−20.0; Mo: 8.0−10.0; W: 0.2−1.0; Co: 0.5−2.5; C: ≤0.1; Si: ≤0.1; Mn: ≤0.1; S: ≤0.03; P: ≤0.04; B: ≤0.01; Se: ≤0.005; Cu: ≤0.5; Al: ≤0.5; Ti: ≤0.15

Figure 5. d₀ determination. Figure 6. d-spacing vs. sin²ψ at 45 µm and ɸ = 45°. Figure 7. Residual stress profile.
Introduction
Additive manufacturing, free-form fabrication, rapid prototyping and 3D-printing are some of the different designations for processes where components can be built to finished or near-finished shape without machining a block of material or casting material in a mould [1][2][3]. The processes were primarily developed for simpler materials, such as thermoset plastics and plaster. The laser equipment originally used could only melt materials with low melting points, for instance brass, and was not powerful enough to completely melt steel. Therefore, this manufacturing method could not meet the requirements for parts subjected to high stress levels or elevated temperatures, e.g., superalloys [4]. With time, the process control was improved and more powerful lasers were developed. With the higher energy input possible from a more powerful laser, it is possible to create a microstructure with a low amount of porosity and no internal defects such as solidification cracks or poor bonding [5].
Free-form fabrication of superalloys is gaining increased interest from industry, since the available range of alloys is growing. Today, alloys for selective laser melting (SLM) include aluminium, titanium, tool steel, stainless steel and heat resistant materials of cobalt- and nickel-base. In the case of melting of metal powders, the dominating manufacturing process is laser melting, often denoted selective laser melting, direct metal laser sintering (DMLS) or LaserCUSING. All of these names are trademarks of different companies manufacturing equipment for laser melting.
The laser melting manufacturing process can briefly be described as a layer-by-layer process, where powder is distributed on a powder bed, see Fig. 1. Firstly, a powder distributor travels over the powder bed cavity contained by the build chamber walls and build plate. Molten and solidified powder constitutes the component, surrounded by un-molten powder. Secondly, a laser beam melts the powder layer and creates a new slice of solid material in the component. Thirdly, a ram lowers the build platform and the process is repeated until a finished geometry is formed. After finalisation, the remaining loose powder is removed and the component is cut off from the build platform.
Although selective laser melting allows manufacturing of complex geometries, it comes with drawbacks compared to conventional manufacturing technologies. The temperature gradient and consequent plastic deformation lead to residual stresses and deformation due to the locally focused energy input [6]. Residual stresses can influence the geometrical accuracy and mechanical strength as well as contribute to crack initiation. Previous research has been conducted using methods such as the crack compliance method, which is not suited for near-surface stresses [7], and hole drilling, which requires large dimensional sizes and smooth surfaces to be effective [8]. In this study, X-ray diffraction was used to measure surface and bulk stresses using a material removal technique.
The purpose of this study is to examine residual stress levels in as-manufactured SLM material. No post-processing or heat treatments were done prior to testing, since gas atomised EOS NickelAlloy HX powder is used for manufacturing, e.g., gas turbine burners used in the as-manufactured state.
Experimental details
The material used in the current study is manufactured from EOS NickelAlloy HX powder. In the literature, Hastelloy X can be identified as Alloy X when not available from the original manufacturer. The powder material is gas atomized and sieved to a fraction (10−45 µm) suitable for the SLM process. After manufacturing, no post-processing, such as heat treatment or hot isostatic pressing, was conducted. The nominal composition in wt.% of EOS NickelAlloy HX is shown in Table 1. During the SLM manufacturing, the test specimen was attached to the build platform via area A in Fig. 2. After manufacturing, the test specimen was removed from the platform by wire electro discharge machining. The typical microstructure of the laser melted material after manufacturing is shown in Fig. 3-4, where the building direction is indicated by the arrows. Previous work by Brodin et al. [9] on Alloy X has shown that the material bulk properties meet or exceed the properties of both hot-rolled and cast Hastelloy X in heat treated condition. Triaxial X-ray measurements [10] were performed using a four-circle goniometer Seifert X-ray machine, equipped with a linear sensitive detector, Cr-tube and a 2 mm spot collimator. The {220} diffraction peak of the γ-phase was used for the stress measurement with the sin²ψ method. Values of the X-ray elastic constants used were calculated from the Young's modulus (190 GPa) and Poisson's ratio (0.31) based on the work by Saarimäki [11], resulting in s₁ = −1.63×10⁻⁶ MPa⁻¹ and ½s₂ = 6.89×10⁻⁶ MPa⁻¹. These differ from the experimental values for single crystal Hastelloy X [12]. Raw powder from the distributor was used and analysed for two ɸ angles (D1 = 0° and D2 = 90°) together with 13 ψ angles, evenly distributed between ±60°, to determine the unstressed lattice spacing, d₀. No corrections were made for the stress redistribution due to material removal. The orientation imaging map (OIM) in Fig. 4 was obtained using electron backscatter diffraction (EBSD) with a step size of 1.5 µm and 15 kV in a Hitachi SU70 FEG analytical scanning electron microscope (SEM).
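For a quasi-isotropic material, the quoted XEC values follow directly from E and ν via s₁ = −ν/E and ½s₂ = (1+ν)/E; the short computation below reproduces them.

```python
# Quasi-isotropic X-ray elastic constants from the bulk constants.
E_mpa, nu = 190e3, 0.31
s1 = -nu / E_mpa            # -1.63e-6 per MPa
half_s2 = (1 + nu) / E_mpa  #  6.89e-6 per MPa
print(f"s1 = {s1:.3e} MPa^-1, 1/2 s2 = {half_s2:.3e} MPa^-1")
```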
Results
The EBSD analysis revealed a complicated non-uniform grain structure containing large grains with several low angle grain boundaries together with nano-sized grains. Grains are, to a large extent, not equiaxed but rather elongated, as depicted in Fig. 3-4. Stress free lattice spacing, d₀, determination [13] of the gas atomized powder is shown in Fig. 5. The stress free lattice spacing d₀ was calculated by fitting two linear models and computing the intersection point, resulting in d₀ = 1.2728 Å.
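A sketch of the intersection construction: fit d versus sin²ψ separately for the two ɸ directions and take d at the crossing point. The lattice-spacing arrays below are synthetic stand-ins for the measured data.

```python
import numpy as np

# Synthetic stand-ins for the measured lattice spacings at 13 psi angles
# between +/-60 degrees, one series per phi direction.
psi = np.deg2rad(np.linspace(-60, 60, 13))
x = np.sin(psi) ** 2
d_phi0 = 1.2728 + 4e-4 * x    # D1: positive slope
d_phi90 = 1.2728 - 3e-4 * x   # D2: negative slope

m1, b1 = np.polyfit(x, d_phi0, 1)
m2, b2 = np.polyfit(x, d_phi90, 1)
x_star = (b2 - b1) / (m1 - m2)  # abscissa of the crossing point
d0 = m1 * x_star + b1
print(d0)  # ~1.2728 (Angstrom)
```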
The directional grains could cause a crystallographic texture as well as anisotropy leading to the non-linear behaviour seen in Fig. 6.
Discussion
Electron backscatter diffraction together with an OIM was used to analyse the microstructure in Fig. 3-4. The analysis revealed a non-uniform grain structure containing large elongated (not equiaxed) grains with many low angle grain boundaries together with nano-sized grains. The grains are oriented directionally, parallel to the building direction, giving this unconventional microstructure. The directional grains could cause a crystallographic texture as well as anisotropy, leading to the non-linear behaviour in Fig. 6.
The gas atomised powder used to determine the stress free lattice spacing d₀ was not completely stress free, as seen in Fig. 5. The D1 linear fit has a positive slope and the D2 linear fit has a negative slope. The presence of residual stresses is likely due to the rapid cooling from molten to solid state, since the powder is gas atomised. The assumption to use the gas atomised powder for d₀ determination is reasonable, since both the powder and the alloy are free from precipitates. Furthermore, the cellular dendritic SLM material and the dendrites in the gas atomised powder particles have similar primary dendrite arm spacings.
The residual stresses in the specimen are significant, since warping of the specimen is seen just by looking at it. The high surface residual stress values (σ₁₁ = 660 MPa, σ₂₂ = 740 MPa, σ₃₃ = 755 MPa) shown in Fig. 7 are above the yield stress but lower than the tensile strength (676 MPa in the relevant direction, i.e., horizontal), as reported by Brodin and Saarimäki [14]. Electric discharge machining can change the microstructure to a depth of 5−10 µm. This could generate the steep stress gradient observed at the first measured points. Within the measured volume, all stresses except σ₁₁ decrease from 145 µm to a depth of 440 µm, at which the stresses are fairly stable. Even though the residual stresses should approach zero and eventually become compressive in the bulk, the stresses extend to a greater depth than investigated here. The high σ₁₁ stress level is responsible for the specimen deformation caused by the layer-by-layer build process during solidification and cooling. A biaxial stress state assumes that there is a linear relationship between the d-spacing and sin²ψ [15]. However, a biaxial stress state cannot be assumed in this case, since the in-depth and shear components are ≠ 0, which results in the poor linear fit in Fig. 6. Thus, the triaxial stress calculations performed are believed to better reflect the actual stress state in the specimen.
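For contrast, the classic biaxial sin²ψ evaluation is sketched below: with no shear or out-of-plane components, d is linear in sin²ψ and the in-plane stress follows from the slope (the constant s₁ term only shifts the intercept and is omitted from the synthetic data). All numbers are illustrative.

```python
import numpy as np

d0, half_s2 = 1.2728, 6.89e-6     # Angstrom, MPa^-1
sigma_true = 400.0                # MPa, assumed in-plane stress
x = np.linspace(0.0, 0.75, 13)    # sin^2(psi)
# Ideal biaxial response: strain linear in sin^2(psi).
d = d0 * (1.0 + half_s2 * sigma_true * x)

slope = np.polyfit(x, d, 1)[0]
sigma = slope / (d0 * half_s2)    # recover the stress from the slope
print(sigma)                      # ~400 MPa
```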
Due to the lack of data regarding the X-ray elastic constants (XEC) s₁ and ½s₂, they were calculated using the Young's modulus (a bulk parameter) and ν. However, these calculated values could introduce non-negligible errors. Using the XEC for single crystal Hastelloy X [12] or Inconel 718 [16] would generate a shift in the measured stresses of approximately 30%. Hence, accurate XEC need to be determined.
Conclusions
Residual stress measurements and microstructural analysis were conducted on SLM material from EOS NickelAlloy HX powder. We show that the microstructure is fine-grained and the grains are elongated along the build direction. It is clear that the grains are allowed to grow over several layers during the building process. Electric discharge machining locally changes the microstructure, resulting in the steep decline in residual stress from the surface (0−20 µm).
All stresses except σ₁₁ decrease from 145 µm to a depth of 440 µm, where σ₁₁ is still increasing. Assuming a biaxial stress state is misleading, since the SLM process induces large out-of-plane residual stresses. Accurate XEC need to be determined due to the lack of data regarding the X-ray elastic constants (XEC) s₁ and ½s₂ for SLM EOS NickelAlloy HX.
Figure 1. Schematic description of the SLM process. (a) Powder is distributed on a powder bed, the build platform. (b) The powder is melted by a laser beam and a slice of solid metal is formed. (c) The powder bed is lowered and the process is repeated until a finished component is formed.
Figure 2. Sample geometry and stress components, where A denotes the build plate attachment area and the arrow the building direction.
Initial surface measurements in Fig. 7 reveal large tensile stresses (σ₁₁ = 660 MPa, σ₂₂ = 740 MPa, σ₃₃ = 755 MPa). A steep gradient is evident from the surface to a depth of ~20 µm, with the lowest residual stresses obtained in the principal directions (σ₁₁ = −70 MPa, σ₂₂ = 30 MPa, σ₃₃ = 100 MPa). From 20−45 µm, the stress in the σ₁₁ direction increases greatly. After this depth the changes in the stresses reduce and the values are fairly stable. At 445 µm depth, the residual stresses in the principal directions are as follows: σ₁₁ = 400 MPa, σ₂₂ = 90 MPa and σ₃₃ = 85 MPa. σ₁₂ shows compressive stresses at the surface, which approach zero stress level at 445 µm. Little or no shear stresses were calculated for σ₁₃ and σ₂₃. | 2018-12-18T03:44:47.011Z | 2017-01-01T00:00:00.000 | {
"year": 2016,
"sha1": "adb86bb2abaf399b235edfb0963f5a9bee8b9621",
"oa_license": "CCBY",
"oa_url": "http://www.mrforum.com/wp-content/uploads/open_access/9781945291173/13.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "adb86bb2abaf399b235edfb0963f5a9bee8b9621",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
104358433 | pes2o/s2orc | v3-fos-license | Combined-cycle gas turbine power plant integration with cascaded latent heat thermal storage for fast dynamic responses
Abstract Combined-cycle gas turbine (CCGT) power plants are often required to provide essential fast grid balancing services between load demand and power supply with the increase of intermittent power generation from renewable energy sources. It is extremely challenging to ensure that CCGT power plants operate flexibly while maintaining their efficiency at the same time. This paper presents a feasibility study of a CCGT power plant combined with cascaded latent heat storage (CLHS) for flexible plant operation. A 420 MW CCGT power plant and a CLHS dynamic model are developed in Aspen Plus based on a novel modelling approach. The plant start-up processes are studied, and a large amount of thermal energy can be accumulated by the CLHS during start-up. For load-following operation, an extensive dynamic simulation study is conducted, and the simulation results show that the extracted exhaust gas can be used for thermal energy storage charging, and the stored heat can be discharged to produce high temperature and high pressure steam fed to the steam turbine. Besides, the stored heat can also be used to maintain the heat recovery steam generator (HRSG) under warm conditions to reduce the plant restart time. The simulation results demonstrate that the integration of CLHS with a CCGT power plant is feasible during start-up, load-following and standstill operations.
Keywords: combined-cycle gas turbine; cascaded latent heat storage; flexible operation; dynamic modelling; Aspen Plus

Highlights:
· Dynamic modelling of combined-cycle gas turbine power plant with thermal storage.
· Cascaded latent heat storage integration strategies to plant operation processes.
· Complete system dynamic simulations of the plant with cascaded latent heat storage.
· Quantified analysis of stored and released thermal energy for different strategies.
Introduction
Combined-cycle power generation technology has been developed and served as an effective means for base load supply worldwide since the 1960s due to its inherent advantages in high efficiency and operational flexibility [1]. Although the technology in design and operation of combined-cycle gas turbine (CCGT) plants is now widely available, CCGT plants face new technical challenges nowadays in terms of efficient flexible operation to support the integration of intermittent renewable energy. Over the past 10 years, the capacity of intermittent renewable energy has increased dramatically, which has a significant impact on maintaining the balance between power generation and demand. This forces CCGT power plants into a role change: from base load supply to fast-response operating services. This has led to a series of potential issues, such as low plant operation energy efficiency, low load factors, and potentially shortened plant lifetime. To address those issues, this paper investigates a new potential solution: to integrate the plant with thermal storage to create an energy buffer for fast energy dispatch to support plant flexible operation.

In recent years, the study of flexible plant operation has started to receive important consideration, and several studies on the start-up process of CCGT power plants have been reported [2,3]. Those papers focused on optimizing the start-up process, but the dynamic performance of CCGT power plants operating flexibly under different load conditions has not been extensively studied. With the increase of renewable generation, the impact of passive operation of power plants during load changes has received more attention. The flexible operation of CCGT power plants could enhance the stability of the grid dynamics and maximise short-term profits, but it will lead to a significant reduction in the […]

This paper is organised as follows: Section 2 briefly describes the CCGT power plant and its operating conditions; Section 3 presents the mathematical models of the gas turbine, HRSG, steam turbine, and CLHS; Section 4 offers results and discussion of the proposed integration strategies; finally, in Section 5 conclusions in relation to this overall study are drawn, with clearly outlined suggestions for future exploitation.
A CCGT power plant generally consists of the gas turbine, HRSG and steam turbines, as shown in Figure 1. Air is compressed via a compressor and is mixed with natural gas (NG) in the combustion chamber for combustion; the hot combustion gas then expands in the gas turbine, which forms a Brayton cycle. The heat from the gas turbine exhaust is used to generate steam for the steam turbine, that is, the heat passes through the HRSG to heat the water flow, which forms a Rankine cycle. In this way, the CCGT power plant can achieve a much higher thermal efficiency than a single-cycle gas turbine power plant, because the waste heat from the gas turbine exhaust is recovered via the HRSG and then used by the steam turbines for electricity generation. The main plant parameters are given in Table 1.

The temperature of the compressor outlet stream is given as a function of the inlet conditions, where T_out is the outlet temperature and T_in is the inlet temperature. The air composition used in the modelling is given in Table 2. The natural gas composition used in the modelling is given in Table 3. It consists of methane, ethane, propane, nitrogen, carbon dioxide, and other gases, and the methane and ethane together make up more than 99% of the total volume [27]. Therefore, only two reactions, the combustion of methane and of ethane, are considered in the combustion process.

The turbine was modelled as an isentropic process, and its output power is calculated by Eq. (6) [22]. The isentropic efficiency of the turbine is defined as a function of n̄_t and m̄_t [28], where n̄_t is the ratio of rotating speed to its designed value, and m̄_t is the ratio of mass flow rate to its designed value. The temperature of the turbine outlet stream is given by Eq. (9).
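The compressor and turbine temperature relations referenced above are standard isentropic expressions. As a point of reference, a minimal Python sketch of the usual textbook forms is given below; the paper's Aspen Plus model may use different correlations, and all numerical values in the example are illustrative assumptions rather than plant data.

```python
# Minimal sketch of standard isentropic compressor/turbine relations.
# The paper's Aspen Plus model may differ; all numbers below are
# illustrative assumptions, not plant data from the study.

def compressor_outlet_T(T_in, pressure_ratio, eta_c, gamma=1.4):
    """Outlet temperature after compression with isentropic efficiency eta_c."""
    T_ideal = T_in * pressure_ratio ** ((gamma - 1.0) / gamma)
    return T_in + (T_ideal - T_in) / eta_c

def turbine_outlet_T(T_in, expansion_ratio, eta_t, gamma=1.33):
    """Outlet temperature after expansion with isentropic efficiency eta_t."""
    T_ideal = T_in * expansion_ratio ** (-(gamma - 1.0) / gamma)
    return T_in - eta_t * (T_in - T_ideal)

def turbine_power(m_dot, cp, T_in, T_out):
    """Gross expander power, W = m_dot * cp * (T_in - T_out)."""
    return m_dot * cp * (T_in - T_out)

if __name__ == "__main__":
    T2 = compressor_outlet_T(T_in=288.0, pressure_ratio=17.0, eta_c=0.88)
    T4 = turbine_outlet_T(T_in=1500.0, expansion_ratio=17.0, eta_t=0.90)
    P = turbine_power(m_dot=684.0, cp=1150.0, T_in=1500.0, T_out=T4)
    print(f"compressor outlet: {T2:.0f} K, turbine outlet: {T4:.0f} K, "
          f"gross expander power: {P/1e6:.0f} MW")
```

With these assumed values the predicted turbine outlet temperature lands near 820 K, of the same order as the 846 K exhaust temperature quoted later in the paper for rated operation.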
HRSG section modelling
The HRSG is modelled as a group of heat exchangers in this study. The exhaust gas from the gas turbine enters the HRSG, where the waste heat is recovered to produce steam at different pressure levels (HP, IP, and LP). The heat exchanger dynamic model was developed based on energy and mass balance equations: the energy conservation equation follows [29,30], the mass balance follows [3], and the heat flux is calculated by Eq. (12). In order to capture the dynamics of the heat exchanger, it is discretized into several zones. In the model simulation, the thermodynamic properties (e.g. heat capacity and density) of the exhaust gas and water/steam are updated at every time step based on the current temperature and pressure using Aspen Plus's thermodynamic database.

In the CLHS system, thermal energy is transferred to the storage media during charging and is released in a later discharging step. There are mainly three types of thermal energy storage: sensible heat storage, latent heat storage, and chemical heat storage [7]. Latent heat storage is used for this study because its energy density is much higher than sensible heat storage [32,33] and its cost is lower than chemical heat storage. Besides, the heat transfer irreversibility of a latent heat storage system can be significantly reduced using cascaded phase change materials [7]. The unit cell is shown in Figure 5 (b); the entire CLHS system consists of 5600 sets of such concentric tubes in parallel. The consideration behind such an arrangement is that heat is required to be quickly absorbed or released during the charging or discharging processes.

At rated state, the temperature of the gas turbine exhaust gas is 846 K; therefore the material PCM1, whose melting temperature is 773 K, is chosen. In this way, the outlet temperature of PCM1 will not exceed 773 K during the charging process, which guarantees that the maximum temperature of PCM2 will be less than 773 K. Moreover, the PCMs have to operate around their melting points to ensure safety and to avoid generating poisonous gases. For these reasons, the materials listed in Table 4 were selected. From the inner tube to the PCM, and for the heat diffusion within the PCM, heat transfer is by means of heat conduction. The heat loss through the outer tube of the CLHS system is assumed negligible. Figure 6 shows a portion of a three-dimensional heat conduction grid.

In a cylindrical-coordinate system, the three-dimensional heat conduction equation is written for the point P in Figure 6 [37]. Since the cylinder is axisymmetric, the temperature is assumed uniform in the θ direction, so the heat conduction equation reduces to its two-dimensional (r, z) form [38]. The discretization equation is obtained by integrating the differential equation over the control volume of volume ΔV and over the time interval from t to t + Δt [37]. There are three methods available for solving the discretised partial differential equation, depending on the value of the weighting factor f: f = 0 leads to the explicit scheme, f = 0.5 to the Crank-Nicolson scheme, and f = 1 to the fully implicit scheme. The explicit scheme is used to discretize the differential equation in this study, as follows:
$$a_P T_P = a_E T_E^0 + a_W T_W^0 + a_N T_N^0 + a_S T_S^0 + \left(a_P^0 - a_E - a_W - a_N - a_S\right) T_P^0$$

where the superscript 0 denotes values at the previous time step, and the coefficients follow from the control-volume integration.
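As an illustration of this control-volume scheme, the sketch below implements one explicit time step of the axisymmetric (r, z) conduction equation in Python, with the latent heat of the phase change folded into an effective heat capacity around the melting point. This is a generic sketch, not the paper's code: grid sizes and material values are placeholders (only the 773 K melting temperature of PCM1 is taken from the text), the effective-capacity treatment of the phase change is an assumption, and the boundary nodes (gas-side inner tube, adiabatic outer wall) would be handled separately.

```python
import numpy as np

def step_explicit(T, r, dr, dz, dt, k, rho, cp_eff):
    """One explicit step of rho*cp*dT/dt = (1/r) d/dr(k r dT/dr) + d/dz(k dT/dz).
    Interior nodes only; dt must satisfy the explicit stability limit."""
    Tn = T.copy()
    for i in range(1, T.shape[0] - 1):        # radial index
        for j in range(1, T.shape[1] - 1):    # axial index
            r_e, r_w = r[i] + dr / 2, r[i] - dr / 2
            d_r = (r_e * (T[i+1, j] - T[i, j])
                   - r_w * (T[i, j] - T[i-1, j])) / (r[i] * dr**2)
            d_z = (T[i, j+1] - 2 * T[i, j] + T[i, j-1]) / dz**2
            Tn[i, j] = T[i, j] + dt * k * (d_r + d_z) / (rho * cp_eff(T[i, j]))
    return Tn

def cp_eff(T, cp=1500.0, latent=170e3, T_melt=773.0, dT=5.0):
    """Effective heat capacity: latent heat smeared over a small melting range
    (assumed treatment; cp, latent and dT are placeholder values)."""
    return cp + (latent / (2 * dT) if abs(T - T_melt) < dT else 0.0)
```

Marching this update in time from the initial temperature field, with the gas-side boundary imposed at each step, reproduces the kind of radial and axial temperature evolution shown in the figures discussed below.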
This section presents the integration strategies of the CCGT power plant with CLHS during start-up, load-following operation, and standby, respectively. In particular, the start-up procedure is studied, and the idea of energy storage during plant start-up is proposed. The paper examines how the integration of CLHS impacts the performance of the plant with regard to the output power and the CLHS charging and discharging processes. The plant output power can be regulated through variation of the CLHS charging and discharging processes.

Only a small part of the exhaust gas passes through the HRSG at start-up, and most of the exhaust gas is discharged directly into the atmosphere, resulting in energy loss. As described in [42], approximately 75% of the exhaust gas (513 kg/s in this study) from the gas turbine is discharged into the atmosphere for 25 minutes during the plant start-up. However, this waste energy can potentially be captured by the CLHS, as shown in Figure 7: the 75% of the exhaust gas may first pass through the CLHS before discharging into the atmosphere, while the other 25% flows into the HRSG during the start-up process. A filter is needed to remove the corrosive gases from the exhaust gas, as shown in Figure 7, and the gas pressure at the CLHS outlet is assumed to be the same as the atmosphere. In this way, waste heat in the exhaust gas can be captured by the PCM layers in the CLHS.

For PCM layers filled at the same height in the CLHS system, it can be assumed that they have the same temperature distribution due to their parallel structure [33]. The study of the entire CLHS system can then be simplified to the study of one set of concentric tubes (Figure 5 (a)). In order to establish a reasonable initial temperature distribution of the PCM layers such that a phase change process occurs in the simulation, a temperature below the phase change point of each PCM is used to start up the CLHS, as listed in Table 5; when the local temperature reaches the phase transition point, the temperature distribution of each PCM at that time is taken as its initial temperature distribution, as shown in Figure 8. The figure presents the temperature distribution of the shaded area in Figure 5 (a). For each PCM layer, the phase change temperature is reached first in the lower left corner, as expected. The axial temperature distribution coincides with the exhaust gas in the inner tube, while the radial temperature distribution follows the heat conduction from the inside to the outside of the PCM.

After 1500 seconds of simulated charging, waste heat in the exhaust gas is further diffused and stored in the PCMs. The lowest local temperature of each PCM layer reaches the phase transition point, and the temperature in the region where the local temperature is higher than the phase change point continues to increase after undergoing the phase change process. The updated temperature distributions of the different PCM layers are shown in Figure 9. The plotted temperature is for the right side of the concentric tubes (see Figure 5 (a)) and the gas flows from bottom to top; therefore, the heat diffuses from the left side to the right side, and from the bottom to the top as well.
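The numbers quoted above allow a rough estimate of how much start-up heat is on offer. A back-of-envelope sketch follows, where the mass flow and duration come from the text, but the gas heat capacity and the average temperature drop across the CLHS are assumed values:

```python
# Back-of-envelope estimate of the start-up heat available to the CLHS.
m_dot = 513.0        # kg/s of bypassed exhaust gas (from the text)
t = 25.0 * 60.0      # s, duration of the bypass (from the text)
cp = 1.1e3           # J/(kg K), assumed exhaust-gas heat capacity
dT = 200.0           # K, assumed average cooling across the CLHS

Q = m_dot * cp * dT * t
print(f"recoverable start-up heat: {Q/1e9:.0f} GJ")   # ~169 GJ for these values
```

Even with a modest assumed temperature drop, the heat otherwise vented during a single start-up is of order a hundred gigajoules, which motivates the bypass arrangement of Figure 7.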
CLHS integration strategy during load-following operation
In addition to avoiding the energy loss of the exhaust gas during the start-up process, the real-time output power of the CCGT power plant can be regulated within a certain range by the CLHS charging and discharging processes. The response speed of a CCGT power plant is mainly limited by the water-steam cycle; therefore, this section focuses on the utilization strategies of thermal storage in the water-steam cycle. During off-peak time, part of the high-temperature exhaust gas is extracted from the gas turbine as a heat source for CLHS charging (same layout as shown in Figure 7). As a result, the power generated by the steam turbines is reduced, but the gas turbine section still operates at the rated load condition. The minimum steam turbine power is 66 MW when 363 kg/s of exhaust gas bypasses to the CLHS for thermal storage. On the contrary, during peak time, part of the feed water from the deaerator flows into the CLHS, undergoing the reverse process of charging: it evaporates into high-temperature steam and then leaves the CLHS as superheated steam, as shown in Figure 10. The maximum steam turbine output power increases to 143 MW. In order to produce dry steam for the steam turbine, a separator is needed to remove water droplets from the steam. Finally, the stored thermal energy is released from the CLHS to the feed water, thereby increasing the power output of the steam turbines.

The simulated process is as follows. At the beginning, the power plant operates at the nominal load condition, and the total output power is 420 MW, of which 285 MW is from the gas turbine and 135 MW from the steam turbines. Figure 11 shows the designed load demand dynamics. At the 300th second, the load demand was reduced from 420 MW to 408 MW. After 1800 seconds, the load demand returned to 420 MW. At the 2800th second, the load demand increased again, from 420 MW to 428 MW, and lasted 1200 seconds. During this period, the gas turbine operated under rated conditions with an output power of 285 MW, so the real-time power output of the power plant is determined by the steam turbines. It should be pointed out that the initial temperature distribution of the CLHS layers used for the load-following operation simulation is the same as the initial temperature distribution obtained for the start-up case (Figure 8).

To meet the load demand reduction from 420 MW to 408 MW, the steam turbine output power was correspondingly reduced from 135 MW to 123 MW, and 60 kg/s of exhaust gas was extracted from the gas turbine outlet and sent to the CLHS. This is a charging condition, so the extracted gas flows from the bottom of the CLHS to its top, i.e. in the direction of decreasing PCM melting point. Figure 12 shows the temperature distribution of the different PCM layers at the end of charging in the load-following operation (time = 2160 s). Compared to the temperature distribution of the PCM layers in the start-up operation (Figure 9), the radial temperature difference of each PCM layer is significantly reduced. This is because the charging time in the load-following operation is longer than that in the start-up operation, so the thermal diffusion in the PCM is more complete.
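The quoted charging figures can be sanity-checked with a simple proportionality argument: if steam-turbine output scales roughly linearly with the gas flow reaching the HRSG, extracting 60 kg/s should cost about 12 MW of steam power. A sketch of this check follows, where the total exhaust flow is inferred from the "513 kg/s is 75%" figure given earlier, and linear scaling is an assumption:

```python
# Rough proportionality check of the load-following numbers in the text,
# assuming steam-turbine output scales linearly with HRSG gas flow.
m_exhaust_total = 513.0 / 0.75        # kg/s, inferred total exhaust flow (~684)
p_steam_rated = 135.0                 # MW, steam turbines at rated load

m_extracted = 60.0                    # kg/s sent to the CLHS while charging
dp_charging = p_steam_rated * m_extracted / m_exhaust_total
print(f"expected steam-power drop: {dp_charging:.1f} MW (paper: 135 -> 123 MW)")

m_min_case = 363.0                    # kg/s, maximum bypass case in the text
p_min = p_steam_rated * (1 - m_min_case / m_exhaust_total)
print(f"predicted minimum steam power: {p_min:.0f} MW (paper: 66 MW)")

# Discharging: 10 kg/s of CLHS steam raises output from 135 to 143 MW,
# i.e. a specific work of about 0.8 MJ/kg, a plausible value for IP steam.
print(f"implied specific work: {(143.0 - 135.0) * 1e6 / 10.0 / 1e3:.0f} kJ/kg")
```

Both predicted values land close to the simulated ones (11.8 MW vs. 12 MW, 63 MW vs. 66 MW), which supports the linear-scaling reading of the results.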
To meet the load demand increase from 420 MW to 428 MW, the steam turbine output power was correspondingly increased from 135 MW to 143 MW, with 10 kg/s of superheated steam produced by the CLHS sent to the IPTB. This is a discharging condition, so the extracted feed water flows from the top of the CLHS to its bottom, i.e. in the direction of increasing PCM melting point. Figure 14 shows the temperature distribution of the different PCM layers at the end of discharging in the load-following operation (time = 4000 s). Compared to the temperature distribution at the end of charging (Figure 12), the radial temperature slowly decreases from the right end to the left end at the same height of each PCM layer. This confirms that an amount of heat has been transferred from the PCM layers to the feed water. It can be seen from the simulation results that, since the latent heat energy density is much higher than the sensible heat, the amount of heat stored or released is large even though the temperature change is small. A CLHS system with different melting temperatures keeps the temperature difference between the working fluid and the PCM large enough to ensure that all PCMs undergo phase change, making heat transfer more efficient for both charging and discharging.

4.2.4 Load-following dynamics
Figure 15 shows the real-time output power of the steam turbines during load-following operation. The steam turbines respond correctly to the load dynamics: whenever the load changes, they respond within 6 minutes. The response time meets the Secondary Frequency Response requirements for generating units specified in the GB Grid Code [43]. Figure 16 further reveals the amount of heat stored and released over charging and discharging during load-following operation. According to the calculation, a total of 54 GJ of heat is stored in the CLHS system in the 1860 seconds of charging, and a total of 27.5 GJ of heat is released to the feed water in the 1200 seconds of discharging. It can be seen that each PCM layer stores a relatively equal amount of heat during charging, but the amounts are very different during discharging. The discharged heat from PCM4 is very small (0.1714 GJ) and therefore not visible in the figure. This is because heat transfer is mainly determined by the heat sink (PCMs for charging and water for discharging) in both processes. During charging, the local initial temperature of each PCM layer is close to its own phase change temperature and phase change occurs gradually throughout the PCM layers, so heat is stored primarily through latent heat of phase change and the thermodynamic reversibility of the process is relatively greater. During discharging, however, the evaporation temperature of the water does not change much, which causes its phase change to occur in only a few layers, and the thermodynamic reversibility of the process is relatively smaller. This explanation is also supported by the results shown in Figure 17: during charging the temperature of the exhaust gas entering and exiting each PCM layer crosses its phase change temperature (Figure 17 (a)), but during discharging only the temperature of the water entering and exiting PCM layers 2 and 3 crosses their phase change temperatures (Figure 17 (b)).
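For orientation, the stored and released totals translate into average thermal powers as follows; the heat-capacity value used to back out the implied gas temperature drop is an assumption:

```python
# Consistency check of the quoted storage figures against average thermal power.
q_charge = 54.0e9 / 1860.0     # W, 54 GJ over the 1860 s charging window
q_discharge = 27.5e9 / 1200.0  # W, 27.5 GJ over the 1200 s discharging window
print(f"avg charging power:    {q_charge/1e6:.1f} MW")
print(f"avg discharging power: {q_discharge/1e6:.1f} MW")

# With 60 kg/s of extracted gas (cp ~ 1.1 kJ/(kg K) assumed), a 29 MW charging
# rate implies the gas is cooled by roughly 440 K across the CLHS, consistent
# with entering near the 846 K rated exhaust temperature quoted earlier.
dT = q_charge / (60.0 * 1100.0)
print(f"implied gas temperature drop: {dT:.0f} K")
```

The resulting averages, about 29 MW while charging and 23 MW while discharging, line up with the storage asymmetry discussed above.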
Based on the different thermal properties of the PCMs and water, it can therefore be expected that there is an optimal thickness for each phase change layer that maximizes the charge and discharge performance.

The start-up procedure of a CCGT power plant can be divided into hot, warm and cold starts depending on the initial temperature of the material, corresponding to standstill for up to 8 hours, 48 hours and 120 hours, respectively [1]. The start-up speed is limited by the thermal stress of the steam turbine and HRSG: the longer the standstill time, the longer the start-up time required if no heat preservation measure is adopted. Keeping the HRSG warm is therefore crucial for the CCGT power plant to restart faster. In fact, the stored thermal energy can also be used to keep the HRSG warm during the plant standstill period. As shown in Figure 18, during the off-load period, ambient air is fed into the CLHS to produce hot air, which is then sent to the HRSG to compensate for its heat loss, thereby keeping the HRSG in a hot or warm state ready for faster start-up. The proposed approach keeps the HRSG warm through the CLHS instead of maintaining natural circulation, so the gas turbine and steam turbines can be shut down. Since this approach does not change the inherent structure of the HRSG or the working fluid, there should be no major technical barrier in the implementation. In addition, the air flow rate fed into the CLHS is determined by the current temperature drop in the CLHS, and this process can be controlled by a feedback loop.
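A minimal sketch of such a feedback loop is given below; the set point, gain and flow limits are illustrative assumptions, not design values from the paper:

```python
# Minimal sketch of the feedback idea described above: adjust the air flow
# fed through the CLHS to hold the HRSG near a warm-keeping set point.
def air_flow_controller(T_hrsg, T_set=450.0, kp=0.5, m_min=0.0, m_max=50.0):
    """Proportional controller: more air flow as the HRSG cools below T_set.
    T in K; returned flow in kg/s; all parameters are assumed values."""
    m_air = kp * (T_set - T_hrsg)            # kg/s per K of temperature error
    return min(max(m_air, m_min), m_max)     # clamp to physical limits
```

In practice the gain and limits would be tuned against the HRSG heat-loss rate and the remaining CLHS charge, which is exactly the optimisation question raised in the conclusions.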
Conclusions
This paper describes the dynamic modelling and simulation study of CLHS integration into a 420 MW CCGT power plant for flexible plant operation. A modelling method is introduced to achieve whole-system dynamic simulation in Aspen Plus via an external FORTRAN code. Integration strategies during start-up, load-following and standstill operations are proposed and studied.

The dynamic simulation results show that the strategies for CLHS integration with a CCGT power plant are technically feasible. In the plant start-up processes, the gas turbine exhaust gas can pass through the CLHS before being discharged into the atmosphere, so that the waste heat is captured by the CLHS. During load-following operation, the output power of the CCGT power plant can be reduced by extracting exhaust gas from the gas turbine to charge the CLHS, and the stored heat can be discharged to produce high-temperature, high-pressure steam for the steam turbine to increase the output power, while the gas turbine section keeps running at the rated load condition. Besides, the stored heat can also be used to maintain the HRSG in a warm condition to reduce the restart time after a standstill.

To further improve the CLHS performance under various operating modes, efforts could be directed to its design optimisation, such as optimising the layout of the phase change materials according to their thermodynamic properties, and the air flow rate used to keep the HRSG warm during a standstill. | 2019-04-10T13:13:04.958Z | 2019-03-01T00:00:00.000 | {
"year": 2019,
"sha1": "619471de5b6a4549e30506fc5bc52321d7d2b838",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.enconman.2018.12.082",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "49e49c368dc057f2468d5c8d30a2ee3c1d7e7ed7",
"s2fieldsofstudy": [
"Engineering",
"Physics",
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
225679687 | pes2o/s2orc | v3-fos-license | INVESTIGATION OF DECISION MAKING SUPPORT IN DIGITAL TRADING
In order to trade successfully, investors are looking for the best method to determine possible directions of the price changes of financial instruments. The main objective of this paper is to evaluate the results of digital trading using different decision-making techniques. The paper examines a deep learning technique known as the Long Short-Term Memory (LSTM) neural network and the parabolic stop and reverse (SAR) technical indicator as possible means for decision-making support. Based on an investigation of theoretical and practical aspects of digital trading and its support possibilities, investment portfolios were created in the real-time "IQ Option" digital trading platform. Short-term results show that investment portfolios created using the LSTM neural network performed better compared to the ones created using technical analysis. The study contributes to the development of new decision-making algorithms that can be used for forecasting results in the financial markets.
Introduction
Rapid digitalisation and increasing levels of Artificial Intelligence (further - AI) applications in all aspects of life bring many changes in how people and businesses operate and create value in everyday activities. At the same time, they bring many new opportunities, which have also been evident in the case of financial markets all over the world. Digitalisation reshaped the way financial markets operate and made trading processes between the buyers and the sellers of various financial instruments not only faster and less costly, but also more efficient. Meanwhile, as financial markets are not as efficient as stated in the Efficient Market Hypothesis, there is a high probability that various applications of AI may result in the recognition of unseen patterns in the behaviour of financial instruments and provide useful insights for financial decisions. However, along with the opportunities, many challenges are created. Accessing multiple digital trading platforms and trading different financial instruments has never been so easy. This attracts not only professional investors, but also individual investors who may not have the knowledge or skills to apply either basic or more advanced methods and techniques, such as AI, to perform profitable trades. Therefore, in order to exploit the opportunities and combat the challenges created by digitalisation in financial markets, the LSTM neural network and its possibilities will be adopted for specific trading decisions and compared against more traditional analytical methods, such as technical analysis, which can also be applied for supporting particular trading decisions.
So the aim of the research is to investigate whether digital trading with the application of the LSTM neural network for decision-making processes is superior to technical analysis based trading.
In order to fulfil the aim of the article, the following steps are going to be implemented in the research: to investigate the scientific literature on the main aspects of digital trading and deep learning techniques' application for forecasting the behaviour of financial markets; to adopt the deep learning based LSTM neural network for financial time series forecasting; to speculate in the selected real-time digital trading platform and compare LSTM neural network based trading portfolios with technical analysis portfolios; to present final conclusions, research limitations and suggestions based on the analysis of scientific literature and the implemented research.
Methods that are going to be used comprise systemization and analysis of scientific sources, including theoretical and experimental studies, as well as comparative analysis, graphical representation and systemization of data, programming using MatLab and its multiple functions, and technical analysis using the parabolic stop and reverse (further - SAR) technical indicator.
The conducted research on digital trading and the application of deep learning techniques for forecasting the behaviour of financial markets helped to systemize already existing information and to create the basis for further investigation of these matters. The applications performed in the research serve as examples of how both selected strategies potentially behave and what results could be expected with their application. The findings should be useful for anyone who has a professional or personal interest in digital trading, while at the same time giving an idea of whether it is worth dedicating time to more complex methods and their application for more successful trading in digitalized financial markets.
Perspectives of digital trading and its influence on financial markets
In general, digitalisation could be simply understood as the usage and adoption of digital technologies in various aspects of life, or as transformative actions that tend to change formerly physical or analogue actions into digital data systems (T. Dufva & M. Dufva, 2019). There is no doubt that well-implemented digitalisation can bring many new opportunities not only for businesses, but also for the society as a whole. It increases cost efficiency, productivity and the level of innovativeness, and brings many other benefits, further increasing the importance of its role in today's world. It is obvious that digitalisation has brought many useful changes for all parties involved in various trades, and it further tends to create new opportunities for better, more innovative and effective trading processes.
According "Asia-Pacific Trade and Investment Report 2016: Recent Trends and Developments" published by United Nations ESCAP (Economic and Social Commission for Asia and the Pacific) digital trading is referred to "the use of digital technologies to facilitate businesses without limiting it to just online sales or purchases" (Akhtar et al., 2016). In this report, digital trading is understood very widely since the whole essence is put on the general use of digital technologies. López-González and Jouanjean (2017) state that "digital trade encompasses digitally enabled transactions in trade in goods and services which can be either digitally or physically delivered involving consumers, firms and governments". A trade can be considered as digital when it is digitally enabled, but there is no difference whether a good or a service is delivered physically or digitally. Unlike the first definition, this one clarifies what parties could possibly be involved in the process of trading. Fefer et al. (2018) identify digital trading as "the delivery of products and services over the Internet by firms in any industry sector, and of associated products such as smartphones and Internetconnected sensors. While it includes provision of e-commerce platforms and related services, it excludes the value of sales of physical goods ordered online, as well as physical goods that have a digital counterpart (such as books, movies, music, and software sold on CDs or DVDs)." It has been noticed that not all researchers use the term "digital trading". Other synonyms that are used instead include such terms as "online trading" (Zhong et al., 2012;Rüdiger & Rodríguez, 2013;Dayanand, 2016), "electronic commerce", or "e-commerce" (Zhong et al., 2012;Akhtar et al., 2016;Khan, 2016;López-González & Jouanjean, 2017;Fefer et al., 2018), "electronic trading", or "e-trading" (Bank for International Settlements [BIS], 2001;Orlowski, 2015;Dutta et al., 2017), and "paperless trading" (Fefer et al., 2018;Gao, 2018).
Despite the different terms, all authors emphasize either the use of digital technologies, digital channels, or digital devices in the initiation of trades, but not all provide the same extent of detail on the main players involved in the trades and what can be traded.
In this research, it was decided to focus on digital trading in financial services, particularly on trading that takes place in the financial markets, which play a significant role in both national and global financial systems.
According to Gomber et al. (2017), digital trading in the financial markets covers mobile trading, social trading, online brokerage, online trading, high-frequency and algorithmic trading in Business-to-Consumer (B2C) and Business-to-Business (B2B) areas. It supports both individuals and institutions in making investment decisions and arranging them by using digital devices and technologies from basically anywhere and at any time (Shukla & Nerlekar, 2019). This definition gives an idea that digital trading covers multiple possibilities of how to engage in trading activities in a non-physical market, whereas each possibility has its own special features.
Digitalisation has changed the way trading takes place between the participants of the financial markets. The scientific literature highlights such benefits of digital trading as: an increased level of liquidity; better access to the financial markets due to increased digital connectivity; reduced costs of trading, since the fees for handling orders became smaller; removed geographic limitations, as it became possible for financial market participants to engage in trading activities from any place in the world with an Internet connection; improved speed of transaction execution, since traders became more independent and able to place or cancel orders by themselves without the need to discuss it with a broker; as well as greater transparency of trades, since traders have better access to a wide range of relevant trading information. Digital trading also helps investors to avoid possible misjudgements made by brokers, since they can become fully responsible for their own decisions whether to buy or sell certain financial assets, or their combinations (Lee, 2009; Petric-Iancu, 2015; Orlowski, 2015; Rani & Srinivasan, 2015; Bech et al., 2016; Kumar, 2018; Shukla & Nerlekar, 2019). It is obvious that a lot of new possibilities became available for the participants of the financial markets after most of the trading activities were transferred to a digital marketplace.
But it must be emphasised that without proper means and knowledge most of the mentioned opportunities may become challenging. Increased trading speed may put slower market players at a disadvantage while benefiting others (Rani & Srinivasan, 2015), whereas being independent from a broker may not necessarily result in higher trading profits, because traders are left alone with their knowledge and skills and need to stay connected to their digital trading platforms (Petric-Iancu, 2015). Doing all this on one's own may become a huge challenge for anyone who wants to start a successful trading path online. It becomes important to be aware of and take action against possible privacy and cyber security risks (Lee, 2009; Dandapani, 2017), and also to deal with difficulties of interpretation, appropriate choice, quick search and data reliability while making investment and trading decisions due to increased data volumes (Maknickienė, 2015). Grall-Bronnec et al. (2017) have discovered that digital trading has many characteristics similar to gambling and may lead to addictive-like behaviour.
Although digitalisation has brought many changes into the trading processes happening within the financial markets, not all of them are in favour of investors and traders. Finding ways to exploit the opportunities and combat the challenges arising from digitalisation is of paramount importance for all participants of the financial markets. One such way could be the application of AI, or more precisely deep learning techniques. As deep learning is considered more advanced than such traditional methods as technical analysis, it becomes important to see whether its application to financial trading could be superior in terms of achieved results.
Application of deep learning techniques in financial markets
Financial markets do not always perform in an efficient manner, and this in turn may lead to poor trading decisions. However, there is a high possibility that traders and investors who apply complex strategies, models or even AI could potentially achieve higher-than-average returns. In recent years AI has gained special interest and has been applied in many different areas, including finance, where it covers a wide range of possible applications from market analysis and data mining to portfolio management. Multiple techniques can be applied for predicting the direction of price movements of various assets, in this way supporting specific trading decisions.
According to Tsantekidis et al. (2017), in many cases traders apply statistical models for making decisions about whether and when to enter and exit particular markets, but often these models are limited and fail to properly forecast how market participants should act due to the naturally noisy and stochastic nature of the financial markets. Sirignano and Cont (2019) find that with the use of deep learning better accuracy in forecasting stock price movements is achieved.
According to Chatzis et al. (2018), deep learning is a subgroup of machine learning in AI, which constitutes a new generation of artificial neural networks and brings increased versatility in learning nonlinear dynamics in huge sets of data. These techniques have increased in popularity due to their cutting-edge performance in a variety of scientific fields, including computer vision, natural language processing, and many more, and with the employment of statistical modelling arguments help overcome the vanishing gradient and over-fitting problems met in traditional methods. Several more ways to define deep learning are presented by Korczak and Hemes (2017). The authors state that it is "a set of machine learning algorithms that attempt to model high-level abstractions in data by using architectures composed of multiple nonlinear transformations", whereas Abe and Nakayama (2018) state that deep learning is "a representation-learning method with multiple levels of representation that passes data through many simple but nonlinear modules". It is visible that, despite different formulations, all definitions give similar information on what deep learning stands for.
Deep learning techniques have been actively used as a research object for problem solving in the financial markets in recent years. Roondiwala et al. (2015) proposed an LSTM neural network approach to model and predict the returns of stocks listed in the NIFTY 50 Index and measured the efficiency of their approach using the Root Mean Square Error (further - RMSE). The authors performed multiple simulations by training the LSTM neural network with different historic price combinations and numbers of epochs, and found that the best RMSE results were achieved by employing a four-feature set that included high, low, open and close historic stock prices and 500 epochs, showing that the suggested approach was of good use for predicting the future behaviour of the stock market. Bao et al. (2017) suggested a novel deep learning approach referred to as WSAEs-LSTM, which combined Wavelet Transforms (WT), Stacked Autoencoders (SAEs) and the LSTM neural network, and was aimed at forecasting the prices of six stock market indices, including the CSI 300 Index, NIFTY 50 Index, Hang Seng Index, Nikkei 225 Index, S&P Index and Dow Jones Industrial Average (further - DJIA). For model prediction the authors used three different types of input variables: high, low, open and close historic stock prices and trading volume; multiple technical indicators; as well as macroeconomic indicators. Also, to compare the performance of the suggested framework, the authors evaluated the predictive accuracy and profitability of three additional models, including a conventional Recurrent neural network (further - RNN), an LSTM neural network and WLSTM. After evaluating and comparing all of the models, the authors found that their suggested approach outperformed the other models in both predictability and profitability, suggesting that the combination of different techniques could give useful insights while making investment or trading decisions in the financial markets.
In addition to the studies already reviewed, several scientists (Press, 2018; S. Siami-Namin & A. Siami-Namin, 2018) have successfully employed the LSTM neural network architecture for financial time series forecasting and compared its performance with the Autoregressive Integrated Moving Average (further - ARIMA) model. In both cases the authors found that the LSTM neural network outperformed ARIMA in predicting and forecasting financial time series data. In addition, Hiransha et al. (2018) employed for time series forecasting and price prediction not only the LSTM neural network, but also the Multilayer Perceptron (MLP), RNN and Convolutional neural network (further - CNN), and found that all deep learning based models performed better compared to ARIMA.
Continuing, the LSTM neural network was also applied for financial time series prediction by Fischer and Krauss (2018), where the authors used the network for large-scale prediction of S&P 500 Index directional movements, and compared its performance with other memory-free benchmark methods, such as Random forest (further - RAF), Deep neural network (further - DNN), and a Logistic Regression classifier (further - LOG). The authors found that the LSTM neural network surpassed the other methods, as it achieved statistically and economically significant daily returns of 0.46 per cent and a Sharpe ratio of 5.8 prior to transaction costs, while the daily returns and Sharpe ratios for RAF, DNN and LOG were 0.43, 0.32, 0.26 per cent and 5.0, 2.4, 1.7, respectively. Shah et al. (2018) applied DNN and LSTM neural networks, and compared their performance in making daily and weekly price predictions of the Bombay Stock Exchange Index (further - BSE SENSEX) and daily predictions of Tech Mahindra stock.
To sum up, investigation of multiple studies focused on the application of deep learning techniques has shown that there are various ways and deep learning based architectures that can be employed in order to forecast and predict the potential behaviour of financial markets, whereas the received information can be used for supporting trading or investment decisions. It has been noticed that multiple scientists tried to achieve this purpose by employing the LSTM neural network, or its combinations, as this network performs well at financial time series forecasting. Exactly this formed the basis for selecting the LSTM neural network for the research.
Methodology of digital trading support applying deep learning based Long Short-Term Memory (LSTM) technique
In order to apply the LSTM neural network for financial time series forecasting that would result in efficient decision-making while trading in digitalized financial markets, the MatLab program and the algorithm suggested by MathWorks (2019) were used.
It is known that the RMSE serves as a measure that helps to evaluate the performance and accuracy of predictions (Chong et al., 2017); therefore it was chosen for determining the accuracy of the research.
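For reference, the standard definition of the measure, with y_i the observed and ŷ_i the forecasted prices, is:

```latex
% Root mean square error over n forecasted points.
\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_{i}-y_{i}\right)^{2}}
```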
At the same time, it is important to identify the main external and internal parameters of the final LSTM neural network algorithm. Starting with external parameters, Dukascopy Bank's "Historical Data Feed" was used as the main source of historical hourly data of the selected financial assets. It was decided to perform training on 90 per cent of the data and testing on the remaining 10 per cent. Continuing with internal parameters, the number of features and the number of responses were set to 1, while the number of hidden units of the LSTM layer was set to 200. Considering training options, the network solver was set to the Adam (adaptive moment estimation) optimizer, training was performed for 250 epochs, the gradient threshold was set to 1 to prevent the gradients from exploding, the initial learn rate was set to 0.005, and the learn rate was dropped after 125 epochs by multiplying by a factor of 0.2. The hardware resource, in MatLab referred to as "ExecutionEnvironment", was set to the central processing unit (CPU).
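The authors built this network in MatLab; as a point of reference, a minimal sketch of the same configuration in Python/Keras is given below. The hyperparameters mirror those listed above, while the data handling (one-step windowing, absence of normalisation) is simplified for illustration.

```python
# Minimal Python/Keras sketch of the LSTM setup described in the text:
# 1 feature, 1 response, 200 hidden units, Adam, 250 epochs, gradient
# threshold 1, learn rate 0.005 dropped by a factor of 0.2 after 125 epochs.
import numpy as np
import tensorflow as tf

def make_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(None, 1)),   # one feature per time step
        tf.keras.layers.LSTM(200),                # 200 hidden units
        tf.keras.layers.Dense(1),                 # one response: next price
    ])
    opt = tf.keras.optimizers.Adam(learning_rate=0.005, clipnorm=1.0)
    model.compile(optimizer=opt, loss="mse")
    return model

def lr_schedule(epoch, lr):
    # Drop the learn rate by a factor of 0.2 after 125 epochs.
    return lr * 0.2 if epoch == 125 else lr

def train(prices):
    # 90/10 train/test split; predict the price at t+1 from the price at t,
    # mirroring the single-feature, one-step-ahead setup of the paper.
    prices = np.asarray(prices, dtype="float32")
    split = int(0.9 * len(prices))
    x = prices[:split - 1].reshape(-1, 1, 1)
    y = prices[1:split].reshape(-1, 1)
    model = make_model()
    model.fit(x, y, epochs=250, verbose=0,
              callbacks=[tf.keras.callbacks.LearningRateScheduler(lr_schedule)])
    return model, split
```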
It is important to note that for every financial instrument the LSTM neural network algorithm had to be run separately, so that relevant forecasting results for each investigated instrument could be received. This means that the only parameter that was changed each time before running the LSTM neural network algorithm was the financial time series data collected for each selected financial instrument.
Since it was decided to investigate whether digital trading with the application of the LSTM neural network for decision-making processes is superior to technical analysis based trading, it remains important to cover the main aspects of technical analysis. Generally, technical analysis involves choosing financial assets based on prior trading patterns (Azzam, 2015); in other words, it examines past market actions and uses that data to predict the future. It is thought that markets tend to repeat themselves; therefore, previous trends in most areas of life are almost always good indicators of the future (Asefeso, 2011). In this research, it was decided to focus on the parabolic SAR indicator, which was selected based on personal interest. This technical indicator may be used for determining stop points and estimating when it is best for the trader to reverse his current position and enter an opposite one.
The position of the dots of the parabolic SAR indicates to the trader whether to buy or sell the financial asset (Prasetijo et al., 2017). Exactly these indicator signals were used to support buy/sell decisions while implementing the technical analysis based trading strategy.
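"IQ Option" computes the indicator internally; as an illustration of the rule the dots encode, below is a sketch of Wilder's classic parabolic SAR recursion in Python. The default acceleration-factor parameters (0.02 step, 0.2 maximum) are the conventional ones, not values taken from the platform.

```python
import numpy as np

def parabolic_sar(high, low, af_step=0.02, af_max=0.2):
    """Wilder's parabolic stop-and-reverse. Returns SAR values and the
    trend (+1 up / -1 down): a dot below the price line (uptrend) is a
    buy signal, a dot above it (downtrend) a sell signal."""
    n = len(high)
    sar = np.zeros(n)
    trend = np.ones(n, dtype=int)        # start by assuming an uptrend
    sar[0], ep, af = low[0], high[0], af_step
    for i in range(1, n):
        sar[i] = sar[i-1] + af * (ep - sar[i-1])
        if trend[i-1] == 1:
            sar[i] = min(sar[i], low[i-1], low[max(i-2, 0)])  # stay below recent lows
            if low[i] < sar[i]:          # price pierced the SAR: reverse down
                trend[i], sar[i], ep, af = -1, ep, low[i], af_step
            else:
                trend[i] = 1
                if high[i] > ep:         # new extreme point: accelerate
                    ep, af = high[i], min(af + af_step, af_max)
        else:
            sar[i] = max(sar[i], high[i-1], high[max(i-2, 0)])
            if high[i] > sar[i]:         # price pierced the SAR: reverse up
                trend[i], sar[i], ep, af = 1, ep, high[i], af_step
            else:
                trend[i] = -1
                if low[i] < ep:
                    ep, af = low[i], min(af + af_step, af_max)
    return sar, trend
```

A trend flip from -1 to +1 corresponds to the dots jumping below the price line, i.e. the buy signal used in the strategy described next.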
For the implementation of the previously indicated trading strategies, it was decided to select and use the "IQ Option" digital trading platform. This was done in order to test how both strategies would perform under real-time market conditions, as well as to compare whether the LSTM neural network based trading strategy would perform superior to the trading strategy based on the technical indicator.
The amounts for either buying or selling each selected foreign exchange option in both the LSTM neural network and technical analysis based strategies were based on the 1/N investment strategy. This means that for each trading strategy total funds of 10000 USD were equally allocated, i.e. 2500 USD per instrument. The decisions whether to buy the options (in this case, bet that the price of the foreign exchange will go higher) or sell the options (bet that the price will go lower) for both the LSTM neural network and technical analysis based trading strategies were made in the same manner. For the LSTM neural network based trading strategy, if the forecasted price of the analysed foreign exchange was going up, the decision to buy the option was made, whereas if the forecasted price was going down, the decision to sell the option was initiated. All decisions whether to buy or sell each option were made one by one without using any other additional tools or indicators. For the technical analysis based trading strategy, the decision whether to buy or sell a certain foreign exchange option was based on the indications of the parabolic SAR indicator, which was selected from the list of technical indicators provided by the "IQ Option" trading platform. So, when the parabolic SAR indicator was turned on, a foreign exchange option was bought when the series of dots was below the price line, and sold when the series of dots was above the price line. Each foreign exchange option based on the parabolic SAR indications was bought or sold on the spot, one by one, without using any other additional tools.
All of the indicated LSTM neural network and technical analysis strategies' steps were performed numerous times in order to form multiple investment portfolios. This was done in order to see whether the application of both strategies gave consistent results. All LSTM neural network and technical analysis portfolios in the "IQ Option" trading platform were formed within the period from 2019-11-11 to 2019-11-29. The final summary of all the criteria that were followed while creating all of the trading accounts and the multiple LSTM neural network and technical analysis investment portfolios is presented in Table 1. After all the steps indicated in both trading strategies were implemented and results from the multiple investment portfolios were received, it became possible to evaluate them individually as well as to compare whether the investment portfolios based on the application of the LSTM neural network for particular decision-making processes actually performed superior to those investment portfolios based on the single parabolic SAR technical indicator, or not.
Implementation of digital trading support applying Long Short-Term Memory (LSTM) network and evaluation of results
Based on the theoretical study and research, an integrated model was adopted to test how well the LSTM neural network works at forecasting the prices of the selected foreign exchanges and whether these forecasts are suitable for supporting particular decision-making processes while trading in the "IQ Option" real-time digital trading platform. Trading in a digital marketplace that involves the use of complex algorithms, such as the LSTM neural network algorithm, can be considered algorithmic trading. This type of trading can be used in several ways:
− Recommendation system. It works like a specialist's assistant that provides information, but the trader makes the particular trading decisions himself.
− Robo-Advisor. An automatically working Robo-Advisor can get data, analyse it and build an optimized portfolio for asset management by using risk models.
Although there are several ways algorithmic trading could be performed, this research focuses on the use of the LSTM neural network algorithm to form the basis of a recommendation system. It is important to notice that the algorithm itself does not make any particular decisions related to buying or selling the selected financial assets; it only supports the trader in making decisions whether to buy or sell the assets based on the received forecasts.
Continuing, the currency pairs selected for this research included EUR/JPY (Euro to Japanese Yen), USD/CAD (United States Dollar to Canadian Dollar), GBP/AUD (British Pound to Australian Dollar), EUR/NZD (Euro to New Zealand Dollar), AUD/CHF (Australian Dollar to Swiss Franc), GBP/JPY (British Pound to Japanese Yen) and NZD/USD (New Zealand Dollar to United States Dollar). The decision to present the visual outputs of exactly four currency pairs was made because the four corresponding financial instruments, in this case four matching foreign exchange options, were later included in the investment portfolios created in the "IQ Option" digital trading platform, while the investigation of the other foreign exchanges was performed in the same manner.
The very first visual outputs received after running the LSTM neural network algorithm with the EUR/JPY, USD/CAD, GBP/AUD and EUR/NZD historical close prices in MatLab were the plots of these prices transformed into row vectors. Exactly this kind of data representation helped to realize how much the close prices of each selected instrument fluctuated during the analysed period. In every case, the historical close prices of the selected currency pairs were collected based on the date selected for testing them in the real-time digital trading platform. Therefore, as it was decided to form the very first LSTM neural network investment portfolios in the "IQ Option" trading platform on 2019-11-11, the period for collecting each currency pair's historical data was set from 2019-05-13 to 2019-11-11. The relevant data was not collected from 2019-05-11 because this date coincided with the weekend.
Later on, the training of the networks with all of the predefined internal and external parameters was performed. After the training of the networks was finished, the forecasting of the next time step was performed and the forecasted close prices of EUR/JPY, USD/CAD, GBP/AUD and EUR/NZD were received (see Figure 1). Once again, it is important to notice that the forecasted prices for each currency pair were not received at the same time and are represented together only for the sake of convenience, and because they later formed the basis for the formation of the investment portfolios.
After the forecasted close prices of each currency pair were received, they were compared with the corresponding test data. In these cases the following RMSE values for each currency pair were obtained: EUR/JPY - 1.4827, USD/CAD - 0.010788, GBP/AUD - 0.029451, and EUR/NZD - 0.014518. A smaller RMSE indicates a better fit of the model to the data, which means that the smaller the RMSE, the better. However, in these particular cases the received RMSE values could be interpreted as quite high, and since it is known that predictions are more accurate when the network state is updated with the observed values instead of the predicted values, further corrections were implemented.
The changes implemented after updating the network state of the LSTM neural network are discussed further. In these cases the received RMSE values for each currency pair were much smaller compared to the previous ones: EUR/JPY - 0.86882, USD/CAD - 0.00082057, GBP/AUD - 0.0018808, and EUR/NZD - 0.0014945. This indicates that the predicted values ended up much closer to the observed ones, which in turn means that the LSTM neural network is suitable for making predictions and, therefore, can be used for particular decision-making support while implementing the previously described LSTM neural network based trading strategy in the "IQ Option" digital trading platform. Now that the logic of applying the LSTM neural network for forecasting the prices of the selected foreign exchanges, as well as its suitability for supporting further trading decisions, has been discussed, it is possible to cover how the final investment portfolios were formed, following the previously identified steps of both the LSTM neural network and technical analysis based trading strategies.
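To make the distinction concrete, the sketch below contrasts the two forecasting modes using the model from the earlier training sketch: a closed loop that feeds the model's own predictions back in, versus an open loop that updates with each observed price, which is what produced the lower RMSE values above. Note this is a simplification: MatLab's predictAndUpdateState carries the LSTM hidden state between calls, whereas this sketch simply swaps the input value.

```python
# Open-loop vs closed-loop one-step-ahead forecasting, reusing the `model`
# returned by train() in the earlier sketch. Simplified illustration only.
import numpy as np

def forecast(model, history, test, closed_loop=True):
    last = history[-1]
    preds = []
    for observed in test:
        p = float(model.predict(np.array([[[last]]]), verbose=0)[0, 0])
        preds.append(p)
        last = p if closed_loop else observed   # open loop: use the observation
    return np.array(preds)

def rmse(pred, truth):
    return float(np.sqrt(np.mean((pred - truth) ** 2)))
```

Running both modes on the held-out 10 per cent and comparing rmse() values reproduces the qualitative effect reported above: updating with observations keeps the forecast anchored to the realised price path, so the error does not compound step by step.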
Eighteen investment portfolios were created in the "IQ Option" digital trading platform between 2019-11-11 and 2019-11-29. Two investment portfolios were formed from EUR/JPY, USD/CAD, GBP/AUD and EUR/NZD options, whereas all the other investment portfolios were formed from AUD/CHF, NZD/USD, GBP/JPY and USD/CAD options. Moreover, all foreign exchange options included in either the LSTM neural network or technical analysis investment portfolios were traded for no longer than 1 hour, as this was the maximum time to the expiration of each option.
To begin with, for the LSTM neural network based trading strategy, the particular decisions whether to buy or sell each foreign exchange option to form the final investment portfolio were based on the forecasts received after running the LSTM neural network algorithm in MatLab with each selected foreign exchange's six months of historical data. For instance, as the EUR/JPY and GBP/AUD forecasts showed increasing trends of the close price, the decision to buy each option was made, whereas decreasing trends in USD/CAD and EUR/NZD supported the decisions to sell both of those foreign exchange options. Meanwhile, for the technical analysis based trading strategy, the decisions whether to buy or sell certain options to form a portfolio were taken after observing the signals of the parabolic SAR technical indicator directly in the "IQ Option" digital trading platform.
Certain amounts for buying or selling each foreign exchange option were set based on the same 1/N investment strategy for both of the strategies. Therefore, each selected option was either bought or sold for the amount of 2500 USD, resulting in a total investment of 10000 USD.
Considering that decisions in all LSTM neural network and technical analysis investment portfolios were taken in the same manner, the final investment portfolios after implementing the steps of both trading strategies looked as follows (see Table 2). After all active positions in the "IQ Option" digital trading platform were automatically closed, separate results on the performance of the nine LSTM neural network and nine technical analysis investment portfolios were received. Starting with the performance and results of the LSTM neural network investment portfolios (see Table 3), it can be seen that each individual portfolio generated different results, ranging from 1377.04-1501.2 USD profits to 191.87-5320.34 USD losses. These significant differences indicate that the application of the LSTM neural network trading strategy did not give consistent results and, in the majority of cases, did not help to achieve the goals of avoiding losses and earning some amount of profit.
Turning to the investment portfolios' results received after trading based on the technical analysis trading strategy (see Table 4), a similar situation as with the LSTM neural network investment portfolios was seen in terms of consistency of results and achievement of goals. The received results ranged from 712.93-5014.4 USD profits to 1363.69-7700.29 USD losses, which shows that the application of the technical analysis trading strategy did not produce consistent results among all investment portfolios and failed to generate profits instead of losses. It was decided to compare the final performance of both the LSTM neural network and technical analysis investment portfolios by reviewing the total number of profitable and unprofitable trades within all created portfolios, as well as their average values. Moreover, it was decided to compare how many profitable and unprofitable investment portfolios were created following each of the trading strategies, and what their average profits or losses and average profitabilities were. Exactly these results are reflected in Table 5 and are discussed further. It was noticed that the numbers of profitable and unprofitable trades within all created portfolios differed only by two trades between the strategies. To be more precise, the total numbers of profitable and unprofitable trades based on the LSTM neural network trading strategy were fourteen and twenty-two trades, whereas for the technical analysis trading strategy they were twelve and twenty-four trades, respectively. Nevertheless, despite the fact that the number of profitable trades following the LSTM neural network trading strategy was higher, the average values of these trades for the LSTM neural network and technical analysis trading strategies were quite close to one another, equal to 1761.39 USD and 1704.82 USD, respectively. In addition, despite the fact that the number of unprofitable trades based on the LSTM neural network trading strategy was lower, their average value was higher, equal to 2132.77 USD, whereas for the technical analysis based trading strategy the average unprofitable trade value was 2011.2 USD.
Slightly different results were seen in the overall numbers of profitable and unprofitable investment portfolios: following each strategy, two profitable and seven unprofitable portfolios were created. The average value of profits or losses as well as the average profitability of all portfolios was calculated. From the received results it can be seen that, on average, the LSTM neural network and technical analysis investment portfolios gave losses of 2473.5 USD and 3090.09 USD, while the rates of return on the initial investments were negative, at 24.74% and 30.9%, respectively. From these results it is possible to state that the LSTM neural network investment portfolios performed better than the technical analysis investment portfolios, since the average losses of all LSTM neural network portfolios were about 1.3 times lower than the average losses of the technical analysis investment portfolios.
On the other hand, although the LSTM neural network investment portfolios demonstrated better average results than the technical analysis investment portfolios, the overall performance of these portfolios must be considered poor, as no trader or investor wants to receive losses instead of profits. Moreover, the application of the LSTM neural network trading strategy required much more time than the technical analysis trading strategy, and the fact that in the majority of cases it gave largely unfavourable results raises doubts about whether this trading strategy is worthwhile.
Conclusions
The investigation of the scientific literature made it clear that digital trading is a broad concept, which in terms of financial markets covers multiple possibilities, such as mobile trading, social trading, online brokerage, online trading, and high-frequency and algorithmic trading, and may serve as a facility that supports the provision of electronic order routing, automated trade execution, and electronic dissemination of pre-trade and post-trade information.
The adoption of a deep learning based LSTM neural network algorithm using the MatLab computing environment and its multiple functions made it possible to forecast the financial time series of the selected financial instruments and in this way formed the basis of a recommendation system aimed at supporting particular buy and sell decisions. It was the information received from the forecasts that helped to make the particular decisions which led to the creation of the LSTM neural network investment portfolios. It is worth mentioning that the LSTM neural network algorithm did not have the ability to make any automated trading decisions in the selected digital trading platform. Instead, it only supported the trader in making the final decision whether to buy or sell each selected financial asset. Speculation in the "IQ Option" real-time digital trading platform following the steps of both the LSTM neural network and technical analysis trading strategies resulted in the formation of eighteen investment portfolios. The comparison of the results received in the selected digital trading platform has shown that, in the majority of cases, the investment portfolios generated losses instead of profits, but, in average terms, the LSTM neural network investment portfolios performed better than the technical analysis portfolios, as they achieved almost 1.3 times lower losses.
After performing the scientific analysis and implementing the final research, it is possible to conclude that digital trading with the application of an LSTM neural network for decision-making performed better than technical analysis based trading. It was noticed that in the "IQ Option" digital trading platform the average losses of the LSTM neural network investment portfolios were equal to 2473.5 USD, while the technical analysis investment portfolios' average losses were higher and equal to 3090.09 USD. On the other hand, it was seen that the LSTM neural network portfolios in most cases generated losses, which matched neither the expectation of achieving higher than average returns nor the goals of avoiding losses and achieving some amount of profit. This might have happened due to various limitations faced while implementing the LSTM neural network based trading strategy, such as the limited time period, the frequency and type of the selected input data, and the constant input parameters of the LSTM neural network algorithm.
Future research focusing on the fine-tuning of the LSTM neural network algorithm should be performed in order to investigate whether better and more profitable trading results could be achieved. | 2020-06-18T09:06:26.515Z | 2020-06-16T00:00:00.000 | {
"year": 2020,
"sha1": "9a99eb13377510f4aa42b6e5172d872c8764d390",
"oa_license": "CCBY",
"oa_url": "http://bm.vgtu.lt/index.php/verslas/2020/paper/download/510/181",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "3d7c9edba3c1a0e9ea8bc4416160081c6324ab6c",
"s2fieldsofstudy": [
"Computer Science",
"Business"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
2436145 | pes2o/s2orc | v3-fos-license | Primary cultures of dissociated sympathetic neurons. II. Initial studies on catecholamine metabolism.
Initial studies are reported on the catecholamine metabolism of low-density cultures of dissociated primary sympathetic neurons. Radioactive tyrosine was used to study the synthesis and breakdown of catecholamines in the cultures. The dependence of catecholamine synthesis and accumulation on external tyrosine concentration was examined and a concentration which is near saturation, 30 microM, was chosen for further studies. The free tyrosine pool in the nerve cells equilibrated with extracellular tyrosine within 1 h; the total accumulation of tyrosine (free tyrosine plus protein, catecholamines, and metabolites) was linear for more than 24 h of incubation. Addition of biopterin, the cofactor of tyrosine hydroxylase, only slightly enhanced catecholamine biosynthesis by the cultured neurons. However, addition of reduced ascorbic acid, the cosubstrate for dopamine beta-hydroxylase, markedly stimulated the conversion of dopamine (DA) to norepinephrine (NE). Phenylalanine, like tyrosine, served as a precursor for some of the DA and NE produced by the cultures, but tyrosine always accounted for more than 90% of the catecholamines produced. The DA pool labeled rapidly to a saturation level characteristic of the age of the culture. The NE pool filled more slowly and was much larger than the DA pool. The disappearance of radioactive NE and DA during chase experiments followed a simple exponential curve. Older cultures showed both more rapid production and more rapid turnover of the catecholamines than did younger cultures, suggesting a process of maturation.
INTRODUCTION
The initial paper in this series (Mains and Patterson, 1973 a) described the conditions for the isolation and growth of sympathetic neurons in low-density cultures and described some of the basic morphological and biochemical properties of the neurons. It is hoped that this culture system will prove useful in the study of neurospecificity and neuronal development. Toward this end we have begun a study of catecholamine (CA)¹ metabolism in the cultures in order to characterize further the neurons as well as to obtain information on certain basic questions of catecholamine biochemistry not easily approached in vivo.

¹ Abbreviations used in this series of papers: ACh, acetylcholine; BH4, tetrahydrobiopterin; BSA, bovine serum albumin; CA, catecholamine; DA, dopamine.
The preceding paper presented data showing that the cultured sympathetic neurons synthesize and accumulate radioactive DA and NE when incubated in radioactive tyrosine (Mains and Patterson, 1973 a). Here we present studies on nutritional requirements for catecholamine synthesis, as well as kinetic analyses of NE and DA biosynthesis and disappearance.
MATERIALS AND METHODS
Much of the methodology used in the experiments described here is the same as was described in the preceding paper (Mains and Patterson, 1973 a). Except where noted, preparation and maintenance of the cultures, radioactive compounds used, and methods of incubation were the same. Cultures were grown and incubated with 2-amino-4-hydroxy-6,7-dimethyltetrahydropteridine (DMPH4) (0.5 mg/liter; Aldrich Chemical Co., Inc., Milwaukee, Wis.); fresh ascorbic acid (25 mg/liter) was added every 2 h during incubations, except where stated. L-[7-3H]norepinephrine (6.4 Ci/mmol; New England Nuclear, Boston, Mass.) and L-[3-3H]phenylalanine (16 Ci/mmol; New England Nuclear) were also used. Phenylalanine was purified by paper electrophoresis as described previously. The [3,5-3H]tyrosine was always 30 Ci/mmol (New England Nuclear).
Tyrosine Concentration Dependence
The concentration of tyrosine was varied in the incubation medium by the addition of appropriate aliquots of Leibovitz's (L-15-air) plating medium. Rat serum contributed 3 µmol/liter unlabeled free tyrosine to the final incubation mix, since rat serum contains about 75 µM tyrosine (Mains and Patterson, 1973 a).
Ascorbic Acid Determinations
The methods of Baker and Frank (1968) and Roe (1954) were used to determine total ascorbate (reduced plus oxidized plus diketogulonic acid) and reduced ascorbate, respectively.
Phenylalanine Incubations
To compare phenylalanine and tyrosine as precursors for catecholamine synthesis, cultures were incubated with [14C]tyrosine (455 mCi/mmol) and [3H]phenylalanine. The tyrosine concentration was always 30 µM; phenylalanine was varied from 30 to 750 µM. Concentrations of 30-250 µM were tested using [3H]phenylalanine (16 Ci/mmol). For economy and to make the contributions of each precursor more easily separable, 750 µM phenylalanine was tested using identical cultures incubated with 30 µM [3H]tyrosine plus 750 µM unlabeled L-phenylalanine, or 30 µM unlabeled tyrosine and 750 µM [3H]L-phenylalanine (4.25 Ci/mmol). Catecholamines formed from [3H]phenylalanine were determined by electrophoresis followed by chromatography, as previously described. Since all evidence in vivo (Long, 1961; Meister, 1965) and in vitro (Eagle, 1955 and 1959) indicates that mammalian cells do not utilize D-amino acids, nor do D-amino acids compete with L-amino acids even in ten-fold excess, we have ignored the presence of D-phenylalanine in L-15.
Incubations with Radioactive Norepinephrine
Cultures were pulsed with 0.25 µM [3H]norepinephrine for 1 h; then the chase procedure was followed.
Chase Experiments
Chase experiments were performed on cultures incubated for various times in [3H]tyrosine, or for 1 h in [3H]NE. The cultures were washed after the incubation with 2 ml of L-15-air without Methocel, drained, and then incubated in another 2 ml of L-15-air growth medium without Methocel, giving at least a 1000X dilution of residual [3H]NE left in the dish. Fresh ascorbic acid (25 mg/liter) was supplied every 2-3 h during the chase.
Dependence of Catecholamine Production on External Tyrosine Concentration
Since neurons utilize tyrosine for several purposes, studies of catecholamine metabolism could give misleading results if the concentration of tyrosine were low enough to be limiting, thereby forcing incorporation of tyrosine into protein to compete with conversion into neurotransmitter. In these cultures an assessment of the routes of tyrosine utilization can be attempted. Cultures aged 1, 2, and 3 wk were incubated in varying concentrations of tyrosine for 8-h periods and several parameters were measured. Fig. 1 presents the pooled results from experiments on the cultures. In Fig. 1 A, it can be seen that the free radioactive tyrosine pool of the cultures expands with increases in external radioactive tyrosine concentration at least to 200 µM tyrosine. Although the free tyrosine pool expands with increasing extracellular tyrosine, the total radioactivity accumulated by the cells (largely present as protein, since it comigrates with the major protein bands seen on stained sodium dodecyl sulfate (SDS) polyacrylamide gels of the cultures; unpublished observations) reaches a plateau between 30 and 60 µM external tyrosine, as shown in Fig. 1 B. These values are to be compared with 75 µM, the value for rat serum (Mains and Patterson, 1973 a).
[3H]catecholamine synthesis and accumulation from extracellular [3H]tyrosine also reaches a plateau near the physiological range of tyrosine values, as seen in Fig. 1 C. For subsequent work, 30 µM tyrosine has been used.
Time Dependence of Uptake and Accumulation of Exogenous Tyrosine
An understanding of catecholamine metabolism in these cultures requires data concerning the time dependence of tyrosine uptake and accumulation, as well as the temporal patterns of the production of DA and NE from tyrosine. As a first step in this study, it was important to determine if the extracellular tyrosine equilibrated rapidly with the neuronal tyrosine pool. This seemed likely, since the free amino acid pools of various non-neural cells in culture are known to exchange with the surrounding medium in less than an hour (Eagle, 1959). Therefore, a series of cultures were incubated in [14C]tyrosine for 8 h and then an aliquot of [3H]tyrosine was added to the medium; the cpm ratio 3H/14C in the medium was 9.8. The 3H/14C ratio of the free tyrosine pool was then monitored as a function of time. Within 1 h, the ratio 3H/14C in the intracellular pool was 9.9. In experiments with sets of identical cultures in which only [3H]tyrosine was used in the medium, the free [3H]tyrosine peak rose abruptly to its final value at the earliest time point, and remained at that value for periods up to 28 h of incubation (e.g., Fig. 2, open circles).
The rapid approach to a steady state of the intracellular free tyrosine need not represent bidirectional exchange, since the cells utilize an amount of tyrosine greater than the size of their free tyrosine pool every hour (see below).
To interpret the tyrosine concentration dependence shown in Fig. 1, it is necessary to determine whether the data represent the rate or extent of synthesis and accumulation. Two sets of 1-wk old cultures were used to examine the kinetics of labeling, and the pooled data are shown in Fig. 2. Cultures were incubated for varying lengths of time and then harvested and analyzed. As seen in Fig. 2, the total accumulation of radioactivity was linear with incubation time. Thus the 8-h incubation data in Fig. 1 B represent rates, not steady state values. Also, in seven experiments using older (2-wk) cultures, total accumulation of radioactivity was found to be linear with incubation time (e.g., Fig. 5 A). Thus the rapid exchange of free tyrosine and the linear accumulation of total radioactivity occur in both young and older cultures. Since there is little or no proliferation of non-neural cells in these cultures and the neurons apparently do not divide (Mains and Patterson, 1973 a), the steady accumulation of radioactivity with time represents growth and turnover in the neurons.
Catecholamine Biosynthesis
It has been reported that phenylalanine as well as tyrosine can serve as precursor for catecholamine synthesis in brain homogenates (Karobath and Baldessarini, 1972). Tyrosine hydroxylase is inhibited by tyrosine but not by phenylalanine, and the isolated enzyme can produce dopa by doubly hydroxylating phenylalanine faster than by single hydroxylation of tyrosine (Shiman et al., 1971). The question remains, however, whether phenylalanine can serve as a catecholamine precursor in intact cells. If so, it is important to know how much of the catecholamine synthesized and accumulated comes from tyrosine and how much from phenylalanine.
To study the two potential precursors, cultures were incubated with both [14C]tyrosine and [3H]phenylalanine for 8 h, and DA and NE were isolated by paper electrophoresis and chromatography. The results of a series of experiments appear in Fig. 3, in which the fraction of catecholamines synthesized from tyrosine (relative to the total synthesis from both phenylalanine and tyrosine) is plotted as a function of external phenylalanine concentration (all at 30 µM tyrosine). Phenylalanine served as precursor for both DA and NE, and competed with tyrosine as precursor in a graded manner. However, even at the highest concentration of phenylalanine tested, 750 µM, phenylalanine accounted for less than 10% of the catecholamines synthesized, in spite of the fact that the cultures took up and accumulated almost eight times more phenylalanine than tyrosine. Furthermore, phenylalanine accounted for less than 10% of the catecholamines synthesized even when the tyrosine concentration was 10 µM. Deletion of DMPH4 from growth and incubation media did not significantly alter the results of the mixed tyrosine-phenylalanine experiments.
Effects of Exogenous Pteridine Cofactor and Exogenous NE
We investigated the possibility that catecholamine metabolism is regulated by the supply of biopterin, the pteridine cofactor for tyrosine hydroxylase, by assaying for a stimulation of catecholamine production upon the addition of exogenous biopterin (BH4), or its synthetic analog DMPH4. (The biopterin was the generous gift of Dr. Seymour Kaufman.) Exps. A, B, and C of Table I show that, although there may be a small stimulation using fresh BH4 or DMPH4 during incubations with radioactive tyrosine, the effects were not statistically significant. The cultures were grown in L-15 lacking folic acid and fresh vitamin mix; the medium was supplemented at every feeding with fresh ascorbic and folic acids (50 and 1 mg/liter, respectively). This was done in an effort to minimize spontaneous formation in the medium of biologically active derivatives of folic acid which can substitute for biopterin (Lloyd et al., 1971). We hoped to force the cells either to synthesize biopterin, to acquire biopterin from the serum, or to survive with a deficit of the pteridine cofactor.
Another approach to the study of pteridine cofactors involved the use of exogenous unlabeled NE to inhibit the production of new catecholamines from radioactive tyrosine. Table II shows that exogenous NE inhibited the production of radioactive catecholamines, and that the effect of exogenous NE was stronger on NE than on DA synthesis and accumulation. Since NE competes with pteridine cofactors in binding to tyrosine hydroxylase (Nagatsu et al., 1972), it was possible that BH4 or DMPH4 would relieve the blockade of [3H]catecholamine synthesis and accumulation produced by exogenous unlabeled NE. Exps. C and D of Table I indicate that added cofactor had no significant effect on NE inhibition, although a further small inhibition of catecholamine production may have been caused by the pteridines.
Effects of Exogenous Ascorbic Acid
Early in these studies we observed that as the cultured nerve cells grew for progressively longer times in culture, they synthesized and accumulated more DA than NE. For 8-h incubations, the ratio NE/DA in older cultures was often as low as 0.1. This was unexpected since the cultured neurons appeared morphologically to be principal cells, which in vivo produce primarily NE, not DA, and since the growth and incubation media always contained ascorbic acid, the cosubstrate for dopamine-β-hydroxylase. Upon examination, it became clear that ascorbic acid is extremely labile under culture conditions, and must be frequently supplied fresh in the culture medium if the neurons are to synthesize and store substantial amounts of NE. As seen in Table III A, cultures grown for 7 days in 50 mg/liter ascorbic acid supplied fresh daily produce 2.5 times as much radioactive NE in an 8-h period as cultures grown for 7 days without exogenous ascorbate; no significant differences in DA production or total tyrosine accumulation are seen. In Table III B are presented the results of growing cultures for 2 wk without fresh ascorbate and then adding ascorbate for the 16-h incubation; NE synthesis and accumulation was stimulated tenfold, while the DA pool was actually made smaller by the addition of ascorbate.
Although ascorbic acid is unstable, it was possible that the neurons stored ascorbate for subsequent use. To examine this question, cultures were grown for 3 wk with fresh ascorbate daily and then incubated for 8 h with radioactive tyrosine plus either fresh ascorbate or an equal aliquot of incubation medium which had been preincubated in the absence of cells at pH 7.2, 36°C, for 16 h (Table III C). The synthesis and accumulation of NE was stimulated over sixfold by fresh ascorbate, again with no change in the DA pool or the total tyrosine accumulated. Thus the neurons must be supplied with ascorbate more often than once daily if they are to synthesize their full complement of NE. Other experiments showed that added glutathione (50 mg/liter) had no effect on the conversion of DA to NE and thus would not substitute for ascorbate. Peterkofsky (1972 a) showed that total ascorbic acid (reduced plus oxidized ascorbate plus diketogulonic acid) in Eagle's minimal essential medium (MEM) decays to very low levels in 16 h when incubated at 37°C, pH 7.4; we have confirmed these results using L-15-air. Dopamine-β-hydroxylase requires reduced ascorbate to convert DA to NE (Levin and Kaufman, 1961); once it becomes oxidized, ascorbate is converted to diketogulonate in a matter of minutes under tissue culture conditions (Ball, 1937). Since ascorbate is made primarily in the liver and kidney, it is possible that nerve cells cannot resynthesize reduced ascorbate from diketogulonic acid (Pauling, 1970; Baker and Frank, 1968). Therefore we measured the loss of reduced ascorbic acid from L-15 at a variety of pH values, with and without serum, and with and without added glutathione (50 mg/liter). In all cases, less than 1% of the initial reduced ascorbate (50 mg/liter) was present after 2 h of incubation, even when special precautions (lower pH, inclusion of reduced glutathione) were taken to stabilize the ascorbate.
Because ascorbic acid is so unstable, it was supplied fresh every 2-3 h in all subsequent incubations.
Time Dependence of Catecholamine Production from Exogenous Tyrosine
The kinetics of NE and DA metabolism were more complicated than in the case of the free tyrosine and total tyrosine discussed above. In 1-wk old cultures treated daily with ascorbic acid (Fig. 4) and supplied with ascorbate every 2-3 h during the incubation as well, the DA pool labeled maximally in 2-4 h and then did not change for the next 28 h. NE, on the other hand, labeled more gradually and in fact showed no sign of saturation. In 2-wk old cultures, the rate of catecholamine production was greatly accelerated compared to the younger cultures (Fig. 4), as shown by the open circles for NE in Fig. 5 B.
The effect of ascorbate on the time course of NE labeling is also shown in this figure. Cultures were either maintained without ascorbate until the incubation (open circles) or were given fresh ascorbate (25 mg/liter) plus glutathione (15 mg/liter) every 12 h for 4 days before the incubation (closed circles). It was thought that these feeding protocols would produce cultures with small and large endogenous NE pools, respectively, so that the effects of the endogenous NE level on the kinetics of labeling could be observed. All cultures were given ascorbate and glutathione every 2-3 h during the incubations. The total tyrosine incorporation by the two sets of cultures was identical for 16 h (Fig. 5 A), indicating that ascorbate prefeeding did not affect the overall tyrosine utilization. In both types of culture, the radioactivity in DA remained at 5000 ± 700 cpm for all times examined. On the other hand, the cultures grown without ascorbate produced NE at a linear rate for 16 h, while cultures maintained in ascorbic acid produced radioactive NE at a much slower rate. In two other experiments, cultures fed daily with ascorbate showed the linear, rapid NE labeling pattern, while four experiments with cultures fed two or more times daily with ascorbate showed NE labeling patterns similar to the closed circles in Fig. 5 B. As already seen in Table II, NE can inhibit the production of [3H]catecholamine from [3H]tyrosine; endogenous stores of NE due to frequent ascorbate additions may account for the labeling patterns in Fig. 5 B.

[Table III notes: Data presented as the mean ± SEM for quadruplicate determinations. Total cpm = the total radioactivity accumulated by the culture during the incubation. * 25 mg/liter added daily; medium changed every 3 days. ‡ 25 mg/liter added fresh every 2-3 h.]

FIGURE 4 8-day old cultures were incubated for various times (as indicated) with 30 µM radioactive tyrosine. The open circles represent the radioactivity in DA, while the closed circles give the radioactivity in NE at each time. The data for each point is the mean ± SEM of 3-6 cultures. The cultures were grown with fresh ascorbic acid added daily and the incubations were in the presence of ascorbate (50 mg/ml; added fresh every 2-3 h).
Turnover of Catecholamines; Chase Experiments
Having obtained some information on the synthesis of catecholamines in the cultures, it was of interest to examine neurotransmitter breakdown or loss as well. After labeling a set of 1-wk old cultures for 8 h, the data from chase experiments in unlabeled tyrosine revealed simple exponential decays for both NE and DA, as seen in Fig. 6. 2-wk old cultures also exhibited exponential decline of labeled NE and DA, and these results are summarized in Table IV. An alternative method for observing NE decay, which is often used in vivo, involved a short incubation with [3H]NE at a concentration (0.25 µM) that did not alter the overall rate of NE production (see Table II), followed by incubation in the absence of exogenous NE. An example of such a chase following a 1-h incubation of a set of 2-wk old cultures in 0.25 µM [3H]NE is given in Fig. 7. This method also shows that the radioactive NE disappearance follows a single exponential with the same half-time as seen using radioactive tyrosine to label the NE. The data from experiments using the two methods of determining turnover rate are compared in Table IV. The table includes the results of a chase experiment done on cultures fed twice daily with fresh ascorbate for 4 days before the incubation with radioactive tyrosine; in addition, the incubation was carried out for 16 h rather than the usual 8. These manipulations were used in an effort to preload the neuronal stores with as much NE as possible. The data from the chase experiment was, however, identical to that seen previously. The chase results were not altered by maintaining cultures in the absence of penicillin-streptomycin, imidazole, or serum.
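Since the chase data follow a single exponential, the half-time can be extracted from a straight-line fit of the logarithm of the remaining counts against time; the sketch below is a minimal illustration of that arithmetic with hypothetical count data, not the paper's measurements:

```python
import numpy as np

def half_time(t_hours, counts):
    """Fit counts ~ C * exp(-k t) by linear regression on log(counts)
    and return the half-time t(1/2) = ln 2 / k."""
    k, _ = np.polyfit(t_hours, -np.log(counts), 1)  # slope is the decay rate k
    return np.log(2.0) / k

# Hypothetical chase data decaying with t(1/2) = 1.1 h (cf. 2-wk cultures).
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
cpm = 10000.0 * np.exp(-np.log(2.0) / 1.1 * t)
print(half_time(t, cpm))   # recovers ~1.1
```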
FIGURE 7 2-wk old cultures were pulsed for 60 min with [3H]NE as described in Materials and Methods and then chased in 30 µM unlabeled tyrosine for the indicated times. The cultures had been given fresh ascorbate twice daily for the 4 days before the pulse as well as every 2-3 h during the chase.
Further experiments showed that the chase rates obtained with a negligible change of the medium were the same as when the bulk of the medium was changed.
DISCUSSION
As part of the characterization of the neuronal cultures before the addition of other cell types, this paper has presented studies on certain nutritional and kinetic aspects of catecholamine metabolism in dissociated sympathetic neurons. Some of these findings may be relevant to questions about catecholamine metabolism raised in investigations of sympathetic neurons in vivo.
Tyrosine Metabolism: Concentration Dependence
To simplify the interpretation of the catecholamine synthesis and disappearance in the cultures, the isotopic incubations were carried out under conditions where the tyrosine concentration was not limiting; the uptake and accumulation of tyrosine was linear throughout the incubation period.
The dependence of both tyrosine accumulation and catecholamine production on external tyrosine concentration shows half-maximal saturation at 15-20 µM tyrosine for both young (7-8 days) and older (14-21 days) cultures of neurons (Fig. 1 B, C).
It should be pointed out that although the radioactive intracellular tyrosine pool rapidly reaches a steady state and expands in proportion to the external tyrosine concentration (text and Fig. 1 A), free tyrosine represents a small percentage of the total radioactivity taken up and accumulated by the neurons in an 8-h period (about 10%; Fig. 2). The data on the tyrosine concentration dependence of catecholamine production (Fig. 1 C) agrees well with the work of Levitt et al. (1965). In their study of catecholamine metabolism in the rat heart, rapid perfusion was employed in order to maintain the external tyrosine concentration; 60 µM tyrosine was the saturating value in their study.
Tyrosine Metabolism: Time Dependence
The time course of utilization of tyrosine (text and Fig. 2) shows that under the incubation conditions employed, the intracellular free tyrosine pool of the neurons reaches a steady state of labeling in less than 1 h while the accumulation of radioactive tyrosine (largely in protein) continues linearly for at least 28 h. This indicates that the data from the 8-h incubations normally employed represented ongoing metabolic activity. Rapid and complete equilibration of extracellular and intracellular tyrosine is implied by the following results: (a) the intracellular tyrosine pool labels completely within 1 h (text and Fig. 2), (b) the radioactive catecholamines synthesized from tyrosine chase very rapidly (Fig. 6 and Table IV), and (c) the tyrosine pool expands in response to changes in extracellular tyrosine concentration (Fig. 1 A). The data can only put a bound on the time for the intracellular free tyrosine to reach a steady state of labeling; the half-time must be less than 10 min. The breakdown of unlabeled proteins could contribute unlabeled tyrosine to the intracellular free tyrosine pool, lowering its specific activity with respect to the specific activity of [3H]tyrosine in the medium.
Phenylalanine as Precursor of the Catecholamines
The longstanding assumption that tyrosine is the precursor for catecholamine synthesis has been called into question by recent observations that purified tyrosine hydroxylase (Shiman et al., 1971) as well as brain homogenates (Karobath and Baldessarini, 1972) can convert phenylalanine and tyrosine into catechols. Which precursor is actually used by sympathetic neurons was investigated in the neuronal cultures. The results of these experiments (Fig. 3) show that the neurons could utilize phenylalanine as a catecholamine precursor, but that at blood levels of tyrosine and phenylalanine, the latter accounted for only 1-2% of the catecholamines produced. DMPH4 reduces the inhibition of tyrosine hydroxylase by tyrosine, and thus could have caused the phenylalanine contribution to the catecholamines synthesized to be abnormally small in the cultures (Shiman et al., 1971). However, deletion of DMPH4 did not change the results. Even under conditions where phenylalanine uptake was 7.5 times as great as tyrosine, the former accounted for less than 10% of the catecholamines synthesized. The possibility that the unlabeled phenylalanine pool was very much larger than the tyrosine pool, thereby isotopically diluting the [3H]phenylalanine much more than the [14C]tyrosine, seems unlikely in view of Eagle's and others' studies (Eagle et al., 1957; Piez and Eagle, 1958) showing that the free tyrosine and phenylalanine pools are very similar in size in other cell types in culture.
The apparent differences between tyrosine hydroxylase prepared from adrenal medulla and from brain (Nagatsu et al., 1971) make it desirable that intact cells from other sources be examined before general conclusions can be drawn concerning the role of phenylalanine in catecholamine metabolism.
Nutritional Factors: Pteridine Cofactors
Studies on purified tyrosine hydroxylase led to the hypothesis that the concentration of its cofactor, biopterin, or the level of the enzyme(s) which reduce(s) the biopterin, play a critical role in controlling the levels of catecholamines in the sympathetic nervous system (Mussachio et al., 1971). This suggestion, coupled with the dearth of knowledge concerning the synthesis, storage, and breakdown of biopterin in higher organisms, led to a number of experiments designed to investigate the effect of added pteridines on catecholamine synthesis by the cultured neurons. These efforts resulted in a small but consistent stimulation by added biopterin on catecholamine synthesis (Table I). The synthetic analog of biopterin, DMPH4, had a similar effect in these experiments. These results suggest that the cofactor was entering the cells and exerting a small effect; they do not provide support for the hypothesis that biopterin levels limit the rate of catecholamine synthesis, nor for the possibility that higher biopterin levels may relieve feedback inhibition of NE on tyrosine hydroxylase in whole cells. It should be noted, however, that the serum in the culture medium or the neurons themselves may provide an excess of biopterin. A recent report (Fukushima and Shiota, 1973) indicates that some mammalian cells can make biopterin from guanine. The recent work of Craine et al. (1972) indicates that high levels of biopterin reductase are present in various tissues, so that biopterin itself need be present only in small quantities to be effective. Thoa et al. (1971) reported a stimulation of DA synthesis by DMPH4, but their experimental manipulations abolished NE synthesis in the preparation; the magnitude of the stimulation of DA production by DMPH4 in their experiments was very similar to our data in Table I. Finally, brief reports of a stimulation by biopterin on catecholamine accumulation in sympathetic explants have appeared (Benitez et al., 1970), but these effects were not quantitated, so that comparison with the present results is not possible.
Nutritional Factors: Ascorbic Acid
Ascorbic acid is required for the full hydroxylation of proline and lysine residues during collagen synthesis and secretion in culture (Levenson, 1969; Peterkofsky, 1972 a). In addition, a lack of ascorbate can reduce the overall rate of collagen production (Peterkofsky, 1972 b). Ascorbic acid also stimulates steroid synthesis in adrenal cells in culture (Sato and Buonassisi, 1964).
It was apparent in these and other studies (Peterkofsky, 1972 a; Mohlberg and Johnson, 1963) that ascorbic acid added to culture media is very unstable. These experiments confirmed this instability in L-15. Reduced ascorbate decayed to undetectable levels in about 2 h. This information was then applied in studies of catecholamine synthesis in the cultures. Since the enzyme dopamine-β-hydroxylase utilizes ascorbate to convert DA to NE (Friedman and Kaufman, 1965), it was natural to ask whether the neurons in culture would be affected by frequent additions of fresh ascorbic acid. Neuroblastoma cells, for example, contain substantial amounts of dopamine-β-hydroxylase activity (Anagnoste et al., 1972) yet fail to accumulate NE from their intracellular DA (Schubert et al., 1969). The striking change in the NE/DA ratio brought about by the addition of fresh ascorbate to the SCG cultures (Table III) emphasizes this nutritional requirement of sympathetic neurons. These results raise the possibility of using the ascorbate effect to bring the neurotransmitter content of the neurons under experimental control during further electrophysiological and biochemical investigations.
Since ascorbic acid is so unstable, it is not obvious how the neurons can synthesize any NE at all after several weeks in culture (Table III B) without addition of fresh ascorbate. This may be similar to the question why hydroxyproline is found in collagen produced by cells cultured without ascorbic acid (Peterkofsky, 1972 a; Woessner and Gould, 1957). Possibly the cells are capable of utilizing another molecule as a reduction cofactor, or of storing small amounts of ascorbate for a long time. In the case of dopamine-β-hydroxylase, the study of Levin and Kaufman (1961) indicates that dopamine itself may provide reducing power under certain circumstances.
Catecholamine Metabolism: Time Dependence
The time course of labeling the catecholamine pools was more complicated than seen for tyrosine. There was a difference in labeling patterns between young (7-8-day) and older (13-14-day) cultures as well as a difference between DA and NE. The DA pool labeled very quickly and the level remained constant throughout the 28-h incubation period. The labeling of the DA pool showed the same form for both young and older cultures (Fig. 4 and text). However, not only was the rate of labeling of the DA pool faster in the older cultures, but the extent or saturation level was greater as well. The half-times of disappearance of radioactive DA (Fig. 6 and Table IV) were consistent with the rates of labeling the pool; the rate of disappearance of the labeled DA in the older cultures was about twice as fast as in younger cultures. These half-times of filling and chasing appear to give the steady state rates of synthesis and breakdown of the DA pool. The failure of DA to accumulate progressively is consistent with a role as a metabolic intermediate in the biosynthetic pathway for NE. However, the labeling patterns reported here may also be explained by the existence of DA-containing cells which are either a small minority in the cultures, or are exhibiting abnormally rapid DA turnover (Bjorklund et al., 1970). Small intensely fluorescent cells have a very stable catecholamine pool in vivo (Norberg et al., 1966) and may contain DA (Bjorklund et al., 1970) or NE (Eranko and Eranko, 1971). Small intensely fluorescent cells have not been detected morphologically in these cultures.
The time course of NE labeling depended on the extent of pretreatment of the cultures with ascorbic acid (Fig. 5), which may mean that labeling depends on the size of the endogenous NE pool at the start of an incubation with radioactive tyrosine. Without ascorbate pretreatment (but incubating in the presence of it), label appeared as NE quickly and the curve continued to rise rapidly for at least 28 h (Fig. 4; Fig. 5 B, open circles). On the other hand, when cultures were fed fresh ascorbate extensively before the incubation, the NE labeling began as before but the rate decreased gradually (Fig. 5 B, closed circles). This difference in labeling pattern is explained by the hypothesis that the ascorbate pretreatment resulted in a large NE pool (unlabeled) before the incubation with radioactive tyrosine. One consequence of a large pool could be that the number of vesicles available for storage of newly made NE would be decreased; thus the initial rate of synthesis of [3H]NE could be the same in the two cases (representing soluble or not yet vesicularized NE), but the ongoing rate (requiring storage of NE) would be decreased. Tyrosine hydroxylase may also be feedback inhibited by NE (Nagatsu et al., 1972). Investigation of this hypothesis awaits determination of the total NE content under these various conditions.
While the synthesis of NE was markedly altered by ascorbic acid pretreatment, the rate at which labeled NE was chased was not affected by prior ascorbic acid additions (Table IV). Rather, the rate of chase of labeled NE was markedly dependent on the age of the culture. As was the case with DA, NE labeled faster and chased faster in older cultures (Table IV and Figs. 6 and 7). Other experiments (unpublished) indicate that 3- and 4-wk old cultures have the same turnover rates as 2-wk cultures. The lack of effect of ascorbate pretreatment on the half-time of NE chases suggests the possibility that the NE breakdown rate is independent of the size of the stores. Direct tests of these possibilities depend on analysis of the total NE content of the cultures. It is hoped that future work, for example involving subcellular fractionation to study the various pools of catecholamine (for reviews, see Hall, 1972; Molinoff and Axelrod, 1971), will clarify the relationship between the rates of synthesis, accumulation, and disappearance reported here.
The rate of disappearance of NE (t1/2 = 1.1 h for 2-wk old cultures) was rapid in comparison to most such rates found in vivo, though the results in vivo are quite dependent on the experimental approach, the source of neurons, and the amount of electrical activity the neurons experience (Costa et al., 1972; Spector et al., 1972; Hedqvist and Stjarne, 1969; Sedvall et al., 1968; and Gutman and Weil-Malherbe, 1966). Different techniques have led to widely variant values for the turnover time of NE (in the vasculature and heart) such as 1-3 h and 10-15 h. Lack of electrical activity in endings can reduce the NE turnover rate from 10- to 70-fold. However, the consensus of in vivo studies at this time appears to give half-times of NE turnover in peripheral sympathetic endings of 10-15 h, and in the central nervous system of 2-3 h. Sympathetic ganglia give an even shorter half-time, 1-1.5 h (Brodie et al., 1966; Bhatnagar and Moore, 1971).
Thus, there is some question as to which of these values is the most appropriate comparison for these cultures. Since the cultures contain both cell bodies and fine processes, perhaps the brain is the closest model, though the neurons are obviously of different origins. The SCG (containing primarily cell bodies) may not be the proper model, since sympathetic endings contain the vast majority of neuronal NE in vivo (Dahlstrom and Haggendal, 1966). However, the level of spontaneous electrical activity in the culture (which is unknown at present) would determine how much the endings contribute to the overall NE turnover in culture. There is also the possibility that in these low density cultures of dissociated neurons the reuptake of neurotransmitter released by spontaneous activity is very poor due to the large extracellular space; in vivo, inhibition of NE reuptake during electrical stimulation can increase the NE chase rate by as much as tenfold (Hedqvist and Stjarne, 1969). It may be that the presence of target or other non-neural cells is required for effective reuptake by the neurons. Finally, immature sympathetic endings in vivo have a half-time for NE of about 1 h (Iversen et al., 1967), and it may be that the cultures will mature further with respect to catecholamine turnover beyond the 3-wk period examined here. Thus much more information is needed before the chase rates presented in this paper can be properly evaluated.
The higher rates of synthesis, disappearance, and levels of accumulation of catecholamines in the older cultures (which contain the same number of neurons as young cultures, Mains and Patterson, 1973 a) may be of interest in terms of the development of sympathetic neurons. The following paper (Mains and Patterson, 1973 b) considers these and other changes in more detail.
FIGURE 1 1-, 2-, and 3-wk old cultures were incubated for 8 h with radioactive tyrosine at various concentrations (as indicated). The radioactivity in free tyrosine (A), total accumulation (B), and in catecholamine (C) was determined as described in Materials and Methods. The data from cultures of three ages were pooled and the small numbers near the bars are the number of cultures analyzed at each concentration. The bars represent the SEM (in all figures).
FIGURE 2 8-day old cultures were incubated for varying times (as indicated) with 30 µM radioactive tyrosine. The open circles represent the radioactivity in free tyrosine and the closed circles give the total radioactivity accumulated at each time point.
FIGURE 5 Cultures were grown for 2 wk and then incubated with radioactive tyrosine for the indicated times. One set of cultures was grown without ascorbate additions (indicated by the open circles) and one set received twice daily additions of fresh ascorbic acid for the 4 days before the incubations (indicated by the closed circles). All cultures received fresh ascorbate additions every 2-3 h during the incubations. Each point is the mean of triplicate determinations.
TABLE I
Effects of Exogenous NE and Exogenous Pteridine Cofactors on CA Synthesis and Accumulation. All data presented as the mean of triplicate determinations; all incubations were for 8 h in 30 µM [3H]tyrosine; SEM was within ±20% in all cases for which error bounds are not shown. Total cpm = the total radioactivity accumulated in a culture in 8 h. Exp. A: Grown 8 days in folate-minus medium which was supplemented at every feeding with 1 mg/liter fresh folate; grown and incubated in presence or absence of DMPH4, as indicated. Exp. B: Same as Exp. A, except 15 days old. Exp. C: Grown 14 days without exogenous pteridine; various levels of NE and BH4 introduced during incubation. Exp. D: Grown 14 days in 0.5 mg/liter DMPH4; various levels of DMPH4 and NE introduced during the incubation.
TABLE IV Chase Experiments on 1- and 2-Wk Old Cultures

(Table note) Variations in the chase conditions were found not to change the rates of disappearance of radioactive CA's. The tyrosine concentration in the chase medium could have affected the rate of synthesis and thereby possibly the turnover rates. | 2014-10-01T00:00:00.000Z | 1973-11-01T00:00:00.000 | {
"year": 1973,
"sha1": "d92502c753e9e3e6a482f91982d95613cab7984d",
"oa_license": "CCBYNCSA",
"oa_url": "https://rupress.org/jcb/article-pdf/59/2/346/1386560/346.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "d92502c753e9e3e6a482f91982d95613cab7984d",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
10242417 | pes2o/s2orc | v3-fos-license | Measures of critical exponents in the four dimensional site percolation
Using finite-size scaling methods we measure the thermal and magnetic exponents of the site percolation in four dimensions, obtaining a value for the anomalous dimension very different from the results found in the literature. We also obtain the leading corrections-to-scaling exponent and, with great accuracy, the critical density.
Introduction
From the point of view of its definition, the simplest statistical system is perhaps the percolation. In the case of the site percolation, we fill the sites of a given lattice with probability p. Then we construct the clusters as sets of contiguous filled sites.
The critical properties of the system can be described in terms of the clusters. For instance, at the critical percolation the mean cluster size diverges. We define the percolating cluster as the one that contains, in the thermodynamical limit, an infinite number of sites. The strength of this cluster (i.e. the probability of containing an arbitrary point) is the order parameter of the transition: it is zero for p < p c , and finite for p > p c [1].
Another interesting model is the bond percolation. In this case we fill the lattice bonds with a given probability and construct clusters analogously. It is believed that both models belong to the same Universality Class (share the critical exponents).
It is possible to relate the percolation problem (in the bond version) with the q-states Potts model using the "Fortuin-Kasteleyn" representation of the latter. The bond percolation is obtained in the q → 1 limit [2].
Moreover it is possible to write down a field theoretical description of the percolation. In general, the Potts model is described by means of a φ³ theory, where the coefficient of the cubic term is proportional to q − 2. For the Ising model (q = 2) this term vanishes, and the leading term is φ⁴, recovering the standard field theory representation. For q ≠ 2 we can write

S = \int d^d x \left[ \frac{1}{2} \sum_i (\partial \phi_i)^2 + \frac{r}{2} \sum_i \phi_i^2 + \frac{g}{6} \sum_{i,j,k} d_{ijk}\, \phi_i \phi_j \phi_k \right] ,   (1)

where the coefficients d_{ijk} depend on the model (Potts, percolation, Lee-Yang singularities, etc.), and n ≡ q − 1 is the number of components of the field φ_i. Thus, the percolation is described by the action (1) in the limit of zero components of the fields. Using the standard tools it is possible to obtain an ǫ-expansion for this model (and in particular for the percolation). The power counting tells us that the upper critical dimension of the model is six and thereby the expansion parameter is ǫ = 6 − d. Results up to three loops can be found in the literature [3].
For large dimensions (d = 5, and, of course, 6) there is a good agreement between the results obtained from the ǫ-expansion (resummed using Padé techniques), the values from numerical simulations, and the results from high temperature expansions.
In lower dimensions, the results disagree for the anomalous dimension, η. The ǫ-expansion predicts a clear negative value, while in the two dimensional case η should be non-negative because the correlation function is decreasing with the distance. In fact, in this case, it has been conjectured [4] that η = 5/24.
In this paper we will show that the value of the four dimensional η exponent turns out to differ by about 30% from the ǫ-expansion prediction. It thus remains an open problem to understand why the convergence of the ǫ-expansion for this model is so poor even for small values of ǫ [5]. In order to calculate critical exponents we extend some recently developed accurate finite-size scaling techniques [6] to site percolation. As a benchmark we report the two dimensional critical exponents (for which there are almost exact analytical estimates).
A model related to the site percolation is the diluted Ising model [7]. It is defined as a standard Ising model where the spins live only on filled (with probability p) sites. The field theoretical description of this model is a φ⁴-theory with a random mass term. Using the replica trick it can be related with an O(N) symmetric φ⁴ theory with cubic anisotropy, in the limit of zero field components (i.e. N → 0) [8,9].
The limit of zero temperature (large β) of the diluted Ising model is the site percolation, while for p → 1 it is the pure Ising model. A precise determination of the critical exponents of the d = 4 percolation is also a very useful first step to understanding the phase diagram (β, p) of the diluted Ising model. On the other hand, the site percolation is useful as a benchmark to develop and test different tools to apply to more complicated systems such as the d = 4 diluted Ising model [10].
Finally, we remark that we are specially interested in these four dimensional models in relation with the triviality issue (is there an interacting continuum limit in four dimensions?). In order to solve the triviality problem it is crucial to characterize all the possible fixed points in four dimensions. The site percolation has the unusual feature of having its critical dimension at d = 6, and thus it does not present the usual Mean Field exponents at d = 4.
Numerical Methods
We will work in a hypercubic lattice of linear size L with periodic boundary conditions. The Monte Carlo (MC) procedure for generating configurations in this model is straightforward: we fill each lattice site with probability p. The next step is to build the clusters, which is a deterministic procedure. To save computer memory in the larger lattices, we use a self-recurrent algorithm (in C language). In this way the total memory employed to sketch the clusters is almost negligible (it grows nearly as the lattice size squared).
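As an illustration of the same deterministic step (in Python rather than the authors' recursive C routine, so this is a sketch and not their code), the following fills an L^d periodic lattice with probability p and labels the clusters of occupied sites with a union-find pass over the nearest-neighbor bonds:

```python
import numpy as np

def find(parent, i):
    # Path-compressing find for union-find cluster labeling.
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def cluster_sizes(L, d, p, rng):
    """Fill an L^d periodic lattice with probability p and return
    the sizes n_c of the clusters of occupied sites."""
    V = L ** d
    occ = rng.random(V) < p
    parent = np.arange(V)
    coords = np.indices((L,) * d).reshape(d, V)
    for mu in range(d):
        shifted = coords.copy()
        shifted[mu] = (shifted[mu] + 1) % L          # periodic neighbor along axis mu
        nbr = np.ravel_multi_index(shifted, (L,) * d)
        for i in np.flatnonzero(occ):
            j = nbr[i]
            if occ[j]:
                ri, rj = find(parent, i), find(parent, j)
                if ri != rj:
                    parent[rj] = ri                  # merge the two clusters
    roots = np.array([find(parent, i) for i in np.flatnonzero(occ)])
    return np.unique(roots, return_counts=True)[1]   # n_c for each cluster

# e.g. cluster_sizes(8, 4, 0.1969, np.random.default_rng(0))
```

The returned cluster sizes n_c are exactly what enters the improved estimators defined next.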
Due to the absence of MC dynamics, the system is specially vulnerable to eventual pathologies of the random number generator. We have observed significant deviations in some quantities for a commonly used shift register generator [11], specially in the larger lattices. To avoid these effects, we have used as generator the sum (modulus 1) of the output of the generator of ref. [11] and a congruential one, since it is known that their respective drawbacks are very different. 1 To define the observables that we measure, it is useful to consider a related model that is a diluted Ising model with nearest neighbors infinite coupling, where the spins, σ i = ±1, live only in filled sites. It is easy to show that the magnetization of the latter model, V being the volume, coincides with the strength of the percolating cluster in the thermodynamical limit and at T = 0. Knowing the size of the clusters, as their spins must take the same sign, we can write where s c is the sign of the cluster c, n c its size, and the sum runs over all clusters. As s c are statistically independent, we can construct an improved estimator for even powers of M (the only non-vanishing in a finite lattice) averaging over all possible values of {s c }, that henceforth we will denote as (· · ·). For the second power we have We define the susceptibility as To compute the Binder parameter V M we can construct an improved estimator for the fourth power of the magnetization. Averaging over signs, we obtain after some algebra from which 2 For the finite-size scaling (FSS) method that we employ, it is very useful an accurate measure of the correlation length. We have used the second momentum definition [13] in the associated Ising model, that, in a finite lattice, reads where F is defined in terms of the Fourier transform of the magnetization as It is also possible to construct an improved estimator for | M| 2 as To measure the critical exponents we use a form of the FSS ansatz that only involves measures on a finite lattice. For an operator O that diverges as (p − p c ) −x O , its mean value in a size L lattice can be written, in the critical region, as where F O is a scaling function and ω is the universal leading corrections-toscaling exponent. From a Renormalization Group point of view, ω corresponds to the leading irrelevant operator. We can eliminate the unknown scaling function using the values from two different lattice sizes measuring at a p value where the ξ/L quotients match. Specifically, defining we can write Other examples of application of this method can be found in refs. [6]. The form of the scaling corrections allows to parameterize the finite-size effect on the determination of the critical exponents as To compute the ω exponent, we can use equation (12) for an operator with x O = 0 (as, for instance, V M or ξ/L) obtaining for the shift of the crossing point of lattice sizes L and sL [14] ∆p To efficiently use the FSS formulas, it is necessary to use a reweighting method to move in the critical region. For this model there is not a Boltzmann weight, but the role of the energy is carried out by the density of the configuration, and the probability distribution is binomial.
The probability of finding a density q when filling sites with a probability p is

P_p(q) = \binom{V}{qV} p^{qV} (1-p)^{(1-q)V} .   (17)

A sample generated at probability p can therefore be reweighted to a nearby value p' through the weight w = (p'/p)^{qV} [(1-p')/(1-p)]^{(1-q)V}, so that

\langle O \rangle_{p'} = \frac{\langle O \, w \rangle_p}{\langle w \rangle_p} .   (18)

Using equation (18), p-derivatives of observables can also be computed.
Obviously we cannot extrapolate much further than √(p(1 − p)/V), which is the dispersion of the distribution (17). Therefore the visible region decreases as L^{−d/2}. Fortunately, this is enough for our purposes, since to use eq. (14) we need to move in a neighborhood of the critical point whose size decreases as L^{−ω−1/ν} (≈ L^{−2.5}).
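A sketch of the reweighting step of eq. (18), with logarithms used to keep the binomial weights numerically stable (variable names are mine; O and N would be per-sample observable values and filled-site counts):

```python
import numpy as np

def reweight(O, N, V, p, p_new):
    """Extrapolate the sample average of O from p to a nearby p_new."""
    O, N = np.asarray(O, float), np.asarray(N, float)
    logw = (N * (np.log(p_new) - np.log(p))
            + (V - N) * (np.log1p(-p_new) - np.log1p(-p)))
    logw -= logw.max()                      # avoid overflow in the exponentials
    w = np.exp(logw)
    return np.sum(w * O) / np.sum(w)
```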
Numerical Results
We have produced a million independent samples for each L^4 lattice, with L = 8, 12, 16, 24, 32 and 48.
To measure the thermal critical exponent we have used as operators d log χ/dp (x_{d log χ/dp} = 1) and dξ/dp (x_{dξ/dp} = 1 + ν). For the magnetic exponents we have used the susceptibility χ (x_χ = γ). We remark that, although χ is a fast varying function of p in the critical region (see refs. [6]), the use of eq. (14) allows a very precise measure. Moreover, as what we directly measure is the quotient γ/ν = 2 − η, we can obtain a very accurate determination of the anomalous dimension η.
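In practice the quotient method of eq. (14) is just a ratio of expectation values at the matching point; for instance, with the susceptibility one would estimate 2 − η directly (a schematic sketch with hypothetical quotient values, assuming the matching of ξ/L has already been done via reweighting):

```python
import numpy as np

def exponent_over_nu(O_L, O_sL, s=2.0):
    """x_O/nu from the quotient Q_O = <O(sL)>/<O(L)> at matching xi/L."""
    return np.log(O_sL / O_L) / np.log(s)

# With O = chi (x_chi = gamma), eta = 2 - gamma/nu follows directly.
eta = 2.0 - exponent_over_nu(O_L=120.0, O_sL=430.0)   # hypothetical values
print(eta)
```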
We have checked the method in the d = 2 case, where there is a very solid conjecture [4] for the values of the critical exponents, which is confirmed by conformal group analysis. We present the measured critical exponents for the two dimensional site percolation in table 1, obtained from a million samples for each lattice size. The conjectured values by Nienhuis [4] are η = 5/24 = 0.20833..., ν = 4/3 and ω = 2. The agreement is very good.
In the four dimensional case (see table 2), we observe a very stable value for the ν exponent when using the operator dξ/dp. However, the results for the exponents η or ν computed from measures of other operators do need an infinite volume extrapolation, which will be considered next.
To measure the critical density and the corrections-to-scaling exponent ω, we have studied the crossing points of V_M and ξ/L for different pairs of lattice sizes, fitting the displacements to the functional form (16). As the corrections-to-scaling behavior of V_M is very different from that of ξ/L, we obtain a great improvement by performing a joint fit.

Table 1: Estimates for the critical exponents of two-dimensional site percolation, obtained from the finite-size scaling analysis using data from lattice sizes L and 2L. The header shows the operator used for each column; the last row essentially reproduces the conjectured values.

  L     ν (dξ/dp)    ν (d log χ/dp)    η (χ)
  24    1.324(9)     1.326(14)         0.2155(5)
  32    1.330(8)     1.30(2)           0.2121(4)
  48    1.344(10)    1.36(2)           0.2085(4)
  64    1.330(9)     1.36(2)           0.2082(4)
We show in Figure 1 the crossing points of V_M and ξ/L as a function of L^{−(ω+1/ν)}, where we have used ν = 0.689 and ω = 1.13.
We fix the lattice-size ratio to s = 2 and perform the fit twice, for L ≥ 8 and for L ≥ 12. In both cases we obtain compatible values for the ω exponent and for the critical density, and the fits are acceptable (for example, χ²/d.o.f. = 4.7/4 for the former). We quote the central values from the former fit with the error bars from the latter: ω = 1.13(10), p_c(∞) = 0.196901(5).
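The joint fit described above can be sketched as follows (variable and function names are ours; a real fit would also weight the residuals by the statistical errors of the crossing points):

```python
import numpy as np
from scipy.optimize import least_squares

def fit_crossings(L, p_cross_binder, p_cross_xi, nu=0.689):
    """Joint fit of the V_M and xi/L crossing points of lattice pairs
    (L, 2L) to p(L) = p_c + A * L**-(omega + 1/nu), cf. Eq. (16), sharing
    p_c and omega between the two observables but not the amplitude A."""
    L = np.asarray(L, dtype=float)

    def residuals(theta):
        p_c, omega, a_b, a_x = theta
        decay = L ** -(omega + 1.0 / nu)
        return np.concatenate([p_cross_binder - (p_c + a_b * decay),
                               p_cross_xi - (p_c + a_x * decay)])

    fit = least_squares(residuals, x0=[0.197, 1.0, 0.01, -0.01])
    return fit.x[0], fit.x[1]              # (p_c, omega)
```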
Using these values, we can obtain an infinite-volume extrapolation for the critical exponents by means of (15). To check that higher-order scaling corrections can be neglected, we use an objective criterion: we perform the fit with data from lattices of sizes L ≥ L_min and then repeat it discarding the smallest lattices. If the parameters of both fits (extrapolated value and slope) are compatible, we keep the central values from the former fit and the error bars from the latter. We have found that L_min = 8 is sufficient for our data.
The results are displayed in the last row of Table 2. For η, the first term in the error has been obtained with ω held fixed, and the second one corresponds to the variation when ω moves within its error bars. In Figure 2 we show the behavior of η(L, 2L) as a function of L^{−ω}, with ω = 1.13, together with the extrapolated value and the numerical value found in the literature, η = −0.12.
At this point we can compare with the results of ref. [3], obtained with the ε-expansion. We are especially interested in the corrections-to-scaling exponent, that is, the derivative of the β-function at the nontrivial fixed point (g* ≠ 0). Using the β-function and the nontrivial fixed point from ref. [3], we have obtained the ε-expansion estimate of ω reported in Table 3.
We show our results for ω in Table 3. We also display in this table the results for the exponents ν, γ, and η calculated in ref. [3] using the [2,1]-Padé-Borel resummation.
In other cases with ε = 2, good agreement has been found between resummed series and numerical results. For the two-dimensional Ising model the differences in η and ω are 1% and 5–35%, respectively (taking into account the error bars of the ε-expansion estimate of ω) [15]. Here, however, we have obtained a discrepancy of 30% in the anomalous dimension and of 50% in the ω exponent. Linking this discrepancy with the behavior of η with the dimension, reported in the introduction, we find the ε-expansion untrustworthy in this case.
Conclusions
Using FSS techniques we have obtained accurate values for the critical exponents of four-dimensional site percolation. We have been able to parameterize the leading corrections to scaling, which allows us to largely reduce the systematic errors coming from finite-size effects.
We have obtained an anomalous dimension that differs by 30% from previous numerical and analytical (ε-expansion) estimates.
We plan to extend these methods to the diluted Ising model in four dimensions, in order to study the possible variation of the critical exponents along the critical line [10]. | 2014-10-01T00:00:00.000Z | 1996-12-20T00:00:00.000 | {
"year": 1996,
"sha1": "bd47e5b3834e961c7825fbf00710cb3e41701d9b",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-lat/9612024",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "9a2b74ed3d1aa261b51e68aabaf4cbf36848b9ef",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
21067911 | pes2o/s2orc | v3-fos-license | Bioactive Peptides in Animal Food Products
Proteins of animal origin represent physiologically active components in the human diet; they exert a direct action or constitute a substrate for enzymatic hydrolysis upon food processing and consumption. Bioactive peptides may descend from hydrolysis by digestive enzymes, by enzymes endogenous to raw food materials, and by enzymes from microorganisms added during food processing. Milk proteins have different polymorphisms for each dairy species that influence the amount and the biochemical characteristics (e.g., amino acid chain, phosphorylation, and glycosylation) of the protein. Milk from species alternative to the cow has been exploited for its role in children with cow milk allergy and in some infant pathologies, such as epilepsy, by monitoring the immune status. Different mechanisms concur in the generation of bioactive peptides from meat and meat products, and their functionality and application as functional ingredients have proven effects on consumer health. Animal food proteins are currently the main source of a range of biologically active peptides which have gained special interest because they may also influence numerous physiological responses in the organism. The addition of probiotics to animal food products represents a strategy for increasing molecules with health and functional properties.
Introduction
The general consensus on the impact of lifestyle on human health considers diet a crucial factor for human health status. Proteins of animal origin have been recognized for their nutritional properties as an essential source of amino acids upon digestion, but both digestion and industrial processing may liberate peptides from the parent protein which have biological functions. Animal food products, particularly dairy foods, are characterized by genetic polymorphisms of the main proteins that impact protein hydrolysis during food processing prior to consumption and digestion in the human organism. Biologically active peptides can be produced from milk proteins through different pathways involving milk secretion, milk storage, milk processing, and milk digestion, due to enzymatic hydrolysis by indigenous enzymes, digestive enzymes, and microbial enzymes from starter and non-starter cultures. The integrity and structure of meat proteins undergo changes during rigor mortis, the resolution of rigor mortis, and long-term frozen storage. In particular, a large number of peptides showing important physiological activities are released during meat processing. Dietary supplements allow the delivery of beneficial molecules in dosages that exceed those obtained from conventional food products. However, great interest has been directed at the bioactive components naturally contained in foods which have an impact on biological processes. Bioactive components in foods are dietary elements that impart a measurable biological effect and affect health in a beneficial way, with activities such as immune-modulating, antihypertensive, osteoprotective, and antilipemic effects.

Biologically active peptides can be produced from milk proteins through different pathways involving the action of indigenous enzymes, digestive enzymes, and microbial enzymes from starter and non-starter cultures, acting during milk secretion, milk storage, milk processing, and milk digestion. Proteolytic activity in fresh raw milk is attributed to indigenous and microbial enzymes. Among the indigenous enzymes, milk contains at least two main proteinase systems, the plasmin-plasminogen system and lysosomal enzymes, as well as possibly other proteolytic enzymes. Plasmin is the principal proteolytic enzyme in raw milk and is associated with casein micelles. The second proteinase in milk is cathepsin D, whose activity is significantly correlated with the somatic cell count; somatic cells contain several proteinases, including cathepsins B, L, and G, and elastase [15]. The principal indigenous proteolytic enzymes have been investigated and characterized in ovine and caprine milk [16][17][18][19][20]. Some bioactive peptides found in milk and dairy products, and their functionality, are reported in Table 1. Indigenous enzymes play a role in the liberation of bioactive peptides during milk secretion and storage. A great number of peptides were found in goat milk incubated for up to seven days without protease inhibitors; plasmin was shown to play a major role in the hydrolysis of casein, and high numbers of peptides derived from the hydrolysis of β-casein. Almost 90% of the peptides identified shared structural homology with previously described bioactive peptides in caprine and bovine milk and dairy products, showing encrypted sequences of bioactive peptides able to exert ACE-inhibitory activity [21,22], antihypertensive activity [22,23], and antioxidant activity [24].
In sheep milk, several peptides with functional activity were found, deriving from the action of peptidases of different origins on casein fractions. At least three ACE-inhibitory peptides were liberated from αs1- and αs2-caseins by a purified proteinase of Lb. helveticus [26], and antihypertensive and antioxidant peptides were found in ovine sodium caseinate incubated with Bacillus sp. P7 [35]. Four antibacterial peptides were identified from a pepsin hydrolysate of ovine αs2-casein [27], corresponding to αs2-casein fragments f(165-170), f(165-181), f(184-208), and f(203-208), the first being most effective against Gram-negative bacteria. The peptide corresponding to ovine αs2-casein f(203-208) is a good example of a multifunctional peptide, because it exhibited not only antimicrobial activity but also potent antihypertensive and antioxidant activity [36].
The most common way to produce bioactive peptides is through enzymatic hydrolysis of whole protein molecules: digestive enzymes and different combinations of proteinases, including alcalase, chymotrypsin, pancreatin, pepsin, and thermolysin, have been utilized to generate bioactive peptides from various proteins [37]. Ingested proteins undergo different stages of gastrointestinal hydrolysis in the stomach and intestinal lumen due to proteinases such as pepsin, trypsin, and chymotrypsin. Finally, these peptides are further digested by brush-border peptidases at the surface of intestinal epithelial cells to produce amino acids and oligopeptides able to undergo absorption. For example, β-casomorphins and casein-derived phosphopeptides (CPPs) are produced in vivo during digestion of dairy products, including milk, fermented milk, cheese, and yogurt [38]. The quantity of peptides released upon digestion is hardly predictable and, consequently, so are the beneficial effects on human health. Peptide bioavailability depends on the resistance of the peptide to hydrolysis in the gastrointestinal tract and serum and on its ability to be absorbed across the intestinal epithelium [39]. However, some authors report that the potential yield of bioactive peptides during the digestion of the major dairy proteins is relatively high. Meisel and Fitzgerald [40] estimated that the theoretical yield of opioid peptides encrypted in milk proteins ranges between 2% and 6%.
Occurrence of Bioactive Peptides in Dairy Products
The ripening process in cheese encompasses several biochemical pathways involving proteolytic, lipolytic, and glycolytic processes. Many dairy cultures are highly proteolytic, leading to bioactive peptide accumulation in ripened dairy products. Depending on the type of dairy product, the level of peptides naturally formed in the matrix varies along with the equilibrium between liberation and further hydrolysis during ripening. Bioactive peptides have been characterized in a wide variety of dairy products, distinguished on the basis of ripening time (fresh, short-, and long-ripened cheese) and of technological process (fermented cheese, pasta filata cheese, and cooked cheese).
In long-ripened Gruyère de Comté and Cheddar cheese, CPPs occurred naturally due to the primary action of chymosin and plasmin and further hydrolysis by endopeptidases from non-starter lactic acid bacteria [29,41]. Higher maximum ACE-inhibitory activities were found in Gouda cheese ripened for three months than in short- and long-ripened cheese. On the contrary, in Manchego cheese, made from ovine milk, the ACE-inhibitory activity showed a different and complex evolution along with ripening time, decreasing in the first four months, subsequently increasing, and then decreasing again in twelve-month cheese [30]. In Emmental cheese, different bioactivities were detected (mineral-carrying, antimicrobial, antihypertensive, and immunostimulatory) due to both the action of plasmin and cathepsin D and the proteinases associated with the microbial starter [32]. The sequence RPKHPIK, found in Cheddar cheese, was also found in Festivo and Iberian ovine cheeses [42][43][44], including when the cheeses were subjected to a hydrolysis process simulating gastric digestion, and showed antimicrobial activity. The sequence RPKHPIKHQ was found in a water-soluble peptide preparation isolated from Gouda ripened for eight months, showing potent antihypertensive activity when tested in spontaneously hypertensive rats [33]. Furthermore, the fragment 1-23 of αs1-CN, known as isracidin, originated from the proteolytic activity of chymosin and exerted antimicrobial activity against several microorganisms [45]. The sequence PQEVLNENLLRF was referenced by Minkiewicz et al. [28] as an immunomodulating and antimicrobial peptide sequence in the primary structure of αs1-CN freed by chymosin activity. Antimicrobial peptides were also isolated from Mozzarella, Italico, Crescenza, and Gorgonzola cheeses [34], with a specific inhibitory action towards an endopeptidase from Pseudomonas fluorescens, a microorganism responsible for the impairment of technological and organoleptic features of dairy products. Fermented milks are a source of bioactive peptides with anticariogenic, antihypertensive, mineral-binding, and stress-relieving activities, due to the action of probiotic strains such as Lb. casei, Lb. helveticus, and S. cerevisiae [46][47][48].
The development of probiotic cheeses has involved Cheddar cheese [42,[49][50][51][52][53][54], Gouda cheese [55], Cottage cheese [56], Pategrás cheese [57], Crescenza cheese [58], Minas fresh cheese [59,60], and Turkish white cheese [61]. Few studies have been conducted on the production of functional cheeses made from ewe milk; the first research was performed on PDO Canestrato Pugliese cheese using B. bifidum and B. longum as starter adjuncts [62]. Probiotics added to cheese yield a wide spectrum of enzymes able to influence the biochemical events involving the protein and lipid fractions in cheese during ripening. These events have an impact on the development of the texture, flavor, and healthful components of cheese. The use of lamb rennet paste containing probiotics is a suitable strategy for innovation in traditional ovine cheese without modification of the production procedures [19,[63][64][65][66][67]. This could provide a spin-off for the health properties of cheese and for its ripening features, such as an acceleration of the ripening process, with economic advantages for producers. Using starter cultures together with L. acidophilus and Bifidobacteria spp. produced ACE-inhibitory peptides in Festivo cheese [43] and Manchego cheese [31]; peptides with antimicrobial activity were found in Cottage cheese produced with Bifidobacteria [68]. In functional Scamorza cheese made from ovine milk containing a mix of B. longum, B. lactis, and L. acidophilus, specific peptides deriving from microbial enzymes were found at fifteen days of ripening. Several fragments were identified which shared structural homology with previously reported peptides with ACE-inhibitory, antimicrobial, antihypertensive, and immunomodulating activity. Specific peptides deriving from microbial enzymes may be regarded as tracing fragments and may represent a tool to verify the presence and activity of probiotic cultures in cheese. In functional Scamorza cheese, fragments were identified deriving from β-galactosidase and from an endonuclease associated with B. longum, or from enzymes yielded by Lactobacillus acidophilus.
Bioactive Peptide Generation
Due to the presence of high-quality proteins, meat is the most investigated source for the isolation of novel bioactive peptides. Different mechanisms concur in bioactive peptide generation from meat and meat products (Table 2). During post-mortem aging of meat, the proteolytic activity of endogenous enzymes (calpains and cathepsins) is a key process that drives the destructuration of proteins and, consequently, the production and release of a large number of peptides and free amino acids [69,70]. Bauchart et al. [71], in a study on aged beef, found an increase of bioactive peptides in meat after 14 days of post-mortem storage compared with fresh meat. In a recent study, Fu et al. [72] also demonstrated that post-mortem aging can generate bioactive peptides of about 3 kDa in the longissimus dorsi and semitendinosus muscles after 20 days of extensive proteolysis. During post-mortem meat storage the generation of peptides may also be driven by oxidation processes [73]. The oxidative status can regulate endogenous enzymatic activity and, consequently, myofibrillar and sarcoplasmic protein degradation [74]. Changes in temperature and pH can affect the content of bioactive peptides during meat storage, due to variation in the activity of endogenous enzymes and the destruction of pH- or heat-sensitive amino acids [75,76].
It is known that bioactive peptides are generated naturally in mammals within the gastrointestinal tract during the metabolism of dietary meat proteins [77,78]. During gastrointestinal proteolysis, ingested meat-derived proteins are attacked by stomach-secreted digestive enzymes, such as pepsin, followed by trypsin, chymotrypsin, elastase, and carboxypeptidase secreted in the small intestine, with the consequent generation of biologically active peptides [79]. For this reason, in order to generate potentially functional peptides from meat products, the gastrointestinal digestive system has been simulated to generate peptides similar to those released in physiological digestion. The process that simulates gastrointestinal digestion is based on enzymatic hydrolysis using different commercial exogenous proteinases obtained from animal tissues (pepsin and trypsin), plants (papain, ficin, and bromelain), and microbial sources (alcalase®, flavourzyme®, neutrase®, collagenase, or proteinase K) [79][80][81]. Enzymatic hydrolysis is a widespread method selected by the food and pharmaceutical industries to produce bioactive peptides. In addition to meat itself, several bioactive peptides have been obtained through enzymatic hydrolysis from meat collagen or slaughter by-products (trimmings, organs, hemoglobin), as reported in many studies [73,82]. Other processes, such as freezing and cooking, can affect the isolation and availability of bioactive peptides from meat. Freezing can denature proteins through different chemical and physical stress mechanisms, including ice formation, pH variations, and cold temperature [99], leading to an increase of bioactive peptides. Cooking can affect the generation of peptides and their related bioactivities [72,76] due to changes in the native conformation (denaturation) and the rupture of intramolecular forces of proteins caused by heat [100].
A number of bioactive peptides have also been shown to be released from meat products during curing or ripening [101]. The proteolytic degradation that occurs during the ripening of dry-cured ham or during the fermentation of sausages, which is also responsible for flavor and texture, leads to the production of small peptides and free amino acids [83,102]. In fermented meat products, in particular, protein degradation is influenced by different variables such as product formulation, processing conditions, and the presence of starter cultures. The content of peptides is influenced by the proteolytic activity of endogenous enzymes together with lactic acid bacteria. In particular, the presence of lactic acid bacteria induces a decrease of pH, resulting in greater activity of endogenous muscle proteases [103].
Functionality of Meat Bioactive Peptides
Meat peptides have proven effects on consumer health due to different types of bioactivity, including antihypertensive, antioxidant, antithrombotic, antimicrobial, and anticancer activities [104]. The bioactivity of a peptide depends on its sequence, amino acid composition, and molecular mass [105]. Furthermore, Vermeirssen et al. [39] reported that peptide length can affect the intensity of the bioactivity, with smaller peptides showing greater bioactivity.
The most extensively studied meat bioactive peptides are the angiotensin I-converting enzyme inhibitory (ACE-I) peptides, probably due to their implication in the regulation of blood pressure. ACE is a dipeptidyl carboxypeptidase that converts angiotensin I (a decapeptide) into angiotensin II (an octapeptide), resulting in vasoconstriction of the arteries and, consequently, an increase of blood pressure. Therefore, the inhibition of ACE could be linked to the prevention of cardiovascular disease [106]. Meat proteins are a good source of ACE-I peptides with in vitro and in vivo bioactivities. In recent years, several bioactive peptides have been isolated through the hydrolysis of meat proteins with gastrointestinal enzymes, such as pepsin, trypsin, chymotrypsin, or pancreatin. Katayama et al. [84] found two different ACE-I peptides from pork meat (KRQKYD, EKERERQ) through pepsin treatment. Both isolated peptides were studied in vivo in rats, showing hypotensive activity after three and six hours of oral administration.
Twenty-two ACE-I peptides were isolated in vitro from pork meat using pepsin and pancreatin proteases. Among these, the KAPVA and PTPVP sequences showed the highest antihypertensive activity [85]. Subsequently, in 2012, the same authors [86] investigated in vivo the bioactivity of the KAPVA, PTPVP, and RPR peptides in rats, highlighting a larger decrease of blood pressure for KAPVA and PTPVP than for RPR after eight hours of oral administration.
Peptides extracted from connective tissue were also identified as inhibitors of ACE [92,93,107]. Gómez-Guillén et al. [108] reported that the bioactivity of collagen-derived peptides depends on the amount of Gly and Pro residues. In vitro and in vivo ACE-I properties were found in peptides isolated from a hydrolysate of bovine Achilles tendon collagen obtained with bacterial collagenase [92]. After hydrolysis, samples were purified, sequenced, and identified as AKGANGAPGIAGAPGFPGARGPSGPQGPSGPP and PAGNPGADGQPGAKGANGAP. Both peptides showed ACE-I activity after oral administration in rats. In recent years, Fu et al. [72,107] also found bioactive peptides in collagen extracted both from the nuchal ligament of bovine carcasses (GPRGF) and from cooked semitendinosus muscle (SPLPPPE, EGPQGPPGPVG, and PGLIGARGPPGP), showing strong ACE- and renin-inhibitory activities. In addition, Saiga et al. [94] isolated peptides with in vivo ACE-I activity from chicken collagen after hydrolysis with a protease from Aspergillus oryzae.
Several peptides isolated from meat are characterized by antioxidant activity, owing to their capability to inhibit lipid peroxidation, chelate metal ions, and remove free radicals and ROS [109,110].
The most important antioxidants naturally present in meat are the dipeptides carnosine and anserine, which exert their antioxidant activity by chelating pro-oxidative metals [87]. In addition to the peptides naturally present in meat, peptides with antioxidant activity have also been generated through hydrolysis with specific proteases. Saiga et al. [87], in an in vitro study on porcine myofibrillar proteins hydrolyzed with papain and actinase E, found five peptides (DSGVT, IEAEGE, EELDNALN, VPSIDDQEELM, and DAQEKLE) that exhibited antioxidant activity in a linolenic acid peroxidation system. The same authors suggested that the highest antioxidant activity was reached by the DAQEKLE peptide obtained with actinase E, corresponding to a part of the tropomyosin alpha-1 chain. Thus, the type and specificity of the proteases used play an important role in determining the antioxidative properties of peptides. Furthermore, three peptides (ALTA, SLTA, and VT) obtained from porcine skeletal muscle actomyosin showed antioxidative activity not only in vitro but also in vivo in rats [88]. Four antioxidant peptides were also obtained from porcine collagen by Li et al. [95] using three different protease treatments (pepsin and papain; a protease from bovine pancreas; and a cocktail of protease from bovine pancreas, bacterial proteases from Streptomyces, and Bacillus polymyxa). The collagen treated with the cocktail of three enzymes showed higher antioxidant activity and a greater number of peptides (QGAR, LQGM, LQGMH, and LC) than the other treatments. In recent years, Banerjee and Shanthi [92] isolated a 36-amino-acid-residue peptide with free-radical scavenging and metal-chelating properties from bovine tendon collagen α1. Peptides with antioxidant activity can also be produced during meat processing. Twenty-seven antioxidant peptides were sequenced by LC-MS/MS in samples of Spanish dry-cured ham [96]; in this study, the highest scavenging activity was found for two peptides (SAGNPN and GLAGA). Broncano et al. [98] also isolated two peptides (FGG and DM) with antioxidant activity in pork chorizo sausages. Recently, Xing et al. [97] purified several antioxidant peptides from dry-cured Xuanwei ham, with the highest antioxidant activity found for the DLEE peptide.
Peptides with antithrombotic properties have also been isolated from meat. Morimatsu et al. [89] and Shimizu et al. [90] isolated peptides exhibiting antithrombotic activity from porcine longissimus dorsi muscle hydrolyzed with papain. In particular, Shimizu et al. [90] tested the antithrombotic activity both in vitro, by a platelet function test using rat blood, and in vivo, by oral administration to mice (at a dose of 70 mg/kg of body weight). The in vivo results showed that the meat-derived peptide significantly reduced carotid artery thrombosis and decreased platelet activity, with an effect comparable to aspirin treatment (at a dose of 50 mg/kg of body weight).
Although a number of peptides with antimicrobial activity have been isolated from bovine blood, only one study has shown the presence of antimicrobial peptides derived from bovine meat [91]. In this study, Jang et al. [91] isolated four peptides (GLSDGEWQ, GFHI, DFHING, and FHG) after hydrolysis of beef sarcoplasmic proteins with commercial enzymes. All peptides were subsequently tested for antimicrobial activity against six pathogens (Escherichia coli, Pseudomonas aeruginosa, Salmonella typhimurium, Staphylococcus aureus, Bacillus cereus, and Listeria monocytogenes). The results showed different antimicrobial effects against one or more bacteria. In particular, the GLSDGEWQ peptide showed an inhibitory effect on Escherichia coli, Salmonella typhimurium, Bacillus cereus, and Listeria monocytogenes, while all tested peptides were active against Pseudomonas aeruginosa.
It is known that some peptides can also exhibit anticancer activity, inhibit cell proliferation, and have cytotoxic effects against cancer cells [111]. Jang et al. [91] investigated four peptides extracted from bovine sarcoplasmic proteins against breast, gastric, and lung adenocarcinoma. The results showed that the GFHI peptide had a marked cytotoxic effect against breast cancer cells and decreased the viability of gastric cells. In addition, an inhibitory effect on the proliferation of gastric cells was found for the GLSDGEWQ peptide.
It is known that, after oral intake, bioactive peptides need to be absorbed intact to ensure their bioactivity within the cellular environment. In this regard, it is important that peptides enter the circulatory system intact and remain active during the digestive process [112]. Small peptides are more resistant to degradation by intestinal enzymes and more easily absorbed into the circulatory system [113]. Ohara et al. [114] detected small collagen-derived peptides in blood after oral ingestion of protein hydrolysate products. In recent years, nutrient absorption at the intestinal level has been studied using an experimental model involving cultures of colon Caco-2 cells. Shimizu et al. [115] reported that a chicken collagen octapeptide (GAXGLXGP) can be transported across a human intestinal epithelium. Recently, Fu et al. [107] also identified two peptides derived from bovine collagen (VGPV and GPRGF) with ACE-inhibitory activity in Caco-2 cell models of the human intestinal epithelium, highlighting the bioavailability of these peptides.
Meat-derived bioactive peptides, due to their biological properties, are promising candidates as ingredients of functional or health-promoting foods [116]. Although functional peptide-based meat products have not yet been commercialized by the industry, they could open a new market. In particular, the development of functional fermented meat products could be a strategy to introduce products with high nutritional value to the market.
Occurrence of Bioactive Peptides in Egg
The avian egg is an important source of nutrients, containing all of the proteins, lipids, vitamins, minerals, and growth factors required by the developing embryo, as well as a number of defense factors that protect against bacterial and viral infection [117]. In particular, egg white contains a number of proteins with antimicrobial activities, including bacterial cell lysis, metal binding, and vitamin binding.
Lysozyme is well known to exert antimicrobial activity and, more recently, enzymatic hydrolysis of lysozyme has been found to enhance its activity by exposing antibacterial portions of the protein and producing peptides with antibacterial activity. Peptides corresponding to amino acid residues 98-112 [118], 98-108, and 15-21 [119] possessed antimicrobial activity against E. coli and S. aureus. Furthermore, peptides produced by the enzymatic digestion of ovalbumin, and their synthetic counterparts, were found to be strongly active against Bacillus subtilis and, to a lesser extent, against E. coli, Bordetella bronchiseptica, Pseudomonas aeruginosa, and Serratia marcescens, as well as Candida albicans [120].
Several egg white proteins and peptides have demonstrated immunomodulating activity. Tezuka and Yoshikawa [121] found that the phagocytic activity of macrophages was increased by the addition of ovalbumin peptides, OA 77-84 and OA 126-134, derived from peptic and chymotryptic digestions, respectively.
It has been reported that certain egg white-derived peptides can play a role in controlling the development of hypertension by exerting vasorelaxing effects [122]; a vasorelaxing peptide, ovokinin (OA 358-365), was isolated from the peptic digestion of ovalbumin. Additionally, a peptide produced by chymotrypsin digestion, corresponding to OA 359-364, was found to possess vasorelaxing activity. Both peptides were administered orally to spontaneously hypertensive rats and were found to significantly lower systolic blood pressure. The replacement of amino acids in the ovokinin (2-7) peptide has enhanced its antihypertensive activity, with the most potent derivative showing a 100-fold increase in antihypertensive potency [123]. Two angiotensin I-converting enzyme (ACE)-inhibitory peptides were also identified in ovalbumin by peptic (OA 183-184) and tryptic (OA 200-218) digestions. Miguel et al. [124] examined peptides with ACE-inhibitory properties produced by enzymatic hydrolysis of crude egg white, which were mainly derived from ovalbumin. Among these, two novel peptides with potent ACE-inhibitory activity were found, with amino acid sequences Arg-Ala-Asp-His-Pro-Phe-Leu and Tyr-Ala-Glu-Glu-Arg-Tyr-Pro-Ile-Leu.
Hen egg white lysozyme-derived peptides showed moderate inhibitory activity against calmodulin-dependent phosphodiesterase (CaMPDE) as well as free-radical scavenging properties [126]. Egg lysozyme hydrolysates have potential as functional foods and nutraceuticals, although bioavailability studies are required to confirm their health benefits in humans. | 2017-05-12T23:52:36.724Z | 2017-05-01T00:00:00.000 | {
"year": 2017,
"sha1": "62c42a8219291215e947cbd4a497562d21804199",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2304-8158/6/5/35/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "62c42a8219291215e947cbd4a497562d21804199",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
225792251 | pes2o/s2orc | v3-fos-license | Uncut Roux-en-Y gastrojejunostomy after totally laparoscopic distal gastrectomy: Learning curve and surgical outcomes
Purpose Totally laparoscopic distal gastrectomy (TLDG) is now widely used for early gastric cancer patients, but the selection of a reconstruction method after TLDG is still controversial. Roux-en-Y gastrojejunostomy is increasingly used in the expectation of less gastritis and alkaline reflux, despite its technical difficulty. The uncut Roux-en-Y gastrojejunostomy (uRYGJ) retains the advantages of Roux-en-Y reconstruction but helps prevent Roux stasis syndrome. The present study aims to introduce a single surgeon's experience of TLDG with uRYGJ and analyze the learning curve and surgical outcomes. Methods We retrospectively reviewed the medical records of 124 consecutive patients who underwent TLDG with uRYGJ performed by a single surgeon between July 2014 and August 2015 at Asan Medical Center. The baseline characteristics and surgical outcomes were analyzed, and the learning curve was drawn based on the power-law model. Results The mean total operative time was 165 minutes, and the average length of hospital stay was 6.6 days. Complications included two cases of duodenal stump leakage, two of intra-abdominal bleeding, two of intra-abdominal fluid collection, one wound problem, two anastomotic strictures, and 14 cases of ileus; there was no anastomotic leakage. There were five cases of endoscopically proven reflux gastritis/esophagitis and no Roux stasis syndrome. There were five recurrences and one mortality during the follow-up period. The learning curve leveled off at the 15th case. Conclusion The results of our study show the safety and feasibility of uRYGJ, and that the technical difficulty of the procedure can be overcome with a short learning curve for experienced surgeons.
INTRODUCTION
Roux-en-Y gastrojejunostomy (RYGJ) is increasingly chosen in the expectation of better quality of life after surgery, with less gastritis and alkaline reflux [5,6]. Moreover, it is less dependent on tumor position, whereas there are occasions when B-I reconstruction cannot be performed because of expected tension at the anastomosis site.
Unfortunately, Roux stasis syndrome, characterized by symptoms of food stasis in the upper gastrointestinal tract, can complicate RYGJ reconstruction, with a reported incidence of 5% to 37.3% [6][7][8][9][10]. The etiology of Roux stasis syndrome is still unclear, but it is thought to be caused by transection of the jejunum and separation of the Roux limb from the intestinal pacemaker [11,12].
To prevent this complication, a new anastomosis method, the uncut RYGJ (uRYGJ), was designed by Van Stiegmann et al. in 1988. Zhang et al. [13] revealed advantages of the procedure in myoelectric activity and motility of the Roux limb. Several studies reported a reduced incidence of Roux stasis syndrome as well as the safety and feasibility of the procedure [14,15]. Furthermore, in 2005, Uyama et al. [16] reported a successful early surgical outcome of laparoscopy-assisted LDG with uRYGJ.
The present study aimed to introduce a single surgeon's experience performing TLDG with uRYGJ and to analyze the surgical outcomes and the learning curve.
Patient characteristics
This study is a single-center, retrospective analysis of all consecutive patients who underwent TLDG with uRYGJ in our center between July 2014 and August 2015. All operations were performed by a single surgeon with previous experience of approximately 1,000 gastrectomies, including 500 laparoscopic cases. The study was approved by the Institutional Review Board of Asan Medical Center (IRB No. 2018-0880). The requirement for informed consent for this retrospective study was waived by the institutional review board.
Clinical evaluation of surgical outcomes
We reviewed patient electronic medical charts for data collection. Baseline data included age at operation, sex, body mass index (BMI), operative time, length of hospital stay, time until the first liquid diet, time until first flatus, tumor size, number of retrieved and metastasized lymph nodes, distal and proximal resection margins, complications including reflux gastritis/esophagitis and Roux stasis syndrome, recurrence, and mortality. Only endoscopically proven cases were counted as reflux gastritis/esophagitis.
Operation time was measured from the initial incision to skin closure. Surgery-related complications that occurred and were detected within 30 postoperative days were defined as early complications, and those that occurred after 30 days were defined as late complications.
Surgical procedures
The patient was placed in the supine reverse Trendelenburg position. The first port was inserted infra-umbilically with the open Hasson technique. The other four ports were inserted under laparoscopic visualization in the upper abdomen. Mobilization of the stomach and duodenum was performed along with en-bloc lymph node dissection including partial omentectomy. The mobilized duodenum and stomach were transected with a laparoscopic linear stapler (iDrive; Covidien, North Haven, CT, USA). After retrieval of the specimen, reconstruction was performed. The ligament of Treitz was exposed by retracting the transverse colon, and the jejunum at 20 cm distal to the ligament of Treitz was brought up towards the stomach. Enterotomy incisions were made in the jejunum and stomach for gastrojejunostomy (GJ) using a 60-mm linear endostapler (Fig. 1A), and the common entry hole was closed with self-retaining sutures (V-Loc; Covidien) (Fig. 1B). At 45 cm from the GJ in the efferent loop and at 10-15 cm from the ligament of Treitz in the afferent loop, enterotomy incisions were made and a Braun anastomosis was performed using a 45-mm linear endostapler (Fig. 2A), with V-Loc closure of the common entry hole (Fig. 2B). At 2 to 3 cm from the GJ, a 45-mm no-knife stapler (ATS45NK: 45 mm ETS Articulating Linear Cutter (No Knife); Ethicon, Cincinnati, OH, USA) was used for the uncut procedure (Fig. 3). Closure of the jejunal mesenteric defect was routinely performed.
Statistical analysis and learning curve evaluation
SPSS 20.0 software (IBM Corp., Armonk, NY, USA) was used to analyze the characteristics and surgical outcomes of the patients. The learning curve was analyzed using Excel 2010 (Microsoft, Redmond, WA, USA) based on the power-law model.
Patient characteristics and surgical outcomes
A total of 124 patients underwent TLDG with uRYGJ in our center between July 2014 and August 2015. The mean age of the patients was 57.8±11.8 years. There were 72 male patients (58.1%) and 52 female patients (41.9%), and the mean BMI was 24.44±3.13 kg/m 2 . The median follow-up period was 38 months. The mean total operative time was 165.4 ± 32.5 minutes, with variance from 107 to 285 minutes. The average length of hospital stay was 6.6 ± 2.2 days and liquid diet was initiated after an average of 3.4 ± 1.1 postoperative days. The first passage of flatus was noted after an average of 3.5 ± 0.9 postoperative days ( Table 1).
The mean total number of retrieved lymph nodes and metastasized nodes per surgery was 40.3 ± 16.1 and 0.3 ± 1.1, respectively. The average tumor size was 2.89 ± 1.42 cm, with a distal resection margin of 5.80 ± 2.83 cm and a proximal resection margin of 4.38 ± 2.60 cm. Under the American Joint Committee on Cancer staging system, 7th edition, there were 101 patients with stage IA gastric cancer, 15 patients with stage IB, four with IIA, and one each with IIB, IIIA, IIIB, and IIIC. There were five recurrences (4%) during the follow-up period: one peritoneal dissemination, two liver metastases, and two local recurrences. All patients with recurrence were stage IA at the initial surgery (Table 2).

There was no anastomotic leakage during the follow-up period. Early complications included two cases of duodenal stump leakage, two cases of intra-abdominal bleeding, four cases of ileus that delayed hospital discharge, two cases of intra-abdominal fluid collection, and one wound problem. There were 11 cases of medical complications within 30 postoperative days, including temporary elevation of bilirubin or liver enzymes, pneumonia, angina, and acute myocardial infarction. Late complications included one case of gastrojejunal stricture, one case of jejunojejunal stricture, and 10 cases of mechanical ileus (Table 3).

The only mortality resulted from a myocardial infarction on the third postoperative day in a patient with no underlying heart disease other than hypertension. Six cases of mechanical ileus occurring after 30 postoperative days required re-operation, and one case of jejunojejunal stricture was managed with balloon dilatation. There were five cases of endoscopically proven reflux gastritis/esophagitis; among them, one patient complained of an associated symptom, regurgitation. There was no incidence of Roux stasis syndrome during the follow-up period (Table 1).
Learning curve analysis
The power-law model was used to evaluate the association between the cumulative number of cases and the expected operative time. With our data, the curve was best fitted at a learning rate of 95%. Under this logarithmic model, the slope flattens as the cumulative case number increases, approaching 0. Lin et al. [17] introduced a learning curve based on the power-law method and set a slope of -1 as the point at which the operator reaches proficiency. We adopted this concept to identify the point at which the operator reached stability; in our study, this happened at the 15th case.
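A sketch of this analysis under the power-law assumption (function and variable names are ours, and the usage line assumes the measured operative times are available as an array):

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, t1, b):
    # Expected operative time of the n-th case, T(n) = T1 * n**b.
    # A learning rate r per doubling of cases corresponds to b = log2(r);
    # r = 0.95 gives b ~ -0.074.
    return t1 * n ** b

def stability_case(t1, b, slope=-1.0):
    # Case number at which dT/dn = t1 * b * n**(b - 1) reaches `slope`
    # (one minute saved per additional case), the criterion of Lin et al.
    return (slope / (t1 * b)) ** (1.0 / (b - 1.0))

# Hypothetical usage with the 124 consecutive operative times (minutes):
# cases = np.arange(1, 125)
# (t1, b), _ = curve_fit(power_law, cases, times, p0=(250.0, -0.1))
# print(round(stability_case(t1, b)))
```

With a typical first-case time and b = log2(0.95), the proficiency point evaluates to roughly the 15th case, consistent with the result reported above.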
DISCUSSION
LDG has proven its feasibility and favorable oncologic and surgical outcomes over the years [2]. However, there is not yet consensus on the best reconstruction method after LDG, which mainly depends on surgeon preference. B-I, B-II, and RYGJ are commonly used reconstruction methods after LDG. B-I reconstruction is most preferred in Korea because it is technically simple and has physiological advantages [5]. However, B-I reconstruction has been associated with gastroesophageal and duodenogastric reflux.
RYGJ reconstruction is technically more demanding but remedies these disadvantages [5,6,[18][19][20]. It also depends less on the position of the tumor and forms a tension-free anastomosis [21]. Some studies have even reported better food intake and nutritional benefit after surgery [20,22]. Unfortunately, despite its advantages, RYGJ reconstruction is associated with Roux stasis syndrome. The etiology of this syndrome is controversial, but animal studies suggest that transection of the jejunum with its ectopic pacemaker causes dysfunction of the Roux limb, leading to slow transit of food material [8,23]. The uRYGJ technique was developed on the basis of this theory, omitting jejunal transection in order to avoid Roux stasis syndrome [8,16]. In the uncut procedure, instead of jejunal transection, the afferent loop is blocked with a no-knife stapler. In this way, diversion of the afferent loop is achieved without compromising the jejunal ectopic pacemaker. Performing uRYGJ after gastrectomy has been shown to reduce Roux stasis syndrome [8,21].
We performed a modified uRYGJ method to accomplish a totally laparoscopic surgery. A total of 124 patients underwent TLDG with uRYGJ at our center between July 2014 and August 2015, with a median follow-up of 38 months. The mean total operative time was 165.4 minutes, comparable to other studies [21,24]. Reflux gastritis/esophagitis occurred in only 4.0% of the study group, and there was no incidence of Roux stasis syndrome during the follow-up period. Our results show the safety and feasibility of uRYGJ and that the technical difficulty of the procedure can be overcome with a relatively short learning curve for experienced surgeons. For comparison, reported learning curves span 10 to 29 cases for intracorporeal B-I anastomosis [25][26][27], which is accepted as a technically less challenging procedure. There are several limitations to this study. The study was based on retrospective data collection from a single surgeon, and it is not a comparative study. Another limitation is that we have no data on recanalization of the uncut stapled jejunum during the study period. It is important to evaluate recanalization because the reconstruction turns into a B-II structure when the uncut stapler line of the uRYGJ anastomosis re-opens [16,[28][29][30]. At the time of the study, the endoscopists who performed postoperative follow-up endoscopy were not asked to look for recanalization after the uncut procedure, and there was no description of re-opening in the endoscopic reports. A randomized controlled trial with a protocol for objective data collection, including information on recanalization and detailed descriptions of reflux, nutrition, and quality of life, is needed to verify the functional advantage of uRYGJ compared with other anastomotic methods.
In conclusion, laparoscopic uRYGJ is feasible and safe after TLDG. Proficiency was considered to have been reached after 15 cases, and this data-based evidence supports our view that the technical difficulty of the procedure can be overcome with a short learning curve for experienced surgeons. Therefore, uRYGJ should be considered a viable option after TLDG. | 2020-07-09T09:08:25.710Z | 2020-06-01T00:00:00.000 | {
"year": 2020,
"sha1": "b305a84fc3d12f8ff35ef05cb17ed2082ed8a809",
"oa_license": "CCBYNC",
"oa_url": "http://kjco.org/upload/kjco-16-1-46.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "31b8d28a3e2f5aeeb8237b3353ccc9ffa9545ef4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
918796 | pes2o/s2orc | v3-fos-license | Generalized Expectation Criteria for Bootstrapping Extractors using Record-Text Alignment
Traditionally, machine learning approaches for information extraction require human annotated data that can be costly and time-consuming to produce. However, in many cases, there already exists a database (DB) with schema related to the desired output, and records related to the expected input text. We present a conditional random field (CRF) that aligns tokens of a given DB record and its realization in text. The CRF model is trained using only the available DB and unlabeled text with generalized expectation criteria. An annotation of the text induced from inferred alignments is used to train an information extractor. We evaluate our method on a citation extraction task in which alignments between DBLP database records and citation texts are used to train an extractor. Experimental results demonstrate an error reduction of 35% over a previous state-of-the-art method that uses heuristic alignments.
Introduction
A substantial portion of information on the Web consists of unstructured and semi-structured text. Information extraction (IE) systems segment and label such text to populate a structured database that can then be queried and mined efficiently.
In this paper, we mainly deal with information extraction from text fragments that closely resemble structured records. Examples of such texts include citation strings in research papers, contact addresses on person homepages and apartment listings in classified ads. Pattern matching and rule-based approaches for IE (Brin, 1998;Agichtein and Gravano, 2000;Etzioni et al., 2005) that only use specific patterns, and delimiter and font-based cues for segmentation are prone to failure on such data because these cues are generally not broadly reliable. Statistical machine learning methods such as hidden Markov models (HMMs) (Rabiner, 1989;Seymore et al., 1999;Freitag and McCallum, 1999) and conditional random fields (CRFs) (Lafferty et al., 2001;Peng and McCallum, 2004;Sarawagi and Cohen, 2005) have become popular approaches to address the text extraction problem. However, these methods require labeled training data, such as annotated text, which is often scarce and expensive to produce.
In many cases, however, there already exists a database with schema related to the desired output, and records that are imperfectly rendered in the available unlabeled text. This database can serve as a source of significant supervised guidance to machine learning methods. Previous work on using databases to train information extractors has taken one of three simpler approaches. In the first, a separate language model is trained on each column of the database and these models are then used to segment and label a given text sequence (Agichtein and Ganti, 2004; Canisius and Sporleder, 2007). However, this approach does not model context, errors, or different formats of fields in text, and requires a large number of database entries to learn an accurate language model. The second approach (Sarawagi and Cohen, 2004; Michelson and Knoblock, 2005; Mansuri and Sarawagi, 2006) uses database or dictionary lookups in combination with similarity measures to add features to the text sequence. Although these features are very informative, learning algorithms still require annotated data to make use of them. The final approach heuristically labels texts using matching records and learns extractors from these annotations (Ramakrishnan and Mukherjee, 2004; Bellare and McCallum, 2007; Michelson and Knoblock, 2008). Heuristic labeling decisions, however, are made independently without regard for the Markov dependencies among labels in text and are sensitive to subtle changes in text.
Here we propose a method that automatically induces a labeling of an input text sequence using a word alignment with a matching database record. This induced labeling is then used to train a text extractor. Our approach has several advantages over previous methods. First, we are able to model field ordering and context around fields by learning an extractor from annotations of the text itself. Second, a probabilistic model for word alignment can exploit dependencies among alignments and is robust to errors, formatting differences, and missing fields in the text and the record. Our word alignment model is a conditional random field (CRF) (Lafferty et al., 2001) that generates alignments between tokens of a text sequence and a matching database record. The structure of the graphical model resembles IBM Model 1 (Brown et al., 1993), in which each source (record) word may be assigned to one or more target (text) words. The alignment is generated conditioned on both the record and the text sequence, and therefore supports large sets of rich and non-independent features of the sequence pairs. Our model is trained without the need for labeled word alignments by using generalized expectation (GE) criteria (Mann and McCallum, 2008) that penalize the divergence of specific model expectations from target expectations. Model parameters are estimated by minimizing this divergence. To limit over-fitting we include an L2-regularization term in the objective. The model expectations in GE criteria are taken with respect to a set of alignment latent variables that are either specific to each sequence pair (local) or summarize the entire data set (global). This set is constructed by including all alignment variables a that satisfy a certain binary feature (e.g., f(a, x_1, y_1, x_2) = 1, for labeled record (x_1, y_1) and text sequence x_2). One example global criterion is that "an alignment exists between two orthographically similar words 95% of the time."¹ Here the criterion has a target expectation of 95% and is defined over the alignment variables linking orthographically similar word pairs. Another criterion, useful for extraction, can be "the word 'EMNLP' is always aligned with the record label booktitle". This criterion has a target of 100% and is defined for {a = ⟨i, j⟩ | y_1[i] = booktitle ∧ x_2[j] = 'EMNLP', ∀ y_1, x_2}. One-to-one correspondence between words in the sequence pair can be specified as a collection of local expectation constraints. Since we directly encode prior knowledge of how alignments behave in our criteria, we obtain sufficiently accurate alignments with little supervision.

¹ Two words are orthographically similar if they have low edit distance.
We apply our method to the task of citation extraction. The input to our training algorithm is a set of matching DBLP-record/citation-text pairs and global GE criteria of the following two types: (1) alignment criteria that consider features of the mapping between record and text words, and (2) extraction criteria that consider features of the schema label assigned to a text word. In our experiments, the parallel record-text pairs are collected manually, but this process can be automated using systems that match text sequences to records in the DB (Michelson and Knoblock, 2005; Michelson and Knoblock, 2008); such systems achieve very high accuracy, close to 90% F1, on semi-structured domains similar to ours. Our trained alignment model can be used to directly align new record-text pairs to create a labeling of the texts. Empirical results demonstrate a 20.6% error reduction in token labeling accuracy compared to a strong baseline method that employs a set of high-precision alignments. Furthermore, we provide a 63.8% error reduction compared to IBM Model 4 (Brown et al., 1993). Alignments learned by our model are used to train a linear-chain CRF extractor. We obtain an error reduction of 35.1% over a previous state-of-the-art extraction method that uses heuristically generated alignments.
Record-Text Alignment
Here we provide a brief description of the record-text alignment task. For the sake of clarity and space, we describe our approach on a fictional restaurant address data set. The input to our system is a database (DB) consisting of records (possibly containing errors) and corresponding texts that are realizations of these DB records. An example of a matching record-text pair is shown in Table 1, and an example word alignment between the record and text is shown in Table 2. Tokenization of the record/text strings is based on whitespace characters. We add a special *null* token at the field boundaries for each label in the schema to model word insertions. The record sequence is obtained by concatenating the individual fields according to the DB schema order. As in statistical word alignment, we assume the DB record to be our source and the text to be our target. The induced labeling of the text is given by (name, address, address, address, city, city, state), which can be used to train an information extractor. In the next section, we present our approach to this task.
Approach
We first define notation that will be used throughout this section.
Let (x_1, y_1) be a database record with token sequence x_1 = ⟨x_1[1], …, x_1[m]⟩ and label sequence y_1 = ⟨y_1[1], …, y_1[m]⟩, and let x_2 = ⟨x_2[1], …, x_2[n]⟩ be the text sequence. Let a = ⟨a_1, a_2, …, a_n⟩ be an alignment sequence of the same length as the target text sequence. The alignment a_i = j assigns the DB token-label pair (x_1[j], y_1[j]) to the text token x_2[i].
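A minimal sketch of how the source-side sequences of this notation can be built from a DB record, following the whitespace tokenization and *null*-token convention of the previous section (the schema and function names are ours):

```python
SCHEMA = ["name", "address", "city", "state"]   # example DB schema order
NULL = "*null*"

def record_to_source(record):
    """Build (x1, y1): concatenate the record fields in schema order and
    append one *null* token per field so that word insertions in the text
    have something to align to."""
    x1, y1 = [], []
    for field in SCHEMA:
        for tok in record.get(field, "").split():  # whitespace tokenization
            x1.append(tok)
            y1.append(field)
        x1.append(NULL)
        y1.append(field)
    return x1, y1

# record_to_source({"name": "Casa Lupe", "city": "San Jose"}) yields token
# and label lists aligned position by position.
```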
Conditional Random Field for Alignment
Our conditional random field (CRF) for alignment has a graphical model structure that resembles that of IBM Model 1 (Brown et al., 1993). The CRF is an undirected graphical model that defines a probability distribution over alignment sequences a conditioned on the inputs (x_1, y_1, x_2) as

$$p_\Theta(\mathbf{a} \mid x_1, y_1, x_2) = \frac{1}{Z_\Theta(x_1, y_1, x_2)} \prod_{t=1}^{n} \exp\big( \Theta \cdot \mathbf{f}(a_t, x_1, y_1, x_2, t) \big), \qquad (1)$$

where f(a_t, x_1, y_1, x_2, t) are feature functions defined over the alignments and inputs, Θ are the model parameters, and Z_Θ(x_1, y_1, x_2) is the partition function. The feature vector f(a_t, x_1, y_1, x_2, t) is the concatenation of two types of feature functions: (1) alignment features f_align(a_t, x_1, x_2, t) defined on source-target tokens, and (2) extraction features f_extr(a_t, y_1, x_2, t) defined on source labels and target text.

To obtain the probability of an alignment in a particular position t, we marginalize out the alignments over the rest of the positions {1, …, n} \ {t},

$$p_\Theta(a_t) = \sum_{a_1, \ldots, a_{t-1}, a_{t+1}, \ldots, a_n} p_\Theta(\mathbf{a} \mid x_1, y_1, x_2). \qquad (2)$$

Furthermore, the marginal over the label y_t assigned to the text token x_2[t] at time step t during alignment is given by

$$p_\Theta(y_t = y \mid x_2) = \sum_{j : y_1[j] = y} p_\Theta(a_t = j). \qquad (3)$$

The gradient of p_Θ(a_t) with respect to the parameters Θ is given by

$$\frac{\partial p_\Theta(a_t)}{\partial \Theta} = p_\Theta(a_t)\Big( \mathbf{f}(a_t, x_1, y_1, x_2, t) - \sum_{a'_t} p_\Theta(a'_t)\, \mathbf{f}(a'_t, x_1, y_1, x_2, t) \Big), \qquad (4)$$

where the expectation term in the above equation sums over all alignments a'_t at position t. We use the Baum-Welch and Viterbi algorithms to compute marginal probabilities and best alignment sequences, respectively.
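Because the factors of Eq. (1) do not interact across positions, the marginals of Eq. (2) reduce to independent per-position softmaxes; the forward-backward (Baum-Welch) recursions give the same result in this degenerate zero-order case. A sketch, assuming the score matrix Θ·f has been precomputed (array and function names are ours):

```python
import numpy as np

def alignment_posteriors(scores):
    """scores[t, j] = Theta . f(a_t = j, x1, y1, x2, t).
    Returns post[t, j] = p(a_t = j): one softmax per text position."""
    s = scores - scores.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(s)
    return e / e.sum(axis=1, keepdims=True)

def label_marginals(post, y1, labels):
    """Eq. (3): p(y_t = y) sums the posterior over record positions j
    whose label y1[j] equals y."""
    return {y: post[:, [j for j, l in enumerate(y1) if l == y]].sum(axis=1)
            for y in labels}
```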
Expectation Criteria and Parameter Estimation
Let $D = \{(\mathbf{x}_1^{(i)}, \mathbf{y}_1^{(i)}, \mathbf{x}_2^{(i)})\}_{i=1}^{K}$ be a data set of $K$ record-text pairs gathered manually or automatically through matching (Michelson and Knoblock, 2005; Michelson and Knoblock, 2008). A global expectation criterion is defined on the set of alignment latent variables $A_f = \{a_t^{(i)} : f(a_t^{(i)}, \mathbf{x}_1^{(i)}, \mathbf{y}_1^{(i)}, \mathbf{x}_2^{(i)}) = 1, \forall i = 1 \ldots K\}$ on the entire data set that satisfy a given binary feature $f(\mathbf{a}, \mathbf{x}_1, \mathbf{y}_1, \mathbf{x}_2)$. Similarly, a local expectation criterion is defined only for a specific instance $(\mathbf{x}_1^{(i)}, \mathbf{y}_1^{(i)}, \mathbf{x}_2^{(i)})$. For a feature function $f$, a target expectation $\tilde{p}$, and a weight $w$, our criterion minimizes the squared divergence

$$D(f, \tilde{p}, w) = w \left( \tilde{p} - \frac{1}{|A_f|} \sum_{a \in A_f} p_\Theta(a) \right)^2, \qquad (5)$$

where $\sum_{a \in A_f} p_\Theta(a)$ is the sum of marginal probabilities given by Equation (2) and $|A_f|$ is the size of the variable set. The weight $w$ influences the importance of satisfying a given expectation criterion. Equation (5) is an instance of generalized expectation criteria (Mann and McCallum, 2008) that penalizes the divergence of a specific model expectation from a given target value. The gradient of the divergence with respect to $\Theta$ is given by

$$\frac{\partial D(f, \tilde{p}, w)}{\partial \Theta} = -\frac{2w}{|A_f|} \left( \tilde{p} - \frac{1}{|A_f|} \sum_{a \in A_f} p_\Theta(a) \right) \sum_{a \in A_f} \frac{\partial p_\Theta(a)}{\partial \Theta}, \qquad (6)$$

where the gradient $\frac{\partial p_\Theta(a)}{\partial \Theta}$ is given by Eq. (4). Given expectation criteria $C = \langle F, P, W \rangle$ with a set of binary feature functions $F = \{f_1, \ldots, f_l\}$, target expectations $P = \{p_1, \ldots, p_l\}$ and weights $W = \{w_1, \ldots, w_l\}$, we maximize the objective

$$O(\Theta; D, C) = -\sum_{j=1}^{l} D(f_j, p_j, w_j) - \frac{\|\Theta\|^2}{2}, \qquad (7)$$

where $\|\Theta\|^2/2$ is a regularization term added to limit over-fitting; the gradient of the objective follows by summing Eq. (6) over the criteria. We maximize our objective (Equation 7) using the L-BFGS algorithm. It is sometimes necessary to restart maximization after resetting the Hessian calculation in L-BFGS due to the non-convexity of our objective. Also, non-convexity may lead to a local rather than a global maximum. Our experiments show that local maxima do not adversely affect performance, since our accuracy is within 4% of a model trained with gold-standard labels.
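To make Equations (5) and (6) concrete, the following sketch evaluates the squared-divergence penalty and its gradient for a single criterion, given the per-variable marginals and their parameter gradients as inputs; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def ge_penalty_and_grad(p_marginals, dp_dtheta, p_target, w):
    """Generalized expectation penalty (Eq. 5) and its gradient (Eq. 6).

    p_marginals: shape (A,)      marginals p_Theta(a) for the variables in A_f
    dp_dtheta:   shape (A, dim)  gradients d p_Theta(a) / d Theta (Eq. 4)
    p_target:    scalar target expectation; w: criterion weight
    """
    size = p_marginals.shape[0]
    gap = p_target - p_marginals.sum() / size
    penalty = w * gap ** 2
    # d/dTheta [ w * gap^2 ] = -(2 w gap / |A_f|) * sum_a d p(a)/d Theta
    grad = -2.0 * w * gap / size * dp_dtheta.sum(axis=0)
    return penalty, grad
```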
Linear-chain CRF for Extraction
The alignment CRF (AlignCRF) model described in Section 3.1 is able to predict labels for a text sequence given a matching DB record. However, without corresponding records for texts, the model does not perform well as an extractor because it has learned to rely on the DB record and alignment features (Sutton et al., 2006). Hence, we train a separate linear-chain CRF on the alignment-induced labels for evaluation as an extractor. The extraction CRF (ExtrCRF) employs a fully-connected state machine with a unique state per label $y \in Y$ in the database schema. The CRF induces a conditional probability distribution over label sequences $\mathbf{y} = y_1, \ldots, y_n$ and input text sequences $\mathbf{x} = x_1, \ldots, x_n$ as

$$p_\Lambda(\mathbf{y} \mid \mathbf{x}) = \frac{1}{Z_\Lambda(\mathbf{x})} \prod_{t=1}^{n} \exp\big(\Lambda \cdot g(y_{t-1}, y_t, \mathbf{x}, t)\big). \qquad (8)$$

In comparison to our earlier zero-order AlignCRF model, ExtrCRF is a first-order model. All the feature functions $g(y_{t-1}, y_t, \mathbf{x}, t)$ in this model are a conjunction of the label pair $(y_{t-1}, y_t)$ and input observational features. $Z_\Lambda(\mathbf{x})$ in the equation above is the partition function. Inference in the model is performed using the Viterbi algorithm.
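Since inference in the ExtrCRF is performed with the Viterbi algorithm, a minimal log-space implementation is sketched below for reference; the emission and transition score matrices are assumed to have been precomputed from the features $g$ and parameters $\Lambda$.

```python
import numpy as np

def viterbi(emit, trans):
    """Most likely label sequence for a first-order linear-chain CRF.

    emit[t, y]:       score of label y at position t (observation features)
    trans[y_prev, y]: score of the transition from y_prev to y
    Scores are unnormalized log-potentials; returns label indices.
    """
    n, num_labels = emit.shape
    delta = np.empty((n, num_labels))
    backptr = np.zeros((n, num_labels), dtype=int)
    delta[0] = emit[0]
    for t in range(1, n):
        cand = delta[t - 1][:, None] + trans   # cand[y_prev, y]
        backptr[t] = cand.argmax(axis=0)
        delta[t] = cand.max(axis=0) + emit[t]
    path = [int(delta[-1].argmax())]
    for t in range(n - 1, 0, -1):              # follow back-pointers
        path.append(int(backptr[t, path[-1]]))
    return path[::-1]
```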
Given expectation criteria $C$ and data set $D$, we first estimate the parameters $\Theta$ of the AlignCRF model as described in Section 3.2. Next, for all text sequences $\mathbf{x}_2^{(i)}, i = 1 \ldots K$, we compute the marginal probabilities of the labels $p_\Theta(y_t \mid \mathbf{x}_2^{(i)}), \forall t$, using Equation (3). To estimate the parameters $\Lambda$, we minimize the KL-divergence between $p_\Theta(\mathbf{y} \mid \mathbf{x}) = \prod_{t=1}^{n} p_\Theta(y_t \mid \mathbf{x})$ and $p_\Lambda(\mathbf{y} \mid \mathbf{x})$ for all sequences $\mathbf{x}$. The gradient of this divergence is the difference between the feature expectations under $p_\Theta$ and under $p_\Lambda$; both expectations can be computed using the Baum-Welch algorithm. The parameters $\Lambda$ are estimated for a given data set $D$ and learned parameters $\Theta$ by minimizing this objective with L-BFGS. Since the objective is convex, we are guaranteed to obtain a global minimum.
Experiments
In this section, we present details about the application of our method to the citation extraction task.
Data set. We collected a set of 260 random records from the DBLP bibliographic database. The schema of DBLP has the following labels: {author, editor, address, title, booktitle, pages, year, journal, volume, number, month, url, ee, cdrom, school, publisher, note, isbn, chapter, series}. The complexity of our alignment model depends on the number of schema labels and the number of tokens in the DB record. We reduced the number of schema labels by: (1) mapping the labels address, booktitle, journal and school to venue, (2) mapping month and year to date, and (3) dropping the fields url, ee, cdrom, note, isbn and chapter, since they never appeared in citation texts. We also added the other label O for fields in text that are not represented in the database. Therefore, our final DB schema is {author, title, date, venue, volume, number, pages, editor, publisher, series, O}.
For each DBLP record we searched on the web for matching citation texts using the first author's last name and words in the title. Each citation text found is manually labeled for evaluation purposes. An example of a matching DBLP record-citation text pair is shown in Table 3. Our data set contains 522 record-text pairs for 260 DBLP entries.

Table 3: Example of matching record-text pair found on the web.
Features and Constraints. We use a variety of rich, non-independent features in our models to optimize system performance. The input features in our models are of the following two types: (a) Extraction features in the AlignCRF model ($f(a_t, \mathbf{y}_1, \mathbf{x}_2, t)$) and the ExtrCRF model ($g(y_{t-1}, y_t, \mathbf{x}, t)$) are conjunctions of assigned labels and observational tests on the text sequence at time step $t$. The following observational tests are used: (1) regular expressions to detect tokens containing all characters (ALLCHAR), all digits (ALLDIGITS) or both digits and characters (ALPHADIGITS), (2) number of characters or digits in the token (NUMCHAR=3, NUMDIGITS=1), (3) domain-specific patterns for date and pages, (4) token identity, suffixes, prefixes and character n-grams, (5) presence of a token in lexicons such as "last names," "publisher names," "cities," (6) lexicon features within a window of 10, (7) regular expression features within a window of 10, and (8) token identity features within a window of 3.
(b) Alignment features in the AlignCRF model ($f(a_t, \mathbf{x}_1, \mathbf{x}_2, t)$) operate on the aligned source token $x_1[a_t]$ and target token $x_2[t]$. The observational tests used for alignment are: (1) exact token match tests whether the source-target tokens are string identical, (2) approximate token match produces a binary feature after binning the Jaro-Winkler edit distance (Cohen et al., 2003) between the tokens, (3) substring token match tests whether one token is a substring of the other, (4) prefix token match returns true if the prefixes match for lengths {1, 2, 3, 4}, (5) suffix token match returns true if the suffixes match for lengths {1, 2, 3, 4}, and (6) exact and approximate token matches at offsets {−1, −1} and {+1, +1} around the alignment.
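To make a few of the alignment observational tests concrete, a sketch is given below. The Jaro-Winkler binning used in test (2) is omitted here (an external edit-distance implementation would be substituted); everything shown is an illustration, not the authors' feature code.

```python
def alignment_tests(src, tgt):
    """Binary observational tests on an aligned source/target token pair."""
    feats = {
        "exact_match": src == tgt,
        "substring_match": src in tgt or tgt in src,
    }
    for k in (1, 2, 3, 4):
        long_enough = min(len(src), len(tgt)) >= k
        feats[f"prefix_match_{k}"] = long_enough and src[:k] == tgt[:k]
        feats[f"suffix_match_{k}"] = long_enough and src[-k:] == tgt[-k:]
    return feats

# Example: tests for the pair ("Proc.", "Proceedings").
print(alignment_tests("Proc.", "Proceedings"))
```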
Thus, a conditional model lets us use these arbitrary helpful features that cannot be exploited tractably in a generative model.
As is common practice (Haghighi and Klein, 2006; Mann and McCallum, 2008), we simulate user-specified expectation criteria through statistics on manually labeled citation texts. For extraction criteria, we select for each label the top N extraction features ordered by mutual information (MI) with that label. Also, we aggregate the alignment features of record tokens whose alignment with a target text token results in a correct label assignment. The top N alignment features that have maximum MI with this correct labeling are selected as alignment criteria. We bin target expectations of these criteria into 11 bins as [0.05, 0.1, 0.2, 0.3, . . . , 0.9, 0.95]. In our experiments, we set N = 10 and use a fixed weight w = 10.0 for all expectation criteria (no tuning of parameters was performed). Table 4 shows a sample of GE criteria used in our experiments.

Experimental Setup. Our experiments use a 3:1 split of the data for training and testing. We repeat the experiment 20 times with different random splits of the data. We train the AlignCRF model using the training data and the automatically created expectation criteria (Section 3.2). We evaluate our alignment model indirectly in terms of token labeling accuracy (i.e., percentage of correctly labeled tokens in test citation data), since we do not have annotated alignments. The alignment model is then used to train an ExtrCRF model as described in Section 3.3. Again, we use token labeling accuracy for evaluation. We also measure F1 performance as the harmonic mean of precision and recall for each label.
Alternate Approaches
We compare our method against alternate approaches that either learn alignment or extraction models from training data.
Alignment approaches. We use GIZA++ (Och and Ney, 2003) to train generative directed alignment models, HMM and IBM Model 4 (Brown et al., 1993), from training record-text pairs. These models are currently used in state-of-the-art machine translation systems. Alignments between matching DB records and text sequences are then used for labeling at test time.
Extraction approaches. The first alternative (DB-CRF) trains a linear-chain CRF for extraction on fields of the database entries only. Each field of the record is treated as a separate labeled text sequence. Given an unlabeled text sequence, it is segmented and labeled using the Viterbi algorithm. This method is an enhanced representative of (Agichtein and Ganti, 2004), in which a language model is trained for each column of the DB. Another alternative technique constructs partially annotated text data using the matching records and a labeling function. The labeling function employs high-precision alignment rules to assign labels to text tokens using labeled record tokens. We use exact and approximate token matching rules to create a partially labeled sequence, skipping tokens that cannot be unambiguously labeled. In our experiments, we achieve a precision of 97% and a recall of 70% using these rules. Given a partially annotated citation text, we train a linear-chain CRF by maximizing the marginal likelihood of the observed labels. This marginal CRF training method (Bellare and McCallum, 2007) (M-CRF) was the previous state-of-the-art on this data set. Additionally, if a matching record is available for a test citation text, we can partially label tokens and use constrained Viterbi decoding with labeled positions fixed at their observed values (the M+R-CRF approach).
Our third approach is similar to that of Mann and McCallum (2008). We create extraction expectation criteria from labeled text sequences in the training data and use these criteria to learn a linear-chain CRF for extraction (MM08). The performance achieved by this approach is an upper bound on methods that: (1) use labeled training records to create extraction criteria, and (2) only use extraction criteria without any alignment criteria.
Finally, we train a supervised linear-chain CRF (GS-CRF) using the labeled text sequences from the training set. This represents an upper bound on the performance that can be achieved on our task. All the extraction methods have access to the same features as the ExtrCRF model.

Results

Table 5 shows the results of various alignment algorithms applied to the record-text data set. Alignment methods use the matching record to perform labeling of a test citation text. The AlignCRF model outperforms the best generative alignment model (IBM Model 4) with an error reduction of 63.8%. Our conjecture is that Model 4 is getting stuck in sub-optimal local maxima during EM training, since our training set contains only hundreds of parallel record-text pairs. This problem may be alleviated by training on a large parallel corpus. Additionally, our alignment model is superior to Model 4 since it leverages rich non-independent features of input sequence pairs. Table 6 shows the performance of various extraction methods. Except for M+R-CRF, the extraction approaches do not use any record information at test time. In comparison to the previous state-of-the-art M-CRF, the ExtrCRF method provides an error reduction of 35.1%. ExtrCRF also produces an error reduction of 21.7% compared to M+R-CRF, without the use of matching records. These reductions are significant at level p = 0.005 using the two-tailed t-test. Training only on DB records is not helpful for extraction, as we do not learn the transition structure and additional context information in text. This explains the low accuracy of the DB-CRF method. Furthermore, the MM08 approach (Mann and McCallum, 2008) achieves low accuracy since it does not use any alignment criteria during training. Hence, alignment information is crucial for obtaining high accuracy. Note that we do not observe a decrease in performance of ExtrCRF over AlignCRF, although we are not using the test records during decoding. This is because: (1) a first-order model in ExtrCRF improves performance compared to a zero-order model in AlignCRF, and (2) the use of noisy DB records in the test set for alignment often increases extraction error.
Both our models have a high F1 value for the other label O because we provide our algorithm with constraints for the label O. In contrast, since there is no realization of the O field in the DB records, both the M-CRF and M+R-CRF methods fail to label such tokens correctly. Our alignment model trained using expectation criteria achieves a performance of 92.7%, close to gold-standard training (GS-CRF, 96.5%). Furthermore, ExtrCRF obtains an accuracy of 92.8%, similar to AlignCRF, without access to DB records, due to better modeling of transition structure and context.
Related Work
Recent research in information extraction (IE) has focused on reducing the labeling effort needed to train supervised IE systems. For instance, Grenager et al. (2005) perform unsupervised HMM learning for field segmentation, and bias the model to prefer self-transitions and transitions on boundary tokens. Unfortunately, such unsupervised IE approaches do not attain performance close to state-of-the-art supervised methods. Semi-supervised approaches that learn a model with only a few constraints specifying prior knowledge have generated much interest. Haghighi and Klein (2006) assign each label in the model certain prototypical features and train a Markov random field for sequence tagging from these labeled features. In contrast, our method uses GE criteria (Mann and McCallum, 2008), which allow soft-labeling of features with target expectation values, to train conditional models with complex and non-independent input features. Additionally, in comparison to previous methods, an information extractor trained from our record-text alignments achieves an accuracy of 93%, making it useful for real-world applications. Chang et al. (2007) use beam search for decoding unlabeled text with soft and hard constraints, and train a model with the top-K decoded label sequences. However, this model requires a large number of labeled examples (e.g., 300 annotated citations) to bootstrap itself. Active learning is another popular approach for reducing annotation effort. Settles and Craven (2008) provide a comparison of various active learning strategies for sequence labeling tasks. We have shown, however, that in domains where a database can provide significant supervision, one can bootstrap accurate extractors with very little human effort.
Another area of research related to the task described in our paper is learning extractors from database records. These records are also known as field books and reference sets in the literature (Canisius and Sporleder, 2007; Michelson and Knoblock, 2008). Both Agichtein and Ganti (2004) and Canisius and Sporleder (2007) train a language model for each database column. The language modeling approach is sensitive to word re-orderings in text and other variability present in real-world text (e.g., abbreviation). We allow for word and field re-orderings through alignments and model complex transformations through feature functions. Michelson and Knoblock (2008) extract information from unstructured texts using a rule-based approach to align segments of text with fields in a DB record. Our probabilistic alignment approach is more robust and uses rich features of the alignment to obtain high performance.
Recently, Snyder and Barzilay (2007) and Liang et al. (2009) have explored record-text matching in domains with unstructured texts. Unlike a semi-structured text sequence obtained by noisily concatenating fields from a single record, an unstructured sequence may contain fields from multiple records embedded in large amounts of extraneous text. Hence, the problems of record-text matching and word alignment are significantly harder in unstructured domains. Snyder and Barzilay (2007) achieve a state-of-the-art performance of 80% F1 on matching multiple NFL database records to sentences in the news summary of a football game. Their algorithm is trained using supervised machine learning and learns alignments at the level of sentences and DB records. In contrast, this paper presents a semi-supervised learning algorithm for learning token-level alignments between records and texts. Liang et al. (2009) describe a model that simultaneously performs record-text matching and word alignment in unstructured domains. Their model is trained in an unsupervised fashion using EM. It may be possible to further improve their model's performance by incorporating prior knowledge in the form of expectation criteria.
Traditionally, generative word alignment models have been trained on massive parallel corpora (Brown et al., 1993). Recently, discriminative alignment methods trained using annotated alignments on small parallel corpora have achieved superior performance. Taskar et al. (2005) train a discriminative alignment model from annotated alignments using a large-margin method. Labeled alignments are also used by Blunsom and Cohn (2006) to train a CRF word alignment model. Our method is trained using a small number of easily specified expectation criteria thus avoiding tedious and expensive human labeling of alignments. An alternate method of learning alignment models is proposed by McCallum et al. (2005) in which the training set consists of sequence pairs classified as match or mismatch. Alignments are learned to identify the class of a given sequence pair. However, this method relies on carefully selected negative examples to produce high-accuracy alignments. Our method produces good alignments as we directly encode prior knowledge about alignments.
Conclusion and Future Work
Information extraction is an important first step in data mining applications. Earlier approaches for learning reliable extractors have relied on manually annotated text corpora. This paper presents a novel approach for training extractors using alignments between texts and existing database records. Our approach achieves performance close to supervised training with very little supervision.
In the future, we plan to surpass supervised accuracy by applying our method to millions of parallel record-text pairs collected automatically using matching. We also want to explore the addition of Markov dependencies into our alignment model and other constraints such as monotonicity and one-to-one correspondence. | 2014-07-01T00:00:00.000Z | 2009-08-06T00:00:00.000 | {
"year": 2009,
"sha1": "ffac08c2467904858ba23b3fc425fe3c290defbc",
"oa_license": null,
"oa_url": "https://dl.acm.org/doi/pdf/10.5555/1699510.1699528",
"oa_status": "BRONZE",
"pdf_src": "ACL",
"pdf_hash": "1abf21cce494f2a3b2f0a1d118abb500cb2c23ac",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
13049176 | pes2o/s2orc | v3-fos-license | Secondary hyperparathyroidism
Introduction
Secondary hyperparathyroidism is a maladaptive response that occurs as a result of declining kidney function. Though this response is adaptive in the early stages, prolonged stimulus leads to several pathologies involving extraskeletal calcification, several possible bone disorders and, finally, derangements in PTH, phosphate, calcium and vitamin D serum levels. These disorders are labeled under the term CKD-MBD, replacing the previous term of renal osteodystrophy, which focused primarily on CKD-related bone pathologies. Skeletal changes that take place in CKD increase the prevalence of hip fracture compared to the general population in all stages of CKD, including dialysis. Dialysis patients in their 40s have a relative risk of hip fractures 80-fold that of age- and sex-matched controls (Alem et al. 2000). Cardiovascular disease accounts for 70% of all deaths in patients with CKD, with an overall mortality of 20% per year in patients on dialysis (USRDS et al. 2003). In individuals with kidney failure on dialysis, cardiovascular mortality rates are 10-500 times higher than in the general population, even after adjustment for gender, race and the presence of diabetes (Foley et al. 1998).

One mechanism by which abnormal mineral metabolism may increase cardiovascular risk is by inducing or accelerating arterial calcification. A substantial body of observational data has now established components of disordered mineral metabolism as independent risk factors for adverse outcomes in CKD patients. These include serum levels of phosphate, PTH and FGF-23. With all of these derangements there seems to be an increase in morbidity and mortality in patients with CKD.
Physiology of Calcium and Phosphorus Homeostasis
Parathyroid Hormone

PTH acts mainly on two organs: the bone and the kidney. The immediate effect of PTH on bone is to mobilize calcium from skeletal stores that are readily available and in equilibrium with the extracellular fluid (Talmage & Mobley 2008). Later effects on bone include PTH activation of bone resorption to further increase calcium (Talmage & Mobley 2008). Renal effects include calcium reabsorption in the ascending loop of Henle and the distal convoluted tubule (Van Abel et al. 2005). PTH inhibits phosphate reabsorption in the proximal tubule (Pfister et al. 1997). This effect is primarily mediated by decreased activity, internalization, and degradation of the sodium-phosphate cotransporter in the luminal membrane of the proximal tubules (Pfister et al. 1997). Finally, PTH stimulates the synthesis of 1-alpha hydroxylase in the proximal tubules and thus the conversion of calcidiol to calcitriol (Broadus et al. 1980). Calcium has a negative feedback effect on the parathyroid glands through the calcium-sensing receptor. Phosphate has been shown to have a direct stimulatory effect on parathyroid gland hormone secretion (Brown & Hebert 1997).
Vitamin D
The active form of vitamin D (1,25-dihydroxyvitamin D) is synthesized in the kidney by the enzyme 1-alpha hydroxylase. Vitamin D stimulates intestinal absorption of calcium and phosphate. Along with PTH, vitamin D is a required factor for bone resorption. It also increases the reabsorption of urinary calcium and phosphorus in the renal tubules. Through the vitamin D receptors, it has a direct effect on the parathyroid glands to suppress PTH secretion (Ben-Dov et al. 2007).
Fibroblast Growth Factor-23

FGF-23 is a circulating peptide that plays a key role in the control of serum phosphate concentrations (Llach 1995). FGF-23 is secreted by bone osteocytes and osteoblasts in response to calcitriol, increased dietary phosphate load, PTH, and calcium (Llach 1995). FGF-23's primary function is to maintain normal serum phosphate concentration by reducing renal phosphate reabsorption and by reducing intestinal phosphate absorption through decreased calcitriol production (Llach 1995). FGF-23 also suppresses PTH secretion by the parathyroid gland (Llach 1995).
Hyperphosphatemia
The subclinical hyperphosphatemia that occurs at estimated GFRs of >30 mL/min is said to be the principal factor leading to the development of secondary hyperparathyroidism (Llach 1995). As GFR decreases, so does the filtered phosphate load. The initial increases in phosphate promote the following:

1. The induction of hypocalcaemia by binding of serum phosphate to serum ionized calcium (leading to increased PTH synthesis)
2. Decreased formation/activity of calcitriol by direct effects of phosphate at the enzyme level (leading to a decrease in phosphate and calcium absorption in the intestines)
3. Increased PTH gene expression (leading to increased PTH secretion)
4. Increased secretion of FGF-23 (Llach 1995)

From the viewpoint of phosphate homeostasis, the initial elevation in PTH secretion as a result of high phosphate is appropriate, since the ensuing increase in phosphate excretion lowers the plasma phosphate concentration toward normal. Among patients with severely reduced GFR, PTH inhibits proximal tubule phosphate reabsorption from the normal 80 to 95 percent to as low as 15 percent of the filtered phosphate (Gutierrez et al. 2005). Hyperparathyroidism also tends to correct both the hypocalcemia (by increasing bone resorption) and the calcitriol deficiency (by stimulating the 1-hydroxylation of calcidiol (25-hydroxyvitamin D)) (Gutierrez et al. 2005). However, as GFR further decreases, filtered phosphate cannot be excreted and more phosphate is mobilized from bone (Gutierrez et al. 2005). PTH and FGF-23 help maintain normophosphatemia until the GFR reaches 20 mL/min (Gutierrez et al. 2005).

To make the situation even more complicated, hyperphosphatemia also stimulates the secretion of FGF-23, which acts to suppress PTH secretion and vitamin D synthesis (Gutierrez et al. 2005). FGF-23 synthesis increases at an early stage, even without hyperphosphatemia, supporting the revised trade-off hypothesis (Rodriguez et al. 2005) (Figure 1). Early treatment of secondary hyperparathyroidism therefore revolves around reducing the phosphate load, counteracting the mechanisms of the revised trade-off hypothesis.
Calcitriol Involvement
The initial decline in calcitriol synthesis is most likely caused by the increased secretion of FGF-23 by the osteoblasts (the revised trade-off hypothesis) (Fukagawa et al. 2013). This is especially prominent at early stages of chronic renal failure. Other mechanisms, such as hyperphosphatemia, reduce calcitriol synthesis during later stages of renal failure. FGF-23 has a direct inhibitory effect on the renal 1-alpha hydroxylase. Low calcitriol promotes PTH secretion through both indirect and direct mechanisms. Indirect effects on PTH are achieved through decreased intestinal absorption of calcium and decreased calcium release from bone, both of which promote the development of hypocalcaemia, which stimulates PTH secretion. Direct mechanisms result from the loss of inhibitory effects at the level of the VDRs on the parathyroid glands (Gogusev et al. 1997). Low calcitriol concentrations appear to play an important role in the decline in VDRs, since the defect can be largely corrected by calcitriol supplementation (Gogusev et al. 1997). Calcitriol is given to increase the number of VDRs and thereby sensitize the parathyroid gland to suppression of PTH release (Gogusev et al. 1997).
Hypocalcaemia and the Calcium-Sensing Receptor (CaSR)
The release of PTH from the parathyroid glands in response to low concentrations of calcium is regulated by the CaSR (Naveh-Many et al. 1995). In CKD, the number of CaSRs may be reduced in hypertrophied parathyroid glands, particularly in areas of nodular hypertrophy (Naveh-Many et al. 1995). Decreased expression of the CaSR appears to be related to the proliferation of parathyroid tissue, and both may be related to increased phosphorus (Naveh-Many et al. 1995). The change in receptor number can lead to inadequate suppression of PTH secretion by calcium, resulting in inappropriately high PTH concentrations in the setting of normal or high calcium concentrations (Naveh-Many et al. 1995). Total serum calcium concentrations are low during CKD as a result of all of the aforementioned mechanisms. This low serum calcium can be considered a stimulus for increased PTH secretion (Naveh-Many et al. 1995). The role of the CaSR in regulating parathyroid gland function has direct therapeutic implications. The administration of a calcimimetic agent increases the sensitivity of the receptor to extracellular calcium and can lower PTH secretion from the parathyroid gland (KDIGO et al. 2009).
Skeletal resistance to PTH
Skeletal resistance to the calcemic action of PTH appears to contribute to the genesis of secondary hyperparathyroidism in CKD. Resistance to PTH is primarily due to down-regulation of PTH receptors induced by the high circulating PTH concentrations, although both calcitriol deficiency and hyperphosphatemia may play a contributory role (Isakova et al. 2011).
Parathyroid Hyperplasia
Prolonged stimulation of PTH secretion leads initially to diffuse polyclonal hyperplasia, followed by monoclonal nodular hyperplasia (Tominaga & Takagi 1996) (Figure 2). These two processes ultimately lead to tertiary hyperparathyroidism, in which PTH is autonomously secreted and unresponsive to serum calcium levels (Tominaga & Takagi 1996). Furthermore, therapies such as calcimimetics and vitamin D analogs do not reduce the levels of PTH (Drueke et al. 2007). Potential pathogenic factors include down-regulation of VDRs and CaSRs (Tominaga & Takagi 1996). As monoclonal nodular hyperplasia develops, it may continue to overgrow into an adenoma, further increasing PTH secretion (Tominaga & Takagi 1996). Often these patients require parathyroidectomies to control calcium-phosphate homeostasis (Tominaga & Takagi 1996).
Vascular Calcification
The initial step in the development of vascular calcification is the de-differentiation of vascular smooth muscle cells to osteo/chondrogenic-like cells by transcription factors such as Runx2 and Msx2 (Figure 3) (Moe & Chen 2008). These cells can then form matrix vesicles or apoptotic bodies that mineralize on an extracellular matrix, presumably in a manner similar to bone (Moe & Chen 2008). Cells form collagen and non-collagenous proteins in the intima or media and incorporate calcium and phosphorus into matrix vesicles to initiate mineralization and further grow the mineral into hydroxyapatite (Moe & Chen 2008). The existence of abnormal bone remodelling in CKD may accelerate the process by providing excess calcium and phosphate for matrix vesicles (Moe & Chen 2008). The overall pathogenesis is regulated by a balance of pro-calcifying factors and inhibitors (Moe & Chen 2008). Unfortunately, in CKD the pro-calcifying factors, including hyperphosphatemia and hyperparathyroidism, are common, and inhibitors such as fetuin-A and matrix Gla protein are reduced (Moe & Chen 2008).
Diagnosis of Secondary Hyperparathyroidism
Diagnosing Bone Pathologies

Overview

Several bone pathologies can occur in patients with CKD (Figure 4). These include the following:

1. Predominant hyperparathyroid-mediated high-turnover bone disease
2. Low-turnover osteomalacia

Bones are evaluated by three parameters: turnover, mineralization, and volume (the TMV classification). Each form of bone disease has its own characteristic histologic findings. Regardless of the type of bone disease, fracture risk in CKD patients is 14 to 17 times greater than in normal controls (Coco & Rush 2000).
Role of PTH
PTH is used as a marker for the severity of hyperparathyroidism. It cannot predict the underlying bone disease, particularly for PTH levels in the moderate range (KDIGO et al. 2009). PTH ranges for prediction of the underlying bone disease are as follows: intact serum PTH values <100 pg/mL are associated with a decreased likelihood of osteitis fibrosa (OF) and an increased incidence of adynamic bone disease; an intact serum PTH level >450 pg/mL is typically associated with hyperparathyroid bone disease and/or mixed uremic osteodystrophy (MUO); and intermediate PTH levels between 100 and 450 pg/mL do not correlate well in predicting bone disease (KDIGO et al. 2009).
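The cutoffs above can be restated as a simple decision rule. The sketch below merely encodes the quoted KDIGO ranges for illustration; it is not a validated diagnostic tool.

```python
def likely_bone_disease(intact_pth_pg_ml):
    """Map an intact PTH level (pg/mL) to the likelihoods quoted above."""
    if intact_pth_pg_ml < 100:
        return "osteitis fibrosa less likely; adynamic bone disease more likely"
    if intact_pth_pg_ml > 450:
        return "hyperparathyroid bone disease and/or mixed uremic osteodystrophy"
    return "intermediate range: PTH does not predict the underlying bone disease"
```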
The values mentioned previously are based on intact assays for PTH measurement. Second- and third-generation assays are in clinical use; unfortunately, present clinical data are based on intact assays (KDIGO et al. 2009). KDIGO guidelines suggest the use of caution when using second- or third-generation assays (KDIGO et al. 2009).
Intact PTH assays use a capture antibody against the C-terminal part of the PTH molecule (epitopes 39-84) and a radioiodinated detection antibody directed towards the N-terminal portion of PTH (epitopes 13-34), and therefore detect the intact hormone as well as large C-terminal fragments that lack portions of the N-terminus (that is, the first 12 amino acids) and are termed non-PTH(1-84) (Figure 5) (John et al. 1999). Non-PTH(1-84) fragments are a subset of large C-terminal fragments that lack only a small portion of the N-terminus, and are therefore measured by the intact PTH assay (John et al. 1999). They differ from other C-terminal fragments that lack a large portion of the N-terminus and are not detected by the intact PTH assay (John et al. 1999). In individuals with normal renal function, non-PTH(1-84) fragments account for 10 percent of all C-terminal PTH fragments and for 20 percent of the PTH detected by the intact PTH assay (Lepage et al. 1998). In patients with chronic kidney disease, such as patients on hemodialysis, non-PTH(1-84) may account for as much as 45 percent of the immunoreactivity measured by the intact PTH assay (Block et al. 2004).
Bioactive assays use antibodies directed at epitopes located at the N-terminal as well as the C-terminal ends, bypassing the non-PTH fragments detected by first-generation assays. The bioactive PTH assay reacts with PTH(1-84) and with another N-terminal PTH fragment which is not recognized by the intact PTH assays (Figure 5).

Oxidized PTH occurs in patients with chronic kidney disease due to extensive oxidative stress (Hocher et al. 2012). Oxidized PTH forms the majority of PTH within these patients (Hocher et al. 2012). It is an inactive product and has no effect on the PTH receptor (Hocher et al. 2012). The assays used in these patients, in particular the intact and bioactive assays, detect both oxidized and non-oxidized forms of PTH, resulting in higher PTH levels than when measured with a non-oxidized PTH assay (Hocher et al. 2012). At this point, the non-oxidized assay may be useful in those patients requiring dialysis.
Role of Alkaline Phosphatase
Markers of osteoblast-mediated bone formation, such as bone-specific or total alkaline phosphatase, may provide useful information if used in conjunction with serum PTH levels. The combination of a low serum bone-specific alkaline phosphatase concentration (≤7 ng/mL) and a low serum PTH is suggestive of a low-remodeling disorder (Ureña & De Vernejoul 1999). An elevated serum bone-specific alkaline phosphatase (≥20 ng/mL), alone or in combination with increased serum PTH (>200 pg/mL), appears to be highly sensitive and specific for high-turnover bone disease (Ureña et al. 1996). In Croatia, the use of alkaline phosphatase and PTH levels is considered sufficient for assessing metabolic bone disease. It is worth mentioning that the use of bone metabolic markers, in conjunction with PTH or individually, to assess high- or low-turnover states still lacks sensitivity and specificity (Ureña et al. 1996).
Imaging Techniques
Radiographic examination includes possible detection of subperiosteal resorption on plain radiographs (KDIGO et al. 2012). These changes are associated with progressive hyperparathyroidism (KDIGO et al. 2012). These techniques are less sensitive than PTH and alkaline phosphatase (KDIGO et al. 2012). As a result, routine x-rays are only done when there is symptomatic bone disease (KDIGO et al. 2012). Bone mineral density has limited use for assessing bone disease in CKD patients. The 2012 KDIGO guidelines suggest not performing BMD measurements among patients with an estimated glomerular filtration rate (eGFR) <45 mL/min per 1.73 m2, since the information may be misleading or unhelpful (KDIGO et al. 2012).
Bone Biopsy
Bone biopsy remains the gold standard for the diagnosis of symptomatic bone disease. As mentioned previously, even though there is good correlation between PTH and bone formation rate, PTH cannot consistently replace bone histology, because normal or even low bone turnover is often seen across a wide range of PTH values (2-9 fold the upper normal limit for the assay) (Barreto et al. 2008). However, in clinical practice the procedure has significant limitations. Some of the issues are as follows: individual variation in uremic patients leads to false results; the length of time required to prepare for the procedure; the invasive nature of the procedure; and, finally, financial cost. To conclude, bone biopsies are not routinely done in Croatia. For hospitals where it is possible to perform the procedure, KDIGO has published guidelines for the use of bone biopsies.
Vascular Calcification
VC is most often detected incidentally on imaging obtained for other purposes (KDIGO et al. 2009). Screening to quantify VC is not attempted in all CKD patients, since no specific therapy is available beyond careful attention to calcium and phosphate balance (KDIGO et al. 2009). This is in agreement with the 2009 Kidney Disease: Improving Global Outcomes (KDIGO) guidelines (KDIGO et al. 2009). CT scanning can detect and quantify the level of calcium with CAC scoring (KDIGO et al. 2009).
Treatment in Predialysis Patients
The treatment of secondary hyperparathyroidism differs between the different stages of chronic kidney disease. Clinicians should first categorize patients according to the severity of kidney disease before making treatment decisions (Figure 7). Recent KDIGO guidance suggests initiating therapy when the serum PTH is progressively rising and remains persistently above the upper limit of normal for the assay (KDIGO et al. 2012).

The clinician is required to be aware of the particular assay and the normal values associated with its use (KDIGO et al. 2012). With respect to calcium and phosphate levels, the KDIGO working group suggests maintaining serum calcium and phosphorus in the normal range (i.e., serum phosphorus <4.5 mg/dL [1.45 mmol/L]) (KDIGO et al. 2012). Clinicians should focus on trends rather than single laboratory values (KDIGO et al. 2012).
The treatment algorithm is as follows:

Step 1: Limiting dietary phosphorus, regardless of normal serum phosphate levels, is essential at the beginning of treatment and throughout the course of the disease. Patients often have subclinical hyperphosphatemia that is compensated by the increased levels of FGF-23. Reducing phosphorus in the diet can decrease the phosphate load and improve treatment outcomes. The recommended dietary phosphorus intake is 900 mg/day (KDIGO et al. 2012). Dietary phosphorus should be derived from sources of high biologic value, such as meats and eggs (KDIGO et al. 2012). Phosphorus from food additives should also be estimated and restricted (KDIGO et al. 2012). Food additives (as are found in processed foods) are an important source of dietary phosphate. In addition to having a high phosphate content, highly processed food provides a more easily absorbed form of phosphate compared with fresh, unprocessed foods (KDIGO et al. 2012).
Step 2: After two to four months of limiting dietary phosphorus, if PTH levels remain elevated, the next step is to introduce phosphate binders. Phosphate binders include calcium-containing and non-calcium-containing compounds. Calcium-containing phosphate binders should be used in patients who are hypocalcemic or normocalcemic, particularly if they are not also receiving active vitamin D analogs. Vitamin D analogs used together with calcium-containing phosphate binders can cause both hypercalcemia and hyperphosphatemia, with vascular calcification as a consequence. Non-calcium-containing phosphate binders are used for hypercalcemic patients. They are also appropriate in normocalcemic CKD patients, particularly if they are also receiving active vitamin D or vitamin D analogs. Non-calcium-containing phosphate binders are also used for patients with adynamic bone disease and vascular calcification. Calcium-containing binders include calcium carbonate (40% elemental calcium: 200 mg of elemental calcium/500 mg of calcium carbonate) and calcium acetate (25% elemental calcium: 169 mg of calcium/667-mg capsule) (Mia et al. 1989). Calcium acetate is considered to be a more efficient phosphate binder than calcium carbonate (Mia et al. 1989). In Croatia, calcium carbonate and magnesium hydroxide are widely used options. No studies have examined calcium-based binders versus placebo, or compared the two forms of calcium-based binders with respect to extraskeletal calcification or patient-centered outcomes such as mortality, fractures, and hospitalizations. Clinicians need to be aware of the effects of calcium compounds and active vitamin D compounds used together.
Simultaneous use of both can lead to hypercalcemia and hyperphosphatemia. A further note for clinicians using calcium-based phosphate binders: consider the total level of calcium intake when prescribing them, as it likely exceeds the 1500 mg/day limit in end-stage renal disease (ESRD) patients (NFK et al. 2003). Non-calcium-containing compounds include sevelamer hydrochloride, which is the only non-calcium-containing phosphate binder available in Croatia. Sevelamer hydrochloride is a nonabsorbable cationic polymer that binds phosphate through ion exchange. Conventional dosing is three times daily, but recently less frequent dosing has been used. Side effects include GI intolerance and metabolic acidosis. A decrease in LDL cholesterol has been associated with its use.
When comparing calcium-based and non-calcium-based phosphate binders, the non-calcium-based binders appear to be more effective. The best available data come from a meta-analysis of 11 open-label, randomized trials (4622 patients), which revealed a 22 percent decrease in all-cause mortality among patients randomly assigned to receive non-calcium-based binders (sevelamer, 10 studies including 3268 patients, or lanthanum, one study including 1354 patients) compared with calcium-based binders (relative risk [RR] 0.78, 95% CI 0.61-0.98) (Jamal et al. 2013). Analysis of three nonrandomized trials (2813 patients) revealed an 11 percent reduction in mortality and, in all trials together, a 13 percent reduction in mortality among patients taking non-calcium-based phosphate binders. Analysis of dialysis and nondialysis CKD patients showed similar reductions in mortality (Jamal et al. 2013). Calcium-containing phosphate binders are also associated with hypercalcemia, adynamic bone disease, vascular calcification, and a positive calcium balance, all of which could result in increased morbidity (Jamal et al. 2013). Thus, it is prudent to take these factors into consideration when deciding on the type of phosphate binder.
Step 3: If PTH levels further increase or remain elevated over a six-month period despite optimal therapy from the previous steps, introduction of a vitamin D derivative is the next essential step in management. Treatment should not be initiated with vitamin D derivatives if either serum calcium or serum phosphorus is elevated. As a general rule, a vitamin D derivative can only be introduced if the corrected serum total calcium concentration is <9.5 mg/dL (<2.37 mmol/L). If the serum level of corrected total calcium exceeds 10.2 mg/dL (2.54 mmol/L), all vitamin D therapy should be discontinued.
Vitamin D therapy should also be discontinued if intact PTH levels become persistently low.
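The thresholds above refer to albumin-corrected total calcium. The source does not state which correction formula is assumed; a commonly used one, with calcium in mg/dL and albumin in g/dL, is

$$\mathrm{Ca}_{\mathrm{corrected}} = \mathrm{Ca}_{\mathrm{measured}} + 0.8 \times (4.0 - \mathrm{albumin}).$$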
There are several classes of vitamin D derivatives: the naturally occurring vitamin D derivative calcitriol, and the synthetic vitamin D analogs paricalcitol and doxercalciferol. We shall discuss calcitriol and paricalcitol, as these are the only vitamin D derivatives available in Croatia. Four placebo-controlled RCTs of various vitamin D analogues all showed efficacy for PTH lowering compared with placebo (Slatopolsky et al. 1992). No RCTs using vitamin D analogues in this group of patients address key patient-level outcomes (such as mortality, fractures, quality of life, etc.). As a result, any vitamin D derivative can be used in the treatment of these patients. Paricalcitol has been examined in one randomized trial: in this phase-III trial of 220 patients with stage 3 and 4 CKD, compared with placebo, paricalcitol resulted in a significantly greater percentage of patients with at least two consecutive decreases in PTH levels of ≥30 percent (91 versus 13 percent) (Coyne et al. 2006). Both groups had similar incidences of hypercalcemia, hyperphosphatemia, and elevated calcium-phosphorus products (Coyne et al. 2006).
Step 4: When all previous therapies fail to control PTH, cinacalcet can be used. Its use remains controversial because of the lack of evidence; in fact, the KDIGO working group recommends not giving cinacalcet, given the paucity of data concerning efficacy and safety in predialysis patients with CKD (KDIGO et al. 2009). Patients in this category can also be managed by other therapeutic interventions, such as parathyroidectomy. There are risks of hypocalcemia and elevations of serum phosphate with its use. If electrolyte imbalance does occur during treatment, other therapies, such as vitamin D derivatives and phosphate binders, may have to be adjusted.
Two trials have evaluated cinacalcet in this category of patients. The first, a phase-II, 18-week study, randomly assigned 54 patients to cinacalcet or placebo, with the dose titrated from 30 to 180 mg/day to obtain a ≥30 percent reduction in PTH levels (Charytan et al. 2005). Inclusion criteria included GFRs between 15 and 50 mL/min per 1.73 m2, an intact PTH level >130 pg/mL, and a serum calcium concentration of 9.0 mg/dL (2.25 mmol/L) (Charytan et al. 2005). Compared with placebo, cinacalcet significantly lowered intact PTH levels (32 percent decrease versus 5 percent increase with placebo) and achieved the target reduction in PTH levels (56 versus 19 percent attained a 30 percent reduction from baseline) (Charytan et al. 2005). Increments in serum phosphate levels were observed with cinacalcet, likely due to the reduction in PTH levels (Charytan et al. 2005). The second, a long-term, randomized, double-blind, placebo-controlled trial of cinacalcet in 404 patients with stages 3 and 4 CKD, reported that active therapy reduced the mean PTH level by 43 percent after 32 weeks of treatment, but also led to a 9 percent decrease in serum calcium level, a 21 percent increase in serum phosphorus level, and a 14 percent decrease in urinary phosphorus excretion (Chonchol et al. 2009). In addition, cinacalcet induced a >50 percent increase in calcium excretion from baseline (Chonchol et al. 2009).
Treatment in Dialysis Patients
Goals of therapy in this group of patients are slightly different from those in the predialysis group mentioned above. There is some debate as to which guideline should be used to treat these patients. The new KDIGO guidelines recommend that PTH levels should be 2-9 times the upper limit of normal as defined by the specific assay.

Previous guidelines specified exact values within which PTH should be controlled. However, those guidelines were based on a second-generation assay that is no longer available. Furthermore, there are data suggesting significant variability in PTH results among the different available assays, as well as marked differences based on sample collection and storage. The KDIGO guidelines also state that calcium and phosphorus levels should be in the normal range. However, what is considered to be the normal range? For example, increased phosphorus levels have been associated with increases in all-cause mortality in certain prospective studies. This was best shown in a meta-analysis of 12 studies that included 92,345 patients with CKD, over 97 percent of whom were on dialysis (Palmer et al. 2011). Among 10 studies that were judged to be adequately adjusted (of which seven were of dialysis patients), serum phosphate >5.5 mg/dL (1.78 mmol/L) was associated with increased mortality (Palmer et al. 2011). The KDIGO guidelines are not specific enough in this recommendation.
Based on the previously mentioned statements, the following should be the standard goals of therapy:

1. PTH levels should be 2-9 times the upper limit of normal as defined by the specific assay (KDIGO et al. 2009).
2. Serum levels of phosphorus should be maintained between 3.5 and 5.5 mg/dL (NFK et al. 2003).
3. Serum levels of corrected total calcium should be maintained between 8.4 and 9.5 mg/dL (NFK et al. 2003).

The approach to treatment consists of the following. The initial focus in managing secondary hyperparathyroidism is the management of hyperphosphatemia with diet and/or phosphate binders; specific interventions are based upon serum phosphate and calcium levels. The next step is to decide whether phosphate binder therapy is sufficient or whether a calcimimetic or vitamin D analog should be added; this is based upon the calcium, phosphate, and PTH levels measured while administering optimal phosphate binder therapy. The final step is to adjust the doses of phosphate binders, active vitamin D, and cinacalcet to attempt to attain target values. There is a variable interrelationship between phosphate binders (either calcium- or non-calcium-containing), calcitriol or vitamin D analogs, and calcimimetics in treatment. In the end, the clinician must balance these medications in order to successfully treat hyperparathyroidism.
The approach to treating hyperphosphatemia in dialysis patients is largely the same as in the predialysis group. Nonetheless, certain differences need to be addressed. In regard to phosphate restriction, protein supplementation (which contributes to high phosphate intake) rather than protein restriction is the goal. In this setting, the patient should be encouraged to avoid unnecessary dietary phosphate while increasing the intake of high-biologic-value sources of protein. The reason for this is that dialysis patients frequently suffer from borderline malnutrition, and restricting dietary protein could affect clinical outcomes by further worsening malnutrition. In regard to phosphate binders, treatment principles remain the same as in the predialysis group; currently there are no randomized clinical trials of phosphate binders in this group of patients. Any excess phosphate can be removed with hemodialysis: approximately 1000 mg of phosphate can be removed in each session. Patients have the option of extended dialysis or nocturnal dialysis if time permits. Patients receiving nocturnal dialysis removed twice the amount of phosphorus per week compared with those on thrice-weekly intermittent dialysis. A randomized controlled clinical trial of 51 patients randomly assigned to six-times-weekly nocturnal dialysis versus thrice-weekly intermittent dialysis showed significant and sustained decreases in serum phosphorus levels over a 6-month period (Walsh et al. 2010).
The general approach to the use of vitamin D analogs was outlined in the predialysis section, and the same principles apply to this group of patients. Currently there have been no randomized controlled clinical trials assessing vitamin D analog effects on patient-based outcomes such as fractures and mortality. Furthermore, there is a lack of evidence on which vitamin D analog to administer. Here we discuss evidence on the two vitamin D analogs, calcitriol and paricalcitol, which are available in Croatia. It has been assumed that since paricalcitol has more selective actions on target receptors (less gut absorption of calcium and phosphate and smaller increments in serum values of calcium and phosphate) there could be a survival advantage. However, data from the only prospective, comparative study suggest no significant differences between paricalcitol and calcitriol. In this phase-III, double-blind, multicenter, randomized trial, paricalcitol was directly compared with calcitriol among 263 hemodialysis patients with plasma PTH levels >300 pg/mL and a serum Ca x P product <75 (Sprague et al. 2003). To assess the effect on mortality, paricalcitol was evaluated in a large cohort study of the survival of hemodialysis patients administered paricalcitol (29,021 patients) or calcitriol (38,378 patients) (Teng et al. 2003). The findings showed that at three-year follow-up, mortality was significantly lower in the paricalcitol group (crude mortality of 18 versus 22.3 percent per person-year for paricalcitol and calcitriol, respectively) (Teng et al. 2003).
There are some issues with the design of this cohort study, such as the lack of randomization, so it should not be decisive when selecting between the two vitamin D analogs. Currently, both compounds can be used until further randomized controlled clinical trials have been performed.
As mentioned previously, cinacalcet is a calcimimetic often used in combination with vitamin D and phosphate binders when levels of PTH are excessive. Calcimimetic therapy is indicated in dialysis patients with PTH levels >300 pg/mL who have serum calcium levels >8.4 mg/dL (>2.1 mmol/L). Hyperphosphatemia is not a contraindication for starting cinacalcet, unlike for vitamin D analogs. There are numerous efficacy studies evaluating the effects of cinacalcet. In one large trial (a combination of three phase-III studies), 1136 dialysis patients with iPTH levels of >300 pg/mL were randomly assigned to traditional therapy plus cinacalcet HCl or placebo for 26 weeks (Moe et al. 2005). Among the findings was sufficient control of PTH, calcium, and phosphorus levels compared with placebo (Moe et al. 2005). A combined analysis of the three phase-III trials (also analyzed in the previously cited study) and one phase-II trial found that, compared with placebo, cinacalcet lowered the risk of parathyroidectomy, fracture, and cardiovascular hospitalization (Cunningham et al. 2005). Does cinacalcet improve all-cause mortality and cardiovascular mortality? In the EVOLVE randomized trial, cinacalcet did not decrease the risk of death or major cardiovascular events among hemodialysis patients (Chertow et al. 2012). Cinacalcet did, however, reduce the rate of parathyroidectomy by approximately half (Chertow et al. 2012). Furthermore, in a recently published meta-analysis of all 9 clinical trials involving cinacalcet, there was no reduction in all-cause mortality or cardiovascular mortality (Palmer et al. 2013).
Figure 1. Role of FGF-23 in the revised trade-off theory of the pathogenesis of secondary hyperparathyroidism, according to Fukagawa (2013), p. 866, from Expert Opinion on Pharmacotherapy. The revised trade-off hypothesis is the following: dietary phosphorus load suppresses activation of vitamin D within the kidney; FGF-23 is synthesized to reduce serum phosphorus and inhibits vitamin D synthesis; these two mechanisms stimulate PTH secretion.
Figure 3. Pathogenesis of vascular calcification, according to Moe (2008), p. 215, from the Journal of the American Society of Nephrology.
Figure 4. Different bone pathologies, according to Torres (2014), p. 615, from Seminars in Nephrology. Column 1 shows typical severe secondary hyperparathyroidism with high bone turnover, volume, and mineralization. Column 2 denotes the case of secondary hyperparathyroidism with low bone volume, normal mineralization, and high bone turnover. Column 3 shows the case of adynamic bone disease with all 3 TMV parameters reduced. Column 4 describes the usual osteomalacia with low bone turnover, low mineralization, but normal bone volume. Columns 5 and 6 show 2 situations of mixed osteodystrophy: in column 5, bone volume is high, turnover is low, and mineralization is normal; in column 6, bone volume is low, turnover is normal, and mineralization is reduced.
In individuals with normal renal function, N-PTH accounts for 4 to 8 percent of PTH detected with a bioactive PTH assay, whereas it accounts for up to 15 percent in patients with renal failure (D'Amour et al. 2003). To further add, there is excellent correlation between intact and bioactive assays; mean PTH levels in intact assays are larger due to detection of the large C-terminal fragments (non-PTH(1-84)) mentioned above (D'Amour et al. 2003).
Figure 5. Schematic presentation of PTH(1-84) and the approximate location of the peptide regions recognized by the antibodies used in displacement-type radioimmunoassays and in "two-site" immunometric assays, according to Heinrich (2006), p. 54, from Clinical Chemistry.
...therapy with bisphosphonates (KDIGO et al. 2009). Indications for bone biopsy can be further divided into Clinical, Laboratory, Radiologic, and Research categories (Figure 6).
Observational data and multiple clinical trials have shown different clinical outcomes regarding therapy in these two groups of patients. Furthermore, patients in the predialysis group still have clinically significant working kidneys; as a result, similar therapies used in both groups might produce different outcomes. More clinical trials are required to assess the efficacy of treatment with regard to phosphate binders and vitamin D analogs. There is still ongoing debate in the nephrology community as to the correct approach to these patients.
Figure 7. Staging of kidney disease according to: National Kidney Foundation (2003), from the National Kidney Foundation.
Table 1. Prevalence of secondary hyperparathyroidism in dialysis patients at the Department of Nephrology of Klinicki Bolnicki Centar Sestre Milosrdnice, Zagreb, Croatia. Different therapy regimens, with serum calcium and phosphate levels, are also shown. | 2019-08-19T04:39:47.126Z | 2020-02-02T00:00:00.000 | {
"year": 2020,
"sha1": "9d205c95ce917585de36cbe69f29899bd9dd7590",
"oa_license": "CCBY",
"oa_url": "https://www.qeios.com/read/I3HL2F/pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "49e73c22f5c67a2042fa77a575fd1d968535e01e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
1514629 | pes2o/s2orc | v3-fos-license | Parametric Study of Nonlinear Adaptive Cruise Control for a Road Vehicle Model by MPC
MPC (Model Predictive Control) techniques, with constraints, are applied to a nonlinear vehicle model for the development of an ACC (Adaptive Cruise Control) system for transitional manoeuvres. The dynamic model of the vehicle is developed in the continuous-time domain and captures the real dynamics of the vehicle sub-models for steady-state and transient operations. A parametric study of the MPC method is conducted to analyse the response of the ACC vehicle for critical manoeuvres. The simulation results show the significant sensitivity of the response of the vehicle model with ACC to the controller parameters, and comparisons are made with a previous study. Furthermore, the approach adopted in this work is believed to reflect the control actions taken by a real vehicle.
INTRODUCTION
An application of mathematical control techniques to the longitudinal dynamics of a road vehicle with an ACC system has been presented to address vehicle control. ACC systems have been developed as an enhancement to standard cruise control systems. The ACC system operates on the throttle as well as the brakes to maintain a desired speed and a SIVD (Specified Inter-Vehicle Distance) from a preceding vehicle in its vehicle-following mode. An ACC system typically aims to increase road safety and passenger comfort.

A number of ACC vehicle models and controller approaches have been developed in the literature which cover a wide range of ACC vehicle applications. The vehicle models used range from the simple vehicle model, which does not take into account the engine and drive-train dynamics, to the nonlinear vehicle models. The simple vehicle models used are the longitudinal vehicle model [1-6] and first-order vehicle models [7,8]. In either case, the input to the simple ACC vehicle model is the control signal calculated by the ULC (Upper Level Controller).

Simple ACC-vehicle models have been used in previous studies to analyse the performance of the ULC. In the case of a nonlinear vehicle model, the desired acceleration commands obtained from the ULC are given to the LLC (Lower-Level Controller), which then computes the required throttle and brake commands for the nonlinear vehicle model to follow the required acceleration commands. The nonlinear model includes the vehicle engine model, transmission model, wheel model, brake model, and the ULC and LLC models. In the literature, various control algorithms have been developed for the ULC, namely, PID (Proportional Integral Derivative) control [9,10], sliding mode control [5,6,11-14], CTG (Constant Time Gap) [7,8], and MPC [7,15-18].
Transitional Manoeuvres for Accident Avoidance
An ACC vehicle does not always perform steady-state operations [1,5,17,19]. It might need to execute TMs (Transitional Manoeuvres); for example, it might encounter a slower or halted vehicle in front of it in the same lane [7,12], a cut-in (another vehicle moving in between the ACC and preceding vehicles while the ACC vehicle is in vehicle-following mode) from another lane [20], sudden braking applied by the preceding vehicle [16,18], or a stop-and-go scenario [10,21,22]. During each TM, the ACC vehicle has to execute a high-deceleration manoeuvre in order to avoid a crash with the preceding vehicle. The acceleration-tracking capability of an ACC vehicle must therefore be highly accurate [8]. The acceleration-tracking task is challenging because, due to deceleration limits, an ACC vehicle may not be capable of applying the brake torque required to avoid a crash with an object in front of it, and this can cause brake-torque saturation [7,8]. The TM must be performed in the presence of acceleration, state and collision-avoidance constraints, when the brake and engine actuators have limited allowable forces that may saturate [7,8,18]. The development of the overall system model includes vehicle modelling, controller modelling, and their interaction.
TWO-VEHICLE SYSTEM MODEL
A two-vehicle system is considered which consists of a preceding vehicle and an ACC vehicle, as shown in Fig. 1 [8] ... the point where the preceding vehicle is at present speed [23]. The control loop diagram of the two vehicles is shown in Fig. 3. A first-order lag is considered in the ULC input command, which corresponds to the LLC's performance and arises from brake or engine actuation lags and sensor signal-processing lags [8,16]. The first-order lag can be defined as [8,16]:

τ·(d³x₂/dt³) + d²x₂/dt² = u

where x₁ and x₂ are the absolute positions of the preceding and ACC vehicles, u is the control input command determined by the ULC, and τ is the time lag equivalent to the lag in the LLC performance. Analytical and experimental studies show that τ has a value of 0.5 s [8,24], and the same value is used in this study.
Each vehicle's longitudinal motion is described in the continuous-time domain using a set of differential equations, whereas the MPC-based vehicle-following control laws for tracking the desired acceleration are calculated using a discrete-time model.
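To make the two-vehicle model concrete, the following minimal sketch simulates the ACC vehicle's acceleration response through the first-order lag above. The constant-speed preceding vehicle, the step command u, and all numerical values other than τ = 0.5 s are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal simulation of the two-vehicle system with a first-order
# actuation lag on the ACC vehicle (tau = 0.5 s, as in the text).
# The preceding vehicle travels at constant speed; the command u is
# an illustrative step input, not a controller output.

dt, tau = 0.01, 0.5          # time step [s], actuation lag [s]
n = int(10.0 / dt)           # 10 s of simulated driving

x1, v1 = 30.0, 20.0          # preceding vehicle: position [m], speed [m/s]
x2, v2, a2 = 0.0, 22.0, 0.0  # ACC vehicle: position, speed, acceleration

for k in range(n):
    t = k * dt
    u = -2.0 if t > 1.0 else 0.0   # step deceleration command from the ULC
    # First-order lag: tau * da/dt + a = u  ->  da/dt = (u - a) / tau
    a2 += dt * (u - a2) / tau
    v2 += dt * a2
    x2 += dt * v2
    x1 += dt * v1                  # preceding vehicle, constant speed

spacing = x1 - x2                  # inter-vehicle range R
range_rate = v1 - v2               # relative speed R-dot
print(f"final spacing R = {spacing:.2f} m, range rate = {range_rate:.2f} m/s")
```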
Objectives
VEHICLE MODEL
A 3.8 litre spark-ignition engine model with a five-speed automatic transmission has been chosen. The engine model consists of two states: the intake manifold pressure (p_man) and the engine speed (ω_e). The governing equations can be written as

dp_man/dt = (R·T_man/V_man)·(ṁ_ai − ṁ_ao)
I_e·(dω_e/dt) = T_i − T_f − T_a − T_p

where T_man is the manifold temperature, R is the universal gas constant of air, V_man is the intake manifold volume, ṁ_ai and ṁ_ao represent the mass flow rates into and out of the intake manifold, T_i is the engine combustion torque, T_f is the engine friction torque [26], T_a is the accessory torque, T_p is the pump torque representing the external load on the engine, and I_e is the effective inertia of the engine. The input to the engine model is the throttle angle.

FIG. 3. CONTROL LOOP DIAGRAM FOR A TWO-VEHICLE SYSTEM COMPRISING A PRECEDING AND AN ACC VEHICLE
A force balance along the vehicle longitudinal axis results in

m·ẍ = F_xf − F_aero − R_x − m·g·sinθ

where m refers to the mass of the vehicle, x is the vehicle displacement, F_xf is the longitudinal tyre force at the front tyre, F_aero is the aerodynamic drag force [26], R_x is the rolling resistance force [26], r_eff is the effective tyre radius, and θ is the gradient of the road. This nonlinear vehicle longitudinal dynamics model is used for both vehicles. The nonlinear vehicle model has been carefully redeveloped and assessed in [25,27] for the suitability of the two-vehicle system to control the longitudinal dynamics. The necessary parameters of the vehicle model are listed in Table 1, based on the information from [26].
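The force balance lends itself to simple forward-Euler integration. The sketch below assumes common textbook sub-models for F_aero and R_x (quadratic aerodynamic drag, constant rolling-resistance coefficient) and uses placeholder parameter values; it is not a reproduction of the paper's Table 1 model.

```python
import numpy as np

# Forward-Euler integration of the longitudinal force balance
#   m * x_ddot = F_xf - F_aero - R_x - m * g * sin(theta)
# All parameter values below are illustrative placeholders.

m, g, theta = 1500.0, 9.81, 0.0         # mass [kg], gravity, road grade [rad]
rho, Cd, A = 1.2, 0.3, 2.2              # air density, drag coeff., frontal area
Cr = 0.015                              # rolling-resistance coefficient

def accel(v, F_xf):
    """Longitudinal acceleration for speed v and front tyre force F_xf."""
    F_aero = 0.5 * rho * Cd * A * v**2  # aerodynamic drag force
    R_x = Cr * m * g                    # rolling resistance force
    return (F_xf - F_aero - R_x - m * g * np.sin(theta)) / m

dt, v = 0.01, 20.0
for _ in range(1000):                   # 10 s of driving with constant tyre force
    v += dt * accel(v, F_xf=500.0)
print(f"speed after 10 s: {v:.2f} m/s")
```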
MPC CONTROLLERS FORMULATION
The MPC-based ULC is presented in this paper, and the details of the LLC algorithm can be found in [25]. This is one of the aims of this study. The ULC uses the range R and range rate Ṙ between both vehicles to determine the desired acceleration commands, as shown in Fig. 4.
The main task when using the MPC method for the TM is to operate the system close to the constraint boundaries.
The main tasks for the MPC control method on the ACC system are to: (i) Track desired acceleration commands smoothly.
(ii) Reach and maintain a SIVD in a comfortable manner and at the same time react quickly in the case of dangerous scenarios.
(iii) Optimize the system performance within defined constrained operational boundaries.
Moving Horizon Window
The moving horizon window, also referred to as a time-dependent window, can start from any arbitrary time t_i and extends to the prediction horizon t_i + N_P. The prediction horizon (N_P) defines how far ahead in time the future output states are predicted, and its length remains constant. However, t_i, which actually starts the optimization window, increases with each sampling instant [29].
Receding Horizon Control
The algorithm of the MPC controller is illustrated in Fig. 5. A discrete-time setting is assumed, and the current time is labelled as time step t. The set-point trajectory shown is the absolute target for the system to follow. It is unlikely that the system will follow the set-point trajectory exactly. The reference path is therefore a newly defined path which starts from the current output at time t and defines an ideal path along which the plant (vehicle) should return to the set-point trajectory.
An MPC controller has an internal model which is used to predict the behaviour of the plant, starting at the current time t, over a future prediction horizon (N_P). Using the current output state information y(t) of the system and the defined future control inputs u(t+m|t) for m=0,..., N_C, the system's predicted outputs y(t+m|t) for m=1,..., N_P are obtained up to the limited prediction horizon (N_P) [28-30].
The set of future control inputs u(t+m|t) for m=0,..., N_C is determined up to the control horizon (N_C), Fig. 5, by optimizing a suitable measure (a determined criterion) to keep the process close to the reference path [31]. This criterion is usually represented as a quadratic function of the errors (between the predicted output signal and the predicted reference trajectory), which also takes the control input effort into account. Changes in the control input are weighted and accumulated in the quadratic function. During this process an online computation is used to find the state trajectories based on the current state, and a cost-minimizing control strategy is then determined until time t+N_C. Once the future control inputs are determined, only the first element of the set of future control inputs is applied as the input signal to the plant. During this process the prediction horizon length remains constant, but it slides forward by one time interval at each step; this entire procedure is called a receding horizon strategy [29,30].
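The receding-horizon loop can be summarized in a few lines of Python. The horizon optimizer below is a deliberately trivial stand-in (a proportional rule) so that the skeleton runs on its own; in the paper that step is the quadratic-cost minimization developed in the following sections, and the toy first-order plant is an illustrative assumption.

```python
import numpy as np

# Skeleton of the receding-horizon strategy: at each time step, measure
# the output, compute a set of N_C future control increments over an
# N_P-step horizon, apply only the first increment, then slide the
# window forward by one step.

def optimize_increments(y, r, Nc, gain=0.2):
    # Placeholder for the horizon optimization: same increment repeated.
    return np.full(Nc, gain * (r - y))

Np, Nc = 20, 5
y, u = 0.0, 0.0                          # plant output and held control input
r = 1.0                                  # set-point trajectory (constant here)

for t in range(50):
    dU = optimize_increments(y, r, Nc)   # plan Nc moves over the Np-step window
    u += dU[0]                           # apply only the first increment
    y += 0.1 * (u - y)                   # toy first-order plant update
print(f"output after 50 steps: {y:.3f}")
```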
Cost Function and Control Objective
At each time t, the state information is sampled in order to predict the future control strategy. Once the sampling process is completed, this information is compared with the desired value (the reference path); this comparison generates an error function based on the difference between the two values. This error function is formulated as a cost function, J, which consists of elements relating to the system's output accuracy and the control input effort. The cost function also incorporates a weighting which penalizes the control input u(t) to obtain the required closed-loop performance. The control objective is to minimize J inside the optimization window, and by doing so the optimized control action is determined [29].
Formulation of Prediction Model
For the purpose of illustrating the MPC control algorithm, a linearized, continuous-time, SISO (Single Input and Single Output) system is considered, described by

ẋ(t) = A x(t) + B u(t)    (5)
y(t) = C x(t) + D u(t)    (6)

where x represents the state variable, u denotes the control input, y refers to the system output, and A, B, C, D are the state-space matrices. The system matrix D is assumed to be zero because, due to the receding horizon control principle, the control input u has no direct influence on the output y [29].
In the MPC literature, the controlled system is usually modelled by a discrete-time state-space model [16,32]. Therefore, the continuous-time state-space model, Equations (5)-(6), is converted into a discrete-time state-space model as

x(k+1) = A x(k) + B u(k),    y(k) = C x(k)    (7)

where k represents the kth sampling point. The prediction is performed within an optimization window of N_P samples. At the current sampling instant k_i, the state x(k_i) is measured, which provides the current plant information.
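As an aside, the continuous-to-discrete conversion can be performed with a standard zero-order-hold routine; the matrices in the sketch below (a double integrator) are illustrative placeholders, not the vehicle model.

```python
import numpy as np
from scipy.signal import cont2discrete

# Zero-order-hold discretization of a continuous-time state-space model
# (Equations (5)-(6)) into the discrete-time form of Equation (7)).

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

Ts = 0.1                                    # sampling time [s]
Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), Ts)
print(Ad)   # discrete state matrix
print(Bd)   # discrete input matrix
```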
Given the current plant state x(k_i), the future states are predicted for N_P instants, and the future state variables can be written as

x(k_i+1|k_i), x(k_i+2|k_i), ..., x(k_i+N_P|k_i)    (9)

where x(k_i+m|k_i) is the predicted state variable at k_i+m given the current state x(k_i). Similarly, using the current system state x(k_i), the set of future control inputs which minimizes the cost function J is denoted by

Δu(k_i), Δu(k_i+1), ..., Δu(k_i+N_C−1)    (10)

where Δu(k) is the control increment (augmented model). N_C is called the length of the control horizon [30]. The length of N_C should be less than or equal to the length of N_P.
The future state variables in Equation (9) can be calculated sequentially using the current state vector and the set of future control parameters:

x(k_i+1|k_i) = A x(k_i) + B Δu(k_i)
x(k_i+2|k_i) = A² x(k_i) + A B Δu(k_i) + B Δu(k_i+1)
...
x(k_i+N_P|k_i) = A^N_P x(k_i) + A^(N_P−1) B Δu(k_i) + ... + A^(N_P−N_C) B Δu(k_i+N_C−1)    (11)
Similarly, using Equation (11), the predicted output variables can be determined as

y(k_i+m|k_i) = C x(k_i+m|k_i), for m = 1, ..., N_P    (12)

The above equations can be written in vector form as

Y = [y(k_i+1|k_i), y(k_i+2|k_i), ..., y(k_i+N_P|k_i)]^T    (13)
ΔU = [Δu(k_i), Δu(k_i+1), ..., Δu(k_i+N_C−1)]^T    (14)

where the length of Y is N_P and the length of ΔU is N_C. Equations (13)-(14) can be rewritten as a compact state-space expression, calculating all system outputs from the initial state x(k_i) and the vector of predicted control inputs ΔU:

Y = F x(k_i) + Φ ΔU    (15)

For a detailed development of the augmented model, the discrete-time state-space model (Equation (7)) and its transformation into the state-space expression of Equation (15), the reader is referred to Maciejowski [30] and Wang [29].
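A minimal sketch of building F and Φ from discrete state-space matrices follows, assuming a SISO model already written in the augmented (control-increment) form of Equation (7); the numerical matrices in the usage example are illustrative only.

```python
import numpy as np

def prediction_matrices(A, B, C, Np, Nc):
    """Build F and Phi such that Y = F @ x_ki + Phi @ dU (Equation (15)).

    A, B, C are the discrete (augmented, control-increment) state-space
    matrices; Np and Nc are the prediction and control horizons.
    """
    F = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(1, Np + 1)])
    Phi = np.zeros((Np, Nc))
    for i in range(Np):
        for j in range(min(i + 1, Nc)):
            Phi[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B).item()
    return F, Phi

# Illustrative usage with placeholder matrices (not the vehicle model).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
C = np.array([[1.0, 0.0]])
F, Phi = prediction_matrices(A, B, C, Np=10, Nc=4)
print(F.shape, Phi.shape)   # (10, 2) (10, 4)
```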
Control Input Optimization
The cost function J, which describes the control objective, can be defined as

J = (R_s − Y)^T (R_s − Y) + ΔU^T R̄ ΔU    (18)

The cost function J consists of two separate terms. The first term minimizes the error between the desired output and the predicted output, while the second term penalizes the size of ΔU when the cost function J is made as small as possible. Here R̄ = R·I, where R is employed as a fine-tuning operator for the needed closed-loop performance [29], penalizing the control input vector ΔU. R_s is the vector that contains the desired set-point information over the horizon and can be defined as

R_s = [1, 1, ..., 1]^T r(k_i)

where r(k_i) is the given set-point signal at time instant k_i [28].
The next step is to find ΔU, which can be obtained by substituting Y from Equation (15) into Equation (18) and rearranging:

J = (R_s − F x(k_i))^T (R_s − F x(k_i)) − 2ΔU^T Φ^T (R_s − F x(k_i)) + ΔU^T (Φ^T Φ + R̄) ΔU

Taking the first derivative of J,

∂J/∂ΔU = −2Φ^T (R_s − F x(k_i)) + 2(Φ^T Φ + R̄) ΔU

the required condition for the minimized J, ∂J/∂ΔU = 0, yields the optimal solution

ΔU = (Φ^T Φ + R̄)^(−1) Φ^T (R_s − F x(k_i))    (23)

For the ACC application, the state vector is built from the spacing and its derivatives,

e_k = [e_rr,k, ė_rr,k, ë_rr,k]^T

where e_rr,k is the spacing error, ė_rr,k is the range rate (relative velocity between the two vehicles), and ë_rr,k is the absolute acceleration of the ACC vehicle. Each element of the error vector e_k is a quantity measured by the ACC system, and the control objective is to steer these quantities to zero [7]. u_k is the control input and y_k is the system output at time step k. The system matrices A and B can be obtained by comparison with Equations (24)-(25).
The system matrix C is defined as in [7]. Using the MPC control approach of Section 4.4, the future error can be defined as

e(k_i+1|k_i), e(k_i+2|k_i), ..., e(k_i+N_P|k_i)

where e(k_i+m|k_i) is the predicted error variable at k_i+m given the current error information e(k_i). The set of future control inputs is denoted by

Δu(k_i), Δu(k_i+1), ..., Δu(k_i+N_C−1)

where Δu(k) = u(k) − u(k−1) is the control increment. The MPC controller forms a cost function (Equation (18)) which consists of these errors (e_k) and the control input, and which determines the best set of future control inputs to balance output-error minimisation against control-input effort.
Repeating the derivation steps from Equation (11) to Equation (23), one can find the optimal solution for the control input as a function of the current error information e(k_i). For the higher value R = 20, the ACC vehicle response is quite satisfactory. It was observed that during both the transitional and steady-state operation the response of the ACC vehicle is not delayed but is prolonged. This is because of the greater influence of the cost weighting on the optimal control input. The ACC vehicle can successfully perform the TM and achieve the desired control objectives.
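The role of the weighting R in the parametric study can be seen directly in the unconstrained closed-form solution of Equation (23). The sketch below is illustrative: the model matrices are placeholders, and the constrained TM case discussed in the paper would require a quadratic-programming solver rather than this closed form.

```python
import numpy as np

def mpc_increments(F, Phi, x_ki, r_ki, R_weight):
    """Unconstrained minimizer of J = (Rs - Y)^T (Rs - Y) + dU^T Rbar dU,
    i.e. dU = (Phi^T Phi + Rbar)^(-1) Phi^T (Rs - F x(k_i))."""
    Np, Nc = Phi.shape
    Rs = np.full(Np, r_ki)                # set-point held over the horizon
    Rbar = R_weight * np.eye(Nc)          # control-effort weighting R * I
    return np.linalg.solve(Phi.T @ Phi + Rbar, Phi.T @ (Rs - F @ x_ki))

# Illustrative sweep over R: larger R penalizes control effort more,
# giving smaller (less aggressive) first control increments.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
C = np.array([[1.0, 0.0]])
Np, Nc = 10, 4
Fm = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(1, Np + 1)])
Phi = np.zeros((Np, Nc))
for i in range(Np):
    for j in range(min(i + 1, Nc)):
        Phi[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B).item()

x = np.array([0.0, 0.0])
for R in (0.5, 1.0, 20.0):
    dU = mpc_increments(Fm, Phi, x, r_ki=1.0, R_weight=R)
    print(f"R = {R:5.1f}: first increment = {dU[0]:.4f}")
```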
The analysis carried out in this section shows that a low value of R (R < 1) is not suitable for the ACC vehicle when using the MPC control algorithm; however, a higher value of up to 20 can be used in order to improve the performance of the ACC vehicle. | 2016-04-02T20:36:10.000Z | 2012-04-01T00:00:00.000 | {
"year": 2016,
"sha1": "614fa944de4e2c3cca303c172f3c7cc9f50434a1",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "614fa944de4e2c3cca303c172f3c7cc9f50434a1",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering",
"Mathematics"
]
} |
214074446 | pes2o/s2orc | v3-fos-license | A quasi experimental study to assess the effectiveness of nurse-led intervention on knowledge regarding malnutrition of under five year children among anganwadi workers in selected urban ICDS centers, Agra, Uttar Pradesh
Aim: The objectives of the study were: 1. To assess and compare the pre-interventional knowledge on malnutrition among experimental and comparison group AWWs. 2. To assess and compare the post-interventional knowledge of malnutrition between experimental and comparison groups of AWWs. 3. To associate the pre-interventional level of knowledge on malnutrition with selected demographic variables. Method: 130 AWWs from urban ICDS centers of Agra were included as samples by purposive sampling. Data to assess knowledge were collected using a close-ended questionnaire with 50 items and a maximum score of 50. The felt learning needs were assessed by an open-ended questionnaire. Reliability of the questionnaire was tested by the test-retest method, and the tool was found to be reliable (r = 0.83). Validity was tested by consultation with guides and experts from the related field. Results: Analysis showed that AWWs had a total mean percentage of 53.2%. Area-wise, the mean percentage was highest (68.0%) in the area 'assessment of malnutrition', 57.0% in the area 'prevention of malnutrition', 41.0% in the area 'management of malnutrition' and 40.0% in the area 'factors related to malnutrition'. Further, most of the AWWs expressed felt learning needs in all areas of malnutrition. A PowerPoint presentation (PPT) was prepared focusing on areas and subareas where the mean knowledge score was average or below average, and also based on the felt learning needs expressed by AWWs in the open-ended questionnaire. The PPT was validated by consulting guides and experts from the related field. Effectiveness of the module was evaluated by a post-test. Interpretation and conclusion: The total mean percentage of knowledge scores of AWWs improved from 53.2% to 97.6%. Further, area-wise knowledge mean percentage improved from 40.0% in the pre-test to 94.0% in the post-test in the area 'factors related to malnutrition'. The same increased from 57.0% to 97.3% for the area 'prevention of malnutrition'. The mean percentage for the area 'management of malnutrition' was 41.0% in the pre-test, which increased to 99.0% in the post-test, and the mean percentage for the area 'assessment of malnutrition' was 68.0% in the pre-test whereas it was 100.0% in the post-test. A paired 't' test indicated a very highly significant (P < 0.001) difference between the pre-test and post-test knowledge scores of AWWs regarding malnutrition. Further, a chi-square test indicated no association (P > 0.05) between the post-test knowledge scores and demographic variables of AWWs such as age, education, refresher course attended on malnutrition among children below five years of age, and the number of years since the refresher course was attended. There was a significant association (P < 0.05) between the years of experience as an Anganwadi worker and post-test knowledge scores of AWWs.
Introduction
Children are the first call on programmes of human resource development, not only because children are very vulnerable, but because the foundation for lifelong learning and human development is laid in these crucial early years. It is now universally acknowledged that investment in human resource development is mandatory for the economic development of any nation. An estimated 6.3 million children under 15 years of age died in 2017, of which around 2.5 million died within the first month of life. This corresponds to roughly 15,000 under-five deaths per day [1].
Need for the study
Success, emotional stability, and wellbeing have their roots in early childhood. The Ministry of Women and Child Development has implemented different schemes for children and mothers; ICDS was one of them [26]. Today, this scheme is among the world's largest and most unique programmes for children's development [27]. In India, ICDS is currently a significant government programme for reducing child and maternal malnutrition. ICDS is one of the most important interventions, and many studies indicate its positive role in tackling India's health and nutrition problems. The available data indicate that maternal and child interventions have played an integral role in substantially lowering under-five and infant mortality rates, and the levels of both severely and moderately malnourished children have declined due to ICDS.
Statement of problem
A quasi-experimental study to assess the effectiveness of Nurse-Led Intervention on knowledge regarding malnutrition of under-five-year children among Anganwadi workers in selected urban ICDS centers, Agra, Uttar Pradesh.

Objectives of the study
1. To assess and compare the pre-interventional knowledge on malnutrition among experimental and comparison group AWWs.
2. To assess and compare the post-interventional knowledge of malnutrition between experimental and comparison groups of AWWs.
3. To associate the pre-interventional level of knowledge on malnutrition with selected demographic variables.

Hypotheses
H01: There will be no significant difference between pre- and post-interventional knowledge scores on malnutrition in the experimental group.
H02: There will be no significant difference in post-intervention knowledge on malnutrition between experimental and comparison groups of Anganwadi workers.
H03: There will be no significant association of the pre-intervention level of knowledge score on malnutrition among experimental and comparison groups with their selected demographic variables.
Delimitations
The study is limited to Anganwadi workers: working at ICDS centers of urban Agra; willing to participate in the study; and available during data collection.
Methodology
Research approach
A research approach is a systematic, unbiased method of inquiry based on empirical evidence and rigorous control. It comprises the basic strategies that the researcher implements to develop evidence that is accurate and not speculative. Control is achieved by holding the conditions constant and varying the phenomenon under study.

The choice of research approach constitutes one of the major decisions which must be made in conducting research, as the approach chosen for a research project can greatly affect its outcome. In order to achieve the desired objectives of this study, a quantitative research approach was adopted.
In the present study, the effect of the Nurse-Led Intervention (PPT) on knowledge regarding malnutrition of under-five-year children is measured in numerical form, and these data are analyzed using statistical methods.
Research design
The research design guides the selection of samples for observation and determines the type of analysis used to interpret the data. The selection of the research design depends upon the aim of the study and the conditions under which the study is conducted. The design adopted for this study is a quasi-experimental one-group pre-test post-test research design. It examines and assesses the effectiveness of the Nurse-Led Intervention (PPT) on knowledge regarding malnutrition of under-five-year children among AWWs in selected urban ICDS centers, Agra, Uttar Pradesh.
Setting of the study
Setting refers to the area where the study is conducted. Qualitative researchers strive to study their phenomenon in a variety of contexts. This study was conducted among AWWs in selected urban ICDS centers, Agra, Uttar Pradesh. The study was carried out at the urban ICDS, Fatehpur Sikri, Agra.
Sample and sampling method
A sample is the subset of the units that comprise the population. A small number of subjects is used in a study when it is not practicable to research the whole population from which it is drawn. This sampling process makes it possible to generalize to the intended population, based on careful observation of variables within a relatively small proportion of the population.
Sample
The sample comprised 130 Anganwadi workers working at the ICDS centers who fulfilled the inclusion and exclusion criteria and agreed to participate in the study.
Sampling technique
In the present study, the samples selected for data collection were from Anganwadi ICDS centers of Agra, Uttar Pradesh, fulfilling the inclusion and exclusion criteria and participating in the study. A non-probability purposive sampling method was used.
Development of tool
a) Construction of the tool to identify the learning needs of Anganwadi workers: A questionnaire was developed based on the review of literature and in consultation with research experts. The questionnaire had three parts.
Part A-Demographic variables: It includes age, education, years of experience, whether any refresher course on malnutrition of under-five-year children was attended, and if so, when the last refresher course on malnutrition was attended.
Part B-Assessment of the knowledge of the AWWS regarding malnutrition.
It consists of close-ended questions to assess the knowledge of the AWWs regarding malnutrition. It has four sections. Section I had five items regarding factors related to malnutrition. Section II had 30 items regarding prevention of malnutrition. Section III had five items regarding assessment of malnutrition, and Section IV had ten items regarding management of malnutrition. Each item carried a maximum score of one for a correct response. Thus, there were 50 items with a maximum score of 50.
Part C-Identification of felt learning needs related to malnutrition
It consists of an open-ended questionnaire to identify felt learning needs related to malnutrition which were not included in Part B. The AWWs were instructed to write their felt learning needs under the following areas: factors related to malnutrition, prevention of malnutrition, assessment of malnutrition and management of malnutrition. In the interpretation of the tool, only the percentage of responses was considered to identify the felt learning needs of Anganwadi workers related to the above-mentioned areas.
Reliability
Reliability has to do with the quality of measurement. In its ordinary sense, reliability is the "consistency" or "repeatability" of measures; it is the consistency of a set of measurements or of a measuring tool. Test-retest and internal-consistency reliability methods were used, and reliability was found to be r = 0.83. The questionnaire was administered to 14 Anganwadi workers of Agra rural ICDS centers. The gap between the first and second tests was ten days. The r value shows that the tool is reliable for assessment of knowledge of malnutrition.
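Test-retest reliability of this kind is commonly computed as the Pearson correlation between the two administrations of the questionnaire. The sketch below uses synthetic scores for 14 respondents, not the study's data.

```python
import numpy as np

# Test-retest reliability as the Pearson correlation between the two
# administrations of the questionnaire. Scores below are illustrative.
test1 = np.array([28, 31, 25, 30, 27, 22, 29, 33, 26, 24, 30, 28, 21, 27])
test2 = np.array([29, 30, 26, 31, 25, 23, 30, 34, 27, 25, 29, 27, 22, 28])

r = np.corrcoef(test1, test2)[0, 1]
print(f"test-retest reliability r = {r:.2f}")   # tool considered reliable if r > 0.8
```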
Review of related literature
The literature referred to in preparing the content of the nurse-led intervention is presented in the Annexure.
Preparation of the Nurse-Led Intervention (final draft of the NLI): Process of development of the PowerPoint presentation (PPT)
A Nurse-Led Intervention was developed to teach the AWWs about malnourished child care. The Nurse-Led Intervention was of 30 minutes' duration and covered the areas listed below. The process of development of the nurse-led intervention involved: a) Development of a criteria checklist. b) Preparation of the first draft of the nurse-led intervention programme. c) Preparation of the slides. d) Content validation of the nurse-led intervention. e) Preparation of the final draft of the nurse-led intervention programme.
Development of the criteria checklist
A criteria checklist was prepared against which the content of the Nurse-Led Intervention was to be evaluated.
Preparation of the first draft of nurse-led intervention (PPT)
The Nurse-Led Intervention comprised the following headings: Lesson 1-Nutrition; Lesson 2-Diet for a child from 0-5 years of age; Lesson 3-Malnutrition; Lesson 4-Prevention of malnutrition; Lesson 5-Management of the undernourished child. The first draft of the nurse-led intervention programme was developed after reviewing the available literature and consulting the experts. Factors that affect the AWWs' learning, such as time and independent learning, and their level of understanding and needs, were considered while preparing the nurse-led intervention programme.
Preparation of the nurse-led intervention Power point presentation.
The PPT was prepared on the care of malnourished children.
Content validation of Nurse-led intervention
The Nurse-Led Intervention was given to 4 experts for validation against the criteria checklist.
Preparation of the final draft of nurse-led intervention
The final draft was prepared after the modifications suggested by the experts. The Nurse-Led Intervention was based on general and specific objectives and covered the following content areas:
Data collection procedure
The researcher obtained ethical approval from the appropriate review panels to conduct the study. Prior permission was obtained from the Assistant Director of the Child and Women Welfare Department, Agra (Annexure). The researcher asked the supervisors of the Anganwadi workers to extend their co-operation. Data were collected during the monthly meetings of Anganwadi workers. There are 89 Anganwadi centers in the urban ICDS project of Agra District. For smooth functioning, the 89 Anganwadi centers are classified into 4 circles. Every month, meetings are conducted at two AWCs; at each center the meeting is conducted over two days for AWWs of two different circles. Thus, for four circles, meetings are conducted in two days. Data collection was done for each circle separately during the meeting of the respective circle; each day data were collected from two circles, and it took two days to complete the data collection procedure. The researcher duly explained the aim of the study. Only the samples who had signed the consent form were included in the study. Data confidentiality was maintained. The same procedure was followed for distribution of the Nurse-Led Intervention and for collection of the post-test data. Data for the pre-test were collected from 01/12/2016 to 28/04/2017; data for the post-test were collected from 03/02/2018 to 31/12/2018. The researcher approached the study subjects, explained to them the aim of the study and obtained consent after assuring the subjects about the confidentiality of the information. Overall, 130 AWWs who met the inclusion criteria were selected for the study.
Data analysis
Data were analyzed using both descriptive and inferential statistics. Mean, SD and mean percentage were used to describe the learning needs of Anganwadi workers. Further, a one-group pre-test (x) and post-test (y) design was used in the study. The formula used to evaluate the effectiveness of the PPT is: treatment pre-test (x) to post-test (y); effectiveness = (y − x). Statistical significance of the effectiveness of the module was analyzed by the paired 't' test, and association was tested using the chi-square test.
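For illustration, the two tests named above can be run as follows; all numbers are synthetic placeholders (only the pre-test mean and SD echo the study), not the actual dataset.

```python
import numpy as np
from scipy import stats

# Paired t-test on pre/post knowledge scores and a chi-square test of
# association between a demographic variable and post-test knowledge
# level. All data below are synthetic.

rng = np.random.default_rng(0)
pre = rng.normal(19.7, 3.9, 130)            # pre-test scores (study mean/SD)
post = pre + rng.normal(16.5, 3.0, 130)     # post-test scores after intervention

t_stat, p_val = stats.ttest_rel(post, pre)
print(f"paired t = {t_stat:.2f}, p = {p_val:.3g}")  # p < 0.001 -> reject H01

# Chi-square: rows = experience category, columns = knowledge level
contingency = np.array([[12, 30, 8],
                        [10, 45, 25]])
chi2, p, dof, _ = stats.chi2_contingency(contingency)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```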
Results
It was observed that the AWWs had a mean knowledge score of 19.71 ± 3.90 (39.41%) out of the maximum score of 50. Further, the knowledge levels indicate that only 9% of total scores were at a good knowledge level, and the AWWs had expressed felt learning needs in all areas of malnutrition. Hence it was felt that a module should be prepared to improve the AWWs' knowledge level and inform them of the facts regarding their self-expressed learning needs. A nurse-led intervention (PPT) was prepared on malnutrition. The steps followed in developing the module are explained in Chapter III. Effectiveness of the nurse-led intervention was evaluated by a post-test for the same group with the same tool to assess knowledge. The statistical formula used to evaluate effectiveness of the module is described in Chapter III. Hypotheses were tested using the paired 't' test and the chi-square (X²) test. Paired 't' was calculated to analyse the differences in knowledge of AWWs in the pre- and post-test. Further, X² was calculated to analyse the association between demographic variables and post-test knowledge scores of AWWs.
Ho1
There is no significant difference between the pre-test and post-test knowledge scores of AWWs regarding malnutrition among children below 5 years of age. The paired 't' test was calculated to analyse the difference in knowledge of AWWs between pre-test and post-test in various areas of malnutrition.
Ho2
There is no significant association between the demographic variables of AWWs and their knowledge scores in the pre-test.
Chi-square was calculated to analyse the association of demographic variables with the post-test knowledge scores of AWWs regarding malnutrition, to determine the effectiveness of the module with regard to the demographic variables of the sample.
Discussion
Worldwide, the major issue of malnutrition is noted in school-going children. It is commonly noted that malnutrition in children pervades all aspects of their health, growth, and cognitive and social development, and can lead to irreversible and lifelong effects. In India especially, undernutrition among children is one of the greatest problems. To date, even after a phase of technical development, the country is still struggling with this problem. Malnutrition, the condition resulting from faulty nutrition, weakens the immune system and causes significant growth and cognitive delay. However, this might be associated with the high knowledge mean score of the AWWs. In the area 'management of malnutrition', the pre-test knowledge mean score was 2.95 ± 1.4 (29.46%), whereas the post-test mean knowledge score was 7.36 ± 2.12 (73.62%), showing an increase of 44.16% in the knowledge mean score of AWWs. Effectiveness of the nurse-led intervention with regard to factors related to malnutrition revealed good improvement in the scores: 51.54% effectiveness was observed for the item 'worm infestation is a factor related to malnutrition'; 43.08% effectiveness was noticed for 'dietary deficiency is an important factor'; 35.38% for 'continuous deficient consumption of body-building foods is a factor of stunted growth'; and 33.08% improvement was seen in the area 'deficient consumption of energy-yielding foods is a factor of marasmus'. However, the effectiveness was low (26.15%) for the item 'continuous deficient consumption of body-building foods is a factor of stunted growth'. This might be associated with the knowledge of Anganwadi workers during the pre-test: regarding 'initiating breast feeding immediately after child birth', 98% of AWWs responded correctly, whereas 47% of AWWs responded correctly to 'a child can be given solid foods at the age of 8-9 months' and 34% of AWWs responded correctly to the item 'starting weaning at 3-4 months'. This reveals that most AWWs had knowledge of breast feeding, but knowledge was lacking on weaning. 52% of Anganwadi workers responded correctly to the item 'egg is essential to maintain growth of the body'; 51% to 'fruits are essential to protect the body from diseases'; 50% to 'cereals yield energy for body activities'; 49% to 'pulses help to maintain growth of the body'; 41% to 'vegetables are required to protect the body from diseases'; and 40% to 'milk helps in bone development of the child'.
Conclusion
From the results of the study, the investigator concluded that around 53% of AWWs were aged between 36-45 years, 76% of them were educated up to the higher secondary course, 50% of AWWs had 11-15 years of service experience, and 99% of the Anganwadi workers had attended a refresher course related to malnutrition in under-five-year children. Further, the Anganwadi workers did not have good knowledge in most areas of malnutrition, although knowledge was excellent in some cases (16%) in factors related to malnutrition. A nurse-led intervention (PPT) was prepared and its effectiveness was evaluated by post-test. The mean knowledge score improved from 19.7 ± 3.9 to 36.24 ± 4.73 after implementation of the nurse-led intervention. A highly significant difference was found between pre- and post-test knowledge scores of AWWs in all areas of malnutrition, and the module appears to be effective for all groups of AWWs except with regard to years of experience. | 2020-03-19T19:49:20.755Z | 2019-07-01T00:00:00.000 | {
"year": 2019,
"sha1": "f5c8df02d179706e6096cdc406978e2711d38ae0",
"oa_license": null,
"oa_url": "https://www.nursingjournal.net/article/view/70/2-2-22",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "6ef56f6e906858b09d85af0f2c384cbfbbfdb12d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
56233324 | pes2o/s2orc | v3-fos-license | The Healthy Supermarket, an Integrated Model for Prevention and Management of Obesity: Translational Challenges in Real-world In-store Setting in Urban Southern India
The issue of obesity is on the rise, especially in Chennai, an urban metropolitan city of Southern India. The effects of globalization and westernization, coupled with the unique Asian Indian phenotype, have worsened the scenario. Increasingly, public health strategies focus on environmental determinants of overweight and obesity, an appropriate area of intervention. This has warranted an innovative approach that combines population- and individual-based prevention strategies to mitigate the obesity epidemic. This article presents such an innovation in nutritional education via the 'healthy supermarket' model. Though there are supermarkets that propagate healthy food, none has been effective in promoting awareness of healthy eating, predominantly because highly processed food is sold along with healthy food. This model, however, used a colour-coded method, along with additional labeling of ingredients, in-house nutritionists and posters for education and information. The main barriers faced related to the suppliers, manufacturers, nutritionists and consumers, as well as the fiscal aspects of sustaining such a supermarket. The upkeep of such a concept requires a multifaceted approach: support from the public health authorities; the development of specialties within the occupations of nutritionist and dietitian, recognizing obesity as a specialist area of dietetic practice; and the promotion of social entrepreneurship. To conclude, the strengths of this education model include its real-life setting; the new supermarket design and similar concepts can be set up, or existing ones remodeled, with a sustainable business model for policy change. In order to observe the success of this model on chronic-disease epidemiology, it needs to be replicated in different areas of Southern India and other regions with the concept of 'the food pharmacy'.
Introduction
An enormous increase in the onset of metabolic diseases has overburdened the public health of advanced and developing nations alike. India shoulders the burden of both undernutrition and overnutrition in the same population across the life course. Indeed, Indians have a unique phenotype that makes them susceptible to chronic diseases at a younger age and lower body mass index, owing to the thrifty phenotype and rapid westernization [1-3]. Currently, 20% [4] of all adults and 11% of all children are obese [5], with an upsurge of plus-size stores at stupendous rates in the retail market [6] depicting the rise of obesity and demanding increased focus on its prevention and management [7,8]. Most traditional education-based nutrition interventions that have targeted individual determinants of dietary intake are unlikely to be effective today in the absence of broader structural support [9]. Public health policy focus [10] has identified the strong environmental determinants of overweight and obesity as an appropriate area for intervention in which an evidence base exists to guide action [11].
In India, public health strategies designed to tackle behavioral risk factors for chronic diseases have largely taken the form of awareness programs and fiscal and regulatory measures [12,13]. To our knowledge, there is no single population-based strategy that focuses on a balanced approach to improving nutrition security by ensuring the availability and consumption of a nutritionally appropriate diet at all stages of life. The supermarket model is one such strategy that may help individuals make healthier choices about what to eat and may be associated with better health outcomes [14]. Supermarkets and grocery stores, the primary locations for food purchases, are receiving increased attention. Yet in-store marketing to promote healthful eating has been studied in controlled laboratory and field experiments and observations [15] but, to our knowledge, has rarely been tested in real-world in-store settings, where the real challenge exists. The present article therefore discusses the challenges involved in setting up 'The Healthy Supermarket' in the real world for the prevention and management of obesity in urban Chennai, Southern India.
Designing 'The Healthy Supermarket Model' Concept
The supermarket was designed based on available scientific research-based evidence from the Indian context [10,12,16] and, in its absence, on best practices from developed nations, including systematic studies and public health policies [17,18] published by the World Health Organization (WHO), the Food and Agricultural Organization (FAO) and the Centers for Disease Control and Prevention (CDC) [19] across the world [20], adopted and customized to fit Indian culture. A previous systematic review [19] concluded that front-of-pack labels, especially 'traffic light' labels, were the most liked and most readily understood by consumers, with effects on sales and consumption [21,22]. Further, the impact of nutrient lists and interpretive labels, health-related printed materials distributed [23], food demonstrations, menu signage [24], Glycemic Index (GI) choices, recipe cards, placards on shelves near target foods, fact sheets [25,26], nutrition education tours given by nutritionists [27,28], shelf labels, cooking demonstrations [29-33], and culturally relevant guidelines shown to be effective in previous intervention models were all considered while designing the supermarket.
Goals of 'The healthy Supermarket'
'The Healthy Supermarket' commenced with the broad objective of modifying consumer behavior towards healthy food choices, thereby aiding in the prevention and management of obesity for the population of Chennai (Southern India). This model integrated behavioral change in reference to consumers' food choices based on nutrition education. 'The Healthy Supermarket' was intended for its consumers to: a) increase the access, availability and consumption of healthy foods; b) increase consumer awareness and knowledge of nutritional claims and labeling facts; and c) improve self-efficacy and modify behavior regarding healthy food choices and preparation. The specific goals of nutrition education were to: a) create awareness of different food groups and the importance of functional foods; b) inform on major nutrient composition and the importance and requirements of micro- and macronutrients; c) emphasize the importance of the quality of food; d) educate on the diseases associated with excess or deficient nutrient intake; e) inform on food marketing strategies, labeling and the influence of regulation on consumers' choice of food; and f) educate on spending on food versus medicine.
Framework of Nutrition Education Program in 'The Healthy Supermarket'
The nutrition education program delivered through 'The Healthy Supermarket' model contains variations that make this supermarket and its dissemination of nutrition education unique. The concept involved: A) A 'food philosophy' under which the supermarket did not stock foods whose very first listed ingredient was trans-fat, high levels of artificial colour and preservatives, high levels of glucose or fructose, corn syrup, sugar, or refined flour (maida or white flour). The philosophy was made more stringent by adhering to the United States (US) Food and Drug Administration (FDA) regulations, updated periodically as per future research [34]. B) A full-time in-house supermarket nutritionist. C) Colour-coded racks to segregate foods based on their level of processing (Figures 1-6), with green depicting the least processed products, including foods from the basic five groups (e.g. cereals, pseudo-cereals, millets) and functional foods (psyllium husk, garlic and fenugreek seeds).
Blue was used for semi-processed products and orange for highly processed foods. D) An additional labeling strategy on the products and shelves, shelf tags and small posters: i) star indicators, with blue stars for high fiber, red for high sugar, yellow for high sodium and green for whole grains, where the number of stars depended on the serving size; ii) Recommended Dietary Allowances (RDA) [35] for all possible food groups and nutrients; iii) posters, e.g. on oats reducing LDL cholesterol levels, with the amount specified by the US FDA [34,36], portion tools (a teaspoon on display), and recommended versus actual intake (per recent studies in Chennai) [37], along with recipe booklets. E) Models of GI, the healthy-eating food pyramid (Harvard Nutrition Pyramid, culturally modified based on available evidence) [38] and portion-size tools. F) A booklet on obesity enumerating the facts, contributing factors, preventive methods, myths and healthier alternative recipes for popular restaurant foods. G) Recipe sheets on healthy food preparation, practical tips on how to eat in moderation (by consuming less fat and more fruits and vegetables), material highlighting the importance of low-fat dairy, and cookery demonstrations on the effects of food processing on digestion, all part of the healthy supermarket and a kitchen with a live demonstration setup for the prevention and management of chronic diseases such as diabetes. Further, organic foods were not highlighted due to the lack of evidence over their inorganic counterparts, their expense, their lower appeal to consumers in terms of shelf life, and low awareness. In addition, for foods that carried a special claim, manufacturers and suppliers were requested to produce evidence.
Barriers to Implementation
The healthy supermarket, like any novel venture, encountered several barriers to implementation from a number of stakeholders, including consumers, manufacturers and suppliers, nutritionists and government.
Designing of Healthy Supermarket
The food environment was designed after careful review of local (India, Southern India, Chennai) health behavior, nutrition status and chronic-disease prevalence, along with demographic and socio-economic data, customer perceptions and policy data [39]. Yet there were several challenges involved in translating the evidence. Firstly, one of the most significant hurdles was a lack of nutrient information for cooked foods, which made it challenging to translate the dietary guidelines into more realistic portions [16,40]. Secondly, there was a lack of culture-based standardized servings, portion tools and a food atlas. Thirdly, there was a lack of standardized food and nutrition labels across food products. Fourthly, few food-environment intervention studies had been conducted previously, unlike in Western countries, which further limited our model. Fifthly, the availability and accessibility of many healthier foods in the local area was challenging, and importing added to the costs, so affordability was questioned. Although there was a fair amount of support from consumers, there was a lot of criticism and resistance from suppliers, nutritionists and retailers.
Food Manufacturers and Suppliers
Although the Food Safety and Standards Authority of India (FSSAI) guidelines [41,42] require nutritional facts on the label, along with energy and other nutrients, it was observed that the majority of products lacked such information. Misleading information, such as 'cholesterol free' or 'no sugar' claims without sufficient evidence, posed a challenge for nutritionists in educating the consumer. Although the products were shelved per the colour code and nutrition information, there was recurring pressure from food suppliers to display their products according to a visual merchandising approach [43] (e.g. fast-moving goods in front, such as low-fat chips). Suppliers refused to supply in small quantities and, moreover, demanded the procurement of all their items of supply, including foods against the 'food philosophy', which therefore led to the termination of healthy foods procured from them. The short minimum shelf life of less processed foods posed a threat until they were purchased by consumers, resulting in frequent returns to reluctant suppliers.
General Public and Consumers
It was observed that the major sources of nutrition information were advertisements in print and television media and physicians' suggestions, surpassing the nutritionist's role. On the other hand, consumers are prejudiced and often caught up in quick-fix solutions to lose weight or lower cholesterol and blood pressure, owing to misinformation about nutrition and the propaganda of solutions through the media. Obesity is yet to be viewed as a public threat that leads to the onset of recognized diseases such as diabetes, heart disease and hypertension. Even when the nutritionist explained its results and effects in the long term, consumers were reluctant to consider these facts. Further, from the consumer's perspective, the model was viewed more as an exclusive store due to the non-availability
Dieticians and Nutritionists
A large amount of money is spent on high-quality, sophisticated research to invent new and effective procedures and models, with the objective of helping healthcare professionals provide the best possible care. Yet it was found that nutritionists were unable to translate the evidence-based practice (EBP) model to the community, including the management of manufacturers and suppliers, checking the nutrient information of foods in terms of labeling and misinformation, and relating it to its effect on health. The majority relied on experience and opinion while educating consumers, and making a shift from a traditional emphasis on authoritative opinion to an emphasis on data extracted from prior research studies was a huge challenge. Several reasons, such as lack of time, limited theoretical (nutrition) and IT skills, lack of searching skills, insufficient evidence, lack of autonomy to change practice, and resistance (or a negative attitude) towards EBP, further limited the decision-making skills of the recruited nutritionists.
Government Initiatives
Nutritional education through this model was far-reaching and applauded by the consumers; still, financial viability was not promising, as its very sustenance was challenging due to the above-mentioned factors. Population-based public health initiatives are still at the groundwork stage with reference to obesity, especially when models are translated to real-life settings or into viable business models. An important challenge was to obtain a loan and a loan repayment facility, especially with consideration to supporting and promoting women entrepreneurs. It is increasingly recognized that environmental programs play an important role; in particular, fiscal policies that reduce barriers to or increase opportunities for healthy choices, such as subsidies, or taxation of certain fatty/sugary foods, have received considerable attention as a means to achieve population-wide behavioral change [44]. However, few subsidies are provided by the government for the pricing of healthier food products. Moreover, the convenience of obtaining unhealthy food products at slashed rates, or their smaller versions at prices affordable even to the low-income population, has promoted the choice of such unhealthy foods.
Summary
The 'healthy supermarket', while a novel idea, requires much support from governing bodies in addition to cooperation from manufacturers and food suppliers. Yet, to mitigate the obesity epidemic, the challenges need to be addressed through a multi-sectoral response involving the media and celebrity promotions, consumers, public- and private-sector healthcare professionals and the non-government sector. Further, there is a need for specialties within the occupations of dietitian and nutritionist, such as recognizing obesity as a specialist area of dietetic practice, which would perhaps create more impact in mitigating the epidemic. The government needs to recognize social entrepreneurs, who uniquely blend for-profit best practices with a non-profit mission, as a distinctive way of tackling the obesity epidemic in India. To conclude, the strengths of this education model include its real-life setting; the new supermarket design and similar concepts can be set up, or existing ones remodeled, with a sustainable business model for policy change. In order to observe the success of this model on chronic-disease epidemiology, it needs to be replicated in different areas of Southern India and other regions with the concept of 'the food pharmacy'.
Figure 2: Health Kitchen for live demonstration
Figure 3: Sample of grocery form
Figure 6: Promotion of the Supermarket Model, published in a local newspaper | 2018-12-17T17:43:48.222Z | 2016-01-01T00:00:00.000 | {
"year": 2016,
"sha1": "cefc257e7f3c49eabcadf42548d99e08d9d0560e",
"oa_license": "CCBY",
"oa_url": "https://www.sciforschenonline.org/journals/nutrition-food/article-data/NFTOA-2-113/NFTOA-2-113.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "cefc257e7f3c49eabcadf42548d99e08d9d0560e",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Geography"
]
} |
250150186 | pes2o/s2orc | v3-fos-license | Structural Context of a Critical Exon of Spinal Muscular Atrophy Gene
Humans contain two nearly identical copies of Survival Motor Neuron genes, SMN1 and SMN2. Deletion or mutation of SMN1 causes spinal muscular atrophy (SMA), one of the leading genetic diseases associated with infant mortality. SMN2 is unable to compensate for the loss of SMN1 due to predominant exon 7 skipping, leading to the production of a truncated protein. Antisense oligonucleotide and small molecule-based strategies aimed at the restoration of SMN2 exon 7 inclusion are approved therapies of SMA. Many cis-elements and transacting factors have been implicated in regulation of SMN exon 7 splicing. Also, several structural elements, including those formed by a long-distance interaction, have been implicated in the modulation of SMN exon 7 splicing. Several of these structures have been confirmed by enzymatic and chemical structure-probing methods. Additional structures formed by inter-intronic interactions have been predicted by computational algorithms. SMN genes generate a vast repertoire of circular RNAs through inter-intronic secondary structures formed by inverted Alu repeats present in large number in SMN genes. Here, we review the structural context of the exonic and intronic cis-elements that promote or prevent exon 7 recognition. We discuss how structural rearrangements triggered by single nucleotide substitutions could bring drastic changes in SMN2 exon 7 splicing. We also propose potential mechanisms by which inter-intronic structures might impact the splicing outcomes.
INTRODUCTION
Survival Motor Neuron (SMN) protein is an essential housekeeping protein involved in multiple processes, including DNA replication and repair, transcription, pre-mRNA splicing, translation, macromolecular trafficking, stress granule formation, cell cycle regulation, signal transduction, and maintenance of cytoskeletal dynamics (Singh R. N. et al., 2017). Low levels of SMN due to deletion or mutation of the SMN1 gene cause spinal muscular atrophy (SMA), one of the leading genetic diseases associated with infant mortality (Wirth et al., 2020; Singh et al., 2021). SMN2, a nearly identical copy of SMN1, cannot compensate for the loss of SMN1 due to predominant skipping of exon 7 (Lefebvre et al., 1995; Cho and Dreyfuss 2010). Functions of SMN2 remain unknown, although the high demand for SMN in testis is partially met through an adult-specific switch in SMN2 exon 7 splicing. While not intensively investigated, low levels of SMN also affect male fertility in adults suffering from mild SMA (Ottesen et al., 2016; Lipnick et al., 2019). Two currently approved therapies for SMA are based on the restoration of SMN2 exon 7 inclusion (Singh et al., 2020b). Given that SMA is predominantly linked to defects in SMN1 and that SMN2, a nearly identical copy of SMN1, is "available" in SMA patients, the regulation of exon 7 splicing has been intensively investigated. Several cis-elements and trans-acting factors have been implicated in the regulation of SMN exon 7 splicing (Singh and Singh 2018) (Figure 1).
Splicing regulation is a complex process requiring precise definition of the splice sites (Shenasa and Hertel 2019). Recognition of the 5′ss by U1 snRNP is one of the earliest steps of assembly of the spliceosome that catalyzes the splicing reaction (Charenton et al., 2019). Once recruited, U1 snRNP can also define the upstream 3′ss through cross-exon interactions (De Conti et al., 2013). Similarly, U2 snRNP recruited at the 3′ss promotes recruitment of U1 snRNP at the downstream 5′ss through cross-exon interactions (De Conti et al., 2013). A critical C-to-T mutation at the 6th exonic position of exon 7 (C6U substitution in RNA) was found to be the primary cause of SMN2 exon 7 skipping (Monani et al., 1999). Being close to the 3′ss, C6U was assumed to weaken the 3′ss of the exon (Lim and Hertel 2001). Initially, it was proposed that C6U abrogates an enhancer associated with ASF/SF2 (Cartegni and Krainer 2002) (Figure 1). However, this claim was promptly challenged, and a competing hypothesis suggesting that C6U creates a silencer associated with hnRNP A1 was put forward (Kashima and Manley 2003) (Figure 1). Our earlier work showed that C6U strengthens an extended inhibitory context (Exinct) at the 3′ss of exon 7 (Singh et al., 2004a; Singh et al., 2004c) (Figure 1). Additional factors that interact with C6U or in its vicinity were subsequently identified (Pedrotti et al., 2010; Singh and Singh 2018) (Figure 1).
The inhibitory nature of C6U was independently validated by in vivo selection in which the relative significance of all 54 positions of SMN exon 7 was probed simultaneously (Singh et al., 2004b). Results of in vivo selection of exon 7 confirmed the presence of "Exinct" at the beginning of exon 7 and revealed two additional regulatory regions, termed the "Conserved tract" and the "3′-Cluster". Located in the middle of exon 7, the "Conserved tract" exerts a positive effect on exon 7 splicing. The "3′-Cluster" is located towards the end of exon 7 and exerts a negative effect on exon 7 splicing (Figure 1). The most surprising finding of in vivo selection was the suboptimal nature of the 5′ss of exon 7 (Singh et al., 2004b; Singh 2007). In humans, the last exonic position is in most cases represented by a G residue. This G residue base pairs with a C residue of U1 snRNA (Lund and Kjems 2002). In addition, during catalytic core formation this G residue forms a base pair with a C residue of U5 snRNP (Lund and Kjems 2002). In vivo selection of the entire exon 7 revealed that an A residue at the last position of exon 7 constitutes the most inhibitory nucleotide that contributes to exon 7 skipping. Consistently, an A-to-G mutation at the last position of exon 7 (A54G substitution) fully restored SMN2 exon 7 inclusion. A GA-rich enhancer in the middle of exon 7 has been found to be critical for SMN2 exon 7 inclusion (Hofmann et al., 2000; Hofmann and Wirth 2002; Young et al., 2002). The enhancer constitutes a binding site for Tra2β1 and its associated factors (Figure 1). Of note, the effect of the A54G substitution was so strong that it promoted SMN2 exon 7 inclusion even in the absence of this GA-rich enhancer (Singh et al., 2004b). A subsequent study confirmed that A54G destabilizes a terminal stem-loop structure (TSL2) and helps recruit U1 snRNP through extended base pairing with U1 snRNA at the 5′ss of exon 7. It should be noted that risdiplam also interacts with A54 and helps recruit U1 snRNP (Campagne et al., 2019). It is worth mentioning that recruitment of engineered U1 snRNAs to sequences located downstream of the 5′ss of exon 7 can also promote exon 7 inclusion (Singh RN. and Singh NN. 2019).
FIGURE 1 | Regulation of SMN exon 7 splicing. Diagrammatic representation of intronic and exonic cis-elements as well as trans-acting factors that modulate SMN exon 7 splicing. Upper-case letters signify exonic sequences; lower-case letters, intronic sequences. Exons and introns are also shown as colored boxes and lines, respectively. Numbering of nucleotides, neutral and positive, starts from the first exonic and intronic position, respectively. The 5′ and 3′ splice sites (5′ss and 3′ss) are indicated by arrows. Exinct, the conserved tract, and the 3′-Cluster are cis-elements revealed by in vivo selection (Singh et al., 2004b). Cr1 and Cr2 represent cryptic 5′ splice sites. Negative and positive regulators of exon 7 splicing are indicated by (−) and (+), respectively. Abbreviations: Exinct, extended inhibitory context; GCRS, GC-rich region; LDI, long-distance interaction; URC, uridine-rich clusters.
Several studies have focused on the role of regulatory elements within SMN intron 7, the last intron of SMN genes (Singh and Singh 2018) (Figure 1). The discovery of the 15-nucleotide-long intronic splicing silencer N1 (ISS-N1), spanning the region from the 10th to the 24th position of intron 7, revealed for the first time the strong inhibitory impact of an intronic element on SMN2 exon 7 splicing (Figure 1). Deletion or ASO-directed blocking of ISS-N1 fully restored SMN2 exon 7 inclusion (Singh et al., 2006). Subsequent studies confirmed that among many potential targets for an ASO-based therapy for SMA, ISS-N1 was the leading contender (Hua et al., 2008). Upon successful completion of clinical trials, the ISS-N1-targeting ASO nusinersen (commercial name: Spinraza) was approved as the first drug for the treatment of SMA (Bennett et al., 2019). ISS-N1 is a complex regulatory element that harbors two putative sites contacted by a single hnRNP A1 molecule (Beusch et al., 2017). The first five nucleotides of ISS-N1 also overlap with an upstream 8-nucleotide-long GC-rich motif, sequestration of which by a short ASO promoted SMN2 exon 7 inclusion and provided therapeutic benefits in mouse models of SMA (Singh et al., 2009; Keil et al., 2014). Another splicing silencer element located in intron 7 is created by an SMN2-specific A-to-G substitution at the 100th position of intron 7 (Kashima et al., 2007). Intron 7 also contains two positive regulatory elements downstream of ISS-N1 (Miyaso et al., 2003; Singh et al., 2011). One of these elements is a binding site for TIA1, which is known to stimulate recruitment of U1 snRNP at the 5′ss (Singh et al., 2011). We have demonstrated that TIA1 is indeed a modifier of SMA in a gender-specific manner. Here, we review the structural context of SMN exon 7 and its flanking introns 6 and 7. We focus on both probed and predicted secondary structures that have been implicated in the regulation of SMN exon 7 splicing. We also discuss the structural context of mutations that profoundly impact SMN exon 7 splicing.
Structure of SMN Exon 7
The secondary structure of SMN exon 7 and the downstream intron 7, probed by enzymatic and chemical methods, reveals the presence of two terminal stem-loop structures, TSL1 and TSL2 (Singh et al., 2004a; Singh et al., 2007; Singh et al., 2013) (Figure 2). Located at the 5′-end of exon 7, TSL1 sequesters several presumed cis-elements that define the 3′ss of exon 7. The SMN2-specific C6U is predicted to stabilize TSL1 by increasing the size of its stem. However, results of enzymatic structure probing showed that the U residue at the 6th exonic position is highly accessible. Hence, the six-nucleotide loop of TSL1 encompasses the UAGAC motif that may serve as a binding site for hnRNP A1, a negative regulator of SMN2 exon 7 splicing. A recent study in the context of telomerase RNA has shown that hnRNP A1 preferentially interacts with its motif presented in a loop (Liu et al., 2017). The findings of this study are instructive, as they highlight the viewpoint that structural context plays a pivotal role in deciding the "fate" of RNA-protein interactions in a living cell, where limited protein supply requires binding of a given protein to its best RNA target. Supporting the inhibitory nature of TSL1, mutations predicted to abrogate this secondary structure stimulated SMN2 exon 7 inclusion (Singh et al., 2004a). However, results of mutations should be interpreted with caution, as the stimulatory effect of a given mutation on splicing could be due to accidental creation of an enhancer element and/or abrogation of a silencer element.
TSL2 is one of the most scrutinized structures of exon 7. TSL2 is formed by sequences at the 3′-end of exon 7. Compared to TSL1, the structure of TSL2 is more rigid due to its longer stem. Importantly, TSL2 sequesters the first two intronic positions that define the 5′ss of exon 7 (Figure 2). In principle, U1 snRNA has the potential to form an 11-bp RNA:RNA duplex with the 5′ss of an exon, spanning the last three exonic and the first eight intronic positions (Lund and Kjems 2002). Engineered U1 snRNAs capable of forming a "perfect" 11-nucleotide-long duplex with the 5′ss of exon 7 or downstream sequences have been shown to promote SMN2 exon 7 inclusion (Singh et al., 2017a). In the case of SMN exon 7, U1 snRNP is predicted to form only 6 base pairs with the 5′ss of exon 7, making it suboptimal. Partial sequestration of the 5′ss of exon 7 by TSL2 imparts a strong inhibitory effect on splicing of SMN2 exon 7. Consistently, mutations that disrupted TSL2 promoted SMN2 exon 7 inclusion. Hence, the strong stimulatory effect of the A54G substitution revealed by in vivo selection of the entire exon 7 could be attributed, at least in part, to the disruption of TSL2 (Singh 2007). Confirming the inhibitory role of TSL2, compensatory mutations that reinstated TSL2 restored skipping of SMN2 exon 7. Interestingly, dinucleotide substitutions that strengthened TSL2 promoted exon 7 skipping, even in the context of SMN1 (Singh et al., 2004a). These findings support the idea that trans-factors interacting with splicing enhancers present within SMN1 exon 7 are insufficient to promote exon 7 inclusion in the context of a rigid secondary structure sequestering the 5′ss of exon 7. TSL1 and TSL2 are separated from each other by an internal stem, IS1, formed between sequences in the middle of exon 7 and the 3′-end of intron 6 (Figure 2). IS1 partially sequesters the polypyrimidine tract, with a potential negative impact on the recognition of the 3′ss of exon 7 by U2 snRNP. IS1 also sequesters the "Conserved tract", a positive regulator of exon 7 splicing (Singh et al., 2004b). Supporting the negative impact of IS1 on exon 7 splicing, substitutions at several positions within IS1 were selected for in our in vivo selection of the entire exon 7 (Singh et al., 2004b). In addition, stimulatory mutations in the 5′ strand of the TSL1 stem are also predicted to disrupt IS1 by forming alternative structures. We hypothesize that an abrogated IS1 might make interactions between positive trans-factor(s) and the "Conserved tract" possible, which in turn would lead to destabilization of TSL2 and improved recruitment of U1 snRNP at the 5′ss of exon 7. IS1 may also serve as an "anchor" for interactions with splicing-modulating small molecules.
FIGURE 2 | Local structure of SMN exon 7 and adjacent upstream/downstream intronic sequences. The existence of TSL2 and its effect on exon 7 splicing was also confirmed by mutational analysis. Intron 6 and intron 7 sequences are shown in lower-case green and blue letters, respectively. The exon 7 sequence is shown in upper-case black letters. Numbering of nucleotides, neutral, positive, and negative, starts from the first position of exon 7, the first position of intron 7, and the last position of intron 6, respectively. The splice sites of exon 7 are indicated by arrows. The IS1 structure is boxed. Abbreviations: IL, internal loop; IS, internal stem; TSL, terminal stem-loop.
FIGURE 3 | Secondary structure of SMN intron 7. The structure is based on combined probing by enzymatic and chemical methods. The exon 7/intron 7 junction as well as the 5′ss are indicated. Exon 8 is represented by a green box. Numbering of nucleotides, neutral and positive, starts from the first position of exon 7 and the first position of intron 7, respectively. Binding sites of hnRNP A1/A2 and TIA1 are highlighted in pink and green, respectively. The Element 2 sequence is highlighted in light blue. Abbreviations: ISTL, internal stem formed by a long-distance interaction; TSL, terminal stem-loop; ISS-N2, intronic splicing silencer 2.
Structure of SMN Intron 7
The secondary structure of intron 7 of SMN2, probed using SHAPE, revealed five terminal stem-loops (TSL3, TSL4, TSL5, TSL6, and TSL7) and three internal stems formed by long-distance interactions (ISTL1, ISTL2, and ISTL3) (Singh et al., 2013; Singh et al., 2015b) (Figure 3). Among the structures formed by intron 7, ISTL1 has been intensively investigated. ISTL1 is an 8-bp-long duplex that sequesters the remaining portion of the 5′ss of exon 7. The cytosine residue at the 10th intronic position (10C) is the last nucleotide of the 5′-strand of ISTL1. 10C also happens to be located at the first position of the 15-nucleotide-long ISS-N1 (Figure 3). The functional significance of ISTL1 was uncovered due to an unexpected finding that two 14-nucleotide ASOs, F14 and L14, produced opposite effects on SMN2 exon 7 splicing (Singh et al., 2010). While F14 promoted SMN2 exon 7 inclusion by sequestering the first 14 nucleotides of ISS-N1, L14 triggered SMN2 exon 7 skipping by sequestering the last 14 nucleotides of ISS-N1. The annealing positions of F14 and L14 differed by a single nucleotide, as F14 sequestered 10C and L14 did not. Subsequent experiments revealed that F14 and L14 destabilize and stabilize ISTL1, respectively (Singh et al., 2013). The finding that L14 triggers SMN2 exon 7 skipping by stabilizing ISTL1 shows that an ASO has the potential to drastically alter the structural context outside its annealing positions. The two strands of ISTL1 are separated from each other by 279 nucleotides. Supporting the inhibitory nature of ISTL1, mutations that disrupted ISTL1 promoted SMN2 exon 7 inclusion (Singh et al., 2013). Compensatory mutations that reinstated the disrupted ISTL1 restored its inhibitory effect. ISTL2 and ISTL3 are additional structures formed by long-distance interactions; they share a continuous 3′-strand with ISTL1. The continuous sequence encompassing the 3′-strands of ISTL1, ISTL2, and ISTL3 was termed ISS-N2 (Figure 3) (Singh et al., 2013). ASOs blocking different regions of ISS-N2 stimulate SMN2 exon 7 inclusion, supporting the inhibitory effect of ISTL1, ISTL2, and ISTL3 on SMN2 exon 7 splicing. The stimulatory effect of an ISS-N2-targeting ASO was maximal when the ASO sequestered the 3′-strand of ISTL1. An ISS-N2-targeting ASO also showed therapeutic benefit in a mouse model of SMA. These results underscore that deep intronic sequences associated with RNA structure could be exploited for therapeutic purposes.
The structural context of intron 7 has significance for a better understanding of RNA-protein interactions and their role in SMN2 exon 7 splicing (Singh et al., 2015b). For instance, TSL3 and ISTL2 sequester the binding sites of TIA1, which is known to stimulate SMN2 exon 7 inclusion (Singh et al., 2011). One of the two putative binding motifs of hnRNP A1 present within ISS-N1 is located within a loop (Figure 3). Based on the structural context, we hypothesize that the strong binding site for hnRNP A1 presented in the loop of the stem-loop structure enables more efficient recruitment of this protein, which in turn renders the 5′ss of exon 7 inaccessible for the recruitment of U1 snRNP (Singh et al., 2015a). Furthermore, an ISS-N1-targeting ASO not only blocks the binding site of hnRNP A1 but also makes the TIA1 binding site "available" for interaction with TIA1 due to disruption of TSL3. Similarly, an ISS-N2-targeting ASO makes the U1 snRNP and TIA1 binding sites accessible by disrupting ISTL1 and ISTL2, respectively. Element 2, a positive regulator of SMN exon 7 splicing, is located downstream of the TIA1 binding site within intron 7 (Figures 1, 3) (Miyajima et al., 2002; Miyaso et al., 2003). The probed structure of intron 7 places Element 2 in both a structured region and a loop (Figure 3). Interestingly, the SMN2-specific A-to-G mutation at the 100th position of intron 7 falls within the stem region of Element 2 (Figure 3). Of note, the A-to-G mutation at the 100th intronic position has been suggested to create a binding site for hnRNP A1, a negative regulator of SMN exon 7 splicing (Kashima et al., 2007). At the same time, deletion of Element 2 (together with A100G) has been shown to have a negative effect on SMN exon 7 splicing (Miyaso et al., 2003). These seemingly contradictory findings could be explained by the presence of multiple overlapping cis-elements within Element 2, a 66-nucleotide-long sequence (Miyaso et al., 2003). The strongest stimulatory effect associated with a region within Element 2 corresponds to TSL4, which harbors a U-rich loop (Figure 3). An alternative structure in this region is predicted to form a different stem-loop with an A-rich sequence within the loop (Figure 4) (Miyaso et al., 2003). However, the significance of these structures has not yet been investigated. Of note, the trans-factor(s) that interact with Element 2 remain unknown.
Inter-Intronic Structures
Folding algorithms, including mfold, RNAfold, and ScanFold, can predict secondary structures with high confidence (Wang et al., 2008; Rouse et al., 2022); notably, the latter program provides metrics and models describing base pairs with likely functionality (Andrews et al., 2018). Secondary structures of SMN exon 7 and the downstream intron 7 predicted by the above algorithms have generally agreed with the probed structures. Considering that the structure of SMN intron 6 has not yet been probed, one can gain valuable insights from its predicted secondary structure. SMN intron 6 harbors fourteen copies of Alu-like elements, some of which are present as inverted repeats (Ottesen et al., 2017). One of these Alu elements is used as an exon, although it is predominantly skipped and/or degraded via nonsense-mediated decay. Secondary structures formed by inverted Alu repeats within intron 6 have been implicated in the generation of SMN circRNAs (Ottesen et al., 2019; Ottesen and Singh 2020). At the same time, a large deletion encompassing all Alu elements of intron 6 was found to have no impact on splicing of SMN exon 7 (Singh et al., 2004a). However, sequence motifs close to the 3′-end of intron 6 have been shown to modulate SMN exon 7 splicing (Singh and Singh 2018). One such motif is Element 1, which imparts a negative impact on SMN exon 7 splicing (Miyajima et al., 2002). Deletion or ASO-mediated sequestration of Element 1 has been shown to promote SMN2 exon 7 inclusion (Miyajima et al., 2002; Osman et al., 2016). One of the predicted structures of exon 7 and its flanking intronic sequences places the middle of Element 1 in an internal stem formed with sequences of intron 7 (Figure 4A). Interestingly, the 3′-strand of this internal stem comes from a portion of Element 2. Hence, it is likely that this inter-intronic RNA:RNA duplex suppresses the positive effect of Element 2 on exon 7 inclusion.
Formation of an RNA:RNA duplex between two neighboring introns leads to looping out of an exon, which might trigger exon skipping. In addition to the inter-intronic structure that involves sequences of Elements 1 and 2, other predicted structures reveal additional RNA:RNA duplexes formed between introns 6 and 7.
One such structure is created by base pairing between a sequence located downstream of Element 1 and a sequence located in the middle of Element 2. Factors such as hnRNP A1/A2 have been implicated in exon skipping through a looping-out mechanism (Martinez-Contreras et al., 2006). Considering that inter-intronic interactions/structures have the potential to bring different hnRNP A1/A2 binding sites into close proximity, it is likely that skipping of SMN2 exon 7 is facilitated, at least in part, by a looping-out mechanism aided by inter-intronic RNA structures. An additional hypothesis would be that the inter-intronic RNA:RNA duplexes reinforce the neighboring intra-intronic structures that sequester the positive regulatory elements in the vicinity of the splice sites of exon 7. The "looping out" hypothesis and the "sequestration of splice sites due to RNA structure" hypothesis do not have to be mutually exclusive. The last fifty nucleotides of intron 6 harbor critical splicing regulatory elements, including the polypyrimidine tract and the branchpoint. The predicted secondary structure in this region places most of the polypyrimidine tract in internal stems. The precise location of the SMN intron 6 branchpoint has not yet been identified. Nonetheless, the UUUUAAC motif corresponding to the consensus mammalian branchpoint motif YNYURAY is locked in a predicted RNA:RNA duplex (Figure 4A) (Gao et al., 2008). In an alternative predicted structure, the UUUUAAC motif is placed in the loop region, while the positioning of ISS-N1 and the TIA1-binding sites is drastically changed (Gao et al., 2022) (Figure 4B). In this alternative structure, the first half of ISS-N1 interacts with exon 7 and blocks a portion of the "Conserved tract", the positive regulator of exon 7 inclusion (Figure 4B). Mutations within the 5′ half of ISS-N1 are known to promote exon 7 inclusion (Supplementary Figure S1). Future experiments will reveal whether the stimulatory effect of these mutations is linked, at least in part, to the RNA structure. Such folding predictions can be obtained with standard tools, as sketched below.
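As a concrete illustration of how these predictions are generated in practice, the sketch below uses the ViennaRNA Python bindings (the engine behind RNAfold) to compute a minimum free energy structure and base-pair probabilities. The sequence shown is a placeholder fragment, not an actual SMN sequence, and the call pattern is a generic illustration rather than the workflow used in the cited studies.

```python
# Minimal sketch: predicting a local secondary structure with the
# ViennaRNA Python bindings. The sequence below is a placeholder,
# not an SMN fragment.
import RNA

seq = "GGAUUUUUGUCUGAAACCCUGUAAGGAAAAUCC"  # placeholder RNA

# Minimum free energy (MFE) structure in dot-bracket notation
structure, mfe = RNA.fold(seq)
print(f"{seq}\n{structure}  ({mfe:.2f} kcal/mol)")

# Base-pair probabilities from the partition function indicate how
# firmly a motif (e.g., a branchpoint-like UUUUAAC) is locked in a stem.
fc = RNA.fold_compound(seq)
fc.pf()
bpp = fc.bpp()  # upper-triangular matrix of pair probabilities
```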
Assessing Propensity for Secondary Structure Formation
To assess the potential for additional functional RNA structural elements, we performed preliminary scans of the pre-mRNA sequence of SMN2 using the ScanFold tool (Andrews et al., 2018) (Figure 5A). Within SMN2, ScanFold identified 84 significantly stable structures (containing base pairs with a < −2 average z-score), suggesting that these structures have a functional role to play (e.g., the SMN2 intronic structure in Figure 5B). Only two of these 84 highlighted structures overlap exonic sequence, while the rest reside in introns. Notably, exon 7 is spanned by one of the significantly stable ScanFold-predicted exonic structures (Figure 5C), and the results recapitulate portions of the two previously determined hairpin structures, TSL1 and TSL2 (Singh et al., 2013).
Interestingly, the region of SMN2 with the lowest ΔG z-score also had the lowest (most negative) minimum free energy (MFE), which is not always the case, as some relatively unstable (ΔG) regions may nevertheless show an ordered stability bias. Additionally, this region is adjacent to the region of highest (least stable) MFE predictions in SMN2 (Figure 5B). Notably, the region downstream of this identified structural motif also had the lowest ensemble diversity (ED). In this case, the lack of conformational diversity indicated by low ED suggests that the nucleotides here are likely to be single-stranded in most or all conformations, possibly to facilitate intermolecular interactions with regulatory trans-factors.
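For readers who want a feel for the z-score metric underlying these scans, the sketch below implements the basic idea: fold each window and compare its MFE against shuffled sequences of identical composition. This is an illustration of the metric only, not the ScanFold implementation, which uses its own randomization scheme and additionally reports per-base-pair z-scores and ensemble diversity; the window and shuffle counts are arbitrary choices.

```python
# Sketch of a ScanFold-style thermodynamic z-score scan: fold each
# window, then compare its MFE to MFEs of shuffled sequences of the
# same nucleotide composition. Illustrative only.
import random
import RNA

def window_zscore(seq: str, n_shuffles: int = 30) -> float:
    _, mfe = RNA.fold(seq)
    shuffled_mfes = []
    for _ in range(n_shuffles):
        chars = list(seq)
        random.shuffle(chars)            # mononucleotide shuffle
        shuffled_mfes.append(RNA.fold("".join(chars))[1])
    mean = sum(shuffled_mfes) / n_shuffles
    sd = (sum((m - mean) ** 2 for m in shuffled_mfes) / n_shuffles) ** 0.5
    return (mfe - mean) / sd if sd > 0 else 0.0

def scan(seq: str, window: int = 120, step: int = 10):
    for i in range(0, len(seq) - window + 1, step):
        yield i, window_zscore(seq[i:i + window])

# Windows with z-scores below about -2 would be flagged as candidates
# for ordered, potentially functional structure.
```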
We also employed the ScanFold program to examine the local structural context of the 5-nucleotide motifs occupying the 11th to 15th positions of SMN2 intron 7. This region falls within the ISS-N1 sequence and has recently been intensively investigated (Gao et al., 2022). One of the putative hnRNP A1 motifs, CAGCA, happens to occupy the 11th to 15th positions of SMN2 intron 7 (Hua et al., 2008). The local structure predicted by ScanFold places the last four residues of the inhibitory CAGCA motif within a stem (Supplementary Figure S1). Six mutants that promoted SMN2 exon 7 inclusion by abrogating the CAGCA motif bring noticeable changes in the local structural context (Gao et al., 2022) (Supplementary Figure S1). The stimulatory effect of these mutations could be due to both abrogation of the negative hnRNP A1 motif and creation of a novel positive motif with enhanced accessibility in the newly formed loop. We also observed changes in the local structural context in the case of other mutants that maintained the inhibitory context while abrogating the putative motif associated with hnRNP A1 (Supplementary Figure S1). Interestingly, a single U-to-C substitution (from UUUUU to UUUCU) transformed a stimulatory motif into an inhibitory motif (Gao et al., 2022). This change retains the positioning of these 5-nucleotide motifs within the loop (Supplementary Figure S1). These results suggest that both positive and negative regulatory trans-factors preferentially interact with their cognate binding sites in single-stranded regions. Future studies will reveal whether additional features of higher-order structure provide secondary contacts for RNA-protein interactions. Of note, the role of secondary contacts in enhancing affinity has been reported for a factor known to promote self-splicing of a group II intron (Wank et al., 1999; Singh et al., 2002).
The ScanFold program identified 84 regions with significant thermodynamic stability throughout the pre-mRNA of SMN2. These regions have an ordered sequence arrangement, which displays higher-than-expected stability compared to randomized sequences of identical nucleotide composition. SMN2 contains 42 Alu elements located within its intronic sequences, which represent 39% of the SMN2 gene (Ottesen et al., 2017). The huge repertoire of circRNAs produced by SMN genes is attributed to the RNA duplexes formed between inverted Alu repeats (Ottesen et al., 2019). Of the 84 significantly stable regions identified by ScanFold, 54 overlapped Alu elements, including 47 regions that are fully contained within an Alu element. Of the 42 Alu elements located within the SMN2 pre-mRNA, 30 overlapped regions of significant stability. These regions of ordered sequence and enhanced stability are prime candidates for future investigations of structure-function mechanisms associated with Alu elements present in SMN2. Overall, these preliminary results strongly indicate that RNA secondary structure may be playing functional roles, most significantly in the processing of the pre-mRNA into mature transcript(s).
CONCLUDING REMARKS
The role of RNA structure in pre-mRNA splicing is a topic of growing significance. Local RNA structures are formed instantaneously as soon as a transcript emerges from the RNA polymerase. Initial RNA structures transition to more favorable structures formed by long-range interactions. However, the assessment of structural transitions in the cell remains a challenging task: for example, RNA helicases break certain secondary structures with high specificity. Protein factors tightly interacting with RNA may also impede the transition from one RNA structure into another. Potentially hundreds of proteins could be recruited during the removal of a single intron. Some of these RNA-protein interactions are expected to be non-specific due to the propensity of negatively charged RNA molecules to form electrostatic interactions with positively charged (basic) residues presented by proteins. In contrast, other RNA-protein interactions rely on specific RNA motifs. RNA structures may provide specificity by placing a given motif into a unique context. A previous report uncovered the structural context of several splicing factors that were initially thought to have a preference for short linear motifs (Dominguez et al., 2018). Findings summarized in this study represent the tip of the iceberg, as methods to uncover genome-wide RNA-protein interactions in the cell are still in their infancy.
The two steps of transesterification involved in the removal of every intron during pre-mRNA splicing are RNA-catalyzed reactions resembling those of the self-splicing group II introns (Smathers and Robart 2019). However, unlike self-splicing group II introns, where an intronic structure alone brings and holds the splice sites together, the role of intronic sequences in pre-mRNA splicing has been assigned to recruiting snRNPs that bring and hold the splice sites together. Owing to the diverse nature of intronic sequences, the mechanism of snRNP recruitment differs from one intron to another. Currently, there is no explanation for why certain introns are removed more efficiently than others despite the comparable strength of their splice sites. The answer to this question may partly lie in RNA structures that sequester the splice sites and/or fold in a manner that brings the splice sites of an intron into close proximity. The generation of circRNAs provides the most convincing example of how RNA structures bring a downstream 5′ss to an upstream 3′ss for backsplicing. The process of circRNA generation competes with the forward splicing that produces the linear transcripts.
Aberrant splicing is associated with many genetic disorders. SMA happens to be one of the model diseases in which modulation of splicing employing ASOs and small molecules has conferred therapeutic benefits. The therapeutic compounds utilized in treating SMA were selected based on their ability to restore SMN2 exon 7 inclusion. Several cis-elements and trans-acting factors have been implicated in the regulation of SMN exon 7 splicing. Probed RNA structures place the 5′ss of SMN exon 7 in a highly sequestered structural context encompassing TSL2 and ISTL1. TSL2 and ISTL1 represent examples of an inhibitory local stem-loop structure and an inhibitory structure formed by long-distance interactions, respectively. The formation of TSL2 and ISTL1 is proposed to provide a platform for the recruitment of negative regulatory factors, including hnRNP A1/A2. Another inhibitory structure, TSL1, formed at the 3′ss of SMN2 exon 7, shifts the hnRNP A1/A2 binding site into the loop. An additional hnRNP A1/A2 binding site is located in the partially structured region of SMN2 intron 7. The overall structural context of SMN2 exon 7 and its flanking intronic sequences renders binding sites inaccessible to stimulatory trans-factors that recruit U1 and U2 snRNPs at the 5′ and 3′ ss of exon 7, respectively. Abrogation of a long-distance interaction similar to that of ISTL1 has been associated with the X-linked leukodystrophy Pelizaeus-Merzbacher disease (Taube et al., 2014). There is growing interest in predicting such structures, as they may offer novel therapeutic targets for a number of diseases (Bernat and Disney 2015; Pervouchine 2018). Although beyond the scope of this review, recent studies on the short- and long-range RNA-RNA interactome provide interesting insights into the replication of viruses, including SARS-CoV-2 (Smyth et al., 2018; Ziv et al., 2020). Future studies will reveal whether these structures could be exploited for therapeutic purposes.
The roles of inter-intronic structures are among the least appreciated aspects of SMN exon 7 splicing. Folding algorithms support the formation of structures between introns 6 and 7 with the potential to prime exon 7 for skipping if these structures are not immediately disrupted. One mechanism to avoid inter-intronic interactions is the fast removal of either intron 6 or intron 7 before the structures are formed. The order in which SMN introns are removed remains largely unknown. In general, large introns are removed later than smaller introns (Kim et al., 2017). Considering that SMN intron 6 is >13-fold larger than intron 7, its removal is expected to happen after the removal of intron 7. It is possible that a delayed removal of intron 7 further delays the removal of intron 6, leading to skipping of exon 7. We hypothesize that ASOs and small molecules that promote SMN2 exon 7 inclusion preferentially stimulate fast removal of intron 7 through strengthening of the 5′ss of exon 7. Future studies will determine whether the disruption of inter-intronic structures presents a therapeutic avenue for the treatment of SMA.
Available tools of RNA structure prediction and methods of RNA structure probing have provided important insights into our understanding of SMN exon 7 splicing. Yet, much remains to be learned about how RNA structures decide the fate of RNA-protein interactions in the context of SMN exon 7 splicing. Several mutations in intron 7 have recently been shown to restore SMN2 exon 7 inclusion. It will be interesting to see whether RNA structure provides a mechanistic basis to explain the consequences of these mutations. Forward splicing affects backsplicing, and introns used for backsplicing are generally spliced last (Kim et al., 2017). Considering that the 3′ss of exon 6 is used for backsplicing (Ottesen et al., 2019), it will be important to know how backsplicing-associated inter-intronic structures between intron 5 and downstream introns impact forward splicing events, including removal of introns 6 and 7. Splicing is coupled to transcription elongation, as many factors are recruited to the nascent pre-mRNA by RNA polymerase II before the termination of transcription (Saldi et al., 2016). The structure of a nascent RNA affects transcription elongation and vice versa (Saldi et al., 2016). Hence, compounds that promote SMN exon 7 splicing through RNA structures formed during transcription elongation may provide yet another avenue for SMA therapy. Despite tremendous progress, currently approved therapies do not fully meet the needs of SMA patients (Singh 2019). A proper understanding of the RNA structure of the SMN pre-mRNA will make a profound contribution to our understanding of the splicing regulation of SMN genes. Novel findings emerging from studies of the structure of the SMN pre-mRNA will also shape future therapeutic development for SMA and other diseases amenable to splicing modulation. | 2022-07-01T13:31:56.787Z | 2022-07-01T00:00:00.000 | {
"year": 2022,
"sha1": "5504fdc5921bf65ea4754bc1a1407cb13af0a08c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "5504fdc5921bf65ea4754bc1a1407cb13af0a08c",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253652664 | pes2o/s2orc | v3-fos-license | Bundle geodesic convolutional neural network for diffusion-weighted imaging segmentation
Abstract. Purpose Applying machine learning techniques to magnetic resonance diffusion-weighted imaging (DWI) data is challenging due to the size of individual data samples and the lack of labeled data. It is possible, though, to learn general patterns from a very limited amount of training data if we take advantage of the geometry of the DWI data. Therefore, we present a tissue classifier based on a Riemannian deep learning framework for single-shell DWI data. Approach The framework consists of three layers: a lifting layer that locally represents and convolves data on tangent spaces to produce a family of functions defined on the rotation groups of the tangent spaces, i.e., a (not necessarily continuous) function on a bundle of rotational functions on the manifold; a group convolution layer that convolves this function with rotation kernels to produce a family of local functions over each of the rotation groups; and a projection layer using maximization to collapse this local data to form manifold-based functions. Results Experiments show that our method performs on par with the state of the art while using far fewer parameters (<10% as many). We also conducted a model sensitivity analysis for our method: we ran experiments using a proportion (69.2%, 53.3%, and 29.4%) of the original training set and analyzed how much data the model needs for the task. Results show that this reduces the overall classification accuracy mildly, but it also boosts the accuracy for minority classes. Conclusions This work extended convolutional neural networks to Riemannian manifolds, and it shows potential for understanding structural patterns in the brain, as well as for aiding manual data annotation.
In this work, we develop convolutional networks on manifolds with some simple form of orientation invariance, and we take DWI as the main application. There is a series of proposals trying to generalize an R^2 convolutional neural network to curved spaces, yet in our case rotational invariance is a desirable property of the design, and our goal is to be able to understand spherical patterns up to rotations. We propose a general architecture for extracting and filtering local orientation information of data defined on a manifold that allows us to learn similar orientation structures that can appear at different locations on the manifold. Reasonable manifolds have local orientation structures: rotations on tangent spaces. Our architecture lifts data to these structures and performs local filtering on them, after which it collapses them back to obtain filtered features on the manifold. This provides both rotational invariance and flexibility in design, without having to resort to complex embeddings in Euclidean spaces. Our contributions in this work are as follows:
• Instead of using Fourier-type methods such as irreducible representations as is done in the literature, we directly perform convolution numerically on the surface, as is done in classical CNNs in image analysis, which is far more light-weight;
• We lift the spherical function locally with SO(2) actions instead of lifting it to the full SO(3) group as is usually done in the literature, which makes our method a more general construction, applicable to manifolds that are not spheres;
• We provide an explicit construction of the architecture for DWI data and show very promising results for this case, including learning and generalizing patterns of a dataset from only one scan.
This work is an extension of our previous publication. 1
Related Work
The importance of the extraction of rotationally invariant features beyond fractional anisotropy 2 has been recognized in a series of DWI works. Caruyer and Verma 3 developed invariant polynomials of spherical harmonic (SH) expansion coefficients and discussed their application in population studies. Schwab et al. 4 proposed a related construction using eigenvalue decomposition of SH operators. Novikov et al. 5 and Zucchelli et al. 6 argued their usefulness for understanding microstructure in relation to DWI. There is, though, a vast and growing literature on deep learning (DL) for non-flat data or for more complex group actions than just translations. Masci et al. 7 proposed an NN on surfaces that extracts local rotationally invariant features. A non-rotationally-invariant modification was proposed by Boscaini et al. 8 On the other hand, convolution generalizes to more group actions than just translation, and this has led to group-convolution neural networks for structures where these operations are supported, especially Lie groups themselves and their homogeneous spaces. [9][10][11][12][13][14][15] Global equivariance is often sought but has proved complicated or even elusive in many cases where the underlying geometry is nontrivial. 16 An elementary construction on a general manifold was proposed by Schonsheck et al. 17 via a fixed choice of geodesic paths used to transport filters between points on the manifold, ignoring the effects of path dependency (holonomy when paths are geodesics). The removal of this dependency can be obtained by summarizing local responses over local orientations, which is what was done by Masci et al. 7 On the other hand, Cohen et al. 18 lifted spherical functions to the 3D rotation group SO(3) and used a generalization of the Fourier transform on it to perform convolution. To explicitly deal with holonomy, Sommer and Bronstein 19 proposed a convolution construction on manifolds based on stochastic processes via the frame bundle, but it is, at this point, still very theoretical.
A number of works have applied DL to DWI as well, due to the unique structure of the data as orientation responses. Golkov et al. 20 built multilayer perceptrons in q-space for kurtosis and NODDI mappings. Wasserthal et al. 21 proposed a U-net-inspired structure for tract segmentation, while Sedlar et al. 22 proposed a spherical U-net for neurite orientation. To take into account the spherical structure of the DWI data and the homogeneous structure of the sphere, Chakraborty et al. 23 proposed a rotation-equivariant construction inspired by Cohen et al. 18 for disease classification. Müller et al. 24 propose 6D (3D space plus q-space) NNs with roto-translation/rotation equivariance properties.
In this work, we are interested in rotationally invariant features; thus, we take a path closer to Schonsheck et al. 17 and Masci et al. 7 We actually lift functions to functions on the bundle of tangent-space rotations of our manifold, a two-dimensional manifold, as opposed to Cohen et al., 18 where the lifting results in functions on SO(3), a three-dimensional manifold. Then, we add one or more extra local group convolution layers before summarizing the data and eliminating path dependency. The proposed construction thus applies to oriented Riemannian manifolds, and no other structure (e.g., homogeneous or symmetric space) is used.
Method
All along this section, our reference for Riemannian geometry is Do Carmo's classical book Riemannian Geometry. 25 CNNs are generally described and implemented in terms of correlation rather than convolution, and we follow this convention as well in this section. Bekkers et al. 14 used the fact that SE(2) acts on R^2 to lift 2D (vector-valued) images to R^2 × S^1 via correlation kernels. This is not, in general, possible when R^2 is replaced by a Riemannian manifold, where there is no obvious way to define these operations. One can, however, overcome this situation via a somewhat more complex construction. Therefore, we assume in the sequel that we are given a complete orientable Riemannian manifold M of dimension n; this will be the sphere S^2 in our case. We assume that the injectivity radius i(M) of M is strictly positive. As usual, the tangent space at a point x ∈ M is T_xM. An image is a function f = (f_1, …, f_{N_c}) ∈ L^2(M; R^{N_c}), where N_c is the number of channels.
Operations will be performed by lifting the function to tangent spaces, and kernels are defined on tangent spaces. The exponential map Exp_x : T_xM → M sends a tangent vector v ∈ T_xM to the endpoint of the geodesic of length |v| starting at x in the direction of v; it is a diffeomorphism from the ball B_x(0, r), r ≤ i(M), onto the geodesic ball B(x, r) ⊂ M.
Lifting layer
We first define transportable filters on tangent spaces to replace CNN kernels. These filters will also be called kernels. To start with, a "pointed kernel" will be a function k = (k_1, …, k_{N_c}) ∈ L^2(T_{x_0}M; R^{N_c}) at a "base point" x_0. We assume that Supp(k) ⊂ B_{x_0}(0, r), 0 < r ≤ i(M), the ball of center 0 and radius r in T_{x_0}M. A piecewise smooth path γ : [0, 1] → M joining x_0 to x defines, via the Levi-Civita connection of M, a parallel transport P_γ : T_{x_0}M → T_xM, and this is an isometry. We set k_γ ≡ k ∘ P_γ^{-1}. In general, another smooth path δ : [0, 1] → M joining x_0 and x defines another parallel transport P_δ : T_{x_0}M → T_xM, and P_γ ∘ P_δ^{-1} is a rotation R of T_xM, i.e., an element of SO(T_xM). It follows that k_δ = k_γ ∘ R. The γ-lift of f by k is the function

F_γ(x, S) = ∫_{T_xM} f(Exp_x(v)) k_γ(S^{-1}v) dv,  S ∈ SO(T_xM).  (1)

Note that because Supp(k) ⊂ B_{x_0}(0, r), this integral is defined on B_x(0, r), and Exp_x is a diffeomorphism from this domain to the geodesic ball B(x, r) ⊂ M. Now we choose, for each x in M, a smooth path γ_x that joins x_0 and x. As M is complete, we can, for instance, choose a family Γ = (γ_x)_x of minimizing geodesics. The mapping

F : x ↦ F_{γ_x}(x, ·)  (2)

lifts an M-image to the bundle of rotations of M (we refer to Gallier et al., 26 chap. 9, for a definition of bundles in differential geometry), denoted by SO(TM) in the sequel (the fiber of SO(TM) over x is SO(T_xM)).
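The two geometric primitives this construction needs on S^2 admit closed forms. The sketch below, written by us for illustration and not taken from any released code, implements the exponential map and parallel transport along minimizing geodesics in NumPy; transport is undefined for antipodal points, mirroring the non-uniqueness discussed in the next subsection.

```python
# Exponential map and parallel transport on the unit sphere S^2,
# the two primitives used to place a pointed kernel at any point x.
import numpy as np

def exp_map(x, v):
    """Exp_x(v): endpoint of the geodesic from x with initial velocity v."""
    theta = np.linalg.norm(v)
    if theta < 1e-12:
        return x
    return np.cos(theta) * x + np.sin(theta) * (v / theta)

def parallel_transport(x0, x, v):
    """Transport tangent vector v at x0 to x along the minimizing geodesic."""
    c = np.clip(np.dot(x0, x), -1.0, 1.0)
    assert c > -1 + 1e-9, "transport undefined for antipodal points"
    theta = np.arccos(c)
    if theta < 1e-12:
        return v
    u = x - c * x0
    u = u / np.linalg.norm(u)       # unit tangent at x0 pointing toward x
    b = np.dot(v, u)                # component along the geodesic
    v_perp = v - b * u              # component normal to the geodesic plane
    return v_perp + b * (np.cos(theta) * u - np.sin(theta) * x0)

x0 = np.array([0.0, 0.0, 1.0])
v = np.array([0.1, 0.2, 0.0])       # tangent at x0
x = exp_map(x0, v)
w = parallel_transport(x0, x, v)
assert np.isclose(np.dot(w, x), 0.0)                      # stays tangent
assert np.isclose(np.linalg.norm(w), np.linalg.norm(v))   # isometry
```

Together these give k_γ = k ∘ P_γ^{-1}: a kernel sampled on T_{x_0}M can be evaluated on the tangent space at any non-antipodal x.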
Group correlation layer
The object F defined in Eq. (2) is a function on the total space of the bundle SO(TM) (Gallier et al., 26 chap. 9), supposed square-integrable (F ∈ L^2(SO(TM))). The situation is more complex than the one described in Bekkers et al., 14 as there is actually no reason that one can find a "continuous family" of paths x_0 ↝ x, ∀ x ∈ M. An example important to us: if M is the sphere S^2, one can take γ_x to be a minimizing geodesic between x_0 and x. It is unique, except when x = −x_0, where there are infinitely many of them.
Let K be an element of L^2(SO(T_{x_0}M)). The parallel transport of K along the path γ is obtained through the rotation induced by P_γ, namely K_γ(S) = K(P_γ^{-1} ∘ S ∘ P_γ), and the group correlation of F(x) with K_γ is

(F(x) ⋆ K_γ)(S) = ∫_{SO(T_xM)} F(x)(R) K_γ(S^{-1} ∘ R) dR,  (4)

with dR the bi-invariant Haar measure on SO(T_xM). In general, we consider objects that are a bit more complicated. Instead of F being a section of L^2(SO(TM)), it is taken as a section of L^2(SO(TM))^{N_l}, meaning we have N_l channels, F(x) = (F(x)_1, …, F(x)_{N_l}) ∈ L^2(SO(T_xM); R^{N_l}), and K also has N_l channels, K = (K_1, …, K_{N_l}) ∈ L^2(SO(T_{x_0}M); R^{N_l}), and we replace Eq. (4) with

(F(x) ⋆ K_{γ_x})(S) = Σ_{c=1}^{N_l} ∫_{SO(T_xM)} F(x)_c(R) (K_c)_{γ_x}(S^{-1} ∘ R) dR.  (5)

The group correlation layer at level l takes a section F of L^2(SO(TM))^{N_l} and uses N_{l+1} kernels to produce a section of L^2(SO(TM))^{N_{l+1}}, one output channel per kernel.
Fig. 1 (left) The function is first mapped onto the tangent space of the point of interest via the exponential map, and the kernel κ^(2) is convolved with the mapped function to get F_2. Group correlation is then performed on the resulting image, followed by the projection layer, from which we get rotationally invariant responses. The bottom row shows the same process but with a different kernel parallel transport, illustrating that the responses of the convolutional layers are simply rotated. (right) The bottom row shows S^2 with a regular icosahedral tessellation and a tangent plane at one of the vertices with five sampled directions. The disk represents the kernel support. The middle row shows the actual discrete kernel used, with the 2π/5 rotations, and the top row represents the lifted function on the discrete rotation group.
Projection layer
The base point and path dependency in the lifting and group correlation layer definitions appear problematic. We can, however, reproject the results from these layers to standard functions on M, eliminating this dependency. The only condition is that the same family of paths is used both in the lifting and group correlation layers to parallel transport the kernels.
Indeed, from what precedes, two γ- and δ-lifts, though in general distinct, obey the simple relation

F_δ(x, S) = F_γ(x, S ∘ R),  (6)

for a fixed rotation R ∈ SO(T_xM) depending only on γ and δ. A direct computation shows that the outputs of the group correlation layer obey the same relation,

(F_δ(x) ⋆ K_δ)(S) = (F_γ(x) ⋆ K_γ)(S ∘ R),

where we used the fact that the normalized Haar measure on SO(T_xM) is bi-invariant, thus in particular right-invariant. Thus the following projection layer is well defined and removes the base-point and path dependency:

P F(x) = max_{S ∈ SO(T_xM)} F(x)(S).

Biases are added per kernel. Nonlinear transformations of ReLU type are applied after each of these layers. Note that without them, a lifting followed by a group correlation would actually factor into a new lifting transformation.
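To make the three layers concrete, the following toy sketch (ours, not the released implementation) runs the lift, group correlation, and max projection at a single vertex, with SO(T_xM) discretized to the five 2π/5 rotations of the icosahedral setup shown in Fig. 1; the kernel grid sizes are illustrative. Rotating the input only cyclically shifts the lifted responses, so the projected output is unchanged, which is the invariance the projection layer is designed to provide.

```python
# Toy sketch of lift -> group correlation -> max projection at one vertex,
# with SO(2) discretized to five 2*pi/5 rotations. `patch[r, s]` holds the
# signal on ray r, sample s of the tangent-plane polar grid.
import numpy as np

N_RAYS, N_SAMPLES = 5, 2

def lift(patch, kernel):
    """Correlate a patch against all 5 rotations of a tangent kernel.

    Rotating the kernel by 2*pi/5 is a cyclic shift along the ray axis,
    so the lift is a function on the discrete rotation group C5."""
    return np.array([
        np.sum(np.roll(kernel, shift, axis=0) * patch)
        for shift in range(N_RAYS)
    ])

def group_correlate(F, K):
    """Discrete group correlation on C5; the Haar integral becomes a sum."""
    return np.array([
        sum(F[r] * K[(r - s) % N_RAYS] for r in range(N_RAYS))
        for s in range(N_RAYS)
    ])

def project(F):
    """Max over the rotation group removes path/base-point dependency."""
    return F.max()

rng = np.random.default_rng(0)
patch = rng.normal(size=(N_RAYS, N_SAMPLES))
k = rng.normal(size=(N_RAYS, N_SAMPLES))   # lifting kernel
K = rng.normal(size=N_RAYS)                # group correlation kernel

out = project(group_correlate(lift(patch, k), K))
# Rotating the input only cyclically shifts the lifted responses,
# so the projected output is unchanged:
out_rot = project(group_correlate(lift(np.roll(patch, 2, axis=0), k), K))
assert np.isclose(out, out_rot)
```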
Discretization and Implementation in the Case M = S^2
In this work, the manifold of interest is S^2. Spherical functions f : S^2 → R^N are typically given at a number of points and interpolated using a Watson kernel, 27 which is also our choice. We use a very simple discretization of S^2 via the vertices of a regular icosahedron. Tangent kernels are defined over these vertices, sampled along the rays of a polar coordinate system respecting the vertices of the icosahedron. The radius of the circular kernels is chosen such that when a kernel is moved from one vertex to any of its five neighbors, there is overlap between the kernels before and after moving. This is illustrated in Fig. 1. We use a single-shell setup (one value at each point on the sphere) in all our experiments, since it is the most common case (and it is the case for our spinal cord data). However, a multishell setup is possible if we interpolate the functions for each shell at the same locations on S^2, in which case the spherical function can be treated as a multichannel function.
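A minimal sketch of the Watson-kernel interpolation step is given below, assuming unit direction vectors and a single shell; the antipodally symmetric weight exp(κ(v·d)²) matches the sign-invariance of diffusion directions, and κ controls the smoothing/peak-preservation trade-off discussed in the experiments that follow.

```python
# Sketch of Watson-kernel interpolation: resample a single-shell signal
# given on `dirs` (unit vectors, one value each) at new points `verts`
# (e.g., icosahedron vertices).
import numpy as np

def watson_interpolate(verts, dirs, values, kappa=10.0):
    """verts: (n_verts, 3); dirs: (n_dirs, 3); values: (n_dirs,)."""
    cos2 = (verts @ dirs.T) ** 2            # (n_verts, n_dirs)
    w = np.exp(kappa * cos2)                # antipodally symmetric weights
    return (w * values).sum(axis=1) / w.sum(axis=1)
```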
Experiments and Results
We evaluate our method on three datasets: a DWI scan of a spinal cord that had been dissected out post mortem from a deceased human female, a synthetic dataset that we generated, and the DWI brain scan dataset from the Human Connectome Project. 28 The human spinal cord DWI data are single-shell, with a b-value of 4000 s/mm² and 80 directions per voxel. The HCP DWI data have three shells, b = 1000, 2000, 3000 s/mm², and 90 directions per voxel. We train single-shell models, thus three separate models for the HCP data. In terms of model hyperparameter search, in all experiments we choose the hyperparameters that give us models with the lowest capacity that does not worsen performance. By doing this, we get models with the proper capacity that are efficient to train while preventing overfitting. It is worth noticing that as we increase model capacity, the stability of the models increases as well; that is, there is less fluctuation of the loss during training and fewer bad initializations of the models. However, we choose the least complex models possible, since more complexity does not introduce better performance in this case. As for the data smoothing/interpolation parameter κ in the Watson kernel, we choose, in all experiments, the parameter values that provide a trade-off between data smoothing and peak preservation. In that sense, the hyperparameters chosen are the ones that give us the best model performance.
Experimental Setup
After getting the responses from our proposed layers, we feed them into a small feedforward neural network (a single-layer perceptron) to perform our classification task. To validate our method, we compare the proposed framework with two experimental setups: (a) a baseline experiment that feeds the smoothed signal values of each voxel directly into a feedforward neural network without our three-layer convolution; (b) S2CNN, 18 which performs convolution on spheres by transforming the signals into the spectral domain. For all the experiments, we use the smallest model possible for both our method and S2CNN. 18
Data description
The study was conducted on a deceased individual who had bequeathed her body to science and education at the Department of Cellular and Molecular Medicine (ICMM) of the University of Copenhagen according to Danish legislation (Health Law No. 546, Section 188). The study was approved by the head of the Body Donation Program at ICMM. Part of the data used here has been published in a previous report. 29 Briefly, the spinal cord was dissected out from a 91-year-old Caucasian female without known diseases post mortem, within 24 h after her death. The spinal cord was fixed by immersion in paraformaldehyde (4%), where it was kept for 2 weeks, after which it was transferred to and stored in phosphate-buffered saline until the MRI scanning was conducted. The spinal cord was placed in a plexiglas tube and immersed in fluorinert (FC-40, Sigma-Aldrich) to eliminate any background signal. The scanning was accomplished using a 9.4 T preclinical system (BioSpec 94/30; Bruker Biospin, Ettlingen, Germany) equipped with a 1.5 T/m gradient coil. The scanning was done in 29 sections of length 1.6 cm, thus covering the whole length of the spinal cord of approximately 40 cm. Between each section scan, the tissue was advanced 1.4 cm by a custom-built stepping motor system, resulting in a 0.2-cm section overlap. For each section, a T2-weighted 2D RARE structural scan was performed. Scan parameters were repetition time (TR) = 7 s, echo time (TE) = 30 ms, 20 averages, a field of view of 1.92 × 1.92 × 1.6 cm³, and a matrix size of 384 × 384 × 80, yielding a 50 × 50 μm² in-plane resolution and a slice thickness of 200 μm, i.e., a voxel size of 500,000 μm³. The scanning time for the structural scan was 30 h.
We take individual voxels containing signals defined on S^2 as the input of the networks and achieve segmentation via voxel classification. Since the numbers of samples of white matter and gray matter are not balanced, we use focal loss 30 to counter the imbalance. We used 14 slices from the longest dimension for testing and the rest of the scan for training.
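For reference, a standard multi-class focal loss (Lin et al.) can be sketched in a few lines of PyTorch; the γ value and the optional per-class weights shown are illustrative defaults, not the settings used in our experiments.

```python
# Sketch of a multi-class focal loss: cross-entropy rescaled by
# (1 - p_t)^gamma so that easy, abundant classes contribute less.
# gamma and the per-class weights are illustrative, not the paper's.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=None):
    """logits: (N, C); targets: (N,) int64 class indices."""
    log_pt = F.log_softmax(logits, dim=1).gather(1, targets[:, None]).squeeze(1)
    pt = log_pt.exp()
    loss = -((1.0 - pt) ** gamma) * log_pt
    if alpha is not None:                  # optional per-class weights (C,)
        loss = loss * alpha.to(logits.device)[targets]
    return loss.mean()
```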
Results
We can see from Table 1 that all methods perform quite well on this simple task. A showcase of predictions from our model and the ground truth can be found in Fig. 2. We observe that classifying white matter and gray matter is not a challenging task, considering that the baseline model works well for it. This is because there is already a significant difference between white matter and gray matter in terms of the scales of their intensity values. However, our method and S2CNN 18 achieve a better balance between the accuracies of the two classes compared to the baseline, which shows the importance of geometric information for recognizing minority classes. To test the rotational invariance and the independence from signal scaling of our method, we experiment further on the synthetic dataset and the HCP dataset. 28
Dataset generation
Fig. 2 Examples of ground truth and predictions from the test data. (a)-(d) The same slices from the ground truth, prediction from our method, prediction from S2CNN, and prediction from the baseline.
To validate the resistance of our method against rotations, we create and classify spherical functions defined on a sphere. We first uniformly sample 90 fixed directions on a hemisphere, and spherical functions of different classes are defined on the same 90 directions. For each class, we sample 90 values from a Gaussian distribution as the function values for the 90 directions. Thus, the only difference among classes is the function values of the given 90 directions, and we sample the function values for each class from the same Gaussian distribution to keep the scales of the values identical. In addition, we rotate the sphere of each class and use these rotated spherical functions as elements of each class. Therefore, each class of the dataset contains just rotations of each spherical function. As explained above, we interpolate the function values at the icosahedron vertices using a Watson kernel 27 from the rotated 90 directions, each assigned a function value. For the baseline, we interpolate the function values at the same 90 directions that were sampled on the sphere using the same scheme. We generate synthetic datasets with different numbers (n ∈ {2, 4, 6}) of classes to test the robustness of the model given different difficulties of the task. For each class, we generate 50 samples for the training set and 1000 samples for the test set. A sketch of this generation recipe is given below.
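The recipe can be summarized in a short NumPy/SciPy sketch; the counts follow the text (90 hemisphere directions, Gaussian function values, random rotations per class), while the RNG details and class count shown are illustrative.

```python
# Sketch of the synthetic data recipe: each class is one random spherical
# function (Gaussian values on 90 fixed hemisphere directions); class
# members are random rotations of that function's support.
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)

def sample_hemisphere(n=90):
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    v[v[:, 2] < 0] *= -1                    # fold onto the upper hemisphere
    return v

def make_class(dirs, n_samples):
    values = rng.normal(size=len(dirs))     # the class-defining function
    samples = []
    for _ in range(n_samples):
        R = Rotation.random().as_matrix()   # random 3D rotation
        samples.append((dirs @ R.T, values))  # rotated support, same values
    return samples

dirs = sample_hemisphere()
train = {c: make_class(dirs, 50) for c in range(4)}  # e.g., n = 4 classes
```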
Architecture and Hyperparameters. As in the experiment above, we use a lift-ReLU-conv-ReLU-projection-FC-softmax architecture for the network. We use k = 1, 5, n channels for the lift, conv, and FC layers, 0.6 as the kernel radius, and 5 rays with 2 samples per ray as the kernel resolution. For S2CNN, 18 we use the same architecture provided by the authors (S²conv-ReLU-SO(3)conv-ReLU-FC-softmax).
Results
See Table 2 for a comparison of results from the different models. We can see that the baseline model is barely learning anything from the data, while our method and S2CNN 18 capture the differences among classes in the data. Moreover, our method achieves more robust performance while having fewer degrees of freedom.
Human Connectome Brain Scans
As in the spinal data experiments, we train networks on individual voxels containing signals on S². Our goal is a voxel-wise classification of four brain regions: cerebrospinal fluid (CSF), subcortical, white matter, and gray matter.
We used the preprocessed DWI data 31 and normalized the b-1000, b-2000, and b-3000 images of each DWI scan by the voxel-wise average of the b-0 images.
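A minimal sketch of this normalization step; treating volumes whose b-value falls below a small tolerance as b-0 images is our assumption:

```python
import numpy as np

def normalize_by_b0(dwi, bvals, tol=50):
    # dwi: (X, Y, Z, N) diffusion volumes; bvals: (N,) b-values in s/mm^2.
    b0_mean = dwi[..., bvals < tol].mean(axis=-1, keepdims=True)
    b0_mean = np.where(b0_mean > 0, b0_mean, 1.0)  # guard against division by zero
    normalized = dwi / b0_mean
    return normalized[..., bvals >= tol]  # keep only the diffusion-weighted volumes
```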
The labels provided with the T1-image were transformed to the DWI using nearest neighbor interpolation (Fig. 3). Since the four brain regions we are classifying have imbalanced numbers of voxels, we use Focal Loss 30 to counter the class imbalance of the dataset just as in the spine data experiments.
Architecture and hyperparameters. As in the experiments above, we use the icosahedron structure as locations for the kernels of our method, and lift-ReLU-conv-ReLU-projection-FC-softmax as the network architecture with k = 1, 5, 4 channels, r = 0.6 as the radius, and 5 rays with 2 samples per ray as the kernel resolution. For S2CNN, 18 we again use the architecture provided by the authors, S²conv-ReLU-SO(3)conv-ReLU-FC-softmax, with bandwidth b = 30, 10, 6 and k = 3, 6, 4 channels. For the baseline, we again use an FC(90)-ReLU-FC(50)-ReLU-FC(30)-ReLU-FC(4) architecture. We use κ = 10 for the Watson kernel and a learning rate of 0.001 for all models.
Table 2 Test accuracy for models evaluated on the generated datasets. Numbers in brackets are the numbers of parameters for each model. The baseline model produces prediction accuracies at the same level as random guessing, while ours and S2CNN 18 can recognize the rotations of the same spherical functions quite accurately, and our method achieves higher accuracy using fewer parameters than S2CNN. 18
Results
We used 1 scan for training, 1 scan for validation, and 50 scans for testing. We chose this split because it is the most lightweight for training; including more scans in the training set did not noticeably improve the results, so we kept a single training scan. A comparison of the experimental results of the different methods can be found in Table 3. We can see that the baseline model does not generalize as well as our method and S2CNN. 18 Across all experiments, we observe that CSF becomes harder to recognize as b increases. Higher b does not reduce the accuracies for the majority classes for our method and S2CNN, 18 so the overall accuracies of these methods do not drop much with increased b. Compared with S2CNN, 18 our model achieves the same level of performance with far fewer degrees of freedom, as can be seen in Table 3. Showcases of predictions from all models can be found in Fig. 4. Model Sensitivity Analysis. We reduce the amount of training data for our method in order to test how sensitive the model is to the training set size. As mentioned above, there is only one scan in the training set. That scan contains 7,227 CSF voxels, 35,648 subcortical voxels, 276,191 white matter voxels, and 309,496 gray matter voxels. We therefore reduce the number of samples in the training scan by randomly sampling a fraction of the voxels from each class and test how that impacts the performance of the model. The results can be found in Table 4.
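A minimal sketch of the per-class subsampling used in this sensitivity analysis; the function name and the fixed seed are our illustrative choices:

```python
import numpy as np

def subsample_per_class(labels, fraction, seed=0):
    # labels: (N,) integer class labels for the training voxels.
    # Returns indices that keep `fraction` of the voxels of every class,
    # preserving the stratification used in the sensitivity analysis.
    rng = np.random.default_rng(seed)
    keep = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        n_keep = max(1, int(fraction * idx.size))  # keep at least one voxel per class
        keep.append(rng.choice(idx, size=n_keep, replace=False))
    return np.concatenate(keep)
```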
We see that reducing the number of samples in each class reduces the overall performance. On the other hand, it also boosts the accuracy for the subcortical region, since the class imbalance is eased by the reduction. Gray matter and white matter are so overrepresented in a scan that even when we discard most of the voxels from these two classes in the training set, the test results remain relatively accurate. This suggests an important application in automating DWI data annotation.
Discussion
This work shows how geometric information in DWI can be significantly useful for understanding general patterns in image analysis. In the future, we expect improvements in performance by adding spatial correlations, for example, convolutions in the product space R³ × S² instead of S² alone. This is ongoing work. The correlation of our model with fractional anisotropy (FA) and NODDI is worth investigating as well. Moreover, using scans in the HCP dataset 28 with a different number of diffusion gradients to test our model would also be desirable in later work. Our model generalizes well to the test set while trained with very little data (one scan), but this generalization is limited to data with the same distribution, i.e., acquisitions from the same scanner using the same protocol. A new dataset with a new acquisition protocol would require new training. It would be desirable to apply our model to datasets that consist of irregular scans, such as brains with tumors and unprocessed scans, unlike the HCP dataset, which contains only preprocessed healthy brains. Obtaining that kind of data is another challenge. Additionally, we have so far only tested our construction of the network on S², yet an extension to other surfaces appears feasible with a smart choice of discrete representation. An extension to dimension 3 is worthwhile as well, which will require efficient SO(3) convolutions, using, for instance, spectral theory for compact Lie groups.
Fig. 3 (a)-(c) The original diffusion data, the ground-truth segmentation, and the processed ground-truth label image. The label colors for CSF, subcortical, white matter, and gray matter are red, blue, white, and gray, respectively. The figures are for illustration of the data only; they are not necessarily from the same scan.
Table 3 Results from the HCP brain dataset. We can see that our method has the same level of performance as S2CNN, 18 but uses far fewer parameters. The baseline model produces higher accuracy in recognizing the subcortical region, but at a high cost to the accuracies of the other classes.
Table 4 Results of the sensitivity analysis. The numbers in the first row are the numbers of samples in each experiment for CSF, subcortical, WM, and GM, respectively. We see that reducing the size of the training set decreases the overall accuracies to some extent, but the accuracies of the subcortical region are higher since the class imbalance is lower.
Conclusion
We proposed a simple extension of CNNs to Riemannian manifolds that learns rotationally invariant features. Our method allows us to learn general patterns from very limited data while having much lower degrees of freedom than existing methods. 18 This is significant because we can now, in machine learning-based DWI analysis, reduce the size of individual data samples from a whole volumetric image to a single voxel, as well as reduce the training dataset to a single scan, or even a fraction of a scan. For the HCP dataset 28 with a single-shell setup, our method, while taking the subcortical region into account, compares well with existing methods that require multishell input 32,33 and do not classify the subcortical region. We also achieved similar or better results compared to image registration-based methods. 34 The results of this simple task show the great potential of this method for understanding structural patterns in brains. Moreover, the results of the model sensitivity analysis show that our method has the potential to aid manual data annotation. For example, a doctor only has to label a fraction of a scan, and the rest can be automated by the model.
Disclosures
The authors declare no conflicts of interest of any kind. | 2022-11-19T16:41:51.854Z | 2022-11-01T00:00:00.000 | {
"year": 2022,
"sha1": "692b59eeabc1111aef61dcb86a030dd6f773314b",
"oa_license": "CCBY",
"oa_url": "https://www.spiedigitallibrary.org/journals/journal-of-medical-imaging/volume-9/issue-6/064002/Bundle-geodesic-convolutional-neural-network-for-diffusion-weighted-imaging-segmentation/10.1117/1.JMI.9.6.064002.pdf",
"oa_status": "HYBRID",
"pdf_src": "SPIE",
"pdf_hash": "1ceb51fe6112132d89a93caf973845dfa56f78e2",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
25710202 | pes2o/s2orc | v3-fos-license | Bone Flap Resorption Following Cranioplasty with Autologous Bone: Quantitative Measurement of Bone Flap Resorption and Predictive Factors
Objective To quantitatively measure the degree of bone flap resorption (BFR) following autologous bone cranioplasty and to investigate factors associated with BFR. Methods We retrospectively reviewed 29 patients who underwent decompressive craniectomy and subsequent autologous bone cranioplasty between April 2005 and October 2014. BFR was defined as: 1) decrement ratio ([the ratio of initial BF size/craniectomy size]–[the ratio of last BF size/craniectomy size]) >0.1; and 2) bone flap thinning or geometrical irregularity of bone flap shape on computed tomographic scan or plain skull X-ray. The minimal interval between craniectomy and cranioplasty was one month and the minimal follow-up period was one year. Clinical factors were compared between the BFR and no-BFR groups. Results The time interval between craniectomy and cranioplasty was 175.7±258.2 days and the mean follow-up period was 1364±886.8 days. Among the 29 patients (mean age 48.1 years, male : female ratio 20 : 9), BFR occurred in 8 patients (27.6%). In one patient, removal of the bone flap was carried out due to severe BFR. The overall rate of BFR was 0.10±0.11 over 3.7 years. Following univariate analysis, younger age (30.5±23.2 vs. 54.9±13.4) and longer follow-up period (2204.5±897.3 vs. 1044.1±655.1) were significantly associated with BFR (p=0.008 and 0.003, respectively). Conclusion The degree of BFR following autologous bone cranioplasty was 2.7%/year and was associated with younger age and longer follow-up period.
INTRODUCTION
Cranioplasty is usually performed for brain protection and cosmetic reasons following decompressive craniectomy. Furthermore, previous studies have reported that cranioplasty improves neurological function 11,12) . In recent years, synthetic materials have been developed and occasionally used to replace autologous bone. However, autologous bone is still the preferred material because it has many advantages, including low cost, lack of immunological response, and anatomical fit.
Cranioplasty is a relatively straightforward procedure. However, complications related to the procedure, including infection, wound dehiscence, postoperative hematoma or fluid collection, implant dislodgement, and bone flap resorption (BFR), have been reported in many studies 1,3,4,9,13,21,22) . In particular, BFR remains a long-term concern after autologous bone cranioplasty and is considered a crucial factor for reoperation. Within the literature, the reported rate of BFR varies greatly, from 2.7% to 51% 3,6,7,12,18,21) . To date, only a few studies have attempted to identify the risk factors related to BFR 2,3,7,18) .
The aim of this study was to investigate the degree and related factors of BFR following autologous bone cranioplasty.
MATERIALS AND METHODS
This study was designed as a single-center retrospective cohort study. From April 2005 to October 2014, 101 patients who underwent cranioplasty following decompressive craniectomy for traumatic brain injury, subarachnoid hemorrhage, intracranial hemorrhage, brain tumor, and ischemic stroke were identified. After applying the exclusion criteria (follow-up loss, use of artificial bone, comminuted skull fracture, and skull tumor), 29 patients were included in the study (Fig. 1). Medical records were reviewed to determine the initial diagnosis leading to craniectomy, age at cranioplasty, sex, past medical history, linear skull fracture, shunt-dependent hydrocephalus, duration from craniectomy to cranioplasty, operative time, infection, incidence of postoperative hematoma, and follow-up duration after cranioplasty.
BFR was defined as follows: 1) decrement ratio ([the ratio of initial BF size/craniectomy size]-[the ratio of last BF size/craniectomy size]) >0.1; and 2) bone flap thinning or geometrical irregularity of the bone flap shape on computed tomographic scan or plain skull X-ray (Fig. 2). The two-dimensional area was calculated on the computed tomographic scan or plain skull X-ray using the region of interest (ROI) parameter (free draw) of our institution's order communication system program, Maroview (version 5.409; INFINITT, Seoul, Korea) (Fig. 3). The minimal interval between craniectomy and cranioplasty was one month and the minimal follow-up period was one year. BFR was determined based on the last follow-up image. We compared demographic and radiological factors between the BFR and no-BFR groups. All continuous variables are presented as means±standard deviation. The Kruskal-Wallis test was used for continuous variables and Fisher's exact test for categorical variables. Two-tailed p-values less than 0.05 were considered statistically significant. All statistical analyses were performed using SPSS ver. 20.0 for Windows (IBM Corp., Armonk, NY, USA).
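As a worked illustration of criterion 1), the sketch below computes the decrement ratio from ROI areas; the numbers are hypothetical, not measurements from this study:

```python
def decrement_ratio(bf_initial, bf_last, craniectomy_area):
    # Drop in the bone-flap-to-craniectomy area ratio between the initial and
    # last scans; all areas are ROI measurements in the same units (e.g., mm^2).
    return bf_initial / craniectomy_area - bf_last / craniectomy_area

# Illustrative (hypothetical) ROI areas in mm^2:
ratio = decrement_ratio(bf_initial=9500.0, bf_last=8200.0, craniectomy_area=10000.0)
bfr = ratio > 0.1  # flagged as bone flap resorption when the ratio exceeds 0.1
print(f"decrement ratio = {ratio:.2f}, BFR = {bfr}")  # 0.13, True
```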
RESULTS
The mean age of the 29 patients was 48.1±19.7 years (male : female ratio 20 : 9). The interval between craniectomy and cranioplasty was 175.7±258.2 days and the mean follow-up period was 1364±886.8 days. A summary of the study population is given in Table 1. BFR occurred in 8 patients (27.6%). There were no significant differences in the presence of skull fracture or shunt-dependent hydrocephalus between the BFR and no-BFR groups (Table 2). Furthermore, there were no significant differences in operative time or duration of bone flap storage between the two groups (Table 3). The overall rate of BFR was 0.10±0.11 over 3.7 years, corresponding to a degree of BFR of 2.7%/year. In other words, once BFR occurs, the bone flap can be estimated to be resorbed at about 2.7% per year. Following univariate analysis, younger age (30.5±23.2 vs. 54.9±13.4) and longer follow-up period (2204.5±897.3 vs. 1044.1±655.1) were significantly associated with BFR (p=0.008 and 0.003, respectively) (Fig. 4). In one patient, removal of the bone flap was carried out due to severe BFR.
DISCUSSION
This study showed that the degree of BFR following autologous bone cranioplasty was associated with younger age and longer follow-up period. Furthermore, the incidence and degree of BFR were found to be 27.6% and 2.7%/year after cranioplasty, respectively.
As previously mentioned, the incidence of BFR varies from study to study. These variations may be attributed to different BFR definitions and follow-up durations. Dünisch et al. 7) measured defect size using the formula A=π/4×B×b (with B and b being the longest diameters of an elliptical area) and classified the degree of BFR as either type I necrosis (a thinning of the bone flap and/or a beginning resorption along the rims of the flap) or type II necrosis (a circumscribed, complete lysis of the bone within the flap, including the inner and outer table). Another investigator calculated defect size as an ellipse with π×a×b (the longest axis being 2a and the 90° short axis being 2b) 18) . Because these methods are semi-quantitative and BFR has an irregular margin, we quantitatively measured the decrement ratio of the bone flap size using the ROI curve. This method is quantitative and better suited to calculating the BFR rate after cranioplasty.
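Note that the two ellipse formulas cited above describe the same area, parameterized by full diameters (B, b) versus semi-axes (a, b). A quick numeric check, using hypothetical diameters:

```python
import math

# Hypothetical bone flap measurements: longest diameter B and
# perpendicular diameter b, in cm.
B, b = 12.0, 9.0

area_diameters = math.pi / 4 * B * b          # Dunisch et al.: A = pi/4 * B * b
area_semiaxes = math.pi * (B / 2) * (b / 2)   # same ellipse via semi-axes: A = pi * a * b

assert math.isclose(area_diameters, area_semiaxes)
print(f"ellipse area = {area_diameters:.1f} cm^2")  # ~84.8 cm^2 for both formulas
```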
Previous studies suggest younger age, bone flap fragmentation, shunt-dependent hydrocephalus, a large bone defect, and the time interval from craniectomy to cranioplasty as risk factors for BFR 3,6,19) . Younger age was also indicated as a risk factor for BFR in our study. Although the exact mechanism remains unknown, a thinner skull and faster bone turnover may be associated with BFR in the younger population 2,7,12,16,17,19) .
Bone flap fragmentation has also been suggested as a risk factor for BFR 2,3,19) and could potentially result in severe disruption of the blood circulation, disturbed angiogenesis, and a nutritional deficit of the bone flap 2,3,8,19) . In 2013, Bowers et al. 2) showed that comminuted skull fracture was a risk factor for BFR in pediatric patients following cranioplasty. On the contrary, there was no correlation between bone flap fragmentation and BFR in a study of younger patients <19 years of age 10) . We assumed that comminuted fracture may hinder the bone remodeling process, and since our study population included only four patients <19 years, we did not investigate the correlation between comminuted fracture and BFR. Instead, we investigated the association between linear skull fracture and BFR and found no statistically significant difference. Shunt-dependent hydrocephalus as a risk factor for BFR is controversial. Some studies suggest that shunt placement reduces adherence of the dura mater to the bone flap and negatively influences skull growth, thus contributing to BFR 2,7,10,15,19) . However, in agreement with our finding, Brommeland et al. 3) failed to demonstrate a correlation between shunt-dependent hydrocephalus and BFR.
Another controversy is the effect of the time interval to cranioplasty. Many investigators recommend that cranioplasty be performed 3 months after craniectomy if the neurological status and medical conditions have stabilized 4,7) . Conversely, delayed cranioplasty can result in complications such as a sunken scalp, decreased brain perfusion, and abnormal cerebrospinal fluid dynamics. In a retrospective study of 77 patients with autologous bone flaps, a longer duration between craniectomy and cranioplasty was associated with BFR 3) . However, in agreement with our findings, a recent study failed to demonstrate any association between the timing of cranioplasty and BFR 7) . This discrepancy may be attributed to the heterogeneous and retrospective design of each study. Recent research has indicated an association between the bone flap storage method and BFR 5,14) . A retrospective study of 290 patients showed that the incidence of BFR was higher in the cryopreservation group compared with the subcutaneous pocket group 5) . The authors hypothesized that cryopreservation of the bone flap might lead to increased osteoclast activity and bone resorption 5) . Interestingly, bone flap storage in the abdominal subcutaneous fat is reported to lead to osteolysis by macrophages, which may result in enhanced BFR 20) . Cryopreserved autologous bone grafts were used in the current study, therefore we could not compare the incidence of BFR according to storage method.
This study has some limitations. First, this single-center study is limited by its small size and retrospective design. Second, two-dimensional measurement of stereoscopic bone flaps may be problematic for determining the exact area. Considering the curvature of the skull vault, the degree of BFR can be underestimated near the vertex of the skull. Third, although the risk of BFR increased with longer follow-up, we could not determine whether every bone flap eventually undergoes resorption or whether there is a time window for osteoconduction of the re-inserted bone flap. Further research will be needed. Nonetheless, we minimized measurement error by calculating the area difference using a program that enabled the ROI free draw method.
CONCLUSION
The risk of BFR following autologous bone cranioplasty is associated with younger age and a longer follow-up period. Thus, long-term radiological follow-up is recommended in the younger population. | 2018-04-03T05:29:18.375Z | 2017-10-25T00:00:00.000 | {
"year": 2017,
"sha1": "a8879dee263ecb2b025c25b0551dbf8ff3fcaa9f",
"oa_license": "CCBYNC",
"oa_url": "https://www.jkns.or.kr/upload/pdf/jkns-60-6-749.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a8879dee263ecb2b025c25b0551dbf8ff3fcaa9f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247962252 | pes2o/s2orc | v3-fos-license | Perspective on the Use of DNA Repair Inhibitors as a Tool for Imaging and Radionuclide Therapy of Glioblastoma
Simple Summary
The current routine treatment for glioblastoma (GB), the most lethal high-grade brain tumor in adults, aims to induce DNA damage in the tumor. However, the tumor cells might be able to repair that damage, which leads to therapy resistance. Fortunately, DNA repair defects are common in GB cells, and their survival is often based on a sole backup repair pathway. Hence, targeted drugs inhibiting essential proteins of the DNA damage response have gained momentum and are being introduced in the clinic. This review gives a perspective on the use of radiopharmaceuticals targeting DDR kinases for imaging in order to determine the DNA repair phenotype of GB, as well as for effective radionuclide therapy. Finally, four new promising radiopharmaceuticals are suggested with the potential to lead to a more personalized GB therapy.
Abstract
Despite numerous innovative treatment strategies, the treatment of glioblastoma (GB) remains challenging. With the current state-of-the-art therapy, most GB patients succumb after about a year. In the evolution of personalized medicine, targeted radionuclide therapy (TRT) is gaining momentum, for example, to stratify patients based on specific biomarkers. One of these biomarkers is deficiencies in DNA damage repair (DDR), which give rise to genomic instability and cancer initiation. However, these deficiencies also provide targets to specifically kill cancer cells following the synthetic lethality principle. This led to the increased interest in targeted drugs that inhibit essential DDR kinases (DDRi), of which multiple are undergoing clinical validation. In this review, the current status of DDRi for the treatment of GB is given for selected targets: ATM/ATR, CHK1/2, DNA-PK, and PARP. Furthermore, this review provides a perspective on the use of radiopharmaceuticals targeting these DDR kinases to (1) evaluate the DNA repair phenotype of GB before treatment decisions are made and (2) induce DNA damage via TRT. Finally, by applying in-house selection criteria and analyzing the structural characteristics of the DDRi, four drugs with the potential to become new therapeutic GB radiopharmaceuticals are suggested.
Introduction
Treatment challenges posed by malignant gliomas remain considerable, and many derive from the molecular and cellular heterogeneity inherent to these tumor variants [1,2]. New treatment strategies for glioblastomas (GB), known as the most malignant gliomas (grade IV), are urgently warranted. For newly diagnosed GB patients with overall good health status, the standard of care includes maximal surgical resection, combined external beam radiation therapy (RT), and temozolomide (TMZ), followed by maintenance TMZ [3]. However, even with an optimal treatment protocol and recent advances in targeted therapies, survival has only slightly improved, and almost all tumors recur [1,4]. Molecular biomarkers play an increasing role in treatment decisions and response prediction. For example, the methylation status of the O6-methylguanine-DNA methyltransferase (MGMT) promoter is a major cause of TMZ resistance [5,6]. The focus for advancing GB therapy lies in the field of personalized, targeted therapy with the ultimate aim to selectively eradicate GB cells without damaging the surrounding healthy brain tissue [4,7,8]. To achieve this, mechanisms that induce therapy resistance and strategies to induce selective cell death in GB cells need to be explored and exploited. GB is recognized as being highly radioresistant, influenced by the presence of glioma stem cells (GSCs), cellular hypoxia, high cell heterogeneity, and aberrant activation of DNA damage response (DDR) proteins [9,10]. The dysregulation of the DDR in GB allows cancer cells to repair DNA damage and results in resistance to the current state-of-the-art therapies. In contrast to normal cells, components of the DDR pathway are frequently compromised in tumor cells, and their survival is often based on a sole backup pathway. Hence, targeted strategies against essential components of the DDR offer the possibility to promote cell death in cancer cells and increase the tumor's sensitivity to cancer therapies based on the principle of 'synthetic lethality' (Figure 1) [10][11][12][13]. In order to sensitize GB cells to DNA damaging agents, two approaches can be adopted: first, directly targeting key DNA damage signaling kinases such as the phosphatidylinositol 3′ kinase (PI3K)-related kinases (PIKKs) and PIKK-regulated downstream kinases, which include DNA damage sensor and repair proteins, e.g., ataxia-telangiectasia mutated (ATM), ATM-RAD3-related (ATR) protein, and DNA-dependent protein kinase (DNA-PK) [14]; and second, interfering with cell cycle checkpoint proteins, which monitor DNA integrity before cell division (G2-M checkpoint) and DNA replication (G1-S checkpoint) [15]. Interestingly, the 'replication stress' present in cancer cells could be further enhanced by these therapies through loosening the remaining checkpoints and inducing failure of further proliferation [16].
In this review, the rationale and current status of targeted drugs that inhibit essential DDR kinases (DDRi) for the therapy of GB are given. Secondly, a perspective is given on radiopharmaceuticals for nuclear imaging and targeted radionuclide therapy (TRT) targeting DDR kinases. The ability to monitor DNA repair processes using nuclear imaging may be an asset for personalized GB therapy and for monitoring the response to DNA Figure 1. Principle of synthetic lethality. In glioblastoma (GB) therapy, DNA damage is induced by temozolomide (TMZ) and radiation therapy (RT). DNA repair pathways are often disrupted (pathway A) and therefore GB cells solely depend on a back-up pathway to repair DNA damage. Inhibitors of essential DNA damage response kinases (DDRi) can block this rescue pathway to promote GB cell death.
In this review, the rationale and current status of targeted drugs that inhibit essential DDR kinases (DDRi) for the therapy of GB are given. Secondly, a perspective is given on radiopharmaceuticals for nuclear imaging and targeted radionuclide therapy (TRT) targeting DDR kinases. The ability to monitor DNA repair processes using nuclear imaging may be an asset for personalized GB therapy and for monitoring the response to DNA damaging treatments and DDRi.
Figure 2. The DNA damage response and selected targets (blue). Ataxia-telangiectasia mutated (ATM), ATM-RAD3-related protein (ATR), cyclin-dependent kinase 1 (CDK1/2), checkpoint kinase-1 and -2 (CHK1/2), DNA-dependent protein kinase (DNA-PK), Mouse double minute 2/X homolog (MDM2/X), Nbs1/hMre11/hRad50 (MRN complex), poly (ADP-ribose) polymerase-1 and -2 (PARP), replication protein A (RPA), X-ray repair cross-complementing protein 4 (XRCC4).
Aberrant activation of these DDR kinases (ATM, ATR, DNA-PK, CHK1, CHK2, and PARP) in cancer is strongly correlated with resistance to genotoxic cancer therapies, including in GB [10,23]. Defects in the ATM-CHK2-p53 pathway promote GB formation and play a role in the response of glioma to ionizing radiation (IR) [24]. Mutations in isocitrate dehydrogenase 1 (IDH1) are frequently found in gliomas and are associated with better therapeutic outcomes. Interestingly, co-mutations in DDR kinases could play a role. In IDH1 mutated astrocytoma patients, TP53 (63%) and ATRX (27%) are the top two genes that display a higher frequency of mutations. An association between IDH1 mutations and reduced ATRX expression has also been shown. Mutations in CHK2 are instead associated with IDH1-wildtype astrocytoma [25]. Núñez et al. discovered that mutant IDH1 helps maintain genomic stability in tumors by enhancing the DDR [26]. Glioma stem cells (GSC) have been shown to promote radioresistance by preferential activation of the DDR pathway through increased cell cycle checkpoint activation. This contributes to an increased DNA repair capacity and results in greater survival [9]. Since the standard therapy of GB includes TMZ and RT, which both aim to damage the DNA, DDR inhibition is being explored as a way to increase treatment efficacy [10]. For example, the inhibition of phosphatase and tensin homolog (PTEN) phosphorylation at Y240 sensitizes GB to IR by preventing enhanced DNA repair [27]. The standard TMZ chemotherapy modifies DNA or RNA at N7-guanine, O6-guanine, and N3-adenine by the addition of methyl groups. Methylated O6-guanine sites are usually repaired by MGMT. Since MGMT is often upregulated in GB, this repair limits the efficacy of TMZ.
Sensitivity towards DDRi depends on specific biomarkers, and the identification of treatment-responsive patients constitutes one of the key challenges associated with the clinical use of DDRi. Recent work has identified genomic and functional DNA repair assays that enable the identification of predictive and pharmacodynamic DDRi biomarkers (Figure 4) [30]. Mutational signatures associated with robust homologous recombination (HR) deficiency (HRD) primarily include alterations affecting BRCA1, BRCA2, PALB2, and two canonical RAD51 paralog genes (RAD51B, RAD51C). A more complex "BRCAness" signature has been defined to denote HRD tumors that share molecular features of BRCA1/2-mutant tumors, which are likely to benefit from DDRi [31][32][33]. The most promising biomarkers of BRCAness in GB relate to IDH1/2, epidermal growth factor receptor (EGFR), PTEN, MYC proto-oncogene, and estrogen receptor beta (ERβ) signatures [34]. For example, the anti-tumor effect of TMZ with ATMi or PARPi is enhanced in IDH1 mutant gliomas, and TMZ increases ATRi sensitivity in MGMT-deficient GB cells [35][36][37].
Strategies to define the mutational status of these genes include immunohistochemistry (IHC) and next-generation sequencing techniques. The requirement of whole-genome or whole-exome sequencing for the identification of selected gene signatures limits its widespread clinical utilization as a biomarker. However, new computational tools such as signature multivariate analysis and combinations of genomic analyses with single-cell imaging may increase the number of patients to be considered for treatments targeting HRD [38,39].
DDR (Radio)Pharmaceuticals
Single-photon emission computed tomography (SPECT) and positron emission tomography (PET) imaging could be utilized to identify patients that would benefit from DDRi therapy. Radiopharmaceuticals targeting DDR kinases have potential for the assessment of DDR target engagement (e.g., PARP activity) and may assist in monitoring response to DDRi or other DNA damaging treatments [40]. In addition, given the well-understood involvement of DDR during tumorigenesis, the ability to monitor these repair processes using PET or SPECT may facilitate the detection of earlier stages of carcinogenesis [41].
Upon confirming the expression of the DDR kinase in the GB tumor, therapeutic DDR-targeting radiopharmaceuticals could be administered prior to or in combination with other DNA-damaging agents (e.g., TMZ and external beam RT), ultimately causing a synergistic anti-tumor response. Notably, DDRi-induced toxicity to healthy tissue might be limited due to intact DDR pathways in healthy cells (Figure 1) [42]. Interestingly, TRT agents targeting DDR kinases offer increased cytotoxicity compared to cold DDRi due to the additional radiation-induced DNA damage. Our group recently published a perspective on TRT for the treatment of GB, with a special focus on radiopharmaceutical requirements, including target and radionuclide selection, blood-brain barrier (BBB) passage, toxicity, validation, and combined therapy strategies [43].
The nature of the DNA damage induced by TRT depends on the specific radiation characteristics of the isotopes used. Most GB research has investigated the cellular and physical effects of IR in the context of external beam RT, and these effects differ significantly from those of TRT [20]. Lutetium-177, iodine-131, rhenium-186, rhenium-188, and yttrium-90 are commonly utilized for TRT of GB, featuring β⁻-particle emissions with a relatively low linear energy transfer (LET; 0.2-2 keV/µm) and a low relative biological effectiveness (RBE). As a result, the β⁻-emission-induced damage consists of some double-strand breaks (DSBs) but mostly repairable single-strand breaks (SSBs), which could result in sublethal damage repair [44]. Targeted α-particle therapy (TAT) using astatine-211, actinium-225, or bismuth-213 is gaining attention due to the higher LET (50-230 keV/µm) and RBE, which induce more complex DNA damage, and due to a lower dependency on the tumor oxygenation status [45,46]. Complex DNA damage significantly contributes to exceeding the cellular capabilities of DNA repair, thereby forcing cells towards cell death [20]. The first positive clinical trials on TAT have emerged, and TAT has been suggested as a facilitator to overcome tumoral resistance to chemotherapy [47,48]. A notable example is an astatine-211-radiolabeled PARPi, which induced cellular lethality by delivering alpha-emitters directly to the nucleus, with high sensitivity in neuroblastoma in vitro and in vivo. [211At]-PARPi was 10,000 times more potent than talazoparib, indicating that the likely mechanism of cell killing does not rely on pharmacological PARP inhibition but rather on alpha-particle-induced DNA damage [49,50]. Lastly, the short penetration range and LET (4-26 keV/µm) of Auger electron emitters make them suitable candidates for inducing damage to targets with dimensions comparable to DNA, leading to complex, lethal DNA damage [51].
Targeting ATM/ATR as an Anti-GB Strategy
ATM and ATR are members of the PIKK family of serine/threonine protein kinases, which are crucial in the initiation of cell cycle arrest and apoptosis (Figure 1). ATM is the main kinase in the cellular response to DNA DSBs, while ATR is activated by single-stranded DNA structures, which may arise upon SSB induction and stalled or collapsed replication forks. Although ATM and ATR are activated by different types of DNA damage, their signaling cascades partially overlap [93][94][95]. For example, CHK1/2 is a downstream target in both pathways. However, ATM plays a crucial role in the activation of the G1/S cell cycle checkpoint, while ATR enforces the intra-S-phase and G2/M cell cycle checkpoints (Figure 1) [96]. Notably, the list of substrates undergoing ATM-dependent phosphorylation is still growing [97].
Hypersensitivity of ATM-defective cells to IR and the critical function of the ATR pathway for the survival of tumor cells have led to considerable interest in ATM and ATR as therapeutic targets for cancer therapy [93,98,99]. Glioma cells, especially GSCs, exhibit increased resistance to IR, which is mediated by an upregulation of DDR targets such as ATM, ATR, PARP1, and CHK1. This results in rapid G2/M cell cycle checkpoint activation and enhanced DNA repair [9,100]. However, tumor cells often suffer from defects in ATM function through mutation of the ATM protein itself or its associated downstream targets, particularly p53. Such mutated cells must maintain functional S and G2/M cell cycle checkpoints mediated by ATR/CHK1 to avoid premature mitotic entry [101]. The genomic characterization of human GB genes and their core pathways showed that p53 signaling was altered in 87% of GB cases [102]. Therefore, ATR/CHK1 inhibition shows great potential to induce synthetic lethality [12,102,103]. Treatment with ATM- or ATR-inhibitors (ATMi/ATRi) may thus selectively sensitize glioma cells and GSCs to IR and/or TMZ [103][104][105].
Possible determinants of ATRi sensitivity include high levels of ATR, Cdc25A, and CHK1. Multiple predictive biomarkers have also been incorporated into early phase trials: alternative lengthening of telomeres, reduced expression/loss of function of ATM, BRCA1/2, TP53, ARID1A, and overexpression of CCNE1, APOBEC, and MYC (Table S1 and Figure 4) [106][107][108]. In particular, DDRi combined with IR could provide a therapeutic strategy for IDH1 R132H glioma patients who also harbor p53- and ATRX-inactivating mutations [26]. Alpha Thalassemia/Mental Retardation Syndrome X-Linked (ATRX)-deficient gliomas displaying p53 loss of function could also benefit from ATRi therapy [109,110]. In p53-deficient settings, suppression of ATM dramatically sensitized cells to chemotherapy, whereas, conversely, ATM suppression had the opposite effect in the presence of functional p53 [101]. ATM kinase inhibition combined with low dose radiation was also selectively toxic to glioma with mutant p53 through the induction of mitotic catastrophe and apoptosis [111]. Overexpression of cMYC has previously been shown to cause replicative stress and to confer sensitivity to CHK1i and ATR knockdown. Mutations in ARID1A predict response to ATRi and PARPi since ARID1A-deficient cells rely on ATR checkpoint activity to prevent apoptosis [112]. Lastly, pRAD50 has been identified as a novel and clinically applicable pharmacodynamic biomarker of sensitivity to ATM/ATR inhibition [113].
Current Status of ATM/ATR Targeted Therapy in GB
An overview of oncological clinical trials investigating ATMi and ATRi and their relevant biomarker selection criteria is given in Table 1 and Table S2. There are currently four ATMi (KU-60019, AZD0156, AZD1390, and M3541) being evaluated in clinical trials, of which two (AZD1390 and AZD0156) include glioma patients (Table 1) [103]. The first-generation ATMi include the small molecule ATP-competitive inhibitor 2-morpholin-4-yl-6-thianthren-1-yl-pyran-4-one (KU55933) and its improved derivatives KU-60019 and CP466722 [114][115][116]. Therapy with these ATMi has resulted in chemo- and radiosensitization of GB cells and a significant two- to three-fold increase in survival when KU-60019 was administered intratumorally in GB models combined with external beam IR. In particular, a signature of IDH mutations combined with low expression of TP53 or MGMT and high expression of phosphatidylinositol-3-kinase (PI3K) has been identified as a biomarker for more effective ATM-based therapy [35,[117][118][119][120][121]. Interestingly, KU-60019 limited glioma cell growth in co-culture with human astrocytes, with the latter seemingly unaffected by the same treatment [104,115,118,120]. The latest-generation ATMi, AZ32 and AZD1390, have been specifically designed to effectively cross the BBB and showed radiosensitizing effects in GB both in vitro and in vivo [75,93,103,111]. This led to a phase I study of AZD1390 in combination with RT in patients with brain cancer (ClinicalTrials.gov Identifier: NCT03423628). The ATMi AZD0156 has shown potential in multiple cancer types, including synergism with PARPi [29,[122][123][124]. In a phase I trial combining AZD0156 with olaparib, hematologic toxicity appeared to be the treatment-limiting toxicity in advanced malignancies (including glioma), although efficient doses could be reached [125]. Data on the ATMi KU-59403 in GB are awaited [126]. Finally, besides employing small molecule ATMi, silencing of ATM or ATR using siRNA has also been shown to increase glioma cell chemo- and radiosensitivity [21,127,128]. ATRi have demonstrated significant therapeutic potential in cancer treatment, with anti-tumor activity when administered as monotherapy but also when combined with conventional chemotherapy, RT, or immunotherapy [99]. The ATRi currently in clinical trials are VX-970 (also known as VE-822, M6620, or berzosertib; Merck®, Darmstadt, Germany), VX-803 (M4344, Merck®), BAY1895344 (elimusertib; Bayer®, Leverkusen, Germany), M1774, RP-3500, and AZD6738 (ceralasertib; AstraZeneca®, Cambridge, UK). Notably, some of these trials considered biomarkers for patient stratification (Table S1). Unfortunately, no trials have been initiated in GB so far [29,107,133,134].
VX-970, for which 15 trials are now active, reached the clinic first [29,124,135,136]. Radiation and chemosensitization effects have been shown, but efflux pump mechanisms limit brain accumulation of VX-970 [37,[137][138][139]. However, prolonged survival was noted in rats with intracranial GB tumors that were treated with RT combined with VX-970. Survival was even further improved upon combination with PARPi [140,141]. The synthetic lethal interaction of VX-970 might be enhanced by selecting another ATR/CHK1 downstream target, such as WEE1. WEE1 inhibitors (WEE1i) have recently attracted attention, with multiple phase I/II studies investigating this synergy (Figure 1) [29,142,143]. WEE1 promotes S and G2/M cell cycle arrest by blocking cyclin-dependent kinases 1 and 2 (CDK1/2) and allowing DNA repair, as shown in Figure 2 [144]. The most studied WEE1i is adavosertib (MK-1775, AZD1775), with 23 active trials, including a phase I trial in GB patients (ClinicalTrials.gov Identifier: NCT01849146) [29]. In addition, 27 clinical trials are currently actively evaluating the selective ATRi AZD6738 (Table S1) after promising preclinical results [29,99,145,146]. Notably, no significant radiosensitizing effect was found in an orthotopic GB animal model despite effective AZD6738 brain penetration [147].
ATM/ATR Radiopharmaceuticals
Two ATMi have been 11C-radiolabelled: AZD1390 and AZD0156. In macaque monkeys, intravenous administration revealed superior permeability and BBB-penetrating properties of [11C]-AZD1390 compared to [11C]-AZD0156 [75]. A first clinical trial in healthy volunteers analyzed the brain distribution of [11C]-AZD1390 and confirmed BBB penetration [59]. These findings support the use of radiolabeled AZD1390 for therapy and/or diagnostics in patients with central nervous system (CNS) malignancies, including GB. Only one ATRi, VE-821, a less potent precursor of the ATRi VE-822, has been 18F-radiolabelled. This VE-821 analog (termed '[18F]-ATRi') was put forward as a clinically relevant PET imaging agent in an in vivo study by Carlucci et al., and specific target binding was confirmed using a U251 MG GB animal model [40].
CHK1/2 Inhibitors
Current Status of CHK1/2 Targeted Therapy in GB
CHK1/2 are cell cycle checkpoint kinases that prevent cell cycle progression while detected DNA damage is being repaired, as shown in Figure 2 [158,159]. CHK1 is activated by ATR phosphorylation on Ser317 and Ser345, and CHK2 is activated by ATM phosphorylation on Thr68. CHK2 phosphorylates p53, preventing its interaction with MDM2; subsequently, p53 drives the expression of genes involved in apoptosis induction and cell cycle checkpoint activation, such as p21/CDKN1 [96]. CHK1 plays an important role in intra-S-phase and G2/M cell cycle checkpoint progression, mediated by phosphorylation and inhibition of Cdc25A and Cdc25C [93,159]. Inhibited Cdc25 proteins are no longer able to activate their CDK substrates, thereby enforcing cell cycle arrest [160]. CHK1/2 upregulation has been shown in GB, and inhibition is of interest, particularly in GBs with aberrations in other cell cycle regulating factors, such as p53, since these tumors rely on the remaining checkpoints to repair DNA damage. Approximately 50% of GB patients with CHK2 alterations also carry defects in the p53 signaling pathway, while this is only 10-13% for the DDR components ATM, ATR, or CHK1 [9,24,161]. In GSCs, the basal expression of CHK1 and Cdc25C has also been shown to be much higher compared to differentiated GB cells [100,161]. CHK1/2 inhibition has been extensively explored clinically in various cancer types but not yet in GB, likely because numerous CHK1/2i were discontinued before phase III, such as UCN-01 (7-hydroxystaurosporine), rabusertib (LY2603618), and MK-8776 (SCH 900776) [162][163][164][165][166][167][168]. AZD7762, for instance, showed severe cardiac toxicities in patients with advanced solid tumors (AST) [169]. Clinical trials are currently ongoing for the CHK1-selective inhibitors CCT245737 (SRA737) and GDC0575 (ARRY-575, RG7741), and for the CHK1/2 inhibitor prexasertib (LY2606368). Prexasertib-related neutropenia has been identified as an adverse effect, but prexasertib warrants further development given its clinical activity in ovarian cancer, squamous cell carcinoma, and advanced cancer types [170][171][172][173][174]. The CHK1i GDC-0425 and GDC-0575, each given in combination with gemcitabine to solid tumor patients, both warrant further investigation [175,176].
CHK1i therapy of GB has remained in the preclinical setting. The CHK1i MK-8776 effectively permeated the BBB and, combined with gemcitabine, inhibited glioma growth in vivo [177]. Moreover, UCN-01, although non-toxic by itself, increased the cytotoxicity of TMZ five-fold in U87MG (p53 wild-type or deficient) glioma cells by increasing the number of cells bypassing G2-M arrest and thereby undergoing mitotic catastrophe [178]. UCN-01 also inhibited GSC growth in vitro, and AZD7762 radiosensitized p53-mutated GB cell lines (confirmed in GB in vivo models) [179][180][181]. SAR-020106 sensitized human GB cells to RT, TMZ, and decitabine treatment [182]. The impact of CHK1 inhibition on GB cells was also studied using SB218078 and PF477736, confirming an influence on colony and tumor sphere formation, as well as cell proliferation. Khanna et al. also confirmed that CHK1 acts via protein phosphatase 2A in promoting GB cell growth [183]. Unfortunately, in AST, a phase I study on PF477736 combined with gemcitabine was terminated due to business reasons (NCT00437203) [29]. Interestingly, targeting the CHK1 gene in GSCs using, for example, lentivirus-delivered short hairpin RNA (shRNA) also showed the potential to increase radiosensitivity via apoptosis induction [184].
Less research has been performed on CHK2i in GB. It should be noted that while knockdown of CHK1 expression enhanced the radiosensitivity of human GSCs, this was not the case upon CHK2 inhibition [184]. TMZ-induced cell death was also more prominently enhanced by pharmacologic inhibition of CHK1 compared to CHK2 inhibition [128]. However, the CHK2i PV1019 radiosensitized U251 glioma cells [12,185]. As an alternative to CHK1/2 inhibition, inhibition of their downstream targets CDK1/2 or the Cdc25A protein phosphatase has been studied [186]. In our opinion, the multi-targeted MAPK inhibitor MEK162, which also inhibits CDK1/CDK2/WEE1/p-ATM besides CHK2, should be further explored, since it downregulated these targets and radiosensitized spheroidal and orthotopic GB xenografts [15].
Current Status of PARP Targeted Therapy in GB
PARPi have shown significant promise in a variety of malignancies with deficiencies in HR signaling [34,188]. In GB, the BRCAness phenotype leads to impairment of HR and thus PARPi sensitivity [34]. Glioma biomarkers of predictive value for PARPi therapeutic efficacy include IDH1/2 mutations, low BRCA1 expression, aberrant ATM or ATR signaling, MYC overexpression, and inactivation of mismatch repair genes, especially MSH6 [36,123,[189][190][191][192][193][194]. PTEN mutations, present in 70% of GB tumors, have been shown to increase the level of DSBs upon PARP inhibition, though some studies contradict this [195][196][197][198]. MGMT promoter hypermethylation is also being studied as a potentially predictive biomarker for PARPi-mediated TMZ sensitization [189]. TMZ-induced damage can be repaired by either direct repair (in the case of O6-methylguanine lesions) or base excision repair (BER; in the case of N7-methylguanine and N3-methyladenine lesions). Thus, inhibiting PARP-mediated SSB repair (BER) leads to the accumulation of DNA DSBs, thereby enhancing cytotoxicity [199,200]. This way, glioma patients may still benefit from alkylating chemotherapy, regardless of their MGMT promoter status [200][201][202]. Another mechanism of PARPi-mediated TMZ sensitization is allosteric PARP trapping (leading to instability of stalled replication forks), as well as BRCA1 and RAD51 depletion (leading to compromised fork protection) [189,203]. For more information on the combination effects of PARPi and chemotherapeutics, we refer the reader to [204]. Interestingly, cancers with BRCA deficiency and PARPi resistance could also benefit from a combined therapy including CHKi and PARPi [174,205,206]. CHK2 inhibition might also provide a strategy to alleviate hematologic toxicity from PARPi [207].
Preclinically, olaparib delayed GB recurrence when combined with RT and sensitized IDH1-mutated tumor cells when combined with TMZ, leading to clinical trials in GB patients (Table 1) [29,36,209,210]. The phase I OPARATIC trial in recurrent GB patients confirmed that olaparib can be safely combined with daily TMZ if intermittent dosing is applied. Additionally, drug penetration into the entire tumor specimen was confirmed [129]. A phase I/II study in GB of olaparib combined with TMZ/RT is currently recruiting [29,211,212].
Rucaparib has shown anti-GB effects in vitro, which were enhanced when combined with BKM120 (a PI3K inhibitor) or when conjugated to IR-786 (a heptamethine cyanine dye) [225,226]. In combination with TMZ, rucaparib prolonged the time to tumor regrowth by 40% in heterotopic GB xenografts. However, this could not be confirmed in orthotopic GB models, most likely due to limited drug delivery [227]. Despite being FDA-approved for various cancer types, rucaparib has not yet been investigated in clinical trials for GB patients. In AST, however, rucaparib/TMZ was well tolerated and showed proof-of-principle activity [29,228].
The PARPi talazoparib is FDA-approved for breast cancer, and a phase II trial of the talazoparib/carboplatin combination is currently recruiting recurrent high-grade glioma patients with DDR deficiency [29]. Combining high- and low-LET radiation qualities with talazoparib led to promising preclinical results when administered to GSCs; moreover, EGFR amplification might increase their sensitivity [229][230][231]. In vivo, talazoparib combined with TMZ prolonged GB stasis, but this could not be confirmed in orthotopic GB models, most likely due to BBB efflux mechanisms [232].
Niraparib (MK-4827) is currently being investigated in recurrent GB, either combined with RT or with tumor-treating fields (TTFs). TTFs are expected to reduce BRCA1 signaling and thereby DNA repair capacity, enabling PARPi-assisted synthetic lethality [29]. The first results on the niraparib/TMZ combination indicated tolerability and efficacy in patients with advanced cancer [132]. Notably, niraparib penetrated intracranial tumors in breast cancer models [233].
Finally, it was shown that combined PARP and ATR inhibition in GSCs resulted in profound radiosensitization, exceeding the effect of ATR inhibition alone [100,141]. Multiple clinical trials are exploring this combination (olaparib/ceralasertib), including a phase II trial in IDH-mutant solid tumors (ClinicalTrials.gov Identifier: NCT03878095) [29].
PARP Radiopharmaceuticals
Radiolabeled versions of PARPi have gained strong momentum in recent years due to their potential to directly and non-invasively image PARP expression, quantify the biodistribution and tumor uptake of a PARPi, define treatment response, and stratify patients likely to respond to PARPi therapy [199]. Due to the nuclear subcellular location of PARP and its confirmed overexpression in GB, together with overall low expression in healthy brain tissue, PARP-1 is also a near-ideal target for developing radiotherapeutics [56,188]. In addition to eliciting synthetic lethality, promoting genomic instability, and enhancing the cytotoxicity of a subsequently administered DNA-damaging agent, PARP-TRT could itself cause DNA damage [237]. In response to DNA damage, PARP-1 expression also increases, which may result in increased target availability for the therapeutic radiopharmaceutical [50].
[ 18 F]-PARPi was deemed well tolerated and safe in patients with head-and-neck cancer [61]. In GB mouse models, [ 18 F]-PARPi and a bimodal fluorescence/PET imaging agent succeeded in visualizing the tumor [40,53,54]. Additionally, [ 18 F]-PARPi has shown potential in discriminating active brain cancer from treatment-related changes in a murine model of radiation necrosis. This was confirmed in brain cancer patients, including three patients with IDH wild-type primary GB [76,77]. [ 18 F]-PARPi-PET/MRI is currently being evaluated in a pilot study in recurrent brain tumors (ClinicalTrials.gov Identifier: NCT04173104) [29]. [ 18 F]FTT is currently being investigated in phase I studies in various cancer types, including GB (Table S2) [29,85]. [ 18 F]FTT-PET was, for example, performed to measure PARP-1 expression pre- and post-treatment with TTFs and niraparib. Additionally, [ 18 F]FTT uptake was correlated with HR deficiency status [29]. Unfortunately, early clinical results of [ 18 F]FTT report low brain penetration and high uptake in the liver and spleen [86]. In addition, disappointing results have also been reported for other olaparib-based radiopharmaceuticals. An absorption, metabolism, and excretion (ADME) analysis of [ 14 C]-rucaparib reported no brain uptake, and development of [ 18 F]-20 was halted due to substantial defluorination [53,69].
PARP radiopharmaceuticals in a preclinical phase that are worth exploring in GB include [ 14 C]-pamiparib and [ 64 Cu]-DOTA-PARPi. The ADME of [ 14 C]-pamiparib was evaluated in four patients with advanced cancer and indicated near-complete absorption and low renal clearance of the parent drug [74]. [ 64 Cu]-DOTA-PARPi showed potential in mesothelioma-bearing animal models [64]. Notably, fluorinated radiopharmaceuticals based on talazoparib have been evaluated in a prostate cancer model and indicated TRT potential [65].
Therapeutic radiopharmaceuticals targeting PARP have been studied preclinically in GB with promising results. The first Auger-based theranostic PARPi, the iodine-123 Meitner-Auger PARP1 inhibitor, successfully delivered a lethal payload within 50 Å of the DNA of GB cancer cells and demonstrated a survival benefit in mouse models of GB [57,239,240]. [ 123 I]-I2-PARPi was retained within GB xenograft tumors, and its retention correlated with PARP expression [55]. Jannetti et al. developed [ 131 I]-PARPi, a 1(2H)-phthalazinone with a structure similar to olaparib; convection-enhanced delivery of [ 131 I]-PARPi led to increased survival of mice with orthotopic brain tumors [56]. Selective binding of 131 I- and 124 I-labelled I2-PARPi was also confirmed in GB models [58]. A particularly promising PARP-TRT agent is [ 211 At]-MM4, a rucaparib derivative, owing to its high cytotoxicity and the favorable half-life of astatine-211 (7.2 h). In neuroblastoma models, this compound increased survival [50].
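As a back-of-the-envelope illustration of why the 7.2 h half-life of astatine-211 is considered favorable for TRT, the minimal sketch below applies the standard radioactive decay law; the retention windows used are hypothetical examples, not values from the studies cited above.

```python
import math

T_HALF_H = 7.2  # physical half-life of astatine-211 in hours (as cited above)

def fraction_decayed(t_hours: float, t_half: float = T_HALF_H) -> float:
    """Fraction of 211At nuclei that have decayed after t_hours,
    from N(t) = N0 * exp(-ln(2) * t / t_half)."""
    return 1.0 - math.exp(-math.log(2) * t_hours / t_half)

# If a compound such as [211At]-MM4 were retained in tumor for 24 h
# (hypothetical window), most of the dose would already be delivered:
print(f"decayed after 12 h: {fraction_decayed(12):.1%}")  # ~68.5%
print(f"decayed after 24 h: {fraction_decayed(24):.1%}")  # ~90.1%
```

In words: a half-life of this order delivers most of its dose within a day, which is short enough to limit prolonged normal-tissue exposure yet long enough to allow tumor accumulation.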
Current Status of DNA-PK Targeted Therapy in GB
DNA-PK consists of a heterodimer (Ku70/80) and a large catalytic subunit known as DNA-PKcs [12,241]. This complex initiates NHEJ by binding to the DSB, leading to subsequent phosphorylation and activation of DNA-binding proteins and ultimately to ligation of the DSB ends (Figure 1) [10]. In GB, high DNA-PK levels correlate with poor survival and increased GSC stability [242,243]. DNA-PK has been shown to mediate GSC radioresistance and glioma progression in vivo, suggesting DNA-PK/RAD50 as promising targets for GSC eradication [244]. To date, research on biomarkers of sensitivity to DNA-PK inhibition is only preclinical, but HR deficiency could theoretically predict sensitivity to DNA-PKi, given the increased reliance of HR-deficient cells on NHEJ [162]. Sun et al. identified p53 as a potential predictive biomarker of response to the combination of DNA-PKi and RT [245].
Small-molecule inhibitors of DNA-PK, from the first identified inhibitors (wortmannin and its derivatives PX-866 and PWT-458) to more selective DNA-PKi, have been reviewed [13,246,247]. The DNA-PKi VX-984 (Vertex, now licensed to Merck KGaA, Darmstadt, Germany as M9831), nedisertib (M3814, peposertib, MSC2490484A, Merck KGaA), and the recently discovered AZD7648 have entered clinical trials. Only nedisertib combined with RT/TMZ is currently under investigation for GB patients with unmethylated MGMT promoter status, following preclinical evidence of a radiosensitizing effect upon NHEJ inhibition [29,248]. In phase I trials for AST, this combination was well tolerated and demonstrated modest efficacy [249]. VX-984 has shown promising radiosensitizing effects in GB in vitro and in vivo, with confirmed BBB crossing [250]. Interestingly, an inability to resolve γ-H2AX foci in the presence of VX-984 could be induced in T98G cells only [251]. Results on the safety of VX-984 administered in AST patients are still pending [29]. AZD7648 is undergoing clinical evaluation in AST after showing RT/TMZ-sensitizing effects and synergism with olaparib; however, this needs to be confirmed in GB [29,252,253].
Less selective DNA-PKi that co-target mTOR include CC-115, avadomide (CC-122), samotolisib (LY3023414), and NVP-BEZ235. In GB patients, CC-115 was well tolerated, and 21% achieved stable disease, with proven drug distribution into GB tissue [254,255]. CC-115 has shown a synthetic lethal effect with functional ATM loss and is included in one of the three experimental arms of the ongoing Individualized Screening Trial of Innovative GB Therapy (INSIGhT) [254,256]. Avadomide (CC-122) has recently been deemed safe in various cancer types, and applicability to CNS-related cancers has been suggested; a phase I trial of avadomide in patients with advanced tumors unresponsive to standard therapies, including GB, is still active (ClinicalTrials.gov Identifier: NCT01421524) [257]. Samotolisib (LY3023414) showed single-agent activity in advanced cancer patients and is being investigated further in pediatric CNS tumors. Of note, BBB penetration remains a stumbling block [29,[258][259][260].
DNA-PK Radiopharmaceuticals
Radiopharmaceuticals targeting DNA-PK are scarce. The disposition of the samotolisib derivative [ 14 C]-LY3023414 following oral administration was studied in healthy subjects; however, the results are pending (ClinicalTrials.gov Identifier: NCT02575703) [29]. It should be noted that the uptake of radiolabeled LY3023414 would not be DNA-PK-specific, because the compound also targets PI3K/mTOR. Additionally, radiosynthesis protocols for 11 C-labelled chromen-4 derivatives as potential new DNA-PK PET imaging radiopharmaceuticals were published by Gao et al. but have not yet been validated in vivo [73].
Development of Other DDR Radiopharmaceuticals
Besides DDRi radiopharmaceuticals themselves, radiotracers enabling the visualization and quantification of the DNA damage induced by TRT would be extremely valuable, e.g., to assess the radiobiological treatment response of the tumor. This category includes γH2AX radiotracers such as 89 Zr-/ 111 In-labelled anti-γH2AX-TAT. Anti-γH2AX antibodies are routinely used in ex vivo assays to quantify the number of γH2AX foci or DNA DSBs within cell populations, but a cell-penetrating peptide is required for in vivo applications [41,266]. For example, the extent of the DNA damage response after [ 177 Lu]-DOTATATE therapy was evaluated using [ 111 In]-anti-γH2AX-TAT SPECT imaging [267].
Challenges and Risks of DDRi (Radio)Pharmaceuticals
Exploiting synthetic lethal interactions has attracted considerable attention as an anticancer strategy; however, developing such approaches to selectively target cancer cells while sparing healthy tissues remains challenging [158]. Major hurdles include tumor biology, heterogeneity, and complexity; an inadequate understanding of synthetic lethal interactions; drug resistance; and challenges regarding screening and clinical translation. Hence, there is an urgent need for improved efforts to identify and understand synthetic lethal interactions, as well as to validate new screening tools and biomarkers, including DDR radiopharmaceuticals. Improved genetic perturbation techniques, including CRISPR/Cas9 gene editing, are also promising prospects for exploiting synthetic lethal effects in cancer [268].
DDRi-induced toxicity to healthy tissue can be limited thanks to the intact DDR pathways in healthy cells (Figure 1). The phenomenon of "replication stress", unique to fast-proliferating cancer cells, reinforces this statement [16,42]. Nevertheless, unspecific cellular toxicity may occur, since most DNA repair pathways overlap in terms of DNA repair proteins; this could lead to unwanted DNA damage in normal tissue, increasing the risk of late toxicity [158]. For example, ATM inhibition showed a greater radiosensitizing effect in p53-deficient tumors, but effects were also observed in p53 wild-type cells [111]. This might be an important consideration for proliferating cells of the CNS, where a p53-dependent G1/S checkpoint would remain at least partially activated in the presence of an ATMi (via ATR), thereby inducing cell cycle arrest and preventing apoptosis. In neurons, ATM seems to be required for apoptosis; hence, transient brain exposure to an ATMi might not be extremely toxic [111]. Upon PARP inhibition, toxicity to the normal brain is expected to be minimal, since PARP-1 expression has not been detected in normal neurons [269]. Moreover, early-phase clinical trial data indicate that the radiosensitizing properties of PARPi are most pronounced in rapidly proliferating cells [212]. Hence, given the non-dividing nature of neuronal tissue in the brain, it is assumed that the addition of a PARPi to RT would have a relatively larger effect on highly proliferative GB cells than on normal brain cells [200]. The toxicity of PARPi is also related to their PARP-trapping capacity, and reactivities differ with different combination partners and the DNA damage mutations present [204,270]. For example, the combination of PARPi with chemotherapy is hampered by overlapping toxicities, limiting the administrable dose. Interestingly, hematologic toxicity seems more pronounced in germline BRCA carriers [270,271].
In the context of TRT, the combined toxicity of the cold DDRi and the radionuclide is important to consider. TRT toxicity can be related to targeting efficiency, radionuclide stability and the nuclear recoil effect, the physical properties of the radionuclide, dosimetry, immunogenicity, and the administration route [43]. Confirming the presence of the DDR target using an imaging DDR radiopharmaceutical (SPECT/PET) and evaluating its distribution throughout the body (before selecting TRT as a treatment strategy) is essential. Increased toxicity might be expected when multiple DNA-damaging strategies are combined; however, it should be noted that the concentration of a DDRi given as targeted therapy will be markedly higher than the prospective dosage of a radiolabeled DDRi given for nuclear imaging or TRT. Following PARP-TRT, normal tissue toxicities in the spleen and bone marrow are anticipated due to PARP-1 expression in these normal tissues. Other potential sites of toxicity include the liver and gastrointestinal tract if they are involved in the biological clearance of the compound [50].
Nuclear imaging strategies have shown the ability to measure expression levels of DDR kinases in vivo. However, compared with the tumor uptake of radiopharmaceuticals targeting cancer biomarkers situated on the cell surface, the uptake of these agents is generally low. Factors such as the transient nature of DDR protein activation (e.g., following RT, the expression of many biomarkers, including PARP-1 and γH2AX, disappears within days), inefficient drug internalization/nuclear translocation, and, specifically for GB applications, BBB crossing all play a role [41]. In the case of alpha emitters, subcellular delivery to the cell nucleus will increase cytotoxicity due to the high probability that both the alpha particle and the recoil radiation of its parent atomic nucleus will traverse the cell nucleus [50]. As can be seen in Table S2, most of the DDRi investigated for TRT have been radiolabeled with halogens. Reaching the nucleus might be difficult upon radiometal chelation; however, some preclinical results have shown promise. A Cu-64-radiolabeled olaparib analog containing a DOTA moiety resulted in clear tumor uptake in mesothelioma [64], and the nuclear uptake of a [ 177 Lu]-DOTA-labeled DNA intercalator in Raji cells was deemed sufficient, although it was lower than total cellular uptake [272].
Treatment resistance is a significant limitation for the application of DDRi and DDR-based TRT. GB tumors relying on one DNA repair pathway for their survival may additionally harbor mutations that cause resistance to certain DDRi [158]. For example, PARPi resistance may be induced by HR restoration or mitigation of replication stress. Identified biomarkers of PARPi resistance include loss of 53BP1/ARID1A, low levels of Schlafen 11 (SLFN11) or GBP1, and genomic reversion of BRCA1/2; in addition, DNA replication fork protection (PTIP/EZH2) and genetic mutations that activate a drug efflux pump play a role. This highlights the need for functional biomarkers that can assess HR proficiency and predict DDRi effectiveness, as well as the need for combined treatment strategies (e.g., PARPi with other DDRi or TKIs) [30,162,208,273,274]. Unfortunately, due to the limited number of clinical trials involving ATMi/ATRi/CHK1i/DNA-PKi, biomarkers indicating resistance to these DDRi are largely unknown. A few examples exist: PGBD5 and Cdc25A depletion have been associated with ATRi resistance, and overexpression of ATP-binding cassette G2 (ABCG2) increased CC-115 resistance [275][276][277][278].
Selection of New GB Radiopharmaceuticals Targeting the DDR
In order to select suitable candidate radiopharmaceuticals capable of targeting DDR kinases for GB imaging and therapy, several factors need to be considered, such as biochemical and pharmacological characteristics, radiolabeling and radionuclide half-life options, and the ability to cross the BBB. The latter is affected by the molecular weight, lipophilicity, polar surface area, and hydrogen bond donors of the inhibitor [43]. To identify those DDRi that have the potential to become suitable GB TRT agents, in-house selection criteria were applied to all the above-mentioned DDRi studied in GB (listed in Table 2). Four DDRi are thereby suggested that could potentially be converted into novel TRT radiopharmaceuticals: AZD1390, nedisertib (M3814), SAR-020106, and MK-8776 (Figure 6).
These DDRi contain a halogen in an aryl position that could serve as a designated location for radiohalogenation, for example using iodine-125 (an Auger emitter), iodine-131 (a beta emitter), or astatine-211 (an alpha-particle emitter), and/or qualify for insertion of a chelator substituent to harbor a therapeutic radiometal.
Nucleophilic halogen exchange (iodine for iodine) reactions are regularly used for the incorporation of radioiodine into organic molecules, with inorganic salts (ammonium sulfate) or copper(II) salts often added to catalyze the iodine exchange. Notably, no naturally stable isotope of astatine exists, and halogen exchange using astatine-211 would therefore require the iodo- or bromo-derivatives [279]. However, this approach cannot yield a pure astatinated product, since the unreacted iodo- or bromo-starting compounds cannot be removed. Therefore, astatination reactions generally proceed through electrophilic substitution in the presence of oxidants, with a newer method using substitution of a dihydroxyboryl group [279,280]. When radiohalogenating the above-mentioned DDRi, the effect of the larger halogen on the modified molecule may also result in altered biological properties.
Table 2. Selection criteria for assessment of candidate GB DDRi TRT agents.
Inclusion Criteria
1. The DDRi was studied preclinically or in clinical trials in GB.
2. The DDRi is a small molecule that: A. contains a halogen indicating a position that can potentially be radio-iodinated or -astatinated; and/or B. has a potential site for attachment of a chelator.
3. The DDRi has already been radiolabeled with a diagnostic isotope and was studied in GB.
Exclusion Criteria
1. Clinical trial results indicate candidate exclusion by way of: A. findings in GB patients revealing unwanted safety/tolerability issues (single agent), or serious adverse events that were irreversible or responsible for treatment discontinuation; and/or B. occurrence of unfavorable pharmacokinetic properties.
2. The DDRi does not contain a halogen or any possible site for chelator attachment.
3. The DDRi has already been radiolabeled (with a diagnostic and/or therapeutic radionuclide) but was not studied in GB.
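To make the selection procedure in Table 2 concrete, a minimal sketch of how the inclusion criteria 1-2 and the exclusion criteria could be encoded as a filter is shown below; the candidate records, their boolean field values, and the "ExampleDDRi" entry are illustrative placeholders, not data from this review.

```python
from dataclasses import dataclass

@dataclass
class DDRiCandidate:
    name: str
    studied_in_gb: bool            # inclusion 1: preclinical or clinical GB data
    has_halogen_site: bool         # inclusion 2A: radioiodination/astatination position
    has_chelator_site: bool        # inclusion 2B: attachment point for a chelator
    gb_safety_issues: bool         # exclusion 1A: unacceptable findings in GB patients
    unfavorable_pk: bool           # exclusion 1B: unfavorable pharmacokinetics
    radiolabeled_not_in_gb: bool   # exclusion 3: radiolabeled but never studied in GB

def passes_selection(c: DDRiCandidate) -> bool:
    """A candidate must satisfy the inclusion criteria and trip no exclusion."""
    included = c.studied_in_gb and (c.has_halogen_site or c.has_chelator_site)
    excluded = c.gb_safety_issues or c.unfavorable_pk or c.radiolabeled_not_in_gb
    return included and not excluded

# Hypothetical entries only -- the field values below are not taken from Table 2.
candidates = [
    DDRiCandidate("AZD1390", True, True, True, False, False, False),
    DDRiCandidate("ExampleDDRi", True, False, False, False, False, False),
]
print([c.name for c in candidates if passes_selection(c)])  # ['AZD1390']
```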
The consideration in using a chelator is that the increase in size and molecular weight, and the possible change in overall charge of the inhibitor, could affect pharmacological properties (lipophilicity, metabolism, biological half-life, target binding) and, especially for GB targeting, BBB crossing [43]. Attachment of chelators to biomolecules is generally carried out through a nucleophilic reaction between a bifunctional chelating agent and a primary amine. Insertion of a chelator into the structure of a DDRi would require the replacement of the substituent on an N- or O-atom with a functionalized chelating agent through a variety of available reactions.
ATMi AZD1390
AZD1390, developed by AstraZeneca, is a highly potent ATMi (10,000-fold more specific for ATM than for other PIKK members) that blocks ATM autophosphorylation at Ser-1981 and phosphorylation of KAP1 at Ser-824 [281]. AZD1390 has been converted into a 11 C-radiolabeled drug that showed good BBB penetration (1% ID at T max [brain] = 21 min) in healthy volunteers. Results on the safety, tolerability, and pharmacokinetics of [ 11 C]-AZD1390 in combination with RT are expected by 2024 [29,59]. The fast localization of AZD1390 to the brain limits its use in TRT to therapeutic radionuclides that match these biodistribution characteristics. AZD1390 has a piperidine moiety, an isopropyl moiety, and a fluorine at the ortho-position of ring two. No crystal structures of ATM have been reported to date, and the ATM model developed by Degorce et al. was used in this review for SAR rationalization [282]. SAR studies have reported the need for the 4-amino and 3-carboxamide derivatives within the structure, as well as the importance of the internal hydrogen bond formed between this moiety and the bioactive conformation of ATM [282]. The fluorine atom at the ortho-position of ring two represents a potential radiohalogenation site: direct radiohalogenation could add a therapeutic radioiodine or radioastatine at that position. A chelator could possibly replace the piperidinyl moiety; however, this moiety sits within the hydrophobic pocket, and its replacement would most likely affect binding [75,283].
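The point that a therapeutic label must "match the biodistribution characteristics" can be illustrated numerically. Carbon-11 decays with a half-life of about 20.4 min (a standard physical constant, not a value from the cited study), so by the reported brain T max of 21 min roughly half of an [ 11 C] label has already decayed; a therapeutic nuclide with a longer half-life would track the drug's residence far better. A minimal sketch:

```python
import math

T_HALF_C11_MIN = 20.4   # physical half-life of carbon-11 (standard value)
T_MAX_BRAIN_MIN = 21.0  # brain Tmax of [11C]-AZD1390 reported above

# Fraction of 11C activity remaining at Tmax, from exponential decay
remaining = math.exp(-math.log(2) * T_MAX_BRAIN_MIN / T_HALF_C11_MIN)
print(f"fraction of 11C remaining at Tmax: {remaining:.1%}")  # ~49.0%
```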
DNA-PKi Nedisertib (M3814)
Nedisertib (M3814, peposertib, MSC2490484A), developed by Merck KGaA, Darmstadt, Germany, is an orally bioavailable, highly potent, and selective DNA-PKi. Nedisertib was well tolerated as monotherapy in AST patients, and two clinical trials are currently evaluating nedisertib (peposertib) in combination with chemo/RT. The maximum systemic concentration of nedisertib occurred 1-2 h after administration; its BBB penetration capabilities are still under investigation (ClinicalTrials.gov Identifier: NCT04555577). Based on the structural interactions between nedisertib and the active site of DNA-PK, both the quinazoline and morpholino moieties bind in the hydrophobic pocket, while the pyridazine ring rotates to form π-π interactions with the quinazoline plane [284]. The chloro-fluorobenzene ring in the active site is directed towards the N-lobe, potentially allowing radiohalogenation at positions one and three of the ring. However, the fluorine points towards the hydrophobic pocket, and thus radioiodination or radioastatination at this position might not be feasible. The binding model further indicates that the methoxy group on the pyridazine ring is oriented outwards, towards the solvent region. The methyl group could potentially be extended to a longer alkyl chain reaching further into the solvent area and functionalized with a chelator group in the terminal position; such a chelator would be able to complex metallic radioisotopes for TRT.
CHK1i SAR-020106 and MK-8776 (SCH900776)
The kinase domain of CHK1/2 consists of an N- and a C-terminal lobe connected by a hinge region. The hinge forms part of the ATP-binding pocket, and the majority of CHK1i compete with ATP for binding to this site. Inhibitors bind through hydrogen bonding to hinge residues (typically Glu-85, Tyr-86, and Cys-87), as well as to protein-bound water within the active site. Generally, polar substituents of the inhibitors are oriented into the ribose pocket, with more lipophilic groups directed toward the surface where the hinge cleft opens to the solvent. A substituent projecting into the solvent area could be modified with more hydrophilic groups in order to improve inhibitor pharmacokinetics [163].
SAR-020106 is a highly selective and potent inhibitor of CHK1 (IC 50 of 13 nmol/L; >7,000-fold selectivity over CHK2) that is still in the preclinical phase. Although SAR-020106 is highly bound (94%) to plasma proteins, tumor drug accumulation within 24 h is significant, with tumor/plasma ratios of 47:1 and 85:1 after 6 h and 24 h, respectively [285]. SAR-020106 is structurally classified into the 'pyrazine scaffold' inhibitor group, with an ether-linked ethylamine substituent on a cyanopyrazine ring connected to a chlorinated isoquinoline [163]. The cyanopyrazine group interacts significantly with Lys-38 and the protein-bound water network within the active site, while the isoquinoline nitrogen and the secondary amine connect with Cys-87 and Glu-85, respectively. The chloro group on the isoquinoline indicates a potential position for radiohalogenation using radioiodine or astatine, since this chlorine atom is not involved in active-site interactions. The nitrogen atom of the tertiary amine side chain of SAR-020106 also binds water within the protein active site, but this amine carries two methyl groups, one of which could potentially be substituted with a longer alkyl chain extending into the solvent region. A lengthened alkyl chain should not drastically affect the hydrogen bonding of the amine and would potentially allow insertion of a chelating group at the end of the hydrocarbon chain. The chelator could then be used for complexation of therapeutic metal isotopes, such as lutetium-177, for TRT.
MK-8776 (SCH900776), developed by Merck & Co., is another highly selective and potent inhibitor of CHK1 (IC 50 of 3 nmol/L) that is currently in phase I/II clinical trials for various cancers but has only been tested preclinically for GB therapy [286,287]. These studies indicated that MK-8776 enhances cellular susceptibility to chemotherapeutic agents such as gemcitabine and hydroxyurea [288]. The BBB penetration of MK-8776 is currently unknown, but the drug is 49% plasma protein bound, with a plasma half-life of 5.6-9.8 h [164]. The structural scaffold of MK-8776 is a pyrazolo[1,5-a]pyrimidine functionalized with a piperidine and a 1-methylpyrazole ring [163,289]. MK-8776 binds to the hinge region of the kinase ATP-binding site through N1 and C7-NH 2 of the pyrazolo[1,5-a]pyrimidine core, while the nitrogen of the 1-methylpyrazole sits within the interior pocket bound to water. The piperidine nitrogen atom is hydrogen-bonded to Glu-91 and the amide carbonyl of Glu-134 in the ribose pocket. Position C6 of the pyrazolo[1,5-a]pyrimidine is functionalized with bromine, which could potentially be converted to a therapeutic radiohalogen. Although the C7 primary amine of MK-8776 is involved in binding to the active site, similar compounds with a secondary amine in this position, investigated prior to the development of the clinical candidate, also showed very selective and strong inhibition of CHK1 [163]. Therefore, alkylation of the C7-amine with a chelator-functionalized alkyl chain (to harbor therapeutic metal radionuclides) would convert MK-8776 into a TRT radiopharmaceutical.
Conclusions
DDR kinases are attractive targets to promote DNA damage and DNA replication stress and to render GB cells more vulnerable to RT and TMZ, following the principle of synthetic lethality. This review presented the current DDRi targeting ATM/ATR, PARP, CHK1/2, and DNA-PK for the treatment of GB, together with a perspective and overview of potential radiolabeling options for these small molecules. Despite the hurdles of GB heterogeneity and drug resistance, radiopharmaceuticals targeting DDR kinases have the potential to stratify patients for DDRi therapy, predict response to DNA-damaging treatments, and guide TRT agents to the nucleus of GB cells, ultimately increasing therapeutic effectiveness. This review revealed that only a limited number of the developed DDRi have been explored for their TRT potential. Through the application of relevant selection criteria, four DDRi compounds were identified that could potentially be converted into novel TRT radiopharmaceuticals: AZD1390, nedisertib (M3814), SAR-020106, and MK-8776. Radiopharmaceutical development of these candidates may greatly advance a more tailored and personalized GB therapy. | 2022-04-06T15:22:01.005Z | 2022-04-01T00:00:00.000 | {
"year": 2022,
"sha1": "d78b8c9a6a93666e916717d2ddbc21ca4afe5646",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6694/14/7/1821/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "57d7494ab6ae8ddbdc0e5875e4b818b5c711fea4",
"s2fieldsofstudy": [
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
53096852 | pes2o/s2orc | v3-fos-license | Association of Elevated Peripheral Blood Micronucleus Frequency and Bmi-1 mRNA Expression with Metastasis in Iranian Breast Cancer Patients
Background: In order to identify cytogenetic and molecular metastasis biomarkers detectable in peripheral blood, spontaneous genomic instability, expressed as micronuclei, and Bmi-1 expression were studied in the peripheral blood of breast cancer (BC) patients at different stages of the disease, compared with unaffected first-degree relatives (FDRs) and normal controls. Methods: The Cytokinesis-Block Micronucleus Cytome (CBMN Cyt) assay and nested real-time Reverse Transcription-Polymerase Chain Reaction (RT-PCR) were used, respectively, to measure genomic instability and Bmi-1 gene expression in 160 Iranian individuals comprising BC patients at different stages of the disease, unaffected FDRs, and normal controls. Results: The frequency of micronuclei and Bmi-1 expression were dramatically higher in distant-metastatic compared with non-metastatic BC. Although micronucleus frequency showed no association with lymph node (LN) involvement or hormone receptor status, the Bmi-1 expression level was higher in LN-positive and triple-negative patients. Conclusion: Our results indicate that increased genomic instability, expressed as micronuclei, and higher Bmi-1 expression in peripheral blood are associated with metastasis in breast cancer. Implementation of the micronucleus assay and Bmi-1 expression analysis in blood as possible cytogenetic and molecular biomarkers at the clinical level may therefore enhance the quality of management of patients with breast cancer.
Introduction
Sporadic breast cancer is the most common malignancy among women worldwide, and its etiology is multifactorial. Predisposition to breast cancer may result from mutations in genes involved in the processing of DNA damage and repair, known as low-penetrance genes (Hemalatha et al., 2014).
The majority of solid tumors are of epithelial origin. Breast cancer cells that undergo epithelial-to-mesenchymal transition (EMT) acquire malignant characteristics; however, the molecular mechanisms and/or cytogenetic characteristics underlying this transition are poorly understood. It has been shown that genomic instability, expressed as aneuploidy and chromosomal rearrangements, is closely related to tumor development and progression. Highly aneuploid breast tumors generally progressed faster and were clinically more aggressive than their counterparts without aneuploidy (Li et al., 2008). These data clearly indicate that genomic instability may be considered an important factor for tumor development and progression, including distant metastasis, in breast cancer. Although markers such as large tumor size, poorly differentiated histopathological grade, and lymph node metastasis are established prognostic markers related to metastasis, distant metastasis still occurs in 20-30% of patients with negative lymph node involvement (Loda et al., 2010).
The exact molecular mechanism of breast cancer metastasis remains unclear due to cancer heterogeneity, and elucidating it is a prerequisite for developing better treatment strategies.
The polycomb group (PcG) proteins constitute a global system with important roles in cancer, multicellular development, and stem cell biology. B-lymphoma Moloney murine leukemia virus insertion region-1 (Bmi-1) was the first functional mammalian PcG proto-oncogene to be recognized. The PcG consists of several proteins that form multiprotein complexes regulating gene activity at the chromatin level; they were initially identified as part of the memory system that ensures the faithful transmission of cell identities through cell division. Although PcG protein expression is tightly regulated in normal cell proliferation and differentiation, it is often deregulated in several types of human cancer (Li et al., 2014). Bmi-1 is known to play an important role in carcinogenesis, as it was originally identified as an oncogenic partner of c-Myc in murine lymphomagenesis (Joensuu et al., 2011). Previous studies revealed that Bmi-1 is involved in the regulation of stem-cell-associated genes to control cell self-renewal and differentiation. Moreover, Bmi-1 may be involved in the carcinogenesis and metastasis of breast cancer through its role in leading human mammary epithelial cells (HMECs) to bypass senescence and become immortalized via activation of human telomerase reverse transcriptase (hTERT), which extends the replicative lifespan (Silva et al., 2007; Guo et al., 2011); in addition, a significant correlation has been observed between Bmi-1 expression and axillary lymph node metastasis in invasive ductal breast cancer (Silva et al., 2007). Although some evidence has shown that Bmi-1 expression is associated with unfavorable prognosis, other studies have not confirmed these findings (Choi et al., 2009; Nalwoga et al., 2010; Shao et al., 2014). Shao (2014), in a meta-analysis, reported that high Bmi-1 expression was significantly associated with poor survival in Asian patients with esophageal carcinoma, gastric cancer, lung cancer, colorectal cancer, and cervical carcinoma, whereas high Bmi-1 levels predicted better prognosis in Caucasian patients with breast cancer. In spite of the aforementioned links between Bmi-1 and cancer, very few studies have focused on the molecular mechanism and clinical outcome of Bmi-1 in breast cancer metastasis. The majority of Bmi-1 expression studies in breast cancer have focused on expression in breast cancer tissues, and there is only one report analyzing Bmi-1 gene expression at the mRNA level in plasma (Silva et al., 2007). In the present study, for the first time, we traced Bmi-1 expression at the RNA level in total peripheral blood using nested real-time RT-PCR. Many tumors shed stray cells, vesicles, and traces of DNA and RNA into the blood and other body fluids; such debris can serve as markers to monitor disease progression and even help to diagnose cancers before symptoms appear. Considering the advantages of finding cancer biomarkers detectable in blood, in the present study the relative influence of peripheral blood genomic instability expressed as MN and of Bmi-1 expression at the RNA level was investigated in Iranian breast cancer patients using the CBMN and nested real-time RT-PCR assays.
Study population
The study was carried out as a case-control study in a group of 160 Iranian females (70 ductal carcinoma breast cancer patients, 40 unaffected first-degree relatives of the studied patients, and 50 unaffected matched controls). The protocol was approved by the ethics committee of the National Institute of Genetic Engineering and Biotechnology (NIGEB), based on the Helsinki Declaration. Patients and controls signed a written informed consent letter before enrolment. Table 1 shows clinical and analytical data for the test and control groups. About 5 mL of blood was collected from each donor (breast cancer patients, unaffected first-degree relatives of patients, and normal controls, all female) by venipuncture into heparinized and EDTA tubes. Breast cancer patients were recruited from patients referred to Imam Khomeini Hospital, Tehran, Iran. All donors completed a written questionnaire to obtain information related to their lifestyle, such as dietary habits, medical history, and exposure to chemical and physical agents.
The inclusion criteria for the patient samples were a histopathological diagnosis of ductal carcinoma and the availability of immunohistochemistry (IHC) results for human epidermal growth factor receptor 2 (HER-2), estrogen receptor (ER), and progesterone receptor (PR) status, together with other pathologic diagnostic information. Receipt of chemotherapy or radiotherapy before recruitment and any history of familial breast disease or malignancy were considered exclusion criteria.
The patients were distributed into three groups according to tumor stage (stage II to IV), which was determined by a pathologist in compliance with common standards. Details of the patient clinicopathological parameters are presented in Table 1.
CBMN assay
Cell culturing
Blood samples were drawn by venipuncture into sodium-heparin vacutainers and processed within 3 hours after retrieval from the hospital. For each individual, four lymphocyte cultures were set up by adding 0.5 mL of whole blood to 4.5 mL of RPMI 1640 medium supplemented with 15% Fetal Bovine Serum (FBS), 1% antibiotics (100 IU/mL penicillin and 100 µg/mL streptomycin), and 0.15 mL phytohaemagglutinin (all provided by Gibco Life Technologies, Paisley, UK). Lymphocytes were cultured at 37°C for 72 h. After 44 h, 6 μg/mL cytochalasin B (Gibco, Northumberland, UK) was added to the culture to arrest cells at cytokinesis. At 72 h of incubation, cultures were harvested by centrifugation at 120 g for 8 min, followed by a brief hypotonic treatment (2-3 min in 0.075 M KCl at 48°C). The cells were centrifuged, then fixed and washed in methanol/acetic acid (3:1 v/v) solution three times. The resulting cells were resuspended and dropped onto clean slides. Slides were coded and stained with 10% Giemsa (Merck, Darmstadt, Germany) in phosphate buffer (pH 6.8) for 5 min.
Scoring and data evaluation
The scoring criteria established by Fenech (2007) were used for the CBMN Cyt assay analysis. To determine the frequency of the CBMN assay endpoints (micronuclei, nucleoplasmic bridges, and nuclear buds), as well as apoptosis and necrosis, a total of 1,000 binucleated cells with well-preserved cytoplasm were blindly scored on coded slides. In addition, a total of 500 lymphocytes were scored to determine the percentage of cells with one, two, or more nuclei in order to calculate the nuclear division index (NDI).
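For readers unfamiliar with the nuclear division index, a minimal sketch of the calculation is given below, following the Fenech (2007) formulation NDI = (M1 + 2·M2 + 3·M3 + 4·M4)/N, where M1-M4 are the numbers of cells with one to four nuclei among N scored viable cells; the counts used here are hypothetical, not data from this study.

```python
def nuclear_division_index(m1: int, m2: int, m3: int, m4: int) -> float:
    """NDI per Fenech (2007): weights each scored cell by its number of nuclei."""
    n = m1 + m2 + m3 + m4
    return (1 * m1 + 2 * m2 + 3 * m3 + 4 * m4) / n

# Hypothetical scoring of 500 lymphocytes (as in the protocol above):
print(round(nuclear_division_index(m1=260, m2=220, m3=15, m4=5), 2))  # 1.53
```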
Bmi-1 mRNA expression analysis
RNA extraction and cDNA synthesis
Total RNA was purified from EDTA-saturated fresh blood using TRI Reagent BD (Sigma, Darmstadt, Germany). Two µg of total RNA were digested with 2 µg DNase I (Fermentas, Manchester, UK) to remove genomic DNA contamination, and 1 µg of RNA was then used for cDNA synthesis with the Precision qScript Reverse Transcription Kit (Primerdesign, Chandler's Ford, UK). All steps were performed following the manufacturer's instructions.
Standard curve construction
The amplification efficiency for each primer pair was determined by amplification of a linear standard curve (from 0.24 to 1,000 ng) of total cDNA quantified by ultraviolet spectrophotometry. The standard curves showed good linearity and amplification efficiency (100%) for the primer sets of the experimental (Bmi-1) and reference (beta-actin) genes.
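Amplification efficiency is conventionally derived from the slope of the standard curve (Cq versus log10 of input cDNA) as E = 10^(-1/slope) - 1, with a slope of about -3.32 corresponding to 100% efficiency. A minimal sketch with hypothetical Cq values (not the values measured in this study):

```python
import numpy as np  # assumes numpy is available

# Hypothetical dilution series spanning the 0.24-1,000 ng range used above
input_ng = np.array([0.24, 2.4, 24.0, 240.0])
cq = np.array([33.1, 29.8, 26.5, 23.2])  # hypothetical quantification cycles

# Linear fit of Cq against log10(input); slope determines the efficiency
slope, intercept = np.polyfit(np.log10(input_ng), cq, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0
print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}")  # slope -3.30 -> ~101%
```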
Nested Real-time RT-PCR analysis
Gene-specific primers were designed manually. Because the Bmi-1 gene is expressed at low levels in peripheral blood RNA, nested real-time PCR was used to quantify gene expression. The first round of PCR was carried out in a 50 μl reaction containing 5 μl of cDNA, 10 μl 10X buffer, 1 μl 10 mM dNTP, 3 μl gene-specific primer mix (20 μM), and 5 units Taq polymerase, and amplified on a thermal cycler at 95°C for 2 minutes, followed by 40 cycles of 95°C for 30 seconds, 51°C for 35 seconds, and 72°C for 35 seconds, then 1 cycle of 72°C for 10 min, ending at 4°C. The PCR primers for the first-round amplification of Bmi-1 are F1: 5′TAATGCCATCTGATTCTTAC3′ and R1: 5′CATGTCACTGTGAATAACG3′. Real-time RT-PCR reactions were performed in duplicate to detect the expression of each gene, in a 25 μl reaction volume using 5 μl of first-round PCR product, 12.5 μl SYBR Select Master Mix (Applied Biosystems), 0.25 μl primer mix (2 μM final), and 7.25 μl water. The beta-actin housekeeping gene was used for normalization. The real-time RT-PCR primers for Bmi-1 and beta-actin are F2: 5′CCGCTTTTAGGCATACAGATTG3′, R2: 5′GATTTATACTTCTCTGTTGCTAC3′ and F: 5′CAGCAGATGTGGATCAGCAAG3′, R: 5′GCATTTGCGGTGGACGAT3′, respectively. All reactions were carried out on the ABI 7500/7500 Fast real-time system (Applied Biosystems, CA, USA). Using the 2 -ΔΔCT method (Livak and Schmittgen, 2001), the data are presented as the fold change in gene expression normalized to an endogenous reference gene (beta-actin) and relative to the normal controls.
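A minimal sketch of the 2^-ΔΔCT calculation (Livak and Schmittgen, 2001) as applied here, with beta-actin as the reference gene and the normal controls as calibrator; all Ct values below are hypothetical, not measurements from this study.

```python
def fold_change_ddct(ct_target_sample: float, ct_ref_sample: float,
                     ct_target_calibrator: float, ct_ref_calibrator: float) -> float:
    """Relative expression by the Livak 2^-ddCt method."""
    d_ct_sample = ct_target_sample - ct_ref_sample              # dCt of the sample
    d_ct_calibrator = ct_target_calibrator - ct_ref_calibrator  # dCt of the calibrator
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2 ** (-dd_ct)

# Hypothetical Ct values: Bmi-1 vs beta-actin in a patient and a normal control
print(round(fold_change_ddct(27.0, 18.0, 29.3, 18.0), 2))  # ~4.92-fold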
Statistical Analyses
Statistical computations were performed using SPSS version 16.0 (SPSS, Chicago, IL). Comparison of the data between the patient and control groups was carried out using analysis of variance (ANOVA). A Student's t-test was performed for comparisons between two groups. For all analyses, differences were accepted as statistically significant at p < 0.05. Numerical data are presented as the mean ± standard deviation (SD).
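A minimal sketch of the two comparisons described (one-way ANOVA across the three groups, Student's t-test between two groups), using SciPy rather than SPSS; the per-subject micronucleus counts below are hypothetical placeholders, not the study data.

```python
from scipy import stats  # assumes scipy is available

# Hypothetical MN counts per 1,000 binucleated cells for a few subjects per group
bc  = [25, 31, 22, 35, 29]   # breast cancer patients
fdr = [6, 7, 5, 8, 6]        # unaffected first-degree relatives
ctl = [4, 5, 3, 5, 4]        # normal controls

f_stat, p_anova = stats.f_oneway(bc, fdr, ctl)   # three-group comparison
t_stat, p_ttest = stats.ttest_ind(fdr, ctl)      # two-group comparison
print(f"ANOVA p = {p_anova:.2g}; t-test (FDR vs control) p = {p_ttest:.2g}")
```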
Results
Table 1 summarizes the demographic and clinical data for the different groups of patients and controls. There were no significant differences in the distribution of body mass index, age at menarche, number of children (data not shown), smoking habits, or use of hormone replacement therapy.
CBMN assay in the studied populations
The background MN frequency, as well as the frequencies of nuclear buds and nucleoplasmic bridges in binucleated peripheral blood lymphocytes and of micronucleated cells, in the breast cancer, first-degree relative (FDR), and control groups are summarized in Table 2. The background frequency of micronuclei was significantly higher in the breast cancer (BC) group compared with both the unaffected FDR and control groups (p<0.001). The mean MN frequency was also higher in the unaffected FDR group compared with the normal controls (p<0.01). The MN frequencies were 28.36±8.34, 6.21±1.64, and 4.2±1.24 for the breast cancer, FDR, and control groups, respectively (p<0.001).
As shown in Figure 1, when the breast cancer patients were stratified according to metastasis status (lymph node metastasis, LN+/LN-, and distant metastasis), no significant difference in micronucleus frequency was observed between LN+ and LN- patients, whereas this frequency was significantly higher in breast cancer patients with distant metastasis (p≤0.001).
The mean micronucleus frequency in breast cancer patients stratified according to the immunohistochemistry (IHC) results for hormone receptors commonly used in clinical practice did not differ between ER+/ER-, PR+/PR-, or HER2+/HER2- breast cancer patients (p>0.05).
When the breast cancer patients were categorized by clinical stage, the mean MN frequency was significantly higher in stage IV (p≤0.01), whereas no significant differences were observed between the other stages (Figure 2). The data showed that the background frequencies of both nuclear buds and nucleoplasmic bridges were significantly higher in the breast cancer group than in controls (p<0.001) (Table 2). The rate of apoptosis in the breast cancer group was significantly higher than in the other groups, and this rate was also higher in the FDR group compared with normal controls (p<0.001), whereas no statistically significant difference in necrosis rate was observed among the three test and control groups (p>0.05) (Table 2).
Bmi-1 expression results
Compared with the FDR and normal control groups, the average level of peripheral blood Bmi-1 mRNA expression was significantly higher in breast cancer patients (Figure 3). There was no significant difference between the levels of Bmi-1 expression in the FDR and normal control groups (p>0.05).
When the Bmi-1 expression data were stratified in BC patients based on estrogen receptor, progesterone receptor, and human epidermal growth factor receptor status, our results showed that the mean RNA expression in triple-negative breast cancer tumors (ER-, PR-, HER2-) was significantly higher than in non-triple-negative ones (p<0.001) (Figure 4).
Figure 5 shows the nested real-time RT-PCR analysis of Bmi-1 expression in the different breast cancer groups, classified by metastasis status, compared with the first-degree relative and control groups. As this figure indicates, the mean Bmi-1 mRNA expression was dramatically higher in the metastatic groups, both distant metastasis and lymph node metastasis, compared with the lymph node-negative (LN-) breast cancer, FDR, and normal control groups (p<0.001). The peripheral blood mRNA expression was 4.81±1.29, 4±2, 0.75±0.68, and 1.2±0.7 in breast cancer with distant metastasis, LN+, LN-, and FDR, respectively.
It can be concluded from our data that breast cancer patients with the highest levels of both MN frequency and Bmi-1 expression detectable in blood have the greatest likelihood of undergoing distant metastasis (Figures 1 and 5).
Discussion
In the present study, we evaluated MN frequency and Bmi-1 expression as two possible cytogenetic and molecular metastasis biomarkers detectable in peripheral blood. For MN frequency, the CBMN assay was performed on the peripheral blood of the breast cancer, FDR, and normal control groups. The CBMN assay has been applied to examine the effect of a variety of factors, such as genetics, lifestyle, diet, and environment, on chromosomal stability and mitotic function (Salimi et al., 2014; Bitgen et al., 2015; Salimi et al., 2015; Salimi et al., 2016).
Our data demonstrated that the frequency of DNA damage, expressed as nuclear aberrations, was significantly higher in the breast cancer patient group compared with the FDR and normal control groups. We studied different CBMN assay endpoints (Table 2), and the frequency of micronuclei was chosen as a biomarker of effect. This biomarker has great biological relevance, since MN represent fixed genetic damage resulting from both clastogenic and aneugenic mechanisms (Fenech, 2007), and it is considered a good surrogate marker of cancer risk (Giovannini et al., 2014). Our results showed that the frequencies of micronuclei in the BC patient and FDR groups were higher than in the normal control group (Table 2). This result is broadly in line with studies that have reported a higher frequency of micronuclei in cancer patients compared with unaffected individuals (Paz et al., 2018; Santos et al., 2010; Milosević-Djordjević et al., 2010; Bonassi et al., 2011). Micronucleus scoring as a biomarker on fine-needle aspiration cytology smears of breast carcinoma has been performed and confirmed the association of high MN frequency with breast cancer (Hemalatha et al., 2014). A micronucleus assay in buccal smears of breast carcinoma patients showed that micronucleated cells are significantly increased in the buccal cells of breast carcinoma cases (Flores-Garcia et al., 2014). We may conclude from our data and most of the literature that the increased number of MN in different sample types from BC patients raises the possibility that the genetic damage in breast cancer is generalized, and suggests that MN scoring could be used for biomonitoring of DNA damage and early detection of high-risk cases of breast carcinoma in the future. In contrast, Bolognesi et al. reported no significant role of micronucleus frequency as a biomarker of breast cancer risk/susceptibility (Bolognesi et al., 2014).
The higher frequency of micronuclei in the FDR group compared with controls, shown in Figure 1, is broadly in line with a study reporting that the FDRs of patients with head and neck cancer (HNC) showed significantly higher chromosomal damage, in terms of MN frequencies in lymphocytes, compared with controls, reflecting an increased susceptibility to HNC in FDRs (Burgaz et al., 2011).
This higher MN frequency in the FDR group compared with the normal control group clearly demonstrates that MN frequency is determined to a major extent by genetic factors. The strong reflection of the genetic background supports the idea that MN frequency represents an intermediate phenotype between molecular DNA repair mechanisms and the cancer phenotype, and affirms the approaches being made to utilize it as a predictor of cancer risk (Surowy et al., 2011). Our results showed both higher MN frequency and higher Bmi-1 expression in breast cancer patients compared with controls.
We examined the levels of peripheral blood micronuclei and Bmi-1 mRNA expression in LN-positive, LN-negative, and metastatic breast cancer cases. Our data showed that the MN frequency was not associated with lymph node involvement but was significantly higher in the peripheral blood of patients harboring distant metastasis, whereas Bmi-1 expression was significantly correlated with both nodal involvement and distant metastasis. Other studies have reported higher Bmi-1 expression at the mRNA level in breast tissue of early-stage patients with no lymph node metastasis (Surowy et al., 2011). It has also been reported that Bmi-1 expression may be associated with favorable overall survival in breast cancer patients, especially in patients with ER-positive breast cancer (Choi et al., 2009). Conversely, up-regulation of Bmi-1 has been shown to be associated with invasion and poor survival in nasopharyngeal cancer (Song et al., 2009), and with nodal involvement, distant metastasis, and clinical stage in uterine cervical and gastric cancers (Zhang et al., 2010).
In one study, a xenograft mouse model was used to elucidate the necessity of Bmi-1 for tumor development by assessing tumor volume and Ki67 expression. The authors found that Hedgehog (Hhg) signaling synergized with Bmi-1, implicating the importance of Bmi-1 in Hhg signaling, and concluded that downregulation of Bmi-1 could be an effective strategy to suppress tumor growth, supporting the potential clinical use of targeting Bmi-1 in breast cancer treatment (Yan et al., 2017). Bmi-1 expression in breast cancer tumors and cells has previously been investigated, showing a positive association between Bmi-1 overexpression and clinical features such as tumor size, lymph node involvement, distant metastasis, and clinical stage (Wang et al., 2015; Gavrilescu et al., 2012). In contrast, in a study of pulmonary squamous cell carcinoma, Bmi-1 expression was reported to be associated with a favorable prognosis and was considered a possible prognostic factor for this disease (Abe et al., 2017). Our data showed for the first time that Bmi-1 expression in blood was significantly higher in BC compared with the FDR and normal control groups (Figure 3, p<0.001); a positive association was also observed between Bmi-1 expression levels and lymph node involvement and distant metastasis (Figure 5, p<0.001).
Our results indicate that high peripheral blood Bmi-1 expression predicts an unfavorable patient prognosis and serves as a high-risk indicator in breast cancer. Furthermore, they shed light on the biological impact of Bmi-1 on the invasive and metastatic properties of breast cancer. The main line of evidence implicating Bmi-1 in tumorigenesis is the repression of the INK4a/ARF suppressor proteins, deregulating both the pRb and p53 cell cycle control pathways, facilitating cell proliferation, and desensitizing cells to apoptosis. However, the effect of Bmi-1 overexpression on the inactivation of the INK4a/ARF transcripts in human breast cancer is unclear. As mentioned earlier, overexpression of Bmi-1 enhances motility and invasiveness, facilitates concurrent EMT-like molecular changes, and promotes the stabilization of Snail and the activation of the Akt/GSK3β pathway. In addition, repression of Bmi-1 reverses the expression of EMT markers and inhibits the Akt/GSK3β pathway (Guo et al., 2011).
Since distant metastasis still occurs in 20-30% of patients with negative lymph node involvement (Loda et al., 2010), finding other biomarkers that represent metastasis is of great value. Our results provide evidence that Bmi-1 overexpression and high micronucleus frequency measured in lymphocytes may be considered two unfavorable molecular and cytogenetic prognostic biomarkers detectable in blood, serving as high-risk metastasis indicators in breast cancer. Implementation of the micronucleus assay and Bmi-1 expression analysis in blood as possible cytogenetic and molecular biomarkers at the clinical level may therefore enhance the quality of breast cancer management.
Conflict of Interest Statement
No potential conflicts of interest were disclosed by the authors. | 2018-11-11T01:39:44.617Z | 2018-10-01T00:00:00.000 | {
"year": 2018,
"sha1": "cf4d7177aba688b84357d358c7d199f95fc1936b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "cf4d7177aba688b84357d358c7d199f95fc1936b",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
250387293 | pes2o/s2orc | v3-fos-license | Evaluation of Parental Acceptability and Use of Intervention Components to Reduce Pre-School Children’s Intake of Sugar-Rich Food and Drinks
Knowledge is needed about effective tools that reach public health objectives focused on reducing the intake of sugar-rich foods and drinks. The purpose of this study was to assess the parental acceptability, use and motivational potential of intervention components developed in the randomized family-based trial ‘Are you too sweet?’ aimed at reducing the intake of sugar-rich foods and drinks among children (5–7 y). Intervention components included guidance on sugar-rich foods and drinks at a school health nurse consultation, a box with home-use materials and a digital platform. The methods used were a questionnaire among intervention families (n = 83) and semi-structured interviews with parents in selected intervention families (n = 24). Results showed the good acceptability and usefulness of the components, with reported frequencies of use of materials ranging from 48% to 94% and a high satisfaction rate with the school health nurse consultation. Personalized feedback and guidance from the school health nurse seemed to be a motivational trigger, and components that were compatible with existing practices were most frequently used. However, the components were not considered engaging by all families. Overall, intervention components were well received and hold the potential for enhancing parental knowledge and parenting practices regarding limiting the intake of sugar-rich foods and drinks.
Introduction
Danish children and adolescents are too sweet in the sense that their average intake of sugar-rich foods and drinks exceeds the recommended maximum amounts [1]. This challenge is not limited to Denmark, as the pattern holds across Western countries [2,3], though Denmark holds the title of 'world champions' in buying sugar confectionery [4]. Studies have shown that the intake pattern of sugar-rich foods and drinks in childhood tracks into adulthood [5] and causes an elevated risk of dental caries and nutrient dilution, and the literature shows an association between a high intake of sugar-rich drinks and the risk of obesity, cardiovascular diseases, type 2 diabetes and certain forms of cancer [6][7][8][9].
In order to reach public health objectives that are focused on reducing the intake of sugar-rich foods and drinks, knowledge of effective intervention components and strategies is needed. Despite a substantial number of studies and reviews on improving dietary behaviours, interventions that target children's excess intake of sugar-rich foods and drinks are sparse. Few studies focus on sugar-rich foods (see, e.g., [10,11]), while more studies focus on sugar-rich drinks [12][13][14][15][16], often with an emphasis on environmental changes [17]. However, there is a lack of evidence synthesis to guide practice [18,19]. Equally, few reviews exist that map efficient intervention designs or effective intervention components and tools to reduce the intake of sugar-rich foods and drinks; Johnson et al. describe how observational studies have linked restrictive parental feeding practices, such as coercive control or pressure, with higher intakes of sugar-rich foods and drinks among children aged 4-8 years. Furthermore, frequent television use is associated with higher intakes of sugar-rich foods, but effective intervention strategies are not yet systematically identified [3]. Likewise, Grieger et al. conclude that studies are required to assess the effectiveness of strategies identified in their review, i.e., reformulation, substitution, restriction/elimination, supplementation and nutrition education/messages [2].
Whereas evidence for effective reduction strategies is lacking, several studies have shown that both parental style and parental dietary practices are decisive concerning young children's eating patterns [20,21]. A recent study among pre-school children and their parents found that the parents' food-related practices (behaviours such as food rules, snack routines, restrictions, and nutrition education) have a greater influence on health behaviours than parental style (parents' general parenting approach, either authoritative, authoritarian, indulgent, or uninvolved) [22]. The authors encourage the development of tools in future interventions and programs that improve and strengthen parenting practices as it holds important potential for health promotion [22]. The family-based approach is further supported by results showing that early establishment of healthy dietary patterns seems to be effective as it promotes health both during childhood and later in life [23].
A number of behavioural and practical barriers have been found in relation to parental behaviour change on dietary habits [24,25]. In regard to sugar-rich foods and drinks, studies have shown that a widespread lack of knowledge among parents on portion sizes and maximum intake among children is a major impediment to behavioural change [26,27]. Another recurrent barrier to change is parental non-commitment to or rejection of recommendations [28,29]. Interventions that advise parents to change how much (portion sizes) or what (e.g., sugar-rich foods and drinks) they serve to their children necessarily involve the emotionally sensitive subject of parenting [30]. The challenge is to give dietary advice that builds knowledge and creates motivation for change without judging or blaming the current parenting. Previous studies have shown that interventions that aim to change dietary habits carry the risk of offending parents, as intervention content such as campaign messages, recommendations, education materials, tools, or other resources are inevitably normative and might leave parents feeling implicitly judged or blamed for their child's diet and eating patterns [31,32]. As the acceptability of intervention messages and components is crucial for their effectiveness and probability of implementation, insights into how parents receive dietary advice addressed to their children are imperative in the development of interventions that aim to engage and support families to change their dietary habits.
In line with these previous findings, the present study aims to evaluate the acceptability and use of the intervention components developed in the intervention study "Are you too sweet?", where Danish pre-school children aged 5-7 years and their families were enrolled [33]. The goal of the intervention was to decrease children's intake of sugar-rich foods and drinks by increasing knowledge, motivation and self-efficacy in families. School health nurses were chosen as the mode of delivery as the consultation provided a personalized, in-person mode anchored in an organizationally structured frame [34]. Further, school health nurses are highly qualified in health education, motivational interviewing and engaging parents [35] and provide an opportunity to reach all children and their families regardless of social background [36]. In addition to the consultation, the intervention components included a box with a range of knowledge-building and behaviour support materials supplemented by a private Facebook group.
Based on an approach combining questionnaire responses and qualitative interviews, this study reports parents' perceptions and use of the 'Are you too sweet?' intervention components and tools. The main aim of the study was to evaluate the acceptability and motivational potential of the intervention components. Moreover, the study aimed to elucidate if the components increased the behavioural capability for behaviour change and if specific intervention messages or components were experienced as patronizing or offensive.
Setting and Intervention Design
The 3.5-month intervention study 'Are you too sweet?' was performed in the Danish municipality of Hvidovre. The municipality was chosen because it is close to the national mean for socio-economic status, ethnicity, and education level in Denmark. Informed by the socio-economic index scores used in the municipality, six schools were selected to participate in the intervention study. The index scores were a continuous variable calculated on the basis of parents' income, marital status, ethnicity, etc. The schools were cluster-randomized to be either intervention (n = 4) or control schools (n = 2). A detailed description of the study design has been published previously [33]. The intervention was conducted from late fall 2020 to early spring 2021 during the COVID-19 pandemic, with several school classes closing for one or several weeks at short notice. The baseline and follow-up measurements, however, were conducted as planned with few modifications (e.g., online interviews).
In short, the intervention components included an extended consultation with the school health nurse with an increased focus on the child's intake of sugar-rich foods and drinks, including feedback from a short web-based assessment tool, the sugar-rich food screener (see Section 2.2). A box with home-use materials was handed out by the school nurses, aiming to engage and inspire the families to decrease their intake of sugar-rich foods and drinks (see Section 2.3), and finally, parents were invited to participate in a private Facebook group during the intervention period (see Section 2.3).
Social cognitive theory was the guiding framework for the intervention design and components. The main aim was to increase knowledge, motivation, behavioural capability and self-efficacy and thereby secure the prerequisites for behaviour change [33]. In order to address the inherent risk of patronizing and to secure the development of non-offensive behaviour change strategies and intervention components, a set of formative research measures were undertaken in the development process. The research elements were informed by parenting theories on tailoring intrinsic motivational messages [37] and encompassed 10 preparatory qualitative interviews with parents to identify value-based or contextual barriers; two focus group interviews to assess acceptability of intervention messages, selected components and delivery mode; and a pilot study with eight families to test feasibility and acceptability. Interviews and tests led to progressive modifications and adjustments to, e.g., the design of components and message phrasing in order to minimize the inherent risk of rejection (of, e.g., the new guidelines on sugar-rich foods) and avoid any tendency to preach 'correct parenting' or give parents the impression that they were receiving a lecture.
The acceptability and usefulness of the intervention components were evaluated using questionnaire responses from 83 families, collected through an evaluation section in the follow-up questionnaire, combined with 24 semi-structured interviews with participating families evaluating their perceptions and practices concerning the 'Are you too sweet?' intervention components. Two focus group interviews with participating school health nurses have been analyzed previously to capture their experience with the intervention components [38].
Consultation with the School Health Nurse, New Guidelines and the Sugar-Rich Food Screener
A key element in the 'Are you too sweet?' intervention was the consultation with the school health nurse as a setting for communicating the newly developed maximum limits on discretionary food and drink intake [39], with discretionary foods and drinks being defined as sugar-sweetened and artificially sweetened beverages, sweets, chocolate, biscuits, ice cream, pastries, cakes, salty snacks and other energy-dense, nutrient-poor foods [1]. The maximum intake advised for 4-6-year-old children is four weekly servings of 450 kJ of discretionary foods each, equivalent to, e.g., one sandwich cookie, one small cinnamon roll, two lollies, or 30 g of gummy bears or similar pick 'n' mix sweets (Figure 1). The definition and development of guidelines for discretionary foods and drinks have been described in more detail elsewhere [1].

Figure 1. The maximum intake advised for 4-6-year-old children is four weekly servings of 450 kJ of sugar-rich foods, of which a maximum of one serving (250 mL) may be discretionary drinks.
All families, including both parents and the enrolled child, were invited to the consultation with the school health nurse. The consultation unfolded as a conversation on everyday routines and family life related to well-being and health and was guided by a conversation tool prompting the core topics of diet, physical activity, screen time, sleep and well-being. The consultation is a mandatory practice in Danish pre-school but was extended from 30 to 35 min, where the additional five minutes were dedicated to discussing the intake and eating habits of sugar-rich foods and drinks. As part of the intervention and as preparation for the consultation, a short web-based assessment tool, 'the sugar-rich food screener', was developed to assess the intake of sugar-rich foods and drinks prior to the health consultation at the school. The tool was subsequently validated [40]. Intervention families received a link and were asked to fill out the 'sugar-rich food screener' three days prior to the consultation, registering how many sugar-rich foods and drinks their child had eaten and drunk over the past seven days. The intake of sugar-rich foods and drinks registered in the screener was visualized as an individual output displaying the number of sweet servings the child had consumed and the share that the sugar-rich foods and drinks took up relative to staple foods. See Figure 2 for an example. Further, the school health nurses had access to a text summary of the discretionary food intake. Portion sizes and the practical and social context of the intake were included, and this information enabled the nurses to tailor an informed conversation with the family about the child's intake habits and discuss current practices and potential changes in habits toward better health.
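To make the serving arithmetic behind such an output concrete, the sketch below converts a one-week intake log into servings and compares it with the guideline. This is a minimal illustration, not the published screener: the 450 kJ serving definition and the four-servings-per-week maximum come from the guidelines above, while the item names and energy values are hypothetical.

```python
# Minimal sketch of screener-style serving arithmetic (illustrative only).
SERVING_KJ = 450          # one discretionary serving, per the guidelines
MAX_WEEKLY_SERVINGS = 4   # advised weekly maximum for 4-6-year-olds

# Hypothetical seven-day log of discretionary items: (item, energy in kJ).
week_log = [
    ("sandwich cookie", 450),
    ("small cinnamon roll", 430),
    ("gummy bears, 30 g", 460),
    ("soft drink, 250 mL", 450),
    ("pastry", 900),
]

total_kj = sum(kj for _, kj in week_log)
servings = total_kj / SERVING_KJ

print(f"Discretionary energy this week: {total_kj} kJ "
      f"= {servings:.1f} servings (advised maximum: {MAX_WEEKLY_SERVINGS})")
if servings > MAX_WEEKLY_SERVINGS:
    excess = servings - MAX_WEEKLY_SERVINGS
    print(f"The log exceeds the guideline by {excess:.1f} servings.")
```

In this hypothetical log the child has consumed about 6.0 servings, which an output like Figure 2 would flag as clearly exceeding the advised maximum of four.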
Box with Home-Use Materials and Facebook Community
The box with home-use materials contained the following materials: a serving size board illustrating the maximum number of servings of sugar-rich foods and drinks in a recommended diet, with reusable stickers with different examples of servings of cookies, chocolate, ice cream, etc.; an inspiration booklet describing different strategies to curb sugar habits; an educational card game (the Monster Game); pamphlets with suggestions for local family activities; a read-aloud children's book; and three small posters and stickers with the project logo. All materials except the children's book and the pamphlets were developed for the intervention. Supplementary to the home-use toolkit, intervention families had access to an educational app with two learning games and an augmented reality function (AR-function) and were invited to subscribe to a private Facebook group used to provide parents with information and 'reminders' of the project during the intervention period. The Facebook group was designed as an opportunity to build social support among the participating families (peers), as the group's content was only visible to its members. For a more detailed description of the intervention components and their theoretical underpinnings of behaviour change strategies and determinants, see Bestle et al., 2020 [33].
Qualitative Interviews and Quantitative Questionnaire
A combination of methods was chosen, including a quantitative evaluation by a questionnaire directed at the parents to get an overall measure of the use of and satisfaction with the 'Are you too sweet?' intervention components and a qualitative evaluation from interviews of parents from selected families to get a deeper understanding of parental perceptions of and experiences with the components.
The follow-up questionnaire (post-intervention) comprised an evaluation section with 27 questions on the participants' experiences, use, and satisfaction with the intervention components, the school health nurse consultation and the sugar-rich food screener. Frequency of use and satisfaction were evaluated using five-point Likert scale questions with response options ranging from 'not used' to 'used more than five times', and 'very satisfied' to 'very dissatisfied', respectively. These options were supplemented by a 'don't know/not relevant' option. A total of 83 responses were obtained among the 89 participating families (response rate of 93%).
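As a concrete illustration of how such Likert responses translate into the frequencies reported below, here is a minimal sketch. The response data are hypothetical; the only figures taken from the study are the 83 of 89 returned questionnaires and the convention of subtracting 'don't know/not relevant' answers before computing satisfaction shares.

```python
from collections import Counter

# Hypothetical five-point Likert responses to one satisfaction question;
# the study's actual response data are not reproduced here.
responses = [
    "very satisfied", "satisfied", "satisfied",
    "neither satisfied nor dissatisfied", "satisfied",
    "don't know/not relevant", "very satisfied",
]

counts = Counter(responses)
print("Frequencies:", dict(counts))

# 'Don't know/not relevant' responders are excluded before computing
# satisfaction shares, as in the notes to Table 3.
rated = [r for r in responses if r != "don't know/not relevant"]
satisfied = counts["satisfied"] + counts["very satisfied"]
print(f"Satisfied or very satisfied: {satisfied}/{len(rated)} "
      f"= {satisfied / len(rated):.0%}")

# Overall questionnaire response rate: 83 of 89 participating families.
print(f"Response rate: {83 / 89:.0%}")  # ≈ 93%, as reported
```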
Post-intervention, 24 families were recruited for a qualitative evaluation interview. Interviews were conducted from two to six weeks after the end of the intervention. In order to recruit an adequate yet socio-economically representative sample, families from all four intervention schools were recruited by phone through random sampling. To reach the sample size, 35 families were contacted. Among the 11 families who declined the interview, the most common reason was lack of time.
The interviews were semi-structured, and a topic guide with open-ended questions was used (see Appendix A). Questions were supplemented by structured follow-up prompts and unstructured probes [41]. Themes included knowledge about and implementation of new guidelines on sugar-rich foods and drinks in the family, use of and assessment of the intervention components and the family's practices around and perceptions of family time and values, food and notably sugar-rich foods and drinks. Due to the analytical focus on acceptability, feelings of blame and rationales for potential rejection of the intervention components were further explored since content, framings or designs that were experienced as offensive by the participants would constitute the main barrier to implementation and behaviour change.
All interviews were conducted by B.J.C. with either one or two parents from each family; interviews took place online due to COVID-19 restrictions. Following oral consent, interviewees were provided with a link to a Microsoft Teams meeting set up by the interviewer, and interviews were recorded using the Microsoft Teams video conferencing software (Microsoft Corporation, Redmond, WA, USA). Interviews averaged 61 min in length. Recordings were subsequently transcribed verbatim as text documents.
Data Analysis
Results from interviews were obtained through an iterative thematic approach applied to the 24 interviews, using the framework of thematic content analysis [42]. Through an inductive, open-coding strategy, a preliminary coding framework was developed by two researchers. The double-coder approach was employed to increase quality, ensure the identification of a broad range of themes, and utilize the differences in proposed codes as a resource, thereby enhancing the refinement of the coding framework. To establish coding reliability, the procedures proposed by Campbell were used: first determining the units of analysis, then 'blinding' them and subsequently applying codes [43]. The first reliability test resulted in 77% agreement, a result that led to the refinement of the coding scheme and an ensuing second reliability test. The second test provided 86% agreement and was evaluated as satisfactory, as it corresponded to the suggested standard of 80-95% agreement, though there is no universally accepted threshold for acceptable reliability [44]. In all, five interviews (21% of the sample) were reviewed to determine reliability between the two coders. Subsequently, coding of all transcripts was conducted by the primary researcher (B.J.C.) using NVivo software version 10 (QSR International, Doncaster, Australia). The questionnaire survey was conducted using LimeSurvey version 3.15.5+ (LimeSurvey GmbH, Hamburg, Germany). Descriptive and frequency summaries were computed in Excel for responses to each of the 27 questions.

Table 1 details the main characteristics of the 24 interviewed families and the total intervention population for comparison. There was a fair representation of parents of girls and boys, and the distribution of parental educational background among the interviewees resembled the sample distribution.
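Returning to the reliability procedure described above: percent agreement is simply the share of 'blinded' analysis units to which both coders assigned the same code. A minimal sketch, with hypothetical codes and units:

```python
# Minimal sketch of an intercoder percent-agreement check.
# Codes per 'blinded' unit of analysis are hypothetical examples.
coder_a = ["knowledge", "motivation", "barrier", "barrier", "routine",
           "knowledge", "motivation", "barrier", "routine", "knowledge"]
coder_b = ["knowledge", "motivation", "barrier", "routine", "routine",
           "knowledge", "barrier", "barrier", "routine", "knowledge"]

assert len(coder_a) == len(coder_b)
matches = sum(a == b for a, b in zip(coder_a, coder_b))
agreement = matches / len(coder_a)
print(f"Percent agreement: {agreement:.0%}")  # 8/10 = 80% in this example

# The study refined its coding scheme until agreement rose from 77% to 86%,
# within the commonly suggested 80-95% range.
```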
Perception of the Consultation with the School Health Nurse and the Sugar-Rich Food Screener Output
In the questionnaire responses, the majority of the 83 families indicated that they were either satisfied (53%) or very satisfied (28%) with the consultation with the school health nurse. No respondents indicated that they were dissatisfied with the consultation.
In the interviews, the analysis identified two main profiles of parents regarding the presentation of the sugar-rich food screener output and the ensuing advice at the school health nurse consultation. One profile was composed of parents who regarded the consultation as 'fine' or 'a cozy chat' but did not deem it to have any significant impact on their perception of their own health habits or their child's intake of sugar-rich foods and drinks. Parents in this profile accounted for around one-third of the interviewees.
"Well, I must admit, I actually do not really remember what the health nurse said", father to girl at school A.
The other profile accounted for a larger part of the interview sample and consisted of parents who conveyed that the consultation and ensuing advice had a substantial impact on their perception of the family's sugar habits. Several interviewees reported that they had experienced the sugar-rich food screener output as an 'eye-opener' and hence a 'wake-up call' to reduce their child's intake of sugar-rich foods and drinks.
"I would say we were probably both in shock because we believe we have a healthy relationship with sweets, so we were very surprised", mother to girl at school D.
Several parents further explained that their astonishment was caused by the fact that their child's intake was much higher than they expected and markedly higher than the maximum number of weekly servings in a recommended diet; this was information that considerably changed their image of themselves as having a healthy diet and their perception of their family's sugar habits as being well-balanced and reasonable.
"I was damn proud when we signed up and I thought "we totally got this" and then when we got that pie chart (from the sugar-rich food screener), I was kind of like "oh, okay . . . the higher you fly, the further you fall", father to girl at school D.
The novel and, for some interviewees, disquieting information on the guidelines in combination with knowledge on their child's intake in relation to these guidelines served as a cue to action and spurred most parents to consider possible changes. The guidelines and the school health nurse's explanations were reported to have had a high motivational impact on parents to follow the advice and guidance.
"I acknowledged her point when she had drawn it in red, which means alarm. Then you think "mayday-mayday." We need to do something", mother to boy at school B.
As mentioned, one of the aims of the consultation was to encourage families to change their habits related to the high intake of sugar-rich foods and drinks. Interviewees explained how they experienced the conversation and the behaviour change suggestions from the school health nurse as helpful and relevant.
"We also had a chat about how it matters to change the little things. It is not like we were supposed to go home and change everything. That is not at all what it is about. But yes, (reducing our intake of) squash may be a good place to start. What would be good alternatives to that, right?", mother to girl at school D.
A sub-theme that emerged was the consideration of unhealthy impacts of sugar-rich foods and drinks beyond weight gain. Some of the parents explained that before the intervention, they had not considered limiting their child's intake as long as the child did not show an unhealthy weight development. However, the visualization in the sugar-rich food screener output, revealing how a diet that fills up on sugar-rich foods and drinks provides less nutritional value to the child's body, made them reconsider their practice.
"I was surprised it was an issue since he is so skinny. But then again, you also talk about the inside of the body and whether it consists of muscle or fat. So, I still listened, even though I was offended at first", mother to boy at school B.
Interviewees also expressed that the consultation with the school health nurse, which encompassed both parents and the child, had a decisive impact on the subsequent behaviour changes at home. Several parents reported that the child was more compliant and received the health messages and guidelines on maximum intakes more positively because the advice came from the school health nurse. Hence, parents could refer to the school health nurse as a trusted sender and thereby encourage the child to be mindful of what they had learned during the consultation.
"She (interviewee's daughter) knew very well that "okay it was not just mom", mother to girl at school C.
Across the interviewees from both profiles, it was reported that the personalized feedback and adjustment of advice provided by the school health nurse made the guidelines more relevant and relatable.
Several parents expressed that their child's intake of sugar-rich foods and drinks somewhat or largely exceeded the advised maximum servings in a recommended diet, and the parents' astonishment over how little room for sweet treats the guidelines allowed for was a recurrent theme in the interviews.
"And then the four pieces of candy for her age. That seemed a bit grotesque. I was like "Wow! That is hardly anything!", father to girl at school C.
Despite their amazement, parents stated that they perceived the guidelines as useful and motivating in reducing their child's intake of sugar-rich foods and drinks.
The Acceptability and Use of the Intervention Components Used at Home
Quantitative Evaluation
Table 2 shows the participating families' frequency of use of the home-use intervention components. The inspiration booklet and the read-aloud children's book were the most-used components of the home-use materials (used by 94% and 81%, respectively, once or more). Additional questions (not shown) revealed that the most common use of the inspiration booklet was either as a conversation starter in the family (40%) or as a source of new knowledge and inspiration (27%). The serving size board with reusable stickers and the educational card game (the Monster Game) were used by about two-thirds of the families (used by 59% and 63%, respectively, once or more). The main reasons for not trying out the card game were that families either forgot (30%) or did not manage to get it done (33%) (data not shown). The least-used component was the educational app, which was used by less than half of the families (48%). The most common reason not to download the app was that families forgot (62%), while others had technical difficulties (12%) or other difficulties (6%) (data not shown).
Table 3 shows that among those using the home-use materials, the majority expressed that they were either satisfied or very satisfied with the components (65-85%), except for the educational card game (the Monster Game), where only around half of the users expressed that they were either satisfied or very satisfied with the component (50%).
Notes to Table 3: * Four responders did not rate the inspiration booklet and answered 'Don't know'; these have been subtracted from the total number of users. ** One responder did not rate the card game and answered 'Don't know'; this has been subtracted from the total number of users. *** Eight respondents did not rate the Facebook group and answered 'Don't know'; these have been subtracted from the total number of users. **** Three respondents did not rate their perception of the school health nurse consultation; these have been subtracted from the total number of users.
With regard to the Facebook option, 61% answered that one or both parents had subscribed (data not shown). Results on satisfaction revealed that most of the subscribers were neither satisfied nor dissatisfied (40%) with this component, whereas around one-third expressed that they were either satisfied or very satisfied with it (37%). Additional questions (not shown) revealed that less than half (46%) had posted, liked, or commented in the group. When asked about the lacking interaction, subscribers rated the content as relevant (95%) but indicated that they did not know what to comment or post (31%) or that they rarely interacted on Facebook.
In the following, parents' perceptions and use of the home-use materials and digital resources are described one by one, based on evaluations drawn from the qualitative interviews.
Serving Size Board with Reusable Stickers
In the interviews, families who used the serving size board all agreed that its concrete imagery was an effective way to communicate the guidelines and portion sizes. Whether families used the board to plan for or to 'keep accounts' of sugar-rich food and drink intake, the board served as a joint point of reference for the parents and the child.
"It still hangs out there on the fridge ( . . . ) it has worked well because it has been an actual visual thing at her eye level, right. And it has been noticeable in the kitchen, and we could say: "But look. Now you are asking for this, but you already have two stickers, and it is only Friday tomorrow"", mother to girl at school C.
In this way, the serving size board functioned as a tangible and easy-to-understand tool to explain the portion sizes and the maximum number of weekly servings to the child, thereby enabling the child to assist in the monitoring and management of the intake of sugar-rich foods and drinks.
Some parents expressed that they had used the board to make the child aware of serving sizes but without combining it with the guidelines and the maximum number of servings. In other families, notably, the stickers were turned into a random toy, with no explicit health message or educational purpose.
"And then there were the stickers. They have used them in all sorts of funny ways (laughs), but that is probably just a kid's thing", mother to boy at school A.
Not all families used the serving size board or the stickers to monitor intake. Some expressed that they found the logic of counting or planning sweet servings irrelevant to their practices, as they perceived the guidelines as a general frame for healthy eating and did not follow the guidelines on the limited number of servings. Others had a more value-based rejection of the serving size board, as it was directly aimed at the child as a monitoring tool for their intake. Parents believed that it should not be the child's concern to understand and comply with the guidelines, e.g., the maximum number of four weekly servings (see Figure 1), and therefore rejected the tool.
"It is just, that thing about a six-year-old having to comprehend what she can and cannot have. Well, listen up! The idea is that we present the food she needs. And that is the proper food. Nothing more! And if you eat what we present, then we believe you will get some healthy habits", father to girl at school C.
For them, decision-making on food choice was a parental responsibility, not to be conferred to pre-school children who were thereby rendered individually responsible.
"I believe it is my responsibility. Not his", mother to boy at school A.
A recurrent critique expressed among parents who disapproved of the responsibilization of the child through the serving size board was the separation of foodstuffs into 'allowed' and 'allowed in limited amount' categories, and thereby 'good' and 'bad' foods. Parents expressed that conceptualizing food in these categorical ways, in their opinion, paved the way for a dichotomous health talk that they did not want to instill in their child, as they believed it could imbue feelings of guilt and anxiety.
"It is very important to me to teach them good habits, so that they learn to make reasonable choices, I mean, do away with this idea of prohibited or bad foods", mother to girl at school D.
Parents stated that precisely because health literacy was important, they were cautious. In their view, health was a fine line to walk; the balance could unintentionally be disrupted, with adverse consequences that may be serious and irreversible, e.g., disturbed eating [45]. They believed that children should, with age, build the ability to navigate and handle the complex demarcation lines between healthy and unhealthy, but only later.
It is important to underline that these parents did not necessarily disapprove of the guidelines as such but criticized the transfer of responsibility for monitoring the intake to the child that the serving size board conveyed. To them, this task required a thorough, nuanced knowledge of nutrition in order to make balanced choices.
The Inspiration Booklet
Families generally conceived of the booklet as useful information, easy to access and a resource providing a summary of the guidelines and their background. Among the parents who had used it, some had studied the themes and ideas for changing family sugar habits in more detail and used it as a go-to resource; others had briefly flipped through it and considered to what extent the knowledge and strategies were useful for them.
"I thought it was nice to receive those tips and tricks because it made me look them up again. Like, "what was the message again?"", mother to girl at school B.
Notably, the use of the booklet as a reference for definitions and guidance was underscored, as the booklet was used as a resource that parents could return to when in doubt. Several parents mentioned the opening with different examples of 450 kJ servings as particularly helpful.
"It is this one (shows the servings sizes in the inspiration booklet) I think that is the one we have used the most", mother to girl at school C.
Some parents explained that they already knew several of the strategies for reducing intake or the background information on healthy eating, but in combination with knowledge of the guidelines, it assured them that their rules and routines around sugar-rich foods and drinks were 'sensible' and essentially in line with the guidelines.
"To me it was a good inspiration booklet. I probably just needed that service check of our habits, "what are we doing?" and it helped me", mother to girl at school C.
A few participants used the booklet to establish a common understanding with, e.g., their partner, or they had asked grandparents to read it in order for them to obtain knowledge on the guidelines and advice.
"Not long ago I told my husband to read it as well, so we are in it, like, together", mother to girl at school B.
Results thus indicate that for most parents, the booklet served as a helpful reminder both of the guidelines and, e.g., serving sizes, and of strategies and advice, most of which they knew before enrolling in the project.
Educational Card Game (the Monster Game)
The educational card game, the Monster Game, is a deck of cards that can be used for two different games and combined with the augmented reality (AR) monster that comes to life when stickers with invisible QR codes are scanned with, e.g., a smartphone or a tablet. The game was designed to be played either as a matching game or in a storytelling version, enabling reflections on habits and intake of sweet foods and drinks in the family; the stickers could be placed at strategically chosen spots in the home.
The interview data showed that families who had played the game overall liked it. Only a few used the second option of the card game, where cards were used to engender dialogue about sugar habits among family members and to explore their own preferences, routines and potential strategies to reduce the intake of sugar-rich foods and drinks.
"We used the game a couple of times. We have not played the actual game a lot. We have been more like making up the stories. We used that part of it, the one with making up a true and a false story. And then the part with thinking of alternatives, because it was actually the kids just as much as myself who came up with the idea of having Friday fruit", mother to girl at school C.
As the box with home-use materials in many families was framed as belonging to the child, the child was likewise 'the manager' of the card game, and some parents explained how the child had invented personal rules or used the cards according to rules pertaining to other card games.
"He loves flipping lottery. I tried to explain to him what we were supposed to do and stuff, but in the end we made it a flipping game instead", mother to boy at school A.
The educational impact of the card game differed among families; while some children did not ascribe any particular meaning to the green cards with 'healthy foods' and the red cards with 'unhealthy foods', others took away an understanding of the (relatively simple) health message behind the gameplay. However, several parents questioned the card game's capacity to successfully promote learning and development.
"I was initially assuming that the kids were to learn about sweets and healthiness and stuff like that, but that was not at all what they were taught. Focus was on capturing the monster and learning how to capture it", mother to girl at school A.
Several families never got started with the game, either because the child (or a sibling) did not want to play, because the parents experienced the gameplay as too complex, because they had lost the manual, or for similar reasons. The most common reason was the rulebook being too complicated or time-consuming to read.
"There were too many rules. There was, like, too much to comprehend", father to boy at school C.
Others simply did not find the time or forgot about the card game.
"We never really looked at it. It was somehow just forgotten among everything else", mother to girl at school B.
The intervention ran during the Danish COVID-19 lockdown, and the particular circumstances constraining everyday routines impacted family life in general. Families explained how time was an (extra) scarce resource and that parental educational ambitions were lowered.
"I would definitely have spent more time on the game, had it been more of a usual everyday life, as that would also mean more time for it. In the current situation we need to stick to the familiar", mother to girl at school A.
Data thus indicate that because the card game demanded preparation time and engagement from parents, it was not played in several families.
Read-Aloud Children's Book 'Anton og Sukkerdillen'
Almost all families participating in the interviews had read the book, 'Anton og Sukkerdillen', aloud to their child and often also read it to the child's siblings.
"[The book] was funny. They really liked it, her little sister as well, also in relation to dentists and such. It is really good", mother to girl at school A.
Many parents explained how the family's bedtime routine includes reading aloud and that children choose which book to read. For some, 'Anton og Sukkerdillen' became one among other popular books, while other children got less involved with the story or preferred other genres.
"He likes to choose which books to read. It is not one he has asked for", mother to boy at school A.
The book's health education message concerns dental care and the importance of a balanced diet and reducing the intake frequency of sugar-rich foods and drinks. How the health promotion message was received differed among families, as it was evident to some, but not to others.
"We have read it a couple of times at least. But, like, I think they see it as a story just like any other", mother to girl at school B.
"The thing with the teeth falling out and "do you remember the crocodile who just suddenly had no teeth". So yes, they got it. It did make an impression on them", mother to girl at school D.
The few families who did not read the book cited mainly practical impediments rather than disapproval. Parents' feedback indicated that the book's easy adaptability to current practices and bedtime routines was a crucial element of its successful implementation in families' everyday life.
Educational App with Learning Games and AR-Function
The questionnaire evaluations were mirrored in the interviews, where parents of children who had used the app assessed that the health education message was easy to grasp and that children liked the gamification concept.
"He liked the app; the one where you can feed it with lots of sugar, or greens and then it, like, got better or did not get better. He thought that was funny. Yeah, and then the fact that it could talk to him", mother to boy at school A.
The evaluation of the app from the child's perspective differed widely and determined the frequency of use.
"Then we tried that app. He did not find that interesting, the one where those gizmos jump around. He really thought that was boring", mother to boy at school A.
As with the card game, the educational app demanded an initial effort from parents to install the app and explain its functionality to the child. In some families, this was an impediment to use. For others, technical challenges prevented the app from being downloaded. To some, technical issues became an insurmountable obstacle due to general frustration with online platforms and digital resources related to the COVID-19 lockdown.
"I must say, with all this lockdown and corona. It has been incredible with this homeschool ing craze and all that technical shit and stuff. So, having to download an app and figuring it out. (tired sigh!) I was just very 'no!'", father to girl at school C.
This argument conveyed the general lack of time and energy that many families reported rather than a critique of the app as such.
Private Facebook Group
As the survey data showed, most subscribers were inactive; they did not post or comment on posts from the project team, even when different engagement tactics were employed by the administrators (who were part of the intervention team). In the interviews, participants could clarify and give more details on the lack of activity among subscribers.
"I do not use Facebook for communication purposes. I simply use it as a tool to look into what people are doing. To probe into people's lives (laughs)", father to girl at school C.
Some parents did not subscribe, either because they missed the invitation or because they had dropped social media on principle, but among parents who subscribed, the evaluation of the group was positive. For the most part, they liked the content but did not want to comment or like posts, simply because they rarely interacted on social media. When asked, parents explained that the topics of health, dietary patterns and parental care were sensitive, and they were hesitant to discuss them with, e.g., fellow parents whom they hardly knew.
"I probably would have done it in another setting where I knew who the members were and then I probably would have chosen the Facebook group that belongs to (child's name) class. So, like a slightly narrower forum. I only used the Facebook group for inspiration or information. So, only as something for me, not something from me", mother to girl at school C.
Though very few parents contributed actively with content or commentaries, many read the posts that the project team wrote and posted on a regular basis. They received the notifications, and for many, the posts worked as a welcome reminder.
"Yes, but it was nice to have ongoing reminders, because you can easily forget all about it and then get back on the wrong track. Starting again to buy candy, even if you really do not want to. "Why did I do this? There is no reason to do so. " So it was a really good reminder", mother to girl at school A.
Results showed that the Facebook group did not unfold as planned with regard to providing social interaction, but subscribers reported that content and notifications worked as helpful reminders and instigated motivation and engagement.
The Child-Centered Approach as a Basis for a Shared Language
As a crosscutting theme concerning several components, interviewees highlighted the all-family approach in the communication of the guidelines present in the serving size board, the inspiration booklet, the educational card game (the Monster Game) and the learning games in the app. As the one-by-one presentations have shown, these learning resources included a range of child-oriented, visual, and easy-to-understand tools developed to explain the guidelines. Participants evaluated them as very useful. The tools equipped them with applicable arguments and logic when discussing reductions and rules on sweet treats with their child.
"She understands if we show her: "At your age you should not have more than this". And then she can more easily put it into perspective, and, like, really understand and accept it", mother to girl at school B.
The parent-child materials provided guidance to help parents explain the guidelines. This shared language on sugar-rich foods and drinks was reported to have helped with making the child understand why reducing the intake of sugar-rich foods and drinks was important and had improved the quality and nuance of the conversation that the family had concerning their sweet habits.
"We have just discussed it: "But there are simply no biscuits for now because listen, you have four available, and therefore you can have an apple"", mother to girl at school C.
However, as described in relation to the serving size board, not all parents agreed upon introducing this intervention tool to their child and adopted an adult-centered approach as a conscious strategy.
Discussion
This study showed an overall good parental acceptability of the intervention components in the family-based intervention 'Are you too sweet?' aimed at reducing the intake of sugar-rich foods and drinks among children. The key modality for message delivery was the new guidelines on sugar-rich foods and drinks [1], communicated to the families through a consultation with the school health nurse, including individual registration and output through 'the sugar-rich food screener', supplemented by a box with home-use materials and a private Facebook group to support parenting practices around limiting the intake of sugar-rich foods and drinks.
While all families attended the school health nurse consultation and, in general, expressed satisfaction with both the consultation and the individual registration and output from the 'sugar-rich food screener', both the questionnaire responses and the analysis of the qualitative interviews showed an uneven frequency of use of the home-use materials and, likewise, a certain degree of variation in satisfaction ratings. No component was deemed offensive or inadequate, but not all were considered equally relevant or useful. As a general pattern, components that demanded little effort and were compatible with existing practices were more easily implemented and more frequently used, e.g., the inspiration booklet and the read-aloud children's book, while the Monster Game and the educational app provided as part of the home-use materials were used by fewer families and, in general, with less satisfaction.
School Health Nurse Consultation and the Sugar-Rich Food Screener
A main component in the 'Are you too sweet?' intervention was the communication of the newly developed maximum limits on sugar-rich foods and drinks at the school health nurse consultation, together with the associated individually tailored advice. Families' evaluations emphasized the school health nurse as a trusted information sender, notably for the children. This is in line with another qualitative study on the experience of school health nurses working with overweight children in elementary schools in Sweden, where the nurses' sensitivity to individual needs and ability to provide individual support and advice were considered pivotal [46]. Further, for some families, the consultation set-up encompassing both parents and child was mentioned as important for establishing the foundation for a shared language on sugar-rich foods and drinks.
Participants underscored the usefulness of the personalized guidance in regard to the family's habits and actual intakes. The in-person individual feedback made information relevant and relatable. The differentiated guidance was enabled by the 'sugar-rich food screener', and results showed that the screener equally functioned as a motivational trigger for many parents, as the individualized feedback and visualization of the maximum weekly servings displayed the consequences of a high intake in a tangible and easy-to-grasp manner. In a preceding evaluation conducted among the participating school health nurses, they expressed their satisfaction with the information on individual intakes and actual habits that the sugar-rich food screener provided, which allowed them to tailor advice to the family's specific needs [38]. Other studies support how and why the tailoring of advice increases self-efficacy and behavioural capability by providing participants with the knowledge and tools necessary to set and pursue their goals [47][48][49]. The high acceptability indicates that the sugar-rich food screener and the interpretation of the output by an educated health advisor (the school health nurse) are efficient and that the 'Are you too sweet?' team has succeeded in designing a tool that may improve engagement and self-efficacy. It should be underscored, however, that though most parents reported benefiting from the health dialogue with the school health nurse, some seemed to benefit less, as they found the guidelines and advice less relevant despite the individualized approach. This stance points to a much-debated dilemma in public health ethics: the conflict between the potential paternalistic effects of intervention and individual autonomy [50]; or, as Riiser has asked: "can we justify imposing on the participant's personal preferences by directing actions for his or her own good?" [51] (p. 241).
Components and Materials Used at Home
The box with home-use materials that families received included a serving size board with reusable stickers, an inspiration booklet, an educational card game (the Monster Game), a read-aloud children's book, and access to an educational app with learning games. In addition, parents were invited to subscribe to a private Facebook group. Responses from the questionnaire showed a certain degree of variation in the use of the home-use materials. While the inspiration booklet and the read-aloud children's book were looked through or read by most participating families (94% and 82%, respectively), about two-thirds of the families used the serving size board and card game (62% and 65%, respectively), and around half of the participants used the educational app (49%). With regard to the Facebook group, around three out of five participants subscribed. Among the participants who had used the materials, the same degree of variation was found in their satisfaction ratings. Participants who were either satisfied or very satisfied ranged from 35% and 53% for the Facebook group and the educational card game, respectively, to 74% and 86% for the serving size board and the read-aloud children's book. Hence, some components seem more accessible to participants than others, a finding that is mirrored in the interview data, where families report that components that demanded preparation, such as downloading an app or reading rule books, or where, e.g., technical difficulty with the initial set-up could occur, were less likely to be used. This corresponds to findings from other studies using games and apps [52] that describe poor usability in relation to, e.g., non-intuitive interfaces or technical obstacles. These impediments might have been an even greater obstacle to overcome due to the COVID-19 context, where many parents experienced distress and a lack of time and resources due to the imposed additional work strain of juggling the challenges of home-schooling (often of more than one child) while working remotely themselves. In relation to the Facebook group, the distress and other contextual effects of the societal lockdown in Denmark might likewise explain the frequent assessment of the group and its function as 'a kind reminder'. Despite the lack of social interaction, the Facebook group thus indirectly instigated motivation and engagement. Other studies evaluating behaviour change and motivational techniques in interventions support the effectiveness of digital prompts as cues to reinforce motivation and potentially behaviour change [11,53]. The findings describe how prompts, e.g., in push notifications, increase parental engagement and that parents find the content helpful [53].
Parents who used the intervention components expressed that their behavioural capability increased through the educational properties of, notably, the booklet and the serving size board. In the interviews, parents emphasized the serving size board as a good tool to convey the guidelines to their children and noted that the stickers were used to cue serving sizes and maximum intakes. Results from the interviews showed that a fraction of parents did not use the serving size board (and were therefore not asked to evaluate it in the questionnaire) because they did not approve of what they deemed a potential responsibilization embedded in the design. In addition, some objected to the division of foods into 'allowed' and 'allowed in limited amount' categories, and thereby 'good' and 'bad' foods. This finding underlines the importance of communicating healthy eating messages that emphasize a balance of food and drinks and of avoiding an exaggerated focus on single foods when introducing the components to families.
However, as the serving size board was not imposed as mandatory but offered as an optional tool, parents who disapproved of it could easily refrain from using it. The board still holds a capacity for transfer of responsibility whereby the child is rendered individually responsible for intake pattern or monitoring of intake in relation to the guidelines. The statements in the interviews from the sub-group of parents disapproving of the responsibilization are important in this regard, notably because these same parents, in general, approve of the guidelines as such. Their disapproval of the serving size board expressed in the interviews underlines the unavoidable, inherent risk of responsibilization in child-oriented intervention components that seek to enhance health literacy in the child. A responsibilization of the child could cause feelings of pressure and guilt that might engender negative social and emotional experiences around food and eating. Several studies have shown how such experiences might lead to less healthy eating habits [3,54]. In other families, the child-oriented components facilitated a shared language on sugar-rich foods and drinks and thus invited co-management and collaborative decisions on, e.g., intake patterns. Such practices hold the potential for a transfer of responsibility to the child but do not necessarily induce it. The balance between responsibilization and increased health literacy in the child is a fine line, and the interviewees navigated it differently due to their diverse parenting values.
When assessing the home-use materials in combination, families did not universally prefer one (type of) material. The diverse modalities were each favoured and combined differently from family to family, and it might be argued that the range of different modalities allowed families to customize their own selection of tools and resources to tailor 'their family intervention'. Evaluated against the aim of empowering and motivating participants to develop new, healthier habits of their own, this is positive.
Engagement of Families Regarding the Intake of Sugar-Rich Foods and Drinks
Considering the feedback from the consultation with the school health nurse, where several parents relayed that it did not have any significant impact on their perception of their own health habits or their child's intake of sugar-rich foods and drinks, it might be questioned to what extent the intervention components increased engagement universally. The evaluation of the materials and tools might be positive, and acceptability might be high, but this may not instigate changes among all families, as not all parents are motivated and, accordingly, do not feel compelled to engage in behaviour change. If the intervention message of reducing the intake of sugar-rich foods and drinks does not align with parental core values, the aim of increased motivation for change will not be attainable, as motivation is conditioned by concordance with personal beliefs and core values [37].
In studies aiming to explain the modest results of dietary interventions, insufficient effects are often attributed to social barriers and a lack of specificity or resources [55-57]. Moreover, health promotion campaigns and interventions inevitably raise ethical issues, as they demarcate normative standards for 'correct behaviour' [58]. Parenting studies have furthermore highlighted the risk of evoking negative emotional responses among parents when correcting their current dietary practices [30,31]. The 'Are you too sweet?' study aimed to overcome these barriers by offering diverse strategies and a motivationally driven range of intervention components to engage and empower families. Overall, the results suggest that the 'Are you too sweet?' project team largely achieved the aim of developing a useful, empowering, and, in general, non-offensive toolkit. However, the aim of engaging all families seems not to have been achieved.
Strengths and Limitations
It is a strength that questionnaire data were obtained from 83 of 89 participating families (93%) and that 24 families were interviewed. This provides detailed material for the analyses. Additionally, it is a strength that a broad spectrum of socio-economic levels among participants was obtained and that the study population thus covers a diverse selection of family types and socio-economic statuses. Fathers were nevertheless under-represented despite the efforts to recruit them. Furthermore, the study could have been made more nuanced by interviewing the children alongside the parents in the evaluation interview [59], just as in-person interviews would have been preferred over the online format imposed by the pandemic.
It is a strength of the study that the range of different intervention components allowed families to customize their own selection of tools and resources according to their preferences; a consequence of this, however, is that the intervention components cannot be evaluated separately. In addition, it was not explicitly evaluated whether participants understood the recommended maximum number of weekly servings as comprising both salty and sweet discretionary foods [1]. As mentioned throughout the article, the impact of the COVID-19 pandemic, lockdowns, and related changes in the everyday life of the families might have influenced their participation in the intervention, but this effect is difficult to measure and thus to adjust for. Families were affected differently depending on, e.g., their socio-economic situation and work-life organization. An additional limitation is the lack of observational data from the school health nurse consultations. Such data would have provided information on the nurses' attitudes vis-à-vis the guidelines, their use of the intervention components and potential encouragement to use (selected) materials, as well as the strategies implemented to tailor advice to individual families. Such information would have enabled a more nuanced evaluation of the context for and impact of the consultation.
Conclusions
Results suggest that future initiatives to promote a reduced intake of sugar-rich foods and drinks among pre-schoolers should include individually tailored advice in accordance with parenting values. Knowledge-building materials might prove effective if combined with support tools for behaviour change. The intervention components were generally acceptable and non-offensive and had the potential to increase knowledge and behavioural capability and thereby strengthen parenting practices. The personalized feedback on intake in relation to the guidelines, facilitated by school health nurses, seemed to be a motivational trigger that made, notably, the knowledge-building and behaviour-support materials relevant for many, but not all, parents. Further, the intervention components were useful for parents as resources facilitating the translation of advice from the school health nurse into daily family practices, in particular when the component could be implemented in existing practices and routines. A sub-group of parents approved of the guidelines but did not use the serving-size board, as the latent risk of responsibilization embedded in its use conflicted with their parenting values. Bearing this in mind, the components hold important potential for health promotion around sugar-rich foods and drinks. Components may significantly improve parental knowledge, establish the foundation for a shared language on sugar-rich foods and drinks, and enhance parenting practices around limiting the intake of sugar-rich foods and drinks. | 2022-07-10T05:20:34.246Z | 2022-06-29T00:00:00.000 | {
"year": 2022,
"sha1": "b497b39f105bf7bfbf2123b4f43338988a8ade92",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "b497b39f105bf7bfbf2123b4f43338988a8ade92",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233394241 | pes2o/s2orc | v3-fos-license | Diverse Image Inpainting with Bidirectional and Autoregressive Transformers
Image inpainting is an underdetermined inverse problem, which naturally allows diverse contents to fill the missing or corrupted regions realistically. Prevalent approaches using convolutional neural networks (CNNs) can synthesize visually pleasant contents, but CNNs suffer from limited receptive fields for capturing global features. With image-level attention, transformers make it possible to model long-range dependencies and to generate diverse contents by autoregressively modeling pixel-sequence distributions. However, the unidirectional attention in autoregressive transformers is suboptimal, as corrupted image regions may have arbitrary shapes with contexts from any direction. We propose BAT-Fill, an image inpainting framework that introduces a novel bidirectional autoregressive transformer (BAT). BAT utilizes transformers to learn autoregressive distributions, which naturally allows the diverse generation of missing contents. In addition, it incorporates a masked language model like BERT, which enables bidirectional modeling of the contextual information of missing regions for better image completion. Extensive experiments over multiple datasets show that BAT-Fill achieves superior diversity and fidelity in image inpainting, both qualitatively and quantitatively.
Introduction
As an ill-posed problem, image inpainting naturally allows numerous solutions as long as the restored images are realistic and semantically reasonable, as illustrated in Fig. 1. However, it remains a great challenge to synthesize diverse yet realistic contents that maintain integrity and consistency with the uncorrupted image regions, especially when the corrupted regions are large and rich in complex textures and structures.
Recently, GAN-based (generative adversarial network) inpainting [32,50,30,26] has achieved remarkable progress by training with reconstruction and adversarial losses over large-scale datasets. However, these methods are trained to learn a one-to-one mapping from masked images to complete images, which leaves them incapable of producing diverse inpainting results. In contrast to deterministic inpainting, a few studies [64,61] attempt diverse inpainting with variational auto-encoder (VAE) networks [23], but the inpainting quality is often compromised when generating complex structural and texture patterns due to the limited capacity of parametric distributions [63]. Instead of parametric distribution modeling like VAE-based methods, [33] utilizes a CNN-based conditional network to learn an autoregressive distribution for recovering diverse and structural features. However, autoregressive models are optimized to encode unidirectional context only, which means that the informative contexts of valid pixels after the current position are substantially ignored. To exploit bidirectional context, [41] adopts the masked language model (MLM) of BERT [11]. However, MLM predicts the masked tokens independently, which may oversimplify the complex context dependency in the data [37] and result in inconsistency in the generated results.
In this paper, we propose a bidirectional and autoregressive transformer (BAT) that marries the best of autoregressive modeling and MLM to model deep bidirectional contexts in an autoregressive manner. In the proposed BAT, we permute the input sequence by sorting the valid and missing pixels and start autoregressive modeling at the position of the first missing pixel. With all available contexts in front, BAT can exploit bidirectional contexts and spatial dependency simultaneously. In addition, we adopt the two-stage completion procedure reported in [41] and develop BAT-Fill, an image inpainting network that first recovers diverse yet coherent image structures based on the proposed BAT and then exploits a CNN-based texture generator to up-sample the coarse structures and synthesize texture details. Extensive experiments show that BAT-Fill achieves superior image inpainting performance.
The main contributions of this work can be summarized in three aspects. First, we adopt transformers to learn an autoregressive distribution for diverse image inpainting, which effectively improves the modeling capacity for long-range dependencies and global structures. Second, we design a novel bidirectional and autoregressive transformer (BAT) that captures bidirectional information and establishes output dependency simultaneously. Third, extensive experiments over multiple datasets show that the proposed method achieves superior performance compared with the state-of-the-art in both inpainting quality and inpainting diversity.
Image Inpainting
As an ill-posed problem, image inpainting with realistic and high-fidelity results is a challenging task that has been studied for years. Based on the inpainting outcome, most existing image inpainting methods can be broadly classified into two categories: deterministic image inpainting and diverse image inpainting.
Deterministic Image Inpainting
Traditional methods address the image inpainting challenge through either image diffusion [5,1] or the use of image patches [3,15,9]. However, diffusion-based methods often introduce diffusion-related blurs and tend to fail when the missing or corrupted image regions are large [6,2,4]. Patch-based methods can work well for the inpainting of stationary backgrounds with repeating patterns. However, they struggle to complete large missing regions of complex scenes, as the patch-based approach relies heavily on patch-wise matching of low-level features.
Generative adversarial networks (GANs) [14] have been investigated extensively in various image synthesis tasks such as image translation [31,36,19,53,51,56,54], image editing [48,44,43], image composition [24,59,58,52,57,55], etc. Specifically for image inpainting, Pathak et al. [32] first apply adversarial learning to the image inpainting task. To further improve adversarial learning within local regions, Iizuka et al. [18] introduce an extra local discriminator to enforce local consistency. As the local discriminator uses fully-connected layers and can only deal with missing regions of fixed shapes, Yu et al. [49] inherit the discriminator from PatchGAN [19] due to its great success in image translation. Yan et al. [46] propose patch-swap to make use of distant feature patches for better inpainting quality. Liu et al. [25] design partial convolutions to alleviate the negative influence of the masked regions. Yu et al. [50] present a free-form image inpainting system based on an end-to-end generative network with gated convolutions. To generate reasonable structures and realistic textures, Nazeri et al. [30] and Xu et al. [45] utilize edge maps as structural guidance for image inpainting, while Ren et al. [35] instead propose to use edge-preserved smooth images as structural guidance. Liu et al. [27] propose feature equalizations to improve the consistency between structures and textures. As the aforementioned methods focus on reconstructing the ground truth instead of generating pluralistic inpainting, they are constrained to producing a single deterministic inpainting image for each incomplete image.
Diverse Image Inpainting
To achieve pluralistic image inpainting with plausible filling contents, Zheng et al. [64] propose a VAE-based network with a dual pipeline, which trades off between reconstructing the ground truth and maintaining the diversity of the inpainting results. Similarly, Zhao et al. [61] propose a VAE-based model and leverage a reference image to improve the diversity. Although the above methods achieve a certain degree of diversity, the completion quality of VAE-based methods is limited due to variational training. Recently, Zhao et al. [62] propose a co-modulated GAN that incorporates the image condition into the stochastic generation of an unconditional generative model for diverse inpainting. Peng et al. [33] introduce a hierarchical vector quantized variational auto-encoder (VQ-VAE) to quantize the context representation and achieve diverse structure generation in an autoregressive way. Sharing a framework similar to ours, Wan et al. [41] propose to apply a transformer for diverse structure generation using the objective of BERT [11]. In contrast, we propose a novel Bidirectional and Autoregressive Transformer (BAT) which inherits the advantages of autoregressive models and bidirectional models and achieves superior image inpainting performance.
Transformers in Vision
The transformer has emerged as a powerful tool to model the interactions between sequence elements regardless of their relative positions. Vaswani et al. [40] first introduced the purely attention-based transformer for machine translation. DETR [7] utilizes a transformer decoder to model object detection as an end-to-end dictionary lookup problem with learnable queries, thus removing hand-crafted processes such as Non-Maximal Suppression (NMS). Based on DETR, deformable DETR [66] further introduces a deformable attention layer to focus on a sparse set of contextual elements, which achieves fast convergence and better detection performance. Recently, the Vision Transformer (ViT) [12] showed that pure-transformer networks, treating an image as a sequence of patches, can achieve excellent image classification performance compared with CNN-based methods. DeiT [39] further extends ViT by introducing a novel distillation approach. BoTNet [38] replaces the spatial 3×3 convolution layers with multi-head self-attention in certain stages of the original ResNet [16], demonstrating very competitive performance on different visual recognition tasks. Esser et al. [13] combine transformers and VQ-VAE in both conditional and unconditional generation tasks and achieve high-fidelity synthesis of megapixel images.
Instead of leveraging transformer features for high-level tasks or generating pixels autoregressively, we specifically propose a novel Bidirectional and Autoregressive Transformer (BAT) for image inpainting, so that the model can learn both bidirectional context and output dependency.
Proposed Method
As illustrated in Fig. 2, the proposed BAT-Fill consists of two major parts: a diverse-structure generator for the reconstruction of coarse image structures and a texture generator for the generation of fine-grained texture details. The diverse-structure generator incorporates and adapts a transformer architecture that models the distribution of global structural information and recovers complete and coherent low-resolution structures S_1, S_2, ..., S_N given a masked image I_m as input. Under the guidance of a coarse structure S_i, i ∈ [1, N], and the corrupted image I_m, the texture generator synthesizes high-resolution fine-grained texture to produce the inpainting result I_out. Once the full model is trained, we can sample different image structures S_i, i ∈ [1, N], with the diverse-structure generator and thus generate diverse inpainting results with the texture generator; more details are discussed in the ensuing subsections.
Context Representation
To relieve the quadratic complexity incurred by the transformer, we adopt a low-resolution image of size 32 × 32 × 3 to represent the coarse structure. As autoregressive generation requires a discrete distribution, the pixel values have to be treated as classes by the model, which leads to a dimensionality of 256^3 for each pixel of an 8-bit RGB image. Following Chen et al. [8], a color palette is applied to further reduce the dimensionality to 512 while faithfully preserving the main structure of the original images; it is generated by k-means clustering of RGB pixel values with k = 512 over the ImageNet [10] dataset.
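To make this quantization step concrete, the following is a minimal sketch of how such a palette could be built and applied with scikit-learn and NumPy. The 512-cluster palette and the 32 × 32 structure size follow the text; the function names and the idea of fitting on a pre-sampled pixel array are illustrative assumptions, not the authors' code.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_palette(pixel_samples: np.ndarray, k: int = 512) -> np.ndarray:
    """Cluster RGB pixel samples of shape (N, 3) into a k-color palette (k, 3)."""
    kmeans = KMeans(n_clusters=k, n_init=1, random_state=0).fit(pixel_samples)
    return kmeans.cluster_centers_

def quantize(image: np.ndarray, palette: np.ndarray) -> np.ndarray:
    """Map a (32, 32, 3) image to a sequence of 1024 palette indices (tokens)."""
    pixels = image.reshape(-1, 3).astype(np.float32)               # (1024, 3)
    dists = ((pixels[:, None, :] - palette[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)                                    # (1024,)

def dequantize(tokens: np.ndarray, palette: np.ndarray) -> np.ndarray:
    """Invert quantize(): palette indices back to a (32, 32, 3) image."""
    return palette[tokens].reshape(32, 32, 3)
```

In this view, the transformer never sees raw RGB values; it operates on the 1024-token sequence with a 512-way softmax per position.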
Bidirectional and Autoregressive Transformer
Autoregressive (AR) modeling and masked language modeling (MLM) in BERT [11] are two representative objectives for exploiting large language corpora in language processing tasks. Given a discrete sequence X = {x_1, x_2, ..., x_L}, where L is the length of X, the AR model is optimized by maximizing the unidirectional likelihood

$$\max_{\theta} \; \log p_{\theta}(X) = \sum_{l=1}^{L} \log p_{\theta}(x_l \mid x_{<l}), \qquad (1)$$

where θ denotes the parameters of the model. In contrast, MLM aims to reconstruct corrupted data given the masked positions {m_1, ..., m_K}, where K is the number of masked tokens. Each masked position of the corrupted data is indicated by a special token [M], following BERT [11]. Denoting the masked tokens as X_M and the unmasked tokens as X_\M, the objective of MLM can be formulated as

$$\max_{\theta} \; \log p_{\theta}(X_M \mid X_{\backslash M}) \approx \sum_{k=1}^{K} \log p_{\theta}(x_{m_k} \mid X_{\backslash M}). \qquad (2)$$

AR and MLM differ in two aspects, as defined in Eqs. 1 and 2. The first aspect is output dependency: MLM predicts the masked tokens separately and independently, which may oversimplify the complex context dependency in the data [37]. As a comparison, AR factorizes the predicted tokens with the product rule, which establishes output dependency and produces better predictions. The second aspect is context dependency: AR is conditioned only on the tokens up to the current position (in a fixed order), while MLM has access to bidirectional contextual information. The latter is important for image inpainting, as the missing or corrupted image regions often have arbitrary shapes with rich variation in the neighboring background.
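The contrast between Eqs. 1 and 2 can be made concrete with a toy PyTorch sketch. This is a minimal illustration that assumes a generic `model(tokens, causal=...)` returning per-position logits; that calling convention, and the use of teacher forcing for the AR case, are assumptions for exposition only.

```python
import torch
import torch.nn.functional as F

def ar_loss(model, x):
    """Eq. 1: predict each token from the tokens before it (causal attention)."""
    logits = model(x[:, :-1], causal=True)              # (B, L-1, vocab)
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           x[:, 1:].reshape(-1))

def mlm_loss(model, x, mask, mask_token_id):
    """Eq. 2: corrupt masked positions and predict them independently."""
    corrupted = x.masked_fill(mask, mask_token_id)      # insert [M] tokens
    logits = model(corrupted, causal=False)             # bidirectional attention
    return F.cross_entropy(logits[mask], x[mask])       # loss only on masked slots
```

Note how `ar_loss` couples each prediction to all earlier predictions, while `mlm_loss` scores every masked slot against the same fixed context, which is exactly the independence assumption criticized above.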
We propose a novel Bidirectional and Autoregressive Transformer (BAT) that inherits the advantages of AR and MLM to achieve bidirectional context modeling and output dependency simultaneously. Denoting the masked positions, collected in their original order, by Π = {π_1, ..., π_K}, the training objective of BAT is formulated as

$$\max_{\theta} \; \log p_{\theta}(X_{\Pi} \mid X_{\backslash \Pi}) = \sum_{k=1}^{K} \log p_{\theta}\big(x_{\pi_k} \mid X_{\backslash \Pi},\, x_{\pi_{<k}}\big). \qquad (3)$$

We first project all tokens into a d-dimensional token embedding and add a learnable position embedding over the token embedding to preserve the positional information. Unlike XLNet [47], which randomly permutes the input sequence to capture bidirectional context, we permute all unmasked tokens X_\Π to the front while maintaining the original order of the masked tokens, which makes it easier to predict their positions. Moreover, the positional information of all masked tokens is conditioned on for better modeling of the full input sequence (e.g., the counts and positions of masked tokens in the sequence). The proposed BAT model is then adopted to predict the masked tokens as illustrated in Fig. 3.
As shown in Fig. 3, consider a masked sequence X = {x_1, [M], [M], x_4}, where the tokens x_2 and x_3 are masked. Here we use the mask token instead of x_1 to predict x_2, to encourage the leverage of positional information. We apply bidirectional modeling [11] to the non-predicted tokens and autoregressive modeling to the predicted tokens to avoid future information leakage. For example, while predicting x_3, the model can attend to x_4 among the non-predicted tokens and, meanwhile, to the previously 'predicted' token x_2. Hence, we capture bidirectional context and establish output dependency simultaneously with the proposed BAT.
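As an illustration of this attention pattern, the sketch below builds the L × L attention mask for a permuted sequence in which the `n_valid` unmasked tokens come first: every position may attend to all valid tokens, while attention within the masked (predicted) block is causal. This is one plausible reading of Fig. 3, not the authors' released implementation.

```python
import torch

def bat_attention_mask(n_valid: int, n_masked: int) -> torch.Tensor:
    """True = attention allowed. Valid tokens are bidirectional among
    themselves; predicted tokens attend to all valid tokens plus the
    previously predicted tokens (causal within the masked block)."""
    L = n_valid + n_masked
    mask = torch.zeros(L, L, dtype=torch.bool)
    mask[:, :n_valid] = True                    # every row sees the valid tokens
    tri = torch.tril(torch.ones(n_masked, n_masked, dtype=torch.bool))
    mask[n_valid:, n_valid:] = tri              # causal among predicted tokens
    return mask

# Example for X = {x1, [M], [M], x4}: two valid tokens, two masked tokens.
print(bat_attention_mask(2, 2).int())
```

The lower-right triangular block is what prevents the future-information leakage mentioned above, while the dense left columns provide the bidirectional context.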
Transformer Architecture
In this work, we adapt GPT [34] as our network architecture. The network is a decoder-only transformer that consists of N stacked decoder blocks. Given the input embedding H^0, each decoder block l is formulated as

$$H'^{\,l} = H^{l-1} + MA\big(LN(H^{l-1})\big), \qquad H^{l} = H'^{\,l} + MLP\big(LN(H'^{\,l})\big),$$

where MA, LN and MLP stand for multi-head self-attention, layer normalization, and fully-connected layers, respectively. For self-attention, we apply a customized mask to the L × L matrix of attention logits as illustrated in Fig. 3. At the final layer of the transformer, a learnable linear projection is employed to map H^N to logits, which parameterize the conditional distribution for each pixel. During inference, we follow the raster-scan order to predict each masked token bidirectionally and autoregressively. We adopt a top-K sampling strategy to randomly sample from the K most likely tokens. The predicted token is then concatenated with the input sequence as a condition for the generation of the next masked token. This process repeats iteratively until all the masked tokens are sampled. Finally, the generated discrete sequence is converted back to RGB values with the aforementioned color palette.
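The iterative top-K decoding described above can be sketched as follows. The `transformer` callable and its interface are assumptions (a function mapping a token sequence to per-position logits), and K = 50 matches the top-50 strategy mentioned later in the experiments.

```python
import torch

@torch.no_grad()
def sample_masked_tokens(transformer, tokens, masked_pos, k: int = 50):
    """Fill masked positions one by one in raster-scan order with top-K sampling."""
    tokens = tokens.clone()
    for pos in sorted(masked_pos):              # raster-scan order
        logits = transformer(tokens)[pos]       # (vocab,) logits at this slot
        topk = torch.topk(logits, k)
        probs = torch.softmax(topk.values, dim=-1)
        choice = topk.indices[torch.multinomial(probs, 1)]
        tokens[pos] = choice.item()             # condition later steps on it
    return tokens
```

Because each sampled token is written back before predicting the next slot, consecutive completions stay mutually consistent, while the stochastic top-K draw is what yields different structures S_i on repeated calls.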
Network Architecture
As the inpainting diversity can be achieved by sampling the reconstructed structures S, we take advantage of the efficiency and texture representation capacity of CNNs to learn a deterministic mapping between the low-resolution structures S and the high-resolution completed image I_out. The texture generator thus utilizes CNN layers and adversarial training to up-sample the reconstructed structures and replenish high-fidelity texture details by leveraging the styles of the valid pixels of the input image I_m. In particular, we employ two encoders to encode the low-resolution structures and input images into two high-level CNN representations of the same dimension. We then concatenate them together as the input to a few consecutive residual blocks with different dilation rates. Finally, a SPADE [31] generator is employed to incorporate the modulated style of the input images and gradually up-sample the texture features to the target resolution. Meanwhile, all vanilla convolutions are replaced by gated convolutions [50].
Loss Functions
The training of the texture generator is driven by a combination of several losses, including a reconstruction loss, an adversarial loss, and a perceptual loss. For clarity, we denote the texture generator as G_t, the ground truth as I_gt, and the completed image as I_out. First, a reconstruction loss L_rec between I_out and I_gt is measured as

$$\mathcal{L}_{rec} = \left\| I_{out} - I_{gt} \right\|_1 .$$

Besides, a CNN-based discriminator D together with an adversarial loss is employed to synthesize fine texture details. Specifically, the texture generator G_t and the discriminator D are jointly trained with the hinge loss [19], where the adversarial losses for the discriminator and generator are defined as

$$\mathcal{L}_{adv}^{D} = \mathbb{E}\big[\max(0,\, 1 - D(I_{gt}))\big] + \mathbb{E}\big[\max(0,\, 1 + D(I_{out}))\big], \qquad \mathcal{L}_{adv}^{G} = -\mathbb{E}\big[D(I_{out})\big].$$

Next, we penalize the perceptual and semantic discrepancy via the perceptual loss [20] with a pretrained VGG-19 network:

$$\mathcal{L}_{perc} = \sum_{i} \lambda_i \left\| \Phi_i(I_{out}) - \Phi_i(I_{gt}) \right\|_1 ,$$

where λ_i are balancing weights and Φ_i is the activation of the ith layer of the VGG-19 model (including relu1_2, relu2_2, relu3_2, relu4_2 and relu5_2); Φ_l represents the activation map of the relu4_2 layer, which mainly extracts semantic features. The texture generator is trained by optimizing the combination of the aforementioned losses:

$$\mathcal{L} = \lambda_{rec}\,\mathcal{L}_{rec} + \lambda_{adv}\,\mathcal{L}_{adv} + \lambda_{perc}\,\mathcal{L}_{perc},$$

where λ_rec, λ_adv, and λ_perc are empirically set to 1.0, 1.0 and 0.2, respectively, in our implementation.
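A minimal PyTorch sketch of this loss combination is given below. The weight values follow the text; `vgg_features` (a callable returning the listed relu activations) and the discriminator interface are assumed placeholders rather than the authors' exact modules.

```python
import torch

def hinge_d_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    """Hinge loss for the discriminator (L_adv^D)."""
    return (torch.relu(1.0 - d_real) + torch.relu(1.0 + d_fake)).mean()

def generator_loss(i_out, i_gt, disc, vgg_features, lambdas=(1.0, 1.0, 0.2)):
    """Combined generator objective: L1 + hinge adversarial + perceptual."""
    l_rec, l_adv, l_perc = lambdas
    rec = (i_out - i_gt).abs().mean()                    # L1 reconstruction
    adv = -disc(i_out).mean()                            # hinge generator loss
    perc = sum((fo - fg).abs().mean()                    # perceptual (VGG) loss
               for fo, fg in zip(vgg_features(i_out), vgg_features(i_gt)))
    return l_rec * rec + l_adv * adv + l_perc * perc
```

In training, `hinge_d_loss` and `generator_loss` would be applied in alternating discriminator/generator steps, as is standard for GAN optimization.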
Datasets
We conduct experiments over three public datasets that have different characteristics, as listed:
- CelebA-HQ [21]: It is a high-quality version of the human face dataset CelebA [28] with 30,000 aligned face images. We follow the split in [50] that produces 28,000 training images and 2,000 validation images, of which 1,000 validation images are randomly sampled in evaluations.
- Places2 [65]: It consists of more than 1.8M natural images of 365 different scenes. We adopt the same 800 images from the validation set as [41] in evaluations.
- Paris StreetView [32]: It is a collection of street view images in Paris, which contains 14,900 training images and 100 validation images.
Compared Methods
We compare our method with a number of state-of-the-art methods, as listed:
- GC [50]: It is also known as DeepFill v2, a two-stage method that leverages gated convolutions.
- EC [30]: It is a two-stage method that first predicts salient edges to guide the generation.
- MEDFE [26]: It is a mutual encoder-decoder that treats features from deep and shallow layers as structures and textures of an input image.
- PIC [64]: It is a probabilistically principled framework that leverages a VAE to generate diverse image inpainting.
- ICT [41]: It is a diverse inpainting framework that combines the merits of transformers and CNNs for high-fidelity image inpainting.
Evaluation Metrics
We perform evaluations using five widely adopted evaluation metrics: 1) Fréchet Inception Distance (FID) [17], which evaluates the perceptual quality by measuring the distribution distance between the synthesized images and real images; 2) mean ℓ1 error; 3) peak signal-to-noise ratio (PSNR); 4) structural similarity index (SSIM) [42] with a window size of 51; 5) Learned Perceptual Image Patch Similarity (LPIPS) [60], which evaluates the diversity of generated images. The average LPIPS scores are calculated between random pairs of sampled inpainting results.
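The LPIPS-based diversity score can be computed as sketched below with the `lpips` package. Averaging over sample pairs follows the text; the specific pairing scheme shown (all unordered pairs of N samples per masked image) is an assumption, as the paper only states that random pairs are used.

```python
import itertools
import torch
import lpips  # pip install lpips

metric = lpips.LPIPS(net='alex')  # pretrained AlexNet backbone

def diversity_score(samples: torch.Tensor) -> float:
    """Mean pairwise LPIPS over N inpainted samples of one masked image.
    samples: (N, 3, H, W), values scaled to [-1, 1]."""
    pairs = list(itertools.combinations(range(samples.size(0)), 2))
    with torch.no_grad():
        dists = [metric(samples[i:i + 1], samples[j:j + 1]).item()
                 for i, j in pairs]
    return sum(dists) / len(dists)
```

A higher score indicates that repeated completions of the same masked input differ more perceptually, i.e., higher diversity.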
Implementation Details
The proposed method is implemented in PyTorch. The diverse-structure generator and the texture generator are trained using 256 × 256 images with random irregular masks [25]. We train the diverse-structure generator with AdamW [29] with β1 = 0.9, β2 = 0.95 and a learning rate of 3e-4, following [8]. For the texture generator, we use the Adam optimizer [22] with β1 = 0 and β2 = 0.9, and set the learning rates to 1e-4 and 4e-4 for the generator and discriminator, respectively. Learning rate decay is applied in the training of both networks, and the experiments are conducted on 4 NVIDIA(R) Tesla(R) V100 GPUs.
Quantitative Evaluation
Extensive quantitative evaluations have been conducted over the three datasets with irregular masks [25]. The irregular masks in the experiments are categorized according to the mask ratios, and an additional category, 'random', is evaluated, which samples masks with ratios varying from 20% to 60% at random. The performance of the compared methods was obtained using the publicly available pretrained models or implementation code.
We compare the proposed method with both deterministic and diverse image inpainting methods. Note that all reference metrics such as ℓ1, SSIM, and PSNR favor deterministic inpainting methods, whose single prediction is directly compared with the ground truth. Different from PIC [64], which utilizes its discriminator to sort the results, our method adopts the top-50 sampling strategy and uses all random samples for fair comparisons, which means our method directly generates stochastic inpainting without any additional filtering.

Table 2 shows the inpainting performance on the Paris StreetView dataset [32]. Compared with the deterministic methods GC, EC, and MEDFE, the proposed method achieves the best FID scores over different mask ratios and consistently outperforms the diverse inpainting method PIC in both inpainting quality (FID) and inpainting diversity (LPIPS). In addition, Table 1 shows the inpainting performance on CelebA-HQ [21] and Places2 [65]. For CelebA-HQ, our method consistently outperforms all compared methods, especially in FID scores. For Places2, our method achieves comparable performance with deterministic methods in all evaluation metrics, and it generally outperforms them in FID scores. In addition, the numerical results of BAT-Fill suggest a clear superiority over the diverse inpainting method PIC [64], and better FID scores than ICT [41].

Figure 5. Qualitative comparison of the proposed BAT-Fill with the state-of-the-art: BAT-Fill generates more realistic and diverse image inpainting over Places2 [65] with irregular masks.
Figure 6. Qualitative comparison of the proposed BAT-Fill with the state-of-the-art: BAT-Fill generates more realistic and diverse image inpainting over Paris StreetView [32] with irregular masks.
Table 1. Quantitative comparison of the proposed BAT-Fill with state-of-the-art methods over CelebA-HQ [21] and Places2 [65] validation images (1,000) with irregular masks [25] (* denotes that we trained the model based on official implementations, † denotes that the results are copied from [41]). For each metric, the best score is highlighted in bold, and the best score among diverse inpainting methods (i.e., PIC [64] and ours) is underlined.
We first evaluate and compare BAT-Fill with EC [30], GC [50], and PIC [64] on CelebA-HQ [21], which contains facial images with similar semantics. As shown in Fig. 4, though EC [30] and GC [50] can synthesize complete facial images with reasonable semantics, they tend to generate distorted facial structures and artifacts in the missing regions, which degrades the inpainting greatly. In addition, EC [30] and GC [50] can only generate deterministic inpainting, which clearly limits their applicability. Both PIC [64] and BAT-Fill can generate diverse inpainting. However, the PIC-generated images share similar makeups and facial features and thus have limited diversity. As a comparison, the BAT-Fill-generated facial images vary across a wide range of makeups and facial features and contain far fewer artifacts, demonstrating that BAT-Fill can produce more diverse and realistic inpainting.
Next, we evaluate and compare BAT-Fill with EC [30], GC [50], MEDFE [26], and PIC [64] on the Places2 [65] and Paris StreetView [32] datasets, where images have various semantics. In addition, a visual comparison with ICT [41] is conducted over the Places2 [65] dataset. As shown in Fig. 5, EC [30], GC [50] and MEDFE [26] tend to generate blurs and even corrupted textures in the inpainted images. The PIC [64] synthesized images suffer from unreasonable semantics, obvious artifacts, and limited diversity. Both ICT [41] and BAT-Fill achieve realistic image inpainting with far fewer artifacts and better diversity compared with the other methods. For Paris StreetView [32], BAT-Fill produces more diverse and plausible results than PIC [64], while achieving comparable or even better inpainting quality compared with the deterministic methods.

Table 2. Quantitative comparison of the proposed BAT-Fill with state-of-the-art methods over Paris StreetView [32] validation images (100) with irregular masks [25] (* denotes that we trained the model based on official implementations). For each metric, the best score is highlighted in bold, and the best score among diverse inpainting methods (i.e., PIC [64] and ours) is underlined.
Ablation Study
We study the effectiveness of the proposed BAT by conducting ablation studies over Paris StreetView [32]. In the ablation study, we remove the two key components from BAT respectively, which results in two models: 1) w/o bidirectional context, where we obtain the same objective as the autoregressive model, predicting the missing tokens conditioned on previous tokens with unidirectional attention; 2) w/o autoregressive modeling, where the model is equivalent to MLM, which independently reconstructs the missing tokens. To measure the diversity of MLM, we employ Gibbs sampling to iteratively sample tokens and place the predicted tokens back into the original sequence, instead of directly outputting all predicted tokens. For a fair comparison, we apply the same irregular masks (mask ratios 40-60%) to the same low-resolution images (32 × 32) from the validation set of Paris StreetView [32]. After predicting on the same inputs, the reconstructed structures of each model are evaluated without applying the texture generator.
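The Gibbs-style sampling used for the MLM baseline can be sketched as follows. This is a minimal illustration that assumes a bidirectional `mlm_model` returning per-position logits; the number of sweeps is a free parameter not specified in the text.

```python
import torch

@torch.no_grad()
def gibbs_sample_mlm(mlm_model, tokens, masked_pos, mask_id, sweeps: int = 3):
    """Iteratively resample masked positions, feeding predictions back in."""
    tokens = tokens.clone()
    tokens[list(masked_pos)] = mask_id            # start from fully masked slots
    for _ in range(sweeps):
        for pos in masked_pos:
            logits = mlm_model(tokens)[pos]       # (vocab,) at this position
            probs = torch.softmax(logits, dim=-1)
            tokens[pos] = torch.multinomial(probs, 1).item()  # place sample back
    return tokens
```

Unlike BAT's single ordered pass, each Gibbs update conditions only on the current state of the sequence, so the output dependency is enforced implicitly and only approximately over repeated sweeps.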
As shown in Table 3, using AR alone greatly degrades the quality of the reconstructed structures, and the high diversity measured by LPIPS is largely attributable to the poor reconstruction quality. MLM performs reasonably well, as it exploits the bidirectional context for inpainting. However, the proposed BAT clearly outperforms both in reconstruction quality, which is mainly reflected by FID, and it achieves comparable diversity as reflected by LPIPS. This is mainly because BAT models the output dependency to align the future predictions with previously predicted tokens, which improves the consistency of the reconstructed structures. Overall, the ablation study demonstrates that the proposed BAT addresses the constraints of AR and MLM effectively.

Table 3. Ablation study of the proposed BAT over the Paris StreetView [32] validation set (100) with irregular masks [25] and mask ratios of 40%-60%.
Conclusion
This paper presents BAT-Fill, a novel image inpainting framework that achieves realistic and diverse inpainting by leveraging autoregressive transformers with their powerful long-range dependency modeling capacity. To improve the quality and diversity of inpainting, we propose a novel bidirectional and autoregressive transformer (BAT) that models bidirectional context and output dependency simultaneously. Extensive experiments show that BAT-Fill achieves superior image inpainting in terms of both quality and diversity. Moving forward, we will explore the feasibility of adapting our idea to other image recovery or generation tasks by replacing the non-predicted part of BAT with other conditions such as semantic labels, edges, and poses. | 2021-04-27T01:16:20.397Z | 2021-04-26T00:00:00.000 | {
"year": 2021,
"sha1": "61153b31cc4de84b4d7af73ba230764251292b85",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2104.12335",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "61153b31cc4de84b4d7af73ba230764251292b85",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
214612071 | pes2o/s2orc | v3-fos-license | Faster IVA: Update Rules for Independent Vector Analysis based on Negentropy and the Majorize-Minimize Principle
Algorithms for Blind Source Separation (BSS) of acoustic signals require efficient and fast converging optimization strategies to adapt to nonstationary signal statistics and time-varying acoustic scenarios. In this paper, we derive fast converging update rules from a negentropy perspective, which are based on the Majorize-Minimize (MM) principle and eigenvalue decomposition. The presented update rules are shown to outperform competing state-of-the-art methods in terms of convergence speed at a comparable runtime due to the restriction to unitary demixing matrices. This is demonstrated by experiments with recorded real-world data.
I. INTRODUCTION
Blind Source Separation (BSS) aims at separating sources from an observed mixture by using only very weak assumptions about the underlying scenario. Hence, such methods are applicable in a variety of situations [1]- [4]. One important aspect in the design of BSS algorithms is the development of fast converging and at the same time computationally simple optimization strategies. For Independent Component Analysis (ICA) the FastICA update rules based on a fixed-point iteration scheme represent the gold standard in this research field [5]. This update scheme is derived by maximizing the so-called negentropy, i.e., by maximizing the nongaussianity of each separated source. Several variants of these updates including the extension to complex-valued data have been proposed [6].
In this contribution, we consider mixtures of acoustic sources [3], [4]. The most important difference of BSS methods for acoustic mixtures relative to instantaneous problems [1] is the mixture model: observed acoustic signals undergo propagation delay and multipath propagation. Hence, a convolutive mixture model is needed. A well-established concept is to transform the problem into the Short-Time Fourier Transform (STFT) domain and solve instantaneous BSS problems in each frequency bin [7]. However, this causes the well-known inner permutation problem, which has to be resolved by additional heuristic measures to obtain decent results [8]. As an alternative which aims at avoiding the inner permutation problem, Independent Vector Analysis (IVA) has been proposed [9]. A fast fixed-point algorithm called FastIVA has been developed following the ideas of FastICA [5] for the optimization of IVA [10]. Fast and stable update rules have been developed based on the Majorize-Minimize (MM) principle and the iterative projection technique [11], and methods for accelerating their convergence have been investigated [12]. Even faster update rules for the specific case of two sources and two microphones, based on a Generalized Eigenvalue Decomposition (GEVD), have been presented in [13].
For source extraction [14], [15], i.e., the separation of a desired source from a set of multiple interfering sources, update rules based on an Eigenvalue Decomposition (EVD) of a weighted microphone covariance matrix have been proposed [16] and spatial prior knowledge about the source of interest has been introduced in these update rules in [17]. Recently, priors on the source signal spectra for IVA were proposed based on deep neural networks [18], [19].
In this contribution, we propose a new update scheme for IVA described in terms of the negentropy of the demixed signals and based on the MM principle. The optimization of the upper bound of the MM algorithm is posed as an eigenvalue problem, which allows for fast convergence of the algorithm. In comparison to [13], our EVD-based update scheme allows for the separation of an arbitrary number of sources instead of only two. In [16], a structurally similar update scheme has been derived for the extraction of a single source from a different perspective. Here, we derive update rules which are also capable of separating an arbitrary number of sources. Update rules for extracting a single source are included in the proposed method as a special case. We note that FastIVA [10] uses the same cost function but uses a fixed-point algorithm for its optimization. The superiority of our proposed method over FastIVA and AuxIVA in terms of convergence speed and separation performance after convergence is demonstrated by experiments using real-world data created from measured Room Impulse Responses (RIRs).
II. COST FUNCTION
We consider a determined scenario in which K source signals are captured by K microphones, with microphone signals described in the STFT domain as

$$\mathbf{x}_{f,n} = \mathbf{A}_f \, \mathbf{s}_{f,n} \in \mathbb{C}^K,$$

where f ∈ {1, ..., F} indexes the frequency and n ∈ {1, ..., N} the time frame, $\mathbf{A}_f$ denotes the mixing matrix, and $\mathbf{s}_{f,n}$ the source signal vector. The aim of the developed algorithm is to estimate the demixed signals $\mathbf{y}_{f,n} := [y_{1,f,n}, \dots, y_{K,f,n}]^T$ from the microphone signals $\mathbf{x}_{f,n}$. For notational convenience, we introduce the broadband vector of the demixed signal of channel k and time frame n,

$$\mathbf{y}_{k,n} := [y_{k,1,n}, \dots, y_{k,F,n}]^T \in \mathbb{C}^F,$$

which is modeled to follow a multivariate supergaussian Probability Density Function (PDF) $p(\mathbf{y}_k)$, where all frequency bins are modeled to be uncorrelated but statistically dependent. Examples of such PDFs, which are typically used for IVA, include the multivariate Laplacian PDF or the generalized Gaussian PDF [20]. In the following, signal vectors without frame index n denote Random Vectors (RVs) and signal vectors with frame index their realizations. As the PDF of a mixture of multiple independent non-Gaussian source signals tends toward a Gaussian, maximizing the negentropy [1] of the RV of the demixed signals $\mathbf{y} := [\mathbf{y}_1^T, \dots, \mathbf{y}_K^T]^T$ is an intuitive and widely used BSS cost function. The negentropy, i.e., the Kullback-Leibler divergence between the PDFs of the RVs y and z, where the latter is normally distributed with the same mean vector and covariance matrix as y, is defined by

$$N(\mathbf{y}) := D_{\mathrm{KL}}\big(p_{\mathbf{y}} \,\|\, p_{\mathbf{z}}\big) = H(\mathbf{z}) - H(\mathbf{y}).$$

Note that the differential entropy of the Gaussian RV, H(z), represents a constant and is irrelevant for the optimization. To ensure non-trivial solutions if multiple sources are to be separated, and to fix the scaling of the demixed signals, a common constraint is to impose whiteness of the demixed signals, $E\{\mathbf{y}_{f,n}\mathbf{y}_{f,n}^H\} = \mathbf{I}_K$. Under this restriction, we obtain (cf. [21])

$$N(\mathbf{y}) \geq \sum_{k=1}^{K} N(\mathbf{y}_k) = \sum_{k=1}^{K} \big( H(\mathbf{z}_k) - H(\mathbf{y}_k) \big), \qquad (5)$$

where $\mathbf{z}_k$ is defined analogously to z. In the following, we will consider the maximization of the sum of the channel-wise negentropies $N(\mathbf{y}_k)$ as a surrogate for the maximization of $N(\mathbf{y})$. The requirement $E\{\mathbf{y}_{f,n}\mathbf{y}_{f,n}^H\} = \mathbf{I}_K$ can be accomplished by whitening the observed signals and estimating the demixed signals with a unitary demixing matrix (cf. [1]),

$$y_{k,f,n} = \mathbf{w}_{k,f}^H \, \mathbf{x}_{f,n}.$$

Here, $\mathbf{w}_{k,f}$ denotes the demixing filter which extracts the kth source signal sample $y_{k,f,n}$ at frequency f and time frame n. By using the definition of the differential entropy H(·) and the source model G, we obtain the following optimization problem by assuming i.i.d. signal frames (cf. [1], [10]):

$$\underset{\mathcal{W}}{\text{minimize}} \;\; J(\mathcal{W}) := \sum_{k=1}^{K} \hat{E}\{ G(\mathbf{y}_{k,n}) \} \qquad (10)$$

$$\text{subject to} \;\; \mathbf{W}_f \mathbf{W}_f^H = \mathbf{I}_K \quad \forall f. \qquad (11)$$

Here, (10) reflects the maximization of the channel-wise negentropies (5), and (11) realizes the unitarity constraint on the demixing matrices $\mathbf{W}_f$. In (10), we introduced the approximation of the expectation operator by arithmetic averaging over all available time frames, $\hat{E}\{\cdot\} := \frac{1}{N}\sum_{n=1}^{N}(\cdot)$. The optimization problem of (10) and (11) is closely related to the IVA cost function [9]

$$J_{\text{IVA}}(\mathcal{W}) := \sum_{k=1}^{K} \hat{E}\{ G(\mathbf{y}_{k,n}) \} - 2\sum_{f=1}^{F} \log \left| \det \mathbf{W}_f \right|, \qquad (12)$$

where $\mathcal{W}$ denotes the set of demixing vectors $\mathbf{w}_{k,f}$ of all frequency bins f and channels k. The first term of the IVA cost function (12) corresponds to (10). The second term of (12) is a regularizer on the demixing matrices $\mathbf{W}_f$, ensuring linearly independent demixing filter vectors $\mathbf{w}_{k,f}$. For unitary $\mathbf{W}_f$ this term is constant and, hence, irrelevant for the optimization. In the optimization problem of (10) and (11), the role of the regularizer is taken by the (stronger) constraint of unitarity of $\mathbf{W}_f$. However, the assumption of unitary demixing matrices is a significant restriction w.r.t. the IVA cost function, as will become obvious in the experimental evaluations.
In the following, l ∈ {1, ..., L} denotes the iteration index and $\mathcal{W}^{(l)}$ is the set of the lth iterates of all demixing vectors. The main idea of the MM principle [22] is to define an upper bound U which fulfills the properties of majorization and tangency, i.e., equality iff $\mathcal{W} = \mathcal{W}^{(l)}$, w.r.t. the cost function J. The upper bound U should be designed such that its optimization is easier than the iterative optimization of the cost function itself, or, ideally, solvable in closed form. As minimization of the upper bound enforces monotonically decreasing values of U, the following 'downhill property' of MM algorithms is obtained:

$$J(\mathcal{W}^{(l+1)}) \leq U(\mathcal{W}^{(l+1)}, \mathcal{W}^{(l)}) \leq U(\mathcal{W}^{(l)}, \mathcal{W}^{(l)}) = J(\mathcal{W}^{(l)}).$$

For the construction of the upper bound, we use the inequality [11] for supergaussian source models $\tilde{G}(r_{k,n}) = G(\mathbf{y}_{k,n})$, dependent on the norm of the kth demixed signal, $r_{k,n} := \|\mathbf{y}_{k,n}\|_2$:

$$\tilde{G}(r_{k,n}) \leq \frac{\tilde{G}'(r_{k,n}^{(l)})}{2\, r_{k,n}^{(l)}} \, r_{k,n}^2 + \text{const}, \qquad (15)$$

with equality iff $r_{k,n} = r_{k,n}^{(l)}$. Here, we introduced the weighted covariance matrix of microphone observations

$$\mathbf{V}_{k,f} := \hat{E}\left\{ \frac{\tilde{G}'(r_{k,n}^{(l)})}{2\, r_{k,n}^{(l)}} \, \mathbf{x}_{f,n} \mathbf{x}_{f,n}^H \right\}. \qquad (16)$$

Combining (10) and (15) and neglecting constant terms yields a surrogate for the optimization problem (10), (11), defined by the novel cost function (17) and the unitarity constraint (18):

$$\underset{\mathcal{W}}{\text{minimize}} \;\; U(\mathcal{W}, \mathcal{W}^{(l)}) \stackrel{c}{=} \sum_{k=1}^{K} \sum_{f=1}^{F} \mathbf{w}_{k,f}^H \mathbf{V}_{k,f} \mathbf{w}_{k,f} \qquad (17)$$

$$\text{subject to} \;\; \mathbf{W}_f \mathbf{W}_f^H = \mathbf{I}_K \quad \forall f, \qquad (18)$$

with equality of (17) to (10) iff $\mathcal{W} = \mathcal{W}^{(l)}$. Equality up to a constant is denoted by $\stackrel{c}{=}$. By inspection of (17), we see that the optimization w.r.t. the demixing matrices $\mathbf{W}_f$ is now expressed by the optimization of demixing filter vectors $\mathbf{w}_{k,f}$ separately for different frequency bins and channels. However, the channel-wise demixing filters $\mathbf{w}_{k,f}$ are coupled within one frequency bin due to the constraint (18). To simplify the problem, we divide the optimization of (17) and (18) into two steps: a) relaxing the constraint (18), which allows for solving (17) for each demixing filter $\mathbf{w}_{k,f}$ without being influenced by the other demixing filters; b) imposing (18) by projecting the results from a) onto the set of unitary matrices, the so-called complex Stiefel manifold. For Step a), we replace the unitarity constraint (18) by a unit-norm constraint for the demixing filters and obtain an optimization problem which depends only on a single output channel k:

$$\underset{\mathbf{w}_{k,f}}{\text{minimize}} \;\; \mathbf{w}_{k,f}^H \mathbf{V}_{k,f} \mathbf{w}_{k,f} \qquad (19)$$

$$\text{subject to} \;\; \mathbf{w}_{k,f}^H \mathbf{w}_{k,f} = 1. \qquad (20)$$

Optimization by using the Lagrangian multiplier $\lambda_{k,f}$ yields the following eigenvalue problem:

$$\mathbf{V}_{k,f} \, \mathbf{w}_{k,f} = \lambda_{k,f} \, \mathbf{w}_{k,f}, \qquad (21)$$

which shows that the eigenvalues of $\mathbf{V}_{k,f}$ are the critical points of the optimization problem (19), (20). By multiplication of (21) with $\mathbf{w}_{k,f}^H$ from the left, we obtain

$$\mathbf{w}_{k,f}^H \mathbf{V}_{k,f} \, \mathbf{w}_{k,f} = \lambda_{k,f}. \qquad (22)$$

Hence, the optimal $\mathbf{w}_{k,f}$ is the eigenvector of $\mathbf{V}_{k,f}$ corresponding to the smallest eigenvalue $\lambda_{k,f}$ (as $\mathbf{V}_{k,f}$ is Hermitian, its eigenvalues are real-valued and can be ordered). If the smallest eigenvalue $\lambda_{k,f}$ has algebraic multiplicity one, the choice of $\mathbf{w}_{k,f}$ is unique up to an arbitrary phase term, i.e., all elements of the set

$$\left\{ e^{j\varphi} \, \tilde{\mathbf{w}}_{k,f} \;:\; \varphi \in [0, 2\pi) \right\}, \qquad (23)$$

where φ denotes an arbitrary phase and $\tilde{\mathbf{w}}_{k,f}$ is a solution of (19) and (20), represent equivalent solutions. Under the natural assumption of distinct temporal variance patterns of the source signals, the eigenvalues of $\mathbf{V}_{k,f}$ can be assumed to be distinct and, hence, the solution for $\mathbf{w}_{k,f}$ is unique up to an arbitrary phase term. For Step b), i.e., to impose the unitarity constraint (18) on the demixing matrices $\tilde{\mathbf{W}}_f$ obtained by collecting the demixing filter vectors from Step a), the closest unitary matrix in terms of the Frobenius distance is calculated:

$$\mathbf{W}_f^{(l+1)} = \underset{\mathbf{U} \in \mathcal{O}^{K \times K}}{\arg\min} \left\| \mathbf{U} - \tilde{\mathbf{W}}_f \right\|_F, \qquad (24)$$

where $\mathcal{O}^{K \times K}$ denotes the set of K × K unitary matrices.
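Numerically, the solution of Step a) reduces to a single Hermitian eigendecomposition. The sketch below, in NumPy, is a minimal illustration rather than the authors' code; fixing the phase of Eq. (23) by making the first entry real is an assumed, arbitrary convention.

```python
import numpy as np

def solve_step_a(V: np.ndarray) -> np.ndarray:
    """Smallest-eigenvalue eigenvector of the Hermitian matrix V (Eqs. 21-22)."""
    eigvals, eigvecs = np.linalg.eigh(V)        # eigh returns ascending eigenvalues
    w = eigvecs[:, 0]                           # unit-norm minimizer of w^H V w
    # Resolve the arbitrary phase of Eq. (23), e.g. make the first entry real:
    w = w * np.exp(-1j * np.angle(w[0]))
    return w
```

Since `np.linalg.eigh` already normalizes its eigenvectors, the unit-norm constraint (20) is satisfied by construction.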
The solution of (24) is given by [24]

$$\mathbf{W}_f^{(l+1)} = \left( \tilde{\mathbf{W}}_f \tilde{\mathbf{W}}_f^H \right)^{-1/2} \tilde{\mathbf{W}}_f. \qquad (25)$$

The MM algorithm now alternates between two steps: construction of the upper bound by parameterization of the proposed surrogate optimization problem (17), (18) with the weighted covariance matrix $\mathbf{V}_{k,f}$ (see (16)), and minimization of it by calculating the demixing filters $\mathbf{w}_{k,f}$ by eigenvalue decomposition and orthogonalization of the demixing matrices (25). This is summarized in Alg. 1.

Algorithm 1: FasterIVA
    y^(0)_{f,n} = x_{f,n}  ∀ f, n
    for l = 1 to L do
        r_{k,n} = ||y^(l-1)_{k,n}||_2  ∀ k, n
        for f = 1 to F do
            for k = 1 to K do
                Estimate V_{k,f} by (16)
                Compute the eigenvector w^(l)_{k,f} corresponding to the smallest eigenvalue λ_{k,f}
            Orthogonalize W^(l)_f by (25)
        Update the demixed signals y^(l)_{f,n} = W^(l)_f x_{f,n}

IV. EXPERIMENTS

The signals are processed in the STFT domain with 50% overlap at a sampling frequency of 16 kHz. The performance of the investigated algorithms is measured in terms of Signal-to-Distortion Ratio (SDR), Signal-to-Interference Ratio (SIR) and Signal-to-Artefact Ratio (SAR) [25]. These performance measures are not directly connected to the cost function, but are closely related to the separation performance as experienced by a human listener. We used for all algorithms a Laplacian source model yielding $\tilde{G}(r_{k,n}) = r_{k,n}$ [9], [11]. The scaling ambiguity of the frequency bin-wise estimates is resolved by the backprojection technique [11].
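Putting the pieces together, one iteration of Alg. 1 could look like the following NumPy sketch for the Laplacian model G̃(r) = r, so that the weight in (16) becomes 1/(2r). The array layout, the regularization constant, and the SVD route to the polar factor in (25) are assumptions made for the illustration, not the authors' implementation; the identity (W̃ W̃^H)^(-1/2) W̃ = U V^H with W̃ = U S V^H is standard.

```python
import numpy as np

def fasteriva_iteration(X, Y, eps: float = 1e-8):
    """One MM update of Alg. 1.
    X, Y: (F, N, K) arrays of whitened microphone and current demixed signals.
    Returns the updated demixing matrices W (F, K, K) and demixed signals."""
    F_, N, K = X.shape
    r = np.linalg.norm(Y, axis=0) + eps                # (N, K): broadband norms r_{k,n}
    W = np.empty((F_, K, K), dtype=complex)
    for f in range(F_):
        W_tilde = np.empty((K, K), dtype=complex)
        for k in range(K):
            phi = 1.0 / (2.0 * r[:, k])                # Laplacian weight G'(r)/(2r) = 1/(2r)
            V = (X[f].T * phi) @ X[f].conj() / N       # Eq. (16): weighted covariance
            eigvecs = np.linalg.eigh(V)[1]             # ascending eigenvalues
            W_tilde[k] = eigvecs[:, 0].conj()          # row = w^H for smallest eigenvalue
        U, _, Vh = np.linalg.svd(W_tilde)              # polar factor realizes Eq. (25)
        W[f] = U @ Vh
    Y_new = np.einsum('fkj,fnj->fnk', W, X)            # y_{k,f,n} = w_{k,f}^H x_{f,n}
    return W, Y_new
```

Repeating this update for L iterations, with the backprojection applied afterwards to fix the scaling, reproduces the structure of Alg. 1.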
To benchmark the results, we compared the performance with two state-of-the-art algorithms: AuxIVA [11], which can be considered the best performing algorithm in the field, and FastIVA [10], which is based on the same cost function as the proposed method (10) but uses a fixed-point algorithm for optimization. Results for the comparison of the investigated algorithms with IVA optimized by a natural gradient update scheme [9] are not shown here, as its convergence turned out to be exceedingly slow and the final values were not better than those of the competing methods. Note that a variation of experimental parameters such as STFT length, noise type, SNR, etc. affected the discussed algorithms similarly. The restriction to unitary demixing matrices is well known to yield fast initial convergence at the cost of inferior steady-state performance relative to methods that require only invertible demixing matrices [26]. Hence, a natural idea is to use the proposed method 'FasterIVA' until reaching the steady state and then relax the unitarity constraint by switching to the AuxIVA update rules (found to be superior to FastIVA in preliminary experiments), which do not constrain the demixing matrices to be unitary. The switching of this hybrid approach from FasterIVA to AuxIVA is triggered once FasterIVA has reached a steady state, characterized by only small changes of W_f between consecutive iterations relative to a threshold γ, chosen here as γ = 0.05. The experimental results, including all three different rooms, both source-array distances, and 20 repeated draws of source signals, resulting in 120 different experimental conditions for each number of sources, are shown in Fig. 1 over the number of iterations. The slowest convergence among the discussed methods is obtained by FastIVA. Often, this algorithm did not even reach the steady state within the given number of iterations. However, even after convergence its final values were still not better than those of the competing methods in the vast majority of cases. The MM-based AuxIVA algorithm outperformed FastIVA in terms of convergence speed but also w.r.t. its final values. The proposed method FasterIVA showed much faster initial convergence than both FastIVA and AuxIVA and usually reached its steady state already after about five iterations. On the other hand, its final values were slightly worse than those of AuxIVA for the two-source scenarios, while they were the same for the three-source case. The 'Hybrid' approach, which switches to the AuxIVA update rules after convergence of FasterIVA, obtained the fastest convergence and the best final values in all scenarios at a comparable runtime. The values for SAR improvement have been omitted here due to a lack of space, but they showed comparable results for the discussed methods with a slight advantage for the hybrid approach. The runtime per iteration of the investigated methods, which is comparable in most cases, is given in Tab. I. Note that, e.g., in the 3-source experiment, FasterIVA needs only 4 iterations to reach the ∆SIR value of FastIVA after 30 iterations, so that the complexity gain of FasterIVA for comparable performance amounts to a factor of approximately 5.
V. CONCLUSION
In this contribution, we presented a fast converging update scheme based on the MM principle and an EVD of weighted microphone sample covariance matrices. The proposed update scheme outperformed state-of-the-art optimization methods in terms of convergence speed as well as final steady-state values. As a promising next step, these update rules could be investigated w.r.t. their efficacy for the optimization of Independent Low Rank Matrix Analysis (ILRMA)-type algorithms. | 2020-03-24T01:00:58.193Z | 2020-03-20T00:00:00.000 | {
"year": 2020,
"sha1": "3c91b2a1a81bdecea891f45ffa37152d781d9d36",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2003.09531",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "3c91b2a1a81bdecea891f45ffa37152d781d9d36",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
} |
257706937 | pes2o/s2orc | v3-fos-license | The Impact of Auxin and Cytokinin on the Growth and Development of Selected Crops
Phytohormones are small molecules with very diverse structures that regulate plant growth and development. Despite the fact that they are synthesized by plants in small quantities, they are highly physiologically active. According to their action, phytohormones can be divided into two categories, as either activators or inhibitors of plant growth and development, with auxins and cytokinins belonging to the former group. Auxins are synthesized by plants in the apical meristems of shoots, but also in young leaves, seeds, and fruits. They stimulate the elongation growth of shoots and initiate the production of adventitious and lateral roots. Cytokinins, in turn, are formed in root tips and in unripe fruits and seeds. These hormones are responsible for stimulating the growth of lateral shoots; they also stimulate cytokinesis and, consequently, cell division. The aim of this review paper is to present the progress of research on the effect of selected auxins and cytokinins on crops, considering the prospect of using them in plant growing methods.
Development of the Research on Plant Growth and Development Regulators
In recent years, great attention has been paid to growth substances that can contribute to increasing the yield potential of crops and their biological value, in particular under unfavorable climatic conditions [1]. Regulators of plant growth and development are most often organic substances that modify plant physiology even in small amounts. This modification is based on supporting or inhibiting the chemical reactions regulating such processes as germination, root formation, fruit setting, or plant senescence (Figure 1). Today, natural plant hormones are rarely applied to crops, since their synthetic counterparts are mostly used, e.g., 2,4-dichlorophenoxyacetic acid (2,4-D), benzyladenine (BA), kinetin, and tetrahydropyranyl-benzyladenine (PBA) [2-5]. Synthetic and natural hormones differ in the method of obtaining the substance. Natural hormones are obtained from the part of the plant where they are produced. Synthetic hormones, on the other hand, are usually salts obtained as a result of chemical reactions. Generally, Flasiński and Hąc-Wydro [6] showed that the natural plant hormone IAA interacts with the investigated lipid monolayers more strongly than its synthetic derivative NAA. The reason for these differences is connected with the steric properties of both auxins: the naphthalene ring of an NAA molecule occupies a larger space than the indole system of IAA, making it less readily absorbed.
In Poland, there are about two hundred products/preparations that perform regulatory and stimulating functions in relation to plants or soil. In the countries of the European Community, there are over a thousand such products. In Poland, four categories are officially distinguished in the relevant legal acts: plant growth regulators, plant growth stimulants, agents improving soil properties, and organic and organic-mineral fertilizers. The above preparations are commercialized on the basis of Article 5 of the Act of 10 July 2007 on Fertilizers and Fertilization [7]. This article states that "Fertilizers and plant conditioners authorized for marketing in another Member State of the European Union or the Republic of Turkey, which have been produced in another Member State of the European Union or the Republic of Turkey, or in a country that is a member of the European Free Trade Association (EFTA) and a party to the Agreement on the European Economic Area, may also be placed on the market, if the national regulations under which they are manufactured and placed on the market ensure the protection of human and animal health and the protection of the environment and suitability for use". "Stimulator", as an official term, is not mentioned in Regulation (EC) No. 1107/2009 [8]. This document defines the term "growth regulators": these are mainly substances known as plant hormones (IAA, NAA, and gibberellins), ethylene precursors (ethephon, trinexapac-ethyl), the well-known and popular retardant CCC, i.e., chlormequat chloride, germination inhibitors (chlorpropham and maleic hydrazide), and several other less popular ones. This group also includes a product that is a mixture of nitrophenols, stimulating the processes of plant resistance to (abiotic) stresses and inhibiting aging and cell breakdown processes. The company that commercializes this product in Poland, and globally, uses the term "biostimulator" when referring to it. Plant growth regulators are products that are registered in a similar way to pesticides. The procedure is regulated very precisely by Regulation (EC) No. 1107/2009 of the European Parliament and of the Council of 21 October 2009 concerning the placing of plant protection products on the market [8].
In a plant, there are about five key growth hormones that interact with each other: auxins, cytokinins, gibberellins, ethylene, and abscisic acid. They are effective if the relationship between them is balanced; any imbalance affects the action of one of the hormones, triggering or deactivating another. Auxins are the most important, because they are involved in all plant physiological processes: they form root buds, participate in cell division, and take part in tropisms. Cytokinins affect cell division, which in turn affects plant growth, but they also stimulate lateral buds and inhibit the aging of plant organs and tissues. In turn, gibberellins induce germination, interrupt plant dormancy, and stimulate cell division [5,[9][10][11][12][13]. Ethylene and abscisic acid have the opposite effect to the three above-mentioned groups: they inhibit the growth and development of plants and accelerate their senescence. When the plant is under stress, the activity of abscisic acid increases, which is why it is called a stress hormone [3,4,14]. Hormones commonly found in plants primarily include auxins and cytokinins, whose role seems to be crucial in plant growth and development [4,[15][16][17][18][19][20][21][22].
Research on growth regulators has practical applications, for example, in slowing down the growth of lawns in urban areas; they can be used to lower grass growth after mowing [23]. According to American studies [24], such retardants can also affect other features of lawn grass, for example, increasing its tolerance to shading.
In potato production, growth regulators may be of great importance, increasing dry matter and starch yields and the share of tubers of the required size. Other growth regulators may contribute to increasing plant resistance to adverse conditions, such as drought, low temperatures, or disease infestation [25,26]. The effect of such synthetic growth regulators as Mival (1-(chloromethyl)silatrane) or Poteitin (a mixture of 2,6-dimethylpyridine-N-oxide and succinic acid) on the growth and yield of 37 potato cultivars was studied by Sawicka [25][26][27] and Mikos-Bielak [28], who found that they stimulated tuber setting processes, increasing the share of marketable tubers in the total yield.
Growth regulators can also be used in the nursery production of ornamental plants [29], where the aim is to produce well-developed seedlings in the shortest possible time. Plants growing for a long time in small pots are prone to distortion of the root system. When the roots of trees and shrubs grown in containers become deformed, the growth of their aboveground parts may be inhibited because of insufficient amounts of water and minerals. Large masses of tangled roots limit the retention of water and mineral salts in the rhizosphere, which in turn may lead to a reduction in plant growth, as well as to a decrease in plant resistance to drought, heat, diseases, and pests [30]. According to Balušek et al. [3] and Abas et al. [29], auxins positively affect the regeneration of the root system of transplanted plants. In nurseries, the most commonly used auxins are indole-3-butyric acid (IBA) and 1-naphthylacetic acid (NAA).
The Importance of Cytokinins in Plant Growth and Development Processes
The most important role of cytokinins is that they stimulate cell division, but their functions are much more complex and depend on interactions with other plant hormones [31][32][33]. In regulating root differentiation, cytokinins and auxins can act antagonistically: for example, auxin stimulates the development of lateral roots, while cytokinins inhibit it [32,34]. Cytokinins play a role in the transport and accumulation of photosynthesis products, affect the activity of various enzymes, and have a considerable impact on physiological and biochemical processes [35]. Cytokinin concentration in the plant depends on many factors, e.g., on the current stage of development of cells, tissues, and the whole plant. Environmental factors, such as the amount and intensity of light, the occurrence of stress, or even access to nutrients, may also have a big impact [36,37]. Cytokinins are mainly synthesized in root apical meristems, but they can also be produced in fruits and young leaves [36,38]. Their levels are controlled by other phytohormones, which significantly affect not only their biosynthesis but also their degradation. Auxins play an important role in lowering endogenous cytokinin levels and are rapid, potent suppressors of cytokinin biosynthesis [39,40]. When cytokinins are synthesized, they are transported to other tissues, entering cells through diffusion and through active transport involving the transport proteins PUP (purine permease) and ENT (equilibrative nucleoside transporters) [41]. Cytokinins are key compounds regulating the development and function of chloroplasts [38]. The highest cytokinin concentrations are recorded at the initial stage of leaf development, which is attributed to the stimulation of cytokinesis, membrane formation, and plastid division, and to the intensive protein synthesis occurring at that time [39,42].
The literature also deals with the effect of cytokinins on leaf anatomical structure. Microscopic observations conducted on wheat and sugar beet leaves showed that cytokinins led to mesophyll cell enlargement and intensive lignification of leaf-strengthening tissues; the formation of larger numbers of leaf vascular bundles was also observed. Cytokinins also intensify photosynthesis, regulating it at many levels [43]. They stimulate the opening of stomata, mainly in mature and aging leaves, affecting the regulation of the CO2 diffusion necessary for carboxylation. In this respect, cytokinin is antagonistic to abscisic acid, which causes stomatal closure [31]. Cytokinins affect photosynthesis by regulating chloroplast biogenesis and function, although the main factor regulating the work of chloroplasts is light. Phytohormones from the cytokinin group stimulate the division of chloroplasts and the formation of their ultrastructure [41,44,45]. Studies have also shown the effect of cytokinins on increasing the level of photosynthetic proteins. An example is the chlorophyll-binding proteins that are part of chlorophyll-protein complexes, whose task is to collect light energy. Chloroplast proteins are divided into three groups: the first group consists of proteins whose level increases slightly after the action of cytokinins; the second comprises proteins whose accumulation is dependent on light, with cytokinins affecting the speed of the process; and the third comprises proteins whose levels increase in response to cytokinins regardless of the action of light [20,46,47]. It is worth noting that the positive effect of cytokinins on photosynthesis is also attributed to their delaying of leaf senescence. This phenomenon has been observed in many species treated with exogenously applied phytohormones and in genetically modified plants with increased endogenous cytokinin concentrations [48][49][50]. With leaf age, the natural concentration of cytokinins decreases, which is associated with their degradation. Cytokinins block or slow down the plant aging process by stopping chlorophyll loss and, consequently, by maintaining the green color of the leaves; this occurs as a result of an inhibitory effect of these phytohormones on the degradation of the green pigment. It has also been shown that cytokinins have a great influence on the vital functions of the plant and are perhaps indispensable to it [51,52].
The Importance of Auxins in Plant Growth and Development Processes
Auxins, the second group of phytohormones, are organic compounds that can lengthen stem cells in a manner similar to cytokinins. The initial research on auxins dates back to the nineteenth century, when it turned out that the coleoptile of Elymus canadensis was sensitive to light, bending towards its source [53][54][55]. Researchers [3,56] concluded that there must be some substance, produced by the tip of the plant, that penetrated into an agar block and was then transported down the coleoptile, causing this organ to bend towards light. The above conclusions were confirmed by other studies in which oat coleoptile tips were placed on agar blocks for several days. After this time, it turned out that both the coleoptile tips and the agar exhibited growth-stimulating properties. Based on these findings, the first quantitative bioassay for the detection of auxins was developed [54,57]. The name auxin comes from the Greek auxein, 'to grow', which reflects the role of this group of hormones. The substance behind this name is indole-3-acetic acid (IAA) [9,58]. Many years of research on phytohormones have made it possible to identify a number of substances constituting the group of auxins. It is now known that in addition to IAA, natural auxins include indole-3-butyric acid (IBA) and 4-chloroindole-3-acetic acid (4-Cl-IAA). These substances contain an indole ring in their molecule [59,60].
Compared to IAA (indole-3-acetic acid), IBA (indole-3-butyric acid) is much more effective in inducing lateral and adventitious roots [61]. In addition, it stimulates the elongation growth of the stem much faster [62]. The auxin 4-Cl-IAA, containing an additional chlorine atom, occurs only in leguminous plants [2]. Scientists have also discovered two non-indole compounds with auxin properties, i.e., phenylacetic acid and p-hydroxyphenylacetic acid [59]. Auxins are found in two forms, free and bound. The free form is a small fraction of the total amount of auxins in plants and is biologically active [63]; however, most auxins found in plants are bound [62]. Auxin synthesis occurs in young, developing parts of plants, such as the tips of shoots, developing leaves, or seeds [64][65][66][67]. Auxins are also synthesized in roots, namely in the meristem of the main root and in developing lateral roots [64,[68][69][70]. Despite its relatively simple structure, auxin coordinates a whole range of processes occurring during plant life [71][72][73]. The hormone is transported through plant organs from the place of its synthesis to specific cells and tissues. For many years, the mechanism of its transport has been the subject of research by scientists from various fields (biochemists, physiologists, and molecular biologists), and the results indicate two transport pathways, which are physiologically and spatially distinct. The first pathway is rapid transport through the elements of the phloem, and the other is polar transport, from cell to cell, involving various tissues [59,74,75]. The accumulation of auxins in specific plant tissues depends on the polar transport of this hormone.
Auxins also support a large number of developmental processes in the plant. Such processes include, among others, the formation of lateral roots, leaves, and flowers; tropisms; the differentiation of vascular tissues; and the formation of the apical-basal axis during embryogenesis [29,30,53,54,76]. Despite research and great progress in understanding the mechanisms of their effect on plants, auxins still constitute a large field of study for researchers [5,77].
Effect of Auxins and Cytokinins on Selected Physiological Parameters of Plants
Growth and development of plants and, consequently, their yields are primarily dependent on the activity of basic physiological processes, such as photosynthesis and transpiration (Table 1) [78,79]. According to studies on soybean cultivars, their physiological activity, expressed as photosynthetic and transpiration intensity, was the highest during the flowering stage. However, during the stage of seed development, the intensity of both processes decreased significantly, almost 3-4 times. A radical decrease with age in the gas exchange parameters of two soybean cultivars was reported by Fu et al. [80], who also found that their photosynthetic efficiency was the highest between the 10th and 17th day of flowering, after which leaf senescence followed. During the stage of seed development, 30-40 days after flowering began, leaves reduced their photosynthetic activity up to five times. Contrary to that, Subrahmanyam [81] noted the highest photosynthetic and transpiration efficiency of soybean plants during the stage of seed formation. He also pointed out that the intensity of these processes could vary greatly and, consequently, the demand for photosynthetic products also varied, depending on plant variety, genetic properties, and development stage, as well as on the external environment and habitat conditions. Similar changes in photosynthetic activity were observed by Luquez, Starck, and Wróbel [82][83][84].
A rapid decrease in the intensity of photosynthetic and transpiration processes during the stage of seed development was also observed by the present authors in 2008. However, it was caused not only by natural senescence processes but also by long-lasting high air temperature and low precipitation. In such conditions, plants close their stomata to cope with water loss, at the same time limiting the cellular access of CO2 necessary for the photosynthetic process [85][86][87]. Indole-3-butyric acid (IBA) and 6-benzylaminopurine (BAP) applied separately also significantly affected this process, either intensifying or decreasing it, depending on the year of research and the development stage (Table 1). This was also confirmed by Ashraf et al. [88], who treated barley plants with indole-3-acetic acid (IAA) at a concentration of 30 mg per L and found a significant increase in CO2 photosynthetic assimilation. Similar results were obtained by Aldesuquy [89], who treated young barley seedlings with IAA at a concentration of 25 mg kg−1 by soaking the grain in a solution of this hormone. He also used a higher dose of synthetic auxin (50 mg kg−1), after which he found a substantial reduction in plant gas exchange processes. The same reaction of plants was observed by Pospíšilová [90], who reported that gas exchange parameters were modified depending on the concentration of a given growth regulator. Some authors [91][92][93][94] report an increase in CO2 photosynthetic assimilation in soybean and cotton leaves after the use of synthetic auxins and cytokinins, probably caused by an increase in the activity of the photosynthetic enzyme. A significant increase in the activity of this enzyme was found in plants treated with synthetic auxin and cytokinin (Figure 2). In his research, Subrahmanyam [81] found a significant positive correlation between the intensity of photosynthetic and transpiration processes, but also between the intensity of those processes and stomatal conductance. However, the correlation between photosynthetic and transpiration intensity and CO2 concentration in intercellular spaces was either positive or negative, depending on the type of phytohormone used. A positive correlation between photosynthetic and transpiration intensity and stomatal conductance indicates efficient gas exchange between the leaf and the atmosphere. Chaves et al. [87] explain that an increase in CO2 concentration in intercellular spaces may cause a decrease in the access of substrates needed for photosynthesis.
Plant productivity depends, to a large extent, on the content of photosynthetic pigments, which is the most important factor affecting photosynthesis (Figure 2). Some studies report that the content of total chlorophyll and carotenoids increased in plants sprayed with indole-3-butyric acid. Aldesuquy [89] applied indole-3-acetic acid (IAA), belonging to the auxin class, at a concentration of 25 mg kg−1 to wheat and noted a significant increase in total chlorophyll content. Using a slightly lower concentration of IAA (10 mg L−1), Gadallah [95] reported a clear increase in the chlorophyll content of soybean leaves. In contrast, Pandey et al. [94] and Skalska [96] applied BAP to alfalfa at various concentrations, from 0.025 to 0.20%, and found a significant increase in chlorophyll a content, while spraying these plants with IAA did not affect it. Fu et al. [80] report that as soybean plants age, the content of chlorophyll and carotenoids decreases. Exogenously applied cytokinin may increase chlorophyll content in aging leaf tissues by slowing the breakdown of this pigment and by delaying the senescence process [15]. In their research on the effect of BAP on cabbage, Costa et al. [17] observed a slower degradation rate of total chlorophyll compared to control plants. At the same time, the authors determined the activity of enzymes mediating chlorophyll degradation, such as chlorophyllase and magnesium dechelatase; it turned out that in plants sprayed with BAP, the activity of these enzymes decreased significantly compared to control plants.
Effect of Auxins and Cytokinins on Crop Yield and Morphometry
According to Nowak and Wróbel [97], fertilizers and plant protection products can no longer increase plant yields significantly, so more attention is paid to the use of various growth substances. According to the above authors, the purpose of such products is to increase plant yield potential in adverse weather or in any other unfavorable conditions not suitable for a given plant. In particular, according to von Richthofen [98] and Ulmasov et al. [99], growth-promoting substances are of great importance in the cultivation of leguminous plants, given their unstable yield and high sensitivity to weather conditions. Such substances include exogenously applied phytohormonal growth regulators used on crops and vegetables [9,97,[100][101][102][103][104].
In their studies on the effect of synthetic auxin and cytokinin and of their mixture on soybean, Nowak and Wróbel [97,103] reported a significant yield increase. According to the authors, auxin was the most effective, followed by cytokinin and their mixture, with increases, compared to control plants, of 34, 32, and 29%, respectively.
Reinecke et al. [105] reported an increase in pea yield in response to a hormonal regulator containing indole-3-butyric acid. Furthermore, beneficial effects of synthetic auxins and cytokinins on the yield of some plants were presented by Czapla et al. [9], Barclay and McDavid [106], and Nowak et al. [107]. At the same time, in response to auxin application, Nowak et al. [107] reported a significant increase in field bean seed weight, on average by 10%. Kertikov and Vasileva [108] reported higher grain yield and better chemical composition in vetch. Treating soybean with auxin (IBA) and cytokinin and their mixtures, Czapla et al. [9] found that the auxin was the most effective in increasing the number of pods and the seed yield. However, some other researchers did not observe significant effects of synthetic growth hormones on crops.
According to some authors [78], the rate of plant growth and development and the yield are primarily determined by the intensity of basic physiological processes, such as photosynthesis and transpiration. According to Reinecke et al. [105], auxins, as exogenously applied growth hormones, can increase the physiological activity of plants and thus affect their productivity.
Kuang et al. [109] and Peterson et al. [110] suggest that plants respond positively to synthetic hormones because they affect physiological processes, especially an earlier increase in tissue vascularization, which manifests itself in the thickening of such morphological organs as stems, leaves, and inflorescences. According to Rylott and Smith [111], synthetic auxin and cytokinin increase plant yield and make generative organs competitive with vegetative ones. This was confirmed by Pandey et al. [94], who, using synthetic auxin on cotton plants, found a significant increase in the number and weight of flowers. A similar trend was observed by Qifu et al. [112] and Kuang et al. [109,113]. Using cytokinin, they reported better vascularization of plant tissues and an increase in the transport of photosynthesis products from vegetative to generative parts, which increased their concentration in generative organs and consequently resulted in higher yield and better seed filling. Moreover, according to the literature [106,114], exogenously used phytohormones stimulate the phloem transport of photosynthesis products, improving the nutrition of plant tissues, which improves plant condition and resistance to stress, increasing the yield and its quality.
The literature reports positive effects of synthetic hormones on the growth of leaves. According to Aldesuquy [89], an auxin (indole-3-acetic acid) increased the number and area of barley leaf blades. Similarly, Khan et al. [115], who applied auxin in the form of indole-3-butyric acid (IBA) and α-naphthylacetic acid (NAA) to lily, and Pal and Das [116], who applied them to cabbage, at concentrations of 100 mg·kg−1, also observed an increase relative to control in the number and area of leaves.
Jamil et al. [117] found that the synthetic auxin IAA was more effective than GA3 (gibberellin) in improving such morphometric characteristics as the number and size of flowers of Hippeastrum Herb. of the Amaryllidaceae family. A positive effect of exogenously applied auxin on the inflorescence biometrics of the long-flowered lily (Lilium longiflorum) was also observed by Karaguzel et al. [118], Pal and Das [116], and Prakash and Jha [119]. Clifford et al. [120] and Baylis and Clifford [121] argue that the levels of plant hormones, such as auxin and cytokinin, rise when the inflorescences are formed, which promotes the process. Contrary to that, their content often decreases during flowering, which, according to Reese et al. [114] and Nagel et al. [122], may be the reason for premature flower shedding.
Effect of Auxins and Cytokinins on Chemical Composition of Plant Biomass
The content of macroelements in plant dry matter depends, among other factors, on the species, the level of nitrogen fertilization, the intensity of use, and the harvest time [9]. An increase in the potassium content of plants treated with synthetic hormones was observed by Wierzbowska and Nowak (Figure 3) [123,124]. Using growth regulators on wheat plants, the authors found that kinetin and auxin significantly increased the potassium content in wheat grains, by 16.73% and 10.33%, respectively. Opposite results were presented by Czapla et al. [9], who reported a reduction in potassium content by an average of 9% in soybean after spraying plants with two synthetic auxins, i.e., IBA and NAA, separately and together. Additionally, when applying IBA, BAP, and IBA + BAP to lupine, they observed a decrease in potassium content, especially in seeds, in response to all treatments. In the experiment of Wierzbowska et al. [125], growth regulators applied in the form of gibberellin and auxin increased the calcium content in wheat grains, blades, chaff, and the oldest leaves by 28% compared to control. According to Wierzbowska and Bowszys [126], hormones also increase the accumulation of magnesium in spring wheat: they reported that gibberellin increased the magnesium content of stalks, chaff, and the oldest leaves, and auxin increased it in most of those organs.
Exogenously used auxin and cytokinin do not affect plant phosphorus content. This tendency was confirmed by the results obtained by Czapla et al. [9] and Nowak et al. [107], who found no change in its concentration in field bean, soybean, and lupine after the use of synthetic auxins and cytokinins. However, the literature [127,128] suggests that an increase in the content of certain minerals in the aboveground parts of plants in response to synthetic growth hormones occurs because of a better developed root system, in particular because of the elongation of capillary roots; as a consequence, this leads to more intensive nutrient uptake from the soil. Zhao [21] and Weijers et al. [129] report that auxins are the most effective here, sending signals that inform about the course of physiological processes in the acceptor organs of photosynthesis products and about the increasing demand for nutrients. In addition, cytokinins together with auxins stimulate cambium activity and the formation of vascular tissues, which facilitate the movement of various types of nutrients in plants [130].
Conclusions
Research on plant hormones as growth regulators proves that hormones can have practical applications in the cultivation of many plant species. Auxins and cytokinins can be used to stimulate the rhizosphere regeneration process. Exogenous use of auxins and cytokinins in appropriate concentrations increases the dry matter yield of plants and also improves its stability. It also reduces the occurrence of diseases. Adverse effects of auxin and cytokinin include a decrease in the content of vitamin C and an increase in the content of phenolic compounds. These hormones contribute to better tillering, growth of foliage, and improvement of the induction, mass, and intensity of flowering. Auxins and cytokinins in foliar applications affect the chemical composition of the dry matter of plants in different ways. Most often, they increase the content of potassium and calcium, but do not change the concentration of phosphorus in plants.
Figure 2. Influence of auxins and cytokinins on the process of CO2 assimilation in plants.
Figure 3. The influence of phytohormones on the chemical composition of biomass: increase ↑, decrease ↓, no change =.
Table 1. Influence of auxin and cytokinin on plant physiological processes in various crops.
CpG sites associated with NRP1, NRXN2 and miR-29b-2 are hypomethylated in monocytes during ageing
Background: Ageing affects many components of the immune system, including innate immune cells such as monocytes. They are important in the early response to pathogens and for their role in differentiating into macrophages and dendritic cells. Recent studies have revealed significant age-related changes in genomic DNA methylation in peripheral blood mononuclear cells; however, information on epigenetic changes in specific leukocyte subsets is still lacking. Here, we aimed to analyse DNA methylation in purified monocyte populations from young and elderly individuals.

Findings: We analysed the methylation changes in monocytes purified from young and elderly individuals using the HumanMethylation450 BeadChip array. Interestingly, we found that among 26 differentially methylated CpG sites, the majority were hypomethylated in elderly individuals. The most hypomethylated CpG sites were located in the neuropilin 1 (NRP1; cg24892069) and neurexin 2 (NRXN2; cg27209729) genes, and upstream of the miR-29b-2 gene (cg10501210). The age-related hypomethylation of these three sites was confirmed in a separate group of young and elderly individuals.

Conclusions: We identified significant age-related hypomethylation in purified human monocytes at CpG sites within the regions of the NRP1, NRXN2 and miR-29b-2 genes.
Innate and adaptive immune responses are affected by ageing. Elderly people have a decreased ability to maintain basic tissue homeostasis, impaired vaccination responses, and an increased risk of infectious diseases, particularly influenza [1][2][3][4]. A diverse range of age-associated changes has been reported in human innate immune cells [3,5], which are important during the early response to pathogens. Monocytes, which are circulating cells that originate from myeloid precursors, are the precursors of tissue macrophages and dendritic cells and constitute an essential part of the innate immune system. Although the number of monocytes does not change significantly during ageing, several functional age-related changes in monocytes, such as altered expression of cytokines, defective Toll-like receptor signalling, and a decreased capacity for phagocytosis, have been reported [6]. Monocytes are also involved in the initiation of atherosclerosis on arterial walls and have been linked to a chronic inflamed state (referred to as inflamm-ageing), which is associated with increased cardiovascular and metabolic diseases in elderly individuals [7]. Recent studies have revealed the important role of epigenetic regulation in the development and cell-specific functions of blood cells. Changes in DNA methylation patterns occur gradually throughout an individual's lifespan [8,9] and may result in the age-related phenotypes of a specific set of genes [8]. The majority of these studies have examined DNA methylation changes in a mixed population of peripheral blood mononuclear cells (PBMCs) without purifying specific subsets of cells. In this study, we aimed to analyse the epigenomic changes in DNA methylation in purified monocyte cell populations from young and elderly individuals.
To study age-related DNA methylation profiles, we isolated monocytes from the peripheral blood of eight young (age range 22-25 years, mean 23.75 years; 4 females and 4 males) and eight elderly healthy volunteers (age range 77-78 years, mean 77.13 years; 4 females and 4 males). A whole-genome methylation analysis was performed using the Infinium HumanMethylation450 BeadChip (Illumina Inc.). Altogether, we found 368 CpG sites that were significantly differentially methylated (p < 0.05), of which 26 CpG sites had an absolute β-value difference greater than or equal to 0.2 between the young and old individuals (Table 1). Most of the CpG sites, a total of 21 positions, were hypomethylated in the elderly individuals; only five positions were hypermethylated in these individuals. Decreased methylation during the ageing process has previously been described in a study of PBMCs [10]. The most significantly altered sites mapped within the NRP1, NRXN1, RASSF5, OTUD7A and PRM1 genes. The loci that did not reach the 0.2 β-difference threshold but were significantly different (p < 0.05) included two ELOVL2 sites, cg16867657 and cg24724428 (both with a β-diff. of 0.17); two FHL2 sites, cg22454769 and cg24079702 (β-diff. of 0.15 and 0.14, respectively); and a PENK site, cg16419235 (β-diff. of 0.08); all these sites are associated with increased methylation in the peripheral blood mononuclear cells of older individuals [9,10].
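To make the selection criterion above concrete, the following minimal sketch applies the reported thresholds (p < 0.05 and an absolute β-value difference of at least 0.2) to a probe-by-sample matrix of Illumina β values. It is an illustration only: the data are simulated, the per-probe Welch's t-test stands in for the batch-corrected linear model used in the actual analysis, and all variable names are assumptions.

```python
# Minimal sketch of the site-selection criterion (simulated data; not the
# authors' pipeline). Rows are CpG probes, columns are donors.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_probes = 1000
betas_young = rng.beta(2, 5, size=(n_probes, 8))  # 8 young donors
betas_old = rng.beta(2, 5, size=(n_probes, 8))    # 8 elderly donors

# Per-probe two-sample comparison (Welch's t-test as a simple stand-in
# for the linear model fitted in the study).
_, pvals = stats.ttest_ind(betas_old, betas_young, axis=1, equal_var=False)

# "beta-diff": median beta-value difference between the age groups.
beta_diff = np.median(betas_old, axis=1) - np.median(betas_young, axis=1)

# Selection criteria reported in the text: p < 0.05 and |beta-diff| >= 0.2.
hits = (pvals < 0.05) & (np.abs(beta_diff) >= 0.2)
print(f"candidate age-associated CpG sites: {hits.sum()}")
print(f"of which hypomethylated in the elderly: {(hits & (beta_diff < 0)).sum()}")
```

With real array data, the β matrix would come from the normalized signal intensities (β = M/(M + U + 100) in Illumina's convention), so a value near 0 indicates an unmethylated site and a value near 1 a fully methylated one.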
To validate our results, we focused our investigation on the three differentially methylated CpG sites with the highest hypomethylation values: cg24892069, cg27209729 and cg10501210. The CpG site cg24892069, which had a very low standard deviation in both age groups (young STDEV: 0.05; old STDEV: 0.06), is located in intron 2 of the neuropilin 1 (NRP1) gene. NRP1 is a cell surface receptor with functional roles in several biological processes, including angiogenesis, immune response and regulation of vascular permeability [11,12], and has also been associated with increased cancer progression [13,14]. NRP1 is expressed in regulatory T cells [15] and is needed for prolonged cellular contact between regulatory T cells and dendritic cells [16]. Another CpG site, cg27209729, is located in intron 9 of the neurexin 2 (NRXN2) gene. NRXN2 is a member of the neurexin family, which affects synaptic plasticity and cognitive functioning [17] and has been linked to autism spectrum disorders and schizophrenia [18]. The third CpG site, cg10501210, is located in a putative regulatory region, approximately 1 kb upstream of the miR-29b-2 gene. miR-29b-2 belongs to the miR-29 family, which is important in thymic involution [19], T cell polarisation [20] and oncogenesis [19,21]. miR-29b has been shown to target the DNA methyltransferases DNMT3A and DNMT3B and, indirectly, DNMT1 [22,23], leading to a reduction of global methylation and to the expression of methylation-regulated genes.
We replicated the array results of the three differentially methylated loci using the EpiTYPER assay (Sequenom Inc.) with a separate set of young and elderly samples. We added to our analyses two sex-matched control age groups, consisting of 10 young (age range 24-28 years, mean 26.4 years; 5 males and 5 females) and 10 elderly (age range 76-84 years, mean 79.4 years; 5 males and 5 females) samples. Using the EpiTYPER assay, we found hypomethylation of the NRP1-associated cg24892069 site in the monocytes of the older individuals, similar to the results from the HumanMethylation450 BeadChip analysis (Figure 1A). We also analysed the methylation differences in men and women separately and observed a significant difference in both gender groups (p < 0.0001) (Figure 1B). To explore this region further, we selected another CpG site, cg24892069-40 bp, which was located 40 bp upstream of the cg24892069 site in the genomic sequence; this site was not included on the methylation BeadChip. We found that the cg24892069-40 bp site had a statistically significant methylation difference between the studied age groups (p < 0.0001) (Figure 1C), which was observed in both sexes (p < 0.0001) (Figure 1D). The similar DNA methylation pattern of the two CpG sites in close proximity most likely reflects a shared differentially methylated region that is modified from a nearby methyltransferase binding site. We also found significant differences between the age groups at the cg27209729 and cg10501210 sites, located in the NRXN2 gene and upstream of the miR-29b-2 gene, respectively (Figure 2). These CpG sites had statistically significant methylation differences in the combined study group (p < 0.0001) (Figure 2A,C) as well as in the male and female study groups (p < 0.01 and p < 0.0001, respectively) (Figure 2B,D). We also evaluated the expression of the genes associated with the three differentially methylated CpG sites in monocytes of young and elderly individuals, but the expression levels of the NRP1 and NRXN2 genes were below the detection limit of RT-PCR. This is in agreement with our previously published mRNA expression study, in which NRP1 was expressed at very low levels in monocytes and demonstrated a significantly increased expression in monocyte-derived dendritic cells and macrophages, whereas NRXN2 expression remained low even after differentiation to dendritic cells [24]. The mRNA level of the miR-29b-2 gene was detectable; however, the expression did not differ significantly between young and elderly individuals (data not shown). As the CpG site cg10501210 is located approximately 1 kb upstream of the miR-29b-2 gene, it might not have a regulatory effect on miR-29b-2 expression.
In conclusion, we were able to identify age-related DNA methylation changes in purified monocytes at immunologically relevant genomic loci. We found that the majority of the altered CpG sites were hypomethylated in the elderly individuals. The top three hypomethylated CpG sites in the elderly were cg24892069, cg27209729 and cg10501210, which are located in or near the NRP1, NRXN2 and miR-29b-2 genes, respectively. Further investigation and a larger sample set are needed to define the functional role and significance of these CpG sites in the ageing process.
Purification of cell populations
The study was approved by the Ethics Review Committee on Human Research of the University of Tartu. All of the participants gave written informed consent. Peripheral blood was obtained from healthy donors of the Estonian Genome Center of the University of Tartu. Peripheral blood mononuclear cells (PBMCs) were extracted using Ficoll-Paque (GE Healthcare) gradient centrifugation. CD14+ monocytes were extracted from PBMCs using microbeads (CD14+ #130-050-201) and AutoMACS technology (Miltenyi Biotec). The purity of the monocyte cell population was analysed with a FACSCalibur (BD Biosciences) using fluorescence-conjugated antibodies against CD14 and CD3 (Miltenyi) to confirm the characteristic phenotype (Additional file 1: Figure S1).
DNA extraction, bisulfite treatment and DNA methylation measurement
Genomic DNA was isolated from cell pellets using the QIAamp DNA Micro Kit (Qiagen). DNA concentration was measured with a NanoDrop ND-1000 spectrophotometer. Extracted genomic DNA was bisulfite converted using the EZ-96 DNA Methylation Kit (Zymo Research Corporation). DNA methylation was then measured with the Infinium HumanMethylation450 BeadChip (Illumina Inc.).
Sequenom EpiTYPER assay
The Sequenom EpiTYPER technology was used to validate the HumanMethylation450 array data. Samples were prepared using the EpiTYPER T Complete Reagent Set (Sequenom) according to the manufacturer's instructions. 25 ng of bisulfite-treated DNA was used as PCR input, and CpG methylation was determined by the MassARRAY Analyzer 4 system (Sequenom).
Data analyses
The methylation signals were extracted with the methylation module v1.8.5 of the GenomeStudio v2010.3 software (Illumina Inc.) without background correction or normalisation. Probes with a detection p-value greater than 0.01, located on sex chromosomes, or containing SNPs with a minor allele frequency of at least 5% in the Caucasian population according to the HapMap project (http://hapmap.ncbi.nlm.nih.gov) were filtered out prior to further analysis. The signals were corrected and normalised using subset quantile normalisation as described in [25]. For the differential methylation analysis, the 80% least varying probes according to the interquartile range across all samples were removed, and a linear model was used to assess the differences between the two age groups, treating arrays on different BeadChips as batches. Methylation sites with an FDR-adjusted p-value less than 0.05 were considered differentially methylated. A median difference in beta values greater than 0.2 between groups was required when selecting methylation sites for further analyses.
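The analysis steps described above can be summarized in code. The sketch below is a loose Python re-implementation under stated assumptions: the probe annotations (detection p-values, chromosome, SNP minor allele frequency), the per-sample age-group and BeadChip labels, and the function name are hypothetical; an ordinary least-squares model stands in for the authors' GenomeStudio/R workflow; and the subset quantile normalisation step is not reproduced.

```python
# Hedged sketch of the differential-methylation analysis described above
# (hypothetical inputs; not the authors' actual code).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests

def differential_methylation(betas, detection_p, chrom, snp_maf,
                             age_group, batch):
    """betas, detection_p: DataFrames (probes x samples); chrom, snp_maf:
    per-probe Series; age_group, batch: per-sample Series aligned with
    betas.columns."""
    # 1) Probe filtering: drop probes with detection p > 0.01 in any
    #    sample, probes on sex chromosomes, and probes containing SNPs
    #    with a minor allele frequency of at least 5%.
    keep = ((detection_p <= 0.01).all(axis=1)
            & ~chrom.isin(["X", "Y"])
            & (snp_maf < 0.05))
    betas = betas.loc[keep]

    # 2) Variance filter: remove the 80% least varying probes, i.e. keep
    #    the top 20% by interquartile range across all samples.
    iqr = betas.quantile(0.75, axis=1) - betas.quantile(0.25, axis=1)
    betas = betas.loc[iqr >= iqr.quantile(0.80)]

    # 3) Per-probe linear model: beta ~ age group + BeadChip batch.
    design = pd.DataFrame({"old": (age_group == "old").astype(float)})
    design = design.join(pd.get_dummies(batch, prefix="chip",
                                        drop_first=True).astype(float))
    design = sm.add_constant(design)
    pvals = np.array([sm.OLS(betas.loc[p], design).fit().pvalues["old"]
                      for p in betas.index])

    # 4) FDR correction and the median beta-difference threshold.
    _, qvals, _, _ = multipletests(pvals, method="fdr_bh")
    diff = (betas.loc[:, (age_group == "old").values].median(axis=1)
            - betas.loc[:, (age_group == "young").values].median(axis=1))
    return betas.index[(qvals < 0.05) & (diff.abs() > 0.2)]
```

In practice, one would precede step 3 with the subset quantile normalisation mentioned above (or use a dedicated 450K package such as minfi in R), but the filtering logic and the thresholds map directly onto the description in the text.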
Conversational Interaction in the Scanner: Mentalizing during Language Processing as Revealed by MEG
Humans are especially good at taking another's perspective—representing what others might be thinking or experiencing. This “mentalizing” capacity is apparent in everyday human interactions and conversations. We investigated its neural basis using magnetoencephalography. We focused on whether mentalizing was engaged spontaneously and routinely to understand an utterance's meaning or largely on-demand, to restore “common ground” when expectations were violated. Participants conversed with 1 of 2 confederate speakers and established tacit agreements about objects' names. In a subsequent “test” phase, some of these agreements were violated by either the same or a different speaker. Our analysis of the neural processing of test phase utterances revealed recruitment of neural circuits associated with language (temporal cortex), episodic memory (e.g., medial temporal lobe), and mentalizing (temporo-parietal junction and ventromedial prefrontal cortex). Theta oscillations (3–7 Hz) were modulated most prominently, and we observed phase coupling between functionally distinct neural circuits. The episodic memory and language circuits were recruited in anticipation of upcoming referring expressions, suggesting that context-sensitive predictions were spontaneously generated. In contrast, the mentalizing areas were recruited on-demand, as a means for detecting and resolving perceived pragmatic anomalies, with little evidence they were activated to make partner-specific predictions about upcoming linguistic utterances.
Introduction
In conversation, the meaning of linguistic expressions such as "the red couch" is often ambiguous, such that interlocutors need to work together to make sure their interpretations are aligned (Pickering and Garrod 2004). One influential proposal assumes that interlocutors align their interpretations by processing language against their "common ground," the set of mutual beliefs and expectations that are shared, and critically, "known to be shared" with their interlocutors (Lewis 1979; Clark and Marshall 1981; Stalnaker 1987). This collaborative model of conversation assumes that the memory representations forming the common ground are built up through a process of "grounding" (Clark and Brennan 1991). Processing language consistently with common ground involves accessing representations that are known to be shared, and suppressing information known privately to oneself.
The process of inferring another's mental states, or "mentalizing" (e.g., Frith and Frith 2006), is likely to be involved not only in establishing common ground representations but also in selectively accessing and maintaining such representations in a context-sensitive manner. Language users cannot always count on others having the same perceptual states and experiences, and so must on occasion modulate what information they use in speaking and understanding to be consistent with what they know about their interlocutor. Behavioral evidence suggests that during conversation, mentalizing is often called upon for resolving referential ambiguity, especially in cases in which there are clear differences in perspective (Keysar et al. 2003; Kronmüller and Barr 2007).
Given the inherent ambiguity of language, mentalizing would seem to be an essential ingredient of successful communication (Metzing and Brennan 2003; Kronmüller and Barr 2007). However, research over the past several decades indicates that mentalizing is effortful (Roßnagel 2000; Nilsen and Graham 2009; Lin et al. 2010) and that language users have access to other strategies for resolving ambiguity that do not involve mentalizing (Ferreira and Dell 2000; Pickering and Garrod 2004; Shintel and Keysar 2007; Barr 2014).
One case that is relevant to the current investigation concerns how basic memory processes activate contextually relevant information during conversational language processing. During interaction, the perceptual experiences associated with hearing and seeing a particular interlocutor will tend to increase the accessibility of information in long-term memory that is associated with that interlocutor; in this way, basic memory mechanisms promote contextually appropriate speaking and understanding. However, the episodic representations that are forged through conversational interaction are not identical to common ground, as episodic representations can be shared without also being known to be shared (Shintel and Keysar 2007). Furthermore, contextually appropriate representations may become activated in a contextually appropriate manner via the basic memory retrieval principle of encoding specificity rather than through mentalizing. Supporting this view, a recent study shows that when a speaker articulates an utterance designed by another speaker (e.g., reading someone else's email aloud), listeners activate information associated with the person delivering the message (the messenger/reader) rather than with the person who designed it, despite the relevant common ground being that which is shared with the utterance designer.
In short, because processes other than mentalizing can promote successful communication, in any given case in which language users adapt their language processing to context, it is an empirical question whether such adaptation involved mentalizing as an indicator of genuine partner-oriented processing. Our study therefore set out to investigate these questions using magnetoencephalography (MEG) to monitor listeners' unfolding interpretations of referentially ambiguous expressions during live social interaction: we tested whether mentalizing would occupy a central, anticipatory role or a more "on-demand" role within common ground processing, which has been a matter of some debate recently (e.g., Metzing and Brennan 2003; Kronmüller and Barr 2007). MEG is well suited to the study of spoken language processing in context, because it provides the necessary temporal and spectral resolution to examine moment-by-moment changes in activation and modulation of neural oscillations, as well as sufficient spatial resolution to enable localization of function and the identification of brain networks.
Previous neuroimaging studies have addressed issues of referential ambiguity, memory, and mentalizing, but usually these issues are addressed separately in different studies, often in non-conversational settings with socially isolated participants. While EEG studies have documented a remarkably early sensitivity to referential ambiguity (the "NRef" effect; Van Berkum et al. 1999; Van Berkum et al. 2003), it remains unclear whether such effects can be modulated by conversational memory or by beliefs about common ground, as participants in these studies were presented with prepared text or speech in social isolation. An fMRI study of communicative perspective taking reported activations related to referential ambiguity in the superior dorsal medial prefrontal cortex, bilateral middle temporal gyri, and the left temporal pole, whereas activations in the left precuneus and bilateral temporo-parietal junctions (TPJs) particularly reflected the presence vs. absence of an avatar during referential instructions (Dumontheil et al. 2010). However, this study used artificial avatars and did not involve interaction with live partners.
Other fMRI and lesion studies have identified regions of the brain that are likely to be responsible for building and/or maintaining representations of others' beliefs and goals, including right-hemisphere structures in temporo-parietal areas such as the posterior superior temporal sulcus (pSTS) and the TPJ, in addition to certain medial (Saxe and Wexler 2005; Saxe and Powell 2006; Van Overwalle and Baetens 2009, for review) and especially ventromedial (e.g., Gregory et al. 2002; Atique et al. 2011) prefrontal areas. A recent study showed that these same regions may be involved in generating and inferring communicative intentions during live nonlinguistic communication (Noordzij et al. 2009). This is consistent with other recent studies that used relatively realistic social interaction (for review, see Hari and Kujala 2009) and which have shown increased activation in social cognition and reward brain areas (Redcay et al. 2010). In sum, although we have learned much about the various brain systems involved in processing referential ambiguity and in mentalizing, there is still little understanding of when, and how extensively, mentalizing networks might be activated when participating in realistic conversational interaction.
The absence of neuroimaging studies on conversational language processing reflects the existence of a number of technical and logistical challenges that have imposed a barrier to this kind of research. First, the required signal-to-noise ratio for neuroimaging data analysis typically necessitates a larger number of trials compared with behavioral studies, as well as a high level of control over the stimuli and stimuli presentation timings, in order to reduce any additional sources of variability. This need for large numbers of highly controlled trials is at odds with the characteristics of naturalistic interaction with live conversational partners, where it is difficult to predict what speakers will say and when they will say it. Furthermore, identifying the brain networks involved in the processing of conversational speech requires a neuroimaging technique that provides adequate spatial and temporal resolution. We surmounted these obstacles by using MEG with a novel communication-game paradigm "do-I-see-what-you-mean?" (see Fig. 1, Panel A) that enabled spontaneous, quasi-naturalistic conversation with trained confederates, but which still allowed us full control over stimulus characteristics and timing through interleaving prerecorded speech with live speech. Critically, we implemented this interleaving in a way that would lead participants to believe that they were experiencing a live interaction including only spontaneously produced speech by real participants.
The experiment alternated between blocks of trials comprising a "grounding" phase, characterized by spontaneous interaction between the participant and 1 of 2 confederate speakers, followed by a testing phase in which participants heard speech from 1 of the 2 speakers that was (unbeknownst to them) prerecorded and not produced live. During interactive grounding phases (n = 42 interactions in each phase), participants built up temporary referring precedents (e.g., agreements to call a particular object a "couch"; see Brennan and Clark 1996) through live interaction with 1 of the 2 confederate speakers (1 male and 1 female) regarding how images presented on separate screens were to be named (Fig. 1, Panel A, left and middle panels). On trials of the subsequent test phase (n = 26 trials in each phase), both the participant and the confederate speaker allegedly saw 1 object on their respective screens (Fig. 1, Panel A, right panel). In critical "precedent mismatch" trials, the participant saw a target object from the interactive grounding phase and then heard either the same or the other (confederate) speaker name their own object (prerecorded speech), but using a different (mismatching) term from the one established for the target object during grounding (e.g., using the term "sofa" rather than the established term "couch" to refer to an object; Fig. 1, Panel A, right, top panel). Based on the description the speaker chose, the participant had to decide whether or not the speaker was looking at the same object.
In short, we manipulated 2 factors during the critical test-phase trials (in relation to the grounding phase). Firstly, we manipulated whether the same or a different (confederate) speaker would name the test pictures (blocked per test phase), and secondly, we manipulated whether the current name mismatched a previously established precedent or whether no precedent had been established at all. That is, in some grounding trials, objects had been referred to by their location (e.g., "top left"), potentially generating interactive memory traces for that object, yet without a naming precedent (see Fig. 1, Panel A, right, middle panel). These "no precedent" trials served as baseline conditions for the same- and different-speaker conditions, respectively. Additionally, "precedent match" filler trials (and a number of other catch trials) were added to each test phase to ensure that participants would not learn to expect a precedent mismatch on a majority of trials (see Fig. 1, Panel A, right, bottom panel; see also Materials for detailed information).
Importantly, a speaker might use a new (mismatching) term for different reasons. The possibility that the speaker might be gazing at a different object provided a cooperative reason why the same speaker, who set up the precedent in the first phase, might use a new term; they might want to indicate that they now see a different picture than before. This could be inferred via mentalizing. In contrast, if a different speaker had established the precedent, the current speaker might simply have different preferences for naming things, so in that case, it is more probable that the speaker is looking at the same picture as the participant. The task made it important for listeners to keep track of who said what, because whether the name they hear is a match or a mismatch (by the same or a different speaker) provides relevant information for deciding whether the speaker sees the same or a different picture. It is important to note that listeners were informed before each phase of test trials whether they would hear the same speaker as in the preceding interactive phase or the other speaker. This allowed for consistent person-specific retrieval of previous interactions and possibly anticipation of the speaker's intentions.
Different predictions can be made for the different conditions about the activation time-course of functional neurocognitive networks. It is important to point out, however, that in order to separate the various functional networks and their activation dynamics, we had to rely on the substantial body of previous neuroimaging research about how brain areas relate to function. Thus, the functional separations of episodic memory, language, and mentalizing networks we propose here have to be understood as well-founded, yet hypothetical assertions. Before fleshing out the crucial hypotheses about the time-course of mentalizing, we spell out specific predictions about the time-course of activation of memory and language networks. These predictions are rather independent of the different views regarding the anticipatory vs. "on-demand" role of mentalizing but also relate to speaker-specific common ground vs. more generic speaker-independent processing.
First, the visual stimulus would elicit retrieval of any existing precedents (e.g., couch), if present, before hearing any speech, simply on the basis of the episodic memory of the previous interaction (e.g., Baddeley 2000). Such situated multimodal episodic short- and long-term representations have been associated with medial temporal lobe function (e.g., Squire and Zola-Morgan 1991; Olson et al. 2006), allowing viewpoint-dependent retrieval of an object in its visuo-spatial context (e.g., Schmidt et al. 2007; Sulpizio et al. 2013), including the "negotiated" name for the object (e.g., Duff and Brown-Schmidt 2012 for review), along with the identity of the speaker (Imaizumi et al. 1997). Because listeners knew in advance of each test phase which speaker would be speaking, we expected stronger retrieval of the precedent in the same-speaker condition (i.e., speaker identity serving as a further retrieval cue). This could be reflected by differential activation in the middle temporal lobe (e.g., Imaizumi et al. 1997; Olsen et al. 2009) and the temporal poles (Imaizumi et al. 1997), that is, stronger anticipatory episodic retrieval, in conjunction with executive function areas in lateral prefrontal cortex (Sakai and Passingham 2004; Kessler and Kiefer 2005). Importantly, episodic retrieval may lead to anticipation of a specific linguistic expression in the right temporal lobe, which has been proposed in conjunction with right lateral prefrontal cortex (Tourville and Guenther 2011) as a locus for linguistic predictions and integration based on pragmatic (e.g., Gardner et al. 1983; Federmeier 2007) and/or visual episodic context (e.g., Marini et al. 2005). Such activations related to memory retrieval would be elicited by viewing the picture, so they can occur from the moment that the picture is presented but will probably be on-going throughout the trial. Furthermore, such activations would indeed reflect speaker-specific rather than generic speaker-independent memory processing. However, this fulfills a necessary but not a sufficient condition for evidencing the engagement of common ground. As pointed out earlier, it could merely reflect context-specific encoding and retrieval of representations that are shared without being known to be shared. Mentalizing processes, in contrast, would be a more genuine indicator of common ground use.

[Figure 1 caption fragment displaced into the text above: ... grounding (left) and test (right) phases. Stimuli were presented in color during the experiment. The speaker's view was implied by the speaker's behavior without being seen by the participant and is presented here for clarity. Physical stimuli in the test phase were identical for the 4 experimental conditions (same-/different-speaker precedent match/no precedent) over participants, but test trials were never repeated within a single participant. Panel B: Visualization of predictions from the anticipatory and "on-demand" view of mentalizing about which areas are expected to be more active in the same-speaker precedent mismatch than the other conditions during different parts of the test phase.]
With regard to hypotheses concerning the involvement of mentalizing, previous research supports a prediction of greater involvement in the condition where the same speaker fails to match an established precedent, as this is the situation in which common ground is apparently violated (Metzing and Brennan 2003; Kronmüller and Barr 2007). In contrast, situations where no precedent had been established or where a different speaker fails to match a precedent would require corrective mentalizing to a much lesser degree (if at all). Thus, observing the strongest mentalizing brain network activation in the same-speaker, precedent mismatch condition would corroborate the general view that participants make use of common ground processing for resolving referential ambiguity in the current paradigm. To distinguish between an anticipatory vs. an "on-demand" view of mentalizing in common ground processing, we need to establish when mentalizing is engaged in relation to language processing (see Fig. 1, Panel B). Are perspective-taking processes spontaneously engaged prior to an anticipated communicative event, in the service of generating expectations about what a speaker might say and how she might say it? Alternatively, are they mostly engaged "on demand" after the listener suspects communication failure and makes a conscious effort to reestablish common ground? We therefore examined the timing of activation of mentalizing networks, typically associated with the temporo-parietal junction (TPJ), precuneus (PC), and the ventromedial prefrontal cortex (vmPFC; e.g., Gregory et al. 2002; Van Overwalle and Baetens 2009; Atique et al. 2011), to understand the extent to which these mentalizing networks are activated prior to naming by the same speaker or, in contrast, only in response to a precedent mismatch produced by the same speaker.
The patterns of activation timing, co-activation, and oscillatory coupling between distinct brain modules provided us with unprecedented detail about processes of human communicative interaction and further allowed us to disentangle a more partner-orientated from a more egocentric conception of interactive communication. However, we would like to reiterate that the current segregation between functional brain modules is hypothetical (reverse inference), as it relies on previous neuroimaging research for linking brain areas to function. Our results must therefore be regarded as provisional, requiring further testing and confirmation by subsequent research.
Participants
We obtained MEG data from 16 British participants (8 males), all of whom reported speaking English as their native language. They were recruited from the participant pool of the psychology department of Glasgow University, were paid £6 per hour for their participation, and gave their informed consent. Data from an additional female participant were excluded because she clicked on the wrong picture too often (22 times) in the interactive phase (see Procedure).
Materials
We gathered 320 pictures (from the Internet) that were given 2 plausible names in an informal pilot as the experimental pictures (see Table 1 for examples). The 2 names for the experimental pictures were selected to be as balanced as possible. The more dominant name was always used the first time the object was named (in the interaction phase, see below). This was done to preclude the explanation that the speaker had thought of a better way to name the picture. We selected 640 other pictures as filler items that were given a name that was clearly dominant in the informal pilot. These pictures were used for 5 different categories of fillers. First, to make sure that not all pictures that were seen in the interactive phase were named differently in the subsequent test phase, we used precedent match fillers (Table 1, third row): 80 pictures appeared twice in the interactive phase and once (with the same name) in the following test phase. Second, in order to demonstrate to participants that different pictures could be named in the same way in the 2 different phases, we added the "category filler" condition (20 pairs of pictures, Table 1, fourth row) with 2 different pictures from the same category (e.g., piano and violin), 1 appearing in the interactive phase and 1 in the subsequent test phase, which were both named by their superordinate term (e.g., musical instrument). Furthermore, 100 pictures only appeared in the test phase, 40 of which were named correctly ("new correct fillers," Table 1, fifth row), and the other 60 were named incorrectly ("new incorrect fillers," Table 1, sixth row). The names used in the incorrect filler condition were related to the real names for the pictures (e.g., jar-glass) so that participants needed to pay attention to spot subtle differences. In addition, 420 pictures were used as "pure fillers" (Table 1, bottom row). These appeared once in the interactive phase but were never named.

[Table 1 caption and note displaced into the text above: Overview and examples of the experimental and filler items used. Note: See Materials for descriptions of the types of items. Example pictures (last row) are presented in black and white but were presented to participants in color. The third and fourth columns indicate how the picture was referred to in the respective phases (if the picture was absent or not named in one of the phases, this is indicated in italics). Experimental items are presented in the first 2 rows and the different filler categories in the last 5 rows.]
The names for the test phase (see Procedure) were recorded, divided equally between the 2 confederates. Some of the filler (but no experimental) names were recorded with a hesitation, to make participants believe the pictures were named on the spot.
Procedure
Participants were first prepared for the MEG. Head position indicator (HPI) coils were attached to the participant's head behind the right and left ears, above the nasion, and on the right- and the left-hand side of the forehead. The coil positions and head shape were digitized before the scan using the Polhemus program and stylus (Polhemus Isotrak, Kaiser Aerospace, Inc.). Digitization is standard practice: it allows the head position in the MEG to be determined at the start and end of each block (maximum movement tolerated was 0.5 cm) and enables co-registration with the structural MRI for later source localization. Participants who did not have a structural MRI were scanned after the experiment using a 3T Siemens Trio MRI (Siemens Medical Solutions). Preparation took about 45 min on average, during which the experimenter collected the confederates (instructed fourth-year psychology students, 1 female and 1 male) and introduced them to the participant. Participants were told that these 2 speakers would interact with them via a microphone from separate rooms. After 2 practice blocks, in which participants experienced the role of both listener and speaker, they were presented with 5 20-min runs consisting of 4 blocks each. The order of blocks was randomized per participant. Each block consisted of an interactive phase followed by a test phase. The speaker for each phase was always announced before the start of the phase and remained the same throughout the phase.
In an interactive phase, the speaker/confederate (following a script) asked the participant to click on pictures on the screen using an optical track ball. Participants were told they could interact with the speaker and ask questions. Participants always saw 9 pictures on the screen (see Fig. 1, Panel A, middle), and speakers allegedly saw the same stimuli and had to name those pictures marked by a red frame (see Fig. 1, Panel A, left). The speaker/confederate indicated most pictures by their name, but some by their location (e.g., "bottom right"). In the latter case, pictures in the marked location were allegedly obscured from the speaker's view (see, e.g., Fig. 1, Panel A, left panel, bottom right picture). In each interactive phase, the speaker/confederate named 13 pictures twice in exactly the same way (later serving as precedent mismatch trials or as precedent match fillers) and referred to 8 pictures twice by their location (later serving as no precedent trials).
In the test phases, the speaker/confederate was the same as for the preceding interactive phase in half of the cases and different in the other half. The participant saw only 1 picture at a time on the screen (see Fig. 1, Panel A, right). The speaker/confederate allegedly also saw only a single picture and named that picture (e.g., "sofa"). Participants were asked to indicate whether the speaker's picture was the same as or different from their own, using their dominant hand on the trackball (thumb for same picture and ring finger for different picture). They were not allowed to talk to the speaker. In reality, all utterances of the speaker/confederate in the test phase were recorded beforehand. Each trial started with a fixation cross in the middle of the screen, followed by presentation of the picture at the same location. In experimental trials (precedent mismatch or no precedent), the picture was always presented for 800 ms before the recorded name was played and stayed on the screen until a response was given. For filler trials, the preview interval before presentation of the name was varied. Each test phase started with 2 filler trials, followed by a random presentation of 8 precedent mismatch trials (named differently than in the interactive phase), 8 no precedent trials (indicated by their location in the interactive phase), 4 precedent match fillers (named the same as in the interactive phase), and 4 other fillers (see Materials and Table 1). Note that the physical stimuli in the test phase were identical for the 4 experimental conditions over participants, but items were not repeated within a participant. The different conditions were created by changing the speaker and the particular reference to this picture in the preceding interactive phase. Each experimental condition (same-speaker precedent mismatch; same-speaker no precedent; different-speaker precedent mismatch; different-speaker no precedent) occurred 80 times throughout the experiment.
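The test-trial timing just described (fixation cross, picture onset, a fixed 800-ms preview, playback of the prerecorded name, response with the trackball) maps directly onto a simple presentation loop. The sketch below is a minimal illustration in Psychtoolbox-style MATLAB; the paper does not report its presentation software, and the texture/audio handles, the fixation duration, and the WaitForResponse helper are all hypothetical.

% Illustrative test-trial loop (assumed Psychtoolbox-style MATLAB, not the authors' code)
for t = 1:numel(trials)
    % Fixation cross in the middle of the screen
    DrawFormattedText(win, '+', 'center', 'center');
    Screen('Flip', win);
    WaitSecs(0.5);                                   % fixation duration: assumed, not reported

    % Picture presented at the same location; stays up until the response
    Screen('DrawTexture', win, trials(t).pictureTexture);
    picOnset = Screen('Flip', win);

    % Experimental trials: fixed 800-ms preview before the recorded name
    WaitSecs('UntilTime', picOnset + 0.800);
    PsychPortAudio('FillBuffer', pahandle, trials(t).nameRecording);
    nameOnset = PsychPortAudio('Start', pahandle, 1, 0, 1);

    % Same/different judgment (thumb = same, ring finger = different)
    [choice, rt] = WaitForResponse(trackball);       % hypothetical helper
    results(t).choice = choice;
    results(t).RT     = rt - nameOnset;              % RT relative to naming onset
end

For filler trials, the fixed 0.800 constant would simply be replaced by a variable preview interval, as described above.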
Apparatus

MEG data were acquired using a 248-channel whole-head magnetometer (4D-Neuroimaging Magnes 3600 WH system; the sensors are SQUIDs, superconducting quantum interference devices) at the CCNi at the University of Glasgow, sampled at 508.63 Hz and band-pass filtered between 0.1 and 200 Hz. Trigger pulses via the parallel port were used to synchronize MEG data acquisition with experimental events. A Panasonic 3-chip DLP projector (PT-D7700E-K) was employed for visual stimulus presentation. Resolution was 1024 × 768 pixels, covering a visual angle of 24° horizontal by 18° vertical. Each picture in the 3 × 3 matrices employed during the interactive phases covered a visual angle of 3.5° × 3.5°, whereas the single pictures presented during the testing phases covered 4.7° × 4.7°.
Ethical Statement
All procedures (including consent and participant debriefing) were reviewed and approved by the College Ethics Committee of the University of Glasgow and were in full agreement with APA and BPS guidelines, as well as with the Declaration of Helsinki.
Data Analysis
Preprocessing and statistical analysis were conducted using the FieldTrip MATLAB toolbox (Oostenveld et al. 2011) in agreement with recently published guidelines for MEG research (Gross et al. 2012). First, epochs were extracted from the MEG for all test phase trials from 500 ms before the picture was shown on the screen (i.e., 1300 ms before sound onset) until 500 ms after the response. Subsequently, linear trends were removed and all epochs were denoised to remove signals generated by the HPI coils. Trials with very large (movement and/or eye) artifacts were removed before PCA/ICA, since this procedure can be unreliable if the data contain much noise. On average, only 1 or 2 trials were removed per experimental condition at this stage (at most 5, with no significant differences between conditions). Then, PCA was used to reduce data dimensionality for each participant to 40, 60, or 100 components, which were then subjected to ICA (Oostenveld et al. 2011; Gross et al. 2012). A higher number of components was used if signal and noise could not be separated clearly using the lower number of components. These components were inspected visually and removed if they contained only noise and/or artifacts (e.g., caused by heart beats or eye movements). The average proportion of removed components was 0.20 (range: 0.05 to 0.37). The remaining components were used to recreate the MEG signal. After that, individual trials with remaining artifacts were removed manually once more. This resulted in an average of 76.1 remaining trials (range: 69-80 out of 80). The numbers of remaining trials did not differ between the 4 conditions within the same analysis (Fs < 1). To identify the same-speaker "deliberation trials" (see ERF results below), we took all trials from the same-speaker, precedent mismatch condition in which participants gave a "different picture" response, plus the one-third of "same picture" responses with the longest RTs per participant. This led to an average of 42.6 deliberation trials per participant (range: 32-66). Finally, 5 or 6 bad channels were interpolated for each participant based on the signal of neighboring channels. These preprocessed data entered the ERF and time-frequency (power) analyses. For coherence analysis, we used preprocessed data without removing components via the standard PCA/ICA approach, since it has been suggested that removing ICs, when preceded by a PCA, distorts the oscillatory phase of the signal (e.g., Castellanos and Makarov 2006). All reported coherence effects are relative between conditions and unlikely to be biased by artifacts (e.g., heart, muscles), assuming a random distribution of artifacts across trials and conditions.
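As a rough illustration of this pipeline, the following FieldTrip-style MATLAB sketch covers the epoching, detrending, PCA-constrained ICA, component rejection, and channel-interpolation steps. It is our reconstruction from the text rather than the authors' script; the file name, trial-definition function, component indices, and bad-channel labels are placeholders.

% Epoching: from 500 ms before picture onset (= 1300 ms before sound onset)
% until 500 ms after the response; the trial-definition function is assumed
cfg          = [];
cfg.dataset  = 'subject01.m4d';         % hypothetical 4D file name
cfg.trialfun = 'mytrialfun_testphase';  % custom, assumed
cfg          = ft_definetrial(cfg);
cfg.detrend  = 'yes';                   % removes linear trends
data         = ft_preprocessing(cfg);

% PCA-constrained ICA: reduce to 40/60/100 components, then unmix
cfg            = [];
cfg.method     = 'runica';
cfg.runica.pca = 60;                    % raised to 100 if signal/noise not separable
comp           = ft_componentanalysis(cfg, data);

% Reject visually identified artifact components (indices are placeholders)
cfg           = [];
cfg.component = [1 5 12 23];            % e.g., eye blinks, heart beat
data_clean    = ft_rejectcomponent(cfg, comp, data);

% Interpolate 5-6 bad channels from their neighbours
cfg            = [];
cfg.method     = 'weighted';
cfg.badchannel = {'A23', 'A117'};       % placeholders
cfg.neighbours = ft_prepare_neighbours(struct('method', 'triangulation'), data_clean);
data_repaired  = ft_channelrepair(cfg, data_clean);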
For evoked responses (ERFs), trials of the same condition were averaged per participant. These averages were adjusted in relation to a baseline interval of 200 ms immediately prior to picture onset and filtered with a band-pass filter between 0.5 and 35 Hz. For time-frequency representations, the power of each frequency between 2 and 30 Hz (in steps of 1 Hz) was calculated on individual trials over time using a Hanning taper (Grandke 1983) with a window of 4 cycles (changing in length per frequency). For both ERF and time-frequency averages, planar gradient representations were calculated prior to sensor-level analysis. It is often helpful to interpret MEG fields measured by magnetometers (and axial gradiometers, e.g., Gross et al. 2012) after transforming the data to a planar gradient configuration, that is, by computing the gradient tangential to the scalp. One advantage of the planar gradient transformation is that the signal amplitude typically is largest directly above a source. This transformation is particularly helpful for sensor-level analysis as it also allows for more commonality across participants with the same source locations yet with differing orientations (the planar gradient represents the focus above the location of the source and not the more orientation-dependent fields around the source). For source-level analysis, however, the original magnetometer representation and corresponding lead fields were employed. To test for statistically significant differences between conditions and reduce the multiple-comparisons problem, we used the cluster-based approach implemented in the FieldTrip toolbox (Maris and Oostenveld 2007). This robust method controls the family-wise error across subjects in time and space. To examine differences between experimental conditions, paired t-tests are performed for each time-point, channel, and frequency (for time-frequency analyses) with a threshold of P < 0.05. Significant clusters in time, space, and frequency are identified on the basis of proximity (neighbors) in all dimensions of the cluster. Cluster statistics are calculated by taking the sum of t-values in every cluster. To obtain a P-value for each cluster, a Monte Carlo method is used to evaluate how extreme the observed cluster statistics of the 2 conditions are compared with random partitions of the samples. The proportion of random partitions that results in larger cluster statistics than the observed one is the P-value. The threshold was fixed to P = 0.05.
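In FieldTrip terms, the time-frequency decomposition, planar transformation, and cluster-based permutation test described above could be sketched as follows. The parameters match those reported (2-30 Hz in 1-Hz steps, Hanning taper, 4-cycle windows, cluster threshold P < 0.05); the time-axis step, the number of randomizations, and the input variable names are assumptions.

% Planar gradient transformation before power estimation
cfg              = [];
cfg.method       = 'triangulation';
neighbours       = ft_prepare_neighbours(cfg, data_clean);

cfg              = [];
cfg.planarmethod = 'sincos';
cfg.neighbours   = neighbours;
data_planar      = ft_megplanar(cfg, data_clean);

% Time-frequency power, 2-30 Hz, Hanning taper, 4 cycles per frequency
cfg           = [];
cfg.method    = 'mtmconvol';
cfg.taper     = 'hanning';
cfg.output    = 'pow';
cfg.foi       = 2:1:30;           % 1-Hz steps
cfg.t_ftimwin = 4 ./ cfg.foi;     % window length changes with frequency
cfg.toi       = -0.8:0.05:1.0;    % relative to naming onset (step size assumed)
freq          = ft_freqanalysis(cfg, data_planar);
cfg           = [];
freq          = ft_combineplanar(cfg, freq);

% Cluster-based permutation test, e.g., same-speaker mismatch vs. no precedent,
% on per-subject condition averages (freqMM{s}, freqNP{s} are assumed cell arrays)
nsubj                = 16;
cfg                  = [];
cfg.method           = 'montecarlo';
cfg.statistic        = 'ft_statfun_depsamplesT';  % paired t-test at every point
cfg.correctm         = 'cluster';
cfg.clusteralpha     = 0.05;                      % threshold for cluster formation
cfg.clusterstatistic = 'maxsum';                  % sum of t-values per cluster
cfg.neighbours       = neighbours;
cfg.alpha            = 0.05;
cfg.numrandomization = 1000;                      % assumed; not reported
cfg.design           = [1:nsubj 1:nsubj; ones(1,nsubj) 2*ones(1,nsubj)];
cfg.uvar             = 1;                         % row 1: subject (unit) variable
cfg.ivar             = 2;                         % row 2: condition variable
stat                 = ft_freqstatistics(cfg, freqMM{:}, freqNP{:});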
We employed 2-step analyses to emulate the interaction between 2 factors in the time and frequency analyses. We first calculated a t-statistic for the difference between 2 conditions, for example, precedent mismatch vs. no precedent trials, for each participant separately, and then entered the outcomes (t-values) of this first-step statistic into a group statistic that compared a second difference, for example, same vs. different speaker (note that the first-level t-statistic was calculated separately for the same- and different-speaker conditions at the individual level). The comparison at the group level followed the robust cluster-based statistics approach described earlier. This 2-step analysis thus approximates the interaction between speaker (same/different) and precedent (precedent mismatch/no precedent).
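Concretely, the 2-step logic might look like the sketch below: first-level t-maps per participant (trials as observations), then a group-level cluster test on those t-maps. This is a schematic reconstruction, not the published code; in practice the first-level t-values are typically copied into template freq structures before the second step, and the variable names here are assumptions.

% Step 1: within-subject t-map for precedent mismatch vs. no precedent,
% computed separately for the same- and the different-speaker condition
for s = 1:nsubj
    cfg           = [];
    cfg.method    = 'analytic';
    cfg.statistic = 'ft_statfun_indepsamplesT';  % trials are the observations
    cfg.design    = [ones(1, nMM(s)) 2*ones(1, nNP(s))];
    cfg.ivar      = 1;
    tSame{s} = ft_freqstatistics(cfg, freqSameMM{s}, freqSameNP{s});
    tDiff{s} = ft_freqstatistics(cfg, freqDiffMM{s}, freqDiffNP{s});
end

% Step 2: group-level cluster test on the first-level t-values; a reliable
% difference between the same- and different-speaker t-maps emulates the
% speaker x precedent interaction
cfg                  = [];
cfg.method           = 'montecarlo';
cfg.statistic        = 'ft_statfun_depsamplesT';
cfg.parameter        = 'stat';                   % test the t-values themselves
cfg.correctm         = 'cluster';
cfg.neighbours       = neighbours;
cfg.numrandomization = 1000;                     % assumed
cfg.design           = [1:nsubj 1:nsubj; ones(1,nsubj) 2*ones(1,nsubj)];
cfg.uvar             = 1;
cfg.ivar             = 2;
interactionStat      = ft_freqstatistics(cfg, tSame{:}, tDiff{:});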
To identify the sources underlying the sensor-level effects, individual single-shell (Nolte 2003) head models were generated based on the individual MRI, aligned with the MEG sensor array via the head digitization described earlier. Voxel size was 6 mm, and all individual head models were normalized to a standard brain prior to analysis. ERF sources were identified using a Linearly Constrained Minimum Variance (LCMV) beamformer (Van Veen and Buckley 1988), where we calculated a common LCMV filter for all 4 conditions (to increase SNR) per participant. This common filter was then used to transform ("beam") the individual conditions into source (voxel) space for comparisons between conditions. For identifying generators of theta oscillations, we employed Dynamic Imaging of Coherent Sources (DICS) beamformers (Gross et al. 2001). In this case, we were able to use condition-specific spatial filters that could potentially reveal qualitative differences between conditions. DICS was also employed for localizing (inter-trial) phase-coherent sources in theta (4-6 Hz) by means of cross-spectral density matrices in relation to particular reference signals (see Gross et al. 2004; Kessler et al. 2006). For statistical testing of source localizations underlying ERF and time-frequency effects and of coherent sources, we used the same cluster-based approach, in this case clustering only over voxels. Time windows and frequency ranges (in the case of time-frequency sources) were chosen based on significant sensor-level effects. Sources identified in the theta-power analysis were employed as references for the theta coherence analysis. For this type of analysis, we reduced the multiple-comparisons problem by using a "false discovery rate" (FDR) approach, since it has been suggested to be more sensitive to spatially localized effects compared with the bias toward more widespread effects in cluster-randomization (Groppe et al. 2011).
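The source pipeline can likewise be sketched in FieldTrip-style MATLAB. The sketch below shows the single-shell head model, the common-filter LCMV step for ERF sources, and a DICS localization of theta power; the regularization values, grid variables, and data structures are assumptions (older FieldTrip versions use cfg.grid instead of cfg.sourcemodel).

% Single-shell head model (Nolte 2003) from the individual, segmented MRI
cfg        = [];
cfg.method = 'singleshell';
headmodel  = ft_prepare_headmodel(cfg, segmentedmri);

% LCMV beamformer with a common filter over all 4 conditions (to increase SNR);
% tlckAll is a timelocked average computed with cfg.covariance = 'yes'
cfg                 = [];
cfg.method          = 'lcmv';
cfg.headmodel       = headmodel;
cfg.sourcemodel     = sourcemodel;   % 6-mm grid, normalized to a standard brain
cfg.lcmv.keepfilter = 'yes';
cfg.lcmv.lambda     = '5%';          % regularization: assumed value
srcAll              = ft_sourceanalysis(cfg, tlckAll);

cfg.sourcemodel.filter = srcAll.avg.filter;         % re-use the common filter
srcCond                = ft_sourceanalysis(cfg, tlckCond);  % "beam" one condition

% DICS localization of theta power (3-7 Hz, 200-800 ms post-naming);
% freqCSD holds the cross-spectral density from ft_freqanalysis
cfg        = [];
cfg.method = 'mtmfft';
cfg.output = 'powandcsd';
cfg.taper  = 'hanning';
cfg.foilim = [3 7];
freqCSD    = ft_freqanalysis(cfg, data_postnaming);

cfg             = [];
cfg.method      = 'dics';
cfg.frequency   = 5;                 % centre of the 3-7 Hz band
cfg.headmodel   = headmodel;
cfg.sourcemodel = sourcemodel;
cfg.dics.lambda = '5%';              % assumed
srcTheta        = ft_sourceanalysis(cfg, freqCSD);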
Results and Discussion
Confirming the results of previous studies (Metzing and Brennan 2003; Kronmüller and Barr 2007; Matthews et al. 2010), our behavioral results indicated that listeners experienced greater confusion for precedent mismatches produced by the same speaker as compared with those produced by a different speaker. Figure 2 shows that both reaction times and choices revealed an interaction between speaker (same/different) and precedent (mismatch/no precedent). ANOVAs confirmed the prediction of a larger precedent mismatch vs. no precedent effect in the same-speaker than in the different-speaker case for RT (longer RTs, F1,15 = 8.43, P = 0.011) and for response choices (more "different" responses; F1,15 = 21.15, P < 0.001). These interaction effects allowed us to specifically search for the neural substrates involved in generating these effects. We primarily analyzed MEG signals in the frequency domain since this type of analysis (in contrast to averaging, i.e., ERFs) is sensitive to evoked as well as to induced brain signals (e.g., Pfurtscheller and Lopes da Silva 1999). We found strong effects in the theta band (4-6 Hz) that reflected widespread differences in cortical activity across conditions. These are reported in the next section, whereas the more confined effects in alpha (9-13 Hz) and gamma (66-78 Hz) are reported in Supplementary Figure S1.
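For readers who want to reproduce the logic of this 2 x 2 within-subjects test, a compact MATLAB sketch using fitrm/ranova (Statistics and Machine Learning Toolbox) is shown below. The per-participant table rtTable and its column names are hypothetical; the paper does not state which ANOVA software was used.

% rtTable: 16 rows (participants) x 4 condition means (hypothetical names):
% RT_sm = same/mismatch, RT_sn = same/no precedent,
% RT_dm = different/mismatch, RT_dn = different/no precedent
within = table(categorical({'same'; 'same'; 'diff'; 'diff'}), ...
               categorical({'mismatch'; 'noprec'; 'mismatch'; 'noprec'}), ...
               'VariableNames', {'speaker', 'precedent'});
rm  = fitrm(rtTable, 'RT_sm,RT_sn,RT_dm,RT_dn ~ 1', 'WithinDesign', within);
tbl = ranova(rm, 'WithinModel', 'speaker*precedent');
disp(tbl)  % the speaker:precedent row carries the F(1,15) interaction test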
Theta Oscillatory Effects
Theta Power (Sensor and Source Space)

Analyses were time-aligned to the onset of the spoken expression, such that negative values of the time variable represent processes taking place during the picture preview (pre-naming), whereas positive values represent processes taking place after onset of the verbal expression (post-naming). Consistent with other studies reporting theta oscillations in the context of episodic working memory and language processing (e.g., Bastiaansen et al. 2002; Hagoort et al. 2004; Bastiaansen et al. 2005; Jensen and Colgin 2007; Fuentemilla et al. 2010; Giraud and Poeppel 2012), we found significant modulations of frequencies between 4 and 6 Hz. A time-frequency analysis at the sensor level between −800 and 1000 ms in the range of 2-30 Hz revealed a significant cluster (P = 0.012) in the theta range (4-6 Hz) for the precedent mismatch vs. no precedent comparison within the same-speaker condition, in a time window around 350-650 ms after naming onset (Fig. 3, top row). The corresponding comparison for the different speaker did not reveal a significant effect in theta (Fig. 3, bottom row) or any other frequency (see Supplementary Fig. S1), corroborating the special status of the same-speaker, precedent mismatch trials observed in the behavioral data.
We localized the sources of the observed sensor-level theta effect (same speaker: precedent mismatch vs. no precedent; see Fig. 3, top row) using DICS (see Methods), collapsing over a post-naming time window between 200 and 800 ms (to include ∼3 theta cycles) and 3 to 7 Hz. We chose these parameters to cover the maximum of the sensor-level effects across time samples and frequencies. The findings for the same-speaker contrast (Fig. 4, Panel A; Supplementary Table S1, Panel A) revealed sources in areas that previous research (see Introduction) has related to: (1) mentalizing (right TPJ, ventromedial prefrontal cortex vmPFC, right PC), (2) episodic working memory including executive function (right parahippocampal gyrus PHG, left lateral (lat)PFC), (3) language (left temporal cortex TC, including left temporal pole TP), (4) attention (right posterior parietal cortex, PPC), and (5) motor functions (left lateral premotor and motor cortex, PMC). This source pattern conformed very strongly to our expectations regarding functional processing networks interacting in the post-naming interval (see Introduction).

[Figure 2 caption displaced into the text above: Behavioral responses; RT (left) and choice responses ("different" or "same" picture; right). *P < 0.05; **P < 0.001.]
In order to further strengthen our pattern of results, we conducted a two-step analysis (see Methods) with the frequency characteristics and within the time interval described earlier in order to analyze the speaker-by-precedent interaction in source space. This analysis revealed significant interactions between speaker and precedent that showed a strikingly similar pattern (1 spatially distributed cluster, P = 0.008; Fig. 4, Panel B; Supplementary Table S1, Panel B) to that of the simple contrast in the same-speaker condition described earlier.

[Figure 4 caption displaced into the text above: Theta-power sources localized for the post-naming interval by means of DICS (see Methods). Sources in red show a power increase in 3-7 Hz for the same-speaker precedent mismatch as compared with the no precedent condition (Panel A) or as a result of an interaction between speaker and precedent (Panel B). The color-coded scale represents t-values. Labels are L for left and R for right hemisphere; SM1, primary sensori-motor cortex; PMC, premotor cortex; PPC, posterior parietal cortex; OCC, occipital cortex; latPFC, lateral prefrontal cortex; TP, temporal pole; TC, temporal cortex; TPJ, temporo-parietal junction; PHG, parahippocampal gyrus; PC, precuneus; vmPFC, ventromedial prefrontal cortex. Further explanations are given in the text and Supplementary Table S1.]
Additional effects in the two-step analysis extended to the right lateral PFC, typically associated with executive functions in working memory. Effects also seemed to be more pronounced and extensive in core areas around the right TPJ, PC, and vmPFC, previously linked to mentalizing (see Introduction). Episodic working memory areas in the right PHG and the left latPFC also revealed more pronounced levels of significance in this two-step analysis. In the left language areas, the focus of significance was shifted toward TP but still comprised middle TC.
Overall, the pattern across the 2 types of analysis is reassuringly consistent and highlights the involvement of typical mentalizing, episodic working memory, and language areas (conforming to previous research, see Introduction); a pattern that is highly specific to the same-speaker, precedent mismatch condition and to the post-naming interval. Note that observing the same basic pattern of theta-power effects in the interaction as well as in the simple contrast rules out the possibility that a negative effect for the different-speaker conditions may have manifested as an overall positive effect compared with the same-speaker conditions, thus potentially driving the interaction effect. This is further in line with the reported sensor-level effects (see Fig. 3), where the same-speaker, precedent mismatch condition revealed the strongest theta-power increases.
Finally, in order to substantiate whether the sources observed for theta power in the post-naming interval, particularly in mentalizing areas, were indeed significantly less active in the pre-naming interval, we compared the 2 time periods directly by means of another two-step analysis. For the pre-naming interval, we chose a time window between −800 and −200 ms, of the same length as the 200- to 800-ms post-naming interval. The first step consisted of comparing the precedent mismatch to the no precedent condition for the same speaker (controlling for low-level sensory differences) both pre- and post-naming. The pre-naming contrast was then compared with the post-naming contrast in the second step. This led to 1 positive cluster (P = 0.006); see Figure 5 (and Supplementary Table S2). The results further corroborate our interpretation that most theta sources observed in the previous analysis, and particularly those in typical mentalizing (and related social) areas such as TPJ, PC, TP, and parts of the vmPFC, showed significantly stronger activation for same-speaker, precedent mismatch vs. no precedent trials in the post-naming interval. The right PHG and bilateral visual areas (occipital OCC) also revealed significantly stronger theta power for this comparison in the post-naming interval. Based on the existing literature (see Introduction), this suggests stronger episodic retrieval in the right hemisphere along with stronger visual processing in response to the naming mismatch. Left TC activation could indicate that a mismatch with an anticipated precedent (as compared with just hearing a new name without any precedent) may have led to more prominent language area activation than building up anticipation for a certain term pre-naming. As a form of "reality check," left motor areas (MC) also showed up in this analysis. This was expected because no difference in motor response should be present pre-naming, whereas after naming, response processing was stronger for precedent mismatch than for no precedent trials.
Theta Phase-Coherence (Source-Space)
To obtain a picture of the functional connectivity between these various brain areas, we analyzed phase-coherence in the 4-6 Hz band between 200 and 800 ms post-naming (see Methods). We contrasted precedent mismatch and no precedent conditions for the same speaker to identify cortical areas that revealed coherence differences relative to a particular reference site. The overall pattern of coherence is shown in detail in Figure 6 (and Supplementary Table S3) and reveals functional connectivity effects (statistical maps of significant theta phase-coherence effects) in relation to "seed" areas (reference sites) taken from the previous theta-power source analyses. Red color denotes areas that are coupled significantly more strongly with the respective seed area in the mismatch compared with the no-precedent condition (mismatch > no precedent), whereas blue color denotes the reverse effect (mismatch < no precedent).

[Figure 5 caption displaced into the text above: Theta-power sources before and after naming, localized by means of DICS (see Methods). Sources in red show a power increase in 3-7 Hz for contrasting same-speaker, precedent mismatch vs. same-speaker, no precedent in the post-naming interval compared with the same contrast in the pre-naming interval. Within the same-speaker condition, we compared precedent mismatch vs. no precedent conditions separately for the post- and the pre-naming intervals and for each participant (first step) and then employed a group-level statistic (second step) for comparing the 2 intervals. The color-coded scale represents t-values. For source labels, see Figure 4. Further source details are reported in Supplementary Table S2.]
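In FieldTrip, such seed-based coherence maps can be obtained by handing the DICS beamformer a reference dipole at the seed location. The sketch below is illustrative: the seed coordinates, regularization value, and data structures are placeholders, and only the between-condition difference of the resulting maps is interpreted (with FDR correction, as described in Methods).

% DICS coherence with respect to a seed (e.g., the right TPJ theta-power peak)
cfg             = [];
cfg.method      = 'dics';
cfg.frequency   = 5;                  % centre of the 4-6 Hz band
cfg.refdip      = [52 -54 24];        % hypothetical seed position (mm)
cfg.headmodel   = headmodel;
cfg.sourcemodel = sourcemodel;
cfg.dics.lambda = '5%';               % assumed
srcCohMM        = ft_sourceanalysis(cfg, freqCSD_mismatch);
srcCohNP        = ft_sourceanalysis(cfg, freqCSD_noprec);

% srcCoh*.avg.coh maps the coherence of every voxel with the seed; the
% mismatch-minus-no-precedent difference is then tested with FDR correction
cohDiff = srcCohMM.avg.coh - srcCohNP.avg.coh;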
Left TP and left TC were coupled to left PHG and left/right latPFC, and to vmPFC, which we interpret based on the existing literature (see Introduction) as the functional coupling between subnetworks related to language, episodic working memory, and mentalizing. This functional interpretation is further corroborated by the statistically significant coupling between right PHG and areas in latPFC, mTC, and left TPJ. Caudal anterior cingulate cortex has been related to conflict monitoring, and the currently observed coupling with temporal cortex could reflect conflict resolution between current and retrieved naming.
Importantly, when using the right TPJ as a reference site, we found the left vmPFC to be significantly more coherent (mismatch > no precedent). This corroborates the notion that information exchange within the mentalizing network was engaged more strongly for precedent mismatch trials by the same speaker compared with when no precedent had been established during the interaction. In addition, and somewhat surprisingly, significantly less coherence (mismatch < no precedent) was found in relation to right middle TC and to right PMC. Reduced theta phase-coherence in the mismatch compared with the no precedent condition could suggest active decoupling between 2 areas and could reflect suppression (e.g., Gross et al. 2004). The present result could be interpreted as TPJ suppressing predictions that were generated by the right hemisphere (particularly TC) during the pre-naming interval (e.g., Gardner et al. 1983; Marini et al. 2005; Federmeier 2007 for review; Tourville and Guenther 2011). While this remains speculative because coherence is a correlative measure that does not allow inferring a direction of influence, the right middle TC also reveals a similar pattern of decoupling in relation to the left lateral PFC (Fig. 6), supporting the notion of top-down suppression of a "wrong" linguistic prediction after the same speaker used a term that mismatched with the expected precedent.
Analysis of "Same-Speaker Deliberation" Trials

So far, the results reported for the oscillatory domain have revealed stronger theta activity in the post-naming interval for same-speaker, precedent mismatch trials as compared with no precedent trials, but no significant differences at all were observed in the pre-naming interval, and a direct (theta power) comparison showed that the post-naming effects were significantly stronger than the pre-naming ones. Importantly, the post-naming effects in theta power and coherence have included areas, and their coupling, which previous research has identified as core mentalizing areas (see Introduction). The results so far, therefore, support the notion of "on-demand" involvement of perspective taking and mentalizing in language processing. However, it might be difficult for slow theta effects to reach significance in the relatively short (800 ms) pre-naming interval. Furthermore, to the extent that mentalizing might be taking place equally across all conditions, specifically in the pre-naming interval, these networks might not show up in any cross-condition comparisons. Although this seems unlikely, given the robust mentalizing differences observed in the post-naming interval, we made a final attempt to identify anticipatory brain activity in general and activity that could be related to mentalizing in particular. To create the best opportunity for finding mentalizing effects, we specifically targeted those trials within the same-speaker, precedent mismatch condition in which the behavioral evidence suggested engagement of mentalizing processes, either through a "different picture" response or through a "same picture" response with a particularly slow response time (see Methods for details; a sketch of the selection rule follows below). As these are trials where some sort of deliberation has evidently occurred, we refer to them as "same-speaker deliberation trials." We compared these trials again to the same-speaker, no precedent trials; by selecting this subset with the highest probability of deliberation, conforming to our behavioral indicators, any activation of mentalizing networks in the pre-naming interval that may have occurred should now become apparent.

[Figure 6 caption displaced into the text above: Theta phase-coherence (4-6 Hz) localized for the post-naming interval by means of DICS (see Methods). Differences in coherent sources between same-speaker, precedent mismatch minus same-speaker, no precedent (FDR-corrected significant t-values) in reference to theta-power sources in latPFC, left TP, left mTC, right PHG, and right TPJ as identified in the reported power analyses (see Fig. 4). The color-coded scale represents t-values. New source labels in this figure are ACC, anterior cingulate cortex; SMA, supplementary motor area. For all other source labels, see Figure 4. Red-yellow sources denote stronger coherence in the same-speaker, precedent mismatch compared with the same-speaker, no precedent condition, whereas blue sources denote the opposite effect. Further explanations in the text and Supplementary Table S3.]
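The deliberation-trial selection rule defined in Methods ("different picture" responses plus the slowest third of "same picture" responses) reduces to a few lines of MATLAB; the trial structure and its field names below are assumptions.

% Select same-speaker "deliberation" trials for one participant
isMM      = strcmp({trials.condition}, 'same_mismatch');
diffResp  = find(isMM & strcmp({trials.response}, 'different'));

sameResp  = find(isMM & strcmp({trials.response}, 'same'));
[~, ord]  = sort([trials(sameResp).RT], 'descend');
slowThird = sameResp(ord(1:ceil(numel(sameResp)/3)));  % slowest third of "same" responses

deliberationIdx = union(diffResp, slowThird);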
Note that pre-selecting these trials is especially favorable toward a central, anticipatory view of mentalizing and works against more "on-demand," egocentric accounts. If participants give a "different picture" response, they show evidence that they took the perspective of the speaker and noted that she probably sees a different picture if using a different name than before. Slow responses at least indicate that participants probably did not expect this name, possibly because they used mentalizing to anticipate a certain name. Hence, if even this subset of trials does not support anticipatory mentalizing activity in the pre-naming interval, this would provide a strong argument against the idea that listeners use mentalizing spontaneously to generate speaker-specific linguistic predictions. Deliberation trial selection was applied separately to the time- and frequency-domain data, yet only evoked responses in the time domain (ERF; cf. ERP) revealed significant anticipatory effects in the pre-naming interval (Fig. 7). Complete ERF results are reported in Supplementary Figure S2.
ERF Analysis of Same-Speaker Deliberation Trials (Sensor and Source Space)
Focusing on same-speaker deliberation trials, as compared with the same-speaker, no precedent trials, we found 3 significant clusters between −800 and 1000 ms. Suggesting anticipatory processing, 2 were in the pre-naming preview phase: 1 between −550 and −23 ms (P = 0.004) and 1 between −306 and 0 ms (P = 0.004), both with a predominantly right-hemisphere topography (Fig. 7, left; Fig. 8, Panel A). One cluster (P < 0.00001) lasted from 67 to 680 ms after naming, with a maximum number of significant channels around 400 ms and a predominantly left-hemisphere topography (Fig. 7, right; Fig. 8, Panel B). Source analysis was then employed for both the pre- and post-naming intervals to examine whether the anticipatory clusters involved mentalizing in addition to episodic retrieval.
In the early interval, a time window between −350 and −150 ms (before naming onset) was centered on the peak of the sensor-level effect, and 1 significant, spatially distributed cluster was found (P < 0.00001). For the LCMV beamformer analysis of the post-naming interval, a time window between 300 and 500 ms was centered on the peak of the effect at sensor level, and another significant, spatially distributed cluster was found (P < 0.00001). Conforming to the topography shown in Figure 7 (left), sources for the pre-naming interval were located predominantly in the right hemisphere (see Fig. 8, Panel A, and Supplementary Table S4, Panel A). Sources comprised areas typically associated (see Introduction) with episodic working memory (right dlPFC, ACC, left PHG), language (right mTC), and visual processing (left occipito-temporal cortex OTC, left parieto-occipital cortex POC, right OCC). In contrast, no clear mentalizing activation could be identified according to the typical areas reported in the literature and reviewed in the Introduction. This pattern of results is in agreement with egocentric processing, rather than partner-oriented anticipation (see Introduction). PHG involvement in particular suggests that participants retrieved the episodic context associated with the specific target object they were viewing. Differences in visual areas further support the notion that episodic retrieval of previously named objects was more visually detailed. PHG might therefore play an anticipatory role in conjunction with dlPFC for retrieving the episodic context of the interaction with the target object, including information about the speaker and the name used, prior to the current naming of the object (e.g., Imaizumi et al. 1997; Epstein and Kanwisher 1998; Olson et al. 2006; Rankin et al. 2009). The latter is in agreement with the observed effects in typical language-related areas of the middle TC in the right hemisphere, possibly indicating anticipation of the name previously associated with the picture during interaction (e.g., Gardner et al. 1983; Marini et al. 2005; Federmeier 2007; Tourville and Guenther 2011). Effects in ACC are compatible with the notion of anticipation of cognitive effort or conflict (Sohn et al. 2007; Aarts et al. 2008). Due to the substantial number of precedent mismatch trials (see Table 1), participants may have learned to anticipate conflict with the linguistic predictions they generated based on their previous interaction.

[Figure 8 caption displaced into the text above: Panel A shows sources for the pre-naming interval (together with the corresponding ERF topography from Fig. 7, left). Panel B shows the sources for the late interval (ERF topography from Fig. 7, right). The color-coded scale represents t-values. Source labels do not conform to Figures 4 and 6 apart from POC, parieto-occipital cortex; OTC, occipito-temporal cortex; vlPFC, ventro-lateral prefrontal cortex. Further explanations are in the text and in Supplementary Table S4.]
While we found evidence that speaker-specific predictions were generated based on episodic retrieval, it is doubtful whether this activation reflects common ground processing or merely context-specific retrieval (see Introduction). Given the lack of evidence for a difference between same-speaker deliberation and no precedent trials in the activation of TPJ and other mentalizing areas reported in the current literature (see Introduction) during this pre-naming period, we must conclude that the overall picture of results is consistent with the general idea that communicatively relevant, partner-specific information can be activated through basic memory processes without mediation by access to common ground in terms of active co-representation of the other's mental states (Horton and Gerrig 2005; Barr et al. 2014).
Sources for the post-naming interval were predominantly left-lateralized (Fig. 8, Panel B, and Supplementary Table S4, Panel B), in concordance with the sensor topography of the ERF effects (Fig. 7, right). Effects in typical motor/premotor areas (PMC) might reflect more intense or more conflicting motor preparation, which would fit with ACC involvement, which has been linked to conflict monitoring as well as anticipation (Sohn et al. 2007). PHG also seemed to be involved in both intervals (and across all types of analysis), possibly suggesting that episodic retrieval efforts may have been continuously engaged more strongly in deliberation trials. This is of particular interest in the context of mentalizing effects in left TPJ and vmPFC only "after" naming. Samson et al. (2004) pointed out the particular relevance of the left TPJ for reasoning about others' beliefs, which in the present context only seemed to be engaged "on demand," when mentalizing was required to resolve a conflict.
Relating ERF and Theta Results
The pattern observed in the ERF analysis of same-speaker deliberation trials complements and corroborates our results based on theta power and phase-coherence. So far, a few of the sources had only been reported in theta phase-coherence analysis. These have now been confirmed in the ERF analysis of deliberation trials: left PHG, left TPJ, caudal ACC, right PMC, and right middle TC (see Figs 6 and 8). Importantly, however, potential mentalizing in the left TPJ was only confirmed for the post-naming interval (see ERF and coherence analyses). Furthermore, an area in the right middle TC that was most likely associated with linguistic predictions in the pre-naming interval (ERF analysis) revealed de-coupling of theta phase during the post-naming interval in relation to left lateral PFC and right TPJ (Fig. 6). This corroborates the proposed notion (see Section on theta phase-coherence) that "wrong" linguistic predictions might be suppressed when common ground is reestablished, that is, when an apparent precedent violation is resolved via mentalizing.
Although our results consistently show activation of areas previously related to mentalizing processes in the post-naming, but not the pre-naming interval, 2 potential concerns could be raised. First, areas such as the TPJ, vmPFC, and precuneus have reliably been related to mentalizing activities but could also reflect other functions, since many brain areas participate in more than one cognitive activity. Besides mentalizing, the TPJ, for example, has been related to reorienting of attention to an unexpected stimulus (e.g., Corbetta and Shulman 2002; Mars et al. 2012). In the context of the present paradigm, a mismatching name might be considered an unexpected stimulus, because participants anticipate the previously mentioned name based on memory retrieval of the picture and its context. Insofar as the expectation for a certain name is stronger in the same-speaker condition (because of a tighter contextual similarity), the unexpectedness of the mismatch might be most prominent in the same-speaker, mismatch condition and therefore lead to the strongest TPJ response in that condition. It is difficult to rule out such an alternative explanation directly. However, we found both right and left TPJ to be involved across different analyses. Also, our theta phase-coherence results revealed a coupling between the right TPJ and the left vmPFC, 2 areas that have been strongly associated with the mentalizing network, and their direct functional coupling is harder to reconcile with a stimulus expectancy account than with a mentalizing account (see also Mars et al. 2012). Nevertheless, further converging research, using similar paradigms, will be necessary to completely rule out alternative explanations and support our interpretation that more extensive mentalizing is required in response to experimental conditions such as the current same-speaker, mismatch condition. For example, an fMRI study, because of its better spatial resolution, could pinpoint more specifically the part of the TPJ involved, allowing for more precise functional interpretations (see, e.g., Mars et al. 2012). However, such an approach would not be able to corroborate our findings in terms of "when" mentalizing is engaged.
A further caveat concerns the fact that no differences in mentalizing areas were found in the pre-naming interval. This is in some respects a null result, which should be interpreted with care and needs to be replicated by future studies. However, our direct comparison of theta power between the pre- and post-naming intervals revealed that most effects (including TPJ) reported for the post-naming interval were significantly stronger than pre-naming, which corroborates our current interpretation. Finally, in our ERF analysis of deliberation trials, an extensive network of areas (typically related to episodic memory and language in previous research) was found to be activated in the pre-naming period, showing that our analysis was sensitive and powerful enough to pick up differences between the conditions in that interval. Still, no typical mentalizing area such as TPJ was involved in this pattern, further corroborating our conclusion that these areas do not appear to be involved in anticipatory processing.
General Discussion
Our results indicate that brain areas typically related to language, vision, episodic working memory, and mentalizing in previous research are dynamically and jointly involved in detecting and resolving conflict after encountering a reference that mismatches a precedent previously negotiated with the same speaker. The reported effects on behavior, evoked responses, oscillations, and sources were most pronounced when the mismatch occurred with respect to a precedent established by the same speaker, compared with processing of trials without a precedent or when the precedent had been established by a different speaker. It is important to note that the paradigm employed here was special in making the speaker's identity known to the listener before each block of test trials, thus giving ample opportunity for listeners to mentalize and access common ground to enhance speaker-specific linguistic predictions.
The dynamics of theta oscillations, sources, phase couplings, and evoked responses are fundamentally novel findings in themselves and have consistently revealed that listeners do access common ground previously established with a specific speaker (but not another speaker) and that these processes seem to involve speaker-specific episodic retrieval as well as mentalizing. These findings are compatible with most behavioral experiments showing partner-specific processing of referential precedents. However, the results across several very different analysis approaches in source space (theta power, theta coherence, ERF) also shed light on the mechanisms that are engaged in anticipation of an upcoming linguistic reference, in contrast to those mechanisms that are engaged in reaction to an apparent pragmatic violation of previously established common ground. Overall, we found more robust mentalizing effects (evoked and oscillatory theta activity) in the post-naming interval, suggesting that on a majority of trials, participants engaged in enhanced episodic retrieval and, most importantly, mentalizing in response to a perceived violation.
Only when focusing on those trials where mentalizing was most likely to occur (that is, same-speaker deliberation trials) did we find anticipatory (pre-naming) evoked activity (ERF analysis), but this was only for brain areas typically linked to episodic retrieval, linguistic predictions, and conflict anticipation, and not for areas typically involved in mentalizing. Importantly, episodic recall of speaker-specific representations is not identical to common ground, as episodic representations can be shared without also being known to be shared (Shintel and Keysar 2007). That is, representations may become activated via the basic memory retrieval principle of encoding specificity rather than through any process that genuinely reflects processing of the speaker's experience, such as that produced by mentalizing.
It is important to emphasize that selecting this particular subset of same-speaker deliberation trials was most favorable for finding anticipatory mentalizing activity; if these trials do not show anticipation of the speaker's mental states in the TPJ, the area primarily associated with perspective taking, then no other trial type is more likely to do so. Thus, looking back at the 2 accounts we contrasted in the Introduction, the results of our analyses support the conclusion that anticipating specific speakers' referential behavior based on mentalizing in relation to previously established common ground may not be a spontaneous, default process. Instead, the default process seems to be more egocentric, with anticipation relying only on episodic retrieval of visual and linguistic associations without any inference of the speaker's current mental states. Mentalizing appears to be mainly engaged "on demand" once a pragmatic violation has been established and a deliberate decision has been made to account for it. Moreover, this evidence suggesting on-demand engagement of mentalizing has been obtained in the ecologically valid context of a realistic communicative interaction with live interlocutors.
Although the pattern of results was reassuringly consistent across our varied analyses, it is important to point out once again that the segregation of functional brain modules and the links between brain areas and function are based on the current state of the literature and must therefore be regarded as hypothetical and exploratory at the current stage. Future studies are necessary to corroborate these findings, yet we believe that the current results and interpretations will provide valuable constraints and hypotheses for future research. It should also be noted that our results were obtained with a novel, socially interactive paradigm that is quite different from most neurocognitive paradigms employed to date, which means that there was no clear evidentiary basis on which we could have generated more confirmatory-style predictions. Therefore, future research is needed to confirm the results of the present study, using similar or even more realistic paradigms. Further research is also needed to establish the generality of these results beyond the communicative scenario that we have investigated. It could well be the case that in certain scenarios mentalizing is indeed engaged more spontaneously, for example, in conversation with children, who are not expected to adhere to common ground or to be able to fully engage in mentalizing themselves. Finally, compared with natural conversation, the present study used a somewhat repetitive task, which was necessary at the current stage of our research for maintaining sufficient experimental control and statistical power. Still, the interactive nature of the paradigm should have been sufficient to motivate participants to remain focused throughout the experiment. Our study could be a stepping stone toward even more naturalism, along a path that might eventually allow neuroimaging to tackle the unrestrained nature of real conversation.
Conclusions
In the current MEG study, we employed an innovative experimental paradigm that combined an initial phase of live conversational interaction with a confederate speaker and a subsequent test phase of prerecorded speech from either the same or another speaker. Naturalistic negotiation of referential precedents in the interactive phase was sometimes followed by a referential mismatch in the test phase. The critical condition was when the same speaker produced a mismatch, requiring participants to engage in mentalizing in order to judge whether the speaker was still referring to the same object.
Based on a substantial body of previous research relating specific cognitive functions to certain brain areas, our results for theta oscillations, theta sources, theta phase couplings and evoked responses consistently indicate that brain areas typically involved in language, vision, episodic working memory and mentalizing were dynamically and jointly involved in resolving conflict after encountering a mismatching reference by the same speaker (but not another speaker). However, we found more robust mentalizing effects in the post-naming than in the pre-naming interval, suggesting that on a majority of trials, participants engaged in mentalizing only in response to a perceived violation. Only when focusing on those trials where mentalizing was most likely to occur did we find any anticipatory ( pre-naming) activity, but this activity was confined to brain areas typically linked to episodic retrieval, linguistic predictions, and conflict anticipation, and did not include areas typically involved in mentalizing. Importantly, episodic recall of speaker-specific representations is not identical to common ground, as episodic representations can be shared without also being known to be shared (Shintel and Keysar 2007).
In conclusion, default processing of utterances that violate common ground seems to be quite egocentric at first, with anticipation relying primarily on episodic retrieval of visual and linguistic associations without any inference about the speaker's current mental states. The latter appears to be mainly engaged "on demand" once a pragmatic violation has been established and a deliberate decision has been made to account for it. | 2016-05-04T20:20:58.661Z | 2014-06-05T00:00:00.000 | {
"year": 2014,
"sha1": "67d17af23557bed732270c3bccfc1ae5ea00a300",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/cercor/article-pdf/25/9/3219/14110370/bhu116.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "f37fee738d44ff35b8beb19d296277e2d75fc8df",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
234626141 | pes2o/s2orc | v3-fos-license | The Application of Natural Dyes Soga Tingi on Tanned Leather for Dyeing with Jumputan Tie Technique
The purpose of this research is the application of a natural dye extracted from Soga Tingi wood (Cereopcandolleana L) to tanned leather, dyed using the jumputan tie technique, evaluated for color absorption and fastness. The materials used were Soga Tingi wood and sheep crust leather. The experimental method comprised four stages: (1) extraction of Soga Tingi wood by the counter current method, with two material variations (dry Soga Tingi and fermented Soga Tingi); (2) tying the jumputan motif on the sheep crust leather; (3) application of the Soga Tingi natural dye by the Through Dyeing method, varying (a) Soga Tingi dye concentration, (b) dyeing time, (c) pH, and (d) drum rotation speed (rpm); and (4) testing of color absorption and of color fastness to rubbing (wet and dry), assessed against the gray scale and staining scale standards, together with the level of difficulty of the jumputan tie technique on sheep crust leather. Data were analyzed using analysis of variance. Applying the tannin-containing Soga Tingi dye to sheep crust leather by the Through Dyeing method, the optimal % absorption at a dye concentration of 12.0% was 80.0% (fermented Soga Tingi) and 76.0% (dry Soga Tingi), at pH 4.8 to 5.3, a dyeing time of 120 minutes, and a drum rotation speed of 12 rpm. The color fastness tests gave a dry rubbing value of 5.0 (good, no fading) and a wet rubbing value of 3.5 (good enough) for dry Soga Tingi, and a dry rubbing value of 4.5 (good, no fading) and a wet rubbing value of 4.0 (good) for fermented Soga Tingi. The levels of difficulty of the jumputan tie technique on sheep crust leather were 20% (low) for the single tie technique, 40% (medium) for the double tie technique, and 65% (high) for the cross-tie technique.
Natural dyes such as Soga Tingi can be used as coloring agents in the dyeing of tanned leather with the jumputan tie technique, so that the waste is safe for workers and has no toxic effect on the environment.
Material of the Research
Materials and tools for extraction: Soga Tingi wood (Cereopcandolleana L), distilled water (aquadest) as solvent, extraction equipment, analytical scales, heaters, stirrers, a UV-Vis spectrometer, pH meters, and a Baumé meter. Materials and tools for the coloring process: tanned leather (sheep crust), Soga Tingi dye (dry/fermented), aquadest and local well water as process water, and a rotary drum 50 cm in diameter equipped with temperature and rpm settings, analytical scales, and pH meters. Testing used a crockmeter, a gray scale standard, and a staining scale to assess color staining, color absorption, and color fastness.
Research Methods
The experimental research was carried out in a laboratory in 4 (four) stages: (1) extraction of the natural dye from Soga Tingi wood by the counter current method, using dry and fermented material; (2) tying the jumputan motif on sheep crust leather; (3) application of the Soga Tingi extract to tanned sheep crust leather by the Through Dyeing method; and (4) testing of color absorption and color fastness. Data on the properties and characteristics of the dye (density and yield) and on the quality of the Soga Tingi coloring of sheep crust leather (fade resistance) were analyzed using ANOVA.
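As an illustration of this analysis step, the sketch below runs a one-way ANOVA on replicate color-density measurements grouped by extraction treatment. It is a minimal example rather than the authors' analysis script; the replicate values are invented placeholders, and only the grouping (fermented versus dry Soga Tingi) follows the text.

```python
# One-way ANOVA sketch on hypothetical replicate color densities for the
# two extraction treatments. All numbers are placeholders, not study data.
from scipy import stats

fermented = [5.6, 5.8, 5.7]  # FST1-FST3, degrees Baume (placeholders)
dry       = [4.1, 4.3, 4.2]  # DST1-DST3, degrees Baume (placeholders)

f_stat, p_value = stats.f_oneway(fermented, dry)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
# A large F (the paper reports Fcount = 65.012 for color density) means the
# treatment explains far more variance than replicate noise does.
```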
The Result of Extraction of Dye
The extract of vegetable tannin from the Soga Tingi plant (Cereopcandolleana L) is a natural dye of plant origin, bitter and astringent in taste, containing tannin derived from the wood or bark. It is used in leather tanning and dyeing, either directly or in concentrated form obtained by re-extracting the tannin material. Two treatments were compared: fermented Soga Tingi (FST), in which the wood underwent a 5-day fermentation to obtain a more concentrated brown-yellow tannin dye, and dried Soga Tingi (DST) without fermentation. Extraction was then performed using the counter current method, a multiple liquid-liquid extraction in which two or more substances are separated on the basis of their distribution ratios. For each run, 200.0 grams of Soga Tingi was weighed, with three repetitions for fermented Soga Tingi (FST1, FST2, and FST3) and for dry Soga Tingi (DST1, DST2, and DST3), with immersion times varied (12, 18, 24, 30, and 36 hours) at a material-to-solvent ratio of 4:10 (w/v) and a temperature of 70-100 °C, and the resulting color density and yield were measured. The color density, measured with a Baumé meter, reached an optimum of 5.80 °Bé at a 30-hour immersion time, temperature 70-100 °C, and ratio 4:10 for fermented Soga Tingi (FST3), while the lowest color density was 0.65 °Bé at a 12-hour immersion time, temperature 70-100 °C, and ratio 4:10 for dry Soga Tingi (DST1). This difference arises because the fermented extract is more concentrated than the dry one: the 5-day fermentation improves the coloring ability; each run was repeated 3 times (Suheryanto D, 2012). Table 2 shows the yield of the tannin-containing dye extracted from Soga Tingi by the counter current method: the optimum yield of 7.84% was obtained at a 30-hour immersion time, temperature 70-100 °C, and ratio 4:10 for fermented Soga Tingi (FST3), and the lowest yield of 1.29% at a 12-hour immersion time, temperature 70-100 °C, and ratio 4:10 for dry Soga Tingi (DST1). A longer immersion time allows more dye to dissolve, because the solid material is in contact with more solvent more often, so the extracted yield increases; beyond this the liquid becomes saturated with dye and the yield becomes constant (Pujilestari, Titik, 2017).
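For concreteness, the percent-yield arithmetic implied above can be written out as a short worked example; the dried-extract mass is a hypothetical placeholder chosen only so that the stated 200.0 g of starting wood reproduces the reported optimum yield of 7.84%.

```python
# Worked example of percent yield, assuming
# yield (%) = mass of dried extract / mass of starting wood * 100.
starting_wood_g = 200.0   # starting mass of Soga Tingi wood (from the text)
dried_extract_g = 15.68   # hypothetical dried-extract mass (placeholder)

yield_percent = dried_extract_g / starting_wood_g * 100.0
print(f"Yield = {yield_percent:.2f}%")  # -> Yield = 7.84%
```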
Analysis of the data showed that the effects of Soga Tingi wood type (dry versus fermented) and immersion time on color density were significant (Fcount = 65.012), as were their effects on yield (Fcount = 34.430).

Jumputan Tie Technique

Three binding techniques were applied: (1) the single binding technique, in which the tanned leather is tied with only one bond, giving a single bond motif; (2) the double binding technique, in which the tanned leather is tied with more than one bond, giving two or more bond motifs; and (3) the cross-binding technique, in which the bonds cross one another, giving a motif of crossing bonds. In all three techniques, adding a shirt-button filler under the tie on the sheep crust exposes the grain (nerf) surface of the leather and the lower flesh (fleshing) surface (Larsen, 2004).
Through Dyeing Method
Through Dyeing is a method that aims to penetrate the color throughout the cross section of the leather. The method uses little water (short float), a high dye concentration, a solution pH adjusted to the pH of the leather, and a low temperature (Guthrie, Jeffry, 2008).
Effect of Dye Concentration
The concentration of Soga Tingi dye used per liter of water in the coloring process is based on the percentage of crust leather weight; the greater the percentage, the stronger the color of the leather. The amount of water used normally ranges between 100% and 150% of the leather weight, but for Soga Tingi coloring 200% was used. Using a very large amount of dye is not always beneficial, because it can cause uneven color. In the coloring of sheep crust leather with Soga Tingi dye, optimal color absorption was obtained at a concentration of 12.0%: 80.0% (FST) and 76.0% (DST). The results show that the greater the concentration of Soga Tingi dye added, the more dye is absorbed into the leather; beyond a certain limit, however, the color absorption of the tanned leather no longer increases, because absorption of the Soga Tingi dye by the leather has reached its equilibrium limit (Daniels, R and W Landmann, 2008).
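One way to make the concentration-absorption relationship described above quantitative is to fit a simple saturation curve. The sketch below does this with scipy; the data points are hypothetical placeholders loosely anchored to the reported 80% absorption at a 12% dye concentration, and the Langmuir-type model is an illustrative choice, not one used in the paper.

```python
# Saturation-curve sketch: A(c) = A_max * c / (K + c), fitted to
# hypothetical concentration/absorption pairs (placeholders).
import numpy as np
from scipy.optimize import curve_fit

def saturation(c, a_max, k):
    return a_max * c / (k + c)

conc = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])        # dye conc., %
absorb = np.array([35.0, 52.0, 63.0, 71.0, 76.0, 80.0])  # % absorption

(a_max, k), _ = curve_fit(saturation, conc, absorb, p0=(100.0, 5.0))
print(f"A_max = {a_max:.1f}%, half-saturation K = {k:.1f}%")
```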
Effect of Time
The results of applying the 12% concentration of Soga Tingi dye to sheep crust leather, with variations in dyeing time, on the % absorption are as follows:
Figure 4: The Results of the Soga Tingi Dyeing Time Variable on the % Color Absorption

Figure 4 shows the influence of time on the Soga Tingi coloring process: the optimum absorption of 82.0% was obtained with fermented Soga Tingi (FST) at a dyeing time of 120 minutes, and the lowest absorption of 47.0% with dry Soga Tingi (DST) on sheep crust leather at 30 minutes. This difference arises because the longer the dyeing time, the more Soga Tingi dye is absorbed into the leather; after 120-150 minutes of dyeing, however, no further dye is absorbed, because absorption by the leather has reached its maximum limit, i.e., the leather is saturated with dye.
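The time course described above (rapid uptake that levels off by about 120 minutes) can likewise be illustrated with a first-order uptake model; the data points are invented placeholders anchored to the reported 47% at 30 minutes and 82% optimum at 120 minutes, and the model choice is ours, not the authors'.

```python
# First-order uptake sketch: A(t) = A_max * (1 - exp(-k*t)), fitted to
# hypothetical time/absorption pairs (placeholders).
import numpy as np
from scipy.optimize import curve_fit

def uptake(t, a_max, k):
    return a_max * (1.0 - np.exp(-k * t))

t_min = np.array([30.0, 60.0, 90.0, 120.0, 150.0])   # dyeing time, minutes
absorb = np.array([47.0, 66.0, 76.0, 82.0, 82.0])    # % absorption

(a_max, k), _ = curve_fit(uptake, t_min, absorb, p0=(85.0, 0.02))
print(f"A_max = {a_max:.1f}%, rate k = {k:.4f} per minute")
```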
Effect of pH
The results of applying a 12% concentration of Soga Tingi dye for a 120-minute dyeing time to sheep crust leather, and the effect of initial pH on the % absorption, are as follows:
Table 3: The Relationship between the Initial pH and the % Absorption of the Soga Tingi Dye [table values not fully recovered; initial pH classes ranged upward from 2.0-2.5]

Table 3 shows that in the fermented Soga Tingi (FST) coloring process, an optimum absorption of 84.5% was obtained at an initial pH of 4.8-5.3 with a dyeing time of 120 minutes on tanned sheep leather, while the optimum absorption of dried Soga Tingi (DST) was obtained at the lowest initial pH (2.0-2.5) with a dyeing time of 120 minutes on tanned sheep leather.
The Effect of Rpm
The results of applying Soga Tingi dye at a concentration of 12% to tanned sheep crust leather, with variations in drum rotation speed (rpm) and dyeing time, on the % absorption are as follows:
Conclusion
Extraction of the natural dye from Soga Tingi wood (Cereopcandolleana L) used the counter current method on dry and fermented Soga Tingi. The optimum dye concentration (5.80 °Bé) and yield (7.84%) were obtained with fermented Soga Tingi at an immersion time of 30 hours, and the optimum dye concentration (4.25 °Bé) and yield (6.23%) with dry Soga Tingi at an immersion time of 36 hours.
The technique of tying the jumputan motif on tanned sheep crust leather is strongly influenced by the thickness of the leather, the luster of the finished leather, and the type of leather article to be made. The difficulty level for the single tie technique is 20% (low), for the double tie technique 40% (moderate), and for the cross-tie technique 65% (high).
The application of the tannin-containing Soga Tingi dye to tanned sheep crust leather used the Through Dyeing method. At a dye concentration of 12.0%, the optimal % absorption in sheep crust leather was 80.0% (fermented Soga Tingi) and 76.0% (dry Soga Tingi), at pH 4.8-5.3, a dyeing time of 120 minutes, and a drum rotation speed of 12 rpm.
The color fastness tests gave a dry rubbing fading value of 5.0 (good, no fading) and a wet rubbing value of 3.5 (good enough) for dry Soga Tingi, and a dry rubbing value of 4.5 (good, no fading) and a wet rubbing value of 4.0 (good) for fermented Soga Tingi.
Recommendation
These results provide a basis for selecting and using environmentally friendly dyes in the basic dyeing process of tanned leather, as an alternative to synthetic dyes, to reduce environmental pollution and toxicity for humans. | 2021-09-05T19:25:41.289Z | 2020-10-31T00:00:00.000 | {
"year": 2020,
"sha1": "14962dc2bbeb6536af9168635a912381b7ee94c0",
"oa_license": null,
"oa_url": "http://www.internationaljournalcorner.com/index.php/theijst/article/download/156408/108123",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "12e26d4c0e82e48c9028dc8faa504d5a3fb6ee4a",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
26078406 | pes2o/s2orc | v3-fos-license | The Chondroitin Polymerase K4CP and the Molecular Mechanism of Selective Bindings of Donor Substrates to Two Active Sites*
Bacterial chondroitin polymerase K4CP is a multifunctional enzyme with two active sites. K4CP catalyzes alternative transfers of glucuronic acid (GlcA) and N-acetylgalactosamine (GalNAc) to elongate a chain consisting of the repeated disaccharide sequence GlcAβ1–3GalNAcβ1–4. Unlike the polymerization reactions of DNA and RNA and polypeptide synthesis, which depend upon templates, the monosaccharide polymerization by K4CP does not. To investigate the catalytic mechanism of this reaction, we have used isothermal titration calorimetry to determine the binding of the donor substrates UDP-GlcA and UDP-GalNAc to purified K4CP protein and its mutants. Only one donor molecule bound to one molecule of K4CP at a time. UDP-GlcA bound only to the C-terminal active site at a high affinity (Kd = 6.81 μM), thus initiating the polymerization reaction. UDP-GalNAc could bind to either the N-terminal or C-terminal active sites at a low affinity (Kd = 266–283 μM) but not to both sites at the same time. The Kd of UDP-GalNAc binding to a K4CP N-terminal fragment (residues 58–357) was profoundly decreased, to an average value of 23.77 μM, closer to the previously reported Km value for the UDP-GalNAc transfer reaction that takes place at the N-terminal active site. Thus, the first step of the reaction appears to be the binding of UDP-GlcA to the C-terminal active site, whereas the second step involves the C-terminal region of the K4CP molecule regulating the binding of UDP-GalNAc to only the N-terminal active site. Alternation of these two specific bindings advances the polymerization reaction by K4CP.
Chondroitin sulfate is a glycosaminoglycan, a polysaccharide chain composed of the sugar molecules glucuronic acid (GlcA) 2 and N-acetylgalactosamine (GalNAc). A disaccharide unit (GlcAβ1-3GalNAcβ1-4) continually repeats to form a nonbranching polysaccharide chain that can consist of over a hundred individual monosaccharides, each of which can be sulfated in various positions and quantities (1). Chondroitin chains are usually found attached to proteins, forming proteoglycans, and are present in the extracellular matrix and cell surfaces of various human tissues including the cartilage, aorta, skeletal muscle, eye, lung, and brain tissues (2)(3)(4). Through interaction with various extracellular proteins, chondroitin plays a critical role in the regulation of a variety of cellular activities including growth, development, and response to injury in the nervous system (5,6). For example, the loss of chondroitin sulfate from the cartilage is a major cause of osteoarthritis and intervertebral disc and cartilage degeneration (7,8). Multifunctional polymerases responsible for the synthesis of chondroitin chains have been cloned and characterized for their transfer reactions (9 -12). However, the molecular mechanism of the polymerization reaction catalyzed by these enzymes remains virtually unknown.
Bacterial cells also synthesize a chondroitin chain. Although mammalian chondroitin polymerases are Golgi membrane-bound enzymes, the bacterial polymerases are soluble enzymes. For instance, the K4 strain of Escherichia coli contains an enzyme, encoded by the KfoC gene of the K4 gene cluster, that catalyzes the synthesis of the chondroitin chain, named K4 chondroitin polymerase or K4CP (9). The deduced amino acid sequence of K4CP revealed two conserved UDP-sugar binding motifs (DXD), one located in the N-terminal region and the other located in the C-terminal region of the molecule. Comparative sequence analysis shows that the system of two active sites is conserved in the other bacterial and mammalian chondroitin polymerases as well as the other polysaccharide polymerases such as the heparan polymerase Exostosins (10,(13)(14)(15)(16). We have previously employed isothermal titration calorimetry (ITC) to investigate the specific binding of donor substrate to UDP-N-acetylhexosaminyltransferase Exostosin Like-2, which possesses only one active site (17). Using the water solubility of the bacterial K4CP enzyme to our advantage, we have now extended our study to include the ITC measurement of binding specificity of donor substrates to the two active sites of K4CP to explicate the catalytic mechanism of how this enzyme executes the polymerization process.
We bacterially expressed a truncated enzyme of K4CP (from residues 55 to 686) that retained all of the original polymerase activity, from which the DXD motifs were mutated to generate various mutant enzymes. In addition, a deletion mutant lacking the C-terminal half of the K4CP was constructed. Wild type K4CP and these mutants were expressed in bacterial cells, purified, and subjected to ITC analysis to determine the nature of binding for the donor substrates UDP-GlcA, UDP-GlcNAc, and UDP-GalNAc. UDP binding was also investigated. Here we now present key elements of the specific donor bindings that are consistent with the catalytic mechanism of the alternative transfer reactions by K4CP.
EXPERIMENTAL PROCEDURES
Expression and Purification of Proteins-E. coli BL21 (DE3) cells transformed with pGEX K4CP were grown and harvested as previously reported (17). The cells were then disrupted in a French press at room temperature under pressures varying between 8,000 and 24,000 p.s.i. Following ultracentrifugation, the K4CP protein was then purified from the supernatants by absorption onto glutathione-Sepharose resin (GE Healthcare) and eluted via thrombin cleavage overnight at 4°C. The eluted protein was then dialyzed overnight into 25 mM HEPES (pH 7.5), 100 mM NaCl, 20 mM MnCl2, and 1 mM CaCl2. Purification of protein was confirmed by taking 10 μl of eluted protein, combining it with 4 μl of NuPAGE 4× LDS sample buffer (Invitrogen) and 2 μl of 2-mercaptoethanol (Sigma-Aldrich), and then running the sample on a NuPAGE 4-12% Bis-Tris gel (Invitrogen). The gel was then stained with Coomassie Brilliant Blue G-250 (Fluka) and then destained to make the protein bands more clearly visible.
Isothermal Titration Calorimetry-Isothermal titration calorimetry measurements were carried out in HEPES buffer using a VP-ITC MicroCalorimeter (MicroCal, Inc.) at 20°C. Substrate solution was injected into a reaction cell containing the protein. For substrate solutions containing UDP-GlcA, thirty injections of 3 μl at 180-s intervals were performed. For substrate solutions containing UDP, UDP-GlcNAc, or UDP-GalNAc, 30 injections of 3 μl at 300-s intervals were performed. Data acquisition and analysis were performed by the MicroCal Origin software package. Data analysis was performed by generating a binding isotherm and best fit using the following fitting parameters: n (number of sites), ΔH (cal/mol), ΔS (cal/mol/deg), and K (binding constant in M⁻¹), and the standard Levenberg-Marquardt methods (18). Following data analysis, K (M⁻¹) is then converted to Kd (μM).
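The conversion described in the last sentence, together with the derived free energy, is simple to show explicitly. The sketch below is a minimal illustration and not MicroCal Origin code; the fitted association constant is a hypothetical placeholder chosen to fall in the range reported for UDP-GlcA at 20°C.

```python
# Minimal sketch: converting a fitted association constant K (M^-1) to a
# dissociation constant Kd, and deriving dG = -RT ln K. The K value below
# is a placeholder, not a fitted result from this study.
import math

R = 1.987e-3       # gas constant, kcal/(mol*K)
T = 293.15         # 20 degrees C in kelvin
K_assoc = 1.47e5   # hypothetical binding constant, M^-1

K_d_uM = 1.0 / K_assoc * 1e6        # dissociation constant in micromolar
dG = -R * T * math.log(K_assoc)     # free energy of binding, kcal/mol
print(f"Kd = {K_d_uM:.2f} uM, dG = {dG:.2f} kcal/mol")  # Kd ~ 6.8 uM
```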
Site-directed Mutagenesis-Site-directed mutagenesis was performed using the QuikChange site-directed mutagenesis kit (Stratagene) following the protocols described in the accompanying instruction manual. The pairs of primers used to mutate the DCD-binding motif found in the N terminus region to ACA, DCE, and DCK were: TGGGTTCGGAGCCATAGCAC-AAGCCAGAATTGCAACATA and TATGTTGCAATTCT-GGCTTGTGCTATGGCTCCGAACCCA; TGGGTTCGGA-GCCATTTCACAATCCAGAATTGC and GCAATTCTGG-ATTGTGAAATGGCTCCGAACCCA; and TGGGTTCGGA-GCCATTTTACAATCCAGAATTGC and GCAATTCTGG-ATTGTAAAATGGCTCCGAACCCA, respectively. The pairs of primers used to mutate the DSD-binding motif found in the C terminus region to ASA, DSE, DSK, DSA, ASD, and KSD were: TGGTTCAAGAAAGTCAGCAGAGGCTAACTGAC-CTATATA and TATATAGGTCAGTTAGCCTCTGCTGA-CTTTCTTGAACCA; TTCAAGAAAGTCTTCAGAGTCTA-ACTGACCTAT and ATAGGTCAGTTAGACTCTGAAGA-CTTTCTTGAA; TTCAAGAAAGTCTTTAGAGTCTAACT-GACCTAT and ATAGGTCAGTTAGACTCTAAAGACTT-TCTTGAA; TTCAAGAAAGTCAGCAGAGTCTAACTGA-CCTAT and ATAGGTCAGTTAGACTCTGCTGACTTTC-TTGAA; TTCAAGAAAGTCATCAGAGGCTAACTGAC-CTAT and ATAGGTCAGTTAGCCTCTGATGACTTTCTT-GAA; and TTCAAGAAAGTCATCAGATTTTAACTGAC-CTAT and ATAGGTCAGTTAAAATCTGATGACTTTCT-TGAA, respectively. The primers used to generate the ACA and ASA mutants were also used to generate the double mutant, thereby mutating the DXD motifs found in both regions to AXA. The primers used to generate the ACA and DSA mutants were used to create the double mutant in which the N terminus-binding site was mutated to ACA and the C terminus-binding site was converted to DSA. The primers used to delete the C-terminal region of the enzyme were: CGCG-GATCCAAAGCTGTTATTGATATTGAT and CGGAAT-TCTTAATGCGTAAACTCTTCATCAAA. The mutations were confirmed by sequencing with the Big Dye terminator cycle sequencing reaction kit (Applied Biosystems).
Enzyme Assays-Hydrolysis was determined as follows. 53 μg of protein was tested in an assay mixture containing 25 mM HEPES (pH 7.5). The resulting supernatants were subjected to a HiLoad 16/60 Superdex 30 pg column (GE Healthcare) equilibrated with 250 mM NH4HCO3 containing 7% 1-propanol using the ÄKTA Purifier (GE Healthcare). Fractions were collected at a rate of 1 ml/min, and the amount of the hydrolyzed product UDP was quantified with a scintillation counter. Data analysis was performed using the Origin version 7.0 software programmed to fit data to an s/v-s plot to calculate the Km and Vmax values using Michaelis-Menten kinetics.
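The s/v-versus-s (Hanes-Woolf) linearisation mentioned above follows from the Michaelis-Menten equation: since v = Vmax*s/(Km + s), it follows that s/v = s/Vmax + Km/Vmax, so a line fitted to s/v against s has slope 1/Vmax and intercept Km/Vmax. A minimal sketch, using invented substrate/velocity placeholders rather than the authors' hydrolysis data:

```python
# Hanes-Woolf sketch: fit s/v = s/Vmax + Km/Vmax by linear regression.
# The substrate/velocity pairs are hypothetical placeholders.
import numpy as np

s = np.array([10.0, 25.0, 50.0, 100.0, 200.0])  # substrate, uM
v = np.array([1.5, 3.0, 4.6, 6.0, 7.1])         # rate, pmol/min/ug

slope, intercept = np.polyfit(s, s / v, 1)      # slope = 1/Vmax
v_max = 1.0 / slope
k_m = intercept * v_max                         # intercept = Km/Vmax
print(f"Vmax = {v_max:.2f} pmol/min/ug, Km = {k_m:.1f} uM")
```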
RESULTS
Binding of UDP-GlcA-ITC was performed to examine the binding of UDP-GlcA to the truncated K4CP (hereafter referred to as K4CP). The calorimetric profile showed that UDP-GlcA bound to K4CP at a ratio of one donor molecule to one enzyme molecule (Fig. 1). The binding of UDP-GlcA was tested at various temperatures ranging from 5 to 40°C to ascertain the nature of donor binding (Table 1), from which the true values for the changes in entropy (ΔS) and enthalpy (ΔH) were directly determined by plotting log K versus 1000/T (K): ΔS = -25.18 ± 1.43 cal/mol/deg and ΔH = -14,000.45 ± 419.29 cal/mol (19). Using these values, it was found that TΔS was lower in value than the ΔH value, suggesting that this binding is an enthalpy-driven reaction. In addition, the Gibbs free energy (ΔG) values remained constant over the various temperatures, implying that the nature of this binding reaction is spontaneous and that the maximum potential of K4CP to perform the binding remains the same over a range of temperatures. As expected, the Kd values varied from 3.26 to 55.09 μM depending upon the temperatures used for the assays, whereas the number of UDP-GlcA molecules that bound to the K4CP molecule was the same at all temperatures (Table 1).
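A minimal sketch of this van't Hoff analysis follows, assuming association constants K = 1/Kd at each temperature: regressing ln K on 1/T gives ΔH from the slope (-ΔH/R) and ΔS from the intercept (ΔS/R), after which ΔG = ΔH - TΔS. The two extreme Kd values below are taken from the reported range; the interior points are interpolated placeholders.

```python
# van't Hoff sketch: ln K = -dH/R * (1/T) + dS/R, then dG = dH - T*dS.
# Interior Kd values are placeholders; the extremes are from Table 1.
import numpy as np

R = 1.987                                            # cal/(mol*K)
T = np.array([278.15, 293.15, 303.15, 313.15])       # 5-40 degrees C, kelvin
K_d = np.array([3.26e-6, 8.5e-6, 2.4e-5, 5.509e-5])  # M
K = 1.0 / K_d                                        # association constants

slope, intercept = np.polyfit(1.0 / T, np.log(K), 1)
dH = -slope * R                 # cal/mol; comes out near -14,000
dS = intercept * R              # cal/(mol*K); comes out near -25
dG = dH - T * dS                # cal/mol at each temperature
print(f"dH = {dH:.0f} cal/mol, dS = {dS:.2f} cal/mol/K")
print("dG (cal/mol):", np.round(dG))
```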
Binding of UDP-GalNAc and UDP-ITC was performed at 20°C for determining the binding of UDP-GalNAc and UDP. Both UDP-GalNAc and UDP bound to K4CP at a one to one molecular ratio (Table 2). UDP-GalNAc was characterized by its extremely high Kd value, 358.17 μM at 20°C. This value was more than 40-fold higher when compared with the Kd value of 8.5 μM at 20°C yielded by UDP-GlcA binding and the averaged Kd value (6.81 ± 1.72 μM) for UDP-GlcA, a number arrived at by calculating from five different binding assays. The Km values of the UDP-GlcA and UDP-GalNAc transfer reactions were previously reported (9). Intriguingly, the Kd value of 358.17 μM was over 10-fold higher than the reported Km value (31.6 μM) of the UDP-GalNAc transfer reaction catalyzed by K4CP, whereas the corresponding Kd and Km values for UDP-GlcA were similar. The inactive donor substrate UDP bound to K4CP at a Kd value of 89.53 μM, which was higher than the Kd value of UDP-GlcA binding but lower than that of UDP-GalNAc binding. Only one UDP molecule was found to bind to the K4CP molecule, which was unexpected given the fact that there are two binding sites for donor substrates. These experimental observations are at first glance puzzling but may provide the insightful clues necessary to understand the catalytic mechanism of the polymerization reaction; both UDP and UDP-GalNAc bind to K4CP at a one to one molecular ratio, and the Kd value of UDP-GalNAc was higher than the reported Km value.
Delineating the Specific Donor-binding Site-Site-directed mutagenesis was employed to mutate the aspartic acid residues of the K4CP DXD motifs: either the N-terminal 239DCD241 or the C-terminal 519DSD521 was changed to 239ACA241 or 519ASA521. In addition, both 239DCD241 and 519DSD521 were simultaneously mutated to 239ACA241 and 519ASA521, hereafter referred to as DXDdm. These mutants were subjected to ITC analysis (Table 3). UDP-GlcA bound to the 239ACA241 mutant but not to the 519ASA521 mutant, and moreover, the Kd value of this binding was close to that of UDP-GlcA binding to K4CP. These results unequivocally identified the C-terminal DSD as the sole binding site of UDP-GlcA. UDP-GalNAc was found to bind to both the 239ACA241 and 519ASA521 mutants at similar affinities; their Kd values were within the range of the Kd value (358.18 μM) of UDP-GalNAc binding to K4CP. Thus, UDP-GalNAc is capable of binding to either the N- or C-terminal DXD motif. This tantalizing observation led us to measure UDP-GalNAc transfer activity by the DXD mutants: K4CP and the 519ASA521 mutant catalyzed the reaction at 5.895 and 5.469 pmol/min/μg protein, respectively, but the 239ACA241 mutant exhibited no transfer activity (Table 4). Because UDP-GalNAc was found to bind to K4CP at a one to one molecular ratio, the N-terminal 239DCD241 should be the binding site of UDP-GalNAc during the transfer reaction. However, the present binding assays using ITC could not determine the binding site of UDP-GalNAc. In a fashion similar to UDP-GalNAc, UDP bound to both the 239ACA241 and 519ASA521 mutants. Because the Kd value for the former mutant was closer to that for K4CP (89.53 μM) than that for the latter mutant, this one molecule of UDP appeared to bind to the C-terminal 519DSD521 of K4CP. As expected, when both 239DCD241 and 519DSD521 were mutated, the DXDdm possessed no capability to bind any of the donor substrates.
To further elucidate the binding of UDP-GalNAc, we attempted to construct deletion mutants by separating the K4CP molecule into N- and C-terminal fragments (residues 58-357 and residues 358-686, respectively). Although the C-terminal fragment did not express in our bacterial system, we were able to express and purify the N-terminal fragment for ITC analysis (Table 5). Consistent with our previous data, the N-terminal fragment abrogated the binding of UDP-GlcA. Conversely, both UDP-GalNAc and UDP retained their binding to the fragment. Most interestingly, the Kd value of UDP-GalNAc binding became similar to the reported Km value of UDP-GalNAc transfer activity by K4CP. To relate this Kd value to the catalytic activity of K4CP, we measured hydrolysis by K4CP and the DXD mutants using UDP-GlcA and UDP-GalNAc as substrates (Table 4). No hydrolysis activity was observed with UDP-GlcA for any of these enzymes. On the other hand, UDP-GalNAc was hydrolyzed by all enzymes. The Km value of the hydrolysis by the 519ASA521 mutant was nearly identical to the Kd value of UDP-GalNAc binding to the N-terminal fragment, and moreover, the corresponding value for hydrolysis by K4CP was closer to this Kd value. These results are consistent with the conclusion that the N-terminal 239DCD241 is the binding site of UDP-GalNAc. Furthermore, the relatively high affinity of UDP-GalNAc binding provides critical information for us to hypothesize about the molecular mechanism by which K4CP regulates the binding of UDP-GalNAc to the N-terminal 239DCD241.
Defining the Functions of the C-terminal DSD Motif-Recent x-ray crystal structures of the glycosyltransferases GlcAT1 and EXTL2 have demonstrably shown that the first aspartic acid residue in the DXD motif interacts with the sugar moiety of the donor substrate and that the second aspartic acid residue interacts with the UDP portion of the donor substrate (20,21). Using this information as a guide, the second aspartic acid of the DXD motif was substituted with a negatively charged glutamic acid and a positively charged lysine to produce the following mutants: 239DCE241, 239DCK241, 519DSE521, and 519DSK521. These mutants were subjected to ITC analysis to decipher the role of aspartic acid in the regulation of donor substrate binding (Table 6). UDP-GlcA was able to bind to all four mutants at a one to one molecular ratio. Thus, the GlcA moiety solely determined the binding of UDP-GlcA. Moreover, the results indicated that the mutations of the N-terminal 239DCD241 had no effect on the binding of UDP-GlcA to the C-terminal 519DSD521. UDP-GalNAc bound to the 239DCE241, 239DCK241, and 519DSE521 mutants (Table 6), which is consistent with the facts that the second aspartic acid does not directly interact with the sugar moiety and that UDP-GalNAc can bind to either the N-terminal or C-terminal DXD motif. A surprising finding was that the 519DSK521 mutant abrogated the binding of UDP-GalNAc as well as that of UDP. Because the second aspartic acid is the primary residue responsible for the ability of the enzyme to interact with the UDP moiety, it may be reasonable to conclude that UDP does not bind to the C-terminal 519DSK521 site. UDP-GalNAc bound to the C-terminal 519DSD521 site weakly, and its sugar moiety may not be able to overcome the change to the second aspartic acid residue, thereby resulting in its inability to bind to the 519DSK521 site. The most surprising finding, however, was that no binding was detected for UDP-GalNAc and UDP despite the fact that the N-terminal 239DCD241 site remained unaltered, because both UDP-GalNAc and UDP bind to the N-terminal 239DCD241 site of the 519ASA521 mutant (Table 3). The second alanine residue (Ala521) was responsible for binding the two donors to the N-terminal 239DCD241, because the 519DSA521 mutant retained the binding of UDP-GalNAc and UDP while losing the binding of UDP-GlcA (Table 6). These results suggested that, depending on the type of residue replacing the second aspartic acid, the C-terminal DSD motif differentially regulates the binding of UDP-GalNAc and UDP to the N-terminal DCD motif. Given this conclusion, the first aspartic acid residue of the C-terminal DSD motif was then mutated to lysine and alanine to produce the 519KSD521 and 519ASD521 mutants. This aspartic acid residue is the primary determinant for the binding of the sugar moiety of a donor substrate (20,21). The subsequent loss of UDP-GlcA binding was consistent with the fact that the C-terminal DSD motif is the only site for which UDP-GlcA had an affinity (Table 6). It would be reasonable to observe no binding of UDP-GalNAc to these mutants, if indeed the C-terminal DSD were the only binding site in the K4CP molecule.
However, the fact that UDP-GalNAc did not bind to the N-terminal DCD motif of the 519KSD521 and 519ASD521 mutants reiterated the possibility that the C-terminal DSD motif may regulate donor binding to the N-terminal DCD motif, thus becoming a primary regulator of enzyme function and activity of K4CP. Continued binding of UDP to these mutants was expected, because UDP does not interact with the first aspartic acid residue; the double mutant 239ACA241/519DSA521 did not bind any of the donor substrates.
DISCUSSION
The chondroitin polymerase K4CP is a multifunctional enzyme that transfers GlcA and GalNAc from the donor substrates UDP-GlcA and UDP-GalNAc to synthesize the (GlcAβ1-3GalNAcβ1-4)n chain. The K4CP molecule contains the two potential binding sites for donor substrates. We have used bacterially expressed and purified K4CP enzymes and have employed ITC to investigate the specific donor bindings. One insightful result is the fact that K4CP is capable of binding to only one donor molecule at a given time regardless of the type of donor substrate: UDP, UDP-GalNAc, or UDP-GlcNAc. Another significant result is the observation suggesting that the C-terminal 519DSD521 site provides K4CP with not only the sole binding site for UDP-GlcA but also the ability to regulate binding of UDP-GalNAc to the N-terminal donor-binding site 239DCD241. Thus, the chondroitin polymerase K4CP appears to utilize a fairly sophisticated mechanism to catalyze the polymerization reaction.
A critical question, the answer of which is required for understanding the catalytic mechanism of the polymerization, is whether K4CP has one active site at which both the UDP-GlcA and UDP-GalNAc transfer reactions take place or has two distinct active sites. K4CP catalyzes an inverting reaction, in which the α-configuration of the C1 bond of the donor sugar is inverted to the β-configuration in the product formed (9). X-ray crystal structures of various inverting glycosyltransferases have determined the conserved structural signatures and the unique conformation and orientation of substrates at the active site (21). The structures of mammalian glycosyltransferases are represented by the so-called GT-A fold, a single globular protein consisting of two subdomains (21)(22)(23)(24). The N-terminal subdomain has a Rossmann fold and provides the binding site for the donor substrate, whereas the C-terminal subdomain consists of mixed β-sheets and constitutes the acceptor substrate binding site. At the active site, which resides in the open cleft across the two subdomains, the leaving oxygen of the phosphate group, the C1 atom of the donor sugar, and the acceptor OH group align to advance a bimolecular nucleophilic substitution-type in-line displacement reaction. The inverting glycosyltransferases are also characterized by the presence of a catalytic base residue that deprotonates the acceptor OH group. The x-ray structure of K4CP has now been solved and has revealed two distinct domains. Both domains are characterized by having all of the structural features of the GT-A fold, and both can be perfectly superimposed with known structures of other glycosyltransferases. In addition, the revealed K4CP structure has the two GT-A folds oriented with the two active site clefts at a 180° angle, suggesting that the two domains function independently, with nonsequential and random binding and release of donor substrates, as the catalytic mechanism of the polymerase reaction.

[Table 5 legend: The experiments were performed in solution at 20°C and in triplicate to confirm the results, using a fresh batch of enzyme produced for each experiment. In the KctDel mutant the entire C-terminal region is deleted, isolating the N-terminal region for analysis. The UDP, UDP-GlcA, UDP-GlcNAc, and UDP-GalNAc concentrations were 2 mM; the KctDel concentration was 31 μM. ND, no detectable binding.]
Our ITC study has concluded that UDP-GalNAc and UDP-GlcA specifically bind to the N- and C-terminal DXD motifs of the K4CP enzyme, respectively. The presence of two independent binding sites has also been implicated in another glycosyltransferase, the Pasteurella multocida hyaluronan synthase (25). Consistent with this conclusion, the K4CP structure has led to the conclusion that UDP-GalNAc and UDP-GlcA are present at the N- and C-terminal GT-A domains, respectively. 3 In our ITC study, however, only one molecule of UDP is found to bind to one molecule of a given K4CP enzyme. Although the binding of UDP to the 239ACA241 and 519ASA521 mutants indicates that UDP could bind to either the N- or the C-terminal DXD site, in fact UDP appears to bind to the C-terminal DSD of the wild type K4CP enzyme. During the GlcA transfer reaction, this binding of UDP can be transient because its Kd value is more than 10 times higher than the corresponding value of UDP-GlcA binding. Various experimental features associated with the use of protein crystals to solve structures, such as cocrystallization of K4CP with a high concentration of UDP, may have led to UDP binding to both DXD sites. Intriguingly, structural analysis of multiple forms of the x-ray crystal structures found either binding of two UDP molecules or no UDP binding at all. This finding may suggest that there is structural cross-talk between the N- and C-terminal DXD sites to regulate UDP binding.
Our binding assays demonstrated that UDP-GalNAc could bind to either the N-terminal or C-terminal DXD site at similar affinities. Thus, during the polymerization reaction, UDP-GalNAc should preferentially bind to the N-terminal DCD site because the UDP-GalNAc transfer reaction is catalyzed by the N-terminal active site. We observed the selective binding of UDP-GalNAc to the N-terminal DCD only when the C-terminal DSD was mutated. Contrary to what analysis of the x-ray crystal structural data suggests, the C-terminal domain of K4CP appears to regulate donor binding, and probably the transfer activity, of the N-terminal domain. K4CP exhibited high-affinity binding of UDP-GlcA to the C-terminal DSD motif only and did not hydrolyze UDP-GlcA, so the enzyme can effectively convert its donor binding activity to transfer activity. The inability of UDP-GlcA to bind to the N-terminal DCD motif and its constant occupation of the C-terminal DSD motif ensure that this enzyme will never transfer two glucuronic acids sequentially to an acceptor substrate, and force UDP-GalNAc binding to the N-terminal DCD motif.
In conclusion, ITC analyses have now determined the specific binding of donor substrates to the bacterial chondroitin polymerase K4CP. In solution, only one donor substrate binds to K4CP at a given time. The C-terminal DSD motif binds UDP-GlcA with a high affinity, apparently initiating the polymerase reaction and coordinating the N-terminal DCD motif to bind UDP-GalNAc. The underlying molecular mechanism for control of the N-terminal domain by the C-terminal domain, with its subsequent transfer reactions, is a subject ripe for further scrutiny and investigation. | 2018-04-03T00:56:55.315Z | 2008-11-21T00:00:00.000 | {
"year": 2008,
"sha1": "203379df56798730bb4a3bb2ea0eb19f289fb543",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/283/47/32328.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "d84ffc6b2dbc19548b0891fbf57e498b7bfda8f9",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
265550794 | pes2o/s2orc | v3-fos-license | Standards for clinical trials for treating TB
BACKGROUND: The value, speed of completion and robustness of the evidence generated by TB treatment trials could be improved by implementing standards for best practice. METHODS: A global panel of experts participated in a Delphi process, using a 7-point Likert scale to score and revise draft standards until consensus was reached. RESULTS: Eleven standards were defined: Standard 1, high quality data on TB regimens are essential to inform clinical and programmatic management; Standard 2, the research questions addressed by TB trials should be relevant to affected communities, who should be included in all trial stages; Standard 3, trials should make every effort to be as inclusive as possible; Standard 4, the most efficient trial designs should be considered to improve the evidence base as quickly and cost effectively as possible, without compromising quality; Standard 5, trial governance should be in line with accepted good clinical practice; Standard 6, trials should investigate and report strategies that promote optimal engagement in care; Standard 7, where possible, TB trials should include pharmacokinetic and pharmacodynamic components; Standard 8, outcomes should include frequency of disease recurrence and post-treatment sequelae; Standard 9, TB trials should aim to harmonise key outcomes and data structures across studies; Standard 10, TB trials should include biobanking; Standard 11, treatment trials should invest in capacity strengthening of local trial and TB programme staff. CONCLUSION: These standards should improve the efficiency and effectiveness of evidence generation, as well as the translation of research into policy and practice.
[Table: trial designs and primary endpoints†. Column headers: Trial | Years of active enrolment | Design | Primary endpoint. The opening row of the drug-susceptible section is truncated:] ...without an intervening negative culture and without genotypic evidence of reinfection. Favourable outcome was defined as having a negative culture at the scheduled end of follow-up and not previously classified as having an unfavourable outcome.
REMoxTB (NCT00864383) 4 | 2008-2012 | Multi-arm randomised controlled design | Unfavourable outcome defined as the proportion of participants with bacteriological or clinical treatment failure or relapse within 18 months after randomisation. Relapse strains were those shown to be identical on 24-locus MIRU analysis.
STAND/NC-006 (NCT02342886) 5 * | 2015 | Multi-arm randomised controlled design | Unfavourable outcome defined as the proportion of participants with bacteriological or clinical treatment failure or relapse 12 months after randomisation (from 50 to 54 weeks). Favourable outcome was defined as having a negative culture status (two consecutive negative cultures at least 1 week apart with no intervening positive result) at 12 months after randomisation, and not previously classified as having an unfavourable outcome.
S31/A5349 (NCT02410772) 6 | 2016-2018 | Multi-arm randomised controlled design | Favourable outcome defined as survival free of TB at 12 months after randomisation. Favourable status was assigned if a participant met all of the following criteria: was alive and free of TB at 12 months after randomisation; did not meet the criteria for unfavourable or not-assessable status; and had either an M. tuberculosis-negative result on the sputum culture at month 12, or at month 12 was unable to produce sputum or produced sputum that was contaminated but without evidence of M. tuberculosis. Unfavourable status was assigned if a participant had M. tuberculosis-positive cultures from two sputum specimens obtained at or after week 17 without an intervening negative culture, died or was withdrawn from the trial or lost to follow-up during the treatment period, had an M. tuberculosis-positive culture when last seen, died from TB during the post-treatment follow-up, or received additional treatment for TB.
TRUNCATE-TB (NCT03474198) 7 | 2018-2020 | Strategy trial | Unfavourable outcome defined as a composite of death before week 96 or ongoing TB treatment or active TB at week 96.
Drug-resistant tuberculosis
STREAM Stage 1 (NCT02409290) 8 | 2012-2015 | Two-arm randomised controlled design | Favourable outcome defined as cultures negative for M. tuberculosis at 132 weeks after randomisation and at a previous occasion during the trial period, with no intervening positive culture or previous unfavourable outcome. Unfavourable outcome defined by the initiation of two or more drug therapies that were not included in the assigned regimen, treatment extension beyond the permitted duration, death from any cause, a positive culture from one of the two most recent specimens, or no visit at 76 weeks or later. Participants who had reinfections with a different strain and those whose last two cultures were negative (including one at 76 weeks) but were lost to follow-up thereafter were considered to be unable to be assessed and were excluded from the primary analysis.
NExT (NCT02454205) | [years of enrolment and design not recovered] | Favourable outcome defined as not previously classified as unfavourable by week 73, and one of the following is true: [1] the last two culture results are negative; these two cultures must be taken from sputum samples collected on separate visits, the latest between weeks 65 and 73; or [2] the last culture result (from a sputum sample collected between weeks 65 and 73) is negative, and either there is no other post-baseline culture result or the penultimate culture result is positive due to laboratory cross contamination, and bacteriological, radiological and clinical evolution is favourable; or [3] there is no culture result from a sputum sample collected between weeks 65 and 73 or the result of that culture is positive due to laboratory cross contamination, and the most recent culture result is negative, and bacteriological, radiological, and clinical evolution is favourable.
TB-PRACTECAL (NCT02589782) 15 | 2017-2021 | Phase 2-3, multi-arm multi-stage randomised controlled trial | Unfavourable status was defined as a composite of death, treatment failure, treatment discontinuation, loss to follow-up, or recurrence of TB at 72 weeks after randomisation.
BEAT-India (CTRI/2019/01/017310) 16 | 2019-2021 | Uncontrolled cohort study | Favourable outcome defined as 2 consecutive sputum cultures, taken at least 4 weeks apart, that were negative, with clinical and radiological improvement at the end of treatment.
endTB-Q (NCT03896685) 14 | 2020-2023 | Strategy trial | Favourable outcome defined as the proportion of participants whose outcome is not classified as unfavourable at week 73, and for whom one of the following is true: the last two culture results are negative (these two cultures must be taken from sputum samples collected on separate visits, the latest between week 65 and week 73); the last culture result (from a sputum sample collected between week 65 and week 73) is negative, and either there is no other post-baseline culture result or the penultimate culture result is positive due to laboratory cross contamination, and bacteriological, radiological and clinical evolution is favourable; or there is no culture result from a sputum sample collected between week 65 and week 73 or the result of that culture is positive due to laboratory cross contamination, and the most recent culture result is negative, and bacteriological, radiological and clinical evolution is favourable.
[Trial name not recovered] | 2022-present | Duration evaluation | Favourable outcome defined as sustained cure at 76 weeks after randomisation without treatment failure or relapse.
* These trials are listed under both the drug-susceptible and drug-resistant sections due to inclusion of these two populations. In each case, study design differed for the drug-susceptible and drug-resistant populations. † The primary endpoints are listed here. Many trials had multiple secondary endpoints, with much longer follow-up periods than the primary endpoints.
[Additional endpoint definitions whose table rows were not recovered:] Bacteriological treatment failure was defined as negative culture status not attained or maintained during treatment. Clinical treatment failure was defined as a change from the protocol-specified TB treatment as a result of a lack of clinical efficacy, retreatment for TB, or TB-related death.
Unfavourable outcome defined as treatment failure (bacteriological or clinical) or disease relapse. Clinical treatment failure was defined as a change from the protocol-specified TB treatment as a result of a lack of clinical efficacy, retreatment for TB, or TB-related death through follow-up until 6 months after the end of treatment. Favourable outcome defined as resolution of clinical TB disease, negative culture status at 6 months after the end of therapy, and not previously classified as having an unfavourable outcome. | 2023-12-04T06:16:56.100Z | 2023-12-01T00:00:00.000 | {
"year": 2023,
"sha1": "b2cc344365a4a05246fe02590cde1c4d2b56c22a",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5588/ijtld.23.0341",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "923e359cef35cb63499d3c27097e0e8faf18a796",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
9838690 | pes2o/s2orc | v3-fos-license | Transgender men and pregnancy
Transgender people have experienced significant advances in societal acceptance despite experiencing continued stigma and discrimination. While it can still be difficult to access quality health care, and there is a great deal to be done to create affirming health care organizations, there is growing interest around the United States in advancing transgender health. The focus of this commentary is to provide guidance to clinicians caring for transgender men or other gender nonconforming people who are contemplating, carrying, or have completed a pregnancy. The terms transgender and gender nonconforming here refer specifically to those whose gender identity (e.g., being a man) differs from their female sex assigned at birth. Many, if not most, transgender men retain their female reproductive organs and retain the capacity to have children. Review of their experience demonstrates the need for preconception counseling that includes discussion of stopping testosterone while trying to conceive and during pregnancy, and anticipating increased experiences of gender dysphoria during and after pregnancy. The clinical aspects of delivery itself fall within the realm of routine obstetrical care, although further research is needed into how mode and environment of delivery may affect gender dysphoria. Postpartum considerations include discussion of options for chest (breast) feeding, and how and when to reinitiate testosterone. A positive perinatal experience begins from the moment transgender men first present for care and depends on comprehensive affirmation of gender diversity.
Introduction
Transgender individuals likely represent between 0.3 and 0.5% of the U.S. population. 1,2 Despite pervasive discrimination and invisibility, in recent years transgender people have experienced significant advances in societal acceptance. This has led many organizations to look at their policies, programs, and educational materials to ensure that work within their sphere is both affirmative and inclusive. [3][4][5] Change has also been apparent in corporations 6 as well as educational institutions. [7][8][9] In many countries, even government programs have been on the forefront of change. [10][11][12] While programs that provide health care for transgender people have grown in recent years, there remains a gaping chasm between what is taught in health professional schools and postgraduate training programs and the needs of transgender individuals. 8,13,14 This leaves many health professionals unprepared to provide quality care, with many needing to "catch up" or refer (possibly delaying care) to someone else when a transgender person presents for care. 15 This is true across the basics of supporting and affirming gender, cross-sex hormone therapy and a variety of surgeries, as well as routine primary care.
Indeed, medicine as a whole has not incorporated gender diversity into routine care. 16 For example, when should transgender men have routine chest (breast) cancer screening after chest reconstruction surgery? Or conversely how should one apply breast cancer screening protocols for transgender women; should we consider chronological age or length of exposure to exogenous estrogen? Other examples include, how and when to do prostate examinations for transgender women, and both timing and methodology of sexually transmitted infection (STI) and HIV evaluations for all transgender people? One question that is beginning to progress from media attention to clinical and academic focus is, how best to care for transgender men who desire to be or are pregnant?
Although Thomas Beatie's heart-rending social, legal, and medical struggles through each of his three pregnancies brought him visibility and notoriety for being the first legally recognized man in the U.S. to give birth, 17,18 the struggles of men and other gender nonconforming individuals going through pregnancy and birth may be much more common than Mr. Beatie's press coverage might suggest. While no studies currently document the number of transgender men who have had a pregnancy, news reports, [19][20][21][22][23] documentaries, 24 social media list-serves and video-sharing sites, guidebooks, 25,26 fact sheets, 27 and the recent establishment of lists of health service providers with experience supporting transgender individuals in pregnancy and birth 22 suggest that the number of transgender individuals seeking family planning, fertility, and pregnancy services could be quite large. The focus of this commentary is to review the basic issues to be considered by clinicians who are caring for a transgender man or other gender-nonconforming individual whose gender identity is different from their female sex assigned at birth, and who is considering, carrying, or has completed a pregnancy. Additionally, we hope this supports honest learning and teaching regarding reasonable standards of care to provide gender-affirming quality care.
Phenotypic sex, sexual orientation, and gender identity
Beginning with the basics of gender identity helps define the concepts discussed herein. While "LGBT" (lesbian, gay, bisexual, and transgender) is often used as a collective acronym that includes "T" for transgender to capture sexual and gender minority experience, it is important to understand that while LGBT people have some common issues and history, each of these groups is quite different. Lesbian, gay, and bisexual (LGB) are terms that refer to sexual orientation, or a person's desires for intimacy with people of the same gender (lesbians and gay men) or with both men and women in the case of bisexual people. Gender identity, on the other hand, describes whether individuals identify themselves as a man, a woman, or one of many other genders. This is still different from one's phenotypic or physiological sex assigned at birth, referencing chromosomes, natal genitalia, and other anatomic and physiological characteristics that differ between human males and females. Both sexual orientation and gender identity can be fluid, dynamic, and change over time, and can only be meaningfully self-defined by each individual. Everyone has a sex, a sexual orientation, and a gender identity, but the three are independent. A transgender person is someone whose gender identity is not congruent with the sex they were assigned at birth. 28 Their transgender status implies nothing about who they are emotionally, romantically, or sexually attracted to. Affirming or transitioning one's gender is the process of bringing one's external gender expression (how one lives one's life) and potentially one's physical body into alignment with one's internal gender identity. This process is variable for every person, can take months to years, and may involve social, legal, medical, and/or surgical components.
Fertility and achieving pregnancy
A transgender man or trans man is someone who identifies as a man, but whose sex assigned at birth was female. Born with female reproductive organs, transgender men may elect to have any of a number of sex reassignment surgical procedures, although the largest survey on this subject showed that most have not, even though many wish to do so. 15 This leaves many transgender men with the capacity to bear children. Some whose sex assigned at birth was female may also identify as gender queer, a term that identifies someone living outside the male-female gender binary, but which could nevertheless still allow for the possibility of a pregnancy. While many transgender men will want to begin cross-sex hormone therapy with testosterone, not all do. For those who elect to use testosterone, its use may affect fertility and fecundity and may impact fetal development. 16,29,30 Unfortunately, there is little data to inform balanced conversations on the topic. Thus, it is always important to discuss family planning, and in particular desires for genetically related children and/or carrying a pregnancy, prior to the initiation of cross-sex hormones. 16,30 Though little is known about the desires of transgender individuals for creating families and having genetically related children, the desire for parenting and having genetically related children is likely present for many, and it is incumbent on providers to help preserve and support that desire. 25,31 For some transgender men, oocyte cryopreservation will be a viable, if expensive, option, now made more available by changes in vitrification technology and increasing clinical use. 32 Despite uncertainty about predictable fertility effects, transgender men have successfully conceived and carried a pregnancy after using testosterone. Transgender men also have unintended pregnancies while taking, or while still amenorrheic from, testosterone, which was mistakenly thought to preclude pregnancy. 33 For transgender men or gender variant people who undergo surgery involving either hysterectomy or genital reconstruction with vaginal occlusion to affirm their identity, gestational pregnancy may no longer be possible. The extent to which they can genetically contribute to a child or carry a pregnancy will depend on the specific surgical treatments. However, it is recommended that transgender men who may want to have genetically related children consider either embryo or oocyte cryopreservation (preferably prior to any testosterone treatment) and then, if they are unable to or do not desire to carry a pregnancy, work with their significant other or a surrogate to carry the pregnancy. The remainder of this review will cover the care of transgender men considering or in the midst of a pregnancy.
Psychological considerations
Principles of obstetrical practice regarding a transgender pregnancy are not complex once one has been appropriately trained in caring for people during pregnancy. While stories of pregnancies in transgender men are notable for the challenges they pose to gendered notions of pregnancy, the clinical care itself falls in the realm of routine obstetrical practice. Review of the literature reveals little research: two recent studies from 2014, 33,34 based on modest samples and discussed below, and another from 1998. Not surprisingly, all three studies highlight both the psychological issues experienced by transgender men contemplating pregnancy or bearing a child and the unique medical implications for both parent and fetus. The former may be more complex and require more specialized training.
Ellis and colleagues used a qualitative approach employing grounded theory to understand the experiences of male and gender-variant gestational parents and to guide clinical interactions. 34 As with most clinical studies of transgender people to date, the sample size was small. Their final sample included eight subjects whose natal assigned sex was female and who carried a pregnancy to term while identifying as male or gender variant at the time of conception and through delivery. Based on interview analysis, ''the unique finding of this study was that participants experienced significant and persistent loneliness'' and felt that ''the process of navigating identity required considerable energy and attention,'' especially in the context of having ''a lack of clear role models of what a positive, well integrated, gender-variant parental role might look like.'' They noted both internal and external struggles for parents. Internal challenges were typified by the conflict between one's identity as male and/or gender variant and ''social norms that define a pregnant person as woman and a gestational parent as mother.'' Regarding the external world, contemplation and experience of pregnancy involved a constant tension about needing to ''manage others' perceptions and either disclosing or not disclosing what they were experiencing.'' Their recommendations focused on providing affirming and inclusive care beginning with preconception counseling and continuing through the postpartum period. This level of support is within the scope of any perinatal provider. However, additional support and guidance from mental health colleagues may be beneficial should an individual's experiences raise concerns about exacerbated psychological distress or safety.
Physical considerations
For transgender men with functioning natal reproductive organs, the major unifying medical issues regarding conceiving and delivering healthy children relate to whether they have used testosterone and, if so, the duration of use and its timing in relation to pregnancy. Light et al. sought to address this by studying the experience of pregnancy and birth in a cross-sectional online survey of 41 transgender men (individuals who had a male or masculine identity but who had been assigned female sex at birth). 33 Of those studied, 25 (61%) reported testosterone use prior to pregnancy. Among testosterone users, 6 (24%) had an unplanned pregnancy and 14 (72%) conceived within six months. Of the prior testosterone users, 20 (80%) resumed menses within six months of stopping testosterone, and five participants conceived while still amenorrheic from testosterone (though whether they were concurrently using testosterone at conception was unclear). The majority of respondents among both prior testosterone users and nonusers used their own oocytes, and most used a partner's sperm.
Pregnancy completion and outcomes
Unfortunately, limited prior data make an understanding of the factors affecting mode of delivery challenging. In Ellis' study, individuals had salient reasons for desiring either vaginal birth or cesarean delivery. 34 In Light's work, more of the transgender men who had used testosterone delivered by cesarean (9; 36%) than those who had not used testosterone (3; 19%). In addition, among the group who had used testosterone, 3 (33%) of the individuals who had a cesarean delivery requested this mode of delivery, compared with none among those who had not used testosterone. Although these findings were not statistically significant, more attention to influences on mode of delivery is warranted. 33 Specific considerations may be anticipated in the delivery suite, including acceptance of a virilized man undergoing the labor and delivery process, or patient concerns with, or dissociation from, natal female genitalia. 34 While the literature suggests that high (endogenous) androgen levels in pregnant women are associated with reduced birth weight, 35,36 in this study ''pregnancy, delivery, and birth outcomes'' did not differ according to prior testosterone use, though testosterone levels and birth weight were not measured during pregnancy. Self-reported complications included hypertension (12%), preterm labor (10%), placental abruption (10%), and anemia (7%). Notably, anemia was not reported by anyone with prior testosterone use. Study findings were limited by a small sample, retrospective self-reported outcomes, and insufficient power to observe differences between prior testosterone users and nonusers. The role of testosterone in the genesis of obstetrical complications remains unclear. Thus, at this time, any obstetrical pathophysiology that presents should be managed according to current obstetrical best practices and not determined by gender identity or prior testosterone use. Nonetheless, these findings herald important considerations for future research and clinical practice.
Pregnancy and postpartum management
Salient themes regarding the impact of pregnancy on family structure, isolation, gender dysphoria during pregnancy, and differences in interactions with health care providers emerged from open-ended survey questions. Attention to the potential for postpartum depression is warranted, as baseline depression and suicide rates among transgender individuals are higher than the adult average, and lack of societal and familial support, discrimination, assault, insufficient health provider training and awareness, and individual loneliness throughout pregnancy and parenting have been reported. 15,33,34,37,38 Many respondents reported that choice of health care provider was strongly influenced by providers' acceptance and support of the to-be parent's identity. Potentially commensurate with this finding, many respondents sought midwifery care (46%), much higher than the U.S. national average (8.2%), 39 and many expressed desires to stay out of hospital settings. However, despite overarching themes, responses were rich and varied, and as a group conveyed the need both to learn more about these experiences and to train health providers to support transgender people throughout the entire preconception to parenting period. A salient takeaway was that while having a family is something many transgender individuals want, pregnancy can lead men to acknowledge that they still have female reproductive organs, which for many can be difficult, however rewarding the pregnancy may ultimately be. Though pregnancy can be a difficult time regardless of gender identity, for these transgender men the pregnancy often led to increased experiences of gender dysphoria, which merit special attention and support.
During the postpartum period, new considerations arise along with age-old ones. Some transgender men will have to decide if and when to start or reinitiate testosterone. For those who seek to chest (breast) feed, and many may, elevated testosterone levels have been shown to suppress lactation. 40 While testosterone does not appear to significantly pass into breast milk or have a short-term impact on infants, 41 it is still recommended that men who do chest feed stay off testosterone. 29 Some transgender men defer chest reconstruction (also known as ''top'' surgery) in light of a planned desire to chest feed. Those who have had ''top surgery'' may still be able to lactate or can engage in chest feeding with the assistance of a support device. 42 Again, the choice to chest or breast feed is a personal one and may cause one to experience dysphoria while taking on (and challenging) this traditionally feminine role. Balancing the well-known health benefits of breast milk and breastfeeding against the medical, surgical, logistical, and social challenges that this might incur for a man indicates that how one feeds their child should be an informed personal choice supported by health care providers, as for any parent. [42][43][44]
Discussion and learning points
What becomes clear from qualitative study and from more generalized experience caring for transgender people is that a positive psychological outcome depends on the total experience of care, from the moment someone first presents through to the end, being inclusive and affirmative. 28 Many news reports on pregnancies of transgender men sensationalize what, for trans men as for all parents having children, should be a personal and intimate experience. Principles of care will depend on efforts to ensure that the experience of care is designed around the needs of patients, which may vary. This will mean asking all patients at the outset about their gender identity and assigned sex at birth, in addition to questions about preferred name and pronouns. 28 Good examples and training on this topic are readily available. 45,46 Understanding all individuals' gender identity will support comprehensive health services. All staff, from front-line receptionists to clinicians, will need training to understand why gender-affirming policies and behaviors are important. In particular, systems may need to be modified to ask these questions accurately while treating the information confidentially and with discretion. Health care providers and staff are often unaccustomed to caring for any transgender people, let alone ones who may be pregnant. As a result, many transgender people report discrimination and a lack of provider training in health care settings, which creates barriers to much-needed care. 15 We need to support training and inquiry that enhances medical and social understanding and advances an individual's care, rather than inquiry made simply for idle curiosity, gossip, or entertainment. The information should be received and protected as would any privileged patient information. Every health care system needs to examine systematically how it can comprehensively meet the needs of the gender diversity among its patients and community.
The authors work on opposite sides of the United States in two institutions working hard to meet the needs of transgender people. Dr. Makadon works primarily on organizational change and education at the National LGBT Health Education Center at The Fenway Institute in Boston, MA, which is part of Fenway Health, a community health center that also has a robust transgender health clinical program integrated into its core primary care model. Dr. Obedin-Maliver works at a large academic center, the University of California San Francisco (UCSF). At both institutions, we are working to encourage organizational change and enhanced resources with the goals of respect and service. At The Fenway Institute, we have a structured technical assistance program, offered nationally, that begins with an organizational readiness assessment in which we ask both staff and management about their knowledge, comfort, and attitudes, as well as their experience regarding care of transgender people, and indeed of all sexual and gender minorities. Based on our findings, we follow up with educational programs, training both front-line and administrative staff as well as clinicians. Our experience is that, however unintended, a rigid encounter with a registrar who does not understand why one's name (and gender) may vary from previous medical records or legal documents, and who questions patients publicly about this, can lead to dismay and bring an abrupt and unfortunate end to one's visit. At UCSF, we have established a primary care transgender clinic and are also working to bring visibility of transgender needs into health professional training at all levels and across specialties such as psychiatry, obstetrics and gynecology, midwifery, and urology. Efforts are underway to make system-wide changes to recognize gender identity and sexual orientation in patients' electronic medical records to facilitate meaningful communication. We also host multispecialty grand rounds and training, national conferences, and work to advance research to inform evidence-based care of transgender individuals. Utilizing resources from The Fenway Institute and others, we at UCSF have also worked to train staff and trainees to understand concepts of gender identity and the steps one may take in affirming one's gender, an understanding that must be shared by all on the health care team. Furthermore, we have found that training to facilitate this understanding, while eliminating the pull to indulge in curiosity, is a process that must be intentional and well thought out, but is not complex. In our two institutions, we have found that institutionalized kindness and respect do need the support of organizational leaders and, optimally, a respected colleague who can champion this work. Overcoming these experiential issues will allow one to receive care with dignity and to begin the process of raising a family.
"year": 2015,
"sha1": "76f9e795f12c859fbcd6a489dd6a98f7bcf81c00",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/1753495X15612658",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "76f9e795f12c859fbcd6a489dd6a98f7bcf81c00",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Deep slab seismicity limited by rate of deformation in the transition zone
Similar to earthquakes at Earth’s surface, deep earthquakes occur where sinking tectonic plates deform the fastest.
INTRODUCTION
Deep earthquakes occur within cold (<900° to 1000°C) slabs at pressures of 10 to 25 GPa (350 to 680 km). At these high pressures, brittle failure by frictional sliding or fracture, as occurs near the surface of Earth, is inhibited. Failure at these high pressures therefore requires either a mechanism to overcome the large normal stresses (e.g., embrittlement through high pore fluid pressure) or a weakening mechanism. As a result, much of the research on deep earthquakes has focused on the conditions at which sufficient fluids are present for embrittlement to occur (1,2) or on the conditions at which weakening mechanisms can trigger failure. Other proposed failure mechanisms include thermal shear instability (3)(4)(5) and transformational faulting of metastable olivine (MO) (6,7) or pyroxene (8), both of which trigger faulting through localized weakening. However, at both low and high pressure, the rock must be at conditions where strain energy accumulates and can be released through the failure processes. This study is focused on better understanding the physical conditions that determine where sufficient strain energy accumulates and how those conditions are related to the observed spatial distribution of seismicity.
Despite the differences in pressure-temperature conditions and triggering mechanisms, seismological observations of deep earthquakes suggest that these events are similar to shallow events in several ways. Deep earthquakes have (mostly) double-couple mechanisms indicating that they occur as a shear failure, aftershock sequences follow a standard Omori law for aftershock decay, energy/moment ratios are similar to shallow events, and long-range triggering of deep earthquakes has been observed [for comprehensive reviews, see (9)(10)(11)]. Analysis of source-time functions also shows remarkable similarity between shallow and deep earthquakes when depth-dependent differences in rigidity are taken into account (12).
However, compared with shallow events, deep earthquakes tend to have shorter rupture duration for a given earthquake size, exhibit more rupture complexity (i.e., multiple subevents), have a larger range in stress drop (1 to 100s of MPa) and rupture velocities [0.2 to 0.9 times shear velocity, with some earthquakes exhibiting supershear rupture velocity; see references in (11)], and exhibit depth-dependent aftershock productivity (e.g., almost absent at intermediate depths but present deeper than 550 km). In addition, some very deep events have very low radiation efficiency (<0.1), which has been interpreted to indicate melting during the rupture process (13). These seismic observations indicate that the rupture process for shallow and deep earthquakes is similar (i.e., a shear failure) despite different failure mechanisms and physical conditions (pressure, temperature, deviatoric stress), which affect the details of the rupture process and strain energy release.
Seismicity in the subducting lithosphere is often presented as the number of earthquakes per year versus depth for the world's subduction zones (i.e., the global slab seismicity depth profile; Fig. 1A). This profile has been interpreted as indicating that there are likely two mechanisms for slab earthquakes, with dehydration embrittlement occurring at intermediate depths (50 to 300 km), where fluids are being released from the slab, and transformational faulting occurring deeper, where the slab is likely drier and metastable olivine and pyroxene may be present. There is growing seismological evidence for an MO wedge in the Japan slab, although it has not been definitively detected elsewhere due to a lack of dense seismic networks above other slabs (11). In addition, Gutenberg-Richter statistics (i.e., b values) also differ for seismicity above and below 350 km, supporting the hypothesis that there are two different mechanisms (9).
This explanation for the depth distribution of seismicity, based on a depth-controlled failure mechanism, needs to be reevaluated in light of new observations. First, there is growing evidence that the deep slab may not be dry and fluids can be transported and released from the slab in the deep transition zone (14). Second, regional differences in b values suggest that rupture may occur by a combination of mechanisms including transformational faulting and thermal shear instability for deep earthquakes (15). Third, recent studies suggest that shear instability is also a viable mechanism for intermediate-depth earthquakes (4,5,16).
All three proposed mechanisms for triggering deep earthquakes are, to first order, controlled by the requirement that the slab remains sufficiently cold to the base of the transition zone. The thermal structure of slabs is primarily controlled by the age of the slab at the time of subduction (t_sub; older slabs are colder and thicker) and the rate at which slabs sink into the mantle (V_s; slower subduction allows more time for the slab to heat up). These two variables have been combined to define the thermal parameter, Φ = V_s t_sub, and compared with the maximum depth of seismicity in the slabs, z_max. This comparison has been shown to be consistent with the predicted depth extent of an MO wedge in kinematic thermal models (17) and in dynamic models (18), although thermal models accounting for the variability of thermal conductivity in the slab (19) or water effects on reaction kinetics (20) suggest the MO wedge could be substantially shorter.
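Because the thermal parameter is a simple product of sinking rate and plate age, a short numerical sketch makes the contrast between cold and warm slabs concrete. The sinking rates and ages below are illustrative assumptions, not values for any particular subduction zone:

```python
# Minimal sketch of the thermal parameter, Phi = V_s * t_sub, in km.
# The sinking rates and plate ages below are hypothetical examples.

def thermal_parameter_km(v_s_cm_per_yr, t_sub_myr):
    """Phi = V_s * t_sub: (cm/yr) * (Myr), converted to km
    (1 Myr = 1e6 yr; 1 km = 1e5 cm)."""
    return v_s_cm_per_yr * t_sub_myr * 1e6 / 1e5

# A fast, old slab vs. a slow, young one (assumed values):
print(thermal_parameter_km(9.0, 100.0))  # 9000.0 km: cold slab
print(thermal_parameter_km(2.0, 30.0))   #  600.0 km: warm slab
```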
However, perhaps more problematic for any possible failure mechanism that is thermally limited is the observation that many slabs exhibit large gaps in seismicity below 410 km (see figs. S2 and S3). Most notably, the Peru and Chile slabs have earthquakes at 550 to 650 km but no deep earthquakes from 410 to 550 km, and the same is true for sections of the Java-Sumatra slab. Similarly, the shallow-dipping section of the Japan slab (between profiles 3 and 4 in fig. S2D) has a large aseismic region (~150 km across, from 200- to 660-km depth).
These observations appear to be inconsistent with any mechanism for deep earthquakes that is primarily controlled by temperature. This is because temperature contours in the slab are elongated concentric surfaces, and therefore, the critical temperature for the failure mechanism will be present at all depths up to the maximum depth. If earthquakes are occurring at 660 km because the slab is cold enough for the failure mechanism to operate at that depth, then there is a portion of the slab at all depths above 660 km that is also cold enough for earthquakes to occur. Large gaps in deep earthquake seismicity indicate that being cold enough for the deep earthquake failure mechanisms to be viable, while necessary, is not a sufficient condition to explain the distribution of deep earthquakes. Therefore, some other physical factor, in addition to temperature, must control the depth distribution of deep earthquakes.
An alternative explanation for the peak in seismicity in the transition zone is that it is due to higher stresses in the slab at this depth caused by the viscous resistance in the lower mantle (21), buoyancy forces, or stresses related to volume contraction associated with both equilibrium or metastable phase transitions [see (22) and references therein]. These studies demonstrate that the combination of available forces and slab rheology predicts high stress magnitudes (>500 MPa). However, they are instantaneous calculations, which cannot show if the resulting deformation of the slab is consistent with observations or occurs at high enough strain rates. They also do not address the spatial variability in seismic strain rate.
At shallow depths (<100 km) in the crust and lithosphere, the interior of tectonic plates is aseismic, while seismicity occurs at plate boundaries where deformation is localized and strain rates are high. Localized deformation at plate boundaries occurs through a feedback between tectonic forces and the rock rheology [e.g., (23)]. High strain rate is also known to be a factor affecting failure by thermal shear instability (4) and transformational faulting (6). This suggests that the discontinuous distribution of seismicity in slabs may also be determined by feedbacks between the rheology and forces acting on the slab, which leads to a discontinuous distribution of high strain rate regions in the cold slab.
Early consideration of the causes of deep earthquakes recognized that slab rheology is also an important factor in determining where deep earthquakes occur. Wortel (24) considered the requirement that the rheology of slabs must allow for the accumulation of stresses that are released in the process of an earthquake and used this to determine a critical temperature for deep seismicity of <900° to 1000°C. Above this temperature, the slab strength (viscosity) is too low and the stresses will be accommodated by viscous flow. This temperature is consistent with more recent estimates for the maximum temperature at which slabs deform through low-temperature plasticity or yielding (25). Brodholt and Stein (26) later argued that slabs are rheologically strong enough to support deep earthquakes beyond 660 km but assumed uniformly low strain rate of 10 −18 s −1 in the slab. At this low strain rate, the slab would be essentially rigid and would sink through the mantle without deforming internally, which is inconsistent with the occurrence of earthquakes and the deformed shapes of slabs inferred from seismicity (27,28). Therefore, both the occurrence of earthquakes in slabs and the geometry of slabs require that slabs support relatively high stresses but are able to deform internally. This is an important constraint not only for the generation of deep earthquakes but also for the rheology used in long-term subduction models. The requirement that slabs support high stress and deform internally is met by dynamic models of subduction that use a rheology with a strong temperature dependence from a composite diffusiondislocation creep viscosity for olivine and either yielding [e.g., (29,30)] or some other approximation (31) of low-temperature plasticity (25).
Here, I show that the strain rate pattern within deforming slabs in these models varies spatially and temporally and mimics the strong variability in seismicity within the world's subduction zones. I use this correspondence to argue that observed seismic strain rate, while only a fraction of the total strain rate, reflects the actual variations in strain rate in slabs. That is, in addition to the thermal constraints on the possible failure mechanisms, earthquakes occur where the strain rate is high, and the gaps in seismicity within cold slabs reflect regions of the slab that are deforming more slowly. This is not an entirely new concept: Tao and O'Connell (32) showed that the peak in seismicity in the transition zone could be explained by the high strain rate in a weak slab (same viscosity as the mantle) due to a jump in viscosity into the lower mantle. However, such a weak slab could not support the stresses required for earthquake generation nor explain the magnitude or orientation of stress in the slab (21). More recently, others have shown that earthquakes preferentially align in regions of high curvature [e.g., (27)] and that weakening the slab leads to strain rates in the deep slab that are sufficiently high to drive thermal shear instability (33). Here, I revisit this largely overlooked factor in the process of deep earthquake generation to explicitly argue that the seismicity distribution directly reflects the spatial variation in strain rate within strong but deforming slabs. Note that this is different from arguing that seismicity occurs where the stresses are higher [e.g., (21,22)]: in the models presented, the stress within the cold slab is high everywhere because it is deforming at the yield stress (1 GPa), but a reduction in viscosity through plastic yielding allows the slab to deform at a higher strain rate locally.
RESULTS
The hypothesis presented here was first motivated by examining the strain rate evolution in two-dimensional (2D) dynamic models of subduction and recognizing the similarity to the seismicity pattern for intermediate to deep earthquakes. Therefore, I first present observed seismic strain rate estimated from the moment release rate in the slab (see Methods) and then compare this to the strain rate distribution in slabs from simulations (see Methods and the Supplementary Materials).
Seismic strain rate depth profiles
Although much consideration of the mechanisms for deep earthquakes has been motivated by the distinctive global seismicity depth profile (Fig. 1A), there is considerable variability both between and within individual subduction zones. Figure 1 (B to I) shows the regional seismicity and strain rate profiles for eight subduction zones. The strain rate curves largely follow the seismicity pattern, except in regions with large events relative to the regional average, which exhibit strain rate peaks (e.g., Kuriles at 600 km). There are three types of regional depth profiles. First, Tonga, Kermadec, and Java-Sumatra are all similar to the global profiles, with seismicity present at all depths and a seismicity/strain rate peak in the transition zone. Second, both Chile and Peru are distinctive due to the large gap in seismicity from 300 to 500 km (except for a few events in Chile). Last, the Kuriles, Japan, and Marianas lack the peak in the transition zone. In Japan, there are only a few earthquakes deeper than 600 km, and the Marianas appears to have two peaks centered at 450 and 600 km.
The variability in seismicity and strain rate observed between subduction zones is also present within individual subduction zones. Figure S2 (A to H) shows profiles spaced 100 or 200 km apart for all eight subduction zones. Figure 2 shows the seismicity and strain rate profiles for the central section of the Tonga-Kermadec slab from 32°S to 21°S. Even in this central region of an old (85 to 100 Ma) and cold slab, far from the effects of slab edges, the seismicity pattern and strain rate vary substantially from one profile to the next. Some of the profiles have the characteristic peak in seismicity within the transition zone, but the depth and width of the peak vary; still, other profiles do not have this peak at all. In adjacent profiles, the depth and width of the peak change continuously, suggesting that the processes or conditions that determine the location of seismicity are also changing over length scales of less than 100 km along-strike.
Fig. 2. Seismicity and strain rate exhibit continuous changes in peak depth and width along adjacent profiles.
Examples from the Tonga (1 to 5) and Kermadec (7 to 11) slabs. From profiles 1 through 5, there is a continuous change in the depth and width of the transition zone peak and the emergence of a narrow strain rate peak at 400-km depth. From profiles 11 to 7, the transition zone peak disappears, being replaced by two smaller peaks, and then a shift to a broad peak centered at 400-km depth. Locations of profiles are shown in fig. S2 (A and B).
The global average, while illustrative, does not capture the observed variability in seismicity and strain rate. More importantly, ignoring this variability is ignoring important information about the conditions at which deep earthquakes occur and the physical requirements for any failure mechanism. If strain rate is a key factor controlling the variable distribution of seismicity in slabs, then the seismic strain rate should reflect the actual strain rate in the slab (just as seismicity reflects strain rate within the plates at the surface). In this case, multiple failure mechanisms for deep earthquakes are possible, but all require sufficient strain accumulation, and therefore, earthquakes are limited to cold and high strain rate regions. Alternatively, if the strain rate is uniform within the slab, then the strain rate is not an important factor, and in this case, some other factor controlling appropriate conditions for the various failure mechanisms will determine the seismicity pattern. Because it is not possible to directly measure the strain rate in slabs, I use numerical simulations to estimate the strain rate distribution.
2D dynamic models of subduction
The strain rate (dε/dt) within the slab depends both on the rheology used in the models and on the effect of phase transitions on the time-dependent evolution of the slab. For the strongly temperature-dependent viscosity of olivine, the slab rheology is primarily determined by the yield strength and maximum viscosity allowed in the models. This means that the slab interior is stiff and resists internal deformation. Therefore, the stresses caused by sinking of the slab are primarily accommodated by flow in the surrounding mantle. However, during episodes of slab folding, yielding within the slab leads to regions of localized internal deformation with higher strain rate.
Model 1 shows the evolution of an 80-Ma-old slab in which no phase changes are included ( Fig. 3A and movie S1). At the start of subduction, the stress due to negative slab buoyancy is small, and the main area of deformation is the bending region at the outer rise, which exhibits a classic hourglass-shaped yielding region (Fig. 3A, a). As the slab lengthens, it sinks rapidly through the upper mantle and experiences high strain rates throughout; the slab sinks faster than the trailing plate and stretches due to low viscous resistance from the surrounding mantle (Fig. 3A, b). Once the slab reaches the viscosity jump at the top of the lower mantle, it slows down as the buoyancy of the slab becomes partially supported by higher viscosity in the lower mantle. At the same time, the internal strain rate decreases markedly, and the effective viscosity increases (Fig. 3A, c). Subsequently, the slab dip shallows slightly, but there is otherwise little change to the slab shape or sinking rate for the remainder of the simulation (Fig. 3A, d).
In model 2, the evolution of the slab is quite different, owing to the effect of the phase transitions (Fig. 3B and movie S2). The evolution of the slab is similar to model 1 until the slab starts to interact with the more viscous lower mantle. At this point, there is a strongly time-dependent strain rate pattern associated with the bending and buckling of the slab. As shown previously, the density anomalies associated with the phase transitions cause folding and buckling of the slab as well as forward and retrograde motion of the trench (30,34). High strain rate regions form in regions of bending, with an hourglass pattern characteristic of bending with a neutral plane. In addition, there is a region of high strain rate at 550- to 650-km depth, which occurs with or without slab bending. This high strain rate region occurs between the garnet-to-ilmenite (g-i; elevated) and garnet-to-bridgmanite (depressed) transitions in the harzburgite layer.
For model 2, the maximum strain rate occurring below a temperature limit (e.g., 1000°C) is plotted as a function of depth (Fig. 3B, e to h). The depth profiles show high strain rates at shallow depths (<100 km) corresponding to the high rates of deformation along the slab surface and in the bending region at the trench. Below this depth, the strain rate magnitude generally decreases with a minimum near 300 km. In the transition zone, the shape of the profile is strongly time variable as the slab folds and buckles. During sizeable bending events, the maximum strain rate does not depend strongly on temperature, while at other times, the maximum strain rate is higher at temperatures of 900° to 1000°C. The depth and width of strain rate peaks, as well as the number of strain rate peaks, change in time. Note also that the strain rate drops sharply, crossing into the higher-viscosity lower mantle. These strain rate profiles have similar characteristics to the strain rate profiles calculated from observed seismicity. It is these similarities in strain rate between the long-term subduction models and the observations that argue in favor of strain rate as an important environmental variable determining the distribution of deep earthquakes.
Model 3 is the same as model 2 except that the initial subducting plate age is younger (40 Ma). Here, the evolution of the slab and strain rate pattern is similar to model 2, exhibiting peaks in strain rate during bending and folding (Fig. 3C and movie S3). However, because the slab is younger, it is also warmer, with the 700° to 800°C isotherms restricted to depths less than 400 km and periods of time when the 1000°C contour does not extend past 660 km or becomes broken. Also, because the slab is warmer, it has a smaller integrated strength, and it deforms at higher strain rates overall. Note that in the deeper, warmer regions of the slab, the stress levels are lower because the slab is not deforming at the yield stress, and therefore, despite the higher strain rates, seismicity might not be expected to occur in these regions.
In all three models, the stress orientations indicate that the slab exhibits down-dip compression (DDC) along the top/central part of the slab (~200 to 600 km), except in regions of folding, consistent with earlier studies (21,35). In the folding regions, the location of the DDC shifts to the high-strain rate region on the underside of the slab, and the stretching orientation is parallel to the folded slab surface. This suggests that the envelope of seismicity defining the location of deep slabs may shift from the top of the slab to the bottom of the slab, depending on the orientation of folding at that depth.
Comparison of observations and models
The models show a dynamic view of continuously changing slab shape and peaks in strain rate. For Earth, however, we have only a single snapshot in time. One way around this is to recognize that the shape of 3D slabs evolves in both time and space, so that adjacent profiles capture the time evolution of the changing shape. This is a commonly used space-for-time substitution from structural geology.
Using this approach, comparison shows that both the observations and the models exhibit peaks in strain rate of variable magnitude, depth, and width. In the models, peaks follow regions of bending in time (Fig. 4), while in observations, there are many examples of the depth and shape of peaks systematically changing along-strike ( Fig. 2 and fig. S3) and of seismicity following lines of high curvature along-strike (27). In the models, there are times when strain rate is low throughout much of the slab, just as there are profiles in the observations that have very low seismicity adjacent to regions with higher seismicity (e.g., Japan).
In addition to the time-variable evolution of the strain rate profiles, there are two more general characteristics of the strain rate profiles to explain. First, note that in the models, the strain rate decreases to a minimum value beneath 660 km, for all times, regardless of the shape of the slab at this depth (Figs. 3 and 4). This drop in strain rate within the slab is caused by the increased viscous support provided by the higher viscosity of the lower mantle. This higher viscosity slows down the overall rate of deformation of the slab (by a factor of 100) and the surrounding mantle. The increase in viscous support by the surrounding mantle also means that much less stress must be supported by the slab itself. And the slab is no longer bending and buckling but rather sinks passively (<1 cm/year) into the lower mantle. This suggests that the cessation of seismicity at 660-km depth could be controlled by the change in rheology, which causes an overall lower strain rate in the slab and surrounding mantle. However, it is also true that transformational faulting from MO also shuts off beyond 660 km because the transformation to bridgmanite plus ferropericlase is endothermic (36).
Second, there is a peak in strain rate just below 600 km that persists even when there is no appreciable bending of the slab at this depth (Fig. 4). Figure 5 shows that this peak occurs at the garnet-to-bridgmanite transition in the harzburgite layer, which lies between the elevated garnet-to-ilmenite (g-i) phase transition and the depressed ilmenite-to-bridgmanite (i-B) and ringwoodite-to-bridgmanite plus ferropericlase (R-B + f) phase transitions. The region between these two phase transitions is subject to net compression caused by the negative buoyancy above g-i and positive buoyancy below i-B and R-B + f. This compression causes a higher localized stress, which, due to yielding, leads to a reduction in viscosity and higher strain rate. This result suggests that accurately accounting for the phase transitions in both the pyroxene and olivine components of the slab, as is done in these simulations, is essential for fully accounting for the buoyancy forces affecting the overall deformation of the slab and causing deep slab seismicity. Because the numerical simulations are fully dynamic, the evolution of subduction rates and trench motion, and thus slab shape, is determined by the time-evolving balance of forces. Therefore, the models do not correspond to any particular subduction zone on Earth. However, because the time steps in the models correspond to thousands of years, and the strain rates are determined by the instantaneous balance of forces and the rheology, snapshots from the models with similar geometry can be compared with observed profiles.
Fig. 3 (caption fragment). Model 3 is a warmer and therefore weaker slab and deforms at higher strain rates but shows similar folding and buckling behavior to model 2.
First is a comparison between a model snapshot and a profile from Chile (Fig. 6A). The Chile slab has a fairly planar shape below the shallow flat-slab segment (28) and is thought to be sinking directly into the lower mantle (37), similar to the model snapshot. Both the model and observations exhibit low strain rates above the transition zone with a peak around 600 km associated with the phase transitions at this depth. Similar profiles are also seen in Peru and portions of Java-Sumatra (see fig. S3, F and G).
Second is a comparison for the Mariana slab (Fig. 6B). The model snapshot shows an overturned slab with high strain rate regions between 350 and 500 km and a second peak associated with the g-i transition near 600 km. While typical profiles of slabs do not exhibit this overturned shape, sections of the Mariana and Java-Sumatra slabs are clearly overturned (see fig. S2). The distance between the westernmost limit of the slab and the slab tip at 700 km is about 250 km in the model and about 200 km in the observations. The high strain rate region in the model at 350- to 500-km depth corresponds to a cluster of events in the Marianas slab, with no seismicity above or below this bending region (except for one very deep event). However, note that the high strain rate region at 350- to 500-km depth in the model occurs on the bottom of the plate with DDC, while the deeper strain rate peak occurs across the slab width. If the location of deep earthquakes is controlled by strain rate, then these results require that some events initiate in the lithospheric portion of the plate, rather than the crust or harzburgite layers: an important observation for determining which failure mechanisms may be active in different locations in the slab.
Third is a comparison between a model snapshot and a profile from the Bonin slab (Fig. 6C). The profile is from the region of the slab between the shallow-dipping, planar slab to the north in Japan and the steeply dipping, curved slab to the south in the Marianas (see map in fig. S2E). Note that the shape of the profile seen here persists over 200 to 250 km along-strike. The seismicity indicates that the slab flattens to the west just below 500-km depth [the 1982 moment magnitude (Mw) 6.7 outboard event is noted]. However, there is also an event at 680-km depth behind (east of) the shallower seismicity and separated by a gap. This deep earthquake is the 2015 Mw 7.8 Ogasawara (Bonin) Islands event (38). One possibility to explain the change in geometry across such a narrow region is a tear in the slab (39). A second possibility is that the slab is folded (38), as shown in the model snapshot. Note that in this snapshot, there is a region of high strain rate in the outboard, top portion of the fold, but the strain rate in the bottom of the fold is low. The fold in the model snapshot is also broader than the fold needed to explain the earthquakes, indicating that more intense weakening of the slab is required during folding.
DISCUSSION
The numerical models show that high strain rate regions occur where the slab is bending or folding, and locally between phase transitions with opposite Clapeyron slopes. Both the modeled strain rate and observed strain rate profiles exhibit the following: (i) peaks of variable magnitude, depth, and width; (ii) regions of very low strain rate (gaps in seismicity); and (iii) a sharp drop-off in strain rate below 660 km. The similarity in the strain rate profiles from the models and from the seismicity shows that seismic strain rate directly reflects the actual strain rate of the slab. Therefore, the spatial variation in deep earthquake seismicity is determined by the spatial pattern in strain rate within strong, deforming slabs.
This conclusion requires that the slab rheology is such that the slab is viscously strong (strong temperature-dependent viscosity) and that it can yield in response to localized higher stresses. If a lower yield stress (or lower maximum viscosity) were used, the whole slab would deform at a higher rate. However, in these models, a lower yield stress causes the slab to break off because of the added negative buoyancy from the phase transitions (34), compared with other models that use a lower yield stress [e.g., (40)]. Also, a lower yield stress may not be consistent with the large stress drops (up to 100s of MPa) estimated for some deep earthquakes (9,41) or the differential stress at which low-temperature plasticity would occur in cold slabs (25).
Fig. 4. Time evolution of the maximum strain rate in the slab for model 2 demonstrates spatial and temporal variability. The color shows the maximum strain rate occurring in the slab at temperatures less than 900°C as a function of depth and simulation time. The depth and width of strain rate peaks migrate in time, following bending regions and folds. There is also a peak centered at 600-km depth that occurs at three times, independent of a major fold in the slab (see Fig. 5).
Fig. 5. Strain rate peak below 600-km depth is associated with the garnet-to-bridgmanite transition in the harzburgite layer. (A) Zoom-in on the strain rate in the slab for model 2 at 13.9 Ma (Fig. 3E) showing the location of the phase transitions (gray) and temperature contours (black). (B) Profiles of the strain rate peak at 1000°C (black) and 900°C (blue). This strain rate peak occurs for periods of 5 to 10 Ma (see Fig. 4) in the absence of appreciable folding or bending of the slab. gt, garnet; il, ilmenite; rw, ringwoodite; fp, ferropericlase; brg, bridgmanite.
The yield stress, or use of a strong power law exponent (31), is an approximation to low-temperature plasticity (i.e., Peierls creep). At shallow depth, the yield stress is also used to approximate brittle failure through frictional processes (i.e., Byerlee's law). A better approximation of the low-temperature plasticity may be achieved using a temperature-dependent power law exponent (42) and would likely lead to overall higher strain rates in the cold interior of the slab, but would still exhibit peaks in strain rate in regions of bending or folding. Similarly, including the effects of elasticity with a viscoelastic rheology would also decrease the magnitude of stress in the slab and can result in a lower apparent slab viscosity (43). It is also important to note that slab temperature remains an important factor for deep seismicity. Young and slowly sinking slabs will not have deep seismicity because they are too warm and therefore deform through diffusion and dislocation creep. In this case, the stress in the slab is relaxed viscously.
The idea that strain rate is an important factor in understanding deep earthquakes is not new, but it has been largely ignored or forgotten in the literature relating potential failure mechanisms to the physical state of the slab. The mechanism of shear instability explicitly relies on having a high enough strain rate (and stress) to cause shear heating (3); however, localized grain size reduction is likely necessary to reach sufficient strain rates for this mechanism to be viable (5). Transformational faulting of olivine also requires a sufficient strain rate, and the strain rate affects the window of temperatures at which this mechanism occurs in the laboratory [see Figure 4 in (6)]. However, while many previous models explore the state of stress in the slab [e.g., (21,22,44)] and evolution of an MO wedge, they do not also assess the strain rate requirements [e.g., (18)]. Last, the correspondence of high strain rate with regions of bending and buckling in the models also agrees with the observation that earthquakes appear to align with regions of high slab curvature (27) and the higher rates of seismicity in strongly deformed slabs [e.g., Tonga; (45)].
While several studies have used seismic strain rate as a minimum constraint on the slab strain rate (and maximum viscosity) required in dynamic models of subduction [e.g., (40,46)], these models have not considered the spatial variability in strain rate and how this may be related to the rheology of the slab. In contrast, many past studies of subduction have used simplified rheologies and inherently weak slabs [see (32) and review in (47)]. While these weak slab models may meet the minimum average strain rate requirement, they are inconsistent with laboratory constraints on rheology, the requirement that slabs be strong enough to store stresses that are released seismically (not through viscous flow), and the large observed stress drops for some deep earthquakes.
While the models presented here provide compelling evidence that strain rate is an important factor, together with the thermal structure, in determining the distribution of deep earthquakes, there are limitations that must be explored in future models, including low-temperature plasticity, elasticity, tectonic overpressure from phase transitions, inclusion of an MO wedge, compressibility, and 2D geometry (this list is addressed in the Supplementary Materials). Systematically addressing these limitations will surely affect the details of the slab deformation and the magnitudes of the strain rates and stresses, but it is unlikely to affect the primary conclusions presented here because the slabs will still bend and buckle with spatially and temporally variable strain rate. To further link the dynamical models to possible failure mechanisms, running identical subduction models, both with and without MO, for a range of thermal parameters would facilitate analyzing how stress magnitudes and orientations are related to portions of the slab that have appropriate temperature and strain rate conditions for the possible deep earthquake failure mechanisms.
Incorporation of a better approximation of low-temperature plasticity, followed by elasticity and compressibility, should be a primary target for further study because this will allow more direct comparison between the strain rate and stress states in the models and the conditions required for each deep earthquake failure mechanism. It is important to note that Farrington et al. (43) show that the mode of subduction and slab morphology are not affected by inclusion of elasticity in 3D models of free subduction; therefore, the distribution of high strain rate regions associated with bending in the slab would also not be affected. However, the location of the maximum stress (and its magnitude) and the stress orientations within the bending region are shifted because the stress and strain rate are not generally coaxial in viscoelastic materials. It is for this reason that the stress orientations from the slab models have not been analyzed in detail or compared directly with moment tensor solutions.
Progress in understanding the triggering mechanisms for deep earthquakes has been slowed by an insufficient physical framework to adequately demonstrate or refute the viability of proposed failure mechanisms. Taking into account the added constraint of strain rate should help to resolve which of these mechanisms are active in the subducting lithosphere, with the possibility that multiple mechanisms may be required to explain the variability within and between slabs of varying age and state of deformation. Considering the rate of deformation and the process through which strain accumulates in the slab more explicitly may also foster reevaluation of seismic observations related to rupture behavior (moment release, rupture time, b values, aftershock occurrence, and stress drop), and how these are related to different triggering mechanisms. For example, considering the detailed orientations of stress and strain in bending regions in conjunction with a physical model of the failure mechanism may help to explain the preference for near-horizontal failure planes for deep earthquakes (48) and the large depths (near 660 km) for the largest deep earthquakes (11). In addition, it is possible that the observed correlation of b values with thermal parameter (15) may also be related to the strain rate of the slab. This is because (i) faster sinking slabs are expected to undergo more internal deformation to accommodate the viscous resistance to sinking into the lower mantle, and (ii) the correlation between b value and thermal parameter is primarily controlled by the sinking rate. This is why, for example, Tonga has a higher thermal parameter than Japan, even though the subducting plate in Tonga is younger than that in Japan. Last, the spatially variable strain rate can be used to further constrain the appropriate rheology for the lithosphere by providing a direct link between the short time scale phenomena of strain release through earthquakes and the long-term deformation of slabs.
CONCLUSIONS
The spatial distribution of deep earthquakes exhibits marked variability between different subduction zones and along-strike within individual subduction zones. Existing explanations for deep earthquakes assume that the distribution of seismicity is primarily determined by whether the appropriate thermal conditions exist in the slab for a variety of special triggering mechanisms. Here, I have shown that the strain rate distribution from simulations of strong but deforming slabs has the same variability as the observed strain rate: peaks in strain rate at variable depths, regions of low strain rate, and a sharp drop-off in strain rate at 660 km. The results presented here cannot distinguish between possible failure mechanisms for deep earthquakes. However, they do suggest a new approach for testing these mechanisms that would combine the thermal and strain rate constraints with appropriate rheological models. With these models, it may be possible to more directly link the required conditions (e.g., pressure, temperature, and strain rate) for viable failure mechanisms with seismic observations of deep earthquakes (e.g., stress drop, radiation efficiency, and b values) and to better constrain the rheology of the lithosphere and mantle.
METHODS
I compared the time-dependent evolution of deformation within subducting lithosphere to the seismically accommodated strain rate observed in present-day slabs. For the numerical simulations, I used the second invariant of the strain rate tensor to quantify the magnitude of the strain rate. The seismic strain rate was calculated following the analysis of Bevis (49). While both measure deformation within the slab, the seismic strain rate is, by definition, the deformation that is not accommodated by viscous flow. Therefore, the spatial pattern (depth dependence) of the strain rates was compared, but not the magnitudes.
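As a concrete illustration of the model-side measure, the sketch below computes the second invariant of the strain rate tensor from a gridded 2D velocity snapshot using central differences. The velocity arrays are placeholders rather than actual CitcomS output:

```python
import numpy as np

def strain_rate_second_invariant(vx, vz, dx, dz):
    """Second invariant sqrt(0.5 * e_ij * e_ij) of the 2D strain rate tensor,
    from velocity components on a regular grid (axis 0 = depth, axis 1 = x)."""
    dvx_dx = np.gradient(vx, dx, axis=1)
    dvx_dz = np.gradient(vx, dz, axis=0)
    dvz_dx = np.gradient(vz, dx, axis=1)
    dvz_dz = np.gradient(vz, dz, axis=0)
    exx, ezz = dvx_dx, dvz_dz
    exz = 0.5 * (dvx_dz + dvz_dx)
    return np.sqrt(0.5 * (exx**2 + ezz**2) + exz**2)

# Placeholder check: simple shear at rate gamma gives an invariant of gamma / 2.
z = np.linspace(0.0, 1.0, 11)
vx = np.tile(z[:, None], (1, 11))  # vx increases linearly with depth (gamma = 1)
vz = np.zeros_like(vx)
print(strain_rate_second_invariant(vx, vz, dx=0.1, dz=0.1)[5, 5])  # ~0.5
```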
Dynamic subduction models
The subduction models are fully described in Billen and Arredondo (30). The time-dependent evolution of the sinking lithosphere (slab) was modeled in a 2D slice of a spherical shell extending from the surface to the core-mantle boundary and 61° in longitude (fig. S1). Simulations were run using the CitcomS finite element code (50). CitcomS solves the conservation equations for mass, momentum, and energy using the extended Boussinesq approximation, which assumes incompressibility but includes an initial adiabatic gradient, shear heating, and latent heat from phase transitions (51). The model setup allows for fully dynamic simulations in which only buoyancy forces drive subduction, plate, and trench motions (free subduction): all boundaries have a zero normal velocity and no tangential stress (free slip). To allow the plates to move freely toward or away from the sidewalls, we imposed a boxed region at the trailing end of both plates that has a fixed thermal profile and low viscosity. Subduction was initiated with a proto-slab extending to a depth of 200 km.
Key features of the model setup include (fig. S1) the following: (i) a layered compositional density structure for the subducting and overriding plates, (ii) a composite viscoplastic rheology based on laboratory experiments for olivine, and (iii) compositionally dependent phase transitions. In addition, the basaltic crustal layer is modeled as a weak layer (maximum viscosity of 10^20 Pa s), which allows the subducting plate to slide past the overriding plate. The maximum viscosity reverts to the global maximum of 10^24 Pa s as the basalt transitions to eclogite (at 80 to 100 km). All of the parameters for the composite viscosity and phase transitions are documented in Billen and Arredondo (30).
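To make the composite viscoplastic construction concrete, the sketch below shows one common way to combine diffusion creep, dislocation creep, and a plastic yield cap into a single effective viscosity. The flow-law prefactors and activation energies are placeholder values rather than the calibrated parameters of these simulations (which are documented in Billen and Arredondo); only the 1-GPa yield stress and the 10^24 Pa s viscosity ceiling follow the text:

```python
import numpy as np

R = 8.314  # gas constant (J mol^-1 K^-1)

def creep_viscosity(edot, T, A, E, n):
    """Viscosity for a flow law edot = A * sigma^n * exp(-E / (R * T)),
    inverted to stress and converted via eta = sigma / (2 * edot)."""
    sigma = (edot / A) ** (1.0 / n) * np.exp(E / (n * R * T))
    return sigma / (2.0 * edot)

def effective_viscosity(edot, T, yield_stress=1e9, eta_max=1e24):
    # Harmonic average of diffusion (n = 1) and dislocation (n = 3.5) creep;
    # the prefactors A and activation energies E below are placeholders.
    eta_diff = creep_viscosity(edot, T, A=1e-10, E=300e3, n=1.0)
    eta_disl = creep_viscosity(edot, T, A=1e-16, E=530e3, n=3.5)
    eta_creep = 1.0 / (1.0 / eta_diff + 1.0 / eta_disl)
    # Plastic yield cap: limit the stress 2 * eta * edot to the yield stress.
    eta_yield = yield_stress / (2.0 * edot)
    return min(eta_creep, eta_yield, eta_max)

# Cold slab interior vs. warm ambient mantle (T in K, edot in 1/s):
print(effective_viscosity(1e-15, 900.0))   # capped by yielding / eta_max
print(effective_viscosity(1e-14, 1600.0))  # creep viscosity controls
```

The key behavior this construction reproduces is the one described in the text: the cold slab interior is capped at a high but finite viscosity, so it supports stresses up to the yield stress, while yielding allows locally faster deformation where stresses concentrate.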
Seismic strain rate calculation
The strain rate associated with seismicity in the subducted lithosphere was calculated following Bevis (49). This calculation relates the moment released within a volume of the slab to the down-dip strain rate

$$\dot{\epsilon} = \frac{\sum_T M_o}{\mu V T} \quad (1)$$

where $\sum_T M_o$ is the total moment released within a volume, $V$, during a time period, $T$, and $\mu$ is the rigidity. For this calculation, a volume of slab material, $V = WHL$, is considered, where $W$ is the trench-parallel width, $H$ is the slab thickness, and $L$ is the down-dip slab length. Equation 1 results from considering that the slip in each event, $D$, is related to the seismic moment by $M_o = \mu A D$, where the fault area is $A$. The average slip accumulated within a time period, $T$, is

$$\bar{D} = \frac{\sum_T M_o}{\mu W H}$$

where the average fault area is taken as $A = WH$. Assuming this slip accommodates down-dip deformation, the change in down-dip length is $\delta L = \bar{D}$. The down-dip strain is then given by $\epsilon = \delta L / L$ and the strain rate by $\dot{\epsilon} = \delta L / (LT)$.
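A direct transcription of Eq. 1 is compact; the sketch below works in SI units, with the rigidity and seismogenic thickness taken from the values quoted below and the remaining inputs purely illustrative:

```python
# Sketch of Eq. 1: seismic strain rate from total moment in a slab volume.
# mu = 60 GPa and H = 80 km follow the values used in this study; the moment,
# bin dimensions, and time window in the example are illustrative only.

SECONDS_PER_YEAR = 3.15576e7

def seismic_strain_rate(total_moment_Nm, W_m, H_m, L_m, T_s, mu=60e9):
    """epsilon_dot = sum(M0) / (mu * V * T), with V = W * H * L in m^3."""
    return total_moment_Nm / (mu * W_m * H_m * L_m * T_s)

# Example: 1e20 N m released over 50 years in a 200-km-wide, 80-km-thick bin
# whose 10-km depth interval has a down-dip length of ~11.5 km (60-degree dip).
print(seismic_strain_rate(1e20, 200e3, 80e3, 11.5e3, 50 * SECONDS_PER_YEAR))
# ~6e-15 s^-1 for these assumed inputs.
```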
The strain rate is calculated along evenly spaced trench-perpendicular profiles located every 200 km (W) along the trench, in 10-km depth intervals (dz), assuming a constant seismogenic thickness of 80 km (H). For Tonga, I used W = 100 km because the seismicity rate is much higher than in other subduction zones. A value of 60 GPa was used for the rigidity (49). In the Preliminary Reference Earth Model (PREM), the rigidity increases by a factor of 2 with depth in the upper mantle. However, at the same time, the seismogenic width is also expected to decrease with depth as the slab warms. Therefore, these changes will largely cancel out. In addition, the changes in strain rate of interest vary by 10× to 100×. Therefore, for simplicity, I used constant values for the rigidity and seismogenic thickness. The down-dip length of the slab within each depth bin depends on the average dip of the slab within that bin, $\delta$, as $L = dz/\sin(\delta)$. Using the Slab 2.0 geometry model (28), the average slab dip was calculated from all points in the depth bin located within 100 km (0.5 W) of the profile.
Earthquake data for a 50-year time period (1964 to 2014) are downloaded from the ISC-EHB (International Seismological Centre-Engdahl-van der Hilst-Buland) catalog for depths of 100 to 700 km and magnitudes of 4.0 and greater (52). A map and the earthquake profiles for each region are shown in figs. S2 and S3. For this time period, all the earthquakes have been relocated using the EHB algorithm, which provides better earthquake locations and, in particular, better depth estimates than previous ISC determinations (53). Moment magnitudes are used when available; otherwise, the body-wave magnitude is converted to moment magnitude using the relationship $M_w = 0.85 m_b + 1.03$ (54). The moment for each event is then given by $M_o = 10^{1.5(M_w + 10.7)}$ (note that this gives the moment in dyne·cm; 1 N·m is 10⁷ dyne·cm). Last, for each subduction zone, the regional strain rate as a function of depth is determined by summing the profiles. Plots of the seismicity rate and strain rate as a function of depth along each of the profiles are included in the Supplementary Materials (fig. S3, A to H).
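A minimal sketch of the catalog processing described above (magnitude conversion and summation of moment in 10-km depth bins); the toy catalog values are illustrative only:

```python
import numpy as np

def mb_to_mw(mb):
    """Convert body-wave magnitude to moment magnitude (relation from ref. 54)."""
    return 0.85 * mb + 1.03

def mw_to_moment_nm(mw):
    """Moment in dyne*cm from Mw, converted to N*m (1 N*m = 1e7 dyne*cm)."""
    return 10 ** (1.5 * (mw + 10.7)) / 1e7

# Sum moment in 10-km depth bins for a toy catalog of (depth_km, mb) pairs
catalog = [(150.0, 5.1), (155.0, 4.7), (420.0, 6.0), (605.0, 5.5)]
bins = np.arange(100, 710, 10)                 # 100 to 700 km depth range
moment_per_bin = np.zeros(len(bins) - 1)
for depth, mb in catalog:
    i = np.searchsorted(bins, depth, side="right") - 1
    if 0 <= i < len(moment_per_bin):
        moment_per_bin[i] += mw_to_moment_nm(mb_to_mw(mb))
```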
For any study using seismicity to constrain rates of deformation, one must be aware that the relatively short (50 years) duration over which data are available may not capture seismicity that occurs on a longer time scale. In particular, since the rate of seismicity in deep slabs is quite low compared with rates along plate boundaries at the surface, apparent gaps in seismicity may be filled in by longer observation times. Similarly, an isolated, but rare, large event can result in an apparent spike in strain rate. For example, the Mw 8.3 Okhotsk event in the Kuriles is the largest deep earthquake recorded and appears as a strain rate spike at 600-km depth in Fig. 1F and fig. S3C (profile 8).
"year": 2020,
"sha1": "784fc2926b2b57da07fab64b83a37cf5661b952f",
"oa_license": "CCBYNC",
"oa_url": "https://advances.sciencemag.org/content/advances/6/22/eaaz7692.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "61121cb395d1be331a6e2ddfb4a778d9972a554c",
"s2fieldsofstudy": [
"Geology",
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Geology"
]
} |
Assessing the Influence of Vehicular Traffic-Associated Atmospheric Pollutants on Pulmonary Function Using Spirometry and Impulse Oscillometry in Healthy Participants: Insights from Bogotá, 2020–2021
Air pollution, particularly from particulate matter (PM2.5) and black carbon (eBC), has been implicated in airway pathologies. This study aims to assess the relationship between exposure to these pollutants and respiratory function in various populations, including healthy individuals, while seeking an accurate assessment method. A cross-sectional study was conducted in Bogotá, evaluating respiratory function in users of bicycles, minivans, and buses through spirometry and impulse oscillometry. Measurements were taken along two main avenues, assessing the PM2.5 and eBC concentrations. The results reveal higher pollutant levels on AVE KR 9, correlating with changes in oscillometry values post-travel. Cyclists exhibited differing pre- and post-travel values compared to bus and minivan users, suggesting that aerobic exercise mitigates pollutant impacts. However, no statistically significant spirometry or impulse oscillometry variations were observed among routes or modes. Public transport and minivan users showed greater PM2.5 and eBC exposure, yet no significant changes associated with environmental contaminants were found in respiratory function values. These findings underscore the importance of further research on pollutant effects and respiratory health in urban environments, particularly concerning different transport modes.
Introduction
Numerous studies have examined the impact of air pollutants on human health, with a particular focus on particulate matter (PM2.5) and black carbon (eBC) [1–3]. Exposure to PM2.5 and eBC has also been linked to an increased risk of cardiovascular diseases, such as heart attacks, strokes, and high blood pressure. Prolonged exposure to these pollutants has been associated with a higher mortality rate, especially among vulnerable populations like the elderly, children, and individuals with pre-existing respiratory or cardiovascular conditions [4,5]. However, urban environments often involve short-term, high-concentration exposure to these pollutants, leading to changes in the respiratory system, which can result in increased respiratory infections and asthma exacerbations, particularly in susceptible populations [6,7]. These changes are mediated by the irritative effects on the airways [8–10].
Nevertheless, there is limited evidence regarding the short-term effects of PM2.5 and eBC exposure on healthy participants. Findings from the ITHACA study, conducted in four corridors of Bogota and involving three different transport modes, indicated that PM2.5 and eBC concentrations were the highest in buses and minivans compared to bicycles (p < 0.05) [11]. However, the ITHACA study did not observe any effects on spirometry volumes among the participants, raising the question of whether spirometry is sensitive enough to evaluate the short-term effects of PM2.5 and eBC on lung function [11–13].
Some authors have suggested that impulse oscillometry may be a more suitable test for evaluating changes in the airways associated with pollutant exposure [14,15]. The present study aimed to assess the effects of exposure on respiratory function (spirometry and impulse oscillometry) among a group of healthy users of three different transport modes in Bogota.
Materials and Methods
We conducted a cross-sectional study aiming to assess the respiratory function of individuals exposed to PM2.5 and eBC using three different modes of transportation: bicycles, minivans, and buses.
Population and Study Area
Ten participants voluntarily agreed to travel by bicycle, minivan, and bus on different routes from August 2021 to December 2021. The participants were recruited from the Air Contamination And Health Effects in Microenvironments in Bogota (ITHACA) study [11]. The study included men and women between 18 and 54 years of age who were nonsmokers and had no history of chronic noncommunicable diseases. For more details, please refer to the Protocol for a Mixed-Methods Study (ITHACA) on the Assessment of Personal Exposure to Particulate Air Pollution in Different Microenvironments and Traveling by Several Modes of Transportation in Bogota, Colombia [11].
The measurements were conducted on two routes in the northern zone of the urban area of Bogota: Avenida Carrera 9 (AVE KR 9) from Calle 161 to Calle 127 and Carrera 19 (KR 19) between streets 161 and 134 (Figure 1). The selection of these routes was based on three criteria agreed upon with the Secretaria Distrital of Mobility: (i) roads with high daily vehicular traffic (including bicycles, minivans, and regular buses) in the Integrated Public Transport System (SITP); (ii) routes with dedicated bike paths; and (iii) pathways that had not been previously monitored in similar studies.
Type of Study and Sample Size
We conducted a non-probabilistic study focused on measurements (number of trips) rather than individuals. To determine the sample size, we formulated a one-sided hypothesis to detect a difference in means greater than zero. The significance level was set at α = 0.05, and we aimed for a target power of (1 − β) = 0.80, with a margin of error of β = 0.20. Considering a constant of K = 6.3 and a minimum difference of 0.55 deemed significant in R5 (µ1 − µ2), we assumed a standard deviation (σ) of 1.1 in each group. To conduct this study, a minimum sample size of 50 trips was estimated.
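For reference, the quoted numbers are reproduced by the standard two-sample power calculation, where K = (z₁₋α + z₁₋β)². This sketch is an interpretation of the text, not the authors' code:

```python
from scipy.stats import norm

alpha, power = 0.05, 0.80
sigma, delta = 1.1, 0.55   # assumed SD per group and minimum relevant difference in R5

# K = (z_{1-alpha} + z_{1-beta})^2 for a one-sided comparison of two means
K = (norm.ppf(1 - alpha) + norm.ppf(power)) ** 2   # ~6.2, close to the K = 6.3 used

# Required number of trips for detecting delta between two groups
n = K * 2 * sigma**2 / delta**2
print(round(n))                                    # ~50 trips, matching the text
```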
Measurement of Concentrations of Particulate Matter Less Than 2.5 µm (PM2.5) and Black Carbon
PM2.5 measurements were performed using the SidePak™ AM520 (SP) personal aerosol monitor, manufactured by TSI Incorporated (Shoreview, Minnesota, United States). The mass concentration of PM2.5 was measured at 1 Hz using a laser scattering-based method. These instruments utilize laser wavelengths of 780 and 640 nm, respectively [16]. The size selection of the sampled particles was achieved through an inertial impactor located at the instrument inlet. Flow calibration was performed before each use to ensure proper selection of particle aerodynamic size. A thorough comparison among the three SP instruments used in this study was conducted in a laboratory environment prior to the campaign. The instruments exhibited excellent agreement, with data averaged every 30 s showing a correlation coefficient of 0.90. The bias between the instruments was estimated at 15%.
eBC was measured using a MicroAeth AE51 (AethLabs, San Francisco, CA, USA). Raw data were corrected for low atmospheric pressure in Bogotá. The nominal flow rate was set at 150 cm³ min⁻¹. Corrections were applied for filter-loading effects using a linear correction method. Simultaneous measurements with different levels of attenuation were used to infer the loading correction constant. The reported accuracy of the instrument was ±100 ng m⁻³. Data were reported as 30 s averages, with a maximum attenuation of 140. Filters exceeding this maximum were used to infer the loading factor, and corrected data from less-loaded instruments were used for reporting concentrations [16].
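The exact form of the linear loading correction is not given in the text; a minimal sketch of one commonly used linear form, BC_corr = (1 + k·ATN)·BC_raw, with k inferred from the overlapping simultaneous measurements, might look like:

```python
import numpy as np

def correct_loading(bc_raw, atn, k):
    """Linear filter-loading correction: readings are scaled up as the
    filter darkens. bc_raw in ng/m^3, atn (attenuation) dimensionless."""
    return (1.0 + k * np.asarray(atn)) * np.asarray(bc_raw)

def infer_k(bc_loaded, atn_loaded, bc_fresh):
    """Estimate k from simultaneous measurements of a loaded and a lightly
    loaded instrument: bc_fresh / bc_loaded - 1 should grow linearly with ATN."""
    ratio = np.asarray(bc_fresh) / np.asarray(bc_loaded) - 1.0
    slope, _ = np.polyfit(np.asarray(atn_loaded), ratio, 1)
    return float(slope)
```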
More detailed information related to the calibration and review methods was described in a previously published protocol and prior studies [11].
Measurement of Pulmonary Function
Two tests were applied to measure the lung function of the participants: impulse oscillometry and spirometry. Both tests were performed before the start of travel (pre-travel) and at least two hours after having completed travel (post-travel). At least three repetitions of each of the tests were applied by a respiratory therapist. The selection of the best spirometry and oscillometry tests was performed according to the recommendations and criteria of the American Thoracic Society [17].
Selected Variables
According to the literature review, we included variables from spirometry and impulse oscillometry tests. Some authors have proposed that the difference between pre- and post-travel FEV1/FVC spirometry values is associated with short- to medium-term exposure to PM2.5 [18]. Additionally, it has been highlighted that it is possible to estimate changes in the resistance to air in the small airway (R5) related to short-term exposure to PM2.5 [19].
In addition, it has been indicated that an increase in the difference between the peripheral (R5) and central (R20) resistance to air may be associated with exposure to particulate matter [16].
Statistical Analysis
Paired data were compared using a Wilcoxon test to analyze whether there were significant relationships between lung function test values and their possible alterations in relation to exposure to atmospheric pollutants. Total measurements were used for statistical analysis, with the median reported as the measure of central tendency along with the interquartile range.
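A minimal sketch of this analysis (paired Wilcoxon test with median and interquartile range), using hypothetical pre/post-travel values rather than the study data:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired pre/post-travel R5 values for one route (kPa/L/s)
pre  = np.array([0.32, 0.41, 0.28, 0.35, 0.30, 0.38, 0.33, 0.29, 0.36, 0.31])
post = np.array([0.33, 0.40, 0.30, 0.36, 0.29, 0.39, 0.35, 0.28, 0.37, 0.32])

stat, p = wilcoxon(pre, post)                   # paired, non-parametric test
q1, med, q3 = np.percentile(post, [25, 50, 75])  # median and IQR of post values
print(f"p = {p:.3f}, post-travel median = {med:.2f} (IQR {q1:.2f}-{q3:.2f})")
```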
Ethical Considerations
The study was conducted in accordance with the Declaration of Helsinki for studies involving humans. It considered the International Ethical Guidelines for Health-related Research Involving Humans. The protocol was approved by the Research Ethics and Methodologies Committee (CEMIN) at the National Institute of Health of Colombia (protocol code 014/2019).
Results
The study included 10 volunteers who completed 50 trips. Their average age was 31 years, and 60% (n = 6) were women. The average body mass index (BMI) was 23. Seven volunteers used all transportation modes; one used a minivan and bus; one used only a minivan; and one used only a bicycle, for a total of 25 trips per route. Because exposure was measured for each participant on both routes, the trips were counted once per route, giving a total of 50 trips included in the study.
Respiratory Function Tests
There were no significant clinical changes in spirometry parameters across the routes and modes of transportation. The spirometry tests showed similar FEV1/FVC values before and after travel, and these values were consistent across different modes of transportation (Figures 3 and 4). The Wilcoxon test results for pulmonary function parameters were non-significant for both routes and all transportation modes (Table 1).
The R5 and D5-20 values showed no significant differences based on sex, and there were no notable variations in the post-examination responses between the different routes (mean = 12.27 Hz, σ = 8.7 Hz for KR 19; mean = 15.63 Hz, σ = 8.8 Hz for AVE KR 9). It is important to highlight that 18% (n = 9) of the participants exhibited increased peripheral resistance values (R5 and R20/R5) during the pre-test impulse oscillometry, and this was not linked to the specific route or the participants' sex (p > 0.05) (Figure 4). However, post-test peripheral resistance only improved in 22% (n = 2/9) of the participants who initially had elevated peripheral resistance values (R5 and R20/R5). These changes were also not associated with the route or mode of transportation (p > 0.05).
The median D5-20 values for the three transportation modes (bicycles, buses, and minivans) were similar (mean = 13.78 Hz, σ = 10.16 Hz; 13.05 Hz, σ = 8.3 Hz; and 15.03 Hz, σ = 7.6 Hz, respectively) (Figure 5). Notably, the subjects using bicycles showed consistent pre- and post-travel values, whereas those using buses and minivans tended to have increased post-travel values, but the changes were not statistically significant (p > 0.05). The data presented in the graph reveal notable differences in the PM2.5 and eBC concentrations among the different modes of transportation. Specifically, minivan users experienced the highest levels of PM2.5 on KR 19, with an average concentration of 19.93 µg/m³. On the other hand, buses on KR 19 and bicycles on AV KR 9 exhibited the highest concentrations of eBC, with averages of 6.49 µg/m³ and 7.69 µg/m³, respectively.
No significant clinical changes were observed in the spirometry parameters when comparing different routes and modes of transportation. The results of the spirometry tests indicate similar values for FEV1/FVC before and after travel, and these values remained consistent across the various transport modes.
The figure provides a comparison of pre- and post-travel values for two respiratory parameters: FEV1/FVC from spirometry (left panel) and R5 from impulse oscillometry (right panel). In the left panel, the graph displays the pre- and post-travel FEV1/FVC values for both routes. No statistically significant changes were observed in this spirometry parameter, indicating that the lung function remained consistent before and after travel on both routes.
In the right panel, the graph represents the pre- and post-travel changes in the R5 values for each route using impulse oscillometry. It is worth noting that 18% (n = 9) of the participants showed increased peripheral resistance values during the pre-test impulse oscillometry assessment. However, in the post-test measurements, there was an improvement in this parameter, indicating a decrease in peripheral resistance. Importantly, the changes in the R5 values were not found to be associated with the specific route taken (p > 0.05), suggesting that the route of travel did not have a significant impact on this respiratory parameter.
The D5-20 values demonstrated similar patterns across genders, with no significant differences observed between the routes in terms of the post-examination responses (mean = 12.27 Hz for KR 19 and 15.63 Hz for AVE KR 9). Importantly, these changes were not associated with the specific route or mode of transportation (p > 0.05).
The Wilcoxon test results for pulmonary function parameters did not show any significant differences among the routes and transportation modes, as indicated by the non-significant p-values obtained. This suggests that there were no statistically significant changes in the measured pulmonary function parameters across the different routes and modes of transportation.
Discussion
In our study, we aimed to assess the impact of short-term exposure to PM2.5 and black carbon on lung function in a group of healthy individuals. Our results reveal that the participants who travelled by minivan and bicycle were exposed to higher levels of PM2.5 compared to the other modes of transportation. Additionally, minivan users experienced higher concentrations of eBC regardless of the route taken. These heightened pollutant levels can be attributed to factors such as heavy traffic and the presence of cycling infrastructure in these specific areas. These findings are similar to previous results reported in Bogota [19], showing that the levels of pollution experienced by individuals inside diesel buses were significantly higher compared to pedestrians and cyclists [20]. The authors underscored that the remarkably elevated eBC/PM2.5 ratios indicate a significant contribution from diesel engine emissions to the presence of fine particulate matter.
We observed no significant changes in the spirometry and impulse oscillometry values associated with air pollutants when analyzing different routes and modes of transportation. These results are consistent with previous reports from other sectors of the city, which suggest that there is no association between spirometry changes and short-term exposure to air pollutants, regardless of sex, mode of transportation, or route [11]. Current evidence generally supports a positive link between active transportation and physical activity. Even in cities with moderate air pollution levels, the benefits of physical activity outweigh the potential harm caused by air pollution [21]. However, some authors have suggested that long-term exposure to PM2.5 and habitual physical activity may interact negatively [22], indicating that the increased intake of PM2.5 during physical activity could diminish the benefits of regular physical activity on lung function [23].
We should underscore that this is the first study using impulse oscillometry to measure the effect of exposure to PM2.5 and black carbon in a sample of healthy participants. It is worth noting that 18% of the participants showed peripheral airway obstruction patterns in the pre-test oscillometry values. In the post-test, peripheral resistance decreased in 14% (n = 7) of the participants. The impulse oscillometry system has been suggested as a tool to evaluate short-term effects, but in high-risk populations such as children and patients with COPD and asthma [24]. Studies performed in those populations have indicated that higher concentrations of PM2.5 and black carbon are associated with increased differences in central and peripheral airflow resistance [25]. It is important to mention that there is a scarcity of evidence regarding this type of measurement in a healthy adult population [15].
These findings contribute to our understanding of the potential impact of environmental factors on respiratory health. Further research with larger and more diverse samples is needed to strengthen our understanding of the relationship among air pollution, transportation patterns, and respiratory function. Such knowledge can inform targeted interventions to protect and improve the respiratory health of individuals in urban environments.
These findings underscore the significant influence that transportation choices and infrastructure have on air pollutant concentrations. They highlight the importance of implementing targeted interventions to mitigate pollution levels in specific transportation settings [26]. Of particular concern is the prevalence of minivans as a common mode of transportation for children and adolescents in Bogotá. Considering that the duration of exposure during home-to-school trips exceeds that of home-to-office trips in the city, there may be potential long-term effects on the respiratory health of younger individuals.
It is crucial to address these issues and implement measures to minimize the adverse respiratory health impacts associated with high pollutant concentrations. By doing so, we can create healthier environments for individuals, especially vulnerable populations, such as children and adolescents, and ultimately improve public health in urban areas.
This study has several limitations that should be acknowledged. Firstly, the small sample size used in this study may not be representative of the overall healthy adult population in the city, limiting the generalizability of the findings. Consequently, the study may lack sufficient statistical power to detect significant changes in impulse oscillometry values associated with different routes or modes of transportation.
Secondly, we did not consider night-time exposure to air pollutants, which could potentially affect the pre-travel oscillometry R5 and R20/R5 values. Future studies should account for variations in exposure during different times of the day to obtain a more comprehensive understanding of respiratory function changes.
Thirdly, the assessments of PM2.5 and black carbon exposure during the journey were conducted in a minivan. Although efforts were made to minimize external air flow by keeping the windows closed, it was challenging to completely control the exposure to air pollutants. This may introduce some variability in the results and should be taken into account when interpreting the findings.
Furthermore, it is important to acknowledge the limitations associated with the utilization of the oscillometry and spirometry techniques in this study. The equations utilized to evaluate these parameters were originally developed using data from non-Latin American populations, and as such, there is a potential for bias in the interpretation of the results. It is crucial to consider the potential variations in respiratory physiology and lung function characteristics among different ethnic and geographical populations, which may impact the accuracy and applicability of these equations in the context of the present study. Further research specifically focusing on Latin American populations is warranted to address these limitations and provide more accurate and context-specific reference values for oscillometry and spirometry measurements.
Lastly, the study design did not include a long-term follow-up. Future research should consider incorporating longitudinal studies to evaluate the impact of air pollution on respiratory health over extended periods of time.
In conclusion, our findings suggest that individuals using minivans and bicycles may experience higher exposure to PM2.5 and black carbon. However, we did not observe any significant associations between air pollutant concentrations and short-term changes in spirometry and impulse oscillometry variables. It is important to note that our study had a limited sample size, and further research with larger cohorts is necessary to fully understand these relationships.
Figure 1. (a) Spatial location of Bogotá and (b) location of the two routes monitored in the northern zone of the city of Bogotá. The dots indicate the start and end points of the parallel routes.
Figure 2. Concentrations of PM2.5 (left panel) and eBC (right panel) by transportation modes and routes.
Figure 3. Pre- and post-travel FEV1/FVC spirometry values by route and mode of transportation.
Figure 4. Comparison of pre- and post-travel FEV1/FVC spirometry values and pre- and post-travel R5 oscillometry values.
Figure 5. D5-20 pre- and post-travel oscillometry values by route and mode of transportation.
Table 1. Medians and Wilcoxon tests of spirometry and impulse oscillometry variables (post-travel values).
Effect of air pollution on agricultural outcome over Faya
It has been reported that aerosol deposition on the leaf is absorbed as a nutrient. The proximity of Faya to the Sahara desert has contributed immensely to the net aerosol loading over the area. The aerosol optical depth (AOD) was obtained from satellite measurements (Multi-angle Imaging Spectro-Radiometer). After treating fifteen years of the AOD satellite dataset (2000-2013), the aerosol loading over Faya was derived. The dataset is important for understanding the yearly aerosol loading influence over the area. In this context, excess aerosol deposition on the leaf may be detrimental to the health of the plant and hinder its productivity.
Introduction
One of the known techniques for assessing the level of contamination over an area is the aerosol optical depth (AOD). The AOD is directly proportional to the air pollution over a geographical area. The sources of air pollution are mainly anthropogenic; however, there are also natural pollution sources, such as Sahara desert dust influx. The optical properties of airborne particles have a strong impact on the local radiative forcing and radiation balance of the Earth. The interaction between aerosols and solar radiation can be characterized by AOD; the extinction and scattering coefficients describe aerosol transport and accumulation over a geographical area. Aerosol optical properties are well captured by the Multi-angle Imaging Spectro-Radiometer (at 500 nm wavelength). The AOD can be defined as the negative natural logarithm of the fraction of solar radiation that is not scattered or absorbed along a path by airborne particles. The Ångström exponent describes the dependence of the AOD on wavelength. Several procedures have been adopted recently to measure optical properties. For instance, aerosol radiative properties have been used to derive optical properties (Calvello et al., 2010). The water-soluble part of airborne particles originates from gas-to-particle conversion and consists of various kinds of sulfates, nitrates, and organic, water-soluble substances. The soot fraction represents the incombustible black carbon. Mineral aerosol, or desert dust, consists of a mixture of quartz and clay minerals. Antarctic, or sulfate, aerosol consists largely of sulfate, that is, 75% H2SO4; it is considered only for computing the aerosol optical depth. Recently, we began to estimate the aerosol loading of over fifty towns and cities across West Africa using mainly these optical properties. It was observed that the West African climate system is unique and that no single model could capture the aerosol loading of more than three towns or cities in West Africa. The West African regional scale dispersion model (WASDM) was then born out of the necessity to know the current state of pollution over West African cities. WASDM has been validated using the AOD datasets of over sixty locations in West Africa (Emetere et al., 2015, 2016). In this paper, the primary goal is to statistically and computationally determine the aerosol loading over Faya in order to foresee future risks that must be promptly avoided. The data provide a good foundation for further investigations of aerosol loading; they give researchers essential insight towards deploying a sun-photometer over Faya, Chad; they quantify the degree of air contamination; and they give modelers valuable knowledge on aerosol loading and retention challenges over Faya, Chad. It has been reported that aerosol deposition on the leaf is absorbed as a nutrient (Burkhardt, 2010). However, our hypothesis is that if too much aerosol is deposited on the leaf, it may affect the plant and indirectly reduce its productivity.
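As a concrete illustration of the definitions above (AOD as the negative natural logarithm of the direct transmittance, and the Ångström exponent as its wavelength dependence); the numerical values are illustrative:

```python
import numpy as np

def aod_from_transmittance(direct_fraction):
    """AOD is the negative natural logarithm of the fraction of solar
    radiation that is not scattered or absorbed along the path."""
    return -np.log(direct_fraction)

def angstrom_exponent(tau1, lam1_nm, tau2, lam2_nm):
    """Wavelength dependence of AOD, assuming tau ~ lambda^(-alpha)."""
    return -np.log(tau1 / tau2) / np.log(lam1_nm / lam2_nm)

tau_500 = aod_from_transmittance(0.55)           # e.g., 55% direct transmission
alpha = angstrom_exponent(0.60, 440, 0.35, 870)  # coarse dust gives small alpha
print(f"AOD(500 nm) = {tau_500:.2f}, Angstrom exponent = {alpha:.2f}")
```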
Experimental Design, Materials and Methods
Faya-Largeau is the largest city in northern Chad and was the capital of the region of Bourkou-Ennedi-Tibesti. The latitude and longitude of Faya are 17°55'32.52"N and 19°6'15.41"E, respectively (Figure 1). The dataset was obtained from MISR (https://l0dup05.larc.nasa.gov/L3Web/download). The data were processed using Excel. The conversion from AOD to aerosol loading was done using WASDM. The West African regional scale dispersion model (WASDM) is generally governed by three equations, of which equation (1) is resolved in accordance with the Lorentz atmospheric convection model.
The inverse of equation (4) leads to equations (5-7), which describe the Lorentz form. The analysis of equations (5-7) was done using C++ code.
Results and Discussion
The aerosol optical depth (AOD) is presented in Figure 1. The high peaks can be seen in the averages shown in Table 1. The WASDM was used to derive the aerosol loading over the study area, as shown in Table 2. It can be seen that the aerosol loading is high; hence, the likely aerosol deposition will be high. This means that the stomata of the leaf (Figure 2) may have excess aerosol on their surfaces. The statistics support the likelihood of the stomata being affected by aerosols. Figure 3: The stomata mechanism that aids aerosol absorption (Source: askabiologist.asu.edu)
Conclusion
From the aerosol loading and the statistical evidence, it can be inferred that the excess deposition of atmospheric aerosol is not advantageous to the well-being of plants, especially in regions of high aerosol loading. Hence, further technical experiments on the detailed absorption of aerosol at the stomata are required to confirm the hypothesis presented in this paper.
"year": 2019,
"sha1": "d134acacb7bb13e5621b2cc625e7f2b010fdb037",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1299/1/012090",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "f96d780972f64c011530eb563689f366e7880ca4",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Physics",
"Environmental Science"
]
} |
An Automatic Matcher and Linker for Transportation Datasets
Multimodality requires the integration of heterogeneous transportation data to construct a broad view of the transportation network. Many new transportation services are emerging while being isolated from previously-existing networks. This leads them to publish their data sources to the web, according to linked data principles, in order to gain visibility. Our interest is to use these data to construct an extended transportation network that links these new services to existing ones. The main problems we tackle in this article fall in the categories of automatic schema matching and data interlinking. We propose an approach that uses web services as mediators to help in automatically detecting geospatial properties and mapping them between two different schemas. On the other hand, we propose a new interlinking approach that enables the user to define rich semantic links between datasets in a flexible and customizable way.
Introduction
Multimodality requires the integration of heterogeneous transportation data to construct a broad view of the transportation network. The transportation field is continuously evolving with new services that are growing quickly to take part in passengers' daily commute or travel, e.g., car pooling (https://www.blablacar.fr/), car sharing (https://www.deways.com/ or https://www.drivy.com/) and bike sharing (https://www.velib.fr). The problem is that these services differ in data representation, and there is no specific standard for them to follow. This results in people manually combining different sub-trips from different sources (websites or applications) in order to create optimized trips that fit their needs. Such a task requires users to be fully aware of the surrounding services, in addition to the complex task of finding links between one system and another. This raises the need for integrating multiple transportation data sources in order to provide a global view of the network. Enabling such a solution for each company requires identifying the nearby services and finding ways to integrate them, which is a repetitive and tedious task, especially when done manually. This limits operators to isolated solutions, which have to understand, translate and integrate every single relevant data source. Even though this task is a complicated one, it becomes even more complex when considering the evolution of the integrated data and the necessity of maintaining them and keeping them up to date. Some approaches have moved into creating a public repository to integrate public transportation data (Google Transit (http://maps.google.com/landing/transit/index.html), Syndicat des transports d'ile-de-France (STIF) (http://www.stif.info)); however, they still do not take into consideration highly-evolving datasets, such as car sharing, bike sharing, car pooling, etc. Such services are highly dynamic and do not always have the notion of a fixed transportation stop.
Our goal is to find a simple way for operators to identify nearby transportation services by providing a connection portal enabling one to identify the connections between one transportation data source and other sources. In this work, we approach this problem from two perspectives. The first one is at the schema level, and it targets the automatic integration of datasets with different schemas. The second one is at the instance level, and it targets the discovery of transportation relations between different entities scattered between the datasets. We propose a homogeneous light-weight representation of transportation connections (transfer points from one stop to another) and the means to discover them in a flexible and customized manner. With this representation, we can link different types of transportation services regardless of the mode or service they offer. All that transportation systems need to know is how to handle these light connections and use them to connect with the outer world, which is much simpler than handling heterogeneous data and maintaining them.
In this article, we tackle two main problems from both fields: automatic schema matching and data interlinking.
Schema Matching
Automatic schema matching/mapping aims at proposing an automated way of discovering matching rules between datasets. However, the domain of transportation has some specific characteristics that existing approaches cannot handle. Transportation data contain geospatial properties that are represented in various formats and structures. A simple example to consider is how an address can be modeled in different sources. Figure 1 shows three different representations of the same real-world address. Figure 1a shows a model that combines the attributes Street1, Street2, Zip-code, City and Country to represent an address. Figure 1b shows the address as a combination of latitude and longitude values, while the final Figure 1c shows a WKT (well-known text; http://www.opengeospatial.org/standards/wkt-crs) representation of the same entity.
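To make the three representations concrete, here is a minimal sketch; the field names and values are illustrative, not drawn from any particular dataset:

```python
# (a) decomposed postal address
address_a = {"street1": "12 Rue de la Paix", "street2": "",
             "zip_code": "75002", "city": "Paris", "country": "France"}

# (b) coordinate pair
address_b = {"latitude": 48.8692, "longitude": 2.3316}

# (c) WKT (well-known text) point; note the lon lat ordering
address_c = "POINT(2.3316 48.8692)"
```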
In order to detect a mapping between different representations, existing approaches use individual or combined matchers that work at the schema and/or instance levels using various techniques, e.g., linguistic, constraint-based, data-type based, etc. However, the mathematically-based operators used to define the similarity between relations are not sufficient on their own to detect the complex relations in transportation data. For instance, there is no way to find out that a combination of street1, street2, zip-code, city and country is the same as a combination of latitude and longitude between two datasets by using only some mathematical functions. This problem raises the question of how we can automatically identify and map different representations of geospatial characteristics between two schemas.
The fact that each transportation dataset may contain different instances is a challenge, since we cannot rely on basic instance matching techniques to discover the schema mappings. Moreover, relying only on other properties, such as column names or value types, may not be sufficient either. To tackle this problem, we introduce an instance-based approach that detects geospatial properties of transportation points of transfer through the use of geospatial web services.
Data Interlinking
Enabling a transportation integration solution requires access to transportation sources, which can be obtained from open data [1,2], which is gaining a great deal of popularity; numerous transportation operators are using it to publish their data on the web in order to increase their market visibility (http://opendata.paris.fr/page/home/, http://www.strasbourg.eu/ma-situation/professionnel/open-data/donnees/mobilite-transport-open-data, http://www.uitp.org/tags/open-data). Many solutions have benefited from this to provide rich data for smart city applications. They use linked data techniques and data interlinking tools to provide extended information relevant to both transportation and passenger profile queries [3,4]. These techniques address equivalence detection between entities to establish links between data sources. This may help in enriching data about entities. However, this is not always enough for transportation data. Further complex relations are required to reflect the nature of transportation connections. Beyond equivalence or sameAs links, we are interested in finding connections between transportation data sources based on the geospatial characteristics of the data, which capture the reachability between different transportation networks. Furthermore, using the given tools, we face two main limitations. The first is the restriction to a predefined set of functions for composing linking rules, due to the lack of flexibility of existing systems in defining custom functions. For instance, to calculate information such as the closeness of two transportation points of transfer (bus stop, train station, etc.), we cannot define custom functions to calculate walking distances, driving distances, etc. The user is forced to dig into the code (if available) and modify it directly. The second limitation is the representation of the generated output. Supporting complex relations requires more complex output patterns. As an example, let us suppose that a link is established between two transportation points of transfer. Existing tools can provide the output (BusStop1 nextTo TrainStation132), which does not give information about the nature of this relation: they are next to each other, but how close are they, and what are the modes of transportation that we can use, etc.?
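As an illustration of the richer output we argue for, a transportation connection could carry its own attributes; the schema below is a hypothetical sketch, not a fixed vocabulary:

```python
# A "light connection" carrying more than a bare nextTo assertion
connection = {
    "from": "BusStop1",
    "to": "TrainStation132",
    "relation": "reachableFrom",
    "mode": "walking",        # how the transfer is made
    "distance_m": 240,        # walking distance, not straight-line
    "duration_s": 180,
    "status": "open",         # real-time state of the connection
}
```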
This article is structured as follows: in Section 2, we present the background and related work for both the automatic schema matching and data interlinking domains. Our contributions are then presented in Sections 3 and 4. Section 3 discusses our automatic schema-matching approach for transportation datasets. Section 4 discusses our flexible and customizable way of generating transportation connections for open transportation datasets. Both approaches are then put to the test with a real-case scenario presented in Section 5. Finally, we conclude our work and discuss some perspectives in Section 6.
Automatic Schema Matching
Automatic schema matching is one of the approaches to solving schema heterogeneity. It provides the means and the techniques necessary for uniform access to the data.
Based on [5-7], a mapping element is a 5-tuple (id, e, e', n, R), where:
• id is a unique identifier of the given mapping element
• e and e' are the entities of the first and second schema/ontology, respectively
• n is a confidence measure holding the correspondence between the entities e and e'
• R is a relation (e.g., equivalence, more general, disjointedness, overlapping) holding between the entities e and e'
The matching operation determines an alignment (a set of mapping elements) for a pair of schemas, with additional optional parameters, such as an input alignment, matching parameters (weights, thresholds) and external resources (e.g., thesauri); see Figure 2. The domain of automatic schema matching has been studied by several computer science communities and used by many applications [8]. Many interesting surveys [7,9-13] and benchmarks [14,15] have been provided over the past few years. Here, we will describe the latest approaches to automatic schema matching in general and the approaches specific to geospatial data.
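A direct transcription of this definition into code (a minimal sketch; the example entities are illustrative):

```python
from dataclasses import dataclass

@dataclass
class MappingElement:
    """A mapping element (id, e, e', n, R) as defined above."""
    id: str        # unique identifier of the mapping element
    e: str         # entity of the first schema/ontology
    e_prime: str   # entity of the second schema/ontology
    n: float       # confidence measure for the correspondence
    relation: str  # e.g. "equivalence", "moreGeneral", "disjoint", "overlap"

# An alignment is simply a set of mapping elements
alignment = [
    MappingElement("m1", "Address.latitude", "Location.lat", 0.92, "equivalence"),
]
```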
In [9], the authors describe generic schema-matching approaches that do not take geospatial data into account. They mainly use string-based techniques, such as N-grams in [16], sub-string concatenations [17] or pattern-based methods [18]. These techniques are not well suited to geospatial matching, since geospatial matching requires more than string similarities to be compared, and geospatial attributes cannot have their values described by patterns [19].
In the artificial intelligence domain, the SEMINT system [20] uses a neural network solution to determine 1:1 mappings by learning attributes' meta-data and data values. In [21], the authors apply knowledge from domain ontology snippets and data frames to detect 1:n schema mappings. These techniques do not suit geospatial data, since patterns are not sufficient for detecting attributes. Moreover, attributes can share similar meta-data or data value patterns while being completely different, e.g., city and county.
In the geospatial domain, schema-matching approaches mainly rely on external knowledge, such as domain ontologies and gazetteers, or on data instances to guide the matching task. Brauner et al. [22] propose an instance-based approach to match export schemas of geographical database web services. They assume the web services to be well described, so that their input and output are known. A query formulator queries the web services WS1 and WS2 based on a set of global instances defined from a global schema. The results are then compared to the global instances to find similarities between the global schema and the web service schemas. This approach is simple and effective in the case where the databases share the same instances. Otherwise, it does not consider the possible different data type or structural representations between the input schemas.
In [23], the authors propose to take advantage of geographic reference databases for matching and visualizing thematic data with heterogeneous spatial references. They anchor different thematic references to the same reference geo-dataset, using the geographic reference databases as background knowledge resources. Then, they derive equivalence or other relationships from the anchoring relationships. This approach requires knowing the schemas in advance.
The authors in [24] proposed another matching approach that translates qualitative queries in geospatial databases. They handle queries such as left, right, near, above, etc. The queries are translated into SQL and are evaluated with a trip advisor application called the Bremen Tourist Advisor.
Handling geospatial queries was also targeted by [25] in their system OnGIS, where they propose broker techniques for answering users' complex spatial queries.
In [26], the authors proposed an automatic matching technique for creating links between objects within different datasets that model the same real-world phenomenon. They first match nodes based on distance metrics, then roads based on a shortest-path algorithm.
The authors in [27] use an attribute relational graph to represent the pattern of geospatial objects. Probabilistic relaxation is then used to find the optimal matching of the objects among different geodata schemas. The limitation of such an approach is that it does not work when two different representations exist between the datasets. In addition, they use only attribute names and values for the similarity measure, which is not accurate in all cases.
In [28], the authors propose a scalable instance matching approach named VMI. It automatically generates links between ontology instances by building a set of inverted index-based rules to get the primary matching candidates. User-customized property values are then used to further eliminate incorrect matchings. Finally, the similarities are computed as integrated vector distances, and the matching results are extracted.
The current trend in schema matching is now more focused on combining matchers instead of creating new ones. Most of the recent approaches focus on the problem of large-scale schemas and how to handle them efficiently. There is not much support for n:m alignments; systems mostly focus on 1:1 ones. Evaluations in [29,30] show that, when matching geospatial datasets such as DBpedia and Geonames, the existing tools are efficient for simple geospatial representations, such as (latitude, longitude), while failing with more complex ones.
The transportation domain requires richer and more suitable mappings that are more relevant to its concepts. Geospatial patterns are still not found in current systems, and the existing matchers lack the ability to match some complex transportation schemas.
Data Interlinking
The goal of data interlinking is to discover entities representing the same object over distinct RDF data sources in a semi-automatic fashion [31]. The aim is to link similar instances in order to connect data sources. The survey presented in [32] describes data interlinking in more detail and highlights the characteristics of the most popular approaches.
Transportation data interlinking could be used to discover relationships between transportation entities. These relations describe how entities are semantically related to each other, e.g., near, reachable, can be accessed at (time), etc. Providing these relations enables a better view of the data and enables more accurate services. Existing tools detect equivalence relationships (sameAs) based on distance similarity metrics (string, geographical, numeric, etc.).
Many solutions have been provided to support data interlinking and publishing [33,34]. They provide the necessary tools to transform, link, publish and query data extracted from multiple sources with different formats. An example of a data publishing approach is GeomRDF [35]. It is a tool that helps users convert spatial data from traditional GIS formats to the RDF model. Regarding data interlinking, Silk [36] provides easy ways to add datasets, configure linking rules and use reference links and output configurations to generate links between the datasets. Interlinking geospatial data is done using mathematical distance functions, e.g., the Euclidean distance. LIMES [37] provides better geographical distance functions than Silk (e.g., orthodromic, Hausdorff, Frechet, etc.), which makes it more suitable for geospatial datasets. GNAT [38] works on music datasets and is based on a similarity aggregation algorithm that detects relations based on a resource's neighbors in a graph. ODD-Linker [39] proposes an extensible framework for interlinking relational data with high-quality links. Linking rules are expressed in the LinQL language, which is later translated to SQL queries to compare and identify links. RKB-CRS (co-reference resolution system) [40] is an architecture for managing Uniform Resource Identifier (URI) equivalences on the web of data by using consistent reference services. RDF-AI [41] is a dataset matching and fusion architecture based on string similarity using an external resource (WordNet).
Using the information provided in [32], we can summarize the existing interlinking solutions with their properties and compare them to our approach, as shown in Table 1.
Table 1. Existing interlinking solutions, their properties, and a comparison with our approach.

| Tool | Techniques | Output | Domain |
| --- | --- | --- | --- |
| RKB-CRS [40] | String | owl:sameAs | Publications |
| GNAT [38] | String, similarity-propagation | owl:sameAs | Music |
| ODD-Linker [39] | String | link set | Independent |
| RDF-AI [41] | String, WordNet | alignment format | Independent |
| Silk [36] | String, numerical, date | owl:sameAs, user-specified | Independent |
| LIMES [37] | String, geographical, numerical, date | owl:sameAs, user-specified | Independent |
| Link++ | User-defined | User-defined | Independent |

The user-defined links provided by existing approaches are actually sameAs links that have been renamed to suit the user's preferences, unlike in our approach, where they have a complex structure specified by the user.
Other approaches, such as BLOOMS [29] and STROMA [42], provide links with semantics other than sameAs. BLOOMS uses Wikipedia as background knowledge to detect semantic relationships between linked open data classes. The derived semantic relations are owl:subClassOf and owl:equivalentClass. STROMA extends the provided is-a and related correspondences by generating part-of relationships.
Analyzing existing link discovery approaches shows that they are mostly suited to equivalence matching. They provide functions and aggregations to detect sameAs, part-of or subClass relationships. These approaches may be suitable in some cases for geospatial data (the GeoKnow project [43] and LinkedGeoData [44]), but they are not sufficient for transportation data. Interlinking solutions must take into account both the spatial and temporal characteristics of transportation data, in addition to the real-time state. Consider that we want to connect two transportation data sources with the intention of discovering how we can reach one stop from another. Doing so with existing tools limits us to equivalence detection, due to the provided functions and output format. What is required is a more representative and semantic way to connect these sources [45], showing how they can be connected from a transportation point of view.
In conclusion, the output of an interlinking process mainly focuses on detecting a set of owl:sameAs links. However, we need to have more information in the generated links to enable better post-processing and analysis and to reduce re-calculation costs (e.g., to include information about a connection status and the distance between two connected entities in transportation links).
An Automatic Matcher for Transportation Datasets
Transportation data instances always refer to real-world objects, e.g., bike stations, bus or train stops, etc. These data are characterized by the description of an object's geographical location, represented by properties such as coordinates, addresses, etc. The problem we are faced with is the different representations of this information. We aim at investigating a way to automatically identify and match geospatial information in transportation datasets despite their heterogeneity.
Geocoding services (https://developers.google.com/maps/documentation/geocoding/intr, http://dev.virtualearth.net/REST/v1/Locations/, http://cloudmade.com/documentation/geocoding/, http://www.mapquestapi.com/geocoding/, https://developer.yahoo.com/boss/placefinder/) provide the means of transforming a description of a location (name of a place, coordinates, etc.) into a location on the Earth's surface via geocoding and reverse geocoding functions. They work as a search engine whose output contains all possible information regarding the location of the queried data.
We believe that exploiting these services can guide the matching process in automatically identifying the geospatial characteristics of the datasets. The idea, in general, is the following: first, we query a geocoding/reverse-geocoding web service with existing instances in order to find matching rules between the queried instances and the web service response. The schema of the web service must be known in advance, so a match between a queried instance and a web service instance gives us information about the schema of the queried instance. This enables us to detect complex relations between two different representations by using the web service as a mediator. Data sources are first mapped to the mediator; then, using previously-known information about the structure of the mediator, we can detect the required matching rules. Because we know how the web service is defined, we can detect n-to-m relations between the schemas.
Our system consists of the following components: web service selection, query formulation, co-occurrence matrix construction and, finally, matching rule generation. A preprocessing step precedes our approach in order to unify the structure representations in each data source and to perform some filtering and/or modifications. Here, we use the CSV format due to its simplicity. Moreover, since some columns on their own cannot provide meaningful input for a web service query, preprocessing can perform random combinations/splits of columns as additional data that may improve the web service query results, e.g., combining a street name with a city name to get more precise results from the web service. The combination is done automatically and blindly, without any prior information about the dataset schema. We note that even when the format is the same, the representation may be totally different; for example, two files may both be in CSV format, yet each represents addresses differently. Figure 3 shows a global view of our system.
Web Service-Based Query Formulation
A web service stands as a mediator that maps the data sources. We can identify more formats and representations given a richer web service schema. Therefore, the chosen web service should contain enough representations of addresses to cover any possible encountered format. In addition to the service definition, the knowledge of how the elements are mapped within the web service must be defined. For example, if a web service contains longitude, latitude and a WKT representation of an address, we must specify that a combination of latitude and longitude can be represented in WKT and vice versa. This information is saved as the "inner mapping rules" of a web service, which are used later in the matching task.
The objective of the query formulator is to query the selected web service with existing instances, aiming to obtain richer information. This step is done on each dataset separately. The query formulator creates a query and sends it to the web service. Separate requests are issued for each column in a row, as shown in Figure 4, or for the random splits/combinations of columns previously produced in the preprocessing phase; e.g., the fourth column in Figure 4 is the result of preprocessing the file by combining Columns 1 and 3. Note that Col1, Col2, Col3 represent any column names, while v1, v2, v3 represent any possible values. The web service results are grouped by the queried columns and stored in a repository for later tasks.
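To make the mechanics concrete, the following Java sketch illustrates the per-column query formulation. The GeocodingService interface, the method names and the repository structure are hypothetical illustrations, not the actual implementation; the real system also queries the column combinations produced by preprocessing.

import java.util.*;

// Hypothetical sketch of the per-column query formulation step.
interface GeocodingService {
    // Returns the web service result as column-name -> value pairs.
    Map<String, String> query(String value);
}

class QueryFormulator {
    // Groups web service responses by the dataset column they were queried with.
    static Map<String, List<Map<String, String>>> run(
            List<Map<String, String>> rows, GeocodingService ws) {
        Map<String, List<Map<String, String>>> repository = new HashMap<>();
        for (Map<String, String> row : rows) {
            for (Map.Entry<String, String> cell : row.entrySet()) {
                Map<String, String> result = ws.query(cell.getValue());
                if (result != null && !result.isEmpty()) {
                    repository.computeIfAbsent(cell.getKey(), k -> new ArrayList<>())
                              .add(result);
                }
            }
        }
        return repository;
    }
}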
Co-Occurrence Matrix Construction
Here, we use the web service results and the dataset instances to construct a co-occurrence matrix. A co-occurrence matrix is an n × m matrix, where n and m are the number of columns in the dataset schema and the web service schema, respectively. Each entry a_ij of this matrix counts the number of times a value appears at the same time in column i of the dataset schema and column j of the web service result schema.
The element comparison is done via a similarity metric [46,47]: each time a similarity is detected, the corresponding value in the matrix is incremented by one. The higher the value, the higher the probability that the two columns map to each other. An example is shown in Figure 5, with the elements in red representing common occurrences. We see two schemas, one representing a dataset schema and the second representing the web service schema. In the dataset schema, a street is represented by its name and zip code written in English words, while in the web service schema it is represented by the set {Voie, CodeP, Ville}, which stands for {Street, Postal code, City} in French. The co-occurrence matrix lists the columns of both schemas as rows and columns of the array, and each element of the matrix represents the number of times the same value appears in the corresponding row/column combination. For example, we see that the columns "Voie" and "Street Name" have two values in common, "Rue Edme Bouchardon" and "Rue des Chantiers". To calculate the matrix, we first iterate over each row in the dataset and compare the value of each column with the column values of each row in the web service results. If the similarity between the values exceeds a threshold, the value at the specific row/column index in the matrix is incremented; e.g., if the value at Street Name is similar to the value at Rue, then the cell corresponding to column Street Name and row Rue is incremented. A co-occurrence matrix is created for each repository, where a repository represents the query results for one column's instance values. Since we have multiple co-occurrence matrices, we combine them into one aggregated matrix in order to maximize the similarity. This matrix represents the global view of how the columns of each dataset relate to the web service schema based on all of the queries.
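A minimal Java sketch of the matrix construction, under the assumption of a normalized similarity function in [0, 1] and a fixed threshold; the names and the placeholder similarity are illustrative (the evaluation in Section 5 uses the Levenshtein distance).

import java.util.*;

class CoOccurrence {
    // matrix[i][j] counts similarity hits between dataset column i and web
    // service column j, over all row pairs.
    static int[][] build(List<String[]> dataRows, List<String[]> wsRows,
                         double threshold) {
        int n = dataRows.get(0).length, m = wsRows.get(0).length;
        int[][] matrix = new int[n][m];
        for (String[] d : dataRows)
            for (String[] w : wsRows)
                for (int i = 0; i < n; i++)
                    for (int j = 0; j < m; j++)
                        if (similarity(d[i], w[j]) >= threshold)
                            matrix[i][j]++;
        return matrix;
    }

    // Placeholder similarity; a normalized Levenshtein score would go here.
    static double similarity(String a, String b) {
        return a.equalsIgnoreCase(b) ? 1.0 : 0.0;
    }
}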
Matching Rules Generation
The calculated co-occurrence matrices capture the possible matchings between the data sources and the web service and, in turn, help with generating the matching rules between their schemas. Here, we iterate over each row and select the highest value. Then, we generate a matching between the corresponding row/column pair if the number of co-occurrences is higher than some pre-defined threshold.
After having the matching between each dataset and the web service, we use the web service inner mappings to detect how elements from each dataset can be matched together. To illustrate, let us consider two datasets, DS1 and DS2, and a web service, WS. Suppose that DS1 contains the columns a1 and b1, the WS schema contains ws1, ws2 and ws3, and DS2 contains a2. Knowing that the column ws3 is the combination of ws1 and ws2, that a1 and b1 map to ws1 and ws2, respectively, and that a2 maps to ws3, we can conclude that DS1 elements map to DS2 elements by the property "a1 combined with b1 is equivalent to a2". The global picture is shown in Figure 6. Summing up, the idea is to query each dataset element against a web service with a known schema and inner mapping rules. We then use the resulting instances to create co-occurrence matrices for each dataset. The matrices are used to define a matching between each dataset and the web service schema, and finally the inner mapping rules of the web service are used to create matching rules between the input datasets.
This process is done twice, once for each dataset. Using the matching rules from D1 to WS and from D2 to WS, in addition to the inner mapping rules of WS, the process terminates by producing the matching between D1 and D2.
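A minimal sketch of the per-row selection step in Java; the threshold value and all names are illustrative.

import java.util.*;

class MatchingRules {
    // For each dataset column (row of the matrix), pick the web service column
    // with the highest co-occurrence count, if it exceeds a pre-defined threshold.
    static Map<String, String> generate(int[][] matrix, String[] dataCols,
                                        String[] wsCols, int threshold) {
        Map<String, String> rules = new HashMap<>();
        for (int i = 0; i < matrix.length; i++) {
            int best = 0;
            for (int j = 1; j < matrix[i].length; j++)
                if (matrix[i][j] > matrix[i][best]) best = j;
            if (matrix[i][best] > threshold)
                rules.put(dataCols[i], wsCols[best]);
        }
        return rules;
    }
}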
Discovering Semantic Connections between Transportation Datasets
Discovering connections between transportation points of transfer cannot be done using existing interlinking tools. A more complex connection generation process is needed to enable a richer and more flexible connection representation. We introduce Link++ (shown in Figure 7), a system that enables flexible connection discovery and customized output definition using connection patterns, custom functions and linking rules. Connection patterns are templates for connection generation, used to define both the content and the format of a linking process's output.
In general, the approach consists of two main phases:
• The definition phase, where users define the connection patterns, the required functions and the linking rules.
• The generation phase, where the definitions are applied to the datasets: the rule is applied to the entities and, when it holds, a connection is created and stored in a repository.
In a formal definition, a linking task T requires the following input:
• Input data sources D1 and D2, representing the datasets to be linked;
• O, the custom-defined connection pattern;
• R, the linkage rule that defines when a connection must be generated;
• F, a set of functions required for the linking task;
• L, a set of pre-defined libraries implementing the dependencies of F.
The following sections explain in detail the tasks required for an interlinking process.
Specifying Custom Functions and External Libraries
Users are able to write any functions to be used in their linking rules or similarity calculations. This ensures the flexibility of the approach and the ability to support any interlinking task. In addition, external libraries are supported and can be used within function implementations. These functions may represent a linking rule, a similarity metric, a transformation/preprocessing operation or any other function based on users' needs. The functions are gathered in a Java file accompanied by the JAR libraries used.
Defining a Linking Rule
A linking rule specifies the conditions required to generate a connection between a given pair of entities. The main goal is to apply this rule to each entity pair in order to seek a match and create the specified connection. Defining a rule requires a set of functions (similarity metrics and preprocessing functions) previously defined by the user. Each rule is defined by a root node, which is either an aggregation or a comparison operator, and sub-nodes specifying any other functions, chained in a way that suits the linking task.
An aggregation operator combines the values of different operators/values by applying a specified aggregation method, e.g., max, min, average, etc. It is defined by an aggregation function and a threshold. Each function has a set of parameters that can be taken from the given data sources or specified directly by the user. The threshold defines whether the value of the operator is evaluated as true or false in the linking rule.
Since data sources can be represented in different ways, a transformation operator can be used to modify how values are represented. To this end, we define a function that takes its parameters from the data sources or from the composition of other transformation operators, e.g., lowercase, uppercase, concatenation, round, ceiling, etc.
Finally, the comparison operator is used to define the similarity (or the relatedness) between two properties of the given data sources, e.g., distance, equality, etc. A comparison is valid between operators themselves or with other transformation functions, and a threshold defines whether the value is accepted for the rule to be valid.
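The following Java sketch illustrates one possible shape of such a rule tree; the class names and the reduction of aggregation methods to all/any semantics are simplifying assumptions, not the system's actual class design, and transformation operators (which would wrap the property access) are omitted for brevity.

import java.util.*;
import java.util.function.*;

// Hypothetical sketch: every rule node evaluates to true/false on an entity pair.
interface RuleNode { boolean evaluate(Map<String, String> a, Map<String, String> b); }

// Comparison operator: a metric over two entities checked against a threshold.
class Comparison implements RuleNode {
    private final ToDoubleBiFunction<Map<String, String>, Map<String, String>> metric;
    private final double threshold;
    Comparison(ToDoubleBiFunction<Map<String, String>, Map<String, String>> m, double t) {
        metric = m; threshold = t;
    }
    public boolean evaluate(Map<String, String> a, Map<String, String> b) {
        return metric.applyAsDouble(a, b) <= threshold; // e.g., distance <= 1 km
    }
}

// Aggregation operator, reduced here to all/any semantics over its children.
class Aggregation implements RuleNode {
    private final List<RuleNode> children;
    private final boolean requireAll;
    Aggregation(List<RuleNode> c, boolean all) { children = c; requireAll = all; }
    public boolean evaluate(Map<String, String> a, Map<String, String> b) {
        return requireAll ? children.stream().allMatch(n -> n.evaluate(a, b))
                          : children.stream().anyMatch(n -> n.evaluate(a, b));
    }
}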
Configuring a Connection Pattern
Connections are the final output of the interlinking task, and it is important to be precise when defining a connection pattern. A pattern specifies the format of the generated connections and the required information they must contain. In other words, it represents a template that is filled in when a connection is instantiated.
A connection pattern is composed of a set of properties, where each property is defined by a function that calculates it. Function parameters can be inputs from the data sources or predefined by the rule composer. A connection pattern is freely chosen by the user according to the interlinking task and the post-processing needs. The formal definition of a connection pattern O is as follows:

Definition 1. Let D1 and D2 be two data sources, V any data type and F a set of custom functions required to generate the patterns. Pr is a set of properties, where each property is represented by a property name n, a value v and a corresponding function f, which calculates the property value during the generation process.
A connection pattern is then formalized as the set O = {(n, v, f) | n is a property name, v ∈ V, f ∈ F}. We will give a demonstration case with a real scenario of defining both the linking rule and the connection pattern in Section 5.
Once the configuration step is completed, the connection discovery is performed as described in the sequel.
Connection Discovery Algorithm
Algorithm 1 represents the pseudo-code of the implemented linking process. The algorithm iterates over each pair of entities in the two data sources and evaluates the linking rule between them. Based on the rule evaluation, the algorithm decides whether a connection must be created. If a rule is triggered, a new connection is generated by evaluating the connection pattern and applying the corresponding function of each property. The values are calculated by the functions specified in the output pattern, with their parameters filled in from the currently-compared entities. Here, we instantiate the connection and fill in its information from the return values of the functions. The connection is stored in a specified repository, and the algorithm continues on the remaining pairs until all are treated. In the worst case, the time complexity of the algorithm is O(n × m), where n and m are the sizes of the input datasets. The storage complexity (in terms of data pages) is the same as that of a nested loop join in databases, i.e., the size of the smallest dataset plus one page, which usually fits in memory. This complexity may be reduced by using pre-filtering techniques that the system may offer in a future version; for instance, using a spatial index to replace the inner loop by an index search (which reduces its cost to log(n)). The specific rules and functions defined by the user would then be applied automatically by the system in a refinement phase.
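Since Algorithm 1 itself is not reproduced here, the following Java sketch captures its nested-loop structure under the definitions above; entities are modeled as property maps, and all names are illustrative rather than the system's actual API.

import java.util.*;
import java.util.function.*;

class LinkDiscovery {
    // Entities are property maps; the rule and pattern come from the configuration.
    static List<Map<String, Object>> run(
            List<Map<String, String>> d1, List<Map<String, String>> d2,
            BiPredicate<Map<String, String>, Map<String, String>> rule,
            Map<String, BiFunction<Map<String, String>, Map<String, String>, Object>> pattern) {
        List<Map<String, Object>> repository = new ArrayList<>();
        for (Map<String, String> a : d1)
            for (Map<String, String> b : d2)        // O(n * m) rule evaluations
                if (rule.test(a, b)) {
                    // Each pattern property is filled by its associated function.
                    Map<String, Object> connection = new HashMap<>();
                    pattern.forEach((name, f) -> connection.put(name, f.apply(a, b)));
                    repository.add(connection);
                }
        return repository;
    }
}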
Both the connection pattern and the linking rule are described in XML files that conform to a document type definition (DTD); custom functions are written in Java (users can write any Java file and use the defined methods in their connection patterns or rules), and the output is generated in RDF. An example with real linked datasets is presented in the evaluation section; it illustrates the configuration process and shows instances of the XML files (output pattern and rule).
We have implemented our approach, and an executable version of the system is available online at https://github.com/alimasri/link-plus-plus.git, along with a video tutorial at https://youtu.be/u2gr7Wa4eT4.
Evaluation
We evaluate our two approaches using two datasets representing transportation companies in the Paris area: SNCF and Autolib, a railway company and a car sharing service, respectively.
The main idea is to provide the missing connections between stops belonging to different transportation modes and to see how this improves users' trip planning. We first show how we automatically discover the geospatial properties of the two datasets and then how we use this information to link them with the proposed interlinking approach.
The input data are collected from the open data portals of SNCF (http://gtfs.s3.amazonaws.com/transilien-archiver_20160202_0115.zip) and Autolib (http://opendata.paris.fr/explore/dataset/stations_et_espaces_autolib_de_la_metropole_parisienne/) in CSV format. The SNCF and Autolib datasets contain 1067 and 869 instances, respectively. Figure 8 shows the original schemas of the datasets.
Automatic Schema Matching
We will describe the process of automatically detecting the geospatial properties of both datasets according to the steps shown in Section 3.
In a preprocessing phase, we split columns containing special characters (commas, semi-colons) into two or more columns, named by the original column's name with an incremented value concatenated to its end. Therefore, Autolib's column "Cordonnees geo" is split into two columns, "Cordonnees geo 0" and "Cordonnees geo 1".
For the web service selection, we chose Google's geocoding web service (https://developers.google.com/maps/documentation/geocoding), with a function we implemented on top to filter the results into a simple schema that consists of three columns: formatted-address (a textual address representation), lng (longitude) and lat (latitude).
The query formulator queries the web service with each column's value for all of the existing rows, then groups the results by column names and saves them into a repository.The total number of issued queries is 20,185 divided into 8536 and 11,649 for SNCF and Autolib, respectively.
One co-occurrence matrix is constructed for each column, ignoring columns that returned no results from the web service. The similarity metric used is the Levenshtein distance, chosen to show that even a simple similarity metric can give good results; more complex metrics can be used to increase the precision of the similarity calculation. An aggregation matrix is then created by taking the mean of all the co-occurrence matrices' values. The resulting matrices for SNCF and Autolib are shown in Tables 2 and 3. To generate the matching rules, we iterate over each row, take the maximum value and assign a matching to the corresponding row/column pair. Using Tables 2 and 3, we obtain the following matching rules between each dataset and the web service; for SNCF: (stop-id, lng), (stop-name, formatted-address), (stop-desc, formatted-address), (stop-lat, lat) and (stop-lon, lng); for Autolib: (ID, lat), (Identifiant Autolib', formatted-address), (Rue, formatted-address), (Ville, formatted-address), (Cordonnees geo-0, lat), (Cordonnees geo-1, lng) and (Autolib', formatted-address). The execution took around 3.5 min on the given datasets, including a one-second cool-down per ten queries to comply with the restrictions of the web service.
Analyzing the results for SNCF, our system correctly matched the latitude and longitude properties. Moreover, since stop-name and stop-desc are normally names of the corresponding area, they were detected as geospatial properties as well. Regarding stop-id, this false positive matching rule can be eliminated by combining the results with constraint-based approaches. Regarding Autolib, the matching rules detected correct relations between Rue and formatted-address, and likewise for the latitude and longitude with Cordonnees geo 0 and 1. The false positive matches were (Ville, formatted-address), (ID, lat), (Identifiant Autolib', formatted-address) and, finally, (Autolib', formatted-address). These false positive matching rules can also be discarded using constraint-based approaches, for example by removing matchings that come from repeated column values or id columns, etc.
The results show 100% precision and 80% recall for SNCF, and 100% precision and 42% recall for Autolib. Matching results could be improved in different ways: (i) choosing richer web services; (ii) refining the preprocessing of the output; or (iii) using alternative similarity metrics. Combining both sets of matching rules, we can deduce the following valid rules between SNCF and Autolib: "Cordonnees geo" from Autolib maps to the combination of (stop-lat, stop-lon) in SNCF; "Rue" from Autolib maps to "stop-desc" in SNCF.
We tested the algorithm on other datasets to validate it. The chosen datasets are hospital locations in the U.K. and points of interest (POI) in Paris, in addition to the previous train and car stations. The idea here is that this approach can help in checking whether datasets contain geospatial information, in addition to the ability to identify this information and its relation to other datasets. This can be used in use cases such as finding the nearest hospital to an accident location or finding POIs near a hotel, etc. The results are shown in Table 4.
Link Discovery
After the detection of the geospatial properties, the next step is to find the transportation connections between the two datasets. In transportation networks, a connection can be described as an accessible path from one transportation point of transfer to another. A point of transfer is any stop that allows users to change a transportation unit or mode. A connection contains properties describing both the departure and arrival stops, in addition to other properties. We define a transportation connection as one of the following two types:
• Timetable connections, which have specific departure and arrival times. This type of connection will be referred to as a scheduled connection. It has the following properties: departure-time, arrival-time, departure-stop and arrival-stop.
• Other connections, which have no schedule information and whose availability is not restricted by timing constraints. We will refer to these as unscheduled connections. They have the following properties: departure-stop, arrival-stop and distance.
Data Preparation
In this phase, the goal is to represent the timetable information in a format compatible with our definition of a connection. Instead of describing a network as a series of stops or in other representations, we want to represent it as a series of connections between stops. Since SNCF is a public transportation company with data described in timetables, the task here is to extract scheduled connections from the given data. To this end, we have proposed an algorithm that transforms timetable data from GTFS files into scheduled connections. The algorithm iterates over the timetable information for each stop and creates a connection that starts from a departure stop at a departure time and ends at an arrival stop at the specified arrival time. The process is repeated over a predefined date range to limit the number of connections created.
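A minimal sketch of this expansion step in Java, under the simplifying assumption that a trip is already available as an ordered list of stop times; the record and field names are illustrative, not the actual GTFS field names.

import java.util.*;

class TimetableExpansion {
    record StopTime(String stopId, long arrival, long departure) {}
    record Connection(String depStop, String arrStop, long depTime, long arrTime) {}

    // Turns the ordered stop times of one trip into consecutive scheduled
    // connections, keeping only those inside the chosen date range.
    static List<Connection> expand(List<StopTime> trip, long rangeStart, long rangeEnd) {
        List<Connection> out = new ArrayList<>();
        for (int i = 0; i + 1 < trip.size(); i++) {
            StopTime from = trip.get(i), to = trip.get(i + 1);
            if (from.departure() >= rangeStart && to.arrival() <= rangeEnd)
                out.add(new Connection(from.stopId(), to.stopId(),
                                       from.departure(), to.arrival()));
        }
        return out;
    }
}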
In the case of Autolib, we do not have timetable information, so we need a way to discover the connections between its stops. Using our approach, we can match Autolib's dataset with itself (in order to determine when one Autolib station is reachable from another) and discover these unscheduled connections between its stops. Since the configuration task is common and independent, the following section describes how to use our approach to discover the unscheduled connections for Autolib-Autolib and Autolib-SNCF.
Discovering New Connections
Two tasks are required: one for Autolib-Autolib connections and one for Autolib-SNCF connections. In this example, unscheduled connections are driving or walking connections between Autolib-VELIB and Autolib-SNCF, respectively. We use our approach to search for connections that match predefined criteria. Since our approach works on RDF data, we have used the DataLift [34] platform to transform both the SNCF stops and VELIB CSV files into RDF Turtle format. In the sequel, we describe in detail all of the tasks required to achieve our goal.
• Defining custom functions: Our system is flexible, as it allows users to create any custom function to be used in the linking task; users can use external dependencies as well. In our example, we define the functions getWalkingDistance, getWalkingTime, getDrivingDistance and getDrivingTime. In a real scenario, we would get this information from a web service, such as Google's distance matrix API (https://developers.google.com/maps/documentation/distance-matrix/); however, due to the query limit, we have chosen to implement them as local functions based on mathematical calculations (http://www.movable-type.co.uk/scripts/latlong.html). A sketch of such a function is given after this list.
• Defining the linking rules: Recall that a linking rule describes the condition that triggers the creation of a connection. Two rules are required, one for Autolib-Autolib and the other for Autolib-SNCF. For the first one, the condition of the defined rule is the following: "If a driving path exists within 200 km (the range before the battery is totally discharged), create a connection". For Autolib-SNCF connections, the rule is: "If a walking path exists from one stop to another within one kilometer, create a connection". Rules are written in XML format, and the functions that calculate the walking distance and time are referenced from the custom functions file. We note that the parameters "200 km" and "1 km" are given by the user responsible for the configuration; we set these parameters as the maximum feasible scope for a person to drive the car or walk from one station to another. Figure 9 shows an example of how a rule can be defined.
• Defining the connection pattern: We define the output generated by the system for each valid rule. We have chosen the following properties to be represented in a connection pattern: source-id, target-id, walking/driving distance and walking/driving time. This pattern is the same for both tasks, and an example is shown in Figure 10.
Executing these tasks with the above configuration enabled us to enrich the network by discovering 535,966 internal connections between Autolib car stations and 272 new connections between the two different transportation modes, SNCF and Autolib. We will illustrate hereafter how to use these connections to calculate the earliest arrival time (EAT).
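As referenced in the custom functions bullet above, a haversine-based distance function of the kind linked there could look as follows in Java; the 5 km/h walking speed is an assumed constant for illustration, not necessarily the value used in the system.

class GeoFunctions {
    static final double EARTH_RADIUS_KM = 6371.0;

    // Great-circle (haversine) distance in kilometers.
    static double distanceKm(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_KM * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
    }

    // Walking time in minutes, assuming an average walking speed of 5 km/h.
    static double walkingTimeMin(double lat1, double lon1, double lat2, double lon2) {
        return distanceKm(lat1, lon1, lat2, lon2) / 5.0 * 60.0;
    }
}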
Calculating Routes Using Discovered Connections
The EAT is the earliest time at which we can reach every stop in a transportation network, given a departure stop and time. We have chosen this measure to get a broad view of how the newly-introduced connections can affect a large network. We have used the connection scan algorithm (CSA) [48] as an EAT implementation, since it matches our notion of a connection. In short, CSA receives a stream of connections ordered by departure time and chooses the fastest way to reach one stop from another. Because the connections are pre-sorted and can be accessed one by one in a single iteration, CSA is faster and more scalable than other existing algorithms. However, it has some limitations in our case. Firstly, it only supports timetable networks, which makes it unable to compute trips that include other services. Secondly, it does not support unscheduled connections: it only supports one footpath transition between two points of transfer. It is therefore not possible to combine scheduled connections, unscheduled connections and footpaths to create a more optimized trip.
CSA handles only public transportation networks with footpaths. In order to support multimodality, we have introduced unscheduled connections beside the timetable-based ones, and we have enabled multiple unscheduled connections between multiple points of transfer. The unscheduled connections are expanded when a connection is reached: at each iteration, all of the available unscheduled connections from an arrival stop are checked, and scheduled connections are created from them by setting the departure time equal to the arrival time at the station plus the minimum transfer duration, with the arrival time following from the unscheduled connection's duration.
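A compact Java sketch of the modified scan, assuming connections are pre-sorted by departure time; it ignores same-trip boarding subtleties and models entities minimally, so it should be read as an illustration of the extension rather than our exact implementation.

import java.util.*;

class MultimodalCSA {
    record Scheduled(String dep, String arr, long depTime, long arrTime) {}
    record Unscheduled(String dep, String arr, long duration) {}

    // Earliest arrival times from `source`, scanning scheduled connections in
    // departure-time order and relaxing unscheduled connections on arrival.
    static Map<String, Long> earliestArrival(
            List<Scheduled> connections,               // pre-sorted by depTime
            Map<String, List<Unscheduled>> unscheduled,
            String source, long startTime, long minTransfer) {
        Map<String, Long> eat = new HashMap<>();
        eat.put(source, startTime);
        for (Scheduled c : connections) {
            long reach = eat.getOrDefault(c.dep(), Long.MAX_VALUE);
            if (reach <= c.depTime()
                    && c.arrTime() < eat.getOrDefault(c.arr(), Long.MAX_VALUE)) {
                eat.put(c.arr(), c.arrTime());
                // Relax every unscheduled connection leaving the arrival stop.
                for (Unscheduled u : unscheduled.getOrDefault(c.arr(), List.of())) {
                    long t = c.arrTime() + minTransfer + u.duration();
                    if (t < eat.getOrDefault(u.arr(), Long.MAX_VALUE))
                        eat.put(u.arr(), t);
                }
            }
        }
        return eat;
    }
}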
We fed our new algorithm with both scheduled and unscheduled connections and computed the estimated arrival time for each stop. To check the effects of introducing the generated connections, we calculated the estimated arrival times with and without them and compared the results. Figure 11 shows the estimated arrival time for every stop starting from the SNCF departure stop DUA8711617. The intuition is that the lower the value, the earlier a passenger can reach a stop point starting from the departure station. Analyzing Figure 11 shows that using the generated connections and integrating them into the transportation network can reduce the estimated arrival time. Therefore, introducing these connections decreases the waiting time for passengers and results in more optimized trips. We can now consider new types of mobility that were not previously taken into account (bike sharing, car sharing, etc.). This can be used to fit passenger profiles by combining the appropriate connections while planning trips; passengers will be able to define connection types and modes and find the best trip type.
Compared to the existing link discovery frameworks, our approach succeeded in discovering links with richer representations and extendable properties that can be used for numerous tasks (EAT in our example).
Conclusions
The diversity of transportation systems and services raises the need for a broader integrated view of the transportation network. This in turn can provide multimodality that greatly improves passengers' experience with more optimized and customizable trips.
In this paper, we proposed an approach to automatically detect geospatial data in transportation data sources, in addition to a way to provide rich semantic connections between their entities. This enables a better way for transportation systems to access information about new services and integrate them with their own networks.
We evaluated our approach with a scenario integrating a car sharing service and a railway company in France. The results show that the approach was able to detect the geospatial entities and find relations between the dataset schemas. Moreover, using the rich generated links between the datasets, the integration of the new mode of transportation improved the earliest arrival time at each stop.
In the future, we want to adapt the approach to handle the dynamicity of connections. This will enable us to maintain the status of existing connections and handle new services, such as dynamic ride-sharing, car sharing, etc. The problem here is how to track connections' evolution in real time and how to make use of external events that may affect their use. Furthermore, some speed optimization is to be considered for both the automatic matching and interlinking approaches. We will target data sampling to reduce the number of web service calls and a smarter query formulator to more efficiently get relevant results from the web service. Integrating the geospatial querying solutions shown in [25] may help increase the accuracy of the query formulator. The use of a web service to bridge the gap between different dataset representations could apply to other domains, as long as web services are provided for those datasets.
Figure 3. A system for the automatic detection of geospatial information.
Figure 4. Querying web services to obtain instances with richer data.
Figure 6. Discovering matching rules between datasets using a web service as a mediator. DS, dataset; WS, web service.
Figure 7. Link++: an approach for flexible and customizable connection generation.
Figure 8. The original schemas of the SNCF and Autolib datasets.
Figure 9. An example of a rule definition in XML.
Figure 10. An example of a connection pattern in XML.
Figure 11. The estimated arrival time for each stop with and without our created connections.
Table 4. Evaluation of the matching algorithm. | 2017-01-23T08:43:12.842Z | 2017-01-20T00:00:00.000 | {
"year": 2017,
"sha1": "2e688b553d9e03b1beba0cdc2d1f37b83855906d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2220-9964/6/1/29/pdf?version=1485086094",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2e688b553d9e03b1beba0cdc2d1f37b83855906d",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
37146888 | pes2o/s2orc | v3-fos-license | Helicobacter pylori VacA toxin causes cell death by inducing accumulation of cytoplasmic connexin 43
The principles underlying pathogenicity of microbial toxins with pleiotropic effects have been studied by investigators with different areas of specialization; microbiology, immunology, physiology, pathology, cell biology and proteomics. Their diverse contributions and broad perspectives lead to the development of anti-toxin vaccines, such as those used for the prevention of diphtheria and tetanus and translated knowledge of basic mechanisms into therapeutic advances. Recent genomic, cell biological and molecular advances have enabled the determination of cellular targets and mechanism of action of bacterial toxins, including the elucidation of the molecular pathways by which toxins bind to cellular receptors, translocate and modify the functions of intracellular targets, leading to intoxication of the host cell. Bacterial toxins are classified into several families, for example, some toxins exert their effects at the cell surface by damaging host cell membranes (pore-forming toxins and super antigens). 1 Other bacterial toxins such as diphtheria toxin cause cell death by ADP-ribosylation of a target protein, resulting in inhibition of protein synthesis. 1 Toxins may alter target protein function by specific modifications, having a major impact on cell survival. Thereby, the pathological changes caused by bacterial toxins may be responsible for the disease caused by bacterial infection. A vaccine targeting the toxin may prevent the disease.
Helicobacter pylori (H. pylori) is a helical Gram-negative pathogen, which infects the human stomach in over 50% of the world's population. Persistent infection causes gastric inflammation, ulcers and cancer. 2,3 H. pylori has multiple virulence factors that participate in the pathogenesis of these diseases. H. pylori produces an exotoxin, vacuolating cytotoxin (VacA), which is an important virulence factor associated with gastritis and ulceration. Indeed, oral administration of VacA to mice caused severe gastric damage. 3 VacA consists of a 33-kDa N-terminal domain involved in cytotoxicity and a 55-kDa C-terminal domain that binds to cell surface receptors. The primary sequence of VacA has no homology with any known protein. 4 The secreted VacA assembles into a large flower-like hexameric or heptameric complex. The anion-channel activity of VacA is involved in multiple biological processes, resulting in vacuole formation, autophagy and mitochondrial damage, leading to apoptosis. 2,3 The detailed mechanisms by which VacA induces apoptosis and autophagy remain unknown.
A recent study showed that the expression level of connexin 43 (Cx43) in cells has an important role in VacA-induced cell death. 5 Cx43, a member of the large human connexin (Cx) family, is ubiquitously expressed and is a major component of gap junctions. It has a crucial role in intercellular communication, cell-cell channel formation and the exchange of signaling molecules during development and in cell homeostasis. 6 Our recent study explored the role of Cx43 in VacA-induced cell death and its presence in H. pylori-infected human gastric mucosa. 7 It is known that Cxs in cultured cells undergo rapid turnover and have a short lifetime of about 1-5 h relative to other membrane proteins. 8 Interestingly, human duodenum carcinoma cells incubated with VacA accumulated cytoplasmic Cx43, accompanied by LC3-II generation, caspase activation and poly(ADP-ribose)polymerase (PARP) cleavage, in a time- and dose-dependent manner. The levels of Cx43 mRNA were not altered by VacA, indicating that VacA disrupted Cx43 turnover without altering its synthesis. Consistent with a previous study, 5 VacA-induced apoptotic signals (e.g., caspase activation and PARP cleavage) were inhibited in Cx43-knockdown cells, which showed increased basal expression levels of the apoptosis inhibitors Bcl-2 and Bcl-xL. VacA-induced PARP cleavage was suppressed in FLAG-tagged Bcl-xL-overexpressing cells. Our findings suggest that VacA-induced cell death involves a unique pathway with increased cytoplasmic Cx43 accumulation.
Under normal conditions, Cx43 is localized at gap junctions in plasma membranes, whereas the increased Cx43 seen with VacA accumulated in cytoplasmic compartments and colocalized with several vesicle markers, for example, LC3, LAMP1, Atg16L1 and LysoTracker. These results indicate that Cx43 is associated with cellular trafficking pathways involving endosomes and autophagy. In Cx43-knockdown cells, VacA-induced LC3-II generation and formation of LysoTracker-positive vesicles were not inhibited, indicating that Cx43 was not involved in the pathway leading to VacA-induced autophagic vesicle formation. In contrast, knockdown of Atg16L1, which plays an essential role in autophagy, 9 inhibited both Cx43 increase and LC3-II generation in VacA-treated cells as compared with control cells, suggesting that Atg16L1 is not only involved in LC3-II generation but also in Cx43 accumulation by VacA. Thus, VacA-increased Cx43 accumulated in a cytoplasmic fraction via effects on an autophagy signaling pathway. We further found that VacA-increased cytoplasmic Cx43 was colocalized with VacA in vesicles characterized by cholesterol-rich, detergent-resistant membranes. By localization of Cx43 with VacA in detergent-resistant membranes, degradation of Cx43 through an endosome/autophagy pathway might be suppressed, followed by an increase in cytoplasmic Cx43, leading to apoptotic cell death.
We explored whether the reactive oxygen species/Rac1/ERK signaling pathway regulates both the VacA-induced Cx43 increase and LC3-II generation. A prior study showed that VacA suppressed the turnover rate of intracellular GSH by impairing GSH metabolism. 10 N-acetyl-cysteine, an antioxidant and free radical scavenger, significantly suppressed the VacA-induced Cx43 increase and LC3-II generation. VacA-induced ERK phosphorylation and Rac1 activation were suppressed in N-acetyl-cysteine-treated cells. Inhibition of ERK and Rac1 activities suppressed VacA-induced Cx43 accumulation and LC3-II generation. In agreement, knockdown of ERK and Rac1 by siRNAs reduced the Cx43 increase and LC3-II generation by VacA. VacA-induced ERK phosphorylation was suppressed by inhibition or knockdown of Rac1. Interestingly, ERK knockdown significantly suppressed VacA-induced PARP cleavage. These data indicated that the GSH level controls Rac1/ERK activation, which in turn regulates the VacA-induced Cx43 increase and LC3-II generation.
As described above, channel activity of VacA is critical for its biological activity. The chloride channel inhibitor, DIDS, significantly suppressed VacA-induced ERK phosphorylation, Cx43 increase and LC3-II generation. These results indicate that VacA-mediated channel activity is a key trigger to initiate these events.
Finally, we investigated Cx43 content in human gastric biopsies. Our data showed that Cx43 expression was barely detectable in gastric mucosa of H. pylori-negative patients.
In contrast, Cx43 was elevated in the gastric epithelium of H. pylori-positive biopsy specimens. Our study provides new insights into the role of Cx43 in H. pylori infection, as summarized in Figure 1. Cx43 is a potential clinically relevant target in gastric inflammation and ulceration.

Figure 1. VacA binds to and is internalized by cells. Toxin channel activity impairs GSH metabolism (reactive oxygen species), activates Rac1 and induces ERK phosphorylation. These signal transduction events lead to enhanced Cx43 endocytosis. Cx43 accumulates in cytoplasmic compartments through the effects of the toxin on pre-autophagy pathways and colocalizes with the autophagosomal marker LC3. Cells were incubated with 120 nM heat-inactivated (left panel) or wild-type VacA (right panel) for 10 h and then reacted with anti-Cx43 (green) and anti-LC3 antibodies (red) and stained with DAPI (cyan). Bars represent 20 μm. Yellow areas indicate the colocalization of Cx43 and LC3 | 2017-11-08T18:18:58.629Z | 2015-11-01T00:00:00.000 | {
"year": 2015,
"sha1": "ac4250a9d0838162f522fb65f726b1539d6de321",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/cddis2015329.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ac4250a9d0838162f522fb65f726b1539d6de321",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
118691331 | pes2o/s2orc | v3-fos-license | The connection between radio halos and cluster mergers and the statistical properties of the radio halo population
We discuss the statistical properties of the radio halo population in galaxy clusters. A radio bi-modality is observed in galaxy clusters: a fraction of clusters host giant radio halos, while the majority of clusters do not show evidence of diffuse cluster-scale radio emission. The radio bi-modality has a correspondence in the dynamical state of the hosting clusters: merging clusters host radio halos and follow the well-known radio-X-ray correlation, while more relaxed clusters do not host radio halos and populate a region well separated from that correlation. This evidence can be understood in the framework of a scenario where merger-driven turbulence re-accelerates the radio-emitting electrons. We discuss the main statistical expectations of this scenario, underlining the important role of upcoming LOFAR surveys in testing present models.
Introduction
Radio and X-ray observations of galaxy clusters prove that thermal and non-thermal components coexist in the intra-cluster medium (ICM). While X-ray observations reveal thermal emission from diffuse hot gas, radio observations of an increasing number of galaxy clusters unveil the presence of ultrarelativistic particles and magnetic fields through the detection of diffuse, giant Mpc-scale synchrotron radio halos (RH) and radio relics (e.g., Ferrari et al. 2008; Cassano 2009). RH are the most spectacular evidence of non-thermal components in the ICM. They are giant radio sources located in the cluster central regions, with spatial extent similar to that of the hot ICM and steep radio spectra, α > 1.1. There are well-known correlations between the synchrotron monochromatic radio luminosity of RH (P_1.4) and the host cluster X-ray luminosity (L_X), mass and temperature (e.g., Liang 2000; Feretti 2003; Cassano et al. 2006; Brunetti et al. 2009). The most powerful RH are found in the most X-ray luminous, massive and hot clusters. These correlations suggest a close link between the non-thermal and the thermal/gravitational cluster physics.
Most important, RH are presently found only in clusters that show recent or ongoing merging activity. In this regard, Buote (2001) provided the first quantitative comparison of the dynamical states of clusters hosting RH with the properties of the RH. He discovered a correlation between P_1.4 and the magnitude of the dipole power ratio P_1/P_0: the more powerful RH are hosted in clusters that experience the largest departures from virialization.
The RH-merger connection and the thermal-non-thermal correlations suggest that the gravitational process of cluster formation may provide the energy to generate the non-thermal components in clusters through the acceleration of high-energy particles via shocks and turbulence (e.g., Sarazin 2004; Brunetti 2011). The origin of RH is still debated. One possibility is that RH are due to synchrotron emission from secondary electrons generated by p-p collisions (e.g., Dennison 1980), in which case clusters must be gamma-ray emitters due to the decay of the π^0 produced by the same collisions. However, the non-detection of nearby galaxy clusters at GeV energies by FERMI puts strong constraints on the contribution of secondary electrons to the non-thermal emission (Ackermann et al. 2010). Most important, the spectral and morphological properties of a number of well-studied RH appear inconsistent with a pure hadronic origin of the emitting particles (e.g., Brunetti et al. 2008, 2009; Donnert et al. 2010; Macario et al. 2010; Brown & Rudnick 2011).
A second hypothesis is based on turbulent re-acceleration of relativistic particles in connection with cluster-merger events (e.g., Brunetti et al. 2001; Petrosian 2001). This model has recently received support from the discovery of RH with very steep spectra (e.g., Brunetti et al. 2008; Macario et al. 2010). Future low-frequency radio telescopes (such as LOFAR and LWA) have the potential to test this scenario and to further explore the connection between RH and the process of cluster formation.
Here we focus on the most recent advances in the study of the statistical properties of RH and their connection with cluster mergers, and discuss the importance of future surveys at low radio frequencies to test present models.
The radio bi-modality of clusters
An important step forward in our understanding of the statistical properties of RH and of their connection with the process of cluster formation has recently been made thanks to the Giant Metrewave Radio Telescope (GMRT) RH Survey (Venturi et al. 2007, 2008), a deep observational campaign of a complete sample of X-ray selected galaxy clusters (with X-ray luminosity ≥ 5 × 10^44 erg/s in the redshift range 0.2-0.4) performed at 610 MHz with the GMRT.
These observations allowed for the first time to prove statistically that diffuse cluster-scale radio emission is not ubiquitous in clusters: only 30% of the X-ray luminous (L_X[0.1-2.4 keV] ≥ 5 × 10^44 erg/s) clusters host a RH. Most important, it was possible to separate RH clusters from clusters without RH, showing a bimodal distribution of clusters in the P_1.4-L_X diagram (Brunetti et al. 2007): RH trace the well-known correlation between P_1.4 and L_X, while the upper limits to the radio luminosity of clusters without RH lie about one order of magnitude below that correlation (Fig. 1, left panel). Why do clusters with the same thermal X-ray luminosity (and at the same cosmological epoch) have different non-thermal properties? Based on information from the literature available for a fraction of the clusters of the GMRT RH Survey, Venturi et al. (2008) suggested that the behavior of clusters in the P_1.4-L_X diagram is connected with their dynamical state.
The dynamical state of GMRT clusters
To test the connection between RH and cluster mergers, Cassano et al. (2010a), using Chandra archival X-ray data for a sub-sample of GMRT clusters, provided a quantitative measure of the degree of cluster disturbance, adopting three different methods: the power ratios (e.g., Buote et al. 1995; Jeltema et al. 2005), the emission centroid shift (e.g., Mohr et al. 1993; Poole et al. 2006), and the surface brightness concentration parameter (e.g., Santos et al. 2008). A detailed description of these measurements is given in Cassano et al. (2010a, and ref. therein).
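For reference, commonly adopted definitions of two of these estimators (following, e.g., Santos et al. 2008 and Poole et al. 2006; the exact normalizations used in Cassano et al. 2010a may differ slightly) can be written as:

c = \frac{S(r < 100\,\mathrm{kpc})}{S(r < 500\,\mathrm{kpc})}\, , \qquad
w = \frac{1}{R_{\rm ap}} \left[ \frac{\sum_i \left( \Delta_i - \langle \Delta \rangle \right)^2}{N-1} \right]^{1/2} ,

where S is the X-ray surface brightness, Δ_i is the offset between the X-ray peak and the centroid computed within the i-th aperture of the series, and R_ap is the aperture radius; P_3/P_0 is the ratio of the third to the zeroth multipole moments of the surface brightness distribution within a given aperture.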
They found a clear segregation between clusters with and without RH in terms of their dynamical state: RH are only found in dynamically disturbed clusters (those with high values of P_3/P_0, P_3/P_0 ≳ 1.2 × 10^-7, and w, w ≳ 0.012, and low values of c, c ≲ 0.2), while clusters with no evidence of Mpc-scale synchrotron emission are more relaxed systems. As an example, in Fig. 1 (right panel) we report the distribution of the clusters in the (w, P_3/P_0) plane. This result was also tested quantitatively by running Monte Carlo simulations (see Cassano et al. 2010a for details), which proved that the observed distribution differs from a random one (i.e., one independent of cluster dynamics) at more than 4σ.
We also note that not all disturbed systems host RH: specifically, we found 4 "radio anomalies" in Fig. 1 (right panel): Abell 781, MACS 2228, Abell 141 and Abell 2631, i.e., clusters that have the same morphological parameters (P_3/P_0, w and c) as clusters with RH but that do not host a RH.
The evolution of RH in the P_1.4-L_X diagram

The radio bi-modality of galaxy clusters and the connection with their dynamical state suggest the following coupled evolution between RH and clusters: (a) clusters host RH for a period of time, in connection with cluster mergers, and populate the P_1.4-L_X correlation (Fig. 1, left panel); (b) at later times, when clusters become dynamically relaxed, the Mpc-scale synchrotron emission is gradually suppressed and clusters populate the region of the upper limits.
19 clusters of the GMRT sample have L_X ≥ 8.5 × 10^44 erg/s, in which case the radio power of halos is ~1 order of magnitude larger than the level of the radio upper limits. Among these 19 clusters, 5 host giant RH, 11 are "radio-quiet" and only one is in the transition region (not reported in Fig. 1, left panel). This allows us to estimate the lifetime of RH, τ_RH ≈ 1 Gyr, and the time clusters spend in the "radio-quiet" phase, τ_rq ≈ 2-2.5 Gyr (Brunetti et al. 2009). Most important, at these luminosities, the "empty" region between RH and "radio-quiet" clusters in the P_1.4-L_X diagram constrains the time-scale of the evolution (suppression and amplification) of the synchrotron emission (Brunetti et al. 2007, 2009) to be much shorter than both the lifetime of clusters in the sample and the period of time clusters spend in the RH stage. Monte Carlo analysis of the distribution of clusters in Fig. 1 (left panel) shows that the time interval clusters spend in the "empty" region (and thus the corresponding time-scale for the amplification and suppression of RH) is τ_evol ≈ 200 Myr, with the probability that τ_evol is as large as 1 Gyr being ≤ 1% (Brunetti et al. 2009).
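The order of magnitude of these time-scales follows from the observed counts if one assumes, as an illustrative sketch, that the sampled clusters are caught at random phases of a common RH/radio-quiet duty cycle of total length ≈ 3-3.5 Gyr:

\frac{\tau_{\rm RH}}{\tau_{\rm rq}} \approx \frac{N_{\rm RH}}{N_{\rm rq}} = \frac{5}{11} \approx 0.45 \, ,

so that τ_RH ≈ 1 Gyr implies τ_rq ≈ 2-2.5 Gyr, as quoted above.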
The evolution of the radio properties of galaxy clusters in the P_1.4-L_X plane is driven by the evolution of the relativistic components (B and particles) in the ICM. The tight constraint on the timescale of this evolution, τ_evol ≈ 200 Myr, provides crucial information on the physics of particle acceleration and magnetic field amplification.
The role of cluster magnetic field
A possible explanation of the bi-modality is that cluster mergers amplify the magnetic field in the ICM, leading to the amplification of the synchrotron emission on Mpc scales. In this case, merging clusters hosting RH should have larger magnetic fields, δB + B, with the excess δB being generated during mergers and then dissipated when clusters become "radio-quiet" and dynamically more relaxed (Brunetti et al. 2007, 2009; Kushnir et al. 2009; Keshet & Loeb 2010).
A magnetic field evolution was postulated to reconcile a secondary origin of RH with the observed bi-modality; these models would indeed predict RH in all clusters, provided the ICM is magnetized at a similar (few µG) level.
A suppression by a factor ≥ 10 in terms of synchrotron emission constrains the ratio δB/B. At z ≈ 0.25 (typical of GMRT clusters), in the case δB + B ≪ B_cmb (where B_cmb = 3.2(1+z)^2 µG is the equivalent magnetic field of the CMB), the energy density of the magnetic field in RH clusters should be ≥ 10 times larger than that in "radio-quiet" clusters, and even larger ratios must be assumed if δB + B ≫ B_cmb (Brunetti et al. 2009). This significant difference between the magnetic field strengths in RH and "radio-quiet" clusters is a prediction of this scenario that, however, is not supported by present observations. Faraday rotation measurements (RM) in galaxy clusters do not show any statistical difference between the energy density of the large-scale (10-100 kpc coherent scales) magnetic field in RH clusters and that in "radio-quiet" clusters (e.g., Carilli & Taylor 2002). In a recent paper, Govoni et al. (2010) studied the σ_RM-S_X distribution (σ_RM being the dispersion of the RM and S_X the thermal X-ray cluster brightness; see Govoni et al. 2010 for details) of radio sources in a sample of hot galaxy clusters, including both "radio-quiet" and RH clusters. They showed that all clusters follow the same σ_RM-S_X trend, and since σ_RM ∝ [Λ_c ∫ (n_th B_∥)^2 dl]^{1/2} (Λ_c being the field coherence scale), this allows one to conclude that the magnetic field strength in "radio-quiet" and RH clusters is similar (see also Brunetti & Cassano 2010).
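The quoted factor can be sketched with the standard synchrotron formula for a stationary population of (secondary) electrons, whose bolometric synchrotron output scales with the ratio of the magnetic to the total (magnetic plus CMB) energy density:

P_{\rm syn} \propto \frac{B^{2}}{B^{2} + B_{\rm cmb}^{2}}
\;\longrightarrow\; \left( \frac{B}{B_{\rm cmb}} \right)^{2}
\quad (B \ll B_{\rm cmb}) \, ,

so that a suppression of the radio luminosity by a factor ≥ 10 requires (B_RH/B_rq)^2 ≥ 10, i.e., a magnetic energy density at least ten times larger in RH clusters.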
More recently, Bonafede et al. (2011) investigated the fractional polarization trends in 39 massive clusters with different non-thermal properties. They found no statistical evidence for a difference in the depolarization trends, concluding that there is no evidence for different magnetic fields in these clusters.
All these results suggest that the bi-modality in the P 1.4 − L X plane cannot be attributed to a bi-modality in magnetic field properties.
The role of relativistic particles
Since present data suggest that the magnetic field is not mainly responsible for the evolution of the radio properties of galaxy clusters, relativistic electrons must drive the generation and fading away of RH.
Turbulent acceleration models provide a natural way to explain the radio bi-modality of galaxy clusters and the fast evolution of RH. In these models, relativistic electrons are re-accelerated in situ by turbulence on Mpc scales during cluster mergers and cool as soon as the clusters become more relaxed, due to the dissipation of turbulence. The cooling time of relativistic electrons emitting in the radio band is ~10^8 yr (e.g., Sarazin 1999), which is very short and consistent with (smaller than) the evolution timescale of RH constrained by the distribution of clusters in Fig. 1 (left panel).
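The quoted cooling time follows from the standard synchrotron plus inverse Compton loss formula, which can be written approximately as:

\tau_{\rm cool} \approx 2.4 \times 10^{12} \, \gamma^{-1}
\left[ \left( \frac{B}{3.2\,\mu{\rm G}} \right)^{2} + (1+z)^{4} \right]^{-1} \, {\rm yr} \, ,

which, for the Lorentz factors γ ~ 10^4 of electrons emitting at GHz frequencies in µG fields and z ≈ 0.2-0.3, gives τ_cool ~ 10^8 yr.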
This scenario predicts that during a merger, as soon as the turbulence reaches small (resonant) scales, particles are accelerated and generate synchrotron emission at GHz frequencies (the clusters move from the region of the upper limits to the P_1.4-L_X correlation and should appear dynamically disturbed) within a timescale of a few 100 Myr. The process should persist for a few crossing times of the central cluster Mpc region, which is fairly consistent with the RH lifetime τ_RH ~ 1 Gyr; this is the period during which the clusters appear on the P_1.4-L_X correlation. As soon as the turbulence starts to dissipate (at the end of the merging phase), the synchrotron power is suppressed and the synchrotron emission at higher frequencies falls below the detection limit of radio observations (the clusters move back into the region of the upper limits and appear more relaxed at X-ray wavelengths). A significant suppression of the synchrotron luminosity occurs on a time-scale of the order of a turbulent-eddy turnover time, ≈ 10^8 years, consistent with the fact that RH do not populate the region between the correlation and the upper limits in Fig. 1 (left panel). The situation may be even more complex when considering the process of cluster formation and the ensuing generation of cluster turbulence; cosmological simulations including a proper treatment of cosmic-ray acceleration/cooling are necessary to shed light on these processes.
Testing the re-acceleration scenario with low frequency observations
The turbulent re-acceleration scenario explains the connection between RH and cluster mergers, the radio bi-modality of clusters and the fast evolution of clusters in the P_1.4-L_X diagram. It is important to point out that the peculiarity of this scenario is the fact that turbulent acceleration is a poorly efficient process; this has consequences for the model expectations and allows for a prompt test of this scenario with future observations. Electrons can be accelerated only up to energies of m_e c^2 γ_max ≤ several GeV, entailing a high-frequency cut-off in the synchrotron spectra of RH, which marks the most important and unique expectation of this scenario (see Fig. 2). The presence of this cut-off implies that the observed fraction of clusters with RH depends on the observing frequency. The steepening of the spectrum makes it difficult to detect RH at frequencies larger than the frequency ν_s where the steepening becomes severe. The frequency ν_s depends on the acceleration efficiency in the ICM, which in turn depends on the flux of MHD turbulence dissipated into relativistic electrons (e.g., Cassano et al. 2006; Cassano et al. 2010b). Larger values of ν_s are expected in more massive clusters and in connection with major merger events. As a consequence, according to this model, present radio surveys at ~GHz frequencies can reveal only those RH generated during the most energetic merger events and characterized by relatively flat spectra (α ~ 1.1-1.5) (see Fig. 2). These sources should represent the tip of the iceberg of the whole population of RH, since the bulk of cluster formation in the Universe occurs through less energetic mergers. Low-frequency observations with the new generation of radio telescopes (LOFAR, LWA) are thus expected to unveil the bulk of RH, including a population of RH that will be observable preferentially at low radio frequencies (ν ≤ 200-300 MHz). These RH, generated during less energetic but more common merger events, should have extremely steep radio spectra (α ≳ 1.5-1.9) when observed at higher frequencies; we call these sources Ultra Steep Spectrum RH (USSRH). Possible prototypes of such RH are those found in Abell 521 (α ~ 2, Brunetti et al. 2008) and in Abell 697 (α ~ 1.7, Macario et al. 2010).
In the framework of the turbulent re-acceleration scenario, the existence of merging clusters without Mpc-scale radio emission (Fig. 1, right panel; see also the case of Abell 2146 by Russell et al. 2011) is not surprising, for two main reasons. First, the expected lifetime of RH (∼ Gyr) can be smaller than the typical time-scale of a merger, during which the cluster appears disturbed, implying that not all disturbed systems should host RH (e.g., Brunetti et al. 2009). Second, and most important, a fraction of disturbed systems may host RH with very steep radio spectra (USSRH), which are difficult to detect even at low frequencies if the observations are not sensitive enough. USSRH are mainly expected in disturbed clusters with masses M v ≲ 10^15 M ⊙ in the local Universe, or in merging and massive clusters at higher redshift (z ≳ 0.4 − 0.5; Cassano et al. 2010b). In line with this scenario, 3 out of the 4 outliers in Fig. 1 have X-ray luminosity close to the lower boundary used to select the GMRT sample (L X = 5 × 10^44 erg/s), and the other is the cluster with the highest redshift in the GMRT sample (z ≃ 0.42). Interestingly, a deep GMRT follow-up at 325 MHz of one of the outliers in Fig. 1, Abell 781, has revealed the presence of a possible USSRH, which needs to be confirmed by future, deeper low-frequency observations. The recent case of Abell 2146 may also be in line with this hypothesis: it has a moderate X-ray luminosity (L [0.1−2.4]keV ∼ 6 × 10^44 erg/s) and it is at z ∼ 0.23. Only 5 RH are presently known in clusters with the same (or lower) X-ray luminosity, and they are all at smaller redshifts, which could suggest that the increase of the inverse Compton losses of relativistic electrons with redshift (∝ (1 + z)^4) contributes to disfavouring the formation of RH emitting at GHz frequencies in these less massive clusters.
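The (1+z)^4 scaling of inverse Compton losses invoked above follows from the CMB energy density, often expressed through the equivalent field B cmb ≈ 3.25 (1+z)^2 μG; the ratio of IC to synchrotron losses is then (B cmb/B)^2. A short numerical illustration (the cluster field value is our assumption):

```python
# Inverse Compton vs. synchrotron losses: ratio = (B_cmb / B)**2,
# with the CMB-equivalent field B_cmb = 3.25 * (1 + z)**2 microgauss.
b_mug = 2.0                     # assumed cluster magnetic field (uG)
for z in (0.0, 0.23, 0.42):     # redshifts of the clusters discussed above
    b_cmb = 3.25 * (1.0 + z) ** 2
    print(f"z={z:4.2f}: B_cmb={b_cmb:5.2f} uG, "
          f"IC/syn loss ratio={(b_cmb / b_mug) ** 2:5.1f}")
```

Between z = 0 and z ≃ 0.4 the IC losses grow by a factor of a few, which is the sense in which higher-redshift, less massive clusters are disfavoured as hosts of GHz-emitting RH.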
USSRH are expected to be less powerful than RH emitting at GHz frequencies (see Cassano 2010), and thus very sensitive low-frequency observations are necessary to catch them. The ideal instrument to search for USSRH is LOFAR (LOw Frequency ARray), which is already operating in its commissioning phase (e.g., Röttgering et al. 2010). Monte Carlo procedures that follow the process of cluster formation, the injection and dissipation of turbulence during cluster-cluster mergers, and the ensuing acceleration of relativistic particles in the ICM have made it possible to derive quantitatively the statistical properties of RH (e.g., Cassano & Brunetti 2005). The expectations based on these procedures were found to be consistent with present observational constraints (e.g., Cassano et al. 2008) and were used to derive predictions for the planned LOFAR surveys. According to these predictions, the Tier 1 "Large Area Survey" at 120 MHz (see Röttgering et al. 2010), with an expected rms sensitivity of 0.1 mJy/beam, should greatly increase the number of known giant RH, with the possibility of detecting about 350 RH up to redshift z ≈ 0.6 in the northern hemisphere, about half of them having very steep radio spectra (α ≳ 1.9, Cassano et al. 2010b).
This implies that future LOFAR surveys will allow a powerful test of the merger-driven turbulence re-acceleration scenario for the origin of RH.
Conclusions
We discussed the most recent statistical evidence demonstrating the connection between giant RH and cluster mergers, and the "transient" nature of the RH phenomenon. A step forward in this direction comes from the discovery that the radio bi-modality of clusters has a counterpart in the dynamical state of the clusters: clusters with RH are found to be dynamically disturbed, while clusters without RH are more dynamically relaxed. This observational evidence suggests that RH form in galaxy clusters and live for a period of time during mergers, when the clusters appear dynamically disturbed, while at later times, when the clusters become dynamically relaxed, the synchrotron emission fades away. Two main ingredients may drive the evolution of the cluster synchrotron emission: the magnetic field and the relativistic electrons. Faraday rotation measurements and observations of the depolarization of cluster galaxies suggest that the magnetic field plays a marginal role, and favour a scenario in which the generation of RH is connected with the acceleration of relativistic particles.
These facts are naturally understood in the framework of one of the pictures put forward to explain the origin of giant RH, the merger-induced turbulence re-acceleration scenario (Brunetti et al. 2001, Petrosian 2001). The main expectation of this scenario, which is related to the poorly efficient nature of the turbulent acceleration mechanism in the ICM, is the existence of a population of clusters hosting RH with very steep radio spectra, the USSRH. These RH, which should show up preferentially at low radio frequencies, are hidden in massive clusters undergoing minor mergers and/or in less massive clusters experiencing major merger events.
LOFAR is an ideal instrument to test these expectations, and it is expected to discover ∼ 350 RH in the Tier 1 "Large Area Survey" at 120 MHz, half of which should be USSRH. | 2011-08-01T12:00:47.000Z | 2011-08-01T00:00:00.000 | {
"year": 2011,
"sha1": "34547bfa8dbfda7d0213512d68002f9ea71713bf",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1108.0291",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "34547bfa8dbfda7d0213512d68002f9ea71713bf",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
269927280 | pes2o/s2orc | v3-fos-license | Insertion torque, flexural strength and surface alterations of stainless steel and titanium alloy orthodontic mini-implants: an in vitro study
ABSTRACT Objective: This study aimed to compare the insertion torque (IT), flexural strength (FS) and surface alterations between stainless steel (SS-MIs) and titanium alloy (Ti-MIs) orthodontic mini-implants. Methods: Twenty-four MIs (2 x 10 mm; SS-MIs, n = 12; Ti-MIs, n = 12) were inserted on artificial bone blocks of 20 lb/ft3 (20 PCF) and 40 lb/ft3 (40 PCF) density. The maximum IT was recorded using a digital torque meter. FS was evaluated at 2, 3 and 4 mm-deflection. Surface topography and chemical composition of MIs were assessed by scanning electron microscopy (SEM) and energy dispersive X-ray spectroscopy (EDS). General linear and mixed models were used to assess the effect of the MI type, bone density and deflection on the evaluated outcomes. Results: The IT of Ti-MIs was 1.1 Ncm greater than that obtained for the SS-MIs (p= 0.018). The IT for MIs inserted in 40 PCF test blocks was 5.4 Ncm greater than that for those inserted in 20 PCF test blocks (p < 0.001). SS-MIs inserted in higher density bone (40 PCF) had significantly higher flexural strength than the other groups, at 2 mm (98.7 ± 5.1 Ncm), 3 mm (112.0 ± 3.9 Ncm) and 4 mm (120.0 ± 3.4 Ncm) of deflection (p< 0.001). SEM evidenced fractures in the Ti-MIs. EDS revealed incorporation of 18% of C and 2.06% of O in the loaded SS-MIs, and 3.91% of C in the loaded Ti-MIs. Conclusions: Based on the findings of this in vitro study, it seems that SS-MIs offer sufficient stability and exhibit greater mechanical strength, compared to Ti-MIs when inserted into higher density bone.
INTRODUCTION
To achieve optimal clinical performance when using orthodontic mini-implants (MIs), these devices should be made of a material whose mechanical properties allow them to provide adequate stability to support immediate loads without suffering long-term alterations.
MIs are commonly made of titanium alloy (Ti-MIs; Ti-6Al-4V) or austenitic stainless steel (SS-MIs; AISI 316L).4 Current evidence seems to show that the MI material is not a determining factor in achieving clinical success with these devices;5-8 therefore, both are suitable for orthodontic use. However, in certain clinical contexts where there is greater bone density and thickness at the insertion site (i.e., extra-alveolar regions), it would be interesting to choose MIs that provide greater mechanical resistance and, consequently, a lower risk of fracture. Thus, SS-MIs are usually recommended for extra-alveolar use instead of Ti-MIs, due to their greater toughness. Unfortunately, the literature on the differences in mechanical properties between Ti-MIs and SS-MIs is controversial. A previous study showed a higher insertion torque for SS-MIs,6 while others demonstrated, through torque analyses and/or resonance frequency analysis, similar stability values for both types of MIs.11,12 Although greater flexural and torsional strength has been reported for SS-MIs,13 there is also research demonstrating equal mechanical resistance between Ti-MIs and SS-MIs.14 Regarding surface deformation, inconsistent results were also observed: while one study reported a higher frequency of deformation in Ti-MIs,15 another investigation did not show important morphological damage in the threads of either type of MI.13 Since the evidence on the matter is still limited and inconclusive, new research is necessary to confirm or reject previous findings. Therefore, the present study aimed to provide further information on the topic, comparing the insertion torque, flexural strength and surface alterations between SS-MIs and Ti-MIs inserted in artificial bone of different densities.
MATERIAL AND METHODS
This in vitro study was conducted and reported following the Checklist for Reporting in vitro Studies (CRIS) guidelines.
INDEPENDENT VARIABLES ASSESSED
The independent variables evaluated in the present study were the type of MI (SS-MIs and Ti-MIs) and the density of the artificial bone.For the flexural strength evaluations, the variable degree of deflection was also evaluated.
A total of 24 MIs of 2 x 10 x 4 mm (diameter x length x transmucosal profile), made of stainless steel (n = 12; Morelli, Sorocaba/SP, Brazil) or titanium alloy (n = 12; Peclab, Belo Horizonte/MG, Brazil), were inserted in mechanical test blocks of artificial bone measuring 2 x 2 x 3 cm (length x width x height). Synthetic bone models constructed from solid rigid polyurethane foam (Nacional Ossos, Jaú, SP, Brazil) of 20 lb/ft3 (20 PCF; 0.32 g/cm3) and 40 lb/ft3 (40 PCF; 0.64 g/cm3) density were chosen as the bone tissue equivalent for the present study. Solid rigid polyurethane foam has been recognized as a standard for testing orthopedic devices and instruments, including bone screws, due to its adequate representation of adult human bone (ASTM F-1839-08).
The selection of the densities used, as bone equivalents of lower (0.32 g/cm3) and higher (0.64 g/cm3) quality, was based on a previous study.17 According to the type of MI and the density of the artificial bone, four study groups (n = 6 each) were defined, as shown in Figure 1.
OUTCOMES (DEPENDENT VARIABLES) ASSESSED
The dependent variables evaluated in the present study were insertion torque, flexural strength, surface topography and chemical composition.
Pilot holes of 1-mm depth were performed in the center of the test blocks, prior to the insertion of MIs, using a 1.0-mm diameter twist drill (Conexão Sistemas de Prótese, Arujá/SP, Brazil).
As previously described,18 the MIs were inserted with a specific manual key for each type of MI, connected to a digital torque meter (model TQ-8800; Lutron, Taipei, Taiwan). Using a mechanical support, the insertion was carried out perpendicularly until all the threads of the MIs were completely inside the artificial bone (Fig. 2A and 2B). The maximum insertion torque was recorded with a precision of 0.1 Newton-centimeter (Ncm). After installation, the MIs received a load on their head, perpendicular to their longitudinal axis, at a speed of 0.5 mm/min and a load of 50 kgf, using a Universal Testing Machine mBio (Biopdi, São Carlos/SP, Brazil) (Fig. 2C and 2D). Flexural strength at 2, 3 and 4-mm deflection was recorded (Ncm).
SAMPLE SIZE
To estimate the sample size, an a priori calculation was performed for pairwise comparisons of independent samples (two-tailed t-test), based on previously reported results on the insertion torque (Ncm) of SS-MIs (4.4 ± 0.56) and Ti-MIs (7.5 ± 0.79).12
RANDOMIZATION AND BLINDING
The MIs were coded and randomized for each study group using a random sequence generator (https://www.random.org).
This procedure was carried out by a researcher who did not participate in the MIs insertion procedures or in the measurements of the outcomes.
Blinding was not possible for any of the phases of the research.
NUMBERS ANALYZED
No losses were reported during the evaluations; therefore, all 24 MIs were part of the analyses.
OUTCOMES AND ESTIMATIONS
The EDS analysis confirmed the chemical composition of the MIs used in the present study. The new SS-MIs were made of Fe, Cr, Ni, Mo and Mn, while the new Ti-MIs contained Ti, Al and V (Table 1).
No significant effect of the interaction MI type*Bone Density on the insertion torque was detected (p = 0.565; Table 2). The MI type (p = 0.018) and bone density (p < 0.001) had significant independent effects on the insertion torque values (Table 2).
The insertion torque of the Ti-MIs was 1.1 Ncm greater than that obtained for the SS-MIs, and the insertion torque for MIs inserted in 40 PCF test blocks was 5.4 Ncm greater than that for those inserted in 20 PCF test blocks. Detailed insertion torque values are reported in Table 3; the power of the model was greater than 90% for the sample size used. The interaction MI type*Bone Density*Deflection showed a significant effect on the flexural strength values (p = 0.021, Table 4). As expected, in general, the greater the degree of deflection, the greater the flexural strength. A significant effect of the interaction Bone density*Deflection was demonstrated (p < 0.001, Table 4): the increase in flexural strength with greater deflection was only evident for MIs inserted in the 40 PCF test blocks, and not for those inserted in the 20 PCF test blocks (Fig 3). Post-hoc comparisons showed that SS-MIs inserted in higher density bone (40 PCF) had significantly higher flexural strength than the other groups, at 2 mm (98.7 ± 5.1 Ncm), 3 mm (112.0 ± 3.9 Ncm) and 4 mm (120.0 ± 3.4 Ncm) of deflection.
The flexural strength values of all the groups evaluated are reported in Table 5. The power of the model was greater than 90% for the sample size used.

DISCUSSION

Insertion torque is a parameter that reflects the frictional resistance between the screw and the surrounding bone, and is a widely used measure to evaluate mechanical stability.19,20 Although with low certainty of evidence, the orthodontic literature is somewhat consistent about the existence of a positive correlation between the primary stability of MIs and the quality of the receptor bone site (i.e., cortical/compact bone thickness).2,21 The findings of the present study confirmed this information. Regardless of the MI material, the insertion torque values were significantly higher when the MIs were inserted into more compact bone blocks (i.e., 40 PCF).
Regarding the influence of the type of material, previous evidence comparing SS-MIs and Ti-MIs showed no or very little difference in stability values.6,11 An animal study assessing MIs of 6 x 1.6 mm reported that SS-MIs required significantly greater insertion torque than Ti-MIs; however, the values for each type of MI were not very distant from each other (SS-MIs: 12.00 ± 0.25 Ncm; Ti-MIs: 11.01 ± 0.24 Ncm).6 An in vitro study evaluating MIs of 10/12 x 2 mm inserted in artificial bone, but assessing primary stability by means of resonance frequency analysis, reported similar stability values for both MI types.11 The findings of the present study also showed only a small difference in insertion torque values between the two types of material. There was even a trend for SS-MIs to show slightly lower values than Ti-MIs (SS-MIs: 13.40 ± 3.00 Ncm; Ti-MIs: 14.50 ± 3.00 Ncm). This would be in favor of SS-MIs, since it has been suggested that excessive stress could cause necrosis and local ischemia, which could prevent adequate secondary stability.22,23 Despite these findings, and evaluating the available evidence as a whole, it could be assumed that this difference is not clinically relevant: both types of MI would show adequate primary stability when inserted in bone of greater or lesser density.
Flexural strength is the stress that exists at failure in bending.
It is desirable that MIs have high flexural strength to avoid fracture, mainly in contexts where the MIs are placed in higher density bone.9,10 A previous study evaluating 8-mm MIs demonstrated greater flexural strength for SS-MIs than for Ti-MIs.13 Another study, carried out with extra-alveolar MIs of 10 and 11 mm, showed no difference in flexural resistance between the two types of material.14 However, that study applied the bending load in a region of the MI close to the insertion surface and far from the head of the MIs, and evaluated 0. In bone of greater density (i.e., 40 PCF), the bone yields to a lesser extent, more load is necessary to displace the MIs and, consequently, greater deformation of the MIs is observed. In this context, the SS-MIs showed better mechanical behavior: more force was necessary to achieve 2, 3, and 4-mm deflections for the SS-MIs than for the Ti-MIs. The lower mechanical resistance of the Ti-MIs was evidenced by the presence of fractures close to their bending region.
The above-mentioned results have important clinical relevance, since they suggest that in higher density bone regions, SS-MIs would have greater mechanical resistance than Ti-MIs.
Previous evidence has suggested that a bone density of 0.32 g/cm3 is mainly observed in the posterior region of the maxilla, while a density of 0.64 g/cm3 can be expected in the anterior mandible, buccal shelf and midpalatal region.25,26
CONCLUSIONS
Considering the limitations of this in vitro study, the results seem to demonstrate that: » Both SS-MIs and Ti-MIs provided adequate primary stability.
» Regardless of the type of MI material, MIs inserted in higher density bone showed greater primary stability.
» SS-MIs showed greater flexural strength and less surface deformation than Ti-MIs, when inserted into high-density bone.
Figure 1 :
Figure 1: Distribution of the study groups according to the mini-implant type and density of the artificial bone.SS-MIs = stainless steel mini-implants, Ti-MIs = titanium alloy mini-implants.
Figure 2 :
Figure 2: Mechanical tests.A, B) insertion of mini-implants in test blocks of 20 PCF and 40 PCF, respectively, for measurement of insertion torque.C, D) Application of perpendicular load on the head of the mini-implants for measurement of flexural strength.
One MI per group, randomly selected, and a new MI (not submitted to mechanical loading) for each MI type, were used for surface topography and chemical composition analyses by means of scanning electron microscopy (SEM) and energy dispersive X-ray spectroscopy (EDS), respectively. The MIs were fixed on metallic platforms with the aid of colloidal graphite. An X-ray detector system coupled to a scanning electron microscope (JSM-6610LV, JEOL, Akishima, Japan), operating at 20 kV, was utilized. The SEM Control User Interface program v. 3.06 was used to acquire photomicrographs of the surface of the head and the middle third of the MIs, with magnification of 30x. Higher magnifications were used to show characteristics of the observed failures. Surface characteristics were evaluated in a qualitative manner. The chemical composition was analyzed at the same sites as the surface topography evaluation. The Oxford Aztec software (version 3.3) was used to obtain the relative amounts (%) of the chemical components of the region of interest.
These values resulted in an effect size d = 4.53, which was used for the subsequent calculations. The estimates were made in G*Power 3.1 software, considering the following parameters: effect size d = 4.53, α error probability = 0.05, power (1-β error probability) = 0.8, and allocation ratio = 1. The calculation resulted in a minimum sample size of three MIs per group. Considering the possibility of using non-parametric statistics, the estimated amount was doubled, resulting in six MIs per group. Considering the limitations of the aforementioned approach, post-hoc calculations of the power achieved by the finally implemented statistical models were additionally performed. The power estimate for the general linear model was based on an effect size f = 2.89 (calculated from η2p = 0.893, obtained from the implemented model), an α error probability = 0.05, total sample size = 24, numerator df = 1 and number of groups = 4. The power estimate for the mixed model was based on an effect size f = 10.49 (calculated from η2p = 0.991, obtained from the implemented model), an α error probability = 0.05, total sample size = 24, number of groups = 4, number of measurements = 3, correlation among repeated measures = 0.5 and nonsphericity correction = 1.
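The a priori calculation and the effect sizes quoted above can be reproduced programmatically; the sketch below uses statsmodels (our choice of package; the study itself used G*Power 3.1):

```python
import numpy as np
from statsmodels.stats.power import TTestIndPower

# Effect size from the previously reported insertion torques (mean, SD):
# SS-MIs 4.4 +/- 0.56 Ncm and Ti-MIs 7.5 +/- 0.79 Ncm.
m1, s1, m2, s2 = 4.4, 0.56, 7.5, 0.79
d = (m2 - m1) / np.sqrt((s1**2 + s2**2) / 2)    # pooled-SD Cohen's d
print(f"effect size d = {d:.2f}")               # -> 4.53

n = TTestIndPower().solve_power(effect_size=d, alpha=0.05, power=0.8,
                                ratio=1.0, alternative='two-sided')
print(f"minimum n per group = {int(np.ceil(n))}")   # -> 3 (doubled to 6)

# Post-hoc effect size f from partial eta squared, as described above:
for eta2 in (0.893, 0.991):
    print(f"eta2_p={eta2}: f = {np.sqrt(eta2 / (1 - eta2)):.2f}")  # 2.89, 10.49
```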
Descriptive statistics were used to present the data of the evaluated outcomes. A general linear model was implemented to evaluate the effect of the MI type (SS-MI/Ti-MI), bone density (20 PCF/40 PCF), and the interaction MI type*Bone density on the insertion torque. Furthermore, to evaluate the effects on flexural strength, a mixed model was implemented, in which the MI type, bone density, deflection and the possible interactions (i.e., MI type*Bone density, MI type*Deflection, Bone density*Deflection, and MI type*Bone density*Deflection) were considered as fixed effects of variation, and the mini-implants were considered as a random intercept in the models. Post-hoc comparisons between the study groups using the Bonferroni test were carried out in case significant effects of the interactions were detected. Test assumptions were verified using the Shapiro-Wilk test to assess normality of residuals, and Levene's test to assess homogeneity of residual variances. All tests were performed in Jamovi software (version 2.0), using a significance level of 5%.

A significant effect of the interaction Mini-implant*Bone density was also detected (p < 0.001, Table 4): the differences observed between SS-MIs and Ti-MIs were only evident in the MIs inserted in the 40 PCF blocks, and not in those inserted in the 20 PCF blocks (Fig 3).
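A sketch of how the mixed model described above could be specified in Python with statsmodels; the column and file names are hypothetical (the study ran the analysis in Jamovi), and the data are assumed to be in long format with one row per implant per deflection level:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table: implant_id, mi_type, bone_density,
# deflection (2/3/4 mm) and the measured flexural strength 'fs'.
df = pd.read_csv("flexural_strength.csv")

# Fixed effects: MI type, bone density, deflection and all interactions;
# random intercept per mini-implant, as in the model described above.
model = smf.mixedlm("fs ~ mi_type * bone_density * C(deflection)",
                    data=df, groups=df["implant_id"])
print(model.fit().summary())
```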
Figure 3 :
Figure 3: Flexural strength according to the MI type, bone density and deflection.
SEM analysis showed no or minimal surface alteration in any of the MIs' heads after being submitted to mechanical loading (Fig 4). The head surfaces of the MIs were homogeneous, well-polished and with minimal structural defects, such as striations. On the other hand, plastic deformation without fracture was observed in the threads of SS-MIs (Fig 4H and 4I), while obvious fractures on the middle third of the screw were observed in the threads of Ti-MIs (Fig 4K and 4L). Figure 5 shows, at higher magnification, a microfracture and oblique fracture lines in the Ti-MIs. The EDS analysis evidenced the presence of 18% of C and 2.06% of O in the SS-MIs inserted in test blocks of 20 PCF, after being submitted to loading. Incorporation of 3.91% of C was also observed in the Ti-MIs placed in 40 PCF blocks. The percentages of the chemical components on the surface of the evaluated MIs are presented in Table 1.
Figure 4 :
Figure 4: SEM analysis.A, B, C, G, H, I) stainless steel mini-implants; D, E, F, J, K, L) titanium alloy mini-implants; A, D, G, J) control (new) mini-implants (not submitted to mechanical loading); B, E, H, K) mini-implants installed in 20 PCF test blocks and submitted to loading; C, F, I, L) mini-implants installed in 40 PCF test blocks and submitted to loading.
Therefore, the present findings reinforce the indication of SS-MIs instead of Ti-MIs for these insertion areas. However, it must be recognized that the artificial bone blocks used in the present study do not represent all the characteristics of specific regions. To do this, test blocks with different densities, representing both cancellous and cortical bone, should be prepared, in addition to working with different cortical thicknesses. In this way, the different regions of the bone, both interradicular and extra-alveolar, could be better represented. Thus, further research should investigate differences between the MIs evaluated in more specific representations of the different areas for the insertion of these MIs. It is important to mention that some confounding factors may have influenced the present results. The risk of fracture during the clinical use of extra-alveolar MIs depends on other variables, such as the diameter and length of the MIs, geometric design, and insertion angle, in addition to the type of alloy chosen.13-15 The use of MIs from different brands implies possible variations in their geometric design. It has been previously demonstrated that even MIs with the same diameter but from different brands may present variations in their mechanical properties.27 Therefore, future studies should evaluate the interaction of all the mentioned factors on the mechanical parameters of MIs inserted in bone of different densities.
Table 1 :
Relative amounts (%) of chemical components in each mini-implant (MI) type.
Table 2 :
Effect of the variables MI type, bone density and interaction on the insertion torque values.
* Indicates a statistically significant effect.
Table 3 :
Means ± SD and mean differences (95% CI) of insertion torque according to the mini-implant type and bone density.
Table 4 :
Effect of the variables MI type, bone density, deflection and interactions on the flexural strength values.
* Indicates a statistically significant effect.
Table 5 :
Mean ± SD of flexural strength, according to MI type*bone density and deflection. Different superscript letters indicate a statistically significant difference among the values in the columns.
With the objective of simulating a more realistic clinical situation, in which the MIs must resist bending in a biological tissue that is not completely rigid (i.e., MIs displace within the bone),24 the bending loads were applied with the MIs fixed in artificial bone of different densities. As expected, in lower density bone (i.e., 20 PCF), both types of MI showed similar flexural strength; the less dense bone initially yields to the loading application, the MI displaces, and a minor deformation of the MIs is observed.
 | 2024-05-22T05:12:00.846Z | 2024-05-20T00:00:00.000 | {
"year": 2024,
"sha1": "1c9e1467b376b1a0e8d688cc31f1477faf7412e8",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "1c9e1467b376b1a0e8d688cc31f1477faf7412e8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
238407928 | pes2o/s2orc | v3-fos-license | Quantum Semi-Supervised Learning with Quantum Supremacy
Quantum machine learning promises to efficiently solve important problems. There are two persistent challenges in classical machine learning: the lack of labeled data, and the limit of computational power. We propose a novel framework that resolves both issues: quantum semi-supervised learning. Moreover, we provide a protocol for systematically designing quantum machine learning algorithms with quantum supremacy, which can be extended beyond quantum semi-supervised learning. In the meantime, we show that a naive quantum matrix product estimation algorithm outperforms the best known classical matrix multiplication algorithm. We showcase two concrete quantum semi-supervised learning algorithms: a quantum self-training algorithm named the propagating nearest-neighbor classifier, and the quantum semi-supervised K-means clustering algorithm. Through time complexity analysis, we conclude that they indeed possess quantum supremacy.
Introduction
Machine learning has made many seemingly impossible tasks possible: from visual and speech recognition, effective web search, to the study of human genomics [1,2]. However, there are several long-standing bottlenecks in the field of machine learning, which slow down its pace in conquering more fields of science and technology. Two major challenges are the lack of labeled data and the limit of computational power. In this paper, we propose a framework of quantum semi-supervised learning, which can overcome both difficulties at the same time.
Semi-supervised learning [3] combines a small amount of labeled data with a large amount of unlabeled data during training, which tackles the common issue of the lack of labeled data. Quantum computation [4] redefines the way computers create and manipulate information. Many quantum algorithms [5,6,7,8,9,10,11] have been demonstrated to possess quantum supremacy, which means that they can execute the same task substantially faster than their classical counterparts. Quantum semi-supervised learning combines the advantages of both semi-supervised learning and quantum computation, and therefore represents the future of machine learning and quantum physics.
Since semi-supervised learning handles both labeled and unlabeled data, in the limiting case when we only have labeled or unlabeled data, we get back to supervised or unsupervised learning. Hence, in some way, semi-supervised learning is more generic, and its algorithms can be used in supervised or unsupervised learning with small modifications. Therefore, many discussions in this paper also apply to quantum supervised or unsupervised learning.
In section 2, we propose a general framework of quantum semi-supervised learning. In section 3, we provide a generic protocol for designing quantum machine learning algorithms with quantum supremacy, which can be extended beyond quantum semi-supervised learning. The recipe can also be used to realize quantum supremacy in deep quantum neural networks. One feature of our time complexity analysis is that we give a clear separation between memory access time complexity and algorithmic time complexity, so that the algorithmic advantage on quantum computers isn't overshadowed by the exponential speed-up from the fast access of quantum random access memories. Moreover, we show that a naive quantum matrix product estimation algorithm outperforms the best known classical matrix multiplication algorithm.
In section 4, we introduce quantum self-training and point out its source of quantum supremacy. We give a concrete example, the quantum propagating nearest-neighbor algorithm, and demonstrate its quantum supremacy by comparing its time complexity to the classical case. In section 5, we present the quantum semi-supervised K-means clustering algorithm, and prove its quantum supremacy by time complexity analysis. We conclude the paper with an outlook on speeding up more complicated machine learning tasks, including deep neural networks, along with an ultimate goal of using quantum-quantum learning to learn large quantum systems efficiently and reliably on a quantum computer.
Framework of Quantum Semi-Supervised Learning
We first review semi-supervised learning, and then propose a general framework of quantum semi-supervised learning. Since we are going to introduce different types of quantum semisupervised learning, we also call it classical or classical-classical semi-supervised learning.
Semi-supervised learning lies between supervised and unsupervised learning: we have a small set of labeled data $\{(x^{(i)}, z^{(i)})\}_{i=1}^{l}$ and a large set of unlabeled data $\{x^{(j)}\}_{j=l+1}^{l+u}$. Specifically, there are several different settings, including regression or classification with labeled and unlabeled data, constrained clustering, and dimensionality reduction with labeled instances whose reduced feature representation is given. We focus on the first setting.
In supervised learning, the training sample is fully labeled, so the goal is always to label the future test data. However, in a semi-supervised setting, the training sample contains unlabeled data. Hence, there are two different goals in semi-supervised learning: inductive semi-supervised learning aims at predicting the labels of future test data, while transductive semi-supervised learning predicts the labels of the unlabeled instances in the training sample.
Definition 2.1. Inductive Semi-Supervised Learning. Given a training sample $\{(x^{(i)}, z^{(i)})\}_{i=1}^{l} \cup \{x^{(j)}\}_{j=l+1}^{l+u}$, inductive semi-supervised learning learns a function $f : X \to Y$ so that $f$ is expected to be a good predictor on future data, beyond $\{x^{(j)}\}_{j=l+1}^{l+u}$.
Definition 2.2. Transductive Semi-Supervised Learning. Given a training sample $\{(x^{(i)}, z^{(i)})\}_{i=1}^{l} \cup \{x^{(j)}\}_{j=l+1}^{l+u}$, transductive learning trains a function $f : X^{l+u} \to Y^{l+u}$ so that $f$ is expected to be a good predictor on the unlabeled data $\{x^{(j)}\}_{j=l+1}^{l+u}$.
There are different settings of quantum semi-supervised learning. First, classical-quantum semi-supervised learning encodes both labeled and unlabeled classical data in quantum states, and then uses quantum processors to carry out the learning phase. With a state-of-the-art design of quantum algorithms, classical-quantum semi-supervised learning executes a suitable learning task much faster than its classical-classical counterpart. Later, we will provide a general protocol for designing such algorithms, and show several examples.
Definition 2.3. Classical-Quantum Semi-Supervised Learning. Given a training sample $\{(x^{(i)}, z^{(i)})\}_{i=1}^{l} \cup \{x^{(j)}\}_{j=l+1}^{l+u}$, classical-quantum semi-supervised learning encodes the data into quantum states, and then executes the learning task on a quantum computer. One convenient encoding is to map the training sample into product states: labeled data as $|x^{(i)}\rangle|z^{(i)}\rangle$ and unlabeled data as $|x^{(j)}\rangle$.

Second, quantum-classical semi-supervised learning maps quantum data to a classical data structure, and then the problem turns into a classical-classical semi-supervised learning problem. While this sounds simple, there are many underlying subtleties. Suppose our instances $\rho^{(i)}$ are quantum states, and the labels $\sigma^{(i)}$ can be either classical or quantum. If we know the $\rho^{(i)}$'s and $\sigma^{(i)}$'s exactly, then we essentially have classical data, and the mapping is completely trivial. Hence, the nontrivial case is when we have zero or partial knowledge of the training sample, but the quantum states are stored nicely in a quantum memory. We consider this case the general setup of quantum-classical semi-supervised learning.
In this scenario, the novelty and challenge lie in the first step: how to efficiently and accurately extract classical information from the quantum data? Different mappings may result in different training performances and computational costs. The information loss when making quantum measurement, as well as the no-cloning theorem, adds additional difficulties to the problem. One simple but resource-consuming approach is to do efficient quantum tomography when many copies of same instances are present. To some extent, we have to learn the quantum states first.
When certain limitations forbid us to do complete tomography, the problem becomes more interesting. It has more of an unsupervised learning flavor, as our training data, even the labeled ones, doesn't give out concrete knowledge. The way we process the quantum data may have crucial influence on the learning performance. Again, even in the regime of classical machine learning, pre-processing data can have significant impact on the training output. For inductive learning, this can be even trickier. When we have new quantum data coming in, are we going to process the test data the same way as we did for the training set?
Intuitively, the answer is yes. A thorough discussion will be presented in future work.
Definition 2.4. Quantum-Classical Semi-Supervised Learning. Given a training sample $\{(\rho^{(i)}, \sigma^{(i)})\}_{i=1}^{l} \cup \{\rho^{(j)}\}_{j=l+1}^{l+u}$, where the $\rho^{(i)}$'s and $\sigma^{(i)}$'s are partially known or completely unknown, quantum-classical semi-supervised learning extracts classical information from the initial quantum data using a certain quantum channel, and then trains on the resulting classical data on a classical computer. For inductive learning, it uses the same channel to process the test quantum data, and then makes predictions on the corresponding classical data.
Quantum-classical semi-supervised learning turns a complicated quantum problem into a simpler classical problem, and then solves it automatically on a classical computer, which is better understood at the current stage. The drawback is the loss of fidelity of the original data. To combat this, it is natural to skip the first step, the conversion of quantum data to classical data. When doing so, we learn the pattern of quantum data on a quantum computer, which is quantum-quantum learning.
Definition 2.5. Quantum-Quantum Semi-Supervised Learning. Given a training sample $\{(\rho^{(i)}, \sigma^{(i)})\}_{i=1}^{l} \cup \{\rho^{(j)}\}_{j=l+1}^{l+u}$, where the $\rho^{(i)}$'s and $\sigma^{(i)}$'s are partially known or completely unknown, quantum-quantum semi-supervised learning executes the learning task on a quantum computer.
Intuitively, it is most natural to learn a quantum system on a quantum computer. However, with little classical information in this setting, we need a new set of theories and algorithms [12,13,14] in the training process, even for things as simple as gradient descent [15]. Another challenge is that we are still limited by near-term quantum devices. Hence, right now we still need quantum-classical learning to aid our process of learning a quantum system. In the near future, when fault-tolerant quantum computers are in commercial use, the default choice will be quantum-quantum learning. These settings are not completely distinct; running tasks in a hybrid way can improve efficiency in time and space.
Quantum Supremacy of Classical-Quantum Learning
In this section, we provide a generic protocol in designing classical-quantum learning algorithms that possess quantum supremacy. The idea is to take advantage of the fact that certain data acquisitions and manipulations can be done faster on a quantum computer [16,10,17,18].
Theorem 3.1. QRAM data structure [16]. Let $V \in \mathbb{R}^{N \times d}$. There exists a data structure to store the rows of $V$ such that: 1. The time to insert, update, or delete a single entry $v_{ij}$ is $O(\log(Nd))$. 2. A quantum algorithm with access to the data structure can perform unitaries of the form $|i\rangle|0\rangle \to |i\rangle|v^{(i)}\rangle$, where $v^{(i)}$ is the $i$-th row of $V$, in time $O(\log(Nd))$.
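The theorem statement leaves the structure implicit; in constructions of this kind [16], each row is stored in a binary tree whose leaves hold squared entries and whose internal nodes hold partial sums, which is what makes O(log) updates and amplitude-proportional indexing possible. The sketch below is our classical illustration of that bookkeeping, not the quantum circuit itself:

```python
import math, random

class QramRow:
    """Binary-tree bookkeeping for one row v (illustrative sketch).

    Leaves store v_i**2 and internal nodes store subtree sums, so an
    entry update touches O(log d) nodes, and sampling an index i with
    probability v_i**2 / ||v||**2 also walks O(log d) nodes.
    """
    def __init__(self, d):
        self.d = 1 << max(1, (d - 1).bit_length())  # pad to power of two
        self.tree = [0.0] * (2 * self.d)            # tree[1] = ||v||**2
        self.sign = [1.0] * self.d                  # signs kept separately

    def update(self, i, value):                     # O(log d) per entry
        self.sign[i] = math.copysign(1.0, value)
        node, delta = self.d + i, value * value - self.tree[self.d + i]
        while node >= 1:
            self.tree[node] += delta
            node //= 2

    def sample(self):                 # index i with prob. v_i^2 / ||v||^2
        node, r = 1, random.random() * self.tree[1]
        while node < self.d:
            node *= 2
            if r >= self.tree[node]:
                r -= self.tree[node]
                node += 1
        return node - self.d

row = QramRow(4)
for i, v in enumerate([0.5, -0.5, 0.5, 0.5]):
    row.update(i, v)
print(row.sample())   # uniform over {0, 1, 2, 3} for this vector
```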
Using a classical RAM data structure, the corresponding tasks take $O(Nd)$ time to complete; hence, the QRAM data structure provides an exponential speed-up. Many quantum algorithm references combine the memory access time complexity and the algorithmic time complexity when performing time complexity analysis. However, most traditional algorithm analysis takes the memory access time complexity as $O(1)$, because modern computers allow processor caches, memory-level parallelism, etc. To put the comparison of quantum and classical algorithms on an equal footing, in this paper we take the memory access time complexity as a constant. It is good to keep in mind that quantum computers are inherently exponentially faster in reading and writing at a memory location.

Theorem 3.2. Distance Estimation [10,17]. Given data matrices $X \in \mathbb{R}^{l \times d}$ and $Y \in \mathbb{R}^{u \times d}$ stored in the QRAM data structure, where $x^{(i)}$ is the $i$-th row of $X$ and $y^{(j)}$ is the $j$-th row of $Y$, suppose that the unitaries $|i\rangle|0\rangle \to |i\rangle|x^{(i)}\rangle$ and $|j\rangle|0\rangle \to |j\rangle|y^{(j)}\rangle$ can be performed in time $\Lambda$ and that the norms of the vectors are known. For any $\Delta > 0$ and $\epsilon > 0$, there exists a quantum algorithm that computes the $L_2$ distance between two vectors $x^{(i)}$ and $y^{(j)}$, $|i\rangle|j\rangle|0\rangle \to |i\rangle|j\rangle|\overline{d^2(x^{(i)}, y^{(j)})}\rangle$, where $|\overline{d^2(x^{(i)}, y^{(j)})} - d^2(x^{(i)}, y^{(j)})| \leq \epsilon$ with probability at least $1 - 2\Delta$, in time $T = \tilde{O}(\|x^{(i)}\| \|y^{(j)}\| \Lambda \log(1/\Delta)/\epsilon)$.

Theorem 3.3. Inner Product Estimation [16,17]. Under the same setup, for any $\Delta > 0$ and $\epsilon > 0$, there exists a quantum algorithm that computes the inner product between two vectors $x^{(i)}$ and $y^{(j)}$, $|i\rangle|j\rangle|0\rangle \to |i\rangle|j\rangle|\overline{(x^{(i)}, y^{(j)})}\rangle$, where $|\overline{(x^{(i)}, y^{(j)})} - (x^{(i)}, y^{(j)})| \leq \epsilon$ with probability at least $1 - 2\Delta$, in time $T = \tilde{O}(\|x^{(i)}\| \|y^{(j)}\| \Lambda \log(1/\Delta)/\epsilon)$.
The above results show that for quantum distance and inner product estimations, the algorithmic time complexity is O(1) in terms of the dimensions of vectors, while the same classical calculation takes O(d).
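As a concrete, classical illustration of measurement-based inner-product estimation, the sketch below simulates the outcome statistics of a swap test, whose ancilla reads 0 with probability $1/2 + |\langle x|y\rangle|^2/2$. Note the caveat: plain sampling as simulated here converges as $O(1/\epsilon^2)$ in the number of shots, whereas the amplitude-estimation routines behind Theorems 3.2-3.3 achieve the $O(1/\epsilon)$ dependence quoted above; names and data are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def swap_test_overlap(x, y, shots=10_000):
    """Estimate |<x|y>|**2 from simulated swap-test ancilla statistics."""
    x = x / np.linalg.norm(x)
    y = y / np.linalg.norm(y)
    p0 = 0.5 + np.abs(np.dot(x, y)) ** 2 / 2.0   # exact P(ancilla = 0)
    zeros = rng.binomial(shots, p0)              # simulated measurements
    return 2.0 * zeros / shots - 1.0             # invert p0 -> |<x|y>|**2

x, y = rng.normal(size=8), rng.normal(size=8)
exact = np.abs(np.dot(x / np.linalg.norm(x), y / np.linalg.norm(y))) ** 2
print(f"estimate = {swap_test_overlap(x, y):.4f}, exact = {exact:.4f}")
```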
Since matrix multiplications can be interpreted as calculating many inner products, we propose and prove the following theorem:

Theorem 3.4. Matrix Product Estimation. Given data matrices $X \in \mathbb{R}^{l \times d}$ and $Y \in \mathbb{R}^{u \times d}$ stored in the QRAM data structure, where $x^{(i)}$ is the $i$-th row of $X$ and $y^{(j)}$ is the $j$-th row of $Y$, suppose that the unitaries $|i\rangle|0\rangle \to |i\rangle|x^{(i)}\rangle$ and $|j\rangle|0\rangle \to |j\rangle|y^{(j)}\rangle$ can be performed in time $\Lambda$ and that the norms of the vectors are known. For any $\Delta > 0$ and $\epsilon > 0$, there exists a quantum algorithm that computes an estimate $\overline{Z}$ of the product $Z = XY^{T}$ such that $|\overline{z}_{ij} - z_{ij}| \leq \epsilon$ with probability at least $1 - 2\Delta$, in time $T = \tilde{O}(\|x^{(i)}\| \|y^{(j)}\| \, lu \, \Lambda \log(1/\Delta)/\epsilon)$.
For a naive matrix multiplication between an $m \times k$ matrix and a $k \times n$ matrix, quantum estimation takes time $O(mn)$, while the classical calculation takes time $O(mnk)$. When dealing with $n \times n$ matrices, the naive quantum matrix multiplication algorithm takes time $O(n^2)$. This is dramatic, because the classical matrix multiplication algorithm with the best known asymptotic complexity runs in $O(n^{2.3728596})$ time, and the naive classical algorithm runs in $O(n^3)$ time.
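A minimal sketch of how Theorem 3.4 reduces matrix multiplication to $l \cdot u$ independent oracle calls. The "oracle" below is a classical stand-in that returns an inner product up to additive error ε, so only the call counting of the quantum algorithm is modeled, not its speed:

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_inner(x, y, eps=0.01):
    """Stand-in for the inner-product oracle of Theorem 3.3:
    <x, y> up to additive error eps, counted as one O(1)-time call."""
    return np.dot(x, y) + rng.uniform(-eps, eps)

def matrix_product_estimate(X, Y, eps=0.01):
    """Entrywise estimate of Z = X @ Y.T using l*u oracle calls,
    i.e. O(lu) calls instead of the classical O(lud) arithmetic."""
    Z = np.empty((X.shape[0], Y.shape[0]))
    for i in range(X.shape[0]):
        for j in range(Y.shape[0]):
            Z[i, j] = noisy_inner(X[i], Y[j], eps)
    return Z

X, Y = rng.normal(size=(4, 64)), rng.normal(size=(5, 64))
err = np.max(np.abs(matrix_product_estimate(X, Y) - X @ Y.T))
print(f"max entrywise error = {err:.4f}")   # bounded by eps
```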
Theorem 3.5. HHL Algorithm [18]. Given a sparse $N \times N$ matrix $A$ with condition number $\kappa$, and a vector $b$, suppose $M$ is a matrix and $x$ is a vector such that $Ax = b$. The HHL algorithm estimates $x^{\dagger} M x$ in $\tilde{O}(\mathrm{poly}(\log N, \kappa))$ time.
In contrast, the classical algorithm that estimates $x^{\dagger} M x$ takes time $\tilde{O}(N\sqrt{\kappa})$. For the same task, the HHL algorithm presents an exponential speed-up with respect to the matrix size $N$.
Most machine learning algorithms require calculations of distances, inner products, matrix products, and matrix inverses. For these algorithms, if we can perform the calculations on a quantum computer, and make sure that the quantum state preparation and the classical information retrieval are not exponentially costly, we can realize quantum supremacy in all of them. This sheds light on training deep quantum neural networks faster, considering that many inner product calculations and matrix inversions are carried out in the training process.
For the remainder of the paper, we showcase general classes of quantum semi-supervised learning algorithms, and give some concrete examples. Moreover, we compare the time complexity of the classical and quantum versions, and demonstrate quantum supremacy in these scenarios.
Quantum Self-Training
Self-training is characterized by the fact that the learning process uses its own predictions to teach itself; it is also called self-teaching or bootstrapping for this reason. Self-training assumes that its own predictions, at least the high-confidence ones, tend to be correct. For classification tasks with well-separated clusters, this is usually the case.
The major goal of self-training is to learn an appropriate predictor f . We now show how this is done in quantum self-training, and point out which parts of the general quantum self-training algorithm possess quantum supremacy.
Step 1: Distance Estimation. For every labeled point $|x^{(i)}\rangle \in L$ and every unlabeled point $|x^{(j)}\rangle \in U$, estimate the pairwise distance $d(x^{(i)}, x^{(j)})$ using the quantum distance estimation of Theorem 3.2.

Step 2: Distance Minimization. Find the pair $(i, j)$ with the minimum estimated distance.

Step 3: Label Assignment. Set $|z^{(j)}\rangle = |z^{(i)}\rangle$ to be the label of $|x^{(j)}\rangle$. Remove $|x^{(j)}\rangle$ from $U$, and add $|x^{(j)}\rangle|z^{(j)}\rangle$ to $L$.
In the quantum algorithm, we follow the same logic as in the classical version, but store and manipulate the data on a quantum computer. At each iteration, let $l = |L|$ and $u = |U|$; the point distance estimation takes time $O(lu)$, the distance minimization takes $O(lu)$, and the label assignment takes $O(1)$, so the combined time complexity is $O(lu)$.
As a comparison, at each iteration, the corresponding classical algorithm takes time O(lud) to calculate the distance, and then O(lu) for distance minimization, and O(1) for label assignment, with a combined time complexity O(lud).
The above discussion demonstrates the quantum supremacy of the quantum propagating nearest-neighbor classifier. In particular, when the data points are of high dimension, the quantum speed-up becomes significant. By creating state-of-the-art superpositions, it is possible to reduce the running time of quantum distance estimation even further. Last but not least, we should keep in mind that QRAM processes data exponentially faster than RAM.
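For reference, the classical counterpart of the propagating nearest-neighbor classifier described above fits in a few lines; the quantum variant replaces the O(d)-per-pair distance computations with the O(1)-per-pair estimates of Theorem 3.2. The function name and toy data below are ours:

```python
import numpy as np

def propagating_nn(X_l, z_l, X_u):
    """Classical propagating nearest-neighbor self-training: repeatedly
    find the closest (labeled, unlabeled) pair, copy the label, and
    move the newly labeled point into the labeled set."""
    L = [np.asarray(x, dtype=float) for x in X_l]
    U = [np.asarray(x, dtype=float) for x in X_u]
    z = list(z_l)
    while U:
        D = np.array([[np.linalg.norm(a - b) for b in U] for a in L])
        i, j = np.unravel_index(np.argmin(D), D.shape)   # closest pair
        L.append(U.pop(j))
        z.append(z[i])                                   # propagate label
    return np.array(L), np.array(z)

X_l, z_l = [[0.0, 0.0], [5.0, 5.0]], [0, 1]
X_u = [[0.5, 0.2], [4.8, 5.1], [1.0, 0.8]]
print(propagating_nn(X_l, z_l, X_u)[1])   # -> [0 1 1 0 0]
```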
Quantum Semi-Supervised K-Means
K-means clustering is a simple and popular unsupervised machine learning algorithm that plays a significant role in cluster analysis. There are different ways of extending it to a semi-supervised setting. We consider the case when some data points are labeled, i.e., we have labeled data $\{(x^{(i)}, z^{(i)})\}_{i=1}^{l}$ and unlabeled data $\{x^{(j)}\}_{j=l+1}^{l+u}$. One approach to classical semi-supervised K-means clustering is the following algorithm.
Initialization. For any $m \in [k]$, denote the set of labeled data points whose label is $m$ as $S_m^0$, and initialize the centroid $c_m^0$ as the mean of the points in $S_m^0$.
Step 1: Distance Calculation. At iteration $t$, for each unlabeled data point $x^{(j)}$, compute the distance $d(x^{(j)}, c_m^t)$ to every centroid $c_m^t$.

Step 2: Cluster Assignment. For labeled data, assign their original label; for unlabeled data, find the minimum distance among $\{d(x^{(j)}, c_m^t)\}_m$, and assign the minimizing $m$ as the label of $x^{(j)}$, i.e. $z^{(j)} = m$.
Step 3: Centroid Update. For each $m$, let
$$c_m^{t+1} = \frac{1}{|S_m^t|} \sum_{x^{(j)} \in S_m^t} x^{(j)}, \qquad (1)$$
where $S_m^t$ is the set of points currently labeled $m$; Steps 1-3 are repeated until the assignments converge. We now propose the quantum semi-supervised K-means algorithm.
Step 1: Distance Estimation. Using the distance estimation of Theorem 3.2, coherently estimate, in superposition over points and centroids, $|j\rangle|m\rangle|0\rangle \to |j\rangle|m\rangle|\overline{d(x^{(j)}, c_m^t)}\rangle$.
Step 2: Cluster Assignment. For labeled data, assign their original label; for unlabeled data, find the minimum estimated distance among $\{\overline{d(x^{(j)}, c_m^t)}\}_m$, and assign $m$ as the label of $|x^{(j)}\rangle$, i.e. $|z^{(j)}\rangle = |m\rangle$. Next, uncompute Step 1 to create the superposition of all points and their labels, $\sum_j |j\rangle|z^{(j)}\rangle$ (up to normalization).
Step 3: Centroid Update. Denote the set of data points whose label is $|m\rangle$ as $S_m^t$. Measure the label register to obtain a state $|\chi_m^t\rangle = \frac{1}{\sqrt{|S_m^t|}} \sum_{j \in S_m^t} |j\rangle$, from which the updated centroid $c_m^{t+1}$ can be recovered. Hence, the quantum algorithm is faster at every stage of the training, which demonstrates its supremacy.
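For comparison with the quantum procedure above, a minimal classical implementation of the seeded (semi-supervised) K-means of this section is sketched below; the function name and toy data are ours, and empty clusters are not handled:

```python
import numpy as np

def seeded_kmeans(X_l, z_l, X_u, k, iters=20):
    """Seeded K-means: centroids start from the labeled points of each
    class; labeled points keep their labels, unlabeled points are
    reassigned to the nearest centroid at every iteration."""
    X = np.vstack([X_l, X_u])
    labels = np.concatenate([z_l, np.zeros(len(X_u), dtype=int)])
    C = np.array([X_l[z_l == m].mean(axis=0) for m in range(k)])
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
        labels[len(X_l):] = np.argmin(d[len(X_l):], axis=1)           # Step 2
        C = np.array([X[labels == m].mean(axis=0) for m in range(k)]) # Step 3
    return labels, C

rng = np.random.default_rng(2)
X_l, z_l = np.array([[0.0, 0.0], [4.0, 4.0]]), np.array([0, 1])
X_u = np.vstack([rng.normal(0, 0.3, (10, 2)), rng.normal(4, 0.3, (10, 2))])
print(seeded_kmeans(X_l, z_l, X_u, k=2)[0])
```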
Conclusion
We propose the framework of quantum semi-supervised learning, which resolves two long-standing challenges of machine learning at the same time. Semi-supervised learning tackles the issue of the lack of labeled data, and quantum computation provides dramatic speed-ups so that the limit of computational power is no longer an issue. We provide a protocol that systematically designs quantum machine learning algorithms with quantum supremacy and showcase examples. The recipe can be extended beyond supervised, unsupervised, and semi-supervised learning. In the future, we will demonstrate how to provide quantum speed-ups to deep neural networks [21]. We also aim at developing more complicated quantum semi-supervised learning algorithms, for example, quantum co-training and quantum graph-based training [22]. Furthermore, we keep an ultimate goal in mind: to use quantum-quantum learning to learn large quantum systems efficiently and reliably on a quantum computer [15]. | 2021-10-07T01:16:04.228Z | 2021-10-05T00:00:00.000 | {
"year": 2021,
"sha1": "edaf0a50f37348af7fa37ae355eadef056ba5a3d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "6d62cdb6ccdbea704d4bb10028418557f04a9d9b",
"s2fieldsofstudy": [
"Computer Science",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Computer Science",
"Mathematics"
]
} |
49608468 | pes2o/s2orc | v3-fos-license | Nanosynthesis of Silver-Calcium Glycerophosphate: Promising Association against Oral Pathogens
Nanobiomaterials combining remineralization and antimicrobial abilities would bring important benefits to the control of dental caries. This study aimed to produce nanocompounds containing calcium glycerophosphate (CaGP) and silver nanoparticles (AgNP), varying the reducing agent of silver nitrate (sodium borohydride (B) or sodium citrate (C)), the concentration of silver (1% or 10%), and the CaGP form (nano or commercial), and to analyze their characterization and antimicrobial activity against ATCC Candida albicans (10231) and Streptococcus mutans (25175) by the microdilution method. Controls of AgNP were produced, and silver ions (Ag+) were quantified in all of the samples. X-ray diffraction, UV-Vis, and scanning electron microscopy (SEM) analyses demonstrated AgNP associated with CaGP. Ag+ ions were considerably higher in AgCaGP/C. C. albicans was susceptible to the nanocompounds produced with both reducing agents, regardless of Ag concentration and CaGP form, with Ag10%CaGP-N/C being the most effective compound (19.5-39.0 µg Ag mL−1). For S. mutans, effectiveness was observed only for AgCaGP reduced by citrate, with Ag10%CaGP-N again presenting the highest effectiveness (156.2-312.5 µg Ag mL−1). Notably, CaGP enhanced the silver antimicrobial potential by about two- and eight-fold against C. albicans and S. mutans when compared with the AgNP controls (from 7.8 to 3.9 and from 250 to 31.2 µg Ag mL−1, respectively). The synthesis used in this study promoted the formation of AgNP associated with CaGP, and although the use of sodium borohydride (B) resulted in a pronounced reduction of Ag+, the composite AgCaGP/B was less effective against the microorganisms tested.
Introduction
The synthesis and study of properties of new biomaterials has been emphasized lately with the improvement of nanotechnology. In this context, the development of nanomaterials has been the focus of many areas of chemistry, physics, and materials science because of the promising characteristics that these materials exhibit [1].
Nanotechnology aims to manipulate particles by creating new structures with favorable properties in many areas, such as medicine and dentistry [2], and new treatment alternatives for oral pathologies are emerging. Metallic nanoparticles, in particular silver nanoparticles (AgNP), have been studied as alternative antimicrobial agents against a broad spectrum of species in the control of oral biofilms [3-5]. Although there are several studies where AgNP are used as antimicrobial agents, their mechanism of action is not completely understood. Kim et al. [6] and Besinis et al. [4] related their antimicrobial action to the toxicity resulting from the dissolution of free metal ions from the surface of the AgNP. In addition, AgNP would lead to oxidative stress through the generation of reactive oxygen species (ROS), interacting with cytoplasmic and nucleic acid components, inhibiting enzymes of the respiratory chain, and changing the permeability of the bacterial cytoplasmic membrane [7-11].
Among oral pathologies, dental caries is one of the most common diseases in humans, and it relates to the genetics, saliva, and diet of the host [9]. Streptococcus mutans is the main cariogenic microorganism owing to its ability to produce acids and glucans from sugar metabolism; the acids exceed the buffering capacity of saliva [9-11] and lead to localized and irreversible destruction of the tooth structure [9,12]. Moreover, recent evidence indicates the joint presence of C. albicans and S. mutans in oral biofilms, suggesting that the interaction between them can contribute to the development of caries [9,13,14]. C. albicans colonization depends on the presence of the bacteria, which, besides promoting adhesion sites, act as a carbon source for yeast growth; on the other hand, yeasts reduce the levels of oxygen for streptococci [9]. Studies have shown the resistance of many microorganisms to the antimicrobial agents currently used [15,16].
Studies since the 1930s [17] have reported the importance of using calcium phosphate derivatives to favour the remineralization process in dental caries. Calcium glycerophosphate (CaGP) is an organic phosphate salt whose anti-caries properties have been demonstrated in studies carried out in monkeys [18] and in rats [19]. Its action in dental biofilms may be related to the increase of calcium and phosphate levels [20], buffering capacity [18], and reduction of the mass of the biofilms [21]. Because it seems to interact with dental tissues [22], CaGP has been incorporated in dentifrices [23,24]. Do Amaral et al. [25] and Zaze et al. [26], when associating CaGP (0.25%) with low-fluoride toothpastes, found the same anti-caries efficacy in enamel as dentifrices supplemented with a higher concentration of fluoride, demonstrating that CaGP is a good option for oral products both to prevent caries and to avoid fluorosis in dental tissues.
The use of a biomaterial containing both an antimicrobial agent and a compound acting as a source of calcium phosphate for dental remineralization would have a great impact on the prevention and control of dental caries. Therefore, this study aimed to produce nanocompounds containing calcium glycerophosphate (CaGP) and silver nanoparticles (AgNP), varying the reducing agent of silver nitrate (sodium borohydride or sodium citrate), the concentration of silver (1% or 10%), and the CaGP form (nano or microparticulated), and to analyze their characterization and antimicrobial activity against ATCC strains of Candida albicans and Streptococcus mutans.
Synthesis and Characterization of Ag-CaGP Nanocomposites
UV-Vis absorption spectroscopy showed that the Ag-CaGP nanocomposites presented silver in nanosized dimensions in all of the synthesized nanocomposites, regardless of the reducing agent used. This was demonstrated by the presence of an intense absorption peak, known as the plasmonic band, between 420 and 450 nm (Figure 1a, Figure S1), which is characteristic of noble metal nanoparticles, whose strong absorption band is observed in the visible region [27]. The CaGP did not exhibit an absorption peak in the visible region of the electromagnetic spectrum.
X-ray diffraction (XRD) patterns indicated that all of the Ag-CaGP nanocomposites were composed of AgNP and CaGP, confirming the presence of silver in the Ag-CaGP nanocomposites through comparison with the patterns of the silver nanoparticles and of CaGP. The typical powder XRD pattern of the prepared CaGP showed diffraction peaks at 2θ = 6.30°, 12.3°, 26.4°, 41.1°, and 44.2° (Figures 1b, S2), matching the corresponding crystallographic form (PDF No. 1-17) [28]. The typical powder XRD pattern of the silver nanoparticles showed the characteristic Ag diffraction peaks (Figure 1b). Nanostructured materials that exhibit a pattern of small nanoparticles scattered on a larger surface, similar to glass bead embellishments on a Christmas tree, are generally classified as decorated materials. The scanning electron microscopy (SEM) images of Figure 2 show this typical pattern, with spherical silver nanoparticles (indicated by arrows) decorating the surface of the CaGP microparticles in all synthesized nanocomposites containing 10% Ag (B4; B8; C4). In addition, transmission electron microscopy (TEM) was performed for the nanocomposite B4 (Figure S3). The energy-dispersive X-ray spectroscopy (EDS) clearly showed the outline of the Ag-CaGP nanocomposites in all micrographs.
Minimum Inhibitory Concentration
The results showed that the MIC values were related to the synthesis process and the Ag concentration used (Table 1). Nanocomposites obtained using Na3C6H5O7 as reducing agent showed the most effective antimicrobial activity against C. albicans and S. mutans. Among these composites, the lowest MIC values were observed for those containing 10% Ag (C3 and C4), being between 19.05 and 39.05 µg/mL for C. albicans and between 156.2 and 625 µg/mL for S. mutans. The nanocomposites synthesized using NaBH4 as reducing agent and isopropanol as solvent showed a fungicidal effect varying between 100 and 1600 µg/mL, whilst no effect against S. mutans was observed; the nanocomposites synthesized using the same reducing agent and deionized water as solvent showed no effect against either microorganism. In addition to the MICs found for the synthesized compounds, the microdilution assay was carried out to find the MIC values for solutions containing only AgNP or CaGP diluted in deionized water, as well as for the other compounds used in the synthesis reaction as reducing and surfactant agents. These data are shown in Table 2. Table 1. Minimum inhibitory concentration (MIC) values of the nanocompounds, expressed as µg of Ag-CaGP mL−1 and as µg of Ag mL−1 in each one, synthesized using sodium borohydride (Group B) and sodium citrate (Group C), together with the silver ion concentration (µg Ag+/mL) in all nanocompounds tested.
Determination of Ag + Concentration
The Ag+ concentration of all the nanocomposites containing Ag (AgNP and Ag-CaGP) is shown in Table 1. For the samples obtained through the NaBH4 route (B1-B8), a reduction of ionic silver higher than 98% was observed, considering that the total amount of ionic silver added to the reaction was 500 µg Ag+/mL for B1, B2, B5, and B6, and 5000 µg Ag+/mL for B3, B4, B7, and B8. For the compounds synthesized using Na3C6H5O7 as reducing agent, the remaining ionic silver was higher, reaching about 10% in the samples produced using 5000 µg Ag+/mL in the reaction process (C3 and C4). C1 and C2 presented 61.1% and 33.3% of ionic silver, respectively, relative to the total of 500 µg Ag+/mL added in the reaction. For AgNP with no CaGP added to the reaction (Table 2), the Ag+ concentration was 107.25 µg/mL for the Na3C6H5O7 route (nanoAg(Na3C6H5O7)) and 576.19 µg/mL for the NaBH4 route (nanoAg(NaBH4)).
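As a compact illustration of the bookkeeping behind these percentages, the sketch below (Python; the helper name and the back-calculated concentrations are ours, not from the original work) recomputes the ionic fractions from the stated loadings.

```python
# Minimal sketch of the ionic-silver bookkeeping used in the text.
# Initial loadings are those stated for each batch; measured values for
# C1 and C2 are back-calculated from the reported percentages.

def ionic_fraction(measured_ug_per_ml: float, initial_ug_per_ml: float) -> float:
    """Percentage of the silver loading that remains ionic (unreduced)."""
    return 100.0 * measured_ug_per_ml / initial_ug_per_ml

# C1 and C2 were loaded with 500 ug Ag+/mL; the text reports 61.1% and
# 33.3% remaining, i.e. measured concentrations of roughly:
for name, pct, initial in [("C1", 61.1, 500.0), ("C2", 33.3, 500.0)]:
    measured = pct / 100.0 * initial
    print(f"{name}: ~{measured:.1f} ug Ag+/mL -> "
          f"{ionic_fraction(measured, initial):.1f}% ionic")

# B-group batches (500 or 5000 ug Ag+/mL loadings) show >98% reduction,
# i.e. at most 2% of the loading remains ionic:
print("B-group upper bound:", ionic_fraction(0.02 * 5000.0, 5000.0), "% ionic")
```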
Discussion
In the present study, both of the proposed synthesis methods, using sodium citrate or sodium borohydride as reducing agents, led to the anchorage of the silver nanoparticles onto calcium glycerophosphate (Figure 2). In general, the nanocomposites (Ag-CaGP) were effective against reference strains of Candida albicans and Streptococcus mutans. Notably, CaGP substantially increased the antimicrobial effectiveness of silver in the Ag-CaGP, reducing the minimum inhibitory concentration to as little as a quarter of that of the respective AgNP controls (Tables 1 and 2).
Although the CaGP had been nanoparticulated before the Ag-CaGP synthesis, in our study it was no longer in nanoparticulated form when associated with silver. This may have happened due to the poor solubility of calcium at pH = 7 [29], even when using the same dispersant (NH-PM) recommended by Miranda et al., who synthesized AgNP with hydroxyapatite. A pastier bulk was particularly noted in the micrographs of Ag-CaGP when water was used as solvent instead of isopropanol (Figure 2c-f), regardless of the reducing agent used in the reaction. Although there was no difference between micro- and nanoparticulated CaGP in the SEM images, its form influenced the amount of silver ions in the compounds (Table 1). In addition, our results showed antimicrobial effectiveness against C. albicans and S. mutans for the samples of group C, which could be explained by the higher amount of silver ions present in those compounds [4,30-35].
This expressive difference in the quantity of silver ions between groups B and C may be related to the characteristics of the reducing agents used, sodium borohydride being considered a stronger reducing agent than sodium citrate [33]. Although silver ions are effective in killing several pathogenic microorganisms, they are easily dispersed, which quickly decreases their local concentration to levels of low effectivity. Moreover, ambient light reduces ionic silver, forming typical black spots on skin or on any contact surface [36]. This process causes aesthetic problems and has the potential to injure healthy living tissues. Silver nanoparticles, unlike ionic silver, induce the production of reactive oxygen species (ROS), which is their primary antimicrobial mechanism [37]. However, AgNP tend to form aggregates in the absence of any support, reducing their efficacy. Therefore, substrates decorated with immobilized AgNP exhibit enhanced antimicrobial activity for longer periods, reducing the undesirable secondary effects associated with free ionic silver [38]. Although it is difficult to separate the impact of free ionic silver from the antimicrobial action of AgNP, the differences observed in the minimum inhibitory concentrations (MIC) for C. albicans and S. mutans, shown in Table 1, suggest the influence of their respective metabolisms on the efficacy of silver against each microorganism.
Furthermore, other factors may influence the antimicrobial potential of AgNP [34]. For instance, how a silver-containing compound interacts with microorganisms depends on the characteristics of the AgNP formed, as well as on the chemical and physical changes that may occur when they are added to the medium of interest [33]. In general, for the synthesis of AgNP, AgNO3 is used as the source of silver, water or ethanol as solvent, and sodium borohydride or sodium citrate as reducing agent [39]. Fabricated under such conditions, the AgNP have a negative surface charge [33,40], a fact that helps to explain the lower effectiveness of the compounds against S. mutans. Bacteria have a negatively charged outer membrane [41], so the electrostatic attraction may have been hampered, diminishing the action on S. mutans of the AgNP, whether or not associated with CaGP. On the other hand, fungi present a neutral surface charge [41], which might enhance the attraction of AgNP; moreover, the presence of phospholipid components containing phosphate groups may have improved the antimicrobial activity of silver by providing target sites [42,43]. Indeed, the AgNP control reduced by sodium citrate contained a lower amount of ions (107.2 µg Ag+/mL) than the control produced using sodium borohydride (576.2 µg Ag+/mL), yet it was more effective against C. albicans, suggesting an antifungal potential of the AgNP themselves. These particles may have disrupted the C. albicans cell membrane by damaging the inner layers of the cell wall, increasing its permeabilization and thus allowing their passage into the cell.
By contrast, against S. mutans planktonic cells, Ag+ may have played a preponderant role, particularly in view of the MIC value found for AgNO3 (21.2 µg/mL) compared with those for AgNP, regardless of the reducing agent used in the reaction (250 and 125 µg/mL for AgNP(Na3C6H5O7) and AgNP(NaBH4), respectively). Noteworthy was the effect produced against S. mutans when CaGP was associated with AgNP (Table 1). The increment in silver activity afforded by CaGP could be related to the acidogenic and aciduric character of S. mutans: CaGP probably acted as a buffer and hence might have hindered the proliferation of the cells in the medium [44-46]. Thus, the buffering activity of CaGP together with the higher amount of Ag+ ions could account for the better effectiveness of the group C samples against the gram-positive bacterium tested.
Synthesis of Silver-Calcium Glycerophosphate (Ag/CaGP) Nanocomposites
Ag/CaGP nanocomposites were synthesized at the Interdisciplinary Laboratory of Electrochemistry and Ceramics of the Chemistry Department of the Federal University of São Carlos. Initially, the commercial form of calcium glycerophosphate (80% β-isomer and 20% rac-α-isomer, CAS 58409-70-4, Sigma-Aldrich Chemical Co., St. Louis, MO, USA) was nanoparticulated using a ball mill for 24 h at 120 rpm, yielding nanoparticles of approximately 10 nm. Then, two chemical methods were employed for the synthesis. The first used sodium borohydride as reducing agent (NaBH4, Sigma-Aldrich Chemical Co., St. Louis, MO, USA) and was based on the methodology proposed by Miranda et al. [29]. The synthesis was carried out in an alcoholic medium (isopropanol) or in deionized water. For this, suspensions containing 5 g of CaGP and silver nitrate (AgNO3, Merck KGaA, Darmstadt, Hessen, Germany) at 0.85 or 0.085 g were prepared in the presence of 0.5 mL of a surfactant (ammonium salt of polymethacrylic acid (NH-PM), Polysciences Inc., Warrington, PA, USA) (Table 1). Then, NaBH4 (0.015 g) was added to each suspension, causing the reduction of Ag+ to metallic silver nanoparticles in the presence of CaGP. The molar stoichiometric ratio between Ag+ and NaBH4 was 1:1.26. The second method was based on those proposed by Turkevich et al. [47] and Gorup et al. [48]. The reducing agent of AgNO3 was sodium citrate (Na3C6H5O7, Merck KGaA), at a stoichiometric ratio of 1:3. Thus, in a flask containing 100 mL of deionized H2O, 5 g of CaGP was added, followed by 0.5 mL of NH-PM and 1.4 g of Na3C6H5O7. This mixture was kept under magnetic stirring and heating. After reaching 95 °C, AgNO3 was added and the suspension was kept under stirring for 30 min until the color change occurred, qualitatively indicating the formation of AgNP. Controls containing only the reducing agents and surfactant, as well as AgNP produced by both reducing agents, were also prepared.
Characterization of Ag-CaGP Nanocomposites
In order to demonstrate the presence of AgNP and CaGP in the compounds, UV-Vis absorption spectroscopy was employed. The measurement is based on the plasmon resonance band observed in metallic nanoparticles. UV-Vis spectra of the Ag-CaGP nanocomposites were obtained from aqueous solutions placed in a commercial quartz cuvette with a 1 cm optical path, using a spectrophotometer (Shimadzu MultiSpec-1501; Shimadzu Corporation, Tokyo, Japan) from 300 to 800 nm, with water as blank. After a drying step, the resulting Ag-CaGP powder was subjected to X-ray diffraction (XRD) phase characterization using Cu Kα radiation (λ = 1.5406 Å), generated at a voltage of 30 kV and a current of 30 mA, with a continuous sweep in the range 5° < 2θ < 80° at a scan rate of 2°/min (Rigaku DMax-2000PC diffractometer, Rigaku Corporation, Tokyo, Japan). The particle morphology was also characterized by scanning electron microscopy (SEM) on a Zeiss Supra 35VP microscope (Leo S-360, Cambridge, MA, USA), with a field emission gun (FEG-SEM) operating at 10 kV. A drop of each sample was deposited with a micropipette on a silicon (111) metal plate and dried at 40 °C for 2 h. With this technique, we could identify in the synthesized biomaterials the presence of silver, oxygen, silicon, phosphate, and calcium, which were artificially colored (Figures 3 and 4).
Minimum Inhibitory Concentration (MIC)
The MIC values for each sample were determined through the microdilution method and followed the Clinical Laboratory Standards Institute guidelines (CLSI, documents M27-A2 and M07-A9). Candida albicans (ATCC 10231) was cultivated on Sabouraud Dextrose Agar (SDA, Difco, Le Pont de Claix, France) and S. mutans (ATCC 25175) on Brain Heart Infusion Agar (BHI, Difco, Le Pont de Claix, France). Inocula from 24 h cultures on the respective media were adjusted to a turbidity equivalent to a 0.5 McFarland standard in saline solution (0.85% NaCl). This suspension was diluted (1:5) in saline solution, and afterwards diluted (1:20) in RPMI 1640 or BHI. Initially, the Ag-CaGP nanocomposite was diluted in deionized water in a geometric progression, from 2 to 1024 times. Afterwards, each Ag-CaGP nanocomposite concentration obtained previously was diluted (1:5) in RPMI 1640 medium (Sigma-Aldrich) for C. albicans and in BHI for S. mutans. The final concentrations of Ag-CaGP nanocomposite in the dispersion ranged from 5 to 0.01 mg/mL. Each inoculum (100 µL) was added to the respective well of microtiter plates containing 100 µL of each specific concentration of the Ag-CaGP nanocomposite solution. The microtiter plates were incubated at 37 °C, and the MIC values were determined visually as the lowest concentration of Ag-CaGP with no microorganism growth after 48 h for C. albicans and 24 h for S. mutans. All of the assays were repeated in triplicate on three different occasions.
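The dilution arithmetic in this protocol can be checked with a short script. The sketch below assumes a 100 mg/mL stock (the text does not state the stock concentration; this value is inferred so that the computed series reproduces the reported 5 to 0.01 mg/mL range).

```python
# Sketch of the two-step dilution arithmetic behind the MIC assay.
# Assumption (not stated in the text): a 100 mg/mL stock, chosen so that the
# computed final concentrations reproduce the stated 5-0.01 mg/mL range.

STOCK_MG_PER_ML = 100.0
water_dilutions = [2 ** k for k in range(1, 11)]    # 2x, 4x, ..., 1024x

final = []
for d in water_dilutions:
    c = STOCK_MG_PER_ML / d   # geometric dilution in deionized water
    c /= 5.0                  # 1:5 dilution into RPMI 1640 or BHI
    c /= 2.0                  # 100 uL inoculum added to 100 uL of sample
    final.append(c)

print(f"{final[0]} ... {final[-1]:.4f} mg/mL")   # 5.0 ... 0.0098 mg/mL
# Intermediate members such as 0.625 and 0.15625 mg/mL correspond to the
# reported S. mutans MICs of 625 and 156.2 ug/mL.
```

Several reported MIC values thus fall directly on members of this dilution series, as expected for a visually read microdilution assay.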
Determination of Ag + Concentration
The Ag+ concentration in Ag-CaGP and AgNP, as obtained by both reducing agents, was determined with a specific electrode 9616 BNWP (Thermo Scientific, Beverly, MA, USA) coupled to an ion analyzer (Orion 720 A+, Thermo Scientific, Beverly, MA, USA). A 1000 µg/mL silver standard was prepared by placing 1.57 g of dried AgNO3 into a 1 L volumetric flask containing deionized water. This solution was stored in an opaque bottle in a dark location and diluted in deionized water at the moment of dosage in order to achieve the standard concentrations used. The combined electrode was calibrated with standards containing 6.25 to 100 µg Ag/mL under the same conditions as the samples. A silver ionic strength adjuster solution (ISA, Cat. No. 940011), which provides a constant background ionic strength, was used (1 mL of each sample/standard: 0.02 mL ISA).
Conclusions
In conclusion, the synthesis proposed in this study promoted the anchorage of AgNP onto CaGP, and the nanocomposites produced using sodium citrate as reducing agent were effective against both of the microorganisms tested. The highlight of our study was that the addition of CaGP to AgNP markedly reduced the MIC values compared with those of AgNP alone. These promising results strongly encourage further studies aimed at producing biomaterials with antimicrobial and remineralizing functions in the near future, particularly in the dental field. | 2018-07-12T06:15:08.544Z | 2018-06-27T00:00:00.000 | {
"year": 2018,
"sha1": "0388ca44b88bf713bee048c004d24a9efe906c6a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-6382/7/3/52/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0388ca44b88bf713bee048c004d24a9efe906c6a",
"s2fieldsofstudy": [
"Materials Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
252471813 | pes2o/s2orc | v3-fos-license | The Medical versus Zoological Concept of Outflow Tract Valves of the Vertebrate Heart
The anatomical elements that in humans prevent blood backflow from the aorta and pulmonary artery to the left and right ventricles are the aortic and pulmonary valves, respectively. Each valve regularly consists of three leaflets (cusps), each supported by its valvular sinus. From the medical viewpoint, each set of three leaflets and sinuses is regarded as a morpho-functional unit. This notion also applies to birds and non-human mammals. However, the structures that prevent the return of blood to the heart in other vertebrates are notably different. This has led to discrepancies between physicians and zoologists in defining what a cardiac outflow tract valve is. The aim here is to compare the gross anatomy of the outflow tract valvular system among several groups of vertebrates in order to understand the conceptual and nomenclature controversies in the field.
Introduction
The anatomical elements that in humans guard the unidirectional blood flow from the cardiac ventricles to the aortic and pulmonary arteries are the arterial (semilunar) valves. The valve that prevents blood backflow from the aorta to the left ventricle is the aortic valve, while that which performs this function between the pulmonary artery and the right ventricle is the pulmonary or pulmonic valve. The main medical interest in arterial valves is that their congenital malformations and diseases over a lifetime are clinically relevant [1][2][3]. Although both valves are subject to similar complications, those affecting the aortic valve cause the most severe effects [4][5][6][7].
The normal (tricuspid or trifoliate) condition of both the aortic [8][9][10][11][12] and pulmonary [9,13,14] valves is characterized by the presence of three leaflets (cusps) of similar size, each supported by its valvular sinus ( Figure 1A,B). The leaflets are the most mobile components of the valve, which open and close during the cardiac cycle. The sinuses are the portions of the arterial roots to the borders of which the leaflets attach following a parabolic line. The attachments of adjacent leaflets to the sinus walls join distally at three points named commissures. The attachments of the leaflets diverge from the commissures towards the ventricle. As a result of this divergence there is a triangular space between adjacent leaflets called the subvalvular fibrous interleaflet triangle [15]. Thus, in each valve there are three interleaflet triangles. The distal vertex of each triangle is the point at which two adjacent leaflets join to form a commissure. Regarding the whole aortic root, which is the conduit between the left ventricle and the ascending aorta that contains the aortic valve [16], there is still no definitive consensus on how to best define its anatomical components. The variety of responses given by several cardiothoracic surgeons to the questionnaire proposed by Sievers et al. [17] in an attempt to standardize the nomenclature of the aortic root already illustrates the significant discrepancies existing on this issue. Recently, a thorough study by Michelena et al. [18] has provided valuable information that favors the integration of criteria on the part of the various specialists in human hearts and, especially, on the bicuspid or bifoliate condition of the aortic valve. Possibly, not all the questions that remain up in the air have been resolved, but the progress that has been made is highly significant.
Beyond these controversies, the fact that we will emphasize here is that each of the complex structures located in the aortic root and at the base of the pulmonary artery preventing blood backflow to the ventricles is regarded by physicians as a valve, and not as a set of valves. This notion also applies to avian [19][20][21][22][23] and mammalian [24][25][26][27][28][29][30][31] species used as animal models in embryological, pathological, and genetic studies of the arterial valves ( Figure 1B).
In contrast, the structures that prevent the return of blood to the heart in other groups of vertebrates, referred generally to as outflow tract valves of the heart, diverge notably from those of mammals and birds. The differences concern the number, shape, size, and spatial arrangement of the valves. This leads to disagreements between physicians and zoologists in defining what a cardiac outflow tract valve is.
The purpose here is far from suggesting any change that might affect the medical notion of arterial valves. It is well known that the earliest descriptions and drawings of the anatomy of the human aortic valve stem from Leonardo da Vinci in the sixteenth century (see [32]). Thus, any attempt to modify concepts that have been consolidated over so many years, even though some of them are still controversial, would be inappropriate. The aim is to compare, at the gross anatomical level, the cardiac outflow tract valves of several zoological groups, in order to contribute to a better understanding and clarification of the conceptual and nomenclature discrepancies which persist between specialists, and which often cause confusion to scholars and students interested in the anatomy of the vertebrate heart.
The Cardiac Outflow Tract Valves of Chondrichthyans and Actinopterygians
The first relevant anatomical study of the cardiac outflow tract valves of chondrichthyans (cartilaginous fishes) and phylogenetically early actinopterygians (ray-finned fishes) was that of Stöhr [33]. In both groups, the outflow tract valves, located at the luminal side of the myocardial conus arteriosus, are usually termed conal or conus valves. Stöhr identified different valve morphologies, which he classified into four types, namely, pocket-like valves ('Taschenklappen'), small tongue-like valves ('Querleisten'), rudimentary valves ('Zwischenklappen'), and knot-like valves ('Knötchen'). It should be noted that at that time, the term valve referred exclusively to the membranous structure that temporarily closes a part of the conus, permitting movement of blood in one direction only. Therefore, a conal valve was conceptually equivalent to a leaflet or cusp of the arterial valves of birds and mammals. The portion of the conus wall supporting the leaflet was not regarded as a valve component.
As far as is known, the study of Sans-Coma et al. [34] carried out in a shark species, the lesser spotted dogfish, Scyliorhinus canicula (Linnaeus, 1758), was the first to show that anatomically and functionally, each pocket-like valve is made up of the leaflet and its supporting sinus. The leaflet is the most mobile component of the valve which opens and closes during the cardiac cycle. The sinus is the hollow portion of the conus wall, whose borders support the leaflet. Since then, this notion has been adopted by several authors when describing the anatomy of the cardiac outflow tract valves of chondrichthyans [35], early actinopterygians [36], and teleosts (modern actinopterygians) [36][37][38][39][40].
In chondrichthyans (reviewed in [34]) and early actinopterygians [41], the number and arrangement of the conal valves are highly variable; both traits may even diverge between members of the same species [42]. The valves are usually distributed in several transverse rows along the conus arteriosus, which connects distally with the bulbus arteriosus ( Figure 1C). The pocket-like valves are by far the most common type ( Figure 1C). They play the main role of preventing blood backflow from the aorta, similar to the function of the mammalian aortic valve. An additional function of the cardiac conal valves in fish is to collaborate in the reduction of aortic pressure to protect the delicate vasculature of the gills [43]. The other, smaller valves ( Figure 1C'), when present, are mere protuberances that have much less effect in avoiding flow from the conus arteriosus into the ventricle. Descriptive studies on fossilized hearts of the extinct teleost species Rhacolepis buccalis have shed light on the debate about the direction of evolution of the cardiac outflow tract in osteichthyans. The finding of several rows of valves in the conus arteriosus of the species R. buccalis, representative of a basal group of teleosts, suggests that over the course of evolution the number of valves in the conus arteriosus of teleosts has been reduced, eventually dwindling to one or two at present (Maldanis et al., 2016). Indeed, in the extant teleosts, the number of conal valves is smaller in consonance with the reduction in length of the conus arteriosus. A few species belonging to ancient groups possess two transverse rows of valves. Most teleosts, however, have a single row composed of two major pocket-like valves, which often resemble anatomically the outflow tract valves of birds and mammals ( Figure 1D). One or two minor pocket-like valves are also present in several species.
The Cardiac Outflow Tract Valves of Early Sarcopterygians
In describing the cardiac outflow tract valves of the phylogenetically early groups of sarcopterygians (lobe-finned fishes), such as the crossopterygians (coelacanths), dipnoans (lungfishes), and amphibians, the concept has usually been applied that a valve is a leaflet, or even any other anatomical element that might contribute to preventing the return of blood to the heart. As for the crossopterygians, exemplified by the coelacanth Latimeria chalumnae Smith 1939, it has been briefly reported that this species has 24 conal valves, without specifying their morphology [44].
In dipnoans (lungfishes), the concept of the cardiac outflow tract valve has been used ambiguously. The valves are located in the portion of the outflow tract that has myocardium in its walls. The valve shape is highly variable: there are pocket-like valves of different sizes, as well as transverse protrusions separated by furrows and small incisurae at the luminal side of the outflow tract, all of which have been described as valves [45-48].
The shape, size, and spatial distribution of the outflow tract valves are also notably variable in amphibians. In these animals, the valves are also placed in the myocardial portion of the outflow tract, the anatomy of which markedly differs between lung breathing and lungless species. In those having lungs, the outflow tract is partially divided into two channels, the cavum pulmo-cutaneum and the cavum aorticum, by a spiral fold that has been often termed 'spiral valve'. The spiral fold is vestigial or absent in most lungless amphibians [49]. The other anatomical elements which have been described as valves differ in shape and size, and are usually located at the proximal and distal ends of the myocardial portion of the outflow tract [49][50][51][52]. Most often, they are pocket-like valves ( Figure 1E) but other, simpler structures have also been included in the category of valves. In no case, however, has the portion of the outflow tract wall which supports the leaflet been regarded as a valve component.
The Cardiac Outflow Tract Valves of Reptiles
The sauropsids included in the classic group of reptiles have a cardiac ventricle divided partially by one or two septa, giving rise to two or three ventricular cavities, respectively. The only ventricle with a complete septum is that of crocodilians. In all cases, blood flows from the heart to the lungs through a single pulmonary trunk that divides into right and left pulmonary arteries, and to the body through two aortic or systemic arteries, right and left. The outflow tract valves are located at the base of the pulmonary trunk and at the anatomical origin of each aorta. These cardiac valves have received little attention and what can be gained from the literature is that they are usually semilunar or pocket-like in shape [53][54][55][56]. At the base of each artery there are generally two valves. Those of the pulmonary trunk have barely been studied. The valves of the aortic trunks, especially those of snakes, have been described in more detail [56]. Interestingly, these authors used the term aortic valve to refer to the set of two valves existing at the base of each aorta, thus opting for the nomenclature applied to birds and mammals.
The Cardiac Outflow Tract Valves of Birds and Mammals
In birds, namely the remaining group of sauropsids, the basal portions of the aortic and pulmonary arteries are usually composed of three valves. These two sets of valves have been termed aortic and pulmonary valves, respectively, probably because of the influence of medical viewpoints [23,57,58].
From the zoological viewpoint, mammals ( Figure 1A,B), and also birds, regularly have three valves at the base of the aortic and pulmonary trunks. Each of these six valves is, in fact, an anatomical unit composed of a leaflet and its sinus. However, the resulting trivalvular aortic and pulmonary complexes, established evolutionarily in concomitance with the regression of the conus arteriosus, have been successful in performing the enormous work to which they are subject over a lifetime. In this regard, it should be noted that in humans, the presence of three leaflets of similar size constitutes the most efficient geometrical condition to prevent blood backflow to the ventricles [59][60][61]. Valve complexes built up by leaflets of dissimilar sizes or by a different number of leaflets are less efficient; they are considered cardiac anomalies or malformations by physicians, because they very often entail the risk of clinically relevant complications. For example, the presence of only two leaflets in the aortic root, which is normal in the aortic trunks of reptiles, is regarded as an anomaly from the medical perspective [62,63]. In fact, this condition, termed bicuspid or bifoliate aortic valve, is the most frequent congenital cardiac defect in humans, with an estimated prevalence of between 0.5% and 2% [64][65][66]. Considering the serious complications occurring in at least one third of cases, the bicuspid aortic valve may be responsible for more deaths and morbidity than the combined effect of all other congenital heart defects [65]. Interestingly, this example illustrates that a valvular condition that is normal and does not lead to complications in certain vertebrates, such as reptiles, may be the cause of disease in their descendants, namely in mammals.
Concluding Remarks
In extant jawed vertebrates, such as chondrichthyans, ancient actinopterygians, and early sarcopterygians, the primitive valvular system of the cardiac outflow tract is characterized by the presence of multiple valves of different shapes and sizes. The system evolved to a more simplified and regular design in those groups where the conus arteriosus became remarkably reduced in size. This is the case of the extant teleosts and amniotes. From the zoological viewpoint, the presence of three valves at the base of each great artery of the heart is the most common condition in birds and mammals.
From the medical perspective, each set of three valves constitutes a morpho-functional unit. This notion has been used successfully for centuries to improve the treatment of patients with valve diseases, and requires, in our opinion, no conceptual change. Nonetheless, it is desirable that all heart anatomists be aware of the anatomical variability of the cardiac outflow tract valves. This should help to avoid further misunderstandings between physicians and zoologists when using the term cardiac outflow tract valve.
Author Contributions: Each author has contributed their own experience and data on different groups of vertebrates. V.S.-C. and B.F. drafted the manuscript, with critical revising by all other authors. All authors have read and agreed to the published version of the manuscript. | 2022-09-24T15:13:41.913Z | 2022-09-22T00:00:00.000 | {
"year": 2022,
"sha1": "9381ec072a7f98f666a8f8e50d904b5e513f6e70",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2308-3425/9/10/318/pdf?version=1663831434",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7bbe0c089b5ad4a0617ea1e5f495bd0e33f6124c",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
121137354 | pes2o/s2orc | v3-fos-license | Physics of non-Abelian vortices in Bose-Einstein condensates
The wide order-parameter manifolds of Bose-Einstein condensates (BECs) with spin degrees of freedom enable various kinds of topological defects which never appear in conventional scalar BECs. In this paper, we focus on a characteristic example: non-Abelian vortices and their dynamics in the cyclic phase of a spin-2 BEC. Non-Abelian vortices show a unique effect in their collision dynamics: unlike Abelian vortices, they neither reconnect themselves nor pass through each other, but create a rung vortex between them.
Introduction
Quantized vortices have been one of the main topics in superfluid systems, with a long research history since their theoretical prediction [1] and experimental observation [2]. Quantized vortices have quantized circulations of the superfluid velocity; a vortex in a scalar Bose-Einstein condensate, for example, has a circulation equal to an integer multiple of h/M. Here h is the Planck constant and M is the mass of the atoms. The quantized circulation has been observed in superfluid ⁴He [2], superfluid ³He [3], and atomic BECs [4,5].
Quantized vortices are topological defects of the order parameter of a superfluid, and can be classified by topological invariants defined by the topological structure of the order parameter. Topological invariants of quantized vortices (line defects) are defined by how the order parameter changes along a closed path encircling the vortex. In the case of scalar BECs, for example, the order-parameter manifold is U(1), and the U(1) gauge changes by an integer multiple of 2π along the closed path. As a result, topological invariants of vortices in scalar BECs are classified by the additive group of integers, which corresponds to the circulations of vortices being integer multiples of h/M.
For the case of multi-component BECs, a discrete symmetry enables vortices to have fractional circulations. Characteristic examples are half-quantized vortices in superfluid ³He-A [6], in vortex cores in superfluid ³He-B, and in the polar phase of a spin-1 spinor BEC. As another kind of vortex with interesting topological invariants, we discuss non-Abelian vortices here. For BECs with spin degrees of freedom, besides the U(1) gauge, the direction of the SO(3) spin rotates along the closed path encircling the vortices. Because SO(3) is a non-Abelian group, topological invariants can be classified by discrete non-Abelian subgroups of SO(3), defining non-Abelian vortices.
The non-Abelian property of non-Abelian vortices becomes remarkable in their collision dynamics. When two U(1) vortices collide, they reconnect themselves. This reconnection of vortices has been studied theoretically [10,11,12], and observed in superfluid ⁴He [13]. As other atypical examples, the collision of two U(1) vortices in an attractive BEC [14,15], or of two U(1) × U(1) vortices in an attractive two-component BEC [16], results in the formation of a new vortex bridging the two colliding vortices, like the rung of a ladder. We define this vortex as a rung vortex.
When the vortices are non-Abelian, the situation changes. It was predicted that the collision of two non-Abelian vortices produces a rung vortex [17,18,19]. In this paper, furthermore, we show that for two vortices with non-commutative topological invariants, both reconnection and passing through are topologically forbidden, and only the formation of a rung vortex is allowed. As another dynamical process demonstrating the genuine non-Abelian character, we show that two twisted vortices with commutative topological invariants can unravel, whereas those with non-commutative ones cannot.
Non-Abelian vortices and their collision dynamics can be realized in the cyclic phase of the spin-2 BEC which has the non-Abelian tetrahedral symmetry.
In this paper, we briefly overview our study of non-Abelian vortices. In Sec. 2, we define non-Abelian vortices based on homotopy theory and discuss their collision dynamics algebraically. In Sec. 3, we review the spin-2 BECs and non-Abelian vortices, and show our recent results for several collision dynamics of non-Abelian vortices.
Non-Abelian vortices and their collision dynamics
In this section, we define topological invariants of quantized vortices based on homotopy theory and introduce non-Abelian vortices. After that, we discuss the collision dynamics of vortices.
Order-parameter manifold
Quantized vortices appear in symmetry-broken systems such as the superfluid phases of liquid helium and ultracold atomic BECs. Let the Lie group G be the set of transformations which do not change the free energy of the system, and the subgroup H be the set of transformations which do not change the order parameter ψ of the symmetry-broken system:

H = { h ∈ G : hψ = ψ }.   (1)

In the cases of superfluid ⁴He, s-wave superconductors, and scalar BECs, the order parameter can be described by ψ = |ψ|e^{iφ}, where φ is the phase of the order parameter. Because the free energy remains invariant under the U(1) gauge transformation φ → φ + δφ, we have G ≃ U(1), and only the identity transformation leaves ψ unchanged: H ≃ {1}. The degrees of freedom of the order parameter can be described as elements of G, whereas several elements in G are equivalent due to Eq. (1). The exact degrees of freedom of the order parameter are then given by the coset space G/H, which is called the order-parameter manifold.
Topological invariants of quantized vortices
Let us consider the order parameter of the system ψ(x) as a continuous function of the coordinate x in real space. A closed loop with a fixed base point in real space can be mapped into a loop in the order-parameter manifold (Fig. 1). If two loops in the order-parameter manifold can be transformed into each other through a continuous deformation, those loops are homotopic. If a loop in real space does not encircle a vortex, the corresponding loop in the order-parameter manifold is homotopic to the trivial loop: the base point. In contrast, if a loop in real space encircles a vortex, the mapped loop in the order-parameter manifold is not homotopic to the base point. The topological invariant of a vortex can be defined by the set of homotopic loops in the order-parameter manifold. When the order parameter can be described as a complex scalar function ψ(x) = |ψ(x)|e^{iφ(x)}, for example, the order-parameter manifold G/H ≃ U(1) is isomorphic to an S¹ circle. Because loops encircling the S¹ circle n times for nonzero integer n are not homotopic to a point, there exist vortices for such loops. Therefore, topological invariants of vortices can be classified by the integer n. When a loop encircles two vortices in real space, whose topological invariants are n₁ and n₂ respectively, the mapped loop in the order-parameter manifold encircles the S¹ circle n₁ + n₂ times, which means that two vortices with topological invariants n₁ and n₂ can combine into one vortex with the topological invariant n₁ + n₂.
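The winding number described here is easy to evaluate numerically. The following minimal sketch (Python with NumPy; all names are ours) sums the phase increments of ψ along a discretized loop and recovers the integer invariant, including the additivity n₁ + n₂ for a loop enclosing two vortices.

```python
import numpy as np

# Numerical winding number of a U(1) order parameter around a closed loop:
# count how many times the phase wraps by 2*pi along the loop.

def winding_number(phase_on_loop: np.ndarray) -> int:
    """phase_on_loop: phases sampled in order along a closed loop (radians)."""
    d = np.diff(np.concatenate([phase_on_loop, phase_on_loop[:1]]))
    d = (d + np.pi) % (2 * np.pi) - np.pi       # map increments to (-pi, pi]
    return int(round(d.sum() / (2 * np.pi)))

# Example: a vortex psi = |psi| e^{i n chi} centered at the origin.
t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
loop_x, loop_y = np.cos(t), np.sin(t)
for n in (1, 2, -1):
    phase = np.angle((loop_x + 1j * loop_y) ** n)
    print(n, winding_number(phase))             # recovers n in each case

# Two unit vortices enclosed by the same loop add their invariants:
z = 3.0 * loop_x + 3.0j * loop_y                # larger loop enclosing both
phase = np.angle((z - 0.5) * (z + 0.5))
print(winding_number(phase))                    # -> 2 = 1 + 1
```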
The topological invariant of a vortex can thus be defined by the map from the loop in real space to that in the order-parameter manifold, namely by the fundamental group π₁[G/H]. For the case G/H ≃ U(1), the above discussion is summarized as

π₁[G/H] ≃ π₁[U(1)] ≃ ℤ.   (2)

When G is connected (π₀[G] ≃ 1) and simply connected (π₁[G] ≃ 1), then

π₁[G/H] ≃ π₀[H].   (3)

Here, the 0th homotopy set π₀[H] is the set of connected components of H. When H is a discrete group, it satisfies π₀[H] ≃ H.
Collision of non-Abelian vortices
Here, we consider the algebraic and geometric structures of collisions of vortices. Let us consider two colliding vortices and four paths a, b, c, and d, as shown in Fig. 3 (a), and assume that paths a and b define the topological invariants of the vortices as A and B, respectively, with a fixed base point.

Passing through, as in Fig. 3 (d), is also energetically unfavorable, because a doubly quantized vortex is formed at just the moment of passing through. Therefore, reconnection is the most favorable dynamics in this case, except for specific situations such as the collision of attractive vortices as in type-I superconductors, where whether a rung vortex is formed or not depends strongly on the kinematic parameters of the collision [14,15].
2.4.3. Non-commutative topological invariants: AB ≠ BA

When A and B are non-commutative, as for non-Abelian vortices, the transition from Fig. 3 (a) to 3 (e) is topologically forbidden because the two configurations are topologically distinct. Therefore, a rung vortex with the topological invariant AB⁻¹ or BA⁻¹ must be formed after the collision, regardless of the kinematic parameters such as the collision angle and the initial relative speed.
Collision of two twisted vortices
Collision of two twisted vortices, as shown in Fig. 3 (g), reveals the genuine non-Abelian character. Figure 3 (h) shows the formation of the rung vortex BAB⁻¹A⁻¹, which always vanishes for Abelian vortices. Therefore, twisted vortices with commutative topological invariants can unravel, whereas twisted non-Abelian vortices with non-commutative ones cannot.
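The algebra behind these statements can be checked directly in the binary tetrahedral group T* relevant to the cyclic phase (Sec. 3), represented by unit quaternions. In the sketch below (Python; the particular choice of A and B, two 2π/3 spin rotations about different tetrahedral axes, is ours for illustration, and only the SU(2) part of each invariant is tracked, since the commuting U(1) phase cannot obstruct anything), AB ≠ BA and the commutator BAB⁻¹A⁻¹ is a nontrivial group element, so the corresponding twisted pair cannot unravel.

```python
import itertools
import numpy as np

# The 24 elements of the binary tetrahedral group T*, realized as unit
# quaternions (w, x, y, z); qmul implements standard quaternion product.

def qmul(a, b):
    w1, x1, y1, z1 = a; w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qinv(a):                      # inverse of a unit quaternion = conjugate
    w, x, y, z = a
    return (w, -x, -y, -z)

units = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
T_star = [tuple(s * c for c in u) for u in units for s in (+1, -1)]
T_star += [tuple(h / 2 for h in signs)
           for signs in itertools.product((+1, -1), repeat=4)]
assert len(T_star) == 24

# A: spin rotation by 2*pi/3 about (1,1,1)/sqrt(3);
# B: the analogous rotation about (1,-1,-1)/sqrt(3). Both lie in T*.
A = (0.5, 0.5, 0.5, 0.5)
B = (0.5, 0.5, -0.5, -0.5)
print("AB == BA ?", np.allclose(qmul(A, B), qmul(B, A)))    # -> False

# The rung created when the twisted pair tries to unravel:
rung = qmul(qmul(B, A), qmul(qinv(B), qinv(A)))             # B A B^-1 A^-1
print("rung BAB^-1A^-1 =", rung)    # nontrivial: the pair cannot unravel
```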
Non-Abelian vortices in spin-2 Bose-Einstein condensates
Spinor BECs with spin degrees of freedom admit various kinds of topological excitations, not only quantized vortices but also monopoles and skyrmions, reflecting the rich variety of their order-parameter manifolds [20,21,22].
In this section, we show that non-Abelian vortices can exist in the cyclic phase of the spin-2 BEC. To do this, we first review the theory of spin-2 BECs [23-31,34,35] and the cyclic phase, which is one of the possible ground states.
The system of spin-2 bosons is described by the second-quantized Hamiltonian

Ĥ = ∫dx ψ̂†_m(x) [ −(ℏ²/2M)∇² + V_ext(x) ] ψ̂_m(x) + (1/2) ∫dx ∫dx′ ψ̂†_m(x) ψ̂†_{m′}(x′) V_int^{mm′,nn′}(x − x′) ψ̂_{n′}(x′) ψ̂_n(x),   (4)

where M is the particle's mass, V_ext is the external potential, V_int is the particle interaction, and repeated indices are summed over 2, 1, ..., −2. As an actual system, we consider spinor BECs of dilute cold atomic gases. In this system, the main inter-particle interaction is two-body s-wave scattering. When the dipole-dipole interaction is neglected, we obtain

V_int^{mm′,nn′}(x − x′) = δ(x − x′) Σ_{S=0,2,4} (4πℏ²a_S/M) Σ_{M_S} C^{S,M_S}_{m,m′} C^{S,M_S}_{n,n′}.   (5)

Here, a_S is the two-body s-wave scattering length for the total spin S channel and C^{S,M_S}_{m₁,m₂} is the Clebsch-Gordan coefficient. From Eqs. (4) and (5), we obtain

Ĥ = ∫dx { ψ̂†_m [ −(ℏ²/2M)∇² + V_ext ] ψ̂_m + (c₀/2) N[n̂²] + (c₁/2) N[F̂²] + (c₂/2) N[Â†₀₀Â₀₀] },

where n̂ = ψ̂†_m ψ̂_m, F̂ = ψ̂†_m f_{m,n} ψ̂_n, and Â₀₀ = Σ_{m,n} C^{0,0}_{m,n} ψ̂_m ψ̂_n are the density, spin-density, and pair-singlet operators, and N[···] is the operator for normal ordering. f_{m,n} = (f^x_{m,n}, f^y_{m,n}, f^z_{m,n}) are the 5 × 5 spin matrices for spin 2. The coefficients c₀, c₁, and c₂ satisfy

c₀ = (4πℏ²/M)(4a₂ + 3a₄)/7,  c₁ = (4πℏ²/M)(a₄ − a₂)/7,  c₂ = (4πℏ²/M)(7a₀ − 10a₂ + 3a₄)/7.

Here, we apply the mean-field approximation. Assuming that all particles occupy the single-particle ground state Ψ_m = ⟨ψ̂_m⟩, we obtain [27]

E[Ψ] = ∫dx { Ψ*_m [ −(ℏ²/2M)∇² + V_ext ] Ψ_m + (c₀/2) n² + (c₁/2) F² + (c₂/2) |A₀₀|² },

where n = Ψ*_m Ψ_m, F = Ψ*_m f_{m,n} Ψ_n, and A₀₀ = Σ_{m,n} C^{0,0}_{m,n} Ψ_m Ψ_n are the density, spin density, and singlet-pair amplitude. Although, for actual cold atomic BECs, we would have to include in V_ext(x) the optical trapping potential and the linear and quadratic Zeeman effects due to the external magnetic field, we neglect these effects for simplicity and obtain the Hamiltonian density

E/V = (c₀/2) n² + (c₁/2) F² + (c₂/2) |A₀₀|²,

where V is the volume of the system. Because the c₀, c₁, and c₂ terms have U(5), SO(3), and SO(5) symmetries, respectively [35], in addition to the U(1) gauge symmetry, G becomes G = U(1)_g × SO(3)_s.
Order-parameter manifold
Here, g and s denote the U(1) gauge and spin parts, respectively. To see the symmetry of the order parameter in detail, we define the density-normalized spinor order parameter ζ = (ζ₂, ζ₁, ζ₀, ζ₋₁, ζ₋₂)ᵀ = Ψ/√n and the associated nematic tensor Q(ζ) [23,29]. Expanding ζ in the rank-2 spherical harmonics Y_{2,m}(e) (e is the unit vector in spin space), we obtain

ζ_Σ(e) = Σ_m ζ_m Y_{2,m}(e) ∝ e_a Q_{ab}(ζ) e_b,

where Q(ζ) is a traceless symmetric tensor. The spin rotation of the spinor ζ and the SO(3) rotation of Q(ζ) have a one-to-one correspondence:

Q(e^{−i(f·e)θ} ζ) = R(e, θ) Q(ζ) R(e, θ)ᵀ.

Here, R(e, θ) is the three-dimensional rotation matrix with rotation axis e and rotation angle θ. Including the U(1) gauge transformation, we can show that the Hamiltonian is invariant when an arbitrary Q(ζ) is transformed to

Q̂ = e^{iφ} R(e, θ) Q(ζ) R(e, θ)ᵀ.
Ground state
The ground state can be obtained by finding the ζ which minimizes the Hamiltonian [23,24,27,34]. Depending on c₁ and c₂, there are five stationary phases: (i) the ferromagnetic-1 phase, (ii) the uniaxial nematic phase, (iii) the biaxial nematic phase, (iv) the cyclic phase, and (v) the ferromagnetic-2 phase (this phase cannot exist as the ground state). In the cyclic phase, there are eleven discrete transformations T̂[φ, e, θ], each combining a U(1) gauge transformation by φ with a spin rotation about the axis e by the angle θ, that leave the order parameter invariant. Together with the identity transformation, these form the tetrahedral group H_C ≃ T_{g+s} [19,26,30], and the order-parameter manifold becomes

G/H_C ≃ [U(1)_g × SO(3)_s]/T_{g+s}.
Each phase can also be characterized by the spin density F², the singlet-pair amplitude |A₂₀|², and the singlet-trio amplitude |A₃₀|²; these values for each phase are summarized in Table 1 [32,34]. The symmetry of each phase can be visualized by ζ_Σ. Figure 4 (a) shows the c₁-c₂ phase diagram and the shape of ζ_Σ(e). In the ferromagnetic-1 phase, ζ_Σ(e) shows a disk shape and the phase changes by 4π around the disk. A rotation about the axis perpendicular to the disk and a U(1) gauge transformation are equivalent, which corresponds to H_{f1} ≃ U(1)_{2g+s}. From the dumbbell and cloverleaf shapes of ζ_Σ(e) for the uniaxial nematic and biaxial nematic phases, it is easy to see that these phases have cylindrical symmetry, H_{un} ≃ O(2)_s, and square symmetry, H_{bn} ≃ (D₄)_{g+s}, respectively. In the cyclic phase, ζ_Σ(e) takes the triad shape, which has the tetrahedral symmetry, as shown in Fig. 4 (b).
Among the four ground states, the biaxial nematic and cyclic phases give a non-Abelian π₁[G/H]. However, the biaxial nematic and uniaxial nematic phases are energetically degenerate in the mean-field framework; the zero-point fluctuation lifts this degeneracy [32,33,34,35]. Within mean-field theory, therefore, the biaxial nematic phase is not well suited for studying non-Abelian vortices, and the cyclic phase is the best theoretical model for this purpose.
Non-Abelian vortices in the cyclic phase
We calculate the topological invariants of vortices in the cyclic phase from the order-parameter manifold of Eq. (26) [19,26,30]. Because SO(3) is not simply connected, let us consider the mapping from SO(3) to the simply connected SU(2), its double cover. Here, T* is the lift of the tetrahedral group T from a subgroup of SO(3) to a subgroup of the double cover SU(2), and consists of 24 elements. To consider the action of the SU(2) rotation on ζ, we define the new tensor Q̃(ζ) ≡ Q(ζ) σ ⊗ σ and the corresponding operation under U(e, θ), where σ is the vector of Pauli matrices and U(e, θ) is the SU(2) rotation matrix. The topological invariant defined as π₁[{U(1)_g × SU(2)_s}/T*_{g+s}] can be described by elements of T*. In the following, we concentrate on the energetically stable vortices and classify them by conjugacy class. For a single straight vortex along the z-axis, we consider the order parameter Ψ in cylindrical coordinates (r, ϕ, z), with radial profile functions f_{1,2,3}(r) satisfying f_{1,2,3}(r → 0) = 0 and f_{1,2,3}(r → ∞) = 1.
All other vortices, classified by the remaining conjugacy classes, are energetically unstable and split into the vortices discussed above.
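Since the classification above proceeds by conjugacy classes of T*, it is a useful consistency check to enumerate them. The sketch below (Python; all names are ours) builds the 24 unit quaternions of T* and partitions them into conjugacy classes, finding the expected seven classes of sizes 1, 1, 4, 4, 4, 4, and 6.

```python
import itertools

# Conjugacy classes of the binary tetrahedral group T*, whose elements label
# the topological invariants of cyclic-phase vortices (U(1) factor omitted).

def qmul(a, b):
    w1, x1, y1, z1 = a; w2, x2, y2, z2 = b
    return (round(w1*w2 - x1*x2 - y1*y2 - z1*z2, 6),
            round(w1*x2 + x1*w2 + y1*z2 - z1*y2, 6),
            round(w1*y2 - x1*z2 + y1*w2 + z1*x2, 6),
            round(w1*z2 + x1*y2 - y1*x2 + z1*w2, 6))

def conj(g, h):                       # g h g^-1 for unit quaternions
    g_inv = (g[0], -g[1], -g[2], -g[3])
    return qmul(qmul(g, h), g_inv)

units = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
T_star = [tuple(s * c for c in u) for u in units for s in (1, -1)]
T_star += [tuple(h / 2 for h in signs)
           for signs in itertools.product((1, -1), repeat=4)]

classes, seen = [], set()
for h in T_star:
    if h in seen:
        continue
    cls = {conj(g, h) for g in T_star}
    classes.append(cls)
    seen |= cls
print(sorted(len(c) for c in classes))    # -> [1, 1, 4, 4, 4, 4, 6]
```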
Spin-2 BECs and the cyclic phase in experiments
In experiments, spinor BECs are realized as optically trapped cold dilute atomic gases. In this system, the internal degrees of freedom are given by the hyperfine spin F = I + S, where I and S are the nuclear and electron spins. For the spin-2 case, F = 2 ⁸⁷Rb BECs (I = 3/2, S = 1/2) are usually used because of their long lifetimes [36-40]. The measured interaction parameters of F = 2 ⁸⁷Rb are c₀/(4πℏ²a_B/M) = 106 ± 4, c₁/(4πℏ²a_B/M) = 0.99 ± 0.06, and c₂/(4πℏ²a_B/M) = −0.106 ± 0.116, suggesting the uniaxial nematic phase as the ground state [39]. However, the large error bar for c₂ cannot exclude the possibility of a cyclic ground state, because of complications arising from quadratic Zeeman effects and hyperfine-spin-exchanging relaxations [27,34,40].
Collision dynamics of non-Abelian vortices
Here, we perform numerical simulations of vortices in the cyclic phase and show that the results satisfy the geometrical considerations discussed in Sec. 2.
To study the dynamics of vortices, we use the Gross-Pitaevskii equation, which describes the time evolution of the order parameter [27,28]. The Gross-Pitaevskii equation can be obtained from the Hamilton equation iℏ ∂Ψ_m/∂t = δH/δΨ*_m for the mean-field Hamiltonian. Simulations are performed by preparing an initial state with vortices and numerically solving the Gross-Pitaevskii equation in a cubic box with Neumann boundary conditions [41]. The numerical parameters are: (i) interaction coefficients c₀ > 0, c₁/c₀ = c₂/c₀ = 0.5.
The interaction coefficients we use make the cyclic state stable. Although these values are not realistic for actual ⁸⁷Rb BECs, we concentrate on the topological features of vortex collisions and do not pursue quantitative consistency with actual experiments.
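A common way to integrate the Gross-Pitaevskii equation numerically is the split-step Fourier method. The sketch below (Python with NumPy) shows the structure of one such integrator for a single-component condensate with periodic boundaries; the actual simulations use the five-component spin-2 equation, a cubic box, and Neumann boundaries, so this is only a structural illustration with parameters chosen by us.

```python
import numpy as np

# Structural sketch of a split-step (Strang) integrator for the
# Gross-Pitaevskii equation i dPsi/dt = [-(1/2)grad^2 + c0 |Psi|^2] Psi,
# single component, hbar = M = 1, periodic box. In the spin-2 case, Psi
# has five components and the nonlinear half-step also contains the c1
# (spin) and c2 (singlet-pair) terms. Grid, dt, and c0 are illustrative.

N, L_box, dt, c0 = 128, 32.0, 1.0e-3, 1.0
x = np.linspace(-L_box / 2, L_box / 2, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L_box / N)
KX, KY = np.meshgrid(k, k, indexing="ij")
kinetic = np.exp(-0.5j * dt * (KX**2 + KY**2))    # exp(-i dt k^2 / 2)

def step(psi):
    psi = psi * np.exp(-0.5j * dt * c0 * np.abs(psi)**2)    # half nonlinear
    psi = np.fft.ifft2(kinetic * np.fft.fft2(psi))          # full kinetic
    return psi * np.exp(-0.5j * dt * c0 * np.abs(psi)**2)   # half nonlinear

# Initial state: a vortex-antivortex pair (net winding zero, so the phase
# is compatible with the periodic boundaries of this sketch).
X, Y = np.meshgrid(x, x, indexing="ij")
def vortex(x0, sign):
    r = np.hypot(X - x0, Y)
    return np.tanh(r) * np.exp(1j * sign * np.arctan2(Y, X - x0))
psi = vortex(-4.0, +1) * vortex(+4.0, -1)

n0 = np.sum(np.abs(psi)**2)
for _ in range(2000):
    psi = step(psi)
print("relative norm drift:", abs(np.sum(np.abs(psi)**2) / n0 - 1.0))
```

Each factor in a step is either a pure phase or a unitary Fourier transform, so the particle number is conserved to machine precision, one reason this scheme is popular for spinor-BEC simulations.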
As initial states, we use two vortex configurations: (I) two straight vortices at an oblique angle and (II) two twisted vortices. As pairs of topological invariants, we use four patterns, (i)-(iv), ranging from two identical invariants to mutually non-commutative ones. In the following, we denote the type of collision as, e.g., (I)-(i) (two straight vortices with the same topological invariants). Figure 6 shows collisions of vortices with commutative topological invariants, such as type (I)-(i) (Fig. 6 (a); cf. Fig. 3 (f)). We have performed numerical simulations with various combinations of topological invariants, relative velocities, and collision angles, and confirmed that passing through and reconnection occur only when the topological invariants of the two vortices are commutative, and that the formation of a rung vortex always occurs when the topological invariants of the two vortices are non-commutative. We finally describe a possible experimental manifestation of rung vortices. The phase-contrast imaging experiment [42] enables the measurement of local magnetization, and vortices with ferromagnetic-1 and ferromagnetic-2 cores, as in Fig. 7 (c)-(f), manifest themselves as bridged structures of localized magnetization.
Conclusion
In Sec. 2, we explained quantized vortices and their topological invariants based on homotopy theory, and discussed the definition of non-Abelian vortices and their collision dynamics. Non-Abelian vortices are defined as vortices with non-Abelian topological invariants given by the fundamental group of the order-parameter manifold. When two non-Abelian vortices with non-commutative topological invariants collide, they can neither reconnect nor pass through each other, and instead form a rung vortex bridging the two colliding vortices. In the case of two twisted vortices with non-commutative topological invariants, they cannot unravel because of the formation of the rung vortex.
In Sec. 3, we discussed the cyclic phase of spin-2 BECs as a system in which non-Abelian vortices can be realized. The fundamental group of the order-parameter manifold in the cyclic phase is described by the non-Abelian tetrahedral group. We performed numerical simulations of the Gross-Pitaevskii equation for spin-2 BECs and showed that the collision dynamics of non-Abelian vortices discussed in Sec. 2 can be realized in this system. | 2019-04-19T13:12:11.007Z | 2011-05-01T00:00:00.000 | {
"year": 2011,
"sha1": "e5baf57dcde20a0efd0ccab7ac6fe184917e33d7",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/297/1/012013",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "d0b20ca848b8e27ae061b21950db610cb30d0098",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
119351763 | pes2o/s2orc | v3-fos-license | On the ground states of an array of magnetic dots in the vortex state and subject to a normal magnetic field
Dipole-dipole interactions in a square planar array of sub-micron magnetic disks (magnetic dots) have been studied theoretically. Under a normal magnetic field, the ground state of the array undergoes many structural transitions between the limiting chessboard antiferromagnetic state at zero field and the ferromagnetic state at a threshold field. At intermediate fields, numerous ferrimagnetic states having mean magnetic moments between zero and that of the ferromagnetic state are favorable energetically. The structures and energies of a selection of states are calculated and plotted, as are the fields required to optimally reverse the magnetic moment of a single dot within them. Approximate formulae for the dipolar energy and the anhysteretic magnetization curve are presented.
INTRODUCTION
Recently, some peculiar properties of sub-micron magnetic particles (magnetic dots) fabricated from soft magnetic materials such as permalloy, Co, etc., and forming an artificial lattice have attracted great attention, see Refs. 1-5. These magnetic dot arrays constitute a promising material for high-density magnetic storage media. The distribution of magnetization within the dots is quite nontrivial. In the absence of an external magnetic field, a small enough non-ellipsoidal dot exhibits a single-domain, nearly uniform magnetization state, either a so-called flower state or a leaf state [6]. On increasing the size of the dot above a critical value, a vortex state occurs [7]. This vortex state has been experimentally observed (see Refs. 4,5,8-10) for circular disk-shaped magnetic dots with diameters 2R = 200-800 nm and thickness L = 20-60 nm. In Ref. 11, magnetization reversal for an array of disk-shaped dots under the influence of a magnetic field applied in the plane of the dots was investigated experimentally. In this planar geometry, the main contribution to the total magnetization comes from the internal reorganization of each dot's magnetic structure, in particular by displacement of the vortex from the dot center, leading eventually to annihilation of the vortex at the rim of the dot. In this process, the dipolar interaction between the dots does not play an essential role.
In the present work, another case, namely, the ground state of an unbounded planar square lattice of thin circular disk-shaped dots in an external magnetic field perpendicular to the plane of the dots will be considered. We will show that the situation in this case is very different from that in which the field is applied in-plane: the main contribution to the total magnetization is now determined as much by the dipolar interactions of the dots as by the external field. In a perpendicular field, the dipole-dipole interaction between the dots results in a complex specific phase diagram. In particular, a cascade of phases with different patterns of dot magnetization has been found; these constitute the sequence of ground states as a function of the external magnetic field.
II. MODEL
In order to formulate the model we need to discuss briefly the character of M, the magnetization distribution within a single dot in the vortex state. In circular cylindrical coordinates, M = M(z, r, χ). In sufficiently thin material, such as that being considered in the present work, M does not depend significantly on the z-coordinate along the normal to the dot, so that we may more simply write M = M(r, χ), where r and χ are the polar coordinates in the dot plane. Then the Cartesian components of M for the vortex state inside the dot, M_x = M_s sin θ cos ϕ, M_y = M_s sin θ sin ϕ, M_z = M_s cos θ, where M_s is the saturation magnetization, are determined by the ansatz

θ = θ(r),  ϕ = χ + ϕ₀.   (1)

Such a distribution is typical for magnetic vortices with a topological charge (vorticity) equal to one in two-dimensional easy-plane ferromagnets. The function θ(r) is determined by an ordinary differential equation and its solution can easily be found numerically; see the general discussion of magnetic vortices in Refs. 12, 13. For the theory of magnetic dots made of soft magnetic materials, the crystallographic easy-plane anisotropy is negligible, and the demagnetizing field H_m plays the main role. The sources of H_m are both the volume "magnetic charges", proportional to div M, and the discontinuity in the normal component of magnetization at the surface of the sample (the surface "magnetic charges"). The vortex distribution (1) has an advantage compared with others, because for (1)

div M = M_s cos ϕ₀ [ (dθ/dr) cos θ + (1/r) sin θ ],

and div M = 0 at ϕ₀ = ±π/2. Accordingly, the volume magnetic charges vanish and the sole source of the field is the discontinuity of the z-projection of M at the faces of the dot. The states with ϕ₀ = +π/2 and ϕ₀ = −π/2 have the same energy, i.e. the vortex state of the dot is twofold degenerate with respect to the sense of the magnetization rotation. In the case of a thin enough dot, this gives H_m = −4πM_z ẑ, i.e. there is an effective easy-plane anisotropy w_m = 2πM_s² cos²θ. The function θ(r) can be obtained by applying well-known methods for treating magnetic vortices in easy-plane magnets. At the center of the dot (r = 0) the function θ(r) is restricted to two possible values, θ(r = 0) = 0, π, so that cos θ = p = ±1. Here p is the so-called vortex polarization (the second topological charge) [13]. The characteristic scale of the variation of the function θ(r) coincides with the exchange length

Δ₀ = (A/4πM_s²)^{1/2},

where A is the inhomogeneous exchange constant. For r ≫ Δ₀ the value of θ(r) in the vortex solution tends to π/2 exponentially. If the dot radius R ≫ Δ₀, then one can obtain an acceptable solution in which the limiting value θ(r) = π/2 is reached at r = R, the rim of the dot. In this case θ ≈ π/2 within the major part of the volume of the dot, where Δ₀ < r ≤ R. Such states have been discussed in a theoretical treatment of magnetic dots that included an exact treatment of the magnetic dipole-dipole interaction [7]. That treatment showed that the out-of-plane magnetization M_z is significantly different from zero only in the core region, r ≤ Δ₀, and that the total magnetic moment of the dot µ has one of two values, ±µẑ, with magnitude µ ∼ ξM_s L Δ₀², where ξ is a multiplicative constant of the order of 1; in fact ξ → 1.361 as Δ₀/R → 0. Thus, we arrive at the following simplified picture of the state of the single dot.
In the greater part of the dot, the magnetization lies in the dot plane, rotating about the center of the dot, and so, because of its circular symmetry, does not contribute to the total moment µ of the dot. 7 The state of the dot is fourfold degenerate, with ϕ_0 = ±π/2 and the core polarization p = ±1. The value of ϕ_0 does not manifest itself in the magnetic moment of the dot and, as a consequence, has no influence on dot interactions. The magnetic moment is directed perpendicular to the plane of the dot and has either of the values ±µẑ.
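Before moving on to dot interactions, the profile θ(r) mentioned above can be sketched numerically by a shooting method. The snippet below assumes the standard easy-plane vortex profile equation, θ'' + θ'/ρ + sin θ cos θ (1 − 1/ρ²) = 0 with ρ = r/∆_0, which is not displayed in the text; the equation, the bisection bracket, and all names here are illustrative assumptions rather than the authors' stated procedure.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed easy-plane vortex profile equation, r measured in units of Delta_0:
#   theta'' + theta'/r + sin(theta)*cos(theta)*(1 - 1/r^2) = 0,
# with theta(0) = 0 (polarization p = +1) and theta -> pi/2 at large r.
def rhs(r, y):
    th, dth = y
    return [dth, -dth / r - np.sin(th) * np.cos(th) * (1.0 - 1.0 / r**2)]

def overshoots(slope, r_max=25.0):
    """True if the outward solution crosses pi/2 (core slope too large)."""
    hit = lambda r, y: y[0] - np.pi / 2        # theta reaches pi/2
    hit.terminal, hit.direction = True, 1
    turn = lambda r, y: y[1]                   # theta' = 0: profile turns over
    turn.terminal, turn.direction = True, -1
    r0 = 1e-3                                  # start in the linear core, theta ~ slope*r
    sol = solve_ivp(rhs, (r0, r_max), [slope * r0, slope],
                    events=[hit, turn], rtol=1e-9, atol=1e-12)
    return len(sol.t_events[0]) > 0

# Bisect on the core slope; the bracket below was chosen by trial.
lo, hi = 0.1, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if not overshoots(mid) else (lo, mid)
print("d(theta)/dr at the core:", 0.5 * (lo + hi))
```

The separatrix solution picked out by the bisection is the profile that approaches π/2 monotonically, matching the exponential approach described above.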
With regard to dot interactions, all dots are in one or other of two states: "up" and "down". Since the core volume is much smaller than the dot volume, the core magnetic moment is small compared with the saturation moment of the dot. Although the dipolar interaction between dots is not very strong, 14 it is nevertheless the sole source of interaction within the dot system. Because both the dipolar dot interaction field and the external magnetic field are very much lower than the effective fields (exchange and demagnetizing) internal to the dot, one can regard the magnetization distribution inside the dot as practically unaffected by them.
An important consequence of this robustness of the vortex state is that the polarization p and moment µ of a dot remain unchanged under the application of small enough external magnetic fields H_e parallel to ẑ. 15 Not only is the state with µ parallel to H_e (light vortex) stable, but the state with µ antiparallel to H_e (heavy vortex) also remains metastable, up to |H_e| = 4πM_s. 15 Following from the above discussion, the Hamiltonian of a dot array can be presented as follows:

W = (µ²/2) Σ_{l≠l′} p_l p_{l′} / |l − l′|³ − µH Σ_l p_l,

where µ is the magnitude of the moment of a single dot, p_l = ±1, l and l′ are dot positions in a square lattice, l = a(m x̂ + n ŷ) with m, n = 0, ±1, ±2, ... integers, a is the interdot distance, and H is the external magnetic field parallel to ẑ. The first term describes the dipole-dipole interaction of the lattice of magnetic dots; the second term is the Zeeman energy. It should, perhaps, be stressed that the system modelled here, consisting of a square lattice of discrete dipoles, normal to the lattice plane, with only dipolar interactions, is very different from a continuous thin film with perpendicular anisotropy. Numerical treatments of such films, when performed using a square-lattice discretization (e.g. see Refs. 16, 17), give rise to a Hamiltonian that bears a superficial resemblance to that of the present system. However, the essential continuity of the film, expressed in the exchange coupling between nearest-neighbour elements of the numerical discretization (the dominant interaction in any sufficiently refined discretization), necessarily results in magnetization patterns (stripe domain structures) that are entirely different from the patterns of discrete moments, arising from pure dipolar interactions between discrete dots on a square lattice, that are reported here.
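To make the model concrete, the mean dipolar energy per dot of any periodic up/down pattern can be evaluated by direct summation, truncated at a cutoff radius with the remainder approximated by a uniform moment density, in the spirit of the numerical procedure described later in the text. The sketch below is a minimal Python version of this Hamiltonian; the function name and cutoff are illustrative choices, while the reference values 4.516811 and −1.322943 µ²/a³ quoted in the comments come from the text itself.

```python
import numpy as np

def mean_dipolar_energy(cell, R=200.0):
    """Mean dipole-dipole energy per dot, in units of mu^2/a^3, for a
    periodic pattern of p = +/-1 moments normal to a square lattice.
    `cell` is one rectangular magnetic cell; pair sums are truncated at
    radius R (in units of a) and the remainder is approximated by a
    uniform areal moment density."""
    k, l = cell.shape
    mbar = cell.mean()                      # reduced magnetization of the cell
    n = int(R) + 1
    E = 0.0
    for i in range(k):
        for j in range(l):
            for dx in range(-n, n + 1):
                for dy in range(-n, n + 1):
                    r2 = dx * dx + dy * dy
                    if r2 == 0 or r2 > R * R:
                        continue
                    # parallel perpendicular dipoles interact as +p_i p_j / r^3;
                    # the factor 1/2 avoids double counting pairs
                    E += 0.5 * cell[i, j] * cell[(i + dx) % k, (j + dy) % l] / r2**1.5
            E += 0.5 * cell[i, j] * mbar * 2.0 * np.pi / R   # continuum tail beyond R
    return E / (k * l)

print(mean_dipolar_energy(np.ones((1, 1))))               # FM:  ~ 4.5168
print(mean_dipolar_energy(np.array([[1, -1], [-1, 1]])))  # AFM: ~ -1.3229
```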
III. MAGNETIC GROUND STATES
The dipole-dipole interaction is long-range, and it is not obvious a priori what structure will constitute the ground state. For a system of particles with dipole-dipole interactions and in zero applied field, a theorem states that in the ground state the overall magnetic moment is zero, and this results in a specific antiferromagnetic (AFM) state. 18 For instance, in the case of a three-dimensional simple cubic lattice of spherical particles, a four-sub-lattice structure with non-collinear magnetic moments is optimal. 19 This is possible, however, only if the magnetic moment of each particle is free to point in any direction. Here, because of the robustness of the vortex state supporting the dot magnetic moment, the moments are uniaxial with p = ±1, giving rise to a quasi-Ising model. Determination of the ground state is thus reduced to geometrical considerations.
It is convenient to divide the initial simple square lattice, on which the dots are located, into elementary magnetic cells of rectangular shape with (k × l) dots, so that the overall spatial arrangement of up and down (p = ±1) dots (what we call their "structure", "configuration" or "pattern") can be produced from a single such cell by a translation T = a(N x̂ + M ŷ), where M = km, N = ln, and m, n = 0, ±1, ±2, ... are integers. This is appropriate for a magnetic structure with sublattice number (i.e. smallest rectangular unit cell size) less than or equal to k × l. Note that, as the choice of specific values for k and l restricts the range of structures that can be represented in this way, the search for the ground state must, in principle, extend to all integer k, l. The Zeeman energy, however, depends only on the relative numbers of dots with p = +1 and p = −1 in a magnetic cell, i.e. on the mean moment per dot ⟨µ⟩ (= ⟨µ_z⟩), and, of course, on the applied field H.
We have investigated a substantial number of states, namely, all states with k × l = 2, 3 and 4, and many states with larger values of k and l. Because the dipolar interaction falls off with the inverse cube of the moment separation, i.e. quite rapidly, it is clear that distributions in which dots of like orientation are well apart from each other, while those of opposite sign are as close as possible, will be energetically most favourable. In particular, one notes that the ratio of the interaction energy of a nearest neighbour dot pair, to that of a pair of next-nearest neighbours, is √ 8 : 1. It seems extremely improbable that two dots of the minority population (which we will consistently take to be the down dots with p = −1) could ever be nearest neighbours in a ground-state configuration. In zero magnetic field, the most energetically favorable distribution is the simple chessboard AFM structure, see Fig. 1. In this structure neither of the two (equal) populations contains any nearest neighbour pairs of parallel dots. Other AFM structures in which dots with the same value of µ do occur as nearest neighbours possess very much higher energy as is illustrated by the examples in Fig. 1.
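The search over small magnetic cells can be made explicit. The brute-force sketch below (reusing mean_dipolar_energy from the earlier snippet; exponential in the cell size, so practical only for small k × l) returns the lowest-energy k × l cell with a prescribed mean moment:

```python
from itertools import product

def best_cell(k, l, m_target, R=30.0):
    """Lowest-energy k-by-l magnetic cell whose reduced magnetization
    equals m_target; brute force over all 2^(k*l) up/down assignments."""
    best_E, best_pattern = np.inf, None
    for bits in product((-1, 1), repeat=k * l):
        cell = np.array(bits, dtype=float).reshape(k, l)
        if not np.isclose(cell.mean(), m_target):
            continue
        E = mean_dipolar_energy(cell, R=R)
        if E < best_E:
            best_E, best_pattern = E, cell
    return best_E, best_pattern

print(best_cell(2, 2, 0.0))   # recovers the chessboard AFM cell
```

For the 2 × 2 cell at zero mean moment this search prefers the chessboard over the striped arrangement, in line with the energy ordering illustrated in Fig. 1.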
For the AFM structure, the energy does not depend on the applied magnetic field, whereas for the ferromagnetic (FM) structure (with µ = +µẑ for all dots) this dependence is maximal. The total mean energy per dot can be written as W = W_m − ⟨µ⟩H, where W_m is the mean energy, per dot, of the dipole-dipole interaction and ⟨µ⟩ is the overall average moment per dot of the distribution (zero in the AFM case). In virtue of this, the energies of the FM and AFM states, for which W_m = W_FM and W_AFM respectively, become equal at the field H* = (W_FM − W_AFM)/µ, and, if these were the only states possible, one could expect a first-order phase transition from the AFM to the FM structure at H = H*. However, numerous other states are possible, and the situation is very much more complicated.
In the intermediate region between AFM and FM, numerous more complex structures with 0 < ⟨µ⟩ < µ ("ferrimagnetic" structures) may occur. For these states the dipole-dipole interaction energy W_m is higher than W_AFM, that for the optimal chessboard AFM, but the total energy W_{m,H} = W_m − ⟨µ⟩H is reduced with increasing field H. Therefore, such structures may constitute the ground state at finite magnetic fields. To describe these structures, it is convenient to introduce the dimensionless magnetization m = ⟨µ⟩/µ.
In order to determine the values of H that fix the lower and upper bounds of such ferrimagnetic ground states, we have calculated the change in dipole-dipole interaction energy ∆W_m that occurs when the magnetic moment of a single dot is reversed in the FM and chessboard AFM structures. A simple analysis shows that this energy change is determined by the energy per dot in the initial state: reversing the magnetic moment of one dot in the FM state lowers the dipolar energy by ∆W_m = 4W_FM while raising the Zeeman energy by 2µH, and reversing a down dot in the chessboard AFM state raises the dipolar energy by ∆W_m = 4|W_AFM| while lowering the Zeeman energy by 2µH. The value of W_FM found numerically is 4.516811 µ²/a³. Including the magnetic field, one can show that the total energy of the pure FM state and that of the same state, but with one dot reversed, coincide at the value H = H_1, where

H_1 = 2W_FM/µ = 9.033622 µ/a³.

Evidently, for H > H_1 the FM structure is the most favorable, but with H < H_1, some magnetic moments tend to reverse. When H < H_1, but close to H_1, the density of these reversed moments will be very low and ⟨µ⟩ ≈ µ.
The chessboard AFM state can be treated in the same way. One obtains W_AFM = −1.322943 µ²/a³ and, for the corresponding threshold field H_0 above which it becomes favorable to switch a dot from antiparallel to parallel to the field direction, one finds H_0 = 2|W_AFM|/µ = 2.645886 µ/a³.
Hence, intermediate phases with 0 < ⟨µ⟩ < µ exist within the finite range of fields H_0 < H < H_1, with ⟨µ⟩ → 0 as H → H_0 and ⟨µ⟩ → µ as H → H_1. Let us consider the nature of the ground states in this field range. In the limit of low fields, these states are obtained from the chessboard AFM by reversing the magnetic moments of a small fraction of the down-dot population, leaving the remainder undisturbed. At high fields H ≈ H_1, the initial structure is the FM one. In both cases, one expects the flipped dots to be dispersed as far from each other as possible, in order to minimize their contribution to the dipole-dipole interaction energy of the system. If it were not for the constraints imposed by the square lattice on which all the dots are located, one would therefore expect these flipped dots to lie on an equilateral triangular lattice.
Guided by these considerations, we have sought and found excellent candidates for those configurations for which W_m is minimal, for a number of fixed values of m = ⟨µ⟩/µ between 0 and 1. The elementary rectangular magnetic cells that represent a selection of these "optimal" (we drop the quotation marks hereafter) configurations are depicted in Figs. 2 and 3, together with their corresponding values of m and W_m (with W_m expressed in units of µ²/a³). These W_m (also the values for W_FM and W_AFM given above) were evaluated numerically by summing the contributions to the field, at each dot in the cell, of all dots within a radius 10,000.5a. The contribution from all more distant dots was approximated by attributing a uniform areal dipolar moment density mµ/a² to the area of the plane of the dots outside that radius. All values of W_m obtained in this way are plotted as the points in Fig. 4.
The sequence of configurations in Figs. 2 and 3 represents some of the (very many) stages in the anhysteretic magnetization of the dot array from the demagnetized AFM to the fully magnetized FM state. Owing to the stability of the vortex state in the dots, these states probably cannot be accessed sequentially merely by increasing the applied field. They represent stages in an ideal anhysteretic sequence, probably accessible only by thermal, or quasi-thermal (e.g. magnetic "shaking"), cooling through the Curie temperature (or quasi-Curie temperature) under the appropriate constant normal magnetic field H(m).
It is also evident that the configurations reported here constitute only a small sample drawn from an infinite sequence of such optimal configurations over the range 0 ≤ m < 1: for every rational value of m in this range there exist, in principle, numerous different configurations, one (or possibly more) of which must possess the lowest value of W_m. (Henceforward, unless the contrary is explicit, W_m will be used exclusively to refer to this lowest energy for given m.) Less evident, but at least extremely plausible, is the hypothesis that W_m increases monotonically with m. Consider the optimal configurational state, in zero applied field, for any specific reduced magnetization m. In both the majority "up" dot and minority "down" dot populations, the dots occupy the energetically best locations available to them. However, by virtue of being in the minority population, even the least favorably located of the "down" dots is surely more favorably located than the least favorably located "up" dot: it has more dots of opposite sign with which to interact, and need have no dot of the same sign for a nearest neighbour; the least favorably located "up" dot, by contrast, is certain to have another "up" dot alongside. How, then, can it be energetically favorable to reverse the moment of a "down" dot, thereby creating yet another "up" dot? Indeed, this argument can be pushed a stage further: not only must W_m increase monotonically with m, but so must its rate of increase, dW_m/dm, because the dots being reversed are selected in order of increasing stability and new sites for reversed dots are increasingly less favorable. It appears, however, that dW_m/dm, though monotonically increasing, is discontinuous. For example, consider a state with a simple structure like that for m = 1/2 in Fig. 2. It is evident that the energy increase on reversing one of the minority down dots is substantially greater than the energy decrease on reversing one of the majority up dots (taking into account the decrease in the former energy change and the increase in the latter on optimizing the two new states). Indeed, we have carried out this procedure for all but one (that for m = 27/28, which is very close to the FM state) of our optimal configuration candidates, all of which support this prediction. This aspect is further discussed in Section VI.
IV. ANALYTIC ASYMPTOTIC APPROXIMATIONS
Analytic formulae designed to approximate the dipolar energy W_m in the limits m → 0 and m → 1 will now be derived. These formulae are both instructive and in remarkably good accord with the numerical values of W_m calculated for specific states and represented by the points plotted in Fig. 4.
A. Approximations near FM state
1. Equilateral triangular superlattice
Consider states with m = 1 − ε, 0 < ε ≪ 1. In this region, as mentioned earlier, one expects the minority down dots to be distributed as far from each other as possible, thereby minimizing their positive dipolar interactions with one another and maximizing their negative interactions with the majority up dots. For a given density of minority dots, as prescribed by ε, their maximum separation is known to be ideally accomplished when those dots form an equilateral triangular lattice (ETL hereafter). In the present case this cannot be precisely achieved because all the dots are constrained to lie only at points of the fundamental square lattice of spacing a. However, when ε is very small, the spacing of the minority dots λ ≫ a, and they can adopt a fair approximation to an equilateral triangular distribution; indeed, as ε → 0, this approximation becomes very good. In developing our approximate formula, we will therefore assume the minority dots all lie on an ETL with spacing λ(ε).
In order to proceed, we need to know the interaction field and energy for dots in an FM state on an ETL. This was calculated numerically, in a manner similar to that used for distributions on a square lattice described above, again summing over a circular region, of radius R = 10,000.5λ, surrounding a central dot at which the field of the others is calculated. At lattice spacing λ, there are (2/√3)/λ² dots per unit area. The region outside R was represented, as before, as uniformly polarized with the mean dipole moment per unit area, (2/√3)µ/λ². The numerical calculation yields an energy per dot W_FM△ = 5.517088 µ²/λ³. For λ = a, W_FM△ is substantially higher than the value W_FM = 4.516811 µ²/a³ obtained for the square lattice, but that is because the dot areal density is a factor 2/√3 higher. For the same dot areal density, 1/a², we require λ = [2/√3]^{1/2} a. Because of the inverse-cube interaction law, the energy per dot at the same dot density, 1/a², is a factor [(√3)/2]^{3/2} lower than 5.517088 µ²/a³. This gives W_FM△ = 4.446373 µ²/a³ for an ETL of moment density µ/a², about 1.56% lower than W_FM for the square lattice, demonstrating the small but significant energetic advantage of the former configuration. (Indeed, the smallness of this energy difference, for such very different configurations, is reassuring, for it indicates that the small departures from the ideal ETL that are imposed by conformity with the underlying square lattice will not introduce any substantial error.) Analogously to H_1 = 2W_FM/µ = 9.033622 µ/a³, we will write H_△ = 2W_FM△/µ = 8.8927451 µ/a³. The self-energy per dot of an ETL of dots, with moment µ per dot and dot areal density ε/2a², is (ε/2)^{3/2} W_FM△ in the FM state. Returning to the m = 1 − ε system, we can regard it as the superposition, on the uniform square FM dot lattice, of an (approximately) equilateral triangular system of "double-dots" of moment −2µ and of moment areal density −εµ/a², i.e. with spacing λ = [(2/√3)(2/ε)]^{1/2} a appropriate to a dot areal density ε/2a². We superpose double-dots in order, in effect, to reverse the moments µ of that fraction, ε/2, of dots on the basic square lattice that constitute the (approximate) ETL of up dots that require to be reversed to achieve the required overall reduced moment m = 1 − ε. The contribution to the energy of each double-dot, of moment −2µ, due to its interaction with all the other double-dots, is 4W_FM△(ε/2)^{3/2}. This contributes 4(µH_△/2)(ε/2)^{5/2} to the mean dipolar energy per dot of the overall square lattice in this analytic asymptotic approximation for m ≈ 1, denoted W_a1. In addition, the double-dots also experience the field −H_1 of the underlying square FM lattice on whose dots they are superimposed. Consequently the interaction energy of the triangular and square lattices is −µεH_1 per square-lattice dot. The self-energy of the square FM lattice is, of course, just W_FM = µH_1/2 per dot. Adding the three contributions gives, for the mean energy per dot of the m = 1 − ε system,

W_a1 = µ[H_1/2 − εH_1 + 2H_△(ε/2)^{5/2}].     (6)

This formula is represented by the curve W_a1(m) that extends from m = 0.5 to m = 1 in Fig. 4. Agreement with the points W that represent the numerical values calculated for specific optimal structures is remarkably good, not only near m = 1 but over the whole of this range. One notes also that, over the whole range, the approximate values W_a1 ≤ W_m, the "exact" numerical values for specific configurations.
This is as it should be, because, in the approximation, the minority dots are located on an ideal ETL whereas, in the configurations treated numerically, the minority dots are restricted to points on the square lattice that are close to, but not precisely at, the ideal locations.
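As a numerical sanity check on Eq. (6) as reconstructed above, the formula can be evaluated directly; the constants below are simply the values of H_1 and H_△ quoted in the text.

```python
H1 = 9.033622    # H_1 = 2 W_FM / mu,       in units of mu/a^3 (square lattice)
HT = 8.8927451   # H_tri = 2 W_FMtri / mu,  same dot density (triangular lattice)

def W_a1(m):
    """Eq. (6): asymptotic mean energy per dot near the FM state (mu^2/a^3)."""
    eps = 1.0 - m
    return 0.5 * H1 - eps * H1 + 2.0 * HT * (eps / 2.0) ** 2.5

print(W_a1(1.0))   # -> 4.516811 = W_FM: the pure FM limit is recovered
```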
2. Square superlattice

Examination of the specific configurations treated numerically and illustrated in Fig. 3 reveals that, in some cases, notably those for m = 3/5 and m = 4/5, the requirement that all dots lie on the basic square lattice is so restrictive that the minority dots are obliged to lie on a square superlattice, no better approximation to the ideal ETL being available. It is instructive, therefore, to modify the above treatment and adapt it to a square lattice. Whereas the minority dots can only ever approximately conform to an ETL, they can lie precisely on a square superlattice whenever the area per minority dot is (k² + l²)a² with k, l integers. Of the three contributions to the mean energy per dot expressed in Eq. (6), only the self-energy of the double-dot lattice requires modification: in place of H_△ in that equation, we require H_1. Or, expressed rather in terms of W_FM = 0.5µH_1, we obtain, for the energy per dot of the system with minority dots on a square superlattice,

W_✷ = W_FM[1 − 2ε + 4(ε/2)^{5/2}].     (7)

This expression yields precisely the same values as those found directly numerically for the specific structures proposed for m = 3/5 and m = 4/5 in Fig. 3 and also, for m = 0, that quoted above for W_AFM. We have W_✷ − W_a1 = 2µ(H_1 − H_△)(ε/2)^{5/2}. Because H_△ and H_1 differ by only 1.56% and the double-dot lattice self-energy term is proportional to ε^{5/2}, W_✷ differs very little from W_a1 over the range 1/2 ≤ m ≤ 1; the difference in the worst case, m = 1/2, is only 1.11%. We do not include a curve representing W_✷ in Fig. 4 because it can scarcely be distinguished from that for W_a1. Over the range 1/2 ≤ m ≤ 1, whereas W_a1 represents a close underestimate of W_m, W_✷ provides a similarly close overestimate.
B. Approximation near AFM state
Here we consider states with positive m close to 0. The treatment resembles that in the environs of the FM state discussed above. We again expect the dots that depart from the chessboard AFM structure ("up" dots this time) to be distributed as far from each other as possible, in locations approximating an ETL. Again we assume these "exceptional" dots all lie on an ideal ETL with spacing λ, and replace ε in the above discussion of the triangular-lattice energy by m. We now superpose the triangular lattice of "double-dots" on the AFM lattice instead of the FM one, an important difference being that we must, of course, place the positive double-dots only on top of negative dots of the underlying AFM lattice, whereas all dots in the FM lattice were equivalent and available for reversal.
Apart from the replacement of ε by m, the expression for the self-energy of the double-dot triangular lattice is the same: 4(µH_△/2)(m/2)^{5/2} per dot of the overall square lattice. However, the interaction energy of the triangular and square lattices is now positive, +mµH_0 (as against −εµH_1), per dot of the square lattice, and the self-energy of the square AFM lattice is −µH_0/2 per dot. Adding the three contributions gives, for the mean energy per dot of the overall lattice of weak reduced magnetization m,

W_a0 = µ[−H_0/2 + mH_0 + 2H_△(m/2)^{5/2}].     (8)

Note that the two asymptotic approximations W_a0 and W_a1 happen to coincide at m = 0.5. The expression for W_a0 is represented by the curve W_a0(m) that extends from m = 0 to m = 0.5 in Fig. 4. Again, agreement with the points W that represent the numerical values calculated for specific optimal structures is remarkably good, not only near m = 0 but over the whole range 0 ≤ m ≤ 0.5. However, the agreement is not quite as good as was the case for W_a1, very probably owing to the additional constraint that only dots from the down-dot population are available for reversal, whereas in the FM case all dots were available. This makes it somewhat harder (in an actual configuration, but not, of course, in the analytic approximation) to arrange the non-AFM dots close to the ideal triangular lattice. This cannot, however, be the only reason for the poorer agreement because, whereas for the approximation near the FM state all W_a1 ≤ W_m, some W_a0 > W_m. An outstanding example is that for m = 1/3, where W_a0 − W_m > 0.01µ²/a³. The reason for this is that the constraint assumed in the analytic treatment for m < 0.5, namely that the ideal configuration is obtainable by reversing only some negative dots in the chessboard AFM configuration without any further rearrangement of the structure, does not apply in an actual configuration; for values of m sufficiently above zero, a lower energy than W_a0 can sometimes be achieved by violating this supposed constraint; see, for example, the simple configuration for m = 1/3 in Fig. 2.
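A matching check for Eq. (8), with H_0 taken from the text, also exhibits the coincidence of the two asymptotic branches at m = 0.5 noted above (W_a1 as defined in the previous sketch):

```python
H0 = 2.645886    # H_0 = 2 |W_AFM| / mu, in units of mu/a^3

def W_a0(m):
    """Eq. (8): asymptotic mean energy per dot near the AFM state (mu^2/a^3)."""
    return -0.5 * H0 + m * H0 + 2.0 * HT * (m / 2.0) ** 2.5

print(W_a0(0.0))             # -> -1.322943 = W_AFM
print(W_a0(0.5), W_a1(0.5))  # both ~ 0.5558: the branches coincide at m = 1/2
```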
V. ANHYSTERETIC MAGNETIZATION CURVE
Consider two optimal configurations, one of reduced magnetization m, the other of higher magnetization m + δm. Their energies in a normal magnetic field H, W_{m,H} = W_m − mµH and W_{m+δm,H} = W_{m+δm} − (m + δm)µH, will be equal only in the field H_{m,m+δm} = (W_{m+δm} − W_m)/(µδm). In the limit δm → 0, we obtain the following differential expression for the anhysteretic magnetization curve H_m(m):

H_m(m) = µ⁻¹ dW_m/dm.     (9)

(Our remarks above concerning local discontinuities in dW_m/dm and its monotonic increase with m are clearly relevant here also to H_m.) In Fig. 5 we have plotted points representing approximate values of H_m(m) derived from the numerical values of W_m calculated for our set of candidates for optimal configurations. They are labelled dW/dm and represented by circles. These are approximations to H_m given by (W_{m(i+1)} − W_{m(i)})/µ(m_{(i+1)} − m_{(i)}), plotted at the values m = (m_{(i+1)} + m_{(i)})/2 located midway between successive configurations of the known set. (All fields are shown in units of µ/a³.) Also plotted in Fig. 5, where they are labelled H_a1 and H_a0, are smooth curves representing H_a1 and H_a0, the analytic approximations to H_m obtained by differentiating the asymptotic equations (6) and (8):

H_a1 = H_1 − (5/2)H_△[(1 − m)/2]^{3/2},     (10)
H_a0 = H_0 + (5/2)H_△(m/2)^{3/2}.     (11)

(The corresponding expression H_✷ = −µ⁻¹ dW_✷/dε is not plotted in Fig. 5: it is practically indistinguishable from H_a1 over the appropriate range 0.5 ≤ m ≤ 1.) Both equations (10) and (11) provide a good fit to dW_m/dm over their appropriate data ranges, 0 ≤ m ≤ 0.5 for H_a0 and 0.5 ≤ m ≤ 1 for H_a1. As one would expect for such asymptotic approximations, the fits are best towards the limits m = 0 and m = 1. Unlike the expressions (6) and (8) from which they are derived, equations (10) and (11) do not lead to coincident values at m = 0.5. That expressions (6) and (8) should yield the same value for W_m at m = 0.5 is a remarkable coincidence; that their slopes dW_m/dm should also agree there is scarcely to be expected! In fact the discontinuity between them is in remarkably good agreement with the corresponding step in the data for dW_m/dm in the vicinity of m = 0.5.
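Evaluating Eqs. (10) and (11) as reconstructed above (with the constants from the previous sketches) displays the step at m = 1/2 discussed in the text:

```python
def H_a1(m):
    """Eq. (10): anhysteretic field near the FM state, in mu/a^3."""
    return H1 - 2.5 * HT * ((1.0 - m) / 2.0) ** 1.5

def H_a0(m):
    """Eq. (11): anhysteretic field near the AFM state, in mu/a^3."""
    return H0 + 2.5 * HT * (m / 2.0) ** 1.5

print(H_a0(0.0), H_a1(1.0))  # -> H_0 and H_1, the AFM and FM threshold fields
print(H_a0(0.5), H_a1(0.5))  # ~ 5.42 vs ~ 6.25: the step discontinuity at m = 1/2
```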
VI. CONFIGURATIONAL STABILITY
The fields H_+(m) and H_−(m) required to render energetically favourable the reversal of a down or up dot in the optimal configuration for any value of m, and the difference between them, H_+ − H_−, provide measures of the stability of that configuration. Limiting examples of these fields, for the AFM and FM states, namely H_0 ≡ H_+(m = 0) and H_1 ≡ H_−(m = 1), were discussed earlier in this article. (In fact, the AFM state also has H_−(m = 0) ≡ −H_0, so that it is stable over a wide field range, 2H_0.) As mentioned earlier, we have determined numerically the extra energies ∆W_{m+} and ∆W_{m−} required to reverse a down dot or an up dot respectively (including local rearrangement of the resulting configurations to minimize their energy) in almost all our candidates for the optimal states of reduced magnetization m.
The values of the corresponding fields, H_±(m) = ∆W_{m±}/2µ, are plotted in Fig. 5. Some examples illustrating the local reorganization of the dot distributions after single-dot reversal are presented in Fig. 6. Intermediate values of m were selected for display in this figure because the patterns in that region are rather more complex and varied than those near m = 0 and m = 1 and usually give rise to greater ranges of field stability.
In Fig. 5 it is evident that a salient feature of the fields H_+ and H_− in this intermediate range of magnetizations is indeed the very substantial width of the gap between them, H_+ − H_−, that defines the field range of stability of the pattern against single-dot reversal. Towards the limits m = 0 and m = 1, that gap shrinks towards zero. However, for the structure of the pure AFM state, precisely at m = 0, the gap is at its widest: 2H_0. (H_− for the pure AFM state is off-scale in Fig. 5.) Throughout the range 0 < m < 1, one expects H_+ > H_m > H_−, but, of course, we do not have values of H_m at the values of m that correspond precisely to the optimal configurations. However, one observes in Fig. 5 that, for almost every configuration (label i), H_+(m_{(i)}) exceeds the next value of H_m ≈ (W_{m(i+1)} − W_{m(i)})/µ(m_{(i+1)} − m_{(i)}), with m_{(i+1)} > m_{(i)}, while, in the reverse direction, H_−(m_{(i)}) < H_{m(i−1)}. This means that the field required to reverse a single down dot in one of our optimal configurations, even after allowing for local rearrangement of the pattern to minimize the increase in dipole-dipole interaction energy, usually exceeds the field required to render energetically favourable the reversal of the infinite array of down dots required to change (anhysteretically) the pattern to that of the optimal configuration appropriate to the next higher value of m. Similarly, to reverse a single up dot, also after local rearrangement, usually requires a reduction of the field to a value below that needed to favour the (anhysteretic) reversal of the infinite array of up dots needed to establish the optimal configuration for the next lower value of m.
The reason for this prima facie somewhat paradoxical behaviour is that, when only a single dot is to be switched in an optimal regular periodic configuration, only very local rearrangement of the dot pattern can reduce the energy of the resulting perturbed system whose overall pattern, remote from the switched dot, must remain optimised for the original level of mean magnetization. It follows that the sequence of points constituting the ideal anhysteretic magnetization "curve" must correspond to a discontinuous sequence of stable energetically optimal dot configurations. Magnetization by monotonic variation of the applied field alone will exhibit very substantial hysteresis and will realize very few, if any, of the ideal magnetization patterns.
VII. SUMMARY AND CONCLUSIONS
Unbounded two-dimensional arrays of thin circular disk-shaped magnetic dots on a square lattice have been considered in the presence of a magnetic field perpendicular to the dot plane. The radii and thicknesses of the dots are such that a radially symmetric vortex magnetization structure is stable; at radii outside a relatively limited core, the magnetization lies almost in plane and parallel to the rim. The net moment µ is due to the vortex core and is normal to the dot plane; it is practically unaffected by normal fields of magnitude comparable to those from other dots.
For every rational value of the reduced magnetization of the array, m = ⟨µ⟩/µ < 1 (ignoring sign), there exist very numerous possible arrangements of up and down dots, one (or more) of which must have minimum dipolar interaction energy W_m. We have calculated that energy for a range of excellent candidates for these ground states at various values of m, in particular those of the chessboard AFM state with m = 0 and the uniform FM state with m = 1. We suggest and argue that, for the sequence of optimal configurations, W_m and dW_m/dm increase monotonically with m, and this is supported by our data in Fig. 4 (dW_m/dm is likely to be locally discontinuous). An analytic formula for W_m(m) is derived on the assumption that, for m → 1, the minority down dots are located near points on an equilateral triangular superlattice. A similar formula is derived for m → 0, relating to the excess up dots. Remarkably, these formulae agree at m = 1/2 (though their gradients differ) and fit the data for specific states very well, especially near m = 0 and 1. A similar formula involving a square superlattice instead of the triangular superlattice gives results in precise agreement with the data for the specific structures with m = 0, 0.6 and 0.8, these being the structures in which the minority dots do indeed lie on square superlattices. It differs from the first asymptotic formula by little more than 1.11% over the appropriate range 0.5 ≤ m ≤ 1.
Approximate data for the anhysteretic magnetization curve H_m(m) = µ⁻¹(dW_m/dm) are derived from the data for specific states and compared with the predictions of the analytic formulae in Fig. 5. The agreement is good, particularly near m = 0 and 1. The fields predicted by the two asymptotic formulae differ quite sharply at m = 1/2, where they indicate a step discontinuity in field that matches a similar step in H_m(m), the data from the series of specific states.
The stability of the optimal configurations found was explored by determining the minimum field H_+ required to reverse a single down dot and likewise the maximum field H_− at which a single up dot would reverse, assuming in both cases local reorganization of the resulting dot pattern to minimize its energy. The gap H_+ − H_− between the two fields, which indicates the stability of the configuration, increases irregularly from near zero, for m just greater than zero, to a rough maximum around m = 1/2 and then decreases again towards zero at m = 1. An exception to this general trend is the very large value at m = 0, where H_+ = −H_− = H_0. For the optimal configurations at most values of m, H_+ > H_{m+} and H_− < H_{m−}, where H_{m+} and H_{m−} refer to the optimal configurations studied at neighbouring values of magnetization, just above and just below m. It follows that there exists, with increasing m, a sequence of stable optimal configurations with energy barriers between them. It must be stressed, therefore, that the field curve H_m(m) is strictly an ideal anhysteretic magnetization curve: owing to the stability of the individual dot moments, the individual dot distributions are likewise very stable, and the infinite sequence of energetically optimal states cannot be traced experimentally by monotonically increasing (or decreasing) the normal field alone. To realize any specific state it would be necessary (but perhaps not sufficient) to cool the sample through the Curie temperature subject to a normal magnetic field of the appropriate strength. Another possibility would be to destroy the stability of the other metastable phases and achieve the phase with minimal energy by a form of magnetic shaking, e.g. by the application of fluctuating fields of decreasing amplitude.
For magnetic storage applications, on the other hand, the stability of the dot configurations is advantageous, indeed, essential. Experimental observation of these dot arrays and transitions between their states may perhaps also be useful for the determination of the basic dot parameters, in particular the radius of the vortex core and its magnetic moment.
Last, but not least: these results are clearly not restricted to a dot lattice of the type considered here, but also apply directly to any square lattice of identical dipoles that are restricted to the two senses of normal orientation. They may also be relevant to the description of other uniaxial dipole-coupled systems of small particles, for example, thin films of granular magnets with easy axial anisotropy (shape or crystallographic) with a perpendicular easy axis and negligible exchange coupling between granules. Such properties are characteristic for thin films prepared by simultaneous evaporation of permalloy and silver with small enough concentrations of permalloy. | 2019-04-14T01:58:52.396Z | 2001-11-12T00:00:00.000 | {
"year": 2001,
"sha1": "72502d7420a667922805d7174559e5c2314abd77",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "72502d7420a667922805d7174559e5c2314abd77",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
155581925 | pes2o/s2orc | v3-fos-license | Comparison of yoga versus physical exercise on executive function, attention, and working memory in adolescent schoolchildren: A randomized controlled trial
Purpose: Executive function, attention, and memory are important indicators of cognitive health in children. In this study, we analyze the effect of yoga and physical exercise on executive functioning, attention, and memory. Methods: In this prospective two-armed randomized controlled trial, 802 students from ten schools across four districts were randomized to receive daily 1 h yoga training (n = 411) or physical exercise (n = 391) for 2 months. Executive function, attention, and memory were studied using the Trail Making Test (TMT). Yoga (n = 377) and physical exercise (n = 371) students contributed data to the analyses. The data were analyzed on an intention-to-treat basis using Student's t-test. Results: There was a significant increase in numerical TMT (TMTN) values within both the yoga (t = −2.17; P < 0.03) and physical activity (PA) (t = −3.37; P < 0.001) groups following the interventional period. However, there was no significant difference in TMTN between the yoga and PA groups (t = 0.44; P = 0.66). There was a significant increase in alphabetical TMT (TMTA) values within the yoga group (t = 6.21; P < 0.001) but not within the PA group (t = 1.19; P = 0.234), and the difference in TMTA between the yoga and PA groups was significant (t = 3.46; P = 0.001). Conclusion: The results suggest that yoga improves executive function, attention, and working memory at least as effectively as a physical exercise intervention in adolescent schoolchildren.
Introduction
Executive function, attention, and memory are important indicators of cognitive health in children. Preliminary studies have shown yoga to improve measures of attention and cognition in small samples of schoolchildren. Yoga is a form of mind-body fitness that involves a combination of muscular activity and an internally directed, mindful focus on awareness of the self, breath, and energy. [1] It aims at developing an integrated personality, where the growth of the physical, mental, social, and spiritual planes is equally emphasized. [2] The advantage of yoga is that its benefits are available to students of all school-age groups. An integrated approach to yoga is necessary for the holistic development of memory, [3] attention, executive functioning, and cognitive processing speed. [4] The few studies that have directly compared the effects of participating in school-based yoga versus physical education have generally found positive effects of yoga. [5] Some studies have found that yoga is as effective as physical activity (PA) in improving cognitive performance and emotional and behavioral functioning. However, no studies have compared yoga to a structured PA program in a large-scale student population. In this study, we have analyzed the effects of a structured yoga program, compared with a structured physical exercise program, on measures of attention, information processing, mental speed, and working memory in adolescent high-school children. Our study differs from related studies in that our sample size is comparatively large, which lowers the risk of bias associated with small samples.
Methods
In this prospective two-armed randomized controlled trial, 802 students from ten schools across four districts were randomized to receive daily 1 h yoga training (n = 411) or physical exercise (n = 391) for 2 months. Executive function, attention, and memory were studied using the Trail Making Test (TMT). Yoga (n = 377) and physical exercise (n = 371) students contributed data to the analyses. The data were analyzed on an intention-to-treat basis using Student's t-test.
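For illustration only, the reported t-test comparisons could be computed along the following lines; the arrays here are hypothetical TMT completion times, not study data, and the paper does not specify whether paired or independent tests were used at each step.

```python
# Illustrative sketch of the analysis approach using scipy; all numbers
# are simulated placeholders, not values from this trial.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre_yoga  = rng.normal(60, 12, 377)   # hypothetical TMT times before training
post_yoga = rng.normal(55, 12, 377)   # and after 2 months of yoga

# Within-group change (paired t-test), as in the pre/post comparisons:
t_within, p_within = stats.ttest_rel(pre_yoga, post_yoga)

# Between-group comparison of post-intervention scores (independent t-test):
post_pa = rng.normal(56, 12, 371)     # hypothetical physical-exercise group
t_between, p_between = stats.ttest_ind(post_yoga, post_pa)
print(t_within, p_within, t_between, p_between)
```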
Results
There was a significant increase in numerical TMT (TMTN) values within the yoga (t = −2.17; P < 0.03) and PA (t = −3.37; P < 0.001) groups following the interventional period. However, there was no significant difference in TMTN between the yoga and PA groups (t = 0.44; P = 0.66). There was a significant increase in alphabetical TMT (TMTA) values within the yoga group (t = 6.21; P < 0.001) but not in the PA group (t = 1.19; P = 0.234) following the interventional period [Table 1]. However, there was a significant difference in TMTA between the yoga and PA groups (t = 3.46; P = 0.001).
The results suggest that yoga improves executive function, attention, and working memory as effectively as a physical exercise intervention in adolescent schoolchildren.
Discussion
Results from this two-armed randomized controlled trial suggest equivalence between yoga practices and physical exercises with respect to visual scanning, cognitive processing, and speed. [6] However, there was a significant improvement in executive functioning with the yoga intervention compared to exercise. These results differ from those of studies on yoga that have used small study populations. [4,7] Many other studies have found yoga practices to be more effective than physical exercises for many physiological functions, such as muscle control, balance, and self-confidence, and on other measures that were not studied in this trial. [8,9]

While physical exercises may need a large play area and close supervision to prevent injuries, yoga practices can be carried out in a confined area, with little support and supervision once the principles are understood. Furthermore, it is possible for students with disability (especially visual) to practice yoga and gain confidence and stability in gait and other activities. [10] Further, for the elderly, yoga is a boon because sitting and lying-down asanas can be improvised to confer the same benefits as physical exercises.
Conclusion
On the whole, though yoga and physical exercise showed equal benefits in this study, the advantages of performing yoga cannot be overlooked.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest. | 2019-05-17T13:33:50.106Z | 2019-05-01T00:00:00.000 | {
"year": 2019,
"sha1": "8ec12be1d0dd1bc2bde0d389255853bc7265ce4b",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/ijoy.ijoy_61_18",
"oa_status": "GOLD",
"pdf_src": "WoltersKluwer",
"pdf_hash": "947d8a9a1ee57dda7406136553464a7d2a49357f",
"s2fieldsofstudy": [
"Education",
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
235795746 | pes2o/s2orc | v3-fos-license | Designing Recommender Systems to Depolarize
Polarization is implicated in the erosion of democracy and the progression to violence, which makes the polarization properties of large algorithmic content selection systems (recommender systems) a matter of concern for peace and security. While algorithm-driven social media does not seem to be a primary driver of polarization at the country level, it could be a useful intervention point in polarized societies. This paper examines algorithmic depolarization interventions with the goal of conflict transformation: not suppressing or eliminating conflict but moving towards more constructive conflict. Algorithmic intervention is considered at three stages: which content is available (moderation), how content is selected and personalized (ranking), and content presentation and controls (user interface). Empirical studies of online conflict suggest that the exposure diversity intervention proposed as an antidote to "filter bubbles" can be improved and can even worsen polarization under some conditions. Using civility metrics in conjunction with diversity in content selection may be more effective. However, diversity-based interventions have not been tested at scale and may not work in the diverse and dynamic contexts of real platforms. Instead, intervening in platform polarization dynamics will likely require continuous monitoring of polarization metrics, such as the widely used "feeling thermometer." These metrics can be used to evaluate product features, and potentially engineered as algorithmic objectives. It may further prove necessary to include polarization measures in the objective functions of recommender algorithms to prevent optimization processes from creating conflict as a side effect.
Introduction
Polarization is a condition where myriad differences in society fuse and harden into a single axis of identity and conflict (Iyengar & Westwood, 2015), and has been increasing for multiple decades in several democracies (Boxell et al., 2020; Draca & Schwarz, 2018). Comparative studies that examine polarization across countries argue that increasing polarization is a contributing factor to the democratic erosion seen in many countries, including Venezuela, Hungary, Turkey, and the United States (McCoy et al., 2018; Somer & McCoy, 2019). Polarization produces a feedback loop where diverging identities lead to less intergroup contact, which in turn leads to increased polarization, culminating in a hardened us-vs-them mentality that can contribute to the deterioration of democratic norms. Most conflict escalation models consider polarization a key part of the feedback dynamics that lead to violent conflict (Collins, 2012). Peace and security demand that we address situations of increasing polarization, which is why the international peacebuilding community concerns itself with polarization (Ramsbotham et al., 2016).
Scholars have long studied the relation between media and conflict, a tradition that now includes digital media (Hofstetter, 2021; Tellidis & Kappler, 2016), much of which is algorithmically selected and personalized. The algorithms that choose which items are shown to each user are called recommender systems, and all major news aggregators and social media platforms have such a system at their core. Modern recommender systems select content based on a variety of information sources such as the content of each item, a user's expressed preferences, their past consumption behavior, the behavior of similar users, user survey responses, fairness considerations, and more (Aggarwal, 2016). Note that "recommender" is a computer science term of art that covers all algorithmic content selection on the basis of implicit information, i.e. not as the result of a search query. This content might be presented as "recommended for you," labelled as "news" or "trends," or appear as a feed or timeline.
There has been intense interest in the question of whether recommender systems affect large-scale conflict dynamics. Most of the work on recommenders and polarization has taken place within the "filter bubble" paradigm and therefore explored the idea of exposure diversity (Helberger et al., 2018). Selective exposure is the idea that individuals will preferentially choose news sources and articles that are ideologically aligned (Prior, 2013). Because recommender systems respond to user interests, there is the possibility of a feedback loop where both recommendations and user interests progressively narrow. Indeed, simulations have demonstrated such polarization-increasing effects in stylized settings (Jiang et al., 2019; Krueger et al., 2020; Rychwalska & Roszczyńska-Kurasińska, 2018; Stoica & Chaintreau, 2019).
However, available evidence mostly disfavors the hypothesis that recommender systems are driving polarization through selective exposure, aka "filter bubbles" or "echo chambers" (Bruns, 2019; Zuiderveen Borgesius et al., 2016). Algorithmically personalized news seems to be quite similar for all users (Guess et al., 2018), is typically no less diverse than selections by human editors (Möller et al., 2018), and social media users consume a more diverse range of news sources than non-users (Fletcher & Nielsen, 2018). Most recently, Feezell et al. (2021) find no difference in affective polarization scores between Americans who get their news from conventional sources vs. social media.
Non-news personalized content could still be polarizing. Lelkes et al. (2017) compare the introduction of broadband access across U.S. states from 2004 to 2008 and find a small causal increase in affective polarization. Yet polarization began increasing in the U.S. decades before social media, and is increasing faster among individuals aged 65 and up, a demographic with low internet usage (Boxell et al., 2017). A cross-country analysis shows no clear relationship between polarization and increasing internet usage, as many OECD countries with high internet usage such as Britain, Sweden, Norway and Germany show decreasing affective polarization (Boxell et al., 2020).
Direct experimental intervention is probably the best way to study the causality of recommender systems. Allcott et al. (2020) paid U.S. users to stay off Facebook for a month and found that an index of polarization measures decreased by 0.16 SD (standard deviations). This may have been due to a decrease in exposure to polarizing posts, comments, and discussions, but this intervention also decreased time spent on news by 15 percent, and news consumption can itself be polarizing (Martin & Yurukoglu, 2016; Melki & Sekeris, 2019). By contrast, in a similar study in Bosnia and Herzegovina, users who deactivated Facebook during a genocide remembrance week showed greater polarization, a 0.24 SD increase on an index of ethnic polarization (Asimovic et al., 2021). The increase was smaller for users who had a more ethnically diverse offline social group, suggesting that Facebook was in this case providing depolarizing diversity. While these studies suggest causation, the effects are not unidirectional or straightforward.
Rather than asking if social media is driving polarization, it may be more productive to ask if social media interventions can decrease polarization. The main contribution of this paper is to propose several methods for building recommender systems that actively reduce polarization.
Note that polarization is conceptually distinct from radicalization. Polarization is a process that "defines other groups in the social and political arena as allies or opponents," while radicalization involves people who "become separated from the mainstream norms and values of their society" and may engage in violence (van Stekelenburg, 2014). There is a growing body of work studying the connection between recommender systems and radicalization (Baugut & Neumann, 2020; Hosseinmardi et al., 2020; Ledwich & Zaitsev, 2019; Munger & Phillips, 2020; Ribeiro et al., 2020), but this is methodologically challenging and has not yet established a robust causal link. While social media is plausibly involved in radicalization processes, the nature of this connection is complex and poorly understood. This work concerns polarization only, arguing that polarization itself is a bad outcome and a precursor to more extreme conflict.
In this paper I first make the moral argument for attempting to reduce polarization through recommender systems, framing it as a conflict transformation intervention. I then review definitions and metrics of polarization before considering depolarization interventions at three stages: which content is available (moderation), how content is selected and personalized (ranking), and content presentation and controls (user interface). The most commonly proposed depolarization intervention is exposure to ideologically diverse content, but this may not be effective because mere exposure does not necessarily depolarize, and sometimes polarizes further. While there are other promising approaches, such as exposure to civil counter-ideological content, these may not be sufficiently robust to the incredibly diverse conditions of real-world platforms. Instead, I propose continuously monitoring survey measures of affective polarization so as to drive recommender outcomes in a feedback loop. Polarization metrics can be used both at the managerial level and at the algorithmic level, potentially through reinforcement learning.
Depolarization as conflict transformation
There are complicated questions around intervening in societal conflicts through media, and additional concerns around the use of AI for this purpose. At worst, algorithmically suppressing disagreements could amount to authoritarian pacification. The Chinese social media censorship regime is an instructive example of democratically questionable interventions in the name of harmony (Creemers, 2017; G. King et al., 2017). Therefore, I frame the goal of depolarization as conflict transformation: not eliminating or resolving conflict but making that conflict better in some way, e.g. less prone to violence and more likely to lead to justice (Jeong, 2019). Indeed, it's not clear that platform users want to be "depolarized," and in any mass conflict situation there will be people who argue for escalation in the strongest moral terms. There is a corresponding line of argument that polarization is beneficial. Political theorists have argued that polarization reduces corruption by increasing accountability (Melki & Pickering, 2020) and generally helps differentiate political parties in a way that provides a meaningful choice to voters. In the mid-20th century, mainstream political scientists worried that America wasn't polarized enough (American Political Science Association, 1950). Importantly, fights for justice or accountability can also increase polarization, such as the American Civil Rights movement of the 1960s (D. S. King & Smith, 2008). There are parallels to the idea of a just war.
Yet polarization also has severe downsides. Polarization at the elite level causes "gridlock" that makes effective governance difficult (F. E. Lee, 2015), but contemporary polarization reaches far beyond lawmakers. The politicization of all spheres of society destroys social bonds at the family, community, and national levels (A. H. Y. Lee, 2020). By some measures, cross-partisan dislike in the U.S. is now considerably stronger than racial resentment, and has large effects on social choices such as hiring, university admissions, dating, family relations, friendships, and purchasing decisions (Iyengar et al., 2018). Polarization erodes the norms that constrain conflict escalation, leading to "morally outrageous" behavior on all sides (Deutsch, 1969), and is a key precursor to violence (Collins, 2012). Ultimately, polarization appears to be a causal factor in the destruction of democracies (McCoy et al., 2018; Somer & McCoy, 2019).
There is a tension between peace and justice. Actions that promote peace may make justice harder, and vice versa. Yet a democracy requires both, an observation which leads to the concept of a just peace (Fixdal, 2012). Instead of trying to eliminate conflict, we can try to understand what makes it good or bad. In an agonistic theory of democracy it is considered normal for political adversaries to be engaged in "opposing hegemonic projects," and conflict is not to be eliminated but "tamed" (Mouffe, 2002). Perhaps the most sophisticated understandings of conflict come from the peacebuilding tradition, which came into its own as an applied discipline after World War II. Fifty years ago, Deutsch described the difference between "constructive" and "destructive" conflict, with particular attention to the dynamics of escalation:

Paralleling the expansion of the scope of conflict there is an increasing reliance upon a strategy of power and upon the tactics of threat, coercion, and deception. Correspondingly, there is a shift away from a strategy of persuasion and from the tactics of conciliation, minimizing differences, and enhancing mutual understanding and good-will. And within each of the conflicting parties, there is increasing pressure for uniformity of opinion and a tendency for leadership and control to be taken away from those elements that are more conciliatory and invested in those who are militantly organized for waging conflict through combat. … It leads to a suspicious, hostile attitude which increases the sensitivity to differences and threats, while minimizing the awareness of similarities. This, in turn, makes the usually accepted norms of conduct and morality which govern one's behavior toward others who are similar to oneself less applicable. Hence, it permits behavior toward the other which would be considered outrageous if directed toward someone like oneself. (Deutsch, 1969)

On the other hand, Lederach describes how conflict is necessary for positive social change and how conflict transformation moves towards better conflict processes:

A transformational approach recognizes that conflict is a normal and continuous dynamic within human relationships. Moreover, conflict brings with it the potential for constructive change. Positive change does not always happen, of course. As we all know too well, many times conflict results in long-standing cycles of hurt and destruction. But the key to transformation is a proactive bias toward seeing conflict as a potential catalyst for growth. … A transformational approach seeks to understand the particular episode of conflict not in isolation, but as embedded in the greater pattern. Change is understood both at the level of immediate presenting issues and that of broader patterns and issues. (Lederach, 2014)

Or as Ripley puts it:

The challenge of our time is to mobilize great masses of people to make change without dehumanizing one another. Not just because it's morally right but because it works. (Ripley, 2021, p. 13)

Polarization is potentially an important intervention point in conflict dynamics because it is involved in escalation pathways through multiple routes. Polarization can be exploited for political mobilization through us-versus-them rhetoric, as has long been understood by activists (Layman et al., 2010) and other "political entrepreneurs" (Somer & McCoy, 2019), and as demonstrated by the fact that the most politically engaged citizens are found at the ideological extremes (Pew Research Center, 2014). However, this kind of exploitation further increases polarization. Indeed, polarization is involved in a variety of pernicious feedback loops: polarization leads to less intergroup contact, which causes polarization (A. H. Y. Lee, 2020); polarization is a precursor to violence, which causes polarization (Collins, 2012); polarization leads to selective information exposure, which causes polarization (Kim, 2015); and so on. These causal dynamics suggest that polarization could be an important intervention point in conflict escalation.
Conflicts that involve democratic erosion or violence are deeply troubling, to the point where conflict-transforming interventions may be warranted on human rights grounds. In the U.S., support for violence in service of political ends is increasing on both the left and the right (Diamond et al., 2020). In short, partisans are willing to violate democratic norms when polarization is high. A recent review concluded that "the goal of these [depolarizing] interventions is to move toward a system in which the public forcefully debates political ideals and policies while resisting tendencies that undermine democracy and human rights" (Finkel et al., 2020).
Measuring polarization
Quantitative measures are needed to evaluate polarization at scale. This is not merely a problem of measurement, but of definition. Polarization has been studied through differences in legislative voting patterns (Hare & Poole, 2014) and the language used in U.S. Congressional speech (Gentzkow et al., 2017). At the population level it has been operationalized as the increasing correlation of policy preferences over multiple issues (Draca & Schwarz, 2018; Kiley, 2017) and as increasing animosity towards the political outgroup, known as affective polarization (Iyengar & Westwood, 2015). All of these indicators show increasing polarization in the US over the last 40 years. Globally the results are more mixed, with some OECD countries experiencing increasing polarization and others showing flat or decreasing trends (Boxell et al., 2020; Draca & Schwarz, 2018).
Affective polarization has become a key concept in the analysis of American politics as "ordinary Americans increasingly dislike and distrust those from the other party" (Iyengar et al., 2018). Affective polarization is a consequence of partisan identity, which is a better model of contemporary political conflict than differences in issue positions (Finkel et al., 2020). It also has the advantage of being operationalizable through straightforward survey measures, such as the feeling thermometer, which is one of the oldest and most widely used polarization measures. This method asks respondents to rate their feeling about each political party on a scale from 0 (cold) to 100 (warm). The difference in scores, the net feeling thermometer, is taken to be a measure of affective polarization. This question has been asked on the American National Election Study since the 1970s, and is frequently used in studies of polarization and social media (Feezell et al., 2021; Levy, 2020; Suhay et al., 2018). While there are different measures of affective polarization, they are mostly highly correlated (Druckman & Levendusky, 2019).
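As a concrete illustration, the net feeling thermometer is straightforward to compute from raw responses; the following minimal Python sketch (with hypothetical ratings) scores individual respondents and averages over a sample:

    def net_feeling_thermometer(own_party: float, other_party: float) -> float:
        """Affective polarization for one respondent: warmth toward one's own
        party minus warmth toward the other party, each rated 0-100."""
        return own_party - other_party

    # Hypothetical (own-party, other-party) ratings for three respondents.
    responses = [(85, 20), (70, 45), (90, 10)]
    scores = [net_feeling_thermometer(own, other) for own, other in responses]
    print(sum(scores) / len(scores))  # 56.7 points on the 0-100 scale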
Affective polarization - negative feelings about the "other side" - has serious interpersonal consequences. Tellingly, 13 percent of Americans reported that they had ended a relationship with a family member or close friend after the 2016 election (Whitesides, 2017). Affective polarization correlates with dehumanization, "a significant step toward depriving individuals who belong to certain groups or categories of individual-level depth or complexity of feelings, motivation, or personality" (Martherus et al., 2021). It leads to the destruction of social bonds and increased outgroup prejudice across all facets of social and political life (Iyengar et al., 2018; A. H. Y. Lee, 2020; Somer & McCoy, 2019). In short, affective polarization now strongly colors the experience of daily life and relationships in multiple countries and has potentially grim consequences for democracy.
Algorithmic depolarization interventions
Recommender-based systems such as social media and news aggregators are more than just "algorithms," and an analysis of the polarization effects of this wide array of products and platforms could potentially be very broad. To narrow the scope, I will consider three key places where changes to recommender systems might be used for depolarization:

Which content is available (moderation). Much previous work on polarization has concerned itself with which content is allowed on a platform. For example, hate speech and incitements to violence are routinely removed through a combination of human moderators, machine learning classifiers, and user flagging.
How content is selected (ranking). Algorithmic content selection is essentially a prioritization problem, and all contemporary recommendation systems score each item based on a number of criteria. An intervention in content ranking addresses the core question of who sees what. Most of the approaches considered in this paper are modifications to content ranking.
How content is presented (interface). Selected items are presented to the user in some way, and the user can interact with the recommender system through predefined controls. Different presentations or different controls may be conducive to better or worse conflict.
It should immediately be said that there are many possible non-algorithmic social media depolarization interventions, such as community moderation (Jhaver et al., 2017). There are also hybrid approaches, like The Commons, which uses automated messages (social media bots) to find people who want to engage in depolarizing conversations, then connects them to human facilitators (Build Up, 2019). There are also a wide variety of depolarization strategies entirely outside of algorithmic media, such as approaches based in journalism, politics, or education, any of which may prove to be more effective. Nonetheless, this paper considers only algorithmic interventions in recommender systems because algorithmic content selection has been a central topic of concern, automation provides a path to scaling interventions, and the polarization properties of recommender algorithms are important in any case.
4.1 Removing polarizing content

Many kinds of content are now removed from platforms, including spam, misinformation, hate speech, sexual material, criminal activity, and so on (Halevy et al., 2020). While the removal of violent material and incitements to violence may be particularly important in the context of an active conflict (Schirch, 2020), the removal of less extreme material is a blunt approach that may not be justified as a mass depolarization intervention. This kind of content removal is often called "moderation," but it's important to distinguish between community moderation and algorithm-assisted moderation at scale. At the level of an online community or discussion group, volunteer moderators are able to set and enforce norms that lead to productive discussion of polarized topics, as a study of the r/ChangeMyView subreddit shows (Jhaver et al., 2017). Such studies of the micro-dynamics of conflict provide important clues for potential depolarization interventions. Moderators remove posts and suspend accounts, but they also state reasons for their actions, take part in discussions about appropriate policy, and consider appeals.
Platform moderation, by contrast, operates at vast scale to identify unwanted content through a combination of paid moderators and machine learning models. It is acontextual, impersonal, and difficult to appeal (York & Zuckerman, 2019). The low rates of offending content mean that true positives (correctly removed material) may be vastly outnumbered by false positives (incorrectly removed material) unless automated classifiers can be made unrealistically accurate (Duarte et al., 2017). Further, content removal is concerning from a freedom of expression perspective, and the standards for removal are widely contested (Keller, 2018). Facebook alone is "most certainly the world's largest censorship body" (York & Zuckerman, 2019).
Given these concerns, there should be a high bar for automated content removal as a mass depolarization intervention. What should be the standard for unacceptably polarizing material? We could algorithmically remove all angry political comments, but do we want to? Removing all material which might intensify conflict would leave the public sphere arid, authoritarian, and devoid of any real politics.
4.2 Increasing exposure diversity

Most prior work on the relationship between polarization and social media has been based on the concept of exposure diversity. The most frequently proposed fix is to algorithmically increase the diversity of social media users' feeds (Bozdag & van den Hoven, 2015; Celis et al., 2019; Helberger et al., 2018), and a variety of recommender diversification algorithms have been developed (Castells et al., 2015). This is intuitively appealing, as inter-group contact has been demonstrated to reduce prejudice (Pettigrew & Tropp, 2006). This approach presupposes that a lack of diversity in online media content is causing polarization, which is questionable as discussed above. "Diversity" is also poorly defined, and may refer to source diversity, topic diversity, author diversity, audience diversity, and more. A review of media diversity by Loecherbach et al. (2020) notes that "research on this topic has been held back by the lack of conceptual clarity about media diversity and by a slow adoption of methods to measure and analyze it." Further, the causal connection between exposure diversity and polarization is complex, and under some conditions exposure to outgroup content can actually increase polarization (Bail et al., 2018; Paolini et al., 2010; Rychwalska & Roszczyńska-Kurasińska, 2018; Taber & Lodge, 2006).
Yet increasing exposure diversity can work, at least somewhat. One experiment tested the effect of asking US Facebook users to subscribe to ("like") up to four liberal or conservative news outlets, measuring changes in affective polarization through a survey two weeks later. This level of exposure to outgroup information decreased affective polarization by about 1 point on a 100-point scale (Levy, 2020). By comparison, the rate of increase in affective polarization in the U.S. since 1975 is estimated at 0.6 points per year (Finkel et al., 2020), so a 1-point decrease offsets roughly 20 months of that secular trend. Rescaled to the same 100-point scale, the previously discussed experiment of leaving Facebook for a month resulted in about a 2 point decrease (Allcott et al., 2020, p. 652), though only on issue-based rather than affective measures. All of these estimates should be considered quite rough.
This demonstrates that increased exposure diversity can be a useful intervention point for depolarization, but the effect so far has been modest. Are different or better approaches possible? For example, Levy (2020) tested only news diversity, meaning professional journalism. Polarization may turn out to be more sensitive to non-news content or user comments.
4.3 Recommending civil arguments

Several studies have attempted to determine the conditions under which polarization and depolarization occur. Kim & Kim (2019) found that those who read uncivil comments arguing for an opposing view rated themselves as closer to ideological extremes on a post-exposure survey than those who did not. Civility may not be depolarizing per se, but incivility does seem to be polarizing. Suhay et al. (2018) similarly show that comments that negatively describe political identities (e.g., "Liberals are ignorant") increase polarization as measured by the feeling thermometer question. This effect also appears in the context of partisan media sources (e.g., MSNBC, Fox), where "incivility [of] out-party sources affectively polarizes the audience" (Druckman et al., 2019).
It seems likely that "civility" and "partisan criticism" can be algorithmically scored through existing natural language processing techniques, drawing on previous work classifying hate speech and harassment. All are conceptually close to the "toxicity" operationalized by contemporary comment classification models (Noever, 2018). While these models are mostly used for moderation (that is, removing offending comments), they could also provide a "civility" signal that is incorporated into recommender item ranking. Twitter has experimented with this idea (Wagner, 2019), but I am not aware of any production recommender that incorporates a civility signal in content ranking (as opposed to content moderation).
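As a sketch of how such a signal might enter ranking, consider a score that demotes, rather than removes, content a toxicity classifier flags. The weights and field names here are hypothetical, and a production ranker would combine many more signals:

    def rank_items(items, civility_weight=0.3):
        """Order candidate items by predicted engagement minus an incivility
        penalty; uncivil items are down-ranked rather than removed."""
        def score(item):
            return item["p_engagement"] - civility_weight * item["p_toxicity"]
        return sorted(items, key=score, reverse=True)

    items = [
        {"id": "a", "p_engagement": 0.9, "p_toxicity": 0.8},  # engaging but uncivil
        {"id": "b", "p_engagement": 0.7, "p_toxicity": 0.1},  # civil
    ]
    print([item["id"] for item in rank_items(items)])  # ['b', 'a']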
In addition to demoting uncivil content, it is possible to promote civil content. Experimental evidence shows that ranking high-quality comments at the top can positively alter the tone of subsequent discussion (Berry & Taylor, 2017). In effect, this intervention hopes to model respectful disagreement. This may not work if there are not many natural examples of productive inter-group conversation. In particular, there may be a lack of journalism content that takes a depolarizing approach to reporting on controversial issues (Hautakangas & Ahva, 2018; Prior & Stroud, 2015; Ripley, 2018).
Of course, uncivil language can be necessary and important. We certainly don't want an algorithmic media system that redirects attention away from anyone raising their voice. Indeed, several theories of democracy require such confrontation, such as critical approaches (Helberger, 2019) or agonistic models (Mouffe, 2002). Hence, there is a tension between encouraging expression and intervening to make the conversation more productive; this is the art of (algorithmic) mediation.
4.4 Priming better interactions

Given a particular set of items selected for a user, it may be possible to present them in a way that encourages more productive conflict. Language seems particularly important in political disagreements. Intriguingly, replacing the usual "like" button with a "respect" button increased the number of clicks on counter-ideological comments; that is, people were more likely to "respect" something they disagreed with than to "like" it (Stroud et al., 2017).
While civility norms have been shown to contribute to successful online discussions of polarized topics (Jhaver et al., 2017), it is difficult to automate the promulgation and enforcement of such norms. One intriguing possibility is to change the content of automated messages, such as the message welcoming someone to a group. In a large scale experiment on r/science on Reddit, adding a short note explaining what types of posts will be removed and noting that "our 1200 moderators encourage respectful discussion" greatly reduced the rate at which newcomers violated community norms (Matias, 2019).
In a sense, changing user behavior is the strongest depolarization intervention. This is not easy to accomplish, but these studies demonstrate that even small user interface changes can have profound effects.
Learning to depolarize
The approaches discussed above are justified on the basis of sociological theory, from results in laboratory settings, or through modest platform experiments. Real platforms are enormous, diverse, and dynamic environments, and ecological validity is a serious problem for the development of social media interventions (Griffioen et al., 2020). It is likely to be difficult to predict which depolarization interventions will succeed. The best approach will vary between subgroups, in different contexts, and over time.
Effective management of polarization will therefore depend on continual monitoring of polarization outcomes by platform operators. Affective polarization measures may prove to be the most useful category of metrics, in part because they are agnostic to the type of content that drives polarization. More cognitive measures of polarization, such as issue position surveys (Draca & Schwarz, 2018; Kiley, 2017), may be less diagnostic for social media, where many interactions will not involve discussions of substantial policy preferences.
Platforms already monitor various non-engagement measures and incorporate them into recommender design and ranking (Stray, 2020). Facebook asks users whether specific posts led to a meaningful social interaction on or off the platform. This is a construct from social psychology that appears to be similarly interpretable across cultures (Litt et al., 2020). YouTube similarly incorporates user satisfaction ratings obtained by asking users what they thought of specific recommendations (Zhao et al., 2019). Such metrics are used to drive product choices at the managerial level by selectively deploying changes, a form of A/B testing. They are also incorporated directly into the predictive models underlying item ranking, as the next section describes, but the first and most fundamental depolarization intervention is simply to monitor for actual polarization outcomes, rather than betting on theory.
5.1 Optimizing for depolarization

Survey responses can be used to train recommender ranking algorithms, for example by building a model that predicts whether an item is going to lead to a positive survey answer for a particular user in a particular context. This is, technically speaking, similar to predicting which items will result in a click. Optimizing for predicted survey responses is an important technique in the nascent field of recommender alignment, the practice of getting recommender systems to enact human values (Stray, 2021; Stray et al., 2020).
The feeling thermometer has been used experimentally to evaluate the polarizing effect of seeing a post, by taking the difference between treatment and control groups (Kim & Kim, 2019; Suhay et al., 2018). If it proves possible to know whether individual posts or conversations are polarizing, it should be possible to build a model to predict the polarization effect of showing novel posts. Similar classifiers are already in use to detect misinformation, hate speech, bullying, etc. One plausible technique is the TIES model, which takes into account not only the text and image content of a specific post but the sequence of interactions around it, including discussions in comments, likes, shares, etc. (Noorshams et al., 2020). In the context of an online discussion, the goal would be to determine whether users are having a productive exchange of views or a divisive argument, so the history of interactions carries significant information.
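A minimal version of such a predictor can be sketched as a regression of experimentally measured thermometer shifts on item features. Everything below is illustrative: the features and training values are invented, and a system like TIES would instead use learned representations of the content and its interaction sequence:

    import numpy as np

    # Hypothetical training data: one row per experimental item.
    # Features: toxicity score, outgroup-criticism score, log(1 + shares).
    X = np.array([[0.9, 0.8, 3.2],
                  [0.1, 0.0, 1.5],
                  [0.5, 0.6, 2.8],
                  [0.2, 0.1, 4.0]])
    # Measured effect: treatment-minus-control shift in the net feeling
    # thermometer after exposure (positive = more polarized).
    y = np.array([2.1, -0.3, 1.0, 0.2])

    # Fit a linear model with an intercept by least squares.
    X1 = np.hstack([X, np.ones((len(X), 1))])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

    def predicted_polarization(features):
        return np.append(features, 1.0) @ coef

    print(predicted_polarization([0.7, 0.5, 2.0]))  # estimated effect of a new post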
Alternatively, affective polarization measures could be used longitudinally, perhaps by asking a panel of users to respond to a feeling thermometer question daily or weekly, thereby measuring attitudes over time. When compared to a control group, this amounts to a difference-in-differences design, which gives robust causal estimates under certain assumptions (Angrist & Pischke, 2009, Chapter 5). That is, it should be possible to learn the actual polarizing effects of selecting different distributions of items. However, using longitudinal data to drive recommendation systems toward selecting depolarizing content is technically challenging due to the much longer time scale and higher level of abstraction as compared to feedback on individual items.
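The difference-in-differences estimate itself is simple arithmetic; a sketch with hypothetical panel means:

    def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
        """Effect estimate under the parallel-trends assumption: the treated
        panel's change minus the control panel's change."""
        return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

    # Hypothetical mean feeling-thermometer gaps (0-100 scale) before and
    # after a ranking change, for exposed users versus a held-out control.
    effect = diff_in_diff(treat_pre=56.0, treat_post=54.5,
                          ctrl_pre=55.8, ctrl_post=56.2)
    print(effect)  # -1.9: the change is estimated to reduce polarization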
Reinforcement learning (RL) algorithms may be the most general and powerful approach to learning patterns of recommendation which optimize long term outcomes (Ie et al., 2019; Mladenov et al., 2019). In principle, affective polarization survey measures could be used as a reward signal for reinforcement learning-based recommenders. However, this sort of learning from sparse survey feedback has not yet been demonstrated. Additional algorithmic development will be necessary before longitudinal polarization measures can be incorporated into content selection algorithms, but the necessary technical research is underway because other sparse, long term signals such as user subscriptions have immediate business value.
In other words, the same methods that make it possible to predict what movies to show someone to get them to subscribe may also make it possible to learn which patterns of interaction increase or reduce polarization.
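Schematically, the change amounts to folding a polarization penalty into whatever reward the recommender already maximizes. The weights and inputs below are entirely hypothetical:

    def episode_reward(engagement, survey_quality, polarization_shift,
                       w_engage=1.0, w_quality=0.5, w_polarize=2.0):
        """Composite RL reward for one recommendation episode.
        polarization_shift is a sparse estimate of the user's change on an
        affective polarization measure; positive shifts are penalized so the
        policy cannot profit from divisive content."""
        return (w_engage * engagement
                + w_quality * survey_quality
                - w_polarize * polarization_shift)

    print(episode_reward(engagement=0.8, survey_quality=0.6,
                         polarization_shift=0.3))  # 0.5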
5.2 Unintended consequences and the necessity of specification

The effective use of sociological metrics is complicated and can fail in a number of ways, regardless of whether the metric is used by people or algorithms. Using reinforcement learning to attempt large scale political intervention should be a particularly alarming prospect. While there is a strong moral case for designing recommender systems to depolarize, unintended consequences could swamp any positive effects.
A metric is an operationalization of some theoretical construct, and might be an invalid measure for a variety of reasons (Jacobs & Wallach, 2019). Even a well-constructed metric almost never represents what we really care about: clickbait lies entirely in the difference between "click" and "value." When used as targets, metrics suffer from a number of problems involving gaming and spurious correlations, which can be understood in causal terms as variations of Goodhart's law (Manheim & Garrabrant, 2018). It is particularly important to undertake ongoing qualitative methods and user research, to know whether current metrics are adequately tracking the intended goals, and to learn of whatever else may be happening.
Metrics often fail when used in management contexts because they are irrelevant, illegitimate, gamed, or aren't updated as the context changes (Jackson, 2005). Using metrics to train a powerful optimizing system introduces further concerns (Thomas & Uminsky, 2020). Different effects for different subgroups may be a particular problem for recommender systems, which typically optimize average scores (Li et al., 2021). While it's always useful to monitor for slippage between a metric's intent and what it is actually measuring, this is particularly important when a measure becomes the target of society-wide AI optimization (Stray, 2020). If we choose to apply reinforcement learning to polarization metrics, those metrics will require continuous evaluation.
On the other hand, not using polarization measures in algorithmic content selection may be far worse. Optimization algorithms which do not penalize polarization measures might learn, as humans do, that polarization can be exploited for engagement. Or they might merely increase conflict as an agnostic side effect, which is no better. In general, under-specification is a serious hazard in the creation of machine learning models (D'Amour et al., 2020). If we do not specify the intended effect of a recommender system on polarization, we should not be surprised to find unexpected outcomes.
Conclusion
Polarization is a hardening division of society into "us" vs. "them." It interacts with a number of conflict feedback processes and eventually leads to democratic erosion and violence (McCoy et al., 2018; Somer & McCoy, 2019). The goal of a depolarization intervention is not to suppress conflict, but to have better conflict that moves towards constructive societal change (Deutsch, 1969; Jeong, 2019; Lederach, 2014; Ripley, 2021). While all societies face complex tensions between peace and justice, depolarization interventions may ultimately be justified on human rights grounds (Finkel et al., 2020), just as other peacebuilding interventions are.
Available evidence suggests that social media usage is not driving increases in polarization at the country level (Boxell et al., 2017, 2020). In particular, there is little empirical support for the idea that personalization is reducing exposure to diverse information (Guess et al., 2018; Zuiderveen Borgesius et al., 2016). Nonetheless, there is some evidence that social media-based interventions can reduce polarization among users. A recent experimental test of increasing news diversity produced a small decrease in polarization (Levy, 2020). Paying users to stay off Facebook for a month produced small decreases in issue polarization, though not affective polarization (Allcott et al., 2020).
Moderation, the removal of unwanted content, can be important especially in the context of a violent conflict (Schirch, 2020), but it is probably too blunt an instrument for depolarization. Content ranking defines what each user sees and is the most general intervention point. While exposure to diverse perspectives can actually increase polarization (Bail et al., 2018), increased exposure diversity does depolarize in some contexts (Levy, 2020; Pettigrew & Tropp, 2006). Recommenders could augment diversity by de-prioritizing content that has been shown to be polarizing, including uncivil presentations of outgroup opinions (Kim & Kim, 2019) and criticism of partisan identities (Suhay et al., 2018). Content presentation and user interface may also have depolarization effects, as has been shown in experiments changing "like" to "respect" (Stroud et al., 2017) and adding a message reminding users of community norms (Matias, 2019).
Yet none of the above approaches directly target the outcome of interest. Any depolarization method based on selecting content according to pre-existing theory may prove unable to cope with the radically diverse and dynamic contexts of a real recommender system. The solution is to directly and continuously measure polarization outcomes.
Existing polarization measures, particularly affective polarization measures, have been used to evaluate the effect of encountering different types of comments on news articles (Kim & Kim, 2019), and the same methods should generalize to other types of items, including user posts, discussion threads, and so on. Such survey data can be used to evaluate recommender system changes and make deployment decisions. It can also be used to train polarization prediction models, much as existing recommender models predict meaningful social interactions and other survey results (Stray, 2020). Ultimately, polarization survey feedback could be used as a reward signal for reinforcement learning-based recommendation algorithms. This powerful emerging approach has the potential to learn what actually depolarizes, and continuously adapt to changes. Optimizing for such a signal may have unintended harmful consequences, so such a system would need to be continuously monitored in other ways, such as qualitative studies. In any case it may prove necessary to incorporate polarization measures into recommender systems to prevent the creation of conflict as a side effect of optimization (D'Amour et al., 2020).
It is unknown whether this sort of feedback-driven intervention would succeed in reducing the average dislike of the outgroup as compared to doing nothing, or more broadly whether intervening in platform recommenders will be an effective depolarization strategy within the complex and dynamic media ecosystem of any particular community, but there is reason to suspect this is possible. At the very least, the collection of individual-level affective polarization survey data provides a managerial incentive in the direction of depolarization. Nonetheless, the use of affective polarization survey data to drive platform recommender systems is a theoretically grounded, technically feasible, and potentially robust strategy for a social media depolarization intervention which deserves further study. | 2021-07-13T01:15:58.610Z | 2021-07-11T00:00:00.000 | {
"year": 2021,
"sha1": "370c499bc5e4230f96133f3d57d0a71cb8093377",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5210/fm.v27i5.12604",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "370c499bc5e4230f96133f3d57d0a71cb8093377",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
182947512 | pes2o/s2orc | v3-fos-license | Two-Week Aflibercept or Erlotinib Administration Does Not Induce Changes in Intestinal Morphology in Male Sprague–Dawley Rats But Aflibercept Affects Serum and Urine Metabolic Profiles
Gastrointestinal toxicity is a frequently observed adverse event during cancer treatment with traditional chemotherapeutics. Currently, traditional chemotherapeutics are often combined with targeted biologic agents. These biologics, however, possess a distinct toxicity profile, and they may also exacerbate the adverse effects of traditional chemotherapeutics. In this study, we aimed to characterize the gastrointestinal and metabolic changes after a 2-week treatment period with aflibercept, an antiangiogenic VEGFR decoy, and with erlotinib, a tyrosine-kinase inhibitor. Male rats were treated either with aflibercept or erlotinib for 2 weeks. During the 2-week treatment period, the animals in the aflibercept group received two subcutaneous doses of 25 mg/kg aflibercept. The erlotinib group received 10 mg/kg of erlotinib by oral gavage every other day. The control groups were treated similarly but received either saline injections or oral gavage of water. Intestinal toxicity was assessed by measuring intestinal permeability and by histological analyses of intestinal tissues. Metabolic changes were measured with 1H nuclear magnetic resonance in serum and urine. Neither aflibercept nor erlotinib induced changes in intestinal permeability or intestinal tissue morphology. However, aflibercept treatment resulted in stunted body weight gain and altered choline, amino acid, and lipid metabolism. Two-week treatment with aflibercept or erlotinib alone does not induce observable changes in gastrointestinal morphology and function. However, the observed aflibercept treatment-related metabolic changes suggest alterations in intestinal microbiota, nutrient intake, and adipose tissue function. The metabolic changes are also interesting with respect to the systemic effects of aflibercept and their possible associations with adverse events caused by aflibercept administration.
Introduction
Gastrointestinal (GI) toxicity is a common and well-known adverse effect of chemotherapy [1]. Chemotherapeutic drugs such as 5-fluorouracil (5-FU) and irinotecan are associated with a variety of GI symptoms such as diarrhea, abdominal pain, weight loss, and infections. Overall, these symptoms may significantly affect treatment outcomes [1]. Recently, the clinical outcomes of cancer treatment have improved with the introduction of targeted biologic agents, which, however, possess a distinct profile of adverse events that can differ from those of traditional cytotoxic agents [2]. In addition, combined with chemotherapy, biologics can also exacerbate the adverse effects associated with traditional chemotherapeutics [2].
Aflibercept is an antiangiogenic biologic agent that inhibits tumor growth by blocking the formation of new blood vessels [3]. New blood vessel formation requires several circulating growth factors such as vascular endothelial growth factors (VEGFs) and placental growth factors (PlGFs) that initiate angiogenesis by binding to their receptors (VEGFRs). Aflibercept acts as a soluble VEGFR decoy that binds VEGF-A, VEGF-B, PlGF-1, and PlGF-2 and thus blocks them from activating the angiogenesis cascade [3]. Aflibercept (as ziv-aflibercept, trade name Zaltrap) is approved in combination with 5-FU, leucovorin, and irinotecan (FOLFIRI) for the treatment of metastatic colon cancer (mCRC) that is resistant to or has progressed following an oxaliplatin-containing regimen [4]. Clinical studies have shown that combining aflibercept with the FOLFIRI regimen improves overall survival in patients with mCRC compared to FOLFIRI alone [5]. However, in a phase III study by Van Cutsem et al. (2012), better clinical outcomes were also accompanied by a higher incidence of grade III and IV diarrhea in the aflibercept arm compared to the placebo arm [6]. Folprecht et al. (2016) also observed an increased incidence of grade III and IV diarrhea but not any increases in efficacy when aflibercept was added to first-line treatment with oxaliplatin and 5-FU/folinic acid (mFOLFOX6) [7]. These observations suggest that aflibercept can exacerbate the GI toxicities associated with chemotherapy.
The epidermal growth factor receptor (EGFR) pathway mediates cell proliferation and replication, and EGFR is widely expressed throughout the body. Tumors frequently overexpress EGFR, making the inhibition of the EGFR pathway an important mechanism in the treatment of several different cancers such as colorectal, lung, and pancreatic cancer [8][9][10][11]. Cancer treatment regimens with EGFR pathway inhibition are based on either monoclonal antibodies against EGFR (panitumumab; a biologic agent) or on tyrosine-kinase inhibitors that selectively block EGFR activity (erlotinib; a small molecule drug). However, treatment with these agents is associated with adverse effects such as skin rash and diarrhea, which may lead to dose reductions and treatment cessations [9,12].
The primary aim of this study was to investigate the GI effects of the antiangiogenic biologic agent aflibercept and the small molecule EGFR pathway inhibitor erlotinib in rats by measuring possible changes in intestinal permeability and performing a histological examination of the intestinal tissues after a 2-week drug treatment period. We hypothesized that any alterations in GI function and possible toxicities might also be associated with changes in the global metabolome. Thus, to further assess the potential adverse effects of these drugs, we conducted a metabolic profiling of urine and serum by 1H nuclear magnetic resonance (NMR) spectroscopy.
Ethical Statement
The animal experiments conformed to the European (Directive 2010/63/EU) and Finnish (Act 2013/497 and Decree 2013/564) regulations on the protection of animals used for scientific purposes and were approved by the National Ethics Committee for Animal Procedures in Finland (project license ESAVI/114/04.10.07/2015).
Animals
A total of 48 male Hsd:Sprague-Dawley (SD) rats (Rattus norvegicus; Envigo, Udine, Italy) aged 6 weeks were used in this study. The animals were obtained and acclimatized for 18 days before the start of the experimental protocol. Health reports from the animal supplier indicated that the rats were free of known viral, bacterial, and parasitic pathogens. Upon arrival, the rats were housed under specific pathogen-free laboratory conditions using artificial lighting with a 12-hour light/dark cycle with lights on at 6 am in a temperature- (22°C ± 2°C) and humidity- (55% ± 15%) controlled room. The animals were housed in social groups of four rats and kept in stainless-steel open cages (59.5 × 38.0 × 20 cm) with solid bottoms, filled with aspen chips as bedding (Tapvei, Harjumaa, Estonia) and a cardboard tube for environmental enrichment. All rats were allowed free access to drinking tap water delivered in polycarbonate bottles and to a maintenance diet given ad libitum consisting of a rat chow (2018 Teklad Global 18% Protein Rodent Diet, Harlan Laboratories, Madison, WI). The rat colony's health status was monitored by a health monitoring program in the animals' holding room according to the Federation of European Laboratory Animal Science Associations guidelines.
Experimental Protocol
At the beginning of the protocol, the rats were 9 weeks old and their average body weight was 282 ± 14 g. The animals were block randomized to treatments and were divided into four experimental groups: 1) aflibercept control, 2) aflibercept, 3) erlotinib control, and 4) erlotinib (n = 12 per group). Baseline intestinal permeability was assessed in vivo (Measurement of Intestinal Permeability), after which the animals were administered the experimental drugs (Drug Administration). Intestinal permeability was measured again, and urine was collected for metabolic profiling after a 14-day observation period before euthanasia. For euthanasia, the rats were fully anesthetized using isoflurane (Vetflurane 1000 mg/g, Virbac, Suffolk, UK) and subsequently exsanguinated by cardiac puncture (blood sampling for metabolic profiling) and by severing of the aorta. During the 2-week period after the first drug administrations, the animals were weighed and checked for diarrhea every other day.
Drug Administration
Rats in the aflibercept group received two subcutaneous doses of 25 mg/kg aflibercept (injection volume approx. 0.3 ml) on days 0 and 7. The vehicle for aflibercept contained sodium phosphate, sodium citrate, sodium chloride, 200 mg/ml sucrose, and 1 mg/ml polysorbate 20 (provided by Sanofi-Aventis, France). Saline solution (0.9% NaCl; 1 ml/kg, injection volume approx. 0.3 ml) was used for the aflibercept control group. All injections were administered under isoflurane anesthesia. Rats in the erlotinib group were administered every other day (starting on day 0; seven doses in total) with 10 mg/kg erlotinib (Tarceva 25 mg, provided by Roche, UK) dissolved in tap water by oral gavage. The erlotinib control group was gavaged with tap water.
Measurement of Intestinal Permeability
The intestinal permeability was assessed with iohexol (Omnipaque 300, 647 mg iohexol/ml, GE Healthcare, Oslo, Norway). The animals were weighed and gavaged with 1 ml of solution containing
647 mg iohexol. After iohexol administration, the animals were immediately placed in individual metabolic cages for urine collection. After 24 hours, the amount of collected urine was measured, and the urine was stored at −80°C for later analysis. Samples were discarded if fecal contamination or incomplete urine collection was observed.
Analysis of Iohexol
The urine concentration of iohexol was measured by enzyme-linked immunosorbent assay according to the manufacturer's instructions (BioPAL Inc., Worcester, MA). The percentage of excreted iohexol was calculated using the following equation:
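In standard form (assuming complete 24-hour urine collection), this is:

    Excreted iohexol (%) = (urinary iohexol concentration × collected urine volume) / administered iohexol dose × 100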
Blood Sampling
The blood samples from the heart were collected in serum separation tubes [VenoSafe Clot Act. (Z), Terumo Europe, Leuven, Belgium] and centrifuged at 1500g for 10 minutes at 4°C. The separated serum was collected and stored at −80°C for later analysis.
Metabolic Profiling of Serum and Urine
For 1-mm proton nuclear magnetic resonance (1H NMR) analysis, 20 μl of serum was mixed with 2.5 μl of sodium-3′-trimethylsilylpropionate-2,2,3,3-d4 (TSP, 2.5 mM) in deuterium oxide (D2O). For urine samples, 2 μl of a phosphate buffer solution (0.06 M Na2HPO4/0.04 M NaH2PO4, pH 7) and TSP (2.5 mM) were added to overcome the pH variation problem. A total of 20 μl of the mixture of each sample was then transferred individually into a 1-mm high-quality NMR tube. NMR spectra were recorded, at 310 K, on a Bruker Avance 600 spectrometer operating at 600.13 MHz with a 1-mm 1H/13C/15N TXI probe. All spectra were acquired using a standard one-dimensional pulse sequence with water suppression. Water presaturation was applied for 1 second during the recycling delay for solvent signal suppression. Each free induction decay was zero-filled to 64 k points and multiplied by a 0.3-Hz exponential line broadening function before Fourier transformation. All spectra were manually phased and baseline corrected, and chemical shifts were referenced internally to alanine (at δ = 1.478 ppm) using MestReNova 8.1 (Mestrelab Research S.L., Spain). The spectra were binned into 0.01-ppm buckets between 0.5 and 9.5 ppm, mean centered for multivariate analysis, and normalized to the total aliphatic spectral area (0.5-4.3 ppm) to eliminate differences in total metabolite concentration. The relative concentrations of 66 and 111 spectral regions for serum and urine, respectively, selected for their metabolite enrichment, were exported to MATLAB (MathWorks, 2013a) for semiautomated in-house integration and peak-fitting routines. Using literature data and the Chenomx Profiler module (Chenomx NMR 7.6), the NMR peaks were assigned to their corresponding metabolites. Multivariate analysis was restricted to spectral regions containing contributions of a single or at most two metabolites. Data were Pareto-scaled before analysis using orthogonal projection to latent structures-discriminant analysis (OPLS-DA). Score plots were used to visualize the separation of the groups, and variable importance in projection (VIP) values >1.0 from the OPLS-DA models were used to determine which spectral variables most significantly contributed to the separation of the groups on the score plot.
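For illustration, the binning and scaling steps described above can be sketched in a few lines of Python (NumPy here, although the original workflow used MATLAB); the ppm limits and bucket width follow the values stated in the text, while the input arrays are assumed:

    import numpy as np

    def bin_and_normalize(ppm, intensity, width=0.01, lo=0.5, hi=9.5):
        """Bin a 1H NMR spectrum into fixed-width ppm buckets and normalize
        to the total aliphatic spectral area (0.5-4.3 ppm)."""
        edges = np.arange(lo, hi + width, width)
        binned, _ = np.histogram(ppm, bins=edges, weights=intensity)
        centers = (edges[:-1] + edges[1:]) / 2
        aliphatic_area = binned[centers <= 4.3].sum()
        return centers, binned / aliphatic_area

    def pareto_scale(X):
        """Mean-center each spectral variable and divide by the square root
        of its standard deviation, as is conventional before OPLS-DA."""
        centered = X - X.mean(axis=0)
        return centered / np.sqrt(X.std(axis=0))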
Tissue Collection
Following euthanasia, tissue samples (1 cm) of jejunum and colon were harvested and flushed with cold PBS to remove any intestinal content. Tissue samples were then fixed in 10% neutral buffered formaldehyde (Sigma-Aldrich, St. Louis, MO) for 24-48 hours, embedded in paraffin, sectioned at 4-μm thickness, and stained with hematoxylin-eosin.
Histological Analysis
Jejunum and colon samples were evaluated, and mucosal lesions were graded separately. Histological analysis was performed as previously described [13]. Briefly, a total of six change categories were assessed from the jejunal samples: villous stunting, villous epithelial injury, crypt hyperplasia, crypt epithelial injury, Paneth cell injury, and leucocyte infiltration in the lamina propria. Six comparable change categories were analyzed from the colon: surface epithelial injury, crypt hyperplasia, crypt dilatation and distortion, crypt epithelial injury, crypt atrophy (destruction), and leucocyte infiltration in the lamina propria. Each change category was graded using a four-tier scale: minimal (1), mild (2), moderate (3), and marked (4). Histopathological assessment was done in a partly blinded manner. The reader of the slides (J.L.) was aware of the experimental design and of which animals were in the same group but was unaware of the group identities.
Data Analysis
Normality of the data sets was tested with the Kolmogorov-Smirnov test, and based on this analysis, statistical differences between treatment groups and their respective control groups were tested with an independent-samples t test (SPSS Statistics 22.0, IBM, Armonk, NY). All data are expressed as means ± standard deviations. Differences between groups were deemed significant when P values < .05.
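The equivalent test sequence in, for example, SciPy (the group values are hypothetical):

    import numpy as np
    from scipy import stats

    treated = np.array([12.1, 11.8, 13.0, 12.5])  # e.g., % body weight gain
    control = np.array([16.5, 15.9, 16.2, 16.1])

    # Kolmogorov-Smirnov test for normality on standardized values
    # (standardizing first makes the p-value approximate, Lilliefors-style).
    z = (treated - treated.mean()) / treated.std(ddof=1)
    print(stats.kstest(z, "norm"))

    # Independent-samples t test between treatment and control.
    print(stats.ttest_ind(treated, control))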
Drug Response
Aflibercept treatment stunted body weight gain during the 2-week experiment (Figure 1A). At the end of the experiment, rats that received aflibercept had gained 12.3% ± 3.1% of their initial body weight, which was significantly (P = .03) less than the rats in the aflibercept control group (16.2% ± 5.1%). There were no differences in body weight gain between rats that received erlotinib and their respective controls (erlotinib control: 15.4% ± 1.7%; erlotinib: 15.4% ± 1.2%) (Figure 1B). In both treatment groups, none of the animals showed any signs of diarrhea during the experiment.
Global Metabolome Variations
Aflibercept serum PCA showed two separate clusters (the aflibercept treatment group and the control group), indicating that aflibercept induced a metabolic shift in the rats' sera (Figure 2A). The erlotinib groups clustered together, suggesting no metabolic differences between the treatment and control group. Urine PCA revealed three distinct clusters (aflibercept, aflibercept control, and both the erlotinib treatment and control group), suggesting that aflibercept, but not erlotinib, also caused changes in the urinary metabolome (Figure 2B). OPLS-DA discriminated well between the treatment groups and their respective controls (Figures 3 and 4). Relevant metabolites that contributed the most to the observed discrimination, and the metabolites' levels compared to the control groups, are visualized in Figures 3 and 4. Based on this analysis, the metabolic differences in the aflibercept group were mostly driven by decreased serum levels of multiple amino acids (tryptophan, phenylalanine, arginine, methionine, tyrosine, glutamine, threonine, and valine) and increased serum levels of very low-density lipoprotein (VLDL), low-density lipoprotein (LDL), and the fatty acid moieties -CH3, =CH-CH2-CH=, and CH2-CH=C. In addition, aflibercept induced significant changes in methylamine metabolism, increasing the serum levels of N(CH3)3 (a choline moiety) while decreasing the serum levels of methylamines. In the urine, the metabolic differences between the aflibercept treatment and the control group were characterized by significant changes in tryptophan metabolism (tryptophan and its metabolites 3-indoxylsulfate and 5-hydroxyindole-3-acetate), niacin metabolism (increased excretion of trigonelline), and increased excretion of branched-chain amino acids (isoleucine, leucine, and valine) and their metabolite 2-hydroxyisovalerate. In the erlotinib treatment group, the metabolic changes were minor compared to the control group, with only a few metabolites showing significantly different resonances. Also, despite statistical significance, the biological significance of these changes is questionable considering that the observed metabolite resonances show only minor differences between the two groups (Table S2 and
Histological Analysis
Histological analyses of the jejunum and colon showed that neither treatment impacted intestinal tissue morphology (data not shown).
Discussion
Our results show that a 2-week aflibercept treatment caused stunted body weight gain but no other clinical side effects, no increase in intestinal permeability, and no observable changes in the histology of intestinal tissues. These findings indicate that aflibercept treatment itself does not cause GI toxicity. Thus, it seems that the reported increases in GI symptoms observed in clinical studies featuring aflibercept in combination with other chemotherapeutics [6,7] may result from a synergistic effect of angiogenesis blockade and cytotoxic insult. Also, adding aflibercept to chemotherapeutic treatment increases the risk of GI perforation [14], suggesting that aflibercept can indeed disturb normal GI function when combined with other chemotherapeutics. The exact mechanisms behind these effects are unknown, but Kamba et al. (2006) showed previously that VEGF inhibition regresses capillaries in the small intestinal villi in adult mice [15]. In addition, blocking VEGF activity decreases nitric oxide release [16], which subsequently may reduce intestinal blood flow. Hence, it is possible that aflibercept makes the intestine more susceptible to the toxic effects of traditional chemotherapeutics by decreasing the flow of oxygen and nutrients to the enterocytes. Additionally, our metabolomic analysis revealed that aflibercept treatment induced an increase in serum levels of N(CH3)3 (a choline moiety) and concomitantly decreased the levels of its degradation product trimethylamine. The formation of trimethylamine from choline is a microbial metabolism pathway [17], which suggests treatment-induced changes in the intestinal microbiota. Whether these alterations contribute to GI toxicity during cancer treatment is an interesting question warranting future studies.
Interestingly, aflibercept administration resulted in a significant stunting of body weight gain and significant alterations in amino acid and lipid metabolism. Specifically, the aflibercept-treated group exhibited decreased serum levels of multiple amino acids and increased serum levels of the lipoproteins VLDL and LDL, as well as several lipid moieties, such as LDL-like lipid particles (-CH3) and polyunsaturated fatty acids (=CH-CH2-CH=). Overall, these findings are to be interpreted in the context of the systemic effects of aflibercept. Firstly, aflibercept, as well as other antiangiogenic treatments, has been associated with reduced appetite in experimental animals [18][19][20]. In our data, this effect could be reflected in the decreased serum levels of several essential amino acids (e.g., tryptophan, phenylalanine, methionine, threonine) and in the elevated urinary levels of branched-chain amino acids and 2-hydroxyisovalerate, indicating skeletal muscle protein breakdown and ketoacidosis [21], respectively. Additionally, the detected increase in serum levels of lipid moieties may result from increased fatty acid utilization under nutrient depletion. However, although decreased feed intake might explain part of our findings, in mice, decreased body weight gain seems to accompany antiangiogenic treatment independent of caloric intake [18,19]. This suggests the induction of some additional metabolic mechanisms during antiangiogenic treatment.
The aflibercept-induced changes in serum lipids are especially interesting considering that angiogenesis is a highly active process during adipogenesis, coupling angiogenic factors to lipid metabolism. For example, angiopoietin-like protein 4 (ANGPTL4) modulates angiogenesis independent of VEGF but also inhibits the activity of lipoprotein lipase (LPL), an enzyme that cleaves plasma triglycerides from VLDL and chylomicrons [22]. The inhibition of LPL results in reduced uptake of fatty acids into the adipose tissue and elevated serum levels of VLDL and fatty acids [23], similarly to our observations. Aflibercept treatment has been shown to affect adipose tissue vasculature [20], and adipose tissue hypoxia is one of the driving factors for ANGPTL4 induction [24]. Thus, possibly, by suppressing adipose tissue angiogenesis, aflibercept treatment increases the concentration of circulating fatty acids, which subsequently may result in decreased appetite. Different chemotherapy regimens have been shown to exert alterations in normal lipid metabolism, possibly via mechanisms involving oxidative stress and inflammation [25][26][27]. Regarding the effects of anti-VEGF agents, Joerger (2010) described in a case report a breast cancer patient whose chemotherapy-induced hyperlipoproteinemia persisted until the cessation of the monoclonal VEGF inhibitor bevacizumab [28]. In addition, Jobard et al. (2015) have shown that a 2-week combination treatment with bevacizumab and temsirolimus (an mTOR inhibitor) increases the levels of serum lipoproteins and lipids [29]. Although the authors attributed this effect to temsirolimus, our results suggest that VEGF inhibition alone can increase serum lipid concentrations. In vitro studies with different glioma cell lines have also demonstrated increased levels of fatty acids and changes in choline metabolism after pharmacological treatment with a VEGF receptor 2 inhibitor or with bevacizumab [30,31]. Overall, similar metabolic alterations have also been associated with increased apoptosis in several cell lines [30][31][32][33][34]. Although aflibercept does not appear to overtly induce apoptosis in cell models [35][36][37][38], the metabolic shifts observed in our study could reflect aflibercept-mediated in vivo tissue hypoxia and oxidative stress that drive apoptotic processes. In addition, aflibercept has also been shown to enhance the generation of reactive oxygen species during oxidative stress [39] as well as promote the expression of inflammatory mediators in vascular endothelial cells [40]. Clinically, these findings suggest that aflibercept might exacerbate the oxidative stress of chemotherapy, which contributes to the increased incidence of adverse effects observed during cancer treatment with antiangiogenic agents.
We did not observe any physiological changes in the erlotinib group. Additionally, the metabolic alterations were also minimal. Previously, Fan et al. (2014) reported that daily oral dosing of 18 mg/kg erlotinib stunts body weight gain in C57BL/6J mice as well as induces histological changes and inflammatory signals in murine intestine [12]. On the other hand, Higgins et al. (2004) administered tumor-bearing nu/nu-nuBR nude mice with 25 mg/kg of erlotinib daily for 3 weeks and reported no changes in body weight or other toxicities [41]. Similarly, even daily erlotinib doses up to 100 mg/kg for 2 weeks seem well tolerated in mice [42]. Thus, it is possible that the erlotinib dose and treatment duration in our study were not sufficient to induce any observable toxicities. Nevertheless, more research is needed to elucidate the pathophysiological mechanisms behind erlotinib-induced GI toxicity.
In conclusion, our results show that a 2-week aflibercept or erlotinib administration does not cause changes in intestinal permeability or induce notable histological damage in the intestine. However, aflibercept treatment stunted body weight gain and caused significant alterations in choline, amino acid, and lipid metabolism. These findings are interesting with respect to the systemic effects of aflibercept and their possible associations with the adverse events of aflibercept administration. | 2019-06-11T13:08:49.808Z | 2019-06-06T00:00:00.000 | {
"year": 2019,
"sha1": "134165cc935996053c6306603f3da6e06408b52b",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.tranon.2019.04.019",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b9c7990987bc18268e9e24f001c15c748c18b0f0",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
266930435 | pes2o/s2orc | v3-fos-license | Modulation of TRPV1 and TRPA1 Channels Function by Sea Anemones’ Peptides Enhances the Viability of SH-SY5Y Cell Model of Parkinson’s Disease
Cellular dysfunction during Parkinson's disease leads to neuroinflammation in various brain regions, inducing neuronal death and contributing to the progression of the disease. Different ion channels may influence the process of neurodegeneration. The peptides Ms 9a-1 and APHC3 can modulate the function of TRPA1 and TRPV1 channels, and we evaluated their cytoprotective effects in SH-SY5Y cells differentiated into dopaminergic neuron-like cells. We used stable neuroblastoma SH-SY5Y cell lines producing wild-type alpha-synuclein or its mutant A53T, which are prone to the accumulation of thioflavin-S-positive aggregates. We analyzed the viability of the cells, as well as the mRNA expression levels of TRPA1, TRPV1, and ASIC1a channels, alpha-synuclein, and tyrosine hydroxylase after differentiation of these cell lines, using RT-PCR. Overexpression of alpha-synuclein showed a neuroprotective effect and was accompanied by a reduction of tyrosine hydroxylase expression. The mutant alpha-synuclein A53T significantly increased the expression of the pro-apoptotic protein BAX and made cells more susceptible to apoptosis. Generally, overexpression of alpha-synuclein could be a model for the early stages of PD, while expression of mutant alpha-synuclein A53T mimics a genetic variant of PD. The peptides Ms 9a-1 and APHC3 significantly reduced the susceptibility to apoptosis of all cell lines but differentially influenced the expression of the genes of interest. Therefore, these modulators of TRPA1 and TRPV1 have the potential for the development of new therapeutic agents for neurodegenerative disease treatment.
Introduction
The accumulation of protein aggregates in Lewy bodies in dopaminergic neurons in Parkinson's disease is also accompanied by disturbances in energy metabolism, impairment of mitochondrial function, oxidative stress, and a dysfunctional mechanism of protein degradation [1]. Such cellular dysfunction leads to neuroinflammation in various brain regions, further contributing to the progression of the disease [2,3]. Patients suffer from a range of secondary motor and non-motor symptoms in addition to the classic symptoms of affection of the nigrostriatal dopaminergic pathways [4]. The presence of abnormal forms of α-synuclein (α-syn) in the form of neurotoxic oligomers and fibrils is one of the key pathogenic features of Parkinson's disease (PD). A missense mutation in the alpha-synuclein gene corresponding to the A53T substitution was found in a family study of patients with autosomal dominant PD [5]. As there are currently no neuroprotective therapies that can delay or prevent the progression of PD, elucidating new potential therapeutic targets and treatment approaches is a major healthcare and societal challenge [6].
Transient receptor potential vanilloid-1 (TRPV1) and ankyrin-1 (TRPA1) ion channels and acid-sensing ion channel 1a (ASIC1a) are highly expressed throughout the brain, where they are involved in the regulation of many important physiological and pathological processes [7][8][9]. Numerous studies have implicated these channels in the pathogenesis of Parkinson's disease, but understanding of the mechanisms is still far from complete.
Recent studies strongly suggest that dysregulation of TRP channel functions is involved in various pathological events in neurodegenerative disorders [7]. Several studies have indicated that TRPV1 is involved in pain mechanisms in Parkinson's disease. This pain can be nociceptive or neuropathic in nature, and it has been shown in a 6-OHDA-lesioned mouse model that blocking TRPV1 resulted in pain relief in the animals [10].
There is growing evidence that acid-sensing ion channels (ASICs) play a functional role in neuronal differentiation, where they modulate membrane excitability, maturation of dendrites and neurites, Ca2+ homeostasis, and dopamine secretion in dopaminergic neurons [8,11,12].
APHC3 has been reported as an analgesic peptide acting on the TRPV1 channel, with different modes of action depending on the type and strength of the stimuli. In whole-cell patch clamp experiments, it significantly inhibited the response of TRPV1 to high concentrations of agonists (IC50 18 nM) and potentiated the response to low concentrations of agonists [13]. Administration of this peptide in mice leads to analgesic and anti-inflammatory effects, which correspond to inhibition or desensitization of TRPV1-positive sensory neurons [14][15][16], and to a decrease in core body temperature, which is a hallmark of TRPV1 agonists or positive modulators of the response to low pH stimuli [14].
Ms 9a-1 was found to be a positive modulator of the TRPA1 channel (EC50 32-210 nM), with significant analgesic and anti-inflammatory activity [17][18][19]. Activation of TRPA1 on sensory neurons is the crucial point of the analgesic action of Ms 9a-1, and pretreatment of experimental animals with a selective TRPA1 antagonist completely reverses the analgesic effect of the peptide [17]. Several weak activators of TRPA1 [20][21][22] can produce a similar effect by desensitizing the channel and decreasing the ability of TRPA1-expressing neurons to respond to other stimuli.
In the present work, we investigated the neuroprotective properties of two peptides, Ms 9a-1 and APHC3, in differentiated SH-SY5Y cells, a model that is widely used and well suited for experimental research on neurodegenerative diseases such as Parkinson's disease. A two-step protocol for differentiation into a dopaminergic phenotype, using retinoic acid (RA) followed by addition of brain-derived neurotrophic factor (BDNF) [23][24][25], was applied to stable SH-SY5Y neuroblastoma cell lines producing wild-type alpha-synuclein or its A53T mutant, which is prone to accumulating thioflavin-S-positive aggregates [26]. In this model, we analyzed cell viability, the rate of late apoptosis/necrosis (cell death), and the changes in mRNA expression upon cell differentiation and peptide treatment for several genes of interest: primarily alpha-synuclein (SNCA), tyrosine hydroxylase (TH), BAX, and Bcl-2, as well as ASIC1a, TRPV1, and TRPA1.
Analysis of Neuronal Differentiation of Alpha-Synuclein-Overexpressing SH-SY5Y Cells with RA + BDNF
We used three cell lines for differentiation into dopaminergic neuron-like cells using the RA + BDNF protocol: native neuroblastoma SH-SY5Y; SH-SY5Y cells stably producing wild-type alpha-synuclein (α-synWT); and SH-SY5Y cells stably producing mutant A53T alpha-synuclein (α-synA53T). To confirm that differentiated cells express mature neuronal markers, we performed immunocytochemistry for specific markers: β-III tubulin and NCAM (neural cell adhesion molecule, CD56), which are ubiquitously expressed in the nervous system. After RA + BDNF differentiation, expression of the neuronal markers NCAM and β-III tubulin was observed in all the investigated cell lines (Figure 1). During differentiation, we observed the growth of neurites in the cells and the formation of a dense neuron-like network over a period of 10 days.
In addition, all differentiated cell lines showed tyrosine hydroxylase (TH) expression, which is often used as a marker for dopaminergic neuron-like cells. TH expression was not detected in undifferentiated cells, whereas all differentiated cells showed TH expression, confirming successful differentiation into a dopaminergic-like phenotype (Figure 2). According to the reverse transcription-quantitative polymerase chain reaction (RT-qPCR) results, the relative levels of TH were higher in differentiated SH-SY5Y cells than in differentiated α-synWT and α-synA53T cells.
Quantitative mRNA Difference in Alpha-Synuclein (SNCA) and Tyrosine Hydroxylase (TH) in Alpha-Synuclein-Overexpressing SH-SY5Y Cells after Differentiation with RA + BDNF
To describe the changes in SH-SY5Y cells after differentiation with RA + BDNF, we determined the levels of mRNA transcripts for the alpha-synuclein (SNCA) and tyrosine hydroxylase (TH) genes on day 10. Reverse transcription followed by qPCR showed that the level of SNCA expression in α-synWT and α-synA53T cells was about 1.5-1.7 times higher than in the control SH-SY5Y cells (Figure 3, Table 1). Interestingly, analysis of TH expression revealed that, compared to differentiated SH-SY5Y neuroblastoma, overexpression of alpha-synuclein resulted in a 72% decrease in TH mRNA levels in α-synWT cells and a 94% decrease in α-synA53T cells (Figure 3).
Quantitative mRNA Difference in TRPA1, TRPV1, and ASIC1a in Alpha-Synuclein-Overexpressing SH-SY5Y Cells after Differentiation with RA + BDNF
As reported earlier, SH-SY5Y cells express functional ASIC1a, TRPV1, and TRPA1 channels [27]. To confirm the suitability of the differentiated neuroblastoma cell model for testing peptides that modulate TRPV1 and TRPA1 channel activity, we determined the levels of mRNA transcripts for TRPV1, TRPA1, and ASIC1a after 10 days of differentiation with RA + BDNF. Reverse transcription followed by qPCR showed the presence of all transcripts of interest. The number of TRPA1 mRNA transcripts was approximately doubled in α-synWT and α-synA53T differentiated cells compared to differentiated SH-SY5Y cells (Figure 4). In the case of TRPV1, the expression level increased by 45% only in α-synA53T cells and did not change in the wild-type α-synWT cells (Figure 4). Finally, in both α-synWT and α-synA53T cells, ASIC1a showed no statistically significant change in expression compared to control cells (Figure 4).
Viability, Cell Death, and Apoptosis of Alpha-Synuclein-Overexpressing SH-SY5Y Cells after Differentiation with RA + BDNF
We estimated viability as the number of metabolically active cells on day 1 (undifferentiated SH-SY5Y) and day 10 (differentiated, dopaminergic neuron-like cells) using the CCK-8 assay (Figure 5A, Table 1). Compared to SH-SY5Y cells, viability was reduced by 25% in α-synWT cells (Figure 5A). However, in dopaminergic neuron-like cells, viability equalized across all groups and was even 13% higher in α-synA53T cells than in SH-SY5Y cells (Figure 5B).
We also compared cell death across all differentiated cell lines. Cell death was determined before RA + BDNF treatment on day 1 (undifferentiated cells) and after treatment on day 10 (differentiated cells) by fluorescent propidium iodide (PI) staining to identify cells in late apoptosis and necrosis. We assessed the number of PI-stained cells twice, before and after freeze-thawing, to estimate the percentage of cells in late apoptosis and necrosis. Cell death is represented as the percentage of PI-positive cells among all cells detected after freeze-thawing. Following sequential RA + BDNF treatment, cells of all three lines exhibited increased cell death. Compared to undifferentiated cells, cell death increased ∼10-fold in SH-SY5Y cells, ∼3-fold in α-synWT cells, and ∼6-fold in α-synA53T cells (Figure 5C), in agreement with previous reports [28][29][30]. We analyzed the BAX/Bcl-2 ratio using RT-qPCR to evaluate the susceptibility of the different cell lines to apoptosis. Bcl-2 family proteins are among the major anti-apoptotic regulators in cells, while BAX family proteins oppose them, being pro-apoptotic. A high BAX/Bcl-2 ratio characterizes apoptosis-susceptible cells [31,32]. Differentiated cells of all studied lines showed increased susceptibility to apoptosis compared to the corresponding undifferentiated cells (Figure 5D). However, differentiated α-synWT cells were less susceptible to apoptosis than differentiated SH-SY5Y and α-synA53T cells. In differentiated SH-SY5Y cells, susceptibility to apoptosis, as assessed by the BAX/Bcl-2 ratio, was increased 5.7-fold, whereas it was increased only 1.8-fold in α-synWT cells (Figure 5D). During differentiation, the relative expression of the pro-apoptotic gene BAX was slightly downregulated in SH-SY5Y cells, but the expression of anti-apoptotic Bcl-2 was markedly reduced, by 86%, compared to undifferentiated cells (Figure 5E,F). Expression of anti-apoptotic Bcl-2 decreased in α-synWT cells to a lesser degree, i.e., by 55%, while the expression of pro-apoptotic BAX did not change compared to differentiated SH-SY5Y cells, leading to an increased BAX/Bcl-2 ratio in differentiated α-synWT cells relative to undifferentiated cells. α-SynA53T cells demonstrated a 7-fold increase in susceptibility to apoptosis during differentiation, due to a 5.5-fold increase in the expression of pro-apoptotic BAX without a change in the expression of Bcl-2 (Figure 5E,F).
Ms 9a-1 and APHC3 Peptides Ameliorate Cell Viability and Apoptosis Resistance of Neuron-like Cells
We examined the effects of Ms 9a-1 and APHC3 peptide exposure on the viability of the cell models via their metabolic activity, using the CCK-8 assay. The viability of peptide-treated cells was expressed as a percentage of the viability of untreated cells.
We analyzed the effect of Ms 9a-1 and APHC3 treatment on the dead-cell populations in the differentiated cell lines. In differentiated SH-SY5Y and α-synWT cells, pretreatment for 24 h with 300 nM Ms 9a-1 produced a statistically insignificant reduction in PI-positive cells. However, in differentiated α-synA53T cells, Ms 9a-1 reduced the dead-cell population by 14.35 ± 4.05% (Figure 6B). Treatment with the APHC3 peptide resulted in a reduction in cell death of 13.97 ± 7.4% in differentiated SH-SY5Y cells and 14.06 ± 1.28% in α-synA53T cells, and did not alter the dead-cell population in differentiated α-synWT cells (Figure 6B).
We did not observe statistically significant changes in ASIC1a expression levels after 24 h of treatment with Ms 9a-1 (300 nM) or APHC3 (300 nM) (Figure 7), except in α-synWT cells, which showed a minimal (~16%) decrease in ASIC1a expression after Ms 9a-1 treatment (Figure 7B). The APHC3 peptide did not produce a significant effect on ASIC1a expression in any of the cell lines 24 h after treatment.
All the changes in differentiated cells induced by Ms 9a-1 and APHC3 treatment are summarized in Table 2.
Discussion
The mechanisms of the cellular response to increased levels of alpha-synuclein, especially its A53T mutant associated with Parkinson's disease, remain an important subject of study. Data obtained with peptides acting on the TRPV1 and TRPA1 channels indicate their involvement in the response of neurons to stress caused by the accumulation of thioflavin-S-positive aggregates, which leads to neuronal death in neurodegeneration. The accumulation of misfolded forms and aggregates of alpha-synuclein and other proteins disrupts the function of several cellular systems and leads to a range of disorders, including proteasome and autophagy dysfunction, mitochondrial dysfunction, and oxidative stress.
According to current knowledge, membrane receptors and ion channels largely determine the functioning of a living cell, play a crucial role in the transmission of intercellular signals, and are involved in the development of various diseases and pathologies, including neurodegenerative diseases [33]. Regulation of the function of these receptors by specific ligands can therefore alter intracellular metabolism, thereby stimulating or slowing down pathological cell transformation. Transcriptomic approaches, including single-cell data, are also appropriate for identifying differences in gene expression patterns and the functional biological processes to which they are linked in PD pathogenesis [34]. One possibility for how the expression of genes involved in the pathogenesis of PD may be altered is at the level of transcriptional regulation, as has been shown for cerebral cavernous malformation disease, which also exists in sporadic and genetic forms [35].
In this work, we characterized a cell model of Parkinson's disease based on SH-SY5Y neuroblastoma cells with stable alpha-synuclein overexpression, differentiated towards a dopaminergic neuronal phenotype by sequential addition of RA + BDNF. We observed that during differentiation, which itself stresses neuroblastoma cells and makes them more susceptible to apoptosis, wild-type alpha-synuclein overexpression decreased cell death, consistent with one of the normal functions of alpha-synuclein being neuroprotective [36]. Mutant alpha-synuclein A53T lost these neuroprotective properties, significantly increasing the expression of the pro-apoptotic protein BAX (Figure 5). According to our previous data [26], α-synA53T cells accumulate significantly more thioflavin-S-positive aggregates than cells overexpressing wild-type synuclein. At the same time, a marked but relatively small increase in synuclein levels (a 1.5-fold increase in expression) was accompanied by a decrease in tyrosine hydroxylase levels. Both mRNA and protein levels of TH in dopamine neurons in the substantia nigra are known to be reduced in PD patients [37]. Therefore, overexpression of alpha-synuclein is a native neuroprotective mechanism that can nevertheless lead to a decrease in dopamine production and an exacerbation of PD symptoms. In addition, we found an increase in TRPA1 expression levels in both α-synWT and α-synA53T cells, and an increase in TRPV1 only in α-synA53T cells, allowing us to adequately test the effect of their ligands on cell survival in this model.
It is well known that the development of Parkinson's disease is accompanied by a series of cell dysfunctions, culminating in the loss of neurons and brain function [38,39]; however, effective prevention and treatment strategies have not yet been identified. Therefore, we investigated therapeutic approaches to pathophysiological phenomena mediated by transient receptor potential ion channels, such as TRPA1 and TRPV1.
TRPV1 is considered to play a major role in the disruption of calcium homeostasis under inflammatory conditions, which is strongly implicated in the development of many neurodegenerative processes, and in particular in the pathogenesis of Parkinson's disease [40][41][42][43]. The role of TRPV1 channels in PD is controversial [44]. Activation of TRPV1 mediates cell death of DA neurons, and TRPV1 may contribute to neurodegeneration in response to endogenous ligands such as AEA [45]. However, when inhibition and activation of TRPV1 were studied in preclinical models of PD, both approaches produced beneficial outcomes [44].
TRPA1 is a neuronal sensor of reactive oxygen species [46], and oxidative stress is considered one of the most important contributors to the death of substantia nigra cells in PD [47]. However, the role of TRPA1 in PD pathogenesis is under-investigated. Nevertheless, expression of TRPA1 in the substantia nigra has been reported [48,49]. Moreover, an important role of acrolein, an endogenous TRPA1 agonist, has been found in PD pathology [49][50][51]. It is noteworthy that carvacrol (a plant-derived TRPA1 agonist) might protect dopaminergic neurons in an animal model of PD [48].
APHC3 is a complex modulator of the TRPV1 channel. The mode of action of APHC3 on TRPV1 is bimodal and depends on the strength of the activation stimulus: it acts as a positive modulator of low-amplitude responses and inhibits high-amplitude responses [13]. APHC3 at 300 nM acted differently on the three tested cell lines. It significantly decreased apoptosis of SH-SY5Y cells differentiated into dopaminergic neuron-like cells and of α-synA53T neuron-like cells, but was unable to improve on the anti-apoptotic effect of α-synWT overexpression (Figure 6). Nevertheless, APHC3 significantly reduced the BAX/Bcl-2 ratio, decreasing cell susceptibility to apoptosis in all cell lines (Figure 6C). Intriguingly, APHC3 produced different effects on the expression of BAX, Bcl-2, TRPV1, and TRPA1 in differentiated SH-SY5Y versus α-synWT/α-synA53T cells (Figures 6 and 7). Evidently, APHC3 increased cell viability in differentiated SH-SY5Y cells by suppressing BAX, TRPV1, and TRPA1 expression. In α-synWT/α-synA53T cells, expression of both BAX and Bcl-2 was upregulated, resulting in a decreased BAX/Bcl-2 ratio.
The peptide Ms 9a-1 is a positive modulator of the TRPA1 channel, isolated from the sea anemone Metridium senile. Ms 9a-1 significantly potentiates agonist-induced TRPA1 currents in vitro [17], but intravenous or subcutaneous injection of Ms 9a-1 (0.1-0.3 mg/kg) reduces pain, inflammation, and hyperalgesia in different models of pain [17,18], including MIA-induced osteoarthritis [19]. Ms 9a-1 showed effects similar to those of APHC3 on all tested cell lines, although they were mostly less pronounced. Only treatment of α-synA53T cells with Ms 9a-1 resulted in a more apparent decrease in the BAX/Bcl-2 ratio and, consequently, in the cells' susceptibility to apoptosis (Figure 7C, Table 2). This observation supports a more important role for TRPV1 than TRPA1 in the homeostasis of dopaminergic neurons. Nevertheless, modulation of TRPA1 can significantly affect cell viability and apoptosis.
This study provides evidence that Ms 9a-1 and APHC3 may become novel candidate molecules for the treatment of neurodegenerative conditions, including mutant alpha-synuclein-induced neuronal injury. It is important to note the significant differences between the three tested cell lines in terms of the expression of pro- and anti-apoptotic genes and the response to modulators of the TRPV1 and TRPA1 channels. Therefore, at least at the cellular level, mutant α-synA53T-induced PD differs from PD of other etiologies.
Despite confirming that the Ms 9a-1 and APHC3 peptides can protect dopaminergic neurons from α-synA53T-induced injury and apoptosis, this study has certain limitations. The effects of the Ms 9a-1 and APHC3 peptides on alpha-synuclein-induced neurotoxicity in vivo need to be further elucidated.
For differentiation into a dopaminergic neuronal phenotype, a two-step protocol was used [28,30]. On day 0, cells were seeded in 200 µL of Basic Growth Media at a density of 1 × 10^4 cells per well in 96-well round plates coated with 0.02 mg/mL PDL (#P6407, Sigma, St. Louis, MO, USA). On day 1, the medium was exchanged for differentiation medium #1 (DMEM with 2.5% FBS, 2 mM L-glutamine, 1% fetal bovine serum, 100 U/mL penicillin, 100 µg/mL streptomycin, and 10 µM RA), and the cells were protected from light. On day 3, the medium was exchanged for differentiation medium #2 (DMEM/F12 (1:1) with B-27, 2 mM L-glutamine, 100 U/mL penicillin, 100 µg/mL streptomycin, 50 ng/mL BDNF, and 10 µM RA), with the cells again protected from light. For the experiments, the cells were treated with 300 nM of each peptide on day 9. Cell death and cell viability assays were performed on day 10, and cells for real-time PCR were harvested on day 10.
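For orientation, the timing of this protocol can be summarized as a simple day-by-day schedule. The sketch below (Python) encodes only the steps stated above; the data structure and the printing loop are illustrative, not part of the original study's workflow.

```python
# Minimal sketch summarizing the two-step RA + BDNF differentiation schedule.
# Day numbers and media contents are taken from the text above; the structure
# itself is an illustrative assumption.
SCHEDULE = {
    0: "seed 1x10^4 cells/well (96-well, 0.02 mg/mL PDL) in Basic Growth Media",
    1: "switch to medium #1: DMEM, 2.5% FBS, 2 mM L-glutamine, pen/strep, 10 uM RA",
    3: "switch to medium #2: DMEM/F12 (1:1), B-27, 2 mM L-glutamine, pen/strep, "
       "50 ng/mL BDNF, 10 uM RA",
    9: "treat with 300 nM peptide (Ms 9a-1 or APHC3)",
    10: "run viability and cell-death assays; harvest cells for real-time PCR",
}

for day in sorted(SCHEDULE):
    print(f"Day {day}: {SCHEDULE[day]}")
```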
Cell Counting Kit-8 (CCK-8) Cell Viability Assay
Cell viability was examined using the Cell Counting Kit-8 (CCK-8) assay (#96992, Sigma, Hiroshima, Japan) according to the manufacturer's instructions. CCK-8 contains the tetrazolium salt WST-8, which is metabolized by living cells to a formazan dye that is soluble in cell culture medium and can be detected by optical density; the amount of this dye is directly proportional to the number of living cells. On day 10 of differentiation, after 24 h of treatment of the cells with peptides, 10 µL of CCK-8 reagent was added to each well and incubated at 37 °C in a humidified atmosphere with 5% CO2 for 3 h. The absorbance at 450 nm was determined using an Infinite F50 Tecan microplate photometer (Tecan, Grödig, Austria). Three wells of cells were used for each condition in every independent experiment. The viability of untreated cells was presented as a percentage of that of untreated control SH-SY5Y cells; the viability of peptide-treated cells was presented as a percentage of that of untreated cells in parallel experiments.
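As a concrete illustration of the normalization described above, the sketch below (Python) converts raw A450 readings into percent viability relative to untreated controls. The blank-subtraction step and all numerical values are assumptions for the example, not measurements from the study.

```python
# Sketch of the CCK-8 viability normalization: treated-well signal expressed as
# a percentage of the untreated-control signal. Absorbances are hypothetical.
from statistics import mean

def percent_viability(a450_treated, a450_control, a450_blank=0.0):
    """Viability (%) of treated wells relative to untreated control wells."""
    control = mean(a450_control) - a450_blank
    treated = mean(a450_treated) - a450_blank
    return 100.0 * treated / control

# Example: three technical replicates per condition, as in the protocol above.
print(f"{percent_viability([0.82, 0.79, 0.85], [0.95, 1.01, 0.98], 0.08):.1f} %")
```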
Propidium Iodide (PI) Staining Cell Death Assay
Cell death was examined using a PI staining assay (Invitrogen, Thermo Fisher, #P3566, Waltham, MA, USA). Cells plated and differentiated in 96-well plates as described above were pretreated for 24 h with 300 nM Ms 9a-1 or 300 nM APHC3 peptide. PI was added to the culture medium at a final concentration of 2 µg/mL, and cells were incubated at 37 °C for 10 min. Three wells of cells were used for each condition in every independent experiment. Fluorescence intensity was assessed using a NOVOstar microplate reader (BMG Labtech GmbH, Ortenberg, Germany). Cell death was presented as the percentage of PI-positive cells relative to all cells, identified by freeze-thawing of the same wells.
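The readout described here reduces to a ratio of two fluorescence measurements per well: PI signal before freeze-thaw (dead cells only) and after freeze-thaw (all cells permeabilized). The following sketch (Python) illustrates the calculation; the function name and signal values are hypothetical.

```python
# Sketch of the PI cell-death readout described above. Fluorescence values are
# hypothetical, in arbitrary plate-reader units.
def percent_pi_positive(f_before: float, f_after: float) -> float:
    """Percentage of PI-positive (late apoptotic/necrotic) cells in a well."""
    if f_after <= 0:
        raise ValueError("post-freeze-thaw signal must be positive")
    return 100.0 * f_before / f_after

# Example: one well read before and after freeze-thawing.
print(f"{percent_pi_positive(1200.0, 15400.0):.1f} % dead cells")  # ~7.8 %
```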
Immunofluorescence Imaging
Immunofluorescence was used to visualize differentiated cells. Cells grown on poly-D-lysine were fixed with 4% paraformaldehyde in PBS. After washing in PBS, cells were permeabilized in 0.1% Triton X-100/PBS/0.1% FBS for 30 min at 4 °C and washed twice with PBS/0.1% FBS. Samples were then incubated with 0.1% Triton X-100 and 1% FBS in PBS for 2 h. To identify nuclei, cells were incubated with 1 µg/mL Hoechst 33342 (Thermo Fisher Scientific). To assess cell differentiation, cells were incubated with anti-neural cell adhesion molecule monoclonal antibodies (NCAM; 56C04; Thermo Fisher Scientific) diluted 1:200 in PBS containing 1% FBS and Triton X-100. Samples were then treated for 1 h with rabbit anti-mouse IgG H + L conjugated with Alexa Fluor 546 (1:1000; Thermo Fisher Scientific), washed with PBS containing 1% FBS and Triton X-100, and then stained with antibodies to β-III tubulin conjugated with Alexa Fluor 647 (BioLegend; 1:100). Images were obtained using an Eclipse Ti-E microscope with an A1 confocal module (Nikon Corporation, Tokyo, Japan) and a CFI Plan Apo VC 20×/0.75 objective. Data were visualized using Nikon proprietary software (NIS-Elements).
RNA Extraction and cDNA Synthesis
Total RNA was extracted from frozen cell pellets with TRIzol™ Reagent (Life Technologies, Carlsbad, CA, USA), followed by chloroform phase separation and ethanol precipitation at −20 °C. Washed RNA was dissolved in sterile RNase-free water (Thermo Fisher Scientific, Waltham, MA, USA). The final RNA concentration was measured using a spectrophotometer. To prevent RNA aggregation, total RNA samples were heated at 65 °C for 1-2 min before cDNA synthesis and were immediately used in cDNA synthesis.
The relative mRNA levels were calculated according to the 2^(−ΔΔCT) method based on threshold cycle (CT) values. Undifferentiated cells or cells without any treatment served as controls. The quantity of transcripts in each sample was normalized to the housekeeping gene β-actin, against which the experimental samples were compared. Samples without cDNA served as negative controls for each gene. All reactions were performed in triplicate. The threshold cycle (CT) is the cycle at which the fluorescence level reaches a threshold value; these data were obtained using the 7500 Real-Time PCR System. ΔCT method: ΔCT = CT(gene of interest) − CT(β-actin); 2^(−ΔCT) is the measure of the mRNA expression level in the sample normalized to the housekeeping gene β-actin. ΔΔCT method: ΔΔCT = (CT(gene of interest) − CT(β-actin)) for the sample cDNA − (CT(gene of interest) − CT(β-actin)) for the control cDNA; 2^(−ΔΔCT) is the relative expression value, i.e., the measure of mRNA expression in the sample normalized to the control sample. The real-time PCR protocol was as follows: denaturation (95 °C, 10 min), followed by 40 cycles of denaturation (95 °C, 15 s), annealing (60 °C, 1 min), and elongation (72 °C, 30 s). The resulting PCR products gave single bands of the appropriate sizes in agarose gel electrophoresis.
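To make the ΔΔCT arithmetic above concrete, the following sketch (Python) computes a relative expression value from threshold cycles. The CT values are hypothetical; in practice, the triplicate reactions described above would be averaged first.

```python
# Minimal sketch of the 2^(-ddCT) calculation defined above. CT values are
# hypothetical illustrations, not data from the study.
def delta_ct(ct_gene: float, ct_actin: float) -> float:
    """dCT = CT(gene of interest) - CT(beta-actin)."""
    return ct_gene - ct_actin

def relative_expression(ct_gene_smp, ct_actin_smp, ct_gene_ctl, ct_actin_ctl):
    """2^(-ddCT): expression in the sample relative to the control sample."""
    ddct = delta_ct(ct_gene_smp, ct_actin_smp) - delta_ct(ct_gene_ctl, ct_actin_ctl)
    return 2.0 ** (-ddct)

# A gene whose dCT is one cycle lower in the sample than in the control
# comes out two-fold upregulated.
print(relative_expression(24.0, 17.0, 25.0, 17.0))  # 2.0
```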
Data Presentation and Statistical Analysis
The statistical analysis was performed with GraphPad Prism 8.0.1. Measurement data are expressed as mean ± SD; p < 0.05 was considered to indicate a statistically significant difference. The significance of differences was determined with one-way analysis of variance (ANOVA) followed by Dunnett's multiple comparisons test.
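An equivalent analysis can be scripted outside Prism. The sketch below (Python) reproduces the stated workflow (one-way ANOVA followed by Dunnett's test against a control group) on hypothetical data; note that scipy.stats.dunnett requires SciPy 1.11 or later, and all group labels and values here are invented.

```python
# Sketch of the stated statistical workflow on hypothetical data.
import numpy as np
from scipy import stats

control = np.array([100.0, 97.5, 102.1])   # e.g., untreated cells
ms9a1 = np.array([110.2, 108.7, 114.9])    # e.g., Ms 9a-1-treated cells
aphc3 = np.array([118.5, 121.0, 116.4])    # e.g., APHC3-treated cells

f_stat, p_anova = stats.f_oneway(control, ms9a1, aphc3)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Dunnett's test compares each treatment group against the shared control.
res = stats.dunnett(ms9a1, aphc3, control=control)
for name, p in zip(["Ms 9a-1", "APHC3"], res.pvalue):
    print(f"{name} vs control: p = {p:.4f} ({'*' if p < 0.05 else 'ns'})")
```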
Conclusions
We developed a new neuron-like cell model system based on SH-SY5Y cells differentiated into dopaminergic neuron-like cells. We used three different cell types: native neuroblastoma SH-SY5Y, cells overexpressing alpha-synuclein, and cells overexpressing its aggregation-prone A53T mutant. Differentiation significantly increased susceptibility to apoptosis, but overexpression of alpha-synuclein, though not of its A53T mutant, significantly reduced cell death. Expression of the alpha-synuclein A53T mutant led to a marked increase in the expression of the pro-apoptotic protein BAX, which could be attributed to the ability of the mutant synuclein to form cytotoxic aggregates. Therefore, the expression of alpha-synuclein is a native neuroprotective mechanism, but in dopaminergic-like neurons it causes a reduction in tyrosine hydroxylase expression, which may lead to a drop in dopamine production. Taken together, overexpression of alpha-synuclein may provide a model for the early stages of PD, when alpha-synuclein allows neurons to survive but reduces dopamine production, whereas expression of mutant alpha-synuclein A53T mimics a genetic variant of PD, in which cytotoxic aggregates reduce both neuronal survival and dopamine production. We found expression of the ASIC1a, TRPV1, and TRPA1 channels in dopaminergic neuron-like cells derived from the SH-SY5Y neuroblastoma. Peptide modulators of TRPV1 and TRPA1 significantly decreased the susceptibility of all cell lines to apoptosis. Therefore, modulators of TRPA1 and TRPV1 have potential for the development of new therapeutic agents for the treatment of neurodegenerative diseases. In addition, the cellular model proposed and described in our work can be used for further studies of the involvement of the ASIC1a, TRPV1, and TRPA1 channels in the neurodegeneration process, as well as for testing new ligands of these channels.
Figure 1 .
Figure 1. Immunofluorescence shows detectable levels of β-III-tubulin and NCAM in differentiated cells of all cell lines. Individual channel intensities were adjusted for appropriate visualization. The nuclei were stained with Hoechst 33342 (blue); the differentiated cells were stained with antibodies to β-III-tubulin (green) and anti-neural cell adhesion molecule NCAM (red). Confocal fluorescence microscope images were taken at 20× magnification.
Figure 2 .
Figure 2. The relative levels of TH mRNA transcripts at d10 (differentiated cells) were analyzed by RT-qPCR and normalized to the housekeeping gene β-actin (ΔCT method). Expression of TH mRNA in undifferentiated cells of the corresponding lines was not detected. Data are shown as mean ± SD.
Figure 3 .
Figure 3. The relative levels of SNCA and TH mRNA transcripts at d10 (differentiated cells) in control (SH-SY5Y cells), α-synWT, and α-synA53T were analyzed by RT-qPCR and normalized to the housekeeping gene β-actin and the corresponding mRNA of SH-SY5Y cells (∆∆CT method). Data are shown as mean ± SD (data are from 3 independent experiments, with 3 technical replications each). Statistical analysis was performed using the one-way ANOVA test, followed by Dunnett's multiple comparisons test; *-p < 0.05, **-p < 0.01, ***-p < 0.001.
Table 1 .
Difference in differentiated cell lines α-synWT and α-synA53T compared to native SH-SY5Y. The direction of the arrows indicates an increase or decrease in gene expression levels or cell viability.
Figure 4 .
Figure 4. The relative levels of ASIC1a, TRPA1, and TRPV1 mRNA transcripts at d10 (differentiated cells) in control (SH-SY5Y cells), α-synWT, and α-synA53T were analyzed by RT-qPCR and normalized to the housekeeping gene β-actin and the corresponding mRNA of differentiated SH-SY5Y cells (∆∆CT method). Data are shown as mean ± SD (data are from 3 independent experiments, with 3 technical replications each). Statistical analysis was performed using the one-way ANOVA test followed by Dunnett's multiple comparisons test; *-p < 0.05. ns-not significant.
Figure 5 .
Figure 5. (A,B) Changes in cell viability during differentiation, normalized to the cell viability of the corresponding control SH-SY5Y cells. (C) Cell death in undifferentiated and differentiated cells. (D-F) Relative mRNA expression levels were analyzed by RT-qPCR and normalized to the housekeeping gene β-actin and the corresponding mRNA of undifferentiated cells (∆∆CT method). (D) The BAX/Bcl-2 expression ratio in the undifferentiated and differentiated SH-SY5Y (white), α-synWT (green), and α-synA53T cells (violet). The BAX/Bcl-2 ratio was significantly increased in differentiated cells compared to undifferentiated cells of the corresponding cell line. Relative mRNA expression levels of BAX (E) and Bcl-2 (F) detected in the undifferentiated and differentiated SH-SY5Y, α-synWT, and α-synA53T cells. Data are shown as mean ± SD (data are from 3 independent experiments, with 3 technical replications each). Statistical analysis was performed using the one-way ANOVA test followed by Dunnett's multiple comparisons test; **-p < 0.01, ***-p < 0.001. ns-not significant.
Figure 6 .
Figure 6. (A) The cell viability of differentiated cells after 24 h of peptide treatment (300 nM for each peptide), normalized to the cell viability of the corresponding untreated cells. (B) Cell death in differentiated cells after 24 h of peptide treatment.
Figure 7 .
Figure 7. Ms 9a-1 and APHC3 peptides modify the expression levels of ASIC1a, TRPA1, and TRPV1 channels.The relative levels of ASIC1a, TRPA1, and TRPV1 mRNA transcripts at d10 in differentiated cells treated with 300 nM Ms 9a-1 or 300 nM APHC3 for 24 h were detected and analyzed by RT-qPCR and normalized to the housekeeping gene β-actin and the corresponding mRNA of untreated differentiated cells (∆∆C T method).(A) Differentiated SH-SY5Y cells.(B) Differentiated α-synWT cells.(C) Differentiated α-synA53T cells.Data are presented as mean ± SD (data are from 3 independent experiments, with 3 technical replications each).Statistical analysis was performed using the one-way ANOVA test followed by Dunnett's multiple comparisons test; *-p < 0.05, **-p < 0.01, ***-p < 0.001 versus the control group.ns-not significant.
Table 2 .
Changes in cell lines induced by Ms 9a-1 and APHC3 treatment. The direction of the arrows indicates an increase or decrease in gene expression levels or other parameters. | 2024-01-12T05:16:05.634Z | 2023-12-27T00:00:00.000 | {
"year": 2023,
"sha1": "14774c0c55c90c704213c27634a002032b0caba4",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/25/1/368/pdf?version=1703659871",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "14774c0c55c90c704213c27634a002032b0caba4",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
1118799 | pes2o/s2orc | v3-fos-license | Annals of the New York Academy of Sciences Challenges in the Clinical Development of PI3K Inhibitors
The PI3K/Akt/mTOR pathway is one of the most frequently dysregulated signaling pathways in cancer and an important target for drug development. PI3K signaling plays a fundamental role in tumorigenesis, governing cell proliferation, survival, motility, and angiogenesis. Activation of the pathway is frequently observed in a variety of tumor types and can occur through several mechanisms. These mechanisms include (but are not limited to) upregulated signaling via the aberrant activation of receptors upstream of PI3K, amplification or gain-of-function mutations in the PIK3CA gene encoding the p110α catalytic subunit of PI3K, and inactivation of PTEN through mutation, deletion, or epigenetic silencing. PI3K pathway activation may occur as part of primary tumorigenesis, or as an adaptive response (via molecular alterations or increased phosphorylation of pathway components) that may lead to resistance to anticancer therapies. A range of PI3K inhibitors are being investigated for the treatment of different types of cancer; broad clinical development plans require a flexible yet well-structured approach to clinical trial design.
Introduction
The widespread and pivotal role of the PI3K pathway in cancer has inspired the active development of a spectrum of drugs that target various components of the pathway. These drugs include allosteric mTORC1 inhibitors, Akt inhibitors, inhibitors of all four class I PI3K isoforms (so-called pan-class I PI3K inhibitors), dual pan-class I PI3K and mTORC1/2 inhibitors, and, most recently, isoform-specific PI3K inhibitors. Novel compounds in clinical development by Novartis include the pan-PI3K inhibitor buparlisib (BKM120), the dual pan-PI3K/mTORC1/2 inhibitor BEZ235, and the selective p110α inhibitor BYL719. In addition, the mTORC1 inhibitor everolimus is already approved for use in several types of cancer (Fig. 1).
Due to the complexity of the PI3K pathway, and the extensive cross-talk with other pathways, one of the greatest challenges in PI3K inhibitor development involves identifying the patients who will benefit most from treatment. Early-phase single-agent trials with PI3K inhibitors have yet to identify a consistent and distinct association between typical PI3K pathway alterations (PIK3CA mutation and PTEN loss) and response to therapy. This may partly be due to the heterogeneous range of cancers treated in these trials. The PI3K pathway interacts with other signaling pathways at several points, and these interactions are known to vary in a tissue-specific manner. Therefore, the capability of predictive biomarkers, and the effectiveness of different types of PI3K inhibitors, may also vary across tumor types. As the development of PI3K inhibitors progresses from mid to late phase and expands into tumor-specific studies, Novartis is employing a flexible approach to biomarker-driven study design, using a range of strategies based on the phase of drug development, the type of PI3K inhibitor, the tumor type under investigation, and the specific context of treatment. This mini-review summarizes four distinct approaches to study design and describes the rationale for their use in terms of the currently enrolling trials with Novartis PI3K inhibitors.
Patient stratification based on PI3K pathway status (breast cancer)
PI3K inhibitors have demonstrated encouraging preliminary activity in the treatment of metastatic breast cancer, with responses observed in patients with and without PIK3CA and PTEN alterations. 1,2 Evidence for the activity of PI3K inhibitor-based therapy in breast cancer has been drawn from a phase I study in patients with hormone receptor (HR)-positive metastatic breast cancer. 3 In this trial, patients received continuous (n = 20) or intermittent (five days on, two days off; n = 31) doses of buparlisib in combination with letrozole. The majority of patients (n = 43) had received prior aromatase-inhibitor therapy. The clinical benefit rate (complete responses plus partial responses plus stable disease) at six months was 30% and 29% in the continuous and intermittent cohorts, respectively. A correlation between duration of response or clinical benefit and the presence of PIK3CA mutation has yet to be observed in either cohort.
Given the aforementioned findings, the approach Novartis has taken in breast cancer has been to develop trials that are adequately powered to prospectively investigate efficacy in both the population as a whole and in the subpopulation of patients with PI3K pathway alterations. BELLE-2 (NCT01610284) is a multicenter phase III, placebo-controlled study of buparlisib plus fulvestrant that will enroll 842 postmenopausal women with HR-positive/HER2-negative advanced breast cancer whose disease has progressed on or after aromatase-inhibitor therapy, including ≥ 334 patients with PI3K pathway alterations. Enrollment will be stratified by the presence or absence of PI3K pathway activation, defined as PIK3CA mutation and/or PTEN alteration. BELLE-2 is designed to investigate progression-free survival (PFS) in the population as a whole and/or in the PI3K pathway-activated subpopulation using a gate-keeping procedure based on a graphical approach to address the multiplicity of hypotheses. 4 The results of this study could provide prospective evidence regarding the use of these biomarkers in predicting response to PI3K inhibitor therapy. Other trials with buparlisib in breast cancer are employing similar approaches, including a placebo-controlled phase II trial with paclitaxel in the first-line treatment of HER2-negative metastatic breast cancer (BELLE-4; NCT01572727), and a phase II trial of neoadjuvant paclitaxel plus trastuzumab, with and without buparlisib (Neo-PHOEBE), in HER2-overexpressing breast cancer patients.
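The "graphical approach" to multiplicity mentioned above has a simple mechanical core: each hypothesis holds a share of the overall alpha, and when a hypothesis is rejected its share is propagated to the remaining ones along weighted edges. The sketch below (Python) implements this propagation for a hypothetical two-hypothesis setup (PFS in the full population and in the PI3K pathway-activated subpopulation) with an equal alpha split and full propagation; these parameters are illustrative, not the actual BELLE-2 specification.

```python
# Minimal sketch of a Bretz-style graphical multiple-testing procedure. The
# weights, transition matrix, and p-values below are hypothetical.
def graphical_test(p, w, G, alpha=0.025):
    """Return the set of indices of rejected hypotheses.

    p: raw p-values; w: initial alpha weights (summing to <= 1);
    G: transition matrix, G[i][j] = fraction of H_i's alpha passed to H_j.
    """
    n = len(p)
    w, G = list(w), [list(row) for row in G]
    active, rejected = set(range(n)), set()
    while True:
        cands = [i for i in active if p[i] <= w[i] * alpha]
        if not cands:
            return rejected
        i = cands[0]
        active.remove(i)
        rejected.add(i)
        for j in active:  # propagate H_i's alpha share
            w[j] += w[i] * G[i][j]
        newG = [row[:] for row in G]
        for j in active:  # standard graph update after removing node i
            for k in active:
                if j != k:
                    denom = 1.0 - G[j][i] * G[i][j]
                    newG[j][k] = (G[j][k] + G[j][i] * G[i][k]) / denom if denom > 0 else 0.0
        G = newG
        w[i] = 0.0

# H index 0: full population; index 1: pathway-activated subpopulation.
# Equal split of a one-sided alpha of 0.025, with full mutual propagation.
print(graphical_test(p=[0.010, 0.020], w=[0.5, 0.5], G=[[0.0, 1.0], [1.0, 0.0]]))
# -> {0, 1}: rejecting the full-population test frees its alpha, which then
#    allows the subpopulation test to succeed at the full 0.025 level.
```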
Nonselective enrollment and mandatory tissue collection (prostate cancer and glioblastoma)
Another strategy is to conduct early-phase trials in tumor types with high frequencies of PI3K pathway alterations and strong preclinical evidence supporting the potential efficacy of PI3K-inhibition treatment. These trials enroll patients regardless of PI3K pathway status; however, enrollment is dependent upon the mandatory provision of tumor tissue, which can be used for exploratory post hoc analyses. Castration-resistant prostate cancer (CRPC) is one such tumor type being investigated using this strategy.
PTEN loss is one of the most frequent molecular aberrations to occur in prostate cancer, and ∼70% of metastatic cases have some form of alteration in the PI3K pathway. This high frequency of alterations supports the rationale for investigating PI3K inhibitors in this tumor type. Furthermore, interaction and reciprocal feedback regulation between the androgen receptor and PI3K pathways has been suggested as a potential mechanism of resistance to androgen-deprivation therapy in CRPC. PI3K inhibitors may therefore have the potential to reverse resistance in this context. In preclinical experiments, the combination of BEZ235 and enzalutamide (an androgen-receptor antagonist) demonstrated near-complete tumor regression in a PTEN-deficient murine model and in human prostate cancer xenografts. 5 A phase Ib proof-of-concept trial of BEZ235 or buparlisib in combination with abiraterone acetate is currently enrolling patients with CRPC after progression on abiraterone acetate (NCT01634061).
Glioblastoma multiforme (GBM) is another tumor type with a high frequency of PI3K pathway alterations, with PTEN loss reported in up to 35% of cases. Buparlisib has demonstrated an ability to cross the blood-brain barrier and inhibit the PI3K pathway in the brain, and has shown synergy with temozolomide and docetaxel in murine xenografts of PTEN-null GBM. 6 A phase I trial is investigating buparlisib in combination with adjuvant temozolomide and with concomitant radiotherapy and temozolomide in newly diagnosed GBM (NCT01473901). Two other ongoing phase I/II trials are investigating single-agent buparlisib or the combination of buparlisib and bevacizumab in patients with relapsed disease (NCT01339052 and NCT01349660, respectively). Enrollment in both of these trials is dependent on the provision of tumor biopsy material for the analysis of PI3K pathway alterations.
Preselection of patients with PI3K pathway activation: enrichment strategy (non-small cell lung cancer)
Certain contexts may necessitate the design of trials that selectively recruit patients with PI3K pathway alterations only. Lung cancer treatment has recently moved toward a customized approach based on the molecular characteristics of tumors: patients with EGFR mutations may show improved benefit from EGFR tyrosine kinase inhibitors (TKIs; e.g., erlotinib and gefitinib), and those with ALK translocations from ALK inhibitors (e.g., crizotinib). Preclinical experiments have suggested that PI3K pathway alterations may predict a differential response to PI3K inhibitors in models of non-small cell lung cancer (NSCLC), 7 and PI3K pathway activation has been identified as one of the factors driving resistance to EGFR TKIs in preclinical models. 8 An ongoing phase II study (NCT01297491) is therefore evaluating single-agent buparlisib versus docetaxel or pemetrexed in patients with squamous or nonsquamous metastatic NSCLC with PI3K pathway alterations (PIK3CA mutation and/or PTEN alteration). Patients who have been pretreated with one or two prior antineoplastic treatments are eligible.
Isoform-specific PI3K inhibitors may theoretically offer an improved therapeutic window and narrower toxicity profile compared with pan-PI3K inhibitors. The selective PI3Kα inhibitor BYL719 has shown preferential sensitivity in PIK3CA-mutated cell lines, and a first-in-man study with this agent (NCT01387321) is enrolling patients with PIK3CA mutation or amplification only, to maximize the potential benefit of treatment. 9 Preliminary results from this phase I trial of single-agent BYL719 in patients with advanced solid tumors suggest a favorable safety profile, with two confirmed partial responses observed (one each in patients with HR-positive breast cancer and cervical cancer). 9
Enrollment of patients who have progressed on mTORC1 inhibitor-based therapy
The BOLERO-2 trial showed substantial improvements in PFS with the combination of everolimus and exemestane, compared with exemestane alone, in patients with advanced HR-positive breast cancer who had progressed on nonsteroidal aromatase inhibitors. 10 Despite these improvements in PFS, resistance to the combination of everolimus and exemestane can occur. Inhibition of mTORC1, but not mTORC2, can cause paradoxical reactivation of the PI3K pathway through the alleviation of feedback loops dependent on S6K. 11 PI3K inhibitors, which target the pathway upstream of mTORC1, may therefore show utility in contexts in which mTORC1 inhibitors are unsuccessful or no longer effective. The potential use of PI3K inhibitors in the post-mTORC1 inhibitor treatment setting is being investigated in BELLE-3 (NCT01633060), a placebo-controlled phase III study to investigate the safety and efficacy of buparlisib plus fulvestrant in postmenopausal women with HR-positive/HER2-negative advanced breast cancer who have received aromatase-inhibitor treatment and progressed on or after mTORC1 inhibitor-based therapy. Like BELLE-2, BELLE-3 is stratifying enrolling patients according to PI3K pathway activation status, to investigate the treatment effect in patients with PI3K pathway activation and/or in the population as a whole.
Summary
The burgeoning field of PI3K inhibitor development is associated with many ongoing challenges. PI3K signaling is complex and can be modulated by crosstalk with other kinase cascades, such as the Ras/Raf/MEK pathway. This complexity is further compounded by tissue-specific effects, which may complicate the identification of predictive biomarkers.
It remains unclear whether preclinical observations of improved responses to PI3K inhibitors in tumors with PIK3CA and PTEN alterations will be borne out in clinical trials. Early-phase, single-agent trials with PI3K inhibitors have yet to establish a consistent and distinct association between the most common alterations in the PI3K pathway and response to therapy. Explanations for this are numerous and include heterogeneity in the patient population, use of archival specimens for biomarker assessment, and a low number of responses to single-agent PI3K inhibitors. Future trials of PI3K inhibitors as combination therapy in more homogenous patient populations may be more likely to establish a link between typical PI3K alterations and clinical response.
The successful evaluation of PI3K pathway biomarkers is complicated by many factors, such as observations of discordance between primary and metastatic lesions and issues with intratumoral heterogeneity in molecular alterations. It is possible that future studies will require the prospective collection of biopsies immediately before and after study treatment to address these difficulties. Advances in noninvasive technologies, such as circulating DNA and/or tumor cell analysis, may eventually allow this approach. Future studies may also benefit from deeper analyses into pathway alterations and signaling, such as those offered by high-throughput, next-generation sequencing and phosphoproteomic analyses.
Finally, the spectrum of different PI3K inhibitors also presents its own dilemma: How are the optimal indications for each class of inhibitor identified? Whereas more selective inhibitors may offer improved therapeutic windows and narrower toxicity profiles, certain tumor types or treatment contexts may require more comprehensive inhibition of the PI3K pathway. These assessments will include the identification of the optimal dose and dosing schedule of each inhibitor and the tumor types in which they can best be used, and will be planned based on robust preclinical evidence.
In conclusion, PI3K inhibitors show great promise in the treatment of a wide range of cancers; a well-structured approach to study design will be required to maximize the potential of this exciting class of therapy. | 2018-04-03T03:10:41.393Z | 0001-01-01T00:00:00.000 | {
"year": 2013,
"sha1": "323ebcb5a1bf0eca21e92206ef2595c7da381fc2",
"oa_license": "CCBY",
"oa_url": "https://nyaspubs.onlinelibrary.wiley.com/doi/pdfdirect/10.1111/nyas.12060",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "323ebcb5a1bf0eca21e92206ef2595c7da381fc2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
2782326 | pes2o/s2orc | v3-fos-license | Military vaccines in today’s environment
The US military has a long and highly distinguished record of developing effective vaccines against pathogens that threaten the armed forces. Many of these vaccines have also been of significant benefit to civilian populations around the world. The current requirements for force protection include vaccines against endemic disease threats as well as against biological warfare or bioterrorism agents, to include novel or genetically engineered threats. The cost of vaccine development and the modern regulatory requirements for licensing vaccines have strained the ability of the program to maintain this broad mission. Without innovative vaccine technologies, streamlined regulatory strategies, and coordinating efforts for use in civilian populations where appropriate, the military vaccine development program is in jeopardy.
Historical Perspective
The number of military personnel admitted to US Army hospitals as the result of infectious diseases was much higher than admissions due to wounds or other injuries incurred in WWII or the wars in Korea, Vietnam or the Persian Gulf. 1 It is not surprising, therefore, that since George Washington first ordered mandatory variolation of new recruits to the Continental Army to prevent smallpox in 1777, vaccination of military personnel has been a crucial component of deployments (reviewed in refs. 2 and 3). Because of the large number of diverse endemic disease pathogens encountered by military personnel around the world, it is also not surprising that the US Department of Defense (DoD) has historically maintained an extensive vaccine research and development program. Not only are endemic diseases of concern for the military, but so are potential exposures to agents deliberately introduced into the environment through biological warfare (BW) or bioterrorism, including toxins such as ricin, botulinum toxin, Staphylococcus enterotoxin B, and pathogens causing anthrax, plague, tularemia, glanders, smallpox, Ebola and Marburg hemorrhagic fevers, or Venezuelan, eastern or western equine encephalitis. Further, genetically engineered novel threats are now a possibility, which has expanded the scope of military vaccine research and development.
Because it is recognized that some of these same BW or endemic disease agents are also potential threats to civilians, significant funds have been programmed for the Biomedical Advanced Research and Development Authority (BARDA) to stockpile vaccines against a few of the most likely pandemic disease threats or bioterrorism agents, such as pandemic influenza, anthrax and smallpox. Although there is overlap in the missions of BARDA and DoD, their ultimate goals differ in that BARDA focuses on countermeasures for treating the population after exposure to a bioterrorism agent or in response to a pandemic, whereas the DoD aims to provide protective immunity to the armed forces prior to exposure. Today, however, while vaccination of deployed troops remains a matter of national security, the cost of vaccine development has increased to the point where, without innovation and renewed commitment, the current scope of military vaccine development efforts is not sustainable.
The Cost of Licensing Military Vaccines
The overall expense associated with a single new FDA-licensed vaccine has been estimated to average between $600 million and $1 billion. 4 In such circumstances, the FDA requires that informed consent be documented. As it is extremely difficult to maintain adequate records under combat conditions, this is not really a practical solution to a vaccination requirement, and it is not consistent with the goal of using only the safest and most effective products in troops.
The other special situation in which vaccines developed by the military are used under IND status with informed consent is in the Special Immunizations Program (SIP) located at USAMRIID. The vaccines given in the SIP are intended to provide added protection to individuals with an occupational risk of exposure to pathogens (e.g., laboratory scientists, animal caretakers, and facilities and equipment maintenance staff). Numerous problems with the SIP have been recognized in recent years and are well described in the 2011 National Academies publication "Protecting the Frontline of Biodefense Research: The Special Immunizations Program." Among the issues highlighted are the limited remaining supplies and age of the vaccines (mostly developed in the 1970s and 1980s under different regulatory standards) and, arguably most important, the cost of maintaining the SIP (approximately $6 million per year) with no dedicated funding source. The NAS committee emphasized the worth of the SIP and recommended that the cost of the program be supported by all users and that the vaccines be replaced with newer licensed or IND vaccines as they become available. Both recommendations are absolutely critical if vaccination of personnel who work with these dangerous pathogens is to continue, and if these vaccines or other military vaccines developed under IND status are to remain an option for additional use in emergencies.
The value of maintaining such vaccines that have already been tested under IND was illustrated most recently when a vaccine against the mosquito-borne Chikungunya virus, which was developed by the Army in the 1970s, was transferred to French scientists for further study after an explosive outbreak of Chikungunya in the Indian Ocean Islands in 2006. 5,6 The live-attenuated Chikungunya vaccine was previously evaluated through phase 2 clinical studies by the military, with very promising results; i.e., 57 out of 58 vaccinees developed neutralizing antibodies by day 28, and 85% were still seropositive a year later. 7 Lack of funding was the overriding reason for the termination of the Chikungunya vaccine development effort by the DoD at that time, in that there was no commercial partner interested in pursuing the vaccine, and there was no clear path toward licensure due to the unpredictability of outbreaks.
The same funding obstacles exist today for a number of vaccines that the military is developing. For example, a vaccine under development for HFRS caused by hantavirus infections is currently in Phase 1 clinical testing (ref. 8 and unpublished information). Even if the vaccine is shown to be safe and immunogenic in early clinical studies, as was the case with the Chikungunya vaccine, it might be difficult to find a commercial partner or a field testing site with sufficient disease to support FDA licensure. Even in regions where a phase 3 trial might be possible (areas of China, Russia and possibly Finland 9,10 ), the cost without a commercial partner would probably be prohibitive, in that thousands of volunteers would need to be enrolled and such a study would likely cost well over $100 million. 11,12 Given issues such as these, if military vaccines for diseases like HFRS and several others are going to be licensed and available for use in the armed forces and in civilian populations, significant government or industry investments and innovative paths to licensure will be required.
Alternative Licensing Strategies and Incentives
In cases where it is not possible to conduct human efficacy studies (cost not currently being an acceptable reason), an alternative licensing strategy must be pursued. Specifically, the recently defined "animal rule" allows licensure based on efficacy results of studies performed in well-defined animal models that reflect the human disease (reviewed in ref. 13). Safety studies in humans would still be required. This pathway to licensure is not necessarily easier or quicker than a traditional path, given that it is very difficult to correlate animal disease with human disease, and in some cases there are no animal models of disease (e.g., in the case of HFRS). In that situation, another unconventional strategy that the FDA has outlined involves the use of surrogate endpoints obtained in well-controlled clinical studies that are shown to be reasonably likely to predict clinical benefit (USFDA 21CFR314.510). If marketing approval is granted using these criteria, then post-marketing studies would also be required to verify and describe the clinical benefit. For example, if neutralizing antibodies could be established as a surrogate marker of protection, then it might be possible to obtain marketing approval from the FDA without a traditional phase 3 study, but efficacy as well as safety measurements would be included in the follow-up study. Incentives for commercial development of vaccines with limited expected profitability also exist and include the designation of Orphan Drug Status, which the FDA can grant for vaccines that will be administered to fewer than 200,000 people per year in the US. This incentive is particularly attractive to Pharma, in that developers receive a 50% tax credit for qualified clinical research expenses, a waiver of fees for the Biologics License Application (BLA), and a 7-year marketing exclusivity period (USFDA 21CFR316, Orphan Drug Act). Even more attractive to a commercial partner is the possibility of obtaining a "Priority Review Voucher," which can be awarded by the FDA when a BLA is filed for a vaccine against a neglected disease. This process is intended to shorten the normal FDA review time by at least six months; importantly, the vaccine developer can save this voucher for priority review of a more lucrative product, or can even transfer or sell it to another company. Other means of shortening the review process would also very likely be attractive to commercial partners if they were available.
Innovations
Novel vaccine design and delivery methods are being intensely pursued by researchers in government, academia and industry. Development of broad-spectrum platforms that are suitable for "plug and play" types of vaccines could provide a means to generate multiagent vaccines that would reduce both the time to availability and the shot burden for military personnel (and civilians). The platform that has so far come closest to this goal is plasmid DNA vaccination, which involves delivery of DNA plasmids engineered to express one or more genes of interest. To date, DNA vaccines have been tested in numerous phase 1 and phase 2 clinical studies, both for prophylactic and therapeutic purposes (reviewed in refs. 14 and 15). Overall, the potential of DNA vaccines has been limited mostly by the need for better delivery methods that elicit sufficient immune responses in humans. A similar concept, but with synthetic RNA instead of DNA, is in early stages of development at several companies; it could offer the same plug and play advantages as DNA but would avoid the need for delivery to host cell nuclei for gene expression.
Other platforms that might be suitable for many different types of vaccines are also under development, including virus-like particles displaying immunogenic proteins, nanoparticle vaccines produced by trapping proteins or nucleic acids in particulate substances (some with inherent adjuvanting properties), or even platforms that can modulate host immune responses. It is doubtful that a single platform will answer all vaccine needs, and to date, none of the innovative platforms have resulted in a licensed vaccine, although DNA vaccines have been approved for veterinary use.
Conclusion: What Should the Modern Military Vaccine Program Encompass?
Protecting the health of military personnel is clearly in the best interest of the US, and vaccination is the best way to prevent endemic and BW disease threats. The question, therefore, is how to pay for the numerous vaccines that would need to be developed to accomplish this goal. One answer might be for the military to simply fund all of the efforts required. Many comparisons of the cost of medical countermeasures vs. the cost of fighter jets, tanks, etc. have been made, and while it is true that the DoD medical research program is small compared with the acquisition of artillery and vehicles, such comparisons are not really helpful, as the requirement for one does not negate the requirement for the other. Realistically, the chances of major increases in the DoD budget to pay for vaccines are not good. Consequently, it will be necessary either to reduce the scope of the effort to only a few high-impact diseases, or to develop novel vaccine platforms and innovative (and shortened) licensing strategies to meet the need to protect deployed troops, with spillover benefits to the civilian community. | 2016-05-12T22:15:10.714Z | 2012-08-01T00:00:00.000 | {
"year": 2012,
"sha1": "bbf26b491583c8451abfba519f0a55236c4cf612",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.4161/hv.20503?needAccess=true",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "bbf26b491583c8451abfba519f0a55236c4cf612",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
54549009 | pes2o/s2orc | v3-fos-license | A-Legality and the Death of the Refugee
...a legal order is concrete because it actualizes a determinate realm of practical possibilities, in the twofold sense of certain legal possibilities and certain possibilities of illegality. As exclusion, the closure which inaugurates a legal collective relegates everything that is beyond the pale of joint action and its normative point to the residual domain of the unordered. The unordered comprises a surfeit, rather than a dearth, of practical possibilities, yet a superabundance of possibilities that have been levelled down to the status of the irrelevant and unimportant, as the price that must be paid if there is to be any legal empowerment at all. (Lindahl 2013, p. 255)
Introduction
It is reported that between April and September 2015 approximately 1500 people drowned in European waters. The largest single incident resulting in loss of life occurred on 19 April 2015, when at least 700 people drowned after their boat capsized just off the Libyan coast. In the wake of these deaths came an event in which the question of human 'right' was momentarily divorced from the question of citizenship.
There are, no doubt, several explanations for this sudden and unexpected bringing to consciousness of the 'abstract nakedness of being human and nothing but human' (Arendt 1968, p. 297) after decades of state-centred discourses, structured upon the opposition of economic migrant/refugee. In this short piece I suggest that the deaths were the result of two oppositional peremptory norms of international law being tested to their limits.
More pertinently, I argue that the testing of these norms could equally have brought a radical re-conception of our political existence, at least insofar as that existence is represented in the figure of the refugee. This re-conception would occur if what is now seen as irrelevant, impossible, even ridiculous, in the convergence of those norms were seen instead as carrying infinite possibilities.
Article 33 and International Law Doctrine
Geneva Convention Article 33 states that: No contracting state shall expel or return (refouler) a refugee in any manner whatsoever to the frontiers of territory where his life or freedom would be threatened on account of his race, religion, nationality, membership of a particular social group or political opinion. (Convention Relating to the Status of Refugees 1951, p. 137) There is almost universal consensus that the prohibition against refoulement is absolute and 'unconditional', that it is 'a jus cogens norm' assuming a place in the hierarchy of international law above that of treaties. It is said that non-refoulement is becoming extra-conventional, superseding the wishes expressed by states; it permits of no derogation (UNHCR 2007, paras. 12, 13 and 15). However, the same level of consensus exists over the relation between the principle of non-refoulement and access to territory by a Convention refugee: state practice and opinio juris permit only one conclusion, namely that the individual has no right to be granted territorial asylum (UNHCR 2007, para. 8).
Against these two (peremptory) positions, an absolute obligation to respect the non-refoulement principle applicable to Convention refugees set against an absolute right to withhold territorial asylum from Convention refugees, we can and do contemplate a range of possibilities and actions directed at refugees. States can (and do) acknowledge a de facto right of territorial asylum, reasoning that according such a de facto right is the only way in which the non-refoulement principle can be upheld. States can (and do) circumvent Article 33 in cases where a refugee has a formal connection with another state (such as dual nationality). States can (and do) ignore the imperative of Article 33 and return refugees to the place of nationality or domicile and to their uncertain fate there.
This brings us to the recent crisis of human movement: states can (and do) allow asylum-seekers to perish in icy waters whilst they await a ruling as to the extra-territorial reach of Convention Article 33 (UNHCR 2007, para. 8), or some process of burden sharing between states.
Between the clearly legal (de facto asylum/connection with an alternative state), the clearly illegal (direct refoulement) and the not-yet (il)legal (allowing asylum-seekers to drown on the basis that they are not strictly in territory) lie possibilities which have for too long been deemed by the legal collective, here represented in the international community of nations, irrelevant/unimportant (Lindahl 2013, p. 255).
The possibility that the absolute and unconditional principle of non-refoulement can be upheld at all times in relation to the Convention refugee without the need to offer the refugee a territorial solution is what is at issue here. Read together, in the extremes of their articulation, the two absolute positions command that a refugee is neither returned to a place from whence he/she came nor offered territorial protection. What Lindahl's work offers is a way to confront this seemingly impossible challenge/conundrum: intellectually, politically and practically.
Considerations of space preclude me from doing full justice to Lindahl's rich thesis, and so I ask the reader to hold the idea that sits at its core (reproduced in the opening quotation), which invites thinking beyond the legal/illegal and the soon-to-be-designated illegal/legal, and instead to look toward the unordered/the chaotic within that otherwise regulated realm. Un-order is the pre-condition of A-Legality, the state of affairs to come, that which for now is dismissed as irrelevant: the 'dialogos - that is a logos - a rationality - of the in between' (Lindahl 2013, p. 255), still a 'fault-line' but one that contains the seeds or agents that could radically disrupt the extant legal order.
Wherein can be located the state of un-order/the productive chaos of Article 33? According to Lindahl, A-Legality intrudes when 'certain types of acts or behaviours' take place where they ought not to take place; but the normative disorientation that ensues from this 'disruption' or 'irruption' demonstrates that such behaviours cannot simply be described as illegal, since illegal behaviour is always within the sphere of contemplation of any concrete legal order (Lindahl 2013, p. 158).
As far as Article 33 is concerned, there is nothing more evidently dismissed as irrelevant and unimportant than the question that states have put aside in favour of behaving legally or illegally: the question of whether human life can be respected whilst simultaneously retaining the non-derogable nature of Article 33 and the non-enforceable right of asylum, a question which invites chaos (Lindahl 2013, p. 162).
Whenever it is asserted that the principle of non-refoulement is an absolute and unconditional norm not capable of designing or engineering access to territory, the speaker is forced to contemplate the political significance of the 'human that is nothing but human'-returning to Arendt's formulation (Arendt 1968, p. 297).
In the extremes of its application, extremes that recent scholarly and policy-oriented articulations seek to avoid, the principle of non-refoulement commands that a refugee is neither returned to a place in which his/her life or freedom may be placed in peril, nor offered territorial protection. The recent loss of life on European seas has moved the A-Legal sphere of Article 33 from the realms of the irrelevant to the sphere of the monstrous, the terrible; but the monstrous, terrible possibilities that have long been dismissed must be confronted if the problem of mass migration is to be confronted also. The non-refoulement principle, unqualified, maintains that the refugee with no place on earth to go, and thus possessing none of the positive rights derived from the primary law of the earth, still/yet cannot be sacrificed, and thus possesses something of value, although that something may exceed our conception of human rights.
For those intent on contemplating the importance of territory to social and political life, I suggest that Article 33(1) provides such a focus. As it has been widely interpreted until quite recently, Article 33 is an exceptional legal instrument, providing an important frame from within which to interrogate/challenge the biopolitical order.
Article 33 challenges us to think beyond the trinity of state-people-territory and thus to think beyond the bio-political condition. The content of the obligation contained in Article 33 is clear: it seeks to protect the asylum-seeker from the violence that we know is meted out through and on territory. If the way beyond the tragedy of mass statelessness is to accord equal value to the 'abstract nakedness' (Arendt 1968, p. 297) of the human being, then the challenge is a singular one and must begin from a philosophical position that disputes that only death results from non-territory: from the unruly seas or the cold air. Article 33 promised to be a source from which to confront our bio-political existence. It has not yet been captured within the logic of territorial immanence and so may yet serve that purpose. | 2018-12-02T05:26:20.569Z | 2016-04-01T00:00:00.000 | {
"year": 2016,
"sha1": "a74dc2efee7f6a7929288d6118408fab23d4c02c",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10978-015-9172-x.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "52301bd984efd4460defc3f0f7a39e5d6ab63db3",
"s2fieldsofstudy": [
"Law"
],
"extfieldsofstudy": [
"Political Science"
]
} |
252554883 | pes2o/s2orc | v3-fos-license | Long-Term Effects of Cold Atmospheric Plasma-Treated Water on the Antioxidative System of Hordeum vulgare
Facing climate change, the development of innovative agricultural technologies securing food production becomes increasingly important. Plasma-treated water (PTW) might be a promising tool to enhance drought stress tolerance in plants. Knowledge about the effects of PTW on the physiology of plants, especially on their antioxidative system on a long-term scale, is still scarce. In this work, PTW was applied to barley leaves (Hordeum vulgare cv. Kosmos), and various constituents of the plants' antioxidative system were analyzed 30 days after treatment. An additional drought stress was applied after foliar PTW application, followed by a recovery period, to elucidate whether PTW treatment improved stress tolerance. Upon PTW treatment, the Total Antioxidant Capacity (TAC) in leaves and roots was lower in comparison to deionized water treated plants. In contrast, PTW treatment caused a higher content of chlorophyll, higher quantum yield and higher total ascorbate content in leaves compared to deionized water treated plants. After additional drought application and a subsequent recovery period, higher values for TAC and higher contents of malondialdehyde and glutathione, as well as increased activity of ascorbate peroxidase, indicated a possible upregulation of antioxidative properties in roots. Hydrogen peroxide and nitric oxide might mediate abiotic stress tolerance and are considered key components of PTW.
Introduction
Facing climate change and continuous population growth, many challenges arise in securing the global demand for crops (Anderson et al. 2020). Climate change in particular threatens food security due to the progressively more frequent occurrence of extreme weather events. Drought and desertification are two of the most pervasive ecological consequences and compromise the quality and quantity of crop products (reviewed by Raza et al. 2019). Consequently, extensive strategies encompassing agricultural adaption to climate change are required to deal with future climatic challenges and to ensure food security (Pretty et al. 2010).
Cold atmospheric plasma (CAP) gained considerable attention as a promising 'green technology' for future agricultural applications (Puač et al. 2018). Plasma is an ionized gas and is referred to as the fourth state of matter, containing electrons, ions, neutral atoms and molecules, radicals, reactive species, different kinds of electromagnetic radiation (e.g., UV, visible light), and electric fields (Lu et al. 2016; Zhou et al. 2020). A huge variety of methods exists to generate CAP for treatment of biological targets under physiological temperatures, ranging from the feed gas to the electrical parameters used to ignite plasma and the configuration of the plasma devices (Šimek and Homola 2021; Zhou et al. 2020). Plasma-treated water (PTW), produced by exposing water to plasma, has the advantage that it can be generated in bigger quantities to treat plant roots or shoots, thereby omitting effects of, e.g., UV radiation or electromagnetic fields. The chemistry of PTW is based on complex reactions at plasma-gas-liquid interfaces (Graves et al. 2019; Thirumdas et al. 2018; Zhou et al. 2020). In principle, energy transferred to molecular oxygen- and nitrogen-containing gas leads to the generation of reactive oxygen and nitrogen species (RONS). Gaseous RONS, e.g., ozone (O 3 ) or nitric oxide (NO), can diffuse to a certain extent into the liquid but are relatively unstable due to further reactions. Hydrogen peroxide (H 2 O 2 ), nitrite (NO 2 − ), and nitrate (NO 3 − ) ions, accompanied by a decrease in pH, are frequently detected in PTW (Hu et al. 2021; Zhou et al. 2020). PTW has multifaceted effects on plants, comprising the activation of plant vitality, inactivation of phytopathogens, enhancement of seed germination and plant growth, as well as influences on the antioxidative system (Ito et al. 2018; Zhou et al. 2020; Adhikari et al. 2020). Moreover, recent studies evaluated the effects of PTW in stimulating biotic and abiotic stress-related responses in barley (Gierczik et al. 2020), grapevine (Laurita et al. 2021), maize (Lukacova et al. 2021), periwinkle (Zambon et al. 2020), and tomato (Adhikari et al. 2019).
Plants perceive biotic as well as abiotic changes. If environmental changes impose excessive strain, outside influences may result in oxidative stress (Kranner et al. 2010; Demidchik 2015). Since plants are aerobic organisms and utilize molecular oxygen (O 2 ) in several biochemical processes, stress metabolism leads to the enhanced generation of toxic byproducts called reactive oxygen species (ROS), which include singlet oxygen ( 1 O 2 ), superoxide anion (O 2 − ), hydroxyl radical (OH • ), and H 2 O 2 (Choudhury et al. 2017; Mittler 2002). ROS naturally occur upon the partial reduction of O 2 in many parts of the metabolism. Their reactivity, due to the high oxidizing potential, can cause damage to nucleic acids, proteins, carbohydrates, and lipids. ROS trigger cell death by overwhelming the redox homeostasis if oxidative stress is severe (Bartosz 1997). If the damage cannot be reversed, programmed cell death might be initiated (Mittler 2002). In contrast, if kept at transient concentrations, ROS may also function as second messengers (Alscher et al. 1997; Foyer and Noctor 2005). They are responsible for fine-tuning several signal transduction processes involved in defense mechanisms against biotic and abiotic stresses (Dumanović et al. 2021). Besides ROS, it was shown that an imbalanced redox homeostasis also results in the production of reactive nitrogen species (RNS), essentially NO and its derivatives (Wang et al. 2013). NO is the most studied RNS and participates in many physiological processes of higher plants. Versatile interactions between ROS and RNS are noticeable (Astier et al. 2018). Under stress conditions, an accumulation or de-regulated synthesis of RNS can prevail, leading to nitrosative stress, which is possibly involved in oxidative stress (Del Río 2015). Since plants are sessile and possess limited capabilities of stress avoidance, they have developed a flexible scavenging system as an adaption to changing environmental conditions, universally referred to as the antioxidative system. The ascorbate-glutathione cycle plays an important role in the antioxidative system since it facilitates the efficient detoxification of H 2 O 2 (Foyer and Halliwell 1976). It consists of the low molecular mass antioxidants ascorbic acid (Asc) and glutathione and the enzymes ascorbate peroxidase (APX), monodehydroascorbate reductase (MDHAR), dehydroascorbate reductase (DHAR), and glutathione reductase (GR).
Typically, plants cope with drought stress by activating the antioxidative system as well as by producing compatible solutes (also known as osmolytes) to counteract osmotic disequilibria (Fang and Xiong 2015). The upregulation of enzymatic and non-enzymatic antioxidants requires a complex network of signaling pathways which to date is still not completely understood. Next to phytohormones, interlinking molecules, protein kinases, and transcription factors, H 2 O 2 and NO play a pivotal role in this signaling network (Qiao et al. 2014; Ilyas et al. 2020; Lau et al. 2021). Many strategies have been designed to improve drought tolerance in plants. An improved stress tolerance based on physiological or metabolic adjustment due to an earlier exposure to a mild stress is referred to as 'priming' (or 'hardening'). It represents one of the most promising crop protection approaches for the production of resilient crops (Li and Liu 2016). The exogenous application to above-ground plant parts of certain compounds that can enhance tolerance is a well-established method both in research and agriculture to improve the performance of crops (Merewitz 2016). Since primed plants show improved stress responses, this phenomenon is part of the concept of a 'stress memory' (Hilker and Schmülling 2019). The associated molecular mechanisms are still unclear (Li et al. 2019), although the same compounds involved in the signaling pathways for the 'natural' development of stress tolerance seem to participate in priming-induced stress tolerance. The exogenous application of drought-responsive phytohormones, H 2 O 2 and NO-donating chemicals has resulted in enhanced drought tolerance (Molassiotis et al. 2016). A common feature of these signaling pathways is the complex cross-talk between signaling components (Molassiotis and Fotopoulos 2011).
In this work, we investigated the long-term effects of PTW on the antioxidative system of Hordeum vulgare leaves and roots under no-stress and drought stress conditions. Furthermore, it is discussed whether the foliar application of PTW, and particularly its constituents H 2 O 2 and NO, sustainably mediates drought stress tolerance.
Production of Plasma-Treated Water
The alternating current (AC)-driven plasma system used in this study consisted of a pin-to-liquid discharge configuration with four metal electrodes placed approx. 3 mm above the water surface (Schmidt et al. 2019). Deionized water (DW) was mixed with 7.5% (v/v) tap water prior to plasma treatment, as a sufficient concentration of ions (conductivity ≥ 80 µS cm −1 ) was needed to ignite the plasma between the electrodes and the water surface. A water mixture of 900 ml was treated for 20 min. The spatial boundary layers between transient spark discharges, the water surface, and ambient air led to the formation of reactive and excited nitrogen and oxygen species, which were transported into the water volume by constant stirring during treatment. PTW application to plants and physicochemical analyses were performed 5-10 min after treatment.
The pH of the water was measured using a pH 3210 meter (WTW, Weilheim, Germany) and the conductivity with a TetraCon 325 electrode on an inoLab Multi Level3 with inoLab Terminal Level3 (WTW, Weilheim, Germany).
Determination of Nitrite and Nitrate Concentration in PTW
Nitrate and nitrite ions were analyzed by ion exchange chromatography using a Dionex ICS 6000 system (Thermo Scientific, Dreieich, Germany) equipped with an anion-exchange column (Dionex IonPac AS 18, Thermo Scientific, Dreieich, Germany) and a guard column (Dionex IonPac AG 18, Thermo Scientific, Dreieich, Germany) according to the manufacturer's instructions. 5 µl of undiluted PTW was injected; ions were separated in 23 mM KOH at 0.25 ml min −1 under isocratic conditions, and conductivity signals were recorded. Concentrations of ions were calculated based on a calibration curve established with Dionex 7-ion standard solution (Thermo Scientific, Dreieich, Germany). Ions were determined from four independent PTW solutions.
Determination of Hydrogen Peroxide Concentration in PTW
The level of H 2 O 2 in PTW was determined with the potassium iodide (KI) method according to Junglee et al. (2014). The assay was performed in a 1 ml test volume containing 500 µl 1 M KI in MES-KOH (50 mM, pH 6.0), 50 µl PTW and 450 µl H 2 O deion . Absorbance was read at 350 nm after 30 min incubation at room temperature. A standard curve was obtained with H 2 O 2 standard solutions prepared in H 2 O deion .
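To make the arithmetic behind this assay explicit, the following sketch converts a measured A350 into the H 2 O 2 concentration of the original PTW via a linear standard curve. All numerical values are hypothetical; the 20-fold factor follows from the 50 µl PTW aliquot in the 1 ml assay volume.

```python
import numpy as np

# Hypothetical standard curve: A350 readings for known H2O2 standards (µM in the assay)
std_conc_uM = np.array([0.0, 50.0, 100.0, 200.0, 400.0])
std_a350 = np.array([0.00, 0.11, 0.22, 0.45, 0.90])

slope, intercept = np.polyfit(std_conc_uM, std_a350, 1)  # linear fit

def h2o2_in_ptw(a350: float, dilution: float = 1000.0 / 50.0) -> float:
    """H2O2 concentration in the original PTW (µM).

    50 µl PTW in a 1 ml assay corresponds to a 20-fold dilution.
    """
    conc_in_assay = (a350 - intercept) / slope
    return conc_in_assay * dilution

print(f"{h2o2_in_ptw(0.04):.0f} µM H2O2 in the PTW")  # ~350 µM for this example
```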
Determination of Nitric Oxide Release from PTW
Continuous and specific measurements of gaseous NO were accomplished with an ANALYZER LCD 88 sp (Eco Physics) chemiluminescence-based NO detector equipped with an ozone generator (Stöhr et al. 2001; Stöhr and Stremlau 2006). 500 µl PTW was placed on a petri dish (diameter 5 cm) within the custom-made reactor chamber. The experiment was performed at 30 °C under anoxic conditions, and the sample was constantly stirred. N 2 carrier gas transported the emitted NO at a flow rate of 400 ml min −1 (mass flow meter GFM ANALYT-MTC) to the analyzer. Data were recorded until NO was no longer detectable.
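As a rough illustration of how such a chemiluminescence record can be converted into the total amount of NO released, the sketch below integrates a concentration-time trace over the known carrier gas flow. The molar volume and the trace itself are assumed values for illustration, not measured data.

```python
import numpy as np

flow_l_min = 0.400     # N2 carrier gas flow: 400 ml min-1
vm_l_per_mol = 24.9    # approx. molar volume of an ideal gas at 30 °C, ~1 atm

# Hypothetical analyzer trace: time (min) vs. NO mixing ratio in the carrier gas (ppb)
t_min = np.array([0.0, 1.0, 2.0, 5.0, 10.0, 20.0, 40.0, 60.0])
no_ppb = np.array([0.0, 800.0, 650.0, 400.0, 200.0, 60.0, 10.0, 0.0])

no_mol_per_l_gas = no_ppb * 1e-9 / vm_l_per_mol             # mol NO per litre carrier gas
total_mol = np.trapz(no_mol_per_l_gas * flow_l_min, t_min)  # integrate the release rate

sample_l = 500e-6  # 500 µl PTW on the petri dish
print(f"~{total_mol / sample_l * 1e6:.0f} µM NO released from the PTW")
```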
Plant Material and Cultivation
Seeds of Hordeum vulgare cv. Kosmos were pre-germinated in Petri dishes on filter paper soaked with 0.5 mM calcium sulfate for 2 days in the dark. Each seedling was placed in a pot (∅ 12 cm) with a homogeneous 2:1 (v:v) mixture of coarse-grained and fine-grained quartz sand. Plants were grown under a light/dark rhythm of 14/10 h at 22/18 °C air temperature, respectively. Depending on the weather conditions, sunlight was supplemented by the light of high-pressure sodium lamps. The pots were rotated twice a week to ensure uniform growth conditions. All plants were watered daily with a defined nutrient solution containing 5 mM nitrate (Stöhr and Ullrich 1997) except during the drought period.
PTW Application and Drought Treatments
Four plant groups were treated as follows, as summarized in the sketch below: I. "DW no stress" was sprayed with deionized water instead of PTW (DW; 1.7 ml) as a control 18, 19, and 20 days after sowing (DAS). II. "PTW no stress" was sprayed with PTW (1.7 ml) 18, 19, and 20 DAS. III. "DW drought" was sprayed with DW as in (I) and experienced drought stress through withholding of the nutrient solution on 33, 34, and 35 DAS, followed by a recovery: rewatering on 36 DAS with a double amount of nutrient solution and normal watering thereafter. IV. "PTW drought" experienced both PTW application (II) and drought stress (III), followed by a recovery as indicated above.
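For clarity, the four-group design can be written out as a small data structure; the group keys and field names below are illustrative bookkeeping, not part of the original protocol.

```python
# Illustrative encoding of the four treatment groups (days after sowing, DAS)
treatments = {
    "DW_no_stress":  {"spray": "DW",  "spray_das": [18, 19, 20], "drought_das": []},
    "PTW_no_stress": {"spray": "PTW", "spray_das": [18, 19, 20], "drought_das": []},
    "DW_drought":    {"spray": "DW",  "spray_das": [18, 19, 20], "drought_das": [33, 34, 35]},
    "PTW_drought":   {"spray": "PTW", "spray_das": [18, 19, 20], "drought_das": [33, 34, 35]},
}
REWATER_DAS, HARVEST_DAS = 36, 50  # rewatering and final harvest

for name, t in treatments.items():
    stressed = "drought" if t["drought_das"] else "no stress"
    print(f"{name}: {t['spray']} foliar spray on DAS {t['spray_das']}, {stressed}")
```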
Photosynthetic measurements were performed 49 DAS, one day before harvesting. Plants were harvested in the morning before rewatering 36 DAS for proline determination only (another batch) and 50 DAS after the recovery phase for biochemical assays. For MultispeQ measurements, 10 biological replicates of each treatment were used, and biochemical assays were performed on 4 biological replicates.
Photosynthetic Measurements
Spectroscopic measurements were done with the hand-held MultispeQ (v2.0) spectrophotometer (Kuhlgert et al. 2016) using the 'Photosynthesis RIDES' protocol linked to the PhotosynQ platform (https://photosynq.org). Measurements were performed in the middle of an intact, fully expanded leaf (third leaf from top) to estimate the fraction of light energy captured by Photosystem II (quantum yield or operating efficiency of PSII, Phi2, ΦII).
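The PSII operating efficiency reported here follows the standard chlorophyll-fluorescence definition, sketched below; the example input values are hypothetical.

```python
def phi2(f_s: float, f_m_prime: float) -> float:
    """PSII operating efficiency: Phi2 = (Fm' - Fs) / Fm'.

    Fs: steady-state fluorescence in the light;
    Fm': maximal fluorescence during a saturating pulse.
    """
    return (f_m_prime - f_s) / f_m_prime

print(f"Phi2 = {phi2(f_s=420.0, f_m_prime=980.0):.2f}")  # ~0.57, a typical light-adapted value
```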
Soil Humidity
The measurement of soil moisture was conducted with the FOM/mts device with non-standard probes LP/ms (E-Test Ltd., Stasin, Poland; on the base of Time Domain Reflectometry technique). FOM/mts provides readout of volumetric water content according to the empirical calibration of Malicki et al. (1996). Under well-watered conditions, plants met a soil moisture of about 10% (v/v), whereas the drought-stressed plants experienced a soil moisture of 2% (v/v) at the lowest point.
Biochemical Assays
Fully expanded leaves and roots were harvested 50 DAS (36 DAS for proline determination), ground with liquid nitrogen and stored at − 80 °C. Frozen tissue powder was treated with individual extracting reagents according to each assay.
Determination of Proline Content and Lipid Peroxidation
Proline content was determined according to the method of Bates et al. (1973). Lipid peroxidation was determined and calculated in terms of thiobarbituric acid-reactive substances (TBARS) using the malondialdehyde (MDA) assay according to Cavalcanti et al. (2004) with some modifications. 0.2 g frozen tissue was treated with 1 ml ice-cold 1% (w/v) trichloroacetic acid (TCA). The homogenate was centrifuged at 18,000×g and 4 °C for 15 min. For the assay, 250 µl of the supernatant was incubated with 750 µl 0.5% (w/v) thiobarbituric acid in 20% (w/v) TCA. After 1 h incubation at 98 °C, the reaction was stopped on ice and centrifuged at 15,000×g and 4 °C for 5 min. The absorbance was read at 532 and 600 nm. The calculation was based on A 532 nm − A 600 nm and the extinction coefficient ε = 155 mM −1 cm −1 .
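A minimal sketch of the Beer-Lambert arithmetic behind this calculation is given below. The volumes follow the protocol above, while the 1 cm optical path is an assumption that must be adjusted to the actual cuvette.

```python
def tbars_nmol_per_g(a532: float, a600: float, fw_g: float = 0.2,
                     extract_ml: float = 1.0, aliquot_ml: float = 0.25,
                     assay_ml: float = 1.0, eps_mM_cm: float = 155.0,
                     path_cm: float = 1.0) -> float:
    """TBARS (MDA equivalents) in nmol per g fresh weight."""
    c_umol_per_ml = (a532 - a600) / (eps_mM_cm * path_cm)  # mM equals µmol ml-1
    nmol_in_assay = c_umol_per_ml * assay_ml * 1000.0      # nmol in the 1 ml reaction
    nmol_in_extract = nmol_in_assay * (extract_ml / aliquot_ml)  # scale to full extract
    return nmol_in_extract / fw_g

print(f"{tbars_nmol_per_g(0.155, 0.015):.1f} nmol MDA g-1 FW")  # ~18.1 for these readings
```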
Determination of Chlorophyll Content and Total Antioxidant Capacity
Methanolic extraction was performed for determination of chlorophyll content and total antioxidant capacity (TAC). 0.1 g frozen tissue was treated with 1 ml of 99% (v/v) methanol and incubated in an ultrasonic bath (62 °C, 15 min, 100% DEGAS). The extraction was repeated three times.
Measurements and calculations of chlorophyll in methanolic extracts were performed according to Lichtenthaler and Buschmann (2001).
TAC was determined using 2,2-diphenyl-1-picrylhydrazyl (DPPH) as described by Mahieddine et al. (2018) with some modifications. In this study, methanolic extract (100 µl for leaves, 200 µl for roots) was incubated with 900 µl of 0.1 mM DPPH ethanolic solution for 30 min at room temperature in the dark, and absorbance was read at 520 nm. A blank for each sample was prepared by adding 900 µl ethanol to the extract instead of DPPH solution, and its value was subtracted from the sample value. Ascorbic acid was used as a standard, and the content was calculated in terms of ascorbic acid equivalents (AAE).
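The sketch below shows one way to turn blank-corrected DPPH readings into ascorbic acid equivalents. The standard-curve values and the 3 ml total extract volume (three 1 ml extractions of 0.1 g tissue) are assumptions for illustration.

```python
import numpy as np

# Hypothetical ascorbic acid standard curve: decrease in A520 vs. nmol AA in the assay
aa_nmol = np.array([0.0, 10.0, 20.0, 40.0, 80.0])
delta_a520 = np.array([0.00, 0.07, 0.14, 0.28, 0.55])
slope, intercept = np.polyfit(aa_nmol, delta_a520, 1)

def tac_nmol_aae_per_g(a_dpph_control: float, a_sample: float, a_sample_blank: float,
                       extract_ul: float, fw_g: float = 0.1,
                       extract_ml_total: float = 3.0) -> float:
    """Total antioxidant capacity as nmol ascorbic acid equivalents per g FW."""
    drop = a_dpph_control - (a_sample - a_sample_blank)  # blank-corrected A520 decrease
    nmol_in_assay = (drop - intercept) / slope           # via the standard curve
    scale = (extract_ml_total * 1000.0 / extract_ul) / fw_g
    return nmol_in_assay * scale

# Leaves: 100 µl extract per assay (hypothetical absorbance readings)
print(f"{tac_nmol_aae_per_g(0.95, 0.70, 0.05, extract_ul=100.0):.0f} nmol AAE g-1 FW")
```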
Determination of Ascorbate and Glutathione Content
0.1 g frozen tissue was extracted with 1.8 ml 6% TCA. The colorimetric assay was performed according to a published protocol, and absorbance was read at 525 nm. Asc tot contents and Asc red :Asc ox ratios were calculated using ascorbic acid as a standard.
For determination of the glutathione content, a modified protocol of the enzymatic recycling procedure according to Noctor et al. (2016) was performed. Reduced glutathione (GSH) and oxidized glutathione (GSSG) were used as a reference. 2-vinylpyridine was used as a masking reagent for GSH. The assay was performed in a 96-well microtiter plate and absorbance read at 405 nm. GS tot and GSH:GSSG ratios were calculated.
APX Activity
The ascorbate peroxidase (APX) activity was determined according to Noctor et al. (2016) with some modifications. 1 ml of extraction buffer (0.1 M sodium phosphate buffer, 0.1 mM EDTA (pH 7.0), 5% (w/v) polyvinylpolypyrrolidone (PVPP), 1 mM Asc) was added to 0.25 g frozen powder, and the homogenate was centrifuged at 25,000×g and 4 °C for 15 min. The assay was performed in 96-well microtiter plates, and absorbance was read at 290 nm. A baseline was recorded with 175 µl test buffer (0.1 M potassium phosphate buffer (KPP, pH 7.0) and 0.1 mM EDTA), 25 µl of 5 mM ascorbic acid, and 25 µl of diluted extract. The APX reaction was started by the addition of 25 µl of 1 mM H 2 O 2 . Specific APX activity was calculated using the extinction coefficient 2800 l mol −1 cm −1 . The protein content was determined according to Bradford (1976).
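Converting the initial rate of A290 decline into specific activity is simple Beer-Lambert arithmetic, sketched below. The 0.7 cm path length for 250 µl in a 96-well plate is an assumed value and must be calibrated for the actual plate geometry; note that 2800 l mol −1 cm −1 equals 2.8 mM −1 cm −1 .

```python
def apx_specific_activity(dA290_per_min: float, protein_mg_per_ml: float,
                          extract_ul: float = 25.0, well_ul: float = 250.0,
                          extract_dilution: float = 1.0,
                          eps_mM_cm: float = 2.8, path_cm: float = 0.7) -> float:
    """Specific APX activity in µmol ascorbate oxidized min-1 mg-1 protein.

    extract_dilution corrects for any pre-dilution of the extract before
    it is loaded into the well.
    """
    rate_mM_per_min = dA290_per_min / (eps_mM_cm * path_cm)  # mM min-1 in the well
    umol_per_min = rate_mM_per_min * (well_ul / 1000.0)      # whole-well rate
    protein_mg = protein_mg_per_ml * (extract_ul / 1000.0) / extract_dilution
    return umol_per_min / protein_mg

print(f"{apx_specific_activity(0.12, protein_mg_per_ml=2.0):.2f} µmol min-1 mg-1")
```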
Statistical Analysis
All data are given as mean values ± standard deviation (SD). The data were statistically analyzed by Student's t-test in Excel. Significant differences are denoted according to p < 0.05 (⁎), p < 0.01 (⁎⁎), and p < 0.001 (⁎⁎⁎).
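A minimal sketch of the same analysis outside Excel, assuming SciPy, with the asterisk convention used throughout the figures:

```python
from scipy import stats

def significance_stars(group_a, group_b):
    """Two-sample Student's t-test with the asterisk convention used here."""
    t, p = stats.ttest_ind(group_a, group_b)  # equal variances assumed
    for threshold, stars in [(0.001, "***"), (0.01, "**"), (0.05, "*")]:
        if p < threshold:
            return p, stars
    return p, "n.s."

dw = [1.02, 0.95, 1.10, 0.99]    # hypothetical replicate values, n = 4 per group
ptw = [1.35, 1.28, 1.41, 1.30]
print(significance_stars(dw, ptw))
```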
Plasma Treatment of Water Caused Increased Occurrence of NO x Species and Hydrogen Peroxide
Plasma treatment of DW for 20 min resulted in the accumulation of hydrogen peroxide, nitrite, and nitrate ions in micromolar concentrations (Table 1). The observed decrease in pH of PTW (in several publications denoted 'plasma-activated water,' PAW) in comparison to DW is in line with published data (e.g., Adhikari et al. 2019; Hu et al. 2021; Kang et al. 2019). In addition, significant amounts of NO released from stirred PTW into the gas phase were detected. The NO detection method applied in this study is based on the reaction of NO with ozone, resulting in excited nitrogen dioxide species that emit detectable photons when dropping back to the ground state (Stöhr and Stremlau 2006). The occurrence of NO in plasma-treated liquids has been noticed for other plasma treatment systems as well, by measuring NO within PTW via amperometric microsensors (Kang et al. 2019) or by EPR spectroscopy (Tian et al. 2017).
Proline Content Increased in Leaves and Roots Directly After Drought Stress
Proline content was measured to estimate the stress status of the plants. Directly after drought stress (36 DAS), proline content in leaves was 150 times higher in DW drought than in DW no stress (1048 vs. 7 µmol proline g −1 FW) and 136 times higher in PTW drought than in PTW no stress (1023 vs. 7 µmol proline g −1 FW) (Fig. 1a). In roots, proline content was 5.4 times higher in DW drought and 7.6 times higher in PTW drought compared to the respective no-stress group (Fig. 1b). After two weeks of recovery following drought application, only minor differences in proline content between no-stress and drought-stress conditions could be observed in leaves and roots, independent of the PTW treatment (Fig. 1c, d).
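The fold changes can be reproduced directly from the reported means; note that the text's ratios were evidently computed on unrounded baselines, so recomputing from the rounded values gives slightly different numbers (e.g., 1023/7 ≈ 146 rather than 136).

```python
proline_umol_per_g = {  # leaves, 36 DAS, values as reported (rounded)
    ("DW", "no stress"): 7.0, ("DW", "drought"): 1048.0,
    ("PTW", "no stress"): 7.0, ("PTW", "drought"): 1023.0,
}
for pre in ("DW", "PTW"):
    fold = proline_umol_per_g[(pre, "drought")] / proline_umol_per_g[(pre, "no stress")]
    print(f"{pre}: ~{fold:.0f}-fold proline increase under drought")
```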
PTW Treatment Resulted in Enhanced Chlorophyll Content and Quantum Yield
Assessing the content of photosynthetic pigments is a suitable indicator of photosynthetic activity (Ghosh et al. 2004) as well as of photooxidative stress (Pinto-Marijuan and Munné-Bosch 2014) and is considered an overall requirement for the effective cultivation of plants (Sonobe et al. 2020). Under no stress conditions, significantly higher values of Chl (a + b) content were observable upon PTW treatment relative to DW-treated plants (18%, Fig. 2a). Under drought stress conditions, higher contents of Chl (a + b) in comparison to DW treatment were detected, despite not being significant (10%, Fig. 2a). MultispeQ measurements revealed the same pattern regarding the quantum yield of Photosystem (PS) II: significantly higher values were obtained upon PTW treatment relative to DW treatment under no stress (11%), whereas under drought conditions, the increase was not significant (7%; Fig. 2b). No morphological differences could be observed.
Total Antioxidant Capacity Increased Significantly in Leaves and Roots upon PTW Treatment Under Drought Stress Conditions
Plant enzymatic and non-enzymatic antioxidative mechanisms jointly contribute to the TAC. It serves as a biochemical marker for the plant response to environmental changes since it assesses its redox status (Ghiselli et al. 2000). It reflects the scavenging capacity of reducing agents such as antioxidants towards DPPH (Pyrzynska and Pȩkal 2013). In both organs, treatment with PTW affected TAC in a similar pattern under no stress and drought stress conditions. Under no stress conditions, treatment resulted in significantly lower values for TAC (61% in leaves, Fig. 3a; 51% in roots, Fig. 3b). Under drought stress conditions, values for TAC were significantly higher upon treatment with PTW (56% in leaves, Fig. 3a; 85% in roots, Fig. 3b).
TBARS Content Increased Significantly in Roots upon PTW Treatment Under Drought Stress Conditions
Membrane damage can be a result of oxidation by RONS. Lipid peroxidation, estimated as the content of TBARS, serves as an indicator of oxidative stress. However, after recovery, TBARS content may also indicate acclimation processes facilitating stress tolerance. In spite of not being significant, a tendency toward higher TBARS content was observed upon PTW treatment under both no stress (17%) and drought conditions (14%) in leaves (Fig. 4a). In roots, no differences in TBARS content were obtained under no stress conditions after PTW application, whereas under drought conditions, significantly higher values were observed in comparison to DW treatment (37%; Fig. 4b).
PTW Treatment Influenced Components of the Ascorbate-Glutathione-Cycle Under no Stress and Drought Stress Conditions
Asc is one of the most important antioxidant metabolites in plants and an essential component of the ascorbate-glutathione cycle. By controlling the cellular redox state, it contributes to the development of stress tolerance (Latowski et al. 2010). Changes in the Asc red :Asc ox ratio can act as indicators of abiotic stresses as it directly responds to altered turnover rates of antioxidant enzymes (Tausz 2004). The Asc tot content was fivefold higher in leaves compared to the roots (Fig. 5a, b). Treatment with PTW resulted in significantly higher Asc tot contents in leaves under no stress conditions compared to DW-treated plants (32%), while no differences were noticeable in roots. Under drought stress conditions, lower Asc tot contents were observed, non-significantly in leaves (23%) and significantly in roots (49%), relative to DW treatment. Based on the difference between Asc tot content and Asc red content, the Asc ox content and further the Asc red :Asc ox ratio can be calculated (Fig. 5c, d). The Asc ox content was generally higher in roots than in leaves, although a significant shift towards the Asc red content was notable in roots under drought stress conditions after PTW treatment and the subsequent recovery period. Moreover, only minor non-significant changes regarding the Asc red :Asc ox ratio were detected. Glutathione is the other important metabolite of the ascorbate-glutathione cycle and crucial for the preservation of the cellular redox homeostasis (Latowski et al. 2010; Noctor et al. 1998). The concentration of total glutathione correlates with the adaption to environmental stresses, and alterations in the GSH:GSSG ratio may indicate a response to changes in environmental conditions (May et al. 1998). Besides the Asc red :Asc ox ratio, changes in the GSH:GSSG ratio represent a direct consequence of altered turnover rates of enzymes of the ascorbate-glutathione cycle (Tausz 2004). The overall GS tot contents were approximately 10 times higher in leaves compared to the roots (Fig. 6a, b). The difference between GS tot content and GSSG content allows the calculation of the GSH content as well as the GSH:GSSG ratio (Fig. 6c, d). Regarding the contents of GS tot and the GSH:GSSG ratio, no long-term effects of treatment with PTW were obtained in leaves, neither under no stress nor under drought stress conditions (Fig. 6a, c). Values of GS tot content were significantly higher in roots upon PTW treatment relative to DW treatment under drought stress conditions (Fig. 6b). With regard to the GSH:GSSG ratio in roots, no differences between DW-treated and PTW-treated plants were obtained (Fig. 6c, d).
In leaves, non-significant alterations in APX activity occurred: PTW treatment led to higher values under no stress conditions (33%) and lower values under drought stress conditions (18%) compared to DW treatment (Fig. 7a). In roots, no alterations were noticeable under no stress conditions, whereas under drought stress conditions, APX activity was significantly higher upon PTW treatment (45%; Fig. 7b).
Discussion
Numerous physical and chemical reactions between CAP and water lead to the generation of a variety of RONS with different reactivity. Mainly, nitrogen oxides (NO x ) are converted to nitrite and nitrate ions in water, while hydroxyl radicals are converted to hydrogen peroxide (Graves et al. 2019). The typical constituents NO 2 − , NO 3 − , and H 2 O 2 were also found in the PTW used in this study. Interestingly, PTW contained 331 µM NO that was liberated to the gas phase (Table 1). Only few studies have documented the occurrence of NO in PTW (e.g., Kang et al. 2019; Tian et al. 2017). It is known that the radical NO can possess half-lifetimes from microseconds to hours depending on concentration and chemical environment (Procházková et al. 2015). Kang et al. (2019) could still detect 15-30 µM NO within PTW 16 h after generation. It is proposed that PTW containing H 2 O 2 , NO x , and specifically NO might play a prominent role in plant responses to PTW (Kang et al. 2019; Adhikari et al. 2019). It has been reported that PTW irrigation even resulted in enhanced endogenous levels of H 2 O 2 and NO x in tomato seedlings (Adhikari et al. 2019).
Fig. 5 Total ascorbate content (Asc tot ) in leaves a and roots b and distribution c, d of reduced ascorbate (Asc red ; light gray bars) and oxidized ascorbate (Asc ox ; white bars) after treatment with deionized water (DW) or plasma-treated water (PTW) under no stress and drought stress conditions. Mean values (± SD) were calculated from four replicates of each treatment. Bars with an asterisk indicate significant differences (⁎: p < 0.05; ⁎⁎: p < 0.01; ⁎⁎⁎: p < 0.001). Statistical analysis was performed using Student's t-test.
H 2 O 2 and NO are important signaling molecules in plants and are responsible for many short-term and long-term reactions to environmental stresses and developmental factors during the plant's life cycle (Farnese et al. 2016; Neill et al. 2002; Sanz et al. 2015).
In response to environmental stresses, proline accumulates in many plant species including barley (Hanson et al. 1979). Hence, it has been described as a reasonable indicator of plant reactions to water deficit (Dar et al. 2016). The higher proline content in drought-stressed plants compared to non-stressed plants implied water deficit and corroborates the efficacy of the drought stress application (Fig. 1a, b). After two weeks of recovery following drought application, the proline content in drought-stressed plants did not differ from that of non-stressed plants, indicating that the drought-stressed plants had recovered (Fig. 1c, d), independently of the PTW treatment.
In this study, PTW treatment sustainably resulted in higher chlorophyll content regardless of the application of drought stress, whereby significantly higher values could be observed for non-drought-stressed plants (Fig. 2a). The effect of PTW on photosynthetic pigments was reported by other studies as well (Adhikari et al. 2019; Gierczik et al. 2020; Ndiffo Yemeli et al. 2021) and might be a direct effect of H 2 O 2 and NO. This conclusion is supported by studies on the treatment of marigold with NO and H 2 O 2 (Liao et al. 2012) or on the treatment of Ficus deltoidea (Nurnaeimah et al. 2020) and maize (Gondim et al. 2013) with H 2 O 2 . Additionally, foliar H 2 O 2 treatment prior to osmotic stress enhanced the chlorophyll content of pistachio (Bagheri et al. 2021), soybean (Guler and Pehlivan 2016), and quinoa (Iqbal et al. 2018), presumably by counteracting the degradation of chlorophyll by ROS generated in the chloroplast (Farooq et al. 2017). The quantum yield of PSII followed the pattern of the chlorophyll content: treatment with PTW sustainably resulted in a higher quantum yield of PSII after no stress and drought stress, with significantly higher values for PTW without drought. Škarpa et al. (2020) stated that the quantum yield of electron transport of photosystem II was not significantly influenced by foliar PTW application in maize. On the other hand, intensive PTW application on the maize plants led to damage of the photosystem apparatus (Škarpa et al. 2020). Notably, the concentrations of H 2 O 2 and NO x in the PTW of that study were much lower compared to our treatment. Ndiffo Yemeli et al. (2021) found that watering with PTW increased the concentration of photosynthetic pigments and simultaneously had no or a negative impact on net photosynthesis of barley and maize, respectively. Their PTW contained comparably high concentrations of H 2 O 2 (5 times higher), and they watered only with PTW for 4 weeks. Liao et al. (2012) treated marigold explants with NO and H 2 O 2 . Their data suggest that the application of exogenous NO or H 2 O 2 could effectively mitigate the damage of drought stress to leaves by protecting the ultrastructure of mesophyll cells. This was accompanied by a rapid photosynthetic electron transfer rate and higher PSII electron transfer activity under drought conditions. That is in agreement with the data presented here, where drought stress resulted in lower values for the quantum yield of PSII in DW-treated as well as PTW-treated plants relative to no stress conditions (Fig. 2b). Plants that encountered drought stress after PTW treatment were able to keep their photosynthetic performance at a higher level than DW-treated plants that did not experience drought.
Fig. 6 Total glutathione content (GS tot ) in leaves a and roots b and distribution c, d of reduced glutathione (GSH; light gray bars) and oxidized glutathione (GSSG; white bars) after treatment with deionized water (DW) or plasma-treated water (PTW) under no stress and drought stress conditions. Mean values (± SD) were calculated from four replicates of each treatment. Bars with an asterisk indicate significant differences (⁎: p < 0.05; ⁎⁎: p < 0.01; ⁎⁎⁎: p < 0.001). Statistical analysis was performed using Student's t-test.
Asc and GSH have discrete and specific functions in photosynthesis and associated redox signaling (Foyer and Noctor 2009). Following Foyer and Shigeoka (2011), Asc has several important roles in photosynthesis: it is a cofactor for violaxanthin de-epoxidase, an enzyme required for the formation of non-photochemical quenching, and it participates in the abscisic acid-mediated regulation of stomatal closure. Finally, it can strongly influence the expression of both nuclear and chloroplast genes encoding photosynthetic components (Kiddle et al. 2003). Enhancing the activities of antioxidant enzymes and/or the accumulation of low molecular weight antioxidants by genetic manipulation may increase tolerance to a variety of stresses through more efficient removal of ROS. By improving ROS removal in plant tissue, the photosynthetic processes are desensitized to environmental change (Foyer and Shigeoka 2011).
In this study, the exposure of barley plants to PTW took effect on the plants' antioxidative system. Most of the significant changes occurred in the root after foliar treatment of PTW, drought stress, and subsequent recovery period, revealing that PTW induces systemic signaling.
Values for TAC were significantly higher in leaves and roots upon PTW treatment under drought stress conditions compared to treatment with DW. Elevated values for TAC denote the accumulation of DPPH-reducing agents, which might include antioxidative compounds. In general, higher levels of constitutive or induced antioxidants facilitate tolerance against different environmental stresses including drought stress (Reddy et al. 2004; Miranda et al. 2014). Elevated TAC and DPPH-scavenging activity imply abiotic stress tolerance in rice (Chutipaijit 2016), rice-seedling radicles (Kang and Saltveit 2001) and cucumber-seedling radicles (Kang and Saltveit 2002). With respect to drought, Štajner et al. (2013) and Weidner et al. (2009) used the DPPH assay as a potential parameter for drought stress tolerance. With regard to our study, PTW-treated plants possibly adapted to drought stress more efficiently than DW-treated plants by acquiring the ability to scavenge more RONS, as reflected by elevated TAC in leaves and roots. The combined treatment of plants with PTW followed by drought may have resulted in improved detoxification of prooxidants and might have facilitated the induction of drought stress tolerance. It might be possible that the elevated GS tot content in roots under drought stress conditions contributed to elevated values for TAC, since the highly reductive thiol group of GSH reacts with DPPH (Viirlaid et al. 2009). The range of lipid peroxidation as determined by the MDA content was significantly higher in roots under drought stress conditions, whereas in leaves, it did not differ significantly between no stress and drought stress conditions upon PTW treatment relative to DW treatment (Fig. 4). Although an elevated MDA content is deemed to be an indicator of oxidative stress, it might also correspond to acclimation processes rather than to damage. Depending on intracellular levels, MDA is described as either toxic or gene activating (Missihoun and Kotchoni 2017) and facilitates the expression of abiotic stress genes (Weber et al. 2004). It was suggested that MDA present in low concentrations can implement cell protection under oxidative stress by activating regulatory genes involved in plant defense and development and in cellular redox homeostasis. Fine-tuning of MDA as a gene activator requires the activity of aldehyde dehydrogenases (ALDHs) (Missihoun and Kotchoni 2017). ALDHs were shown to control MDA levels by catalyzing the oxidation of aldehydes to the corresponding carboxylic acids utilizing NADP + as the oxidizing agent. For its part, ALDH activity can be induced by H 2 O 2 (Zhao et al. 2018). In summary, MDA levels are highly balanced in plants, and MDA may act as a signal molecule rather than a damaging agent. Cui et al. (2010) suggested that lipid peroxidation and increasing H 2 O 2 levels might be involved in the activation of secondary metabolite accumulation. In our study, such an accumulation might account for the elevated TAC in roots under drought stress conditions.
Another indicator of the upregulation of the antioxidative system caused by PTW in roots under drought stress conditions might be the higher share of the Asc red pool within the Asc tot content (Fig. 5d). It is hypothesized that the maintenance of a high Asc red :Asc ox ratio might be a key element of the protection against abiotic stress-induced ROS (Fotopoulos et al. 2010). Conversely, PTW-treated roots under drought stress conditions exhibited higher Asc red :Asc ox ratios while simultaneously featuring decreased contents of Asc tot compared to DW treatment. The GSH:GSSG ratio remained unaltered, while the GS tot content displayed significantly higher values compared to DW treatment (Fig. 6b). Higher total concentrations of glutathione may indicate acclimation processes (Tausz 2004; Cheng et al. 2015). Even though the GSH pool remained unaltered, the higher GS tot content might imply an enhancement of antioxidative traits.
APX activity was significantly higher upon PTW treatment in drought-stressed roots (Fig. 7b). An increase in the synthesis or activity of antioxidant enzymes may facilitate drought stress tolerance (Kusvuran et al. 2016; Sallam et al. 2019). Therefore, the PTW-induced elevated APX activity in roots under drought stress conditions might indicate enhanced drought stress tolerance. The treatment of rice (Farooq et al. 2009) and turfgrass (Boogar et al. 2014) with NO donors resulted in drought stress alleviation by upregulating APX activity. This supports the idea that NO might be one of the key components responsible for the reported effects of PTW.
With respect to the overall enhancement of antioxidative traits effected by PTW, the results of this work indicate that the treatment with PTW itself does not show significant evidence of tolerance development before plants meet the stressor (Fig. 8a, c). The application of drought stress was necessary to obtain visible signs of the upregulation of the antioxidative system and systemic signaling caused by components of PTW (Fig. 8b, d).
In conclusion, this study indicates that components of PTW affect the antioxidative system of barley on a long-term scale. These alterations imply that treatment with PTW might lead to enhanced drought stress tolerance and render PTW a putative priming agent.
Author Contributions FB, CaSch: devising the experiment and statistical analysis; FB, CaSch, AK, HB: data collection; FB: writing; CaSch, AK, HB, ChSt: revising and reviewing the manuscript; ChSt: funding acquisition, project design and supervision.
Funding Open Access funding enabled and organized by Projekt DEAL. Research was partly funded by the Federal Ministry of Education and Research of Germany within the framework of the project 'Physics for Food' (FKZ 03WIR2806B and FKZ 03WIR2806C).
Data Availability
The data supporting the findings of this study are available from the corresponding author, Christine Stöhr, upon request.
Declarations
Competing interest The authors declare no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/. | 2022-09-28T15:18:07.051Z | 2022-09-26T00:00:00.000 | {
"year": 2022,
"sha1": "8f12ebeeed949b1b58b256a37092d23445cb76bd",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00344-022-10789-w.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "7c497863f9f5a82617449e4cb7e3f9319030f0f8",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
225951081 | pes2o/s2orc | v3-fos-license | Initial combination therapy with macitentan and tadalafil in pulmonary arterial hypertension: a retrospective cohort study
Purpose: Initial combination therapy with ambrisentan and tadalafil has been demonstrated superior to either agent alone in pulmonary arterial hypertension (PAH). More recently, the OPTIMA trial showed efficacy of another combination of endothelin receptor antagonist and phosphodiesterase 5-inhibitor, macitentan and tadalafil, as initial therapy for PAH. The objective of this study was to assess the effectiveness, tolerability, and safety of macitentan and tadalafil in a real-world clinical setting. Methods: This single centre, retrospective cohort study identified adult patients newly diagnosed with PAH between January 2014 and December 2017 who were started on macitentan and tadalafil. Patients were retrospectively followed for one year. Effectiveness was evaluated via change from baseline in disease risk profile based on a validated score incorporating World Health Organization functional class, 6-minute walk distance (6MWD), B-type natriuretic peptide (BNP), and hemodynamics on follow-up right heart catheterization. Secondary endpoints included change in 6MWD, BNP, and hemodynamic variables. Drug tolerability and adverse events were assessed. Results: The cohort included 46 patients, 8 of whom (17%) did not tolerate and discontinued either macitentan or tadalafil. Median time to follow-up was 161 days (IQR 72). 42% of patients with an initially moderate or high risk disease profile improved to low risk. Secondary endpoints showed a reduction in the geometric mean of pulmonary vascular resistance of 45% (95% CI 29, 57%) and improvement in 6MWD of 88m (95% CI 27, 148m). Conclusion: In a real-world setting, macitentan and tadalafil as initial combination therapy for PAH was well tolerated and yielded clinical benefit.
Background
Pulmonary arterial hypertension (PAH) is a disease characterized by vascular remodelling leading to progressive elevation of pulmonary vascular resistance, right heart failure and death [1]. While there is no cure for the proliferative pulmonary arteriopathy, three mechanistically different pathways are known that may be targeted pharmacologically to delay disease progression [2]. The AMBITION trial demonstrated that, in treatment-naïve patients, upfront combination therapy targeting two such pathways via ambrisentan (an endothelin receptor antagonist, ERA) and tadalafil (a phosphodiesterase type 5 inhibitor, PDE-5i) was superior to either drug as monotherapy, with most of this benefit derived from a lower rate of hospitalization for worsening PAH [3,4].
Based in part on these findings, initial combination therapy with ERA/PDE-5i has become a recommended treatment strategy in patients newly diagnosed with PAH, although the interchangeability of different ERA/PDE5-i agents is not certain [5,6]. Macitentan is a relatively newer ERA with higher biochemical antagonistic potency, longer half-life and a suggested improved adverse effect profile compared to ambrisentan [7]. Most recently, the prospective OPTIMA trial showed macitentan and tadalafil were well tolerated and resulted in a 47% reduction in the geometric mean of pulmonary vascular resistance after 16 weeks of therapy compared to baseline [8].
While the OPTIMA trial demonstrated the relative efficacy of macitentan and tadalafil, we sought to add to this experience by assessing the effectiveness of macitentan and tadalafil in a real-world clinical setting over longer follow-up and with particular focus on improvement in validated, guideline-based disease risk category [5].
Methods
We conducted a single-centre, retrospective cohort study. Consecutive patients referred to the Pulmonary Hypertension Program at the Toronto General Hospital between January 2014 and December 2017 were eligible for inclusion. A manual chart review identified patients started on upfront combination therapy with macitentan and tadalafil. Patient records were reviewed for treatment effect, safety, and tolerability of macitentan and tadalafil over one year of follow-up. The study was approved by the institutional Ethics Board (UHN17-5845).
Inclusion and Exclusion Criteria
Individuals aged 18 years and older were included if they were newly diagnosed with PAH in the previous 6 months and started on macitentan and tadalafil within 6 weeks of one another. Full doses consisted of macitentan 10 mg and tadalafil 40 mg daily. PAH etiologies included idiopathic, heritable, and PAH associated with anorexigen use, connective tissue disease (CTD), congenital heart disease, or human immunodeficiency virus infection. A diagnosis of PAH was established by right heart catheterization (RHC) demonstrating mean pulmonary arterial pressure (mPAP) of ≥25 mm Hg, pulmonary capillary wedge pressure (PCWP) ≤15 mm Hg, and pulmonary vascular resistance (PVR) of ≥3 Wood Units (WU).
Exclusion criteria consisted of PAH secondary to portopulmonary hypertension, concurrent therapy with a prostacyclin, or any prior PAH-specific therapy.
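The hemodynamic case definition above lends itself to a compact check. The following Python sketch is our own illustration, not study code; the function name and interface are invented, and only the thresholds come from the text.

```python
# Minimal sketch (not study code): the RHC-based PAH definition used for
# inclusion, with thresholds taken from the text above.
def meets_pah_criteria(mpap_mmhg: float, pcwp_mmhg: float, pvr_wu: float) -> bool:
    """True if mPAP >= 25 mm Hg, PCWP <= 15 mm Hg, and PVR >= 3 Wood Units."""
    return mpap_mmhg >= 25 and pcwp_mmhg <= 15 and pvr_wu >= 3

# Example: a typical precapillary profile satisfies all three criteria.
assert meets_pah_criteria(mpap_mmhg=48, pcwp_mmhg=10, pvr_wu=8.2)
```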
Effectiveness
Treatment effects of macitentan and tadalafil were assessed in all individuals who received at least one dose of each drug, similar to an intention-to-treat analysis. Effectiveness was evaluated over one year of follow-up based on the 2015 European Society of Cardiology (ESC)/European Respiratory Society (ERS) risk assessment table for prognostication in PAH [5]. We examined change in World Health Organization functional class (WHO FC), 6MWD, BNP, and hemodynamic variables. For three patients who expired during the one-year study, highest risk categories were imputed on follow-up. Overall risk profiles at baseline and follow-up were calculated by taking the mean value among all prognostic variables rounded to the nearest integer. The primary endpoint was the percentage of patients improving to the low overall risk category following treatment with macitentan and tadalafil. Secondary endpoints were changes in WHO FC, 6MWD, BNP, and hemodynamic variables on follow-up RHC. For the three patients who expired, the worst value recorded in the study was imputed for each variable on follow-up. For all other individuals, each patient's baseline data were imputed on follow-up wherever the latter was missing (Table S1).
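To make the scoring concrete, here is a hypothetical Python sketch of the calculation just described. The variable names, data structures, and helpers are our assumptions; only the grading scheme (1 = low, 2 = intermediate, 3 = high per the 2015 ESC/ERS table), the mean-and-round rule, and the imputation rules come from the text.

```python
from statistics import mean

RISK_LABELS = {1: "low", 2: "intermediate", 3: "high"}

def follow_up_grades(baseline, follow_up, died):
    # Impute as described: worst grade (3) for patients who died;
    # baseline grade carried forward where follow-up data are missing.
    if died:
        return [3] * len(baseline)
    return [follow_up.get(var, grade) for var, grade in baseline.items()]

def overall_risk(grades):
    # Mean of per-variable grades, rounded to the nearest integer.
    # Note: Python's round() uses banker's rounding at .5 ties; the paper
    # does not specify how ties were broken.
    return RISK_LABELS[round(mean(grades))]

baseline = {"WHO_FC": 3, "6MWD": 2, "BNP": 2, "hemodynamics": 2}
follow_up = {"WHO_FC": 2, "6MWD": 1, "BNP": 1}  # hemodynamics missing
print(overall_risk(follow_up_grades(baseline, follow_up, died=False)))
# grades [2, 1, 1, 2] -> mean 1.5 -> rounds to 2 -> "intermediate"
```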
Tolerability and Safety
Safety endpoints assessed were the occurrence of hypotension (systolic blood pressure <85 mm Hg or diastolic blood pressure <50 mm Hg), edema (new or worsening peripheral edema), headache, anemia (hemoglobin decline ≥15 g/L from baseline to an absolute value <100 g/L) or liver enzyme elevation (transaminases greater than 3 times the upper limit of normal) within one year of follow-up. Tolerability endpoints included agent discontinuation as well as reason for and time to discontinuation.
Statistics
Simple descriptive statistics were used to express patient characteristics and the primary endpoint. Change in categorical variables before and after treatment was assessed using a chi-squared or Fisher's exact test, as appropriate. Secondary endpoints consisting of change in continuous variables were estimated by simple linear regression; BNP, PVR, and CI data were lognormally distributed and the ratio of geometric means at baseline and follow-up was used to assess change.
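To illustrate the geometric-mean summary, the sketch below (ours, with invented values) computes the ratio of geometric means from paired log-differences with a t-based interval; this is a closely related paired approach, not necessarily the authors' exact regression, although the point estimate, exp(mean log-difference), is the same.

```python
import numpy as np
from scipy import stats

def geometric_mean_ratio(baseline, follow_up, alpha=0.05):
    # Ratio of geometric means (follow-up / baseline): exp of the mean
    # paired log-difference, with a t-based confidence interval.
    d = np.log(np.asarray(follow_up)) - np.log(np.asarray(baseline))
    m, se = d.mean(), stats.sem(d)
    t = stats.t.ppf(1 - alpha / 2, df=d.size - 1)
    return np.exp(m), np.exp(m - t * se), np.exp(m + t * se)

# Invented paired PVR values (Wood Units) at baseline and follow-up:
ratio, lo, hi = geometric_mean_ratio([10.2, 8.5, 12.1, 9.4], [5.1, 5.0, 6.8, 5.6])
print(f"GM ratio {ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")
# A ratio of 0.55 would correspond to the reported 45% reduction in PVR.
```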
Results
Between January 2014 and December 2017, 46 patients newly diagnosed with PAH were started on combination therapy with macitentan and tadalafil, receiving at least one dose of each medication. Demographic data and disease etiology are presented in Table 1. The median age was 56 years and 85% of patients were female. The etiology of PAH was relatively evenly divided between idiopathic PAH (50%) and PAH associated with connective tissue disease (43%), most commonly scleroderma (26%).
Effectiveness
Cohort composition by overall risk category at baseline and follow-up is shown in Figure 1. The median time from start of therapy to follow-up and reassessment via RHC was 161 days (IQR 72). Forty-three of 46 patients (93%) were at high or intermediate risk at baseline, of whom 18 patients (42%) met the primary endpoint and improved to low risk category on follow-up. Three patients died within 6 months of starting treatment: 2 were initially at moderate risk and did not tolerate macitentan, discontinuing the drug within one week of initiation, and 1 was initially high risk and remained so despite dual therapy. All 3 individuals who died were women with scleroderma aged >65 years.
Changes in PAH prognostic variables at baseline and follow-up are described in Table 2; a sensitivity analysis was restricted to the subset of patients who tolerated and did not discontinue macitentan or tadalafil (Table S2). Changes in risk category for individual PAH prognostic variables as per 2015 ESC/ERS guidelines are shown in Table S3. Adverse events and drug discontinuations are summarized in Table 3. In total, 8 of 46 patients (5 macitentan and 3 tadalafil; 17% total) discontinued therapy due to an adverse effect or intolerance. Baseline characteristics of individuals who discontinued either macitentan or tadalafil are shown in Table S4. Of these 8 patients, 2 expired within one year due to progressive disease, having only tolerated tadalafil monotherapy. Two other patients who discontinued macitentan or tadalafil continued on monotherapy; the remaining 4 had the ERA or PDE-5i substituted with a different drug of the same class.
The most common adverse events associated with combination therapy were headache and edema, occurring in 50% and 30% of individuals, respectively. Headache occurred within days of starting therapy, resulting in discontinuation of tadalafil in 2 individuals (4%). A further 4 cases required transient stopping and gradual re-introduction of tadalafil. Edema led to discontinuation of macitentan in 3 cases (7%), two of whom required admission to hospital for intravenous diuresis.
Anemia occurred in 6 cases (13%) between 20 and 171 days after starting macitentan. Three of these cases were complicated by superimposed confounding factors felt to be the primary drivers of the anemia. One case required red blood cell transfusion, but anemia did not lead to discontinuation of macitentan in any of the 6 cases. Transaminitis greater than 3 times the upper limit of normal occurred in one patient 22 days after initiation of macitentan, reaching an ALT of 230 IU and an AST of 456 IU from previously normal baseline values. Macitentan was discontinued and the patient was later started on ambrisentan without recurrence of transaminitis.
An additional 2 patients discontinued PAH therapy due to other adverse effects. One patient elected to stop tadalafil 18 days from initiation due to significant epistaxis, though this was also in the setting of a supratherapeutic INR of 5 and felt unlikely to be related to tadalafil. In another case, macitentan was discontinued within 8 days due to unremitting nausea and decreased appetite, possibly confounded by worsening PAH.
Discussion
Our study describes a Canadian experience with macitentan and tadalafil as initial therapy for newly diagnosed PAH. Among this cohort, a majority of individuals (83%) tolerated macitentan and tadalafil, and 42% of patients with initially moderate or high risk disease improved to low risk based on a validated prognostic score composed of WHO functional class, 6MWD, BNP, and hemodynamic parameters [5,9]. These findings provide real-world effectiveness data supporting the use of this particular ERA/PDE5-i combination, which has only been specifically studied in one previous trial [8].
While the AMBITION trial demonstrated that initial combination therapy with ambrisentan and tadalafil reduced the composite endpoint of death or worsening PAH compared to either drug in monotherapy [3], generalizability to all ERA/PDE-5i combinations is not guaranteed. Indirect data from the SERAPHIN trial suggests benefit of macitentan and PDE-5i. The addition of macitentan to background PDE-5i therapy (62% of trial patients) in this study reduced the primary endpoint of morbidity and mortality by 38%, similar to those not on background therapy [10]. However, of those individuals on background therapy, most were on sildenafil and had been stabilized on this drug for at least 3 months, limiting the information that can be extrapolated regarding the particular combination of macitentan and tadalafil [11]. This led to the French OPTIMA trial designed specifically to examine macitentan and tadalafil in newly diagnosed PAH over 16 weeks [8]. This prospective, open-label, single-arm, multicentre trial enrolled 46 patients and found treatment with macitentan and tadalafil led to a 47% reduction in the geometric mean of PVR and a 36m improvement in 6MWD [8]. Only 2 individuals discontinued drug therapy before 16 weeks due to either a revision in etiology of PAH or an adverse event.
Our findings confirm the results of the OPTIMA trial outside of the rigorous trial setting. Specifically, we observed a near-identical reduction in PVR of 45%, as well as an improvement in 6MWD of 88m. Compared to OPTIMA, we found more individuals discontinued either macitentan or tadalafil due to adverse effects (17% compared to 4%). While the lower number of discontinuations in OPTIMA likely reflects selection bias in the highly motivated patients who took part in the trial, the current study continues to suggest that this combination is well-tolerated in most individuals with relatively minor adverse effects. We observed headache considerably more frequently in the current study compared to OPTIMA (50% vs 24%), while instances of edema, anemia, and transaminitis were similar (30% vs 28%, 13% vs 13%, 2% vs 2%, respectively). All adverse effects were comparable to previous major trials [3,10], and there were no cases of treatment discontinuation due to anemia or transaminitis.
The current study has several strengths, including a real-world setting, a lengthy follow-up time of one year that exceeds most trials in PAH, and application of the validated ESC/ERS prognostic score for assessing response to therapy, with low-risk disease status as a primary outcome. This prognostic score has been independently validated in three studies [12][13][14] and constitutes an endpoint that is increasingly recognized and clinically relevant to long-term outcomes in PAH [9]. Furthermore, follow-up assessment of hemodynamics on RHC was available for a vast majority of individuals in the current study, demonstrating meaningful improvement in physiologic variables shown to be directly related to improved clinical outcomes [15]. Although the study was retrospective in design, the analysis was conducted as intention-to-treat and missing data were handled as conservatively as possible, either carrying forward baseline values or imputing the worst value observed across the entire study for those patients who died. This lends credibility to the significant beneficial effect observed. The limitations of the study include its retrospective nature, small number of participants, lack of a control group, and possible bias in clinician selection of those individuals to start on dual oral therapy. While the study was restricted to a single Canadian centre, the close agreement of the current results with the French OPTIMA trial is reassuring with regards to the generalizability of the findings.
Conclusions
Our retrospective analysis suggests that macitentan and tadalafil are a viable ERA/PDE-5i combination in a real-world setting. Drug tolerability remained high and 42% of individuals who started as intermediate or high risk had improved to a low risk profile, which has been associated with excellent 5-year transplant-free survival. However, approximately 60% remained intermediate or high risk, highlighting the limitations of initial oral combination therapy.
Declarations
Funding: This research did not receive any specific grant from funding agencies in the public, commercial or not-for-profit sectors.
Conflicts of interest: The authors declare that they have no competing interests.
Availability of data and materials: The datasets generated and/or analysed during the current study are not publicly available but are available from the corresponding author on request. | 2021-06-22T17:55:54.662Z | 2021-04-21T00:00:00.000 | {
"year": 2021,
"sha1": "01035bea7e781cea4dca40a2202a629574743e52",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.21203/rs.3.rs-431468/v1",
"oa_status": "GREEN",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1c30426aa6010c6a0474459a739c4f9d5115403d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253043819 | pes2o/s2orc | v3-fos-license | Impact of the SARS‐CoV‐2 pandemic on the health of individuals with intoxication‐type metabolic diseases—Data from the E‐IMD consortium
Abstract The SARS-CoV-2 pandemic challenges healthcare systems worldwide. Within inherited metabolic disorders (IMDs), the vulnerable subgroup of intoxication-type IMDs such as organic acidurias (OA) and urea cycle disorders (UCD) shows risk for infection-induced morbidity and mortality. This study (observation period February 2020 to December 2021) evaluates the impact on medical health care as well as the disease course and outcome of SARS-CoV-2 infections in patients with intoxication-type IMDs managed by participants of the European Registry and Network for intoxication type metabolic diseases Consortium (E-IMD). Survey respondents managing 792 patients (n = 479 pediatric; n = 313 adult) with intoxication-type IMDs (n = 454 OA; n = 338 UCD) in 14 countries reported on 59 SARS-CoV-2 infections (OA: n = 36; UCD: n = 23; 7.4%). Medical services were increasingly requested (95%), mostly alleviated by remote technologies (86%). Problems with medical supply were scarce (5%). Regular follow-up visits were reduced in 41% (range 10%-50%). Most infected individuals (49/59; 83%) showed mild clinical symptoms, while 10 patients (17%; n = 6 OA including four transplanted MMA patients; n = 4 UCD) were hospitalized (metabolic decompensation in 30%). ICU treatment was not reported. Hospitalization rate did not differ for diagnosis or age group (p = 0.778). Survival rate was 100%. Full recovery was reported for 100% in outpatient care and 90% of hospitalized individuals. SARS-CoV-2 impacts health care of individuals with intoxication-type IMDs worldwide. Most infected individuals, however, showed mild symptoms and did not require hospitalization. SARS-CoV-2-induced metabolic decompensations were usually mild without increased risk for ICU treatment. The overall prognosis of infected individuals is very promising and IMD-specific or COVID-19-related complications have not been observed.
| INTRODUCTION
Severe acute respiratory syndrome coronavirus type 2 (SARS-CoV-2) was first isolated in January 2020 and declared a pandemic by the WHO in March 2020. 1 More than 565 million patients were infected worldwide up to July 2022 (https://covid19.who.int; accessed on July 25, 2022).
SARS-CoV-2 is a single-stranded RNA virus. 2 The clinical spectrum of SARS-CoV-2 infections varies from asymptomatic to severe respiratory infections ("COVID-19") and lethal courses. Frequent symptoms include fever, coughing, and respiratory distress, but also fatigue, anosmia, muscle pain, headache, weight loss, vomiting, or diarrhea. 3 Compared to adults, symptoms in children are less severe. 4 The general hospitalization rate for all patients is 7.3%, but differs between adults >60 years (21.9%), newborns and infants (8.8%), and children and adolescents (0.7%-1.3%). Intensive care unit (ICU) treatment was required in 1%-5% of children and 7%-10% of adults. 5,6 However, children and adolescents with chronic medical conditions more frequently require inpatient and ICU treatment. 7,8 COVID-19-associated complications in this age group comprise myocarditis 9 and pediatric inflammatory multisystem syndrome (PIMS). 10 Postacute entities comprise "subacute" (4-12 weeks) and "chronic" COVID-19 (>3 months) with variable multisystem involvement primarily described in adults, while prevalence in children is thought to be low. 11,12 For individuals with inherited metabolic diseases (IMDs), previous studies reported on significant impact of the COVID-19 pandemic on medical health care in Europe 13 and other continents. 14 Disease courses of SARS-CoV-2 infections in patients with IMDs in general, however, were mostly mild. 15,16 Within IMDs, the subgroup of intoxication-type IMDs, such as organic acidurias (OA) and urea cycle disorders (UCD), is characterized by accumulation of small metabolites, such as ammonium, organic acids and corresponding CoA esters, causing endogenous intoxication through impairment of energy metabolism and ureagenesis. 17 Acute metabolic decompensations, occurring in all age groups, are life-threatening and precipitated by catabolic episodes such as infections. 18,19 International patient registries, such as the European Registry and Network for intoxication type metabolic diseases (E-IMD), have been established to investigate long-term disease course and impact of treatment. [18][19][20] Systematic analyses of SARS-CoV-2 infections in individuals with intoxication-type IMDs do not exist and it is unclear whether SARS-CoV-2 infections in these patients are associated with increased risk for metabolic decompensation or long-term complications. This study aims to evaluate the impact of the COVID-19 pandemic on medical health care in pediatric and adult patients with intoxication-type IMD and to characterize the disease course and outcome of SARS-CoV-2 infections using an international survey.
| Survey organization
The online questionnaire (Table S1) was developed using LimeSurvey (https://www.limesurvey.org/de). It consisted of 17 main questions on (1) general information on the participating metabolic center (e.g., number and diagnoses of IMD patients, adaptations in medical care during the pandemic), (2) management and care of patients with IMDs and SARS-CoV-2 infection (e.g., out- or inpatient care, requirement of ICU treatment), and (3) outcome and mortality. The survey covered a 22-month observation period (February 1, 2020-December 1, 2021), was distributed to all 44 E-IMD study centers, followed by two reminders, and was closed for data entry on December 31, 2021. The principal investigator at the coordinating study center at the University Children's Hospital Heidelberg, Germany, was responsible for administrative management and communication with the local investigators, providing assistance to participating clinical centers in study management and record keeping.
| Ethical and legal aspects
E-IMD was first approved by the local ethics committee of Heidelberg Medical Faculty (application no. S-525/2010), followed by approvals of the respective ethics committees of further associated and collaborating partners contributing to the registry.
| Statistical analysis
All data were extracted from the questionnaires and analyzed using Microsoft Excel and the R language 21 for statistical computing. For comparative analyses of OA and UCD pediatric and adult groups, we used the χ2 test and Pearson residuals to interpret differences in observed and expected frequencies.
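For illustration, an analogous analysis can be sketched in Python (the contingency counts below are placeholders, not the study's data): a chi-squared test of independence plus Pearson residuals, (observed − expected)/√expected, whose sign and magnitude show which cells drive a significant result.

```python
import numpy as np
from scipy.stats import chi2_contingency

# rows: SARS-CoV-2 infected / not infected (illustrative counts only)
# columns: pediatric OA, adult OA, pediatric UCD, adult UCD
observed = np.array([[13, 23, 17, 6],
                     [250, 168, 180, 135]])

chi2, p, dof, expected = chi2_contingency(observed)
pearson_residuals = (observed - expected) / np.sqrt(expected)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3g}")
print(np.round(pearson_residuals, 2))  # |residual| > ~2 flags a notable cell
```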
| RESULTS
Fifty percent (n = 22/44) of E-IMD study centers responded (Supporting Information Material S1). The participating E-IMD centers in 14 countries from three continents (Figure 1) manage a total of 792 patients (n = 479 pediatric; n = 313 adult) with intoxication-type IMDs (n = 454 OA; n = 338 UCD; Figure 2). All participating centers managed both pediatric as well as adult patients.
| Impact of the SARS-CoV-2 pandemic on management and medical health care of patients with intoxication-type IMDs
During the observation period (February 1, 2020-December 1, 2021), 41% (n = 8/22) of the participating E-IMD study centers provided less regular outpatient visits for patients with intoxication-type IMDs (mean reduction rate of visits of 40%, range 10%-50% [n = 5]). Almost all centers (95%; 21/22) were increasingly contacted throughout the pandemic by the patients and families with intoxication-type IMD managed at the center asking for COVID-19-specific advice, including general information, additional risks, recommendations for prevention, school attendance/home office, or indication for vaccination. The vast majority of participating centers (86%; 19/22) compensated for the increased consultation requests by phone/video consultations. COVID-19-specific recommendations, that is, in addition to the general recommendations for managing potential metabolic risk situations for intoxication-type IMDs such as febrile infections (e.g., specific letter, webinars, tele care, information), were provided to the patients by 32% (7/22) of the study centers. Noteworthy, only 33% of centers (7/21) reported reimbursement for their additional COVID-19-associated services, while significant problems with medical supply (medication, prescriptions, dietary products) were reported to be rare during the study period (5%; 1/22). This included a shortage of hydroxocobalamin in the United States.
| Incidence, disease course, and management of SARS-CoV-2 infections in intoxication-type IMDs
Fifty-nine patients (OA: n = 36; UCD: n = 23), that is, 7.4% of all intoxication-type IMD patients managed at the study centers, were reported to suffer from a SARS-CoV-2 infection. Infection rates with respect to age and diagnosis group were analyzed and revealed significant differences (χ2 = 36.096, df = 4; p < 0.0001). According to Pearson residuals, noninfected pediatric UCD patients were underrepresented compared to noninfected adult UCD patients and, vice versa, noninfected adult OA patients were underrepresented compared to noninfected pediatric OA patients (Figure S1). The infection rate was higher in adult OA (n = 23) compared to pediatric OA patients (n = 13), but expected frequencies did not differ between pediatric UCD (n = 17) and adult UCD patients (n = 6).
While 49 of 59 (83%) infected individuals were treated at home showing no or only mild clinical symptoms, 17% of the SARS-CoV-2-positive individuals with intoxication-type IMD (n = 10 [7 pediatric, 3 adult]; n = 6 OA [4 pediatric, 2 adult]; n = 4 UCD [3 pediatric, 1 adult]) had to be admitted to a hospital (Figure 3). Hospitalization rate did not differ between age (pediatric/adult) and diagnosis groups (OA/UCD; χ2 = 0.08, df = 1; p = 0.778). Indications for hospitalization and clinical characteristics of hospitalized patients are summarized in Table 1. Median (range) age at admission was 10 (3-17) years for pediatric and 31 (22-32) years for adult patients. Of the 10 patients, 9 were admitted with acute manifestations of SARS-CoV-2 infection. Four (two pediatric, two adult) of the five admitted MMA patients had earlier undergone liver and/or kidney transplantation; three of these four (p3, p8, p9) were admitted due to acute symptoms, while one pediatric MMA patient (p2) was admitted due to deteriorated chronic MMA complications, possibly triggered by immunosuppression. All 10 admitted patients were treated on normal wards for a median (range) length of stay of seven (1-25) days. ICU treatment, invasive ventilation or extracorporeal membrane oxygenation were not required in any patient. One adult OTC patient (p10) with pneumonia and secondary pulmonary embolism, and one pediatric MMA patient (p1) with obstructive bronchitis, received ventilation assistance with oxygen (Table 1). Of the 10 admitted patients, 2 received total parenteral feeding. Thirty percent (p2, p4, p5) of admitted patients showed mild to moderate laboratory signs of metabolic decompensation, effectively managed by metabolic emergency treatment, which was (preventively) administered to all but two (p8, p9) admitted patients. In contrast, only six (12%) of the infected individuals managed at home (three pediatric OA, two adult OA, one adult UCD) received metabolic emergency treatment.
Figure 2: Total patients (adult and pediatric) with intoxication-type IMDs followed by contributing E-IMD study centers. Seven hundred ninety-two patients (N = 478 pediatric; N = 314 adult) with IMDs (N = 454 organic acidurias, OA, yellow-orange; N = 338 urea cycle disorders, UCD, green-blue) are followed up at the participating centers. All centers cared for pediatric as well as adult patients.
| Selected case descriptions of hospitalized patients
P1: A 4-year-old girl with late-onset mut−/mut0 MMA, diagnosed symptomatically at age 11 months after severe metabolic decompensation (pH 6.9; NH3 178 μM). Genetic evaluation revealed an additional heterozygous missense variant in SPINK1, a genetic risk factor for chronic pancreatitis. The patient experienced repetitive episodes of acute pancreatitis and febrile infections frequently resulting in metabolic decompensation requiring hospitalization. She developed chronic kidney disease (CKD) III-IV and is currently under peritoneal dialysis. However, SARS-CoV-2-positive obstructive bronchitis did not result in metabolic decompensation or signs of pancreatitis. After treatment with IV fluids (preventive), metabolic emergency treatment, and noninvasive oxygen administration, she fully recovered and received a combined liver/kidney transplantation 3 months later.
P2: A 9-year-old individual with homozygous mut0 MMA who had undergone liver transplantation 1 year earlier and suffered from CKD. The SARS-CoV-2 infection started with loss of taste and chronic diarrhea. Subsequently, she lost 10 kg (20%) of her body weight. Admission to hospital followed an epileptic seizure, probably caused by calcineurin-induced neurotoxicity with high tacrolimus levels brought on by a decline of kidney function due to dehydration. Tacrolimus was discontinued and switched to everolimus. A mild metabolic decompensation (Table 1) was treated with carglumic acid, IV fluids, IV scavengers, and total parenteral feeding. She did not fully recover and still shows mild neurological deficits.
P5: A 3.5-year-old boy with genetically confirmed early-onset OTC deficiency who had suffered a severe neonatal hyperammonemic crisis at day 3 (NH3 5200 μM) followed by repetitive (>15) metabolic decompensations precipitated by febrile infections since then. The SARS-CoV-2 infection started with mild coughing, repetitive vomiting, lethargy and a reduced general state of health. Laboratory work-up showed moderate hyperammonemia (NH3 251 μM) and an elevated plasma glutamine concentration (1440 μM). Metabolic emergency treatment was started with sodium benzoate, including an additional bolus, followed by IV fluids with glucose and L-arginine, accompanied by a dietary emergency regimen. Glycerol phenylbutyrate was continued via gastrostomy tube. After 3 days, he showed full clinical recovery and normalization of laboratory parameters. P10: A 32-year-old female patient with late-onset female OTC deficiency. The SARS-CoV-2 infection started with shortness of breath and mild hypoxia (oxygen saturation 90%) 2 weeks postpartum. The ammonium level was only mildly elevated (67 μM), but she showed neutropenia and lymphopenia. She was treated with oxygen, the anti-COVID-19 agent remdesivir, dexamethasone, and antibiotics due to pulmonary bacterial superinfection. A thorax CT scan showed marked multifocal consolidation and ground glass with parenchymal involvement of 30% (predominantly the right lower lobe). Brain CT was normal. She received metabolic emergency treatment and IV medication. She was discharged after 6 days, but readmitted 1 day later with fever, secondary deterioration and hypoxia (oxygen saturation 86%). The D-dimer concentration was raised (11 770 μg/L, normal 0-550) and a thorax CT scan confirmed pulmonary embolism with a low-volume thromboembolic clot in the distal arteries leading to the right lower lobe. The 4C Mortality Score and 4C Deterioration Score (ISARIC 4C), used for hospitalized COVID-19 patients, were assessed as 4-8 (intermediate risk, 9.9% inpatient mortality). Treatment consisted of oxygen, tocilizumab, and therapeutic enoxaparin followed by apixaban for 3 months. She recovered completely. Causal embolic factors (e.g., postpartum period and/or SARS-CoV-2 infection) could not be clearly identified.
| Outcome of SARS-CoV-2 infections in intoxication-type IMDs
The survival rate during the observation period for the reported SARS-CoV-2 infections in patients with intoxication-type IMDs was 100%. All patients (100%) managed at home or in the outpatient department and the vast majority of hospitalized individuals (90%) with intoxication-type IMD and SARS-CoV-2 infection showed full recovery. One hospitalized pediatric MMA patient (p2) developed persistent mild neurological deficits following the acute treatment phase. Of note, this patient was admitted due to chronic MMA complications and had earlier undergone liver transplantation. Outcome in the other three admitted MMA patients with immunosuppression due to transplantations was favorable, all showing full recovery. No deterioration or new onset of intoxication-type IMD-specific disease manifestations was reported. No case of COVID-19-related complications such as subacute (4-12 weeks after infection) or postacute (beyond 12 weeks after the infection) manifestations (PIMS, chronic pulmonary symptoms, subacute or chronic long COVID syndrome) was reported for the observed period.
| DISCUSSION
The main findings of this study on SARS-CoV-2 infections in individuals with intoxication-type IMD in a 22-month period from 2020 to 2021 are that (1) the pandemic significantly affects but does not endanger medical care of patients, (2) the increased demand for medical services is effectively alleviated by remote techniques, (3) the disease course is mostly mild and managed without hospitalization, (4) metabolic decompensations in hospitalized patients are rare and, if present, mild to moderate, (5) the hospitalization rate is higher than in the general population but comparable to other reported chronic medical conditions, (6) the mortality rate is not increased, and (7) the overall prognosis of patients is promising without increased incidence of IMD-specific or COVID-19-related complications.
| Challenge of medical health care for individuals with intoxication-type IMDs during the pandemic
Studies on the impact of COVID-19 on the medical health care of patients suffering from IMD are scarce, and systematic analyses for individuals with intoxication-type IMDs in particular do not exist. Previously published data for the beginning of the pandemic analyzed patient satisfaction and management in IMDs. In the beginning of the pandemic (March-May 2020), a significant disruption of care of 50%-100% was reported for Europe 13 and of 60%-80% in 16 centers in Asia, Africa, and Europe. 14 Regular follow-up appointments in these studies were canceled (55%) or missed frequently (75%-100%) during this period, 13 and the median worldwide reduction rate of medical services was reported to be 60%-80%. 14 In comparison, the lower rate of 41% of centers reducing medical services for intoxication-type IMDs in our study may highlight improved compensating strategies of metabolic healthcare providers to reduce the rate of missed follow-up investigations for this vulnerable patient group.
In contrast to other studies reporting on problems with treatment administration in IMD patients 22 or treatment discontinuation in up to 65%, 13 the rate of relevant problems with medical supply for intoxication-type IMDs was comparably low in this study, demonstrating effective compensation mechanisms developed by healthcare providers and pharmacological companies enabling continuous treatment for patients with intoxication-type IMD. Of note, most patients themselves (n = 175) stated in an online survey not to have faced significant problems in receiving special therapies (91%) but suffered from the consequences of quarantine (e.g., crowded apartments, isolation). 23
Telemedicine has experienced a strong upward trend since the beginning of the pandemic and people have become familiar with the technology. 24 Many centers started telemedicine appointments and used videoconferences as a replacement for face-to-face meetings. 13,25 Rates of satisfaction among patients proved to be high, with lack of laboratory tests 26 and medication being reported by patient organizations 13 as the main disadvantages. Rate of telemedicine used for IMD patients (90%) was reported to be higher compared to the general rare disease community in Europe and United States. 13 Our study confirms the high compensation rate of increasingly requested medical services using remote techniques (83% of centers) and highlights that these relevant and new digital tools for reducing physical and geographical barriers have been successfully established in health care for IMD patients by metabolic centers worldwide.
The low overall incidence of 7.4% positive SARS-CoV-2 infections in our study cohort may be hampered by the fact that the observation period did not cover very recent viral variants such as Omicron, which show a much higher infection rate. 27 Of note, another study 16 also found an infection rate of 7% in 272 patients covering an observation period of 12 months (March 2020-2021). Furthermore, as SARS-CoV-2 infections showed asymptomatic to mild disease courses in children, underreporting also has to be considered.
| The disease course of SARS-CoV-2 infections is mostly mild in intoxication-type IMDs
In general, pediatric patients with SARS-CoV-2 infections are more frequently asymptomatic compared to adults and show milder symptoms. 4 One single-center study reported that disease severity of SARS-CoV-2 infections in 272 patients with different IMDs was comparable to the reference population in pediatric and adult patients. 16 Another study reported on 44 IMD patients with mostly mild symptoms. 22 A pan-European survey by the European Reference Network for hereditary metabolic diseases (MetabERN; https://metab.ern-net.eu) among healthcare providers following about 26 000 metabolic patients 15 reported 452 (213 pediatric) cases of COVID-19 in the first year of the pandemic, with the majority of adult and pediatric patients being asymptomatic and 37.5% of individuals showing mild symptoms. Complementing these data, our study confirms that, similarly to the general population, the clinical course of SARS-CoV-2 infections is mostly asymptomatic or mild also for individuals with intoxication-type IMDs who, in more than 80% of cases, could be managed at home and did not require hospitalization.
The overall hospitalization rate for patients with SARS-CoV-2 infections in Germany was reported to be 7.3%, but differs between adults >60 years (21.9%), newborns and infants (8.8%), and children and adolescents (0.7%-1.3%). 28 The hospitalization rate of 17% found in our study is therefore higher than in the general population, and is comparable to another report on SARS-CoV-2-positive patients with diabetes. 15 For IMDs, one survey-based multicenter study found severe symptoms and hospitalization rates in 1%-25% of IMD patients reported by less than 10% of survey respondents, 15 while one single-center study reported 1 hospitalized patient out of 19. 16 Further data on hospitalization rates for patients with IMDs or intoxication-type IMDs are not available.
ICU treatment for SARS-CoV-2 infections is required in 1%-5% of pediatric and 7%-10% of adult patients. 5,6,29 However, children and adolescents with chronic diseases are admitted to hospital and treated in the ICU more frequently. Risk factors for a severe disease course in children comprise age <1 month, male sex, asthma, obesity, diabetes mellitus, immunosuppression or trisomy 21. 7,8 Of note, our study demonstrates that intoxication-type IMDs are not a risk factor for ICU treatment in adult or pediatric patients.
| Inpatient management and risk for metabolic decompensations
Compared to patients with other IMDs, patients with intoxication-type IMDs have an additional risk for life-threatening acute metabolic decompensations during catabolic episodes such as intercurrent infections, which may require metabolic emergency treatment, hospitalization, or even ICU admission, potentially with hemofiltration. These episodes are associated with increased morbidity and mortality. [30][31][32][33] For UCD patients, for instance, a recent study found a mean frequency of 0.6-1.7 hyperammonemic events per year requiring hospitalization, depending on disease severity. 34 It was hypothesized that in case of SARS-CoV-2 infection the risk for metabolic decompensation may be even higher and the disease course may show gradual worsening, or even progressive neurological deterioration. 13,25 However, a systematic evaluation of whether intoxication-type IMDs are at increased risk for life-threatening metabolic decompensation triggered by SARS-CoV-2 had not been performed so far. Although our study cohort is too small to assess the general risk for metabolic decompensation and further data for comparison rates of emergency visits do not exist, our finding of only 30% mild to moderate metabolic decompensations in 10 admitted patients without any further complications, ICU treatment, invasive ventilation, or extracorporeal membrane oxygenation demonstrates that SARS-CoV-2-related decompensations in intoxication-type IMDs do not seem to be aggravated compared to other intercurrent infections or conditions likely to induce catabolism (such as febrile reactions to vaccinations) and are generally well manageable. In line with this, one recent review found no increased risk for metabolic decompensation in OA or UCD following vaccinations. 35 Of note, several patients in our study admitted to hospital without metabolic decompensation received preventive metabolic emergency treatment. However, more studies are necessary to evaluate the risk for metabolic decompensation in patients with intoxication-type IMDs in case of SARS-CoV-2 infection.
Anosmia and gastrointestinal dysfunction such as nausea, vomiting, and diarrhea, which affect both food intake and absorption, frequently accompany COVID-19. About 15% of hospitalized patients with COVID-19 are reported to need parenteral nutritional support, in particular those with severe conditions, a high inflammasome profile, or ICU treatment, 36 but, as our data show (2 of 10 admitted patients), patients with intoxication-type IMDs treated on normal wards may also require it. For recovery, a high-protein diet for about 8 weeks following discharge from hospital is recommended for SARS-CoV-2-infected patients with chronic endocrinological disorders. 37 Certainly, this is not practicable in patients with intoxication-type IMDs, since high protein intake in this disease group can precipitate life-threatening metabolic decompensations.
| Outcome and mortality
Mortality from SARS-CoV-2 infection has been reported to be 0.08% in children. 5 Most centers in one survey-based study did not report deaths among their patients (85% adult to 97% pediatric centers), 15 but, in contrast, several lethal courses in pediatric and adult patients with IMDs have been published. 15,22 In our study, the survival rate was 100%, showing that SARS-CoV-2 infection does not seem to be associated with increased mortality in intoxication-type IMDs. Since the vaccination status was not covered in our questionnaire, these data are insufficient to decide whether or not individuals with intoxication-type IMD should be classified as a high-priority group for receiving the COVID-19 vaccination.
Furthermore, we observed full recovery in 100% of outpatients and 9 out of 10 admitted infected individuals including four admitted MMA patients treated with immunosuppression following transplantation, while only one female MMA patient with immunosuppression suffered from chronic mild neurological deficits after discharge. No cases of acute COVID-19-related complications like myocarditis or PIMS have been detected. This observation also confirms that chronic COVID-19-related complications comprising "subacute" (4-12 weeks) and "postacute" COVID-19 (>3 months) are not increased in our cohort of pediatric or adult patients with intoxication-type IMD.
| Limitations
First, the amount of collected data is relatively small and involves only half of the E-IMD study centers. Second, the questionnaire was only directed at healthcare professionals and not at patients, since the main study focus was to evaluate the impact on disease course and treatment decisions and the resulting effect on health outcome. As a consequence, the data do not reflect the patient perspective. Third, assessment of hospitalization and severe cases may be hampered by the fact that the age group with the most severe courses in the general population, i.e., individuals aged 60 years and older and suffering from various comorbidities, is underrepresented in our cohort of individuals with intoxication-type IMDs. Fourth, the impact of vaccinations, which became broadly accessible at the end of the observation period and to a variable degree in each country, was beyond the scope of this study and, therefore, could not be evaluated. Fifth, SARS-CoV-2-associated long-term complications might be underreported since data entry was closed shortly after the end of the observation period.
| CONCLUSION
In conclusion, this study demonstrates a significant impact of the COVID-19 pandemic on the medical health care of individuals with intoxication-type IMDs worldwide, with different medical services being increasingly requested by patients and families, while medical supply and management were not endangered. Notably, the clinical course of SARS-CoV-2 infections in this subgroup is predominantly mild and mostly managed in the outpatient clinic. The rate of metabolic decompensations in admitted patients, most of whom preventively received metabolic emergency treatment, was not increased, and the overall prognosis for the vast majority of affected individuals is promising. | 2022-10-22T06:16:32.415Z | 2022-10-20T00:00:00.000 | {
"year": 2022,
"sha1": "776f055ec76629befbe272587b4ab53523f59fc3",
"oa_license": "CCBYNCND",
"oa_url": "https://pure.eur.nl/ws/files/73371680/Impact_of_the_SARS_CoV_2_pandemic_on_the_health_of_individuals_with_intoxication_type_metabolic_diseases_Data_from_the_E_IMD_consortium.pdf",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "c0ab88c41efcde7be40e8a33ee0cce6d436f78ad",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
40089704 | pes2o/s2orc | v3-fos-license | Strain typing of Mycoplasma cynos isolates from dogs with respiratory disease
The association of Mycoplasma cynos with canine infectious respiratory disease is increasingly being recognised. This study describes the strain typing of 14 M. cynos isolates cultured from trachea and bronchoalveolar lavage samples of six dogs with respiratory disease, from two separate kennels in the United Kingdom. The genetic similarity of the isolates was investigated using pulsed-field gel electrophoresis (PFGE) and random amplified polymorphic DNA (RAPD). Most of the isolates from four dogs housed at a re-homing kennel were genetically similar and some isolates from different dogs were indistinguishable by both PFGE and RAPD. These isolates were cultured from dogs with non-overlapping stays in the kennel, which may indicate maintenance of some strains within kennels. A small number of isolates showed much greater genetic heterogeneity and were genetically distinct from the main group of M. cynos strains. There was also a high degree of similarity of the M. cynos type strain (isolated from a dog with respiratory disease in Denmark in 1971) to at least one of the United Kingdom isolates using PFGE analysis, which may suggest possible conservation of pathogenic strains of M. cynos.
Introduction
Canine infectious respiratory disease (CIRD or kennel cough) is a multifactorial disease complex and the agents traditionally associated with this disease are Bordetella bronchiseptica, canine parainfluenza virus (CPIV), canine adenovirus (CAV), and canine herpesvirus (CHV). Recently, a novel canine respiratory coronavirus (CRCoV; Erles et al., 2003) and Streptococcus equi subsp. zooepidemicus (Chalker et al., 2003a) have also been found to be associated with the disease.
Within this microbial complex, Mycoplasma spp. are found to be ubiquitous in the upper respiratory tract of dogs and are thought to be normal flora (Rosendal, 1982; Randolph et al., 1993). However, mycoplasmas have also been the sole bacterial isolate in a number of clinical cases of canine respiratory disease, but unfortunately these isolates were not speciated and viral causes of CIRD were not investigated (Kirchner et al., 1990; Jameson et al., 1995; Chandler and Lappin, 2002). The involvement of M. cynos in CIRD has been noted for some time (Rosendal, 1972, 1978, 1982). Evidence for this has been mounting recently, as Chalker et al. (2004) found that M. cynos was the only mycoplasma significantly associated with canine respiratory disease. In addition, dogs entering a re-homing kennel that developed an antibody response to M. cynos were more likely to suffer respiratory disease (Rycroft et al., 2007). M. cynos has been isolated from dogs with pneumonia (Rosendal, 1972, 1978; Chvala et al., 2007) and was particularly abundant in the most necrotic areas of the lung (Chvala et al., 2007). Furthermore, M. cynos was the only detected agent in a case of severe bronchopneumonia in a litter of young puppies which resulted in the deaths of some puppies, but which was resolved in the surviving littermates after the administration of appropriate antibiotics.
Materials and methods
2.1. Mycoplasma cynos isolates
M. cynos isolates cultured from respiratory samples from dogs with moderate to severe respiratory disease were identified from an earlier large study. Isolation and identification of these isolates have been previously described (Chalker et al., 2004). Briefly, bronchoalveolar lavage (BAL) and trachea samples were obtained from euthanized dogs from a re-homing centre with a history of endemic CIRD (population A). In addition, BAL samples were taken from dogs with persistent coughs at a training centre (population B). Dogs were graded for respiratory signs prior to sampling or euthanasia. M. cynos was cultured on Mycoplasma media (Mycoplasma Experience) and identified by PCR specific for the 16S/23S rRNA intergenic spacer region. Cultures of the single-cloned M. cynos isolates were stored frozen at −70 °C.
The type strain M. cynos H381 NCTC10142 was obtained from the National Collection of Type Cultures (NCTC), Colindale, London.
Bacterial and viral screening
Bacteriological screening of the samples has been previously described (Chalker et al., 2003a,b). Briefly, BAL and trachea samples were inoculated onto MacConkey agar and two blood agar plates (incubated aerobically and anaerobically) and incubated at 37 °C. Gram-positive, catalase-negative, beta-haemolytic colonies were identified as streptococci and sero-grouped into Lancefield groups, then identified to the species level with API 20STREP (Biomerieux). Oxidase-positive colonies with typical B. bronchiseptica growth characteristics were identified as such with API 20NE.
Virus screening of the samples has been previously described (Erles et al., 2004). Briefly, RNA and DNA were extracted from the respiratory tissue samples and PCR and reverse transcription-PCR were used to detect CPIV, canine herpesvirus (CHV), CAV, canine distemper virus (CDV), and CRCoV. In addition, RT-PCR for canine influenza virus (CIV) was carried out using primers AMP227F and AMP622R directed to the M gene (Ellis and Zambon, 2001). Equine influenza virus (H3N8) served as a positive control.
Pulsed-field gel electrophoresis
Aliquots (20 ml) of stationary phase M. cynos culture (maximum absorbance A600 of approximately 0.3) were used for PFGE analysis. Cells were harvested by centrifugation (3500 × g for 20 min at 4 °C), washed three times with PBS buffer with 10% (w/v) glucose and resuspended in 300 μl cold PBS/glucose buffer. Agarose plugs were made from a 1:1 mixture of 2% low-melting-point agarose (Biorad) and the cell suspension. Plugs were incubated in lysis buffer (10 mM Tris-HCl, 1 mM EDTA, 1% lauroyl sarcosine, 1 mg/ml proteinase K) for 48 h at 56 °C. Plugs were washed four times with Tris-EDTA buffer for 30 min at 4 °C. Slices (2 mm) were cut aseptically from plugs and equilibrated in restriction buffer (Promega) for 1 h. Subsequently, restriction digestion was performed by using 30 U of SmaI (Promega) for 16 h according to the manufacturer's instructions. The fragments were resolved on 1% pulsed-field certified agarose (Biorad) gels using a CHEF-DRIII system (Biorad) at 6 V/cm, with a running time of 20 h at 14 °C, an included angle of 120°, an initial pulse time of 4 s and a final pulse time of 40 s. Gels were stained with ethidium bromide (0.5 μg/ml) for 15 min, destained in distilled water for 1 h and photographed under UV light. A lambda ladder PFGE marker (Sigma) was used for fragment size determination. The Bionumerics package (Applied Maths) was used for gel analysis and dendrograms were produced using the Jaccard coefficient and unweighted pair group method using arithmetic averages (UPGMA) cluster analysis.
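For readers without access to the Bionumerics package, the same style of analysis can be sketched in Python: encode each lane's banding pattern as a presence/absence vector, compute pairwise Jaccard distances, and cluster with UPGMA (average linkage). The band matrix below is invented for illustration and is not the study's data.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, dendrogram

isolates = ["185", "190", "417", "491", "type strain"]
# rows = isolates, columns = presence (1) / absence (0) of each band position
bands = np.array([
    [1, 1, 0, 1, 0, 1],
    [1, 1, 0, 1, 0, 1],
    [1, 0, 1, 1, 0, 1],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
])

dist = pdist(bands, metric="jaccard")          # Jaccard dissimilarity
print(np.round((1 - squareform(dist)) * 100))  # pairwise % similarity matrix
tree = linkage(dist, method="average")         # UPGMA
dendrogram(tree, labels=isolates, no_plot=True)  # set no_plot=False to draw
```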
RAPD
The single primer Hum4 (5′-ACGGTACACT-3′) (Hotzel et al., 1998) was used for the generation of RAPD profiles. Amplification was performed in a 50-μl total reaction volume containing 100 ng of DNA sample, 10 mM Tris-HCl (pH 9.0), 1.5 mM MgCl2, 50 mM KCl, 0.1% Triton X-100, 0.2 mM each deoxynucleoside triphosphate, and 0.5 U of TaqGold (PerkinElmer). Cycling conditions included an initial denaturation step at 94 °C for 5 min, followed by 40 cycles of 94 °C for 15 s, 37 °C for 60 s and 72 °C for 90 s. The last cycle included a final elongation at 72 °C for 7 min. PCR products were resolved by electrophoresis on 10 cm 2% agarose gels at 60 mA for 1.5 h, stained with ethidium bromide and visualized under UV illumination. The Bionumerics package (Applied Maths) was used for gel analysis and dendrograms were produced using the Jaccard coefficient and unweighted pair group method using arithmetic averages (UPGMA) cluster analysis.
Dogs
Six dogs with moderate to severe respiratory disease from which M. cynos was isolated were identified from an earlier large study (Chalker et al., 2004). Four dogs were housed at a re-homing centre with a history of endemic CIRD (population A) and two dogs at a training centre (population B). All six dogs had respiratory disease with symptoms of either bronchopneumonia (respiratory score 5) or cough and nasal discharge (score 3). Trachea and/or BAL samples were taken from the dogs within 4 weeks of the first symptoms of CIRD. The dogs were 1-3 years old and of various breeds. The group consisted of entire and neutered males and females (Table 1).
Bacteriology and virology screening
M. cynos was cultured from the BAL of each dog and also the trachea where that sample was available (Tables 1 and 2).
Testing of BAL samples from the two dogs from the training centre (B-1 and B-2) was negative for the viruses CRCoV, CHV, CPIV, CAV, CDV and CIV. In addition, these samples yielded no bacterial growth except that of M. cynos. In comparison, the four dogs from the re-homing kennel had other bacteria cultured from the respiratory samples (see Table 1).
PFGE analysis of M. cynos isolates
PFGE analysis of the M. cynos type strain and the 14 isolates from dogs with respiratory disease resulted in six different PFGE profiles (Fig. 1). The PFGE profiles consisted of 3-5 DNA bands, which ranged in size between approximately 6 and 425 kb. The PFGE profiles of the isolates can be divided by similarity into three groups. Group 1 contains 10 isolates and is a genetically homogeneous group with at least 78% similarity; isolates 185, 190, 191, 210, 253, 312, 387, 417, 428, and 429 all form this group. These are all of the isolates from the population A dogs except isolate 214 from dog A-1.
Group 2 contains 491 and 492; these isolates are indistinguishable from each other but quite distinct from all the other isolates, with only 28% similarity by cluster analysis. These isolates are from two different dogs from the training centre population (dogs B-1 and B-2).
The third group contains the type strain and isolates 510 and 214. The type strain and 510 are indistinguishable from each other, but 214 is distinctly different with only about 46% similarity to the other two. Isolate 510 was from dog B-2 while isolate 214 was from dog A-1.
RAPD analysis of M. cynos isolates
When the same M. cynos isolates were subjected to analysis with RAPD with the primer Hum4, 12 different profiles were obtained (Fig. 2). The profiles consisted of 3-13 bands, which ranged in size between approximately 240 and 2200 bp. Two broad groups of similar isolates were formed. The type strain and the 11 isolates 185, 190, 191, 210, 253, 312, 387, 417, 428, 429 and 510 had similar profiles and are considered to be a homogeneous group with more than 68% similarity (group 1). This group comprises all of the isolates from the population A dogs, except for isolate 214, but also includes 510 from dog B-2 and the type strain.
Isolates 214, 491 and 492 formed a heterogeneous group about 60% similar to each other, but only about 26% similar to the group 1 isolates. Isolates 491 and 492 are from dogs B-1 and B-2, and isolate 214 is from dog A-1.
The PFGE and RAPD grouping of isolates is summarised in Table 2.
Discussion
This is the first genetic typing study of M. cynos. The isolates from each kennel were found to be genetically similar. Indeed, isolates from dogs that had been housed in the same kennel 4 and 8 months apart were found to be indistinguishable using both genetic analysis methods (isolates 417 and 191 from dogs A-2 and A-3, and isolates 429 and 387 from dogs A-1 and A-4, respectively). The dogs had stayed at the kennels for between 8 and 16 days. This may suggest that there is maintenance of M. cynos strains within a kennel situation. M. cynos can be isolated from the upper respiratory tract of healthy dogs (Chalker et al., 2004) and it is probable that some strains are passed between subsequent dogs, resulting in the survival of these strains. In addition, environmental survival may aid the continued existence of some strains. Although the environmental survival of M. cynos is not known, the environmental survival of other mycoplasma species varies from a week to several months (Nagatomo et al., 2001) and M. cynos can be isolated from the air (Chalker et al., 2004). Recently, it has been shown that biofilm formation is important for persistence of mycoplasmas and may aid environmental survival; it therefore seems feasible that M. cynos may persist in the kennel environment as an adherent biofilm layer.
The M. cynos type strain was isolated from the lung of a dog with CIRD in Denmark in 1971 (Rosendal, 1973). This M. cynos type strain was indistinguishable by PFGE from isolate 510 from dog B-2 and was more than 68% similar by RAPD analysis to 11 of the M. cynos isolates from both kennels. The high degree of similarity of the type strain to these United Kingdom isolates from 1999 to 2000 suggests a low level of diversity of this organism in CIRD. However, this study also shows that some isolates have a relatively low level of similarity with each other (for example, isolates 214, 491 and 492 appear to be dissimilar to the group 1 isolates). Indeed, this study suggests the potential for mixed M. cynos infections, as the same bronchoalveolar lavage sample from dog B-2 yielded M. cynos isolates 492 and 510, which are dissimilar strains. Similarly, the BAL sample from dog A-1 resulted in the culture of isolate 214, which was dissimilar to the other isolates from this sample. A larger strain typing study of more isolates is required to consolidate these observations. M. cynos was the only CIRD agent detected in two out of the six dogs (dogs B-1 and B-2). Similarly, Zeugswetter et al. (2007) recently described lethal bronchopneumonia in puppies where M. cynos was the only CIRD agent detected. Mycoplasmas have been the sole bacterial isolate in a number of other cases of CIRD, but unfortunately these isolates were not speciated (Kirchner et al., 1990; Jameson et al., 1995; Chandler and Lappin, 2002). However, in the current study, both dogs were on a course of antibiotics preceding the sampling date (dog B-1, cephalosporin; B-2, erythromycin), which may have precluded the isolation of other bacterial agents. Likewise, in the case of Zeugswetter et al. (2007), the puppies had been treated with amoxicillin prior to isolation of M. cynos from the lung tissue.
M. cynos has also been previously implicated in canine respiratory disease along with other bacterial or viral pathogens (Rosendal, 1978;Chalker et al., 2004;Chvala et al., 2007). This was also found in the current study as other respiratory pathogens apart from M. cynos were detected in the four dogs from the re-homing kennel, for example B. bronchiseptica, S. equi subsp. zooepidemicus, CHV and CRCoV. Multi-pathogen respiratory disease is commonly reported and it has been suggested that the pathogens may interact synergistically to produce disease (Randolph et al., 1993).
Escherichia coli, which was detected in one dog in the present study, has been previously isolated from BAL from a puppy with CIRD and was thought to be a contaminant (Williams et al., 2006). This is likely to be the case in this study, as Enterococcus spp. was co-isolated from the same sample. In addition, M. spumans was isolated from two dogs, one of which also had M. canis and Ureaplasma spp.; however, these species were not found to be significantly associated with respiratory disease in dogs (Chalker et al., 2004).
In summary, the PFGE and RAPD genetic typing methods were in basic agreement and showed that many of the isolates were highly similar. Strain maintenance is suggested by strains which are indistinguishable by genetic typing being isolated from dogs housed months apart within the same kennel. There was also a high degree of similarity of the M. cynos type strain (isolated from a dog with respiratory disease in Denmark in 1971) to at least one of these United Kingdom isolates, which suggests possible conservation of pathogenic strains of M. cynos.
"year": 2008,
"sha1": "671d5eda194557c44c95f522635ecb1f8a0da371",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.vetmic.2008.09.058",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "f1e174b0879aaccd5aabe97c66359f2f79fd5eb1",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Baseline immune signature score of Tregs × HLA-DR+CD4+ T cells × PD1+CD8+ T cells predicts outcome to immunotherapy in cancer patients
Background The use of immunotherapy (IT) is rapidly increasing across different tumor entities. PD-L1 expression is primarily used for therapy evaluation. The disadvantages of PD-L1 status are spatial and temporal heterogeneity as well as tumor type-dependent variation of predictive value. To optimize patient selection for IT, new prediction markers for therapy success are needed. Based on the systemic efficacy of IT, we dissected the immune signature of peripheral blood as an easily accessible predictive biomarker for therapeutic success. Methods We conducted a retrospective clinical study of 62 cancer patients treated with IT. We assessed peripheral immune cell counts before the start of IT via flow cytometry. The predictive value for therapy response of developed immune signature scores was tested by ROC curve analyses and scores were correlated with time to progression (TTP). Results High score values of “Tregs ÷ (CD4+/CD8+ ratio)” (Score A) and high score values of “Tregs × HLA-DR+CD4+ T cells × PD1+CD8+ T cells” (Score B) significantly correlated with response at first staging (p = 0.001; p < 0.001). At the optimal cutoff point, Score A correctly predicted 79.1% and Score B correctly predicted 89.3% of the staging results (sensitivity: 86.2%, 90.0%; specificity: 64.3%, 87.5%). A high Score A and Score B statistically correlated with prolonged median TTP (6.13 vs. 2.17 months, p = 0.025; 6.43 vs. 1.83 months, p = 0.016). Cox regression analyses for TTP showed a risk reduction of 55.7% (HR = 0.44, p = 0.029) for Score A and an adjusted risk reduction of 73.2% (HR = 0.27, p = 0.016) for Score B. Conclusion The two identified immune signature scores showed high predictive value for therapy response as well as for prolonged TTP in a pan-cancer patient population. Our scores are easy to determine by using peripheral blood and flow cytometry, apply to different cancer entities, and allow an outcome prediction before the start of IT.
Introduction
The use of checkpoint inhibitors as monotherapy as well as concomitant to chemotherapy is rapidly increasing across different tumor entities (1). Tumor cells suppress antitumor immunity via different signaling pathways including programmed death-ligand 1 (PD-L1) and programmed cell death protein 1 (PD-1) (2). By blocking these molecules, immunotherapy (IT) leads to an enhancement of CD8 + T cell activity resulting in antitumor immunity (2,3). Local antitumor immune response depends on an interaction with and contribution of the systemic immune system. Peripheral immune cells augment, sustain, and reactivate local IT effects by interaction with the tumor microenvironment. For example, circulating CD8 + T cells are assumed to migrate into the tumor microenvironment enhancing local antitumor immunity (4, 5).
Several FDA approvals of IT are based on PD-L1 expression for patient selection. However, the predictive value of PD-L1 status varies widely depending on the tumor type (6). Further disadvantages of PD-L1 status are spatial and temporal heterogeneity, lack of standardized laboratory methods, and different PD-L1 staining cutoffs in trials (7).
New prediction markers are urgently needed to optimize selection of patients profiting from IT and to avoid severe adverse events in non-responders. Beyond PD-L1 status, research has focused on the composition of tumor-infiltrating immune cells, showing relevance for therapy response (8). However, this method could not be established in everyday clinical practice. In general, tumor tissue-based analysis is limited by the feasibility of re-biopsies and the corresponding risk.
Due to the crucial role of systemic antitumor immunity for effective tumor control, there is increasing interest in the immune signature of the peripheral blood as a predictive biomarker of therapeutic success in clinical routine (5). Previous studies have investigated immune cell lines or laboratory parameters of peripheral blood in specific cancer types without testing across different tumor entities, or have focused on changes in biomarkers during IT rather than on predictive value before therapy start. Investigated study populations were mainly treated with single IT without additional chemotherapy or radiotherapy, which only partially reflects IT use in clinical practice (8)(9)(10).
The objective of this study was to establish immune signature scores of peripheral blood cells predicting success of IT before therapy start in a pan-cancer population.
Materials and methods
We conducted a retrospective clinical study of patients treated with IT for metastatic cancer at a single tertiary care center between May 2015 and October 2021. Inclusion criteria were at least one radiological staging after start of IT and one flow cytometry testing. Patients with different tumor entities were enrolled, mainly with lung cancer, head and neck cancer, and skin cancer.
IT could be applied as monotherapy or IT doublet as well as concomitant to radiotherapy or chemotherapy. Investigated drugs were the PD-L1 inhibitor atezolizumab, the PD-1 inhibitors nivolumab and pembrolizumab, and the cytotoxic T-lymphocyte-associated protein 4 (CTLA-4) inhibitor ipilimumab.
Initial therapy response was evaluated by the first conducted CT or MRI scan after treatment start (median time: 63 days after first IT application) according to the local hospital guidelines. Therapy response was defined as stable disease, partial response, or complete response. Time to progression (TTP) was calculated from the date of start of IT to documented progress and censored at the last visit until which no disease progression was observed. The follow-up time was limited to 24 months and stopped in case of documented tumor progression or death.
For each cancer patient, we performed a detailed manual chart review. Levels of serologic parameters and immune cell subsets (tested by flow cytometry) were analyzed. The number of patients varied for the observed parameters, depending on the type of laboratory tests performed upon treatment start. The median time of flow cytometric analysis was 22 days before the start of IT.
Statistical analysis
To evaluate the association between treatment response and laboratory parameters, analyses by Mann-Whitney U test, Student's t-test, and Kruskal-Wallis test were applied. In case of a statistical and expected pathophysiological relationship, the variables were combined in predictive scores. To measure the predictive power of the individual score, receiver operating characteristic (ROC) curves were generated. The optimal cutoff point of the scores was defined as the point at which the Youden index was maximized by the ROC curve and was calculated by the formula "J = sensitivity + specificity − 1". Comparison of area under the curve (AUC) values for ROC curves was calculated by DeLong test. TTP curves were generated by the non-parametric Kaplan-Meier method and compared with log-rank test. Correlations were tested by simple Cox regression analyses. In case of statistically and clinically significant relationship, variables were included in multiple Cox regression to analyze the robustness of their prognostic values for TTP after adjustment for covariates.
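As an illustration of the cutoff selection, the following Python sketch computes a ROC curve and the Youden-optimal cutoff; the analyses above were performed in statistical software (STATA, see below), and the score and response arrays here are synthetic placeholders rather than study data.

```python
# Synthetic illustration of the Youden-index cutoff selection described above.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
responded = rng.integers(0, 2, size=60)        # 1 = response at first staging
score = responded * 1.5 + rng.normal(size=60)  # higher scores in responders

fpr, tpr, thresholds = roc_curve(responded, score)
print(f"AUC = {roc_auc_score(responded, score):.3f}")

j = tpr - fpr                                  # J = sensitivity + specificity - 1
best = np.argmax(j)
print(f"optimal cutoff = {thresholds[best]:.3f} "
      f"(sensitivity {tpr[best]:.1%}, specificity {1 - fpr[best]:.1%})")
```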
All analyses and figures were performed using STATA software (version 15.1). A p-value < 0.05 was considered as statistically significant. The study was conducted in accordance with the Declaration of Helsinki and was approved by the ethics committee of the Medical Department of the University of Bonn (#340/21). Only previously documented data and routine diagnostic interventions were analyzed; no informed consent was needed. All patients were anonymized through the use of codes.
Baseline characteristics
A total of 62 patients treated with IT for metastatic cancer were analyzed. The mean age was 63 years (range: 34-90 years); 71.0% of patients were men and 29.0% were women; 59.7% of patients had a documented ECOG performance status 0-1 before the start of IT. The main documented tumor types were lung cancer (24.2% NSCLC, 8.1% SCLC), head and neck cancer (19.4%), skin cancer (6.5% melanoma, 6.5% non-melanoma skin cancer), and urinary tract cancer (9.7%); 25.8% of patients were treated with chemotherapy in addition to IT, and 21.0% of patients were treated with radiotherapy; 24.2% of patients received IT as first-line treatment and 25.8% received IT as second-line treatment. Initial therapy response was evaluated by the first conducted CT or MRI scan after treatment start, with a median time of 63 days after the first IT application. An initial therapy response was detected in 39 of 62 patients (62.9%): 18 with stable disease (29.0%), 20 with partial response (32.3%), and 1 with complete response (1.6%). Twenty-three patients showed no response to IT (37.1%) (Table 1).
Single-cell lines and immune signature scores correlate with response to IT at first staging
A statistically significant correlation with therapy response was seen for higher levels of HLA-DR + CD4 + T cells (p = 0.001), PD1 + CD8 + T cells (p = 0.028), PD1 + NK cells (p = 0.001), and Tregs (p = 0.049). In patients with a lower CD4 + /CD8 + ratio, we detected a clinically relevant trend of higher response rates (p = 0.089). To improve the predictive value for therapy response to IT, we were able to identify two scores based on the previous analyses.
Score A was calculated by the division of Tregs by the CD4 + /CD8 + ratio, significantly correlating with response at first staging (p = 0.001). To further optimize the precision of Score A, we performed a subclassification of CD4 + and CD8 + T cells. Score B was calculated by multiplication of Tregs, HLA-DR + CD4 + T cells, and PD1 + CD8 + T cells. Thereby, Score B showed the strongest significant correlation with response at first staging (p < 0.001). In comparison, other developed scores including Tregs, CD4 + subsets, and CD8 + subsets showed lower statistical significance for prediction of response (Table 2). For other tested laboratory parameters and previously published prediction scores, we observed trends but no statistical significance (Table 2). For Scores A and B, we observed significantly higher score values in patients with response upon IT compared to patients with progression (Figure 1).
Score A "Tregs ÷ (CD4 + /CD8 + ratio)" predicts response at first staging
To validate the predictive value of Scores A and B, ROC curves were drawn (Figure 2). Score A significantly predicted response at first staging (AUC = 0.776, 95% CI 0.633-0.919, p < 0.001). The optimal cut-point value was determined from the ROC curve. Patients with Score A ≥14.78 showed a higher probability for response than patients with Score A <14.78. Sensitivity of Score A was 86.2% and specificity was 64.3%. The positive predictive value was 83.3%, while the negative predictive value was 69.2%. Score A correctly predicted 79.1% of the staging results (Figure 2A). The prognostic value of Score A was independent of sex (p = 0.259), age (p = 0.202), and concomitant chemotherapy (p = 0.606).
Score B "Tregs × HLA-DR + CD4 + T cells × PD1 + CD8 + T cells" predicts response at first staging Score B significantly predicted response at first staging (AUC = 0.881, 95% CI 0.743-1.000, p < 0.001). Patients with Score B ≥0.01782 showed a higher probability for response than patients with Score B <0.01782. Sensitivity of Score B was 90.0% and specificity was 87.5%. The positive predictive value was 94.7%, while the negative predictive value was 77.8%. Score B correctly predicted 89.3% of the staging results ( Figure 2B). The prognostic value of Score B was independent of sex (p = 0.477), age (p = 0.839), and concomitant chemotherapy (p = 0.732).
Scores A and B correlate with time to progression
In our population of 62 patients with a median follow-up of 7.06 months, 48 progression events occurred. The median TTP was 6.03 months.
As Kaplan-Meier curves show, higher Score A and Score B values were significantly associated with improved TTP (p = 0.025; p = 0.016). Median TTP was 6.13 months for patients with a higher Score A and 2.17 months for patients with a lower Score A. Median TTP was 6.43 months for patients with a higher Score B and 1.83 months for patients with a lower Score B (Figure 3).
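A Kaplan-Meier/log-rank comparison of this kind could be reproduced with the lifelines package, as sketched below on synthetic data; the variable names stand in for the study's TTP, progression, and score-group variables.

```python
# Synthetic sketch of the Kaplan-Meier / log-rank TTP comparison by score group.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
high_score = rng.integers(0, 2, size=62).astype(bool)
ttp_months = rng.exponential(scale=np.where(high_score, 6.0, 2.0))
progressed = rng.random(62) < 0.8                 # True = progression observed

km_hi, km_lo = KaplanMeierFitter(), KaplanMeierFitter()
km_hi.fit(ttp_months[high_score], progressed[high_score], label="high score")
km_lo.fit(ttp_months[~high_score], progressed[~high_score], label="low score")
print("median TTP:", km_hi.median_survival_time_, "vs", km_lo.median_survival_time_)

res = logrank_test(ttp_months[high_score], ttp_months[~high_score],
                   event_observed_A=progressed[high_score],
                   event_observed_B=progressed[~high_score])
print(f"log-rank p = {res.p_value:.3f}")
```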
Higher Score A and Score B values showed a statistically significant risk reduction for TTP
In Cox regression analyses for TTP, a higher Score A was associated with a risk reduction of 55.7% (HR = 0.44, p = 0.029), and a higher Score B with an adjusted risk reduction of 73.2% (HR = 0.27, p = 0.016).
Discussion
For a large group of different tumor types, IT is a standard treatment used as monotherapy or additional to chemo-or radiotherapy. Despite this increasing use, there is still a lack of biomarkers with predictive value for therapy response (1).
Beyond blocking local immunosuppression, IT success is achieved by systemic antitumor immunity, relying on the functionality and composition of the individual immune system (5). Therefore, we developed scores based on the individual immune signature of the patient's peripheral blood and investigated their predictive value for therapy success of IT. The scores with the best statistical power consisted of three cell lines: CD4 + T cells, CD8 + T cells, and Tregs. By a precise subclassification of these cell lines, an increase of the predictive power could be achieved.
A lower CD4 + /CD8 + ratio reflects lower CD4 + T cells and higher CD8 + T cells. CD8 + T cells are considered to be the main effector cells of IT, causing direct cytotoxic damage (11). In tumor tissue analyses, high numbers of tumor-infiltrating CD8 + T cells correlated with response to IT (12). While the total count of peripheral CD8 + T cells showed no relevant influence on response, we detected a statistically significant predictive value of the PD1 + CD8 + T cell subpopulation. We assume that the inhibitory receptor PD-1, as an exhaustion marker, is expressed on CD8 + T cells accessible to IT. By blocking PD-1 with checkpoint inhibitors, the cytotoxic antitumor effect of the CD8 + T cells is unleashed (13). Other studies similarly reported a prognostic value of elevated PD1 + CD8 + T cells in the peripheral blood as a baseline and monitoring marker for therapy response in solid tumors (13,14). Not all subsets of CD8 + T cells positively impact IT (12,15). For example, high counts of senescent CD8 + T cells are discussed to negatively impact response to IT (16).
As an inhomogeneous group, CD4 + cells may differentiate into immunosuppressive or immune-stimulating cells (17). For the subgroup of HLA-DR + CD4 + T cells, we could show a statistically significant correlation with therapy response. We assume that in the subsets of HLA-DR + CD4 + T cells, immune-activating cells outnumber inhibitory cells. The relevance of CD4 + T cells for IT success is not completely understood. CD4 + T cells may support antitumor immunity by activation of CD8 + cells, modulation of the immune system through effector cytokines, and a supposed direct cytotoxic effect (17).
In our study, increased baseline count of Tregs significantly correlated with therapy response at first staging. While high peripheral Treg counts were associated with poorer prognosis in the pre-IT era, an investigation of stromal infiltrating T cells in NSCLC patients showed correlation of increased PD1 + Treg counts with response to IT (8,18). In addition, an elevated Treg count in the peripheral blood was also associated with clinical benefit in NSCLC patients undergoing IT (19). This special observation of Tregs in IT patients is explained by the immune modulatory effects of IT. By deactivating Tregs, IT might reduce tumor-related inhibition of the immune system (2). PD-1 blockade was shown to downregulate intracellular FoxP3 expression of Tregs, indicating an inhibiting effect on this cell population (20). We assume that increased peripheral Tregs indicate a high level of tumor-induced immunosuppression identifying patients susceptible for IT. In line with these findings, nivolumab reduces in vitro suppressive capacity of Tregs and additionally enhances CD8 + T-cell resistance to Treg suppression (20,21).
Since the IT-induced antitumor effect is based on a complex interaction of activated and deactivated effector cells, it is mandatory to consider more than one cell line to optimize response prediction. Our developed Score A "Tregs ÷ (CD4 + / CD8 + ratio)" and Score B "Tregs × HLA-DR + CD4 + T cells × PD1 + CD8 + T cells" showed a high statistically significant correlation with treatment success and correctly classified 79.1% and 89.3% of therapy responses at first staging, respectively. Furthermore, patients with a higher Score A and Score B had a prolonged TTP and a relevant risk reduction for progression of 55.7% and 73.2%, respectively. In multiple Cox regression, Score B remained statistically significant.
The higher predictive value for therapy response and TTP of Score B compared to Score A may be explained by the specific selection of relevant cell subsets for IT success. However, Score A was reliable and might be easier to implement in clinical practice due to its feasibility.
Limitations have to be considered when interpreting our findings. Due to the retrospective small patient cohort, it is necessary to investigate these scores prospectively in a larger patient population. Furthermore, flow cytometry was performed with two different staining protocols over the study period. Limited by the study design, we are not able to make a statement about the correlation between peripheral blood and tumor-infiltrating lymphocytes. Further investigation to understand the high predictive value of the calculated scores is needed, including markers for Tregs like FoxP3 and PD-1.
To our knowledge, these are the first developed immune scores with statistically proven high sensitivity and specificity predicting therapy response as well as TTP in a pan-cancer population treated with IT. In clinical use, these scores might optimize the prediction of therapy success based on the individual immune signature of the patient's peripheral blood before therapy start.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving human participants were reviewed and approved by the ethics committee of the Medical Department of the University of Bonn (#340/21). Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
Author contributions
RM: conception and design of the work/acquisition, analysis, and interpretation of data/drafting of the manuscript/statistical analysis. STH: conception and design of the work/acquisition, analysis, and interpretation of data/drafting of the manuscript/ statistical analysis. SAH: technical and material support. PB: critical revision of the manuscript. AH: conception and design of the work/analysis, and interpretation of data/critical revision of the manuscript/supervision. All authors contributed to the article and approved the submitted version.
Funding
Research in the Heine lab is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC2151 - 390873048.
"year": 2022,
"sha1": "a0770b229041d03b52e12ce056d3a225ace484d7",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "a0770b229041d03b52e12ce056d3a225ace484d7",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Tuberculin skin test conversion in patients under treatment with anti-tumor necrosis factor alpha agents
Background Despite successful clinical outcomes of biologic medications in patients with chronic rheumatic diseases, some considerable adverse effects such as infections remain a major concern. The possibility of tuberculosis (TB) reactivation during treatment with anti-tumor necrosis factor (TNF) alpha agents has necessitated a screening test before initiation of treatment. However, screening over the course of treatment is not recommended in patients with negative baseline screening tests. This study aimed to evaluate the efficacy of the tuberculin skin test (TST) before treatment in patients with chronic rheumatologic diseases who were indicated to receive anti-TNF-alpha therapy and the necessity of repeating this test over the course of treatment. Methods In this prospective study, patients with chronic rheumatologic diseases receiving anti-TNF-alpha agents were studied over a two-year period. TST was performed before treatment and those with positive results were excluded from the study. Thereafter, treatment with anti-TNF-alpha agents was initiated at the indicated dose. TST was repeated before administration of biologic treatment until TST became positive or until 16 weeks after the initiation of treatment with anti-TNF-alpha. Results A total of 51 cases were studied, of whom one patient (1.9%) was excluded due to a positive TST before treatment. All participants received infliximab, and the TST became positive in one patient (2%) 2 weeks after receiving the first dose. The results of further tests at weeks 6, 10, and 14 were all negative for the remaining patients. Conclusion Due to the possibility of TST conversion after administration of anti-TNF-alpha therapy, it is important to consider TB monitoring in patients under treatment with these agents using available methods such as TST.
Background
The Community Oriented Program for Control of Rheumatic Diseases (COPCORD) and the International League of Associations for Rheumatology (ILAR), in collaboration with the World Health Organization (WHO), revealed that rheumatic complaints were the commonest complaint in the community, and soft tissue rheumatism, ill-defined musculoskeletal symptoms, and osteoarthritis were the most prevalent disorders [1]. The urban COPCORD study in developing countries such as Iran demonstrated that in the population over the age of 15 years rheumatic complaints were seen in 41.9% of people. Degenerative joint disease and inflammatory disorders were also reported in a considerable proportion of patients [2]. Different therapeutic options have been recommended for rheumatologic diseases, such as non-steroidal anti-inflammatory drugs, traditional disease-modifying anti-rheumatic drugs (DMARDs), and glucocorticoids [3,4]. Moreover, numerous biologic therapies have emerged in recent decades with significantly successful outcomes, including tumor necrosis factor-alpha (TNF-alpha) blockers, CTLA4-Ig, anti-interleukin 1 (IL-1) and anti-IL-6 receptors, and rituximab (an anti-CD20 antibody) [5][6][7]. However, some complications, particularly infections, are not uncommon with these medications, both as a direct consequence of the treatment and due to the underlying disease process [8][9][10]. Reactivation of tuberculosis (TB) has also been widely reported in patients receiving biologic therapies, in particular anti-TNF-alpha agents [11][12][13]. Therefore, a tuberculin skin test (TST) or interferon-gamma release assay (IGRA) is strictly recommended before the initiation of therapy [13]. Most current guidelines and expert reviews recommend that, in the absence of risk factors and clinical suspicion for TB, there is no need to repeat TB screening tests [13,14]. However, there are some reports of TB infection in patients under treatment with biologic therapies despite a negative TST at initiation [15][16][17]. These reports raise concern about the inadequacy of a single TST before initiation of treatment. However, no prospective study has been conducted in this regard. Therefore, we aimed to evaluate the efficacy of TST before treatment in patients with chronic rheumatologic diseases who were indicated to receive anti-TNF-alpha therapy and the necessity of repeating this test over the course of treatment.
Methods
This prospective observational study was conducted on patients (of any age or sex) with a chronic rheumatologic disease referred to Imam Reza Teaching Hospital of Tabriz University of Medical Sciences for receiving anti-TNF-alpha agents over a two-year period (March 2017 to March 2019). Patients were excluded if they had a medically confirmed history of active or latent TB infection, household TB contact, or unevaluated symptoms that could possibly be due to TB infection, such as chronic cough. Informed consent was obtained from all participants. TST was performed 10 days before treatment with the standard method by an internal medicine specialist and was confirmed by another internal medicine specialist. Patients with positive TST results were excluded from the study and referred to the TB control centers for further diagnostic and/or therapeutic procedures. The study was continued with TST-negative patients. One week later, the TST was repeated and positive tests were considered a booster effect; these cases were also excluded from the study. Thereafter, treatment with anti-TNF-alpha agents was initiated at the indicated dose. Infliximab was administered at weeks 0, 2, and 6 and then every 8 weeks. The timing of the TST and infliximab administrations is illustrated in Fig. 1. TST was repeated by the same person with the same procedure before administration of treatment until TST became positive or until 16 weeks after the initiation of treatment with anti-TNF-alpha.
TST was done by an internal medicine specialist on the volar side of the left forearm with the Mantoux method. Ten units of tuberculin purified protein derivative (PPD) were injected intradermally and the injection site was marked (PPD RT23; Statens Serum Institut, Copenhagen, Denmark). The appearance of any induration was evaluated 72 h after injection using the ballpoint method [18]. The same procedure was repeated each time the TST was performed. An induration of more than 5 mm was considered positive in patients receiving immunosuppressive drugs, such as methotrexate or cyclosporine; otherwise, 10 mm or higher was considered positive. Also, any increase in the diameter of TST induration was defined as a positive TST [19,20].
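These reading criteria can be encoded as a simple decision rule; the function below is an illustrative sketch of the study's criteria, not a validated clinical tool.

```python
from typing import Optional

def tst_positive(induration_mm: float,
                 immunosuppressed: bool,
                 prior_induration_mm: Optional[float] = None) -> bool:
    """TST positivity under the study criteria described above."""
    # >5 mm in patients on immunosuppressive drugs; >=10 mm otherwise.
    if immunosuppressed and induration_mm > 5:
        return True
    if not immunosuppressed and induration_mm >= 10:
        return True
    # Any increase in induration over a prior reading also counts as positive.
    if prior_induration_mm is not None and induration_mm > prior_induration_mm:
        return True
    return False

# Example matching the converter in the Results (3 mm at baseline, 8 mm later):
print(tst_positive(8, immunosuppressed=False, prior_induration_mm=3))  # True
```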
Statistical analysis
Continuous data were reported as mean and standard deviation. The frequency and percentage of qualitative variables were also reported. SPSS version 24 was used for all analyses.
Results
A total of 51 patients participated in this study, out of whom one patient with ankylosing spondylitis (AS) and a positive TST before initiation of treatment was excluded from the study. The study was conducted on 50 patients, including 28 males (56%) and 22 females (44%). The mean age was 31.2 ± 6.55 years (range: 20-50 years). Also, 33 patients (66%) had AS and 17 patients (34%) had rheumatoid arthritis (RA). Concurrent use of methylprednisolone was reported in 17 (34%) patients (Table 1). All patients had received Bacillus Calmette-Guérin (BCG) vaccination in their childhood.
All patients with negative TST included in our study received infliximab with the standard dose. Before administration of the second dose of infliximab (2 weeks after the first dose of infliximab), a male 37-year-old patient with AS developed a positive TST (induration, 8 mm; Table 2). The TST induration of this patient prior to biologic treatment was 3 mm; he received indomethacin concomitantly but did not receive prednisolone or other non-steroidal anti-inflammatory drugs. Moreover, he had no household TB contact. The patient was referred to TB control center for further evaluation and the study continued with the remaining patients. However, no other positive TST cases were seen when TST was repeated in the following weeks. In addition, none of the patients had symptoms of TB.
Discussion
The possibility of TB reactivation by anti-TNF-alpha treatment has been well established by several studies, and guidelines recommend performing screening tests before initiation of these drugs. However, the majority of current guidelines suggest that there is no need for re-screening for TB infection after initiation of biologic treatments [14]. We evaluated the sufficiency of TST in patients with chronic rheumatic diseases indicated to receive biologic therapy. Our results demonstrated that there is a possibility of TB infection after administration of biologic drugs despite a negative prior screening test (conversion of TST), which can be detected by repeating TST over the course of treatment. Although this was seen in only one of 50 patients included in the study, neglecting this finding and poor detection and management of this highly communicable disease can lead to serious consequences.
Several studies have revealed the risk of TB infection in patients who receive TNF-alpha inhibitors [21][22][23]. Askling et al. investigated the Swedish Inpatient Register RA cohort (62,321 patients) and reported that 230 individuals in this cohort were diagnosed with TB during the 14-year follow-up period, of whom 15 patients had received TNF-alpha inhibitors (11 patients were treated with infliximab) [24]. A conversion from negative to positive TST after treatment with anti-TNF-alpha has also been reported by some studies. Park et al. reported a considerable ratio of 32.6% of patients having a conversion from negative to positive TST by using biologic medications [17]. Also, Slouma et al. reported two cases of active pulmonary TB among patients receiving anti-TNF-alpha therapy with initially negative TST and QuantiFERON-TB Gold tests [16]. Another study by Mobini et al. reported a case of seropositive RA under treatment with infliximab that developed an active TB infection despite a previous negative TB screening test [15]. This finding has also been reported for patients with non-rheumatic diseases. Celine Debeuckelaere et al. reported two patients with chronic inflammatory bowel disease (IBD) that developed a TB infection after treatment with anti-TNF-alpha agents, despite a negative screening test [25]. Also, a Korean study reported de novo TB infection in 3.1% of IBD patients after anti-TNF-alpha therapy [26].
A panel of experts recommends that an annual TB screening test should be considered in patients with RA, AS, psoriatic arthritis (PsA), or psoriasis under treatment with anti-TNF-alpha agents if they travel or work in situations where TB exposure is likely, regardless of a negative screening test at baseline [13]. However, this has not been adequately appreciated in the current guidelines.
There are two predominant screening tests for TB: TST and IGRA. Despite well-known false-negative and false-positive TST results, the standard screening test is still TST along with a comprehensive medical history and chest X-ray [27]. Furthermore, TST is simpler, has lower costs, and is a widely available test. Therefore, in our study we did not perform IGRA and only TST was conducted as a screening test. Meanwhile, we attempted to diminish the disadvantages of TST; for example, TST was administered meticulously by an expert and under supervision to reduce the negative impact of misperformance. We did not have any false-positive results, as both TST-positive patients (before and after treatment) were confirmed to have active TB by further evaluations. Nonetheless, we could not rule out false-negative TSTs in our patients due to their immunosuppressive treatment. Oral prednisolone is reported to have some impact on TST results; however, this impact is predominantly dose dependent [28]. Kleinert et al. and Ponce de Leon et al. demonstrated that 7.5-10 mg/day may impair TST results [29]. However, the majority of our patients received prednisolone at a dose of 5 mg/day or less. Regarding the patient who presented a positive TST after treatment, it is unlikely that the negative TST before treatment was due to immunosuppression, because he was not under treatment with immunosuppressive drugs and he received the same medications at the same dose along with infliximab.
A conversion in TST, defined as a change from a negative to a positive test, can occur when a new or enhanced hypersensitivity arises due to de novo TB infection or non-TB mycobacteria, including BCG vaccination [19]. This reaction has been variously reported to occur 3 to 7 weeks after exposure [19]. In our study, a positive TST was seen 2 weeks after the baseline TST (2 weeks after initiation of treatment). This could be due to the booster effect; however, considering that we conducted a second TST 3 days before treatment to rule out this phenomenon, a booster effect was also unlikely in our patient.
TNF-alpha has an important role both in the host immune response to TB infection and in its immunopathology [30]. It is produced by a variety of immune cells in response to various pathogens, such as lipopolysaccharide or viral and bacterial infections [31]. TNF-alpha produced in response to TB infection brings about several positive effects. The main receptor of TNF-alpha acting against TB infection is TNF receptor 1 (TNFR1) [32]. In vitro studies have demonstrated that TNFR1 is essential both in granuloma formation and in susceptibility to intracellular pathogens during TB infection. This results in controlling the mycobacteria and preventing their dissemination [30]. Therefore, it is conceivable that inhibition of this mediator by anti-TNF-alpha agents leads to poor immune reaction potency against TB infection.
The global prevalence of RA is higher than that of AS, but in our study 66% of patients who received infliximab had AS. The main reason was our center's strategy for the treatment of RA: we use biologics only after combination therapy with three DMARDs has failed to control disease activity. However, some rheumatologists use rituximab as the first biologic for treating seropositive RA.
Our study had some limitations. It would have been better to perform IGRA and chest X-ray for more comprehensive screening of the patients, but due to our center's protocol and the aforementioned reasons we only performed TST. Moreover, due to the relatively small number of patients in our study, we found only one positive TST after initiation of treatment; TST-positive patients were referred to TB control centers and TB infection was confirmed. It could be possible to detect more positive cases in a larger sample size.
Conclusion
Our study demonstrated the possibility of TST conversion (positive TST) after the administration of infliximab. Therefore, it is important to consider re-screening for TB in patients receiving infliximab after initiation of treatment, even if the screening tests were negative before treatment.
"year": 2020,
"sha1": "a0ae5b5d6b298354b85b1b19efdae401e5018046",
"oa_license": "CCBY",
"oa_url": "https://bmcinfectdis.biomedcentral.com/track/pdf/10.1186/s12879-020-05166-5",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a0ae5b5d6b298354b85b1b19efdae401e5018046",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Cardiorespiratory fitness assessment using risk-stratified exercise testing and dose–response relationships with disease outcomes
Cardiorespiratory fitness (CRF) is associated with mortality and cardiovascular disease, but assessing CRF in the population is challenging. Here we develop and validate a novel framework to estimate CRF (as maximal oxygen consumption, VO2max) from heart rate response to low-risk personalised exercise tests. We apply the method to examine associations between CRF and health outcomes in the UK Biobank study, one of the world’s largest and most inclusive studies of CRF, showing that risk of all-cause mortality is 8% lower (95%CI 5–11%, 2670 deaths among 79,981 participants) and cardiovascular mortality is 9% lower (95%CI 4–14%, 854 deaths) per 1-metabolic equivalent difference in CRF. Associations obtained with the novel validated CRF estimation method are stronger than those obtained using previous methodology, suggesting previous methods may have underestimated the importance of fitness for human health.
Development and validation of CRF estimation method for the UKB-CRF test
We recruited 105 female (mean age: 54.3 y ± 7.3) and 86 male (mean age: 55.0 y ± 6.5) validation study participants (Supplemental Fig. 1). Participants completed a series of UKB-CRF tests and a submaximal steady-state test to characterise exercise HR response across different test protocols. VO2max was directly measured during an independent maximal exercise test (Fig. 1A). All participants contributed submaximal exercise test data. Some maximal exercise test data were excluded due to missing HR and VO2 response data (n = 25) and failure to achieve predefined maximal exercise threshold criteria (n = 33). Participant characteristics were generally similar in these subsamples.
HR response features derived from the UKB-CRF test vary with the ramp rate of the assigned protocol and, if unaccounted for, will result in biased CRF estimates (Fig. 1B). HR response features may also be of poor quality or missing, a common situation in exercise testing. We addressed these issues by integrating UKB-CRF test and steady-state test HR response features (Fig. 1C-E) into a multilevel CRF estimation framework (Supplemental Table 1). The framework uses these features to estimate the HR-WR relationship that would be established if a longer steady-state test had been completed instead of the short ramped UKB-CRF test. Maximal WR is then estimated by extrapolation to age-predicted maximal HR (HRmax). This framework approach minimises bias introduced by protocol assignment of ramp rate. It also enables the application of different estimation models to different data availability scenarios. We derived several estimation models, notated as M1 through M5 in order of comprehensiveness. The highest-level model (M1) used more HR response features to estimate CRF, while lower-level models (M2 through M5) used fewer.
An accepted method for estimating VO2max from submaximal cycle ergometry is to estimate maximal steady-state WR from HR response and convert the estimated maximal steady-state WR to VO2max using the American College of Sports Medicine (ACSM) metabolic equation for cycling 9. Before applying this equation, we verified that maximal WR estimated from the UKB-CRF test by the multilevel framework corresponded to maximal steady-state WR by comparing WR estimates with WR measured at the respiratory compensation point (RCP) from the maximal exercise test. The WR at RCP is equivalent to maximal steady-state WR and is the WR above which anaerobic metabolism is needed to sustain exercise until exhaustion. Figure 2 shows agreement with WR at RCP using the most comprehensive model for each test type (M1: ramp tests; M4: flat tests). Agreement for the remaining estimation models is shown in Supplemental Table 2. Across estimation models (M1 through M5), estimated WRs were strongly correlated with WR at RCP (Pearson's r range 0.81-0.86) with no significant mean bias (bias range in women: −3.7 to 3.8 W; in men: −5.2 to 0.1 W). Mean bias also did not differ between low and high ramped tests, but root-mean-square error was generally lower for flat tests. Correlations were higher when WR was computed using features from ramp- and recovery-phase data (models M1 through M3) compared to using only flat-phase data (models M4 and M5), although all models were relatively precise. We also compared estimated WR with WR measured at the lactate threshold (Supplemental Table 3) and WR measured at VO2max (Supplemental Table 4).
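The basic estimation idea can be sketched as follows. This is a deliberately simplified, single-level version (the multilevel framework pools HR response features across test phases and protocols), and the age-predicted HRmax formula (220 − age) is a common convention assumed here rather than the one specified in the paper.

```python
# Simplified sketch: fit the linear HR-work-rate relationship, extrapolate
# to age-predicted HRmax, then convert maximal work rate to VO2max with the
# ACSM leg-cycling metabolic equation (VO2 ~= 10.8 * W / mass + 7).
import numpy as np

def estimate_vo2max(work_rate_w, heart_rate_bpm, age_y, body_mass_kg):
    slope, intercept = np.polyfit(work_rate_w, heart_rate_bpm, deg=1)
    hr_max = 220 - age_y                       # assumed age-predicted HRmax
    wr_max = (hr_max - intercept) / slope      # extrapolated maximal WR (watts)
    return 10.8 * wr_max / body_mass_kg + 7.0  # ml O2 / kg / min

wr = np.array([40, 60, 80, 100])   # hypothetical steady-state work rates (W)
hr = np.array([95, 108, 122, 135]) # corresponding heart rates (bpm)
print(f"{estimate_vo2max(wr, hr, age_y=55, body_mass_kg=75):.1f} ml/kg/min")
```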
After confirming that WR computed from the multilevel framework corresponded to WR at RCP, we estimated VO2max by applying the ACSM metabolic equation. We then compared these VO2max estimates with VO2max directly measured from the maximal exercise test. Figure 3 demonstrates VO2max agreement using the most comprehensive estimation model for each test type (M1: ramp tests; M4: flat tests). Agreement for the remaining estimation models is shown in Supplemental Table 5. Estimated VO2max was correlated with measured VO2max (Pearson's r range 0.68-0.74) with no significant mean bias (bias range in women: −0.8 to 0.4 ml O2 kg−1 min−1; in men: −0.3 to 0.3 ml O2 kg−1 min−1), establishing the multilevel framework's absolute validity. We also quantified the proportion of bias emerging from uncertainty when using age-predicted versus measured HRmax in the multilevel framework (Supplemental Table 6); agreement improved only slightly when using measured HRmax. As a sensitivity analysis, we also evaluated agreement between estimated and directly measured VO2max in all participants with usable maximal test data, relaxing the criteria for maximal effort (n = 178). Estimated and directly measured VO2max were not statistically significantly different across all comparisons in this analysis.
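The agreement statistics reported above (Pearson's r, mean bias, root-mean-square error) can be computed as in this sketch, along with Bland-Altman limits of agreement; the two arrays are hypothetical placeholders for estimated and directly measured VO2max.

```python
# Sketch of agreement statistics for estimated vs. directly measured VO2max.
import numpy as np

measured = np.array([28.1, 33.4, 25.6, 40.2, 31.0])   # hypothetical values
estimated = np.array([29.0, 32.1, 26.8, 38.5, 31.9])

r = np.corrcoef(measured, estimated)[0, 1]
diff = estimated - measured
bias = diff.mean()                              # mean bias
rmse = np.sqrt((diff ** 2).mean())              # root-mean-square error
loa = (bias - 1.96 * diff.std(ddof=1),          # Bland-Altman 95% limits
       bias + 1.96 * diff.std(ddof=1))
print(f"r = {r:.2f}, bias = {bias:.2f}, RMSE = {rmse:.2f}, LoA = {loa}")
```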
We next addressed whether the multilevel framework would yield similar CRF estimates from different UKB-CRF test protocols completed by the same validation study participant (i.e., internal validity). Within participants, we compared VO2max estimates from low and high ramp tests across estimation models M1-M3 and M5, as well as between flat tests across M4 and M5 (Supplemental Table 7). VO2max estimates from different UKB-CRF test protocols were highly correlated across estimation models (Pearson's r range: 0.91-0.99). While mean bias was minimal across all comparisons (bias range: −0.6 to 0.0 ml O2 kg−1 min−1), some were statistically significantly different from zero mean bias. We also examined differential bias by protocol ramp rate (from 0 W min−1 for flat tests and 7.5-25 W min−1 for ramp tests), finding mean bias to be minimal across all ramp rates tested (Fig. 3, lower panel).
For method-comparison purposes, we also assessed the absolute and internal validity of a simple linear regression CRF estimation method used previously by our group and a two-point CRF estimation method used by other groups working with UKB data. These latter two methods relate exercise HR response to WR without accounting for protocol ramp rate. Both methods demonstrated overestimation bias and low precision when applied to ramped tests but had low bias when applied to flat tests (Supplemental Fig. 2).
To assess measurement consistency of the UKB-CRF test, we evaluated short-term test-retest reliability in validation study participants and long-term test-retest reliability in UKB participants (Fig. 4). In validation study participants with short-term repeat tests (within 2 weeks, n = 87), estimated VO2max values from the first and second UKB-CRF tests were highly correlated (r = 0.91) with no mean difference. Agreement was nearly as strong (λ = 0.79) in UKB participants with long-term repeat tests (about 2.8 years between tests, n = 2877).
Application of VO2max estimation method in UKB cohort
After establishing the absolute validity, internal validity, and test-retest reliability of the multilevel framework for interpreting UKB-CRF test data, we applied the framework to estimate CRF in UKB (Supplemental Fig. 3 and Supplemental Table 8) and examined prospective associations with health outcomes. In total, 42,351 women and 37,650 men from UKB were considered in this analysis. Baseline participant characteristics are shown by sex-specific and age-adjusted CRF tertiles, across half-decade age groups, in Supplemental Table 9. Estimated VO2max was higher in men compared to women, and in younger versus older adults. Participants in the middle and higher CRF tertiles also had better baseline measures of heart and lung function, lower body weight, and better self-perceived health than participants in the lower tertile.
To examine associations between CRF and prospective health outcomes in UKB, we used Cox proportional hazards regression to estimate hazard ratios per 1-metabolic equivalent difference in CRF (METs; 1 MET = 3.5 ml O2 kg−1 min−1) for fatal and nonfatal events, excluding those experiencing the event in question in the first 2 years of follow-up. In total, 2670 participants died during a median 9.9 years (interquartile range 9.7-10.0) of follow-up (746,377 person-years). After adjustment for potential confounders, each 1-MET difference in CRF was associated with approximately 8% (95%CI 5-11%) lower all-cause mortality (Fig. 5); this equates to a difference in mortality of about 23% between top and bottom CRF tertiles. Associations were stronger for deaths from respiratory disease (RD; 14% lower per 1-MET, 95%CI 8-19%) and similar for deaths from cardiovascular disease (CVD; 9%, 95%CI 4-14%) and cancers (8%, 95%CI 5-12%). Higher CRF was more strongly associated with lower mortality risk in obese compared to non-obese participants (Supplemental Fig. 4).
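A Cox model of this form can be sketched with the lifelines package as below; the column names and simulated data stand in for the UKB variables, and the real analysis adjusted for a fuller set of confounders.

```python
# Sketch: hazard ratio per 1-MET difference in CRF from a Cox model.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "crf_mets": rng.normal(7.5, 1.5, n),      # estimated CRF in METs
    "age": rng.uniform(45, 70, n),            # example covariates
    "sex": rng.integers(0, 2, n),
    "follow_up_y": rng.uniform(2, 10, n),
    "died": rng.random(n) < 0.05,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="follow_up_y", event_col="died")
print(cph.hazard_ratios_["crf_mets"])          # HR per 1-MET difference
```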
For method comparison purposes, associations were also examined for CRF computed using the simple linear regression CRF estimation method. Associations were generally shallower and estimated with less uncertainty using simple linear regression, an analysis which also included fewer participants compared to the analysis using the multilevel framework.
We used cubic spline regression to examine natural variation and potential nonlinearity in dose-response associations between CRF and prospective health outcomes (Fig. 6; obesity-stratified results in Supplemental Fig. 5). For method comparison purposes, separate sets of dose-response curves were modeled for CRF when computed with the multilevel framework and when using the simple linear regression method. For mortality outcomes, CRF was inversely associated with death from all causes, CVD, RD, and cancer in the CRF range of 3-11 METs. Relationships were steeper at the low end of that range and shallower at higher CRF levels. The shape of estimated CRF dose-response relationships varied considerably across incident disease outcomes. Differences between these associations and those observed using the simple linear regression to estimate CRF were most evident at the tails of the distributions. We performed several sensitivity analyses to determine whether health associations with CRF computed from the multilevel framework were altered by restricting the analytical sample or by using different estimation models in the multilevel framework to compute CRF. Compared to the main analysis, health associations with CRF were slightly weaker but estimated dose-response curves were qualitatively similar when the analytic sample was restricted so as to be matched with the sample used when the simple linear regression method was applied (Supplemental Figs. 6 and 7). To examine differences in health associations by estimation models in the multilevel framework, we split the analytic sample into two separate sets of analyses for those who either performed a ramp test (Supplemental Figs. 8 and 9) or a flat test (Supplemental Fig. 10). In the ramp test analysis, associations were generally stronger using more comprehensive estimation models (M1-3) compared to less comprehensive models (M5). Estimated dose-response curves were qualitatively similar for all outcomes. The association with incident AF, however, was a positive monotonic relationship at all estimation models in the ramp test subsample (event rate 217 per 100,000 person-years), rather than the U-shaped relationship found in the main analysis. In the subsample allocated a flat test (Supplemental Fig. 10), the association with incident AF was positive, non-significant, and at a higher event rate (460 per 100,000 person-years). The analysis in the flat test subsample is primarily a comparison of associations for CRF estimated by M4 and M5. Associations were generally similar between the two estimation models but non-significant due to the small sample size. We did not estimate dose-response curves in the flat test subsample for the same reason.
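The nonlinearity assessment could be reproduced by expanding CRF into a spline basis before fitting the Cox model, as sketched below with patsy's natural cubic regression splines; all variable names and data are illustrative assumptions, not the actual UKB modeling code.

```python
# Sketch: cubic-spline dose-response of CRF in a Cox model.
import numpy as np
import pandas as pd
from patsy import dmatrix
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 500
crf = rng.normal(7.5, 1.5, n)

# Natural cubic regression spline basis for CRF with 4 degrees of freedom.
basis = dmatrix("cr(crf, df=4) - 1", {"crf": crf}, return_type="dataframe")
df = basis.assign(follow_up_y=rng.uniform(2, 10, n),
                  died=rng.random(n) < 0.05)

cph = CoxPHFitter()
cph.fit(df, duration_col="follow_up_y", event_col="died")
# The fitted coefficients on the spline basis columns trace a flexible
# (possibly nonlinear) log-hazard curve across the CRF range.
print(cph.params_)
```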
Discussion
We have developed a novel multilevel CRF estimation method based on HR response to short, individualised submaximal exercise tests and applied it in one of the largest and most inclusive population-based studies of CRF. We establish the validity and reliability of this new method in an independent validation study and demonstrate advantages over other methods applied in previous population studies.
The multilevel framework method estimates VO2max by modeling maximal steady-state exercise capacity from HR response to the UKB-CRF test. This approach minimises CRF estimation bias that may be introduced by the UKB-CRF test individualisation process. We demonstrate that for each 1-MET difference in CRF estimated using the multilevel method, all-cause mortality was 8% lower and CVD mortality was 9% lower; associations nearly twice as strong as those estimated when using a method that did not use external data to map between ramped and steady-state exercise 15. Improvements in CRF estimation validity and disease outcome characterisation may have broad implications for future research in UKB.
We have reported on a range of associations between CRF and prospective health outcomes in UKB, demonstrating the protective effects of CRF on all-cause and cause-specific mortality and morbidity. Our findings are largely in agreement with numerous other population-based studies 1-3; however, associations between CRF and non-fatal incidence rates of IHD, stroke, and AF were not significant. In previous work, dose-response associations were J-shaped for stroke 22 and U-shaped for AF 23. Another study with longer follow-up time and more incident events reported inverse relationships between CRF and AF 24. As for cancer mortality, we found an inverse relationship between CRF and all-cancer mortality, in agreement with previous studies 25,26. Additional follow-up time is warranted to investigate CRF associations with site-specific cancers in UKB. Additional follow-up would, however, also increase the probability that participants change their CRF level over time, which would dilute observed associations between a single baseline measure of CRF and health outcomes. The Cox proportional hazards model estimates the difference in hazards of the outcome in question by baseline exposure level, and if this exposure level is very stable over time, the estimated association will be a more accurate reflection of the importance of that exposure than if it were more variable. We show in this work that CRF has a long-term reliability of around 0.80, indicating good stability. However, if we were to formally apply correction for this element of instability using regression dilution bias methods, the point estimates of hazard ratios would be about 20% stronger than what we report here, but with wider confidence intervals.
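To make the regression dilution point concrete, the snippet below applies the classical correction, in which the observed log-hazard is divided by the exposure's long-term reliability. This is a minimal illustration, not the authors' exact procedure; the hazard ratio used is the per-MET all-cause estimate quoted above, and the exact size of the correction depends on the method applied.

```python
import math

# Illustrative only: classical regression-dilution correction of a hazard ratio.
# The observed log-hazard is attenuated by the long-term reliability (lambda)
# of the exposure; dividing by lambda recovers the "usual-level" association.
observed_hr = 0.92   # e.g., 8% lower all-cause mortality per 1 MET (from this study)
reliability = 0.80   # long-term test-retest reliability of CRF reported above

corrected_hr = math.exp(math.log(observed_hr) / reliability)

print(f"Observed HR per MET:  {observed_hr:.3f}")
print(f"Corrected HR per MET: {corrected_hr:.3f}")  # ~0.90, roughly 20-25% stronger on the log scale
```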
We have developed a novel multilevel estimation framework that optimises the validity of VO 2 max estimates from the UKB-CRF test. The key strengths of the framework are that it: (1) uses HR response features across all test phases to infer the relationship between HR and exercise intensity; (2) is flexible to the availability or absence of HR response features due to data quality issues; and (3) harmonises the inferential modeling of HR response features to the within-person invariant relationship between steady-state HR and exercise intensity. For these reasons, predicted CRF values and health associations we have presented diverge from previous reports of CRF in UKB. In a previous analysis 15, we estimated CRF by using simple linear regression to establish the relationship between ramp test HR response and WR. This approach is currently the most common in the field but does not account for the protocol individualisation process of ramp rates used in UKB. Using external validation data, we show that this approach overestimates VO 2 max differentially by test protocol, thus limiting the ability to validly compare VO 2 max estimates from different UKB-CRF tests. Lacking protocol comparison data, previous work could only partially address the impact of this by a meta-analytic approach. Our new approach resolves this issue at the individual level and demonstrates stronger associations with all-cause and cause-specific mortality and morbidity compared to this previous non-validated method. We can have more confidence in these associations because the validity of the exposure is now documented. Other approaches also use simple linear regression, but establish the relationship between HR and exercise capacity by relating resting HR to only a single measurement of HR during the test 16,17,19-21,27-30. HR measurement noise will greatly decrease the precision of this approach, and the resulting CRF estimates are still subject to bias that will differ by protocol ramp rate. Another reported approach 18,31-33 is to use the maximally achieved WR to infer CRF. As most participants completed their test, this approach merely reflects the protocol that participants were assigned according to age, sex, body size, resting HR, and exertional chest risk, with the latter feature most indicative of test risk-stratification. While prospective associations of such an exposure measure with CVD endpoints do, to a degree, validate the stratification of risk, it is not possible to interpret these results solely as associations with CRF. At best, it is a composite score of exercise capacity and cardiac risk.
Our CRF estimation approach may also have implications for exercise prescription in clinical environments. CRF testing in UKB has demonstrated that it is safe to obtain valid VO 2 max estimates in a population setting while including some individuals with contraindications to exercise. In practice, such individuals would be prescribed a less strenuous exercise test that could contain less information about their physiological state compared to a more strenuous test. The multilevel framework approach we describe in this work for interpreting such test results yields unbiased estimates of CRF, addressing a well-recognised limitation of exercise testing 34. Future research is warranted to investigate whether the multilevel framework can be generalised to other ramp-style exercise tests.
The strengths of our study include independent validation work for our VO 2 max estimation approach prior to estimating dose-response relationships with disease outcomes in the UK Biobank. There are also some limitations. The exercise capacity of validation study participants was slightly higher than the average capacity of UKB participants. Furthermore, the comparatively relaxed testing conditions in the validation study may not directly match those in UKB, where testing was conducted in large testing centres and where a variety of additional exposures were examined under stringent time constraints. We also did not directly evaluate the validity of UKB-CRF test protocols with ramp rates at 2.5 and 5.0 W min −1. Agreement for ramp rates above and below these untested rates was unbiased, however. Finally, we examined non-fatal health outcomes using only hospitalisation data; this does not necessarily capture all disease events in a given category.
Conclusions
We have demonstrated the absolute validity, internal validity, and test-retest reliability of a novel VO 2 max estimation method for individualised ramped exercise tests that can be safely and efficiently applied in population studies. Our analytic approach uses a generalised multilevel modeling framework that bridges the gap between steady-state and ramped incremental exercise, addressing a persistent problem in exercise physiology and prescription. CRF estimated in this way is more strongly associated with mortality and other disease endpoints than previous methodology, strengthening the case for promoting CRF in the general population.
UKB-CRF test description
The UKB-CRF test protocol design and individualisation process are described in detail by the most recent test manual 35. Briefly, participants were categorised into separate risk levels according to questions adapted from the Rose-Angina questionnaire. Participants with "minimal" and "small" risk completed an individualised ramp test, those with "medium" risk completed a flat test, and those with "high" risk did not complete an exercise test. Ramped tests began with a 2-min flat-phase at a single WR (30 W for females, 40 W for males) followed by a 4-min ramp-phase where WR increased continuously to a pre-specified target WR. The target WR was calculated as a risk-adjusted percentage (50% for those with "minimal" risk, 35% for "small" risk) of the maximal WR predicted from an equation derived from maximal exercise (cycle ergometer) testing data collected in the Danish Health Examination Survey 2007-2008 36. The computed value for target WR was combined with participant sex ("F" for female, "M" for male) to notate different exercise protocols. For example, a male participant with "minimal" risk and predicted WR at VO 2 max of ~ 200 W would have a target work rate of 100 W and be individualised to UKB protocol "M100". Flat tests consisted of a single 6-min flat-phase. Participants cycled at a 60-rpm cadence while WR and heart rate (HR) were monitored. All tests ended with a 1-min recovery-phase where participants sat quietly and motionless on the ergometer. No adverse events were reported when this test was applied in UKB.
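The individualisation logic above can be summarised in a few lines. The sketch below is a hypothetical helper, not UKB code: the prediction equation for maximal WR from the Danish Health Examination Survey is not reproduced here, so `predicted_wr_max` is taken as an input.

```python
def target_work_rate(predicted_wr_max: float, risk: str) -> float:
    """UKB-CRF ramp-test target WR as a risk-adjusted percentage of predicted
    maximal WR (50% for 'minimal' risk, 35% for 'small' risk)."""
    fraction = {"minimal": 0.50, "small": 0.35}[risk]
    return predicted_wr_max * fraction

def protocol_label(sex: str, predicted_wr_max: float, risk: str) -> str:
    """Combine sex ('F'/'M') with the target WR to notate the protocol, e.g. 'M100'."""
    return f"{sex}{round(target_work_rate(predicted_wr_max, risk))}"

# Example from the text: male, 'minimal' risk, predicted WRmax ~200 W -> 'M100'
print(protocol_label("M", 200, "minimal"))
```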
Validation study participants
We recruited a subsample of participants from the Fenland study, a population-based study in Cambridgeshire, UK 37, using an age-, sex- and BMI-stratified random sampling procedure (Supplemental Table 10). Exclusion criteria were: heart pacemaker; unable to walk without aid; history of angina pectoris; blood pressure greater than 180/110 mm Hg; musculoskeletal injury that would impair cycling on the ergometer; pregnancy; and currently taking cardioactive drugs (e.g. beta-blockers, aspirin). Ethical approval was obtained by the University of Cambridge Human Biology Research Ethics Committee (Ref: HBREC/2015.16). All participants provided written informed consent.
Experimental procedure and equipment
Validation study participants were screened according to standardised procedures used for the UKB-CRF test 35. Then, participants completed the UKB flat test, two UKB ramped tests at different ramp rates, a steady-state test (unique to the validation study), and another ramped test (validation only) to elicit VO 2 max (Fig. 1A). Tests were conducted consecutively, separated by at least 15 min of rest, and were specified according to the test that the participant would have been assigned had they been part of UKB (see Supplemental Table 11). The target (highest) WR for the second ramped test was at least 30 W greater than the first; thus, each participant completed a "low" and "high" ramped UKB test. The steady-state test consisted of four incremental 4-min flat-phases with each WR increment ranging from 10 to 20 W. For the ramped max test, participants were fitted with a face mask to measure respiratory ventilation and gas exchange and cycled while WR increased until exhaustion. VO 2 max was considered reached if two of the following criteria were met: a respiratory exchange ratio exceeding 1.20; no VO 2 increase despite increasing WR (< 2.5 ml O 2 kg −1 min −2); and no HR increase despite increasing WR. During data analysis, the levelling-off criterion was confirmed by inspecting whether the first differential of HR and VO 2 data approached zero over the last 1-min period of the maximal test. VO 2 max was expressed as the average of the two highest VO 2 measurements in the last forty-five seconds of the maximal exercise test. WR values were measured at VO 2 max (i.e. maximal work rate achieved on the test, WRmax), at the lactate threshold (LT), and at the respiratory compensation point (RCP). The work rate at LT was measured at the point when both ventilatory equivalent of oxygen (V E /VO 2) and end-tidal pressure of oxygen (P ET O 2) increased with no increase in ventilatory equivalent of carbon dioxide (V E /VCO 2). The work rate at RCP was measured at the point when both V E /VO 2 and V E /VCO 2 increased and end-tidal pressure of carbon dioxide (P ET CO 2) decreased (see Supplemental Fig. 11) 38. Directly measured WR at LT and RCP were determined visually by three independent investigators, blinded to all other measures except the variables above needed for making direct measurements. The median value among investigators was considered the final value.
Cycling was performed on an electromagnetically-braked stationary bike (eBike ergometer, GE) while electrocardiography (ECG) was recorded using a 4-lead ECG (Cardiosoft) on the forearms and an Actiwave Cardio device (CamNtech, Papworth, UK) on the chest with a sampling frequency of 128 Hz. The 4-lead ECG leads were placed on the cubital fossa and ventral wrist of the left and right arms (mimicking the UKB protocol). Cycling WR was controlled by computer software. Respiratory gas measurements were conducted using a computerised metabolic system with Hans Rudolph face masks (Oxycon Pro, Erich Jaeger GmbH, Hoechberg, Germany) as validated elsewhere 39.
All ECG signals were processed using the PhysioNet Toolkit implementation of the SQRS algorithm 40, which applies a digital filter to the measured ECG and identifies the downward slopes of the QRS complexes 41. The resulting inter-beat intervals were converted to beats-per-minute values using the "ihr" package in the PhysioNet Toolkit, as described previously 15. Pulmonary gas exchange data were sampled breath-by-breath. All data were linearly interpolated to derive quasi-continuous HR response and respiratory measures at 1 s time resolution. Sections of linearly interpolated HR data greater than 1 s in duration were removed prior to analysis.

CRF estimation framework: conceptual and modeling framework

Our approach for estimating VO 2 max from UKB-CRF test HR response is illustrated in Fig. 1B-E. Here we first describe a VO 2 max estimation method for HR response to steady-state exercise. We then adapt this method to the UKB-CRF test by using a multilevel hierarchical framework of linear models to harmonise HR response features extracted from flat and ramped UKB-CRF tests to those extracted from steady-state exercise.
Conceptual framework. VO 2 max can be estimated from HR response to exercise at steady-state WR increments using linear extrapolation of the submaximal HR-to-WR relationship 42. For this approach, an individual exercises at two or more submaximal WR increments while HR is recorded. The steady-state HR response at each test increment is then regressed against WR to establish a line-of-best-fit for the observed HR-to-WR relationship (W bpm −1). This relationship can be represented as:

WR t = β 0 ss + β 1 ss · HR t (1)

where WR t and HR t are paired measurements at several test increments, β 1 ss is the linear regression slope representing the steady-state HR-to-WR relationship, and β 0 ss is the intercept of that regression. The regression line is extrapolated to age-predicted maximal HR (HRmax) 43 to estimate the maximal steady-state WR that would be achieved if the exercise test was completed to exhaustion. VO 2 max is then estimated by converting the extrapolated maximal steady-state WR value to net VO 2 using a caloric equivalent of oxygen and adding an estimate of resting VO 2 plus the VO 2 required for unloaded cycling 44.
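As a concrete illustration of this extrapolation step, the sketch below fits Eq. 1 to a handful of submaximal steady-state increments and extrapolates to age-predicted HRmax. The HRmax equation used (208 − 0.7 × age) is a common default assumed here for illustration, not necessarily the formula of ref. 43; the conversion of the resulting WR to VO 2 max is shown after 'Application of multilevel framework' below.

```python
import numpy as np

def maximal_steady_state_wr(wr, hr, age):
    """Fit the steady-state WR-to-HR line (Eq. 1) and extrapolate it to
    age-predicted HRmax (Tanaka-style 208 - 0.7*age is assumed, not ref. 43)."""
    beta1, beta0 = np.polyfit(hr, wr, 1)   # WR_t = beta0_ss + beta1_ss * HR_t
    hr_max = 208 - 0.7 * age
    return beta0 + beta1 * hr_max          # maximal steady-state WR (watts)

# Example: three submaximal steady-state increments -> extrapolated maximal WR
print(maximal_steady_state_wr(wr=[50, 75, 100], hr=[100, 115, 130], age=50))
```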
The HR-to-WR linear extrapolation approach presents challenges when applied to ramped exercise HR response. Assuming HR and VO 2 responses are linearly related, the principal methodological issues are 45-47: (1) within-participant, the VO 2 -to-WR relationship and total time delay for the VO 2 response to achieve linearity after ramped exercise onset will vary across ramped tests as a function of ramp rate; (2) the ramped VO 2 -to-WR relationship decreases asymptotically with ramp rate and, as ramp rate approaches zero, becomes similar to values determined from steady-state exercise; (3) the VO 2 -to-WR relationship has high test-retest variability; and (4) the VO 2 -to-WR relationship diverges from linearity above RCP. Thus, the HR-to-WR linear extrapolation approach will induce VO 2 max overestimation bias as a function of ramp rate, demonstrate low test-retest reliability, and have poor precision if the WR computed at age-predicted HRmax is greater than the WR at RCP.

Modeling framework. We addressed these methodological issues by constructing a multilevel CRF estimation framework that computes a participant's steady-state HR-to-WR relationship using features extracted from HR response to flat or ramp UKB-CRF test protocols. The framework was derived using a three-stage hierarchical linear model. The first stage equates WR computed from steady-state test HR response (Eq. 1) with WR computed from dynamic regression coefficients that vary between and within individual participants as a function of lower-stage hierarchical features. Within every ith individual participant, each having completed a set of p exercise protocols:

Stage-1 (base-stage equating steady-state test HR response with UKB-CRF flat and ramped HR response):

WR tp[ss]i = β 0 p[UKB]i + β 1 p[UKB]i · HR tp[ss]i

where (1) β 0 p[ss]i and β 1 p[ss]i are linear regression coefficients estimated from the steady-state protocol (p[ss]); (2) HR tp[ss]i is a sequence of t simulated steady-state HR values, equally spaced and spanning the submaximal intensity range; (3) WR tp[ss]i is a sequence of t steady-state WR values computed with β 0 p[ss]i, β 1 p[ss]i, and HR tp[ss]i (thus, a matrix representation of the line defined by Eq. 1); and (4) β 0 p[UKB]i and β 1 p[UKB]i are dynamic regression coefficients that, while unique to each UKB protocol (p[UKB]) and individual, converge to the values of β 0 p[ss]i and β 1 p[ss]i by their linkage with WR tp[ss]i. β 0 p[UKB]i and β 1 p[UKB]i are estimated at the second stage using combinations of HR-response and protocol-based features:

Stage-2 (HR-response and protocol features extracted from flat and ramped UKB-CRF tests):

β 0 p[UKB]i = γ 00 i + Σ x γ 0x i · P x p[UKB]i
β 1 p[UKB]i = γ 10 i + Σ x γ 1x i · P x p[UKB]i

where (1) γ 0x i and γ 1x i are sets of a fixed regression coefficients for HR-response and protocol-level features P x p[UKB]i; and (2) γ 00 i and γ 10 i are the mean intercept and slope for the ith individual participant. γ 00 i and γ 10 i are estimated at the third stage:

Stage-3 (pretest participant characteristics):

γ 00 i = δ 000 + Σ x δ 00x · I x i
γ 10 i = δ 100 + Σ x δ 10x · I x i

where (1) δ 00x and δ 10x are sets of b fixed regression coefficients for participant characteristics I x i; and (2) δ 000 and δ 100 are the model-invariant intercept and slope.

β 0 p[UKB]i and β 1 p[UKB]i can thus be estimated using different sets of HR-response and protocol features (P x p[UKB]i) as well as different sets of participant characteristics (I x i). We leveraged this adaptability to derive five WR estimation models (notated as M1-M5; Supplemental Table 1), each using different combinations of HR response feature sets, so that our approach was robust to different data quality scenarios encountered when analysing HR response data in UKB. Additional details regarding the extraction of feature sets included in P x p[UKB]i and I x i are provided in Supplemental Methods.
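The three-stage structure can be emulated, very coarsely, with an off-the-shelf mixed model in which protocol-level features shift the WR-to-HR intercept and slope and participant-level random effects absorb the remaining hierarchy. The sketch below (Python/statsmodels on simulated data) illustrates that idea only; feature names such as `ramp_rate` and `resting_hr` are stand-ins for the P x and I x sets, and this is not the authors' estimation code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated long-format data: each participant completes several protocols;
# WR depends on HR, with the relationship shifted by a protocol feature.
rng = np.random.default_rng(0)
rows = []
for pid in range(60):
    rest = rng.normal(65, 8)                       # participant characteristic (I_x stand-in)
    for ramp in (15, 25, 35):                      # protocol feature (P_x stand-in), W/min
        hr = np.linspace(rest + 20, rest + 70, 10)
        wr = -120 + 1.7 * hr + 0.4 * ramp + rng.normal(0, 5, hr.size)
        rows += [(pid, rest, ramp, h, w) for h, w in zip(hr, wr)]
df = pd.DataFrame(rows, columns=["pid", "resting_hr", "ramp_rate", "hr", "wr"])

# Intercept and HR slope shifted by features; random intercept + HR slope per person.
fit = smf.mixedlm(
    "wr ~ hr + hr:ramp_rate + hr:resting_hr + ramp_rate + resting_hr",
    data=df, groups=df["pid"], re_formula="~hr",
).fit()
print(fit.params.head())
```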
Application of multilevel framework

VO 2 max was estimated from the multilevel framework by extrapolating the linear fit defined by β 0 p[UKB]i and β 1 p[UKB]i to age-predicted HRmax 43 and converting the resultant maximal steady-state WR value to VO 2 max using the American College of Sports Medicine metabolic equation for cycling 9. We also estimated WR and VO 2 max values using a simple linear regression approach 15, a two-point estimation method 19, and an approach for steady-state tests 15 (see 'Prediction of VO2max using alternative methods' in Supplemental Materials).
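For reference, the ACSM leg-cycling metabolic equation referred to above has the familiar textbook form VO 2 = 1.8 × WR/mass + 7, with WR in kg·m·min −1 (1 W ≈ 6.12 kg·m·min −1). The snippet below applies it and converts to METs; treat the constants as standard ACSM values rather than a quotation of ref. 9.

```python
def vo2max_acsm_cycling(wr_max_watts: float, body_mass_kg: float) -> float:
    """ACSM-style leg-cycling metabolic equation:
    VO2 = 1.8 * WR / mass + 3.5 (rest) + 3.5 (unloaded cycling),
    with WR converted from W to kg.m.min-1 (1 W ~= 6.12 kg.m.min-1)."""
    return 1.8 * (wr_max_watts * 6.12) / body_mass_kg + 7.0

def to_mets(vo2_ml_kg_min: float) -> float:
    """Express VO2max in METs (1 MET = 3.5 ml O2 kg-1 min-1)."""
    return vo2_ml_kg_min / 3.5

vo2 = vo2max_acsm_cycling(wr_max_watts=170, body_mass_kg=80)
print(f"{vo2:.1f} ml/kg/min = {to_mets(vo2):.1f} METs")
```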
Agreement analyses
We used Bland-Altman analysis 48 to quantify agreement between estimated WR and VO 2 max values and those directly measured during the maximal exercise test. Correlations between estimated and directly measured values were quantified using Pearson's r and Spearman's rho. One-sample t-tests were performed to determine whether mean biases were statistically significantly different from zero. Estimation model precision was expressed as the root mean square error (RMSE) between estimated and directly measured values. Repeated-measures ANOVA was used to test differences between estimated and directly measured values across estimation models.
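The agreement statistics listed above are straightforward to compute; a minimal sketch (Python/SciPy, illustrative data) follows. The 1.96 × SD limits of agreement are the standard Bland-Altman convention, assumed here since the text does not state them explicitly.

```python
import numpy as np
from scipy import stats

def bland_altman(estimated, measured):
    """Agreement statistics between estimated and directly measured values:
    mean bias, 95% limits of agreement, correlations, RMSE, and a one-sample
    t-test of the bias against zero."""
    est, meas = np.asarray(estimated), np.asarray(measured)
    diff = est - meas
    bias = diff.mean()
    loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
    r, _ = stats.pearsonr(est, meas)
    rho, _ = stats.spearmanr(est, meas)
    rmse = np.sqrt(np.mean(diff ** 2))
    _, p = stats.ttest_1samp(diff, 0.0)
    return {"bias": bias, "loa": loa, "r": r, "rho": rho, "rmse": rmse, "p_bias": p}

print(bland_altman([31.2, 28.5, 40.1, 35.0], [30.0, 29.9, 38.5, 36.2]))
```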
Short-term test-retest reliability
To assess short-term test-retest reliability, a subsample of 87 validation study participants completed a second UKB-CRF test within 2 weeks after main testing, identical to either the low or high ramped test at the main visit. Estimated VO 2 max values from first and second tests were compared using agreement analysis.
UKB participants
The UKB is a prospective cohort study of 502,625 older adults. Baseline data collection was conducted between 2006 and 2010, when a variety of physical measurements, biological samples, and health questionnaires were administered; repeat-measures visits were conducted between 2012 and 2013. The UKB-CRF test was offered approximately 100,000 times (to the last 79,209 participants at baseline and 20,218 at the repeat-measures visit).
The study was approved by the North West Multicentre Research Ethics Committee and participants provided written informed consent.
Implementation of CRF estimation in UKB
VO 2 max values were estimated in UKB participants, largely as described above for the validation study. Supplemental Fig. 12 describes specific criteria used to assign the multilevel framework estimation models for the primary analysis. Age-predicted HRmax was reduced by 20 beats-per-minute in those taking beta-blockers 49.
Long-term test-retest reliability of CRF
To assess long-term test-retest reliability, we compared estimated VO 2 max values at baseline and the first follow-up test in those UKB participants with repeat tests (n = 2877, mean follow-up time 2.8 years). The follow-up UKB-CRF test protocol was re-individualised at the time of testing and therefore may have differed from the baseline protocol.
Health characteristics across CRF levels in UKB

Health characteristics were described across age-adjusted and sex-specific CRF categories 50. We age-stratified the UKB cohort in half-decades as < 50, 50-54, 55-59, 60-64, and ≥ 65 years, defined CRF categories by tertiles ("lower", "middle", and "higher") of estimated VO 2 max levels from each age group, and combined CRF categories from each age group to form CRF categories for the entire UKB cohort. Health characteristics were compared across CRF tertiles for men and women separately.
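A sketch of this stratified categorisation in pandas is shown below; data are simulated and column names are assumptions, but the logic (tertiles within sex-by-age-group strata, then pooled) follows the description above.

```python
import numpy as np
import pandas as pd

# Illustrative sketch of age-adjusted, sex-specific CRF categorisation.
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "age": rng.integers(40, 75, 5000),
    "sex": rng.choice(["F", "M"], 5000),
    "vo2max": rng.normal(30, 6, 5000),
})

age_labels = ["<50", "50-54", "55-59", "60-64", ">=65"]
df["age_group"] = pd.cut(df["age"], bins=[0, 50, 55, 60, 65, 200],
                         right=False, labels=age_labels)

# Tertiles of estimated VO2max within each sex-by-age-group stratum,
# then pooled across strata to form whole-cohort CRF categories.
df["crf_category"] = (
    df.groupby(["sex", "age_group"], observed=True)["vo2max"]
      .transform(lambda x: pd.qcut(x, 3, labels=["lower", "middle", "higher"]))
)
print(df.groupby(["sex", "crf_category"], observed=True)["vo2max"].mean())
```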
Survival analyses
Cox regression with age as the underlying timescale was used to estimate log-linear associations between estimated VO 2 max levels (in METs; 1 MET = 3.5 ml O 2 kg −1 min −1) and mortality and incident disease outcomes. We compared prospective associations between two VO 2 max estimation approaches: the multilevel framework developed in this study and the previously described method using simple linear regression. Vital status and hospital episodes of UKB participants were established by linkage to national registry data obtained from the Health and Social Care Information Centre (now NHS Digital) for England and Wales and the Information Services Department (ISD) for Scotland. The censoring date for mortality outcomes was 31st March 2020. Censoring dates for incident disease outcomes were 31st January 2018 in England and Wales, and 30th November 2016 in Scotland. International Classification of Diseases (ICD) 10th edition (ICD-10) codes were used to define health outcomes: heart failure (I50, I11.0, I13.0, I13.2), stroke (I60-I66), ischaemic heart disease (IHD; I20-I25), atrial fibrillation (AF; I48), and chronic obstructive pulmonary disease (COPD; J44). Fatal outcomes were all-cause mortality and cause-specific deaths from CVD, respiratory disease, and cancer.
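Age-as-timescale Cox models treat age at study entry as a left-truncation time. A minimal sketch with the lifelines library is below; the data are simulated and the column names are illustrative, not UKB field names.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Participants enter the risk set at their age at the CRF test and exit at
# their age at event/censoring, implemented via left truncation (entry_col).
rng = np.random.default_rng(1)
n = 2000
entry = rng.uniform(45, 70, n)                   # age at CRF test
mets = rng.normal(8, 2, n)                       # CRF in METs
follow = rng.exponential(10, n).clip(0.1, 14)    # years of follow-up
event = rng.random(n) < 0.1 * np.exp(-0.08 * (mets - 8))  # fitter -> fewer events
df = pd.DataFrame({
    "entry_age": entry,
    "exit_age": entry + follow,
    "event": event.astype(int),
    "crf_mets": mets,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="exit_age", event_col="event", entry_col="entry_age")
cph.print_summary()  # hazard ratio per 1-MET difference in CRF
```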
Figure 1. Conceptual framework and design for validation study. (A) Overview of the five exercise tests performed by validation study participants (3 UKB-CRF tests (flat protocol, low ramp protocol, high ramp protocol), 1 steady-state test unique to the validation study, and 1 maximal exercise test to measure VO 2 max). X-axes: time; Y-axes: work rate (WR). Tests were completed consecutively, and work rates were individualised according to standardised criteria (see 'Experimental procedure and equipment' in Methods). UKB-CRF test and steady-state test data were used for method development. Maximal exercise test data were withheld from method development and used for validation purposes only. (B) Conceptual plot of WR-to-VO 2 response during steady-state and ramped exercise tests. VO 2 increases linearly at a rate proportional to the rate of change in WR (i.e. ramp rate) until VO 2 max is reached (in an exhaustive test). The WR-to-VO 2 relationship (line slope) changes depending on the ramp rate of the test. As ramp rate decreases, the WR when VO 2 max is achieved approaches the maximal WR for an exhaustive steady-state test. Note that VO 2 is extrapolated to maximal values for demonstrative purposes, but in the validation study ramped and steady-state tests were non-exhaustive. (C) Exemplar HR data (blue scatter and grey line; upper panel), WR data (red line; lower panel), and test phase annotation for ramp test. (D,E) Feature extraction for ramp phase using simple linear regression model and for recovery phase using first-order exponential decay model (see Supplemental Methods).
Figure 2. Scatterplots (top row) and Bland-Altman plots (bottom row) demonstrating agreement between work rates measured at the respiratory compensation point (RCP) and work rates estimated from flat tests (left column), low ramp tests (middle column), and high ramp tests (right column) using the most comprehensive prediction equation from the multilevel framework (M1 for ramp tests; M4 for flat test). r: Pearson's correlation coefficient; rho: Spearman's rank correlation coefficient; RMSE: root-mean-square error.
Figure 3. Scatterplots (top row) and Bland-Altman plots (second row) demonstrating agreement between directly measured VO 2 max and VO 2 max estimated from flat tests (left column), low ramp tests (middle column), and high ramp tests (right column) using the most comprehensive equation from the multilevel framework (M1 for ramp tests; M4 for flat test). Below these (bottom row), the box plot demonstrates agreement across all ramp rates tested using estimates from the multilevel framework, the simple linear regression method, and the two-point estimation method. r: Pearson's correlation coefficient; rho: Spearman's rank correlation coefficient; RMSE: root-mean-square error.
Figure 5. Hazard ratio and 95% confidence interval (CI) for prospective log-linear associations (Cox regression) between fatal and non-fatal outcomes in the UK Biobank and cardiorespiratory fitness in metabolic equivalents (METs, per 3.5 ml O 2 kg −1 min −1) computed from the multilevel framework and simple linear regression methods. Event rate per 100,000 person-years. AF: atrial fibrillation; COPD: chronic obstructive pulmonary disease; CVD: cardiovascular disease; IHD: ischaemic heart disease; RD: respiratory disease. COPD incidence mostly reflects severe COPD since only ~ 25% of cases end up in hospital. Mortality and incidence event rates differ between fitness prediction methods owing to different inclusion criteria at the estimation level.
Figure 6. Hazard ratio and 95% confidence interval (CI) for nonlinear associations (cubic splines, Cox regression) between fatal and non-fatal outcomes in the UK Biobank and cardiorespiratory fitness in metabolic equivalents (METs, per 3.5 ml O 2 kg −1 min −1) computed from the multilevel framework and simple linear regression methods. Hazard ratios were computed relative to a fitness reference point of 8.0 METs. AF: atrial fibrillation; COPD: chronic obstructive pulmonary disease; CVD: cardiovascular disease; IHD: ischaemic heart disease; RD: respiratory disease. Mortality and incidence counts (superimposed histograms) differ between fitness estimation methods owing to different inclusion criteria at the estimation level.
"year": 2021,
"sha1": "a8ad47286f439027609ff7217f0644c855e888c2",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-021-94768-3.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7f973c982fdda0c369bab5f97ce8bdcfbc89b05a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Estimating Shifts in Phenology and Habitat Use of Cobia in Chesapeake Bay Under Climate Change
Cobia (Rachycentron canadum) is a large coastal pelagic fish species that represents an important fishery in many coastal Atlantic states of the U.S. They are heavily fished in Virginia when they migrate into Chesapeake Bay during the summer to spawn and feed. These coastal habitats have been subjected to warming and increased hypoxia which in turn could impact the timing of migration and the habitat suitability of Chesapeake Bay. With conditions expected to worsen, we project current and future habitat suitability of Chesapeake Bay for cobia and predict changes in their arrival and departure times as conditions shift. To do this we developed a depth integrated habitat model from archival tagging and physiology data from cobia that used Chesapeake Bay, and applied the model to contemporary and future temperature and oxygen output from a coupled hydrodynamic-biogeochemical model of Chesapeake Bay. We found that estimated arrival occurs earlier and estimated departure time occurs later when temperatures are warmer and that by mid- and end-of-century cobia may spend on average up to 30 and 65 more days, respectively, in Chesapeake Bay. By mid-century we do not expect habitat suitability to change substantially for cobia, but by end-of-century we project it will significantly decline and shift closer to the mouth of Chesapeake Bay. Our study provides evidence that cobia will have the capacity to withstand near term impacts of climate change, but that their migration phenology varies from year to year with changing temperatures. These findings emphasize the need to incorporate the relationship between fishes and their environment into how fisheries are managed. This information can also help guide managers when deciding the timing and allocation of a fishery.
INTRODUCTION
Cobia (Rachycentron canadum) is a large coastal pelagic fish species that uses waters along the mid- and south-Atlantic regions of the U.S. east coast throughout the year. Along the east coast of the U.S., cobia migrate into bays and estuaries, such as Chesapeake Bay, in late spring/early summer to spawn and feed (Joseph et al., 1964; Smith, 1995; Perkinson et al., 2019). They remain in these habitats until late summer/early fall when they migrate primarily offshore to the shelf waters ranging from North Carolina to Florida (Crear et al., 2020b). The exact timing of both inshore and offshore migrations fluctuates each year and is thought to be driven by temperature cues (Smith, 1995; Lefebvre and Denson, 2012). Anecdotal evidence from fishermen suggests that cobia have been entering Chesapeake Bay earlier in recent years, consistent with habitat suitability models suggesting that future climate warming will result in arrival into inshore habitats, like Chesapeake Bay, earlier in the spring (Crear et al., 2020b).
Cobia support a valuable recreational fishery on the U.S. east coast from Florida to Virginia. Estimated cobia landings from the recreational fishery occur primarily in Virginia or North Carolina state waters (SEDAR, 2020). With an average of approximately 225,000 cobia trips occurring annually in Virginia alone, valued between $488-$685 per trip (Scheld et al., 2020), the cobia fishery is extremely important for coastal states like Virginia. In recent years, estimated landings exceeded the Atlantic cobia allowable catch limits, which led the National Marine Fisheries Service (NMFS) to close the fishery in federal waters (NCDENR, 2016;NMFS, 2017). Despite the closure in federal waters, the cobia fishery remained open in state waters (within 3 nautical miles of the coast) because of the importance of the cobia fishery to many coastal states.
Warming within these ecologically and economically important inshore habitats has been occurring and is expected to intensify in the future with climate change (Najjar et al., 2010). As a result of atmospheric warming we expect to see an approximately 2 °C increase by mid-century and a 5 °C increase by end-of-century in Chesapeake Bay inferred from Saba et al. (2016) and Muhling et al. (2018).
Being adjacent to human populations, coastal habitats like Chesapeake Bay are often impacted by anthropogenic inputs (Brown et al., 2018). Specifically, anthropogenic nutrient inputs combined with warming waters have led to an increase in the extent and severity of hypoxic regions within Chesapeake Bay (Hagy et al., 2004; Rabalais et al., 2009; Najjar et al., 2010). We expect that as climate change continues these impacts will be exacerbated. Irby et al. (2018) project that the largest increase in cumulative hypoxic volume in Chesapeake Bay will occur between oxygen concentrations of 2-5 mg l −1. With an increase of 2 and 5 °C and corresponding changes in solubility, phytoplankton growth rates, and organic matter remineralization, Chesapeake Bay is expected to see estimated reductions in dissolved oxygen of 0.5 and 1.5 mg l −1 by mid-century and end-of-century, respectively (Irby et al., 2018). These environmental changes may impact the suitability of Chesapeake Bay for cobia and could affect their arrival and departure time, a trend that has been seen in other migratory species (Sims et al., 2004; Jansen and Gislason, 2011).
The relationship between fish physiology and the environment is one way to understand the impacts of climate change on fish. A recent physiology study found that cobia are able to withstand temperatures as warm as 32 °C; however, when exercised to exhaustion in these conditions, 30% of individuals suffered mortality (Crear et al., 2020a). Furthermore, this study showed cobia had a very high hypoxia tolerance, where individuals could tolerate oxygen levels as low as 1.7-2.4 mg l −1 at temperatures between 24 and 32 °C (Crear et al., 2020a). Based on these results, it appears cobia are more hypoxia tolerant than many active predatory species and therefore might be less impacted by future decreases in dissolved oxygen concentration.
Habitat modeling has been used to assess the impacts of climate change on a number of marine species (Pinsky et al., 2013;Muhling et al., 2016;Kleisner et al., 2017;Morley et al., 2018;McHenry et al., 2019;Crear et al., 2020c). These studies have been used to identify both habitat reductions and range shifts. Although a recent study assessed climate impacts on cobia distribution along the U.S. east coast (Crear et al., 2020b), the spatial resolution of the analysis was too coarse to assess the changes in the habitat quality of Chesapeake Bay.
To predict future changes in phenology and habitat suitability for cobia within the Chesapeake Bay, we developed a habitat model parameterized with our physiology data (Crear et al., 2020a) and archival tagging data. This model was used to project the current arrival and departure times of cobia into Chesapeake Bay and the changes to this phenology in the future under climate change. In addition, our model was used to project changes in habitat suitability in Chesapeake Bay as a function of temperature and oxygen concentration.
Tagging
Cobia were caught on rod and reel using typical recreational methods in Chesapeake Bay during the 2017-2018 summer months. Cobia were placed upside down in a V-board, and a hose with water pumping through it was inserted into the mouth. Cobia were measured and tagged by making a 2 cm incision in the abdominal wall, and inserting two tags. The first tag was an acoustic transmitter (V16-4L/4H coded transmitter, 16 mm diameter x 68 mm long, pulse interval 30-120 s, estimated battery life 1,613-3,650 days, 152-158 dB, 24 g in air, Vemco Inc., herein referred to as an "acoustic tag"). The second tag was a data storage tag (G5 data storage tag, 8 mm diameter x 31 mm long, 2.7 g in air, Cefas Technology Limited, herein referred to as a "data logger"), which was programmed to record temperature every 20 min and depth every 1 min for 2 years. A conventional tag was fixed to the data logger and designed to protrude from the incision to alert fishers that caught a tagged fish that a data logger was present inside the fish and that a monetary reward would be given if the tag was returned. The incision was closed with 3 interrupted sutures (PDS II) or 5-8 staples (Conmed Reflex One Skin). An external dart tag was inserted at the base of the dorsal fin. Fish were immediately released following tagging unless the fish appeared lethargic. When this occurred, we held the fish underwater as the boat moved forward slowly to irrigate the gills until the fish was able to swim off on its own. All fish capture, handling, and surgical procedures were approved by the College of William & Mary Institutional Animal Care and Use Committee (protocol no. IACUC-2017-05-26 133-kcweng).
Habitat Model
The habitat model followed similar methods described in Eveson et al. (2015) and Crear et al. (2020b), which use the ratio between habitat use and habitat availability to determine habitat suitability of the fish species. Habitat use was characterized by the temperatures utilized by tagged cobia (section "Habitat Use Densities" below) and habitat availability was the thermal distribution of the environment, as predicted via biogeochemical modeling (section "Habitat Availability Densities" below). A value greater than 1 indicates suitable conditions (i.e., the conditions the fish occupied occurred in a greater proportion than those conditions in the available habitat data), below 1 indicates unsuitable conditions, and equal to 1 represents no difference than random. In addition to the data from the data loggers, we used the environmental conditions simulated by the three-dimensional (3D) ChesROMS-Estuarine-Carbon-Biogeochemistry (ECB) model. This coupled hydrodynamic-biogeochemical model had a horizontal resolution of approximately 1 km × 1 km and 20 terrain-following vertical levels (i.e., depth levels that follow the contour of the bottom) (Shchepetkin and McWilliams, 2005) that have a higher vertical resolution near the surface and bottom of the water column (Feng et al., 2015; Da et al., 2018; Irby et al., 2018). The results from the ChesROMS-ECB model had three uses: estimates of habitat availability (daily outputs), predictions of arrival and departure time of cobia to and from Chesapeake Bay over contemporary and future time periods (daily outputs), and habitat suitability predictions over contemporary and future time periods (across-summer averages). Details of the complete habitat model are described below in six steps (Figure 1).
Habitat Use Densities
Habitat use data came from the data loggers and were defined as the temperatures occupied by tagged cobia when the fish were in Chesapeake Bay. Presence inside and outside Chesapeake Bay was determined using the acoustic detections from these fish that were detected on acoustic receiver stations at the mouth of Chesapeake Bay (75.98° W). A fish was deemed inside Chesapeake Bay from when the fish was first detected west of 75.98° W to the last time the fish was detected west of 75.98° W. This method was selected because there was not an acoustic array with a fine enough resolution to determine when the fish was outside Chesapeake Bay. Data within the first 24 h of tagging and during the day of recapture were removed from each fish's dataset to disregard handling and tagging stress behaviors. Temperature and depth data were summarized by hour for each fish over a specified time range. Densities were extracted from temperature histograms with 0.5 °C bins ranging from 1.5 to 33.5 °C for each fish. The densities for each temperature were averaged over all fish present in Chesapeake Bay over a specified time range. These histograms and densities were generated for the months cobia arrive to (May and June) and depart from (August and September) Chesapeake Bay, as well as over all 5 months of Chesapeake Bay occupancy (May-September) combined. These densities were considered habitat use for cobia in Chesapeake Bay.
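A NumPy sketch of this habitat-use step is below; the published analysis was done in R, so this Python version is purely illustrative, with made-up temperature records for two fish.

```python
import numpy as np

def habitat_use_density(fish_hourly_temps, bins=np.arange(1.5, 34.0, 0.5)):
    """Average normalised temperature histograms (0.5 C bins, 1.5-33.5 C)
    across tagged fish. `fish_hourly_temps` is a list of per-fish arrays
    of hourly temperatures occupied while inside Chesapeake Bay."""
    densities = [
        np.histogram(t, bins=bins, density=True)[0] for t in fish_hourly_temps
    ]
    return bins[:-1], np.mean(densities, axis=0)

# Example with two hypothetical fish
fish = [np.random.normal(25, 2, 500), np.random.normal(26, 1.5, 700)]
centers, use_density = habitat_use_density(fish)
print(centers[np.argmax(use_density)])  # most-occupied temperature bin
```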
Habitat Availability Densities
Habitat availability information for Chesapeake Bay consisted of temperatures and oxygen derived daily from the ChesROMS-ECB model for the time cobia are typically found in Chesapeake Bay (May 15-September 30) over the summers tagged cobia were at-liberty (2017-2019). We did not want to include all of May because the available temperatures would be skewed lower than what is actually available to cobia during the second half of May. Because the vertical levels in the model are not equally spaced, we generated eight depth bins at 3 m intervals (0-3 m, 3-6 m, 6-9 m, 9-12 m, 12-15 m, 15-18 m, 18-21 m, 21+ m). To allow each depth to be treated equally, all temperatures for a given latitude and longitude from levels within a depth bin were averaged over each day. To integrate the cobia hypoxia tolerance quantified in Crear et al. (2020a), we removed those portions of the dataset corresponding to physiologically uninhabitable waters. These experiments showed that hypoxia tolerance declines in warmer waters (Crear et al., 2020a). To remove those portions where habitats were physiologically unavailable to cobia, we set cells in the depth bins to not-available values (NAs) where temperatures were between 24 and 28 °C and dissolved oxygen levels were less than or equal to 1.7 mg l −1, where temperatures were greater than 28 °C and less than 32 °C and dissolved oxygen levels were less than or equal to 2 mg l −1, and where temperatures exceeded 32 °C and dissolved oxygen levels were less than 2.4 mg l −1 (Crear et al., 2020a). Because salinity preference is unknown for adult cobia while inhabiting Chesapeake Bay, we generated an area based on where cobia are caught while in Chesapeake Bay. This area extended slightly north of the mouth of the Potomac River (38.10° N) and excluded all areas in Chesapeake Bay tributaries (James, York, Rappahannock, and Potomac Rivers). We also excluded ocean waters, i.e., those east of the Chesapeake Bay mouth at 75.98° W. From here on, this region will be referred to as "Chesapeake Bay." The accuracy of the ChesROMS-ECB model has not been well-evaluated in shallow depths; therefore, any cells where bottom depths were less than 3 m were not included in these data. All temperatures over all eight depth bins for a specified time period were combined and a histogram and accompanying densities were created from 1.5 to 33.5 °C with 0.5 °C bins. These densities were generated for the months cobia arrive to (May and June) and depart from (August and September) Chesapeake Bay, as well as over all 5 months (May 15-September) combined. These densities were considered habitat availability for cobia in Chesapeake Bay.
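The physiological masking step can be expressed as a few boolean conditions. The sketch below mirrors the thresholds quoted above from Crear et al. (2020a); the behaviour exactly at the bin edges (e.g., 28 °C and 32 °C) is an assumption, as is the Python/NumPy rendering of what was an R workflow.

```python
import numpy as np

def mask_uninhabitable(temp, do):
    """Set cells to NaN where temperature/oxygen combinations exceed the
    cobia hypoxia-tolerance thresholds reported in Crear et al. (2020a).
    `temp` (deg C) and `do` (mg/l) are arrays of matching shape."""
    temp = temp.astype(float).copy()
    bad = (
        ((temp >= 24) & (temp <= 28) & (do <= 1.7))
        | ((temp > 28) & (temp < 32) & (do <= 2.0))
        | ((temp > 32) & (do < 2.4))
    )
    temp[bad] = np.nan
    return temp

# Example on a small grid: first and third cells fall below tolerance
t = np.array([[25.0, 30.0], [33.0, 22.0]])
o = np.array([[1.5, 2.5], [2.0, 1.0]])
print(mask_uninhabitable(t, o))
```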
Create Ratios
Ratios were calculated for each arrival month (May and June) and departure month (August and September) by dividing the corresponding habitat use densities by the habitat availability densities for those months. Ratios were also calculated from habitat use and habitat availability densities for all 5 months combined. Together this resulted in five sets of ratios (May, June, August, September, and all months combined).
Step 1: Histogram of habitat use densities from data loggers of tagged fish in Chesapeake Bay averaged over a specified time period; 2 arrival months (May, Jun.), 2 departure months (Aug., Sept.), and all 5 summer months combined (May-Sept.).
Step 2: Histogram of habitat availability densities generated from daily 3D temperature arrays from Chesapeake Bay (ChesROMS-ECB model), summarized vertically into eight depth bins combined for each of the aforementioned time periods.
Step 3: Ratios generated by dividing habitat use densities by habitat availability densities for each time period.
Step 4: Either daily 3D arrays (for arrival and departure months) or 3D arrays averaged across two summer periods (May 15-Sept. 30 and Jun. 1-Aug. 31) were extracted from the ChesROMS-ECB model and ratios from Step 3 were assigned to each grid cell at each depth bin based on temperature in that grid cell.
Step 5: The vertical habitat distribution of fish in Chesapeake Bay from data loggers was used to generate a depth weighting factor for the above eight depth bins, for each time period. The ratios generated in Step 4 were then multiplied (*) by the appropriate depth weighting factor based on each ratio's depth in each grid cell for each time period.
Step 6: Sum the weighted ratios through the water column to get daily 2D surfaces of weighted ratios for arrival months (May, Jun.) and departure months (Aug., Sept.) for each year and yearly 2D surfaces of weighted ratios for the two summer periods (May 15-Sept. 30 and Jun. 1-Aug. 31).

Future habitat availability data were generated to represent the scenarios predicted to occur by mid-century and end-of-century within Chesapeake Bay by adding deltas to the contemporary habitat data. We selected the mid-century deltas to be +2 °C and −0.5 mg/l and the end-of-century deltas to be +5 °C and −1.5 mg/l (based on Irby et al., 2018). It is important to mention that, similar to Irby et al. (2018), deltas were not selected to reflect any particular Representative Concentration Pathway (RCP) scenario or global climate model (GCM), but to more generally represent what is believed will occur by mid- and end-of-century and thus understand cobia's sensitivity to these changes. Deltas were applied to all 20 years and evenly over horizontal space and throughout the entire water column since observations suggest that climate change impacts temperatures along the north/south gradient and the temperature of the surface and bottom waters of Chesapeake Bay similarly (Preston, 2004; Irby et al., 2018; Hinson et al., in review). These 3D arrays were then summarized into the eight predefined depth bins and the other adjustments to the arrays described above (section "Habitat Availability Densities") were also applied here. This resulted in daily 3D gridded arrays for three different 20-year timeseries: contemporary, mid-century, and end-of-century.
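Applying the deltas is a uniform shift of the daily 3D fields; a minimal NumPy sketch follows (array shapes and variable names are illustrative).

```python
import numpy as np

# Illustrative application of uniform climate deltas to daily 3D fields
# (depth x lat x lon); delta values follow Irby et al. (2018) as quoted above.
DELTAS = {
    "mid_century": {"temp": +2.0, "do": -0.5},
    "end_century": {"temp": +5.0, "do": -1.5},
}

def apply_deltas(temp_3d, do_3d, period):
    """Shift temperature (deg C) and dissolved oxygen (mg/l) everywhere by the
    period's delta; deltas are spatially uniform through the water column."""
    d = DELTAS[period]
    return temp_3d + d["temp"], do_3d + d["do"]

temp = np.random.uniform(20, 30, (8, 10, 10))   # eight depth bins
do = np.random.uniform(2, 8, (8, 10, 10))
temp_mid, do_mid = apply_deltas(temp, do, "mid_century")
```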
To represent arrival (May and June) and departure (August and September) months, daily 3D temperature arrays for each year were used. To represent the summer in Chesapeake Bay, the temperature arrays were averaged across days for all 5 months (May 15-September 30) and averaged across days for June 1-August 31 (months when cobia most heavily occupy Chesapeake Bay) for each year. Ratios for the arrival and departure months were then assigned to each grid cell at each depth bin based on the daily temperature in that grid cell and given month for all 20 years. Ratios for the 5 months combined were applied to the two average temperature arrays (5 months combined and June-August combined) based on the temperature in that grid cell at each depth bin for each year.
Weight Ratios by Depth
To produce a single ratio value for each latitude and longitude, depth weighting factors were generated for the arrival and departure months and for all 5 months when cobia were present in Chesapeake Bay. The depth weighting factor was calculated by taking the proportion of hourly depth observations from the data loggers at each of the eight depth bins for the arrival and departure months and for all 5 months combined. Based on specified time period and the depth bin the ratio was in, the ratio was multiplied by the appropriate depth weighting factor. For example, if the ratio at a specific latitude and longitude was 2 at the 3-6 m depth bin in June and the depth weighting factor at 3-6 m was 0.5 in June, then the new weighted ratio would be 1.0 at the 3-6 m depth bin in June.
Sum Ratios Through Water Column
Once all ratios were weighted, the eight weighted ratios were summed through the water column at each grid cell for each month (May, June, August, and September) and the two arrays of combined months. This resulted in daily 2D surfaces of weighted ratios for May, June, August, and September for each year, and yearly 2D surfaces of weighted ratios for the May 15-September 30 and June 1-August 31 time periods. Suitable habitat was considered to be any cell within the Chesapeake Bay habitat where the predicted ratio was greater than 1. Any predicted ratios below 1 were considered unsuitable habitat and any equal to 1 were considered no preference.
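Steps 5 and 6 reduce to a weighted sum over the depth axis; the NumPy sketch below reproduces the worked example from the text (a ratio of 2 in the 3-6 m bin with a weight of 0.5 gives a weighted ratio of 1.0). Shapes and the NaN handling are illustrative assumptions.

```python
import numpy as np

def weighted_ratio_surface(ratio_3d, depth_weights):
    """Multiply each depth bin's ratio layer by its depth weighting factor,
    then sum through the water column to a 2D surface. `ratio_3d` has shape
    (n_depth_bins, n_lat, n_lon); NaNs (land or physiologically unavailable
    cells) are ignored in the sum."""
    weighted = ratio_3d * np.asarray(depth_weights)[:, None, None]
    return np.nansum(weighted, axis=0)

# Worked example from the text: ratio 2 at the 3-6 m bin, weight 0.5 -> 1.0
ratios = np.full((8, 5, 5), np.nan)
ratios[1] = 2.0                                   # 3-6 m depth bin
weights = [0.3, 0.5, 0.1, 0.05, 0.05, 0.0, 0.0, 0.0]
print(weighted_ratio_surface(ratios, weights)[0, 0])  # 1.0
```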
Arrival/Departure Analysis
To determine arrival and departure time of cobia in Chesapeake Bay we calculated available suitable habitat in Chesapeake Bay each day from May 1 to June 30 (for arrival) and August 1 to September 30 (for departure) for each year (2000-2019). To do this, the number of grid cells in the Chesapeake Bay area with predicted ratio values greater than 1 were tallied. Arrival day was considered the first date in May or June where greater than 50% of the cells were deemed suitable (>1). Departure day was considered the first date in August or September where less than 50% of the habitat was deemed suitable. We selected a 50% threshold because it estimated dates that fell within one standard deviation of the mean arrival and departure dates for cobia that were acoustically tagged. Specifically, we focused on departures in 2018 and arrivals in 2019 when there were 33 and 32 acoustically tagged cobia that left and entered Chesapeake Bay, respectively. To accommodate expected warming in our future scenarios, we extended Chesapeake Bay cobia habitat projections into April (for arrival) and October (for departure). There were very little or no contemporary habitat use data for cobia in Chesapeake Bay for the months of April and October; therefore ratios and depth weighting factors from May and September were used to predict over mid-century and end-of-century habitat in April and October, respectively.
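The 50% threshold rule can be written as a simple scan over the daily suitability surfaces; the sketch below is an illustrative NumPy version (the published code is in the R Supplementary Material).

```python
import numpy as np

def first_day_crossing(daily_surfaces, threshold=0.5, above=True):
    """Find the first day on which the fraction of suitable cells (ratio > 1)
    crosses 50%: above=True for arrival (first day > 50% suitable),
    above=False for departure (first day < 50% suitable).
    `daily_surfaces` has shape (n_days, n_lat, n_lon), with NaN outside the
    Chesapeake Bay study region."""
    for day, surface in enumerate(daily_surfaces):
        valid = ~np.isnan(surface)
        frac = (surface[valid] > 1.0).mean()
        if (frac > threshold) if above else (frac < threshold):
            return day
    return None

# Example: suitability ramps up over 30 days; arrival detected mid-series
days = np.stack([np.full((4, 4), v) for v in np.linspace(0.5, 1.5, 30)])
print(first_day_crossing(days))
```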
To assess whether arrival or departure day significantly changed over the current 20-year period (2000-2019) or with temperature, we ran two linear models. The response variables were estimated yearly arrival day relative to May 1st and estimated yearly departure day relative to September 1st, for the arrival and departure model, respectively. The fixed effects for the arrival model were overall mean May water temperature in Chesapeake Bay each year and year, while the fixed effects for the departure model were overall mean September water temperature in Chesapeake Bay each year and year. Linear mixed effects models were run to determine if arrival and departure dates differed over the contemporary, mid-century, and end-of-century time periods using the nlme package (Pinheiro et al., 2013). For these models, the response variables were again estimated arrival day and estimated departure day for each year, for the arrival and departure model, respectively. The fixed effect was time period, while the random effect was year (2000-2019) in these models. Tukey's post-hoc tests were run to determine differences among the time periods. All statistics were evaluated at a significance level of α = 0.05.
Habitat Suitability
The yearly 2D predicted ratio surfaces generated from the two summer periods (May 15-September 30; June 1-August 31) for each of the three time periods (contemporary, mid-century, and end-of-century) were used to calculate habitat suitability values for Chesapeake Bay. The predicted ratio values greater than 1 were summed over Chesapeake Bay for each year of each time period for each summer period to get yearly total habitat suitability index values. A linear mixed effects model was used to determine whether total habitat suitability index changed through each long term time period (contemporary, mid-century, and end-of-century) for the two summer periods (May 15-September 30; June 1-August 31). An interaction was used between these two fixed effects, long term time period and the two summer time periods, while the response variable was total habitat suitability index each year. Year was a random effect in this model. All R code for modeling and statistical analyses can be found in the Supplementary Material.
Data Retrieval
We received eight data loggers from fishermen. These eight fish ranged from 78.7 to 139.7 cm total length (mean ± SD: 106.0 ± 18 cm) (Table 1). Days-at-liberty within Chesapeake Bay ranged from 26 to 151 days (92 ± 46 days), yielding a total of 736 days of data.
Habitat Model
Habitat suitability ratios were generated for each arrival month (May and June), for each departure month (August and September), and for all summer months combined. During arrival and departure months, cobia preferred temperatures from 21.5 to 27 °C and 24.5 to 31 °C, respectively. Over the entirety of the summer cobia preferred 22.5-28 °C (Figure 2). Depth weighting factors were generated for each arrival and departure month and for all summer months combined. During early arrival (May) cobia preferred 0-6 m, but later into June cobia selected 0-9 m. During early departures (August) cobia were observed between 0 and 9 m, but during September cobia were most common at slightly deeper depths (0-12 m). When all summer months were combined, cobia were observed most frequently at depths between 0 and 9 m (Figure 3).

Estimated arrival time was earlier in years with warmer mean May water temperatures (Figure 4B). Arrival time significantly differed [F(2, 38) = 106.6, p < 0.001] among time periods (contemporary, mid-century, end-of-century; Figure 4C), where contemporary mean arrival time (mean ± SD; 27.8 ± 9.0 days) significantly differed from mid-century arrival time (16.0 ± 7.9 days; p < 0.05) and end-of-century arrival time (−1.5 ± 7.0 days; p < 0.05). Arrival times for mid-century and end-of-century also significantly differed (p < 0.05). Similar to arrival time, estimated departure time relative to September 1st (all departure values from here on are relative to September 1st) also varied over the last 20 years, and there was no significant trend (t = 0.23, p > 0.05). Despite this, the mean estimated departure time for earlier years was 3.0 days after September 1st, but 15.4 days for later years (Figure 5A). As average September temperature increased, estimated departure time significantly increased (t = 6.0, p < 0.001), where for every °C increase, departure time occurred 9.4 days later in the fall (Figure 5B). Estimated departure time also significantly differed among time periods [F(2, 38) = 154.6, p < 0.001; Figure 5C]. Specifically, contemporary mean departure time (10.1 ± 8.3 days) significantly differed from mid-century (27.7 ± 10.6 days; p < 0.05) and end-of-century (45.2 ± 6.7 days; p < 0.05). Departure times for mid-century and end-of-century significantly differed as well (p < 0.05).
Habitat Suitability
An interaction between long-term time period (contemporary, mid-century, end-of-century) and the two summer time periods was included in the habitat suitability model (Figure 7). The most suitable habitat during June-August for cobia in Chesapeake Bay spans from north of the James River all the way to the northern extent of the study region (north of the Potomac River) for the contemporary time period (Figure 6A). Despite the lack of a significant difference between the total habitat suitability index for the two summer time periods (p > 0.05; Figure 7), mean habitat suitability does appear to decline slightly throughout most of the Chesapeake Bay cobia region when we incorporated days in May and all of September (Figure 6D). Mean habitat suitability from June-August during mid-century shifted further south, closer to the mouth, compared to the contemporary period (Figure 6B). In addition, the total habitat suitability index significantly decreased between the contemporary and mid-century periods for June-August (p < 0.05; Figure 7). However, when assessing May 15-September 30 during mid-century, total habitat suitability index did not decline relative to the contemporary period. Although there is no significant difference, it does appear that total habitat suitability index increased slightly by mid-century (p > 0.05; Figure 7). This is also reflected in habitat improvements over much of Chesapeake Bay (Figure 6E). Total habitat suitability index also significantly differed between the two summer periods during mid-century (p < 0.05; Figure 7). For end-of-century, we project a significant decrease in suitable cobia habitat relative to mid-century for June-August (p < 0.05) and May 15-September 30 (p < 0.05; Figure 7). This is reflected in habitat loss throughout most of Chesapeake Bay and a shift toward the bay's mouth (Figures 6C,F). Total habitat suitability index was also significantly lower for June-August compared to May 15-September 30 (p < 0.05; Figure 7).

Figure 2. Habitat suitability ratios from 16 to 32 °C for each arrival month (May and June), departure month (August-September), and all summer months combined for when cobia were inside Chesapeake Bay (May-September). The ratios were developed by dividing the habitat use densities (red lines) by the habitat availability densities (blue lines) at each temperature during times when cobia were inside Chesapeake Bay. Dashed line is at a ratio of 1.0.

Figure 7. Total habitat suitability index for cobia in Chesapeake Bay during contemporary, mid-century, and end-of-century time periods for each summer period (Jun-Aug and May 15-Sep 30). Error bars represent standard deviation. Black symbols represent the mean total habitat suitability index values for June-August and red symbols represent the mean total habitat suitability index values for May 15-September 30. Different lower case letters indicate a statistical difference among time periods within a summer period. For example, in the Jun-Aug summer period all three time periods are different from one another. An * indicates a statistical difference between summer periods within a time period.
DISCUSSION
This study presents the first attempt at describing the distribution of cobia within Chesapeake Bay. We generated a depth-integrated habitat model to predict contemporary and future distributions of cobia within Chesapeake Bay using temperature, dissolved oxygen, and depth. By developing a novel model incorporating 3D habitat and physiology, we limited our model variables to those that are available in 3D, but we felt it was more important to incorporate depth than to include other variables (many of which are not available at a fine enough resolution) because cobia use the entire water column. Although weighting and summarizing by depth has its benefits (e.g., two-dimensional output; patterns more easily discernible), the approach does have some limitations. For example, there may be small pockets of more suitable habitat at various depths (sub-gridscale) that are not expressed in our results and thus could potentially lead to an underestimation of some suitable habitat predictions. While inhabiting Chesapeake Bay, cobia are highly driven by biotic factors, such as spawning and feeding, which were not included in our model; however, we believe environmental variables constrain cobia to certain areas in Chesapeake Bay, and these constraints are expressed in our model output. Because of this, we only assessed habitat suitability for the entire summer as a whole. The phenology of cobia arrival and departure to Chesapeake Bay appears to be cued by temperature, which then leads to inshore spawning and foraging. Therefore, we believe our temperature-driven habitat model is justified in describing cobia phenology. It is important to note that another limitation of this study is the low sample size of cobia used in the model, and that individuals used in our model may not be a full representation of cobia that summer in Chesapeake Bay. An increase in sample size may lead to shifts in estimated phenology and habitat suitability. Despite this, our phenology estimates fell within one standard deviation of actual departure and arrival days based on over 30 acoustically tagged cobia. Lastly, we would also like to reiterate that the trends estimated from climate change projections are not intended to represent shifts under any specific RCP scenario or GCM, but more generally demonstrate cobia's sensitivity to the future oxygen and temperature conditions likely to occur around mid- and end-of-century.
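The depth integration described here can be read as a weighted average over depth bins, collapsing a 3D suitability field to a single 2D map. The sketch below illustrates that idea only; the weights, grid dimensions, and suitability values are hypothetical, not the model's actual configuration.

# Sketch of collapsing a 3D suitability field (depth x lat x lon) to a 2D map
# using depth-use weights, as in the depth-integrated model described above.
import numpy as np

depth_weights = np.array([0.4, 0.3, 0.2, 0.1])  # relative cobia use of four depth bins (hypothetical)
depth_weights = depth_weights / depth_weights.sum()

# suitability[d, i, j]: habitat suitability in depth bin d at grid cell (i, j)
suitability = np.random.default_rng(0).uniform(0, 1, (4, 50, 80))

# Depth-weighted 2D suitability; sub-grid pockets of suitable habitat at a
# single depth get averaged out, the limitation noted in the text.
suitability_2d = np.tensordot(depth_weights, suitability, axes=(0, 0))
print(suitability_2d.shape)  # (50, 80)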
Contemporary Trends
It is clear that temperature is a major driver of cobia arrival to and departure from Chesapeake Bay. Over the last 20 years, when temperatures were warmer in May, cobia arrived earlier. Tag pop-off locations and modeling suggest that cobia overwinter offshore along the U.S. shelf from North Carolina to Florida (Crear et al., 2020b; Jensen and Graves, 2020). Although cobia are unaware of the temperature in Chesapeake Bay while they are in their offshore overwintering waters, warm temperature cues on the shelf are most likely reflected in Chesapeake Bay as well. This has been observed in mackerel, which arrived at their spawning grounds earlier when sea surface temperature was warmer, at a rate of −15 ± 12.1 days/°C (Jansen and Gislason, 2011). Although the trend was not significant, it does appear that, when comparing estimated arrival time approximately 20 years ago to today, cobia may be migrating into Chesapeake Bay earlier in recent years. Earlier migrations have been recorded in various tuna species as well, which have migrated to feeding grounds up to 14 days earlier over a 25-year period (Dufour et al., 2010).
Once cobia enter Chesapeake Bay in May, the occurrence of high ratio values (generated from the habitat use and habitat availability densities) and the use of shallower habitats suggest that cobia are likely seeking out warm shallow habitats until some of the deeper areas (>6 m) warm up. During the main summer months in Chesapeake Bay (June-August), most areas in southern Chesapeake Bay appear to be suitable for cobia. The suitability of most of these areas allows cobia to spawn and feed freely without being restricted by less optimal conditions, except for areas that are excluded as a result of low oxygen. However, because cobia have a high hypoxia tolerance, the negative impacts of low oxygen are likely minimal. These favorable conditions are ideal for cobia, which are indeterminate batch spawners and are capable of spawning multiple times over the spawning season (Brown-Peterson et al., 2001; Lefebvre and Denson, 2012). To further define cobia habitat use in estuaries, it would be useful for future studies to examine the relationships between cobia and the location of bathymetric features, man-made structures (e.g., buoys and pilings), salinity, tidal currents, and bait schools (e.g., menhaden), all of which are thought (based on anecdotal evidence) to influence cobia movements while inshore.
Typically, once spawning is complete, individuals have foraged, and temperatures cool in Chesapeake Bay, cobia begin their migration out onto the shelf. However, when temperatures in September are warmer than usual, cobia remain in Chesapeake Bay longer. Similar to arrivals, despite no significant trend over time, it appears that in recent years cobia are leaving Chesapeake Bay later compared to 20 years ago. Although we did not directly examine the effect of changing dissolved oxygen levels on cobia phenology, dissolved oxygen is likely not a major driver, because of cobia's hypoxia tolerance and the lack of large hypoxic zones during the months when cobia arrive at and depart from Chesapeake Bay. Overall, these results suggest that cobia phenology has already been impacted by climate change over the last 20 years.
Future Trends
Phenology trends observed over the last 20 years are projected to extend more rapidly in the future as climate change contributes to even warmer conditions. By mid-century and end-of-century, conditions in Chesapeake Bay may allow cobia to arrive by mid-May and late April/early May on average, respectively. Furthermore, departure time is predicted to extend to the end of September and mid-October by mid-century and end-of-century, respectively. Combining the estimated earlier arrival and later departure, our results indicate that cobia may increase their residence time in Chesapeake Bay by an extra 30 days by mid-century and 65 days by end-of-century. Despite this large increase in the number of days, cobia may be faced with more unsuitable habitat during the months when temperatures are the warmest. When the more favorable conditions during the last 2 weeks of May and all of September are included, suitable habitat does not change much by mid-century. If climate change continues at its current rate, suitable habitat is expected to decrease significantly and shift closer to the Chesapeake Bay mouth by end-of-century, even when incorporating the second half of May and all of September. Further, these trends should be interpreted as the average summer cobia distribution, which, in turn, could potentially hide periodic marine heatwave events that could result in displacement and further habitat reduction. Future declines in suitable habitat have similarly been projected for many other coastal species (Albouy et al., 2013; Brown et al., 2016). For example, an increase in sublethal temperatures in the San Francisco Estuary as a result of climate change will likely cause behavioral avoidance of these temperatures and considerable habitat reduction for the Delta smelt (Hypomesus transpacificus) (Brown et al., 2016).
Although habitat shifts and changes in community composition have favored warm-adapted species (Howell and Auster, 2012), the predicted occurrence of more extreme temperatures has the capability to negatively impact warm-adapted species like cobia. For example, if cobia migrate into Chesapeake Bay earlier, spawning may occur earlier. This could impact the survival of eggs and larvae, which depend on the timing of specific temperatures and favorable primary production conditions (Durant et al., 2007). On the other hand, if spawning duration is extended and phytoplankton blooms align, larval survival may improve (Kristiansen et al., 2011). If substantial spawning habitat is lost for estuarine species like cobia, we may see populations decline. We may also see species shift their spawning habitat to more poleward estuaries or offshore habitat where conditions are more favorable for spawning adults and larvae. Recent genetic studies suggest that cobia already have a separate offshore spawning group (Darden et al., 2014; Perkinson et al., 2019), meaning cobia have the ability to spawn in offshore waters. Furthermore, Crear et al. (2020b) found that over the next 60-80 years, there will continue to be an increase in the proportion of suitable cobia habitat in state waters (within 3 nautical miles of shore) from Maryland to Massachusetts during the summer spawning months. Likewise, non-warm-adapted species like Northeast Arctic cod (Gadus morhua) have already shifted their spawning habitat further north over the last half-century, a behavior likely linked to climate change (Sandø et al., 2020). If cobia shift their spawning habitat further north or extend their time inshore, they may subsequently shift their overwintering grounds to be closer to their spawning habitat. Because cobia offshore migrations are driven by temperature, we hypothesize that their overwintering grounds are likely plastic. Therefore, although suitable habitat may still be available further south in the winter (Crear et al., 2020b), it may be less energetically costly to migrate off the shelf toward the Gulf Stream instead of migrating to shelf waters between North Carolina and Florida. If longer migrations are required, females may have to divert energy away from egg production to compensate. Although we discuss these impacts as being decades away, some of these hypotheses can be tested today as marine heatwaves become more prevalent along the Northeast Shelf.
Cobia may have the ability to behaviorally adapt to climate change within Chesapeake Bay. The fact that cobia can withstand water temperatures as warm as 32°C (Crear et al., 2020a) suggests that, if waters warmed throughout Chesapeake Bay, areas with water temperatures up to 32°C could still be habitable or perhaps even suitable. That is, temperatures between 22.5 and 28°C may be preferred, but if those are unavailable, cobia could still inhabit warmer waters. If this is the case, our projected future habitat suitability maps may underestimate the amount of suitable habitat in Chesapeake Bay. Even so, it remains unknown whether other essential functions like growth or reproduction would be compromised at these warmer temperatures.
Management Implications
Hundreds of thousands of recreational fishermen enjoy fishing for cobia each year in Virginia alone, and it appears this number has increased in recent years (B. Watkins pers. comm.). As the amount of time cobia spend in Chesapeake Bay increases with climate change, management will need to be prepared for catch increases. In recent years, the fishery in Virginia has been open from June 1 to various dates in September. If the fishing season dates remain the same, we may expect to see an increase in the catch and release of cobia in May and more cobia retained later in the season. Our study and a previous study (Crear et al., 2020a) suggest cobia have the capacity to withstand near-term (+30 years) impacts of climate change, which is a good sign for a fishery that has grown over the last decade.
A dynamic approach to management may prepare managers for early migrations to, or late departures from, Chesapeake Bay. Dynamic management provides managers with the opportunity to adjust managed areas temporally and spatially at a time when our coastal waters are changing faster than we are accustomed to (Maxwell et al., 2015; Dunn et al., 2016; Welch et al., 2019). Specifically, as the predictive skill of coastal ocean models improves, we will have the capacity to couple them with our cobia habitat model to project the timing of cobia migrations months to seasons in advance. This information could be used to guide the timing of the fishing season in Virginia and also to inform the allocation of cobia among states on a broader scale. As fish behaviorally adapt to changing water conditions, it is critical that management be prepared to adapt as well.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation, to any qualified researcher.
ETHICS STATEMENT
The animal study was reviewed and approved by the College of William & Mary Institutional Animal Care and Use Committee (protocol no. IACUC-2017-05-26 133-kcweng).
AUTHOR CONTRIBUTIONS
DC substantially contributed to project conception and design, animal tagging, analysis and interpretation of data, and drafting the manuscript. BW contributed to animal tagging and assisted in manuscript revision. MF and PS provided funding support, generated environmental data, and provided manuscript revisions. KW contributed funds to the project, participated in project conception and design, animal tagging, and assisted in manuscript revision. All authors contributed to the article and approved the submitted version. | 2020-11-11T14:09:32.577Z | 2020-11-11T00:00:00.000 | {
"year": 2020,
"sha1": "2656648643154fdd29256a0bee914fbbf14e2e4a",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmars.2020.579135/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "2656648643154fdd29256a0bee914fbbf14e2e4a",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
20765095 | pes2o/s2orc | v3-fos-license | Predictive abilities of cardiovascular biomarkers to rapid decline of renal function in Chinese community-dwelling population: a 5-year prospective analysis
Background The predictive abilities of cardiovascular biomarkers for renal function decline are more meaningful in a Chinese community-dwelling population without glomerular filtration rate (GFR) below 60 ml/min/1.73m2, and a long-term prospective study is an optimal choice for exploring this problem. The aim of this analysis was to examine this question over a follow-up of 5 years. Methods In a large medical check-up program in Beijing, 948 participants had renal function evaluated at baseline and at the 5-year follow-up. Physical examinations were performed by well-trained physicians. Blood samples were analyzed by qualified technicians in the central laboratory. Results The median rate of renal function decline was 1.46 (0.42–2.91) mL/min/1.73m2/year. Rapid decline of renal function had a prevalence of 23.5% (223 participants). Multivariate linear and logistic regression analyses confirmed that age, sex, baseline GFR, homocysteine and N-terminal pro B-type natriuretic peptide (NT-proBNP) had independent predictive abilities for renal function decline rate and rapid decline of renal function (p < 0.05 for all). High-sensitivity cardiac troponin T (hs-cTnT), carotid-femoral pulse wave velocity and central augmentation index had no statistically independent association with renal function decline rate or rapid decline of renal function (p > 0.05 for all). Conclusions Homocysteine and NT-proBNP, rather than hs-cTnT, had independent predictive abilities for rapid decline of renal function in a Chinese community-dwelling population without GFR below 60 ml/min/1.73m2. Baseline GFR was an independent factor predicting the rapid decline of renal function. Arterial stiffness and compliance had no independent effect on rapid decline of renal function. This analysis has a significant implication for public health: changing homocysteine and NT-proBNP levels might slow the rapid decline of renal function.
Background
During the last few decades, rapid decline of renal function has continued to grow in prevalence, and this trend is particularly obvious in patients with cardiovascular disease (CVD) [1]. Patients with CVD have clearly higher morbidity and mortality from chronic kidney disease (CKD) compared with those without CVD [2,3]. As established cardiovascular biomarkers, homocysteine, N-terminal pro B-type natriuretic peptide (NT-proBNP) and high-sensitivity cardiac troponin T (hs-cTnT) are considered to be elevated in patients with rapid decline of renal function, especially those with end-stage renal disease (ESRD) [4,5]. Previous studies have examined the relationships between cardiovascular biomarkers and renal function decline in patients with glomerular filtration rate (GFR) below 60 ml/min/1.73m2 and yielded controversial results [6][7][8]. However, the predictive abilities of cardiovascular biomarkers for renal function decline are more meaningful in a community-dwelling population without GFR below 60 ml/min/1.73m2, and it is essential to analyze this problem in this population. Moreover, these cardiovascular biomarkers might play an etiologic role in renal function decline, and analyzing their relationships could promote the development of preventive strategies to slow the rapid decline of renal function.
Predictive abilities cannot be fully evaluated by cross-sectional and short-term studies, and a long-term prospective study is an optimal choice for exploring the predictive abilities of cardiovascular biomarkers for renal function decline [9]. Moreover, this problem differs between racial groups, and there are few studies on this problem in China [10]. Therefore, the aim of this prospective analysis was to observe the predictive abilities of cardiovascular biomarkers for renal function decline during a follow-up of 5 years in a Chinese community-dwelling population without GFR below 60 ml/min/1.73m2.
Study population
This prospective analysis was performed in 1680 participants aged at least 18 years enrolled through a medical examination program in Beijing, China. Participants were enrolled between May 2007 and July 2009, and the follow-up visit was conducted between February 2013 and September 2013. After 181 participants were lost to follow-up, 1499 participants were followed up for 5 years. Exclusion criteria were death (52 participants), GFR below 60 ml/min/1.73m2 (12 participants) and missing values for variables (487 participants). The final study population comprised 948 participants.
Physical examination
Physical examination was conducted by well-trained physicians. Resting blood pressure (BP) was determined by taking the mean of two measurements from the right arm of participants in the seated position with a standard mercury sphygmomanometer (Yuwell Medical Equipment & Supply Co., Ltd., Jiangsu, China). Hypertension was defined as mean systolic blood pressure (SBP) ≥ 140 mmHg, mean diastolic blood pressure (DBP) ≥ 90 mmHg, and/or use of any anti-hypertensive medications.
Biochemical evaluation
Fasting blood samples were collected between 8 a.m. and 10 a.m. and sent to our central laboratory. Cardiovascular biomarkers were evaluated according to the standard methods described by the manufacturers. Concentrations of fasting blood glucose (FBG), triglyceride (TG), high-density lipoprotein cholesterol (HDL-c) and low-density lipoprotein cholesterol (LDL-c) were quantified with Roche enzymatic assays (Roche Products Ltd., Basel, Switzerland) on a Roche autoanalyzer (Roche Products Ltd., Basel, Switzerland). An oral glucose tolerance test was performed two hours after ingestion of a 75 g glucose load. Diabetes was defined as FBG ≥ 7.0 mmol/L, postprandial blood glucose (PBG) ≥ 11.1 mmol/L, and/or use of oral hypoglycemic medications or insulin. Concentrations of NT-proBNP were measured with an electrochemiluminescence immunoassay (Roche Diagnostics GmbH) on an autoanalyzer (COBAS 6000; Roche Products Ltd., Basel, Switzerland). Concentrations of homocysteine were measured by high-performance liquid chromatography with fluorometric detection. Concentrations of hs-cTnT were measured with the Elecsys Troponin T high-sensitive assay (Roche Products Ltd., Basel, Switzerland) by electrochemiluminescence immunoassay on a Modular Analytics E170 autoanalyzer (Roche Products Ltd., Basel, Switzerland). All assays were performed by qualified technicians without knowledge of the clinical data.
Arterial stiffness and compliance assessment
Central arterial stiffness was assessed by automated measurement of carotid-femoral pulse wave velocity (cfPWV; Créatech, Besançon, France). cfPWV (m/s) was measured with two strain-gauge transducers (TY-306 Fukuda pressure-sensitive transducer; Fukuda Denshi Co., Tokyo, Japan) fixed transcutaneously over the carotid and femoral arteries (both on the right side), and then calculated from the pulse transit time and the distance between the two recording sites [distance (m)/time (s)]. Central arterial compliance was assessed by automated measurement of the central augmentation index (cAIx; SphygmoCor, Sydney, Australia). The central pressure waveform was obtained by applying a transfer function to the calculated mean radial artery waveform.
Renal function decline
GFR was calculated using the Chinese-modified Modification of Diet in Renal Disease (MDRD) equation, GFR = 175 × (plasma creatinine)^(−1.234) × (age)^(−0.179) × 0.79 (if female), where plasma creatinine was in mg/dL [11]. Concentrations of serum creatinine were measured with an enzymatic assay (Roche Diagnostics GmbH) on a Hitachi 7600 autoanalyzer (Hitachi, Tokyo, Japan). Renal function decline rate was defined as the change in GFR per year. Rapid decline of renal function was defined as a loss of GFR of more than 3 ml/min/1.73m2 per year [12,13].
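For readers who want to reproduce the calculation, the sketch below implements the Chinese-modified MDRD equation and the rapid-decline rule exactly as defined in this paragraph; the participant values in the example are illustrative only, not study data.

# Chinese-modified MDRD equation and the rapid-decline definition used here.
def egfr_cmdrd(creatinine_mg_dl: float, age_years: float, female: bool) -> float:
    """GFR in mL/min/1.73m2 from the Chinese-modified MDRD equation."""
    gfr = 175.0 * creatinine_mg_dl ** -1.234 * age_years ** -0.179
    return gfr * 0.79 if female else gfr

def is_rapid_decline(gfr_baseline: float, gfr_followup: float,
                     years: float = 5.0) -> bool:
    """Rapid decline: loss of GFR of more than 3 mL/min/1.73m2 per year."""
    return (gfr_baseline - gfr_followup) / years > 3.0

# Illustrative participant (values are not from the study dataset):
g0 = egfr_cmdrd(0.80, 55, female=True)   # baseline, ~88.9
g5 = egfr_cmdrd(1.05, 60, female=True)   # after 5 years, ~62.6
print(f"{g0:.1f} -> {g5:.1f}, rapid decline: {is_rapid_decline(g0, g5)}")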
Statistical analysis
Continuous variables with normal distribution were described as mean (standard deviation) and compared with Student's t-test. Continuous variables with skewed distribution were described as median (interquartile range) and compared with the Mann-Whitney U test. Categorical variables were described as number (percentage) and compared with the χ2 test. Baseline characteristics were compared between participants with and without rapid decline of renal function. Pearson and Spearman correlation analyses were used for simple correlation. The Wilcoxon signed-rank test was conducted to compare GFR levels at baseline and follow-up. Linear and logistic regression analyses were conducted to observe the predictive abilities of cardiovascular biomarkers for renal function decline rate and rapid decline of renal function, respectively, after adjustment for age, sex, coronary artery disease, hypertension, diabetes, BMI, SBP, DBP, TG, HDL-c, LDL-c, FBG, PBG, baseline GFR, homocysteine, NT-proBNP, hs-cTnT, cfPWV and cAIx. Data were analyzed with the Statistical Package for the Social Sciences (SPSS) version 17.0 (SPSS Inc., Chicago, IL, USA). Statistical significance was defined as p < 0.05.
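A minimal sketch of the covariate-adjusted logistic model for rapid decline is shown below. The simulated data, effect sizes, and column names are placeholders (the study's dataset is not public), and only a subset of the adjustment covariates is included for brevity; the analysis itself was performed in SPSS.

# Hedged sketch of the adjusted logistic regression for rapid decline.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 948  # matches the analysis sample size; everything else is simulated
df = pd.DataFrame({
    "age": rng.normal(55, 10, n),
    "female": rng.integers(0, 2, n),
    "baseline_gfr": rng.normal(95, 15, n),
    "homocysteine": rng.lognormal(2.5, 0.3, n),  # umol/L scale, illustrative
    "nt_probnp": rng.lognormal(3.5, 0.8, n),     # pg/mL scale, illustrative
})
lp = (-4.5 + 0.05 * df.age + 0.04 * df.homocysteine
      + 0.004 * df.nt_probnp + 0.02 * (df.baseline_gfr - 95))
# Simulated outcome, yielding roughly the reported ~23.5% prevalence.
df["rapid_decline"] = (rng.random(n) < 1 / (1 + np.exp(-lp))).astype(int)

model = smf.logit(
    "rapid_decline ~ age + female + baseline_gfr + homocysteine + nt_probnp",
    data=df,
).fit(disp=False)
print(model.params)              # log-odds per unit of each predictor
print(np.exp(model.conf_int()))  # 95% CIs expressed as odds ratios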
Results
Linear regression analysis confirmed that not only age, sex and baseline GFR levels, but also homocysteine and NT-proBNP levels had independent predictive abilities for renal function decline rate (NT-proBNP: p = 0.001; others: p < 0.001 for all; Table 2). Levels of hs-cTnT, cfPWV and cAIx had no statistically independent association with renal function decline rate (p > 0.05 for all). Logistic regression analysis confirmed that not only age, sex, baseline GFR levels and hypertension, but also homocysteine and NT-proBNP levels had independent predictive abilities for rapid decline of renal function (hypertension: p = 0.029; NT-proBNP: p = 0.027; others: p < 0.001 for all; Table 3). Levels of hs-cTnT, cfPWV and cAIx had no statistically independent association with rapid decline of renal function (p > 0.05 for all).
Discussion
Predictive abilities of cardiovascular biomarkers for rapid decline of renal function have mainly been discussed in clinical patients, and few studies have been performed in a community-dwelling population without GFR below 60 ml/min/1.73m2 [6][7][8]. Moreover, most of the conflicting evidence has been provided by cross-sectional and short-term studies, and it is essential to perform a long-term prospective study to analyze this problem [9]. Almost no long-term prospective study has evaluated this problem in China [10]. This prospective analysis had the following findings during the follow-up of 5 years in a Chinese community-dwelling population without GFR below 60 ml/min/1.73m2: 1. homocysteine and NT-proBNP, rather than hs-cTnT, had independent abilities to predict the rapid decline of renal function, with higher levels indicating a more rapid renal function decline rate; 2. baseline GFR was an independent factor predicting the rapid decline of renal function; 3. the elderly and females had a more rapid decline of renal function compared with others; 4. the role of hypertension in rapid decline of renal function could not be ignored; 5. arterial stiffness and compliance had no independent effect on rapid decline of renal function. Cross-sectional studies have found that homocysteine levels were negatively related to GFR in black and white adults [14]. However, an increase in homocysteine levels was not an independent risk factor for renal disease in a study with a follow-up of 2.2 years [15]. During the follow-up of 5 years, this prospective analysis demonstrated the independent predictive ability of homocysteine for not only renal function decline rate but also rapid decline of renal function, strongly supporting an adverse effect of homocysteine on renal function. Homocysteine might play an etiologic role in renal function decline by injuring the renal blood vessels. As a pro-oxidant, homocysteine could diminish nitric oxide-mediated vasodilation, promote thrombosis and impede fibrinolysis [16][17][18][19]. Elevated homocysteine levels could be a target of future intervention studies to slow the rapid decline of renal function, and folic acid might be a choice in the clinical therapy of renal injury. Seki et al. proposed that natriuretic peptide was an independent risk factor for renal function decline rate [20]. Spanaus et al. reported that elevated NT-proBNP levels indicated an increased risk for accelerated progression of renal disease [21]. This prospective analysis, directed at a Chinese community-dwelling population without GFR below 60 ml/min/1.73m2, identified NT-proBNP as an independent risk factor for rapid decline of renal function. Endogenous NT-proBNP at physiological levels affects glomerular filtration and renal function [22]. Glomerular hyperfiltration induces glomerular hypertension and stretches the mesangial cells.
Stretched mesangial cells secrete cytokines that stimulate the production of extracellular matrix proteins, the accumulation of which promotes the progression of renal injury. Moreover, natriuretic peptide receptor antagonists, or angiotensin receptor blockade combined with neutral endopeptidase inhibition (ARNI, LCZ696), might be useful to prevent renal injury [23,24]. Therefore, effective monitoring of NT-proBNP levels could help slow the rapid decline of renal function.
Age and baseline GFR have been suggested to correlate with ESRD [25]. Based on the Epidemiologia do Idoso (EPIDOSO) Study, age and baseline GFR were associated with progressive decline in renal function [26]. Seki et al. found that baseline GFR had a significantly positive association with renal function decline rate [27]. This prospective analysis discovered, in a Chinese community-dwelling population without GFR below 60 ml/min/1.73m2, that baseline GFR was significantly related to rapid decline of renal function, which was consistent with the prior finding that glomerular hyperfiltration was a determinant of renal function decline [28,29]. Accordingly, the correction of glomerular hyperfiltration might be valuable for slowing the rapid decline of renal function. Meanwhile, findings from the Cardiovascular Health Study have suggested that elevated BP contributed to renal function decline in the elderly [30]. According to the Tromsø Study, high BP predicted a decline in GFR [31]. However, the Leiden 85-Plus Study found that low BP was related to renal function decline in the elderly [32]. This prospective analysis demonstrated a potential effect of hypertension on rapid decline of renal function in a Chinese community-dwelling population without GFR below 60 ml/min/1.73m2.
Ford et al. suggested that arterial stiffness was related to renal function decline rate [33]. In 482 community-dwelling individuals free from ESRD, renal function decline was associated with increased arterial stiffness [34]. However, Kim et al. reported that arterial stiffness was not associated with rapid decline of renal function in participants without GFR below 30 mL/min/1.73m2 [35]. In this prospective analysis, arterial stiffness and compliance had no statistically independent effect on rapid decline of renal function in a Chinese community-dwelling population without GFR below 60 ml/min/1.73m2.
The findings of this prospective analysis have public health relevance. Rapid decline of renal function has a growing prevalence in China. Given that there are limited interventions available, public health initiatives are needed to slow the rapid decline of renal function. This prospective analysis confirmed that elevated homocysteine and NT-proBNP levels contributed to rapid decline of renal function. Future studies are required to determine whether therapies changing homocysteine and natriuretic peptide levels ultimately affect the rapid decline of renal function. The effects on renal function of folic acid, natriuretic peptide receptor antagonists, angiotensin receptor blockade combined with neutral endopeptidase inhibition (LCZ696) and other medications associated with homocysteine and NT-proBNP levels deserve special attention in pharmaceutical research and clinical practice.
The current analysis had some limitations. Firstly, 487 participants (30%) were excluded due to missing values for variables, and it was difficult to obtain complete information for all of their variables. However, only 181 participants were lost during the follow-up of 5 years. Secondly, the MDRD equation rather than the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation was applied to evaluate renal function in the current analysis. However, the MDRD equation is more commonly applied in epidemiological investigations than the CKD-EPI equation. Moreover, the MDRD equation, but not the CKD-EPI equation, has a Chinese-modified version (CMDRD). The CMDRD equation is more suitable for the Chinese community-dwelling population and has greater accuracy than the CKD-EPI equation in this setting.
Conclusions
This prospective analysis demonstrated that homocysteine and NT-proBNP, rather than hs-cTnT, had independent predictive abilities for rapid decline of renal function in a Chinese community-dwelling population without GFR below 60 ml/min/1.73m2. Moreover, baseline GFR was an independent factor predicting the rapid decline of renal function. Meanwhile, arterial stiffness and compliance had no independent effect on rapid decline of renal function. This prospective analysis has a significant implication for public health, and further studies are warranted to establish the benefit of interventions changing homocysteine and NT-proBNP levels on slowing the rapid decline of renal function. | 2017-11-15T10:51:10.859Z | 2017-11-09T00:00:00.000 | {
"year": 2017,
"sha1": "d8077746729f8a2aa9869da076bd8db59141633e",
"oa_license": "CCBY",
"oa_url": "https://bmcnephrol.biomedcentral.com/track/pdf/10.1186/s12882-017-0743-y",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d8077746729f8a2aa9869da076bd8db59141633e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
269328129 | pes2o/s2orc | v3-fos-license | Association of JAK2V617F allele burden and clinical correlates in polycythemia vera: a systematic review and meta-analysis
Janus kinase 2 (JAK2) V617F mutation is present in most patients with polycythemia vera (PV). One persistently puzzling aspect unresolved is the association between JAK2V617F allele burden (also known as variant allele frequency) and the relevant clinical characteristics. Numerous studies have reported associations between allele burden and both hematologic and clinical features. While there are strong indications linking high allele burden in PV patients with symptoms and clinical characteristics, not all associations are definitive, and disparate and contradictory findings have been reported. Hence, this study aimed to synthesize existing data from the literature to better understand the association between JAK2V617F allele burden and relevant clinical correlates. Out of the 1,851 studies identified, 39 studies provided evidence related to the association between JAK2V617F allele burden and clinical correlates, and 21 studies were included in meta-analyses. Meta-analyses of correlation demonstrated that leucocyte and erythrocyte counts were significantly and positively correlated with JAK2V617F allele burden, whereas platelet count was not. Meta-analyses of standardized mean difference demonstrated that leucocyte and hematocrit were significantly higher in patients with higher JAK2V617F allele burden, whereas platelet count was significantly lower. Meta-analyses of odds ratio demonstrated that patients who had higher JAK2V617F allele burden had a significantly greater odds ratio for developing pruritus, splenomegaly, thrombosis, myelofibrosis, and acute myeloid leukemia. Our study integrates data from approximately 5,462 patients, contributing insights into the association between JAK2V617F allele burden and various hematological parameters, symptomatic manifestations, and complications. However, varied methods of data presentation and statistical analyses prevented the execution of high-quality meta-analyses. Supplementary Information The online version contains supplementary material available at 10.1007/s00277-024-05754-4.
Introduction
Polycythemia vera (PV), together with essential thrombocythemia and primary myelofibrosis, constitutes the classic Philadelphia-negative myeloproliferative neoplasms (MPNs), a group of rare hematologic cancers characterized by the overproduction of one or more blood cell types. PV is typically marked by erythrocytosis, and in many cases concurrent leukocytosis and thrombocytosis. The excessive levels of blood cells result in blood thickening and a reduction in blood flow, elevating the risk of complications such as hemorrhage and thrombosis. These complications significantly impact quality of life and in severe cases can be fatal. As early-stage patients are often asymptomatic for many years, and the symptoms of PV lack distinct features, suspicion of PV and subsequent diagnosis frequently occur later, following the exclusion of other diseases.
In 2005, several research groups independently identified a mutation in the Janus kinase 2 (JAK2) gene, JAK2V617F, that revolutionized the diagnosis and treatment of MPNs [1][2][3][4]. JAK2, a member of the Janus family of non-receptor tyrosine kinases, plays a crucial role in hematopoiesis. Upon binding to associated receptor molecules, JAK2 induces conformational changes that phosphorylate specific tyrosine residues on the intracellular domain of the receptor, creating docking sites for specific signaling molecules [5]. The JAK2V617F mutation removes the intrinsic inhibitory mechanism and results in the overactivation of the JAK2 protein. This leads to constitutive activation of its receptors, aberrant downstream signaling, and an increase in hematopoiesis [6]. This mutation is present in 95% of patients with PV and 50-60% of patients with essential thrombocythemia or primary myelofibrosis [7]. The remaining PV patients without the mutation often harbor other mutations located on exon 12 of JAK2 [8]. Therefore, the involvement of JAK2 mutations in PV underpins its significance in this disease.
One persistently puzzling aspect of PV that remains somewhat unresolved is the association between JAK2V617F allele burden (or variant allele frequency) and the relevant clinical characteristics. Numerous studies have reported associations between allele burden and both hematologic and clinical features of MPNs. For instance, a high allele burden has been correlated with increases in thrombosis and disease transformation [9]. While there are strong indications linking high allele burden in PV patients with symptoms and clinical characteristics, not all associations are definitive, and disparate and contradictory findings have been reported. To the best of our knowledge, a meta-analysis has yet to be conducted to investigate the association between JAK2V617F allele burden and the clinical characteristics of PV. Hence, this study aimed to synthesize existing data from the literature to better understand the association between JAK2V617F allele burden and relevant clinical correlates.
Selection process and data collection process
Two authors (JLC & AJL) independently screened the title and abstract of each study for initial inclusion in our systematic review. Studies upon which both authors reached consensus were included. Any disagreements were resolved through discussion or by a third author if necessary. Two authors (JLC & LHY) independently reviewed the full text for data applicability.
Only studies presenting data pertaining to a hematologic parameter or clinical outcome correlating with a quantified JAK2V617F allele burden were included. Discrepancies were resolved through discussion or by a third author if necessary. Both title and abstract screening as well as full-text review were conducted on the Covidence platform (app.covidence.org). Data extraction was performed using a standardized form in Microsoft Excel by one author (JLC), with the accuracy of the extracted data verified by two authors (CCC & HHA).
Data items
Peripheral blood or bone marrow samples from patients were taken at the time of diagnosis or during follow-up. Some patients were on treatments for PV, which included aspirin, phlebotomy, and/or cytoreductive agents. JAK2 allele burden was measured using validated methods. Various clinical outcomes and hematologic parameters were assessed. Continuous variables extracted included red blood cell count (RBC), platelet count (PLT), white blood cell count (WBC), hematocrit (Hct), hemoglobin (Hb), spleen size, and JAK2V617F allele burden. Count variables extracted included splenomegaly, pruritus, thrombosis, hemorrhage, post-PV transformation to myelofibrosis (MF), and post-PV transformation to acute myeloid leukemia (AML). The mean and standard deviation of the continuous variables were extracted. Correlation coefficients and sample sizes were extracted where Pearson or Spearman correlation tests were performed. The following information was also extracted: surname of first author, year of publication, country of study site, sample size, source of DNA, JAK2V617F quantification method, sample collection time point, JAK2V617F allele burden data presentation, and applied statistical methods.
Study risk of bias assessment
Two authors (JLC & LHY) independently evaluated the quality of studies using critical appraisal checklists from the Joanna Briggs Institute (JBI) [11]. The criteria included the following items: (1) Were the criteria for inclusion in the sample clearly defined?; (2) Were the study subjects and the setting described in detail?; (3) Was the exposure measured in a valid and reliable way?; (4) Were objective, standard criteria used for measurement of the condition?; (5) Were confounding factors identified?; (6) Were strategies to deal with confounding factors stated?; (7) Were the outcomes measured in a valid and reliable way?; and (8) Was appropriate statistical analysis used? Each item received a response of "Yes," "No," or "Unclear," corresponding to 1, 0, or 0 points, respectively. Studies consistent between the two authors with fewer than three items marked as "No" or "Unclear" were included in the systematic review and meta-analysis. Disagreements between authors were resolved through discussion or by involving a third author if needed.
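The inclusion rule reduces to a simple count over the eight checklist items; a minimal sketch (with made-up responses) is shown below.

# Sketch of the JBI-based inclusion rule described above: a study is
# retained when fewer than three items are "No" or "Unclear".
def include_study(responses: list[str]) -> bool:
    assert len(responses) == 8  # one response per checklist item
    flagged = sum(r in ("No", "Unclear") for r in responses)
    return flagged < 3

example = ["Yes", "Yes", "Yes", "Unclear", "Yes", "No", "Yes", "Yes"]
print(include_study(example))  # True: only two flagged items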
Effect measures and synthesis methods
Qualitative descriptions and summaries of evidence were provided, and meta-analyses were conducted using Comprehensive Meta-Analysis 3.0. Pooled odds ratios (OR), standardized mean differences (SMD), correlation coefficients, 95% confidence intervals (95%CI), and standard errors (SE) were calculated using the software.
Due to the diversity of the included data, we categorized them based on how JAK2V617F allele burden was presented: (a) JAK2V617F allele burden tested against another variable using correlation tests; (b) patients grouped by JAK2V617F allele burden level, with mean values and standard deviations of their clinical characteristics presented; or (c) patients grouped by JAK2V617F allele burden level, with count data presented for their clinical measurements (e.g. record of later MF transformation). For uniformity, continuous variables were converted into the same units (e.g. 10^9/ml).
All included studies following full-text review were tabulated (Table 1). Meta-analyses were depicted as forest plots (Figs. 2, 3 and 4). Random effects models were employed to address heterogeneity in all meta-analyses.
Mixed effects models were used for subgroup analyses where applicable. Measures of heterogeneity, including Cochran's Q, I², and Tau², were reported. Sensitivity meta-analyses were not conducted due to the limited number of publications. In cases where mean and standard deviation were unavailable, the range rule was applied for estimation.
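Two of the methods steps above are easy to make concrete: the range rule for estimating a missing standard deviation, and a random-effects pooling of correlations via Fisher's z. The sketch below uses the common DerSimonian-Laird estimator; the exact computations in Comprehensive Meta-Analysis may differ in detail, and the input values are illustrative, not study data.

# Hedged sketch of (1) the range rule and (2) DerSimonian-Laird
# random-effects pooling of correlations on the Fisher-z scale.
import numpy as np

def sd_from_range(minimum: float, maximum: float) -> float:
    """Range rule of thumb: SD is approximately range / 4."""
    return (maximum - minimum) / 4.0

def pool_correlations(r: np.ndarray, n: np.ndarray) -> tuple[float, float]:
    """Random-effects pooled correlation and Tau^2 (DerSimonian-Laird)."""
    z = np.arctanh(r)                        # Fisher transform of each r
    v = 1.0 / (n - 3)                        # within-study variance of z
    w = 1.0 / v
    z_fixed = np.sum(w * z) / np.sum(w)
    q = np.sum(w * (z - z_fixed) ** 2)       # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(r) - 1)) / c)  # between-study variance
    w_star = 1.0 / (v + tau2)                # random-effects weights
    z_re = np.sum(w_star * z) / np.sum(w_star)
    return float(np.tanh(z_re)), tau2        # back-transform to r

r_pooled, tau2 = pool_correlations(np.array([0.35, 0.22, 0.41]),
                                   np.array([120, 80, 150]))
print(f"pooled r = {r_pooled:.2f}, Tau^2 = {tau2:.3f}")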
Study selection, study characteristics, and risk of bias in studies
A flow diagram illustrating the screening process is presented in Fig. 1. Initially, 1,851 studies were identified. After removing duplicates and non-original articles, 985 studies remained. Following title and abstract screening, 120 studies were considered for full-text review. After reviewing the full text, 39 studies [9, …] provided evidence related to the association between JAK2V617F allele burden and clinical correlates (Table 1). Details on the excluded 74 records (1 duplicate) are presented in Supplemental Information 1.
A total of 21 studies were included in meta-analyses, spanning the years 2006 to 2021 and originating from 12 countries (Belgium, China, Denmark, France, Iran, Italy, Japan, Korea, Macedonia, Spain, Turkey, and the USA). DNA for JAK2V617F allele burden quantification was derived from various cell sources (e.g., bone marrow, granulocytes, and leukocytes) and assessed using different polymerase chain reaction (PCR) and sequencing techniques. Six studies collected samples exclusively at diagnosis, eight studies had a mix of samples at diagnosis and during follow-ups, one study had only follow-up samples, and six studies did not report the collection time point.
Thirteen studies employed correlation tests for JAK2V617F with clinical correlates, while ten studies categorized patients into low and high allele burden groups. Notably, despite the initial screening of clinical trials, relevant evidence for our objectives came from cross-sectional and cohort studies, as clinical trials did not investigate the association between clinical characteristics and allele burden. A summary of the risk of bias assessment using the JBI checklist is provided in Supplemental Information 2.
Meta-analyses of correlation
We examined the correlation of JAK2V617F allele burden with blood cell counts and spleen size. WBC and RBC were significantly and positively correlated with JAK2V617F allele burden, whereas PLT was not significantly correlated with JAK2V617F allele burden. In addition, spleen size was significantly and positively correlated with JAK2V617F allele burden.
Meta-analyses of standardized mean difference
We explored the SMD in clinical correlates between patients with high or low JAK2V617F allele burden. WBC, Hct, and lactate dehydrogenase were significantly higher in patients with higher JAK2V617F allele burden, whereas PLT was significantly lower in patients with higher JAK2V617F allele burden. For lactate dehydrogenase, the pooled cohorts indicated a significant SMD, with higher lactate dehydrogenase in patients with a higher JAK2V617F allele burden (Fig. 3D: SMD = 1.360; SE = 0.535; 95% CI = [0.311, 2.408]; p = 0.011; TauSq = 1.078). In addition, the SMDs of Hb and RBC (SI3) between high and low allele burden groups were not significantly different, involving five and two studies, respectively.
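The confidence intervals quoted in this section follow the usual Wald construction from the pooled estimate and its standard error (assuming a normal approximation; the software's exact computation may differ), which also gives a quick consistency check on the lactate dehydrogenase interval above.

# Wald-type 95% CI from a pooled estimate and its standard error.
def wald_ci(estimate: float, se: float, z: float = 1.96) -> tuple[float, float]:
    return estimate - z * se, estimate + z * se

lo, hi = wald_ci(1.360, 0.535)  # lactate dehydrogenase SMD quoted above
print(f"[{lo:.3f}, {hi:.3f}]")  # ~[0.311, 2.409], matching the quoted CI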
Qualitative analysis of patients categorized by clinical characteristics
Several studies examined the JAK2V617F allele burden in patients categorized by hemogram thresholds or the presence of a symptom/complication. Consistent with both of our meta-analyses on WBC, three studies identified higher JAK2V617F allele burdens in PV patients with elevated WBC [40,41,47]. As stated above, two studies echoed our finding of a negative association between PLT and JAK2V617F allele burden [40,47].
In terms of splenomegaly, three studies [26,40,41] supported our correlation meta-analysis in which spleen size was positively correlated with allele burden. One study [45] found that the risk ratio for splenomegaly and for spleen size over 15 cm was higher in patients with high allele burden than in those with low allele burden (51-75% and 75-100% vs. 0-25%); two studies [33,47] found similar trends between spleen size and allele burden.
For thrombosis, Vannucchi et al. [45] found a significantly higher risk ratio for total thrombosis at follow-up in patients with an allele burden of 75-100% compared to patients with an allele burden of 0-25%. Alvarez-Larran et al. [9] found that patients with a JAK2V617F allele burden greater than or equal to 50% had a higher incidence of thrombosis. Sazawal et al. [40] found that patients who had experienced a thrombosis event also had a higher allele burden; one study [24] found a similar trend, whereas another study [47] did not.
Finally, Vannucchi et al. [45] found a significantly higher risk ratio for pruritus in patients with 50-75% and 75-100% allele burden. Koren-Michowitz et al. [26] reported a trend toward higher allele burden with the presence of pruritus; however, this was not significant.
Meta-analyses of odds ratio
We investigated the odds ratio (OR) of developing symptoms or complications with higher JAK2V617F allele burden. We found that patients who had higher JAK2V617F allele burden also had a significantly greater OR for developing pruritus, splenomegaly, and thrombosis and for transformation to MF or AML.
Four cohorts (1,233 patients) reported a significant OR for pruritus, indicating that patients with higher JAK2V617F allele burden had a greater OR for developing pruritus (Fig. 4A: OR = 2.200; 95% CI = [1.512, 3.199]; p < 0.001; TauSq = 0.070). Six cohorts (1,388 patients) reported a significant OR for splenomegaly, indicating that patients with higher JAK2V617F allele burden had a greater OR for developing splenomegaly (Fig. 4B: OR = 2.133).
Fig. 4 Forest plots of meta-analyses of odds ratio of (a) pruritus, (b) splenomegaly, (c) thrombosis, (d) history of thrombosis, (e) myelofibrotic progression, (f) transformation to acute myeloid leukemia, by allele burden group (high allele burden vs. low allele burden group)
Interpretation
This review integrates data from 39 publications, encompassing approximately 5,462 patients. To the best of our knowledge, our study represents the first concerted effort to comprehensively evaluate and synthesize the existing literature on the association between JAK2V617F allele burden and clinical correlates in PV. Despite over 15 years since the initial discovery of this association, the difficulty of obtaining high-quality data may have impeded previous publications and systematic reviews on this subject. Our combined qualitative analysis and meta-analysis reveal a robust positive association between JAK2V617F allele burden and WBC, along with an increased risk of MF transformation. Additionally, positive associations were observed with Hct, RBC, pruritus, splenomegaly, thrombosis, and an increased risk of transformation to AML, while a negative association was noted with PLT.
Our study contributes insights into the association between JAK2V617F allele burden and various hematological parameters. The results unequivocally confirm a positive association between JAK2V617F allele burden and WBC in PV patients. However, relationships with RBC count, Hct, and Hb levels are less conclusively established. Our meta-analysis of correlation suggests a positive association with RBC, while our meta-analysis of SMD indicates a positive association with Hct, implying some evidence of a positive association between erythrocyte-related parameters and JAK2V617F allele burden. Moreover, the intriguing observation of a negative association between PLT and JAK2V617F allele burden suggests a potential shift from thrombopoiesis to myelopoiesis when JAK2V617F allele burden is elevated, warranting exploration into the biological processes influencing this phenomenon.
Our study explored the association between JAK2V617F allele burden and thrombosis, wherein a positive association was observed. Despite the few studies included in the meta-analysis, it is crucial to highlight that several independent studies, although they could not be synthesized in our meta-analysis, have also presented compelling evidence for a robust association between JAK2V617F allele burden and thrombosis [9,40,45]. Vannucchi et al. [45] categorized 173 patients into four distinct groups according to their JAK2V617F allele burden. They observed that patients with an allele burden of 75% or higher exhibited a significantly elevated risk of thrombosis during the follow-up period. However, due to the scarcity of studies segmenting patients into four groups based on JAK2V617F allele burden, a meta-analysis was not feasible. In a similar vein, Alvarez-Larran et al. [9] classified 163 patients into two groups based on their JAK2V617F allele burden. Their findings revealed that patients with an allele burden exceeding 50%, or those with fluctuating JAK2V617F allele burden, demonstrated a significantly increased incidence of thrombosis. Nevertheless, the absence of comparable studies assessing incidence rates precluded the possibility of conducting a meta-analysis. Additionally, Sazawal et al. [40] stratified 45 patients based on the occurrence of thrombosis events. They found that patients experiencing a thrombosis event had a significantly higher JAK2V617F allele burden compared to those without such events. However, the limited number of studies that classified patients based on the occurrence of thrombosis events rendered a meta-analysis unattainable.
Our study also delves into the association between JAK2V617F allele burden and symptomatic manifestations as well as disease progression. Concerning spleen size, despite a limited number of studies available for meta-analysis, additional studies [26,40,41] reported consistent results, affirming the positive correlation between JAK2V617F allele burden and splenomegaly. Similarly, pruritus gains additional validation from another study [26], which reinforces the association between pruritus and JAK2V617F allele burden. Furthermore, our study underscores a robust body of evidence linking a high JAK2V617F allele burden with an increased risk of MF transformation. This observation posits that elevated JAK2V617F allele burden serves as a predictor for MF transformation. Lastly, our study also observed some evidence of a positive association between a high JAK2V617F allele burden and an increased risk of AML transformation.
In addition to our data synthesis efforts, our investigation reviewed studies that presented valuable insights into the association between JAK2V617F allele burden and specific clinical parameters. Notably, a substantial number of studies focused on the relationship between JAK2V617F allele burden and splenomegaly, thrombosis, and pruritus, which could have provided further data on 597, 502, and 274 patients, respectively. The majority of these studies consistently reported a statistically significant positive association between JAK2V617F allele burden and the aforementioned clinical factors.
Several assumptions were employed to address heterogeneity during data synthesis. Firstly, heterogeneity arose from the diverse statistical methods used for the meta-analysis of correlation. For example, the correlation tests were assumed to be equivalent even though eleven studies used Spearman's correlation, four studies used Pearson's correlation, and two studies did not report the type of correlation test. Secondly, another source of heterogeneity in the meta-analyses of SMD and OR stemmed from the varying cut-off values for JAK2V617F allele burden. While the majority of studies divided the patients using a 50% JAK2V617F allele burden as a cut-off, one study used 58% [28] and another used 70% [33]. Although a 50% cut-off represents the separation of heterozygosity and homozygosity, using a higher cut-off could better reflect the true impact of JAK2V617F allele burden on clinical correlates, such as a more accurate representation of the risk of thrombosis. Thirdly, the inclusion years in our systematic review spanned from 2007 to 2022, during which various diagnostic criteria for PV were utilized, including the Polycythemia Vera Study Group (PVSG), World Health Organization (WHO) 2008, and WHO 2016 classifications. Consequently, the criteria were not consistent across all studies, and it was assumed that patients diagnosed under different criteria were similar. Lastly, there were differences among studies in the biological samples collected and the methods used to quantify JAK2V617F allele burden.
Implications
Based on our findings, we propose several suggestions for future research aiming to investigate the association between JAK2V617F allele burden and clinical correlates. Firstly, detailing the specific time point of sample collection (e.g., at diagnosis, before treatment, or after treatment) is crucial information to include, given the potential impact of certain treatments on JAK2V617F allele burden and clinical correlates. Particularly for measurements related to erythrocytes, it is essential to explicitly report Hb, Hct, and RBC values obtained without recent phlebotomy, preferably within a three-month timeframe. Attention to the timing of blood sample collection relative to treatment regimens is critical for a more accurate assessment of the relationship between JAK2V617F allele burden and hematological parameters across the entire hemogram. Secondly, considering the heterogeneity in study design, data presentation, and statistical methods, the limited amount of data available for synthesis underscores the need for improved feasibility in future meta-analyses. We recommend that researchers consider providing additional data or statistical analyses as supplemental information. Alternatively, utilizing data repositories for sharing relevant datasets could enhance collaboration and facilitate more comprehensive meta-analyses.
This review highlights the varying degrees of association between JAK2V617F allele burden and clinical correlates. While some might intuitively infer that reducing JAK2V617F allele burden could benefit the status and prognosis of patients, others may argue that a mere observation of association does not necessarily imply a call for action. Nevertheless, there are preliminary data suggesting the potential benefits of reducing JAK2V617F allele burden. For instance, a retrospective study involving 381 MPN patients treated with interferon revealed that approximately 50% of patients who achieved complete hematological response and maintained a JAK2V617F allele burden below 10% did not have a relapse for at least ten years after discontinuing interferon treatment [125]. A Phase II clinical trial, MAJIC-PV, comparing ruxolitinib with the best available therapy in patients with PV who are resistant or intolerant to hydroxyurea, demonstrated a higher frequency of molecular responses in those treated with ruxolitinib [126]. Additionally, indirect evidence from molecular analyses and clinical correlations indicates that patients achieving a partial molecular response exhibit improved outcomes in terms of progression-free survival, event-free survival, and overall survival [126]. Another indirect piece of evidence comes from the Continuation-PV study, where patients receiving ropeginterferon alfa-2b demonstrated a general reduction in JAK2V617F allele burden and experienced fewer thromboembolic events, less disease progression, and fewer deaths [127]. These findings suggest that novel therapeutic interventions aimed at lowering allele burden could improve not only the hemogram but could also manage symptoms, reduce thrombosis risks, and reduce the risks of disease progression [128]. Of these, reducing the risks of thrombosis and disease progression is especially important from the perspective of patients [129]. However, a real-world nationwide study in Taiwan showed that around 48.8% of low-risk and 26.1% of high-risk PV patients were not undergoing active treatment [130]. Additionally, another study in the United States based on a veteran database reported that 53% of patients were not receiving active treatment [131]. As there is some evidence showing that JAK2V617F allele burden may progressively increase with age [24,27,40,43,44], patients without active treatment or monitoring of JAK2V617F allele burden may be prone to worse outcomes. The rate of clonal expansion exhibits considerable variability among individuals. While some of this variation may be intrinsic, it may also be linked to the type of treatment received by the patient. This relationship underscores the intricate interplay between therapeutic interventions and cellular responses. However, a significant limitation in the current research landscape is the predominance of studies focusing solely on single time point measurements. This methodological constraint restricts the depth of understanding regarding the dynamic nature of clonal expansion over time and its interactions with various treatments. Further research on the clinical value of the long-term monitoring of JAK2V617F allele burden could prove valuable in inferring prognosis, guiding monitoring strategies, and designing treatment plans.
Limitations of evidence and review process
One of the primary constraints in our work stems from the heterogeneity that impeded data synthesis. Despite the identification of 39 studies examining the relationship between the JAK2V617F allele burden and clinical correlates, the varied methods of data presentation and statistical analyses prevented the execution of high-quality meta-analyses. For instance, we encountered 16 studies reporting data on allele burden and WBC, of which only 9 could be incorporated into a correlation meta-analysis. Among the remaining 7 papers, data were presented in diverse formats, such as the stratification of data into two to five allele burden groups, and values reported as mean only, mean and range, median and range, median and 95%CI, and mean ± standard deviation. Unfortunately, the inadequate homogeneity across the available studies hindered the synthesis of data, thereby impeding the extraction of conclusive insights.
The reliability of hemogram data may be susceptible to bias owing to the influence of clinical treatments. Among the parameters relevant to erythrocyte count, the most significant variability may arise from phlebotomy. Furthermore, careful consideration is advised when interpreting blood samples obtained during routine check-ups post-diagnosis, as they may be susceptible to underestimation attributed to ongoing treatments such as phlebotomy or the administration of cytoreductive agents. For example, treatment with interferon alpha has been demonstrated to effectively diminish the JAK2V617F allele burden, as evidenced by studies from Ianotto et al. [123] and Kiladjian et al. [124]. This reduction in allele burden may subsequently impact the risks associated with thrombosis, myelofibrotic transformation, and leukemic transformation. Consequently, these treatments influence not just the JAK2V617F allele burden but also bear significant implications for the long-term outcomes of patients. This complexity adds a layer of challenge to the interpretation of data in this context.
This systematic review and its protocol were registered in the international prospective register of systematic reviews (PROSPERO) under the registration number: CRD42024219346.
This review highlights the varying degrees of association between JAK2V617F allele burden and clinical correlates. While some might intuitively infer that reducing JAK2V617F allele burden could benefit the status and prognosis of patients, others may argue that a mere observation of association does not necessarily imply a call for action. Nevertheless, there are preliminary data suggesting the potential benefits of reducing JAK2V617F allele burden. For instance, a retrospective study involving 381 MPN patients treated with interferon revealed that approximately 50% of patients who achieved complete hematological response and maintained a JAK2V617F allele burden below 10% did not have a relapse for at least ten years after discontinuing interferon treatment [125]. A Phase II clinical trial, MAJIC-PV, comparing ruxolitinib with the best available therapy in patients with PV who are resistant or intolerant to hydroxyurea, demonstrated a higher frequency of molecular responses in those treated with ruxolitinib [126]. Additionally, indirect evidence from molecular analyses and clinical correlations indicates that patients achieving a partial molecular response exhibit improved outcomes in terms of progression-free survival, event-free survival, and overall survival [126]. Another indirect piece of evidence comes from the Continuation-PV study, where patients receiving ropeginterferon alfa-2b demonstrated a general reduction in JAK2V617F allele burden and experienced fewer thromboembolic events, less disease progression, and fewer deaths [127]. These findings suggest that novel therapeutic interventions aimed at lowering allele burden could improve not only the hemogram but also manage symptoms, reduce thrombosis risks, and reduce risks of disease progression [128]. Of these, reducing the risks of thrombosis and disease progression is especially important from the perspective of patients [129]. However, a real-world nationwide study in Taiwan showed that around 48.8% of low-risk and 26.1% of high-risk PV patients were not undergoing active treatment [130]. Additionally, another study in the United States based on a veteran database reported that 53% of patients were not receiving active treatment [131]. As there is some evidence showing that JAK2V617F allele burden may progressively increase with age [24,27,40,43,44], patients without active treatment or monitoring of JAK2V617F allele burden may be prone to worse outcomes. The rate of clonal expansion exhibits considerable variability among individuals. While some of this variation may be intrinsic, it may also be linked to the type of treatment received by the patient. This relationship underscores the intricate interplay between therapeutic interventions and cellular responses.
Fig. 1 Preferred Reporting Items for Systematic Reviews and Meta-analysis (PRISMA) flow diagram describing the identification, screening, and inclusion process
Table 1
Study details on JAK2V617F allele burden measurement and data presentation with clinical correlates | 2024-04-25T06:16:09.420Z | 2024-04-23T00:00:00.000 | {
"year": 2024,
"sha1": "7cc4091fdbce5e62f219735c23ac39e3d2e20cb5",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00277-024-05754-4.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "ed51a90747e6e99e2929161e8cfc09e62fc650fb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
196507154 | pes2o/s2orc | v3-fos-license | Effect of Refresh Plus® preservative-free lubricant eyedrops on tear ferning patterns in dry eye and normal eye subjects
Purpose To evaluate the tear ferning patterns in dry eye and normal eye subjects in the 3 hours following application of Refresh Plus® preservative-free lubricant eyedrops. Methods Thirty men with dry eye (mean age 22.14±2.34 years) and 30 age-matched men with normal eyes (mean age 23.91±3.24 years) were enrolled. Eyes were classified as normal or dry based on their Ocular Surface Disease Index score, tear meniscus height, and noninvasive tear breakup time. The tear ferning test was performed before and 30, 60, 120, and 180 minutes after application of a Refresh Plus eyedrop into the right eye in each subject. Results There was a significant change in tear ferning grade after application of Refresh Plus eyedrops (P=0.02, Wilcoxon test) in the group with dry eye, but not in the normal eye group (P=0.19, Wilcoxon test). The correlation of tear ferning grade was moderate (r=0.484, P=0.049) at 60 minutes after application of the eyedrops and strong at 120 minutes (r=0.560, P=0.019) and 180 minutes (r=0.726, P=0.001). There was also a strong correlation (r=0.865, P=0.001) between tear ferning grades obtained 120 and 180 minutes after application. In the normal eye group, there was a moderate (r=0.407, P=0.029) correlation between tear ferning grades obtained before and 60 minutes after application of the eyedrops. There was also a strong correlation (r=0.532, P=0.003) between tear ferning grades obtained 120 and 180 minutes after application. Conclusion Tear ferning patterns improved significantly after application of Refresh Plus preservative-free lubricant eyedrops in subjects with dry eye. Artificial tears containing sodium carboxymethylcellulose, such as Refresh Plus, can be used to improve tear ferning patterns in dry eye for at least 3 hours.
Introduction
Tear film is a thin moist layer that covers and moisturizes the surface of the cornea. 1 Upon blinking, the tear film spreads over the cornea to avoid drying of the ocular surface. 2 Various factors affect the rate of spontaneous eye blinking, including stability of the tear film, thickness of the lipid layer, composition of meibum produced by the meibomian gland, and the eye-drying rate. 3,4 Blink rate has been found to depend on age, being higher in adults up to the age of 25 years than in infants. 5,6 In adults, the meibum is less saturated, more oily, less ordered, and contains flexible chains of lipids, whereas in infants the meibum is highly saturated, stiff, and contains waxy lipids. 7 Furthermore, there is more meibum in the lipid reservoir in adults than in infants. 8 This explains why adults up to the age of 25 years have a higher rate of spontaneous eye blinking (15-30 blinks per minute) than infants (approximately four blinks per minute). 9 Dry eye is a common ocular surface disorder that develops because of instability within the tear film, and causes uncomfortable symptoms in 5%-50% of the population worldwide. 10 Dry eye can be caused by a high tear-evaporation rate, a deficiency in tear secretion, or both. 10 Furthermore, a change in osmolarity of the ocular surface may occur, which can lead to apoptosis of cells on the epithelial surface and consequent loss of goblet cells that produce mucin. 11,12 The symptoms of dry eye have a considerable impact on quality of life. 13 Therefore, it is important to detect this condition as early as possible for optimal management. 14 Given that eye dryness is complex and affected by a number of parameters, a combination of diagnostic tests, along with dry-eye questionnaires, such as the Ocular Surface Disease Index (OSDI), is needed for diagnosis. For example, tear-evaporation rate, 15 Schirmer's test, 16 phenol red thread, 16 tear meniscus height (TMH), 17 noninvasive tear breakup time (NITBUT), 18 osmolarity, 19 and the tear ferning (TF) 20,21 test can be used to detect dry eye. Artificial tears can be used to lubricate the tear film and relieve some of the uncomfortable symptoms associated with moderate dry eye. 22,23 Previous studies have shown that the TF test can be used to assess the quality of tears in subjects with dry eye, including smokers, 24 those with diabetes, 25 subjects with normal eyes after consumption of a single dose of hot green tea 26 or peppermint, 27 and after oral vitamin A supplementation for 3 consecutive days. 28 The aim of the present study was to compare TF patterns in the 3 hours following application of Refresh Plus® (Allergan, Marlow, UK) preservative-free lubricant eyedrops in dry- and normal-eye subjects.
Methods Subjects
Thirty men with dry eye (mean age 22.14±2.34 years) and 30 age-matched men with normal eyes (mean age 23.91±3.24 years) were enrolled. A slit lamp was used to examine abnormalities of the eyelids, eyelashes, conjunctiva, cornea, and iris. Subjects with eyelid or lash abnormalities, those who had recently undergone ocular surgery or were receiving ophthalmic medication, contact-lens wearers, smokers, and patients with diabetes, anemia, or a thyroid disorder were excluded. OSDI and TMH scores and NITBUT test results were used to classify subjects as having or not having dry eye. The study was approved by the College of Applied Medical Sciences Ethics Committee, King Saud University and performed according to the tenets of the Declaration of Helsinki. Written informed consent was obtained from each study participant before commencement of the research. All tests were performed by the same examiner in an environment that was controlled for humidity (<40%) and temperature (23°C). 25 After subjects had completed the OSDI questionnaire, the TMH, NITBUT, and TF tests were performed, with a 10-minute interval between each test.
Refresh Plus preservative-free lubricant eyedrops
Refresh Plus preservative-free lubricant eyedrops (30 single-use containers each containing 0.4 mL; Allergan, Marlow, UK) are aqueous-based artificial tears. The drops contain sodium carboxymethylcellulose (CMC) 0.5% as the active ingredient, as well as sodium lactate and various electrolytes, including sodium chloride, potassium chloride, magnesium chloride, and calcium chloride. They can be used to relieve symptoms of dry eye, ie, irritation, burning, and discomfort. 22
Ocular Surface Disease Index
The OSDI was completed by each study participant first; a score <13 was considered to indicate a normal eye. 29
TMH and NITBUT tests
The TMH and NITBUT tests were performed in each subject's right eye after completion of the OSDI, with a 5-minute interval between the two tests. Both tests were performed using a Keratograph 4 system (Oculus, Wetzlar, Germany). Fluorescein was added to the subject's eye. For the NITBUT test, the subject was asked to refrain from blinking while the tear film was observed. 30 A yellow barrier filter was used to enhance the visibility of tear-film breakup. TBUT was recorded as the number of seconds that elapsed between the last blink and the appearance of the first dry spot in the tear film. The inferior TMH images were captured and measured perpendicularly to the lid margin at the central point relative to the pupil center using an integrated ruler. Both tests were performed three times and average measurements recorded. The eye was defined as normal if the tear height in the lower lid was >0.2 mm for TMH and the TBUT was >10 seconds for NITBUT measurements.
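As an illustration of the classification rule described above, the sketch below applies the three cutoffs (OSDI <13, TMH >0.2 mm, NITBUT >10 s); the assumption that all three criteria must be in the normal range is ours, since the text does not state the combination logic explicitly.

```python
# Minimal sketch of the normal/dry classification; the requirement that all
# three criteria be normal is an assumption, not stated in the text.
def classify_eye(osdi_score: float, tmh_mm: float, nitbut_s: float) -> str:
    is_normal = osdi_score < 13 and tmh_mm > 0.2 and nitbut_s > 10
    return "normal" if is_normal else "dry"

print(classify_eye(osdi_score=8.0, tmh_mm=0.25, nitbut_s=12.0))  # -> normal
print(classify_eye(osdi_score=25.0, tmh_mm=0.15, nitbut_s=6.0))  # -> dry
```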
Tear ferning test
A tear sample (1 μL) was collected from the lower meniscus of the right eye in each subject using a glass capillary tube (10 μL; Sigma-Aldrich; St Louis, MO, USA). The tear sample was dried at 23°C for 10 minutes at a humidity of <40%. A DP72 microscope (10× magnification; Olympus, Tokyo, Japan) was used to observe and capture the tear ferns. Each TF pattern was graded according to the 5-point TF-grading scale using increments of 0.1. 21 The TF test was repeated for each subject at 30, 60, 120, and 180 minutes after application of the artificial eyedrops.
Results
Data collected from all tests were found not to be normally distributed (P<0.05, Kolmogorov-Smirnov test); therefore, the median (IQR) was used to represent the average values. Median (IQR) scores for the OSDI and TMH, NITBUT, and TF tests in the dry- and normal-eye groups are shown in Table 1. There were significant differences (P<0.05, Wilcoxon test) in the OSDI scores and NITBUT measurements between dry- and normal-eye subjects, but no significant between-group differences for TMH scores (P>0.05, Wilcoxon test). In the dry-eye group, there were significant differences (P<0.05, Wilcoxon test) in TF grades obtained at the five different time points after administration of the eyedrops, but not in the normal-eye group (P>0.05, Wilcoxon test). Examples of the five TF images obtained from dry- and normal-eye subjects before and 30-180 minutes after application of artificial tears are shown in Figure 1. Box plots for the TF scores on the tests are shown side by side for the dry- and normal-eye groups in Figure 2.
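For readers who wish to reproduce this style of analysis, the sketch below runs the same sequence of tests with SciPy on placeholder arrays (not the study's data); Spearman's rho is shown as one rank-based option, since the paper does not name its correlation method.

```python
# Minimal sketch of the test sequence: normality check, rank-based paired
# comparison, and correlation between time points. Arrays are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tf_before = rng.uniform(1.0, 4.0, 30)                          # TF grades at baseline
tf_after = np.clip(tf_before - rng.uniform(0, 1.5, 30), 0, 4)  # 60 min later

# Normality: one-sample KS test against a normal fitted to the sample
ks_p = stats.kstest(tf_before, "norm",
                    args=(tf_before.mean(), tf_before.std(ddof=1))).pvalue

# Non-normal data: summarize as median (IQR), compare with rank-based tests
q1, q3 = np.percentile(tf_before, [25, 75])
w_p = stats.wilcoxon(tf_before, tf_after).pvalue               # paired comparison
rho, r_p = stats.spearmanr(tf_before, tf_after)                # rank correlation

print(f"KS p={ks_p:.3f}; Wilcoxon p={w_p:.3f}; rho={rho:.3f} (p={r_p:.3f})")
```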
Correlations between scores from the OSDI and TMH, NITBUT, and TF tests in the dry- and normal-eye groups are shown in Tables 2 and 3, respectively. In the group of subjects with dry eye, the correlation between the TF grades obtained before and after application of the eyedrops was moderate (r=0.484, P=0.049) at 60 minutes and strong at 120 minutes (r=0.560, P=0.019) and 180 minutes (r=0.726, P=0.001). There was a strong correlation (r=0.865, P=0.001) between TF grades obtained 120 and 180 minutes after application of the eyedrops. There was also a strong negative correlation (r=-0.542, P=0.025) between OSDI scores and TF grades obtained 60 minutes after application.
For normal eye subjects, there was a moderate correlation (r=0.407, P=0.029) between TF grades obtained before and 60 minutes after application of the eyedrops and a strong correlation (r=0.532, P=0.003) between those obtained 120 and 180 minutes after application. Furthermore, there was a moderate correlation (r=0.425, P=0.022) between TMH and NITBUT values.
Discussion
Dry eye is a common ocular problem that causes various uncomfortable symptoms. Artificial tears are the first choice for management of symptoms of dry eye. Indeed, most of the ocular discomfort, ie, inflammation, burning, itching, and dryness, can be relieved by the use of eye lubricants. 32 Various types of eye lubricants with different pH, viscosity, physical, and chemical properties are available. Lubricants with a certain pH might be suitable for some but not all individuals with dry eye. 33 Furthermore, artificial tears contain different active ingredients, preservatives, electrolytes, and polymers in varying concentrations. 22 The different types of artificial tears also vary in their mode of action. 32 For example, eyedrops containing hydrogel in borate solution tend to create a viscous matrix within the eye in which the comfort effect lasts longer when compared with other types of artificial tears. 34 Another mechanism involves augmentation of some tear components and improvement in the thickness of the lipid layer. 35 In the present study, Refresh Plus preservative-free lubricant eyedrops were used to improve the TF grade and decrease the symptoms of dry eye. These drops contain CMC 0.5% and have a pH of 6.5 and an osmolarity of 276 mmol/kg. 32 There was a significant (P<0.05) improvement in TF grade obtained in subjects with dry eye after application of a single drop of these artificial tears that lasted for at least 3 hours. Lubricants containing CMC are known to improve the stability of tear film and increase the density of goblet cells. 36,37 For example, Optive eyedrops, which contain sodium CMC 0.5% and glycerin 0.9%, can be used to relieve the symptoms of eye dryness. 38,39 High-viscosity or gel-like artificial eyedrops are preferred by many patients with dry eye, because they have a longer ocular residence time. 40 It has been reported that Refresh Plus eyedrops can significantly reduce the epithelial defects produced during laser in situ keratomileusis (P=0.020). 41 Another study showed that use of Refresh Plus eyedrops led to significantly lower average ocular surface-staining scores following laser in situ keratomileusis in patients with myopia when compared with those obtained by Bion tears (P=0.015). 42 Eyedrops containing sodium hyaluronate 0.1%-0.3% have been found to be more effective than saline for reducing the symptoms of dry eye and improving NITBUT scores in subjects with moderate eye dryness. 43 Insertion of a cross-linked hyaluronic acid gel (0.2 mL) in the lower eyelid led to improvements in corneal fluorescein staining, increased TBUT scores, and improved Schirmer's test results in subjects with dry eye. 44 A combination of artificial tears containing a CMC salt and hyaluronic acid was found to be more effective in the management of symptoms of dry eye than eyedrops containing either ingredient alone. [45][46][47] This combination led to high shear viscosity and reduced stickiness and blur during blinking. This study had some limitations, including a limited sample size and inclusion of only male subjects. Furthermore, the effect of artificial tears was observed for only 3 hours and only one type of artificial tears was used. Moreover, the possibility of a confounding effect of environmental factors, such as sunlight and extremes of humidity and temperature, immediately prior to taking part in the study cannot be excluded. Therefore, a further study is needed to test the longer-term effects of various types of artificial tears on tear film in a larger number of subjects with and without dry eye.
Figure 1 TF images obtained at TF0, TF30, TF60, TF120, and TF180 in (A) a dry-eye subject and (B) a normal-eye subject
Table 2 Correlations between OSDI scores and TMH, NITBUT, and TF test results in subjects with dry eye
Conclusion
TF patterns obtained from subjects with dry eye improved significantly after application of Refresh Plus preservative-free lubricant eyedrops. There was some improvement in the quality of TF after application of artificial tears in normal eye subjects but the change was not statistically significant. Artificial tears containing sodium CMC, such as Refresh Plus, can be used to improve TF patterns in subjects with dry eye for at least 3 hours. | 2019-07-15T22:29:28.677Z | 2019-06-14T00:00:00.000 | {
"year": 2019,
"sha1": "ecfe09b811b852aba9dc31b5f8d7bac225c1842f",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=50531",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ecfe09b811b852aba9dc31b5f8d7bac225c1842f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235238797 | pes2o/s2orc | v3-fos-license | Study of risk factors associated with breast cancer: a case control study
INTRODUCTION
In many developing countries, common infectious diseases are still a major unresolved health problem, while emerging noncommunicable diseases related to diet and lifestyle have been increasing over the previous two decades; this has created a double burden of disease, impacting severely on already inadequate health services in these countries. 1 There is a gradual shift in the age pattern of mortality from younger to older ages, as acute infectious diseases are reduced and chronic degenerative diseases increase. In industrialized countries, the epidemiologic transition emerged towards the early 1900s, with an increasing trend of non-communicable diseases (NCD) that peaked by the mid-1950s, accompanied by a significant fall in infectious-disease morbidity and mortality. 2 In 1971, Omran described the epidemiologic transition as occurring in 3 stages: the first being the pre-transitional stage, the second the age of receding pandemics, and the third the age of degenerative and manmade diseases. 3 India has been described as "nations within a nation" because of the marked differences in the epidemiological transition levels (ETLs) between its states. 4 Kerala showed the fastest epidemiological transition in India, whereas the most populous Empowered Action Group state of Uttar Pradesh remained in the lowest ETL group. 5 The cancer types in India are also undergoing a transition, similar to Japan. 6 There has been a decline of cancers in India caused by infections, such as cervical, stomach, and penile cancer, and an increase in cancers associated with lifestyle and ageing, such as breast, colorectal, and prostate cancers. 7
Incidence of breast cancer
Breast cancer is the most frequently occurring cancer among women, affecting 2.1 million women each year, and also caused the highest number of cancer-related deaths among women in 2018. Breast cancer accounts for nearly 1 in 4 cancer cases among women. It is estimated that 627,000 women died from breast cancer, approximately 15% of all cancer deaths among women. While breast cancer rates are higher among women in more developed regions, rates are increasing in almost every region worldwide. Among females, breast cancer is the most commonly diagnosed cancer and the principal cause of cancer death; it is the most frequently diagnosed cancer in 154 out of 185 countries and the leading cause of cancer death in 100 countries. 8 India is witnessing more and more cases of breast cancer in the younger age group: almost 48% of patients are below 50 years of age, and increasing numbers of patients are in the age group of 25 to 40 years, which is an alarming trend. One-fourth of all female cancer cases are breast cancers. 9 Cancer of the breast has replaced cancer of the cervix as the leading site of cancer in all urban population-based cancer registries. Age-adjusted incidence rates vary by region across cancer registries. During the period 2012 to 2014, the age-adjusted incidence per 100,000 population was highest in Delhi at 41.0; among others, it was 37.9 in Chennai, 34.4 in Bangalore and 33.7 in Thiruvananthapuram. 10 Breast cancer is the most common cancer in women in Nagpur, accounting for 31.9% of all cancers in women there. 11 Breast cancer resulted in 15.1 million disability-adjusted life years for both sexes worldwide, 95% of which came from years of life lost and 5% from years lost due to disability. 12 Cancers are caused by mutations that may be inherited, induced by environmental factors, or result from deoxyribonucleic acid (DNA) replication errors. 13 Hereditary genetic factors, including a family history of breast cancer and inherited mutations in BRCA1, BRCA2, and other breast cancer susceptibility genes, account for 5% to 10% of breast cancer cases. Studies of migrant populations have shown that nonhereditary factors are mainly responsible for the observed international and interethnic differences in incidence: when low-risk populations migrate to high-risk regions, breast cancer incidence rates increase in successive generations. 14 Increased incidence rates in higher Human Development Index countries and in transitioned countries are attributed to a higher prevalence of known risk factors related to menstruation (early age at menarche, late age at menopause), reproduction (nulliparity, late age at first birth, and fewer children), exogenous hormone intake (oral contraceptive use and hormone replacement therapy) and anthropometry (greater weight, weight gain during adulthood, and body fat distribution), whereas breastfeeding is a known protective factor. 15 Most breast cancer cases are diagnosed in advanced stages, though screening tests and biomarkers for early detection are available. Early detection helps in preventing complications, improving quality of life and increasing survival. Hence, it is essential to identify women at risk so that they can benefit from avoiding adverse complications following disease initiation. 16
Considering the high burden of breast cancer in India and the various factors affecting its occurrence, this study was conducted to determine the risk factors that contribute to the development of breast cancer.
METHODS
The present study was conducted in a research institute and tertiary care centre for cancer in central India to study various risk factors associated with breast cancer. Histopathologically confirmed cases of breast cancer from the female surgery ward and the surgical outpatient department were selected. Controls were women accompanying patients attending the general outpatient departments of the rural and urban field practice areas of the study institute. One age-group-matched control, also matched for urban or rural place of residence, was selected for each case.
Study design
The study was a hospital-based case-control study.
Study period
The period of the study was from July 2017 to December 2019.
Study settings
The study was conducted at the research institute and tertiary care centre for cancer in central India.
Cases
Histopathologically confirmed cases of breast cancer were taken from the female surgery ward and the surgical outpatient department of the same hospital.
Controls
Controls were women without any palpable breast lump at the time of the study. They were women accompanying patients attending the general outpatient departments of the rural and urban field practice areas of the study institute.
Matching
One age-group-matched control, also matched for urban or rural place of residence, was selected for each case.
Inclusion criteria
The study included histopathologically confirmed female cases of breast cancer who were willing to participate and gave written informed consent.
Exclusion criteria
The study excluded bedridden patients, male patients with breast cancer, and patients suffering from another cancer in addition to breast cancer.
Sample size
A sample size of 96 cases was calculated using the odds ratio for a duration between age at menarche and age at first childbirth of more than 6 years, with a power of 80% and a confidence level of 95%, using data from the study by Balasubramaniam et al. 17 With a 1:1 ratio, the sample size for controls was also 96. Ultimately, 100 cases and 100 controls were included in the final study.
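For illustration, the sketch below implements a standard unmatched case-control sample-size formula of the kind used here (a Fleiss-type normal approximation); the odds ratio and control exposure prevalence are hypothetical placeholders, since the study's actual inputs were taken from Balasubramaniam et al.

```python
# Minimal sketch: sample size per group for an unmatched 1:1 case-control
# study (Fleiss-type normal approximation, no continuity correction).
import math
from scipy.stats import norm

def case_control_n(odds_ratio, p0, alpha=0.05, power=0.80):
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p1 = odds_ratio * p0 / (1 + p0 * (odds_ratio - 1))  # exposure prevalence in cases
    p_bar = (p0 + p1) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p0 * (1 - p0))) ** 2
    return math.ceil(num / (p1 - p0) ** 2)

# Hypothetical inputs; the study's actual OR and p0 are not reproduced here.
print(case_control_n(odds_ratio=2.5, p0=0.30))
```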
The study subjects were interviewed with a pre-tested interview schedule after obtaining informed consent. The presence of a female attendant was ensured during the interview of each subject. Variables studied were marital status, education, occupation and socioeconomic status, present and past history of medical illness and personal habits, age at menarche, age at birth of first child, difference between age at menarche and birth of first child, parity, duration of breast feeding, age at menopause, history of abortion, family history of breast cancer, history of benign breast condition, use of oral contraceptive pills, use of hormonal replacement therapy, history of radiation exposure during thelarche, body mass index, waist-to-hip ratio and dietary habits. Controls were enrolled after the purpose of the study and their role in it had been explained to them in detail.
This study was done after getting clearance from Institutional Ethics committee of Indira Gandhi Government Medical College and General Hospital, Nagpur.
Standard definitions were used for data collection. The economic status of an individual was determined by B. G. Prasad's classification based on per-capita income and the consumer price index as of August 2019. 19 Body mass index (BMI) was classified according to the cut-off values for Asian populations given by the World Health Organization (WHO) expert consultation. 20 Statistical analysis was done using Microsoft Office Excel 2013, Epi Info 7.1.4 (2014), and STATA 13.0 (2013). Mean and standard deviation were used to summarize data. The chi-square test, odds ratio and logistic regression (backward stepwise method) were used to identify and quantify risk. A p value less than 0.05 was taken as statistically significant.
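As a concrete illustration of the univariate step, the sketch below computes a chi-square test and an odds ratio with a Woolf 95% confidence interval from a 2x2 exposure table; the counts are hypothetical placeholders, not the study's data.

```python
# Minimal sketch: chi-square test and OR with 95% CI from a 2x2 table.
import math
from scipy.stats import chi2_contingency

a, b = 40, 60   # cases: exposed, unexposed (hypothetical counts)
c, d = 25, 75   # controls: exposed, unexposed (hypothetical counts)

chi2, p, dof, _ = chi2_contingency([[a, b], [c, d]])

or_hat = (a * d) / (b * c)
se_ln = math.sqrt(1/a + 1/b + 1/c + 1/d)          # Woolf's method
lo = math.exp(math.log(or_hat) - 1.96 * se_ln)
hi = math.exp(math.log(or_hat) + 1.96 * se_ln)
print(f"chi2 p={p:.3f}; OR={or_hat:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```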
RESULTS
Mean age of cases was 48.47±9.50 years and mean age of controls was 48.00±10.13 years; this difference in age between cases and controls was not statistically significant (p=0.73). Table 1 shows the sociodemographic characteristics of cases and controls. The age of women in both groups varied from 31 to 75 years. The maximum number of cases and controls was observed in the 41 to 50 years age group (50% of cases and controls). Most cases and controls were Hindu by religion (69% and 70%, respectively). Table 2 shows the univariate analysis of sociodemographic factors, past history of benign breast lesion, family history, and dietary and anthropometric risk factors associated with breast cancer. Women with education of graduation level or above were found to be at higher risk of developing breast cancer than women with education below graduation level (OR=3.58, 95% CI=1.12-11.41). The majority of cases (39 [39%]) and controls (33 [33%]) belonged to class IV of B. G. Prasad's socioeconomic classification. For analysis, classes I, II and III were grouped together as upper class and analysed against classes IV and V grouped together as lower class. Socioeconomic status was not found to be significantly associated with breast cancer (OR=0.92, 95% CI=0.52-1.60, p=0.88). Women with a history of benign breast disease were at higher risk of developing breast cancer compared to those without such a history (OR=2.68, 95% CI=1.16-6.20). Four percent of cases and 1% of controls had a family history of breast cancer in first-degree relatives; this difference was statistically not significant (OR=4.12, 95% CI=0.45-37.5). Type of diet (vegetarian versus non-vegetarian) was not found to be significantly associated with breast cancer. Body mass index was not found to be significantly associated with breast cancer, whereas women with a waist/hip ratio of more than 0.85 had a higher risk of developing breast cancer (OR=2.30, 95% CI=1.24-4.16). Table 3 shows the univariate analysis of reproductive risk factors associated with breast cancer.
The following reproductive risk factors were found significant on univariate analysis. Mean age at menarche was 12.64±1.59 years for cases and 13.96±1.44 years for controls; this difference was statistically significant (p=0.000). Women who attained menarche at 11 years or earlier had a higher risk of developing breast cancer than those who attained menarche after 11 years (OR=4.69, 95% CI=2.02-10.8). Three (3%) cases and two (2%) controls were nulliparous, and nulliparity was not found to be significantly associated with the risk of developing breast cancer. Mean age at first childbirth was 22.24±2.95 years for cases and 20.88±3.5 years for controls; this difference was statistically significant (p=0.004). Women with age at first childbirth of 21 years or more were at higher risk of developing breast cancer than women with age at first childbirth below 21 years (OR=2.62, 95% CI=1.44-4.77). Twelve (12%) cases and four (4%) controls reported no history of breastfeeding. Cumulative duration of breastfeeding was assessed from 0 to 60 months. A significant decreasing risk of breast cancer was noted as cumulative duration of breastfeeding increased (χ2 for linear trend, p=0.01). Women who breastfed for 24 months or less were at higher risk of developing breast cancer than women who breastfed for more than 24 months (OR=3.02, 95% CI=1.31-6.91). The proportions of cases and controls who had attained menopause were 52% and 56%, respectively. The majority of cases (23/52 [53.85%]) had attained menopause at 46 to 50 years and the majority of controls (26/56 [46.42%]) at 41 to 45 years. Mean age at menopause was 46.88±3.26 years for cases and 44.46±3.45 years for controls; controls attained menopause earlier than cases, and this difference was statistically significant (p=0.000). Women attaining menopause at an age of more than 45 years had a higher risk of breast cancer than those attaining menopause at 45 years or earlier (OR=2.71, 95% CI=1.24-5.91). It was also observed that the odds of breast cancer increased with increasing age at menopause, with a rising trend across the categories of 41 to 45 years, 46 to 50 years and more than 50 years compared with 40 years or less (χ2 for linear trend, p=0.004). Women with a reproductive life duration of more than 30 years were at higher risk of developing breast cancer than women with a reproductive life duration of 30 years or less (OR=9.50, 95% CI=3.49-25.8). Women with a difference between age at first childbirth and age at menarche of 7 to 12 years had a higher risk of developing breast cancer than those with a difference of 6 years or less (OR=6.42, 95% CI=3.24-12.9), as did those with a difference greater than 12 years (OR=6.44, 95% CI=2.31-17.9). Women with a history of abortion were at higher risk of developing breast cancer (OR=2.25, 95% CI=1.30-4.46). Induced abortion was studied against natural abortion but was not found to be significantly associated with breast cancer (OR=0.78, 95% CI=0.22-2.72).
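The chi-square for linear trend used above can be approximated, as in the sketch below, by testing the slope of an ordinal exposure score in a logistic model; the category counts are hypothetical placeholders, not the study's data.

```python
# Minimal sketch: trend across ordered exposure categories, approximated by
# the Wald test on an ordinal score in a logistic model. Counts are hypothetical.
import numpy as np
import statsmodels.api as sm

cases = np.array([4, 10, 23, 15])      # e.g. menopause <=40, 41-45, 46-50, >50
controls = np.array([10, 26, 15, 5])

score = np.repeat([0, 1, 2, 3], cases + controls).astype(float)
y = np.concatenate([np.repeat([1, 0], [c, k]) for c, k in zip(cases, controls)])

model = sm.Logit(y, sm.add_constant(score)).fit(disp=0)
print(f"trend p-value = {model.pvalues[1]:.4f}")
```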
Irregular menses were not found to be significantly associated with breast cancer (OR=1.27, 95% CI=0.69-2.36). Women consuming oral contraceptive pills for more than 6 months were at higher risk of developing breast cancer than women consuming OC pills for 6 months or less (OR=4.88, 95% CI=1.21-19.71). Participants were also asked about history of radiation exposure during thelarche and consumption of hormonal replacement therapy.
None of the study participants was able to recall any history of radiation exposure, and none had a history of consumption of hormonal replacement therapy. Table 4 gives the factors independently associated with breast cancer, as identified by backward stepwise logistic regression analysis of all factors that were significant on univariate analysis. As 48 [48%] cases and 44 [44%] controls had not attained menopause, age at menopause and duration of reproductive life could not be entered, since these were calculated only for those who had attained menopause. As 3 [3%] cases and 2 [2%] controls were nulliparous, age at first childbirth and the difference between age at first childbirth and age at menarche were not entered, and as 12 [12%] cases and 4 [4%] controls had never breastfed, breastfeeding duration was also not entered. Since 76 [76%] cases and 84 [84%] controls had never consumed oral contraceptive pills, duration of oral contraceptive pill consumption was not entered. A full multiple logistic regression model was prepared and the individual effect of each risk factor was studied with all other factors adjusted. The final model was then prepared by backward deletion of nonsignificant factors (p>0.05). On multiple logistic regression, age at menarche, history of abortion, and waist/hip ratio of more than 0.85 were found to be significantly associated with the risk of breast cancer.
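A minimal sketch of the backward stepwise procedure is given below, assuming the predictors are held in a pandas DataFrame and using statsmodels; the variable names are hypothetical and the removal threshold of p>0.05 follows the text.

```python
# Minimal sketch: backward stepwise logistic regression, dropping the least
# significant predictor until all remaining p-values are <= p_remove.
import statsmodels.api as sm

def backward_stepwise(X, y, p_remove=0.05):
    cols = list(X.columns)               # X: pandas DataFrame of predictors
    while cols:
        model = sm.Logit(y, sm.add_constant(X[cols])).fit(disp=0)
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= p_remove:     # every remaining predictor significant
            return model
        cols.remove(worst)               # delete the worst predictor and refit
    return None
```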
DISCUSSION
Breast cancer is showing an upward trend in Indian women, which is reflected in the cancer registries. 10 Changes in reproductive patterns, lifestyle, dietary patterns and demographic features may be contributing to this increase. This study was hence conducted to explore these risk factors. The study was feasible because the study institute had a tertiary care cancer specialty centre. All incident cases were chosen to avoid the survival bias associated with prevalent cases. Controls were chosen by random sampling and no major exclusion criteria were used, to avoid selection bias. The major confounders were identified from prior literature and their individual risks were quantified. Confounding was adjusted using multiple logistic regression methods. Our study population consisted of a hospital population, mainly from the adjoining urban areas of Nagpur. On univariate analysis, education of graduation level or above, waist/hip ratio more than 0.85, age at menarche of 11 years or less, age at first childbirth of 21 years or more, breastfeeding duration of 24 months or less, age at menopause more than 45 years, reproductive life duration of more than 30 years, difference between age at first childbirth and age at menarche of more than 6 years, history of abortion, history of benign breast disease, and OC pill consumption for more than 6 months were found to be associated with the risk of developing breast cancer. There was an insufficient gradient with respect to many variables; hence, we found large confidence intervals, even for significant variables. On multiple logistic regression, age at menarche, history of abortion, and waist/hip ratio were found to be significantly associated with the risk of breast cancer. We found a one-and-a-half-fold increased risk of breast cancer with a waist-to-hip ratio of more than 0.85, which is in accordance with the findings of Nagrani et al and Fei et al. 34,35 Though this factor was found significant, a bias of temporal causality might have affected it.
We found a one-and-a-half-fold increased risk of breast cancer with early age at menarche. We did not find any significant association between breast cancer and family history of breast cancer. One reason family history of breast cancer did not show up as a factor may be that the sample size was insufficient to detect this risk. Pakseresht et al and Montazeri et al also did not find family history of breast cancer to be associated with the risk of breast cancer. 21,37 The strength of this study was the care taken to control biases. Selection bias was controlled with careful selection of controls. Information was collected over a year by the same interviewer, with similar time allotted to both groups, hence avoiding information bias. Confounding was managed well with regression models. This was the first study in central India to look at all the possible risk factors for breast cancer in women, identified through an extensive search of the literature. The main limitation of this study was that it was hospital-based and may not be representative of the underlying population. Many of the factors, after adjustment, had wide confidence intervals even though they were statistically significant. This may be due to small numbers in the risk groups and needs careful interpretation.
Another limitation was that waist-to-hip ratio and BMI were measured at the time of the study, after the disease had already occurred. We tried to assess history of radiation exposure during the thelarche period among participants, but recall bias hindered its assessment. This study gives an insight into central obesity, but follow-up studies with detailed assessment of this aspect and of radiation exposure history during the thelarche period are needed.
CONCLUSION
Age at menarche of 11 years or less, history of abortion, and waist/hip ratio of more than 0.85 were found to be significantly associated with an increased risk of breast cancer. Screening of high-risk groups by yearly breast examination of nulliparous women and women with a previous history of biopsy for a benign breast lesion can help in early detection. Teaching self-breast examination to these individuals will be beneficial. Breastfeeding for a longer duration should be promoted. Increased awareness regarding physical activity for those at risk, as well as maintenance of a waist/hip ratio of less than 0.85, should be promoted. | 2021-05-28T18:36:16.537Z | 2021-04-27T00:00:00.000 | {
"year": 2021,
"sha1": "fa581af7ac1cc01310982860fa4d77e01c0b0c31",
"oa_license": null,
"oa_url": "https://www.ijcmph.com/index.php/ijcmph/article/download/7896/4925",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "fa581af7ac1cc01310982860fa4d77e01c0b0c31",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
16412967 | pes2o/s2orc | v3-fos-license | A growing animal model for neonatal repair of large diaphragmatic defects to evaluate patch function and outcome
Objectives We aimed to develop a more representative model for neonatal congenital diaphragmatic hernia repair in a large animal model, by creating a large defect in a fast-growing pup, using functional pulmonary and diaphragmatic read-outs. Background Grafts are increasingly used to repair congenital diaphragmatic hernia with the risk of local complications. Growing animal models have been used to test novel materials. Methods 6-week-old rabbits underwent fiberoptic intubation, left subcostal laparotomy and hemi-diaphragmatic excision (either nearly complete (n = 13) or 3*3cm (n = 9)) and primary closure (Gore-Tex patch). Survival was further increased by moving to laryngeal mask airway ventilation (n = 15). Sham-operated animals were used as controls (n = 6). Survivors (90 days) underwent chest X-ray (scoliosis) and measurements of maximum transdiaphragmatic pressure and breathing pattern (tidal volume, Pdi). Rates of herniation, lung histology and right hemi-diaphragmatic fiber cross-sectional area were measured. Results Rabbits surviving 90 days doubled their weight. Only one (8%) with a complete defect survived to 90 days. In the 3*3cm defect group all survived to 48 hours; however, seven (78%) died later (16–49 days) from respiratory failure secondary to tracheal stricture formation. Use of a laryngeal mask airway doubled 90-day survival, with one pup displaying herniation (17%). Cobb angle measurements, breathing pattern, and lung histology were comparable to sham. Under exertion, sham animals increased their maximum transdiaphragmatic pressure by 134%, compared to a 71% increase in patched animals (p<0.05). Patched animals had a compensatory increase in their right hemi-diaphragmatic fiber cross-sectional area (p<0.0001). Conclusions A primarily patched 3*3cm defect in growing rabbits, under laryngeal mask airway ventilation, enables adequate survival with normal lung function and reduced maximum transdiaphragmatic pressure compared to controls.
Introduction
Congenital diaphragmatic hernia (CDH) occurs in 2.6/10,000 live births [1]. Once the initial problems of ventilatory insufficiency and pulmonary hypertension due to pulmonary hypoplasia are managed, the defect requires surgical closure. This can be undertaken primarily for small defects, while in the case of larger defects a patch repair may be required. Prenatal lung size predicts postnatal outcome, which is closely linked to defect size [2][3][4]. As promising in utero treatments to accelerate lung growth emerge, a new population of survivors will require a more challenging defect closure [5]. Reported postnatal patch rates in prenatally treated fetuses are around 70%, whereas the rate is 23% in the population covered by the CDH registry [4,6]. Diaphragmatic patch repairs exhibit significant re-herniation rates [7][8][9]. Children undergoing patch repair, in comparison to those repaired primarily, report higher rates of scoliosis (10% vs. 0%), chest wall deformities (14% vs. 6%), and small bowel obstruction (12% vs. 6%), and poorer long-term pulmonary function testing [10,11]. The contribution of the nature of the patch to these problems is not well understood. None of the available materials mimic the complex muscular-tendinous structure of the diaphragm, which has important roles in both the respiratory system and the gastrointestinal tract [12]. To improve long-term outcomes, more biocompatible and functional diaphragmatic substitutes are needed to overcome problems such as recurrence, small bowel obstruction, adhesions and gastroesophageal reflux disease [10]. Most of these problems occur in the first two years of life. This mirrors the time of greatest change in the thorax; by two years the rounded pattern of infancy is superseded by the more ovoid cross-sectional shape of adults [13]. Growing animal models including rats, rabbits, lambs, pigs and dogs have been used to mimic the rapidly expanding rib cage of the human [14][15][16][17][18]. Large animal models, particularly the rabbit, have the advantage of being more similar to humans in terms of thoracic size, pulmonary function and growth rates [19]. Previously, we tested patch biocompatibility in a growing rabbit model of diaphragmatic hernia [15]. Rabbits are relatively inexpensive, easy to both house and handle, and very fast growing, reaching full size by approximately six months of age [20]. Also, experimental rib fusion at a young age induces scoliosis, a condition seen clinically in CDH [21].
As the diaphragm is an important respiratory muscle, a diaphragmatic replacement can be assessed by its impact on pulmonary function alongside its biomechanical function. In the experimental literature, pulmonary outcomes are not frequently reported. As far as we are aware, there are few comprehensive studies analyzing pulmonary function and correlating it to lung histology; hence it remains unclear if pulmonary function is of any value at later time-points [22,23]. Functional evaluation of diaphragmatic replacements includes electromyography (rats) and movement on videoscopic X-ray (dogs, rabbits, pigs) [24][25][26][27]. Transdiaphragmatic pressure measurements are used as an index of diaphragmatic force and contraction, and hence of the diaphragm's functional capacity as a respiratory muscle [28]. Transdiaphragmatic pressures have previously been used to assess diaphragmatic re-innervation in rabbits [29].
Herein, we used the young rabbit to develop a more representative animal model for neonatal CDH repair. Such a model could then later be used for studying novel implants. Firstly, we compared survival rates for different defect sizes and ventilation modes. Once survival was adequate, we focused on the development of more comprehensive and representative outcome measures, in a controlled study comparing Gore-Tex-repaired animals and sham controls.
Materials and methods
This experiment was approved by the institutional animal ethics committee (KU Leuven P112/2014). To obtain adequate survival, there were several phases of this experiment, described in detail below: i) tracheal intubation, laparotomy and subtotal left hemi-diaphragmatic (HD) excision, ii) tracheal intubation, laparotomy and a reduced left hemi-diaphragmatic defect size (3 × 3 cm), iii) laryngeal mask airway (LMA) intubation, laparotomy and a reduced defect size (3 × 3 cm). Finally, we compared sham-operated to Gore-Tex-implanted animals.
The left upper abdomen and thorax were shaved (Aesculap ISIS, Beringen, Belgium) and the animal secured in the operative position (Fig 1A). Under aseptic conditions, local anesthetic was infiltrated (1% lignocaine, AstraZeneca, Brussels, Belgium) at the incision site. Following a left subcostal transverse incision (Fig 1B), sharp and blunt dissection with diathermy (Force2, Valleylab, Louisville, USA) allowed access to the peritoneal cavity. The liver was gently retracted with a damp gauze, permitting division of the left triangular ligament (Fig 1C) with visualization of the left hemi-diaphragm. A diaphragmatic defect was created via excision of the musculotendinous left hemi-diaphragm. Initially, there was subtotal left hemi-diaphragmatic excision leaving a 1 cm antero-postero-lateral and 2 cm medial rim (n = 16; Fig 1D). Because of lower than expected survival rates, we moved to induction of a smaller yet standardized 3x3 cm posterior defect (Fig 1E; n = 11). To repair the defect, a dome-shaped 1-mm single-layer Gore-Tex Dual Patch (depending on size: 5x3.5 cm or 3.5x3.5 cm) was sutured in place using non-absorbable interrupted sutures (Prolene 4-0, Ethicon) (Fig 1F). The laparotomy was closed in two layers (4-0 Vicryl) followed by SC skin closure (4-0 Monocryl, Ethicon) (Fig 1G). The ketamine/propofol infusion was slowed and stopped during closure. Removal of the LMA was attempted only when the animal gained consciousness and began to reject the airway. Recovery was in a quiet and warm area with 4 L O2 via a nose cone. For three days, a single daily SC injection of meloxicam and enrofloxacin was administered for pain control. Animals were monitored daily; if they developed respiratory distress, O2 saturations and clinical examination were undertaken at least twice daily. Low saturations (<93%) despite oxygen therapy and treatment with antibiotics, or visible discomfort despite adequate analgesia, signified by poor oral intake, reduced movement and weight loss over several days, were considered humane endpoints. Subsequently, we created 6 sham-operated controls, who underwent a similar procedure via laparotomy and division of the posterior triangular ligament of the liver, with the abdomen packed with damp swabs for 20 mins and no further surgical intervention.
Outcomes at 90 days
Outcome measures were assessed at 90 days. If an animal died before 90 days, a post-mortem examination was undertaken to determine the cause, alongside harvesting of appropriate histology specimens.
Chest x-rays prior to harvest
24-48 hrs prior to outcome measurements, animals were sedated (IM ketamine 35 mg/kg with xylazine 5 mg/kg, as previously) to have posterior-anterior and left lateral chest x-rays (Embrace DM 1000 Mammography System; Agfa-Gevaert, Mortsel, Belgium: collimation size 24x29 cm, thickness 280 mm, voltage 28 kVp, engine load 65 and 80 mAs). Chest x-rays were assessed for re-herniation by an observer blinded to the experimental surgery. To determine the degree of scoliosis, the Cobb angle was measured between the 4th and 10th vertebrae in animals with no obvious curvature, or at the extremes of the spinal curvature.
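For reference, a Cobb angle of the kind measured here can be computed from two digitized endplate lines on the radiograph, as in the sketch below; the landmark coordinates are hypothetical placeholders.

```python
# Minimal sketch: Cobb angle between two endplate lines, each defined by
# two (x, y) landmark points digitized on the radiograph.
import math

def cobb_angle(p1, p2, q1, q2):
    a1 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])   # upper endplate direction
    a2 = math.atan2(q2[1] - q1[1], q2[0] - q1[0])   # lower endplate direction
    ang = abs(math.degrees(a1 - a2)) % 180.0
    return min(ang, 180.0 - ang)                    # acute angle between lines

# Hypothetical endplate landmarks for the 4th and 10th vertebrae:
print(f"Cobb angle = {cobb_angle((10, 40), (60, 38), (12, 160), (62, 172)):.1f} deg")
```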
Trans-diaphragmatic pressure measurement and breathing pattern
At 90 days, animals were sedated (IM ketamine/xylazine) and commenced on an IV infusion of ketamine/propofol (as above), with 4 L O2 given via a nose cone. They were placed supine with the neck extended, and shaved. A tracheostomy was performed in a stepwise fashion. 1% lignocaine was infiltrated 5 mm below the thyroid cartilage on the neck. A 2 cm horizontal neck incision was made and subplatysmal flaps were elevated with division of the strap muscles. The thyroid was gently separated or avoided and the pre-tracheal fascia opened. The tracheal cartilage was exposed from the cricoid cartilage to the 5th tracheal ring. A small vertical incision was made between the 2nd and 4th tracheal rings and a 3.0 endotracheal tube (cut short to 35 mm) was inserted. The tracheal tube was secured by passing two sutures (2-0 Vicryl) around the trachea. The nose cone was moved to cover the tracheal tube.
Measurement of transdiaphragmatic pressure (Pdi) required placement of a pneumotachograph (PTG) and intra-thoracic and intra-abdominal pressure catheters. The PTG enabled measurement of tidal volume (Vt) and airflow (ml/s); it was attached to a heater control unit (8411B, Hans Rudolph, Shawnee, KS, USA) connected to the end of the tracheal tube, and to a pressure transducer (Biopac MP150, Cerom, Paris, France). Two esophageal balloon catheters (5Ch, Cooper Surgical, Trumbull, USA) were advanced from the mouth into the stomach (approx. 30 cm) and attached to the pressure monitors, giving a positive signal. One catheter was retracted until the signal became negative, at which point it was assumed to be in the distal esophagus, reflecting thoracic pressure. The catheters were labeled esophageal (Peos) or abdominal (Pab) and secured with tape. Once the animal was in an established breathing pattern, a run of 5 consecutive breaths was recorded at rest. Then the ET tube (at the tracheostomy site) was occluded and 5 attempted breaths were recorded. Following these, the animal was euthanized with an IV injection of T61 (MSD, Brussels, Belgium).
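In terms of the derived quantities, Pdi is conventionally the abdominal (gastric) minus the esophageal pressure, and tidal volume is the time integral of pneumotachograph flow; the sketch below shows this arithmetic on placeholder signal arrays.

```python
# Minimal sketch: deriving Pdi and tidal volume from the recorded signals.
# The arrays and sampling rate are placeholders, not recorded data.
import numpy as np

fs = 1000.0                     # sampling rate, Hz (assumed)
peos = np.zeros(5000)           # esophageal (thoracic) pressure, cmH2O
pab = np.zeros(5000)            # abdominal (gastric) pressure, cmH2O
flow = np.zeros(5000)           # pneumotachograph airflow, ml/s

pdi = pab - peos                # transdiaphragmatic pressure, cmH2O
volume = np.cumsum(flow) / fs   # cumulative volume, ml; Vt = inspiratory swing
pdi_max = pdi.max()             # peak Pdi, e.g. during airway occlusion
```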
Histopathology
Animals were assessed for any signs of incision site breakdown (infection/dehiscence) before a midline laparotomy was performed. The abdominal cavity was opened and inspected for fluid collections, infection or herniation of the diaphragm. A left thoracic window was also created with removal of the anterior sections of the 4th to 6th ribs. The diaphragm was then explanted en bloc. Right hemi-diaphragmatic specimens were taken from the lateral muscular edge in both sham- and Gore-Tex-operated animals. The lungs and trachea were removed en bloc and inflated at 20 cm H2O with 4% neutral buffered formaldehyde (bFA); the trachea was tied with a knot and the specimen immersed in bFA for 24 hours. Random sections were made from the left superior lobe (upper and lower lobe) and from the left inferior lobe (upper and lower). In the right lung, a random section was taken from the superior and antero-inferior lobes.
Paraffin blocks were cut into 4 μm sections of the lung and right hemi-diaphragmatic specimens and stained with hematoxylin and eosin for airway morphometry and cross-sectional area, respectively. Microscopic quantification was done by a single observer (MPE) blinded to the experimental group, with a Zeiss light microscope (Axioskop, Carl Zeiss, Oberkochen, Germany) at a magnification of 200x. We assessed the left lower lobe as it is adjacent to the patch border. Each lower lobe was divided into 20 non-overlapping fields with three measurements taken: mean terminal bronchiolar density (MTBD), which is inversely proportional to the number of alveoli supplied by each bronchus; mean linear intercept (Lm), which is related to airspace size; and mean wall transection length (Lmw), which is an index of the thickness of the alveolar septa [31]. For right hemi-diaphragmatic specimens, the perimeters of at least 100 circular, randomly selected fibers were delineated and the cross-sectional area (CSA) was automatically calculated (Zen 2.3, Carl Zeiss) [32]. Fibers were then grouped by CSA (0-1000 μm2, 1000-2000 μm2, etc.), the percentage of fibers in each group was calculated relative to the total fiber count, and a histogram was created for Gore-Tex- and sham-operated animals.
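The CSA binning step can be expressed compactly as in the sketch below, which groups fiber areas into 1000 μm2 bins and reports each bin as a percentage of the total fiber count; the CSA values are hypothetical placeholders.

```python
# Minimal sketch: bin fiber cross-sectional areas into 1000 um^2 groups and
# express each bin as a percent of all measured fibers. Values are hypothetical.
import numpy as np

csa = np.random.default_rng(1).uniform(200, 6000, 120)   # >=100 fibers, um^2

bins = np.arange(0, csa.max() + 1000, 1000)              # 0-1000, 1000-2000, ...
counts, edges = np.histogram(csa, bins=bins)
percent = 100.0 * counts / counts.sum()

for lo, hi, p in zip(edges[:-1], edges[1:], percent):
    print(f"{lo:.0f}-{hi:.0f} um^2: {p:.1f}%")
```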
Statistics
All procedural information was documented as per the record-keeping guidelines [33]. Data were analyzed using Prism for Windows version 5.0 (GraphPad Software, San Diego, CA, USA). A power calculation based on our previous study (tensiometry results) suggested a repair group size of n = 6 would give 80% power (independent t-test) [15]. Data were checked for normality of distribution using a Kolmogorov-Smirnov test, then presented as mean with SD or median and IQR. Comparison between groups was done by unpaired Student's t-test or Mann-Whitney test. A p-value <0.05 was considered significant. Survival curves are presented as Kaplan-Meier graphs, with group comparisons using a Mantel-Cox test. To compare the CSA of fiber sizes, a two-way ANOVA with a Bonferroni multiple comparison test was used.
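As an illustration of the survival comparison, the sketch below fits a Kaplan-Meier curve and runs a Mantel-Cox (log-rank) test with the lifelines package; the durations and event indicators are hypothetical placeholders, not the experimental data.

```python
# Minimal sketch: Kaplan-Meier fit and Mantel-Cox (log-rank) comparison.
# Durations (days) and event flags (1 = death, 0 = censored) are hypothetical.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

days_a = [2, 16, 27, 40, 90, 90]
event_a = [1, 1, 1, 1, 0, 0]          # group A: patched (hypothetical)
days_b = [90, 90, 90, 90, 90, 90]
event_b = [0, 0, 0, 0, 0, 0]          # group B: sham, all censored at day 90

km = KaplanMeierFitter().fit(days_a, event_a, label="patched")
result = logrank_test(days_a, days_b,
                      event_observed_A=event_a, event_observed_B=event_b)
print(f"log-rank p = {result.p_value:.4f}")
```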
Reduced defect size with tracheal intubation. Defect size was then systematically reduced to 3 × 3 cm (7.1 cm³) to overcome the immediate (<48 hours) post-operative mortality (Fig 2). Eleven rabbits were anesthetized; two (18%) died before the operation, either during intubation (n = 1) or secondary to iatrogenic pneumothorax (n = 1). No operated animals died within the first 48 hrs. Overall, there was an improvement in survival (p <0.05). Of the remaining 9 rabbits, 7 died around D27. The majority exhibited significant respiratory distress with marked desaturation (<80%). Post-mortem examination disclosed a small organized hemothorax in one animal. Lung pathology in all but one animal showed edema, respiratory infiltrates, and hyaline membrane disease (Fig 3C). This was worse at later time points (Fig 3B and 3C), in keeping with the clinical pattern of respiratory failure. Tracheal pathology revealed strictures with mucosal loss, fibrosis, and narrowing of the lumen (Fig 3D), which prompted us to move to working without tracheal intubation (see below). All animals had significant adhesions to the patch. In one animal there was herniation at D49. Eventually two out of the remaining nine (22%) survived to 90 days, one with a herniation. Initial weight (1.3±0.5 kg) and operation time (75±11 mins) were similar between animals surviving 90 days and those not.
Reduced defect size or sham operation with laryngeal mask airway (V-Gel). To avoid the development of tracheal strictures, we moved to securing the airway with an LMA (V-Gel). Another nine rabbits underwent standardized induction of a reduced size defect (3 × 3 cm) and primary repair with Gore-Tex to complete the envisaged number of survivors in that group. We then also added six sham-operated animals. With this ventilation strategy, overall survival was improved to 67% (Fig 2, p <0.05); no sham-operated animals died. Notably, after 48 hours post-operatively there were no further episodes of respiratory distress suggestive of tracheal injury. There was normal tracheal architecture at D90 in these animals (Fig 3E). Four (44%) Gore-Tex rabbits survived to 90 days, with three early post-operative deaths, all in the Gore-Tex group. Two had confirmed aspirations on post-mortem and no cause of death was found in the third. There were two late deaths at D73 (diarrhea) and D75 (pneumonia). No Gore-Tex repairs had recurrence.
Discussion
Herein, we improved a large diaphragmatic hernia repair animal model by increasing the diaphragmatic defect size to one that could not be directly closed, while maintaining permissible survival, and by changing the ventilation technique. We implanted the most commonly used patch for defect repair, Gore-Tex. This did not induce scoliosis, although there was a 17% herniation rate with changes in transdiaphragmatic pressures and compensatory contralateral diaphragmatic hypertrophy.
To increase the diaphragmatic defect size to the extent where primary closure was impossible required several modifications to our existing anesthetic protocol [15]. Initially, despite mechanical ventilation, a large defect resulted in the majority of animals dying in the immediate post-operative period. As rabbits are diaphragmatic breathers, we speculated that they were unable to compensate for the near complete left hemi-diaphragmatic replacement with a patch [34]. A reduction in defect size to around what is categorized as a type B defect (CDH study group staging) improved our immediate post-operative losses (<2 hrs) [35]. However, we then ran into the problem of later losses due to tracheal stricture formation. Several models have been used to produce tracheal strictures in rabbits; one implanted an endotracheal tube wrapped in Surgicel into the trachea for one week [36]. Another study related tracheal stricture formation to the duration of intubation, with intermittent tracheal trauma throughout the intubation. However, it was only following six hours of trauma that they could routinely produce respiratory distress and stricture formation [37]. Our operative time of around 75 mins produced significant and reproducible damage within 2-3 weeks. Despite intubating under direct vision using a guide wire to minimize trauma, there are several possible contributors to the damage. In our previous study, with an identical intubation technique, we had 19% mortality within two weeks of surgery, the cause of which was unclear. In this study we lost 54% at an intermediate time-point, exhibiting significant respiratory distress secondary to tracheal stricture formation. Although we often re-used tracheal tubes, rendering them stiffer and more likely to induce damage, these were uncuffed. This, combined with the vulnerable immature rabbit airway, may explain some of the losses in both of these experiments. The increased mortality may be due to the additional use of mechanical ventilation with positive pressure, possibly further exacerbating the injury. Furthermore, our larger defects may have meant that this tracheal insult was less well tolerated than in the previous experiment. Regardless, these problems were circumvented by the use of an LMA.
We also measured the transdiaphragmatic pressure to evaluate patch repair in the setting of CDH, which to our knowledge has not been previously reported. Clinically, children with CDH have poorer exercise performance and respiratory function tests, demonstrating obstructive airways [38,39]. The most significant contributor must be the degree of pulmonary hypoplasia, but the contribution of the diaphragmatic repair has not been so well considered. We demonstrated a change in diaphragmatic force with a Gore-Tex patch implant with no accompanying change in lung histology. Indeed, patch closure is a predictor for a worse outcome in pulmonary function testing [11,40]. Gore-Tex does not permit significant tissue remodeling, whereas a structure which actively encourages tissue ingrowth and re-innervation of the bridging tissue may in time lead to a better functional result [15]. Regardless, in our model the denervated area of diaphragm replaced by the Gore-Tex patch will never function as well as native tissue. The phrenic nerve in children with a repaired diaphragmatic defect has been shown to have a prolonged latency and twitch diaphragmatic pressure on the affected side, representing a further challenge to the ideal repair [41,42].
This model does not reproduce the primary pulmonary deficit present in CDH survivors. Of note is that rabbits alveolarize in the first few weeks after birth, and by the time of patch implant lung development is already complete [43]. Any pulmonary dysfunction would therefore be considered secondary to the surgery or its complications. We found that breathing patterns, as confirmed by airway morphometry, were relatively unaffected, both in treated and control animals. Although herniation may lead to atelectasis and lung compression, it is unlikely to modify alveolar structure. Previously, in adult rats, direct closure of a muscular diaphragmatic defect reduced forced vital capacity (FVC), forced expiratory volume (FEV1), and the elastic properties of the lungs significantly within 90 minutes compared to Gore-Tex closure [22]. It is to be expected that a tight primary repair leads to early ventilatory changes; however, these are likely compensated for in time. Regardless, we confirm here that the patch in itself does not impact respiratory function.
Re-herniation rates in humans vary between centers, and it has been reported that in the most expert hands they are actually relatively low [8,10]. We report a (re-)herniation rate of 20% (essentially a single animal), which is likely reflective of the human situation. Although we did not identify any scoliosis, in previous animal models high rates of scoliosis are reported when the patch is secured directly to the ribs, which was not the case in this experiment [14]. It is likely that the aetiology of scoliosis is more complex than we can reproduce in this model. A reduced ipsilateral thoracic size resulting from the restrictive/obstructive ventilatory defect, excessive tension from the diaphragmatic repair, or a smaller lung volume necessitating a smaller thoracic size due to changes in the recoil pressure of the lung may all contribute [44,45].
Finally, we investigated whether there was a compensatory change in the right hemi-diaphragm in response to the left-sided intervention. We speculated that the right hemi-diaphragm may hypertrophy in response to the increased effort caused by the lack of movement on the left, and this was confirmed by measuring muscle fiber CSA. Rabbits are known to have around 20% type 1 fibers in their diaphragm, and this differs according to region [46]. Under increased strain there is a documented muscle fiber switch from type 2b to type 2a, and this could act as another compensatory mechanism [47]. An investigation of muscle fiber type could provide more insight into this disease model, yet we did not pursue this.
To further this study, one could also consider more extensive and non-invasive pulmonary function testing using whole-body plethysmography [48]. Furthermore, fluoroscopic x-ray could provide interesting information on ipsilateral and contralateral diaphragmatic movement following diaphragmatic patch repair [26]. We also acknowledge that we have compared animals with different ventilation strategies and have not reported the passive biomechanical properties of our diaphragmatic repair. Finally, it would be interesting to explore the host cellular response to implants. This would be most valuable with additional examination of implants at earlier time-points.
Regardless, we present a more comprehensive model for neonatal diaphragmatic hernia repair with novel readouts for testing patch repairs. Future studies in this model could now explore novel solutions for diaphragmatic repair. | 2018-04-03T06:17:36.738Z | 2017-03-30T00:00:00.000 | {
"year": 2017,
"sha1": "deae951893cb5bd1a5ae7b2c1ef0698026469123",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0174332&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "deae951893cb5bd1a5ae7b2c1ef0698026469123",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
211022630 | pes2o/s2orc | v3-fos-license | Randomized Efficacy and Safety Trial with Oral S 44819 after Recent ischemic cerebral Event (RESTORE BRAIN study): a placebo controlled phase II study
Background The GABAA-α5 receptor antagonist S44819 is a promising candidate to enhance functional recovery after acute ischemic stroke (IS). S44819 is currently evaluated in this indication; the RESTORE BRAIN study started in December 2016 and was completed in March 2019. Methods/design The study is a 3-month international, randomized, double-blind, parallel-group, placebo-controlled phase II multicentre study. Patients in 14 countries who suffered an IS leading to a moderate or severe deficit, defined by an NIHSS score ranging from 7 to 20, and who are aged between 18 and 85 years, are included between 3 and 8 days after stroke onset. Approximately 580 patients are to be included. The primary objective of the study is to demonstrate the superiority of at least one of the two doses of S44819 (150 or 300 mg bid) compared to placebo, on top of usual care, on functional recovery measured with the modified Rankin scale at 3 months. Comparisons between the two doses of S44819 and placebo are assessed with ordinal logistic regression, evaluating the odds of shifting from one category to the next in the direction of a better outcome at day 90. Secondary objectives include the evaluation of the effects of S44819 on neurological examination using the National Institute of Health Stroke Scale total score, activities of daily living using the Barthel Index total score, and cognitive performance using the Montreal Cognitive Assessment scale total score and Trail Making Test times. Safety and tolerability of the two doses of S44819 will also be analyzed. Discussion The RESTORE BRAIN study might represent the first proof of concept study of an innovative therapeutic approach that is primarily based on enhancing functional recovery after IS. Trial registration Randomized Efficacy and Safety Trial with Oral S 44819 after Recent ischemic cerebral Event, an international, multi-centre, randomized, double-blind placebo-controlled phase II study. ClinicalTrials.gov, NCT02877615; Eudract 2016–001005-16. Registered 24 August 2016
Background
Post-stroke recovery relies on brain neuroplasticity [1,2], which is compromised by the sustained hypoexcitability observed in the peri-infarct cortex due to the increased activity of GABAergic neurons [3].
S44819 is a potent and selective antagonist of GABA A -α5 receptors [4] that enhances motor and cognitive recovery when administered chronically from day 3 after IS in rodent models [5]. A transcranial magnetic stimulation study has demonstrated in humans that S44819 reaches the human cortex and is capable of increasing cortical and cortico-spinal excitability by reducing GABA A receptor-mediated activity [6].
The design of a phase II clinical trial testing the efficacy of two doses of S44819 on functional recovery after acute supratentorial IS is presented.
Methods/design
Design
In accordance with the EMA's guidance on stroke [7], the study includes a selection period and a double-blind treatment period of 90 days. A follow-up period of 15 days after the end of treatment allows evaluation of the safety of patients (Fig. 1).
Patient population
Male or female patients aged 18-85 years are randomised between 3 and 8 days after the stroke event based on the following criteria: 1) acute IS confirmed by MRI or CT; 2) NIHSS score between 7 and 20 both inclusive [8,9]; 3) lack of previous disability defined as an estimated prestroke mRS score 0-1; 4) written informed consent.
Patients are ineligible if: 1) an acute hemorrhagic stroke, symptomatic hemorrhagic transformation, or cerebral venous thrombosis has occurred; 2) the required rehabilitation is impossible to undertake; 3) a carotid endarterectomy is planned within the next 3 months; 4) a previous clinically significant condition interferes with the study evaluation; 5) brain imaging shows severe microangiopathy (Fazekas grade 3 with severe small vessel disease on MRI or CT scan); 6) they are receiving a treatment that could have an impact on treatment efficacy (i.e., one interacting with GABAA receptors) or have a clinically relevant abnormality likely to interfere with the study outcome, such as severe renal or hepatic impairment or repeated QTcF prolongation. Patients who are not able to cooperate, as well as those presenting with geographic or social factors making their study participation impossible, are excluded.
Setting
The study is conducted in 92 hospital stroke centers in 14 countries (Australia, Belgium, Brazil, Canada, Czech Republic, France, Germany, Hungary, Italy, Poland, South Korea, Spain, The Netherlands, United Kingdom) that have experienced specialists in stroke medicine (e.g., are able to perform intravenous thrombolytic therapy and brain imaging for diagnosis). At discharge from hospital stroke centers, patients receive rehabilitation therapy (in- or out-patient rehabilitation) in accordance with the standard of care.
Randomization
The treatment is assigned at the inclusion visit by a balanced, non-adaptive randomization, with stratification by country and previous revascularization therapy status (thrombolysis and/or endovascular therapy). Patients are randomized into one of the three groups: S 44819 (150 mg or 300 mg twice a day) or placebo to reach about 194 patients per treatment arm. Randomization and allocations are centralized by Interactive Web Response System (IWRS) under blind conditions for subjects, caregivers, investigators, study-related staff, and sponsor. The placebo is made up of hydroxypropyl methylcellulose acetate succinate and different iron oxide colorants and is an off-white to yellow powder in a sealed sachet for oral suspension. Investigational Medicinal Product (IMP) is provided in the form of sachets (two sachets per intake) with an identical appearance and taste for all treatment groups. The circumstances under which blinding may be broken in IWRS are any serious AEs or severe medical conditions where the knowledge of the treatment is necessary for safety follow-up of the patient.
The Methodology and Analysis of Clinical Data Division of I.R.I.S is responsible for designing and constructing the blinded randomization list.
Treatment
Study treatments are S 44819 (150 or 300 mg/bid) and placebo. The choice of doses is mainly based on a Transcranial Magnetic Stimulation study which has demonstrated that S 44819-at least at doses > 100 mg-reaches the human cortex and increases corticospinal excitability by reducing GABA A α5-mediated inhibition [6].
As no validated post-acute phase treatment exists to improve functional recovery after stroke events, no active comparator is available. In this context, a placebo comparator group is commonly employed and required by the guidelines to demonstrate efficacy in controlled clinical trials [10].
During the double-blind treatment period (D0 to D90), S 44819 or placebo is provided in the form of sachets (two in the morning and two in the evening) of identical appearance and taste for all treatment groups. Patients remain on the same IMP and dose throughout the treatment period. Depending on the patient's condition, different methods of treatment administration are planned, e.g., with a glass of water or thickened water, if necessary through a nasogastric tube.
Criteria for discontinuing treatment are any adverse event according to the judgment of the investigator, any QTcF prolongation, any severe hepatic event, any suicide attempt, any symptomatic haemorrhagic stroke, or any new IS. Also, consent withdrawal or any event which could jeopardize the patient's safety lead to treatment discontinuation. In such a situation, the withdrawal reason is reported and all examinations are expected to be performed.
Compliance with treatment is assessed at each study visit by deduction of the number of sachets dispensed and returned.
Endpoints and measurements
The primary objective is to demonstrate the superiority of at least one of the two doses of S 44819 versus placebo on functional recovery after IS based on the modified Rankin Scale (mRS) [11] measured after 90 days of treatment. The mRS is administered at days 5, 30, 60, and 90 and at the follow-up visit (day 105).
The secondary objectives are to assess the efficacy of two doses of S 44819 versus placebo on stroke recovery using the National Institute of Health Stroke Scale (NIHSS) [12], on activities of daily living using Barthel Index (BI) total scores [13], and on cognitive performance tests (MoCA, TMT), as well as the safety and tolerability of S 44819.
Cognitive performance is assessed using the Montreal Cognitive Assessment scale (MoCA) [14] and Trail Making Test (TMT) [15]. The MoCA (total score) and TMT (A and B, times) results are obtained at days 30 and 90. Cognitive assessment is thus performed at 1 and 3 months, when the condition begins to stabilize, in order to follow the course of cognitive impairment in each stroke patient.
A visual analog scale evaluates various sub-dimensions of the participant's quality of life (appetite, sleep, daytime alertness, mood, anxiety, and pain).
Suicidal ideation and suicidal behavior is assessed using the Columbia Suicide Severity Rating Scale (C-SSRS) [16] and is administered at each visit from days 5 to 105.
Measurements are also performed in case of premature withdrawal.
A SPIRIT figure is shown in Fig. 2 and a SPIRIT checklist is available in Additional file 1.
Sample size estimates
The sample size (580 patients) was estimated based on the mRS score at day 90, using the last observation carried forward (LOCF) approach, to detect a treatment effect between at least one dose of S 44819 and placebo in the full analysis set (FAS), using Whitehead's formula [17] for ordered categorical criteria. To maintain the experiment-wise type I error at 5%, the Bonferroni correction is to be applied. A drop-out rate of 5% until D5 was considered. The assumed placebo mRS distribution was adapted from the NEST-1 trial [18], which had similar inclusion criteria concerning severity (NIHSS score between 7 and 22). The results of NEST-1 show that such a placebo distribution is a reasonable estimation.
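As an illustration of the sample-size arithmetic, a common form of Whitehead's formula for a two-arm 1:1 comparison of an ordinal outcome is sketched below; the trial's actual calculation (three arms, Bonferroni-adjusted α, 5% drop-out inflation, NEST-1-based placebo distribution) would adjust these inputs, and the numbers shown are purely illustrative, not the trial's:

```python
import numpy as np
from scipy.stats import norm

def whitehead_n_per_group(p_bar, log_or, alpha=0.05, power=0.80):
    """One common form of Whitehead's (1993) formula for an ordinal
    outcome under a proportional-odds assumption, 1:1 allocation:
    n per group = 6 (z_{1-a/2} + z_{1-b})^2
                  / [ (log OR)^2 * (1 - sum(p_bar^3)) ],
    where p_bar holds the anticipated mean category probabilities
    averaged over both arms."""
    p_bar = np.asarray(p_bar, float)
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 6 * z**2 / (log_or**2 * (1 - np.sum(p_bar**3)))

# Illustrative (hypothetical) mRS 0..6 distribution and effect size;
# with two dose comparisons one would use alpha=0.025 (Bonferroni)
# and inflate the result for the anticipated 5% drop-out.
p_bar = [0.05, 0.10, 0.20, 0.25, 0.20, 0.12, 0.08]
print(whitehead_n_per_group(p_bar, log_or=np.log(1.5), alpha=0.025))
```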
Statistical analysis
Main analysis
The primary efficacy objective is assessed in the FAS (all patients of the randomised set (RS) who have taken at least one dose of treatment and have at least one value for the primary efficacy endpoint after D5) from the mRS score at day 90. An ordinal logistic regression assesses the odds of shifting from one category to the next on the mRS scale. The analysis includes the fixed, categorical effects of treatment (including the three treatment groups), country, and previous revascularisation therapy. Missing data are imputed with the LOCF approach. The step-down Dunnett procedure accounts for multiplicity of comparisons.
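A minimal sketch of such a shift analysis, using the proportional-odds (ordered logit) model from statsmodels; the dataset layout and column names are hypothetical, and the trial's actual SAS implementation may differ in details such as imputation and contrast coding:

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

def fit_shift_model(df):
    """df: one row per FAS patient with 'mrs_d90' (0..6, LOCF-imputed),
    'treatment' (placebo/150mg/300mg), 'country', and 'revasc'
    (prior revascularisation yes/no). Column names are illustrative."""
    exog = pd.get_dummies(df[['treatment', 'country', 'revasc']],
                          drop_first=True).astype(float)
    model = OrderedModel(df['mrs_d90'].astype(int), exog, distr='logit')
    return model.fit(method='bfgs', disp=False)

# res = fit_shift_model(df); exponentiated treatment coefficients in
# res.params give the common odds ratios for shifting one mRS category.
```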
To assess the robustness of the primary analysis results with regard to the method of handling missing data, an ordinal logistic regression, including the fixed, categorical effects of treatment, country, and previous revascularisation therapy, is carried out in the FAS using the multiple imputation method. The same analysis as the primary analysis is also performed in the FAS, including in addition the continuous covariates of age and baseline NIHSS score. Multiplicity is handled in the same way as in the primary analysis.
Secondary analyses
The same analyses as for the primary analysis are performed in patients of the per protocol set (all completed patient data of the FAS without relevant deviation). The difference between each S 44819 dose and placebo is studied in the FAS at day 90 on the dichotomized mRS scores (0-1 versus 2-6 and 0-2 versus 3-6) using a logistic regression including the fixed, categorical effect(s) of treatment, country, and previous revascularisation therapy. Missing data at day 90 are imputed with the LOCF approach. The step-down Dunnett procedure is used to account for multiplicity of comparisons.
Descriptive statistics by treatment group are provided for all analytical approaches of the primary efficacy endpoint in patients of the FAS. The mRS score is described at each visit in patients of the per protocol set.
Secondary endpoints
The secondary efficacy endpoints include 1) the value of the NIHSS score at baseline and at each post-baseline visit, 2) the value of the BI total score at each visit, 3) the value of the MoCA total score at each visit, and 4) the TMT times at each visit.
The difference between each S 44819 dose and placebo is studied in the FAS at day 90 for the NIHSS and BI scores using a Mann-Whitney test. Missing data at day 90 are imputed with the LOCF approach. Descriptive statistics by treatment group are provided for the NIHSS, BI, MoCA, and TMT endpoints in patients of the FAS.
Number of events and the number and percentage of patients reporting at least one adverse event are provided for serious and emergent adverse events. These events are described according to the seriousness, intensity, relationship, action taken regarding S 44819, the requirement of added therapy, time from onset, and outcome.
Vital signs, laboratory parameters, and ECG parameters are summarized descriptively by treatment group. For the suicidal ideation score from the C-SSRS, the number and percentage of patients are assessed considering their maximum suicidal ideation score during treatment. The suicidal ideation score is also described at the follow-up visit. For the other outcomes, the number and percentage of patients that reach each outcome during treatment (defined as a "yes" answer at any time during treatment) and at the follow-up visit are described.
Patient quality of life (QoL) is scored at each visit, and descriptive statistics are reported for the visual analogue scale scores for the D0-D90 period as well as at day 105.
Data Monitoring Committee
A Data Monitoring Committee (DMC) is responsible for reviewing, on a regular basis, strictly confidential data related to the safety of patients participating in the study. Based on the review of the safety data, the DMC gives written recommendations to the sponsor about the continuation of the study per the protocol until the next DMC meeting, about the continuation of the study with modification(s) that have no impact on the study design, or about the premature discontinuation of the study.
Discussion
S 44819 exerts a long-lasting improvement of post-stroke recovery in rodent IS models when administered from day 3 after the ischemic insult [5] and has been shown to increase cortical excitability by decreasing GABA-mediated inhibition in healthy volunteers [6]. It is hypothesized that S 44819 may enhance post-stroke neuroplasticity in patients by counteracting enhanced cortical tonic inhibition.
Accumulating evidence suggests that there is a critical period of increased neuroplasticity during the early poststroke recovery phase [3,19] and it is crucial to initiate therapy during this time window. In this trial, S 44819 is administered to patients starting 3 to 8 days after stroke onset.
The study protocol considers all ethical principles of a placebo-controlled trial and the best standards of care. To increase the acceptability of the present study, the probability to receive placebo is fixed at 33%. Given that S 44819 is expected to improve recovery in IS patients regardless of their initial condition, a shift analysis is recommended [20]. Comparisons between S 44819 and placebo are assessed using ordinal logistic regression to assess the odds of shifting from one category to the next.
The NIHSS and BI scales are analyzed as secondary efficacy scales; together with the mRS, they are recommended by EMA guidelines to assess the efficacy of medicinal products for treating acute stroke [7].
S 44819 does not bind to benzodiazepine sites or to GABA A receptors containing α1, α2, and α3 subunits [4] and presumably does not cause adverse effects triggered by these subunits; so far, no safety concern has arisen from the phase I and the on-going phase II study.
Conclusion
This trial may provide the first proof of concept for an innovative therapeutic approach based on the enhancement of functional recovery after IS. | 2020-02-05T01:10:15.690Z | 2020-02-03T00:00:00.000 | {
"year": 2020,
"sha1": "dcc99db9934ba59ef90e9d1667ea1c594a33f2f1",
"oa_license": "CCBY",
"oa_url": "https://trialsjournal.biomedcentral.com/track/pdf/10.1186/s13063-020-4072-2",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dcc99db9934ba59ef90e9d1667ea1c594a33f2f1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
209424519 | pes2o/s2orc | v3-fos-license | Development and validation of a simple risk model to predict major cancers for patients with nonalcoholic fatty liver disease
Abstract Objective To recognize risk factors and to develop and validate a simple risk model predicting 8‐year cancer events after nonalcoholic fatty liver disease (NAFLD). Methods This was a retrospective cohort study. Patients with NAFLD (n = 5561) were randomly divided into groups: training (n = 1254), test (n = 627), evaluation (n = 627), and validation (n = 3053). Risk factors were identified using a Cox model with Markov chain Monte Carlo (MCMC) simulation. The prediction score was established based on the training group and was further validated based on the test and evaluation groups from January 1, 2007 to December 31, 2009, and on another 3053 independent cases from January 1, 2010 to February 13, 2014. Results The main outcomes were NAFLD‐related cancer events, including those of the liver, breast, esophagus, stomach, pancreas, prostate, and colon, within 8 years after hospitalization for NAFLD diagnosis. Seven risk factors (age (every 5 years), LDL, smoking, BMI, diabetes, OSAS, and aspartate aminotransferase (every 5 units)) were identified as independent indicators of cancer events. The risk model had a predictive range of 0.4%‐37.7%, 0.3%‐39.6%, and 0.4%‐39.3% in the training, test, and evaluation groups, respectively, with a range of 0.4%‐30.4% in the validation group. In the training group, 12.6%, 76.9%, and 10.5% of patients, corresponding to the low‐, moderate‐, and high‐risk groups, had probabilities of <0.01, <0.1, and 0.23 for 8‐year events, respectively. Conclusions Seven risk factors were recognized, and a simple risk model was developed and validated to predict the risk of cancer events over 8 years after NAFLD. This simple risk score system may recognize high‐risk patients and reduce cancer incidence.
| INTRODUCTION
Nonalcoholic fatty liver disease (NAFLD) has a high prevalence and increasing morbidity in China and other countries. [1][2][3][4] Patients with NAFLD are at high risk of developing cancers, primarily hepatocellular carcinoma, colorectal cancer, or breast cancer. 5,6 NAFLD-related cancers have been causing a significant burden for healthcare in China and other countries. 7,8 It has been shown that patients suffering from NAFLD have a high potential for cardiovascular diseases and cancer development, which remain the main causes of death among patients with NAFLD. 9 However, few patients are aware of the severe outcomes of NAFLD, partly because of its commonly benign course with a low risk of progressing to cirrhosis. Thus, the different prognoses across risk stratifications and histopathological subtypes are frequently ignored. 10 Although the risk factors for NAFLD-related hepatocellular carcinoma 11 and noninvasive models predicting liver fibrosis have been determined and investigated in the West, 12,13 the establishment of a specific model to forecast NAFLD-associated cancer is still urgently needed, especially in an elderly Chinese cohort. One explanation is that risk factors (such as age, sex, and body mass index) for cancers are not identical between China and developed countries. In addition, NAFLD-related carcinomas have been widely ignored in previous studies, compared with the prediction of liver fibrosis. In the majority of patients, NAFLD is bidirectionally associated with metabolic risk factors (such as central obesity, diabetes mellitus, dyslipidemia, and hypertension), which might represent an important etiology of the increasing morbidity of various solid tumors beyond liver cancer. 14,15 Furthermore, a number of studies focus predominantly on the mortality of NAFLD with a long-term history. 16 In contrast, a focus on the onset and prophylaxis of various cancers, rather than an investigation of cancer mortality, will provide much more clinically significant outcomes. 5,16 Therefore, much more importance should be attached to the long-term outcomes than to the mortality of NAFLD in order to capture the complete healthcare experience of patients, particularly in developing countries. The determination of cancer-related features helps patients and clinicians predict the future risk of cancer, thus promoting intensive follow-up and risk factor adjustment, as well as further relieving the financial pressures caused by the high incidence of cancer.
Accordingly, our study developed and evaluated a simple risk model by identifying significant clinical risk factors to predict NAFLD-related cancers on the basis of 5561 patients with NAFLD in the First Affiliated Hospital of Zhengzhou University, the largest tertiary medical hospital in China, which has approximately 10 000 beds, 15 000 inpatients/day, and 20 000 outpatients/day. Patients with NAFLD diagnosed between 1/1/2007 and 2/13/2014 were included and followed until cancer diagnosis, death, or through 12/31/2017. In this cohort, all data were extracted from the medical records of patients with NAFLD, and the study was designed to further stratify risk after NAFLD diagnosis. The main endpoint of the study was to predict the presence or absence of cancers by a combination of simple and clinically relevant variables in the elderly Chinese population.
Accordingly, our study developed and evaluated a simple risk model by identifying significant clinical risk factors to predict NAFLD-related cancers on the basis of 5561 patients with NAFLD in the First Affiliated Hospital of Zhengzhou University, the largest tertiary medical hospital in China, which has approximately 10 000 beds, 15 000 inpatients/ day and 20 000 outpatients/ day. Patients with NAFLD diagnosed between 1/1/2007 and 2/13/2014 were included and followed until cancer diagnosis, death, or through 12/31/2017. In this cohort, all the samples were extracted from medical records about patients with NAFLD and were well designed to further stratify risk after NAFLD diagnosis. The main endpoint of the study was to predict the presence or absence of cancers by a combination of simple and clinically relevant variables in the elderly Chinese population.
| Study Group
A total of 5561 patients with NAFLD were included in this study, first confirmed by ultrasound. The radiological features of these patients included fatty liver and increased or heterogeneous echogenicity. They visited the First Affiliated Hospital of Zhengzhou University more than twice between January 1, 2007 and February 13, 2014 and were followed until cancer diagnosis, death, or through December 31, 2017. Medical records were extracted individually by three doctors; consistency was 97% for the main data elements. Based on the 8-year follow-up of NAFLD patients, information on all types of cancer events was collected from medical records, including liver, breast, esophagus, stomach, pancreas, prostate, and colon cancers.
Patients with liver disease of other etiologies were appropriately excluded, including autoimmune or viral hepatitis, alcohol-induced or drug-induced liver disease, and cholestatic or genetic liver disease. These other liver diseases were excluded by applying specific clinical, biochemical, radiographic, and/or histological criteria. All patients had a negative history of ethanol abuse, as indicated by a weekly ethanol consumption of ≤140 g in women and ≤210 g in men. A history of alcohol consumption was specifically investigated from medical records. Patients with clinical or imaging evidence of decompensated cirrhosis were specifically excluded from this study because they most likely had cirrhotic-stage NAFLD.
The cohort from 2007 to 2009 included 2508 patients. These samples were randomly divided into three independent groups: a training group (50%, 1254 patients), a test group (25%, 627 patients), and an evaluation group (25%, 627 patients). The training group was used to identify risk factors for cancer events in NAFLD patients with 8-year follow-up. The test and evaluation groups were used for validation. The other independent cohort, from 2010 to 2014, enrolled a total of 3053 unique patients for further validation analysis. This study was approved by the Institutional Review Board of the First Affiliated Hospital of Zhengzhou University (Figure 1).
| Potential risk factors
The candidate risk factors comprised clinical and laboratory data that were easily and reliably collected during hospitalization for NAFLD and were selected based on their clinical meaning and supporting documentation. Both a detailed medical history and a complete physical examination were abstracted for each patient. Initial factors included patient demographics (age, sex, and body mass index, calculated as weight in kilograms divided by the square of height in meters), medical history (hypertension, diabetes mellitus, obstructive sleep apnea syndrome, family history of cancer, hyperlipidemia), lifestyle factors (smoking, drinking), and laboratory evaluation, including routine liver biochemistry (alanine aminotransferase and aspartate aminotransferase levels, total bilirubin, albumin, and alkaline phosphatase), complete blood count, total cholesterol, HDL cholesterol, LDL cholesterol, and total triglycerides.
The definitions of comorbidity used in this study were as follows: hypertension (systolic blood pressure ≥140 mmHg, diastolic blood pressure ≥90 mmHg, or treatment of previously diagnosed hypertension); diabetes mellitus (fasting glucose ≥126 mg/dL or treatment with antidiabetic drugs); and obstructive sleep apnea syndrome (respiratory disturbance index (RDI) ≥5 obstructive events/h of sleep).
| Outcome
In this risk model, the outcome was 8-year cancer events, a binary variable defined as the occurrence of cancers, including those of the liver, breast, esophagus, stomach, pancreas, prostate, and colon, within 8 years of NAFLD diagnosis. Information on the outcome was obtained and confirmed from medical records.
| Risk factor selection and test
In the training group, we ran the MCMC simulation and computed a posterior probability for each candidate risk factor. 17 The posterior probability judges the strength of the correlation between a factor and the outcome. A factor with a posterior probability greater than 0.95 was regarded as statistically significant for predicting 8-year cancer events and was included in the final risk factor list. 18 We developed the final risk model to predict the outcome by fitting the Cox model to the training group using the selected risk factors. Routine demographic, comorbidity, and laboratory variables were analyzed by multivariate modeling to predict the presence or absence of 8-year cancer events.
We tested the performance of this predictive model with the following statistical methods: Harrell's c-statistic to evaluate the overall accuracy of prediction, 19 time-dependent ROC curves to evaluate predictive accuracy over the 8 years, 20 partial residuals and the Hosmer-Lemeshow goodness-of-fit test statistic to evaluate the proportional hazards assumption and calibration, 21 and the Schemper-Henderson measure to estimate explained variation. 22 Discrimination was evaluated by comparing observed cancer events across strata defined as deciles of the predicted probabilities. 23 In the training group, we divided the samples into 10 independent risk grades on the basis of these deciles, ordering the grades from minimum to maximum risk for validation. 24 Additionally, the performance of the predictive model was evaluated and compared in the test, evaluation, and validation groups, respectively.
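As an illustration of two pieces of this validation battery, the following sketch computes Harrell's c-statistic (here via the lifelines utility) and the decile-based calibration comparison; all inputs and names are hypothetical:

```python
import numpy as np
from lifelines.utils import concordance_index

def harrell_c(time_to_event, risk_score, event_observed):
    # lifelines expects larger scores to predict longer survival,
    # so a Cox-type risk score is negated.
    return concordance_index(time_to_event, -np.asarray(risk_score),
                             event_observed)

def decile_calibration(pred_prob, observed_event):
    """Mean observed 8-year event rate within deciles of the predicted
    probability, the stratification used for the calibration checks."""
    pred_prob = np.asarray(pred_prob, float)
    observed_event = np.asarray(observed_event, float)
    deciles = np.quantile(pred_prob, np.linspace(0, 1, 11))
    idx = np.clip(np.digitize(pred_prob, deciles[1:-1]), 0, 9)
    return [observed_event[idx == k].mean() for k in range(10)]
```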
| Risk score
For the convenient application of the selected risk factors and the predictive model, we developed an easily applied scoring system for every patient with NAFLD on the basis of the regression coefficients estimated from the predictive model in the training group. The points for each risk factor were calculated by dividing the coefficient of the risk factor by the sum of all coefficients in the model, multiplying by 100, and rounding to the nearest integer. The risk score for each patient was then calculated by summing these points. [24-26] Furthermore, we classified patients with NAFLD into three risk groups for cancer events based on the distribution of this score: a high-risk group (>90th percentile), a moderate-risk group (10th-90th percentile), and a low-risk group (<10th percentile). Analyses were conducted between August 10, 2018 and November 22, 2018 using SAS version 9.4.
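A minimal sketch of the point assignment and risk stratification described above; the coefficient values shown are placeholders, not the fitted values from this study:

```python
import numpy as np

def risk_points(coefficients):
    """Convert Cox coefficients into integer points, following the
    scheme in the text: each coefficient divided by the sum of all
    coefficients, times 100, rounded to the nearest integer."""
    beta = np.asarray(list(coefficients.values()), float)
    points = np.rint(100 * beta / beta.sum()).astype(int)
    return dict(zip(coefficients.keys(), points))

def stratify(scores):
    """Low (<10th percentile), moderate (10th-90th), high (>90th)."""
    scores = np.asarray(scores, float)
    lo, hi = np.percentile(scores, [10, 90])
    return np.where(scores > hi, 'high',
                    np.where(scores < lo, 'low', 'moderate'))

# Hypothetical coefficients (illustrative only):
# pts = risk_points({'age_per_5y': 0.25, 'LDL': 0.60, 'smoking': 0.35,
#                    'BMI': 0.20, 'diabetes': 0.45, 'OSAS': 0.30,
#                    'AST_per_5u': 0.15})
```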
| Study cohort
A total of 5561 (1254 training, 627 test, 627 evaluation, and 3053 validation) patients were enrolled. The mean age was 69.4 years (standard deviation [SD] 8.1), and 49.4% were female. The common comorbidities were diabetes mellitus (27.2%), obstructive sleep apnea syndrome (19.2%), hypertension (27.8%), and dyslipidemia (70.5%). There were no significant differences in the basic characteristics of patients across the training, test, and evaluation groups. However, the common comorbidities were more frequent in the validation group than in the other three groups (Table 1; Figure 2).
| Risk factors selection and evaluation
The MCMC simulation identified seven candidate factors that had a posterior probability greater than .95 (Table 2), including age (every 5 years), body mass index, diabetes mellitus, obstructive sleep apnea syndrome, smoking, LDL, and aspartate aminotransferase (every 5 units) (Figure S1). Based on these seven risk factors, the risk model was developed, and the training group showed good discrimination and calibration. The total c-statistic of this predictive model was 0.94. The average observed 8-year outcome across predicted deciles ranged from 0.4% to 37.7% (Figure S2). The Hosmer-Lemeshow goodness-of-fit test p-value was .77 in the training group, .78 in the test group, and .98 in the evaluation group, indicating that the predicted values were well matched with the observed data (Figure S3). The Schemper-Henderson measure was 0.51, and the partial residuals test indicated that each of the risk factors satisfied the proportional hazards assumption.
The model also performed well in the test and evaluation groups, in line with the training group. The total c-statistic was 0.91 and 0.92 in the test and evaluation groups, respectively; the rate of cancer events over the 8-year follow-up in the observed samples ranged from 0.03% to 39.6%, and in the predicted samples from 0.04% to 39.3%.
| Risk score system
The points for the risk factors ranged from 6 (aspartate aminotransferase, every 5 units) to 25 (LDL) (Table 2). The training group had an average risk score of 2.85 (SD 0.99). The average score was 2.80 (SD 1.03) for the test group and 2.85 (SD 0.99) for the evaluation group (Figure 2). In the training group, 12.6%, 76.9%, and 10.5% of patients were stratified into the low-, moderate-, and high-risk groups, respectively, corresponding to probabilities of <0.01, <0.1, and 0.23 for 8-year outcomes (Figure 3). The stratifications for the test and evaluation groups were similar to those for the training group (Figures 3 and 4; Table S1).
| Validation
For the validation group, the rate of cancer events was 2.2% (95% confidence interval [CI] 1.7-2.8). The average observed cancer outcome ranged from 0.4% to 30.4% across predicted deciles (Figure S2). The Hosmer-Lemeshow goodness-of-fit test p-value was 0.21 (Figure S3).
For the validation group, the mean risk score was 2.90 (SD 0.91) (Figure S4). In the validation group, 10.8%, 79.6%, and 9.5% of patients were stratified into the low-, moderate-, and high-risk groups, respectively, corresponding to probabilities of <.01, <.1, and .31 for cancer events (Figure 3). The probability of 8-year cancer events in the high-risk group of the validation cohort was higher than that in the training group, whereas the probabilities in the moderate- and low-risk groups were similar to those in the training group (Figure 4 and Figure S4; Table S1).
| DISCUSSION
In this large cohort study, we found that seven risk factors, including age (every 5 years), low-density lipoprotein cholesterol, smoking, body mass index, diabetes, obstructive sleep apnea syndrome, and aspartate aminotransferase (every 5 units), were independent indicators of 8-year cancer events in patients with NAFLD. This simple risk model and its score system were developed and validated to predict 8-year cancers after NAFLD diagnosis. Importantly, the risk model also performed well in an independent validation cohort of NAFLD patients. The factors were selected on the basis of data extracted from medical records, ease of collection, ready availability at the time of discharge, and long-term follow-up. Furthermore, the statistical algorithms used in this study are robust. Both the predictive model and the score system help clinicians recognize patients with NAFLD at increased risk of 8-year cancers and help patients understand their own risk of cancer. The capability to recognize the patients at highest risk of cancers after NAFLD diagnosis may enable targeted, higher-quality, and intensive healthcare after discharge.
Our study, based on information extracted from medical records with 8 years of follow-up, presents a large cohort analysis of risk factors predicting outcomes for elderly patients with NAFLD in the central plains of China. Furthermore, the patients represented in these data typically visited the same hospital repeatedly to receive comprehensive and professional treatment and healthcare at this general teaching urban hospital. Importantly, evidence shows that most patients who develop NAFLD present at least one of the traits of metabolic syndrome (MS). 27,28 Several studies indicate a potent association between metabolic syndrome and the risk of certain types of cancer, in addition to hepatocellular carcinoma. 29 However, previous studies and different types of risk scores examining advanced liver fibrosis or the natural history of NAFLD originate from specialist centers in which patients had been mostly selected from developed countries. [30][31][32][33] In this study, we validated and classified the risk of different types of NAFLD-associated cancers, in contrast with prior studies that focused only on advanced fibrosis.
The MCMC algorithm was used to evaluate the strength of the association between the risk factors and the outcome. On the basis of data from 2007 to 2009, we developed and evaluated this simple noninvasive predictive risk model. In contrast to other studies, our study achieved good predictive accuracy in another independent cohort of patients with NAFLD, enrolled from January 1, 2010 to February 13, 2014, which was used to revalidate the scoring system. 12,33 NAFLD may evolve into a tumor, but it is easily overlooked at the stage of fatty liver. Our predictive model was constructed from the baseline characteristics and comorbidities of patients with NAFLD, and it is simple to use. Previous studies that constructed predictive scores from specialized biomarkers were inconvenient to apply periodically, 33,34 and the lack of availability of these serum markers of fibrosis in most centers makes it difficult to apply such scoring systems on a daily basis. 35,36 Our risk factors were identified in a large cohort that repeatedly visited and was followed up in the same hospital. An effective risk factor ought to be supported by clinical evidence, conveniently collected, and widely available during hospitalization and at discharge. In this study, the seven identified risk factors fulfilled all of these criteria, and the majority of them have been recognized in previous studies. 37 Most of these risk factors are related to metabolic dysregulation and could be improved by effective long-term follow-up. LDL and diabetes were the top two factors. It has been demonstrated that elevated LDL cholesterol is associated with colorectal adenomas, breast cancer, and prostate and liver cancer. 38,39 Persons with diabetes, rather than only obese individuals, appear apt to develop cancers. 40 In 2010, the American Diabetes Association and the American Cancer Society presented convincing evidence that diabetes, either alone or as a cofactor, was associated with an increased risk of liver, colorectal, pancreatic, and breast cancer. 41,42 Although NAFL steatosis is generally a benign disorder, patients with the disorder may still develop cancer in the presence of the risk factors determined in our study. These risk factors elevate the levels of reactive oxygen species (ROS), overload the mitochondrial capacity for oxidative stress, and promote DNA damage in liver tissue and in expanded visceral adipose tissue through proinflammatory signaling pathways. 43,44 In our study, many of the risk factors for cancer were modifiable and led to different outcomes and prognoses in the long term. At baseline, the high-risk group was small, whereas patients in the moderate-risk group may progress to cancer events in the long term. Given their knowledge of the risk score, patients in all strata should be aware of the risks for poor prognosis and should avoid or improve their risk factors to move from the high-risk toward the low-risk group. With the improvement in patients' postdischarge outcomes and the reduction in cancer rates, the economic burden on healthcare can be relieved, and more individuals at risk of cancer events can be saved.
All cases with NAFLD were diagnosed by abdominal ultrasound of hepatic steatosis, which provides less accuracy than diagnosis by liver biopsy. However, ultrasonographic detection has been widely used in other studies to verify fatty liver. 45,46 Additionally, only 30 percent of the 5561 patients were reported to have severity typing descriptions in ultrasonographic detection. Therefore, the predictive model and risk factors recognized in our study still need to be validated and updated.
In conclusion, this simple risk model has a robust predictive scope and could provide a basis for clinicians to better understand patients' risk of long-term cancer events after NAFLD. It assists clinicians in making better-targeted, evidence-based decisions for postdischarge NAFLD management.
"year": 2019,
"sha1": "9baf7eb7ced96644d384ef7af125898d37985f6b",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/cam4.2777",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1cd9477bd26a9b577f9d0cc3f33a563cf78f5309",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119234169 | pes2o/s2orc | v3-fos-license | The large $N$ limit of the topological susceptibility of Yang-Mills gauge theory
We present a precise computation of the topological susceptibility $\chi_{_\mathrm{YM}}$ of SU$(N)$ Yang-Mills theory in the large $N$ limit. The computation is done on the lattice, using high-statistics Monte Carlo simulations with $N=3, 4, 5, 6$ and three different lattice spacings. Two major improvements make it possible to go to finer lattice spacing and larger $N$ compared to previous works. First, the topological charge is implemented through the gradient flow definition; and second, open boundary conditions in the time direction are employed in order to avoid the freezing of the topological charge. The results allow us to extrapolate the dimensionless quantity $t_0^2\chi_{_\mathrm{YM}}$ to the continuum and large $N$ limits with confidence. The accuracy of the final result represents a new quality in the verification of large $N$ scaling.
Introduction
One of the main successes of the large N limit of SU(N) Yang-Mills theories is the explanation of the large mass of the η′ meson. The solution is given through the Witten-Veneziano formula [1,2], which relates the mass of the η′ meson to the topological susceptibility χ_YM in the pure Yang-Mills theory,
$$\frac{F_\pi^2 m_{\eta'}^2}{2N_f} = \chi_{\mathrm{YM}} = \int d^4x \, \langle q(x)\, q(0) \rangle, \qquad (1.1)$$
where F_π is the pion decay constant, N_f the number of massless flavours, and q = (1/32π²) ε_{μνρσ} Tr[F_{μν} F_{ρσ}] is the topological charge density. The quantity on the right can only be computed directly on the lattice, provided that one employs a correct definition of the topological charge density q.
Our main result is the large N and continuum limit extrapolation of χ_YM. We use the theoretically clean definition of χ_YM through the Yang-Mills gradient flow [3] and open boundary conditions [4] in order to avoid the freezing of the topology. In this contribution we expand on the results presented in Ref. [5] by discussing all the systematics involved in the computation of χ_YM for each gauge group, and those coming from the continuum and large N extrapolations.
Observables
In the continuum, the composite fields we are interested in are the energy density e_t and the topological charge density q_t, defined as
$$e_t(x) = \frac{1}{4}\, G^a_{\mu\nu}(x)\, G^a_{\mu\nu}(x), \qquad q_t(x) = \frac{1}{32\pi^2}\, \epsilon_{\mu\nu\rho\sigma}\, \mathrm{Tr}\big[G_{\mu\nu}(x)\, G_{\rho\sigma}(x)\big], \qquad (2.1)$$
where G_{μν} is built in terms of the gauge fields B_μ evaluated at positive gradient flow time t [3]. Using the gradient flow, correlators built out of the fields e_t and q_t are finite and have a trivial renormalization. In particular, the quantity χ^t_YM, defined as in Eq. (1.1) with q replaced by q_t, has a finite and unambiguous continuum limit, which is independent of t, and obeys the correct chiral Ward identities to be inserted in the Witten-Veneziano relation [6].
In order to compare the theories at different N, we need to define a common scale to be used to express our results. In this sense, the reference scale t_0 introduced in Ref. [3] for SU(3) is a good choice, as it can be computed up to very high accuracy at a moderate cost. For general N, we want this quantity to be constant at leading order in 1/N, so we generalize its definition to be such that it coincides with the value of 0.3 for SU(3). The scale t_0 will be used to express all our results in dimensionless units, while we use the value of √t_0 = 0.166 fm only as a reference, for the clarity of the presentation, to quote values for the lattice spacing and lattice dimensions. From now on all the observables are computed at flow time t = t_0 unless stated otherwise.
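The precise N-dependent normalization is not spelled out above. A natural generalization (an assumption here, consistent with the requirement that the condition be N-independent at leading order, since ⟨E⟩ ∝ (N²−1)/N in perturbation theory, and reducing to the standard value 0.3 at N = 3) would read:

```latex
t^2 \langle E(t) \rangle \big|_{t = t_0} \;=\; 0.1125\, \frac{N^2 - 1}{N}
\qquad \text{(for } N = 3:\ 0.1125 \times 8/3 = 0.3\text{)}.
```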
Lattice details
We consider SU(N) Yang-Mills gauge theory on the lattice with the standard Wilson plaquette action and open boundary conditions in the time direction [4]. For each gauge group (N = 4, 5, 6), we simulate at three different lattice spacings in a range between 0.096 fm and 0.065 fm and a size of the spatial dimension of L ≈ 1.5 fm. The details of the ensembles are given in Table 1 of Ref. [5].
Because of the use of open boundary conditions, the vacuum expectation value of the observables is extracted in a plateau region sufficiently far away from the boundaries. This region is parametrized by the distance to the boundary d, so that the sum in the time direction is performed from x_0 = d to x_0 = T - d. Considering this, the estimator for e_t on the lattice is given by
$$\bar e_t = \frac{1}{N_{x_0} L^3} \sum_{x_0 = d}^{T-d} \sum_{\vec x} e_t(\vec x, x_0), \qquad (2.3)$$
where N_{x_0} is the number of time slices in the plateau region and e_t(\vec x, x_0) is computed through the standard clover definition of the field strength tensor.
Concerning the topological susceptibility, we define its estimator in a similar way as in Ref. [7],
$$\bar\chi^t_{\mathrm{YM}} = \sum_{|\Delta| \le r} \bar C_t(\Delta), \qquad \bar C_t(\Delta) = \frac{L^3}{N_{x_0}} \sum_{x_0 = d}^{T-d} \big\langle \bar q_t(x_0)\, \bar q_t(x_0 + \Delta) \big\rangle, \qquad (2.4)$$
where \bar q_t(x_0) denotes the spatial average of q_t over the time slice x_0. In this case, the definition of χ^t_YM includes an extra parameter, r. As we explain in the next section, this parameter can be chosen so as to minimize the statistical uncertainties, while keeping the systematic effects under control.
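A schematic implementation of this estimator (a sketch only: it assumes the time-slice sums of q_t have been precomputed per configuration, works in lattice units, and leaves the overall normalization conventions to the caller):

```python
import numpy as np

def topological_susceptibility(q_slices, d, r):
    """q_slices[cfg, x0]: spatial sum of q_t on each time slice of each
    configuration. The correlator C(Delta) is averaged over source
    positions x0 in the plateau region [d, T-d], and the susceptibility
    is the sum of C(Delta) over |Delta| <= r. Sink positions are only
    required to lie inside the lattice in this sketch."""
    n_cfg, T = q_slices.shape
    chi = 0.0
    for delta in range(-r, r + 1):
        x0 = np.arange(d, T - d)
        snk = x0 + delta
        keep = (snk >= 0) & (snk < T)
        # Average over configurations and over the kept source slices.
        chi += np.mean(q_slices[:, x0[keep]] * q_slices[:, snk[keep]])
    return chi
```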
Systematic effects from the definition of the observables 3.1 Open boundaries
Open boundaries are instrumental to achieve the finer lattice spacings in this work. Although we did not perform a dedicated comparison between open and periodic boundary conditions, the scaling of autocorrelations found for the larger N is compatible with a polynomial scaling law (our evidence even suggests τ_int ∝ a^{-2}), in comparison with the exponential growth observed in Ref. [8]. The details of our update algorithm are given in Ref. [5].
In order to fix the parameter d in Eqs. (2.3) and (2.4), we fit the symmetrized data to an ansatz of the form f(x_0) = A + B e^{-m x_0}. The criterion to define the plateau region is to require that |f(d) - A| < σ/4, where σ is the average statistical error for x_0 > d. This guarantees that the systematic effects are negligible compared to the statistical uncertainty. Following this prescription, a good choice for ē_t and C̄_t is d = 9.5√t_0 and d = 7.5√t_0, respectively. An example of how this fit works is shown in Fig. 1 (left).
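The plateau-selection procedure can be sketched as follows (assuming the symmetrized data and their errors are available as arrays; for simplicity the criterion below uses the global average error rather than the average error restricted to x_0 > d):

```python
import numpy as np
from scipy.optimize import curve_fit

def plateau(x0, A, B, m):
    # Plateau value plus the leading boundary contamination.
    return A + B * np.exp(-m * x0)

def find_plateau_start(x0, y, sigma):
    """Fit f(x0) = A + B*exp(-m*x0) and return the plateau value A and
    the smallest d with |f(d) - A| < sigma/4."""
    x0, y, sigma = (np.asarray(a, float) for a in (x0, y, sigma))
    p0 = (y[-1], y[0] - y[-1], 1.0)           # rough initial guess
    (A, B, m), _ = curve_fit(plateau, x0, y, p0=p0,
                             sigma=sigma, absolute_sigma=True)
    avg_err = np.mean(sigma)
    excess = np.abs(B) * np.exp(-m * x0)      # |f(x0) - A|
    candidates = x0[excess < avg_err / 4]
    return A, (candidates[0] if len(candidates) else None)
```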
Figure 1: Left: time dependence of the observable for an SU(4) ensemble at β = 11.14. The fit including a one excited state contribution agrees very well with the data. The red vertical line denotes the value of d = 9.5√t_0, which defines the plateau region for this observable. Right: ∆ dependence of the q_t(x_0)q_t(x_0 + ∆) correlator. The red (open) symbols show the results when using a standard algorithm and statistics comparable to the ones used for our large N simulations, while in black (filled) we show the precise data obtained using a multilevel approach and approximately 10 times more statistics. After the value of ∆ = 7.0√t_0 (red vertical line), the contribution of the tail is negligible compared to the statistical uncertainty.
Large distance behaviour of the topological charge correlator
The definition of χ^t_YM in Eq. (2.4) has an extra parameter r. For a given statistical accuracy, the existence of an appropriate r is guaranteed by the exponential fall-off of C̄_t(∆). In practice, however, this behaviour is hidden by the statistical fluctuations of the data, and one has to deal with a severe signal-to-noise problem. This is particularly relevant in the pure gauge theory, where the large mass of the pseudoscalar glueball produces an extremely fast decay of the signal.
One way to deal with the signal-to-noise problem is to use multilevel techniques, which have the potential to dramatically improve the scaling of errors of the standard Monte Carlo algorithm used in lattice QCD simulations. We use the algorithm described in Ref. [9] to obtain high precision data for an SU(3) ensemble at β = 6.11 (a = 0.078 fm) on a lattice of L ≈ 1.6 fm. Assuming that the relative contribution of the tail to the sum of the C̄_t(∆) correlator does not depend strongly on N, the estimation of the tail obtained from the high precision SU(3) data can be used to truncate the sum in the rest of the SU(N) ensembles. Figure 1 (right) shows a comparison between the correlator computed using the multilevel algorithm with a total of N_0 × N_1 = 784 × 280 measurements and the standard algorithm with N_0 = 15600 measurements. Clearly, the reduction in errors obtained from the multilevel algorithm is larger than the one expected simply from an increase in statistics.
We use the high-precision data to estimate D̄_t(r) = Σ_{∆>r} C̄_t(∆), and then compare it to C̄_t(∆) for each of our ensembles. At large distances, the contribution of the tail to the correlator is much smaller than the statistical variation; summing it up to arbitrarily large values of r therefore only increases the statistical fluctuations, without improving the signal. To find the value of r at which the systematics from the truncation can be neglected, we impose the condition α D̄_t(r) < σ/4, where σ is the statistical error of C̄_t(∆) at ∆ = r, and α is a normalization factor to account for possible N dependences of the observable. With this criterion, the choice r = 7.0√t_0 guarantees that the systematic effects coming from neglecting the tail of the correlator are negligible within our statistics.
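The truncation criterion can be sketched as follows; the correlator below is a synthetic, fast-decaying stand-in for the measured C̄_t(∆), with an invented flat error, so the numbers are illustrative only.

    import numpy as np

    def truncation_radius(delta, sigma_C, D_tail, alpha=1.0):
        """Smallest r with alpha*|D_t(r)| < sigma/4, where sigma is the
        statistical error of the correlator at Delta = r."""
        for r, sig, tail in zip(delta, sigma_C, D_tail):
            if alpha * abs(tail) < sig / 4:
                return r
        return delta[-1]

    delta = np.arange(1, 40) * 0.5            # distances in units of sqrt(t0)
    C = np.exp(-2.5 * delta)                  # fast pseudoscalar-like decay
    sigma_C = np.full_like(C, 1e-4)           # flat statistical error
    tail = np.cumsum(C[::-1])[::-1] - C       # D_t(r) = sum over Delta > r
    print("truncate the sum at r =", truncation_radius(delta, sigma_C, tail))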
3.3 Finite volume checks
One final source of systematic uncertainty comes from the finite volume used in lattice simulations. All our ensembles have a physical size L ≈ 1.5 fm, slightly larger than the SU(3) ensembles used in Ref. [6]. The statistics in Ref. [6] are one order of magnitude larger than ours, and no finite-size effects are observed there. In order to validate this for the larger N, we simulated lattices with L = 1.1 fm and 2.3 fm for both SU(4) and SU(5). An additional lattice at L = 2.0 fm was also generated in the case of SU(5). The results are shown in Fig. 2 (left) and demonstrate that finite-size effects are below the statistical fluctuations.

Figure 2: Left: Check of finite-volume effects for SU(4) and SU(5) at lattice spacing a ≈ 0.096 fm. The SU(5) points have been shifted to improve legibility. Also in the case of SU(3), with much larger statistics, no finite-size effects are observed [6]. Right: Plot of the ratio χ_t^YM/χ_{t_0}^YM as a function of a²/t_0. Even with this high-precision observable, there is no noticeable N dependence in the cut-off effects.
4. Large N and continuum limit fits
The final part of the analysis consists of the large N and continuum limit extrapolations. The data used for this purpose are shown in Fig. 3 (left), together with the final extrapolation. In order to assess the systematics of the extrapolations, several fits were performed; a summary is shown in Fig. 3 (right). The various fit strategies are described in the following.
For the final result, all the points are fitted to a global function which accounts for the leading order in the Symanzik and large N expansions,

t_0² χ_YM(a, N) = t_0² χ_YM(0, 0) + c_1/N² + c_2 a²/t_0 .   (4.1)

Given that the scaling violations are of the same order as the statistical errors, a conservative choice is to use only the two finest points for each gauge group. In this way, the assumption on the region of validity of the leading-order Symanzik expansion is constrained, and systematics are reduced at the expense of an increase in the statistical uncertainty. We use this approach and furthermore restrict the SU(3) data to fitting only the coefficient c_2 in Eq. (4.1). Again, not using SU(3) to fit c_1 reduces the systematics of the large N extrapolation. Using this fit strategy (NGF2), we obtain t_0² χ_YM(0, 0) = 7.03(13) · 10^-4. If one extra point in SU(3) is used (NGF3), the result t_0² χ_YM(0, 0) = 7.13(10) · 10^-4 is obtained, which is compatible with the one from NGF2.
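A sketch of such a global fit, assuming the leading-order form written above and using invented ensemble values (not the data of this work):

    import numpy as np
    from scipy.optimize import curve_fit

    def chi_model(x, chi0, c1, c2):
        # t0^2 chi(a, N) = t0^2 chi(0,0) + c1/N^2 + c2 * a^2/t0
        inv_N2, a2_t0 = x
        return chi0 + c1 * inv_N2 + c2 * a2_t0

    # invented (N, a^2/t0, t0^2*chi, error) values for a few ensembles
    N = np.array([3, 3, 4, 4, 5, 5, 6, 6], dtype=float)
    a2_t0 = np.array([0.10, 0.06, 0.10, 0.06, 0.10, 0.06, 0.10, 0.06])
    chi = np.array([7.30, 7.22, 7.16, 7.11, 7.11, 7.07, 7.08, 7.05]) * 1e-4
    err = np.full_like(chi, 0.10e-4)

    popt, pcov = curve_fit(chi_model, (1.0 / N**2, a2_t0), chi,
                           sigma=err, absolute_sigma=True)
    print("t0^2 chi(0,0) = %.2e +/- %.2e" % (popt[0], np.sqrt(pcov[0, 0])))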
Among the other fits attempted, the simplest one is to perform a continuum limit fit for each gauge group separately and then apply the large N extrapolation (LF3). Additionally, one can use Eq. (4.1) and fit it to all the points without restrictions (GF3), or, in a similar fashion to NGF3, use the three points from SU(3) but only the two finest from the other gauge groups (GF2). The former produces t_0² χ_YM(0, 0) = 7.06(7) · 10^-4, while the latter gives t_0² χ_YM(0, 0) = 7.09(7) · 10^-4. Both are compatible with the results quoted previously, but note that the errors are roughly half the size, so the choice made in NGF2 is the more conservative one, accounting for possible systematic effects.
In addition, an extra term of the form a²/N² can be added to Eq. (4.1). However, our data suggest that both the 1/N² and the O(a²) corrections are small, a fact which is further supported by the N independence of the ratio χ_t^YM/χ_{t_0}^YM as a function of a²/t_0. This quantity can be determined to very high accuracy, as shown in Fig. 2 (right). In spite of this, a fit including the sub-leading a²/N² term (GFF3) was also considered in our analysis.
As can be seen in Fig. 3 (right), the different fit strategies are all compatible, and the fluctuations in the final result cannot be directly associated with a systematic effect. In fact, systematic effects cannot be discerned from the data, so the more conservative choice, NGF2, is the one we adopt for our final result.

Figure 3: Left: Data entering the large N and continuum limit extrapolation; the SU(3) data are from Ref. [6], while the rest are taken from Ref. [5]. The fit corresponds to NGF2. Right: Summary of the several fits employed. For each fit we report the value of χ²/dof on the upper axis. The band shows the result of fit NGF2, which we quote as the central value for t_0² χ_YM(0, 0) and is compatible with the rest of the fits we have tested.
5. Conclusions
In this work we have presented a computation of the large N limit of the topological susceptibility χ_YM using a theoretically sound definition on the lattice through the Yang-Mills gradient flow. Our final result, t_0² χ_YM = 7.03(13) · 10^-4, has a 2% error and represents a new verification of the Witten-Veneziano formula that gives mass to the η′ meson. We have presented a detailed discussion of the systematic effects involved in this calculation; at the level of accuracy of our results, we observe no significant finite-N or finite-a corrections.
"year": 2016,
"sha1": "af081f15c7c954701b88dc385f519a0b6398382a",
"oa_license": "CCBYNCND",
"oa_url": "https://pos.sissa.it/256/350/pdf",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "3ffcc5ebc245141041de01d0e37095543ead787a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Medicare and physician autonomy
INTRODUCTION
It would be possible to view the impact of Medicare on physicians from many perspectives: the impact on individual physicians, on a particular specialty, on academic physicians, on graduate medical education and physician specialization, on quality of care, on physician incomes, on physician autonomy, or on a variety of other aspects of medical practice. We have chosen to focus on physician autonomy, a topic that has gained prominence recently as a result of its perceived erosion.
One of the critical questions that has been raised about physician autonomy and Medicare is whether or not physicians have traded reduction of clinical autonomy or discretion for preservation of economic autonomy. Although often couched in terms of quality of care and access to care, physicians, particularly through organizations such as the American Medical Association (AMA), have in fact focused on the economic autonomy of physicians. Yet concern about loss of clinical autonomy is a major morale issue within the medical profession (Lee and Culbertson, 1990). Lewis et al. (1991) reported "growing dissatisfaction with the practice of internal medicine, primarily related to concerns over loss of clinical autonomy…."

Freidson has been identified for the last two decades as the leading theorist on professional autonomy. He has recently defined autonomy in the following manner:

Taken as an ideal type, complete autonomy is sustained by an occupational monopoly embracing several dimensions. It is first of all an economic monopoly: the profession controls recruitment, training, and credentialing so it can regulate directly the number of practitioners available to meet demand. This has obvious implications for income. Economic monopoly is viable, however, because professional autonomy also includes a political monopoly over an area of expertise; the profession is accepted as the authoritative spokesman on affairs related to its body of knowledge and skill, and so its representatives serve as expert guides for legislation and administrative rules bearing on its work. Furthermore, the profession has an administrative or supervisorial monopoly over the practical affairs connected with its work; its members fill the organizational ranks which are concerned with establishing work standards, directing and evaluating work. "Peer review" rather than hierarchical directive is the norm. Clearly as I have defined it, professional autonomy represents a privileged position of some significance (Freidson, 1994).
Freidson's emphasis in this recent definition on economic dimensions of autonomy is a departure from the thought of earlier theorists who stressed the clinical aspect of autonomy. The prominent American sociologist Talcott Parsons emphasized the superior position by virtue of technical expertise of the physician as essential to the public good. Parsons held that in order to maintain regulation of their patients, physicians must have the right as a profession to control the conditions of their clinical work and the patients they accept (Parsons, 1964).
CONTEXT
In viewing the relationship of Medicare to physician autonomy, it is useful to recall the historic opposition of the medical profession, particularly organized medicine, to any role of the Federal Government in health care financing, except for a limited role in relation to indigent care.
The historic opposition by organized medicine, particularly the AMA, to a significant Federal role in the financing of national health insurance, or to the more limited proposals related to the elderly, has been documented in detail by others (Starr, 1982) and will not be repeated. The context and the flavor of the times were elegantly described by Ball (1995) in his article, "What Medicare's Architects Had in Mind." Although President Truman first proposed a program of national health insurance in 1945, it was not until after his election in 1948 that AMA leadership became alarmed about the possibility that Congress might do something. The AMA campaign was well organized and well financed and included pamphlets in physicians' offices, press attacks, public speakers, and vigorous lobbying against the proposal supported by President Truman. The attack was bitter and ultimately successful. The idea of hospital insurance for the elderly was first floated by Oscar Ewing, head of the Federal Security Agency, in 1952. President Eisenhower was elected for his first term later that year, Oscar Ewing departed, and there was little support for such proposals in the political levels of the executive branch for the next 8 years (Ball, 1995). In 1957, Representative Aime Forand (D-RI) introduced the first of a series of bills to provide hospital insurance for the elderly (Litman and Robins, 1984). In 1961, after President Kennedy's election, it was re-introduced in the House and Senate as the King-Anderson Bill.
It is important to recall the context of the mid-1960s, when Medicare was enacted and implemented. The Civil Rights Act was passed less than 2 years before Medicare's passage in 1965, without any serious consideration of its later impact on the practice of medicine through the desegregation of hospitals, particularly in the South, and the resultant enhanced patient access to care, a laudable but unforeseen consequence. Finally, the impact of the rising costs of health care on Medicare policy was not fully appreciated in the beginning. In time, rising costs far in excess of increases in gross domestic product became the overriding force driving the Medicare policies affecting physicians.
After President Johnson's landslide victory in 1964, the likelihood that Medicare would be enacted was substantially increased, but Congress included a number of provisions to mute physician opposition. Medicare was to build on the existing system, not reform it. Claims processing and payment were to be administered by private organizations under contract as Medicare carriers to provide a buffer between physicians and government. The Blue Shield plans and commercial health insurance plans that became carriers were allowed wide discretion in interpreting Medicare policy.
Congress also adopted a payment method designed to attract physicians, permitting them to bill what they normally charged their privately insured patients, the "customary, prevailing, and reasonable" (CPR) charge. Medicare payment was based on this "reasonable" charge, defined as being the lower of the physician's actual charge, the physician's customary charge (the physician's median charge for service from the previous year), or the prevailing charge in the locality (set at the 75th percentile of the distribution of customary charges in a locality). In addition, physicians were allowed to bill their patients directly through the practice of balance billing, which allowed them to collect more than Medicare's reasonable charge.
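The CPR rule is essentially a three-way minimum, as the following Python sketch illustrates; the dollar amounts are invented and the function is a simplification for illustration, not an actual Medicare pricing implementation.

    import numpy as np

    def reasonable_charge(actual, prior_year_charges, locality_customaries):
        """CPR rule: pay the lowest of (1) the physician's actual charge,
        (2) the customary charge (the physician's median charge from the
        prior year), and (3) the prevailing charge (the 75th percentile
        of customary charges in the locality)."""
        customary = np.median(prior_year_charges)
        prevailing = np.percentile(locality_customaries, 75)
        return min(actual, customary, prevailing)

    # invented example: a $50 submitted charge is cut to the $42 prevailing screen
    print(reasonable_charge(50.0, [40, 45, 48, 52], [30, 35, 38, 42, 60]))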
This was the context in which Medicare was enacted and signed into law on July 30, 1965. At the time, Congress mandated in section 1801 of title XVIII that "nothing in this title shall be construed to authorize any Federal officer or employee to exercise any supervision or control over the practice of medicine or the manner in which medical services are provided …" (Blumenthal, 1988). This initial intent showed the desire of the Federal Government to avoid conflict with the perceived sphere of influence of medicine, thus guaranteeing wide protection of both clinical and economic autonomy.
Within 3 months, it became apparent that the Civil Rights Act (title VI) would have to apply to hospitals if they were to receive payment from Medicare. The U.S. Public Health Service, under the leadership of Dr. William Stewart, Surgeon General, was enlisted to assist the Social Security Administration in a broad-based, intensive effort to ensure hospital compliance with the Civil Rights Act. In sum, the hospitals that had practiced segregation agreed to desegregate: everything from separate drinking fountains to inpatient and outpatient care. This action by the Federal Government had a profound impact on physician autonomy. In the definition presented earlier, clinical autonomy was defined in part as the ability of practitioners to select or reject patients/clients from their practices. Now, physicians were no longer free to segregate their patients when they were hospitalized.
CONCEPT OF PHYSICIAN AUTONOMY
Let us turn to a more detailed review of Medicare and physician autonomy, as the Medicare program has become the dominant force in setting physician payment policy. We will consider the concept of physician autonomy, specifically its economic and clinical dimensions, and the embodiment in public policy of these dimensions of autonomy in the Medicare program and corresponding influences upon the medical profession.
Autonomy has been cited by Freidson as the key defining characteristic in the organization of professions. Freidson (1970) suggests that "functional autonomy" is defined in medical occupations by "the degree to which work can be carried on independently of organizational or medical supervision, and the degree to which it can be sustained by attracting its own clientele independently of organization or referral by other occupations, including physicians." The key point here is that medicine as a profession is at the pinnacle of this occupational hierarchy, and in much of the 20th century has been able to control, in cooperation with the Federal and State governments, the basic terms of medical work. Self-governance of the profession is key to a definition of autonomy. Perhaps Starr (1982) portrayed this concept most effectively when he referred to medicine as a "sovereign" profession. Schulz and Harrison (1986) have attempted to define specific elements of autonomy based on an empirical survey of physicians. Five elements of their definition can be described as "clinical" in nature and include control over: (1) the nature and volume of tasks; (2) the acceptance of patients; (3) diagnosis and treatment; (4) the evaluation of care; and (5) other professionals. Three elements of their definition might be considered economic in character and include freedom of choice of specialty and practice location and control over earnings (Schulz and Harrison, 1986).
Freidson contends that the clinical and economic interests of the profession have become mixed, and in the process, corrupted. Freidson makes autonomy and its preservation the foundation of the economic and consequent political strategies of the medical profession. He notes the resistance of medicine to involvement of external entities in its affairs as defined by the profession itself. He then notes the established monopoly position of the profession over the use of select scarce resources and services and suggests that "freedom to set the terms of compensation is, without some form of professional self-regulation in the public interest, obviously subject to abuse" (Freidson, 1970).
Freidson argues that the profession has made no effort at self-regulation of fee practices on the part of its members. Rather, it has left any attempt at redressing patient grievances to the courts. He suggests that in the United States, the profession has made little effort "to insure that its members do not abuse their privileged economic position by seeking more than a 'just price'" (Freidson, 1970). He states that society in the United States has had a difficult time establishing a concept of a "just price," but he is certain that a free market model of competition will not achieve this because physicians enjoy a regulated advantage in the division of labor as a result of preferential licensing acts.
Freidson concludes that a "flaw" exists in the autonomy of physicians, in which the economic and clinical interests of the profession become intertwined, and in which economic interests may at any point in time prevail. In this regard, his critique anticipates the professional concerns voiced most forcibly by Relman (1986) in his observations of the fiduciary responsibility of the physician and the debasement of this responsibility that he sees occurring in profit-oriented medicine. Freidson refuses to fall prey to the notion that autonomy is a purely economic device, choosing to see its development from a variety of social forces. In refuting a purely economic causal theory, he states that, "Consulting professions are not baldly self-interested unions struggling for their resources at the expense of others and of the public interest" (Freidson, 1970). Rather, it is a perception of an entitlement to a superior level of resource as a result of the insularity of the profession from the public that creates this "flaw."

Reinhardt (1988) is particularly persuasive in calling attention to the connection of clinical issues of autonomy and economic conditions, especially as viewed from within the profession of medicine. He cites a physician colleague who summarizes this theme as "the serious damage society inflicts upon patients when limits are placed on physicians' clinical freedom to compose medical treatments as they see fit and on their economic freedom to charge whatever honoraria they deem honorable" (Reinhardt, 1988). As an economist, he is especially sensitive to the potential drift of Evans' (1984) "not only for profit" medical-economic ethic to one that is distinctly for-profit, first and foremost. He adds that the economic imperative of joint ventures in which physicians become economic partners of hospitals and investors, or of direct ownership of imaging and laboratory devices to which the physician refers patients, will further erode the trust basis of autonomy (Reinhardt, 1988).

As Gray (1983) notes in the introduction to his study of for-profit health care, trust as a basis for professional autonomy is under attack as a myth of the profession to enhance status while at the same time preserving monopoly privilege and power in the economic sphere. This is a significant criticism, for Freidson has defined on several occasions a service orientation of trust as a social contract of the profession with society that necessitates and legitimizes autonomy for the profession (Freidson, 1970). If this contract is violated, how can autonomy for the profession legitimately be sustained?
It is Reinhardt's assertion that the absence in the United States of an overall program of budgetary control over medical expenditures, as is characteristic of the prominent European systems, results in unparalleled micro-management at the clinical level to achieve cost control unattainable on a larger scale. He writes that "...if the bureaucrats cannot somehow impose upon the healers an overall budget constraint ex ante, then they will sooner or later be driven to control their outlays on an ongoing basis, by monitoring each and every transaction for which they pay, that is, by second-guessing both the providers' clinical and pricing decisions" (Reinhardt, 1988). This appropriation of the clinical dimension of autonomy would be regarded as intolerable by physicians in other medical care systems. He suggests that "European and Canadian physicians would be appalled at the numerous intrusions into clinical decisions now routinely made by these external monitors in the United States. They probably would rise up in arms over that loss in clinical autonomy" (Reinhardt, 1988).
It seems problematic that physicians in the United States would willingly and knowingly sacrifice the clinical element of autonomy that Freidson considered to be the more consequential element of his two-part definition of autonomy. Clinical autonomy, after all, constitutes the primacy of the physician in the health care division of labor and is the basis on which arguments for political and economic autonomy are formed.
Reinhardt's answer to this seeming paradox is that physicians in the United States have traded off clinical autonomy "in their tenacious fight to preserve the individual physician's right to price his or her services as they see fit" (Reinhardt, 1988). This observation has been distilled into a formula referred to as Reinhardt's "Law" or "Irony." Reinhardt has summarized his law as follows: "In modern health care systems, the preservation of the healers' economic freedom appears to come at the price of their clinical freedom" (Reinhardt, 1988). The application of Reinhardt's Law to the late-20th-century United States scene would appear to indicate a priority on the part of physicians to pursue economic betterment at the expense of clinical autonomy. If so, this would be critical in reformulating a definition of autonomy for the future, for this observation implies the willingness of physicians to sacrifice control of the division of labor. This strategy may also ultimately undermine the ability of physicians to continue their dominance of the political economy of health services.
MEDICARE'S IMPACT ON PHYSICIAN AUTONOMY
At the time of the establishment of Medicare, the Federal Government deferred to the medical profession's definition of autonomy in both clinical and economic realms by accepting the principle of usual, customary, and reasonable fees. This was based on the convention that it was the physician's prerogative to establish prices for services (Starr, 1982). Physicians were to be left alone by public policy design to structure their clinical work and exercise relative freedom in the economic arena.
As Starr has observed, however, the tension "between a medical care system geared toward expansion and a society and state requiring some means of control over medical expenditures" led to modifications in Medicare, which were first observed in the area of economic autonomy and subsequently in the clinical dimension (Starr, 1982). Medicare expenditures for physician services grew rapidly from the outset of the program, and both the price and volume of services rose rapidly. Part B of Medicare (primarily physician visits) grew from 18.1 million visits in 1967 to 43.8 million in 1970 and 155 million in 1980. Expenditures rose from $900 million in 1967 to $10.1 billion in 1980 (Health Care Financing Administration, 1996).
Initially, the impact of these program modifications was observed in the economic realm. However, as Reinhardt predicted, the perceived reduction in economic benefit to the profession has also resulted in programmatic compromises that have limited clinical autonomy. These latter changes have been more subtle than the economic changes but are nonetheless real elements of the historical development of the Medicare program. These alterations in the program are summarized in Table 1.
Wage and Price Controls (1971-74)
The first intrusion into the economic autonomy of physicians occurred in 1971, with the introduction by the Nixon administration of wage and price controls. Although this program was part of a general approach to deal with inflation throughout the economy, the health industry was singled out for specific attention. Fee increases were limited according to stringent Federal price guidelines, constituting a direct attack on the premise of economic autonomy. This program remained in effect through 1974 for the health sector, the last segment of the economy to be relieved of such controls (Litman and Robins, 1984).
Professional Standards Review Organizations (1972-Present)
In 1972, the first foray into clinical autonomy through economic sanctions was instituted in the passage of Public Law 92-603. This program, established in the face of significant but unsuccessful opposition by organized medicine, established a review program to ascertain the appropriateness and quality of care delivered in hospitals to beneficiaries of Federal programs. Certainly in retrospect, it may be argued that this program was a benign one with respect to its impact on clinical autonomy. It functioned on the basis of peer review committees within the structure of the hospital organization, which were in turn comprised primarily of physician members. It may be argued that this approach was not in conflict with the key characteristic of professional autonomy identified by Freidson of judgment of practice by one's own professional colleagues.
Furthermore, the economic impact upon physicians of the Professional Standards Review programs was quite muted as well. Sanctions, when applied, were limited to reduction of hospital payment for inappropriate stays or lengths of stay and were applied concurrently or retrospectively (Gray, 1991). It may be argued that a pattern of indirection in matters that might impact upon clinical autonomy was deliberately built into the Professional Standards Review Organizations and was to be a continuing feature of Medicare policy throughout the next 15 years.
Medicare Economic Index
In 1975, the Medicare Economic Index (MEI) was established to address concerns regarding medical price inflation following the discontinuation of price controls. Under this program, the MEI was used to adjust prevailing charges. The significance of this program for the economic autonomy of physicians was the break in the linkage of actual charges to Medicare payment rates. Following the enactment of the MEI, physicians might raise their rates for fee-for-service patients but saw significantly smaller increases in Medicare-allowed payments for comparable services, because the allowable percentage adjustments of the MEI were generally lower.
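The break in linkage can be illustrated with a stylized two-line calculation; the percentages are invented, and actual MEI mechanics were more detailed.

    # invented numbers: the physician raises fees 12%, but the MEI allows only 5%
    fee, prevailing = 100.0, 100.0
    fee *= 1.12           # billed charge rises with the market
    prevailing *= 1.05    # prevailing-charge screen rises only by the MEI
    print(fee, prevailing)   # 112.0 vs 105.0: allowed payment lags the billed fee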
Diagnosis-Related Groups (1983)
A revolutionary change in the payment of hospitals under Part A of the Medicare program occurred in 1983 with the enactment of a system of prospective payment for hospitals. This system dramatically restructured financial incentives by defining specific diagnosis-related groups (DRGs) to represent conditions for which patients are hospitalized and setting specific payment amounts for each group.
This program placed hospital organizations at risk for formula-based payments under Medicare, whereas previously, payment of "costs" to the hospital had been assured. The DRG system was geared to equate levels of care with resources necessary to produce that care and to penalize "inefficient" hospitals.
Hospital-based physicians, such as radiologists, anesthesiologists, and pathologists, were brought directly into these discussions of economic issues and their consequences for hospitals. Practice arrangements of these physicians in many cases were restructured into contractual or private-practice arrangements to remove these expenditures from overall hospital costs. Although attending physicians were not placed directly at risk for hospital performance under this program, policy changes in hospital payment clearly affected physician behavior. Shorter lengths of stay and fewer hospital admissions attained through this program led to a change in practice and movement of physician services away from the inpatient setting to ambulatory environments.
The enactment of the DRG program, although not directly infringing upon the clinical autonomy of physicians, was nonetheless a cause of concern for the medical community. Colombotos and Kirchner (1986) published a study based upon a survey of physician attitudes in which physicians linked the DRG concept for treatment and the direct control of physician fees by the government as the two most distasteful proposals for the future practice of medicine. They suggested that DRGs would result in explicit protocols and standards for care, which would in turn limit the clinical autonomy of physicians. Direct government control of fees would obviously limit their economic autonomy (Colombotos and Kirchner, 1986).

Their prediction was that physicians would experience both forms of infringement on their historic autonomy in the 1990s. They projected that "during the next decade clinical protocols and standards, spearheaded by the DRG concept, will probably exercise an increasing influence on the clinical decision-making of physicians. In addition, the fees of physicians will probably be fixed, first under Medicare, and then under other government-financed programs, such as NHI" (Colombotos and Kirchner, 1986). They then proceed to construct a specific scenario for the future of clinical autonomy and its economic counterpart, stating that "the clinical autonomy of physicians, and their pocketbooks, are likely to fare better if clinical protocols and physicians' fees are negotiated between government and organized medicine than if they are left to the whim of market forces, a market in which the for-profit chains would have the upper hand over individual physicians competing with each other. Collective autonomy would replace individual autonomy in both clinical decision-making and in physician reimbursement" (Colombotos and Kirchner, 1986).
This statement, of course, refutes the conservative ideology for a classical economic model of physician competition at the level of multiple small providers and purchasers. Instead, the authors make the ironic proposition that physicians will find greater remnants of their autonomy preserved by cooperation with government than with less benign powerful large payers who concentrate economic power against the profession.
Deficit Reduction Act of 1984
With hospital payment reform under its belt, Congress again turned its attention to physician payment. Developing a strategy for reform of physician payment, however, would prove to be far more difficult and would be years in the making.
In 1984 Medicare defrayed only 49 percent of the medical care costs incurred by the average beneficiary. This left substantial out-of-pocket expenses for premiums, coinsurance, charges by physicians in excess of Medicare payments, and uncovered services (drugs, long-term care).
In the Deficit Reduction Act of 1984, Congress imposed a freeze on physician fees and established the Participating Physician and Supplier Program (PAR), under which physicians could agree to accept assignment (the Medicare-approved charge as payment in full) on all claims. In return, they would be listed in a directory available to beneficiaries and would receive expedited claims processing. Moreover, they were permitted to raise their submitted charges during the freeze, which affected their charge profile in determining future payments but not payments during the freeze.
In voluntarily accepting assignment, physicians gave up the ability under Part B to "balance bill" the patient for the full fee. This feature of the program conflicted directly with deeply held values of the medical profession regarding economic autonomy. The 1987 Report to Congress of the Physician Payment Review Commission (PPRC) notes that 80 percent of all physicians surveyed who initially refused to participate believed that physicians should have the right to set their own fees (Physician Payment Review Commission, 1987). The establishment of the PAR represents the first effort to move away, albeit by incentives, from physician control of their price or fee, a key element of economic autonomy. In deference to the historic autonomy claims of the profession, however, participation was strictly voluntary. As the program developed, participation rates increased steadily over the decade of the PAR program's existence: whereas 30.6 percent of practitioners had signed participation agreements on January 1, 1987, this percentage had increased to 52.2 percent as of January 1, 1992 (Physician Payment Review Commission, 1992).
The importance of the clinical autonomy of the physician is evident in analysis of the factors responsible for the continuing rise in Medicare expenditures for physician services. During the 1980s, the change in the number and average age of Medicare beneficiaries accounted for only about 2 percent of annual Part B growth. From 1981 to 1986, increases in fees represented about 6 percent of total growth in expenditures per enrollee, and rising volume accounted for about 7 percent.
Clearly these policies preserving the clinical autonomy of practitioners had direct economic ramifications. Medicare spending on physician services from 1983 to 1986, the period during which fees were frozen, increased nearly 30 percent. Almost three-quarters of this growth was attributable to more services per beneficiary and changes in the mix of services (Physician Payment Review Commission, 1987).
It was increasingly evident that Medicare payment policies were contributing to the cost increases. The reliance on historical fee patterns resulted in a payment system of pricing that came to be considered irrational, confusing, and unfair. Over the years, wide payment differentials were perpetuated among types of procedures, specialties, geographic areas, and practice sites that could not be explained by differences in the costs of physicians' practices.
Two distortions were particularly noteworthy. First, because payments were based on past charges, two physicians providing identical services could receive markedly different payments. Second, the value of surgical and technical procedures became increasingly distorted relative to visits and consultations.
Consolidated Omnibus Budget Reconciliation Act of 1985
In the Consolidated Omnibus Budget Reconciliation Act (COBRA) of 1985, Congress began to take steps to realign the pattern of payments to physicians. Applying the concept of "inherent reasonableness," it authorized the Secretary of Health and Human Services (HHS) to identify services for which Medicare-allowed charges were out of line with relative costs and to depart from the CPR methodology in adjusting payments for those services. In addition to providing a mechanism to change payments for selected services, COBRA created a framework for more comprehensive reform. The legislation directed the HHS Secretary to develop a resource-based relative value scale (RBRVS).
Congress also created the PPRC to advise on changes in the methods of paying physicians under Medicare. The creation of the PPRC signaled both the intention of Congress to reshape physician payment policy and the need for independent analytic support and policy advice. The commission began its work in the fall of 1986 and issued its first Report to Congress in the spring of 1987 (Physician Payment Review Commission, 1987).
Omnibus Budget Reconciliation Acts of 1986 and 1987
In the years after the establishment of the PPRC, Congress continued to squeeze physicians, particularly in the area of their economic autonomy. In the Omnibus Budget Reconciliation Act (OBRA) of 1986, Congress placed maximum allowable actual charge (MAAC) limits on the amounts nonparticipating physicians could bill above the Medicare-approved charge. The MAACs were only intended to be a transitional solution to controlling balance bills (bills in excess of Medicare-allowed amounts), but the establishment of charge limits set an important precedent for payment reform. Beginning with OBRA 1986, Congress began to take steps to both realign the pattern of relative payment and achieve budget savings by reducing prevailing charges for cataract surgery and anesthesia during cataract surgery.
The PPRC agreed with this approach and argued that Congress should move policy in the direction of longer term reform by reducing payments for overvalued procedures. It followed the principle of inherent reasonableness to identify 12 families of procedures it considered to be overvalued in relation to Medicare payments for other services. In OBRA 1987, Congress continued this pattern by reducing prevailing charges and imposing special limits on these services.
By 1988, the PPRC had endorsed the concept of replacing the CPR system with a fee schedule for Medicare. The study of an RBRVS, commissioned by HCFA and conducted under the direction of Professor William Hsiao of Harvard, was well under way, and Congress had begun to incrementally adjust relative payments and to strengthen beneficiary protection from balance billing (Hsiao et al., 1988).
In 1989, the PPRC submitted a set of proposals to Congress to rationalize the pattern of payments of physicians, to improve beneficiary financial protection, and to control program spending without diminishing access and quality of care. The cornerstone of the payment reform proposal was replacement of the system of payment of fees based upon usual, customary, and prevailing fee structures with a Medicare fee schedule based primarily on resource costs.
The commission recommended that the RBRVS be resource-based and composed of three elements: (1) physician work, reflecting the time and intensity of physician effort in providing a service; (2) practice expenses, including costs such as office rent, salaries, equipment, and supplies; and (3) a separate malpractice-expense component that reflects professional liability insurance premium expenses (Physician Payment Review Commission, 1989).
The RBRVS is translated into a fee schedule when multiplied by a dollar conversion factor. The PPRC recommended that the initial conversion factor be budget-neutral, so that outlays for physician services projected under the fee schedule would be the same as those under the current system.
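A stylized sketch of the fee-schedule arithmetic follows; the RVU values and conversion factor are invented, and geographic adjustments are omitted.

    def fee_schedule_amount(work_rvu, pe_rvu, mp_rvu, conversion_factor):
        """Fee = (work + practice-expense + malpractice RVUs) x dollar CF."""
        return (work_rvu + pe_rvu + mp_rvu) * conversion_factor

    def budget_neutral_cf(projected_total_rvus, current_outlays):
        """Initial CF chosen so projected fee-schedule outlays equal
        outlays under the prior charge-based system."""
        return current_outlays / projected_total_rvus

    # invented RVUs for one service and a hypothetical conversion factor
    print(fee_schedule_amount(1.3, 0.9, 0.1, 31.0))   # 71.3 dollars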
The second element of the PPRC proposal was a limit on charges for unassigned claims at a fixed percentage of the fee schedule amount. The charge limits would replace the physician-specific MAAC limits with a single limit applied to all physician services. This element of the package directly impinged on the economic autonomy of physicians by creating for the first time a fee limit for all physicians.
The third and most controversial piece of the PPRC package was its recommendation to base annual updates in the conversion factor on a comparison of actual increases in expenditures with a target rate of increase. The expenditure target (ET) would reflect projected increases resulting from inflation and growth and aging of the beneficiary population along with decisions concerning how much expenditure growth could exceed these factors to allow for increases in volume of services.
The ET proposal became the major obstacle to agreement. Not surprisingly, the AMA was strongly opposed to ETs. The American College of Surgeons, in contrast, supported this approach. It may be argued that this opposition was based on the possibility of infringement of Medicare into the clinical realm. In the face of this opposition from significant elements of the profession, Congress compromised and established a more complicated approach, called the volume performance standard (VPS).
Omnibus Budget Reconciliation Act of 1989
OBRA 1989 included the four components of the PPRC proposal: the Medicare Fee Schedule, charge limits, the VPS to determine updates in the conversion factor, and increased Federal support for clinical effectiveness research. The previously mentioned VPS implemented in 1990 "sets an annual volume target through Congressional action, or if the Congress does not act, through a default formula. The difference between this target and actual volume partly determines future physician rate updates, with low volume growth rewarded by higher updates" (Physician Payment Review Commission, 1994). Beginning in 1991, the newly established charge limits including limits on balance billing began to replace the MAACs, with full implementation in 1993. The Medicare Fee Schedule was fully implemented in 1996.
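The VPS logic can be caricatured in one line: the update starts from an inflation measure and is adjusted by the gap between target and actual volume growth. The sketch below uses invented percentages and a simplified formula, not the statutory default.

    def vps_update(mei_pct, target_growth_pct, actual_growth_pct, share=1.0):
        """Stylized VPS: low volume growth is rewarded with a higher update,
        overruns are penalized with a lower one."""
        return mei_pct + share * (target_growth_pct - actual_growth_pct)

    # invented numbers: MEI 3%, target growth 8%, actual growth 10%
    print(vps_update(3.0, 8.0, 10.0))    # 1.0 -> a below-inflation update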
AND BEYOND
Thirty years after the implementation of Medicare, physicians have found dramatic changes in their level of economic autonomy and, to a lesser extent, in their clinical autonomy. As noted in the discussion of the chronology of the program (and in Table 1), much of the activity of the Medicare program can be seen as reflecting a policy of observing the original congressional mandate of non-interference in the private practice of medicine with respect to its clinical dimension. Economic adjustments to the program have been quite subtle in their influence on clinical activity, and it may be argued that Medicare's processes of control of clinical utilization at the level of the individual practitioner have been quite limited, especially in contrast to the more heavy-handed utilization control methods of private insurers.
In the realm of economic autonomy, the picture is different. Following the initial attempts in the 1970s by policymakers to limit fee increases, Medicare has moved more directly to limit physician discretion in economic matters. The creation of the PAR Program in 1984 has led to the current limitation on balance billing that has effectively curtailed the potential for even non-participating physicians to exceed a mandated payment level in billing Medicare beneficiaries. Although this may not appear to be a major intrusion upon economic autonomy, the issue of balance billing has been an explosive one in other industrialized nations. The Ontario physicians' strike of 1986 provides a specific example of the volatility of this issue as perceived by physicians (Iglehart, 1986). Glaser (1989), whose cross-national work on physician payment policies has been widely recognized since the early 1970s, has boldly asserted that the decision to balance or extra-bill the patient beyond insured levels is "in every country … the most explosive issue between public authorities and medical profession." Culbertson (1991) has suggested that "balance billing offers an 'escape valve' for the government" in a climate in which expenditure control is a consuming governmental objective. With increased pressure on Medicare Part B to contain increases in expenditures for physician services, balance billing will emerge as a public policy debate between beneficiary advocates and the medical profession in the congressional consideration of medical savings accounts.
The move to the congressionally mandated Medicare Fee Schedule based on relative value units has effectively removed control of Medicare fees or prices from the hands of the medical profession. This factor, of course, violates Schulz and Harrison's (1986) definition of autonomy as inclusive of economic control by the profession. The remaining policy debate, encompassed in the heated controversy over expenditure targets and subsequent volume performance standards, centers on control of numbers of procedures performed. Economists have long debated whether physicians attempt to achieve a target income through performance of additional procedures when revenue is contained through price controls (Evans, 1984). As Evans explains this theory, "When average workloads and incomes fall, due to exogenous increases in supply, physicians change their practice patterns to increase utilization" (Evans, 1984). It is this that has made the notion of volume performance standards controversial for, on a macro level, it is suggested that physicians will lose clinical autonomy through overall programmatic budgetary limitations, which will have a detrimental effect on the clinical judgment of individual physicians.
What is debatable in this assertion is whether the economic consequences for an individual physician are sufficiently great to cause him or her to either inappropriately withhold service for fear of negatively impacting global budgets or to prescribe excessive services to make up for loss of marginal income. The experience of large physician groups such as the Permanente Group does not appear to support either of these assertions when risk is placed at the level of a larger entity such as the medical group rather than at the level of the individual physician. In its present form, it may be argued that the Medicare program and Medicare trust funds continue to be the ultimate holders of risk and therefore insulate individual physicians to some extent from their own decisions.
What of the future? It appears that, much in the same way that financing and payment for health care services for individuals under 65 years of age is moving away from fee-for-service payment toward capitated managed care plans, Medicare may follow the same pattern. Indeed, some have argued that Medicare is the last bastion of fee-for-service medicine in the United States, a remarkable concession to its founders' commitment to the autonomy of physicians. If Medicare moves in this direction on a wider scale, it will in effect transfer risk from its general funds to the management of its contracting providers, as private insurers have done in the 1990s.
At the outset, Medicare permitted certain prepaid organizations to receive payment on a cost-reimbursement basis for their Medicare enrollees. In April 1985, before the advent of risk contracting, there were 916,000 Medicare enrollees in 109 plans receiving cost reimbursement (Health Care Financing Administration, 1996). A risk-sharing contract option for health maintenance organizations (HMOs) was instituted in 1972, 1 year before the Federal HMO act was passed. Progress was very slow at the outset. Greenlick has noted that this program began in 1978 as a demonstration at five original sites. It required 5 years, from 1982 to 1987, to enroll 1 million beneficiaries under Medicare risk contracts. In 1985, HCFA implemented changes enacted in 1982 to provide for a managed-care capitated payment option based on a prospective payment methodology. Under the original risk-sharing contracts, growth continued to be slow. The second million beneficiaries were enrolled from 1987 to 1991, and by 1995, the third million had entered into this arrangement (Greenlick, 1996).
Medicare HMO enrollment has increased steadily since risk contracting began in 1985. The number of beneficiaries in cost-reimbursement plans remained relatively steady from 1985 until 1996, when there was a significant decline (Health Care Financing Administration, 1996).
In 1996, nearly 9 percent of Medicare beneficiaries were enrolled in risk-contracting HMOs, and an additional 2 percent were enrolled in cost-reimbursed HMOs. In recent years, Medicare risk enrollment has grown rapidly (41 percent from December 1994 through January 1996). Enrollment in Medicare risk-contracting HMOs is particularly significant in California (36 percent), Oregon (34 percent), Arizona (31 percent), and Hawaii (31 percent) (Health Care Financing Administration, 1996).
It is not clear whether managed care and at-risk payment for physician services will either limit or enhance physician autonomy. Certainly at this time, capitation is being widely touted as a means of preserving and, in some instances, expanding physician autonomy in both the clinical and economic arenas. Economically, physicians are presumed to gain autonomy under capitated arrangements through control and management of professional dollars. This control is further enhanced when physicians also control the distribution of hospital funds and are placed at risk for their expenditure as well (Sokolov, 1995). David DeValk encourages physicians to undertake capitation as it "engages the provider fully in the modification of 'American medicine'; physicians are empowered to make decisions and changes (rather than dealing with bureaucratic hassles and 1-800-nurse-authorization lines)" (Medical Group Management Association, 1995). This is clearly a challenge to physicians to reassume clinical discretion that has arguably been lost to other organizations and indeed other professions. The depth of emotion surrounding the clinical autonomy issue in the private sector is a result of the widely held perception in the medical community that insurers have dramatically eroded autonomy in pursuit of economic advantage. This has been accomplished through intrusive utilization controls and requirements for prospective authorization of procedures that exceed those traditionally associated with the Medicare program (Gray, 1991). These review activities, often involving other professionals in the review of physician judgments, have not been well received by the profession as having any significant impact on quality of care. Rather, the prevailing assumption among physicians is that the motivation of these private organizations is purely economic.
Will physicians, given the opportunity to manage capitated premiums on behalf of beneficiaries, behave in a different manner? This is certainly the position of the leadership of much of the medical profession. It has been argued "that physician-led organizations delivering health care would avoid the stockholder-satisfying mentality of many for-profit insurance companies and, therefore, that physician-directed enterprises would direct more resources toward patient care and fewer to providing a return on stockholders' investments" (Goldfarb, 1995). However, findings from a study undertaken by Kerr et al. (1995) suggest that physicians may adopt behaviors that are equally detrimental to the exercise of clinical autonomy. The authors of the study conclude that "physicians are responding to capitation by using utilization management techniques, some at early stages of development, that were previously used only by insurers. This physician-initiated management approach represents a fundamental transformation in the practice of medicine" (Kerr et al., 1995). If economic judgments concerning the allocation of Medicare dollars currently exercised at a global level are placed at the level of smaller organizations, Freidson's principle of the primacy of peer review may or may not be distorted. The collegial professional group, which is the backbone of peer judgment in Freidson's typology of clinical autonomy, will then be forced to balance its clinical judgments with economic judgments when the use of limited resources is at stake.
Can clinical autonomy and economic autonomy be balanced and maintained in the future? Jones and Ethridge (1996) have argued that "operating in a rapidly changing insurance marketplace, Medicare is shifting from a social insurance model toward a private insurance model, expanding the number and type of alternative health plans it offers, and growing numbers of beneficiaries are enrolling in these plans." If this is so, perhaps the historic commitment of the Medicare program envisioned by its founders to respect and reinforce the clinical autonomy of physicians will no longer be a relevant policy issue. Medicare was established in a political and economic climate in which the attainment of both clinical and economic autonomy for the medical profession was an economically realizable and socially supported policy objective. The test of the future will be to attain, as Reinhardt has suggested, the clinical objectives of the best in scientific achievements and traditions of the medical profession, while providing this care at an economic level that society as a whole can sustain.
"year": 1996,
"sha1": "85bc042fccdc648c1a4daac008352a4d922cb1d9",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "85bc042fccdc648c1a4daac008352a4d922cb1d9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Sociology"
]
} |
Heterogeneity in the guidelines for the management of diabetic foot disease in the Caribbean
The prevalence of diabetes mellitus, diabetic foot (DF) disease and, as a result, lower extremity amputation rates remain high in the Caribbean. This study was undertaken to determine whether Caribbean countries have designated individuals who monitor DF disease and whether there are DF protocols consistent with the International Working Group on the Diabetic Foot (IWGDF) guidance documents. Relevant DF health care personnel from the CARICOM and Dutch Caribbean countries were called or sent questionnaires regarding the presence of structured programs to monitor and manage DF problems in the population. All 25 countries (100%) responded. 81% of respondents could not identify any Ministry, Hospital or individual initiatives that monitored the DF. Only 9 (36%) countries had any guidelines in place, and only 3 of these utilized the IWGDF guidelines. Only 6 (24%) countries had podiatrists, and 10 (40%) had vascular surgery availability. 7 (28%) countries had the components for a multidisciplinary team. The presence or appointment of a designated individual and/or a multidisciplinary approach for DF disease was absent in the majority of respondent countries. Only a minority of countries implemented DF guidelines or had the expertise available to organize a DF multidisciplinary team. Vascular surgery and podiatric care were noticeably deficient. These may be critical factors in the variability and reduced success of strategies for managing DF problems and subsequent amputations among these Caribbean countries.
Introduction
Lower extremity (LE) ulceration is prevalent throughout the world and poses a major threat to limb integrity and life. Foot ulcers occur in up to 25 percent of patients with diabetes and precede more than 8 in 10 non-traumatic amputations [1]. In 2014, the World Health Organization (WHO) estimated that 422 million people were diagnosed with diabetes, with worldwide prevalence rates reaching nearly 9.3% [2]. This is a major global public health problem, and it is estimated that a major amputation occurs every 20 seconds worldwide [3]. In the Caribbean, the overall prevalence of diabetes mellitus is estimated to be approximately 9%, and diabetes is responsible for 13.8% of all deaths among adults in the region [4]. The prevalence of diabetic foot (DF) disease is high in the Caribbean and, unfortunately, some countries have been labeled as "amputation capitals of the world" because of their high LE amputation rates [5]. The loss of limb as a result of diabetes is especially harsh in the Caribbean because lower limb prosthetics are not routinely available [6][7][8].
The socio-economic impact of diabetes is profound. In 2015, the estimated annual global direct and indirect costs of diabetes were approximately US$1.3 trillion, with one in five diabetes dollars spent on lower extremity care [9]. In Latin America and the Caribbean, the cost of diabetes was estimated at US$135 billion in 2015 [10]. This burden included loss of productivity due to mortality and disability, as well as direct medical costs of treating diabetes and its long-term complications. The indirect cost of diabetes was US$826 million for the Caribbean [11]; in Trinidad and Tobago alone it was US$85 million [11]. Therefore, preventing foot ulcerations and/or LE amputations is critical from medical, economic, and socio-economic standpoints.
The pathophysiologic mechanisms underlying DF disease are multi-factorial and include neuropathy, infection, ischemia, abnormal foot structure and biomechanics [1,12]. It is, therefore, not surprising that the management of the DF is a complex clinical problem requiring an interdisciplinary approach [13]. Implementation of evidence-based management of DF disease has been shown to significantly reduce hospitalization, LE amputation, disability, mortality, and cost burdens [13,14]. Despite public health and care-giver DF initiatives, there does not appear to be any significant improvement in the number of LE amputations in most of the Caribbean [6,8,15].
Many developed nations create their own DF disease guidelines or adapt those issued by the International Working Group on the Diabetic Foot (IWGDF), which has developed and distributed evidence-based guidance documents through consensus of experts in DF disease clinical care and research [16]. The 2015 IWGDF guidance documents provide recommendations on prevention, appropriate footwear and offloading, management of vascular disease, infections, and wound healing, and the need for a multidisciplinary approach. D-Foot International is the implementation group of the IWGDF and is organized around seven regions, of which North America and the Caribbean (NAC) is one [17]. D-Foot International promotes the global profile of DF prevention and care through awareness, guidance, education, research, and professional development. It promotes the training of healthcare professionals in implementing appropriate strategies, building teams for the prevention (through early screening) and effective management of DF problems, and developing foot services. Given the historical accounts of the dire state of DF management in the Caribbean, this study aimed to determine whether the Caribbean countries have protocols in place to monitor and manage DF disease consistent with the IWGDF guidance documents. We specifically queried whether there were responsible institutions or individuals in each country designated with this responsibility.
Materials and methods
A questionnaire modeled on a previous study comparing diabetic foot guideline utilization in Western Pacific nations [18] was distributed to the CARICOM countries and the Dutch Caribbean countries (Aruba, Bonaire, Curacao, and Sint Maarten). The known national representatives of the International Diabetes Federation (IDF) and D-Foot organizations of these countries were contacted by email and/or phone and invited to participate in the survey. In countries that did not have any representation, we contacted the Ministry of Health (MOH), the national diabetes association or the national medical association.
The survey (Table 1) asked whether DF guidelines existed and whether the MOH, National Diabetes Association, Public Hospital, Medical University or Clinical Departments were responsible for implementation or involved in their enforcement. If guidelines existed, we sought to determine what they were and whether they followed IWGDF protocols. Information on the numbers of the relevant healthcare professionals known to be essential for the DF multidisciplinary team (general surgeons, orthopedic surgeons, vascular surgeons, podiatrists, infectious disease specialists, endocrinologists, wound care specialists and wound care nurses) was also queried.
The responses were compiled in a Microsoft Excel database. Descriptive statistics were used to report the numbers and percentages (%) of responses. Since no individual or identifiable patient information was requested, no consent was required, and this multi-national survey was granted a waiver from Ethics Board approval.
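As a minimal sketch of the descriptive statistics used (the counts below are taken from the Results section; the dictionary layout and the script itself are illustrative assumptions, not the authors' actual Excel workflow), the following Python snippet reproduces the quoted percentages:

```python
# Illustrative tally of survey responses; counts come from the Results section.
# The dict is a hypothetical stand-in for the Excel database used in the study.
counts = {
    "DF guidelines in place": 9,
    "IWGDF guidelines utilized": 3,
    "podiatrists available": 6,
    "vascular surgery available": 10,
    "multidisciplinary team components present": 7,
}
N_COUNTRIES = 25  # all 25 contacted countries responded (100%)

for item, n in counts.items():
    # Percentage of the 25 respondent countries, rounded to whole percent
    print(f"{item}: {n}/{N_COUNTRIES} ({100 * n / N_COUNTRIES:.0f}%)")
```

Running this prints, for example, `DF guidelines in place: 9/25 (36%)`, matching the percentages reported in the Results.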
Results
Responses from 25 countries (100%) were obtained (Table 2). 81% of respondents could not identify any MOH, hospital or individual initiatives that monitored the DF. Only 9 (36%) countries had guidelines for the management of the DF, and these were distributed by different agencies. The main source appeared to be the national diabetes association or medical association; the MOH was only rarely responsible for disseminating the guidelines. The protocols were generally adapted from existing international guidelines, especially those of the International Diabetes Federation (IDF), but some countries developed their own. Only 3 countries utilized the IWGDF guidelines.
There appeared to be a fair number of general and orthopedic surgeons on the islands, but podiatrists, endocrinologists, vascular surgeons, and infectious disease specialists were scarce or absent. Only 6 (24%) countries had podiatrists, and 10 (40%) had vascular surgery availability, but only three of these had surgeons with specialist vascular training. In 4 of the 10 countries, although vascular surgery was available, it was not lower-limb focused. 7 (28%) countries had the components for a multidisciplinary clinical team, but we could not identify any that had a functional unit.
Of the countries that participated, only 2 respondents had definitive data on the incidence of diabetic foot infection or ulceration; 3 knew the incidence of diabetic foot amputations; none knew the incidence of minor foot amputations; and only 6 knew the incidence of major amputations.
Discussion
The high prevalence of DF disease and LE amputations in the Caribbean has long been recognized [5,19] but, unfortunately, very little progress has been made despite reports of improved outcomes around the world [13,14]. The reasons underlying this lack of progress are not clear. This study was undertaken to determine whether the Caribbean countries have designated individuals or organizations that monitor DF disease, whether there are DF protocols consistent with the IWGDF guidance documents, and whether they are implemented by the health institutions, MOH or medical training facilities. Our study revealed some interesting observations. 20 of the 25 countries (80%) were members of the IDF and yet only 5 countries (20%) had any guidance documents, and these were primarily managed by interested professionals, the diabetes association or the medical association. Only 2 countries used the IWGDF guidance documents. Surprisingly, it did not appear that the MOH was involved; in most cases the MOH was unaware of guidelines for the DF. Second, there was a dearth of information regarding the incidence of DF disease and the number of major and minor amputations. There was little awareness of the scope of the DF problem, despite published data being available (Table 3) [20]. The survey demonstrated that although the respondents acknowledged there was a high incidence of diabetes, DF infections or LE amputations in their country, there was a general lack of specific information that correlated with published data. Regarding the presence of a multidisciplinary DF team, there was also a wide disparity among the countries. Most countries had mainly general surgeons and a few orthopedic surgeons. Only a quarter had a podiatrist, and under half had vascular surgery available, mostly without specialty training. Only 3 countries reported specialty-trained plastic surgeons who might have the skills to perform foot reconstructions for limb salvage. In contrast, the Dominican Republic reported over 200 general and orthopedic surgeons, 150 endocrinologists, 70 infectious disease specialists and 10 vascular surgeons, but no podiatrist. Based on our study, it is clear that DF interdisciplinary teams in the Caribbean are scant.
Obviously, no clear answer exists regarding which providers should be involved in this team approach or the extent of involvement of each member. It is well established that an aggressive interdisciplinary approach to DF disease is required to provide optimal medical and surgical care for improved outcomes [13]. The presence of multiple practitioners caring for the same patient increases the opportunity for life-long follow-up surveillance of vascular and podiatric disease [13]. Numerous centers around the world have reported significant reductions in amputations and ulcer recurrence when limb assessment protocols have been established and an interdisciplinary team assembled [14,21]. It is understandable that there are many barriers to forming a multidisciplinary team and establishing the right support structure for it to become successful. Our survey confirms that there are no regional centers in the Caribbean working toward implementation of the IWGDF strategies that are foundational to successful DF disease management. It is possible that internal disunity within these countries undermines the team-building capacity needed for limb salvage.
Regardless, it is clear that limb preservation requires a series of steps: re-establishing adequate perfusion, guided by adequate investigations that include the often-overlooked microvascular status; serial wound debridements; appropriate wound care management and access to materials that encourage prompt wound healing; aggressive infection management; and correction of underlying biomechanical abnormalities [1,12,13]. At a minimum, vascular surgeons with lower-limb specialty, podiatrists and podiatric surgeons are essential components of the team [13]. Optimized wound care is then critical after the required medical and surgical interventions have been accomplished. Whilst preventative DF care has a key role in managing the DF, it is equally important that when insults to the DF occur the entire team is engaged, whether under the same roof, within the same country, or across the waters when specialty human resources are limited, doing whatever is deemed necessary to save the limb. Primary care physicians and podiatrists play important gate-keeper roles in monitoring the DF and managing early foot trauma and infections.
Unfortunately, not all critical components of an interdisciplinary team are available in either general hospitals or wound care facilities in the Caribbean [11]. Some individual physicians and surgeons with experience and training across a broad spectrum of disciplines may appropriately treat conditions in areas that lack dedicated limb preservation centers, but for complex cases the limb salvage results will likely be inferior to those of the team approach. Therefore, while the constituents of teams may differ in various locales based on myriad factors, there are certain critical elements in the management of the DF that constitute an essential, professional skill set required of a dedicated DF care team. There are some bright spots in the Caribbean. Guyana reported a dramatic improvement in LE amputations with the utilization of a complex interprofessional team and a foot-care-based protocol [21][22][23]. Health care workers were trained through Canadian-based programs, including the International Interprofessional Wound Care Course (IIWCC) and the Michener Institute Diabetes Educator course, regionalized to cover approximately 90% of the population. Over a period of 5 years, there was an approximately 70% reduction in the rate of LE amputations. This government-initiated project, backed by dedicated medical staff, has continued to this day and has been touted as a vision for all other Caribbean territories to emulate. Unfortunately, this country is the exception, and most of the governments of the Caribbean and their local surgical communities do not have the capacity to establish and sustain the kinds of teams that are so desirable.
Given that diabetes is noted not only as the most common non-communicable disease (NCD) but also as a leading cause of death due to its myriad complications, which can also lead to disability [27], it is perplexing that greater resources are not allocated to proactive mechanisms to stamp out such suffering. Given over 50 years of academic study and research on the diabetic foot, it was disappointing to note the lack of knowledge of guidance protocols. Does this show that the disease, because it is so common, has made our health professionals immune to this condition? Or does it exhibit the perpetual spiral of clinical inertia in implementing programs and multidisciplinary teams? Or does it show a general lack of understanding of what a multidisciplinary team requires within the region? In any case, it is low-hanging fruit given the expertise available to this region. It could signify that little priority was placed on the DF, protocols for the DF, or LE amputation prevalence. It also possibly highlights frustration at the complexity of the management of DF disease itself and/or general apathy towards the DF problem. Much of this may be due to the public's perception of doctors and hospitals, in what might be termed the "amputation cycle": as citizens are made aware of a country's high rate of LE amputations, they become hesitant to see their physicians at the early stages of their lower limb disease, believing that they too might be at risk of an amputation. Moreover, there are also potential national and physician factors relevant to this situation. The existence of a "substitution culture" [15] and compliance issues [24] transcends the Caribbean.
Our study has some nuances and limitations. First, although we were primarily interested in the Caribbean nations, we queried the CARICOM countries, which include some South and Central American countries (Guyana, Suriname, and Belize), because of their strong economic, political, and social linkages [2]. We also included the Dutch Caribbean islands (Aruba, Bonaire, Curacao and Sint Maarten) and the Dominican Republic because of their strong ties to the neighboring island nations. The Caribbean countries are not only geographically diverse, being separated by the Caribbean Sea, but are also a mix of races, with predominant African ancestry and a heavy mix of Indian, Chinese, and European backgrounds. The racial admixture varies between countries and may account for some of the demographic disparities within the Caribbean nations [25]. For example, the rates of amputation are higher in Afro-Trinidadians than in Indo-Trinidadians [26]. Therefore, Caribbean countries may place different priorities on this problem.
Second, we did not include the US Virgin Islands or Cuba because of their distinct health economies. Cuba is one of the 20 countries of the IDF SACA region, and the prevalence of diabetes in adults there is 13.2%. The Cuban government operates a national health system and assumes fiscal and administrative responsibility for the health care of all its citizens, unlike the neighboring Caribbean countries. The US territories also have distinct health economics. The Virgin Islands have universal healthcare and a law stating that hospitals cannot deny benefits or services because of a person's inability to pay; therefore, if a service is unavailable in the Virgin Islands, patients must be accommodated at hospitals on the US mainland. This makes the management of the diabetic foot not comparable to that in the other Caribbean territories.
Third, it is also interesting to note that only 3 countries (Haiti, Dominican Republic and Grenada) reported diabetes mellitus prevalence rates of less than 10% (Table 3). However, these statistics may reflect a lack of population testing or reporting, or poor access to healthcare by the population. For instance, a study of Haitians living in Miami revealed a diabetes prevalence rate of 33% [27].
Lastly, the respondents to our questionnaire were DF health care personnel who represented different sectors of their nations. This underscores one of the primary problems of the island nations: the lack of any consistent group responsible for overseeing DF education and management. We made an extensive effort to contact individuals, starting with a hierarchy of the MOH, the diabetes association, the medical association, medical schools, and government and private hospitals and clinics, along with aligning their published memberships or affiliations with the IWGDF. The MOH was, for the most part, almost oblivious to the impact of the DF on citizens' health, and so the management of the DF was left to the national diabetes associations or national medical associations. Unfortunately, as demonstrated by our study, even these organizations implemented national or IWGDF guidelines inconsistently.
Taken as a whole, our study demonstrates what appears to be the lack of a conscientious, systematic approach to DF disease by developing Caribbean countries. DF disease seems to be managed ad hoc, based on past experiences, memories and perceptions rather than on scientifically established evidence-based practices. The lack of such approaches appears to have caused both a delayed response in seeking help on the part of DF patients and a frustrated approach on the part of local treating physicians. Evidence-based experience in the international arena confirms the value of interdisciplinary approaches to managing the diabetic foot and preventing avoidable amputations [13,14,21]. Government and MOH support are crucial for success and for easing some of the barriers mentioned. In addition to health care interaction, dedicated infrastructure such as community foot clinics, vascular laboratories and endovascular operating rooms is necessary, along with a budget for consumables and wound products.
In 1989, the St. Vincent Declaration proposed an aggressive approach to diabetes-related complications to reduce DF complications and LE amputations [28]. The establishment of vascular units was recommended to reduce the amputation rate. However, our study indicated that only 10 countries had vascular specialist availability and only 4 countries had trained specialists with a dedicated unit. Another initiative, in 2009, was the "Step-by-step" program of the World Diabetes Foundation, spearheaded by and funded jointly through the Rotary Clubs [29]. Although the mission was for health care teams to educate diabetes patients and the general population about preventive measures for DF problems, to facilitate the development of algorithms for foot care that enable and encourage multidisciplinary teamwork, and to unify diabetes care services, there has been no continuity, and no guidelines have been put in place for a team approach.
The role of primary care physicians and podiatrists in performing annual foot examinations to identify high-risk foot conditions such as neuropathy, vascular disease and foot deformities cannot be over-emphasized. Collaborative interaction among the other diabetes care givers is optimal for providing patients with glycemic control, smoking cessation, and patient education on daily foot care and the use of proper footwear. In countries without a multidisciplinary team, or with no input from the vascular or podiatric team, a significant number of proximal LE amputations are done as primary procedures [6,13]. Appropriate, timely patient referral and a dedicated service for the management of foot wounds and DF infection are crucial for improved outcomes. Input from the interdisciplinary team is critical. The need for DF guidelines and programs in the Caribbean is critical and mandatory at this stage.
In conclusion, it is expected that the reported improved outcomes of interdisciplinary approaches to the DF will motivate changed behavior, with patients presenting early and physicians gaining knowledge of how such referrals can be appropriately guided to ensure preservation of a functional foot. Given the persistent trends of over 50 years of LE amputations, it is highly recommended, using developed-country baseline results for successes with limb salvage, that the MOH and relevant institutions consider implementing multidisciplinary DF teams, DF guidance protocols and/or programs through policies that will streamline care of the at-risk DF, together with screening programs to prevent DF ulcerations. Despite previous efforts to assist MOHs, there has been a lack of continuity; there is therefore also a need for MOHs to actively facilitate a gatekeeper for the continuity of these programs. Without the framework in place to facilitate implementation, it is expected that the revolving door feeding the frustrations and apathetic approach to the DF, and the ensuing high rate of LE amputations, will continue. | 2021-11-25T16:12:51.647Z | 2021-11-23T00:00:00.000 | {
"year": 2022,
"sha1": "97a123748757eb3745b84291e5c17b92199c6c08",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/globalpublichealth/article/file?id=10.1371/journal.pgph.0000446&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "285cf95e7539b49c3a50f38e205069ae5ec4ac73",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |